Hybrid Artificial Bee Colony Algorithm and Particle Swarm Search for Global Optimization

The artificial bee colony (ABC) algorithm is one of the most recent swarm intelligence based algorithms and has been shown to be competitive with other population-based algorithms. However, ABC still suffers from an insufficiency in its solution search equation, which is good at exploration but poor at exploitation. To overcome this problem, we propose a novel artificial bee colony algorithm based on a particle swarm search mechanism. In this algorithm, to improve the convergence speed, the initial population is first generated by using good point set theory rather than random selection. Secondly, in order to enhance the exploitation ability, the employed bees, onlookers, and scouts utilize the mechanism of PSO to search for new candidate solutions. Finally, to further improve the searching ability, a chaotic search operator is applied to the best solution of the current iteration. Our algorithm is tested on some well-known benchmark functions and compared with other algorithms. The results show that our algorithm performs well.


Introduction
Optimization problems play a very important role in many scientific and engineering fields. In the last two decades, several swarm intelligence algorithms, such as ant colony optimization (ACO) [1,2], particle swarm optimization (PSO) [3,4], and the artificial bee colony (ABC) algorithm [5,6], have been developed for solving difficult optimization problems. Researchers have shown that algorithms based on swarm intelligence have great potential [7][8][9], and they have attracted much attention.
The ABC algorithm was first proposed by Karaboga in 2005, inspired by the intelligent foraging behavior of honey bees [5]. Since its invention, the ABC algorithm has been used to solve both numerical and nonnumerical optimization problems. The performance of the ABC algorithm has been compared with that of other intelligent algorithms, such as the genetic algorithm (GA) [10] and the differential evolution algorithm (DE) [11]. The results show that the ABC algorithm is better than, or at least comparable to, the other methods. Recently, many variant ABC algorithms have been developed to improve its performance. Alatas proposed an ABC algorithm using chaotic maps as efficient alternatives for generating pseudorandom sequences [12]. To improve the exploitation ability, Zhu and Kwong presented a global-best-solution-guided ABC (GABC) algorithm that incorporates the information of the global best solution into the solution search equation [13]. By combining Powell's method with ABC, Gao et al. proposed an improved ABC algorithm, the Powell ABC (PABC) algorithm [14]. Also aiming at exploitation, a converge-onlookers ABC (COABC) was developed by applying the best solution of the previous iteration in the search equation at the onlooker stage [15]. A more extensive review of ABC can be found in [16].
In addition, considering that PSO has good exploitation ability, a few hybrid ABC algorithms based on the PSO algorithm have been presented. For example, a novel hybrid approach referred to as IABAP, based on PSO and ABC, is presented in [17]. In this algorithm, the flow of information from the bee colony to the particle swarm is exchanged through the scout bees. Another hybrid approach is the ABC-SPSO algorithm based on ABC and PSO [18]. In the ABC-SPSO algorithm, the update rule (solution updating equation) of the ABC algorithm is executed among the personal best solutions of the particle swarm.

The Original ABC Algorithm
The ABC algorithm contains three groups of bees: employed bees, onlookers, and scouts. The numbers of employed bees and onlookers are set equal. Employed bees are responsible for searching available food sources and gathering the required information; they also pass their food information to the onlookers. Onlookers select good food sources from those found by the employed bees and search them further. When the quality of a food source is not improved within a predetermined number of cycles, the food source is abandoned by its employed bee. At the same time, the employed bee becomes a scout and starts to search for a new food source. In the ABC algorithm, each food source represents a feasible solution of the optimization problem, and the nectar amount of a food source corresponds to the fitness value (quality) of the associated solution. The number of employed bees is set equal to the number of food sources.
Assume that the search space is $D$-dimensional. The position of the $i$th food source (solution) can be expressed as a $D$-dimensional vector $X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})$, $i = 1, 2, \ldots, SN$, where $SN$ is the number of food sources. The details of the original ABC algorithm are given as follows.
At the initialization stage, a set of food source positions is randomly selected by the bees as in (1) and their nectar amounts are determined:
$$x_{i,j} = lb_j + \mathrm{rand}(0,1)\,(ub_j - lb_j), \tag{1}$$
where $i \in \{1, 2, \ldots, SN\}$, $j \in \{1, 2, \ldots, D\}$, and $lb_j$ and $ub_j$ are the lower bound and upper bound of the $j$th dimension, respectively.
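The random initialization (1) can be sketched in Python as follows (a minimal sketch; the function and variable names are ours, not from the paper):

```python
import random

def init_food_sources(sn, lb, ub):
    """Randomly place sn food sources inside the box [lb_j, ub_j]
    in each dimension j, as in Eq. (1)."""
    d = len(lb)
    return [[lb[j] + random.random() * (ub[j] - lb[j]) for j in range(d)]
            for _ in range(sn)]
```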
An onlooker bee evaluates the nectar information taken from all employed bees and chooses a food source with a probability related to its nectar amount. A food source with higher quality has a larger chance of being selected by the onlookers. The probability is obtained from
$$p_i = \frac{\mathrm{fit}(X_i)}{\sum_{n=1}^{SN} \mathrm{fit}(X_n)}, \tag{2}$$
where $\mathrm{fit}(X_i)$ is the nectar amount of the $i$th food source, which is derived from the objective function value $f(X_i)$ of the $i$th food source. Once a food source $X_i$ is selected, the bee utilizes (3) to produce a modification of the position (solution) in her memory and checks the nectar amount of the candidate source (solution):
$$v_{i,j} = x_{i,j} + \phi_{i,j}\,(x_{i,j} - x_{k,j}), \tag{3}$$
where $k \in \{1, 2, \ldots, SN\}$, $k \ne i$, and $j \in \{1, 2, \ldots, D\}$ are randomly chosen indexes; $V_i$ is the new feasible solution produced from the previous solution $X_i$ and the randomly selected neighboring solution $X_k$; and $\phi_{i,j}$ is a random number in $[-1, 1]$, which controls the production of a neighbor food source position around $X_i$. In each iteration, only one dimension of each position is changed. Provided that its nectar is higher than that of the previous one, the bee memorizes the new position and forgets the old one.
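The selection probability (2) and the neighbor search (3) can be sketched as follows (a hedged illustration; the identifiers are ours, and the fitness values are assumed to be nonnegative already):

```python
import random

def selection_probabilities(fitness):
    """Eq. (2): selection probability proportional to nectar amount (fitness)."""
    total = sum(fitness)
    return [f / total for f in fitness]

def neighbor_search(foods, i):
    """Eq. (3): perturb one randomly chosen dimension j of food source i
    using a randomly chosen neighbor k != i and phi drawn from [-1, 1]."""
    d = len(foods[i])
    j = random.randrange(d)
    k = random.choice([m for m in range(len(foods)) if m != i])
    phi = random.uniform(-1.0, 1.0)
    candidate = list(foods[i])
    candidate[j] = foods[i][j] + phi * (foods[i][j] - foods[k][j])
    return candidate
```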
In the original ABC algorithm, there is a control parameter called limit. If a food source has not been improved after limit trials, it is assumed to be abandoned by its employed bee, and the employed bee associated with that food source becomes a scout that searches for a new food source randomly, which helps avoid local optima.

Particle Swarm Optimization (PSO)
As a swarm-based stochastic optimization method, the PSO algorithm was developed by Kennedy and Eberhart [19] and is based on the social behavior of bird flocking and fish schooling. The original PSO maintains a population of particles $X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})$, $i = 1, 2, \ldots, N$, which are distributed uniformly around the search space at first. Each particle represents a potential solution to the optimization problem. After randomly produced solutions are assigned to the particles, the velocity of each particle is updated at each iteration by using the best solution obtained by that particle in previous iterations and the global best solution obtained by the particles so far. This is formulated as follows:
$$v_{i,j}(t+1) = v_{i,j}(t) + c_1 r_1 \left(p_{best,i,j}(t) - x_{i,j}(t)\right) + c_2 r_2 \left(g_{best,j}(t) - x_{i,j}(t)\right), \tag{4}$$
where $v_{i,j}(t)$ is the velocity of the $i$th particle in the $j$th dimension at step $t$, which represents the rate of position change for the particle; $x_{i,j}(t)$ is the position of the $i$th particle in the $j$th dimension at step $t$; $p_{best,i,j}(t)$ is the personal best position of the $i$th particle in the $j$th dimension at step $t$; $g_{best,j}(t)$ is the global best position obtained by the population at step $t$; $c_1$ and $c_2$ are positive acceleration constants used to scale the contributions of the cognitive and social components, respectively; and $r_1$ and $r_2$, the stochastic elements of the algorithm, are random numbers in the range $[0, 1]$. For each particle $i$, Kennedy and Eberhart [19] proposed that the position $X_i$ be updated in the following manner:
$$x_{i,j}(t+1) = x_{i,j}(t) + v_{i,j}(t+1). \tag{5}$$
Considering a minimization problem, the personal best solution of the $i$th particle at the next step $t+1$ is calculated as
$$P_{best,i}(t+1) = \begin{cases} P_{best,i}(t), & f(X_i(t+1)) \ge f(P_{best,i}(t)), \\ X_i(t+1), & f(X_i(t+1)) < f(P_{best,i}(t)), \end{cases} \tag{6}$$
and the global best position $G_{best}$ is determined by using (7) ($N$ is the number of particles):
$$G_{best} \in \{P_{best,1}, \ldots, P_{best,N}\} \quad \text{with} \quad f(G_{best}) = \min\{f(P_{best,1}), \ldots, f(P_{best,N})\}, \tag{7}$$
where $f$ is the objective function.
In (4), to control the exploration and exploitation abilities of the swarm, Shi and Eberhart proposed a new parameter called the "inertia weight" $w$ [20]. The inertia weight controls the momentum of the particle by weighing the contribution of the previous velocity. By adding the inertia weight $w$, (4) is changed to
$$v_{i,j}(t+1) = w\,v_{i,j}(t) + c_1 r_1 \left(p_{best,i,j}(t) - x_{i,j}(t)\right) + c_2 r_2 \left(g_{best,j}(t) - x_{i,j}(t)\right). \tag{8}$$
From this description of PSO, we can see that the particles have a tendency to fly towards better and better search areas over the course of the search process, so the PSO algorithm enforces a steady improvement in solution quality.
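The inertia-weighted velocity update (8) followed by the position update (5) can be sketched as follows (a minimal illustration; the parameter defaults are hypothetical choices, not values prescribed by the paper):

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """Eq. (8): inertia-weighted velocity update, then Eq. (5): position update."""
    new_v = [w * v[j]
             + c1 * random.random() * (pbest[j] - x[j])
             + c2 * random.random() * (gbest[j] - x[j])
             for j in range(len(x))]
    new_x = [x[j] + new_v[j] for j in range(len(x))]
    return new_x, new_v
```

Note that when a particle sits exactly at both its personal best and the global best with zero velocity, it does not move, as expected from (8).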

Hybrid Approach (ABC-PS)
From the above discussion of ABC and PSO, it is clear that the global best solution of the population is not directly used in the ABC algorithm; at the same time, when the particles in PSO get stuck in a local minimum, they may be unable to escape it. To overcome these disadvantages of the two algorithms, we propose a hybrid global optimization approach combining the ABC algorithm with the PSO searching mechanism. In this algorithm, the initial population is generated by using good point set theory.
The theoretical basis is the good point set method of Hua and Wang. Suppose $f$ belongs to a Banach space of functions on the $D$-dimensional unit hypercube $C_D$ with norm $\|\cdot\|$, and let $V(f)$ measure the variability of $f$. When we estimate the integral of $f$ over $C_D$, namely $I(f) = \int_{x \in C_D} f(x)\,dx$, by the average value of $f$ over a point set $P_n = \{x_i : 1 \le i \le n\}$, that is, $I_n(f) = \frac{1}{n}\sum_{i=1}^{n} f(x_i)$, then for an arbitrary point set the integration error $E_n = I(f) - I_n(f)$ is, in general, not smaller than $O(n^{-1})$, where the implied constant is absolute. By Theorems 4-6, it can be seen that if we estimate the integral based on a good point set, the degree of discrepancy
$$\phi(n) = C(a, \varepsilon)\, n^{-1+\varepsilon} \tag{9}$$
depends only on $\varepsilon$ and not on the dimension $D$, which makes good point sets well suited to high-dimensional approximate computation. In other words, the idea of the good point set is to distribute points more evenly than random points.
For the $D$-dimensional local search space, the so-called good point set containing $n$ points can be constructed as follows:
$$P_n(i) = \left(\{a_1 \cdot i\}, \{a_2 \cdot i\}, \ldots, \{a_D \cdot i\}\right), \quad i = 1, 2, \ldots, n, \tag{10}$$
where $a_k = 2\cos(2\pi k / p)$, $1 \le k \le D$, $p$ is the minimum prime number satisfying $p \ge 2D + 3$, and $\{a_k \cdot i\}$ denotes the fractional part of $a_k \cdot i$ (alternatively, one may take $a_k = e^k$, $1 \le k \le D$).
Since the good point set principle is defined on the unit hypercube (or hypersphere), in order to map the $n$ good points from the space $C : [0, 1]^D$ to the search space $S : [lb, ub]^D$, we define the following transformation:
$$x_{i,k} = lb_k + \{a_k \cdot i\}\,(ub_k - lb_k), \quad 1 \le k \le D. \tag{11}$$
In the following, for the two-dimensional space $[-1, 1]^2$, we generate 100 points by the good point set method and by the random method, respectively, and show their distributions (see Figures 1 and 2). It can be seen that the good point set is uniform; moreover, for a fixed number of samples, the resulting distribution is the same every time, so the good point set method has good stability.
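The construction (10) together with the mapping (11) can be sketched as follows (a hedged sketch assuming the $a_k = 2\cos(2\pi k/p)$ variant; identifiers are ours):

```python
import math

def good_point_set(n, d, lb, ub):
    """Generate n points in the box [lb, ub]^d via the good-point-set
    construction: a_k = 2*cos(2*pi*k/p), with p the smallest prime
    satisfying p >= 2*d + 3; point i has coordinates frac(a_k * i),
    affinely mapped into the search box (Eqs. (10)-(11))."""
    def is_prime(m):
        return m >= 2 and all(m % q for q in range(2, int(math.isqrt(m)) + 1))
    p = 2 * d + 3
    while not is_prime(p):
        p += 1
    a = [2.0 * math.cos(2.0 * math.pi * k / p) for k in range(1, d + 1)]
    points = []
    for i in range(1, n + 1):
        frac = [(a[k] * i) % 1.0 for k in range(d)]  # fractional part in [0, 1)
        points.append([lb[k] + frac[k] * (ub[k] - lb[k]) for k in range(d)])
    return points
```

Unlike random sampling, the construction is deterministic: calling it twice with the same arguments yields exactly the same point set, which matches the stability property noted above.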

Chaotic Search Operation.
In our algorithm, assume that $G_{best}$ is the best solution of the current iteration. To enrich the searching behavior around $G_{best}$ and to avoid being trapped in a local optimum, chaotic dynamics is incorporated into our algorithm as follows. Firstly, the well-known logistic equation is employed to generate a chaotic sequence:
$$ch_{k+1} = \mu\, ch_k\,(1 - ch_k), \quad k = 1, 2, \ldots, K, \tag{12}$$
where $\mu = 4$ and $K$ is the length of the chaotic sequence. Then each $ch_k$ is mapped to a chaotic vector in the interval $[lb, ub]$:
$$CH_{k,j} = lb_j + ch_k\,(ub_j - lb_j), \tag{13}$$
where $lb_j$ and $ub_j$ are the lower bound and upper bound of variable $x_j$, respectively. Finally, the following equation is adopted to generate the new candidate solution $x'$:
$$x'_j = (1 - \lambda)\, g_{best,j} + \lambda\, CH_{k,j}, \quad j = 1, \ldots, D, \tag{14}$$
where $\lambda$ is the shrinking factor, defined as follows:
$$\lambda = \frac{\text{maxcycle} - \text{iter} + 1}{\text{maxcycle}}, \tag{15}$$
where maxcycle is the maximum number of iterations and iter is the number of the current iteration. By (15), it can be seen that $\lambda$ becomes smaller as the evolution generations increase. Furthermore, combining (14), it is easy to see that the smaller $\lambda$ is, the closer the candidate stays to $G_{best}$. Thus, from the above discussion, the local search range becomes smaller as the evolution process proceeds.
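The chaotic search of (12)-(14) can be sketched as follows (a simplified illustration: we use $\mu = 4$, reuse a single chaotic value for every dimension of one candidate, and accept candidates greedily; the function names are ours):

```python
import random

def chaotic_search(gbest, f, lb, ub, n_iter, lam):
    """Logistic-map chaotic search around gbest (Eqs. (12)-(14)):
    ch_{k+1} = 4*ch_k*(1 - ch_k); each chaotic value is mapped into
    [lb, ub] (Eq. (13)) and blended with gbest by the shrinking
    factor lam (Eq. (14)); improvements are kept greedily."""
    best = list(gbest)
    best_val = f(best)
    ch = random.uniform(0.01, 0.99)  # initial chaotic value
    for _ in range(n_iter):
        ch = 4.0 * ch * (1.0 - ch)   # logistic map, mu = 4
        cand = [(1 - lam) * best[j] + lam * (lb[j] + ch * (ub[j] - lb[j]))
                for j in range(len(best))]
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val
```

Because only improving candidates are accepted, the returned value is never worse than the objective value at the starting $G_{best}$.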

The Statement of ABC-PS Algorithm.
Based on the above, the ABC-PS algorithm is given in this subsection.

Algorithm 7.
(1) Set the population size $SN$; give the maximum number of iterations maxcycle, $w_{max}$, $w_{min}$, $v_{max}$, and $v_{min}$.
(2) Use (11) to create an initial population $\{X_i \mid i = 1, \ldots, SN\}$. Calculate the function values of the population, and find the best solution $G_{best}$ and the personal bests $P_{best,i}$ of the population.
(3) While the stopping criterion is not met do
(4) For $i = 1$ to $SN$ do % the employed bee phase
(5) Update the velocity and the position of particle $i$ by using (8) and (5), respectively.
(6) Determine the personal best of the particle by using (6), and update trial$_i$.
(7) End for
(8) Determine the $G_{best}$ of the population.
(9) For $i = 1$ to $SN$ do % the onlooker phase
(10) If rand $< p_i$ (computed by (2))
(11) Update the velocity and the position of food source $i$ by using (8) and (5).
(12) Determine the personal best of the particle by using (6), and update trial$_i$.
(13) End if
(14) End for
(15) If trial$_s$ = max$_i$(trial$_i$) > limit, then % the scout phase: replace $X_s$ with a new solution produced by (1).
(16) End if
(17) Determine the $G_{best}$ of the population.
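The overall loop of Algorithm 7 can be sketched compactly as follows. This is a heavily simplified, hedged sketch, not the authors' implementation: it uses random initialization instead of the good point set, omits the chaotic refinement step, assumes a nonnegative objective so that fitness can be taken as $1/(1+f)$, and clamps positions to the box rather than using $v_{max}$/$v_{min}$:

```python
import random

def abc_ps(f, lb, ub, sn=20, limit=10, maxcycle=100, c1=2.0, c2=2.0):
    """Sketch of the ABC-PS loop: employed-bee, onlooker, and scout
    phases all move solutions with the PSO update (8)/(5)."""
    d = len(lb)
    X = [[random.uniform(lb[j], ub[j]) for j in range(d)] for _ in range(sn)]
    V = [[0.0] * d for _ in range(sn)]
    pbest = [list(x) for x in X]
    pval = [f(x) for x in X]
    g = min(range(sn), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    trial = [0] * sn

    def move(i, w):
        for j in range(d):
            V[i][j] = (w * V[i][j]
                       + c1 * random.random() * (pbest[i][j] - X[i][j])
                       + c2 * random.random() * (gbest[j] - X[i][j]))
            X[i][j] = min(max(X[i][j] + V[i][j], lb[j]), ub[j])
        val = f(X[i])
        if val < pval[i]:
            pbest[i], pval[i], trial[i] = list(X[i]), val, 0
        else:
            trial[i] += 1

    for it in range(maxcycle):
        w = 0.9 - 0.5 * it / maxcycle            # linearly decreasing inertia
        for i in range(sn):                       # employed-bee phase
            move(i, w)
        total = sum(1.0 / (1.0 + v) for v in pval)
        for i in range(sn):                       # onlooker phase
            if random.random() < (1.0 / (1.0 + pval[i])) / total:
                move(i, w)
        s = max(range(sn), key=lambda i: trial[i])  # scout phase
        if trial[s] > limit:
            X[s] = [random.uniform(lb[j], ub[j]) for j in range(d)]
            V[s] = [0.0] * d
            pbest[s], pval[s], trial[s] = list(X[s]), f(X[s]), 0
        g = min(range(sn), key=lambda i: pval[i])
        if pval[g] < gval:
            gbest, gval = list(pbest[g]), pval[g]
    return gbest, gval
```

On a simple test such as the 2-dimensional sphere function, this sketch converges quickly toward the origin, illustrating how the PSO-style moves provide the exploitation ability that the basic ABC search equation lacks.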

Comparison of the ABC-PS with the Other Hybrid Methods Based on the ABC
In this section, ABC-PS is applied to minimize a set of benchmark functions. In all simulations, the inertia weight $w$ in (8) is defined as follows:
$$w = w_{max} - \frac{(w_{max} - w_{min}) \cdot \text{iter}}{\text{maxcycle}}, \tag{16}$$
where $w_{max}$ and $w_{min}$ are the maximum and minimum inertia weights, respectively, and iter denotes the current iteration. From this formula, it can be seen that, as the iterations increase, the inertia weight decreases, so the contribution of the previous velocity $v_i$ of particle $X_i$ shrinks and the search gradually shifts from exploration to exploitation. $w_{max}$ and $w_{min}$ are set to 0.9 and 0.4, respectively; $v_{min} = -1$ and $v_{max} = 1$.

Experiment 1. In order to evaluate the performance of the ABC-PS algorithm, we use a test bed of four traditional numerical benchmarks, as illustrated in Table 1: the Matyas, Booth, 6 Hump Camelback, and Goldstein-Price functions. The characteristics, dimensions, initial ranges, and formulations of these functions are given in Table 1. Empirical results of the proposed hybrid method are compared with those of the basic ABC algorithm and a recent algorithm, COABC [15]. The values of the common parameters used in the three algorithms, such as population size and total evaluation number, are chosen to be the same: the population size is 50 for all functions and the limit is 10. For each function, all the methods were run 30 times independently. To make the comparison clear, the global minimums, maximum numbers of iterations, mean best values, and standard deviations are given in Table 2. For ABC and ABC-PS, Figures 3, 4, 5, and 6 illustrate the change of the best value over the iterations. The experiment shows that the ABC-PS method is much better than the original ABC algorithm and COABC.

Experiment 2. To further verify the performance of ABC-PS, 12 numerical benchmark functions are selected from the literature [13][14][15]. This set contains many different kinds of problems: unimodal, multimodal, regular, irregular, separable, nonseparable, and multidimensional. The characteristics, dimensions, initial ranges, and formulations of these functions are listed in Table 3.
In order to fairly compare the performance of ABC-PS, COABC, GABC [13], and PABC [14], the experiments are conducted in the same way as described in [13][14][15]. The minimums, maximum iterations, mean best values, and standard deviations found after 30 runs are given in Table 4, where the bold font marks the optimum value among the different methods. From Table 4, it can be seen that ABC-PS is superior to the other algorithms in most cases, except for $f_5$ and $f_6$.

Conclusion
In this paper, a hybrid ABC algorithm based on a particle swarm searching mechanism (ABC-PS) was presented. To overcome the disadvantages of the ABC algorithm, we adopted good point set theory to generate the initial food sources; then, the mechanism of PSO was utilized to search for new candidate solutions to improve the exploitation ability of the bee swarm; finally, a chaotic search operator was applied to the best solution of the current iteration to increase the searching ability. The experimental results show that ABC-PS exhibits excellent performance and outperforms other algorithms such as ABC, GABC, COABC, and PABC in most cases.

(18) Chaotic search $K$ times around $G_{best}$ by using (14), and redetermine the $G_{best}$ of the population.
(19) End while

Figure 3: The relation of the best value and each iteration (the function Matyas).

Figure 4: The relation of the best value and each iteration (the function Booth).
Figure 5: The relation of the best value and each iteration (the function 6 Hump Camelback).

Figure 6: The relation of the best value and each iteration (the function Goldstein-Price).

Table 3: Benchmark functions used in Experiment 2.