A Modified Artificial Bee Colony Algorithm Based on Search Space Division and Disruptive Selection Strategy



Introduction
Optimization problems exist extensively in information science, control engineering, industrial design, operational research, and other fields. Therefore, optimization algorithms, which are of great importance to engineering and scientific research, have attracted more and more attention from researchers, and many achievements have been made. However, conventional optimization algorithms fail on problems that are nonconvex, nondifferentiable, discontinuous, high dimensional, and so forth; swarm intelligence algorithms, which have advantages such as scalability, fault tolerance, adaptation, speed, modularity, autonomy, and parallelism [1], provide a promising way to solve such complex optimization problems. Some excellent algorithms, such as the genetic algorithm (GA) [2], particle swarm optimization (PSO) [3], the differential evolution (DE) algorithm [4], ant colony optimization (ACO) [5], simulated annealing (SA) [6], and the artificial bee colony (ABC) algorithm [7], have been proposed and successfully applied in many fields. Among them, ABC, developed by Karaboga by simulating the foraging behavior of a honey bee swarm, is competitive with other swarm intelligence algorithms [8]. As a result, it has been widely used in multiobjective optimization, circuit design, SAR image segmentation, flow shop scheduling, and related optimization problems [9][10][11]. Nevertheless, as research on ABC is still at an early phase, problems such as poor quality of initial solutions, slow convergence, premature convergence, and low precision still hamper its development and application.
To overcome the above-mentioned shortcomings, researchers have presented many kinds of ABC variants. Zhu and Kwong [12] proposed a gbest-guided ABC (GABC) algorithm by incorporating the information of the global best (gbest) solution into the solution search equation.

Standard Artificial Bee Colony Algorithm
The ABC algorithm, originally introduced by Karaboga, is a recently proposed optimization algorithm inspired by the foraging behavior of a honey bee swarm [1]. In the minimal model, a colony of artificial bees consists of three kinds of bees [7, 8]: employed bees, onlooker bees, and scout bees. The numbers of employed bees and onlooker bees are both equal to half of the colony. There is a one-to-one correspondence between employed bees and food sources. Employed bees are responsible for searching food sources, gathering information, and sharing food information with onlooker bees around the hive. An employed bee whose food source has been exhausted becomes a scout. A scout bee, as its name implies, has the task of quickly discovering new feasible solutions. In the ABC algorithm, the position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The ABC algorithm can be divided into four phases: the initialization phase, the employed bee phase, the onlooker bee phase, and the scout bee phase, as described below.
(1) Initialization Phase. To begin with, a population of SN individuals is generated randomly. Each initial food source x_i (i = 1, 2, ..., SN), a D-dimensional vector, is produced by

x_ij = lb_j + rand(0, 1) × (ub_j − lb_j), (1)

where x_i denotes a food source, namely, a solution, i = 1, 2, ..., SN, j = 1, 2, ..., D, D is the number of optimization parameters, and ub_j and lb_j are the upper and lower bounds for dimension j, respectively. Then, each solution is evaluated by

fitness_i = 1 / (1 + f_i) if f_i ≥ 0, and fitness_i = 1 + |f_i| otherwise, (2)

where fitness_i denotes the fitness value of solution i and f_i represents the cost (objective) function value of solution i.
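For illustration, the initialization of (1) and the fitness mapping of (2) can be sketched in Python (a minimal sketch; the sphere function is used here only as a stand-in objective, not one of the paper's benchmarks):

```python
import numpy as np

def init_population(SN, D, lb, ub, rng):
    # Eq. (1): x_ij = lb_j + rand(0, 1) * (ub_j - lb_j)
    return lb + rng.random((SN, D)) * (ub - lb)

def fitness(f):
    # Eq. (2): 1 / (1 + f) for f >= 0, otherwise 1 + |f|
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

rng = np.random.default_rng(0)
lb, ub = np.full(2, -5.0), np.full(2, 5.0)
pop = init_population(10, 2, lb, ub, rng)
costs = np.sum(pop ** 2, axis=1)              # sphere function as stand-in objective
fits = np.array([fitness(f) for f in costs])  # all values fall in (0, 1] here
```

Because the sphere objective is nonnegative, every fitness value lands in (0, 1]; a negative objective would instead be mapped to 1 + |f|.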
After initialization, the algorithm repeats the employed bee, onlooker bee, and scout bee phases until the stopping conditions are satisfied.
(2) Employed Bee Phase. In this phase, each employed bee produces a new food source (solution) in the neighborhood of its previously selected solution. The position of the new food source is calculated by

v_ij = x_ij + φ_ij × (x_ij − x_kj), (3)

where k ∈ {1, 2, ..., SN} and j ∈ {1, 2, ..., D} are randomly chosen indexes, k has to be different from i, v_i is the new solution, and φ_ij is a random number in [−1, 1].
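The neighborhood search of (3) can be sketched as follows (a minimal illustration; the function name and the random seed are our own choices):

```python
import numpy as np

rng = np.random.default_rng(42)
pop = rng.uniform(-5.0, 5.0, size=(10, 3))  # 10 food sources, D = 3

def employed_bee_step(pop, i, rng):
    """Eq. (3): v_ij = x_ij + phi_ij * (x_ij - x_kj), with k != i and a
    single randomly chosen dimension j perturbed."""
    SN, D = pop.shape
    k = rng.choice([s for s in range(SN) if s != i])  # random partner, k != i
    j = rng.integers(D)                               # random dimension index
    phi = rng.uniform(-1.0, 1.0)                      # phi_ij in [-1, 1]
    v = pop[i].copy()
    v[j] = pop[i, j] + phi * (pop[i, j] - pop[k, j])
    return v

v = employed_bee_step(pop, 0, rng)  # candidate differing from x_0 in one coordinate
```

Only one coordinate is perturbed per step, matching the standard ABC search equation.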
After that, the fitness value of the new food source is evaluated and compared with that of the old food source x_i. The old food source will be replaced by the new one if the new one has a better fitness value; otherwise, the old one will be retained.

Begin
Step 1. Initialize the population of food sources by (1).
Step 2. Evaluate the fitness of the population by (2).
Step 3. cycle = 1
Step 4. Repeat
Step 5. Update the position of each employed bee by (3).
Step 6. A greedy selection strategy is applied to select the better food source.
Step 7. Calculate the probability values p_i by (4).
Step 8. Update the position of the onlooker bee by (3).
Step 9. A greedy selection strategy is applied to select the better food source.
Step 10. Determine the abandoned position, if any, and generate a new position by (1).
Step 11. Memorize the best food source achieved till now. cycle = cycle + 1
Step 12. Until (cycle = maximum cycle number)
End.

Algorithm 1: Standard artificial bee colony algorithm.
(3) Onlooker Bee Phase. When all employed bees have finished their search, they share the positions and fitness information of the food sources with the onlooker bees, each of which selects a food source with a probability p_i calculated by

p_i = fitness_i / Σ_{n=1}^{SN} fitness_n. (4)

After an onlooker bee chooses a food source with this probability, a new food source is generated by (3) and its fitness value is calculated. Subsequently, a greedy selection is applied between the new food source and the old one.

(4) Scout Bee Phase. If the fitness value of a food source does not improve for a certain number of cycles (limit), the food source is abandoned and the corresponding employed bee becomes a scout bee. The scout bee then produces a new position randomly by (1) to replace the abandoned food source. On the basis of the above analysis, the pseudocode of the standard artificial bee colony algorithm is given in Algorithm 1.
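The selection probability of (4) and a roulette-wheel pick based on it can be sketched as:

```python
import numpy as np

def onlooker_probabilities(fits):
    # Eq. (4): p_i = fitness_i / sum over n of fitness_n
    return fits / np.sum(fits)

fits = np.array([0.5, 0.25, 0.25])
p = onlooker_probabilities(fits)  # higher fitness -> higher selection probability
chosen = np.random.default_rng(3).choice(len(fits), p=p)  # roulette-wheel pick
```

Each onlooker thus favors richer food sources while still occasionally visiting poorer ones.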

Our Proposed Algorithm: SDABC
Attracted by the prospects and potential of the ABC algorithm, researchers have focused on its variants. However, some problems still need to be solved. In this section, to further improve its performance, we introduce the modified artificial bee colony algorithm from three aspects, namely, population initialization based on search space division, a disruptive selection strategy, and an improved scout bee phase. The detailed descriptions are as follows.

Population Initialization Based on Search Space Division.
An initialization method capable of providing a better exploration of the search space and presenting only high-quality solutions should improve the performance of a metaheuristic algorithm [20]. Specifically, population initialization influences convergence speed, final solution accuracy, and stability. Generally speaking, in the absence of prior knowledge about the solution, random initialization is the most commonly used way to produce the initial population. However, because of its randomness, random initialization cannot steadily provide high-quality initial solutions. That is to say, once in a while an initial solution produced by random initialization lies close to the optimal solution and results in fast convergence; otherwise, the search takes considerably more time. Logically, we should be looking in all directions simultaneously [21]; in other words, initial solutions should be distributed evenly across the whole search space.
Accordingly, we propose a novel initialization algorithm called search space division (SSD). To make it easy to understand, we take one-dimensional space as an example, as shown in Figure 1. The basic idea of SSD is as follows: to begin with, SSD divides the search space into SN segments (e.g., SN = 4). There is one and only one initial solution in every segment (search subspace), which ensures that the initial solutions are distributed relatively evenly across the whole search space. After that, SSD uses the midpoint of each search subspace as a base point, and initial solutions are randomly offset from the base points by up to half the length of a segment in either direction. Here, random numbers generated within a limited scope are used to produce different initial solutions at each run, which gives the algorithm a chance to succeed in the next run if it failed in the previous one. It is necessary to emphasize that SSD easily extends to high dimensions. Based on the description above, the main steps of SSD are summarized in Figure 1.
Let x_ij ∈ [x_min^j, x_max^j] be a real number, the jth dimension of the ith solution, where i ∈ {1, 2, ..., SN} (SN denotes the number of initial solutions) and j ∈ {1, 2, ..., D} (D represents the number of parameters); x_max^j and x_min^j are the upper and lower bounds for the jth dimension, respectively. Then, base points are generated by (5), and initial solutions are produced by (6):

x_center(i, j) = x_min^j + (i − 1/2) × (x_max^j − x_min^j) / SN, (5)

x_ij = x_center(i, j) + rand(−1, 1) × (x_max^j − x_min^j) / (2 × SN), (6)

where x_center(i, j) is the base point corresponding to the jth dimension of the ith solution. Through the above analysis, we use SSD instead of pure random initialization to produce initial solutions; its pseudocode is given in Algorithm 2.

Begin
Step 1. Set the number of employed bees FN and the number of parameters D.
Step 2. for i = 1 to FN do
Step 3. for j = 1 to D do
Step 4. Generate the base point x_center(i, j) by (5) and the initial solution x_ij by (6).
Step 5. end for
Step 6. end for
End.
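The SSD scheme can be sketched in Python as follows (a minimal sketch; the segment-midpoint indexing is our reading of the verbal description, with one solution confined to each segment):

```python
import numpy as np

def ssd_init(SN, D, lb, ub, rng):
    """One initial solution per segment: the midpoint of segment i is the
    base point (Eq. (5)); the solution is offset from it by at most half a
    segment length in either direction (Eq. (6))."""
    seg = (ub - lb) / SN                  # segment length per dimension
    pop = np.empty((SN, D))
    for i in range(SN):
        center = lb + (i + 0.5) * seg     # base point of the (i+1)-th segment
        pop[i] = center + rng.uniform(-0.5, 0.5, D) * seg
    return pop

rng = np.random.default_rng(1)
pop = ssd_init(4, 1, np.array([0.0]), np.array([8.0]), rng)
# pop[i] is guaranteed to lie inside segment i: [2i, 2(i+1)] for this range
```

Unlike pure random initialization, no segment of the search range is left empty, so the initial population is spread relatively evenly.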

Disruptive Selection Strategy.
Fitness value is of crucial importance to the ABC algorithm because it is the sole criterion of nectar amount. In the ABC algorithm, the fitness value is generated by (2). As described in [22], there is a vital problem to be solved. For example, when minimizing a function, (2) is used to evaluate the fitness. However, when the function value is very close to zero, for example, 1e−20, the fitness value calculated by (2) is rounded up to 1 (the 1e−20 is lost). Subsequently, the fitness of all solutions becomes equal to 1 in later iterations. That is to say, a new solution that gives a better objective value than the old solution will be ignored, and the search will stagnate at the old solution [22]. To solve this problem, [22] directly used the objective function value for comparison and selection of the better solution. To some extent, this amendment solves the issue, but it raises a new problem: onlooker bees lose their search direction, which results in falling into local optima. As the number of iterations increases, differences between objective function values become smaller and smaller, which requires the fitness value to respond sensitively to slight changes; otherwise, onlooker bees cannot select appropriate food sources. Through the above analysis, we propose a new equation, (7), to calculate the fitness value, where f_i is the objective value of solution i and fitness_i is its fitness value.
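The rounding problem described above is easy to reproduce: in IEEE double precision, machine epsilon is about 2.2e−16, so adding 1e−20 to 1 yields exactly 1.0 and the improvement vanishes from the fitness comparison:

```python
# Eq. (2) computes 1 / (1 + f). For f near zero, 1 + f rounds to exactly 1.0
# in float64, so a genuinely better solution is indistinguishable by fitness.
f_old, f_new = 2e-20, 1e-20          # the new solution is genuinely better
fit_old = 1.0 / (1.0 + f_old)
fit_new = 1.0 / (1.0 + f_new)
print(fit_old == fit_new == 1.0)     # True: both fitness values collapse to 1
print(f_new < f_old)                 # True: comparing raw objectives still works
```

This is exactly why [22] switched to comparing objective values directly, at the cost of the onlooker guidance discussed above.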
After that, a greedy selection strategy is applied to select the food source. However, as is well known, greedy selection often falls into locally optimal solutions owing to a lack of population diversity. To address this issue, we introduce the disruptive selection strategy to maintain population diversity. The disruptive selection strategy gives both higher and lower individuals more chances to be selected by changing the definition of the fitness function as in (8) and (9) [23]:

fit_i = |fitness_i − fitness_mean|, (8)

fitness_mean = (1 / FN) × Σ_{i=1}^{FN} fitness_i, (9)

where fitness_mean is the average of the fitness values fitness_i of the individuals in the population. Based on the above amendments, the main steps of the disruptive selection strategy are shown in Algorithm 3.

Improved Scout Bee Phase.
The guidance of the best solution rapidly accelerates convergence, which has been proved in literature [12]. This means that the standard scout bee has yet to take advantage of the information from the optimal solution achieved so far. Moreover, it is hard to determine the limit value because we face a dilemma: setting limit too large results in poor population diversity, while setting it too small renders convergence slow. As a matter of fact, the sole purpose of a scout bee is to maintain population diversity, which can be served by other strategies. As a result, in this paper, we adopt the disruptive selection strategy to keep population diversity. In addition, for the purpose of accelerating convergence, we assign a new task to the scout bee: to further exploit the promising position. Therefore, a new equation, (10), generates the new position for the scout bee, where j ∈ {1, 2, ..., D}, x_best is the best solution achieved till now, and x_abandon is the solution abandoned by the employed bee. Through the above operations, the main steps of SDABC are presented in Algorithm 4.

Begin
{-- Population initialization --}
Step 1. Set the population size CS, the number of food sources FN, the non-improvement limit of a food source limit, the maximum number of iterations max_cycle, and the number of parameters D.
Step 2. Generate initial solutions using population initialization based on SSD as in (5) and (6), and calculate their fitness values by (7) and (8).
Step 3. cycle = 1
Step 4. While cycle ≤ max_cycle do
{-- Employed bee phase --}
For i = 1 : FN do
Update the position of the employed bee by (3). Check whether it is out of boundaries or not. Calculate the fitness value by (7) and (8).
A greedy selection strategy is applied to select the better food source.
End for
{-- Onlooker bee phase --}
While t < FN do
If random < p_i then do
Update the position of the onlooker bee by (3). Check whether it is out of boundaries or not. Calculate the fitness value by (7) and (8).
A greedy selection strategy is applied to select the better food source.
End if
End while
{-- Scout bee phase --}
If max(trial) > limit then do
Generate a new food source for the scout bee by (10).
End if
Step 5. Memorize the best food source achieved till now. cycle = cycle + 1
Step 6. End while
End.

Algorithm 4: The main steps of our proposed algorithm SDABC.
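The disruptive fitness of (8) and (9) can be sketched as follows (illustrative values of our own choosing; note that the best and the worst individual receive the same, highest selection weight):

```python
import numpy as np

def disruptive_fitness(fits):
    # Eqs. (8)-(9): fit_i = |fitness_i - mean(fitness)|; both the best and
    # the worst individuals receive large values, preserving diversity.
    return np.abs(fits - np.mean(fits))

fits = np.array([0.9, 0.5, 0.1])
fit2 = disruptive_fitness(fits)  # extremes 0.9 and 0.1 get the same top weight
```

Average individuals are thus selected least often, which counteracts the premature uniformity induced by purely greedy selection.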

Simulation Experiments
In this section, a series of experiments was conducted to verify the effectiveness of our proposed algorithm SDABC on standard benchmark functions. Well-defined benchmark functions based on mathematical functions facilitate the testing of our proposed algorithm. As a result, we selected fourteen benchmark functions for these experiments. The formulations and properties of these benchmark functions are summarized in Table 1.

Effects of Each Modification on the Performance of SDABC.
In this section, we conducted two types of experiments to analyze each modification separately. On the one hand, in order to further certify the superiority of our proposed initialization algorithm, we compared SSD with other initialization algorithms, including opposition-based learning (OBL for short) proposed in [21] and random initialization (RI for short). For clarity, we call the original ABC with our proposed population initialization algorithm (i.e., replacing RI in the original ABC with SSD) ABC-ssd, and the original ABC with OBL ABC-obl. Note that ABC utilizes RI to generate the initial population. In the experiments, the population size was set to 20, the maximum number of cycles was set to 600, and the value of limit was set to 150. We then compared the convergence performance and final solution accuracy of the different initialization methods on a set of six benchmark functions, including three unimodal functions and three multimodal functions, with D = 10, 60, and 100, respectively. For all functions, each algorithm was run 100 independent times. The results are shown in Table 2 in terms of the mean and standard deviation (Std for short) of the solutions obtained by each algorithm. Moreover, the best and second best solutions are marked in bold, and a two-tailed t-test at a 0.05 level of significance was used. Note that the value "+" denotes that our proposed algorithm is significantly better than the other algorithms. It can be observed that ABC-ssd has higher solution accuracy on all functions.
In addition, the convergence process of the best solution values over 100 runs on the six functions is presented in Figure 2, where curves 1, 2, and 3 denote the best initial solutions generated by SSD, OBL, and RI, respectively. Moreover, some partially enlarged drawings are provided in Figure 2 to show how much the different population initialization methods contribute to improving the quality of the initial solutions. It can be seen clearly that SSD provides the highest-quality initial solution (curve 1). For the six benchmark functions with D = 30, 60, and 100, SSD is significantly superior to OBL and RI throughout the whole process of finding the global minimum. There is a tendency that the larger the number of optimization parameters, the more obvious the superiority of SSD. OBL performs second best at finding the global minimum. Specifically, for unimodal functions, OBL is somewhat superior to RI in terms of initial solutions and convergence performance; for multimodal functions, however, these advantages may disappear.
On the other hand, because of their correlative dependence, we combined the disruptive selection strategy and the improved scout bee to investigate their effects on SDABC in terms of population diversity and convergence performance. To begin with, population diversity is defined as follows [24]:

diversity = (1 / FN) × Σ_{i=1}^{FN} sqrt( Σ_{j=1}^{D} (x_ij − x̄_j)^2 ),

where D is the number of optimization parameters and x̄ is the average point, defined as

x̄_j = (1 / FN) × Σ_{i=1}^{FN} x_ij.

A comparison was conducted between ABC and the standard ABC algorithm with both the disruptive selection strategy and the improved scout bee (DSS for short) on the above six benchmark functions with D = 30. The population size was set to 20 and the maximum number of cycles was set to 1000. Note that the value of limit was set to 150 (0.5 × SN × D) for ABC [25] and to 20 for DSS; because the definition of the scout bee was changed, we obtained the limit value for DSS through repeated testing. For all functions, each algorithm was run 100 independent times. The convergence performance of the different ABCs and their corresponding population diversity are presented in Figure 3. It can be observed that the population diversities of ABC and DSS are almost the same in the early stage, because the same initialization method is applied to obtain the initial solutions; that is, they start toward the global optimum with almost the same diversity. However, as the number of iterations increases, DSS shows higher population diversity and better convergence performance than ABC. For the Rosenbrock function, we note that ABC converges fast initially toward a known local optimum; as the procedure proceeds, ABC falls into the local minimum, while DSS gets closer and closer to the global minimum. In a word, the better results are obtained by the combined effects of the disruptive selection strategy and the improved scout bee.
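A minimal Python sketch of this diversity measure (our reading of the definition: the mean Euclidean distance of each individual from the average point x̄):

```python
import numpy as np

def diversity(pop):
    """Mean Euclidean distance of each individual from the average point."""
    x_bar = pop.mean(axis=0)                          # average point, per dimension
    return float(np.mean(np.linalg.norm(pop - x_bar, axis=1)))

pop = np.array([[0.0, 0.0], [2.0, 0.0]])  # average point is (1, 0)
d = diversity(pop)                        # each individual is 1.0 away -> d = 1.0
```

A value near zero indicates that the swarm has collapsed onto a single point, which is the failure mode the disruptive strategy is designed to prevent.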

Comparison between ABC and SDABC.
In this section, a set of experiments was conducted to testify to the efficiency of our proposed algorithm on the 14 benchmark functions with D = 30 and 100 presented in Table 1; the experimental results were compared with those of ABC. To allow a fair comparison, all the experiments used the following parameter settings: the population size was set to 20 (namely, FN = 10), and the maximum number of cycles was set to 3000. It is necessary to emphasize that the value of limit was set to 200 for ABC and 20 for SDABC, for the reason given in the previous section. For all functions, the algorithms carried out 50 independent runs. For clarity, the best solutions are marked in bold. The results are shown in Table 3 in terms of the best, worst, mean, and standard deviation of the solutions obtained by each algorithm over the 50 independent runs. In Table 3, "Std" and "AT" denote the standard deviation of the solutions and the average elapsed time, respectively. It can be observed that SDABC has higher accuracy on all the functions than ABC. In particular, SDABC can find optimum solutions on functions f6, f8, f10, f13, and f14 with D = 30. Moreover, the average elapsed time of SDABC is less than that of ABC on almost all functions, except f3 with D = 100 and f8 and f10 with D = 30 and 100. However, the superiority of ABC over SDABC in computational time is not significant. In other words, SDABC has a higher convergence rate and accuracy.
In addition, the statistical comparison of SDABC with ABC uses a two-tailed t-test at a 0.05 level of significance. Note that the value "+" in the 9th column of Table 3 represents that our proposed algorithm performs significantly better than ABC.

Algorithm 2: Population initialization based on search space division.

Figure 2: Convergence process of the best solution values over 100 runs.
[Table 1 residue: rows for the Schwefel function, Σ_{i=1}^{D} x_i sin(√|x_i|) with range [−500, 500], and the Ackley function f12 appeared here.] Specifically, f1–f9 are unimodal functions and f10–f14 are multimodal functions. Hereinto, f6 is a discontinuous step function, f11 is a bound-constrained function, and f3 is a nonconvex function with multiple minima in the high-dimensional case. It is necessary to emphasize that all the experiments were implemented in the same hardware and software environment. Specifically, the hardware configuration is as follows: an Intel Core 2 Duo processor running at 2.20 GHz, 512 MB of RAM, and an 80 GB hard drive. The operating system is Microsoft Windows sp3. Our code was executed with the default settings of Matlab R2010a.

Table 2: Mean and Std of solutions obtained by the original ABC with different initialization algorithms.

Table 3: Best, worst, mean, Std, and AT of solutions obtained by ABC and SDABC over 50 independent runs on the 14 functions.