Discrete Dynamics in Nature and Society, Hindawi Publishing Corporation, Volume 2015, Article ID 674595, doi:10.1155/2015/674595

Research Article

Artificial Bee Colony Algorithm with Time-Varying Strategy

Quande Qin,1,2 Shi Cheng,3,4 Qingyu Zhang,1,2 Li Li,1 and Yuhui Shi5

1 Department of Management Science, College of Management, Shenzhen University, Shenzhen 518060, China
2 Research Institute of Business Analytics & Supply Chain Management, Shenzhen University, Shenzhen 518060, China
3 Division of Computer Science, University of Nottingham Ningbo China, Ningbo 315100, China
4 International Doctoral Innovation Centre, University of Nottingham Ningbo China, Ningbo 315100, China
5 Department of Electrical & Electronic Engineering, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China

Academic Editor: Ricardo López-Ruiz

Received 29 October 2014; Revised 13 March 2015; Accepted 17 March 2015

Copyright © 2015 Quande Qin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Artificial bee colony (ABC) is one of the newest additions to the class of swarm intelligence algorithms and has been shown to be competitive with other population-based algorithms. However, ABC suffers from a shortcoming: it is good at exploration but poor at exploitation. To strike a proper balance between these two conflicting factors, this paper proposes a novel ABC variant with a time-varying strategy, in which the ratio between the number of employed bees and the number of onlooker bees varies with time. Linear and nonlinear time-varying strategies can be incorporated into the basic ABC algorithm, yielding the ABC-LTVS and ABC-NTVS algorithms, respectively. The effects of the added parameters in the two new ABC algorithms are also studied by solving representative benchmark functions. The proposed modification to the structure of the basic ABC algorithm is simple and easy to implement. Moreover, the proposed approach is general and can be incorporated into other ABC variants. A set of 21 benchmark functions in 30 and 50 dimensions is utilized in the experimental studies. The experimental results show the effectiveness of the proposed time-varying strategy.

1. Introduction

Swarm intelligence concerns the design of intelligent optimization algorithms that take inspiration from the collective behavior of social insects. Over the past decades, swarm intelligence has shown great success in solving complicated problems. The problems to be optimized by swarm intelligence algorithms do not need to be mathematically represented as continuous, convex, and/or differentiable functions; they can be represented in any form. The artificial bee colony (ABC) algorithm, developed by Karaboga in 2005, is a recent addition to this category.

The ABC algorithm is inspired by the intelligent behavior of honey bees seeking quality food sources [3, 68]. In a span of less than 10 years, ABC has already been demonstrated to be a promising technique for solving global optimization problems [9, 10]. Numerical comparisons and many engineering applications have demonstrated that ABC can obtain good search results. Due to its simplicity, flexibility, and outstanding performance, ABC has captured increasing interest from the swarm intelligence research community and has been applied to many real-world areas, such as numerical optimization [6, 10], neural network training [7, 12], finance forecasting [13, 14], production scheduling, data clustering [18, 19], image segmentation [20, 21], service selection [22], and power system optimization.

However, like other population-based stochastic algorithms, ABC has some inherent pitfalls. Its convergence speed is slower than those of representative population-based stochastic algorithms such as DE and PSO. Moreover, ABC suffers from premature convergence when dealing with certain complicated problems. Several ABC variants have been proposed with the aim of overcoming these pitfalls. These variants can be generally categorized into three groups: (1) tuning the configuration parameters; (2) hybridizing ABC with other evolutionary optimization operators to enhance performance [10, 32, 33]; (3) designing new learning strategies by modifying the search equation of the basic ABC algorithm [24, 25, 34].

It is recognized that exploration and exploitation are the two most important factors affecting the performance of a population-based optimization algorithm [6, 35]. Exploration refers to the ability of a search algorithm to investigate unknown regions of the search space in order to have a high probability of discovering promising solutions. Exploitation, on the other hand, refers to the ability to concentrate the search around a promising region in order to refine a candidate solution [36, 37]. A good population-based optimization algorithm should properly balance these two conflicting objectives. It has been observed that ABC has good exploration ability but poor exploitation ability, which may prevent the ABC algorithm from proceeding towards the global optimum even when the population has not converged to a local optimum.

The colony of the ABC algorithm contains three groups of bees: employed bees, onlookers, and scouts. In the employed bee phase, the algorithm focuses on explorative search, as indicated by the solution updating scheme, which uses the current solution and a randomly chosen solution. A fitness-based probabilistic selection scheme is used in the onlooker phase, which reflects the exploitation tendency of the algorithm. In the original ABC algorithm, it is assumed that half of the colony consists of employed bees and the other half consists of onlookers. In other words, the ratio of employed to onlooker bees is fixed at 1:1.

The basic ABC algorithm is easy to tune with few parameters but lacks an effective and efficient way to control the exploration and exploitation abilities. In the present study, we propose an easy modification to the structure of the basic ABC algorithm in an attempt to balance these two abilities, adding one new parameter compared with the basic ABC algorithm. The core idea of the modification is to design a proper mechanism for adjusting the ratio of employed to onlooker bees over time. The proposed algorithms are called ABC with a linear time-varying strategy (ABC-LTVS) and ABC with a nonlinear time-varying strategy (ABC-NTVS). The objective of this design is to enhance global exploration in the early search and to encourage the solutions to converge toward the global optimum in the later search.

The remainder of this paper is organized as follows. Section 2 briefly introduces the basic ABC algorithm. The proposed algorithms, ABC-LTVS and ABC-NTVS, are elaborated in Section 3. In Section 4, comprehensive experimental studies are conducted on 21 benchmark functions in 30-dimensional and 50-dimensional problems to verify the effectiveness of the proposed algorithms. Finally, conclusions are presented in Section 5.

2. Basic ABC Algorithm

The ABC algorithm is a recently introduced optimization algorithm proposed by Karaboga [3, 39, 40], inspired by the intelligent foraging behavior of honeybee swarms. In the ABC algorithm, there are two components: the foraging artificial bees and the food sources [3, 39]. The position of a food source represents a possible solution, and the nectar amount of a food source corresponds to the fitness of the associated solution. In the basic ABC algorithm, the colony of artificial bees contains three groups of bees: employed bees, onlookers, and scouts. The employed bees are responsible for searching for available food sources and passing the food information to the onlooker bees [3, 40]. The onlookers select good sources from those found by the employed bees and search them further. When the fitness of a food source is not improved through a predetermined number of cycles, denoted as limit, the food source is abandoned by its employed bee; the employed bee then becomes a scout and starts to search for a new food source in the vicinity of the hive.

The ABC algorithm consists of four phases: initialization, employed bees, onlooker bees, and scout bees [3, 39, 40]. In the initialization phase, SN food source positions are randomly produced in the D-dimensional search space using the following equation [3, 6]:

(1) x_{id} = l_d + r_1 (u_d − l_d),

where x_i = [x_{i1}, x_{i2}, …, x_{iD}] is the i-th food source, d ∈ {1, 2, …, D}, and i ∈ {1, 2, …, SN}, where SN denotes the number of food sources; l_d and u_d are the lower and upper bounds for dimension d, respectively; r_1 is a random number uniformly distributed within the range [0, 1].
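As an illustration (a minimal sketch, not the authors' code; the function and variable names are ours), the initialization in (1) can be written as:

```python
import numpy as np

def initialize_food_sources(SN, D, lower, upper, rng=None):
    """Eq. (1): x_id = l_d + r1 * (u_d - l_d), with r1 ~ U(0, 1)
    drawn independently for every source i and dimension d."""
    rng = rng or np.random.default_rng()
    r1 = rng.random((SN, D))             # uniform random numbers in [0, 1)
    return lower + r1 * (upper - lower)  # array of shape (SN, D)
```

Here, lower and upper may be scalars or length-D arrays holding the per-dimension bounds l_d and u_d.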

After the food sources are produced, each is assigned to one employed bee, so that there is exactly one employed bee per food source. In the employed bee phase, each employed bee tries to find a better-quality food source based on x_i. A new trial food source, denoted as u_i = [u_{i1}, u_{i2}, …, u_{iD}], is calculated from the equation below [3, 6]:

(2) u_{ij} = x_{ij} + φ (x_{ij} − x_{kj}),

where j is a randomly selected index in the range [1, D]; φ is a random number uniformly distributed in the range [−1, 1]; and k is the index of a randomly chosen food source satisfying k ≠ i. After u_i is obtained, it is evaluated and compared to x_i. If the fitness of u_i is better than that of x_i, the bee forgets the old food source x_i and memorizes the new one; otherwise, it keeps x_i in its memory. After all employed bees have finished their search, they share the nectar and position information of their food sources with the onlookers.
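The search in (2) can be sketched as follows (our naming, not the original implementation); it perturbs a single randomly chosen dimension of x_i toward or away from a random partner x_k:

```python
import numpy as np

def produce_trial_source(x, i, rng=None):
    """Eq. (2): u_ij = x_ij + phi * (x_ij - x_kj) for one random j and k != i."""
    rng = rng or np.random.default_rng()
    SN, D = x.shape
    j = rng.integers(D)                               # random dimension to mutate
    k = rng.choice([s for s in range(SN) if s != i])  # random partner, k != i
    phi = rng.uniform(-1.0, 1.0)
    u = x[i].copy()
    u[j] = x[i, j] + phi * (x[i, j] - x[k, j])
    return u
```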

Each onlooker bee selects the food source of an employed bee to improve it. The roulette wheel selection mechanism is performed using (3) [3, 6]:

(3) p_i = fit(x_i) / Σ_{j=1}^{SN} fit(x_j),

where p_i is the probability of food source x_i being selected by an onlooker bee and fit(x_i) is the fitness of x_i. The fitness of a food source is defined as [3, 6]

(4) fit(x_i) = 1 / (1 + f(x_i)) if f(x_i) ≥ 0; fit(x_i) = 1 + |f(x_i)| if f(x_i) < 0,

where f(x_i) is the objective function value of x_i. Once an onlooker has selected a food source, a new candidate food source is obtained by (2). As in the employed bee phase, greedy selection between the two food sources is performed.
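Equations (3) and (4) can be sketched in Python as follows (helper names are ours):

```python
import numpy as np

def fitness(f_vals):
    """Eq. (4): map objective values to nonnegative fitness values."""
    f = np.asarray(f_vals, dtype=float)
    return np.where(f >= 0, 1.0 / (1.0 + f), 1.0 + np.abs(f))

def selection_probabilities(f_vals):
    """Eq. (3): roulette-wheel probability p_i = fit(x_i) / sum_j fit(x_j)."""
    fit = fitness(f_vals)
    return fit / fit.sum()
```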

If a food source, x i , cannot be improved for a predetermined number of cycles, referred to as limit, this food source is abandoned. Then, the scout produces a new food source randomly according to (1) to replace x i .

The detailed pseudocode of the ABC algorithm is shown in Algorithm 1 [3, 40].

Algorithm 1: The pseudocode of the artificial bee colony algorithm.

(1) Initialize the set of food sources x i , i = 1,2 , , SN ;

(2) Evaluate each x i , i = 1,2 , , SN ;

(3) while  a “good enough” solution has not been found and the predetermined maximum number of iterations has not been reached  do

(4)  for   i = 1 to SN  do              /* Employed bees phase */

(5)   Generate u i with x i using (2);

(6)   Evaluate u i ;

(7)   if   fit ( u_i ) ≥ fit ( x_i )   then

(8)     x i = u i ;

(9)  for   i = 1 to SN  do              /* Onlooker bees phase */

(10)      Select an employed bee using (3);

(11)       Try to improve food source quality according to Steps (5)–(8);

(12)     Generate a new random food source for each food source that has not improved for limit successive iterations;         /* Scout bees phase */

(13)     Memorize the best food source achieved so far;
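Putting the four phases together, Algorithm 1 can be assembled into a compact runnable sketch (our simplified interpretation, not the authors' code; names, defaults, and bound handling are illustrative):

```python
import numpy as np

def abc_minimize(f, D, SN=20, limit=50, max_iters=200,
                 lower=-5.0, upper=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = lower + rng.random((SN, D)) * (upper - lower)       # Eq. (1)
    fx = np.array([f(xi) for xi in x])
    trials = np.zeros(SN, dtype=int)

    def fit(v):                                             # Eq. (4)
        return 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)

    def neighbor(i):                                        # Eq. (2)
        j = rng.integers(D)
        k = rng.choice([s for s in range(SN) if s != i])
        u = x[i].copy()
        u[j] += rng.uniform(-1.0, 1.0) * (x[i, j] - x[k, j])
        return np.clip(u, lower, upper)

    def greedy(i, u):                                       # steps (5)-(8)
        fu = f(u)
        if fit(fu) > fit(fx[i]):
            x[i], fx[i], trials[i] = u, fu, 0
        else:
            trials[i] += 1

    for _ in range(max_iters):
        for i in range(SN):                                 # employed bees phase
            greedy(i, neighbor(i))
        p = np.array([fit(v) for v in fx])
        p /= p.sum()                                        # Eq. (3)
        for _ in range(SN):                                 # onlooker bees phase
            i = rng.choice(SN, p=p)
            greedy(i, neighbor(i))
        for i in range(SN):                                 # scout bees phase
            if trials[i] > limit:
                x[i] = lower + rng.random(D) * (upper - lower)
                fx[i], trials[i] = f(x[i]), 0
    best = int(np.argmin(fx))
    return x[best], fx[best]
```

For example, minimizing the sphere function f(v) = Σ_d v_d² in five dimensions with the defaults above drives the best function value close to zero.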

3. The Proposed Algorithm

3.1. Time-Varying Strategy

During the employed bee phase, each food source is assigned one employed bee. The onlooker bees then select food sources based on their fitness values, which is similar to roulette wheel selection in genetic algorithms. Due to this selection scheme, food sources with higher fitness values have a greater chance of being selected by the onlooker bees, which helps improve the quality of these food sources. Thus, the onlooker bees are more concentrated on exploitation than the employed bees. In general, for population-based optimization algorithms, exploration is typically preferred at the early stages of the search but is required to gradually give way to exploitation in order to find the optimal solution efficiently. In the ABC algorithm, the population size is fixed. A large number of employed bees and a relatively small number of onlookers in the early search help the population move around the search space and enhance exploration. Conversely, a small number of employed bees and a large number of onlookers allow the individuals to converge to the global optimum in the later part of the optimization. In the basic ABC algorithm, the numbers of employed and onlooker bees are equal; that is, the ratio between them is 1:1. Considering these concerns, this paper proposes a time-varying strategy in which the ratio between the number of employed bees and the number of onlookers varies with time.

The colony size and the numbers of employed bees and onlookers are denoted as NP, NP_e, and NP_o, respectively, and it holds that NP = NP_e + NP_o. The ratio between NP_e and NP_o is denoted as r_c = NP_e / NP_o. A large value of r_c is conducive to global exploration; conversely, a small value facilitates local exploitation for fine-tuning the search. Intuitively, a linearly decreasing time-varying strategy (LTVS) may help balance exploration and exploitation during the entire search. The LTVS is proposed as follows:

(5) r_c = r_max − (r_max − r_min) × fitc / FEs,

where r_max and r_min are the maximum and minimum values of r_c, respectively; fitc is the current number of function evaluations; and FEs is the total number of function evaluations. The number of employed bees is set to NP_e = round(r_c × NP), where round(·) rounds to zero decimal places. If the number of employed bees NP_e is greater than the number of food sources SN, the first SN employed bees are each assigned to one food source randomly, and each of the remaining employed bees is placed at a randomly selected food source. When NP_e is smaller than SN, all employed bees randomly select food sources to search.
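The schedule in (5) and the resulting split of the colony can be sketched as follows (our naming; note that the sketch follows the paper's stated rule NP_e = round(r_c × NP), under which r_c effectively acts as the employed-bee fraction of the whole colony):

```python
def ratio_ltvs(fitc, FEs, r_max=0.7, r_min=0.2):
    """Eq. (5): linearly decrease r_c from r_max to r_min over the run."""
    return r_max - (r_max - r_min) * fitc / FEs

def colony_split(fitc, FEs, NP=60, r_max=0.7, r_min=0.2):
    """NP_e = round(r_c * NP); the remaining NP - NP_e bees are onlookers."""
    NP_e = round(ratio_ltvs(fitc, FEs, r_max, r_min) * NP)
    return NP_e, NP - NP_e
```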

To investigate the dynamics of the population distribution in basic ABC and ABC with LTVS, we take a typical multimodal function, the two-dimensional Rastrigin function, as an example. Figures 1 and 2 show the population distribution at the 5th, 10th, and 20th iterations when the 2-dimensional Rastrigin function was optimized by the original ABC and by ABC with LTVS, respectively. In this experiment, r_max and r_min were set to 0.8 and 0.3 in ABC with LTVS. Compared with the original ABC, the population in ABC with LTVS is distributed over a wider range of the search space at relatively early iterations and gradually gathers around the global optimum. In other words, LTVS explores the search space widely in the early stage while converging quickly to the global optimum in the later search.

Population distribution observed at iterations 5, 10, and 20 in ABC.

Population distribution observed at iterations 5, 10, and 20 in ABC with time-varying strategy.

It is also interesting to ask whether a nonlinear variation of r_c can enhance the performance of the ABC algorithm. In the present study, we propose a nonlinearly decreasing time-varying strategy (NTVS), given by the following equation:

(6) r_c = r_max − (r_max − r_min) × (fitc / FEs)^α,

where α is the nonlinear modulation index. Figure 3 shows typical variations of r_c with function evaluations for different settings of α. With α = 1, this strategy reduces to LTVS. With α > 1, r_c varies in a convex manner: compared with LTVS, r_c decreases relatively slowly in the early search and faster in the later search. Conversely, with α < 1, r_c varies in a concave manner.
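The nonlinear schedule in (6) differs from (5) only in the exponent α (a sketch with our naming):

```python
def ratio_ntvs(fitc, FEs, alpha=1.2, r_max=0.7, r_min=0.2):
    """Eq. (6): nonlinear decrease of r_c; alpha = 1 recovers the linear LTVS."""
    return r_max - (r_max - r_min) * (fitc / FEs) ** alpha
```

With α > 1 the curve is convex, so r_c stays above the linear schedule mid-run and drops faster near the end; with α < 1 it is concave.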

Variations of r_c with function evaluations for different settings of α.

3.2. Parameters Tuning of Time-Varying Strategy

In order to investigate the influence of r_max and r_min in LTVS, six combinations of values of r_max and r_min were tested on 8 representative functions, including 3 typical unimodal functions, f1, f2, and f3, and 5 typical multimodal functions, f6, f7, f8, f9, and f10, with 30 dimensions. The colony size is set to 60, the maximum number of function evaluations is 7 × 10^4, and limit is set to 200. Experimental results of 25 independent trials are presented in Table 1, where “Mean” indicates the mean of the function values, “SD” stands for the standard deviation, and “Rank” refers to the performance level on each function among the six combinations of r_max and r_min. The best mean function value among the six combinations is marked in boldface. The rank results given in Table 2 show that the setting r_max = 0.7 and r_min = 0.2 is the best choice; these values are therefore used in LTVS. We then attempt to obtain better performance with NTVS using α ≠ 1, keeping r_max and r_min fixed to the values obtained for the linear case. Table 3 shows the performance of the resulting algorithm for different values of α, keeping FEs = 7 × 10^4. From Tables 3 and 4, we can observe that the best result is obtained with α = 1.2.

Experimental results of different combinations of r_max and r_min in LTVS.

Parameter setting f 1 f 2 f 3 f 6 f 7 f 8 f 9 f 10
r max = 0.8 r min = 0.2 Mean 2.28e − 015 1.41e − 007 4.04e − 010 1.3601 1.72e − 002 1.33e − 003 2.82e − 007 2.12e + 002
SD 1.57e − 015 8.37e − 008 6.84e − 010 1.4594 8.59e − 002 4.63e − 003 2.04e − 007 8.10e + 001
Rank 1 1 1 2 6 5 2 5

r max = 0.8 r min = 0.3 Mean 3.20e − 015 1.79e − 007 7.54e − 010 1.5393 6.82e − 003 6.85e − 007 3.24e − 007 2.10e + 002
SD 2.59e − 015 7.97e − 008 1.41e − 009 1.7766 3.42e − 002 3.40e − 006 1.83e − 007 1.01e + 002
Rank 2 2 2 5 5 2 4 4

r max = 0.7 r min = 0.2 Mean 1.34e − 014 3.38e − 007 7.28e − 009 1.3620 1.73e − 006 6.48e − 010 2.62e − 007 1.80e + 002
SD 1.81e − 014 1.60e − 007 1.47e − 008 2.0555 5.34e − 006 1.72e − 009 1.39e − 007 7.84e + 001
Rank 4 4 3 3 1 1 1 2

r max = 0.7 r min = 0.3 Mean 6.12e − 015 3.41e − 007 1.49e − 008 1.2644 1.93e − 003 3.02e − 004 3.35e − 007 2.19e + 002
SD 4.41e − 015 1.64e − 007 4.59e − 008 1.1504 9.44e − 003 1.52e − 003 1.26e − 007 1.27e + 002
Rank 3 3 4 1 4 3 5 6

r max = 0.6 r min = 0.2 Mean 4.86e − 014 6.36e − 007 8.76e − 008 1.4883 8.55e − 006 9.79e − 004 2.85e − 007 1.67e + 002
SD 5.41e − 014 2.57e − 007 1.76e − 007 1.7028 3.02e − 005 2.77e − 003 1.51e − 007 1.02e + 002
Rank 6 5 6 4 2 4 3 1

r max = 0.6 r min = 0.3 Mean 4.59e − 014 8.45e − 007 2.66e − 008 1.6454 1.33e − 003 1.43e − 003 4.17e − 007 1.83e + 002
SD 3.49e − 015 4.49e − 007 5.57e − 008 1.6193 6.75e − 003 3.97e − 003 2.17e − 007 8.42e + 001
Rank 5 6 5 6 3 6 6 3

Rank results of the different combinations of r max and r min in LTVS.

r max = 0.8 r max = 0.8 r max = 0.7 r max = 0.7 r max = 0.6 r max = 0.6
r min = 0.2 r min = 0.3 r min = 0.2 r min = 0.3 r min = 0.2 r min = 0.3
Average rank 2.875 3.25 2.375 3.625 3.875 5
Final rank 2 3 1 4 5 6

Experimental results of different settings of α in NTVS.

Parameter setting f 1 f 2 f 3 f 6 f 7 f 8 f 9 f 10
α = 1.6
Mean 2.16e − 017 1.82e − 007 8.41e − 010 1.3859 1.49e − 006 1.73e − 003 3.10e − 007 2.03e + 002
SD 2.83e − 017 7.58e − 008 1.27e − 009 1.1617 6.63e − 006 4.82e − 003 1.15e − 007 1.06e + 002
Rank 3 2 4 3 5 6 5 5
α = 1.4
Mean 7.04e − 018 2.36e − 007 1.59e − 012 1.6515 8.40e − 010 6.48e − 004 4.78e − 009 1.41e + 002
SD 6.61e − 018 1.36e − 007 3.38e − 012 1.6594 2.33e − 009 2.35e − 003 2.83e − 009 1.05e + 002
Rank 2 3 1 5 2 5 1 2
α = 1.2
Mean 1.46e − 018 4.76e − 009 4.21e − 012 1.5987 6.25e − 010 1.01e − 005 5.58e − 009 9.60e + 001
SD 2.66e − 018 3.43e − 009 9.92e − 012 1.4570 1.61e − 009 4.69e − 005 2.94e − 009 8.84e + 001
Rank 1 1 2 4 1 2 2 1
α = 1
Mean 1.34e − 014 3.38e − 007 7.28e − 009 1.3620 1.73e − 006 6.48e − 010 2.62e − 007 1.80e + 002
SD 1.81e − 014 1.60e − 007 1.47e − 008 2.0555 5.34e − 006 1.72e − 009 1.39e − 007 7.84e + 001
Rank 4 4 6 2 6 1 3 4
α = 0.8
Mean 1.77e − 014 3.93e − 007 9.21e − 011 1.0221 1.75e − 007 3.633e − 004 2.68e − 007 2.09e + 002
SD 2.04e − 014 1.74e − 007 2.49e − 010 1.0185 4.57e − 007 1.82e − 003 1.47e − 007 9.24e + 001
Rank 5 5 3 1 4 4 4 6
α = 0.6
Mean 6.06e − 014 8.31e − 007 1.35e − 009 1.9156 1.90e − 008 3.14e − 004 3.63e − 007 1.45e + 002
SD 7.52e − 014 3.58e − 007 2.45e − 009 2.0691 7.87e − 007 1.64e − 003 2.17e − 007 9.87e + 001
Rank 6 6 5 6 3 3 6 3

Rank results of different settings of α in NTVS.

α = 1.6 α = 1.4 α = 1.2 α = 1.0 α = 0.8 α = 0.6
Average rank 4.125 2.625 1.75 3.75 4 4.75
Final rank 5 2 1 3 4 6
4. Experimental Study

4.1. Benchmark Functions and Parameter Settings

In order to test the proposed algorithms, a diverse set of 21 benchmark functions is used in the experiments. These benchmark functions can be classified into three groups. The first five functions, f1–f5, are the unimodal functions of Group 1. The next group includes ten multimodal functions with many local optima, which are used to test the global search capability in avoiding premature convergence. Note that f6 (Rosenbrock) is a unimodal problem in 2D or 3D search spaces but a multimodal problem in higher dimensions. Rotated and/or shifted functions belong to Group 3. f16 and f17 are rotated functions, in which the original variable x is rotated by left-multiplying by the orthogonal matrix M, y = M × x. M is used to increase the complexity of a function by turning separable functions into nonseparable ones without affecting their shape. The global optima of f18 and f19 are shifted to different numerical values for different dimensions (z = x − o), where o is employed to shift the global optimal solution of the original function from the center of the search space to a new location. f20 and f21 are complicated functions that are both shifted and rotated. In each benchmark function, x* and f(x*) represent the global optimum and the corresponding function value, respectively. The function value of the best solution found by an algorithm in a run is denoted by f(x_best), and the error of that run is defined as error = f(x_best) − f(x*). The parameters of all benchmark functions are described in the Appendix.

To validate the effectiveness of the proposed time-varying strategy, experiments were conducted to compare ABC with ABC-LTVS and ABC-NTVS on the 21 benchmark functions. To make a fair comparison, the colony size for all algorithms was set to 60. FEs is used as the stopping criterion for all algorithms and is set to 7 × 10^4 for the 30D problems and 1.2 × 10^5 for the 50D problems. The parameter limit is set to 200. All experiments on each benchmark function were run 25 times independently.

4.2. Experimental Results for 30D Problems

The comparative results obtained by ABC, ABC-LTVS, and ABC-NTVS are presented in Table 5. The best mean results on each problem among all algorithms are given in bold. In order to determine whether the results obtained by ABC-LTVS and ABC-NTVS are statistically different from those generated by the ABC algorithm, a two-tailed t-test with 48 degrees of freedom is used at a significance level of 0.05. Values of “1,” “0,” and “−1” in columns “h1” and “h2” of Table 5 denote that ABC-LTVS and ABC-NTVS, respectively, perform significantly better than, statistically the same as, or significantly worse than the ABC algorithm. To give visual comparisons of the algorithms involved, the convergence graphs of each ABC algorithm on all benchmark functions are shown in Figures 4, 5, and 6. In these figures, each curve represents the variation of the mean error over FEs for a specific ABC algorithm.

Experimental results for 30-dimension problem.

Function ABC ABC-LTVS ABC-NTVS h 1 h 2
Mean SD Mean SD Mean SD
f 1 7.36e − 010 5.41e − 010 1.34e − 014 1.81e − 014 1.46e − 018 2.66e − 018 1 1
f 2 1.26e − 004 7.27e − 005 3.38e − 007 1.60e − 007 4.76e − 009 3.43e − 009 1 1
f 3 4.37e + 000 4.05e + 000 7.28e − 009 1.47e − 009 4.21e − 012 9.92e − 012 1 1
f 4 0.1463 0.0309 0.1385 0.0307 0.1409 0.0412 0 0
f 5 5.32e + 002 7.20e + 001 5.66e + 002 5.30e + 001 5.29e + 002 7.51e + 001 0 0
f 6 3.4901 3.5885 1.3620 2.0555 1.5987 1.4570 1 1
f 7 6.39e − 001 7.12e − 001 1.73e − 006 5.34e − 006 6.25e − 010 1.61e − 009 1 1
f 8 1.23e − 003 3.52e − 003 6.48e − 010 1.72e − 009 1.01e − 005 4.69e − 005 0 0
f 9 5.79e − 005 2.52e − 005 2.62e − 007 1.39e − 007 5.58e − 009 2.94e − 009 1 1
f 10 3.62e + 002 1.01e + 002 1.80e + 002 7.84e + 001 9.60e + 001 8.84e + 001 1 1
f 11 1.3891 0.7813 0.1919 0.3751 0.0421 0.1998 1 1
f 12 5.02e − 005 5.74e − 005 1.54e − 006 3.83e − 006 8.17e − 007 1.81e − 006 1 1
f 13 1.64e − 003 3.33e − 003 6.65e − 005 5.28e − 005 7.51e − 005 1.12e − 004 1 1
f 14 1.42e − 008 6.02e − 010 1.37e − 008 0 1.37e − 008 0 1 1
f 15 1.30e − 010 8.08e − 011 3.96e − 015 3.52e − 015 2.11e − 016 1.59e − 016 1 1
f 16 47.5969 29.6407 45.9323 27.7604 42.1560 23.4833 0 0
f 17 80.7968 12.1854 80.0519 10.6073 77.9708 15.6648 0 0
f 18 6.5476 5.8738 1.7831 1.9028 2.2656 2.2663 1 1
f 19 0.6250 0.6287 0.0399 0.1990 5.24e − 005 2.62e − 004 1 1
f 20 73.2093 16.1789 76.5697 16.2155 73.4801 12.5874 0 0
f 21 8.40e − 005 8.07e − 005 7.74e − 006 1.40e − 005 9.23e − 006 3.07e − 005 1 1

Convergence curves of ABC variants solving unimodal functions f1–f5 and the multimodal Rosenbrock function f6.

Sphere f 1

Schwefel’s P2.22 f 2

Elliptic f 3

Noise f 4

Zakharov f 5

Rosenbrock f 6

Convergence curves of ABC variants solving multimodal functions f7–f15.

Rastrigin f 7

Griewank f 8

Ackley f 9

Schwefel f 10

Noncontinuous Rastrigin f 11

Levy f 12

Alpine f 13

2 D minima f 14

Generalized penalized f 15

Convergence curves of ABC variants solving rotated and/or shifted functions f16–f21.

Rotated Rosenbrock f 16

Rotated Rastrigin f 17

Shifted Rosenbrock f 18

Shifted Rastrigin f 19

Shifted Rotated Rastrigin f 20

Shifted Rotated Griewank f 21

For the unimodal functions, ABC-NTVS achieves the highest solution accuracy on f1, f2, f3, and f5, and ABC-LTVS obtains the best solution on f4. ABC-LTVS and ABC-NTVS perform significantly better than the ABC algorithm on f1, f2, and f3. The multimodal problems have many local minima, and it is not easy to find the global optima. ABC-LTVS and ABC-NTVS offer better performance than the ABC algorithm on these problems: ABC-LTVS finds the best solution on f6, f8, and f13, and ABC-NTVS performs best on f7, f9, f10, f11, and f12. ABC, ABC-LTVS, and ABC-NTVS obtain similar performance on f14. The performance of ABC-LTVS and ABC-NTVS is significantly better than that of the ABC algorithm on these multimodal problems, except for ABC-NTVS on f8.

The performance of most optimization algorithms decreases sharply when solving shifted and/or rotated problems. Table 5 shows the results on the rotated and/or shifted functions. The results show that all three ABC algorithms are affected, but ABC-LTVS and ABC-NTVS still obtain relatively good performance and perform significantly better than the ABC algorithm on f18, f19, and f21. With respect to stability, ABC-LTVS and ABC-NTVS compare favorably with the ABC algorithm: the standard deviations of the solutions found by ABC-LTVS and ABC-NTVS are small for most functions. On the whole, ABC-LTVS and ABC-NTVS exhibit high convergence precision on almost all the benchmark functions, which indicates the effectiveness of the proposed time-varying strategy. Moreover, the experimental results demonstrate that ABC-NTVS slightly outperforms ABC-LTVS.

4.3. Experimental Results for 50D Problems

The experiments were also conducted on the 50D problems, and the results for the unimodal, multimodal, and shifted and/or rotated problems are presented in Table 6. As the convergence graphs for 50D are similar to those of the 30D problems, and due to space limitations, they are not given. As can be seen from Table 6, compared with the ABC algorithm, ABC-LTVS and ABC-NTVS can still obtain high-quality solutions on the 50D problems. The meaning of columns “h1” and “h2” in Table 6 is the same as in Table 5. It is noted that ABC-NTVS, judging from the mean results, performs worse than ABC on f8, but not statistically significantly. According to the t-test results, ABC-LTVS performs significantly better than the ABC algorithm on all benchmark functions except f5, f16, and f17; ABC-NTVS does so on all benchmark functions except f5, f8, f17, and f20.

Experimental results for 50-dimension problem.

Function ABC ABC-LTVS ABC-NTVS h 1 h 2
Mean SD Mean SD Mean SD
f 1 6.88e − 010 3.54e − 010 9.35e − 015 7.24e − 015 5.28e − 016 5.79e − 016 1 1
f 2 1.59e − 004 9.94e − 005 4.22e − 007 2.01e − 007 7.14e − 008 2.67e − 008 1 1
f 3 4.57e − 005 6.75e − 005 8.82e − 008 3.58e − 007 3.51e − 010 4.42e − 010 1 1
f 4 0.3606 0.0579 0.3132 0.0691 0.3214 0.0657 1 1
f 5 1.11e + 003 1.09e + 002 1.09e + 003 9.64e + 001 1.07e + 003 8.08e + 001 0 0
f 6 8.2319 6.6280 1.3564 1.8842 1.3761 1.8716 1 1
f 7 1.0794 0.7609 0.0979 0.2829 0.0865 0.2755 1 1
f 8 2.45e − 008 3.16e − 008 1.42e − 009 5.88e − 009 3.23e − 004 1.54e − 003 1 0
f 9 4.78e − 005 1.49e − 005 2.25e − 007 9.33e − 008 5.01e − 008 2.47e − 008 1 1
f 10 7.83e + 002 1.75e + 002 3.97e + 002 9.89e + 001 3.45e + 002 1.78e + 002 1 1
f 11 2.7273 0.9297 0.3388 0.4789 0.3029 0.4551 1 1
f 12 4.16e − 005 7.77e − 005 9.34e − 007 2.42e − 006 2.55e − 007 6.02e − 007 1 1
f 13 4.52e − 003 4.91e − 003 5.76e − 004 7.16e − 004 4.73e − 004 5.83e − 004 1 1
f 14 2.37e − 008 1.33e − 009 2.29e − 008 1.40e − 012 2.29e − 008 1.07e − 012 1 1
f 15 1.57e − 010 1.41e − 010 4.61e − 015 4.07e − 015 3.53e − 016 4.77e − 016 1 1
f 16 95.9988 40.0144 82.0768 32.2224 72.1410 26.1076 0 1
f 17 142.9558 23.9951 143.1200 18.8582 142.5459 20.4493 0 0
f 18 5.9388 4.7166 1.6944 1.7353 1.9040 1.8484 1 1
f 19 1.5057 0.8212 0.1243 0.3400 0.0400 0.1990 1 1
f 20 152.5097 18.9577 141.0244 20.0979 146.2151 17.2185 1 0
f 21 1.30e − 004 2.39e − 004 6.84e − 006 8.96e − 006 4.41e − 006 5.53e − 006 1 1


4.4. GABC with Time-Varying Strategy

Inspired by particle swarm optimization (PSO), Zhu and Kwong proposed a popular ABC variant called the Gbest-guided artificial bee colony (GABC). GABC incorporates the information of the global best (gbest) position into the solution search equation (2). Experimental results have shown that GABC outperforms the basic ABC algorithm. To test the effect of the time-varying strategy, we applied the proposed LTVS and NTVS to the GABC algorithm, yielding the GABC-LTVS and GABC-NTVS algorithms, respectively. Experiments are conducted on the 30D and 50D benchmark functions to test whether the proposed time-varying strategy is effective in GABC. The parameter settings of GABC follow the original reference, except for the colony size and FEs, which are the same as in the previous experiments.
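The gbest term GABC adds to the candidate-generation step can be sketched as follows. This follows the search equation reported in Zhu and Kwong's paper, v_ij = x_ij + phi_ij (x_ij - x_kj) + psi_ij (gbest_j - x_ij), with psi drawn uniformly from [0, C] and C = 1.5 as recommended there; as in basic ABC, a single randomly chosen dimension is perturbed (the function name is ours, not from the paper):

```python
import random

def gabc_candidate(x_i, x_k, gbest, C=1.5):
    """Generate a candidate food source from x_i (GABC search equation).
    One randomly chosen dimension j is perturbed both toward/away from a
    random neighbour x_k and toward the global best position gbest."""
    v = list(x_i)
    j = random.randrange(len(x_i))
    phi = random.uniform(-1.0, 1.0)   # neighbour term, same as basic ABC
    psi = random.uniform(0.0, C)      # gbest attraction term added by GABC
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + psi * (gbest[j] - x_i[j])
    return v
```

The candidate then competes greedily with x_i, exactly as in the basic ABC framework.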

Tables 7 and 8 give the experimental results on the 30D and 50D problems, respectively. The best result on each problem among the three GABC algorithms is shown in bold. The columns "h1" and "h2" in Tables 7 and 8 indicate the statistical significance, under two-tailed t-tests, of the differences between the results of GABC and those of GABC-LTVS and GABC-NTVS, respectively. According to the t-test results in Tables 7 and 8, GABC-LTVS and GABC-NTVS find more accurate solutions, significantly better than those of GABC, on about half of the benchmark functions regardless of problem dimension.

Experimental results of the GABC algorithms on the 30D problems.

Function | GABC (Mean, SD) | GABC-LTVS (Mean, SD) | GABC-NTVS (Mean, SD) | h1 | h2
f 1 7.03e − 016 1.29e − 016 1.16e − 025 9.49e − 026 1.71e − 027 1.21e − 027 1 1
f 2 6.21e − 009 2.30e − 008 5.73e − 013 2.18e − 013 6.69e − 014 3.64e − 014 1 1
f 3 1.34e − 012 1.07e − 012 2.76e − 021 3.38e − 021 1.26e − 023 1.71e − 023 1 1
f 4 0.0692 0.0188 0.0652 0.0144 0.0602 0.0155 0 0
f 5 5.23e + 002 6.31e + 001 5.20e + 002 7.81e + 001 5.08e + 002 6.66e + 001 0 0
f 6 6.4906 10.3348 2.7541 3.9329 2.5028 2.4833 0 0
f 7 8.62e − 011 1.58e − 010 7.11e − 017 3.55e − 016 0 0 1 1
f 8 7.89e − 004 3.87e − 003 4.78e − 012 2.13e − 011 1.03e − 005 5.16e − 005 0 0
f 9 2.07e − 009 7.48e − 010 9.53e − 013 3.34e − 013 2.01e − 013 4.74e − 014 1 1
f 10 59.7836 70.9355 23.8017 59.1745 25.5825 48.3538 0 0
f 11 3.76e − 009 1.02e − 008 6.11e − 015 2.12e − 014 1.42e − 016 4.92e − 016 0 0
f 12 5.71e − 010 1.38e − 009 1.51e − 013 6.85e − 013 4.58e − 016 8.36e − 016 1 1
f 13 2.58e − 005 3.20e − 005 1.11e − 005 1.82e − 005 8.89e − 006 2.03e − 005 0 1
f 14 1.37e − 008 2.23e − 013 1.37e − 008 2.32e − 013 1.37e − 008 2.17e − 013 0 0
f 15 6.39e − 016 1.15e − 016 4.72e − 026 4.33e − 026 6.54e − 028 6.60e − 028 1 1
f 16 38.0229 23.1492 41.9666 26.0778 38.1041 23.7575 0 0
f 17 55.5550 8.2793 53.8867 9.6732 51.0331 9.6130 0 0
f 18 7.4043 14.2994 3.7490 4.2342 2.4815 2.9901 0 1
f 19 3.24e − 008 1.59e − 007 4.26e − 016 1.18e − 015 7.11e − 017 3.55e − 016 1 1
f 20 51.3741 7.0974 50.2909 11.1007 54.0969 10.6401 0 0
f 21 3.97e − 004 2.04e − 003 3.68e − 007 6.51e − 007 1.11e − 006 4.09e − 006 1 1

Experimental results of the GABC algorithms on the 50D problems.

Function | GABC (Mean, SD) | GABC-LTVS (Mean, SD) | GABC-NTVS (Mean, SD) | h1 | h2
f 1 1.29e − 015 1.40e − 016 1.67e − 025 1.44e − 025 1.79e − 027 1.64e − 027 1 1
f 2 7.72e − 009 2.30e − 009 6.28e − 013 1.76e − 013 6.54e − 014 2.14e − 014 1 1
f 3 3.09e − 012 2.46e − 012 6.32e − 021 8.53e − 021 1.02e − 023 8.18e − 024 1 1
f 4 0.1694 0.0252 0.1647 0.0294 0.1627 0.0296 0 0
f 5 1.04e + 003 7.26e + 001 1.05e + 003 9.29e + 001 1.02e + 003 1.05e + 002 0 0
f 6 12.3236 23.0585 1.7470 1.9037 3.3437 6.2364 1 0
f 7 3.12e − 008 1.53e − 007 7.11e − 017 3.55e − 016 0 0 0 0
f 8 9.09e − 013 2.77e − 012 8.41e − 015 4.18e − 014 2.68e − 015 1.33e − 014 0 0
f 9 2.52e − 009 9.94e − 010 1.04e − 012 3.32e − 013 2.39e − 013 5.46e − 014 1 1
f 10 162.7082 100.3839 52.5115 90.8550 80.6078 101.0140 1 1
f 11 4.99e − 008 9.96e − 008 1.93e − 014 6.27e − 014 1.99e − 015 8.51e − 015 1 1
f 12 3.61e − 010 5.26e − 010 1.26e − 014 2.24e − 014 3.51e − 016 5.18e − 016 1 1
f 13 1.71e − 004 2.75e − 004 4.09e − 005 5.81e − 005 2.02e − 005 3.10e − 005 1 1
f 14 2.29e − 008 2.07e − 013 2.29e − 008 2.24e − 013 2.29e − 008 1.51e − 013 0 0
f 15 1.17e − 015 1.73e − 016 9.38e − 026 1.05e − 025 7.58e − 028 4.85e − 028 1 1
f 16 79.5737 43.0285 87.9010 38.3788 74.1063 42.5826 0 0
f 17 101.0561 19.2130 97.5846 14.9214 98.3341 12.7543 0 0
f 18 6.7201 10.6406 3.1402 4.8951 4.1251 12.6738 1 0
f 19 2.22e − 010 4.99e − 010 4.89e − 013 2.40e − 012 0 0 1 1
f 20 107.1083 13.3490 106.0348 15.2274 104.4232 13.6487 0 0
f 21 1.32e − 005 4.76e − 005 1.94e − 007 2.30e − 007 1.57e − 006 4.50e − 006 0 0

5. Conclusions

In order to balance exploration and exploitation in the ABC algorithm, a time-varying strategy has been developed. The proposed strategy makes the ratio between the number of employed bees and the number of onlooker bees vary with time. We have developed two types of time-varying strategy, LTVS and NTVS, and have examined and fine-tuned their parameter settings for better performance. Comprehensive experiments have demonstrated the effectiveness of the proposed time-varying strategy in the ABC and GABC algorithms. The modifications proposed in the present work are general enough to be applied to other state-of-the-art ABC variants to further improve search performance.
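Schematically, the strategy amounts to scheduling the employed-bee fraction of a fixed-size colony over the search budget. The sketch below is illustrative only: the function names and the parameter values r_start, r_end, and power are placeholders, not the paper's actual settings, which are given in the description of LTVS and NTVS:

```python
def employed_fraction(t, T, r_start=0.9, r_end=0.1, power=1.0):
    """Fraction of the colony acting as employed bees at iteration t of T.
    power = 1.0 gives a linear schedule (LTVS-like); other exponents give
    nonlinear schedules (NTVS-like).  All parameter values are placeholders."""
    return r_start + (r_end - r_start) * (t / T) ** power

def split_colony(sn, t, T, **kw):
    """Split a colony of sn bees into (num_employed, num_onlooker),
    keeping the total fixed at sn while the ratio varies with time."""
    n_emp = max(1, round(sn * employed_fraction(t, T, **kw)))
    return n_emp, sn - n_emp
```

Early in the run most bees are employed (broad exploration); as t approaches T the balance shifts toward onlookers, which concentrate on the better food sources (exploitation).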

Appendix A. Benchmark Functions

A.1. Unimodal Functions

Sphere function:
(A.1) $f_1(x)=\sum_{i=1}^{D}x_i^2$, $-100\le x_i\le 100$, $x^*=[0]^D$.

Schwefel's P2.22 function:
(A.2) $f_2(x)=\sum_{i=1}^{D}|x_i|+\prod_{i=1}^{D}|x_i|$, $-100\le x_i\le 100$, $x^*=[0]^D$.

Elliptic function:
(A.3) $f_3(x)=\sum_{i=1}^{D}\left(10^6\right)^{(i-1)/(D-1)}x_i^2$, $-100\le x_i\le 100$, $x^*=[0]^D$.

Noise function:
(A.4) $f_4(x)=\sum_{i=1}^{D}i\,x_i^4+\mathrm{random}[0,1)$, $-1.28\le x_i\le 1.28$, $x^*=[0]^D$.

Zakharov function:
(A.5) $f_5(x)=\sum_{i=1}^{D}x_i^2+\left(\sum_{i=1}^{D}0.5\,i\,x_i\right)^2+\left(\sum_{i=1}^{D}0.5\,i\,x_i\right)^4$, $-10\le x_i\le 10$, $x^*=[0]^D$.
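For concreteness, the unimodal functions above translate directly into code. The sketch below (plain Python, written for this appendix rather than taken from the paper's implementation) also serves as a check that each function is zero at its stated optimum:

```python
import math

def sphere(x):                       # f1
    return sum(v * v for v in x)

def schwefel_222(x):                 # f2: sum of |x_i| plus product of |x_i|
    return (sum(abs(v) for v in x)
            + math.prod(abs(v) for v in x))

def elliptic(x):                     # f3: (10^6)^((i-1)/(D-1)) weights, D >= 2
    d = len(x)
    return sum(10.0 ** (6.0 * (i - 1) / (d - 1)) * v * v
               for i, v in enumerate(x, start=1))

def zakharov(x):                     # f5: the noise-free unimodal f4 sibling
    s1 = sum(v * v for v in x)
    s2 = sum(0.5 * i * v for i, v in enumerate(x, start=1))
    return s1 + s2 ** 2 + s2 ** 4
```

The noise function f4 is omitted because its random[0,1) term makes its value stochastic even at the optimum.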

A.2. Multimodal Functions

Rosenbrock function:
(A.6) $f_6(x)=\sum_{i=1}^{D-1}\left[100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right]$, $-30\le x_i\le 30$, $x^*=[1]^D$.

Rastrigin function:
(A.7) $f_7(x)=\sum_{i=1}^{D}\left[x_i^2-10\cos(2\pi x_i)+10\right]$, $-10\le x_i\le 10$, $x^*=[0]^D$.

Griewank function:
(A.8) $f_8(x)=\frac{1}{4000}\sum_{i=1}^{D}x_i^2-\prod_{i=1}^{D}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$, $-600\le x_i\le 600$, $x^*=[0]^D$.

Ackley function:
(A.9) $f_9(x)=-20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D}x_i^2}\right)-\exp\left(\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right)+20+e$, $-32\le x_i\le 32$, $x^*=[0]^D$.

Schwefel function:
(A.10) $f_{10}(x)=418.9829\times D-\sum_{i=1}^{D}x_i\sin\left(\sqrt{|x_i|}\right)$, $-500\le x_i\le 500$, $x^*=[420.96]^D$.

Noncontinuous Rastrigin function:
(A.11) $f_{11}(x)=\sum_{i=1}^{D}\left[y_i^2-10\cos(2\pi y_i)+10\right]$, $-10\le x_i\le 10$, $x^*=[0]^D$,

where
(A.12) $y_i=\begin{cases}x_i, & |x_i|<0.5,\\ \mathrm{round}(2x_i)/2, & |x_i|\ge 0.5,\end{cases}\quad\text{for } i=1,2,\ldots,D$.

Levy function:
(A.13) $f_{12}(x)=\sin^2(3\pi x_1)+\sum_{i=1}^{D-1}\left(x_i-1\right)^2\left[1+10\sin^2\left(3\pi x_{i+1}\right)\right]+\left|x_D-1\right|\left[1+10\sin^2(3\pi x_D)\right]$, $-50\le x_i\le 50$, $x^*=[1]^D$.

Alpine function:
(A.14) $f_{13}(x)=\sum_{i=1}^{D}\left|x_i\sin(x_i)+0.1x_i\right|$, $-10\le x_i\le 10$, $x^*=[0]^D$.

$2^D$ minima function:
(A.15) $f_{14}(x)=78.332331408\times D+\sum_{i=1}^{D}\left(x_i^4-16x_i^2+5x_i\right)$, $-5\le x_i\le 5$, $x^*=[-2.9035]^D$.

Generalized penalized function:
(A.16) $f_{15}(x)=\frac{\pi}{D}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{D-1}\left(y_i-1\right)^2\left[1+10\sin^2\left(\pi y_{i+1}\right)\right]+\left(y_D-1\right)^2\right\}+\sum_{i=1}^{D}u(x_i,10,100,4)$, $-50\le x_i\le 50$, $x^*=[1]^D$,

where
(A.17) $y_i=1+\frac{1}{4}\left(x_i+1\right)$, $u(x_i,a,k,m)=\begin{cases}k\left(x_i-a\right)^m, & x_i>a,\\ 0, & -a\le x_i\le a,\\ k\left(-x_i-a\right)^m, & x_i<-a.\end{cases}$
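A few of the multimodal functions above, implemented as a sanity check (again plain Python written for this appendix, not the paper's code); each should evaluate to zero at its global optimum:

```python
import math

def rastrigin(x):                    # f7
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def griewank(x):                     # f8
    s = sum(v * v for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i)) for i, v in enumerate(x, start=1))
    return s - p + 1

def ackley(x):                       # f9
    d = len(x)
    a = -20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / d))
    b = -math.exp(sum(math.cos(2 * math.pi * v) for v in x) / d)
    return a + b + 20 + math.e
```

Rastrigin's cosine term creates a regular grid of local minima, Griewank couples dimensions through its product term, and Ackley combines a nearly flat outer region with a deep central well — the properties that make these functions standard tests of exploration ability.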

A.3. Shifted and Rotated Functions

Rotated Rosenbrock function:
(A.18) $f_{16}(x)=\sum_{i=1}^{D-1}\left[100\left(y_{i+1}-y_i^2\right)^2+\left(y_i-1\right)^2\right]$, $y=M\times x$, $-10\le x_i\le 10$, with the optimum $x^*$ satisfying $Mx^*=[1]^D$.

Rotated Rastrigin function:
(A.19) $f_{17}(x)=\sum_{i=1}^{D}\left[y_i^2-10\cos(2\pi y_i)+10\right]$, $y=M\times x$, $-5.12\le x_i\le 5.12$, $x^*=[0]^D$.

Shifted Rosenbrock function:
(A.20) $f_{18}(x)=\sum_{i=1}^{D-1}\left[100\left(z_{i+1}-z_i^2\right)^2+\left(z_i-1\right)^2\right]$, $z=x-o+1$, $-10\le x_i\le 10$, $x^*=o$.

Shifted Rastrigin function:
(A.21) $f_{19}(x)=\sum_{i=1}^{D}\left[z_i^2-10\cos(2\pi z_i)+10\right]$, $z=x-o$, $-5.12\le x_i\le 5.12$, $x^*=o$.

Shifted Rotated Rastrigin function:
(A.22) $f_{20}(x)=\sum_{i=1}^{D}\left[z_i^2-10\cos(2\pi z_i)+10\right]$, $z=(x-o)\times M$, $-5.12\le x_i\le 5.12$, $x^*=o$.

Shifted Rotated Griewank function:
(A.23) $f_{21}(x)=\frac{1}{4000}\sum_{i=1}^{D}z_i^2-\prod_{i=1}^{D}\cos\left(\frac{z_i}{\sqrt{i}}\right)+1$, $z=(x-o)\times M$, $-600\le x_i\le 600$, $x^*=o$.
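The shifted and rotated variants above are obtained from the base functions by a change of variables. A minimal sketch of the two transforms follows; for illustration a 2D rotation matrix is used, whereas the paper's M is an orthogonal matrix generated per problem, and the shift vector o here is arbitrary:

```python
import math

def shift(x, o):
    """z = x - o: moves the optimum from the origin to o."""
    return [xi - oi for xi, oi in zip(x, o)]

def rotate(x, M):
    """y = M x: plain matrix-vector product (M orthogonal)."""
    return [sum(M[r][c] * x[c] for c in range(len(x)))
            for r in range(len(M))]

def shifted_rastrigin(x, o):         # f19 built from the base Rastrigin
    z = shift(x, o)
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in z)

theta = 0.3                          # illustrative 2-D rotation angle
M = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
```

Shifting destroys any bias toward the origin, while rotation couples the coordinates so that dimension-by-dimension search no longer suffices — which is why these variants are harder than their base functions.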

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is partially supported by National Natural Science Foundation of China under Grant nos. 71240015, 71402103, and 61273367; National Science Foundation of SZU under Grant 836; the Foundation for Distinguished Young Talents in Higher Education of Guangdong, China, under Grant 2012WYM_0116; the MOE Youth Foundation Project of Humanities and Social Sciences at Universities in China under Grant 13YJC630123; The Youth Foundation Project of Humanities and Social Sciences in Shenzhen University under grant 14QNFC28; and Ningbo Science & Technology Bureau (Science and Technology Project no. 2012B10055).

References

1. J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, Boston, Mass, USA, 2001.
2. Y. Shi, "An optimization algorithm based on brainstorming process," International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35–62, 2011.
3. D. Karaboga, "An idea based on honey bee swarm for numerical optimization," Technical report, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
4. D. Karaboga and B. Akay, "A survey: algorithms simulating bee swarm intelligence," Artificial Intelligence Review, vol. 31, no. 1–4, pp. 61–85, 2009.
5. D. Karaboga, B. Gorkemli, C. Ozturk, and N. Karaboga, "A comprehensive survey: artificial bee colony (ABC) algorithm and applications," Artificial Intelligence Review, vol. 42, no. 1, pp. 21–57, 2014.
6. D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
7. D. Karaboga, B. Akay, and C. Ozturk, "Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks," in Modeling Decisions for Artificial Intelligence, V. Torra, Y. Narukawa, and Y. Yoshida, Eds., vol. 4617 of Lecture Notes in Computer Science, pp. 318–329, Springer, Berlin, Germany, 2007.
8. Q. Qin, S. Cheng, L. Li, and Y. Shi, "Artificial bee colony algorithm: a survey," CAAI Transactions on Intelligent Systems, vol. 9, no. 2, pp. 127–135, 2014.
9. W. Gao and S. Liu, "Improved artificial bee colony algorithm for global optimization," Information Processing Letters, vol. 111, no. 17, pp. 871–882, 2011.
10. M. S. Kiran and M. Gündüz, "A recombination-based hybridization of particle swarm optimization and artificial bee colony algorithm for continuous optimization problems," Applied Soft Computing, vol. 13, no. 4, pp. 2188–2203, 2013.
11. D. Karaboga and B. Akay, "A comparative study of artificial bee colony algorithm," Applied Mathematics and Computation, vol. 214, no. 1, pp. 108–132, 2009.
12. C. Ozturk and D. Karaboga, "Hybrid artificial bee colony algorithm for neural network training," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '11), pp. 84–88, IEEE, June 2011.
13. T.-J. Hsieh, H.-F. Hsiao, and W.-C. Yeh, "Forecasting stock markets using wavelet transforms and recurrent neural networks: an integrated system based on artificial bee colony algorithm," Applied Soft Computing, vol. 11, no. 2, pp. 2510–2525, 2011.
14. T.-J. Hsieh, H.-F. Hsiao, and W.-C. Yeh, "Mining financial distress trend data using penalty guided support vector machines based on hybrid of particle swarm optimization and artificial bee colony algorithm," Neurocomputing, vol. 82, pp. 196–206, 2012.
15. R. Zhang, S. Song, and C. Wu, "A hybrid artificial bee colony algorithm for the job shop scheduling problem," International Journal of Production Economics, vol. 141, no. 1, pp. 167–178, 2013.
16. A. Alvarado-Iniesta, J. L. Garcia-Alcaraz, M. I. Rodriguez-Borbon, and A. Maldonado, "Optimization of the material flow in a manufacturing plant by use of artificial bee colony algorithm," Expert Systems with Applications, vol. 40, no. 12, pp. 4785–4790, 2013.
17. Z. Cui and X. Gu, "An improved discrete artificial bee colony algorithm to minimize the makespan on hybrid flow shop problems," Neurocomputing, vol. 148, pp. 248–259, 2015.
18. C. Zhang, D. Ouyang, and J. Ning, "An artificial bee colony approach for clustering," Expert Systems with Applications, vol. 37, no. 7, pp. 4761–4767, 2010.
19. X. Yan, Y. Zhu, W. Zou, and L. Wang, "A new approach for data clustering using hybrid artificial bee colony algorithm," Neurocomputing, vol. 97, pp. 241–250, 2012.
20. M.-H. Horng, "Multilevel thresholding selection based on the artificial bee colony algorithm for image segmentation," Expert Systems with Applications, vol. 38, no. 11, pp. 13785–13791, 2011.
21. M. He, K. Hu, Y. Zhu, L. Ma, H. Chen, and Y. Song, "Hierarchical artificial bee colony optimizer with divide-and-conquer and crossover for multilevel threshold image segmentation," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 941534, 2014.
22. C. Zhang and B. Zhang, "A hybrid artificial bee colony algorithm for the service selection problem," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 835071, 2014.
23. K. Ayan and U. Kiliç, "Artificial bee colony algorithm solution for optimal reactive power flow," Applied Soft Computing, vol. 12, no. 5, pp. 1477–1482, 2012.
24. G. Zhu and S. Kwong, "Gbest-guided artificial bee colony algorithm for numerical function optimization," Applied Mathematics and Computation, vol. 217, no. 7, pp. 3166–3173, 2010.
25. A. Banharnsakun, T. Achalakul, and B. Sirinaovakul, "The best-so-far selection in artificial bee colony algorithm," Applied Soft Computing, vol. 11, no. 2, pp. 2888–2901, 2011.
26. Z.-A. He, C. Ma, X. Wang, L. Li, Y. Wang, Y. Zhao, and H. Guo, "A modified artificial bee colony algorithm based on search space division and disruptive selection strategy," Mathematical Problems in Engineering, vol. 2014, Article ID 432654, 2014.
27. W.-F. Gao and S.-Y. Liu, "A modified artificial bee colony algorithm," Computers & Operations Research, vol. 39, no. 3, pp. 687–697, 2012.
28. W.-L. Xiang and M.-Q. An, "An efficient and robust artificial bee colony algorithm for numerical optimization," Computers & Operations Research, vol. 40, no. 5, pp. 1256–1265, 2013.
29. A. Alizadegan, B. Asady, and M. Ahmadpour, "Two modified versions of artificial bee colony algorithm," Applied Mathematics and Computation, vol. 225, pp. 601–609, 2013.
30. B. Akay and D. Karaboga, "A modified artificial bee colony algorithm for real-parameter optimization," Information Sciences, vol. 192, pp. 120–142, 2012.
31. K. Diwold, A. Aderhold, A. Scheidler, and M. Middendorf, "Performance evaluation of artificial bee colony optimization and new selection schemes," Memetic Computing, vol. 3, no. 3, pp. 149–162, 2011.
32. F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.
33. B. Alatas, "Chaotic bee colony algorithms for global numerical optimization," Expert Systems with Applications, vol. 37, no. 8, pp. 5682–5687, 2010.
34. W.-F. Gao, S.-Y. Liu, and L.-L. Huang, "A novel artificial bee colony algorithm based on modified search equation and orthogonal learning," IEEE Transactions on Cybernetics, vol. 43, no. 3, pp. 1011–1024, 2013.
35. Q. Qin, S. Cheng, Q. Zhang, Y. Wei, and Y. Shi, "Multiple strategies based orthogonal design particle swarm optimizer for numerical optimization," Computers & Operations Research, vol. 60, pp. 91–110, 2015.
36. S. Cheng, Y. Shi, and Q. Qin, "Promoting diversity in particle swarm optimization to solve multimodal problems," in Neural Information Processing, B.-L. Lu, L. Zhang, and J. Kwok, Eds., vol. 7063 of Lecture Notes in Computer Science, pp. 228–237, Springer, Berlin, Germany, 2011.
37. S. Cheng, Y. Shi, and Q. Qin, "Population diversity of particle swarm optimizer solving single and multi-objective problems," International Journal of Swarm Intelligence Research, vol. 3, no. 4, pp. 23–60, 2012.
38. S. Cheng, Population diversity in particle swarm optimization: definition, observation, control, and application, Ph.D. dissertation, Department of Electrical Engineering and Electronics, University of Liverpool, 2013.
39. D. Karaboga and B. Akay, "A modified artificial bee colony (ABC) algorithm for constrained optimization problems," Applied Soft Computing, vol. 11, no. 3, pp. 3021–3031, 2011.
40. D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.
41. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
42. T. K. Sharma and M. Pant, "Enhancing the food locations in an artificial bee colony algorithm," Soft Computing, vol. 17, no. 10, pp. 1939–1965, 2013.
43. J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.