Artificial bee colony (ABC) is one of the newest additions to the class of swarm intelligence algorithms. ABC has been shown to be competitive with other population-based algorithms; however, it suffers from a known shortcoming: it is good at exploration but poor at exploitation. To strike a proper balance between these two conflicting factors, this paper proposes a novel ABC variant with a time-varying strategy in which the ratio between the number of employed bees and the number of onlooker bees varies with time. Linear and nonlinear time-varying strategies can be incorporated into the basic ABC algorithm, yielding the ABC-LTVS and ABC-NTVS algorithms, respectively. The effects of the parameters added in the two new ABC algorithms are studied on representative benchmark functions. The proposed modification to the structure of the basic ABC algorithm is simple and easy to implement. Moreover, the approach is general and can be incorporated into other ABC variants. A set of 21 benchmark functions in 30 and 50 dimensions is used in the experimental studies, and the results show the effectiveness of the proposed time-varying strategy.
1. Introduction
Swarm intelligence concerns the design of intelligent optimization algorithms inspired by the collective behavior of social insects [1]. Over the past decades, swarm intelligence has shown great success in solving complicated problems. The problems optimized by swarm intelligence algorithms need not be mathematically represented as continuous, convex, and/or differentiable functions; they can be represented in any form [2]. The artificial bee colony (ABC) algorithm, developed by Karaboga in 2005, is a recent addition to this category [3–5].
The ABC algorithm is inspired by the intelligent behavior of honey bees seeking quality food sources [3, 6–8]. In a short span of less than 10 years, ABC has been demonstrated to be a promising technique for solving global optimization problems [9, 10]. Numerical comparisons and many engineering applications have shown that ABC obtains good search results [11]. Due to its simplicity, flexibility, and strong performance, ABC has attracted increasing interest from the swarm intelligence research community and has been applied in many real-world areas, such as numerical optimization [6, 10], neural network training [7, 12], finance forecasting [13, 14], production scheduling [15–17], data clustering [18, 19], image segmentation [20, 21], service selection [22], and power system optimization [23].
However, like other population-based stochastic algorithms, ABC still has some inherent pitfalls [24–26]. Its convergence speed is slower than that of representative population-based stochastic algorithms such as DE and PSO [27]. Moreover, ABC suffers from premature convergence on certain complicated problems [28]. Several ABC variants have been proposed to overcome these pitfalls; they can be broadly categorized into three groups: (1) tuning the configuration parameters [29–31]; (2) hybridizing ABC with other evolutionary optimization operators to enhance performance [10, 32, 33]; and (3) designing new learning strategies by modifying the search equation of the basic ABC algorithm [24, 25, 34].
It is recognized that exploration and exploitation are the two most important factors affecting the performance of a population-based optimization algorithm [6, 35]. Exploration refers to the ability of a search algorithm to investigate unknown regions of the search space so as to have a high probability of discovering promising solutions. Exploitation, on the other hand, refers to the ability to concentrate the search around a promising region in order to refine a candidate solution [36, 37]. A good population-based optimization algorithm should properly balance these two conflicting objectives [38]. It has been observed that ABC has good exploration ability but poor exploitation ability [24], which may prevent it from proceeding towards the global optimum even when the population has not converged to a local optimum.
The colony of the ABC algorithm contains three groups of bees: employed bees, onlookers, and scouts [3]. In the employed bee phase, the algorithm focuses on explorative search, as indicated by the solution updating scheme, which uses the current solution and a randomly chosen solution. A fitness-based probabilistic selection scheme is used in the onlooker phase, which reflects the exploitative tendency of the algorithm. In the original ABC algorithm, half of the colony consists of employed bees and the other half consists of onlookers [3]; in other words, the ratio of employed to onlooker bees is fixed at 1 : 1.
The basic ABC algorithm is easy to tune, with few parameters, but lacks an effective and efficient way to control its exploration and exploitation abilities. In the present study, we propose an easy modification to the structure of the basic ABC algorithm in an attempt to balance these two abilities, adding only one new parameter. The core idea of the modification is a mechanism that adjusts the ratio of employed to onlooker bees over time. The proposed algorithms are called ABC with a linear or nonlinear time-varying strategy (ABC-LTVS and ABC-NTVS, respectively). The objective of this design is to enhance global exploration early in the search and to encourage convergence toward the global optimum later in the search.
The remainder of this paper is organized as follows: Section 2 briefly introduces the basic ABC algorithm. The proposed algorithms, ABC-LTVS and ABC-NTVS, are elaborated in Section 3. In Section 4, comprehensive experimental studies are conducted on 21 benchmark functions in 30 and 50 dimensions to verify the effectiveness of the proposed algorithms. Finally, conclusions are presented in Section 5.
2. Basic ABC Algorithm
The ABC algorithm is a recently introduced optimization algorithm proposed by Karaboga [3, 39, 40], inspired by the intelligent foraging behavior of the honeybee swarm. The ABC algorithm has two components: the foraging artificial bees and the food sources [3, 39]. The position of a food source represents a possible solution, and the nectar amount of a food source corresponds to the fitness of the associated solution. In the basic ABC algorithm, the colony of artificial bees contains three groups of bees: employed bees, onlookers, and scouts. The employed bees are responsible for searching for available food sources and passing the food information to the onlooker bees [3, 40]. The onlookers select good sources from those found by the employed bees and search around them further. When a food source is not improved over a predetermined number of cycles, denoted as limit, it is abandoned by its employed bee; the employed bee then becomes a scout and starts to search for a new food source in the vicinity of the hive.
The ABC algorithm consists of four phases: initialization, employed bees, onlooker bees, and scout bees [3, 39, 40]. In the initialization phase of ABC, SN food source positions are randomly produced in the D-dimensional search space using the following equation [3, 6]:

x_id = l_d + r1 · (u_d − l_d),  (1)

where x_i = [x_i1, x_i2, …, x_iD] is the i-th food source; d ∈ {1, 2, …, D}; i ∈ {1, 2, …, SN}, where SN denotes the number of food sources; l_d and u_d are the lower and upper bounds of dimension d, respectively; and r1 is a random number uniformly distributed in [0, 1].
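As an illustration, the initialization step of (1) can be sketched in Python (a minimal sketch; the function name and the list-based bound representation are our own choices, not from the paper):

```python
import random

def init_food_sources(SN, D, lb, ub, rng=random):
    # Eq. (1): x_id = l_d + r1 * (u_d - l_d), with r1 ~ U[0, 1]
    return [[lb[d] + rng.random() * (ub[d] - lb[d]) for d in range(D)]
            for _ in range(SN)]

# Example: 5 food sources in a 3-dimensional box [-5.12, 5.12]^3
sources = init_food_sources(SN=5, D=3, lb=[-5.12] * 3, ub=[5.12] * 3)
```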
After the food sources are produced, each is assigned to one employed bee, so there is exactly one employed bee per food source. In the employed bee phase of ABC, each employed bee tries to find a better food source in the neighborhood of x_i. A new trial food source, denoted as u_i = [u_i1, u_i2, …, u_iD], is calculated by the equation below [3, 6]:

u_ij = x_ij + φ · (x_ij − x_kj),  (2)

where j is a randomly chosen index in {1, 2, …, D}; φ is a random number uniformly distributed in [−1, 1]; and k is the index of a randomly chosen food source satisfying k ≠ i. After u_i is obtained, it is evaluated and compared to x_i. If the fitness of u_i is better than that of x_i, the bee forgets the old food source x_i and memorizes the new one; otherwise, it keeps x_i in its memory. After all employed bees have finished their search, they share the nectar and position information of their food sources with the onlookers.
Each onlooker bee selects the food source of an employed bee to improve. The roulette wheel selection mechanism is performed using (3) [3, 6]:

p_i = fit(x_i) / Σ_{j=1}^{SN} fit(x_j),  (3)

where p_i is the probability of food source x_i being selected by an onlooker bee and fit(x_i) is the fitness of x_i. The fitness of a food source is defined as [3, 6]

fit(x_i) = 1 / (1 + f(x_i))  if f(x_i) ≥ 0,
fit(x_i) = 1 + |f(x_i)|      if f(x_i) < 0,  (4)

where f(x_i) is the objective function value of x_i. Once the onlooker has selected a food source, a new candidate food source is generated by (2), and the same greedy selection between the two food sources as in the employed bee phase is performed.
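The fitness mapping of (4) and the roulette wheel selection of (3) can be sketched as follows (a minimal sketch; function names are illustrative, not from the paper):

```python
import random

def fitness(fval):
    # Eq. (4): map an objective value to a non-negative fitness
    return 1.0 / (1.0 + fval) if fval >= 0 else 1.0 + abs(fval)

def roulette_select(objective_values, rng=random):
    # Eq. (3): p_i = fit(x_i) / sum_j fit(x_j); spin the wheel once
    fits = [fitness(v) for v in objective_values]
    r = rng.random() * sum(fits)
    acc = 0.0
    for i, ft in enumerate(fits):
        acc += ft
        if acc >= r:
            return i
    return len(fits) - 1  # guard against floating-point round-off

chosen = roulette_select([1.0, 2.0, 3.0])
```

Lower objective values map to higher fitness, so better food sources are proportionally more likely to be chosen.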
If a food source x_i cannot be improved within a predetermined number of cycles, referred to as limit, this food source is abandoned. Then the scout produces a new food source randomly according to (1) to replace x_i.
The detailed pseudocode of the ABC algorithm is shown in Algorithm 1 [3, 40].
Algorithm 1: The pseudocode of the artificial bee colony algorithm.
(1) Initialize the set of food sources x_i, i = 1, 2, …, SN;
(2) Evaluate each x_i, i = 1, 2, …, SN;
(3) while a "good enough" solution has not been found and the predetermined maximum number of iterations has not been reached do
(4)   for i = 1 to SN do  /* Employed bees phase */
(5)     Generate u_i from x_i using (2);
(6)     Evaluate u_i;
(7)     if fit(u_i) ≥ fit(x_i) then
(8)       x_i = u_i;
(9)   for i = 1 to SN do  /* Onlooker bees phase */
(10)    Select an employed bee using (3);
(11)    Try to improve the quality of its food source according to Steps (5)–(8);
(12)  Replace each food source that has not improved for limit successive iterations with a new randomly generated one  /* Scout bees phase */;
(13)  Memorize the best food source achieved so far;
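The pseudocode above can be turned into a minimal, self-contained Python sketch of the basic ABC for minimization (parameter defaults and helper names are our own choices, not prescribed by the paper):

```python
import random

def abc_minimize(f, D, lb, ub, SN=10, limit=50, max_iter=200, seed=0):
    """Minimal sketch of Algorithm 1 (basic ABC) for minimization."""
    rng = random.Random(seed)

    def fit(v):
        # Eq. (4): fitness mapping for minimization
        return 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)

    # Eq. (1): random initialization of SN food sources
    X = [[lb + rng.random() * (ub - lb) for _ in range(D)] for _ in range(SN)]
    F = [f(x) for x in X]
    trials = [0] * SN
    best_x, best_f = min(zip(X, F), key=lambda p: p[1])
    best_x = best_x[:]

    def try_improve(i):
        # Eq. (2): perturb one random dimension relative to a random peer k
        nonlocal best_x, best_f
        j = rng.randrange(D)
        k = rng.choice([m for m in range(SN) if m != i])
        u = X[i][:]
        u[j] = min(max(u[j] + rng.uniform(-1.0, 1.0) * (u[j] - X[k][j]), lb), ub)
        fu = f(u)
        if fit(fu) >= fit(F[i]):          # greedy selection (Step 7)
            X[i], F[i], trials[i] = u, fu, 0
        else:
            trials[i] += 1
        if F[i] < best_f:
            best_x, best_f = X[i][:], F[i]

    for _ in range(max_iter):
        for i in range(SN):               # employed bees phase
            try_improve(i)
        fits = [fit(v) for v in F]
        total = sum(fits)
        for _ in range(SN):               # onlooker bees phase, Eq. (3)
            r, acc, sel = rng.random() * total, 0.0, SN - 1
            for idx, ft in enumerate(fits):
                acc += ft
                if acc >= r:
                    sel = idx
                    break
            try_improve(sel)
        for i in range(SN):               # scout bees phase
            if trials[i] > limit:
                X[i] = [lb + rng.random() * (ub - lb) for _ in range(D)]
                F[i], trials[i] = f(X[i]), 0
    return best_x, best_f

# Usage: minimize the 2-D Sphere function on [-5.12, 5.12]^2
sphere = lambda x: sum(v * v for v in x)
xb, fb = abc_minimize(sphere, D=2, lb=-5.12, ub=5.12)
```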
3. The Proposed Algorithm
3.1. Time-Varying Strategy
During the employed bee phase, each food source is assigned one employed bee. The onlooker bees then select food sources based on their fitness values, in a manner similar to roulette wheel selection in genetic algorithms. This selection scheme ensures that food sources with higher fitness have a greater chance of being selected by the onlooker bees, which helps improve the quality of those food sources. Thus, the onlooker bees are more focused on exploitation than the employed bees. In general, for population-based optimization algorithms, exploration is preferred in the early stages of the search but should gradually give way to exploitation in order to find the optimal solution efficiently [41]. In the ABC algorithm, the population size is fixed. A large number of employed bees and a relatively small number of onlookers in the early search help the colony move around the search space and enhance exploration. Conversely, a small number of employed bees and a large number of onlookers allow the individuals to converge toward the global optimum in the later part of the optimization. In the basic ABC algorithm, the numbers of employed and onlooker bees are equal; that is, their ratio is 1 : 1. Considering these observations, we propose in this paper a time-varying strategy in which the ratio between the number of employed bees and the number of onlookers varies with time.
Let the colony size and the numbers of employed and onlooker bees be denoted NP, NPe, and NPo, respectively, so that NP = NPe + NPo. The proportion of employed bees in the colony is denoted rc, and the number of employed bees is set to NPe = round(rc × NP), where round() rounds to the nearest integer. A large value of rc is conducive to global exploration, whereas a small value facilitates local exploitation for fine-tuning the search. Intuitively, a linearly decreasing time-varying strategy (LTVS) may help balance exploration and exploitation over the entire search. The LTVS is defined as follows:

rc = rmax − (rmax − rmin) × fitc / FEs,  (5)

where rmax and rmin are the maximum and minimum values of rc, respectively; fitc is the current number of function evaluations; and FEs is the total number of function evaluations. If the number of employed bees NPe is greater than the number of food sources SN, the first SN employed bees are each assigned to one food source, and each of the remaining employed bees is placed at a randomly selected food source. When NPe is smaller than SN, all employed bees randomly select food sources to search.
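A small sketch of the LTVS schedule of (5) and the resulting colony split (function names are illustrative; rmax = 0.7 and rmin = 0.2 are used here only as example defaults):

```python
def rc_linear(fitc, FEs, rmax=0.7, rmin=0.2):
    # Eq. (5): rc decreases linearly from rmax to rmin over the run
    return rmax - (rmax - rmin) * fitc / FEs

def split_colony(NP, rc):
    # NPe = round(rc * NP); the remaining bees are onlookers
    NPe = round(rc * NP)
    return NPe, NP - NPe

# Example: NP = 60, FEs = 7e4 as in the experiments of Section 3.2
NPe0, NPo0 = split_colony(60, rc_linear(0, 70000))        # start of the run
NPe1, NPo1 = split_colony(60, rc_linear(70000, 70000))    # end of the run
```

At the start of the run most bees are employed (exploration); by the end most are onlookers (exploitation).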
To investigate the dynamics of the population distribution in the basic ABC and ABC with LTVS, we take a typical multimodal function, the two-dimensional Rastrigin function, as an example. Figures 1 and 2 show the population distribution at the 5th, 10th, and 20th iterations when the 2-dimensional Rastrigin function is optimized by the basic ABC and ABC with LTVS, respectively. In this experiment, rmax and rmin are set to 0.8 and 0.3 in ABC with LTVS. Compared with the basic ABC, the population in ABC with LTVS is distributed over a wider range of the search space at early iterations and then gradually gathers around the global optimum. In other words, LTVS explores the search space in the early stage of the search while converging quickly to the global optimum in the later stage.
Figure 1: Population distribution observed at various stages in ABC (panels: Iterations = 5, 10, 20).
Figure 2: Population distribution observed at various stages in ABC with the time-varying strategy (panels: Iterations = 5, 10, 20).
It is interesting to ask whether a nonlinear variation of rc can further enhance the performance of the ABC algorithm. In the present study, we propose a nonlinearly decreasing time-varying strategy (NTVS), given by the following equation:

rc = rmax − (rmax − rmin) × (fitc / FEs)^α,  (6)

where α is the nonlinear modulation index. Figure 3 shows typical variations of rc with function evaluations for different settings of α. With α = 1, this strategy reduces to the LTVS. With α > 1, rc varies in a convex manner: compared with the LTVS, rc decreases relatively slowly in the early search and faster in the later search. Conversely, with α < 1, rc varies in a concave manner.
Figure 3: Variations of rc with function evaluations for different settings of α.
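The NTVS schedule of (6) can be sketched in the same way (the default α is chosen only for illustration):

```python
def rc_nonlinear(fitc, FEs, rmax=0.7, rmin=0.2, alpha=1.2):
    # Eq. (6): with alpha > 1, rc decays slowly early and faster late;
    # alpha = 1 recovers the linear schedule of Eq. (5)
    return rmax - (rmax - rmin) * (fitc / FEs) ** alpha

# At the midpoint of the run, the convex (alpha > 1) schedule keeps
# rc, and hence the employed-bee share, higher than the linear one
mid_nonlinear = rc_nonlinear(35000, 70000)          # alpha = 1.2
mid_linear = rc_nonlinear(35000, 70000, alpha=1.0)  # reduces to LTVS
```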
3.2. Parameter Tuning of the Time-Varying Strategy
In order to investigate the influence of rmax and rmin in LTVS, six combinations of rmax and rmin are tested on 8 representative functions, including 3 typical unimodal functions (f1, f2, and f3) and 5 typical multimodal functions (f6, f7, f8, f9, and f10), in 30 dimensions. The colony size is set to 60, the maximum number of function evaluations is 7×10^4, and limit is set to 200 [42]. Experimental results of 25 independent trials are presented in Table 1, where "Mean" indicates the mean of the function values, "SD" stands for the standard deviation, and "Rank" refers to the performance level on each function among the six combinations of rmax and rmin. The best mean function value among the six combinations is marked in boldface. The rank results given in Table 2 show that the setting rmax = 0.7 and rmin = 0.2 is the best choice; these values are therefore used in LTVS. We then try to obtain better performance with the NTVS (α ≠ 1), keeping rmax and rmin fixed at the values obtained for the linear case. Table 3 shows the performance of the resulting algorithm for different values of α, keeping FEs = 7×10^4. From Tables 3 and 4, we can observe that the best result is obtained with α = 1.2.
Table 1: Experimental results of different combinations of rmax and rmin in LTVS.

| Parameter setting | | f1 | f2 | f3 | f6 | f7 | f8 | f9 | f10 |
| rmax=0.8, rmin=0.2 | Mean | 2.28e−015 | 1.41e−007 | 4.04e−010 | 1.3601 | 1.72e−002 | 1.33e−003 | 2.82e−007 | 2.12e+002 |
| | SD | 1.57e−015 | 8.37e−008 | 6.84e−010 | 1.4594 | 8.59e−002 | 4.63e−003 | 2.04e−007 | 8.10e+001 |
| | Rank | 1 | 1 | 1 | 2 | 6 | 5 | 2 | 5 |
| rmax=0.8, rmin=0.3 | Mean | 3.20e−015 | 1.79e−007 | 7.54e−010 | 1.5393 | 6.82e−003 | 6.85e−007 | 3.24e−007 | 2.10e+002 |
| | SD | 2.59e−015 | 7.97e−008 | 1.41e−009 | 1.7766 | 3.42e−002 | 3.40e−006 | 1.83e−007 | 1.01e+002 |
| | Rank | 2 | 2 | 2 | 5 | 5 | 2 | 4 | 4 |
| rmax=0.7, rmin=0.2 | Mean | 1.34e−014 | 3.38e−007 | 7.28e−009 | 1.3620 | 1.73e−006 | 6.48e−010 | 2.62e−007 | 1.80e+002 |
| | SD | 1.81e−014 | 1.60e−007 | 1.47e−008 | 2.0555 | 5.34e−006 | 1.72e−009 | 1.39e−007 | 7.84e+001 |
| | Rank | 4 | 4 | 3 | 3 | 1 | 1 | 1 | 2 |
| rmax=0.7, rmin=0.3 | Mean | 6.12e−015 | 3.41e−007 | 1.49e−008 | 1.2644 | 1.93e−003 | 3.02e−004 | 3.35e−007 | 2.19e+002 |
| | SD | 4.41e−015 | 1.64e−007 | 4.59e−008 | 1.1504 | 9.44e−003 | 1.52e−003 | 1.26e−007 | 1.27e+002 |
| | Rank | 3 | 3 | 4 | 1 | 4 | 3 | 5 | 6 |
| rmax=0.6, rmin=0.2 | Mean | 4.86e−014 | 6.36e−007 | 8.76e−008 | 1.4883 | 8.55e−006 | 9.79e−004 | 2.85e−007 | 1.67e+002 |
| | SD | 5.41e−014 | 2.57e−007 | 1.76e−007 | 1.7028 | 3.02e−005 | 2.77e−003 | 1.51e−007 | 1.02e+002 |
| | Rank | 6 | 5 | 6 | 4 | 2 | 4 | 3 | 1 |
| rmax=0.6, rmin=0.3 | Mean | 4.59e−014 | 8.45e−007 | 2.66e−008 | 1.6454 | 1.33e−003 | 1.43e−003 | 4.17e−007 | 1.83e+002 |
| | SD | 3.49e−015 | 4.49e−007 | 5.57e−008 | 1.6193 | 6.75e−003 | 3.97e−003 | 2.17e−007 | 8.42e+001 |
| | Rank | 5 | 6 | 5 | 6 | 3 | 6 | 6 | 3 |
Table 2: Rank results of the different combinations of rmax and rmin in LTVS.

| | rmax=0.8, rmin=0.2 | rmax=0.8, rmin=0.3 | rmax=0.7, rmin=0.2 | rmax=0.7, rmin=0.3 | rmax=0.6, rmin=0.2 | rmax=0.6, rmin=0.3 |
| Average rank | 2.875 | 3.25 | 2.375 | 3.625 | 3.875 | 5 |
| Final rank | 2 | 3 | 1 | 4 | 5 | 6 |
Table 3: Experimental results of different settings of α in NTVS.

| Parameter setting | | f1 | f2 | f3 | f6 | f7 | f8 | f9 | f10 |
| α=1.6 | Mean | 2.16e−017 | 1.82e−007 | 8.41e−010 | 1.3859 | 1.49e−006 | 1.73e−003 | 3.10e−007 | 2.03e+002 |
| | SD | 2.83e−017 | 7.58e−008 | 1.27e−009 | 1.1617 | 6.63e−006 | 4.82e−003 | 1.15e−007 | 1.06e+002 |
| | Rank | 3 | 2 | 4 | 3 | 5 | 6 | 5 | 5 |
| α=1.4 | Mean | 7.04e−018 | 2.36e−007 | 1.59e−012 | 1.6515 | 8.40e−010 | 6.48e−004 | 4.78e−009 | 1.41e+002 |
| | SD | 6.61e−018 | 1.36e−007 | 3.38e−012 | 1.6594 | 2.33e−009 | 2.35e−003 | 2.83e−009 | 1.05e+002 |
| | Rank | 2 | 3 | 1 | 5 | 2 | 5 | 1 | 2 |
| α=1.2 | Mean | 1.46e−018 | 4.76e−009 | 4.21e−012 | 1.5987 | 6.25e−010 | 1.01e−005 | 5.58e−009 | 9.60e+001 |
| | SD | 2.66e−018 | 3.43e−009 | 9.92e−012 | 1.4570 | 1.61e−009 | 4.69e−005 | 2.94e−009 | 8.84e+001 |
| | Rank | 1 | 1 | 2 | 4 | 1 | 2 | 2 | 1 |
| α=1 | Mean | 1.34e−014 | 3.38e−007 | 7.28e−009 | 1.3620 | 1.73e−006 | 6.48e−010 | 2.62e−007 | 1.80e+002 |
| | SD | 1.81e−014 | 1.60e−007 | 1.47e−008 | 2.0555 | 5.34e−006 | 1.72e−009 | 1.39e−007 | 7.84e+001 |
| | Rank | 4 | 4 | 6 | 2 | 6 | 1 | 3 | 4 |
| α=0.8 | Mean | 1.77e−014 | 3.93e−007 | 9.21e−011 | 1.0221 | 1.75e−007 | 3.633e−004 | 2.68e−007 | 2.09e+002 |
| | SD | 2.04e−014 | 1.74e−007 | 2.49e−010 | 1.0185 | 4.57e−007 | 1.82e−003 | 1.47e−007 | 9.24e+001 |
| | Rank | 5 | 5 | 3 | 1 | 4 | 4 | 4 | 6 |
| α=0.6 | Mean | 6.06e−014 | 8.31e−007 | 1.35e−009 | 1.9156 | 1.90e−008 | 3.14e−004 | 3.63e−007 | 1.45e+002 |
| | SD | 7.52e−014 | 3.58e−007 | 2.45e−009 | 2.0691 | 7.87e−007 | 1.64e−003 | 2.17e−007 | 9.87e+001 |
| | Rank | 6 | 6 | 5 | 6 | 3 | 3 | 6 | 3 |
Table 4: Rank results of different settings of α in NTVS.

| | α=1.6 | α=1.4 | α=1.2 | α=1.0 | α=0.8 | α=0.6 |
| Average rank | 4.125 | 2.625 | 1.75 | 3.75 | 4 | 4.75 |
| Final rank | 5 | 2 | 1 | 3 | 4 | 6 |
4. Experimental Study
4.1. Benchmark Functions and Parameter Settings
In order to test the proposed algorithms, a diverse set of 21 benchmark functions is used in the experiments. These benchmark functions can be classified into three groups. Group 1 comprises the first five functions, f1–f5, which are unimodal. Group 2 includes ten multimodal functions with many local optima, used to test the global search capability and the ability to avoid premature convergence; note that f6 (Rosenbrock) is unimodal in 2D or 3D search spaces but multimodal in higher dimensions. Group 3 contains the rotated and/or shifted functions. f16 and f17 are rotated functions, in which the original variable x is left-multiplied by an orthogonal matrix M [43], y = M × x; M increases the complexity of a function by turning separable functions into nonseparable ones without affecting their shape. The global optima of f18 and f19 are shifted to different values for different dimensions (z = x − o), where o shifts the global optimal solution of the original function from the center of the search space to a new location. f20 and f21 are complicated functions that are both shifted and rotated. In each benchmark function, x* and f(x*) represent the global optimum and the corresponding function value, respectively. The function value of the best solution found by an algorithm in a run is denoted by f(x_best), and the error of that run is defined as error = f(x_best) − f(x*). The parameters of all benchmark functions are described in the Appendix.
To validate the effectiveness of the proposed time-varying strategy, experiments were conducted to compare ABC with ABC-LTVS and ABC-NTVS on the 21 benchmark functions. For a fair comparison, the colony size of all algorithms was set to 60. The number of function evaluations (FEs) is used as the stopping criterion for all algorithms; it is set to 7×10^4 for 30D problems and 1.2×10^5 for 50D problems. The parameter limit is set to 200 [42]. All experiments on each benchmark function were run 25 times independently.
4.2. Experimental Results for 30D Problems
The comparative results obtained by ABC, ABC-LTVS, and ABC-NTVS are presented in Table 5. The best mean result on each problem among all algorithms is given in bold. To determine whether the results obtained by ABC-LTVS and ABC-NTVS are statistically different from those generated by the ABC algorithm, a two-tailed t-test with 48 degrees of freedom is used at a significance level of 0.05. Values of "1," "0," and "−1" in columns "h1" and "h2" of Table 5 denote that ABC-LTVS and ABC-NTVS, respectively, perform significantly better than, statistically the same as, or significantly worse than the ABC algorithm. To give a visual comparison of the algorithms, the convergence graphs of the best and mean function values of each ABC algorithm on all benchmark functions are shown in Figures 4, 5, and 6; each curve represents the variation of the mean error over the FEs for a specific ABC algorithm.
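With 25 independent runs per algorithm, the pooled two-sample t-test indeed has 25 + 25 − 2 = 48 degrees of freedom. A hand-rolled sketch of the statistic (illustrative only, not the authors' code):

```python
import math

def two_sample_t(a, b):
    # Pooled (equal-variance) two-sample t statistic;
    # degrees of freedom = len(a) + len(b) - 2
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return t, na + nb - 2
```

For df = 48 the two-tailed critical value at the 0.05 level is about 2.01; an |t| above that threshold, with the sign of the mean difference, yields the "1"/"−1" entries, and anything below yields "0".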
Table 5: Experimental results for 30-dimensional problems.

| Function | ABC Mean | ABC SD | ABC-LTVS Mean | ABC-LTVS SD | ABC-NTVS Mean | ABC-NTVS SD | h1 | h2 |
| f1 | 7.36e−010 | 5.41e−010 | 1.34e−014 | 1.81e−014 | 1.46e−018 | 2.66e−018 | 1 | 1 |
| f2 | 1.26e−004 | 7.27e−005 | 3.38e−007 | 1.60e−007 | 4.76e−009 | 3.43e−009 | 1 | 1 |
| f3 | 4.37e+000 | 4.05e+000 | 7.28e−009 | 1.47e−009 | 4.21e−012 | 9.92e−012 | 1 | 1 |
| f4 | 0.1463 | 0.0309 | 0.1385 | 0.0307 | 0.1409 | 0.0412 | 0 | 0 |
| f5 | 5.32e+002 | 7.20e+001 | 5.66e+002 | 5.30e+001 | 5.29e+002 | 7.51e+001 | 0 | 0 |
| f6 | 3.4901 | 3.5885 | 1.3620 | 2.0555 | 1.5987 | 1.4570 | 1 | 1 |
| f7 | 6.39e−001 | 7.12e−001 | 1.73e−006 | 5.34e−006 | 6.25e−010 | 1.61e−009 | 1 | 1 |
| f8 | 1.23e−003 | 3.52e−003 | 6.48e−010 | 1.72e−009 | 1.01e−005 | 4.69e−005 | 0 | 0 |
| f9 | 5.79e−005 | 2.52e−005 | 2.62e−007 | 1.39e−007 | 5.58e−009 | 2.94e−009 | 1 | 1 |
| f10 | 3.62e+002 | 1.01e+002 | 1.80e+002 | 7.84e+001 | 9.60e+001 | 8.84e+001 | 1 | 1 |
| f11 | 1.3891 | 0.7813 | 0.1919 | 0.3751 | 0.0421 | 0.1998 | 1 | 1 |
| f12 | 5.02e−005 | 5.74e−005 | 1.54e−006 | 3.83e−006 | 8.17e−007 | 1.81e−006 | 1 | 1 |
| f13 | 1.64e−003 | 3.33e−003 | 6.65e−005 | 5.28e−005 | 7.51e−005 | 1.12e−004 | 1 | 1 |
| f14 | 1.42e−008 | 6.02e−010 | 1.37e−008 | 0 | 1.37e−008 | 0 | 1 | 1 |
| f15 | 1.30e−010 | 8.08e−011 | 3.96e−015 | 3.52e−015 | 2.11e−016 | 1.59e−016 | 1 | 1 |
| f16 | 47.5969 | 29.6407 | 45.9323 | 27.7604 | 42.1560 | 23.4833 | 0 | 0 |
| f17 | 80.7968 | 12.1854 | 80.0519 | 10.6073 | 77.9708 | 15.6648 | 0 | 0 |
| f18 | 6.5476 | 5.8738 | 1.7831 | 1.9028 | 2.2656 | 2.2663 | 1 | 1 |
| f19 | 0.6250 | 0.6287 | 0.0399 | 0.1990 | 5.24e−005 | 2.62e−004 | 1 | 1 |
| f20 | 73.2093 | 16.1789 | 76.5697 | 16.2155 | 73.4801 | 12.5874 | 0 | 0 |
| f21 | 8.40e−005 | 8.07e−005 | 7.74e−006 | 1.40e−005 | 9.23e−006 | 3.07e−005 | 1 | 1 |
Figure 4: Convergence curves of ABC variants solving unimodal functions f1–f5 and the multimodal Rosenbrock function f6 (panels: Sphere f1, Schwefel's P2.22 f2, Elliptic f3, Noise f4, Zakharov f5, Rosenbrock f6).
Figure 5: Convergence curves of ABC variants solving multimodal functions f7–f15.
For the unimodal functions, ABC-NTVS achieves the highest solution accuracy on f1, f2, f3, and f5, and ABC-LTVS obtains the best solution on f4. ABC-LTVS and ABC-NTVS perform significantly better than the ABC algorithm on f1, f2, and f3. The multimodal problems have many local minima, and it is not easy to find their global optima. ABC-LTVS and ABC-NTVS offer better performance than the ABC algorithm on these problems: ABC-LTVS finds the best solutions on f6, f8, and f13, and ABC-NTVS performs best on f7, f9, f10, f11, and f12. ABC, ABC-LTVS, and ABC-NTVS obtain similar performance on f14. The performance of ABC-LTVS and ABC-NTVS is significantly better than that of the ABC algorithm on these multimodal problems, except on f8.
The performance of most optimization algorithms decreases sharply on shifted and/or rotated problems. Table 5 shows the results on the rotated and/or shifted functions: all three ABC algorithms are affected, but ABC-LTVS and ABC-NTVS still obtain relatively good performance, performing significantly better than the ABC algorithm on f18, f19, and f21. With respect to stability, ABC-LTVS and ABC-NTVS compare favorably with the ABC algorithm: the standard deviations of the solutions they find are small for most functions. On the whole, ABC-LTVS and ABC-NTVS exhibit accurate convergence on almost all benchmarks, which indicates the effectiveness of the proposed time-varying strategy. Moreover, the experimental results show that ABC-NTVS slightly outperforms ABC-LTVS.
4.3. Experimental Results for 50D Problems
The experiments were also conducted on the 50D problems; the results for the unimodal, multimodal, and shifted and/or rotated problems are presented in Table 6. Since the convergence graphs for 50D are similar to those for the 30D problems, they are omitted to save space. As can be seen from Table 6, compared with the ABC algorithm, ABC-LTVS and ABC-NTVS still obtain high-quality solutions on the 50D problems. The meaning of columns "h1" and "h2" in Table 6 is the same as in Table 5. It is noted that, in terms of mean results, ABC-NTVS performs worse than ABC on f8, but not statistically significantly. According to the t-test results, ABC-LTVS performs significantly better than the ABC algorithm on all benchmark functions except f5, f16, and f17, and so does ABC-NTVS except on f5, f8, and f17.
Table 6: Experimental results for 50-dimensional problems.

| Function | ABC Mean | ABC SD | ABC-LTVS Mean | ABC-LTVS SD | ABC-NTVS Mean | ABC-NTVS SD | h1 | h2 |
| f1 | 6.88e−010 | 3.54e−010 | 9.35e−015 | 7.24e−015 | 5.28e−016 | 5.79e−016 | 1 | 1 |
| f2 | 1.59e−004 | 9.94e−005 | 4.22e−007 | 2.01e−007 | 7.14e−008 | 2.67e−008 | 1 | 1 |
| f3 | 4.57e−005 | 6.75e−005 | 8.82e−008 | 3.58e−007 | 3.51e−010 | 4.42e−010 | 1 | 1 |
| f4 | 0.3606 | 0.0579 | 0.3132 | 0.0691 | 0.3214 | 0.0657 | 1 | 1 |
| f5 | 1.11e+003 | 1.09e+002 | 1.09e+003 | 9.64e+001 | 1.07e+003 | 8.08e+001 | 0 | 0 |
| f6 | 8.2319 | 6.6280 | 1.3564 | 1.8842 | 1.3761 | 1.8716 | 1 | 1 |
| f7 | 1.0794 | 0.7609 | 0.0979 | 0.2829 | 0.0865 | 0.2755 | 1 | 1 |
| f8 | 2.45e−008 | 3.16e−008 | 1.42e−009 | 5.88e−009 | 3.23e−004 | 1.54e−003 | 1 | 0 |
| f9 | 4.78e−005 | 1.49e−005 | 2.25e−007 | 9.33e−008 | 5.01e−008 | 2.47e−008 | 1 | 1 |
| f10 | 7.83e+002 | 1.75e+002 | 3.97e+002 | 9.89e+001 | 3.45e+002 | 1.78e+002 | 1 | 1 |
| f11 | 2.7273 | 0.9297 | 0.3388 | 0.4789 | 0.3029 | 0.4551 | 1 | 1 |
| f12 | 4.16e−005 | 7.77e−005 | 9.34e−007 | 2.42e−006 | 2.55e−007 | 6.02e−007 | 1 | 1 |
| f13 | 4.52e−003 | 4.91e−003 | 5.76e−004 | 7.16e−004 | 4.73e−004 | 5.83e−004 | 1 | 1 |
| f14 | 2.37e−008 | 1.33e−009 | 2.29e−008 | 1.40e−012 | 2.29e−008 | 1.07e−012 | 1 | 1 |
| f15 | 1.57e−010 | 1.41e−010 | 4.61e−015 | 4.07e−015 | 3.53e−016 | 4.77e−016 | 1 | 1 |
| f16 | 95.9988 | 40.0144 | 82.0768 | 32.2224 | 72.1410 | 26.1076 | 0 | 1 |
| f17 | 142.9558 | 23.9951 | 143.1200 | 18.8582 | 142.5459 | 20.4493 | 0 | 0 |
| f18 | 5.9388 | 4.7166 | 1.6944 | 1.7353 | 1.9040 | 1.8484 | 1 | 1 |
| f19 | 1.5057 | 0.8212 | 0.1243 | 0.3400 | 0.0400 | 0.1990 | 1 | 1 |
| f20 | 152.5097 | 18.9577 | 141.0244 | 20.0979 | 146.2151 | 17.2185 | 1 | 0 |
| f21 | 1.30e−004 | 2.39e−004 | 6.84e−006 | 8.96e−006 | 4.41e−006 | 5.53e−006 | 1 | 1 |
4.4. GABC with Time-Varying Strategy
Inspired by particle swarm optimization (PSO), Zhu and Kwong [24] proposed a popular ABC variant, the Gbest-guided artificial bee colony (GABC). GABC incorporates the information of the global best (gbest) position into (2). Experimental results have shown that GABC outperforms the basic ABC algorithm. To test the effect of the time-varying strategy, we applied the proposed LTVS and NTVS to the GABC algorithm, yielding the GABC-LTVS and GABC-NTVS algorithms, respectively. Experiments are conducted on the 30D and 50D benchmark functions to test whether the proposed time-varying strategy is effective in the GABC algorithm. The parameter settings of the GABC algorithm follow the original reference, except for the colony size and FEs, which are the same as in the previous experiments.
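Based on Zhu and Kwong's description, the gbest-guided update can be sketched as follows (a sketch under the assumption that, as in (2), a single random dimension is perturbed; the constant C = 1.5 follows their paper, while the function name is our own):

```python
import random

def gabc_candidate(x_i, x_k, gbest, C=1.5, rng=random):
    # Gbest-guided version of Eq. (2) (Zhu and Kwong):
    # u_ij = x_ij + phi * (x_ij - x_kj) + psi * (gbest_j - x_ij),
    # with phi ~ U[-1, 1] and psi ~ U[0, C], on one random dimension j
    u = list(x_i)
    j = rng.randrange(len(x_i))
    phi = rng.uniform(-1.0, 1.0)
    psi = rng.uniform(0.0, C)
    u[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + psi * (gbest[j] - x_i[j])
    return u

u = gabc_candidate([1.0, 2.0], [0.5, 1.5], [0.0, 0.0])
```

The extra psi term biases the candidate toward the best solution found so far, which is what gives GABC its stronger exploitation.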
Tables 7 and 8 give the experimental results on the 30D and 50D problems, respectively. The best result on each problem among the three GABC algorithms is shown in bold. In addition, columns "h1" and "h2" in Tables 7 and 8 indicate the statistical significance of the differences between the results of GABC and those of GABC-LTVS and GABC-NTVS, respectively, determined using two-tailed t-tests. According to the t-test results shown in Tables 7 and 8, GABC-LTVS and GABC-NTVS find more accurate solutions, significantly better than those of GABC on about half of all benchmark functions regardless of the problem dimension.
Table 7: Experimental results of the GABC algorithms for 30-dimensional problems.

| Function | GABC Mean | GABC SD | GABC-LTVS Mean | GABC-LTVS SD | GABC-NTVS Mean | GABC-NTVS SD | h1 | h2 |
| f1 | 7.03e−016 | 1.29e−016 | 1.16e−025 | 9.49e−026 | 1.71e−027 | 1.21e−027 | 1 | 1 |
| f2 | 6.21e−009 | 2.30e−008 | 5.73e−013 | 2.18e−013 | 6.69e−014 | 3.64e−014 | 1 | 1 |
| f3 | 1.34e−012 | 1.07e−012 | 2.76e−021 | 3.38e−021 | 1.26e−023 | 1.711e−023 | 1 | 1 |
| f4 | 0.0692 | 0.0188 | 0.0652 | 0.0144 | 0.0602 | 0.0155 | 0 | 0 |
| f5 | 5.23e+002 | 6.31e+001 | 5.20e+002 | 7.81e+001 | 5.08e+002 | 6.66e+001 | 0 | 0 |
| f6 | 6.4906 | 10.3348 | 2.7541 | 3.9329 | 2.5028 | 2.4833 | 0 | 0 |
| f7 | 8.62e−011 | 1.58e−010 | 7.11e−017 | 3.55e−016 | 0 | 0 | 1 | 1 |
| f8 | 7.89e−004 | 3.87e−003 | 4.78e−012 | 2.13e−011 | 1.03e−005 | 5.16e−005 | 0 | 0 |
| f9 | 2.07e−009 | 7.48e−010 | 9.53e−013 | 3.34e−013 | 2.01e−013 | 4.74e−014 | 1 | 1 |
| f10 | 59.7836 | 70.9355 | 23.8017 | 59.1745 | 25.5825 | 48.3538 | 0 | 0 |
| f11 | 3.76e−009 | 1.02e−008 | 6.11e−015 | 2.12e−014 | 1.42e−016 | 4.92e−016 | 0 | 0 |
| f12 | 5.71e−010 | 1.38e−009 | 1.51e−013 | 6.85e−013 | 4.58e−016 | 8.36e−016 | 1 | 1 |
| f13 | 2.58e−005 | 3.20e−005 | 1.11e−005 | 1.82e−005 | 8.89e−006 | 2.03e−005 | 0 | 1 |
| f14 | 1.37e−008 | 2.23e−013 | 1.37e−008 | 2.32e−013 | 1.37e−008 | 2.17e−013 | 0 | 0 |
| f15 | 6.39e−016 | 1.15e−016 | 4.72e−026 | 4.33e−026 | 6.54e−028 | 6.60e−028 | 1 | 1 |
| f16 | 38.0229 | 23.1492 | 41.9666 | 26.0778 | 38.1041 | 23.7575 | 0 | 0 |
| f17 | 55.5550 | 8.2793 | 53.8867 | 9.6732 | 51.0331 | 9.6130 | 0 | 0 |
| f18 | 7.4043 | 14.2994 | 3.7490 | 4.2342 | 2.4815 | 2.9901 | 0 | 1 |
| f19 | 3.24e−008 | 1.59e−007 | 4.26e−016 | 1.18e−015 | 7.11e−017 | 3.55e−016 | 1 | 1 |
| f20 | 51.3741 | 7.0974 | 50.2909 | 11.1007 | 54.0969 | 10.6401 | 0 | 0 |
| f21 | 3.97e−004 | 2.04e−003 | 3.68e−007 | 6.51e−007 | 1.11e−006 | 4.09e−006 | 1 | 1 |
Table 8: Experimental results of the GABC algorithms on the 50D problems.

| Function | GABC Mean | GABC SD | GABC-LTVS Mean | GABC-LTVS SD | GABC-NTVS Mean | GABC-NTVS SD | h1 | h2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| f1 | 1.29e−15 | 1.40e−16 | 1.67e−25 | 1.44e−25 | **1.79e−27** | 1.64e−27 | 1 | 1 |
| f2 | 7.72e−09 | 2.30e−09 | 6.28e−13 | 1.76e−13 | **6.54e−14** | 2.14e−14 | 1 | 1 |
| f3 | 3.09e−12 | 2.46e−12 | 6.32e−21 | 8.53e−21 | **1.02e−23** | 8.18e−24 | 1 | 1 |
| f4 | 0.1694 | 0.0252 | 0.1647 | 0.0294 | **0.1627** | 0.0296 | 0 | 0 |
| f5 | 1.04e+03 | 7.26e+01 | 1.05e+03 | 9.29e+01 | **1.02e+03** | 1.05e+02 | 0 | 0 |
| f6 | 12.3236 | 23.0585 | **1.7470** | 1.9037 | 3.3437 | 6.2364 | 1 | 0 |
| f7 | 3.12e−08 | 1.53e−07 | 7.11e−17 | 3.55e−16 | **0** | 0 | 0 | 0 |
| f8 | 9.09e−13 | 2.77e−12 | 8.41e−15 | 4.18e−14 | **2.68e−15** | 1.33e−14 | 0 | 0 |
| f9 | 2.52e−09 | 9.94e−10 | 1.04e−12 | 3.32e−13 | **2.39e−13** | 5.46e−14 | 1 | 1 |
| f10 | 162.7082 | 100.3839 | **52.5115** | 90.8550 | 80.6078 | 101.0140 | 1 | 1 |
| f11 | 4.99e−08 | 9.96e−08 | 1.93e−14 | 6.27e−14 | **1.99e−15** | 8.51e−15 | 1 | 1 |
| f12 | 3.61e−10 | 5.26e−10 | 1.26e−14 | 2.24e−14 | **3.51e−16** | 5.18e−16 | 1 | 1 |
| f13 | 1.71e−04 | 2.75e−04 | 4.09e−05 | 5.81e−05 | **2.02e−05** | 3.10e−05 | 1 | 1 |
| f14 | **2.29e−08** | 2.07e−13 | **2.29e−08** | 2.24e−13 | **2.29e−08** | 1.51e−13 | 0 | 0 |
| f15 | 1.17e−15 | 1.73e−16 | 9.38e−26 | 1.05e−25 | **7.58e−28** | 4.85e−28 | 1 | 1 |
| f16 | 79.5737 | 43.0285 | 87.9010 | 38.3788 | **74.1063** | 42.5826 | 0 | 0 |
| f17 | 101.0561 | 19.2130 | **97.5846** | 14.9214 | 98.3341 | 12.7543 | 0 | 0 |
| f18 | 6.7201 | 10.6406 | **3.1402** | 4.8951 | 4.1251 | 12.6738 | 1 | 0 |
| f19 | 2.22e−10 | 4.99e−10 | 4.89e−13 | 2.40e−12 | **0** | 0 | 1 | 1 |
| f20 | 107.1083 | 13.3490 | 106.0348 | 15.2274 | **104.4232** | 13.6487 | 0 | 0 |
| f21 | 1.32e−05 | 4.76e−05 | **1.94e−07** | 2.30e−07 | 1.57e−06 | 4.50e−06 | 0 | 0 |
5. Conclusions
In order to strike a balance between exploration and exploitation in the ABC algorithm, a time-varying strategy has been developed. The proposed strategy makes the ratio between the number of employed bees and the number of onlooker bees vary with time. Two types of time-varying strategies, LTVS and NTVS, have been developed, and their parameter settings have been examined and fine-tuned for better performance. Comprehensive experiments have demonstrated the effectiveness of the proposed time-varying strategy in the ABC and GABC algorithms. The modifications proposed in the present work are general enough to be applied to other state-of-the-art ABC variants to further improve search performance.
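The core of the strategy can be summarized in a few lines. The sketch below is illustrative rather than the paper's exact schedules: `r_start`, `r_end`, and the exponent `gamma` are placeholder parameters, with `gamma=None` giving a linear (LTVS-like) schedule and any other value bending it nonlinearly (NTVS-like):

```python
def employed_fraction(t, t_max, r_start=0.9, r_end=0.1, gamma=None):
    """Fraction of the colony working as employed bees at iteration t.
    The fraction moves from r_start to r_end as t goes from 0 to t_max:
    linearly when gamma is None (LTVS-like), nonlinearly otherwise
    (NTVS-like)."""
    progress = t / t_max
    if gamma is not None:
        progress = progress ** gamma
    return r_start + (r_end - r_start) * progress

def split_colony(sn, t, t_max, **kwargs):
    """Split a colony of sn food sources between employed and
    onlooker bees according to the time-varying fraction."""
    employed = max(1, round(sn * employed_fraction(t, t_max, **kwargs)))
    return employed, sn - employed
```

Whether the employed-bee fraction grows or shrinks over time is controlled entirely by the choice of `r_start` and `r_end`, so the same sketch covers schedules that shift effort toward either exploration or exploitation.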
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is partially supported by the National Natural Science Foundation of China under Grant nos. 71240015, 71402103, and 61273367; the National Science Foundation of SZU under Grant 836; the Foundation for Distinguished Young Talents in Higher Education of Guangdong, China, under Grant 2012WYM_0116; the MOE Youth Foundation Project of Humanities and Social Sciences at Universities in China under Grant 13YJC630123; the Youth Foundation Project of Humanities and Social Sciences in Shenzhen University under Grant 14QNFC28; and the Ningbo Science & Technology Bureau (Science and Technology Project no. 2012B10055).
References
[1] J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, Boston, Mass, USA, 2001.
[2] Y. Shi, "An optimization algorithm based on brainstorming process," International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35–62, 2011.
[3] D. Karaboga, "An idea based on honey bee swarm for numerical optimization," Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[4] D. Karaboga and B. Akay, "A survey: algorithms simulating bee swarm intelligence," Artificial Intelligence Review, vol. 31, no. 1–4, pp. 61–85, 2009.
[5] D. Karaboga, B. Gorkemli, C. Ozturk, and N. Karaboga, "A comprehensive survey: artificial bee colony (ABC) algorithm and applications," Artificial Intelligence Review, vol. 42, no. 1, pp. 21–57, 2014.
[6] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
[7] D. Karaboga, B. Akay, and C. Ozturk, "Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks," in V. Torra, Y. Narukawa, and Y. Yoshida, Eds., vol. 4617 of Lecture Notes in Computer Science, pp. 318–329, Springer, Berlin, Germany, 2007.
[8] Q. Qin, S. Cheng, L. Li, and Y. Shi, "Artificial bee colony algorithm: a survey," vol. 9, no. 2, pp. 127–135, 2014.
[9] W. Gao and S. Liu, "Improved artificial bee colony algorithm for global optimization," Information Processing Letters, vol. 111, no. 17, pp. 871–882, 2011.
[10] M. S. Kiran and M. Gündüz, "A recombination-based hybridization of particle swarm optimization and artificial bee colony algorithm for continuous optimization problems," Applied Soft Computing, vol. 13, no. 4, pp. 2188–2203, 2013.
[11] D. Karaboga and B. Akay, "A comparative study of artificial bee colony algorithm," Applied Mathematics and Computation, vol. 214, no. 1, pp. 108–132, 2009.
[12] C. Ozturk and D. Karaboga, "Hybrid artificial bee colony algorithm for neural network training," in Proceedings of the IEEE Congress of Evolutionary Computation (CEC '11), pp. 84–88, IEEE, June 2011.
[13] T.-J. Hsieh, H.-F. Hsiao, and W.-C. Yeh, "Forecasting stock markets using wavelet transforms and recurrent neural networks: an integrated system based on artificial bee colony algorithm," Applied Soft Computing, vol. 11, no. 2, pp. 2510–2525, 2011.
[14] T.-J. Hsieh, H.-F. Hsiao, and W.-C. Yeh, "Mining financial distress trend data using penalty guided support vector machines based on hybrid of particle swarm optimization and artificial bee colony algorithm," Neurocomputing, vol. 82, pp. 196–206, 2012.
[15] R. Zhang, S. Song, and C. Wu, "A hybrid artificial bee colony algorithm for the job shop scheduling problem," International Journal of Production Economics, vol. 141, no. 1, pp. 167–178, 2013.
[16] A. Alvarado-Iniesta, J. L. Garcia-Alcaraz, M. I. Rodriguez-Borbon, and A. Maldonado, "Optimization of the material flow in a manufacturing plant by use of artificial bee colony algorithm," Expert Systems with Applications, vol. 40, no. 12, pp. 4785–4790, 2013.
[17] Z. Cui and X. Gu, "An improved discrete artificial bee colony algorithm to minimize the makespan on hybrid flow shop problems," Neurocomputing, vol. 148, pp. 248–259, 2015.
[18] C. Zhang, D. Ouyang, and J. Ning, "An artificial bee colony approach for clustering," Expert Systems with Applications, vol. 37, no. 7, pp. 4761–4767, 2010.
[19] X. Yan, Y. Zhu, W. Zou, and L. Wang, "A new approach for data clustering using hybrid artificial bee colony algorithm," Neurocomputing, vol. 97, pp. 241–250, 2012.
[20] M.-H. Horng, "Multilevel thresholding selection based on the artificial bee colony algorithm for image segmentation," Expert Systems with Applications, vol. 38, no. 11, pp. 13785–13791, 2011.
[21] M. He, K. Hu, Y. Zhu, L. Ma, H. Chen, and Y. Song, "Hierarchical artificial bee colony optimizer with divide-and-conquer and crossover for multilevel threshold image segmentation," vol. 2014, Article ID 941534, 22 pages, 2014.
[22] C. Zhang and B. Zhang, "A hybrid artificial bee colony algorithm for the service selection problem," vol. 2014, Article ID 835071, 13 pages, 2014.
[23] K. Ayan and U. Kiliç, "Artificial bee colony algorithm solution for optimal reactive power flow," Applied Soft Computing, vol. 12, no. 5, pp. 1477–1482, 2012.
[24] G. Zhu and S. Kwong, "Gbest-guided artificial bee colony algorithm for numerical function optimization," Applied Mathematics and Computation, vol. 217, no. 7, pp. 3166–3173, 2010.
[25] A. Banharnsakun, T. Achalakul, and B. Sirinaovakul, "The best-so-far selection in artificial bee colony algorithm," Applied Soft Computing, vol. 11, no. 2, pp. 2888–2901, 2011.
[26] Z.-A. He, C. Ma, X. Wang, L. Li, Y. Wang, Y. Zhao, and H. Guo, "A modified artificial bee colony algorithm based on search space division and disruptive selection strategy," vol. 2014, Article ID 432654, 14 pages, 2014.
[27] W.-F. Gao and S.-Y. Liu, "A modified artificial bee colony algorithm," Computers & Operations Research, vol. 39, no. 3, pp. 687–697, 2012.
[28] W.-L. Xiang and M.-Q. An, "An efficient and robust artificial bee colony algorithm for numerical optimization," Computers & Operations Research, vol. 40, no. 5, pp. 1256–1265, 2013.
[29] A. Alizadegan, B. Asady, and M. Ahmadpour, "Two modified versions of artificial bee colony algorithm," Applied Mathematics and Computation, vol. 225, pp. 601–609, 2013.
[30] B. Akay and D. Karaboga, "A modified artificial bee colony algorithm for real-parameter optimization," Information Sciences, vol. 192, pp. 120–142, 2012.
[31] K. Diwold, A. Aderhold, A. Scheidler, and M. Middendorf, "Performance evaluation of artificial bee colony optimization and new selection schemes," Memetic Computing, vol. 3, no. 3, pp. 149–162, 2011.
[32] F. Kang, J. Li, and Z. Ma, "Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions," Information Sciences, vol. 181, no. 16, pp. 3508–3531, 2011.
[33] B. Alatas, "Chaotic bee colony algorithms for global numerical optimization," Expert Systems with Applications, vol. 37, no. 8, pp. 5682–5687, 2010.
[34] W.-F. Gao, S.-Y. Liu, and L.-L. Huang, "A novel artificial bee colony algorithm based on modified search equation and orthogonal learning," IEEE Transactions on Cybernetics, vol. 43, no. 3, pp. 1011–1024, 2013.
[35] Q. Qin, S. Cheng, Q. Zhang, Y. Wei, and Y. Shi, "Multiple strategies based orthogonal design particle swarm optimizer for numerical optimization," Computers & Operations Research, vol. 60, pp. 91–110, 2015.
[36] S. Cheng, Y. Shi, and Q. Qin, "Promoting diversity in particle swarm optimization to solve multimodal problems," in B.-L. Lu, L. Zhang, and J. Kwok, Eds., vol. 7063 of Lecture Notes in Computer Science, pp. 228–237, Springer, Berlin, Germany, 2011.
[37] S. Cheng, Y. Shi, and Q. Qin, "Population diversity of particle swarm optimizer solving single and multi-objective problems," International Journal of Swarm Intelligence Research, vol. 3, no. 4, pp. 23–60, 2012.
[38] S. Cheng, Department of Electrical Engineering and Electronics, University of Liverpool, 2013.
[39] D. Karaboga and B. Akay, "A modified artificial bee colony (ABC) algorithm for constrained optimization problems," Applied Soft Computing, vol. 11, no. 3, pp. 3021–3031, 2011.
[40] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.
[41] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
[42] T. K. Sharma and M. Pant, "Enhancing the food locations in an artificial bee colony algorithm," Soft Computing, vol. 17, no. 10, pp. 1939–1965, 2013.
[43] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.