A Hybrid Artificial Bee Colony Algorithm for the Service Selection Problem

To tackle the QoS-based service selection problem, a hybrid artificial bee colony algorithm called h-ABC is proposed, which incorporates the ant colony optimization mechanism into the artificial bee colony optimization process. In this algorithm, a skyline query process is used to filter the candidates of each service class, which greatly shrinks the search space without losing good candidates, and a flexible, self-adaptively varying construction graph is designed to model the search space based on a clustering process. Then, based on this construction graph, different foraging strategies are designed for the different groups of bees in the swarm. Finally, the approach is evaluated experimentally using standard real datasets and synthetically generated datasets and compared with several recently proposed service selection algorithms. It yields very encouraging results in terms of solution quality.


Introduction
With the proliferation of cloud computing and the software as a service (SaaS) concept, more and more web services will be offered on the web at different levels of quality [1]. There may be multiple service providers competing to offer the same functionality with different quality of service. Quality of service (QoS) has become a central criterion for differentiating these competing service providers and plays a major role in determining the success or failure of the composed application. Therefore, a service level agreement (SLA) is often used as a contractual basis between service consumers and service providers on the expected QoS level. The QoS-based service selection problem aims at finding the best combination of web services that satisfies a set of end-to-end QoS constraints in order to fulfill a given SLA; this is an NP-hard problem [2].
This problem becomes especially important and challenging as the number of functionally equivalent services offered on the web at different QoS levels increases exponentially [3]. Since the number of possible combinations can be very large, depending on the number of subtasks comprising the composite process and the number of alternative services for each subtask, using the proposed exact search algorithms [4, 5] to perform an exhaustive search for the best combination that satisfies a certain composition-level SLA is impractical. So most research concentrates on heuristic-based algorithms, especially metaheuristic approaches aiming at finding near-optimal compositions. In [5], the authors propose heuristic algorithms that can find a near-optimal solution more efficiently than exact solutions; they propose two models for the QoS-based service composition problem and introduce a heuristic for each model. In [6], a memetic algorithm is used for the service selection problem. In [7], the authors present a genetic algorithm for this problem, including the design of a special relation-matrix coding scheme for chromosomes, an evolution function of the population, and population diversity handling with simulated annealing. In [8], a new cooperative evolution (coevolution) algorithm consisting of stochastic particle swarm optimization (SPSO) and simulated annealing (SA) is presented to solve this problem. In [9], the basic principle of ACO is expounded and the QoS-based service selection problem is transformed into the problem of finding an optimal path. In [10], a service composition graph is applied to model this problem and an extended ant colony system using a novel ant clone rule is applied to solve it. In [11], an algorithm named the multipheromone and dynamically updating ant colony optimization algorithm (MPDACO) is put forward to solve this problem; it includes a global optimization process and a local optimizing process. But the performance of these existing service selection algorithms is not satisfying when the number of candidates becomes large. This is mainly because many redundant candidates exist; if they are not filtered beforehand, much search effort is wasted at run time. Moreover, the construction graphs used by the existing ACO-based service selection algorithms are static and their information granularities for this problem are too coarse, which makes these algorithms rely excessively on their local search processes. Furthermore, as a novel metaheuristic approach, the artificial bee colony (ABC) algorithm was defined by Karaboga and Basturk [12], motivated by the intelligent foraging behavior of honey bees. It has been applied to many problems with satisfying results [13], but no research on its application to service selection has been done.
To tackle these problems, a hybrid artificial bee colony algorithm called h-ABC is proposed in this paper. In this algorithm, an unsupervised clustering process based on the IS algorithm [14] is used to build a directed dynamic construction graph that guides the employed bees in making exploration. A strategy inspired by the ant search mechanism of ACO is designed for the employed bees to forage, and an efficient greedy local search strategy is designed for the onlookers to exploit the promising area identified by the obtained current global information. Then a self-adaptive reflecting process is used to adjust the construction graph based on the obtained local search information. To further improve solving efficiency, a skyline query process based on multicriteria dominance relationships [15] is used to filter the candidates of each service class, which can greatly shrink the search space without losing any good candidate. This approach is evaluated experimentally using different standard real datasets and synthetically generated datasets, and it is compared with some recently proposed service selection algorithms: DiGA [7], SPSO [8], MA [6], and MPDACO [11]. The computational results demonstrate the effectiveness of our approach in comparison to these algorithms. This paper is organized as follows. In Section 2, we give the definition of the QoS-based service selection problem and describe the basic artificial bee colony algorithm. The details of the hybrid artificial bee colony algorithm for service selection, including the search space representation and the search strategies, are provided in Section 3. The evaluation of this approach, including parameter tuning and comparative studies based on different standard real datasets and synthetically generated datasets, is given in Section 4. Finally, Section 5 summarizes the contribution of this paper along with some future research directions.

Problem Definition and Basic Artificial Bee Colony Algorithm
Here q′_k(CS) denotes the estimated end-to-end value of the kth QoS attribute. Although many different service composition structures may exist in the workflow, we focus only on the sequential structure, since the other structures can be reduced or transformed to the sequential structure using, for example, techniques for handling multiple execution paths and unfolding loops [16]. The aggregated value q′_k(CS) can therefore be computed by aggregating the corresponding values of the component services.
Definition 1 (abstract metaworkflow). For an abstract workflow W, it is an abstract metaworkflow if all its contained abstract services need to bind with a candidate service.
Definition 2 (abstract subworkflow). For an abstract metaworkflow W′ ⊆ W, it is an abstract subworkflow of W if a solution of the composite application corresponding to W′ is also a solution of the composite application corresponding to W.

Definition 3 (feasible selection). For a given abstract workflow W and a vector of global QoS constraints C = {c_1, c_2, ..., c_m}, 1 ≤ m ≤ r, which refer to the user's requirements and are expressed as a vector of upper (or lower) bounds for the different QoS criteria, we consider a selection of concrete services CS to be a feasible selection if and only if it contains exactly one service for each service class S_i of a subworkflow of W and its aggregated QoS values satisfy the global QoS constraints.
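As a concrete illustration of Definition 3, the following sketch checks feasibility for a small sequential composition. The attribute names, the aggregation rules (sum for response time, product for availability, cf. Table 1), and the bound directions are illustrative assumptions, not the paper's exact model.

```python
# Hypothetical sketch: a selection is feasible when its aggregated QoS
# values satisfy every global constraint (Definition 3).

def aggregate(selection):
    # Sequential composition: response times add, availabilities multiply.
    qos = {"response_time": 0.0, "availability": 1.0}
    for service in selection:
        qos["response_time"] += service["response_time"]
        qos["availability"] *= service["availability"]
    return qos

def is_feasible(selection, constraints):
    """constraints: upper bound for response_time, lower bound for availability."""
    qos = aggregate(selection)
    return (qos["response_time"] <= constraints["response_time"]
            and qos["availability"] >= constraints["availability"])

selection = [
    {"response_time": 120.0, "availability": 0.99},
    {"response_time": 80.0, "availability": 0.97},
]
print(is_feasible(selection, {"response_time": 250.0, "availability": 0.9}))  # True
```

Tightening the response-time bound below the aggregated 200.0 would make the same selection infeasible.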
In order to evaluate the overall quality of a given feasible selection CS, a utility function U is used which maps the quality vector Q_CS into a single real value. It is defined as

U(CS) = Σ_{k=1}^{r} w_k · (Q_max(k) − q′_k(CS)) / (Q_max(k) − Q_min(k)),

with w_k ∈ R⁺, Σ_{k=1}^{r} w_k = 1, being the weight of the kth attribute representing the user's priorities, Q_min(k) and Q_max(k) being the minimum and maximum aggregated values of the kth QoS attribute for the composite service CS, and q′_k(CS) being computed by an aggregation function that depends on the QoS criterion, as shown in Table 1 (for positive attributes the normalized term is (q′_k(CS) − Q_min(k)) / (Q_max(k) − Q_min(k))).

2.2. The Basic Artificial Bee Colony Algorithm. The artificial bee colony (ABC) algorithm is defined by Karaboga and Basturk [12], motivated by the intelligent foraging behavior of honey bees. In the ABC algorithm, the colony of artificial bees consists of three groups of bees: employed bees, onlookers, and scouts. A food source represents a possible solution to the problem to be optimized, and the nectar amount of a food source corresponds to the quality of the solution it represents. For every food source, there is only one employed bee; in other words, the number of employed bees is equal to the number of food sources around the hive. The employed bee whose food source has been abandoned by the bees becomes a scout.
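Before moving on, the utility function just defined can be sketched as follows. Attribute names, weights, and the min/max bounds here are illustrative assumptions; `negative` marks attributes where smaller values are better (e.g. response time).

```python
def utility(qos, q_min, q_max, weights, negative):
    """Map an aggregated QoS vector into a single value in [0, 1] by
    normalizing each attribute and taking the weighted sum."""
    total = 0.0
    for k, w in weights.items():
        span = q_max[k] - q_min[k]
        if span == 0:
            score = 1.0                      # constant attribute contributes fully
        elif k in negative:                  # smaller is better
            score = (q_max[k] - qos[k]) / span
        else:                                # larger is better
            score = (qos[k] - q_min[k]) / span
        total += w * score
    return total

example = utility(
    qos={"rt": 200.0, "av": 0.96},
    q_min={"rt": 100.0, "av": 0.9},
    q_max={"rt": 300.0, "av": 1.0},
    weights={"rt": 0.5, "av": 0.5},
    negative={"rt"},
)
print(round(example, 4))  # 0.55
```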
As other social foragers, bees search for food sources in a way that maximizes the ratio E/T, where E is the energy obtained and T is the time spent foraging. In the case of artificial bee swarms, E is proportional to the nectar amount of the food sources discovered by the bees. In a maximization problem, the goal is to find the maximum of the objective function F(x), x ∈ R^D. Assume that x_i is the position of the ith food source; F(x_i) represents the nectar amount of the food source located at x_i and is proportional to its energy. Let P(c) = {x_i(c) | i = 1, 2, ..., S} (c: cycle, S: number of food sources being visited by bees) represent the population of food sources being visited by bees.
As mentioned before, the preference of a food source by an onlooker bee depends on the nectar amount F(x) of that food source: as the nectar amount increases, the probability of the source being preferred by an onlooker increases proportionally. Therefore, the probability p_i with which the food source located at x_i will be chosen by an onlooker can be expressed as

p_i = F(x_i) / Σ_{k=1}^{S} F(x_k).

After watching the dances of the employed bees, an onlooker bee goes to the region of the food source located at x_i with this probability and determines a neighbor food source to take its nectar, depending on some visual information such as signs existing on the patches. In other words, the onlooker bee selects one of the food sources around x_i after making a comparison among them. The position of the selected neighbor food source is calculated as x_i(c + 1) = x_i(c) ± φ_i(c), where φ_i(c) is a randomly produced step used to find a food source with more nectar around x_i. It is calculated by taking the difference of the same parts of x_i(c) and x_k(c) (k is a randomly produced index). If the nectar amount F(x_i(c + 1)) at x_i(c + 1) is higher than that at x_i(c), then the bee goes to the hive, shares this information with the others, and the position of the food source is updated to x_i(c + 1); otherwise x_i(c) is kept as it is.
Every food source has only one employed bee; therefore, the number of employed bees is equal to the number of food sources. If the position x_i of food source i cannot be improved within the predetermined number of trials "limit," then that food source is abandoned by its employed bee, and the employed bee becomes a scout. The scout starts to search for a new food source, and after finding one, the new position is accepted as x_i. Every bee colony has scouts, which are the colony's explorers. The explorers do not have any guidance while looking for food; they are primarily concerned with finding any kind of food source. As a result of such behavior, the scouts are characterized by low search costs and a low average food source quality. Occasionally, however, the scouts can accidentally discover rich, entirely unknown food sources. In the case of artificial bees, the artificial scouts can have the fast discovery of the group of feasible solutions as their task.
It is clear from the above explanation that there are three control parameters used in the ABC algorithm: the number of food sources, which is equal to the number of employed bees (S); the value of limit; and the maximum cycle number (MCN). The main steps of the algorithm can be described as follows.

Step 1. Initialize the population of food sources (solutions) and evaluate their nectar amounts.
Step 2. Produce new solutions for the employed bees, evaluate them, and apply the greedy selection process.
Step 3. Calculate the probabilities of the current sources with which they are preferred by the onlookers.
Step 4. Assign onlooker bees to employed bees according to probabilities, produce new solutions, and apply the greedy selection process.
Step 5. Stop the exploitation process of the sources abandoned by bees and send the scouts in the search area for discovering new food sources randomly.
Step 6. Memorize the best food source found so far.
Step 7. If the termination condition is not satisfied, go to Step 2; otherwise stop the algorithm.
After each candidate source position is produced and evaluated by the artificial bee, its performance is compared with that of its old position. If the new food source has an equal or better nectar amount than the old one, it replaces the old one in the memory; otherwise, the old one is retained. In other words, a greedy selection mechanism is employed as the selection operation between the old source and the candidate one.
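The seven steps above can be sketched as a minimal continuous-domain ABC loop. This is an illustrative implementation of the generic ABC of [12], not the paper's h-ABC; all names and parameter values are our own, and the objective is assumed to be positive so that the onlooker probabilities in Step 3 are well defined.

```python
import random

def abc_maximize(f, dim, bounds, n_sources=10, limit=20, max_cycles=200, seed=1):
    """Minimal sketch of the basic ABC loop (Steps 1-7 above) for maximizing
    a positive objective f over [bounds[0], bounds[1]]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    new_source = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    sources = [new_source() for _ in range(n_sources)]   # Step 1: initialize
    fitness = [f(x) for x in sources]
    trials = [0] * n_sources
    best = max(sources, key=f)

    def try_improve(i):
        # Neighbor: perturb one random dimension using another random source.
        k = rng.randrange(n_sources)
        while k == i:
            k = rng.randrange(n_sources)
        d = rng.randrange(dim)
        cand = sources[i][:]
        cand[d] += rng.uniform(-1.0, 1.0) * (sources[i][d] - sources[k][d])
        cand[d] = min(max(cand[d], lo), hi)
        fc = f(cand)
        if fc > fitness[i]:                  # greedy selection (Steps 2 and 4)
            sources[i], fitness[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n_sources):           # Step 2: employed bees
            try_improve(i)
        total = sum(fitness)                 # Step 3: preference probabilities
        probs = [fi / total for fi in fitness]
        for _ in range(n_sources):           # Step 4: onlookers (roulette wheel)
            r, i, acc = rng.random(), 0, probs[0]
            while acc < r and i < n_sources - 1:
                i += 1
                acc += probs[i]
            try_improve(i)
        for i in range(n_sources):           # Step 5: scouts replace exhausted sources
            if trials[i] > limit:
                sources[i], trials[i] = new_source(), 0
                fitness[i] = f(sources[i])
        cycle_best = max(sources, key=f)     # Step 6: memorize the best so far
        if f(cycle_best) > f(best):
            best = cycle_best[:]
    return best

# Usage: maximize a unimodal function peaked at x = 2; the result lands near 2.0.
best = abc_maximize(lambda x: 1.0 / (1.0 + (x[0] - 2.0) ** 2), dim=1, bounds=(-10.0, 10.0))
```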

The h-ABC Algorithm
When the number of functionally equivalent services on offer becomes large, it is very important to effectively shrink the solution space and make the search quickly go in the right direction. So, in this hybrid algorithm, a skyline query process is used to filter the candidates of each service class, and an unsupervised clustering process is introduced to partition the skyline services of each service class. Then a directed clustering graph is constructed based on the clustering result to abstract the search space, and it is used to guide the bees' global search.
Definition 5 (skyline services). The skyline of a service class S, denoted by SLS, comprises the set of those services in S that are not dominated by any other service; that is, SLS = {x ∈ S | ¬∃ y ∈ S : y ≺ x}. We regard these services as the skyline services of S.

Definition 6 (dominance). Consider a service class S and two services x, y ∈ S, characterized by a set Q of QoS attributes. y dominates x, denoted by y ≺ x, if y is as good as or better than x in all parameters in Q and better in at least one parameter in Q.

Since not all services are potential candidates for the solution, a skyline query can be performed on the services in each class to distinguish between those services that are potential candidates for the composition and those that cannot possibly be part of it. In the proposed h-ABC algorithm, the skyline query is implemented using the sequential online archiving process of [17], a hypervolume-based archiving process that can update the skylines online; this makes it extensible to handle candidate changes. If the number of candidate services in the skyline SL_i ⊂ S_i of a service class S_i is more than a predefined threshold value, an unsupervised clustering process based on IS [14] is used to discover similar candidate services, where v_{i,j} represents the jth cluster center of class S_i and C_{i,j} represents the candidate services in this cluster. Then a directed clustering graph CG(V, E) is formed, where v_s and v_e represent the start point and end point and in(v_{i,j}) and out(v_{i,j}) are the in-degree and out-degree of node v_{i,j}. When each vertex v_{i,j}, except v_s and v_e, in CG is bound with a candidate service s_{i,j} ∈ C_{i,j}, a binding mode of the clustering graph is generated. Based on this binding mode, the following definition can be given.
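Definitions 5 and 6 can be sketched directly. Here we assume every QoS attribute has been normalized so that larger values are better, which lets a single comparison direction stand in for the positive/negative attribute handling; the paper's actual implementation uses the online hypervolume-based archive of [17] rather than this O(n²) pairwise check.

```python
def dominates(y, x):
    """y dominates x: y is at least as good everywhere and strictly better
    in at least one attribute (Definition 6, larger-is-better convention)."""
    return all(a >= b for a, b in zip(y, x)) and any(a > b for a, b in zip(y, x))

def skyline(services):
    """Keep only the services not dominated by any other (Definition 5)."""
    return [x for x in services if not any(dominates(y, x) for y in services)]

# Illustrative candidates as (attribute_1, attribute_2) tuples.
candidates = [(0.9, 0.2), (0.6, 0.6), (0.3, 0.9), (0.5, 0.5), (0.2, 0.1)]
print(skyline(candidates))  # [(0.9, 0.2), (0.6, 0.6), (0.3, 0.9)]
```

The last two candidates are filtered out: (0.5, 0.5) is dominated by (0.6, 0.6), and (0.2, 0.1) is dominated by every other service.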
Definition 7 (feasible path). Given a path p from the vertex v_s to v_e of a clustering graph with a specified binding mode, it is a feasible path if and only if the composite service CS formed by the services currently bound to the vertexes between v_s and v_e on this path satisfies all the global QoS constraints.

The fitness of a path p is computed as

fitness(p) = (1 − U(CS)) + cons(CS),

where cons(CS) denotes the number of constraints violated by CS. In this way, the more constraints a path violates, the bigger its fitness value will be; the evaluation thus depends not only on the utility but also on how many constraints have been violated. Based on this fitness definition, for the currently obtained paths (food sources) P = {p_0, p_1, ..., p_{n−1}}, the attractive probability AP_i for p_i ∈ P is computed from the reciprocal fitness:

AP_i = (1/fitness(p_i)) / Σ_{j=0}^{n−1} (1/fitness(p_j)).

To cover all possible service combinations, a dynamic construction graph is used in this framework, which can self-adaptively vary from one binding mode to another by dynamically changing the binding relationship between candidate services and vertexes. In the h-ABC algorithm, the employed bees and scouts are responsible for searching in the current binding mode CBM, and its transition to the next binding mode NBM is incorporated into the send-onlooker process and determined by the obtained paths P and the exploitation results of the onlookers. If the number of onlookers is num_onlooker, then the process of sending onlookers is detailed in Procedure 1.
By this process, the binding mode is self-adaptively converted to another one containing a feasible path with a smaller fitness value. Obviously, the information granularity is refined further by the dynamic construction graph. Furthermore, since all binding modes of a dynamic construction graph have the same topology and scale, which are determined by the built clustering graph, the mechanism of the ACO algorithm can be introduced and used by the employed bees for exploration, and the amount of pheromone information that needs to be stored is controllable. In the h-ABC algorithm, the employed bees communicate by laying pheromone on graph vertexes like the ants in ACO. The amount of pheromone on vertex v_{i,j} is denoted by τ(v_{i,j}). Intuitively, this amount of pheromone represents the learned desirability of moving towards the service class S_i bound with its jth service instance. The way an employed bee discovers a food source (path) in the current binding mode is outlined in Procedure 2.
For a given employed bee b that is building a path p_b and is currently at the vertex v_i, its feasible neighborhood in the current binding mode is defined as Nbr_b(v_i) = {v_{i+1,j} | ⟨v_i, v_{i+1,j}⟩ ∈ E}. In this paper, the roulette wheel selection (RS) rule is used by an employed bee to select a vertex in its feasible neighborhood. Under this rule, the probability that this employed bee selects the vertex v_{i,j} in its feasible neighborhood is computed as

P(v_{i,j}) = [τ(v_{i,j})]^α · [η(v_{i,j})]^β / Σ_{v ∈ Nbr_b(v_i)} [τ(v)]^α · [η(v)]^β,

where τ(v_{i,j}) is the pheromone factor of vertex v_{i,j}, η(v_{i,j}) is its heuristic factor, and α and β are parameters that determine their relative weights. In this paper, the heuristic factor η(v_{i,j}) depends on the whole current set of visited vertexes in p_b: it is inversely proportional to the number of new constraints violated when adding v_{i,j} to p_b. The details of sending the employed bees to make exploration are given in Procedure 3.
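The RS rule above can be sketched as follows. Vertex identifiers and the τ/η tables are illustrative; the exponents default to the values tuned later in the paper, but any positive factors work.

```python
import random

def select_vertex(neighborhood, tau, eta, alpha=1.0, beta=2.0, rng=random):
    """Roulette wheel selection: pick a neighbor vertex with probability
    proportional to tau^alpha * eta^beta (pheromone and heuristic factors)."""
    weights = [(v, (tau[v] ** alpha) * (eta[v] ** beta)) for v in neighborhood]
    total = sum(w for _, w in weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for v, w in weights:
        acc += w
        if acc >= r:
            return v
    return weights[-1][0]   # guard against floating-point round-off

tau = {"a": 2.5, "b": 2.5}
eta = {"a": 1.0, "b": 0.0}   # "b" would violate a constraint: zero heuristic
print(select_vertex(["a", "b"], tau, eta))  # a
```

Because vertex "b" has a zero heuristic factor, its selection weight vanishes and the bee always moves to "a"; with nonzero factors the choice becomes probabilistic.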
In order to simulate evaporation and allow the employed bees to forget bad assignments, all pheromone trails are decreased uniformly, and the chosen employed bees of the cycle deposit pheromone. More formally, after sending the employed bees and onlookers in each cycle, the quantity of pheromone on each vertex is updated as in Procedure 4.
In Procedure 4, ρ is the evaporation rate, 0 ≤ ρ ≤ 1. The set ElitistsOfCycle contains all the paths remembered by the employed bees in the current iteration. Δτ(p_i, v) is the quantity of pheromone that should be deposited on vertex v; it is defined as Δτ(p_i, v) = 1/fitness(p_i) if v ∈ p_i, and 0 otherwise. If a food source has not been improved by the time its trial number becomes bigger than the predefined threshold value "limit," the employed bee related to it will search as a scout. Different from the onlookers and employed bees, the scouts search in the current binding mode randomly: when constructing a path, a scout randomly selects the next vertex to move to. Furthermore, to clear away the effects of the abandoned food sources, the pheromones of the related vertexes are reset to their initial values. The details of the employed bees searching as scouts are given in Procedure 5.
In Procedure 5, τ_min and τ_max are the explicitly imposed lower and upper bounds of the pheromone trails, and their values are set to 1.0 and 4.0, respectively. The goal is to favor a larger exploration of the search space by preventing the relative differences between pheromone trails from becoming too extreme during processing. Furthermore, the pheromone trails are set to (τ_min + τ_max)/2 for all vertexes at the beginning of the proposed h-ABC algorithm to balance the exploitation and exploration abilities during the first cycle. Based on the above definitions and descriptions, the h-ABC algorithm for service selection is formulated in Algorithm 1.
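The evaporation, deposit, and clamping steps described above can be sketched together. The deposit rule Δτ = 1/fitness(p) is our assumption (better paths, i.e. smaller fitness, deposit more pheromone); the bounds 1.0 and 4.0 and the clamp into [τ_min, τ_max] come from the text.

```python
TAU_MIN, TAU_MAX = 1.0, 4.0   # pheromone bounds given in the text

def update_pheromones(tau, elitists, fitness, rho=0.25):
    """Evaporate all trails, deposit on the vertices of the remembered
    (elitist) paths, then clamp every trail into [TAU_MIN, TAU_MAX]."""
    for v in tau:                       # uniform evaporation
        tau[v] *= (1.0 - rho)
    for path in elitists:               # deposit: assumed Delta = 1/fitness(path)
        delta = 1.0 / fitness(path)
        for v in path:
            tau[v] += delta
    for v in tau:                       # keep trails inside the bounds
        tau[v] = min(max(tau[v], TAU_MIN), TAU_MAX)
    return tau

tau = {"a": 2.5, "b": 2.5, "c": 2.5}
update_pheromones(tau, elitists=[["a", "b"]], fitness=lambda p: 2.0)
print(tau)  # {'a': 2.375, 'b': 2.375, 'c': 1.875}
```

Vertices on the elitist path end up with more pheromone than the untouched vertex "c", which only evaporated; repeated cycles would drive "c" down to the τ_min floor rather than to zero.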
In Algorithm 1, we can see that the binding mode scale of the dynamic construction graph can be controlled by the parameters Max_cluster_number and Min_cluster_number. After building the clustering graph, the candidate service s_{i,j} ∈ C_{i,j} nearest to the center of cluster C_{i,j} is chosen to be bound with the vertex v_{i,j} to form the initialized binding mode. At each generation, a promising area is located by the employed bees, and then the onlookers are used to make further exploitation of this area and switch the binding mode. Moreover, the numbers of employed bees and onlookers are both set to half of the colony size in this algorithm.

Experimental Evaluations
In this part, we present an experimental evaluation of our approach, focusing on solution quality in terms of the obtained best solution utility values, and compare the proposed h-ABC algorithm with the recently proposed related algorithms DiGA [7], SPSO [8], MA [6], and MPDACO [11] on 12 different-scale test cases. All algorithms are implemented in C++ and executed on a Core i7, 2.93 GHz, 2 GB RAM computer.

Test Cases.
In our evaluation, we experimented with four datasets. The first is the publicly available updated dataset called QWS (http://www.uoguelph.ca/∼qmahmoud/qws/index.html), which comprises measurements of nine QoS attributes for 2507 real-world web services. These attributes, their priorities, and their aggregation functions are shown in Table 1. The services were collected from public sources on the web, including UDDI registries, search engines, and service portals, and their QoS values were measured using commercial benchmark tools. More details about this dataset can be found in [3]. We also experimented with three synthetically generated datasets, in order to test our approach with larger numbers of services and different distributions, produced by a publicly available synthetic generator (http://randdataset.projects.postgresql.org/): (a) a correlated dataset (cQoS), in which the values of the QoS parameters are positively correlated, (b) an anticorrelated dataset (aQoS), in which the values of the QoS parameters are negatively correlated, and (c) an independent dataset, in which the QoS values are set randomly. Each dataset contains 40000 QoS vectors, and each vector represents the nine QoS attributes of a web service. Based on these datasets, twelve test cases are created, which are shown in Table 2. In this table, the composition scale is defined as the number of abstract services included, and the candidate scale is defined as the number of candidate services related to each abstract service. Since all other models can be reduced or transformed to the sequential model using techniques for handling multiple execution paths and unfolding loops [18], the sequential composition model is focused on in this paper. We then created several QoS vectors of up to 9 random values to represent the user's end-to-end QoS constraints. Each QoS vector corresponds to one QoS-based composition request, for which one concrete service needs to be selected from each class, such that the overall utility value is maximized while all end-to-end constraints are satisfied.

Parameter Tuning.
In order to set an appropriate termination condition for this algorithm on each test case, the algorithm is run ten times on the selected test cases 5, 8, and 11. Since they have different composition scales and candidate scales, they are considered representative. Each run is terminated when the obtained best fitness value is not updated during 100 consecutive time intervals, with each time interval set to 1000 milliseconds. The colony size C_Size is set to 50, and the other parameters are set to the default values in Table 3. We found that the best solutions obtained in these runs do not change after 1.5 × 10^5 milliseconds. So, for a test case, the termination condition for a run of an algorithm is conveniently set to [(Co × Ca)/2500] × 1.5 × 10^5 milliseconds in the following experiments, where Co and Ca denote the composition scale and the candidate scale, respectively. In the proposed algorithm, since Max_cluster_number and Min_cluster_number are used to control the binding mode scale of the dynamic construction graph, their settings mainly depend on the running platform configuration. If Max_cluster_number is set too big and Min_cluster_number too small, a large space will be needed to store the pheromone trail information for some problems. Based on our running environment, we let Min_cluster_number = 50 and Max_cluster_number = Ca/Min_cluster_number. The influence of the parameter C_Size on the algorithm's performance is obvious when complexity is not taken into account: the larger the problem scale, the bigger its value should be. So we set it to 50 for convenience. Besides the above parameters, there are some other more complex and sensitive parameters in this algorithm; their ranges are shown in Table 3.
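As a small worked example of the time-budget formula above, assuming the brackets denote the integer part:

```python
# Per-case termination budget: [(Co * Ca) / 2500] * 1.5e5 milliseconds,
# where Co is the composition scale and Ca the candidate scale.
def termination_budget_ms(co, ca):
    return ((co * ca) // 2500) * 1.5e5   # // takes the integer part

print(termination_budget_ms(5, 500))   # 150000.0
```

So a case with 5 abstract services and 500 candidates per class gets one 1.5 × 10^5 ms budget unit, and doubling the composition scale doubles the budget.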
In order to perform parameter exploration studies, we select three representative test cases, 5, 8, and 11, which are characterized by the correlated, anticorrelated, and independent properties, respectively. To set appropriate values for these parameters, we tuned them in the sequential order limit, α, β, and ρ. For the parameter limit, we vary its value one at a time while setting the values of the other parameters to their defaults. For the next untuned parameter α, we vary its value one at a time while setting the already tuned parameters to their most appropriate obtained values and the remaining untuned parameters to their defaults. The other two parameters are then tuned in the same way as α. During this process, the h-ABC algorithm with each parameter configuration is run ten times on each used test case, and the results are shown in Figure 1. From Figure 1(a), we can see that the maximum average utility values for case 8 and case 11 are obtained when limit = 20. From Figure 1(b), we can see that the maximum average utility values for case 5 and case 11 are obtained when α = 1.0. From Figure 1(c), we can see that the maximum average utility values for case 5 and case 8 are obtained when β = 2.0. The maximum average utility values for case 8 and case 11 are obtained when ρ = 0.25, as shown in Figure 1(d). So, the comparatively better settings for these parameters are limit = 20, α = 1.0, β = 2.0, and ρ = 0.25 for the proposed algorithm.

Comparison with the Recently Proposed Related Algorithms.
In this part, we compare the h-ABC algorithm with the recently proposed related algorithms DiGA [7], SPSO [8], MA [6], and MPDACO [11] on the 12 different-scale test cases in Table 2. The parameters of the h-ABC and the termination condition for all these algorithms are set as in Section 4.2. The parameters of the other compared algorithms, except the termination condition, are set as in their original papers. We run each algorithm twenty times on each test case. The obtained results are summarized in Table 4. We can see that the minimum utility obtained by the h-ABC on each case is still larger than the biggest utility obtained by the other compared algorithms. Furthermore, the superiority of the h-ABC is more obvious for the test cases generated from the QWS dataset and the independent dataset. This is mainly because these two datasets are neither correlated nor anticorrelated, so many candidate services can be filtered by the skyline query process included in the defined framework. So, we can conclude that the h-ABC outperforms the compared methods in terms of utility score and possesses competitive performance on the large-scale service selection problem.

Conclusions
To tackle the large-scale service selection problem, a hybrid artificial bee colony algorithm is proposed. In this algorithm, a self-adaptive dynamic clustering graph is constructed which provides insight into the large-scale service selection problem and is exploited to predict the subspace crucial to search. It provides a useful way to solve the service selection problem and can serve as a reference for solving other optimization problems. There are a number of research directions that can be considered useful extensions of this work: we can combine the algorithm with other local search strategies or hybridize it with other metaheuristics. Furthermore, how to tackle QoS uncertainty during service selection in this framework is a topic for our future study.
2.1. The QoS-Based Service Selection Problem. For a composite application that is specified as an abstract workflow W composed of a set of abstract services S, each abstract service S_i = {s_1, s_2, ..., s_m}, i ∈ [0, ‖S‖ − 1], consists of all services that deliver the same functionality but potentially differ in terms of QoS values. The QoS attributes, which are published by the service provider, may be positive or negative. We use the vector Q_s = {q_1(s), q_2(s), ..., q_r(s)} to represent the r QoS values of service s, where q_k(s) denotes the published value of the kth attribute of service s. Then the QoS vector for a composite service consisting of n, n ∈ [1, ‖S‖], service components, CS = {s_1, s_2, ..., s_n}, is defined as Q_CS = {q′_1(CS), q′_2(CS), ..., q′_r(CS)}, where q′_k(CS) is the estimated end-to-end value of the kth QoS attribute.

Table 1: The considered attributes, their priorities, and aggregation functions.
Procedure 1: Send an onlooker to a food source and adjust the binding mode.
Begin
  for each current food source p_i ∈ P do compute its attractive probability AP_i according to (5); endfor;
  i = 0; repeat if (rand ≥ AP_i) then i++; endif; until (rand < AP_i);  // roulette wheel selection
  // make exploitation for the food source p_i and adjust the binding mode
  bool improved = true; trial_i++;  // increment the trial number of food source p_i by 1
  while (improved) do
    improved = false;
    j = randomInt(1, p_i.length − 1);  /* generate a random position between 1 and p_i.length − 1 */
    p′_i = p_i;
    randomly select a candidate s′ from the cluster containing the current binding service s;
    bind s′ with the vertex j of p′_i to replace s;
    if (fitness(p′_i) < fitness(p_i)) then p_i = p′_i; improved = true; endif;
  endwhile;
End

Procedure 2: Construct a path by an employed bee b.
Begin
  p_b = {v_s};
  repeat
    select a vertex v from its feasible neighborhood based on the used selection rule;
    move the employed bee to this vertex, p_b = p_b ∪ {v};
  until (v == v_e)
End
Procedure 5: Send the scouts.
Begin
  for each employed bee b do
    if (trial_b > limit) then
      // reinitialize the pheromones of the related vertexes
      for each vertex v in p_b do τ(v) = (τ_min + τ_max)/2; endfor;
      generate a path p_b from v_s to v_e randomly;
    endif;
  endfor;
End

Algorithm 1: The h-ABC algorithm (initialization fragment).
Parameter: int Max_cluster_number; int Min_cluster_number; int C_size;  // the colony size
Begin
  for each service class s_class do
    use the skyline query process to identify its skyline services SL_s;
    if (‖SL_s‖ > Min_cluster_number) then
      use the IS process to partition the skyline services into k clusters, k ≤ Max_cluster_number;

Table 2: The used test cases.

Table 3: The tuned parameters.

Table 4: The utilities obtained by the compared algorithms.

The maximum utility, minimum utility, mean value, and standard deviation obtained by each compared algorithm over the twenty runs on each case are given in Table 4. We can see that the maximum utility, minimum utility, and mean value obtained by the h-ABC algorithm for each test case are larger than those obtained by the other compared algorithms. Moreover, the results on the cases based on the QWS dataset are generally higher. This is mainly because the constraints used by the test cases related to the QWS dataset are less restrictive than the others; tightening the constraints makes a test case more difficult to some extent, so we make the constraints increasingly restrictive in the experiments. The h-ABC also achieved the smallest deviation values for cases 1, 2, 3, 4, 8, and 10. The MPDACO algorithm obtained the smallest deviation values for the other test cases. This may be because a local search process is combined with the ant colony optimization process in MPDACO, while the performance of its ant colony process for global search is limited on these test cases. The deviation values obtained by DiGA, SPSO, and MA for all test cases are all bigger than those obtained by the h-ABC. Therefore, the h-ABC is more stable than all the compared algorithms except MPDACO, and it performs better than all of them. This is further supported by Figure 2, which shows the statistical results as boxplots of the utilities obtained by the compared algorithms on each test instance: the distribution of the utilities obtained by each algorithm, including the smallest observation, lower quartile, median, mean, upper quartile, and the largest observation.