Wolf packs on the Tibetan Plateau unite and cooperate closely to hunt their prey, displaying remarkable skills and strategies. Inspired by their hunting behaviors and prey-distribution mode, we abstract three intelligent behaviors, scouting, calling, and besieging, and two intelligent rules, the winner-take-all rule for generating the lead wolf and the stronger-survive rule for renewing the wolf pack, and propose a new heuristic swarm intelligence method, named the wolf pack algorithm (WPA). Experiments are conducted on a suite of benchmark functions with different characteristics, unimodal/multimodal and separable/nonseparable, and the impact of several distance measurements and parameters on WPA is discussed. Moreover, comparative simulation experiments with five other typical intelligent algorithms, the genetic algorithm, particle swarm optimization, the artificial fish swarm algorithm, the artificial bee colony algorithm, and the firefly algorithm, show that WPA has better convergence and robustness, especially for high-dimensional functions.
Global optimization is a hot topic with applications in many areas, such as science, economics, and engineering. Generally, unconstrained global optimization problems can be formulated as follows:
As many real-world problems are becoming increasingly complex, global optimization, especially with traditional methods, is becoming a challenging task [
The wolf pack is marvelous. A harsh living environment and constant evolution over centuries have created its rigorous organization system and subtle hunting behaviors. The wolf-pack tactics of the Mongol cavalry in the Genghis Khan period, the submarine wolf-pack tactics of Admiral Doenitz in World War II, and the U.S. military's wolf-pack attack system for electronic countermeasures all highlight the great charm of their swarm intelligence. [
The remainder of this paper is structured as follows. In Section
Wolves are gregarious animals with a clear social division of labor. A pack has a lead wolf, some elite wolves that act as scouts, and some ferocious wolves. They cooperate well with each other, each taking its own responsibility for the survival and thriving of the pack.
Firstly, the lead wolf, the leader under the law of the jungle, is always the smartest and most ferocious one. It is responsible for commanding the pack and constantly makes decisions by evaluating the surrounding situation and perceiving information from the other wolves. This keeps the pack out of danger and lets it capture prey as smoothly and quickly as possible.
Secondly, the lead wolf sends some elite wolves to hunt around and look for prey within the probable scope. These elite wolves are the scouts. They walk around and independently make decisions according to the concentration of the smell left by the prey; a higher concentration means the prey is closer. So they always move in the direction of the strengthening smell.
Thirdly, once a scout wolf finds the trace of prey, it howls and reports to the lead wolf. The lead wolf then evaluates the situation and decides whether or not to summon the ferocious wolves to round up the prey. If summoned, the ferocious wolves move fast in the direction of the scout wolf.
Fourthly, after the prey is captured, it is not distributed equitably but in order from the strongest wolf to the weakest: the stronger the wolf, the more food it gets. Although this distribution rule leaves some weak wolves to die of hunger, it ensures that the wolves capable of capturing prey get more food, stay strong, and can capture prey successfully the next time. The rule prevents the whole pack from starving and ensures its continuance and proliferation. In what follows, we give a detailed description and realization of the above intelligent behaviors and rules.
If the predatory space of the artificial wolves is a
The distance between two wolves
The cooperation between the lead wolf, the scout wolves, and the ferocious wolves makes for nearly perfect predation, while prey distribution from the strong to the weak makes the wolf pack thrive in the direction of the prey it is most likely able to capture. The whole predation behavior of the wolf pack is abstracted into three intelligent behaviors, scouting, calling, and besieging, and two intelligent rules, the winner-take-all rule for generating the lead wolf and the stronger-survive rule for renewing the wolf pack.
If
If
It should be noted that
This formula consists of two parts: the former is the current position of wolf
If
Calling behavior embodies the information transfer and sharing mechanism in the wolf pack and blends in the idea of social cognition.
There are
When the value of
As described in the previous section, WPA has three artificial intelligent behaviors and two intelligent rules: scouting, calling, and besieging behavior, plus the winner-take-all rule for generating the lead wolf and the stronger-survive rule for renewing the wolf pack.
Firstly, the scouting behavior increases the chance that WPA fully traverses the solution space. Secondly, the winner-take-all rule for generating the lead wolf and the calling behavior make the wolves move towards the lead wolf, whose position is nearest to the prey and most likely to capture it. The winner-take-all rule and calling behavior also let the wolves reach the neighborhood of the global optimum after only a few iterations, since the step taken in the calling behavior is the largest one. Thirdly, with a small step,
All the above make WPA possess superior performance in accuracy and robustness, which will be seen in Section
Having discussed all the components of WPA, the important computation steps are detailed below.
Initialize the following parameters: the initial position of artificial wolf
The wolf with the best function value is taken as the lead wolf. In practical computation,
Except for the lead wolf, the rest of the
The position of artificial wolves who take besieging behavior is updated according to (
Update the position of lead wolf under the winnertakeall generating rule and update the wolf pack under the population renewing rule according to (
If the program reaches the required precision or the maximum number of iterations, the position and function value of the lead wolf, i.e., the optimal solution of the problem, are output; otherwise go to Step
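The computation steps above can be sketched in code. The exact position-update formulas did not survive extraction, so the step rules below (random scouting probes, a fractional move toward the lead wolf in calling, and a small greedy perturbation around the lead in besieging) are plausible stand-ins, not the paper's exact equations; the parameter names are our own.

```python
import random

def wpa_minimize(f, dim, bounds, n_wolves=30, n_iter=200,
                 step_a=0.12, h=4, t_max=8, renew_r=5, seed=1):
    """Minimal sketch of WPA: scouting, calling, and besieging behaviors,
    plus the winner-take-all lead rule and stronger-survive renewal.
    The concrete step formulas here are illustrative assumptions."""
    rng = random.Random(seed)
    lo, hi = bounds
    span = hi - lo
    step = step_a * span                        # base step scale

    def rand_wolf():
        return [lo + rng.random() * span for _ in range(dim)]

    def clip(x):
        return [min(max(v, lo), hi) for v in x]

    wolves = [rand_wolf() for _ in range(n_wolves)]
    for _ in range(n_iter):
        wolves.sort(key=f)                      # winner-take-all: best wolf leads
        lead = wolves[0]
        for i in range(1, n_wolves):
            w = wolves[i]
            # scouting: probe h random directions, keep improving moves
            for _ in range(t_max):
                probes = [clip([v + step * rng.uniform(-1, 1) for v in w])
                          for _ in range(h)]
                best = min(probes, key=f)
                if f(best) < f(w):
                    w = best
                else:
                    break
            # calling: rush a random fraction of the way toward the lead wolf
            w = clip([v + rng.random() * (l - v) for v, l in zip(w, lead)])
            # besieging: small perturbation around the lead, kept if better
            siege = clip([l + 0.1 * step * rng.uniform(-1, 1) for l in lead])
            wolves[i] = siege if f(siege) < f(w) else w
        # stronger-survive renewal: replace the renew_r weakest wolves
        wolves.sort(key=f)
        for i in range(n_wolves - renew_r, n_wolves):
            wolves[i] = rand_wolf()
    wolves.sort(key=f)
    return wolves[0], f(wolves[0])
```

For example, `wpa_minimize(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))` drives the Sphere function close to its global minimum at the origin.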
The flow chart of WPA is shown in Figure
The flow chart of WPA.
The ingredients of the WPA method have been described in Section
In order to evaluate the performance of these algorithms, eight classical benchmark functions are presented in Table
Benchmark functions in experiments.
No. | Function   | Formulation | Global extremum | Dim | C  | Range
1   | Rosenbrock | —           | —               | 2   | UN | (−2.048, 2.048)
2   | Colville   | —           | —               | 4   | UN | (−10, 10)
3   | Sphere     | —           | —               | 200 | US | (−100, 100)
4   | Sumsquares | —           | —               | 150 | US | (−10, 10)
5   | Booth      | —           | —               | 2   | MS | (−10, 10)
6   | Bridge     | —           | —               | 2   | MN | (−1.5, 1.5)
7   | Ackley     | —           | —               | 50  | MN | (−32, 32)
8   | Griewank   | —           | —               | 100 | MN | (−600, 600)
(—: entry not recovered.)
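The Formulation column of the benchmark table did not survive extraction. For reference, four of the listed benchmarks in their standard, widely used forms are sketched below; the paper's exact variants may differ slightly (e.g., in constants or shifts).

```python
import math

def sphere(x):
    """Unimodal, separable; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rosenbrock(x):
    """Unimodal, nonseparable; global minimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def ackley(x):
    """Multimodal, nonseparable; global minimum 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e

def griewank(x):
    """Multimodal, nonseparable; global minimum 0 at the origin."""
    s = sum(v * v for v in x) / 4000.0
    p = 1.0
    for i, v in enumerate(x, start=1):
        p *= math.cos(v / math.sqrt(i))
    return s - p + 1.0
```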
If a function has more than one local optimum, it is called multimodal. Multimodal functions are used to test the ability of algorithms to escape local minima. Another grouping of test problems is into separable and nonseparable functions. A
In Table
The variety of function forms and dimensions makes it possible to fairly assess the robustness of the proposed algorithms within limited iterations. Many of these functions allow a choice of dimension, and input dimensions ranging from 2 to 200 are used for the test functions. The dimensions of the problems we used can be found under the column titled
In this subsection, the experimental settings are given. Firstly, in order to fully compare the performance of the different algorithms, we ran the simulations under identical conditions, so the values of the common parameters used in each algorithm, such as population size and evaluation number, were chosen to be the same. The population size was 100 and the maximum evaluation number was 2000 for all algorithms on all functions. Additionally, we follow the parameter settings in the original papers of GA, PSO, AFSA, ABC, and FA; see Table
The list of various methods used in the paper.
Method  Authors and references 

Genetic algorithm (GA)  Goldberg [ 
Particle swarm optimization algorithm (PSO)  Kennedy and Eberhart [ 
Artificial fish swarm algorithm (AFSA)  Li et al. [ 
Artificial bee colony algorithm (ABC)  Karaboga [ 
Firefly algorithm (FA)  Yang [ 
For each experiment, 50 independent runs were conducted with different initial random seeds. To evaluate the performance of these algorithms, six criteria are given in Table
Six criteria and their abbreviations.
Criteria  Abbreviation 

The best value of optima found in 50 runs  Best 
The worst value of optima found in 50 runs  Worst 
The average value of optima found in 50 runs  Mean 
The standard deviations  StdDev 
The success rate of the results  SR 
The average reaching time  Art 
Accelerating convergence and avoiding local optima have become two important and appealing goals in swarm intelligence search algorithms. So, as seen in Table
Specifically speaking, SR provides very useful information about how stable an algorithm is. Success is claimed if an algorithm gets a solution below a prespecified threshold value within the maximum number of function evaluations [
The SR is a percentage value that is calculated as
Art is the average value of time once an algorithm gets a solution satisfying the formula (
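The formulas referenced above did not survive extraction; under the definitions just given, they can be written as follows (a reconstruction, with $\varepsilon$ the prespecified threshold, $f_k^{\ast}$ the optimum found in run $k$, and $t_k$ the reaching time of a successful run):

```latex
\mathrm{SR} = \frac{N_{\mathrm{success}}}{50} \times 100\%,
\qquad
\mathrm{Art} = \frac{1}{N_{\mathrm{success}}} \sum_{k=1}^{N_{\mathrm{success}}} t_k,
```

where run $k$ counts as successful if $\left| f_k^{\ast} - f_{\mathrm{opt}} \right| \le \varepsilon$ within the maximum number of function evaluations, and 50 is the number of independent runs.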
All algorithms were tested in Matlab 2008a on the same Lenovo A4600R computer with a dual-core 2.60 GHz processor and 1.99 GB of memory, running the Windows XP operating system.
In order to study the effect of two distance measures and four parameters on WPA, different measures and values of parameters were tested on typical functions listed in Table
This subsection investigates the performance of different distance measurements using functions with different characteristics. As is well known, Euclidean distance (ED) and Manhattan distance (MD) are the two most common distance metrics in practical continuous optimization. In the proposed WPA, either MD or ED can be adopted to measure the distance between two wolves in the candidate solution space. Therefore, a discussion of their impact on the performance of WPA is needed.
There are two wolves:
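For concreteness, the two measurements over two wolves' position vectors `p` and `q` can be sketched as:

```python
def euclidean(p, q):
    """ED: straight-line distance between two wolves' positions."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def manhattan(p, q):
    """MD: coordinate-wise (city-block) distance; cheaper, no square root."""
    return sum(abs(a - b) for a, b in zip(p, q))
```

For example, `euclidean([0, 0], [3, 4])` gives 5.0, while `manhattan([0, 0], [3, 4])` gives 7; MD avoids the square root, which is one reason it can be computationally cheaper in high dimensions.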
The statistical results obtained by WPA over 50 runs are shown in Table
Sensitivity analysis of distance measurements.
Function (Dim)   | Distance | Best   | Worst  | Mean   | StdDev | SR/% | Art/s
Rosenbrock (2)   | MD       | —      | —      | —      | —      | 100  | 10.5165
Rosenbrock (2)   | ED       | —      | —      | —      | —      | 100  | 37.1053
Colville (4)     | MD       | —      | —      | —      | —      | 100  | 46.8619
Colville (4)     | ED       | —      | —      | —      | —      | 90   | 68.3220
Sphere (200)     | MD       | —      | —      | —      | —      | 100  | 11.5494
Sphere (200)     | ED       | —      | —      | —      | —      | 100  | 11.6825
Sumsquares (150) | MD       | —      | —      | —      | —      | 100  | 8.5565
Sumsquares (150) | ED       | —      | —      | —      | —      | 100  | 8.7109
Booth (2)        | MD       | —      | —      | —      | —      | 100  | 11.1074
Booth (2)        | ED       | —      | —      | —      | —      | 100  | 40.5546
Bridge (2)       | MD       | 3.0054 | 3.0054 | 3.0054 | —      | 100  | 1.1093
Bridge (2)       | ED       | 3.0054 | 3.0054 | 3.0054 | —      | 100  | 1.9541
Ackley (50)      | MD       | —      | —      | —      | 0      | 100  | 19.3648
Ackley (50)      | ED       | —      | —      | —      | 0      | 100  | 43.6884
Griewank (100)   | MD       | 0      | 0.1507 | —      | 0.0213 | 98   | >8
Griewank (100)   | ED       | 0      | 0.8350 | 0.0167 | 0.1181 | 92   | >1
(—: entry not recovered.)
As seen from Table
Naturally, because of its better efficiency, precision, and robustness, MD is more suitable for WPA. So the WPA algorithm used in what follows is WPA_MD.
In this subsection, we investigate the impact of the parameters
Each time, one of the WPA parameters is varied within a certain interval to see which value in this interval results in the best performance. Specifically, the WPA algorithm again runs 50 times on each case.
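The sweep procedure just described can be sketched as a small harness; `run_once` stands for one 50-run-style optimization with a given parameter value and seed, and is a hypothetical callable here, not part of the paper.

```python
import statistics

def sweep(run_once, param_values, n_runs=50):
    """For each candidate parameter value, run the optimizer n_runs times
    and report (mean, std) of the optima found, the Mean +/- Std layout
    used in the sensitivity tables."""
    table = {}
    for p in param_values:
        results = [run_once(p, seed) for seed in range(n_runs)]
        table[p] = (statistics.mean(results), statistics.pstdev(results))
    return table
```

For example, `sweep(run_wpa_on_sphere, [0.04, 0.06, 0.08, 0.10, 0.12, 0.14, 0.16])` would reproduce one row of the step-coefficient table, given a suitable `run_wpa_on_sphere(p, seed)` wrapper.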
Table
Sensitivity analysis of step coefficient (
Functions   | Mean ± Std (SR/%) (the default of SR is 100%)
            | 0.04 | 0.06 | 0.08 | 0.10 | 0.12 | 0.14 | 0.16
Rosenbrock  | —    | —    | —    | —    | —    | —    | —
Colville    | —    | —    | —    | —    | —    | —    | —
Sphere      | —    | —    | —    | —    | —    | —    | —
Sumsquares  | —    | —    | —    | —    | —    | —    | —
Booth       | —    | —    | —    | —    | —    | —    | —
Bridge      | —    | —    | —    | —    | —    | —    | —
Ackley      | —    | —    | —    | —    | —    | —    | —
Griewank    | —    | —    | —    | —    | —    | —    | —
(—: entry not recovered.)
Meanwhile, a detailed comparison of the results shows that on the Rosenbrock, Sphere, and Bridge functions WPA is not sensitive to the step coefficient, while for the Booth function there is a tendency towards better results with larger
Tables
Sensitivity analysis of distance determinant coefficient (
Functions   | Mean ± Std (SR/%) (the default of SR is 100%)
            | 0.04 | 0.06 | 0.08 | 0.10 | 0.12 | 0.14 | 0.16
Rosenbrock  | —    | —    | —    | —    | —    | —    | —
Colville    | —    | —    | —    | —    | —    | —    | —
Sphere      | —    | —    | —    | —    | —    | —    | —
Sumsquares  | —    | —    | —    | —    | —    | —    | —
Booth       | —    | —    | —    | —    | —    | —    | —
Bridge      | —    | —    | —    | —    | —    | —    | —
Ackley      | —    | —    | —    | —    | —    | —    | —
Griewank    | —    | —    | —    | —    | —    | —    | —
(—: entry not recovered.)
Sensitivity analysis of the maximum number of repetitions in scouting behavior (
Functions   | Mean ± Std (SR/%) (the default of SR is 100%)
            | 6  | 8  | 10 | 12 | 14 | 16 | 18
Rosenbrock  | —  | —  | —  | —  | —  | —  | —
Colville    | —  | —  | —  | —  | —  | —  | —
Sphere      | —  | —  | —  | —  | —  | —  | —
Sumsquares  | —  | —  | —  | —  | —  | —  | —
Booth       | —  | —  | —  | —  | —  | —  | —
Bridge      | —  | —  | —  | —  | —  | —  | —
Ackley      | —  | —  | —  | —  | —  | —  | —
Griewank    | —  | —  | —  | —  | —  | —  | —
(—: entry not recovered.)
Sensitivity analysis of population renewing proportional coefficient (
Functions   | Mean ± Std (SR/%) (the default of SR is 100%)
            | 2  | 3  | 4  | 5  | 6  | 7  | 8
Rosenbrock  | —  | —  | —  | —  | —  | —  | —
Colville    | —  | —  | —  | —  | —  | —  | —
Sphere      | —  | —  | —  | —  | —  | —  | —
Sumsquares  | —  | —  | —  | —  | —  | —  | —
Booth       | —  | —  | —  | —  | —  | —  | —
Bridge      | —  | —  | —  | —  | —  | —  | —
Ackley      | —  | —  | —  | —  | —  | —  | —
Griewank    | —  | —  | —  | —  | —  | —  | —
(—: entry not recovered.)
Best suggestions for WPA parameters.
No. | WPA parameter name                            | Original | Best-suggested
1   | Step coefficient                              | 0.08     | 0.12
2   | Distance determinant coefficient              | 0.12     | 0.08
3   | Maximum number of repetitions in scouting     | 10       | 8
4   | Population renewal coefficient                | 5        | 2
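The best-suggested values from the table above can be collected into a single configuration; the dictionary keys are our own naming, not identifiers from the paper.

```python
# Best-suggested WPA parameter values from the sensitivity analysis;
# the key names are illustrative, chosen to match the table's descriptions.
WPA_PARAMS = {
    "step_coefficient": 0.12,         # originally 0.08
    "distance_determinant": 0.08,     # originally 0.12
    "max_scouting_repetitions": 8,    # originally 10
    "population_renewal": 2,          # originally 5
}
```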
Table
Tables
So we summarize the above findings in Table
In this section, we compared GA, PSO, AFSA, ABC, FA, and WPA algorithms on eight functions described in Table
Statistical results of 50 runs obtained by GA, PSO, AFSA, ABC, FA, and WPA algorithms.
Function (Dim, C)   | Algorithm | Best     | Worst    | Mean     | StdDev   | SR/% | Art/s
Rosenbrock (2, UN)  | GA        | —        | 0.0373   | 0.0091   | 0.0092   | 10   | >759.8323
                    | PSO       | —        | —        | —        | —        | —    | —
                    | AFSA      | —        | —        | —        | —        | —    | 2.0578
                    | ABC       | —        | 0.0099   | —        | 0.0015   | 0    | >391.0297
                    | FA        | —        | —        | —        | —        | —    | 33.1256
                    | WPA       | —        | —        | —        | —        | —    | 6.6333
Colville (4, UN)    | GA        | 0.0022   | 0.3343   | 0.1272   | 0.1062   | 0    | >1
                    | PSO       | —        | —        | —        | —        | 0    | >114.0869
                    | AFSA      | —        | —        | —        | —        | —    | 40.1807
                    | ABC       | 0.0103   | 0.5337   | 0.1871   | 0.1232   | 0    | >384.4193
                    | FA        | —        | —        | —        | —        | 8    | >3
                    | WPA       | —        | —        | —        | —        | —    | —
Sphere (200, US)    | GA        | —        | —        | —        | —        | 0    | >4
                    | PSO       | 1.0361   | 1.5520   | 1.2883   | 0.1206   | 0    | >271.9201
                    | AFSA      | —        | —        | —        | —        | 0    | >7
                    | ABC       | 0.0041   | 1.2521   | 0.0444   | 0.1773   | 0    | >442.9045
                    | FA        | 0.1432   | 0.2327   | 0.1865   | 0.0199   | 0    | >8
                    | WPA       | —        | —        | —        | —        | —    | —
Sumsquares (150, US)| GA        | —        | —        | —        | —        | 0    | >3
                    | PSO       | 39.7098  | 91.1145  | 55.9050  | 10.4165  | 0    | >232.5464
                    | AFSA      | —        | —        | —        | —        | 0    | >7
                    | ABC       | —        | 0.0017   | —        | —        | 0    | >435.1848
                    | FA        | 8.9920   | 99.8861  | 40.5721  | 19.2743  | 0    | >6
                    | WPA       | —        | —        | —        | —        | —    | —
Booth (2, MS)       | GA        | —        | —        | —        | 0        | —    | 1.2621
                    | PSO       | —        | —        | —        | —        | —    | —
                    | AFSA      | —        | —        | —        | —        | —    | 4.4329
                    | ABC       | —        | —        | —        | —        | —    | 0.4175
                    | FA        | —        | —        | —        | —        | —    | 37.9191
                    | WPA       | —        | —        | —        | —        | —    | 6.9339
Bridge (2, MN)      | GA        | —        | —        | —        | —        | —    | 0.1927
                    | PSO       | —        | —        | —        | —        | —    | —
                    | AFSA      | —        | 3.0047   | 3.0052   | —        | 12   | >8
                    | ABC       | —        | —        | —        | —        | —    | 0.0932
                    | FA        | —        | —        | —        | —        | —    | 22.7230
                    | WPA       | —        | —        | —        | —        | —    | 0.1742
Ackley (50, MN)     | GA        | 11.4570  | 12.6095  | 12.1612  | 0.2719   | 0    | >1
                    | PSO       | 0.0469   | 1.7401   | 0.6846   | 0.6344   | 0    | >192.5522
                    | AFSA      | 20.1600  | 20.6009  | 20.4229  | 0.1009   | 0    | >9
                    | ABC       | 20.0085  | 20.0025  | 20.0061  | 0.0014   | 0    | >596.3841
                    | FA        | 0.0101   | 0.0209   | 0.0160   | 0.0021   | 0    | >4
                    | WPA       | —        | —        | —        | —        | —    | —
Griewank (100, MN)  | GA        | 317.4525 | 399.6376 | 363.4174 | 17.2922  | 0    | >2
                    | PSO       | 0.0029   | 0.0082   | 0.0052   | 0.0011   | 0    | >367.0080
                    | AFSA      | —        | —        | —        | 109.6821 | 0    | >6
                    | ABC       | —        | 0.0043   | —        | —        | 2    | >620.9561
                    | FA        | 0.0068   | 0.0118   | 0.0091   | 0.0011   | 0    | >5
                    | WPA       | —        | —        | —        | —        | —    | —
(—: entry not recovered.)
As can clearly be seen from Table
Rosenbrock function (
As seen in Figure
The surface plot and contour lines of the Colville function are shown in Figure
Colville function (
Although the most accurate solution is obtained by AFSA, WPA outperforms the other algorithms in terms of the worst, mean, StdDev, SR, and Art on the Colville function.
Sphere and Sumsquares are convex, unimodal, and separable functions. They are both high-dimensional, with 200 and 150 parameters, respectively; the global minima are all 0 and the optimum solution is
Sphere function
Sumsquares function
As seen from Table
Booth is a multimodal and separable function. Its global minimum value is 0 and optimum solution is
Booth function
As shown in Figure
Bridge and Ackley are multimodal and nonseparable functions. The global maximum value of Bridge function is 3.0054 and optimum solution is
Bridge function
Ackley function
As seen in Figures
Moreover, the dimensionality and size of the search space are important issues in the problem [
Griewank function
WPA with optimized coefficients has good performance in highdimensional functions. Griewank function (
As is shown in Table
In the experiments, there are 8 functions with dimensions ranging from 2 to 200. WPA statistically outperforms GA on 6, PSO on 5, AFSA on 6, ABC on 6, and FA on 7 of these 8 functions. The six functions on which GA and ABC are unsuccessful are two unimodal nonseparable functions (Rosenbrock and Colville) and four high-dimensional functions (Sphere, Sumsquares, Ackley, and Griewank). PSO and FA are unsuccessful on one unimodal nonseparable function and four high-dimensional functions. However, WPA is not yet perfect for all functions; many problems remain to be solved for this new algorithm. From Table
It can be concluded that the efficiency of WPA becomes much clearer as the number of variables increases. WPA performs statistically better than the five other state-of-the-art algorithms on high-dimensional functions. Nowadays, high-dimensional problems have become a focus in the evolutionary computing domain, since many recent real-world problems (biocomputing, data mining, design, etc.) involve the optimization of a large number of variables [
Inspired by the intelligent behaviors of wolves, a new swarm intelligence optimization method, the wolf pack algorithm (WPA), is presented for locating the global optima of continuous unconstrained optimization problems. We verify the performance of WPA on a suite of benchmark functions with different characteristics and analyze the effect of distance measurements and parameters on WPA. Compared with PSO, AFSA, GA, ABC, and FA, WPA is observed to perform equally well or better, especially for high-dimensional functions such as Sphere
After all, WPA is a new attempt that achieves some success in global optimization and can provide new ideas for solving engineering and science optimization problems. In the future, different improvements can be made to the WPA algorithm, and tests can be run on more and different test functions. Meanwhile, practical applications in the areas of classification, parameter optimization, engineering process control, and controller design and optimization would also be worth further study.
The authors declare that there is no conflict of interests regarding the publication of this paper.