An Adaptive Shrinking Grid Search Chaotic Wolf Optimization Algorithm Using Standard Deviation Updating Amount

To improve the optimization quality, stability, and convergence speed of the wolf pack algorithm, an adaptive shrinking grid search chaotic wolf optimization algorithm using standard deviation updating amount (ASGS-CWOA) is proposed. First, a strategy of adaptive shrinking grid search (ASGS) is designed for the wolf pack algorithm to enhance its searching capability; through it, all wolves in the pack are allowed to compete for the position of leader wolf, which improves the probability of finding the global optimum. Furthermore, the opposite-middle raid method (OMR) is used in the wolf pack algorithm to accelerate its convergence. Finally, the "Standard Deviation Updating Amount" (SDUA) is adopted in the process of population regeneration, aimed at enhancing the biodiversity of the population. The experimental results indicate that, compared with the traditional genetic algorithm (GA), particle swarm optimization (PSO), the leading wolf pack algorithm (LWPS), and the chaos wolf optimization algorithm (CWOA), ASGS-CWOA has a faster convergence speed, better global search accuracy, and higher robustness under the same conditions.


Literature Review.
The metaheuristic search technology based on swarm intelligence has been increasing in popularity due to its ability to solve a variety of complex scientific and engineering problems [1]. The technology models the social behavior of certain living creatures, in which each individual is simple and has limited cognitive capability, but the swarm can act in a coordinated way without a coordinator or an external commander and yield intelligent behavior to obtain global optima as a whole [2]. In [3], Yang Cuicui et al. adopted bacterial foraging optimization to optimize the structural learning of Bayesian networks. In [4] and [5], swarm intelligence algorithms are used for functional module detection in protein-protein interaction networks to help biologists find novel biological insights. In [6], Ji et al. performed a systematic comparison of three typical methods based on ant colony optimization, the artificial bee colony algorithm, and bacterial foraging optimization regarding how to accurately and robustly learn a network structure for a complex system. In [7], the authors utilize the artificial immune algorithm to infer the effective connectivity between different brain regions. In [8], researchers used an ant colony optimization algorithm for learning brain effective connectivity networks from fMRI data. The particle swarm optimization (PSO) [9] algorithm was proposed through observation and study of the flocking behavior of bird groups. The ant colony optimization (ACO) [10] algorithm was proposed on the principle of simulating ants' social division of labor and cooperative foraging. The fish swarm algorithm (FSA) [11] was proposed to simulate the foraging and clustering behavior of fish schools. In [12], the bacterial foraging optimization (BFO) algorithm was proposed by mimicking the foraging behavior of Escherichia coli in the human intestine.
The shuffled frog leaping algorithm (SFLA) [13] was put forward by simulating the information-sharing and exchange mechanism of frog groups during foraging. In [14], Karaboga and Basturk proposed the artificial bee colony (ABC) algorithm based on the breeding and foraging behavior of honey bee colonies. But no algorithm is universal; these swarm intelligence optimization algorithms have their own shortcomings, such as slow convergence, a tendency to fall into local optima, or low accuracy. In [15], the authors proposed an improved ant colony optimization (ICMPACO) algorithm based on a multipopulation strategy, a coevolution mechanism, a pheromone updating strategy, and a pheromone diffusion mechanism in order to balance convergence speed and solution diversity and improve optimization performance on large-scale optimization problems. To overcome the weak local search ability of genetic algorithms (GA) [16] and the slow global convergence of the ant colony optimization (ACO) algorithm on complex optimization problems, the authors of [17] introduced the chaotic optimization method, a multipopulation collaborative strategy, and adaptive control parameters into the GA and ACO algorithms to propose a genetic and ant colony adaptive collaborative optimization (MGACACO) algorithm for solving complex optimization problems. On the one hand, the ant colony optimization (ACO) algorithm has the characteristics of positive feedback, essential parallelism, and global convergence, but it suffers from premature convergence and slow convergence speed; on the other hand, the coevolutionary algorithm (CEA) emphasizes the interaction among different subpopulations, but it is overly formal and lacks a strict, unified definition. Therefore, in [18], Huimin et al. proposed a new adaptive coevolutionary ant colony optimization (SCEACO) algorithm based on complementary advantages and a hybrid mechanism.
In 2007, Yang and his coauthors proposed the wolf swarm algorithm [19], a new swarm intelligence algorithm. The algorithm simulates the wolf predation process, mainly through three kinds of behavior, walking, raiding, and besieging, plus a survival-of-the-fittest population update mechanism, to solve complex nonlinear optimization problems. Since the wolf pack algorithm was proposed, it has been widely used in various fields and has been developed and improved continually, as follows. In [20], the authors proposed a novel and efficient oppositional wolf pack algorithm to estimate the parameters of the Lorenz chaotic system. In [21], the modified wolf pack search algorithm is applied to compute quasioptimal trajectories for rotor-wing UAVs in complex three-dimensional (3D) spaces. In [22], a wolf algorithm was used to find polynomial equation roots accurately and quickly. In [23], a new wolf pack algorithm was designed to obtain better performance, including a new update rule for the scout wolf, a new concept of siege radius, and a new kind of attack step. In [24], Qiang and Zhou presented a wolf colony search algorithm based on the leader's strategy.
In [25], to explore parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with a self-adaptive variable step size was proposed. In [26], Mirjalili et al. proposed the "grey wolf optimizer" based on the cooperative hunting behaviour of wolves, which can be regarded as a variant of the algorithm in [19]; in [27][28][29], the grey wolf optimizer is adopted to solve nonsmooth optimal power flow problems, optimal planning of renewable distributed generation in distribution systems, and optimal reactive power dispatch considering an SSSC (static synchronous series compensator).

Motivation and Incitement.
From the review of the above literature, current wolf pack optimization algorithms follow the principle that a certain number of scouting wolves approach the lead wolf through greedy search with a specially limited number of attempts (each wolf has only four opportunities in some of the literature), the principle that fierce wolves approach the lead wolf through a specially limited number of rushes in the summon-and-raid process, and the principle that fierce wolves greedily search for prey through a specially limited number of rushes in the siege process. Under this operation mechanism, the algorithm imitates the actual hunting behavior of wolves too closely, especially in grey wolf optimization, which divides the wolves in the algorithm into even more detailed levels. The advantage of doing so is that it can effectively guarantee the final convergence of the algorithm, because it completely mimics the biological foraging process. However, the goal of a wolf-inspired intelligent optimization algorithm is to solve optimization problems efficiently, and imitating wolves' foraging behavior is only a means. Therefore, such an algorithm should be more abstract on the basis of imitating wolves' foraging behavior in order to improve its optimization ability and efficiency. For the reasons above, each of the wolf pack algorithm variants mentioned above has its own limitations, including slow convergence, weak optimization accuracy, or a narrow scope of application.

Contribution and Paper Organization.
To further improve the wolf pack optimization algorithm, this paper proposes an adaptive shrinking grid search chaotic wolf optimization algorithm using standard deviation updating amount (ASGS-CWOA). The paper consists of five parts: Introduction; Principle of LWPS and CWOA (both classic variants of the wolf pack optimization algorithm); Improvement and Steps of ASGS-CWOA; Numerical Experiments; and the corresponding analyses. The leader strategy was introduced into the traditional wolf pack algorithm in [24]; unlike the simple simulation of wolves' hunting in [26], LWPS abstracts wolves' hunting activities into five steps. Following the idea of LWPS, the specific steps and related formulas are as follows:

Materials and Methods
(1) Initialization. This step disperses the wolves across the search (solution) space. Let N be the number of wolves and D the dimension of the search space; then the position of the i-th wolf is

X_i = (x_i1, x_i2, ..., x_iD), i = 1, 2, ..., N, (1)

where x_id is the location of the i-th wolf in the d-th dimension, N is the size of the wolf population, and D is the dimension of the solution space. The initial location of each wolf can be produced by

x_id = x_min + rand(0, 1) × (x_max − x_min), (2)

where rand(0, 1) is a random number distributed uniformly in the interval [0, 1] and x_max and x_min are the upper and lower limits of the solution space, respectively.
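The initialization step above can be sketched in a few lines. This is a minimal illustration of equation (2), not the paper's implementation; the helper name `init_pack` is an assumption.

```python
import numpy as np

# Sketch of the initialization step (equation (2)): each of N wolves gets a
# uniformly random position in the D-dimensional box [x_min, x_max].
# `init_pack` is an illustrative name, not from the paper.
def init_pack(N, D, x_min, x_max, rng=None):
    rng = rng or np.random.default_rng(0)
    # x_id = x_min + rand(0, 1) * (x_max - x_min), drawn per wolf and per dimension
    return x_min + rng.random((N, D)) * (x_max - x_min)

pack = init_pack(N=5, D=3, x_min=-10.0, x_max=10.0)
print(pack.shape)  # (5, 3)
```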
(2) Competition for the Leader Wolf. This step finds the leader wolf, i.e., the wolf with the best fitness. First, the top Q wolves of the pack according to fitness are chosen as candidates. Second, each of the Q wolves searches around itself in D directions, obtaining a new location by equation (3); if the fitness of the new location is better than the current fitness, the wolf moves there, otherwise it stays, and the search continues until the number of searches exceeds the maximum T_max or the location can no longer be improved. Finally, by comparing the fitness of the Q wolves, the best one is elected as the leader wolf. In equation (3), step_a is the search step size.
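The greedy move-on-improvement loop each candidate wolf performs can be sketched as follows. Since equation (3) is not reproduced in this excerpt, a random unit direction scaled by step_a stands in for the D-direction probe; `greedy_search` and the minimization convention (lower fitness is better) are assumptions.

```python
import numpy as np

# Hedged sketch of a candidate wolf's greedy local search: probe a nearby
# point, move only if fitness improves, stop after T_max probes.
def greedy_search(x, fitness, step_a, T_max, rng=None):
    rng = rng or np.random.default_rng(1)
    x = np.asarray(x, dtype=float)
    for _ in range(T_max):
        direction = rng.standard_normal(x.shape)
        direction /= np.linalg.norm(direction)     # unit direction (stand-in for eq. (3))
        candidate = x + step_a * direction
        if fitness(candidate) < fitness(x):        # move only on improvement
            x = candidate
    return x

sphere = lambda p: float(np.sum(np.square(p)))     # toy fitness function
better = greedy_search([3.0, 3.0], sphere, step_a=0.5, T_max=50)
print(sphere(better) <= sphere([3.0, 3.0]))  # True: greedy moves never worsen fitness
```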
(3) Summon and Raid. Each of the other wolves raids toward the location of the leader wolf as soon as it receives the leader's summon, and it continues to search for prey along the way. A new location is obtained by equation (4); if its fitness is better than that of the current location, the wolf moves to the new location, otherwise it stays, where x_ld is the location of the leader wolf in the d-th dimension and step_b is the raid step size.
(4) Siege the Prey [30]. After the above process, the wolves gather near the leader wolf and besiege the prey until it is captured. The new location of any wolf except the leader is obtained by equation (5), where step_c is the siege step size, r_m is a random number generated by rand(−1, 1), distributed uniformly in the interval [−1, 1], and R_0 is a preset siege threshold.
In the optimization problem, as the current solution gets closer and closer to the theoretical optimum, the siege step size of the wolf pack decreases with the number of iterations, so that the wolves have a greater probability of finding better values. The siege step size is given by equation (6), obtained from [31], where step_c_min is the lower limit of the siege step size, x_d-max and x_d-min are the upper and lower limits of the search space in the d-th dimension, step_c_max is the upper limit of the siege step size, and t is the current iteration number while T is its upper limit. No wolf is allowed to move out of the boundary, so equation (7) is used to deal with possible transgressions.

(5) Distribution of Food and Regeneration. According to the wolf pack renewal mechanism of "survival of the strong," the group is renewed: the worst m wolves die and are deleted, and m new wolves are generated by equation (2).
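The two mechanisms above, a shrinking siege step and a boundary clamp, can be sketched as follows. The exact decay law of equation (6) comes from [31] and is not reproduced here; linear decay is an illustrative assumption, and `siege_step`/`clamp` are assumed names.

```python
import numpy as np

# Illustrative siege step size that shrinks from step_c_max to step_c_min
# as iteration t approaches T (linear decay assumed; eq. (6) may differ).
def siege_step(t, T, step_c_min, step_c_max):
    return step_c_max - (step_c_max - step_c_min) * t / T

# Boundary handling in the spirit of equation (7): any coordinate that
# leaves the search box is pulled back to the nearest boundary.
def clamp(position, x_d_min, x_d_max):
    return np.clip(position, x_d_min, x_d_max)

print(siege_step(0, 100, 0.0, 2.0))   # 2.0 at the first iteration
print(siege_step(50, 100, 0.0, 2.0))  # 1.0 halfway through
print(clamp(np.array([12.0, -3.0]), -10.0, 10.0))  # coordinates pulled inside [-10, 10]
```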

CWOA.
CWOA develops from LWPS by introducing chaos optimization and adaptive parameters into the traditional wolf pack algorithm. The former uses the logistic map to generate chaotic variables that are projected onto the solution space for searching, while the latter introduces an adaptive step size to enhance performance; both work well. The logistic map, taken from [32], is

z_(k+1) = μ × z_k × (1 − z_k),

where μ is the control variable; when μ is 4, the system is in a chaotic state. The strategy of adaptive variable step size covers both the search step size and the siege step size. In the early stage, the wolves should search for prey with a large step size so as to cover the whole solution space as much as possible; in the later stage, as the wolves gather continually, they should take a small step size so that they can search finely in a small target area. As a result, the possibility of finding the global optimal solution increases. The migration step size is given by equation (9), and the siege step size by equation (10), where step_c_0 is the starting siege step size.
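The chaotic initialization idea can be sketched as follows: iterate the logistic map with μ = 4 and project each chaotic value onto the solution space. The function names `logistic_sequence` and `project` are illustrative.

```python
# Minimal sketch of CWOA's chaotic variable generation: the logistic map
# z_{k+1} = mu * z_k * (1 - z_k) with mu = 4 stays in (0, 1) and behaves
# chaotically; each value is then projected onto [x_min, x_max].
def logistic_sequence(z0, n, mu=4.0):
    z, seq = z0, []
    for _ in range(n):
        z = mu * z * (1.0 - z)
        seq.append(z)
    return seq

def project(z, x_min, x_max):
    # map a chaotic value in (0, 1) onto the solution space
    return x_min + z * (x_max - x_min)

seq = logistic_sequence(z0=0.3, n=5)
print(all(0.0 < z < 1.0 for z in seq))  # True
```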

Strategy of Adaptive Shrinking Grid Search (ASGS).
In the D-dimensional solution space, a searching wolf needs to migrate along different directions in order to find prey. In the original LWPS and CWOA, during any iteration of the migration only one dynamic point around the current location is generated, according to equation (3). However, a single dynamic point is isolated and not sufficient to search the current neighborhood of a wolf; Figures 1(a) and 1(b) show the two-dimensional and three-dimensional cases, respectively. In essence, the algorithm needs to consider the whole local neighborhood of the current wolf in order to find the local best location, so ASGS is used to generate an adaptive grid centered on the current wolf, extending along 2 × D directions and including (2 × K + 1)^D nodes, where K is the number of nodes taken along any direction. Figures 2(a) and 2(b) show the ASGS migration in two-dimensional and three-dimensional space, respectively; for brevity, K is set to 2 there, as detailed in equation (11). A node in the grid is then defined by equation (12). During any migration, the node with the best fitness in the grid is selected as the new location of the searching wolf, and after any migration, the leader wolf of the population is updated according to the new fitness. It should be particularly pointed out that the search grid becomes smaller and smaller as step_a_new decreases. Obviously, compared with the single isolated point of the traditional methods, the search accuracy and the probability of finding the optimal value are improved by an ASGS grid of (2 × K + 1)^D nodes.
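The grid geometry described above can be sketched directly: a grid centered on the current wolf with K nodes on each side of it along every axis contains (2K + 1)^D nodes. `asgs_grid` and the uniform spacing `step` are illustrative assumptions.

```python
import itertools
import numpy as np

# Sketch of an ASGS grid: offsets -K..K along each of the D axes, scaled by
# the current step size, centered on the wolf's position.
def asgs_grid(center, step, K):
    D = len(center)
    offsets = itertools.product(range(-K, K + 1), repeat=D)
    return [np.asarray(center, dtype=float) + step * np.asarray(o) for o in offsets]

nodes = asgs_grid(center=[0.0, 0.0], step=0.5, K=2)
print(len(nodes))  # (2*2 + 1)**2 = 25 nodes in two dimensions
```

As the text notes, the node count grows exponentially in D, which is where the memory limitation discussed later comes from.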
For the same reason, during the siege, the same strategy is used to find the leader wolf in the local neighborhood of (2 × K + 1)^D points. The difference is that the siege step size is smaller than the migration step size. After any siege, the leader wolf of the population is updated according to the new fitness, and the current best wolf temporarily becomes the leader wolf.

Strategy of Opposite-Middle Raid (OMR).
According to the idea of the traditional wolf pack algorithms, the raid pace is unfortunately too small or unreasonable, and the wolves cannot rapidly appear around the leader wolf when they receive its summon signal, as shown in Figure 3(a). OMR is put forward to solve this problem. Its main idea is that the opposite location of the current location relative to the leader wolf should first be calculated:

x_o_d = 2 × x_l_d − x_i_d. (13)

If the opposite location has better fitness than the current one, the current wolf moves to it. Otherwise, the middle location is used:

x_i−m_d = (x_i_d + x_l_d) / 2, (14)

where x_i−m_d is the middle location in the d-th dimension between the i-th wolf and the leader wolf, x_i_d is the location of wolf i in the d-th dimension, and x_l_d is the location of the leader wolf in the d-th dimension. "Bestfitness" returns the wolf with the best fitness among the given ones. From equation (14) and Figure 3(b), it can be seen that there are 2^D points formed from the current point and the middle point, and as the result of each raid, the point with the best fitness is chosen as the new location of the current i-th wolf. Thereby, not only can the wolves appear around the leader wolf as soon as possible, but they can also try to find new prey as far as possible.
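The raid step described above can be sketched as follows: reflect the wolf through the leader first; if that is no better, compare the 2^D candidates obtained by taking, in each dimension, either the current or the middle coordinate. `omr_step` is an assumed name, and minimization (lower fitness is better) is assumed.

```python
import itertools
import numpy as np

# Hedged sketch of the opposite-middle raid (equations (13) and (14)).
def omr_step(x_i, x_l, fitness):
    x_i = np.asarray(x_i, dtype=float)
    x_l = np.asarray(x_l, dtype=float)
    opposite = 2.0 * x_l - x_i                 # reflection through the leader, eq. (13)
    if fitness(opposite) < fitness(x_i):
        return opposite
    middle = (x_i + x_l) / 2.0                 # per-dimension middle locations, eq. (14)
    # all 2^D combinations: per dimension, keep the current or the middle coordinate
    candidates = [np.where(np.array(mask), middle, x_i)
                  for mask in itertools.product([False, True], repeat=len(x_i))]
    return min(candidates, key=fitness)        # "Bestfitness" over the candidates

# toy example: leader at the origin, fitness = squared distance to the origin
leader = [0.0, 0.0]
new_pos = omr_step([4.0, 4.0], leader, fitness=lambda p: float(np.sum(np.square(p))))
print(new_pos)  # the full midpoint [2., 2.] wins among the 2^D candidates
```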

Standard Deviation Updating Amount (SDUA).
According to the basic idea of the leader wolf pack algorithm, during the iterations some wolves with poorer fitness are continuously eliminated, while the same number of wolves is added to the population, so that bad genes are removed and the population diversity of the wolf pack is maintained; the algorithm is then less likely to fall into a local optimum and the convergence rate can be improved. However, the number of wolves eliminated and added to the pack is fixed (5 in the LWPS and CWOA mentioned before), which is rigid and unreasonable. In fact, this number should be dynamic, reflecting the changing state of the wolf pack in each iteration.
Standard deviation (SD) is a statistical concept that has been widely used in many fields. For example, in [33], standard deviation was used in industrial equipment to help process the signal of bubble detection in liquid sodium; in [34], standard deviation is combined with delay-multiply to form a new weighting factor introduced to enhance the contrast of reconstructed images in medicine; in [35], based on eight years of dental trauma research data, standard deviation was utilized to help analyze the potential of laser Doppler flowmetry; in ultrasound imaging, beam-forming performance has a large impact on image quality, so to improve image resolution and contrast, [36] proposed a new adaptive weighting factor called the signal mean-to-standard-deviation factor (SMSF) and, based on it, an adaptive beam-forming method for ultrasound imaging; in [37], standard deviation was adopted to help analyze when an individual should start social security.
In this paper, we adopt a concept named "Standard Deviation Updating Amount" to eliminate wolves with poor fitness while dynamically reflecting the state of the wolf pack: the population size and the standard deviation of the fitness values determine how many wolves are eliminated and regenerated. The standard deviation is obtained as

SD = sqrt((1 / N) × Σ_{i=1}^{N} (x_i − μ)^2), (15)

where N is the size of the wolf pack, x_i is the fitness of the i-th wolf, and μ is the mean fitness. Then, SDUA is obtained by the following rule: SDUA is zero when the iteration begins; next, the difference between the mean value and the SD of the pack's fitness is computed, and for each wolf whose fitness is less than this difference, SDUA increases by 1; otherwise, nothing is done. Once the value of SDUA is obtained, SDUA wolves are eliminated and regenerated. The effect of using a fixed value is displayed in Figure 4(a), and Figure 4(b) shows the effect of using SDUA. Clearly, the amount in the latter fluctuates because the wolf pack has different fitness in each iteration, and it reflects the dynamic situation of the iterations better than the former. Accordingly, not only are bad genes eliminated from the wolf pack, but the population diversity is also better preserved, so that the algorithm is less likely to fall into a local optimum and the convergence rate is improved as much as possible.
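The counting rule above can be sketched literally: compute the mean μ and standard deviation σ of the pack's fitness values and count the wolves whose fitness is below μ − σ. The function name `sdua` is an assumption.

```python
import numpy as np

# Literal sketch of SDUA as described in the text: the count of wolves whose
# fitness falls below (mean - standard deviation) is the number of wolves to
# eliminate and regenerate in this iteration.
def sdua(fitness_values):
    f = np.asarray(fitness_values, dtype=float)
    mu = f.mean()
    sigma = np.sqrt(np.mean((f - mu) ** 2))   # population SD, as in the formula above
    return int(np.sum(f < mu - sigma))

print(sdua([0.0, 10.0, 10.0, 10.0, 10.0]))  # 1
```

Note that the count naturally varies from iteration to iteration with the shape of the fitness distribution, which is exactly the fluctuation shown in Figure 4(b).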

Steps of ASGS-CWOA.
Based on the ideas of adaptive shrinking grid search and standard deviation updating amount described above, ASGS-CWOA is proposed. The implementation steps are as follows, and its flow chart is shown in Figure 5.

Initialization of Population.
The following parameters should be initialized: the population size N; the dimension D of the search space; for brevity, the range of every dimension, [range_min, range_max]; the upper limit of iterations T; the migration step size step_a; the summon-and-raid step size step_b; and the siege step size step_c. The population can be generated initially by the following equation [25]:

x_id = range_min + chaos(0, 1) × (range_max − range_min),

where chaos(0, 1) returns a chaotic variable distributed uniformly in [0, 1].

Migration.
Here, an adaptive grid centered on the current wolf is generated by equation (12), which includes (2 × K + 1)^D nodes. After migration, the wolf with the best fitness is designated the leader wolf.

Summon and Raid.
After migration, the other wolves run toward the location of the leader wolf, and during the raid, every wolf keeps searching for prey following equations (13) and (14).

Siege.
After summon and raid, all other wolves gather around the leader wolf in order to besiege the prey according to the siege equations. After any siege, the newest leader wolf, which temporarily has the best fitness, is obtained.

Regeneration.
After the siege, some wolves with poorer fitness are eliminated, and the same number of wolves is regenerated chaotically, as in the Initialization of Population step.

Loop.
Here, if t (the current iteration number) exceeds T (the upper limit of iterations), the loop exits; otherwise, the algorithm returns to the Migration step and continues until t exceeds T. When the loop is over, the position of the leader wolf is taken as the best solution, possibly the global optimum, that the algorithm has found.

Numerical Experiments.
In this paper, four state-of-the-art optimization algorithms are used to validate the performance of the proposed algorithm, detailed in Table 1. Table 2 shows the benchmark functions for testing, and Table 3 shows the numerical experimental data of the five algorithms on the 12 benchmark functions mentioned above. The numerical experiments were run on a computer equipped with the Ubuntu 16.04.4 operating system, an Intel(R) Core(TM) i7-5930K processor, and 64 GB of memory, using Matlab 2017a, an integrated development environment with the M programming language. For the genetic algorithm, the GA toolbox in Matlab 2017a was utilized; the PSO experiments were implemented with the "PSOt" toolbox for Matlab; LWPS follows the idea in [24]; CWOA is implemented based on the idea in [25]; and the new algorithm ASGS-CWOA is implemented in Matlab 2017a. To demonstrate the performance of the proposed algorithm, the optimization was run 100 times on every benchmark function for every algorithm mentioned above. Focusing first on Table 3, in terms of the best value, only the new algorithm finds the theoretical global optima of all benchmark functions, so ASGS-CWOA has better optimization accuracy.

Experiments on Twelve Classical Benchmark Functions.
Furthermore, over the 100 runs, the worst and average values of ASGS-CWOA reach the theoretical values on all benchmark functions except F2, "Easom." The new algorithm also has a better standard deviation of best values, detailed in Figure 6, from which it can be seen that nearly all of its standard deviations are the best except on F2 and F6; in particular, the standard deviations are zero on F1, F3, F4, F5, F7, F9, F10, F11, and F12. Figure 6(b) shows that the standard deviation of ASGS-CWOA on F2 is not the worst among the five algorithms, and it performs better than GA; Figure 6(f) indicates that the standard deviation of ASGS-CWOA on F6 is about 10^−11, weaker than the values of the others; Figure 6(h) demonstrates that the standard deviation of ASGS-CWOA on F8 is about 10^−30, which is not zero but the best among the five algorithms. Therefore, ASGS-CWOA has better stability than the others.
In addition, focusing on the mean number of iterations, ASGS-CWOA needs more iterations on test function 2 "Easom," test function 8 "Bridge," and test function 10 "Bohachevsky1," but it performs better on the other test functions; in particular, the iteration counts on test functions 3, 4, 5, 6, 11, and 12 are 1 or about 1. So ASGS-CWOA has an advantage in the number of iterations.

Figure 4: (a) the number of wolves starved to death and regenerated as the iterations go on under the fixed scheme (a constant 5); (b) the same number under the new method named SDUA.
Figure 5: flow chart of ASGS-CWOA (Step 1: initialize the parameters of the algorithm; Step 2: migration with ASGS; Step 3: summon-raid with OMR; Step 4: siege with ASGS; Step 5: population regeneration with SDUA; if the end condition is satisfied, output or record the head wolf position and fitness value, otherwise loop back).

Table 1 (excerpt): for the genetic algorithm [16], the crossover probability is set at 0.7, the mutation probability at 0.01, and the generation gap at 0.95.

In terms of running time, ASGS-CWOA performed well on F1, F5, F6, and F11, as shown in Figures 7(a), 7(e), 7(f), and 7(k), respectively. On F7, F9, and F10, the five algorithms spent times of roughly the same order of magnitude, and ASGS-CWOA performed better than GA, PSO, and LWPS on F7 and F9, as shown in Figures 7(g) and 7(i). Unfortunately, ASGS-CWOA spent the most time on F2, F3, F4, F8, and F12, yet it is a comfort that the times it spent are not unacceptably long. This phenomenon conforms to the philosophical truth that nothing in the world is perfect and flawless, and ASGS-CWOA is no exception. Accordingly, in general, ASGS-CWOA has a good speed of convergence; the details are shown in Figure 7.

Supplementary experiments were also conducted on the CEC2014 test functions listed in Table 4. Unlike the above 12 testing functions, the CEC2014 experiments were conducted on a computer with a Win7 32-bit system, Matlab 2014a, an AMD A6-3400M CPU, and 4.0 GB of RAM, because the given cec14_func.mexw32 matches a Windows 32-bit system rather than 64-bit Linux.
From Table 5 and Figure 8(a), it can be seen that the newly proposed algorithm performs better than GA and LWPS in terms of "optimal value," which means ASGS-CWOA is better at finding the global optima. From Figures 8(b)-8(d), it can be seen that the new algorithm has the best performance in terms of "worst value," "average value," and "standard deviation"; in other words, it has the best stability and robustness. From Figure 8(e), the proportion of ASGS-CWOA is better than those of PSO, LWPS, and CWOA, which means the proposed ASGS-CWOA has advantages in time performance.
In a word, ASGS-CWOA has good optimization quality, stability, iteration-count performance, and convergence speed.

Results and Discussion
Theoretical research and experimental results reveal that, compared with the traditional genetic algorithm, particle swarm optimization, the leading wolf pack algorithm, and the chaotic wolf optimization algorithm, ASGS-CWOA has better global optimization accuracy, faster convergence speed, and higher robustness under the same conditions.
In fact, the ASGS strategy greatly strengthens the local exploitation power of the original algorithm, making it easier for the algorithm to find the global optimum; the OMR and SDUA strategies effectively enhance the global exploration power, making the algorithm less likely to fall into a local optimum and thus more likely to find the global optimum.
Focusing on Tables 3 and 5 and Figures 6 and 7 above, compared with the four state-of-the-art algorithms, ASGS-CWOA is effective and efficient in most respects on the benchmark functions, but it performs more weakly in some respects on some of them; for example, on functions 2, 3, 4, 8, and 12, shown respectively in Figures 7(b)-7(d), 7(h), and 7(l), ASGS-CWOA spent more time on iterations than the four other algorithms. When D (the dimension of the solution space) is very large, too much computer memory is required to implement the algorithm exactly according to the formula, since the grid's node count grows exponentially with D, and it is impossible to meet that spatial demand; this is a reflection of its limitations. Moreover, in the supplementary experiments, ASGS-CWOA spent more time than the other algorithms, and its proportion is 0 in the optimal-time statistic over the 16 functions, detailed in Figure 8(f). Our future work is to continue improving the performance of ASGS-CWOA in all respects, to apply it to specific projects, and to test its performance.

Conclusions
To further improve the convergence speed and optimization accuracy under the same conditions, this paper proposes an adaptive shrinking grid search chaotic wolf optimization algorithm using standard deviation updating amount. First, ASGS is designed for the wolf pack algorithm to enhance its searching capability; through it, any wolf can become the leader wolf, which helps improve the probability of finding the global optimum. Moreover, OMR is used in the wolf pack algorithm to enhance the convergence speed. In addition, we adopt the concept of SDUA to eliminate some poorer wolves and regenerate the same number of wolves, so as to update the wolf population and keep its biodiversity.
Data Availability
The tasks in this paper are listed at https://pan.baidu.com/s/1Df_9D04DSKZI2xhUDg8wgQ (password: 3fie).

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.

Figure 8: (a) the statistical proportion of each algorithm in the number of best "optimal value" results on the 16 test functions; (b) the same for the best "worst value"; (c) the best "average value"; (d) the best "standard deviation"; (e) the best "average iteration"; (f) the best "average time."