Adaptive Parameters for a Modified Comprehensive Learning Particle Swarm Optimizer

Particle swarm optimization (PSO) is a stochastic optimization method that is sensitive to its parameter settings. This paper presents a modification of the comprehensive learning particle swarm optimizer (CLPSO), one of the best-performing PSO algorithms. The proposed method introduces a self-adaptive mechanism that dynamically adjusts the values of key parameters, including the inertia weight and the acceleration coefficient, based on the evolutionary information of individual particles and of the swarm during the search. Numerical experiments demonstrate that our approach with adaptive parameters provides a considerable improvement in performance on global optimization problems.


Introduction
The complexity of many real-world problems has made exact solution methods impractical within a reasonable amount of time, which gives rise to various types of non-exact metaheuristic approaches [1-3]. In particular, swarm intelligence methods, which simulate a population of simple individuals evolving their solutions by interacting with one another and with the environment, have shown promising performance on many difficult problems and have become a very active research area in recent years [4-11]. Among these methods, particle swarm optimization (PSO), initially proposed by Kennedy and Eberhart [4], is a population-based global optimization technique whose algorithmic mechanisms resemble the social behavior of bird flocking. The method lets a number of individual solutions, called particles, move through the solution space towards the most promising areas by stochastic search. Consider a D-dimensional optimization problem as follows:

    min f(x), x = (x_1, x_2, ..., x_D)^T
    s.t. x_{iL} ≤ x_i ≤ x_{iU}, 0 < i ≤ D. (1.1)

In the D-dimensional search space, each particle i of the swarm is associated with a position vector x_i = (x_{i1}, x_{i2}, ..., x_{iD}) and a velocity vector v_i = (v_{i1}, v_{i2}, ..., v_{iD}), which are iteratively adjusted by learning from the local best pbest_i found by the particle itself and the current global best gbest found by the whole swarm:

    v_{id} = v_{id} + c_1 r_1 (pbest_{id} − x_{id}) + c_2 r_2 (gbest_d − x_{id}), (1.2)
    x_{id} = x_{id} + v_{id}, (1.3)

where c_1 and c_2 are two acceleration constants reflecting the weighting of "cognitive" and "social" learning, respectively, and r_1 and r_2 are two distinct random numbers in (0, 1). It is recommended that c_1 = c_2 = 2, since on average this makes the weights of the cognitive and social parts both equal to 1.
To achieve a better balance between exploration (global search) and exploitation (local search), Shi and Eberhart [12] introduced a parameter named the inertia weight w to control velocity, yielding what is currently the most widely used form of the velocity update equation in PSO algorithms:

    v_{id} = w·v_{id} + c_1 r_1 (pbest_{id} − x_{id}) + c_2 r_2 (gbest_d − x_{id}). (1.4)

Empirical studies have shown that a large inertia weight facilitates exploration, a small inertia weight facilitates exploitation, and a linearly decreasing inertia weight can be effective in improving algorithm performance:

    w = w_max − (w_max − w_min)·t/t_max, (1.5)

where t is the current iteration number, t_max is the maximum number of allowable iterations, and w_max and w_min are the initial and final values of the inertia weight, respectively. It is suggested that w_max can be set to around 1.2 and w_min to around 0.9, which can result in good algorithm performance and remove the need for velocity limiting. PSO is conceptually simple and easy to implement, and has been proven effective on a wide range of optimization problems [13-20]. Furthermore, it can easily be parallelized by processing multiple particles concurrently while sharing the social information [21, 22]. Kwok et al. [23] present an empirical study on the effect of randomness on the control coefficients of PSO, and their results show that selective and uniformly distributed random coefficients perform better on complicated functions.
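As a concrete illustration, the standard inertia-weighted update rules discussed above can be sketched as follows. This is a minimal sketch, not the paper's algorithm: the function names, the optional box-constraint clamping, and the default parameter values are illustrative choices taken from the surrounding text.

```python
import random

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, bounds=None):
    """One velocity/position update for a single particle (Eqs. 1.2-1.4).

    x, v, pbest, gbest are lists of length D; w is the inertia weight;
    bounds, if given, is a list of (lower, upper) pairs per dimension.
    """
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()  # two distinct uniform draws
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        xd = x[d] + vd
        if bounds is not None:  # clamp to [x_L, x_U] if box constraints are given
            lo, hi = bounds[d]
            xd = min(max(xd, lo), hi)
        new_v.append(vd)
        new_x.append(xd)
    return new_x, new_v

def linear_inertia(t, t_max, w_max=1.2, w_min=0.9):
    """Linearly decreasing inertia weight (Eq. 1.5), using the w_max and
    w_min values suggested in the text."""
    return w_max - (w_max - w_min) * t / t_max
```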
In recent years, PSO has attracted a high level of interest, and a number of PSO variants (e.g., [24-32]) have been proposed to accelerate convergence and avoid local optima. In particular, Liang et al. developed the comprehensive learning particle swarm optimizer (CLPSO) [26], which uses all particles' historical best information, instead of pbest and gbest, to update a particle's velocity:

    v_{id} = w·v_{id} + c·r_d (pbest_{f_i(d), d} − x_{id}), (1.6)

where pbest_{f_i(d), d} can be the dth dimension of any particle's pbest (including pbest_i itself), and the particle index f_i(d) is selected based on a learning probability Pc_i. The authors suggest a tournament selection procedure that randomly chooses two particles and then selects the one with the better fitness as the exemplar to learn from for that dimension. Note that CLPSO has only one acceleration coefficient c, which is normally set to 1.494, and it limits the inertia weight to the range [0.4, 0.9].
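The comprehensive learning rule with tournament selection of exemplars can be sketched as below; this is an illustrative sketch of the per-dimension exemplar choice only, and the function and variable names are assumptions, not identifiers from the paper.

```python
import random

def choose_exemplars(i, pbests, fitness, Pc, rng=random):
    """For each dimension, pick which particle's pbest particle i learns from.

    pbests: list of pbest position vectors; fitness[j] = f(pbest_j)
    (minimization, so smaller is fitter). Pc: learning probability of
    particle i. Returns f_i as a list, with f_i[d] the exemplar index
    for dimension d.
    """
    p, D = len(pbests), len(pbests[0])
    exemplars = []
    for d in range(D):
        if rng.random() < Pc:
            # tournament: randomly choose two other particles, keep the
            # one whose pbest has the better (smaller) fitness
            a, b = rng.sample([j for j in range(p) if j != i], 2)
            exemplars.append(a if fitness[a] < fitness[b] else b)
        else:
            exemplars.append(i)  # learn from its own pbest on this dimension
    return exemplars
```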
According to empirical studies [29, 30, 33], CLPSO has been shown to be one of the best-performing PSO algorithms, especially for complex multimodal function optimization. In [34], a self-adaptation technique is introduced to adaptively adjust the learning probability, and historical information is used in the velocity update equation, which effectively improves the performance of CLPSO on unimodal problems.
Wu et al. [35] adapt the CLPSO algorithm by improving its search behavior to optimize the continuous externality for antenna design. In [36], Li and Tan present a hybrid strategy that combines CLPSO with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, defining a local diversity index to indicate whether the swarm has entered an optimality region with high probability. They apply the method to identify multiple local optima of the generalization bounds of support vector machine parameters and obtain satisfactory results. However, to the best of our knowledge, modifications of CLPSO based on an adaptive inertia weight and acceleration coefficient have not been reported.
In this paper, we propose an improved CLPSO algorithm, named CLPSO-AP, by introducing a new adaptive parameter strategy. The algorithm evaluates the evolutionary states of individual particles and of the whole swarm, based on which the values of the inertia weight and acceleration coefficient are dynamically adjusted to search more effectively. Numerical experiments on test functions show that our algorithm can significantly improve the performance of CLPSO.
In the rest of the paper, Section 2 presents our PSO method with adaptive parameters, Section 3 presents the computational experiments, and Section 4 concludes with a discussion.

Adaptive Inertia Weight and Acceleration Coefficient
To provide an adaptive parameter strategy, we first need to determine the situation of each particle at each iteration. In this paper, two concepts are used for this purpose. The first considers whether or not particle i improves its personal best solution at the tth iteration (in this paper we assume, without loss of generality, that the problem is to minimize the objective function):

    s_i^t = 1 if f(pbest_i^t) < f(pbest_i^{t−1}), and s_i^t = 0 otherwise. (2.1)

The second considers the particle's "rate of growth" from the (t−1)th iteration to the tth iteration:

    γ_i^t = (f(x_i^{t−1}) − f(x_i^t)) / ‖x_i^t − x_i^{t−1}‖, (2.2)

where ‖x_i^t − x_i^{t−1}‖ denotes the Euclidean distance between x_i^t and x_i^{t−1}:

    ‖x_i^t − x_i^{t−1}‖ = sqrt( Σ_{d=1}^{D} (x_{id}^t − x_{id}^{t−1})^2 ). (2.3)
Based on (2.1), we can calculate the percentage of particles that successfully improve their personal best solutions:

    ρ^t = (1/p) Σ_{i=1}^{p} s_i^t, (2.4)

where p is the number of particles in the swarm. This measure has been utilized in [33] and in some other evolutionary algorithms such as [37]. Generally, in PSO a high ρ indicates a high probability that the particles have converged to a non-optimum point or are moving slowly toward the optimum, while a low ρ indicates that the particles are oscillating around the optimum without much improvement. Considering the role of the inertia weight in the convergence behavior of PSO, in the former case the swarm should have a large inertia weight, and in the latter case a small one. Here we use the following nonlinear function to map the values of ρ to w:

    w^t = e^{ρ^t − 1}. (2.5)
It is easy to derive that w ranges from about 0.368 (= e^{−1}) to 1. The nonlinear and non-monotonic change of the inertia weight over iterations can improve the adaptivity and diversity of the swarm, because the search process of PSO is highly complicated on most problems.
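A minimal sketch of the success-percentage mapping (2.4)-(2.5), assuming the per-particle improvement flags from (2.1) have already been computed; the function name is illustrative.

```python
import math

def adaptive_inertia(improved):
    """Map the swarm's success percentage rho (Eq. 2.4) to the inertia
    weight w = exp(rho - 1) (Eq. 2.5).

    improved: list of 0/1 flags s_i^t, one per particle, set to 1 when the
    particle improved its personal best at the current iteration.
    """
    p = len(improved)
    rho = sum(improved) / p          # fraction of successful particles
    return math.exp(rho - 1.0)       # ranges from e^{-1} (~0.368) to 1
```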
Most PSO algorithms use constant acceleration coefficients in (1.4). It is worth noting, however, that Ratnaweera et al. [38] introduce a time-varying acceleration coefficient strategy in which the cognitive coefficient is linearly decreased and the social coefficient is linearly increased. The basic CLPSO also uses a constant acceleration coefficient in (1.6), where c reflects the weighting of the stochastic acceleration term that pulls each particle i towards the personal best position of particle f_i(d) on each dimension d. Considering the measure defined in (2.2), a large value of γ_i^t indicates that at iteration t, particle i moves rapidly through the search space and gains considerable improvement on the fitness function; it is thus reasonable to anticipate that the particle may also gain much improvement at least at the next iteration. On the contrary, a small value of γ_i^t indicates that particle i progresses slowly and thus needs a large acceleration towards its exemplar.
From the previous analysis, we suggest that the acceleration coefficient should be an increasing function of γ_i^t / Γ^t, where Γ^t is the square root of the sum of squares (SRSS) of the rates of growth of all the particles in the swarm:

    Γ^t = sqrt( Σ_{i=1}^{p} (γ_i^t)^2 ). (2.6)
Based on our empirical tests, we use the following function to map the values of γ i to c i : 2.7
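The rate of growth (2.2) and its SRSS aggregate (2.6) can be sketched as follows. The function names are illustrative, and the guard against a zero move distance is an added assumption, not part of the original definitions.

```python
import math

def growth_rate(f_prev, f_curr, x_prev, x_curr):
    """Rate of growth gamma_i^t (Eq. 2.2): fitness improvement per unit of
    Euclidean distance moved (minimization, so improvement = f_prev - f_curr)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_curr, x_prev)))
    return (f_prev - f_curr) / dist if dist > 0 else 0.0  # guard: assumption

def srss(gammas):
    """Gamma^t (Eq. 2.6): square root of the sum of squares of all
    particles' rates of growth, used to normalize each gamma_i^t."""
    return math.sqrt(sum(g * g for g in gammas))
```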

The Proposed Algorithm
Using the adaptive parameter strategy described in the previous section, the velocity update equation for the CLPSO-AP algorithm is

    v_{id} = w^t·v_{id} + c_i^t·r_d (pbest_{f_i(d), d} − x_{id}),

where w^t and c_i^t are calculated based on (2.5) and (2.7), respectively. Now we present the flow of the CLPSO-AP algorithm as follows.
(1) Generate a swarm of p particles with random positions and velocities within the search range. Let t = 0, and initialize w^t = 1 and each c_i^t = 1.
(2) Generate a learning probability Pc_i for each particle based on the equation suggested in [10].
(3) Evaluate the fitness of each particle, update its pbest, and then select the particle with the best fitness value as gbest.
(4) For each particle i in the swarm, do the following.
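As a sketch of step (2), the learning probabilities can be assigned as in Liang et al.'s original CLPSO, where Pc increases exponentially from 0.05 for the first particle to 0.5 for the last; the function name is illustrative, not from the paper.

```python
import math

def learning_probabilities(p):
    """Learning probability Pc_i for each of p particles (0-indexed),
    following the assignment used in the original CLPSO:
    Pc_i = 0.05 + 0.45 * (exp(10*i/(p-1)) - 1) / (exp(10) - 1)."""
    return [0.05 + 0.45 * (math.exp(10.0 * i / (p - 1)) - 1.0)
            / (math.exp(10.0) - 1.0)
            for i in range(p)]
```

With this assignment, particles later in the index order learn from other particles' pbests more often, which keeps part of the swarm exploitative and part explorative.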

Table 1 :
Detailed information of the test functions used in the paper.

Table 2 :
Parameter settings of the algorithms.

Table 3 :
The mean best values obtained by the algorithms for 10-D problems.

Table 4 :
The success rates obtained by the algorithms for 10-D problems.

Table 5 :
The mean best values obtained by the algorithms for 30-D problems.

Table 6 :
The success rates obtained by the algorithms for 30-D problems.