Particle Swarm Optimization Based on Local Attractors of Ordinary Differential Equation System

Particle swarm optimization (PSO) is inspired by sociological behavior. In this paper, we interpret PSO as a finite difference scheme for solving a system of stochastic ordinary differential equations (SODE). In this framework, the position points of the swarm converge to an equilibrium point of the SODE, and the local attractors, which are easily defined by the present position points, also converge to the global attractor. Inspired by this observation, we propose a class of modified PSO iteration methods (MPSO) based on local attractors of the SODE. The idea of MPSO is to choose the next update state near the present local attractor, rather than the present position point as in the original PSO, according to a given probability density function. In particular, the quantum-behaved particle swarm optimization method turns out to be a special case of MPSO obtained by taking a special probability density function. The MPSO methods with six different probability density functions are tested on a few benchmark problems and behave differently for different problems. Thus, our framework not only gives an interpretation of the ordinary PSO but also, more importantly, provides a warehouse of PSO-like methods to choose from when solving different practical problems.


Introduction
Inspired by sociological behavior associated with bird flocks, particle swarm optimization (PSO) was first introduced by Kennedy and Eberhart [1]. In a PSO, the individual particles of a swarm fly stochastically toward the positions of their own previous best performance and the best previous performance of the swarm. Researchers have been trying to devise new frameworks to interpret PSO in order to analyze its properties and to construct new PSO-like methods. For instance, in [2] PSO is interpreted as a difference scheme for a second-order ordinary differential equation. Fernández-Martínez et al. interpret the PSO algorithm as a stochastic damped mass-spring system, the so-called PSO continuous model, and present a theoretical analysis of PSO trajectories in [3]. Based on a continuous version of PSO, Fernández Martínez and García Gonzalo propose generalized PSO (GPSO) in [4] and introduce a delayed version of the PSO continuous model in [5]. Furthermore, Fernández-Martínez and García-Gonzalo give a stochastic stability analysis of PSO models in [6] and propose two novel algorithms, PP-GPSO and RR-GPSO, in [7]. PSO algorithms have been applied successfully to practical problems [8-10].
In [11], a so-called quantum-behaved particle swarm optimization (QPSO) is proposed, based on the assumption that the individual particles in a PSO system have quantum behavior. A wide range of continuous optimization problems have been solved successfully by QPSO, and many efficient strategies have been proposed to improve the algorithm [11-20]. A global convergence analysis of QPSO is given by Sun et al. in [21].
In this paper, we interpret PSO as a finite difference scheme for solving a system of stochastic ordinary differential equations (SODE in short). In this framework, the convergent point of the position points in the PSO iteration process corresponds to an equilibrium point (a global attractor) of the SODE. We observe that the local attractors, which are easily computed by using the present position points, also converge to the global attractor in the PSO iteration process. Inspired by this observation, we propose a class of modified PSO iteration methods (MPSO in short) based on local attractors of the SODE. The idea of MPSO is to choose the next update state in a neighbourhood of the present local attractor, rather than the present position point as in the original PSO, according to a given probability density function. We will test the MPSO methods with six different probability density functions on a few benchmark problems. These MPSO methods behave differently for different problems. Thus, our framework not only gives an interpretation of the ordinary PSO but also, more importantly, provides a warehouse of PSO-like methods to choose from when solving different practical problems.
Our work is partly inspired by the second-order ordinary differential equation framework for PSO in [2, 3]. However, the solution of a second-order ordinary differential equation is harder to describe, and it seems more difficult to construct new PSO-like methods within that framework. Our first-order ordinary differential equation framework makes the job easier.
Our work is also inspired by the quantum-behaved particle swarm optimization (QPSO) method in [11]. It turns out that QPSO is a special case of our MPSO obtained by choosing a special probability density function. Fortunately, the convergence analysis for QPSO given in [21] remains valid for our MPSO methods. In fact, what matters in the convergence analysis is a suitable choice of the probability density function; the particular quantum behavior plays no role.
The rest of the paper is organized as follows. In Section 2 we interpret PSO as a system of ODEs and propose a class of MPSO methods. Then, in Section 3 we test our MPSO methods with different probability density functions on some nonlinear benchmark functions. Finally, some conclusions are gathered in Section 4.

Particle Swarm Optimization Algorithms
2.1. The Original PSO. Particle swarm optimization (PSO) was first introduced by Kennedy and Eberhart [1], inspired by sociological behavior associated with bird flocks. In a PSO with population size $M$, the velocity vector $V_i$ and the position vector $X_i$ of each particle $i$ are iteratively adjusted to minimize an $\mathbb{R}^N \to \mathbb{R}$ objective function $f$ with $X$ as input value. At the $n$th iteration step, particle $i$ updates its velocity and position vectors by

$$V_{i,n+1} = \omega V_{i,n} + c_1 r_{1,n}\left(P_{i,n} - X_{i,n}\right) + c_2 r_{2,n}\left(P_{g,n} - X_{i,n}\right), \quad (1)$$

$$X_{i,n+1} = X_{i,n} + V_{i,n+1} \quad (2)$$

for $i = 1, 2, \dots, M$, where $\omega$ is the inertia weight, $c_1$ and $c_2$ are acceleration constants, $r_{1,n}, r_{2,n} \sim U(0,1)$ are random numbers, $P_{i,n}$ is the best previous position of particle $i$, and $P_{g,n}$ is the best previous position of the whole swarm. When the PSO system is convergent, all particles converge, as $n$ tends to infinity, to a global attractor $P^*$; that is, $\lim_{n\to\infty} X_{i,n} = P^*$, and their velocity vectors $V_{i,n}$ converge to zero.
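For concreteness, here is a minimal sketch of the update (1)-(2) in Python. The parameter values ($\omega = 0.729$, $c_1 = c_2 = 1.494$) and the absence of bounds handling are illustrative assumptions, not settings prescribed by the paper:

```python
import numpy as np

def pso(f, dim, pop, iters, lo, hi, w=0.729, c1=1.494, c2=1.494, seed=0):
    """Minimal PSO: velocities and positions follow the updates (1)-(2)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))   # positions
    V = np.zeros((pop, dim))              # velocities
    P = X.copy()                          # personal best positions
    fP = np.apply_along_axis(f, 1, P)     # personal best values
    g = P[fP.argmin()].copy()             # global best position
    for _ in range(iters):
        r1 = rng.random((pop, dim))
        r2 = rng.random((pop, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # eq. (1)
        X = X + V                                           # eq. (2)
        fX = np.apply_along_axis(f, 1, X)
        better = fX < fP
        P[better], fP[better] = X[better], fX[better]
        g = P[fP.argmin()].copy()
    return g, fP.min()

# Example: minimize the sphere function in 10 dimensions.
best_x, best_f = pso(lambda x: np.sum(x**2), dim=10, pop=20,
                     iters=1000, lo=-100.0, hi=100.0)
```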

2.2. Interpretation of PSO as a Finite Difference Scheme for an SODE
Let us rewrite the iteration formula (1) for PSO as a system of difference equations:

$$V_{i,n+1} - V_{i,n} = (\omega - 1)V_{i,n} + \phi_n\left(p_{i,n} - X_{i,n}\right),$$

$$X_{i,n+1} - X_{i,n} = V_{i,n+1} \quad (4)$$

for $i = 1, 2, \dots, M$, where

$$\phi_n = c_1 r_{1,n} + c_2 r_{2,n}, \qquad p_{i,n} = \frac{c_1 r_{1,n} P_{i,n} + c_2 r_{2,n} P_{g,n}}{c_1 r_{1,n} + c_2 r_{2,n}}.$$

Generally, but not necessarily, the constants $c_1$ and $c_2$ are set to be equal. We regard (4) as a finite difference scheme with time step length $\Delta t = 1$ for solving the following system of stochastic ordinary differential equations (SODE):

$$\tilde V_i'(t) = (\omega - 1)\tilde V_i(t) + \phi(t)\left(\tilde p_i(t) - \tilde X_i(t)\right), \qquad \tilde X_i'(t) = \tilde V_i(t), \quad (7)$$

where $\phi(t) > 0$ for all real numbers $t > 0$. Let $\tilde p(t) = (\tilde p_1(t), \tilde p_2(t), \dots, \tilde p_M(t))$, $\tilde V(t) = (\tilde V_1(t), \tilde V_2(t), \dots, \tilde V_M(t))$, $\tilde X(t) = (\tilde X_1(t), \tilde X_2(t), \dots, \tilde X_M(t))$, and $Y(t) = (\tilde V(t), \tilde X(t))$, where $\tilde p_i(t) = (\tilde p_{i1}(t), \tilde p_{i2}(t), \dots, \tilde p_{iN}(t))$, and so forth. Then the SODE (7) is written as $Y'(t) = F(Y(t))$, where $F$ is a nonlinear function. The theoretical analysis of this SODE and its difference schemes is difficult, but it is not our concern here. For our purpose, we recall that usually, or at least very often, the solution of a difference scheme for an ODE converges to an equilibrium point $Y^*$ satisfying $F(Y^*) = 0$ as the iteration time $n$ tends to infinity. Thus, the convergent point of the PSO iteration process corresponds to an equilibrium point of the SODE. Usually, for an equilibrium point $Y^* = (V^*, X^*)$ of the SODE (7) we have

$$\lim_{t\to\infty}\tilde V_i(t) = 0, \qquad \lim_{t\to\infty}\tilde X_i(t) = \lim_{t\to\infty}\tilde p_i(t) = P^* \quad (10)$$

for some constant vector $P^* \in \mathbb{R}^N$. Let us call $\tilde p(t)$ the local attractor and $P^*$ the global attractor for the SODE at time $t$.
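To make the quantities above concrete, here is a minimal sketch in Python of the local attractor $p_{i,n}$ as the random convex combination of each particle's personal best and the swarm's global best appearing in the difference form (4); the function name and parameter defaults are my own illustrative choices:

```python
import numpy as np

def local_attractors(P, g, c1=2.0, c2=2.0, rng=None):
    """Local attractor of each particle: the random convex combination
    p_i = (c1*r1*P_i + c2*r2*g) / (c1*r1 + c2*r2), componentwise, of its
    personal best P_i and the global best g, as in the rewrite (4)."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(P.shape)
    r2 = rng.random(P.shape)
    phi = c1 * r1 / (c1 * r1 + c2 * r2)
    return phi * P + (1.0 - phi) * g

# Monitoring the gap ||X - p|| over the iterations gives a direct check
# of the convergence X(t) -> p(t) -> P* described in (10):
# gap = np.linalg.norm(X - local_attractors(P, g))
```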

2.3. MPSO for Iteratively Decreasing $\|\tilde X(t) - \tilde p(t)\|$

Now, let us forget the original PSO and concentrate on the task of finding a global attractor (an equilibrium point) of the SODE (7).
Based on (10), we see that finding an equilibrium point of the ODE system (7) is equivalent to making the position vector $\tilde X(t)$ closer and closer to the local attractor $\tilde p(t)$. Thus, we choose the next update state in a neighbourhood of the present local attractor, rather than the present position point as in the original PSO, according to a given probability density function (PDF). Define

$$L_{i,n} = \alpha\left|X_{i,n} - p_{i,n}\right|, \quad (12)$$

where $\alpha$ is an adjustable parameter; six candidate PDFs $f_1, \dots, f_6$, each centered at the local attractor $p_{i,n}$ with width parameter $L_{i,n}$, are specified in Section 2.3.1.
Now, let us demonstrate how to choose a random number $X_{i,n+1}$ according to the probability density function $f(\cdot\,; p_{i,n}, L_{i,n})$. Choose a PDF $f(\cdot\,; p_{i,n}, L_{i,n})$. First, we obtain the corresponding probability distribution function

$$F(x) = \int_0^x f\left(s; p_{i,n}, L_{i,n}\right) ds.$$

Then, for a randomly given number $u_{i,n+1} \sim U(0,1)$, we solve the equation $F(x) = u_{i,n+1}$ to get $x = F^{-1}(u_{i,n+1})$, and the next position is taken near the local attractor, $X_{i,n+1} = p_{i,n} \pm F^{-1}(u_{i,n+1})$. Therefore, the update formulas corresponding to $f_1$ to $f_6$ are obtained by inverting the corresponding distribution functions; in each of them, $u \sim U(0,1)$, $r_1 \sim U(0,1)$, and $r_2 \sim U(0,1)$, and the superscripts and subscripts of $u$ and $L$ are omitted for brevity.
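As an illustration of this inverse-transform step, the following sketch samples the next position for the two-sided exponential PDF $f(x; p, L) = \frac{1}{L}e^{-2|x - p|/L}$, one concrete instance; the function name is hypothetical:

```python
import numpy as np

def sample_near_attractor(p, L, rng):
    """Inverse transform sampling for f(x; p, L) = (1/L) exp(-2|x - p|/L):
    solving F(x) = u gives |x - p| = (L/2) ln(1/u), so the new position
    is X = p +/- (L/2) ln(1/u) with u ~ U(0,1) and a random sign."""
    u = 1.0 - rng.random(p.shape)   # u in (0, 1], avoids log(0)
    sign = np.where(rng.random(p.shape) < 0.5, 1.0, -1.0)
    return p + sign * (L / 2.0) * np.log(1.0 / u)
```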

Algorithm 1: The MPSO procedure (the pseudocode begins by initializing the population position vector $X$).
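Below is a hedged end-to-end sketch of the MPSO loop summarized by Algorithm 1, reusing the two helper sketches above and the linearly decreasing schedule $\alpha = 1 \to 0.5$ used in the experiments; boundary handling is omitted, and all names and defaults are illustrative:

```python
import numpy as np

def mpso(f, dim, pop, iters, lo, hi, alpha0=1.0, alpha1=0.5, seed=0):
    """MPSO sketch: the next position is sampled near the local attractor
    (no velocity update, unlike the original PSO).  Assumes the helper
    sketches local_attractors and sample_near_attractor are in scope."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))   # initialize positions
    P = X.copy()                          # personal bests
    fP = np.apply_along_axis(f, 1, P)
    g = P[fP.argmin()].copy()             # global best
    for n in range(iters):
        alpha = alpha0 + (alpha1 - alpha0) * n / iters   # alpha: 1 -> 0.5
        p = local_attractors(P, g, rng=rng)
        L = alpha * np.abs(X - p)         # neighbourhood width, cf. (12)
        X = sample_near_attractor(p, L, rng)
        fX = np.apply_along_axis(f, 1, X)
        better = fX < fP
        P[better], fP[better] = X[better], fX[better]
        g = P[fP.argmin()].copy()
    return g, fP.min()
```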
Recall that in QPSO [11] the individual particles are assumed to have quantum behavior: the state of a particle is described by a wave function $\Psi$ satisfying the Schrödinger equation $i\hbar\,\partial\Psi/\partial t = \hat{H}\Psi$, where the Hamiltonian operator is $\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(x)$. Note that the Schrödinger equation is a second-order partial differential equation. $|\Psi|^2$ is chosen to be the PDF for the present position of the particle. In particular, $f_5$ and $f_6$ defined above are two such PDFs mentioned in [11]. Thus, QPSO can be regarded as a special case of MPSO. But the other four PDFs, $f_1$ to $f_4$, are not related to the Schrödinger equation.
Let us elaborate a little on QPSO. In [11], Sun et al. assume that, at the $n$th iteration, on the $j$th dimension ($1 \le j \le N$) of the search space, particle $i$ moves in a $\delta$ potential well centered at $p_{ij,n}$, which is the $j$th dimension coordinate of the local attractor $p_{i,n}$. Measuring the particle's position then leads to the update

$$X_{ij,n+1} = p_{ij,n} \pm \frac{L_{ij,n}}{2}\ln\frac{1}{u_{ij,n+1}}, \qquad u_{ij,n+1} \sim U(0,1). \quad (25)$$

Sun et al. gave a global convergence analysis of QPSO in [21], which employed certain properties of the PDF but had nothing to do with the particular Schrödinger equation. Actually, all six PDFs given in Section 2.3.1 possess these properties, and hence the global convergence analysis applies.
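To see concretely why (25) is an instance of the inverse-transform update of Section 2.3, here is a short derivation under the standard QPSO construction, in which the ground state of the $\delta$ potential well gives $|\Psi(x)|^2 = \frac{1}{L}e^{-2|x-p|/L}$. Setting the tail probability equal to a uniform random number,

$$u = e^{-2|x - p|/L} \;\Longrightarrow\; |x - p| = \frac{L}{2}\ln\frac{1}{u} \;\Longrightarrow\; X_{ij,n+1} = p_{ij,n} \pm \frac{L_{ij,n}}{2}\ln\frac{1}{u_{ij,n+1}},$$

which is exactly (25); that is, QPSO is MPSO with a PDF of this exponential type.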

Numerical Simulation
In this section, we use MPSO with the six different PDFs mentioned in Section 2 to test five nonlinear benchmark functions used in [11]. The first is the Sphere function, described by

$$F_1(x) = \sum_{i=1}^{n} x_i^2.$$

The second is the Rosenbrock function, described by

$$F_2(x) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right].$$

The third is the generalized Rastrigin function, described by

$$F_3(x) = \sum_{i=1}^{n}\left[x_i^2 - 10\cos\left(2\pi x_i\right) + 10\right].$$

The fourth is the generalized Griewank function, described by

$$F_4(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1.$$

The last is the Schaffer function, described by

$$F_5(x) = 0.5 + \frac{\sin^2\!\sqrt{x_1^2 + x_2^2} - 0.5}{\left[1 + 0.001\left(x_1^2 + x_2^2\right)\right]^2},$$

where $x = (x_1, x_2, \dots, x_n)$ is an $n$-dimensional real-valued vector. The initialization and search ranges of the functions used in [11] are listed in Table 1. As in [11], different population sizes (20, 40, and 80) are used for each function to investigate scalability. The maximum number of generations is set to 1000, 1500, and 2000, corresponding to the dimensions 10, 20, and 30, respectively, for the first four functions. The mean best fitness values and standard deviations over 50 runs of MPSO with the different PDFs on $F_1$ to $F_5$ are shown in Tables 2, 3, 4, 5, and 6. In the tables, $\alpha$ is the parameter in (12), and $\alpha = 1 \to 0.5$ means that $\alpha$ decreases linearly from 1 to 0.5. The boldface results are obtained by the QPSO algorithm of [11], which is equivalent to MPSO with the PDF $f_5$. An efficiency comparison of the MPSOs with different PDFs on $F_1$ to $F_5$ is presented in Table 7.
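For reference, here is a minimal sketch of the five benchmarks in their textbook forms (the paper's exact definitions are those of [11]):

```python
import numpy as np

def sphere(x):
    return np.sum(x**2)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

def rastrigin(x):
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def schaffer(x):  # two-dimensional
    s = x[0]**2 + x[1]**2
    return 0.5 + (np.sin(np.sqrt(s))**2 - 0.5) / (1.0 + 0.001 * s)**2
```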

Conclusion
The particle swarm optimization (PSO) algorithm is interpreted as a finite difference scheme for solving a system of stochastic ordinary differential equations (SODE in short). It is illustrated that the position points of the swarm and the local attractors, which are easily defined by the present position points, all converge to a global attractor of the SODE. A class of modified PSO iteration methods (MPSO in short) based on local attractors of the SODE is proposed such that the next update state is chosen near the present local attractor, rather than the present position point as in the original PSO, according to a given probability density function. In particular, the quantum-behaved particle swarm optimization method turns out to be a special case of MPSO obtained by taking a special probability density function. The MPSO methods with six different probability density functions are tested on a few benchmark problems and behave differently for different problems. Thus, our framework not only gives an interpretation of the ordinary PSO but also, more importantly, provides a warehouse of PSO-like methods to choose from when solving different practical problems.

Table 1 :
Asymmetric initialization and search ranges.

Table 2 :
Mean best fitness values and standard deviation for the sphere function.

Table 3 :
Mean best fitness values and standard deviation for the Rosenbrock function.

Table 4 :
Mean best fitness values and standard deviation for the generalized Rastrigin function.

Table 5 :
Mean best fitness values and standard deviation for the generalized Griewank function.

Table 6 :
Mean best fitness values and standard deviation for the Schaffer function.

Table 7 :
Descending order of MPSO efficiency with different PDFs for the benchmark functions.

Sphere: $f_2 > f_6 > f_5 > f_4 > f_1 > f_3$
Rosenbrock: $f_2 > f_3 > f_1 > f_5 > f_6 > f_4$
Generalized Rastrigin: $f_3 > f_4 > f_6 > f_1 > f_5 > f_2$
Generalized Griewank: $f_2 > f_5 > f_4 > f_3 > f_6 > f_1$
Schaffer: $f_1 > f_6 > f_4 > f_3 > f_5 > f_2$