The particle swarm optimization (PSO) algorithm suffers from two main disadvantages: it easily falls into local optima in high-dimensional spaces, and its convergence rate is low during the iterative process. To address these problems, an adaptive particle swarm optimization algorithm based on a directed weighted complex network (DWCNPSO) is proposed. The particles' positions are initialized with the topology of a small-world network, which scatters the particles uniformly over the search space. At the same time, an evolutionary mechanism of the directed dynamic network makes the particles evolve into a scale-free network whose in-degrees obey a power-law distribution. The proposed method not only improves the diversity of the algorithm but also prevents particles from falling into local optima. Simulation results indicate that the proposed algorithm effectively avoids premature convergence and converges faster than comparable algorithms.
The particle swarm optimization algorithm is a population-based stochastic optimization algorithm proposed by Kennedy and Eberhart in 1995 [
The PSO algorithm easily falls into local optima in high-dimensional spaces and converges slowly during the iterative process. A great number of investigations have been devoted to improving the PSO algorithm over the last decades. In [
The traditional neighborhood structure of PSO is a globally coupled network, in which there is a connection between every pair of nodes [
In this paper, each particle compares its position only with those of its neighbors, using a small-world network model to initialize the neighborhood structure of PSO. As a result, convergence is fast while the particles retain the ability to keep searching for the optimum (i.e., the diversity of particle solutions is preserved). Within acceptable error limits, the small-world neighborhood structure should yield a better optimization effect on functions with deep optima. When the PSO algorithm finally converges, the node degrees of the particles obey a power-law distribution [
By combining the characteristics of the scale-free and small-world network models, an adaptive particle swarm optimization algorithm based on a directed weighted complex network (DWCNPSO) is proposed. Simulation results show that the proposed algorithm performs well, especially in high-dimensional spaces: premature convergence is avoided, and the convergence rate in the late iterations is faster than that of other algorithms.
In the original PSO, the system is initialized with a set of random solutions; each particle is considered a potential solution (evaluated by its fitness) that moves through the search space following the optimal particle [
The original procedure for implementing PSO is as follows.
(1) Initialize the particles with random positions and velocities in the search space.
(2) For each particle, evaluate the fitness.
(3) Compare the fitness of each particle with that of its personal best position; if the current value is better, update the personal best.
(4) Compare the fitness of each particle with that of the global best position; if the current value is better, update the global best.
(5) Update the velocity and position of each particle according to the velocity and position update equations.
(6) If a criterion is met (usually a sufficiently good fitness or a maximum number of iterations), end the algorithm; otherwise, return to step (2).
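The steps above can be sketched in Python as follows. This is a minimal global-best PSO, not the paper's DWCNPSO; the inertia weight `w`, acceleration factors `c1` and `c2`, and bounds are illustrative values, not the paper's settings.

```python
import numpy as np

def pso(fitness, dim, n_particles=20, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize `fitness` with the original (global-best) PSO."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    # Step (1): random positions and velocities.
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = rng.uniform(-(hi - lo), hi - lo, (n_particles, dim)) * 0.1
    pbest = x.copy()                        # personal best positions
    pbest_val = np.apply_along_axis(fitness, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()  # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Step (5): velocity and position update equations.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        # Steps (2)-(4): evaluate and update personal/global bests.
        val = np.apply_along_axis(fitness, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best, best_val = pso(lambda z: float(np.sum(z * z)), dim=2)
```

On this 2-D Sphere function, the swarm should reach a near-zero fitness well within 100 iterations.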
In this section, we build the complex network model of the particle swarm to further optimize and improve the learning styles between particles.
Network model of particle swarm is described as follows.
A complex network
Weighted out-degree of node
Weighted in-degree of node
Adjacency matrix of the
Network edge weight matrix
Normalized network edge weights are as follows:
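Under the standard definitions of these quantities, the weighted out-degree of a node sums the weights of its outgoing edges, the weighted in-degree sums its incoming weights, and each edge weight can be normalized by the total outgoing weight of its source node. A minimal sketch, with a hypothetical 3-node weight matrix (the matrix values and variable names are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical 3-node directed weighted network: W[i, j] is the
# weight of the edge i -> j (0 means no edge).
W = np.array([[0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0],
              [4.0, 0.0, 0.0]])

A = (W > 0).astype(int)          # adjacency matrix of the network
out_deg = W.sum(axis=1)          # weighted out-degree: row sums
in_deg = W.sum(axis=0)           # weighted in-degree: column sums

# Normalized edge weights: each outgoing weight divided by the
# node's total outgoing weight, so every nonzero row sums to 1.
W_norm = W / np.where(out_deg > 0, out_deg, 1.0)[:, None]
```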
Considering a dynamic evolution process of the particles' network topology, in which small-world networks evolve into scale-free networks, the complex network model described above is introduced into the neighborhood structure of PSO. During iteration, the network topology of the particles changes until the in-degrees of the nodes obey a power-law distribution. When the particles converge to the optimum, the in-degrees of the nodes in the particle network exhibit power-law distribution characteristics.
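The initial small-world neighborhood can be generated with the Watts-Strogatz construction (a ring lattice with random rewiring); this is a sketch of that standard model, and the parameters `k` (neighbors per node) and `p` (rewiring probability) are illustrative assumptions, not the paper's settings.

```python
import random

def small_world_neighbors(n, k=4, p=0.1, seed=0):
    """Watts-Strogatz-style neighbor lists for n particles.

    Each particle starts connected to its k nearest ring neighbors;
    each edge is then rewired with probability p to a random node,
    keeping the neighborhood symmetric and free of self-loops."""
    rng = random.Random(seed)
    nbrs = {i: set((i + d) % n for d in range(-k // 2, k // 2 + 1) if d != 0)
            for i in range(n)}
    for i in range(n):
        for j in list(nbrs[i]):
            if rng.random() < p:
                new = rng.randrange(n)
                if new != i and new not in nbrs[i]:
                    # Remove edge (i, j) and add edge (i, new), both directions.
                    nbrs[i].discard(j)
                    nbrs[j].discard(i)
                    nbrs[i].add(new)
                    nbrs[new].add(i)
    return {i: sorted(s) for i, s in nbrs.items()}

neighbors = small_world_neighbors(20)
```

Each particle would then compare itself only with the particles listed in `neighbors[i]`, rather than with the whole swarm.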
The learning styles between particles follow three principles.
At this moment particles at a certain probability
The only way of mutual learning between particles is to learn from the optimal neighbor's position, including virtual neighbors. When a particle
The weighted in-degree (
The procedure for implementing the DWCNPSO is given by the following steps.
Set the parameters: the learning factors
Initialize the particle swarm positions and velocities within a certain range of values.
Initialize the network neighborhood of the particle swarm; calculate the fitness
Compare the fitness of each particle with its optimal value
If the in-degree of nodes in the network neighborhood obeys power-law distribution, all particles carry out optimization in accordance with (
If termination conditions are satisfied, the maximum weighted in-degree of particles in complex networks is greater than a certain threshold
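The termination test in this step compares the maximum weighted in-degree in the particle network against a threshold. A minimal sketch of such a check (the threshold value and the iteration-budget fallback are assumptions for illustration, not values from the paper):

```python
import numpy as np

def should_terminate(W, threshold=0.9, iteration=0, max_iter=100):
    """Stop when the largest weighted in-degree in the directed network
    exceeds `threshold` (i.e., most particles point at one dominant
    node) or when the iteration budget is exhausted.
    W[i, j] is the weight of the edge i -> j."""
    max_in_degree = W.sum(axis=0).max()  # weighted in-degrees are column sums
    return bool(max_in_degree > threshold or iteration >= max_iter)
```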
The simulation experiments are implemented in MATLAB R2010a on a computer running Windows XP with an Intel Core 2 CPU clocked at 2.10 GHz and 2 GB of memory.
Four important functions, two unimodal (containing only one optimum) and two multimodal (containing many local optima and one or more global optima), are used to test the efficiency of the proposed method. The four test functions are the Sphere function, the Rastrigin function, the Griewank function, and the Rosenbrock function.
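These four benchmarks have standard forms, implemented below in their usual dimension-independent definitions. The Sphere, Rastrigin, and Griewank functions have a global minimum of 0 at the origin, and the Rosenbrock function has a global minimum of 0 at (1, ..., 1).

```python
import numpy as np

def sphere(x):
    """Unimodal: sum of squares."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal: many regularly spaced local optima."""
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def griewank(x):
    """Multimodal: product of cosines over a quadratic bowl."""
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1)

def rosenbrock(x):
    """Unimodal banana-shaped valley; hard to traverse."""
    return float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))
```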
The parameters in the experiments are set as follows: the number of particles is set to 20 and the iteration number is 100. The acceleration factors
The experimental results are presented in Table
Simulation results obtained from three methods for four functions.
Function | PSO Mean | PSO Best | SFPSO Mean | SFPSO Best | DWCNPSO Mean | DWCNPSO Best
---|---|---|---|---|---|---
Sphere | 0.000175 | — | — | — | 0.00011 | —
Rastrigin | 0.687286 | 0.012409 | — | — | 0.178946 | —
Griewank | 0.001259 | — | — | — | 0.000263 | —
Rosenbrock | 0.093436 | 0.017859 | 0.035261 | — | 0.000167 | —
As shown in Table
The curves of the evolutionary optimization of three algorithms for four test functions are presented in Figure
The evolutionary curves of the three methods for the four test functions (one panel per function).
The evolutionary curves of the three methods for two functions: Rosenbrock and Rastrigin.
The original PSO is convergent, as proved in the literature [
The convergence region of parameters of PSO.
The maximum iteration is denoted as
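For reference, a commonly cited stability condition from standard PSO convergence analyses is given below; this is the textbook region for the inertia-weight PSO, not necessarily the exact region derived in the cited literature.

```latex
% Deterministic stability region for the inertia-weight PSO:
% the particle trajectory converges when
-1 < w < 1, \qquad 0 < c_1 + c_2 < 2\,(1 + w)
```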
A directed weighted complex network is built on the particle swarm, together with the introduction of a dynamic learning factor
In this paper, an adaptive particle swarm optimization algorithm based on a directed weighted complex network (DWCNPSO) is proposed. By introducing a complex network model and a dynamic learning factor
The authors declare that there is no conflict of interests regarding the publication of this paper.
This research is supported by the National Natural Science Foundation of China (no. 61263019), the Fundamental Research Funds for the Gansu Universities (no. 1114ZTC144), the Natural Science Foundation of Gansu Province (no. 1112RJZA029), and the Doctoral Foundation of LUT.