
Although particle swarm optimization (PSO) has been widely used to address complicated engineering problems, it still suffers from several shortcomings, e.g., premature convergence and low accuracy. Its final optimization result also depends on the selection of the control parameters; therefore, an improved convergence particle swarm optimization algorithm with random sampling of control parameters is proposed. For the proposed algorithm, a random sampling strategy for the control parameters is designed, which increases the flexibility of the parameter settings and simultaneously enhances the randomness of both the velocity and position updates. Based on the convergence analysis of PSO, the sampling range of the inertia weight is determined after the two acceleration coefficients have been sampled in their respective value intervals, which guarantees convergence at every evolution step of the algorithm. In addition, to make full use of the per-dimension information of better particles, a stochastic correction approach on each dimension of the population optimum is adopted. Experimental results demonstrate that, compared with basic particle swarm optimization and other variants, the proposed algorithm further improves the convergence rate while maintaining higher convergence accuracy.

PSO, proposed by Kennedy and Eberhart [

The inertia weight, which is a relatively important control parameter of PSO, was first introduced by Shi [ ]. A time-varying adjustment strategy for the acceleration coefficients c_1 and c_2 was put forward by Ratnaweera [

The convergence of PSO algorithms should be analyzed within the framework of random search algorithms [

In view of the problems mentioned above, this paper proposes an improved convergence particle swarm optimization algorithm with random sampling of control parameters (SC-PSO), and the main contribution of the present work is delineated as follows.

This paper is organized as follows. Section

While PSO is running, each particle is regarded as a feasible solution of the optimization problem in the search space, and the flight behavior of the particles can be treated as the search process of all individuals; the velocity of each particle is then dynamically updated according to the particle's historical optimal position and the optimal position of the swarm. Assume that the swarm consists of N particles, and let x_{i,d}(t) denote the d-th dimension variable of the i-th particle at the t-th iteration; v_{i,d}(t) denotes the corresponding velocity, p_{i,d}(t) the historical optimal position of the i-th particle, and p_{g,d}(t) the optimal position of the swarm. The velocity and position are updated by

v_{i,d}(t+1) = w·v_{i,d}(t) + c_1·r_1·(p_{i,d}(t) − x_{i,d}(t)) + c_2·r_2·(p_{g,d}(t) − x_{i,d}(t)),
x_{i,d}(t+1) = x_{i,d}(t) + v_{i,d}(t+1),

where w is the inertia weight, c_1 and c_2 denote the acceleration coefficients, and r_1 and r_2 are random numbers uniformly distributed in the interval [0, 1].
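The update rules described above can be written compactly in vectorized form. The following is a minimal NumPy sketch of one synchronous update step; the default values of w, c_1, and c_2 are common constriction-style constants used for illustration, not the paper's sampled parameters:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.494, c2=1.494, rng=None):
    """One synchronous update of the PSO velocity and position equations.

    x, v      : (N, D) arrays of particle positions and velocities
    pbest     : (N, D) array of historical optimal positions p_i
    gbest     : (D,)   array, the swarm's optimal position p_g
    w, c1, c2 : inertia weight and acceleration coefficients
    """
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(x.shape)  # r1, r2 ~ U[0, 1], drawn fresh per particle and dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new
```

Note that when a particle sits exactly at both its personal best and the swarm best with zero velocity, both attraction terms vanish and the particle stays put, which is consistent with the equations above.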

According to the specific optimization problem to be solved, an objective function is set, and the objective function value of each particle is its fitness value. The fitness value is used not only to evaluate the position of a particle but also to update the particle's historical optimal position and the swarm's optimal position.

The convergence of the particle trajectories is determined by the control parameters of the algorithm. To simplify the analysis without loss of generality, the case of a single particle with only one dimension is taken as an example. The basic evolution equations (
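The single-particle, single-dimension analysis referenced here follows the well-known deterministic treatment; as a sketch, treating the combined attraction coefficient φ = c_1·r_1 + c_2·r_2 as a constant and p as the fixed attractor:

```latex
% Deterministic single-particle dynamics with a fixed attractor p:
v(t+1) = w\,v(t) + \varphi\,\bigl(p - x(t)\bigr), \qquad
x(t+1) = x(t) + v(t+1).
% Eliminating v yields a second-order recurrence in x:
x(t+2) - (1 + w - \varphi)\,x(t+1) + w\,x(t) = \varphi\,p,
% whose characteristic equation is
\lambda^{2} - (1 + w - \varphi)\,\lambda + w = 0.
% The trajectory converges to p iff both roots satisfy |\lambda| < 1,
% which holds when
w < 1, \qquad \varphi > 0, \qquad 2w - \varphi + 2 > 0.
```

These root conditions are what tie the admissible inertia weight to the already-sampled acceleration coefficients in the strategy described below.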

For (

According to this formula, the value ranges of c_1 and c_2 that ensure convergence can be determined.

For basic PSO, the control parameters have a great impact on the performance of the algorithm. If they are assigned inappropriately, the particle trajectories cannot converge and may even become unstable, so the optimal solution of the optimization problem may never be found. At present, the control parameters are usually chosen based on engineers' experience or experiments, which is inflexible and greatly restricts the exploration ability of PSO.

A random sampling strategy is designed to improve the flexibility of the control parameters and to enhance the exploration ability of PSO, helping it jump out of local optima. On the basis of the conclusion from [ ], the acceleration coefficients c_1 and c_2 are each uniformly sampled in their corresponding value intervals, and the parameters
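A sketch of such a sampling scheme follows. The paper derives its own inertia-weight interval from its convergence analysis, which is not reproduced in this excerpt; the version below instead uses the commonly cited sufficient condition (c_1 + c_2)/2 − 1 < w < 1 from the deterministic analysis, with illustrative (not the paper's) sampling bounds for c_1 and c_2:

```python
import numpy as np

def sample_parameters(rng, c_low=1.0, c_high=2.0):
    """Sample the acceleration coefficients first, then draw the inertia
    weight from an interval that keeps the deterministic particle dynamics
    convergent: (c1 + c2)/2 - 1 < w < 1.
    The bounds c_low/c_high are illustrative, not the paper's values."""
    c1 = rng.uniform(c_low, c_high)
    c2 = rng.uniform(c_low, c_high)
    w_low = max(0.0, (c1 + c2) / 2.0 - 1.0)  # lower edge of the convergence interval
    w = rng.uniform(w_low, 1.0)
    return w, c1, c2
```

Because w is drawn only after c_1 and c_2 are fixed, every sampled triple satisfies the convergence condition, so each evolution step of the algorithm uses a convergent parameter set.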

Finally, the inertia weight is sampled in the interval computed above. However, to avoid the phenomena of "oscillation" and "two steps forward, one step back," the inertia weight is selected around the center of the convergence interval of w; for example, with c_1 = c_2 = 2, the corresponding parameter σ^2 can be computed as σ^2 = 2/3. The convergence interval of the inertia weight is

The relationship between

On the basis of the random sampling strategy, all particles in the swarm update their positions and velocities using formula (

The method for generating intermediate particles is as follows:

We choose the one with the best fitness value among the four intermediate particles. If it is better than the swarm's optimal position, the optimal position of the swarm is replaced by it; otherwise, the optimal position remains unchanged.
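Since the generation rule for the four intermediate particles is not spelled out in this excerpt, only the greedy acceptance step is sketched below; `candidates` stands in for however the intermediate particles are produced:

```python
import numpy as np

def greedy_gbest_update(gbest, candidates, f):
    """Greedy acceptance step: the swarm optimum is replaced only if the
    best of the intermediate particles has a better (smaller) fitness.
    `candidates` is an (M, D) array standing in for the intermediate
    particles; `f` is the fitness function (minimization assumed)."""
    fits = [f(c) for c in candidates]
    i = int(np.argmin(fits))
    return candidates[i].copy() if fits[i] < f(gbest) else gbest
```

The strict inequality means ties leave the swarm optimum untouched, matching the "otherwise, the optimal position remains unchanged" rule.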

The intermediate particles mentioned in the above section only correct the optimal position across all dimensions at once; however, positions are evaluated as whole vectors, and the historical optimal positions of particles that are worse than the swarm optimum may still carry useful information in some dimensions. Therefore, a stochastic correction approach on each dimension of the optimal position is used in this section.

According to the fitness-value ordering of the particles' historical optimal positions, we select the five best historical optimal positions, denoted as

Two dimensions, m and n, are randomly selected from the five better historical optimal positions.

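The per-dimension correction described above can be sketched as follows. The choice of five elite positions and two dimensions m, n follows the text; the greedy acceptance rule (keep the trial only if it improves the fitness) is an assumption of this sketch:

```python
import numpy as np

def dimension_correction(gbest, pbests, f, rng, n_select=5, n_dims=2):
    """Stochastic per-dimension correction of the swarm optimum.

    Take the n_select best personal-best positions, copy n_dims randomly
    chosen dimensions from a randomly chosen elite donor into a trial
    point, and keep the trial only if it improves the fitness
    (minimization). The acceptance rule is an assumed detail."""
    pbests = np.asarray(pbests, dtype=float)
    order = np.argsort([f(p) for p in pbests])      # rank personal bests by fitness
    elite = pbests[order[:n_select]]                # the five better positions
    donor = elite[rng.integers(len(elite))]
    dims = rng.choice(len(gbest), size=n_dims, replace=False)  # e.g. dims m, n
    trial = gbest.copy()
    trial[dims] = donor[dims]                       # splice donor dimensions in
    return trial if f(trial) < f(gbest) else gbest
```

Only the spliced dimensions change, so useful coordinates from otherwise-worse particles can still migrate into the swarm optimum.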

The whole process of SC-PSO algorithm is shown in Figure

Flow chart of SC-PSO.

To demonstrate the performance of the proposed algorithm SC-PSO, the basic PSO and three other improved versions of PSO are selected for comparison. For convenience, the basic PSO is denoted A1; A2 indicates DCW-PSO in [ ]; A3 (LDC-PSO) lets the acceleration coefficient c_1 increase linearly while the acceleration coefficient c_2 declines linearly. Algorithm A4 (DPSO) is an improved algorithm that uses an asynchronous learning variation strategy for the learning factors [

Benchmark functions.

function | function expression | search range | global optimum
---|---|---|---
Sphere | | | 0
Rosenbrock | | | 0
Griewank | | | 0
Ackley | | | 0
Schwefel | | | 0
Weierstrass | | | 0
Noncontinuous Rastrigin's | | | 0
Quartic | | | 0
Schwefel 2.22 | | | 0
Schaffer6 | | | 0
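For reference, two of the listed benchmarks in their standard forms; the exact constants and search ranges used in the experiments may differ from these textbook definitions:

```python
import numpy as np

def sphere(x):
    """Sphere: f(x) = sum(x_i^2); global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def ackley(x):
    """Ackley in the standard a=20, b=0.2, c=2*pi form; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    d = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d)
                 + 20.0 + np.e)
```

Both are minimization problems, which is why the tables report smaller mean values as better.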

The problem dimension of the benchmark functions is set to 30 and 50, respectively, and the sampling interval for both acceleration coefficients c_1 and c_2 is

Tables report the comparison results. For f_1, f_8, and f_9, the convergence accuracy of the proposed algorithm is obviously better than that of all the other algorithms. For f_2, f_3, and f_10, the convergence accuracy of the proposed algorithm is improved slightly. For the remaining test functions, the convergence accuracy of PSO, DCW-PSO, LDC-PSO, and DPSO is almost the same, and it is worse than that of SS-PSO and the proposed algorithm. Table

Comparison of the optimization results on benchmark functions (30D).

function | metric | PSO | DCW-PSO | LDC-PSO | DPSO | SS-PSO | SC-PSO
---|---|---|---|---|---|---|---
f_1 | Mean | 7.59E+03 | 9.11E-10 | 8.29E-12 | 4.63E-06 | 3.06E-67 | 4.13E-125
f_1 | Std | 9.98E+03 | 4.07E-09 | 2.96E-11 | 1.40E-05 | 1.23E-66 | 1.60E-124
f_1 | Rank | 6 | 4 | 3 | 5 | 2 | 1
f_2 | Mean | 1.05E+03 | 2.68E+01 | 3.31E+01 | 3.81E+01 | 2.98E+01 | 2.61E+01
f_2 | Std | 8.56E+02 | 1.82E+01 | 2.28E+01 | 2.39E+01 | 1.70E+01 | 6.17E+00
f_2 | Rank | 6 | 2 | 4 | 5 | 3 | 1
f_3 | Mean | 6.83E+01 | 2.54E-01 | 9.97E-02 | 3.16E-02 | 1.03E-02 | 1.04E-02
f_3 | Std | 8.03E+01 | 4.25E-01 | 1.26E-01 | 3.35E-02 | 1.44E-02 | 3.70E-02
f_3 | Rank | 6 | 5 | 4 | 3 | 2 | 1
f_4 | Mean | 1.54E+01 | 7.56E+00 | 4.57E+00 | 2.35E+00 | 4.87E-14 | 2.15E-14
f_4 | Std | 2.71E+00 | 2.38E+00 | 1.21E+00 | 3.00E+00 | 1.01E-14 | 7.24E-15
f_4 | Rank | 6 | 5 | 4 | 3 | 2 | 1
f_5 | Mean | 5.40E+03 | 4.85E+03 | 4.77E+03 | 3.40E+03 | 3.82E-04 | 3.82E-04
f_5 | Std | 6.16E+02 | 8.06E+02 | 8.83E+02 | 7.63E+02 | 7.05E-12 | 2.25E-12
f_5 | Rank | 6 | 5 | 4 | 3 | 1 | 1
f_6 | Mean | 2.36E+01 | 1.47E+01 | 6.38E+00 | 6.03E+00 | 3.20E-14 | 7.82E-15
f_6 | Std | 5.74E+00 | 1.78E+00 | 1.97E+00 | 3.84E+00 | 2.49E-14 | 6.06E-15
f_6 | Rank | 6 | 5 | 4 | 3 | 2 | 1
f_7 | Mean | 1.29E+02 | 9.26E+01 | 8.04E+01 | 2.91E+01 | 0.00E+00 | 0.00E+00
f_7 | Std | 4.19E+01 | 2.13E+01 | 3.13E+01 | 1.74E+01 | 0.00E+00 | 0.00E+00
f_7 | Rank | 6 | 4 | 5 | 3 | 1 | 1
f_8 | Mean | 4.10E+00 | 6.13E-60 | 7.21E-24 | 1.35E-08 | 4.19E-139 | 5.76E-253
f_8 | Std | 6.46E+00 | 2.15E-59 | 3.21E-23 | 5.58E-08 | 1.87E-138 | 0.00E+00
f_8 | Rank | 6 | 3 | 4 | 5 | 2 | 1
f_9 | Mean | 3.19E+01 | 2.05E+00 | 7.55E-03 | 6.22E+01 | 1.22E-34 | 1.22E-64
f_9 | Std | 1.73E+01 | 1.71E+00 | 1.42E-02 | 2.35E+01 | 2.05E-34 | 9.86E-65
f_9 | Rank | 6 | 4 | 3 | 5 | 2 | 1
f_10 | Mean | 8.58E+00 | 1.02E+01 | 9.31E+00 | 1.03E+01 | 3.13E+00 | 1.49E+00
f_10 | Std | 1.39E+00 | 7.68E-01 | 9.44E-01 | 6.89E-01 | 8.29E-01 | 8.05E-01
f_10 | Rank | 3 | 5 | 4 | 6 | 2 | 1
Average rank | | 5.7 | 4.2 | 3.9 | 4.1 | 1.9 | 1
Total rank | | 6 | 5 | 3 | 4 | 2 | 1

Comparison of the optimization results on benchmark functions (50D).

function | metric | PSO | DCW-PSO | LDC-PSO | DPSO | SS-PSO | SC-PSO
---|---|---|---|---|---|---|---
f_1 | Mean | 4.64E+04 | 8.63E-02 | 1.12E-03 | 2.46E-01 | 5.26E-59 | 8.06E-87
f_1 | Std | 1.45E+04 | 9.92E-02 | 8.26E-04 | 2.96E-01 | 6.84E-59 | 2.85E-87
f_1 | Rank | 6 | 4 | 3 | 5 | 2 | 1
f_2 | Mean | 4.06E+03 | 8.15E+01 | 7.75E+01 | 1.23E+02 | 4.91E+01 | 4.89E+01
f_2 | Std | 1.60E+03 | 3.12E+01 | 4.12E+01 | 5.55E+01 | 1.74E+01 | 6.04E+00
f_2 | Rank | 6 | 4 | 3 | 5 | 2 | 1
f_3 | Mean | 3.70E+02 | 1.59E+00 | 4.66E-01 | 4.00E-01 | 5.66E-03 | 4.93E-03
f_3 | Std | 1.33E+02 | 2.26E+00 | 6.53E-01 | 6.26E-01 | 9.00E-03 | 9.59E-03
f_3 | Rank | 6 | 5 | 4 | 3 | 2 | 1
f_4 | Mean | 1.88E+01 | 1.33E+01 | 8.68E+00 | 8.32E+00 | 8.76E-14 | 3.69E-14
f_4 | Std | 1.21E+00 | 1.78E+00 | 1.47E+00 | 3.79E+00 | 1.44E-14 | 7.42E-15
f_4 | Rank | 6 | 5 | 4 | 3 | 2 | 1
f_5 | Mean | 1.10E+04 | 8.81E+03 | 1.05E+04 | 5.25E+03 | 6.36E-04 | 6.36E-04
f_5 | Std | 8.08E+02 | 6.17E+02 | 1.28E+03 | 9.32E+02 | 1.37E-11 | 1.47E-09
f_5 | Rank | 6 | 4 | 5 | 3 | 1 | 1
f_6 | Mean | 5.99E+01 | 3.51E+01 | 2.15E+01 | 1.30E+01 | 9.59E-14 | 2.84E-14
f_6 | Std | 7.96E+00 | 3.95E+00 | 3.09E+00 | 4.60E+00 | 6.04E-14 | 1.03E-14
f_6 | Rank | 6 | 5 | 4 | 3 | 2 | 1
f_7 | Mean | 3.37E+02 | 2.11E+02 | 1.70E+02 | 1.06E+02 | 0.00E+00 | 0.00E+00
f_7 | Std | 6.89E+01 | 3.62E+01 | 6.86E+01 | 3.42E+01 | 0.00E+00 | 0.00E+00
f_7 | Rank | 6 | 4 | 5 | 3 | 1 | 1
f_8 | Mean | 8.37E+01 | 3.49E-18 | 3.14E-04 | 4.33E-05 | 1.37E-122 | 7.49E-175
f_8 | Std | 4.00E+01 | 7.07E-18 | 1.32E-03 | 1.14E-04 | 4.24E-122 | 0.00E+00
f_8 | Rank | 6 | 3 | 5 | 4 | 2 | 1
f_9 | Mean | 1.11E+02 | 2.23E+01 | 1.19E+00 | 1.56E+12 | 6.71E-30 | 1.70E-44
f_9 | Std | 1.97E+01 | 1.08E+01 | 1.23E+00 | 8.44E+12 | 8.51E-30 | 1.58E-44
f_9 | Rank | 6 | 4 | 3 | 5 | 2 | 1
f_10 | Mean | 1.62E+01 | 1.91E+01 | 1.65E+01 | 1.93E+01 | 5.11E+00 | 3.44E+00
f_10 | Std | 1.33E+00 | 1.02E+00 | 2.00E+00 | 5.87E-01 | 1.02E+00 | 1.07E+00
f_10 | Rank | 3 | 5 | 4 | 6 | 2 | 1
Average rank | | 5.7 | 4.3 | 4.0 | 4.0 | 1.8 | 1
Total rank | | 6 | 5 | 3 | 3 | 2 | 1

To intuitively investigate the convergence performance of SC-PSO, the convergence curves of the 6 algorithms (PSO, DCW-PSO, LDC-PSO, DPSO, SS-PSO, and SC-PSO) on the 10 selected functions are plotted. From Figures

Convergence curve of function f_1.

Convergence curve of function f_2.

Convergence curve of function f_3.

Convergence curve of function f_4.

Convergence curve of function f_5.

Convergence curve of function f_6.

Convergence curve of function f_7.

Convergence curve of function f_8.

Convergence curve of function f_9.

Convergence curve of function f_10.

In summary, compared with basic PSO and its modified versions, the SC-PSO proposed in this paper significantly improves the convergence rate while still guaranteeing higher convergence accuracy.

From Figure

Average runtime comparison of the 6 algorithms (Sphere 30D).

This paper proposes an improved convergence particle swarm optimization algorithm with random sampling of control parameters. The random sampling strategy is designed to improve the flexibility of the control-parameter settings, and the corresponding update randomness also helps the algorithm jump out of local optima. To avoid premature convergence caused by the strong randomness of the sampling strategy, an intermediate-particle updating strategy is devised. In addition, a stochastic correction approach on each dimension of the optimal position is used to take advantage of useful information from other particles. The experimental results show that the proposed algorithm not only achieves high accuracy but also significantly improves the convergence rate. In future work, we will consider applying it to node localization in wireless sensor networks.

The data used to support the findings of this study are available from the corresponding author upon request.

The authors declare no conflicts of interest.

This work was supported in part by the National Natural Science Foundation of China (U1604151; 61803146), Outstanding Talent Project of Science and Technology Innovation in Henan Province (174200510008), Program for Scientific and Technological Innovation Team in Universities of Henan Province (16IRTSTHN029), Science and Technology Project of Henan Province (182102210094), Natural Science Project of Education Department of Henan Province (18A510001), and the Fundamental Research Funds of Henan University of Technology (2015QNJH13, 2016XTCX06).