A new local search technique is proposed and used to improve the performance of particle swarm optimization (PSO) algorithms by addressing the problem of premature convergence. In the proposed technique, a potential particle position in the solution search space is collectively constructed by a number of randomly selected particles in the swarm. The number of selections equals the dimension of the optimization problem, and each selected particle donates the value at a randomly selected dimension of its personal best. After the potential particle position is constructed, a local search is performed around its neighbourhood and the outcome is compared with the current swarm global best position; the global best is replaced only if the constructed position is found to be better. Using well-studied benchmark problems of low and high dimensions, numerical simulations were used to validate the performance of the improved algorithms. Comparisons were made with four different PSO variants, two of which implement different local search techniques while the other two do not. Results show that the improved algorithms obtain better-quality solutions and demonstrate better convergence velocity and precision, stability, robustness, and global-local search ability than the competing variants.
Optimization comes into focus whenever there is a need to plan, make decisions, operate and control systems, design models, or seek optimal solutions to the variety of problems people face from day to day. Many of these problems can be formulated as continuous optimization problems and must often be approached with limited resources. Dealing with such problems, especially when they are large scale and complex, has motivated the development of different nature-inspired optimization algorithms. These algorithms give researchers problem-solving capabilities for complex and challenging optimization problems, with many success stories. Swarm-based techniques are a family of nature-inspired, population-based algorithms; they are also known as evolutionary computation techniques. Particle swarm optimization (PSO) is a member of the swarm-based techniques and is capable of producing low-cost, fast, and robust solutions to several complex optimization problems. It is a stochastic, self-adaptive, and problem-independent optimization technique, originally proposed in 1995 by Eberhart and Kennedy as a simulation of a flock of birds or of the sociological behavior of a group of people [
The PSO technique was initially implemented in a few lines of code using basic mathematical operations, with no major adjustment needed to adapt it to new problems, and it was almost independent of the initialization of the swarm [
When the positions and velocities of the particles in the swarm are updated, they can grow in value beyond what is useful. As a safeguard, the positions are clamped in each dimension to the search range
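This clamping can be sketched as follows (an illustrative helper, not the paper's exact implementation):

```python
def clamp(position, velocity, x_min, x_max, v_max):
    """Keep each velocity component within [-v_max, v_max] and each
    position component within the search range [x_min, x_max]."""
    velocity = [max(-v_max, min(v_max, v)) for v in velocity]
    position = [max(x_min, min(x_max, p)) for p in position]
    return position, velocity
```

Clamping the velocity as well as the position prevents a single large update from throwing a particle far outside the feasible region.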
A major feature that characterizes an efficient optimization algorithm is the ability to strike a balance between local and global search. Global search means the particles can move from one solution to other parts of the search space and locate other promising candidates, while local search means a particle can exploit the neighbourhood of the present solution for other promising candidates. In PSO, as the rate of information sharing among the particles increases, they migrate towards the same direction and region of the search space. If none of the particles can locate a better global solution after some time, they eventually converge about the existing one, which, for lack of exploration power, may not be the global minimum; this is known as premature convergence. This behaviour is more likely when the swarm of particles is overconcentrated. It can also occur when the optimization problem is high dimensional and/or nonconvex. One possible way to prevent premature convergence is to embed a local search technique into the PSO algorithm to help improve the quality of each solution by searching its neighbourhood. After the improvement, better information is communicated among the particles, thereby increasing the algorithm's ability to locate a better global solution in the course of optimization. Hill climbing, modified Hooke and Jeeves, gradient descent, golden ratio, stochastic local search, adaptive local search, local interpolation, simulated annealing, and chaotic local search are different local search techniques that have been combined with PSO to improve its local search ability [
In this paper, a different local search technique is proposed to harness the global search ability of PSO and improve its local search efforts. The technique is based on the collective efforts of particles randomly selected (with replacement) a number of times equal to the problem dimension. When a particle is selected, it contributes the value at a randomly selected dimension of its personal best. The contributed values are then used to form a potential global best solution, which is further refined. This concept could offer PSO enhanced performance in terms of convergence speed, local search ability, robustness, and solution accuracy. The local search technique was hybridized with two existing PSO variants, namely, random inertia weight PSO (RIW-PSO) and linear decreasing inertia weight PSO (LDIW-PSO), to form two new variants. Numerical simulations were performed to validate the efficiency of each of them, and statistical analyses were performed to ascertain any statistically significant difference in performance between the proposed variants and the old ones. The results obtained show that the proposed variants are very efficient.
In the sections that follow, RIW-PSO and LDIW-PSO are briefly described in Section
Two PSO variants were used to validate the proposed improvement of the performance of PSO technique. The variants are LDIW-PSO and RIW-PSO. These were chosen because of the evidence available in the literature that they are less efficient in optimizing many continuous optimization problems [
This variant was proposed in [
Due to the improved performance of PSO when the constant inertia weight was introduced into it [
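The two inertia-weight schedules used by these variants can be sketched as follows. The settings w_max = 0.9 and w_min = 0.4 are the values commonly reported in the literature, and the random inertia weight w = 0.5 + rand()/2, uniformly distributed in [0.5, 1.0), follows Eberhart and Shi's proposal; treat both as illustrative defaults rather than this paper's exact configuration:

```python
import random

W_MAX, W_MIN = 0.9, 0.4  # commonly used settings from the literature

def ldiw(t, t_max, w_max=W_MAX, w_min=W_MIN):
    # Linear decreasing inertia weight: starts at w_max, ends at w_min.
    return w_min + (w_max - w_min) * (t_max - t) / t_max

def riw(rng=random):
    # Random inertia weight: uniformly distributed in [0.5, 1.0).
    return 0.5 + rng.random() / 2.0
```

LDIW favours global search early (large w) and local search late (small w), while RIW keeps the weight random throughout the run.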
The basic principle underlying the optimizing strategy of the PSO technique is that each particle in the swarm communicates its discoveries to its neighbours, and the particle with the best discovery attracts the others. While this strategy looks very promising, the particles risk premature convergence, especially when the problem being optimized is multimodal and high dimensional. The reason is that the more the particles share their discoveries among themselves, the more identical their behaviour becomes, until they converge to the same area of the solution search space. If none of the particles can discover a better global best, after some time all the particles will converge about the existing global best, which may not be the global minimizer.
One motivation for this local search technique is the challenge of premature convergence associated with the PSO technique, which affects its reliability and efficiency. Another motivation is the decision-making strategy the swarm uses when searching for an optimal solution: the decision is dictated by a single particle, that is, the other particles follow the best particle among them in searching for a better solution. Involving more than one particle in the decision making could lead the swarm to a promising region of the search space where an optimal solution could be obtained.
The description of the local search technique is as follows: after all the particles have obtained their various personal best positions, each particle has an equal chance of being selected to contribute towards constructing a potential location in the search space where a better global best could be obtained. A number of particles equal to the dimension of the problem being optimized are randomly selected (with replacement). Each selected particle contributes by donating the value at a randomly selected dimension of its personal best. All the contributed values are collectively used (hybridized) to construct a potential solution in the solution search space. After the potential solution is constructed, some searching is done locally around its neighbourhood with the hope of locating a solution better than the current global solution. If a better solution is found, it replaces the current global solution; otherwise no replacement is made.
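The construction step might look like this (an illustrative sketch of the selection-and-donation rule described above, not the paper's code):

```python
import random

def construct_candidate(personal_bests, rng=random):
    """Build a potential position: for each of the D dimensions, pick a
    particle at random (with replacement) and copy the value found at one
    of that particle's randomly chosen personal-best dimensions."""
    dim = len(personal_bests[0])
    candidate = []
    for _ in range(dim):
        donor = rng.choice(personal_bests)            # random particle, with replacement
        candidate.append(donor[rng.randrange(dim)])   # its randomly chosen dimension
    return candidate
```

Every component of the candidate therefore comes from some particle's personal best, so the constructed point mixes information from across the whole swarm rather than following a single leader.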
In this local search, the potential new position is denoted by
This proposed local search technique has been named collective local unimodal search (CLUS) technique. It has some trace of similarity in operation with local unimodal sampling (LUS) technique [
While
    for
    end for
    validate for search space boundary
    else
    end if
end while
Return
RIW-PSO converges quickly in early iterations and performs mostly global search, but it soon gets stuck in local optima because it lacks local search ability. Similarly, LDIW-PSO performs global search in the earlier part of its iterations but lacks enough momentum for local search as it approaches its terminal point of execution. The aim of this paper is to make a general improvement to the performance of PSO that can be applied to any of its variants. To achieve this, the two PSO variants were hybridized with the proposed collective local unimodal search (CLUS) technique, which takes advantage of their global search abilities to perform neighbourhood search for better results. The improved PSO algorithm is presented in Algorithm
(1.1) function to optimize as
(1.2) Parameters:
    (1.2.1) swarm size
    (1.2.2) problem dimension
    (1.2.3) solution search space
    (1.2.4) particle velocity range
(2.1) position
(2.2) velocity
(2.3)
(2.4)
(2.5) evaluate
(3.1) Compute inertia weight using any inertia weight formula
(3.2) For each particle
    (3.2.1) update
    (3.2.2) validate for velocity boundaries
    (3.2.3) update
    (3.2.4) validate for position boundaries
    (3.2.5) If
(3.3)
(3.4) Implement local search using CLUS in Algorithm
(4.1)
(4.2)
(4.3) Return
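The overall PSOCLUS loop can be sketched end to end as follows. This is a hedged sketch: the velocity clamp (0.05 of the range), the 20-sample CLUS refinement budget, and the 0.7 neighbourhood shrink factor are illustrative assumptions, not the paper's exact settings:

```python
import random

def psoclus(f, dim, bounds, n_particles=20, iters=200, seed=1):
    """Inertia-weight PSO with a CLUS step applied to the global best
    once per iteration (illustrative parameter values throughout)."""
    rng = random.Random(seed)
    lo, hi = bounds
    v_max = 0.05 * (hi - lo)                      # velocity clamp (assumed)
    c1 = c2 = 1.494
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pcost = [f(p) for p in pbest]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = list(pbest[g]), pcost[g]

    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                 # linear decreasing inertia weight
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                v[i][d] = max(-v_max, min(v_max, v[i][d]))
                x[i][d] = max(lo, min(hi, x[i][d] + v[i][d]))
            cost = f(x[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = list(x[i]), cost
                if cost < gcost:
                    gbest, gcost = list(x[i]), cost
        # --- CLUS step: collectively construct a candidate, then refine it ---
        cand = [rng.choice(pbest)[rng.randrange(dim)] for _ in range(dim)]
        radius = 0.1 * (hi - lo)                  # initial sampling radius (assumed)
        for _ in range(20):                       # unimodal sampling budget (assumed)
            trial = [max(lo, min(hi, cand[d] + rng.uniform(-radius, radius)))
                     for d in range(dim)]
            if f(trial) < f(cand):
                cand = trial
            else:
                radius *= 0.7                     # shrink neighbourhood on failure
        if f(cand) < gcost:                       # replace gbest only if better
            gbest, gcost = cand, f(cand)
    return gbest, gcost
```

The CLUS step runs once per iteration and replaces the global best only when the refined candidate improves on it, so it can never make the swarm's best-known solution worse.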
In this section, the improved algorithm (PSOCLUS) was implemented using the inertia weight strategy of RIW-PSO and the variant was labeled
Parameter settings for experiment.

| Parameter | | | | | | min | max | max |
|---|---|---|---|---|---|---|---|---|
| Value | 0.9 | 0.4 | 1.494 | 0.05 * | 0.05 * | 0.01 | 2.0 | 100 |
The application software was developed in Microsoft Visual C# programming language.
A total of 21 problems were used in the experiments. These problems have different degrees of complexity and multimodality, representing landscapes diverse enough to cover many of the problem types that arise in global optimization. Shown in Table
Benchmark problems.

| Number | Problem | Dimensions | Optimal value | Success threshold |
|---|---|---|---|---|
| 1 | Ackley | 10, 20, 30 | 0 | |
| 2 | Booth | 2 | 0 | |
| 3 | Easom | 2 | −1 | −1 |
| 4 | Griewank | 10, 20, 30 | 0 | |
| 5 | Dixon-Price | 10, 20, 30 | 0 | |
| 6 | Levy | 10, 20, 30 | 0 | |
| 7 | Michalewicz | 5 | −4.687 | −4.687 |
| 8 | Noisy Quartic | 10, 20, 30 | 0 | |
| 9 | Noncontinuous Rastrigin | 10, 20, 30 | 0 | 20 |
| 10 | Rastrigin | 10, 20, 30 | 0 | 20 |
| 11 | Rosenbrock | 10, 20, 30 | 0 | 20 |
| 12 | Rotated Ellipsoid | 10, 20, 30 | 0 | |
| 13 | Salomon | 5 | 0 | |
| 14 | Schaffer's f6 | 2 | 0 | |
| 15 | Schwefel | 10, 20, 30 | | |
| 16 | Schwefel P2.22 | 10, 20, 30 | 0 | |
| 17 | Shubert | 2 | −186.7309 | −186.7309 |
| 18 | Sphere | 10, 20, 30 | 0 | |
| 19 | Step | 10, 20, 30 | 0 | |
| 20 | Sum Squares | 10, 20, 30 | 0 | |
| 21 | Trid | 6 | −50 | −50 |
Benchmark problems.

| Number | Problem | Formulation | Feature | Search range |
|---|---|---|---|---|
| 1 | Ackley | | MN | ±32 |
| 2 | Booth | | MN | ±10 |
| 3 | Easom | | UN | ±100 |
| 4 | Griewank | | MN | ±600 |
| 5 | Dixon-Price | | UN | ±10 |
| 6 | Levy | | MN | ±10 |
| 7 | Michalewicz | | MS | |
| 8 | Noisy Quartic | | US | ±1.28 |
| 9 | Noncontinuous Rastrigin | | MS | ±5.12 |
| 10 | Rastrigin | | MS | ±5.12 |
| 11 | Rosenbrock | | UN | ±30 |
| 12 | Rotated Ellipsoid | | UN | ±100 |
| 13 | Salomon | | MN | ±100 |
| 14 | Schaffer's f6 | | MN | ±100 |
| 15 | Schwefel | | MS | ±500 |
| 16 | Schwefel P2.22 | | UN | ±10 |
| 17 | Shubert | | MN | ±10 |
| 18 | Sphere | | US | ±100 |
| 19 | Step | | US | ±10 |
| 20 | Sum Squares | | US | ±10 |
| 21 | Trid | | UN | ± |
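As an illustration, three of the listed benchmark functions, whose definitions are standard in the literature, can be written as:

```python
import math

def sphere(x):      # unimodal, separable; minimum 0 at the origin
    return sum(v * v for v in x)

def rastrigin(x):   # multimodal, separable; minimum 0 at the origin
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):      # multimodal, non-separable; minimum 0 at the origin
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

All three take a list of any length, which is how the scaled problems are run at dimensions 10, 20, and 30.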
The additional parameters that were set in the experiment are inertia weight threshold for LDIW-PSO (
The efficiency of the algorithms was tested against the set of benchmark problems given in Table, using the following measures:

- Best fitness solution: the best fitness among the solutions obtained during the runs.
- Mean best fitness solution: a measure of the precision (quality) of the result that the algorithm can reach within the given iterations across all 25 runs.
- Standard deviation (Std. Dev.) of the mean best fitness solution over 25 runs: a measure of the algorithm's stability and robustness.
- Average number of iterations in which an algorithm was able to reach the success threshold.
- Success rate (SR) =
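These measures could be computed from the per-run results with a small helper such as the following (illustrative only; the function and field names are not from the paper):

```python
import statistics

def summarize(run_best_fitnesses, iters_to_success, threshold):
    """Summarize a batch of independent runs: best/mean fitness, standard
    deviation, average iterations for successful runs (None = no success),
    and success rate as a percentage."""
    successes = [f for f in run_best_fitnesses if f <= threshold]
    hit_iters = [it for it in iters_to_success if it is not None]
    return {
        "best": min(run_best_fitnesses),
        "mean": statistics.mean(run_best_fitnesses),
        "std_dev": statistics.stdev(run_best_fitnesses),
        "avg_iteration": statistics.mean(hit_iters) if hit_iters else None,
        "success_rate": 100.0 * len(successes) / len(run_best_fitnesses),
    }
```

A run counts as successful when its best fitness reaches the problem's success threshold, matching the SR and average-iteration columns in the result tables.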
Comparison between GLSPSO and
Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere | |||||
---|---|---|---|---|---|---|---|---|---|---|
Algorithm | GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
|
||||||||||
Best fitness | 0.0364 |
|
|
|
8.8062 |
|
2.6188 |
|
|
|
Mean fitness |
|
17.1371 | 0.0041 |
|
29.4936 |
|
9.0025 |
|
0.0142 |
|
Worst fitness |
|
20.0888 |
|
0.0791 | 50.4781 |
|
18.9887 |
|
0.0476 |
|
Std. Dev. |
|
6.7543 |
|
0.0111 | 10.4372 |
|
0.034 |
|
0.0123 |
|
Statistical analysis using the Wilcoxon signed rank nonparametric test with 0.05 level of significance [
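For small paired samples, the exact two-sided Wilcoxon signed rank test can be sketched in plain Python as follows. This is an illustrative sketch: it enumerates all sign assignments, so it is only practical for small n, and a statistics package would normally be used instead; the sample data in the test are made up, not the paper's values:

```python
from itertools import product

def wilcoxon_signed_rank(a, b):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.
    Zero differences are dropped; tied |differences| get average ranks."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    # Rank the absolute differences, averaging tied ranks.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w = min(w_plus, sum(ranks) - w_plus)
    # Exact p value: enumerate all 2^n sign assignments.
    count = sum(1 for signs in product((0, 1), repeat=n)
                if min(sum(r for s, r in zip(signs, ranks) if s),
                       sum(r for s, r in zip(signs, ranks) if not s)) <= w)
    return w, count / 2 ** n
```

A p value below 0.05 rejects the null hypothesis that the two algorithms' paired results come from the same distribution.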
Results obtained from all the experiments are discussed in this subsection to show the overall performance of the various algorithms. Presented in Tables
Comparison between GLSPSO and
Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere | |||||
---|---|---|---|---|---|---|---|---|---|---|
Algorithm | GLSPSO | R-PSOCLUS | GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
|
||||||||||
Best fitness |
|
20.3075 | 0.0897 |
|
109.5946 |
|
175.8785 |
|
1.9123 |
|
Mean fitness |
|
20.4778 | 0.1257 |
|
185.5221 |
|
218.4976 |
|
2.7449 |
|
Worst fitness |
|
20.5792 | 0.2074 |
|
229.6229 |
|
259.2466 |
|
3.9559 |
|
Std. Dev. | 0.2273 |
|
0.0274 |
|
24.9829 |
|
21.8027 |
|
0.4840 |
|
Comparison between GLSPSO and
Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere | |||||
---|---|---|---|---|---|---|---|---|---|---|
Algorithm | GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
|
||||||||||
Best fitness |
|
20.9666 | 0.3195 |
|
792.004 |
|
|
1867.2669 | 23.0614 |
|
Mean fitness |
|
21.0691 | 0.4242 |
|
881.0822 |
|
|
24909.8486 | 27.2534 |
|
Worst fitness |
|
21.1306 | 0.4992 |
|
934.9773 |
|
|
95519.4585 | 29.1615 |
|
Std. Dev. | 0.0551 |
|
0.0303 |
|
|
103.1854 |
|
21083.5791 |
|
4.2498 |
Comparison between GLSPSO and
Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere | |||||
---|---|---|---|---|---|---|---|---|---|---|
Algorithm | GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
|
||||||||||
Best fitness | 0.0364 |
|
|
|
8.8062 |
|
2.6188 |
|
|
|
Mean fitness |
|
18.2504 |
|
0.0042 | 29.4936 |
|
9.0025 |
|
0.0142 |
|
Worst fitness |
|
20.0771 |
|
0.1008 | 50.4781 |
|
18.9887 |
|
0.0476 |
|
Std. Dev. |
|
5.4640 |
|
0.0186 | 10.4372 |
|
|
0.6449 | 0.0123 |
|
Comparison between GLSPSO and
Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere | |||||
---|---|---|---|---|---|---|---|---|---|---|
Algorithm | GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
|
||||||||||
Best fitness |
|
20.3184 | 0.0897 |
|
109.5946 |
|
175.8785 |
|
1.9123 |
|
Mean fitness |
|
20.4631 | 0.1257 |
|
185.5221 |
|
218.4976 |
|
2.7449 |
|
Worst fitness |
|
20.5734 | 0.2074 |
|
229.6229 |
|
259.2466 |
|
3.9559 |
|
Std. Dev. | 0.2273 |
|
0.0274 |
|
24.9829 |
|
21.8027 |
|
0.4840 |
|
Comparison between GLSPSO and
Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere | |||||
---|---|---|---|---|---|---|---|---|---|---|
Algorithm | GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
GLSPSO |
|
|
||||||||||
Best fitness |
|
20.2136 | 0.3195 |
|
792.004 |
|
1378.0 |
|
23.0614 |
|
Mean fitness |
|
21.0491 | 0.4242 |
|
881.0822 |
|
1602.0 |
|
27.2534 |
|
Worst fitness |
|
21.1152 | 0.4992 |
|
934.9773 |
|
1763.0 |
|
29.1615 |
|
Std. Dev. |
|
0.1254 | 0.0303 |
|
|
68.2009 | 90.2874 |
|
1.2253 |
|
Comparison between PSOlis,

| Problem | PSOlis | | |
|---|---|---|---|
| Ackley | | | |
| Griewank | | | |
| Rastrigin | 2.005 | | |
| Rosenbrock | 3.987 | | |
| Sphere | | | |
Results for RIW-PSO and
Problem | Booth | Easom | Michalewicz | Schaffer's f6 | Salomon | Shubert | Trid-6 | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Algorithm | RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 39.12 |
|
55.0 |
|
— | — | 133.83 |
|
— |
|
|
107.16 | 114.40 |
|
SR (%) | 100 | 100 | 100 | 100 | 0 | 0 | 48 |
|
0 |
|
100 | 100 | 100 | 100 |
Results for RIW-PSO and
Problem | Ackley | Griewank | Dixon-Price | Levy | Noisy Quartic | Noncontinuous Rastrigin | Rastrigin | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Algorithm | RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 287.88 |
|
— |
|
295.50 |
|
127.31 |
|
— | — | 40.49 |
|
49.44 |
|
SR (%) | 96 |
|
0 |
|
8 | 8 | 52 |
|
0 | 0 | 92 |
|
100 | 100 |
|
||||||||||||||
Problem | Rosenbrock | Rotated Ellipsoid | Schwefel | Schwefel 2.22 | Sphere | Step | Sum Squares | |||||||
|
||||||||||||||
Algorithm | RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 89.28 |
|
426.60 |
|
— |
|
268.84 |
|
180.12 |
|
88.72 |
|
143.52 |
|
SR (%) | 100 | 100 | 100 | 100 | 0 |
|
100 | 100 | 100 | 100 | 72 |
|
100 | 100 |
Results for RIW-PSO and
Problem | Ackley | Griewank | Dixon-Price | Levy | Noisy Quartic | Noncontinuous Rastrigin | Rastrigin | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Algorithm | RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 580.67 |
|
402.00 |
|
— | — | 248.00 |
|
— | — | 51.5 |
|
56.2 | 89.08 |
SR (%) | 84 |
|
4 |
|
0 | 0 | 16 |
|
0 | 0 | 16 |
|
20 |
|
|
||||||||||||||
Problem | Rosenbrock | Rotated Ellipsoid | Schwefel | Schwefel 2.22 | Sphere | Step | Sum Squares | |||||||
|
||||||||||||||
Algorithm | RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 263.40 |
|
1763.10 |
|
— | — | 575.92 |
|
354.00 |
|
646.57 |
|
310.72 |
|
SR (%) | 100 | 100 | 84 |
|
0 | 0 | 100 | 100 | 100 | 100 | 28 |
|
100 | 100 |
Results for RIW-PSO and
Problem | Ackley | Griewank | Dixon-Price | Levy | Noisy Quartic | Noncontinuous Rastrigin | Rastrigin | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Algorithm | RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 959.88 |
|
628.00 |
|
— | — | — |
|
— | — | 107 |
|
141.00 |
|
SR (%) | 64 |
|
36 |
|
0 | 0 | 0 |
|
0 | 0 | 4 |
|
4 |
|
|
||||||||||||||
Problem | Rosenbrock | Rotated Ellipsoid | Schwefel | Schwefel 2.22 | Sphere | Step | Sum Squares | |||||||
|
||||||||||||||
Algorithm | RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
RIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 1523.36 | 2813.14 | — | — | — | — | 1139.64 |
|
595.52 |
|
2881.00 |
|
526.28 |
|
SR (%) | 44 |
|
0 | 0 | 0 | 0 | 100 | 100 | 100 | 100 | 4 |
|
100 | 100 |
Results for LDIW-PSO and
Problem | Booth | Easom | Michalewicz | Schaffer's f6 | Salomon | Shubert | Trid-6 | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Algorithm | LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration |
|
81.16 | 107.16 |
|
— | — | 279.90 |
|
— |
|
|
208.20 | 287.48 |
|
SR (%) | 100 | 100 | 100 | 100 | 0 | 0 | 84 |
|
0 |
|
|
96 | 100 | 100 |
Results for LDIW-PSO and
Problem | Ackley | Griewank | Dixon-Price | Levy | Noisy Quartic | Noncontinuous Rastrigin | Rastrigin | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Algorithm | LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 500.92 |
|
— |
|
— | — | 309.58 |
|
— | — | 40.86 |
|
45.13 |
|
SR (%) | 100 | 100 | 0 |
|
0 | 0 | 76 |
|
0 | 0 | 88 |
|
96 |
|
|
||||||||||||||
Problem | Rosenbrock | Rotated Ellipsoid | Schwefel | Schwefel 2.22 | Sphere | Step | Sum Squares | |||||||
|
||||||||||||||
Algorithm | LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 155.20 |
|
566.44 |
|
— |
|
478.12 |
|
385.52 |
|
73.79 |
|
340.56 |
|
SR (%) | 100 | 100 | 100 | 100 | 0 |
|
100 | 100 | 100 | 100 | 96 |
|
100 | 100 |
Results for LDIW-PSO and
Problem | Ackley | Griewank | Dixon-Price | Levy | Noisy Quartic | Noncontinuous Rastrigin | Rastrigin | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Algorithm | LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 785.08 |
|
— |
|
— | — | 545.13 |
|
— | — | 93 |
|
256.17 |
|
SR (%) | 100 | 100 | 0 |
|
0 | 0 | 32 |
|
0 | 0 | 16 |
|
48 |
|
|
||||||||||||||
Problem | Rosenbrock | Rotated Ellipsoid | Schwefel | Schwefel 2.22 | Sphere | Step | Sum Squares | |||||||
|
||||||||||||||
Algorithm | LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 594.96 | 419.12 | 1495.45 |
|
— | — | 820.56 |
|
639.88 |
|
147.52 |
|
595.80 |
|
SR (%) | 100 | 100 | 88 |
|
0 | 0 | 100 | 100 | 100 | 100 | 92 |
|
100 | 100 |
Results for LDIW-PSO and
Problem | Ackley | Griewank | Dixon-Price | Levy | Noisy Quartic | Noncontinuous Rastrigin | Rastrigin | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Algorithm | LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration | 1245.68 |
|
1050.23 | 1060.44 | — | — | 928.00 |
|
— | — | — |
|
518.00 | 718.21 |
SR (%) | 100 | 100 | 52 |
|
0 | 0 | 4 |
|
0 | 0 | 0 |
|
4 |
|
|
||||||||||||||
Problem | Rosenbrock | Rotated Ellipsoid | Schwefel | Schwefel 2.22 | Sphere | Step | Sum Squares | |||||||
|
||||||||||||||
Algorithm | LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
LDIW-PSO |
|
|
||||||||||||||
Best fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Mean fitness |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Std. Dev. |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Av. iteration |
|
2078.70 | — |
|
— | — | 1576.75 |
|
1033.04 |
|
408.92 |
|
968.72 |
|
SR (%) | 92 | 92 | 0 |
|
0 | 0 | 80 |
|
100 | 100 | 52 |
|
100 | 100 |
The results in Tables
To further demonstrate the efficiency of the proposed local search technique,
Presented in Table
The results presented in Table
Wilcoxon signed rank test on mean fitness obtained by RIW-PSO and

| Measurement | Scaled problems (Dim = 10) | Scaled problems (Dim = 20) | Scaled problems (Dim = 30) | Nonscaled problems |
|---|---|---|---|---|
| | 13 | 11 | 12 | 3 |
| | 0 | 2 | 2 | 0 |
| | 1 | 1 | 0 | 4 |
| Z | −3.190 | −2.274 | −2.606 | −1.604 |
| p value | 0.001 | 0.023 | 0.009 | 0.190 |
| Effect size | 0.600 | 0.430 | 0.490 | — |
| Median: RIW-PSO | 0.847 | 0.144 | 0.272 | −1.000 |
| Median: | 0.000 | 0.000 | 0.000 | −1.000 |
Wilcoxon signed rank test on mean fitness obtained by LDIW-PSO and

| Measurement | Scaled problems (Dim = 10) | Scaled problems (Dim = 20) | Scaled problems (Dim = 30) | Nonscaled problems |
|---|---|---|---|---|
| | 12 | 12 | 13 | 3 |
| | 1 | 1 | 0 | 0 |
| | 1 | 1 | 1 | 4 |
| Z | −2.988 | −2.552 | −3.181 | −1.604 |
| p value | 0.003 | 0.011 | 0.001 | 0.190 |
| Effect size | 0.565 | 0.482 | 0.601 | — |
| Median: LDIW-PSO | 0.044 | 0.021 | 0.029 | −1.000 |
| Median: | 0.000 | 0.000 | 0.000 | −1.000 |
Convergence graphs for 6 of the nonscaled benchmark problems.
Convergence graphs for 6 of the scaled benchmark problems with dimension of 30.
Presented in Table
Apart from using statistical tests to observe the performance of RIW-PSO,
Box plots for 6 of the scaled test problems.
A new local search technique has been proposed in this paper with the goal of addressing the problem of premature convergence associated with particle swarm optimization algorithms. The proposed local search was used to efficiently improve the performance of two existing PSO variants, RIW-PSO and LDIW-PSO. These variants have been known to be less efficient in optimizing continuous optimization problems. In this paper they were hybridized with the local search to form two other variants
Further study is needed on the parameter tuning of the proposed local search technique. Empirical investigation of the behaviour of the technique in optimizing problems with noise also deserves further study. Investigating the scalability of the algorithms on problems with dimensions greater than 100 is likewise essential. Finally, the proposed algorithm can be applied to real-world optimization problems.
The authors declare that there is no conflict of interests regarding the publication of this paper.
The authors acknowledge the College of Agriculture, Engineering and Sciences, University of KwaZulu-Natal, South Africa, for its support towards this work.