Particle Swarm Optimization Algorithm with Multiple Phases for Solving Continuous Optimization Problems

An algorithm with different parameter settings often performs differently on the same problem, and the proper settings are difficult to determine before the optimization process. Variants of the particle swarm optimization (PSO) algorithm are studied as exemplars of swarm intelligence algorithms. Based on the building block thesis, a PSO algorithm with multiple phases is proposed to analyze the relation between search strategies and the problems being solved. Two variants of the algorithm, termed the PSO with fixed phases (PSOFP) algorithm and the PSO with dynamic phases (PSODP) algorithm, are compared with six variants of the standard PSO algorithm in an experimental study on 12 benchmark functions for single-objective numerical optimization, in both 50 and 100 dimensions. The experimental results verify the generalization ability of the proposed PSO variants.


Introduction
The particle swarm optimization (PSO) algorithm is a population-based stochastic algorithm modeled on the social behaviors observed in flocking birds [1,2]. As in other well-known swarm intelligence algorithms, each particle, which represents a solution in the group, flies through the search space with a velocity that is dynamically adjusted according to its own and its companions' historical behaviors. The particles tend to fly toward better search areas throughout the search process [3].
Many swarm intelligence algorithms have been proposed to solve different kinds of problems, including single- or multiple-objective optimization and real-world applications. These algorithms include the particle swarm optimization algorithm [1,2], the brain storm optimization (BSO) algorithm [4][5][6], and pigeon-inspired optimization (PIO) [7], just to name a few. Usually, a swarm intelligence algorithm has many variants with different search strategies or parameters. Take the PSO algorithm as an example; for single-objective optimization, there are the adaptive PSO algorithm [8], the time-varying attractor in the PSO algorithm [9], the interswarm interactive learning strategy in the PSO algorithm [10], the triple archives PSO algorithm [11], the social learning PSO algorithm for scalable optimization [12], a PSO variant for mixed-variable optimization problems [13], etc. For multiobjective optimization, there are the adaptive gradient multiobjective PSO algorithm [14], the coevolutionary PSO algorithm with a bottleneck objective learning strategy [15], the normalized-ranking-based PSO algorithm for many-objective optimization [16], etc. Besides, the classical PSO algorithm and its variants have been used in various real-world applications [17], such as search-based data analytics problems [18].

Different variants of optimization algorithms have different components and strategies during the search process. The design of proper components and strategies is vital for a search algorithm before solving a problem. The building block thesis could be embedded into the genetic algorithm [19]. Building blocks indicate the components at all levels used to understand and recognize complex things or structures. An evolutionary computation algorithm could be divided into several components according to the building block thesis. The PSO algorithm also consists of several components, and different settings of the parameters, structures, and strategies of the PSO algorithm could perform differently on the same problem.
Thus, the proper setting of the optimization algorithm is vital for the problem being solved. However, there is no favorable method that could find the best setting before the optimization. Many approaches have been introduced to analyze the components of the PSO algorithm; for example, the setting of the population size in the PSO algorithm was surveyed and discussed in [20].
Based on the building block thesis, a PSO algorithm with multiple phases is proposed in this paper. This work could be useful for understanding the search components of the PSO algorithm and for designing PSO algorithms for specific problems. This paper has two targets:
(1) For the theoretical analysis of the PSO algorithm, the effectiveness of different components of various PSO variants, such as the inertia weight, acceleration coefficients, and topology structures, is studied.

(2) For the application of the PSO algorithm, more effective PSO algorithms could be designed for solving different real-world applications.

The remainder of this paper is organized as follows. The basic PSO algorithm and topology structures are briefly introduced in Section 2. Section 3 gives an introduction to the proposed PSO algorithms with multiple phases. In Section 4, comprehensive experimental studies are conducted on 12 benchmark functions with 50 or 100 dimensions to verify the effectiveness of the proposed algorithms. Finally, Section 5 concludes with some remarks and future research directions.

Background
The PSO algorithm is a widely used swarm intelligence algorithm for optimization; it is easy to understand and to implement. A potential solution, which is termed a particle in the PSO algorithm, is a search point in the D-dimensional solution space. Each particle is associated with two vectors, i.e., the velocity vector and the position vector. The index of the particles or solutions is represented by the notation i, with i = 1, 2, ..., S, and the index of the dimensions by the notation d, with d = 1, 2, ..., D, where S is the total number of particles and D is the total number of dimensions. The position of the ith particle is represented as x_i = [x_i1, x_i2, ..., x_iD], where x_id is the value of the dth dimension of the ith solution. The velocity of a particle is likewise labeled as v_i = [v_i1, v_i2, ..., v_iD].

Particle Swarm Optimization Algorithm.
The basic process of the PSO algorithm is given in Algorithm 1. There are two update equations, one for the velocity and one for the position of each particle, in the basic process of the PSO algorithm. The framework of the canonical PSO algorithm is shown in Figure 1. The fitness value is evaluated based on all dimensions, while the velocity and position are updated dimension by dimension. The update equations for the velocity v_id and the position x_id are as follows [17,21]:

v_id(t+1) = w * v_id(t) + c_1 * rand() * (p_id(t) - x_id(t)) + c_2 * rand() * (p_nd(t) - x_id(t)),   (1)

x_id(t+1) = x_id(t) + v_id(t+1),   (2)

where w denotes the inertia weight, c_1 and c_2 are two positive acceleration constants, and rand() is a random function that generates uniformly distributed random numbers in the range [0, 1). The vector p_i(t) = [p_i1(t), p_i2(t), ..., p_id(t), ..., p_iD(t)] is termed the personal best at the tth iteration, which refers to the best position found so far by the ith particle, and p_n(t) = [p_n1(t), p_n2(t), ..., p_nd(t), ..., p_nD(t)] is termed the local best, which refers to the position with the best fitness value found so far by the members of the ith particle's neighborhood. The parameter w was introduced to balance the global search and local search abilities, and the PSO algorithm with the inertia weight is termed the canonical/classical PSO. The position of a particle is updated in the search space at each iteration. The velocity update equation (1) has three components: the previous velocity, the cognitive part, and the social part. The cognitive part indicates that a particle learns from its own search experience, and correspondingly, the social part indicates that a particle can learn from other particles, i.e., from the good solutions in its neighborhood.
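The two update equations can be sketched in code as follows. This is an illustrative implementation of equations (1) and (2) only; the default parameter values (w = 0.72, c1 = c2 = 1.49) are common choices in the literature, not settings taken from this paper.

```python
import numpy as np

def pso_step(x, v, pbest, nbest, w=0.72, c1=1.49, c2=1.49):
    """One PSO iteration: velocity update (eq. 1) and position update (eq. 2).

    x, v, pbest : arrays of shape (S, D) -- positions, velocities, personal bests
    nbest       : array of shape (S, D)  -- neighborhood best for each particle
    """
    S, D = x.shape
    r1 = np.random.rand(S, D)  # uniform random numbers in [0, 1), per dimension
    r2 = np.random.rand(S, D)
    # eq. (1): previous velocity + cognitive part + social part
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (nbest - x)
    # eq. (2): move each particle by its new velocity
    x = x + v
    return x, v
```

Note that the random numbers are drawn independently for every dimension of every particle, matching the dimension-by-dimension update described above.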

Topology Structure.
Different kinds of topology structures, such as the global star, local ring, four clusters, or Von Neumann structure, could be utilized in PSO variants to solve various problems. A particle in a PSO with a different structure has a different number of particles in its neighborhood, with a different scope. Learning from different neighbors means that a particle follows a different neighborhood (or local) best; in other words, the topology structure determines the connections among particles and thus how search information is propagated over the iterations.
A PSO algorithm's exploration and exploitation abilities could be affected by its topology structure; i.e., with different structures, the algorithm's convergence speed and its ability to avoid premature convergence will vary on the same optimization problem, because the topology structure determines the speed and direction of search information sharing for each particle. The global star and the local ring are the two typical topology structures. A PSO with a global star structure, where all particles are connected, has the smallest average distance in the swarm; on the contrary, a PSO with a local ring structure, where every particle is connected to its two nearest particles, has the largest average distance in the swarm [22]. These two commonly used structures are analyzed in the experimental study.
These two structures are shown in Figure 2. Each group of particles has 16 individuals. It should be noted that the "nearby" particles in the local structure are determined by the indices of the particles, not by their positions in the search space.
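The index-based neighborhoods of the two structures can be sketched as follows (a minimal illustration; the function names are ours, not from the paper):

```python
def star_neighbors(i, S):
    """Global star: every particle is connected to all S particles."""
    return list(range(S))

def ring_neighbors(i, S):
    """Local ring: each particle is connected to its two index-adjacent
    particles, with indices wrapping around at the ends."""
    return [(i - 1) % S, i, (i + 1) % S]
```

The neighborhood best for particle i is then the best personal best among `ring_neighbors(i, S)` (or among all particles in the star case).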

Particle Swarm Optimization with Multiple Phases

The search process could be divided into different phases, such as the exploration phase and the exploitation phase. A discussion of the genetic diversity of an evolutionary algorithm in the exploration phase has been given in [23]. To enhance the generalization ability of the PSO algorithm, multiple phases could be combined into one algorithm. The PSO algorithm is a combination of several components, and each component could be seen as a building block of a PSO variant. Some changeable components of PSO algorithms are shown in Figure 3. The setting of one component is independent of the other components. Normally, the parameter settings, the topology structure, and other strategies should be determined before the search process. There are three well-known standard PSO algorithms, namely, the standard PSO algorithm by Bratton and Kennedy (SPSO-BK) [24], the standard PSO algorithm by Clerc (SPSO-C) [25], and the canonical PSO (CPSO) algorithm by Shi and Eberhart [26].
Different search phases are emphasized in different PSO variants because each variant performs differently when solving different kinds of problems.
Thus, to utilize the strengths of different variants, a PSO algorithm could be built as a combination of several PSO variants. For example, the global star structure could be used at the beginning of the search to obtain good exploration ability, while the local ring structure could be used at the end of the search to promote exploitation ability.
The basic procedure of the PSO algorithm with the multiple phase strategy (PSOMP) is given in Algorithm 2.
The setting of the phases could be fixed or dynamically changed during the search. To validate the performance of the different settings, two kinds of PSO algorithms with multiple phases are proposed.

Particle Swarm Optimization with Fixed Phases.
One PSOMP variant is the PSO with fixed phases (PSOFP) algorithm. The process of the PSO algorithm with multiple fixed phases is shown in Figure 4. Two standard PSO variants, the CPSO with the star structure and the SPSO-BK with the ring structure, are combined in the PSOFP algorithm. In the experimental study, both variants run for the same number of iterations. The PSOFP algorithm starts with the CPSO with the star structure and ends with the SPSO-BK with the ring structure. All parameters are fixed within each phase during the search, and the PSOFP algorithm switches from one variant to the other after the fixed number of iterations.
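The fixed-phase schedule can be sketched as follows: the total iteration budget is split evenly among the phases, and the active variant changes once the budget of the current phase is exhausted. The phase labels and the function name are illustrative, not from the paper.

```python
def phase_at(iteration, max_iter, phases=("CPSO-star", "SPSO-BK-ring")):
    """Return the active phase for a given iteration when the iteration
    budget is divided evenly among the phases (PSOFP-style fixed switching)."""
    per_phase = max_iter // len(phases)      # iterations allotted to each phase
    idx = min(iteration // per_phase, len(phases) - 1)  # clamp the last phase
    return phases[idx]
```

With the paper's budget of 10000 iterations and two phases, the switch from the star-structure phase to the ring-structure phase happens at iteration 5000.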

Particle Swarm Optimization with Dynamic Phases.
The other PSO variant is the PSO with dynamic phases (PSODP) algorithm. As the name indicates, the phase could be changed dynamically during the search process. As with the PSOFP algorithm, the PSODP algorithm starts with the CPSO with the star structure and ends with the SPSO-BK with the ring structure. However, the number of iterations per phase is not fixed during the search. The process of the PSO algorithm with multiple dynamic phases is shown in Figure 5. The global best (gbest) position is the best solution found so far; the gbest not changing for k iterations indicates that the algorithm may be stuck in a local optimum.
The PSO variant will be changed when this phenomenon occurs. In the setting of the PSODP algorithm, the search phase changes from the CPSO with the star structure to the SPSO-BK with the ring structure. The condition for the phase change is that the gbest is stuck in a local optimum, i.e., the gbest has not changed for 100 iterations.
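The stagnation test that triggers the phase change can be sketched as follows. The counter resets whenever gbest improves, and a switch is signaled once gbest has not improved for k consecutive iterations (k = 100 in the paper's setting); the class name is ours.

```python
class StagnationDetector:
    """Signal a phase change when gbest has not improved for k iterations."""

    def __init__(self, k=100):
        self.k = k
        self.best = float("inf")   # best (minimum) function value seen so far
        self.stall = 0             # iterations since the last improvement

    def update(self, gbest_value):
        """Feed the current gbest value; return True when a switch is due."""
        if gbest_value < self.best:
            self.best = gbest_value
            self.stall = 0         # improvement: reset the stagnation counter
        else:
            self.stall += 1
        return self.stall >= self.k
```

In a PSODP-style loop, `update` would be called once per iteration, and a `True` result would switch the algorithm from the star-structure phase to the ring-structure phase.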
(1) Initialize each particle's velocity and position with random numbers;
(2) while the maximum number of iterations is not reached and no satisfactory solution is found do
(3) Calculate each solution's function value;
(4) Compare the function value of the current position with that of the best position in history. For each particle, if the current position has a better function value than pbest, update pbest to the current position;
(5) Select the particle that has the best fitness value in the current particle's neighborhood; this particle is termed the neighborhood best (nbest);
(6) for each particle do
(7) Update the particle's velocity according to equation (1);
(8) Update the particle's position according to equation (2);

ALGORITHM 1: Basic procedure of the PSO algorithm.

Benchmark Test Functions and Parameter Setting.
Table 1 shows the benchmark functions used in the experimental study. To illustrate the search ability of the proposed algorithms, a diverse set of 12 benchmark functions of different types is used to conduct the experiments. These benchmark functions can be classified into two groups: the first five functions f0-f4 are unimodal, while the other seven functions are multimodal. All functions are run 50 times to obtain the statistics necessary to compare the different approaches, and the value of the global optimum is shifted to a different f_min for each function. The function value of the best solution found by an algorithm in a run is denoted by f(x_best), and the error of each run is denoted as error = f(x_best) - f_min.

Three kinds of standard or canonical PSO algorithms are tested in the experimental study. Each algorithm is run with two kinds of structure, the global star structure and the local ring structure. There are 50 particles in each group of PSO algorithms. Each algorithm runs 50 times, with 10000 iterations in every run. The other settings for the three standard algorithms are as follows.

(1) Initialize each particle's velocity and position with random numbers;
(2) while the maximum number of iterations is not reached and no satisfactory solution is found do
(3) Calculate each solution's function value;
(4) Compare the function value of the current position with that of the best position in history (personal best, termed pbest). For each particle, if the current position has a better function value than pbest, update pbest to the current position;
(5) Select the particle that has the best fitness value in the current particle's neighborhood; this particle is termed the neighborhood best;
(6) for each particle do
(7) Update the particle's velocity according to equation (1);
(8) Update the particle's position according to equation (2);
(9) Change the search phase: update the structure and/or parameter settings;

ALGORITHM 2: Basic procedure of the PSO algorithm with the multiple phase strategy.

The experimental results are given in Tables 2 and 3. The results for the unimodal functions f0-f4 are given in Table 2, while the results for the multimodal functions f5-f11 are given in Tables 2 and 3. The aim of the experimental study is not to validate the search performance of the proposed algorithms but to discuss the combination of different search phases. This is a primary study on learning the characteristics of benchmark functions.
To validate the generalization ability of the PSO variants, the same experiments are conducted on the functions with 100 dimensions; all parameter settings are the same as for the functions with 50 dimensions. The results for the unimodal functions f0-f4 are given in Table 4, while the results for the multimodal functions f5-f11 are given in Tables 4 and 5. From the experimental results, it can be seen that the proposed algorithms obtain good solutions for most benchmark functions. Based on the combination of different phases, the proposed algorithms perform more robustly than the algorithms with a single phase.

The convergence graphs of the error values for each algorithm on all functions with 50 and 100 dimensions are shown in Figures 6 and 7, respectively. In these figures, each curve represents the variation of the mean error value over the iterations for one algorithm. The six PSO variants are compared with the two proposed algorithms. The notations "S" and "R" indicate the star and ring structures in the PSO variants, respectively; for example, "SPSOBKS" and "SPSOBKR" indicate the SPSO-BK algorithm with the star and ring structures, respectively. The robustness of the proposed algorithms can also be seen from the convergence graphs.

Discrete Dynamics in Nature and Society

Computational Time.
Tables 6 and 7 give the total computing time (in seconds) of the 50 runs in the experiments for the functions with 50 and 100 dimensions, respectively. Normally, the evaluation of a PSO algorithm with the ring structure is significantly slower than that of the same PSO algorithm with the star structure.

Conclusions
Swarm intelligence algorithms perform differently on the same optimization problem. Even for one kind of optimization algorithm, a change of structure or parameters may lead to different results. It is very difficult, if not impossible, to obtain the connection between an algorithm and a problem in advance. Thus, choosing a proper setting for an algorithm before the search is vital for the optimization process. In this paper, we attempted to combine the strengths of different settings of the PSO algorithm. Based on the building block thesis, a PSO algorithm with a multiple phase strategy was proposed to solve single-objective numerical optimization problems. Two variants of the algorithm, termed the PSO with fixed phases (PSOFP) algorithm and the PSO with dynamic phases (PSODP) algorithm, were tested on the 12 benchmark functions. The experimental results showed that the combination of different phases could enhance the robustness of the PSO algorithm.
There are two phases in the proposed PSOFP and PSODP methods; however, the number of phases could be increased, or the types of phases could be changed, in other PSO algorithms with multiple phases. Beyond the PSO algorithm, the building block thesis could be utilized in other swarm optimization algorithms. Based on the analysis of the different components, the strengths and weaknesses of different swarm optimization algorithms could be understood. Utilizing this multiple phase strategy with other swarm algorithms is our future work.

Data Availability
The data and codes used to support the findings of this study have been deposited in the GitHub repository.

Conflicts of Interest
The authors declare that they have no conflicts of interest.