A Framework for Constrained Optimization Problems Based on a Modified Particle Swarm Optimization

This paper develops a particle swarm optimization (PSO) based framework for constrained optimization problems (COPs). To enhance the performance of PSO, a modified PSO algorithm, named SASPSO 2011, is proposed by adding a newly developed self-adaptive strategy to the standard particle swarm optimization 2011 (SPSO 2011) algorithm. Since the convergence of PSO is of great importance and significantly influences its performance, this paper first theoretically investigates the convergence of SASPSO 2011. Then, a parameter selection principle guaranteeing the convergence of SASPSO 2011 is provided. Subsequently, a SASPSO 2011-based framework is established to solve COPs. To increase the diversity of solutions and decrease optimization difficulties, the adaptive relaxation method, combined with the feasibility-based rule, is applied to handle the constraints of COPs and evaluate candidate solutions in the developed framework. Finally, the proposed method is verified on 4 benchmark test functions and 2 real-world engineering problems against six PSO variants and some well-known methods from the literature. Simulation results confirm that the proposed method is highly competitive in terms of solution quality and can be considered a vital alternative for solving COPs.


Introduction
Over the last few decades, constrained optimization problems (COPs) have rapidly gained increasing research interest, since they are frequently encountered in areas such as path planning [1], resource allocation [2], and economic environmental scheduling [3], to name but a few. Generally, solving a constrained optimization problem means optimizing a predefined objective function under some equality and/or inequality constraints [4,5]. Nevertheless, owing to nonlinearity in the objective, the constraints, or both, efficiently solving COPs remains a big challenge [4,5]. Therefore, more effective optimization algorithms are always needed.
Due to their population-based nature and promising ability to produce high-quality solutions, even for complex optimization problems [6], evolutionary algorithms (EAs), such as the genetic algorithm (GA) [2], simulated annealing (SA) [3], and differential evolution (DE) [7], have been proposed for solving different COPs. As one of the most powerful EAs, thanks to its simplicity and high convergence speed, particle swarm optimization (PSO) has been widely and successfully applied to different COPs in recent years [8][9][10][11][12].
Yet, since the basic PSO algorithm suffers from drawbacks such as stagnation and a poor ability to balance exploration and exploitation, its optimization efficiency may be restricted [13,14]. In order to improve the performance of PSO, these weaknesses must be overcome. Moreover, when designing a PSO algorithm, the convergence of PSO is paramount because this property significantly influences its performance [15,16]. To date, despite some studies investigating the convergence of PSO to an equilibrium point [16][17][18][19], the optimality of this point is not clearly established. In fact, it is still difficult to theoretically analyze the global or local convergence (i.e., the global or local optimality of this equilibrium point) of PSO due to its stochastic nature [15,16].
So far, many researchers have committed themselves to developing different PSO algorithms in order to enhance the performance of PSO. Liu et al. proposed a hybrid PSO algorithm that hybridizes PSO with the differential evolution (DE) algorithm in [20]. To tackle the stagnation issue, they proposed a new DE algorithm to evolve the personal best experience of particles in their hybrid PSO [20]. To sufficiently balance the exploration and exploitation capabilities of PSO, Taherkhani and Safabakhsh [21] proposed a novel stability-based PSO algorithm, in which an adaptive approach determines the inertia weight of each particle in each dimension. Furthermore, by considering the stability condition and the adaptive inertia weight, the cognitive and social acceleration parameters of their algorithm are adaptively determined [21]. Through extensive simulations on different benchmark test functions and a real-world application, the effectiveness and superiority of their proposed PSO have been validated in [21].
Additionally, among the currently existing PSO variants, to the best of the authors' knowledge, the standard particle swarm optimization 2011 (SPSO 2011) algorithm [22,23] may be one of the most recent standard versions of PSO. By randomly drawing a point in a hypersphere whose center is determined by three points, namely, the current position of the particle, a point a little "beyond" the personal best position of the particle, and a point a little "beyond" the global best position of the swarm, the nonstagnation property can be achieved in SPSO 2011 [22,23]. However, since its three control parameters (i.e., the inertia weight and the cognitive and social acceleration parameters) are constant and there is no distinction between the cognitive and social acceleration parameters, SPSO 2011 cannot dynamically adjust its exploration and exploitation abilities. Besides, the convergence and stability of SPSO 2011 were not investigated in [22,23].
Considering the advantage and the disadvantage of SPSO 2011, we propose a modified PSO algorithm, called SASPSO 2011, developed on the basis of SPSO 2011. The main goal of SASPSO 2011 is to exploit the advantage (the nonstagnation property) and overcome the shortcoming (the poor ability to balance exploration and exploitation) of SPSO 2011, so that its performance can be enhanced. To this end, particles in SASPSO 2011 first follow the same moving rules defined in SPSO 2011 to update their velocities and positions, which prevents stagnation in SASPSO 2011. Then, a new self-adaptive strategy fine-tunes the three control parameters of particles in SASPSO 2011 to balance its exploration and exploitation abilities.
Although SASPSO 2011 is developed on the basis of SPSO 2011, there are significant differences between the two algorithms: (1) a novel self-adaptive strategy is proposed for fine-tuning the three control parameters of particles in SASPSO 2011; (2) the stability and local convergence of SASPSO 2011 are investigated; (3) the convergence behavior of particles in SASPSO 2011 is investigated; (4) a parameter selection principle that guarantees the local convergence of SASPSO 2011 is provided.
After the analytical investigation of SASPSO 2011, this paper designs a SASPSO 2011-based framework for solving COPs. In order to easily handle the constraints of COPs and reduce the burden of implementing the optimization algorithm, the adaptive relaxation method [4,5] is combined with the feasibility-based rule [24,25] to handle constraints and evaluate candidate solutions in the established framework. To verify the proposed method, it is compared with six state-of-the-art PSO variants and some methods from the literature on 4 benchmark test functions and 2 real-world engineering problems. The simulation results show that the proposed method is highly competitive in finding high-quality solutions. Furthermore, the search stability of the proposed method is comparable with that of SAIWPSO [21] and outperforms those of the other compared methods. Thus, the proposed method can be considered an effective optimization tool for solving COPs.
The remainder of this paper is organized as follows. After briefly reviewing SPSO 2011, SASPSO 2011 is presented in Section 2. Section 3 theoretically investigates properties of SASPSO 2011 such as stability, local convergence, the convergence behavior of particles, and the parameter selection principle. The SASPSO 2011-based framework for COPs is described in Section 4. Simulations and comparisons are performed in Section 5. Section 6 concludes the paper and outlines future work.

Particle Swarm Optimization (PSO)
2.1. Review of SPSO 2011. Inspired by bird flocking and fish schooling, Eberhart and Kennedy [26] first proposed PSO in 1995. The aim of the original PSO is to reproduce the social interactions among agents to solve different optimization problems [27]. Each agent in PSO is called a particle and is associated with a velocity that is dynamically adjusted according to its own flight experience as well as those of its companions. Since the first introduction of PSO in 1995, many different PSO algorithms have been proposed, among which SPSO 2011 [22,23] may be one of the most recent. Because our proposed PSO algorithm is developed on the basis of SPSO 2011, this subsection briefly describes SPSO 2011.
In [22], the moving rules of SPSO 2011 are defined as follows. For each particle i, a center of gravity G_i(t) is first computed from the current position x_i(t), a point a little "beyond" the personal best position P_i(t), and a point a little "beyond" the global best position P_g(t):

G_i(t) = x_i(t) + (c_1 [P_i(t) - x_i(t)] + c_2 [P_g(t) - x_i(t)]) / 3. (1)

A random point x'_i(t) is then drawn uniformly within the hypersphere H(G_i(t), ||G_i(t) - x_i(t)||), and the velocity and position of the particle are updated as

v_i(t+1) = w v_i(t) + x'_i(t) - x_i(t), (2)

x_i(t+1) = x_i(t) + v_i(t+1). (3)

In SPSO 2011, the particle can always explore the surroundings of the explored region with a nonnull velocity, since the random point x'_i(t) is added to the particle's velocity as shown in (2). Hence, the nonstagnation property can be achieved in SPSO 2011 [22,23]. The authors in [22] also propose to set the three control parameters of SPSO 2011 as follows:

w = 1 / (2 ln 2), c_1 = c_2 = 0.5 + ln 2. (4)
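To make the moving rules above concrete, one update step can be sketched in Python. This is our own illustration of (1)-(3), not code from [22,23]; the function name and the uniform-in-hypersphere sampling routine are assumptions of this sketch.

```python
import numpy as np

def spso2011_step(x, v, p_best, g_best,
                  w=1.0 / (2.0 * np.log(2.0)),
                  c1=0.5 + np.log(2.0), c2=0.5 + np.log(2.0),
                  rng=None):
    """One SPSO 2011 update: sample a point in a hypersphere, then move."""
    rng = np.random.default_rng() if rng is None else rng
    # Center of gravity, Eq. (1): mean of the current position and points
    # a little "beyond" the personal and global best positions.
    g = x + (c1 * (p_best - x) + c2 * (g_best - x)) / 3.0
    radius = np.linalg.norm(g - x)
    # Draw x' uniformly inside the hypersphere H(g, radius).
    direction = rng.standard_normal(x.shape)
    direction /= np.linalg.norm(direction)
    x_prime = g + radius * rng.random() ** (1.0 / x.size) * direction
    # Velocity keeps a non-null random component, Eq. (2), preventing stagnation.
    v_new = w * v + x_prime - x
    x_new = x + v_new          # position update, Eq. (3)
    return x_new, v_new
```

Because x' is drawn at random even when x, P_i, and P_g coincide with one another only approximately, the velocity never collapses to zero, which is the nonstagnation property discussed above.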

Description of SASPSO 2011.
When applying PSO to solve an optimization problem, it is necessary to properly control the exploration and exploitation abilities of PSO in order to find optimal solutions efficiently [13,14,28]. Ideally, in the early stage of the evolution, the exploration ability of PSO should be promoted so that particles can wander through the entire search space rather than clustering around the current population-best solution [13,14,28]. In the later stage of the evolution, the exploitation ability of PSO needs to be strengthened so that particles can search carefully in a local region to find optimal solutions efficiently [13,14,28]. The exploration and exploitation capabilities of PSO heavily depend on its three control parameters. The basic philosophies concerning how the three control parameters influence these abilities can be summarized as follows: (1) a large inertia weight enhances exploration, while a small inertia weight facilitates exploitation [13,14,28]; (2) a large cognitive component, compared to the social component, results in the wandering of particles through the entire search space, which strengthens exploration [14,27]; (3) a large social component, compared with the cognitive component, leads particles to a local search, which strengthens exploitation [14,27].
According to the basic philosophies noted above, although SPSO 2011 is a nonstagnant algorithm, it cannot strike a good balance between exploration and exploitation, since its three control parameters remain unchanged and there is no difference between c_1 and c_2. Considering this weakness of SPSO 2011, we propose a modified PSO algorithm, developed on the basis of SPSO 2011 and named SASPSO 2011. The main purpose of this development is to adaptively adjust the exploration and exploitation abilities of SASPSO 2011. To achieve this goal, a novel self-adaptive strategy is proposed to update the three control parameters of each particle, given by (5) to (7). From (5) to (7), with increasing iteration number t, it is clear that w_i and c_1,i decrease, while c_2,i increases in SASPSO 2011. Therefore, according to the aforementioned basic philosophies, SASPSO 2011 may start with high exploration, which is reduced over time, so that exploitation is favored in the later phase of the evolution. Note that, for a fixed lambda_i, the balance between exploration and exploitation varies only with respect to the iteration number t.
We also adapt the balance of the search in SASPSO 2011 using an additional parameter lambda_i. From (5) and (6), it follows that w_i and c_1,i decrease more slowly as lambda_i increases. On the other hand, according to (7), the variation in c_2,i becomes larger as lambda_i increases. This implies that, for large lambda_i, the exploration capability of SASPSO 2011 tends to be retained. According to (11), a large lambda_i indicates that the personal best position of the particle is far away from the global best position of the swarm. Therefore, when lambda_i is large, it is natural to promote the exploration capability of the algorithm, so that the particle performs a global search and can quickly move closer to the global best position.
In contrast, when lambda_i is small, the exploitation ability of the algorithm takes over from the exploration ability more rapidly as lambda_i decreases. It is also natural to strengthen the exploitation ability in this case because, according to (11), a small lambda_i implies that the personal best position of the particle is close to the global best position of the swarm. By strengthening the exploitation ability, particles tend toward a local search around the global best position, which increases the possibility of improving the quality of the global best solution.
Briefly, by utilizing the proposed self-adaptive strategy, the three control parameters of the algorithm can be adaptively adjusted, complying with the basic philosophies of PSO development. Hence, SASPSO 2011 is expected to have an improved ability to find high-quality solutions. Figure 1 demonstrates how the three control parameters change with respect to different values of lambda_i, with w decreasing from 0.9 to 0.1, c_1 decreasing from 2.5 to 0.5, c_2 increasing from 0.5 to 2.5, and t_max = 100.
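The qualitative behavior described above can be captured in a short sketch. Equations (5)-(7) are not reproduced in this text, so the schedule below is our own illustrative stand-in, not the paper's exact strategy: w and c_1 decay from their initial to their final values, c_2 grows, and a large lambda (the normalized distance between the personal and global best, per (11)) slows the transition to preserve exploration.

```python
import numpy as np

def adaptive_params(t, t_max, lam,
                    w_s=0.9, w_f=0.1,
                    c1_s=2.5, c1_f=0.5,
                    c2_s=0.5, c2_f=2.5):
    """Illustrative self-adaptive schedule (stand-in for Eqs. (5)-(7)):
    w and c1 decay, c2 grows; a large lam in [0, 1] slows the transition."""
    progress = np.clip((t / t_max) * (1.0 - lam), 0.0, 1.0)
    w = w_s + (w_f - w_s) * progress
    c1 = c1_s + (c1_f - c1_s) * progress
    c2 = c2_s + (c2_f - c2_s) * progress
    return w, c1, c2
```

With lam = 0 the parameters move linearly from their initial to final values over t_max iterations; a particle far from the global best (lam near 1) keeps near-initial, exploration-oriented parameters for longer, matching the discussion above.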

Analytical Investigations on SASPSO 2011
3.1. Mathematical Theory of Convergence and Some Basic Concepts. When designing a PSO algorithm, the convergence of the algorithm remains a key issue [15]. However, the stochastic nature of PSO makes it difficult to theoretically investigate the global or local convergence of the algorithm [15,17]. Fortunately, Solis and Wets [29] have studied the conditions under which stochastic algorithms such as PSO can be considered either globally or locally convergent search algorithms. In [29], it is proven that only if a stochastic algorithm satisfies the algorithm condition and the convergence condition can it be considered a locally convergent algorithm. Since we will use these two conditions to analyze the local convergence of SASPSO 2011, these two conditions and some relevant notations presented in [15,29] are first reproduced below for convenience.
Optimality Region. Let D be the mapping that combines the position of a particle x(t) with the current global best solution P_g. In a minimization problem, the optimality region can be mathematically described as follows [15]:

R_e = {x in S | f(x) < f(x*) + e}, (12)

where x* denotes the optimal solution of f on S, f denotes the objective function, S denotes the search space, and e is a positive coefficient.
Algorithm Condition. The algorithm condition on the mapping D stipulates that the solution obtained at the current iteration is not worse than the previous ones. In other words, at each iteration, if the newly obtained fitness dominates the previously best one, the new one replaces the previously best fitness; otherwise, the best fitness remains unchanged.
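In code, the algorithm condition amounts to elitist retention of the best-so-far solution. A minimal sketch for minimization (the function name is our own illustration):

```python
def elitist_map(x_new, g_best, f):
    """Algorithm condition on the mapping D: keep whichever of the new
    solution and the current global best has the lower objective value."""
    return x_new if f(x_new) <= f(g_best) else g_best
```

Any algorithm using such a mapping produces a monotonically nonincreasing sequence of best objective values, which is exactly what the condition requires.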
Convergence Condition. Based on the definition of the optimality region R_e given by (12), convergence to a local optimum is obtained under the following sufficient condition: for all x in S, there exist gamma in R, gamma > 0, and eta in R, 0 < eta < 1, such that [15]

P[dist(x(t+1), R_e) <= dist(x(t), R_e) - gamma, or x(t+1) in R_e] >= eta, (13)

where P denotes a probability measure, x(t) denotes the solution obtained by the optimization algorithm at iteration t, and dist(x, A) is the distance between a point x and a set A. For more details about dist(x, A), the reader is referred to [15].
The convergence condition implies that a stochastic algorithm can be considered a locally convergent optimization algorithm if, at each iteration, the particle has a nonzero probability of moving closer to the optimality region R_e by at least a distance gamma, or if the particle is already located in R_e with probability no less than eta.

Theorem 1. Suppose f is a measurable function, S is a measurable subset of R^n, and a stochastic algorithm satisfies both the algorithm and convergence conditions. Then, considering the sequence {x(t)} for t = 0, 1, ..., searched by the algorithm, the following condition holds [29]:

lim (t -> infinity) P[x(t) in R_e] = 1, (14)

where P[x(t) in R_e] denotes the probability that the algorithm converges to the optimality region R_e.

Theorem 1 indicates that only if a stochastic algorithm satisfies the algorithm condition and the convergence condition stated above can it be considered a locally convergent algorithm, with the local optimality of the algorithm at least guaranteed. The proof of this theorem can be found in [29]. In a minimization problem, after each iteration, the mapping D in SASPSO 2011 is updated according to the algorithm condition: the retained best solution is replaced only when the newly obtained solution is better.

Stability Analysis for SASPSO 2011.
Prior to verifying that SASPSO 2011 satisfies the convergence condition, we first investigate the stability of SASPSO 2011. The aim of the stability analysis is to find boundaries of the three control parameters that guarantee the trajectory convergence of particles in SASPSO 2011. Note that this study focuses on the deterministic model stability analysis [17] of SASPSO 2011, in the case where x'_i(t) = G_i(t). Without loss of generality, omitting the subscript i of each variable in (1)-(3) for simplicity, and writing phi = (c_1 + c_2)/3, the update rules of particles in SASPSO 2011 can be rewritten in matrix form as

y(t+1) = A y(t) + B, (16)

where y(t) = [x(t), v(t)]^T and A collects the coefficients w and phi of the deterministic model. Solving |rE - A| = 0, where E is the identity matrix of the same size as A, the characteristic equation of the dynamic system (16) is derived as

r^2 - (1 + w - phi) r + w = 0, (18)

whose two roots, denoted r_1,2, are

r_1,2 = [(1 + w - phi) +/- sqrt((1 + w - phi)^2 - 4w)] / 2. (19)

In the context of dynamic system theory, the necessary and sufficient condition for the convergence of system (16) is that the magnitudes of r_1 and r_2 are both less than 1 [15,30]. As (19) shows that r_1 and r_2 can be real or complex, we discuss the two cases separately to analyze the convergence of system (16).
(1) The first case is where r_1,2 are complex, denoted r_1,2 in C.

Lemma 2. For system (16), r_1,2 in C if and only if

(1 + w - phi)^2 - 4w < 0. (21)

Proof. For system (16), it is obvious that r_1,2 in C if and only if the discriminant of (18) is negative. Solving the right-hand side of (22) by the classical approach, Lemma 2 is easily proven.

Now, let us find conditions on w and phi guaranteeing the convergence of system (16) in the case where r_1,2 in C.

Lemma 3. In this case, system (16) converges if and only if w < 1 holds together with (21).

Proof. The magnitude of a complex number r = a + bi can be calculated as |r| = sqrt(a^2 + b^2). For complex conjugate roots of (18), |r_1,2|^2 equals the product of the roots, which is the constant term w; hence |r_1,2| < 1 if and only if w < 1. In the case where r_1,2 in C, the convergent region of SASPSO 2011 is shown in Figure 2.
(2) The second case is where r_1 and r_2 are real, denoted r_1,2 in R.

Lemma 4. For system (16), r_1,2 in R if and only if

(1 + w - phi)^2 - 4w >= 0. (27)

Proof. For system (16), it is clear that r_1,2 in R if and only if the discriminant of (18) is nonnegative. Solving the right-hand side of (28) by the classical approach, Lemma 4 is easily proven.
Then, let us find conditions on w and phi guaranteeing the convergence of system (16) in the case where r_1,2 in R. According to (19) and (20), |r_1,2| < 1 requires

-1 < [(1 + w - phi) +/- sqrt((1 + w - phi)^2 - 4w)] / 2 < 1. (30)

Since r_1,2 in R, solving the right-hand inequalities in (31) yields phi > 0 and phi < 2(1 + w), while, according to Lemma 4, (27) must also hold in this case. Then, considering both cases r_1,2 in C and r_1,2 in R together, system (16), that is, SASPSO 2011, converges if and only if

-1 < w < 1, 0 < phi < 2(1 + w). (34)

Figure 3 illustrates the convergent region of SASPSO 2011 in both cases r_1,2 in R and r_1,2 in C. Only if the parameter selection of w and phi lies in the triangular area shown in Figure 3 does SASPSO 2011 guarantee the trajectory convergence of particles. When SASPSO 2011 converges, lim (t -> infinity) x(t+1) = lim (t -> infinity) x(t) and lim (t -> infinity) v(t+1) = lim (t -> infinity) v(t). Substituting these two equations into (35) yields the equilibrium point

x_eq = (c_1 P_p + c_2 P_g) / (c_1 + c_2), (36)

where P_p and P_g denote the personal best position of the particle and the global best position of the swarm, respectively.
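The combined condition can be checked numerically by computing the roots of the characteristic equation directly. In this sketch (our own illustration), w is the inertia weight and phi stands for (c_1 + c_2)/3, the combined acceleration coefficient of the deterministic model:

```python
import numpy as np

def is_stable(w, phi):
    """Trajectory convergence of the deterministic model: both roots of
    r**2 - (1 + w - phi)*r + w = 0 must lie strictly inside the unit circle."""
    roots = np.roots([1.0, -(1.0 + w - phi), w])
    return bool(np.all(np.abs(roots) < 1.0))
```

Away from the boundary, this root test agrees with the closed-form triangle -1 < w < 1, 0 < phi < 2(1 + w) derived above, which is a convenient cross-check of the derivation.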
Through the stability analysis, it can be concluded that the trajectories of particles converge to the equilibrium point given by (36) if and only if the convergence condition given by (34) is satisfied. Note that only the trajectory convergence of particles can be assessed through the stability analysis; the global or local optimality of the equilibrium point cannot be assessed in this way.

Verification of the Convergence Condition.
As stated in Section 3.1, only if a stochastic algorithm satisfies the algorithm and convergence conditions can it be considered a locally convergent algorithm; namely, the local optimality of the algorithm can be at least guaranteed. In Section 3.2.1, it is proven that SASPSO 2011 satisfies the algorithm condition. Therefore, we still need to show that SASPSO 2011 satisfies the convergence condition in order to establish the local convergence of the algorithm.
Before proving that SASPSO 2011 satisfies the convergence condition described in Section 3.1, we first reproduce some notations presented in [15,29] for convenience. Let x_0 denote the worst particle in a swarm of N particles. In a minimization problem, the worst particle is the one with the largest cost function value f in the swarm; hence, x_0 can be defined as x_0 = argmax{f(x_i)}, where 1 <= i <= N. From the worst particle, the convex compact set L_0 can be defined as the set in which all points have a fitness value smaller than or equal to f(x_0) [15,29]. By the definition of L_0, the positions P_p and P_g in (36) are in L_0. Since L_0 is convex and P_p and P_g are in L_0, any point along the line connecting P_p and P_g is also in L_0. Since the equilibrium point given by (36) lies on this line, it is in L_0. Moreover, in Section 3.2.2, it is proven that the trajectories of particles in SASPSO 2011 converge to the equilibrium point given by (36) if and only if the convergence condition given by (34) is satisfied. Although the optimality of this equilibrium point cannot be stated from this proof, it allows us at least to conclude that, regardless of its initial position, a particle converges to the equilibrium point given by (36), which is in L_0. In other words, SASPSO 2011 can always generate a new point that can be sampled arbitrarily close to the equilibrium point given by (36). Thus, there always exists a nondegenerate sampling hypersphere that guarantees a new point can be sampled arbitrarily close to this equilibrium point in L_0.
As stated in Section 2.1, stagnation is prevented in SPSO 2011 by adding a random point x'_i(t) to the particle's velocity, as given in (2) [22,23]. Since the same moving rules defined in SPSO 2011 are used in SASPSO 2011, SASPSO 2011 is also a nonstagnant algorithm. Considering this nonstagnation property, the algorithm can always improve the fitness of the global best position P_g with a nonzero probability. Thus, given any starting point in L_0, SASPSO 2011 guarantees a nondegenerate sampling volume with a nonzero probability of sampling a point closer to the optimality region R_e, as described by the convergence condition in Section 3.1. Hence, it is sufficient to conclude that SASPSO 2011 satisfies the convergence condition. Note that the authors in [15] used the same method to prove that their PSO algorithm satisfies the convergence condition. Since SASPSO 2011 satisfies both the algorithm and convergence conditions, it is a locally convergent algorithm.

Convergence Behavior of Particles in SASPSO 2011.
Before particles converge to the equilibrium point given in (36), they may oscillate in different ways around it, depending on the values of w and phi. Since different convergence oscillations may influence the quality of the final solution found by particles [17,18], it is necessary to investigate the oscillation behavior of particles. Four typical oscillations of particles in SASPSO 2011 are shown in Figure 5.
Nonoscillatory convergence, as shown in Figure 5(a), leads particles to search on only one side of the equilibrium point, which may be useful when the search space is bounded. Particles exhibit nonoscillatory convergence when r_1 and r_2 are real and at least one of them is positive, which is equivalent to 0 <= (1 + w - phi)^2 - 4w and 0 < 1 + w - phi. Harmonic oscillation, as demonstrated in Figure 5(b), may be beneficial in the exploitation stage, since particles oscillate smoothly around the equilibrium point. Harmonic oscillation occurs when r_1 and r_2 are complex, that is, (1 + w - phi)^2 - 4w < 0. Zigzagging convergence, as illustrated in Figure 5(c), may also facilitate exploitation as particles zigzag around the equilibrium point, which may be suitable for optimization problems with rugged search spaces. Particles display zigzagging convergence when at least one of r_1 and r_2 has a negative real part, that is, w < 0 or 1 + w - phi < 0. The combination of harmonic and zigzagging behavior, as visualized in Figure 5(d), may be beneficial for the transition from exploration to exploitation due to its mixed nature; it emerges when the two complex roots r_1 and r_2 have a negative real part, that is, (1 + w - phi)^2 - 4w < 0 and 1 + w - phi < 0. If the boundaries of the coefficients associated with these oscillations are known beforehand, one may design an adaptive method to change the values of these coefficients, so that the convergence of PSO is guaranteed and the quality of the final solution found by particles can be improved.
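The regimes above can be enumerated programmatically. This sketch (our own illustration, with phi = (c_1 + c_2)/3) collapses the partially overlapping conditions into one coarse label per (w, phi) pair:

```python
def oscillation_type(w, phi):
    """Classify the convergence oscillation of the deterministic model."""
    a = 1.0 + w - phi        # sum of the two characteristic roots
    disc = a * a - 4.0 * w   # discriminant of the characteristic equation
    if disc < 0.0:           # complex roots
        return "harmonic+zigzagging" if a < 0.0 else "harmonic"
    # real roots: negative root(s) produce zigzagging
    return "zigzagging" if (w < 0.0 or a < 0.0) else "non-oscillatory"
```

Such a classifier could serve as the starting point for the adaptive coefficient-tuning method suggested at the end of this subsection.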

Parameter Selection Principle for SASPSO 2011.
In Section 3.2.2, it is proven that SASPSO 2011 converges to the equilibrium point given by (36) if and only if the convergence condition given by (34) is satisfied. Now, we need to answer how to set the initial and final values of w, c_1, and c_2 to guarantee the convergence of SASPSO 2011. Please note that, without loss of generality, the subscript i is omitted from w, c_1, and c_2 for simplicity.
Since the initial and final values of w, c_1, and c_2 are predefined parameters, the convergence condition given by (39) can be easily satisfied by setting proper values for them. Figure 6 shows the convergent position and velocity trajectories of a particle in SASPSO 2011 under the suggested parameter selection: w decreasing from 0.9 to 0.1, c_1 decreasing from 2.5 to 0.1, and c_2 increasing from 0.1 to 2.5.

To easily and efficiently solve COPs, the task of handling the constraints of COPs must be addressed. Among the existing constraint-handling techniques, the penalty function method may be the most common approach [9]. By adding penalty terms to the objective function, the penalty function method transforms a constrained optimization problem into an unconstrained one. The drawback of this method is that determining proper values for the penalty factors is time-consuming and problem-dependent, requiring prior experience from the users [31]. To overcome this shortcoming and increase the diversity of solutions, the adaptive relaxation method [4,5], integrated with the feasibility-based rule [24,25], is applied to handle the constraints of COPs and evaluate candidate solutions in this paper.

Applying SASPSO 2011 for Solving COPs
In the adaptive relaxation method, in order to handle the equality and inequality constraints of COPs, the total constraint violation value of particle i is first calculated as in (46) [4,5], as the sum of its inequality-constraint violations and its tolerance-relaxed equality-constraint violations, where the constraint functions have the same definitions as those in (43) and (44). After calculating the total constraint violation value of each particle, the median of the total constraint violation values of all particles is assigned to the initial constraint relaxation value mu. If the total constraint violation value of a particle is less than mu, the particle is temporarily considered a feasible solution; otherwise, it is temporarily considered a nonfeasible solution. During the evolution, mu is gradually reduced according to the fraction of feasible particles with respect to the relaxed constraints. Given a swarm of N particles, mu is updated from iteration t to iteration t + 1 according to (47) [4]. From (47), it is clear that more nonfeasible particles are allowed into the next iteration in the early phase of the evolution, which adds diversity to the swarm [4]. As the evolution continues, the relaxation value mu adaptively decreases as more and more feasible solutions are found. Hence, the relaxed feasible region gradually shrinks until it converges to the real feasible region, which may increase the possibility of finding optimal solutions [5].
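The mechanism can be sketched in two small functions. The violation measure follows the standard form for constraints g(x) <= 0 and h(x) = 0 with an equality tolerance; since Eq. (47) is not reproduced in this text, the mu-update below is our own illustrative stand-in that merely shares its qualitative behavior (mu shrinks faster as the feasible fraction grows):

```python
def total_violation(x, ineq_cons, eq_cons, tol=1e-4):
    """Total constraint violation: inequality constraints g(x) <= 0 contribute
    max(0, g(x)); equality constraints h(x) = 0 are relaxed by a tolerance."""
    v = sum(max(0.0, g(x)) for g in ineq_cons)
    v += sum(max(0.0, abs(h(x)) - tol) for h in eq_cons)
    return v

def update_relaxation(mu, n_feasible, n_particles):
    """Illustrative relaxation update (stand-in for Eq. (47)): shrink mu
    in proportion to the fraction of mu-feasible particles."""
    return mu * (1.0 - n_feasible / n_particles)
```

A particle with total_violation(x, ...) <= mu is temporarily treated as feasible; as the feasible fraction approaches 1, mu approaches 0 and the relaxed feasible region collapses onto the real one.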
Since mu is adaptively updated, no additional penalty factor is needed in the adaptive relaxation method, which reduces the optimization difficulties [5].
After calculating the fitness and constraint violation values of each particle, the feasibility-based rule [24,25] is applied to evaluate and select the better of any two candidate solutions. The feasibility-based rule can be described as follows: (1) for any two solutions with the same constraint violation value, the solution with the better fitness value is preferred; (2) for any two solutions with different constraint violation values, the solution with the smaller constraint violation value is preferred.
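The rule is a simple lexicographic comparison. A minimal sketch for minimization, with each solution represented as a (fitness, violation) pair (our own representation, chosen for illustration):

```python
def better(sol_a, sol_b):
    """Feasibility-based rule for minimization: compare constraint violation
    first, and fall back to fitness when violations are equal."""
    f_a, v_a = sol_a
    f_b, v_b = sol_b
    if v_a == v_b:
        return sol_a if f_a <= f_b else sol_b
    return sol_a if v_a < v_b else sol_b
```

In the framework, this comparator is used both to update each particle's personal best and to update the global best of the swarm.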
As the fitness and constraint violation information is considered separately in the feasibility-based rule, no additional parameter is needed when using this rule, which decreases the optimization difficulties [24,25]. Moreover, although nonfeasible solutions violate some constraints, they may also contain information that is useful for finding good solutions [5]. When this information is considered, the likelihood of finding high-quality solutions may be increased. This is one main reason why nonfeasible solutions are considered in the feasibility-based rule.
Besides the equality and inequality constraints, each design variable must satisfy its boundary constraints, as shown in (45). When any variable x_j (j = 1, 2, ..., n) violates its boundary constraints, the saturation strategy given by (48) is applied to modify x_j by clipping it to the violated bound [9], where x_j and its lower and upper bounds have the same definitions as those in (45).
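The saturation strategy is plain element-wise clipping; a one-line sketch:

```python
import numpy as np

def saturate(x, lower, upper):
    """Saturation strategy: clip each design variable to its bounds."""
    return np.clip(x, lower, upper)
```

Applied after every position update, this keeps all particles inside the box defined by the boundary constraints without introducing any extra parameter.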

Numerical Simulations and Analysis
To verify the proposed method, it is tested on 4 benchmark test functions and 2 real-world engineering problems against time-varying PSO (TVPSO) [9], constrained PSO (CPSO) [10], fine-grained inertia weight PSO (FGIWPSO) [28], stability-based adaptive inertia weight PSO (SAIWPSO) [21], SPSO 2011 [22,23], random PSO (RPSO) [32], and some well-established methods from the literature. The 4 benchmark test functions and 2 real-world engineering problems are listed in the Appendix. In order to reduce random discrepancy, a Monte Carlo experiment with 50 independent runs is conducted for each studied problem. After the 50 independent runs, the best, mean, and worst results, as well as the standard deviation of each method, are examined and compared. In each run, the final solution of each method is obtained after 4000 iterations with 100 particles, using MATLAB 2012b on a Windows 8 personal computer with an i3-2350 @ 2.30 GHz and 2 GB RAM. The simulation parameters for the different methods are shown in Table 1.

Simulations on 4 Benchmark Test Functions.
This subsection presents the simulation results of all methods on the 4 benchmark test functions. The mathematical characteristics of the 4 benchmarks are shown in Table 2, where NV denotes the number of variables, LI and NI are the numbers of linear and nonlinear inequality constraints, respectively, and LE and NE are the numbers of linear and nonlinear equality constraints, respectively. After performing 50 independent runs, the best, mean, worst, and standard deviation results of all methods for the four test functions are summarized in Tables 3-6. Note that, as shown in Tables 3-6, the statistical results of Krohling and Coelho [33] and Mezura-Montes and Coello Coello [34] are extracted from their corresponding references. In all cases, the best results with respect to the best, mean, and worst values and the standard deviation are highlighted in boldface in these tables. Figure 7 displays the fitness curves of the best results of all PSO methods for the four test functions.
From Tables 3-6, it is evident that SASPSO 2011 and SAIWPSO are the first and second most efficient algorithms in terms of the average solution, which implies that SASPSO 2011 and SAIWPSO outperform the other algorithms in finding high-quality average solutions for the four test functions. The reason why our proposed algorithm and SAIWPSO are more robust than the other algorithms may be that these two algorithms are stability-based algorithms: by setting proper values of the three control parameters, the stability and convergence of these two algorithms can be guaranteed. It is notable from Tables 3-6 that the difference in the standard deviation between our proposed method and SAIWPSO is not significant for each test function. Therefore, summarizing the simulation results shown in Tables 3-6, it can be concluded that the proposed method is highly competitive in solving the 4 benchmarks in terms of the solution quality. Moreover, the search robustness of the proposed method is comparable with that of SAIWPSO and outperforms those of the other methods.

(1) Set simulation parameters and randomly generate an initial swarm
(2) Obtain p_i(0), l_i(0), and g(0) at the initial iteration
(3) while t ≤ T_max do
(4)   N_f = 0 % set the number of feasible solutions to be 0
(5)   for i = 1 : N do
(6)     G_i(t) ← x_i(t) + (c1[p_i(t) − x_i(t)] + c2[l_i(t) − x_i(t)])/3 % calculate the center G_i(t)
(7)     Randomly generate x'_i(t) within the hypersphere H(G_i(t), ‖G_i(t) − x_i(t)‖)
(8)     v_i(t) ← ω v_i(t) + x'_i(t) − x_i(t) % update velocity of particle i
(9)     x_i(t) ← x_i(t) + v_i(t) % update position of particle i
(10)    Modify each dimension of x_i(t) by the saturation strategy given by (48)
(11)    Calculate the constraint violation viol_i of particle i
(12)    if viol_i ≤ ε(t) do
(13)      Particle i is feasible and N_f = N_f + 1
(14)    else
(15)      Particle i is non-feasible
(16)    end if
(17)    Calculate fitness value f(x_i(t)) of particle i
(18)    Update the personal best position p_i(t) of particle i by the feasibility-based rule
(19)  end for
(20)  Update the global best position g(t) of the swarm by the feasibility-based rule
(21)  …

Algorithm 1: The SASPSO 2011-based framework for solving COPs.
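As a concrete illustration of the constraint handling used in Algorithm 1, the sketch below (a minimal Python version, not the paper's code) shows a standard total-violation measure and the feasibility-based rule [24,25] for comparing two candidates. The relaxed tolerance `eps_t` stands in for the adaptive relaxation of equality constraints; how it shrinks over iterations is not reproduced here.

```python
def violation(x, ineq, eq, eps_t):
    """Total constraint violation of candidate x; x is feasible when this is 0.
    ineq holds functions g with g(x) <= 0 required; eq holds functions h with
    |h(x)| <= eps_t required under the (adaptively relaxed) tolerance eps_t."""
    v = sum(max(0.0, g(x)) for g in ineq)
    v += sum(max(0.0, abs(h(x)) - eps_t) for h in eq)
    return v

def better(xa, xb, f, ineq, eq, eps_t):
    """Feasibility-based rule: True if candidate xa should replace xb."""
    va = violation(xa, ineq, eq, eps_t)
    vb = violation(xb, ineq, eq, eps_t)
    if va == 0.0 and vb == 0.0:
        return f(xa) < f(xb)   # both feasible: smaller objective wins
    if va == 0.0 or vb == 0.0:
        return va == 0.0       # a feasible candidate beats an infeasible one
    return va < vb             # both infeasible: smaller violation wins
```

In Algorithm 1, this comparison drives steps (18) and (20): a particle's new position replaces its personal best (and possibly the global best) only when `better` returns True.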

Application on Two Real-World Engineering Problems.
To further verify the proposed method, it is applied to solve two practical engineering problems: the tension compression spring design problem and the three-bar truss design problem. Note that the performance of the proposed algorithm is still compared with those of the aforementioned PSO algorithms and some other well-known methods proposed in the literature.

Tension Compression Spring Design Problem.
The statistical results of different methods for the tension compression spring design problem are shown in Table 7. Note that, in this table, the statistical results of PSO-DE [20] and CDE [35] are extracted from their corresponding literature. The fitness curves of the best results searched by all PSO algorithms for this problem are exhibited in Figure 8.
From Table 7, it can be seen that SASPSO 2011 and PSO-DE [20] provide the best performance in terms of the average solution, and SAIWPSO ranks second in terms of the average solution. From this table, it is also interesting to find that SASPSO 2011 is the best one in terms of the best and worst solutions. Therefore, it can be concluded that our proposed method performs similarly to PSO-DE [20] and slightly outperforms the other methods in terms of the solution quality. Moreover, according to the standard deviation of each method, as shown in Table 7, it is apparent that SAIWPSO and SASPSO 2011 are the most robust and second most robust algorithms in solving this problem.
From Table 8, it is worth noting that the best solutions searched by all methods are feasible, since the design variables x1, x2, and x3 and the constraint values g1, g2, and g3 of these best solutions satisfy the corresponding constraints of the tension compression spring design problem. This, to some extent, reflects the feasibility of these methods in solving the tension compression spring design problem.

Three-Bar Truss Design Problem.
The statistical results of different methods for the three-bar truss design problem are shown in Table 9, in which the statistical results of Ray and Liew [36] and CSA [37] are obtained from their corresponding literature. Table 10 shows the information of the best solution of each method for this design problem. The fitness curves of the best results searched by all PSO algorithms are exhibited in Figure 9.
It is evident from Table 9 that SASPSO 2011 and SAIWPSO are the most efficient and second most efficient algorithms in finding the best, mean, and worst solutions. Therefore, we can conclude that our proposed method is highly competitive in solving this problem in terms of the solution quality. Additionally, it is notable from Table 9 that SAIWPSO and SASPSO 2011 provide the least and second least standard deviations in solving this problem, which indicates that these two algorithms are more stable than the other algorithms. Besides, it can be found from Table 10 that the best solution searched by each method satisfies the constraints of the three-bar truss design problem, which, to a certain degree, reflects the feasibility of each method in solving this problem.

Conclusion and Future Work
In this study, a modified PSO algorithm, called SASPSO 2011, is proposed based on SPSO 2011. In order to enhance the performance of SASPSO 2011, a novel self-adaptive strategy is developed for updating the three control parameters of particles in the algorithm. By fine-tuning the three control parameters, the stability and convergence of the algorithm can be guaranteed. To add some diversification to the swarm and decrease optimization difficulties, the adaptive relaxation method [4,5] is integrated with the feasibility-based rule [24,25] to handle constraints of COPs and evaluate candidate solutions in our developed framework. The proposed method is verified on 4 benchmark test functions and 2 real-world applications against six state-of-the-art PSO variants and several well-known methods presented in the literature. The simulation results show that the proposed method is highly competitive in finding high-quality solutions. Furthermore, the search robustness of the proposed method is comparable with that of SAIWPSO [21] and outperforms those of the other compared methods. Hence, the proposed method can be considered as an efficient optimization approach to solve COPs.
There are some issues that deserve future study. Since the three control parameters of PSO also determine the convergence speed of PSO, the first issue is how to theoretically analyze the sensitivity of the convergence speed of SASPSO 2011 to its three control parameters. Another issue is to modify SASPSO 2011 so that the global convergence of the algorithm can be achieved. Moreover, more advanced mechanisms can be developed for better updating the three control parameters of SASPSO 2011, so that the search robustness of the algorithm can be further enhanced. One potential way is to follow the idea proposed by Taherkhani and Safabakhsh [21] to update each control parameter in different dimensions. Last but not least, we are also considering the possibility of developing some new constraint handling techniques, since how the constraints of COPs are handled influences the quality of the obtained solution.

Figure 1: Changes of the three control parameters under different   in SASPSO 2011.

Figure 6: Convergent position and velocity trajectories of the particle in SASPSO 2011 under the suggested parameter selection.

Figure 7: The best fitness curves of different PSO algorithms for the four benchmark test functions.

Figure 9: The best fitness curves of different PSO algorithms for the three-bar truss design problem.
in the case where λ1,2 ∈ ℂ. It is trivial that the system converges if and only if max{|λ1|, |λ2|} < 1.

Lemma 3. System (16) converges in the case where λ1,2 ∈ ℂ if and only if λr² + λi² < 1, where λr and λi denote the real and imaginary parts of λ.

For λ1,2 ∈ ℂ, according to Lemma 2, (21) must hold. Hence, combining the conditions λ1,2 ∈ ℂ and max{|λ1|, |λ2|} < 1, system (16), that is, SASPSO 2011, converges if and only if condition (21) is satisfied.
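The equivalence invoked in Lemma 3 is the complex-modulus condition, written out below as a one-line justification. It assumes (as is standard in PSO stability analyses) that the characteristic polynomial of system (16) has real coefficients, so complex eigenvalues occur in conjugate pairs and share the same modulus.

```latex
% For \lambda_{1,2} \in \mathbb{C}, \lambda_2 = \overline{\lambda_1},
% hence |\lambda_1| = |\lambda_2|, and
\[
\max\{\lvert\lambda_1\rvert, \lvert\lambda_2\rvert\} < 1
\iff \lvert\lambda\rvert^2 = \lambda_r^2 + \lambda_i^2 < 1 ,
\]
% which is exactly the condition stated in Lemma 3.
```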

Table 1: The simulation parameters for different methods.

Table 2: Mathematical characteristics of the 4 benchmark test functions.
4.3. The SASPSO 2011-Based Framework for Solving COPs. Let N denote the size of the swarm and let T_max denote the maximum iteration number. The algorithmic scheme of applying SASPSO 2011 to solve COPs is illustrated in Algorithm 1.
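The per-particle update at the core of the framework (steps (6)-(9) of Algorithm 1) follows the SPSO 2011 geometry: compute a center of gravity G_i, sample a random point in the hypersphere H(G_i, ‖G_i − x_i‖), then update velocity and position. The Python sketch below illustrates one such step; it is not the paper's SASPSO 2011 code, and the control parameters `w`, `c1`, `c2` are passed in directly rather than set by the self-adaptive strategy.

```python
import math
import random

rng = random.Random(0)

def sample_sphere(center, radius):
    """Uniform random point inside the hypersphere H(center, radius)."""
    d = len(center)
    u = [rng.gauss(0.0, 1.0) for _ in range(d)]     # random direction
    norm = math.sqrt(sum(c * c for c in u)) or 1.0
    r = radius * rng.random() ** (1.0 / d)          # uniform-in-volume radius
    return [c + r * ui / norm for c, ui in zip(center, u)]

def spso2011_step(x, v, pbest, lbest, w, c1, c2):
    """One SPSO 2011-style update for a single particle (a sketch)."""
    d = len(x)
    # Center of gravity of x and the points attracted toward pbest and lbest.
    g = [x[j] + (c1 * (pbest[j] - x[j]) + c2 * (lbest[j] - x[j])) / 3.0
         for j in range(d)]
    xp = sample_sphere(g, math.dist(g, x))  # random point in H(g, ||g - x||)
    v_new = [w * v[j] + xp[j] - x[j] for j in range(d)]
    x_new = [x[j] + v_new[j] for j in range(d)]
    return x_new, v_new
```

In the full framework, this step would be followed by the saturation strategy for out-of-bound dimensions and the feasibility-based updates of the personal and global best positions, as listed in Algorithm 1.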

Table 5: Statistical results of different methods for test function g10 (f* = 7049.248).

… the best worst solutions for g07, g10, and g01. Moreover, it is clear from Tables 3-6 that SAIWPSO and SASPSO 2011 are the most stable and second most stable algorithms, since they can provide the least and second least standard deviations in solving the 4 benchmark test functions.

Table 8 exhibits the details of the best solution searched by each method.

Table 7: Statistical results of different methods for the tension compression spring design problem.

Table 8: The best results of different methods for the tension compression spring design problem.

Table 9: Statistical results of different methods for the three-bar truss design problem ("NA" means unknown).