Multipopulation Ensemble Particle Swarm Optimizer for Engineering Design Problems

Particle swarm optimization (PSO) is an efficient optimization algorithm and has been applied to solve various real-world problems. However, the performance of PSO on a specific problem highly depends on the velocity updating strategy. For a real-world engineering problem, the function landscapes are usually very complex and problem-specific knowledge is sometimes unavailable. To respond to this challenge, we propose a multipopulation ensemble particle swarm optimizer (MPEPSO). The proposed algorithm consists of three existing efficient and simple PSO searching strategies. The particles are divided into four subpopulations: three indicator subpopulations and one reward subpopulation. Particles in the three indicator subpopulations update their velocities by different strategies. During every learning period, the improved function values of the three strategies are recorded. At the end of a learning period, the reward subpopulation is allocated to the best-performing strategy. Therefore, the appropriate PSO searching strategy receives more of the computational budget. The performance of MPEPSO is evaluated on the CEC 2014 test suite and compared with six other efficient PSO variants. The results suggest that MPEPSO ranks first among these algorithms. Moreover, MPEPSO is applied to solve four engineering design problems, and the results show its advantages. The MATLAB source codes of MPEPSO are available at https://github.com/zi-ang-liu/MPEPSO.


Introduction
Particle swarm optimization (PSO), proposed by Kennedy and Eberhart [1], is a stochastic and population-based optimization algorithm. PSO is one of the most popular optimization algorithms because of its fast convergence rate, easy implementation, and effective performance on many optimization problems [2]. PSO has been successfully applied in various fields, such as engineering design problems [3-6], feature selection problems [7-9], scheduling problems [10-12], and supply chain management [13-15]. To further improve the performance of PSO, many PSO variants have been proposed by adapting the control parameters [16-18], designing new search strategies [19-22], introducing new population topologies [23, 24], and hybridizing PSO with other algorithms [25-28], among others. The performance of PSO on a specific problem highly depends on the velocity updating strategy [29]. For example, PSO with time-varying acceleration coefficients (PSO-TVAC) [30] performs well on unimodal problems but poorly on multimodal problems. On the other hand, the distance-based locally informed PSO (LIPS) [31] is specially designed for solving multimodal problems. Furthermore, it has been shown that using different PSO searching strategies for a specific problem at different stages may improve the performance of the algorithm [29]. Therefore, it is important to find suitable searching strategies for a given problem at different stages.
However, real-world engineering problems are usually very complex and problem-specific knowledge is sometimes unavailable. As a result, it is hard to analytically find the appropriate strategy for these problems. One way to select a suitable PSO searching strategy for real-world engineering problems is to adopt a trial-and-error approach. However, this approach is usually very time-consuming. On the other hand, by incorporating several different PSO searching strategies into one algorithm, and let the algorithm dynamically change the searching strategies during the computation process may enable us to solve various problems efficiently.
These previously proposed algorithms [29, 32-34] have been tested on classical benchmark functions or the CEC 2005 test suite. However, few of them have been applied to solve engineering design problems. In HPSO, the particles randomly change their search strategies. In HCLPSO, the proportion of strategies stays the same. SL-PSO uses a probability model to update the adoption probability of a strategy, and EPSO uses a success rate to update the adoption probability of a strategy. Instead of using these methods, the novelty of this paper is that MPEPSO dynamically assigns the particles in the reward subpopulation to the different PSO strategies based on their performances. The purpose of this paper is to propose a novel PSO algorithm based on the ensemble approach and examine its performance in solving four well-known engineering design problems: the tension/compression spring design problem, the pressure vessel design problem, the cantilever beam design problem, and the gear train design problem. The tension/compression spring design problem is a nonlinear programming problem, and the pressure vessel design problem is a mixed-integer nonlinear programming problem. The presence of integer variables along with continuous variables often occurs in engineering design and adds to the complexity of the optimization problem [35].
In this paper, three existing simple and efficient PSO search strategies, linearly decreasing inertia weight PSO (LDWPSO) [16], unified PSO (UPSO) [36], and comprehensive learning PSO (CLPSO) [20], are incorporated into the proposed MPEPSO algorithm. These three strategies are selected because their searching behaviors are different yet complementary, so they can support each other during the computational process. LDWPSO performs well on unimodal problems but suffers from premature convergence and usually converges to a local optimum quickly. CLPSO has better performance on multimodal problems and is able to maintain the population diversity, while it performs poorly on unimodal problems [34]. Different from the former two algorithms, UPSO uses a population topology, so that the particles are guided by their neighbors. Also, UPSO balances exploration and exploitation by combining the global PSO and local PSO [36]. In MPEPSO, the particles are divided into four subpopulations: three indicator subpopulations and one reward subpopulation. The fundamental idea of this ensemble method is similar to the approaches in [37, 38]. The indicator subpopulations have a relatively small population size compared with the reward subpopulation. Particles in the three indicator subpopulations update their velocities by the LDWPSO, UPSO, and CLPSO searching strategies, respectively. A certain number of iterations is defined as a learning period. MPEPSO keeps records of the function-value improvements achieved by the three indicator subpopulations during the learning period. At the end of a learning period, the reward subpopulation is allocated to the PSO strategy that performed best during the learning period. The performance of MPEPSO is examined on the CEC 2014 test suite on both 10-dimensional and 30-dimensional problems. Furthermore, MPEPSO is applied to solve four real-world engineering design problems.
The results suggest that the proposed MPEPSO performs well in solving various problems by assembling the different features of these three algorithms.
This study makes several contributions to the current literature. First, a novel multipopulation ensemble particle swarm optimizer is proposed. This is the first time that the multipopulation ensemble strategy has been used to develop a new PSO algorithm. Second, most previous studies of ensemble approach-based PSO algorithms have focused on solving unconstrained benchmark problems, while our proposed algorithm is applied to solve four constrained engineering design problems. Third, the proposed algorithm is evaluated by comparing it with other algorithms on the CEC 2014 test suite. The mean, best, worst, and standard deviation values of the function error values are provided, and the Wilcoxon signed-rank test is conducted to determine the significance of the difference between the proposed algorithm and the other algorithms. The remaining part of the paper proceeds as follows: Section 2 gives a review of the related work. Section 3 presents a detailed introduction of MPEPSO. Computational results are reported in Section 4. Finally, Section 5 summarizes the conclusions.

Particle Swarm Optimization.
PSO is a population-based algorithm. In PSO, the position of a particle represents a solution to the problem. Particles update their velocities based on social and cognitive experience. The velocity V_i^d and position X_i^d are usually updated by the following equations [2]:

    V_i^d = V_i^d + c1 · rand1_i^d · (pbest_i^d − X_i^d) + c2 · rand2_i^d · (gbest^d − X_i^d),
    X_i^d = X_i^d + V_i^d,

where i (i = 1, 2, ..., N) represents the index of particles, N represents the population size, d (d = 1, 2, ..., D) represents the dimension index, D represents the dimensionality of the problem, rand1_i^d and rand2_i^d are two different random numbers within [0, 1], c1 and c2 are the acceleration coefficients, pbest_i^d represents the personal best position of particle i in dimension d, and gbest^d represents the global best position of the swarm in dimension d.
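As a concrete illustration, the update equations above can be written as a minimal NumPy sketch. Function and variable names are our own; the paper's reference implementation is in MATLAB.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, c1=1.49445, c2=1.49445, rng=None):
    """One iteration of the original PSO velocity and position update."""
    rng = np.random.default_rng(0) if rng is None else rng
    N, D = X.shape
    rand1 = rng.random((N, D))  # rand1_i^d, uniform in [0, 1]
    rand2 = rng.random((N, D))  # rand2_i^d, uniform in [0, 1]
    V = V + c1 * rand1 * (pbest - X) + c2 * rand2 * (gbest - X)
    X = X + V
    return X, V
```

Here `gbest` is a length-D vector that broadcasts over the N rows of `X`, so the whole swarm is updated in one vectorized step.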
Only the acceleration coefficients need to be tuned in PSO. c1 and c2 are the weights of the two directions, pbest and gbest. These two acceleration coefficients are set to 2 or 1.49445 in many studies [2]. The time-varying acceleration coefficients approach is another way to set these two parameters [30]. With a large c1 and small c2 at the early stage, the global exploration ability of particles is enhanced. By setting a small c1 and large c2 at the latter stage, the local exploitation ability of particles is improved.
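The time-varying acceleration coefficients of [30] amount to a linear interpolation over the evaluation budget; a minimal sketch with an illustrative function name:

```python
def tvac(fes, max_fes, c1_init=2.5, c1_final=0.5, c2_init=0.5, c2_final=2.5):
    """Time-varying acceleration coefficients [30]:
    c1 decreases and c2 increases linearly with the evaluation count."""
    t = fes / max_fes
    c1 = c1_init + (c1_final - c1_init) * t
    c2 = c2_init + (c2_final - c2_init) * t
    return c1, c2
```

At the start this yields (2.5, 0.5), favoring exploration, and at the end (0.5, 2.5), favoring exploitation.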

Ensemble Approaches in PSO Algorithms.
Ensemble approaches have been widely applied in population-based algorithms; see [39] for a detailed review. PSO is one of the most popular population-based algorithms, and several researchers have proposed improved PSO variants based on the ensemble approach.
Wang et al. [29] incorporated four PSO searching strategies into their proposed algorithm. The execution probabilities of the four strategies are obtained by their proposed self-adaptive learning mechanism. Furthermore, their algorithm has been applied to solve the economic load dispatch problem. The results indicate that this algorithm is able to solve complex large-scale economic load dispatch problems.
Lynn and Suganthan [33] divided all the particles into two subpopulations, an explorative subpopulation and an exploitative subpopulation. Particles in the explorative subpopulation are guided by the personal best experiences of particles in that subpopulation, while particles in the exploitative subpopulation are guided by the personal best experiences of all the particles. The experimental results indicate that their proposed algorithm performs well on the CEC 2005 benchmark functions.
Later, Lynn and Suganthan [34] incorporated five PSO searching strategies into their algorithm. The success memory and the failure memory of each strategy are recorded during the learning period. The PSO strategy with a higher success rate is expected to have a higher adoption probability. The computational results suggest that their proposed algorithm performs well on four types of benchmark functions in the CEC 2005 test suite. The former studies that applied ensemble approaches to the PSO algorithm have shown that incorporating PSO searching strategies with different features into one algorithm may improve the performance of the PSO algorithm.
Several ensemble approaches have been used to develop new PSO variants. Wang et al. [29] used a probability model to control the selection probability of a strategy. Lynn and Suganthan [34] recorded the success rate of a strategy to control its adoption probability. Different from the former studies, our proposed algorithm contains three indicator subpopulations. By evaluating the performance of the three subpopulations, the reward subpopulation is dynamically allocated to the best-performing strategy. Also, few of the previously proposed ensemble approach-based PSO variants have been applied to solve engineering design problems. To fill this gap, the proposed algorithm is also applied to solve four real-world engineering design problems.

Multipopulation Ensemble Particle Swarm Optimizer
The main idea of MPEPSO is to incorporate PSO searching strategies with different features into one algorithm and dynamically allocate particles to the best-performing searching strategy. In this way, the most suitable PSO searching strategy can use more computational resources to improve the performance of the proposed algorithm.
Particles in MPEPSO are divided into three indicator subpopulations and one reward subpopulation. Each indicator subpopulation contains the same small number of particles, and particles in different indicator subpopulations update their velocities by different searching strategies. Indicator subpopulations are used to evaluate the performance of the corresponding searching strategy on a problem. The reward subpopulation is a group of particles that change their searching strategy periodically. The indicator subpopulations have a relatively small number of particles compared with the reward subpopulation. In this paper, particles in the three indicator subpopulations update their velocities by the LDWPSO, UPSO, and CLPSO strategies, respectively. A learning period is defined as a fixed number of iterations. At the end of a learning period, the performance of each indicator subpopulation is evaluated to select the best-performing searching strategy, and the reward subpopulation is then allocated to the corresponding strategy. Therefore, the appropriate PSO searching strategy can have more particles to find a better solution. Figure 1 shows an example of the multipopulation structure in the proposed algorithm with twenty particles. Each indicator subpopulation has four particles, and eight particles are assigned to the reward subpopulation. The particles in the reward subpopulation are dynamically allocated to the best-performing indicator subpopulation.
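The subpopulation split in the Figure 1 example (three indicator subpopulations of four particles each plus a reward subpopulation of eight) can be sketched as follows; the function name and the random-assignment detail are illustrative assumptions, not taken from the paper's MATLAB code.

```python
import random

def partition_swarm(indices, n_indicator, n_strategies=3, rng=None):
    """Randomly split particle indices into `n_strategies` indicator
    subpopulations of `n_indicator` particles each; the remaining
    particles form the reward subpopulation."""
    rng = rng or random.Random(0)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    indicator = [shuffled[h * n_indicator:(h + 1) * n_indicator]
                 for h in range(n_strategies)]
    reward = shuffled[n_strategies * n_indicator:]
    return indicator, reward
```

With `partition_swarm(list(range(20)), n_indicator=4)` this reproduces the 4 + 4 + 4 + 8 split of the Figure 1 example.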

PSO Variants.
To enable MPEPSO to solve various problems, it is important to select component strategies with different capabilities. In this paper, three PSO searching strategies are incorporated in MPEPSO: LDWPSO [16], UPSO [36], and CLPSO [20]. These three strategies are selected because their searching behaviors are different yet complementary; by dynamically adopting these strategies, they can support each other during the computational process. LDWPSO has a strong exploitation ability and is very efficient in solving unimodal problems, while CLPSO is good at exploration and performs well on multimodal problems. UPSO has a population topology in which the particles are guided by their neighbors, and it combines the exploration ability of local PSO with the exploitation ability of global PSO. The velocity of a particle in UPSO is affected by both the local and global PSO variants.

Linearly Decreasing Inertia Weight PSO (LDWPSO).
LDWPSO is one of the most famous PSO variants, proposed by Shi and Eberhart [16]. The velocity V_i^d in LDWPSO is updated as follows:

    V_i^d = ω · V_i^d + c1 · rand1_i^d · (pbest_i^d − X_i^d) + c2 · rand2_i^d · (gbest^d − X_i^d).

By introducing the inertia weight ω into the original PSO algorithm, LDWPSO improves the performance of the original PSO [2]. In this algorithm, ω is linearly decreased from ω_max to ω_min as follows:

    ω = ω_max − (ω_max − ω_min) · FEs / MaxFEs,

where FEs indicates the number of function evaluations and MaxFEs indicates the maximum number of function evaluations. A large inertia weight is suitable for global search, and a small inertia weight is suitable for local search [20]. In many cases, ω is linearly decreased from 0.9 to 0.4 [16].
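The linearly decreasing inertia weight is again a linear interpolation, this time between ω_max and ω_min; a minimal sketch with an illustrative function name:

```python
def inertia_weight(fes, max_fes, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight of LDWPSO [16]:
    w_max at the first evaluation, w_min at the last."""
    return w_max - (w_max - w_min) * fes / max_fes
```

The returned ω multiplies the previous velocity in the LDWPSO update above, shifting the swarm from global search early on to local search at the end.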

Unified Particle Swarm Optimization (UPSO).
In the PSO algorithm, it is important to balance the exploration and exploitation behaviors of the particles. UPSO [36], proposed by Parsopoulos and Vrahatis, combines the local and global versions of the PSO algorithm to balance the exploration and exploitation abilities of the algorithm.
In the local version of PSO, each particle only exchanges information with its neighbors. In this paper, the ring topology is adopted to decide the neighbors of a particle: particle (i − 1) and particle (i + 1) are defined as the neighbors of particle i. In the global version of PSO, the whole population is the neighborhood of each particle. The velocities of particles in the global and local version PSO are computed as follows:

    G_i^d = χ · [V_i^d + c1 · rand1_i^d · (pbest_i^d − X_i^d) + c2 · rand2_i^d · (gbest^d − X_i^d)],
    L_i^d = χ · [V_i^d + c1 · rand1_i^d · (pbest_i^d − X_i^d) + c2 · rand2_i^d · (nbest_i^d − X_i^d)],

where χ represents the constriction factor, nbest_i is the personal best position of the best neighbor of X_i, velocity G_i^d is obtained by the global version PSO, and velocity L_i^d is obtained by the local version PSO. The constriction factor was introduced to analyze the convergence behavior of the PSO algorithm, and the PSO algorithm with the constriction factor is a special case of the algorithm with inertia weight [40].
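The two velocity components can be sketched as below. This is a minimal NumPy sketch with illustrative names; we assume the ring neighborhood of particle i is {i − 1, i, i + 1}, that nbest is chosen by pbest fitness under minimization, and we use the common constriction values χ = 0.729 and c1 = c2 = 2.05, none of which is specified in the text above.

```python
import numpy as np

def upso_components(X, V, pbest, pbest_fit, gbest,
                    chi=0.729, c1=2.05, c2=2.05, rng=None):
    """Global (G) and local (L) velocity components of UPSO with a ring
    topology. pbest_fit[i] is the fitness of pbest[i] (minimization)."""
    rng = np.random.default_rng(0) if rng is None else rng
    N, D = X.shape
    # nbest_i: pbest of the fittest particle among {i-1, i, i+1}
    nbest = np.empty_like(X)
    for i in range(N):
        ring = [(i - 1) % N, i, (i + 1) % N]
        nbest[i] = pbest[min(ring, key=lambda j: pbest_fit[j])]
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    G = chi * (V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X))
    r3, r4 = rng.random((N, D)), rng.random((N, D))
    L = chi * (V + c1 * r3 * (pbest - X) + c2 * r4 * (nbest - X))
    return G, L
```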
After obtaining the global velocity G_i^d and the local velocity L_i^d for each particle, these two directions are unified to calculate the new velocity U_i^d by the following equation:

    U_i^d = u · r_i^d · G_i^d + (1 − u) · L_i^d,

where u represents the unification factor that balances the global and local components, and r_i^d is a random number that follows a normal distribution. Finally, the position of particle i in dimension d is updated in a similar way to the original PSO algorithm as follows:

    X_i^d = X_i^d + U_i^d.

By unifying the local PSO and the global PSO, UPSO combines their exploration and exploitation abilities [36].

Comprehensive Learning Particle Swarm Optimizer (CLPSO).
CLPSO, designed by Liang et al. [20], is a popular and efficient PSO variant with a learning strategy that enables particles to be guided by the experience of the whole population. The velocity is updated by the following equation:

    V_i^d = ω · V_i^d + c · rand_i^d · (pbest_{f_i(d)}^d − X_i^d),

where f_i = [f_i(1), f_i(2), ..., f_i(D)] indicates which particles' pbest particle i should follow in each dimension. Two particles are randomly selected from the population, and the one with better fitness is used as f_i(d). See [20] for a detailed introduction. Whether particle i learns from its own pbest or another particle's pbest depends on the learning probability Pc_i. The previous study [20] indicates that the value of Pc_i affects the performance of CLPSO on multimodal problems. In CLPSO, each particle has a different value of Pc_i, so that particles can have different levels of exploration and exploitation ability. The equation for Pc_i was developed empirically; the learning probability Pc_i is calculated as follows:

    Pc_i = a + b · (exp(10(i − 1)/(N − 1)) − 1) / (exp(10) − 1),

where N represents the population size and a and b are used to tune the learning probability.
In the original PSO, particles are guided by the global best experience and their own personal best experience. In contrast, CLPSO enables particles to learn from the personal best experiences of all the particles, and the global best experience is not used. Compared with the original PSO, particles in CLPSO are not easily attracted to the global best experience and do not quickly converge to a local optimum. Therefore, this learning strategy maintains the diversity of the population [20]. Experimental results suggest that CLPSO performs well on multimodal problems [20].
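The learning probability and the tournament-based exemplar selection described above can be sketched as follows; function names are illustrative, minimization is assumed, and a = 0.05, b = 0.45 are the values commonly used for CLPSO rather than values stated in the text.

```python
import math
import random

def learning_probability(i, N, a=0.05, b=0.45):
    """Pc_i for particle i of N: low-indexed particles mostly follow their
    own pbest, high-indexed ones more often learn from others."""
    return a + b * (math.exp(10 * (i - 1) / (N - 1)) - 1) / (math.exp(10) - 1)

def choose_exemplar(i, Pc, pbest_fitness, rng):
    """For one dimension, pick whose pbest particle i follows: with
    probability Pc, the fitter of two random particles; otherwise its own."""
    if rng.random() < Pc:
        a = rng.randrange(len(pbest_fitness))
        b = rng.randrange(len(pbest_fitness))
        return a if pbest_fitness[a] < pbest_fitness[b] else b
    return i
```

With these defaults, Pc ranges from 0.05 for the first particle to 0.5 for the last, giving the swarm a spread of exploration levels.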

Proposed Algorithm.
MPEPSO has one reward subpopulation pop_r and three indicator subpopulations represented by pop_h, where h ∈ {1, 2, 3} is the index of the PSO searching strategy corresponding to LDWPSO, UPSO, and CLPSO, respectively. λ_h is the proportion of pop_h in the population. The value of λ_h affects how many particles are in the indicator subpopulations. A large λ_h leaves fewer particles in the reward subpopulation, so the reward subpopulation will be less effective. On the other hand, a small λ_h leaves fewer particles in the indicator subpopulations, making the evaluation of each strategy less accurate. Empirically, we set λ_h = 0.1 in this paper. The population size of the indicator subpopulation N_h is calculated as follows:

    N_h = λ_h · N.

At each iteration, N_h particles are randomly selected from the total population and allocated to subpopulation pop_h. Particles in pop_h update their velocities by strategy h. Naturally, the number of particles in the reward subpopulation N_r is obtained by

    N_r = N − Σ_{h=1}^{3} N_h.

A learning period (LP) is defined as a certain number of iterations. During each learning period, if particle i using strategy h produces a better solution than its personal best position pbest_i, the strategy improvement Δf_h is updated by

    Δf_h = Δf_h + (f(pbest_i) − f(X_i)).

At the end of the learning period, by comparing the values of Δf_h, the best-performing strategy is selected and indexed by k:

    k = argmax_{h ∈ {1,2,3}} Δf_h.

The reward subpopulation is represented by pop_r. Particles in pop_r are allocated to pop_k and update their velocities by the best-performing strategy k. By dynamically allocating the reward subpopulation to a well-performing strategy in each learning period, the appropriate PSO searching strategy will have more particles to find a better solution.
Therefore, the performance of MPEPSO is expected to be enhanced. The learning period is used to tune the learning speed during the computation process. We empirically set the learning period to 10 in the proposed algorithm. The pseudocode of the MPEPSO algorithm is described in Table 1, where iter represents the number of iterations, h is the index of strategies including LDWPSO, UPSO, and CLPSO, and X_min and X_max define the search range. rand is a D-dimensional vector with random numbers within [0, 1].
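The learning-period bookkeeping described above can be sketched as follows (a minimal Python sketch with illustrative names; minimization is assumed):

```python
def record_improvement(delta_f, h, f_old_pbest, f_new):
    """Accumulate strategy h's improvement Delta f_h whenever it
    improves a particle's personal best (minimization)."""
    if f_new < f_old_pbest:
        delta_f[h] += f_old_pbest - f_new
    return delta_f

def allocate_reward(delta_f):
    """Index k of the best-performing strategy over the learning
    period: k = argmax_h Delta f_h."""
    return max(range(len(delta_f)), key=lambda h: delta_f[h])
```

At the end of each learning period, the reward subpopulation would adopt strategy `allocate_reward(delta_f)`, after which the improvement counters are reset for the next period.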

Experimental Setting.
The performance of MPEPSO is tested on the CEC 2014 benchmark functions [41, 42]. This is one of the most widely used test suites for evaluating the performance of evolutionary and swarm intelligence algorithms. The test suite includes 30 functions of four types, unimodal, simple multimodal, hybrid, and composition functions, as summarized in Table 2.
According to the recommended experiment settings in [41], for each benchmark function in the CEC 2014 test suite, MPEPSO and the compared algorithms are run 30 times independently, and the maximum number of function evaluations MaxFEs is set to 10,000 × D. The experimental results are obtained on both 10-dimensional and 30-dimensional problems. The values of the mean error and standard deviation are recorded. Two evaluation measures are adopted in this paper. First, the algorithms are ranked based on the mean error values: for each benchmark function, the mean error values of the algorithms are ranked from smallest to largest. Second, the Wilcoxon signed-rank test is conducted at the 5% significance level to determine the significance of the difference between the mean error values obtained by MPEPSO and the other algorithms. The null hypothesis is that the data in vector x and the data in vector y come from a distribution with zero median. h = 1 indicates a rejection of the null hypothesis, and h = 0 indicates a failure to reject the null hypothesis at the 5% significance level. Pairwise comparison is conducted over the results of 30 simulation runs between MPEPSO and the other PSO variants. The symbol "+" indicates that MPEPSO is significantly better than the compared algorithm, the symbol "=" indicates that there is no significant difference, and the symbol "−" indicates that MPEPSO is significantly worse than the compared algorithm.
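The first evaluation measure, ranking algorithms per function by mean error and then averaging the ranks, can be sketched as below. This is a minimal Python sketch; ties are broken by algorithm order rather than averaged, which is our simplifying assumption.

```python
def rank_algorithms(mean_errors):
    """mean_errors[a][f] is algorithm a's mean error on function f.
    Rank algorithms per function (1 = smallest error), then return
    each algorithm's average rank across all functions."""
    n_algos = len(mean_errors)
    n_funcs = len(mean_errors[0])
    ranks = [[0] * n_funcs for _ in range(n_algos)]
    for f in range(n_funcs):
        order = sorted(range(n_algos), key=lambda a: mean_errors[a][f])
        for r, a in enumerate(order, start=1):
            ranks[a][f] = r
    return [sum(row) / n_funcs for row in ranks]
```

Applied to the full 30-function results, this procedure yields the average ranks reported in Tables 7 and 13.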
MPEPSO is coded using MATLAB 2019b and executed on a computer with Intel Core i7-8565U CPU (1.80 GHz), 8 GB RAM, and Windows 10 operating system.

Comparisons with PSO Variants.
To comprehensively evaluate the performance of the proposed algorithm, the experimental results of MPEPSO are compared with six state-of-the-art PSO variants. These compared algorithms are listed as follows:
(i) LDWPSO: linearly decreasing inertia weight PSO [16]
(ii) UPSO: unified PSO [36]
(iii) CLPSO: comprehensive learning PSO [20]
(iv) DMS-PSO-HS: dynamic multiswarm particle swarm optimizer with harmony search [26]
(v) LIPS: distance-based locally informed PSO [31]
(vi) SL-PSO: social learning PSO [43]
LDWPSO, UPSO, and CLPSO are classical PSO variants studied by many researchers, and they are the component strategies in the proposed algorithm. DMS-PSO-HS, LIPS, and SL-PSO are efficient PSO variants that have attracted many researchers' attention. Also, these three algorithms are developed by different methodologies: DMS-PSO-HS is developed by hybridizing PSO with other algorithms; LIPS is developed by proposing a new topology structure, the distance-based neighborhood; and SL-PSO introduces a new searching strategy. To comprehensively evaluate the performance of MPEPSO, these six PSO variants are used for the comparisons.
Shao et al. [26] proposed the DMS-PSO-HS algorithm by hybridizing the dynamic multiswarm PSO (DMS-PSO) [44] with the harmony search (HS) algorithm [45]. DMS-PSO-HS has better performance on most multimodal and unimodal problems compared with the DMS-PSO and HS algorithms. LIPS [31], proposed by Qu et al., is designed to solve multimodal problems; in LIPS, particles are guided by several local bests. Inspired by social learning mechanisms, Cheng and Jin [43] proposed the SL-PSO algorithm. SL-PSO works consistently well on both low-dimensional problems and large-scale problems.
To conduct a fair comparison, the population size in SL-PSO is determined by the built-in function in the original paper [43]; for algorithms that do not specify how to set the population size N, the population size is set to N = 50, which is a widely used configuration in population-based algorithms. The parameters for the compared algorithms are set based on the original code of the papers [16, 20, 26, 31, 36, 43]. The parameter settings for each algorithm are listed in Table 3. In DMS-PSO-HS, HMCR represents the harmony memory consideration rate, PAR is the pitch adjustment rate, SS is the number of subswarms, and SP is the subpopulation size of each subswarm. In LIPS, nsize is the neighborhood size. In SL-PSO, M is the base swarm size, and α and β are two coefficients that tune the learning probability and the social influence, respectively.
Following the descriptions in CLPSO and LDWPSO [16, 20], the inertia weight ω in MPEPSO is linearly decreased from 0.9 to 0.4. In addition, the time-varying acceleration coefficients approach [30] is also adopted; that is, the acceleration coefficients are linearly changed during the computation process. Following this study [30], c1 linearly decreases from 2.5 to 0.5 and c2 linearly increases from 0.5 to 2.5, as shown in Table 3. LP = 10 and λ_h = 0.1 are set empirically.

Computational Results on 10-Dimensional Problems.
In this section, the performance of MPEPSO is compared with the other PSO algorithms on 10 dimensions. The rank of each algorithm and the mean error and standard deviation values on 10-dimensional problems are shown in Table 4. The results of the Wilcoxon signed-rank tests are shown in Table 5. Furthermore, the p values and the test decisions h of the Wilcoxon signed-rank tests are shown in Table 6.
For unimodal problems (F1 ∼ F3), MPEPSO ranks first for functions F1 and F3. For simple multimodal functions (F4 ∼ F16), MPEPSO gives the best results on functions F5, F14, and F16 and provides the second-best results on functions F6, F9, F11, F13, and F15. For hybrid functions (F17 ∼ F22), MPEPSO provides the best or second-best performance for all functions except function F18, on which CLPSO outperforms the proposed algorithm according to the Wilcoxon signed-rank tests. For composition functions (F23 ∼ F30), MPEPSO performs within the top three for all problems except functions F28 and F30; statistically, UPSO and CLPSO outperform the proposed algorithm on these two functions. Overall, MPEPSO has the best performance on 10 functions and the second-best performance on 10 functions out of 30 functions.
As shown in Table 5, MPEPSO compares favorably with the other algorithms according to the Wilcoxon signed-rank tests. The average ranks of the seven algorithms, based on the mean error values, are summarized in Table 7. MPEPSO has the best rank: its average rank is 2.233333, while that of CLPSO is 2.866667, and the average ranks of the other algorithms are all larger than 4. Therefore, MPEPSO performs well on the CEC 2014 test suite for 10-dimensional problems in comparison with the other six PSO variants.
In addition, the best values obtained by MPEPSO and the other algorithms are shown in Table 8; MPEPSO obtains better values on 10 problems. The worst values of these algorithms are shown in Table 9; MPEPSO obtains the best results on 8 problems.
The median convergence graphs of six functions are shown in Figure 2.

Computational Results on 30-Dimensional Problems.
In this section, the performance of MPEPSO is compared with the other PSO algorithms on 30 dimensions. The computational results are shown in Table 10, and the results of the Wilcoxon signed-rank tests are shown in Table 11. Also, the p values and the test decisions h of the Wilcoxon signed-rank tests are shown in Table 12. Overall, MPEPSO has the best performance on 12 functions and the second-best performance on 7 functions out of 30 functions.
As shown in Table 13, the average rank of MPEPSO is 2.2. The average ranks of the other algorithms are much larger than that of MPEPSO. MPEPSO ranks first on the CEC 2014 test suite for 30-dimensional problems in comparison with the other six PSO variants.
Furthermore, Table 14 shows the best values of the seven algorithms on 30-dimensional problems; MPEPSO ranks first on 12 problems. The worst values of these algorithms are shown in Table 15; MPEPSO obtains the best results on 10 problems. The median convergence graphs of six functions are shown in Figure 3. The same functions are selected as in Section 4.2. Overall, the convergence behaviors of MPEPSO on 30 dimensions and 10 dimensions are similar. However, for hybrid function F21, MPEPSO performs better on 30 dimensions than on 10 dimensions. All the algorithms have similar performance on F25, and MPEPSO performs worse than UPSO, SL-PSO, and CLPSO.

Comparisons with Other Metaheuristic Algorithms.
To further investigate the effectiveness of MPEPSO, the performance of MPEPSO is compared with that of other metaheuristic algorithms.
These compared algorithms are listed as follows:
(i) CS: cuckoo search [46]
(ii) FA: firefly algorithm [47]
(iii) SCA: sine cosine algorithm [48]
(iv) GCMBO: monarch butterfly optimization with greedy strategy and crossover operator [49]
(v) L-SHADE: success-history-based adaptive DE with linear population size reduction [50]
CS [46] and FA [47] are classical nature-inspired metaheuristic algorithms that have received considerable scholarly attention. L-SHADE [50] is the winner of the CEC 2014 competition. These algorithms have also been successfully applied to solve real-world problems, such as underwater glider path planning [51]. A recent study indicates that FA is the most successful nature-inspired algorithm on the CEC 2014 test suite, while CS performs better than FA on 10-dimensional problems [52]. The same study shows that L-SHADE is still one of the most effective algorithms on the CEC 2014 test suite [52]. We also compare our algorithm with two recently proposed algorithms, SCA [48] and GCMBO [49]. The parameter settings for each algorithm are listed in Table 16. The parameters for CS and FA are set based on the original papers [46, 47].

Computational Results on 10-Dimensional Problems.
In this section, the performance of MPEPSO is compared with that of the other metaheuristic algorithms on 10 dimensions. The rank of each algorithm and the mean error and standard deviation values on 10-dimensional problems are shown in Table 17. The results of the Wilcoxon signed-rank tests are shown in Table 18. According to these results, MPEPSO is significantly better than CS, FA, SCA, GCMBO, and L-SHADE on 13, 19, 30, 29, and 0 functions, respectively. It is inferior to them on 12, 2, 0, 0, and 22 functions and similar to them on 5, 9, 0, 1, and 8 functions, respectively.
As shown in Table 19, the average rank of MPEPSO is 2.5. L-SHADE ranks first on the CEC 2014 test suite for 10-dimensional problems.

Computational Results on 30-Dimensional Problems.
In this section, the performance of MPEPSO is compared with that of the other metaheuristic algorithms on 30 dimensions. The rank of each algorithm and the mean error and standard deviation values on the 30-dimensional problems are shown in Table 20. The results of the Wilcoxon signed-rank tests are shown in Table 21. According to these tests, MPEPSO is significantly better than CS, FA, SCA, GCMBO, and L-SHADE on 16, 17, 28, 28, and 0 functions, respectively. It is inferior to them on 11, 10, 2, 2, and 27 functions, while similar to them on 3, 3, 0, 0, and 3 functions, respectively.

Mathematical Problems in Engineering
As shown in Table 22, the average rank of MPEPSO is 2.5. L-SHADE ranks first on the CEC 2014 test suite for 30-dimensional problems.

Computational Time.
According to the instructions given in [41], the computational time of the algorithms is compared on 10 and 30 dimensions. This method is widely used to evaluate algorithm complexity, for example in [34, 53–55].
The computational time is calculated as follows:
(1) Execute the test program described in [41] to obtain the computing time T0.
(2) Evaluate the benchmark function 200,000 times to obtain the computing time T1.
(3) Execute the complete algorithm with 200,000 function evaluations of the same benchmark function to obtain the computing time T2.
(4) Execute step (3) five times and calculate the mean value of T2, which is recorded as T̂2.
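As a rough illustration of this protocol, the steps above can be sketched as follows. This is a simplified stand-in only: the reference loop, the benchmark function, and the "optimizer" below are illustrative placeholders, not the CEC 2014 test programs or MPEPSO itself.

```python
import random
import statistics
import time

def sphere(x):
    # Stand-in benchmark function (the CEC 2014 protocol uses its own suite).
    return sum(v * v for v in x)

def random_search(f, dim, max_evals):
    # Placeholder optimizer that consumes exactly max_evals evaluations.
    best = float("inf")
    for _ in range(max_evals):
        point = [random.uniform(-100.0, 100.0) for _ in range(dim)]
        best = min(best, f(point))
    return best

def complexity_profile(dim=10, evals=200_000, runs=5):
    # Step (1): T0, time of a fixed arithmetic test loop (simplified
    # stand-in for the reference program given in [41]).
    start = time.perf_counter()
    x = 0.55
    for _ in range(1_000_000):
        x = (x + x) / 2.0
    t0 = time.perf_counter() - start

    # Step (2): T1, time to evaluate the benchmark function `evals` times.
    point = [0.0] * dim
    start = time.perf_counter()
    for _ in range(evals):
        sphere(point)
    t1 = time.perf_counter() - start

    # Steps (3)-(4): T2 for a complete algorithm run with `evals`
    # evaluations, repeated `runs` times; T2_hat is the mean.
    t2_samples = []
    for _ in range(runs):
        start = time.perf_counter()
        random_search(sphere, dim, evals)
        t2_samples.append(time.perf_counter() - start)
    t2_hat = statistics.mean(t2_samples)

    # (T2_hat - T1) / T0 is the usual reported complexity measure.
    return t0, t1, t2_hat, (t2_hat - t1) / t0
```

The final ratio normalizes out machine speed, which is why the procedure is widely used for cross-paper complexity comparisons.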

Application to Engineering Design Problems.
In this section, MPEPSO is applied to solve four real-world engineering design problems: the tension/compression spring design problem, the pressure vessel design problem, the cantilever beam design problem, and the gear train design problem. To solve these four constrained problems, the penalty function method is used to handle the constraints. The idea of the penalty function method is to incorporate the constraints into the objective function, so that a constrained problem is transformed into an unconstrained one. When a constraint is violated, the violation is multiplied by a penalty value and the result is added to the objective function. In this paper, the penalty is set to a large value, 10E+10, for all four problems. The performance of MPEPSO is compared with that of two recently proposed metaheuristic algorithms and one recently proposed PSO variant:
(i) SCA: sine cosine algorithm [48]
(ii) WOA: whale optimization algorithm [56]
(iii) BLPSO: biogeography-based learning particle swarm optimization [27]
SCA is a population-based optimization algorithm that has received considerable attention; it uses a mathematical model based on sine and cosine functions to reach better solutions [48]. WOA is a nature-inspired optimization algorithm that has been studied by many researchers and is inspired by the behavior of humpback whales [56]. BLPSO is a recently proposed PSO variant that introduces the migration operator of the biogeography-based optimization (BBO) algorithm [57] into CLPSO.
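A minimal sketch of this static-penalty transformation (the function names are illustrative, and the constraints are assumed to be in the g(x) ≤ 0 form used by the four problems):

```python
def penalized(f, constraints, penalty=1e10):
    """Wrap objective f with a static penalty for violated constraints.

    `constraints` are functions g with feasibility meaning g(x) <= 0;
    each positive violation is multiplied by the large `penalty` value
    (the paper uses 10E+10) and added to f(x), turning the constrained
    problem into an unconstrained one.
    """
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + penalty * violation
    return wrapped
```

For example, minimizing x² subject to x ≥ 1 becomes `penalized(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]])`: feasible points keep their original objective value, while infeasible ones are pushed far above any feasible value.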
To conduct a fair comparison, the parameters of the compared algorithms are the same as in the original codes of the corresponding papers [27, 48, 56], and MPEPSO keeps its previous parameter settings. The parameter settings for each algorithm are listed in Table 25. In SCA, the constant a controls the range of the sine and cosine. In WOA, r is a random vector in [0, 1], and the parameter a is linearly decreased from 2 to 0 to control the coefficient vector A. In BLPSO, I is the maximum immigration rate and E is the maximum emigration rate. The maximum number of function evaluations is set to 10,000·D. For each problem, all algorithms are run 30 times independently.

Tension/Compression Spring Design Problem.
Coil springs are used in various real-world applications. The purpose of the spring design problem is to design a minimum-mass spring [58]. As shown in Figure 4 [59], for a given applied load P, three design variables, the wire diameter d (x1), the mean coil diameter D (x2), and the number of active coils N (x3), need to be optimized to obtain a minimum-mass spring.
This is a nonlinear programming problem, and the final formulation of the spring design problem follows [60]. The objective function (14) minimizes the mass of the spring. Constraints (15)–(18) represent the deflection constraint, the shear stress constraint, the frequency constraint, and the outer diameter constraint, respectively. Finally, constraints (19)–(21) represent the minimum and maximum size limits of the three decision variables. The best, mean, worst, and standard deviation values of the tension/compression spring design problem over the 30 runs are shown in Table 26. From this table, we can see that MPEPSO outperforms SCA, WOA, and BLPSO in all comparison aspects. The best solutions of MPEPSO, SCA, WOA, and BLPSO are shown in Table 27; MPEPSO obtains the minimum spring mass compared with the other algorithms.
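For completeness, the spring-design formulation as it is commonly stated in the literature is reproduced below (this is the standard statement consistent with [60]; minor notational details may differ from the paper's equations (14)–(21), which correspond in order to the objective, the four behavior constraints, and the variable bounds):

```latex
\begin{align}
\min\; & f(\mathbf{x}) = (x_3 + 2)\,x_2\,x_1^{2} \\
\text{s.t.}\; & g_1(\mathbf{x}) = 1 - \frac{x_2^{3} x_3}{71785\,x_1^{4}} \le 0 && \text{(deflection)} \\
& g_2(\mathbf{x}) = \frac{4x_2^{2} - x_1 x_2}{12566\,(x_2 x_1^{3} - x_1^{4})} + \frac{1}{5108\,x_1^{2}} - 1 \le 0 && \text{(shear stress)} \\
& g_3(\mathbf{x}) = 1 - \frac{140.45\,x_1}{x_2^{2} x_3} \le 0 && \text{(surge frequency)} \\
& g_4(\mathbf{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0 && \text{(outer diameter)} \\
& 0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15
\end{align}
```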

Pressure Vessel Design Problem.
In this problem, a cylindrical pressure vessel is capped at both ends by hemispherical heads. The purpose of the pressure vessel design problem is to minimize the total cost of welding, material, and forming [61]. As shown in Figure 5 [59], the design variables are the thickness of the shell Ts (x1), the thickness of the head Th (x2), the inner radius of the shell R (x3), and the length of the shell L (x4).
In this design problem, Ts and Th should be integer multiples of 0.0625 inch, while R and L are continuous. This is therefore a mixed-integer nonlinear programming problem, and the final formulation follows [62]. The best, mean, worst, and standard deviation values of the pressure vessel design problem over the 30 runs are shown in Table 28. MPEPSO outperforms SCA, WOA, and BLPSO in terms of "best" and "mean." Meanwhile, in terms of "worst" and "std," MPEPSO also provides results better than SCA and WOA, and the differences between MPEPSO and BLPSO are small. Table 29 shows the best solutions of MPEPSO, SCA, WOA, and BLPSO; the best feasible solution provided by MPEPSO is better than those of the other algorithms.
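For reference, a widely used statement of the pressure vessel formulation is given below (consistent with the standard formulation in the literature, e.g., [62]; the variable bounds vary slightly across sources, so the commonly quoted ones are shown):

```latex
\begin{align}
\min\; & f(\mathbf{x}) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^{2} + 3.1661\,x_1^{2} x_4 + 19.84\,x_1^{2} x_3 \\
\text{s.t.}\; & g_1(\mathbf{x}) = -x_1 + 0.0193\,x_3 \le 0 \\
& g_2(\mathbf{x}) = -x_2 + 0.00954\,x_3 \le 0 \\
& g_3(\mathbf{x}) = -\pi x_3^{2} x_4 - \tfrac{4}{3}\pi x_3^{3} + 1296000 \le 0 \\
& g_4(\mathbf{x}) = x_4 - 240 \le 0 \\
& x_1, x_2 \in \{0.0625, 0.125, \ldots, 6.1875\}, \quad 10 \le x_3, x_4 \le 200
\end{align}
```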

Cantilever Beam Design Problem.
The cantilever beam is built from five elements with hollow square cross sections [63]. The purpose of the cantilever beam design problem is to minimize the weight of the beam. The design variables are the heights xj of the five beam elements, and the final formulation follows [63]. The best, mean, worst, and standard deviation values of the cantilever beam design problem over the 30 runs are shown in Table 30. From this table, we can see that MPEPSO outperforms SCA, WOA, and BLPSO in all comparison aspects. Table 31 shows the best solutions of MPEPSO, SCA, WOA, and BLPSO; the best feasible solution provided by MPEPSO is better than those of the other algorithms.
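The cantilever formulation commonly used in the literature is as follows (reproduced from the standard statement of the problem, which [63] presumably follows; the constant 0.0624 aggregates the material and geometry data):

```latex
\begin{align}
\min\; & f(\mathbf{x}) = 0.0624\,(x_1 + x_2 + x_3 + x_4 + x_5) \\
\text{s.t.}\; & g(\mathbf{x}) = \frac{61}{x_1^{3}} + \frac{37}{x_2^{3}} + \frac{19}{x_3^{3}} + \frac{7}{x_4^{3}} + \frac{1}{x_5^{3}} - 1 \le 0 \\
& 0.01 \le x_j \le 100, \quad j = 1, \ldots, 5
\end{align}
```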

Gear Train Design Problem.
The purpose of the gear train design problem is to design a gear train whose gear ratio is as close as possible to 1/6.931 [61]. The design variables are the numbers of teeth on the four gears, x1, x2, x3, and x4. Since the numbers of teeth must be integers, this is an integer programming problem, and the final formulation follows [61]. The best, mean, worst, and standard deviation values of the gear train design problem over the 30 runs are shown in Table 32. From this table, we can see that MPEPSO and BLPSO provide the best results in terms of "best," while MPEPSO outperforms SCA, WOA, and BLPSO in terms of "mean," "worst," and "std." Table 33 shows the best solutions of MPEPSO, SCA, WOA, and BLPSO; MPEPSO and BLPSO provide the best feasible solutions.
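The objective is simple enough to state directly. A minimal sketch follows, using the common convention that the achieved ratio is (x3·x4)/(x1·x2); whether the paper's variable labeling matches this convention is an assumption:

```python
def gear_train_cost(x):
    """Squared error between the achieved gear ratio and the target 1/6.931.

    x = (x1, x2, x3, x4) are integer tooth counts; the ratio is taken as
    (x3 * x4) / (x1 * x2), the common convention in the literature.
    Tooth counts are typically bounded by 12 <= xi <= 60.
    """
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x3 * x4) / (x1 * x2)) ** 2
```

One well-known best tooth assignment, (49, 43, 19, 16) under this labeling, gives a cost on the order of 1e-12, which matches the best values commonly reported for this problem.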

Discussion.
The performance of MPEPSO is evaluated on different types of problems, including unimodal, simple multimodal, hybrid, and composition functions. The computational results of MPEPSO are compared with those of its component algorithms and advanced PSO algorithms. MPEPSO performs consistently well on both the 10-dimensional and the 30-dimensional problems of the CEC 2014 test suite.
Firstly, the computational results show that MPEPSO performs better than its component algorithms, LDWPSO, UPSO, and CLPSO. These results suggest that the multipopulation ensemble approach effectively allocates the reward subpopulation to the well-performing strategy. Therefore, a well-performing strategy receives more computational expense to find better solutions.
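The allocation mechanism described above can be sketched as follows. This is an illustrative skeleton only, not the MPEPSO source; the strategy functions here simply report the objective-value improvement they produced in one update step:

```python
def run_learning_periods(strategies, n_periods, period_len):
    """Skeleton of the reward-allocation loop: during each learning
    period, the improvement achieved by each indicator strategy is
    accumulated; at the end of the period the reward subpopulation is
    handed to the best-performing strategy for the next period.
    `strategies[h]()` returns the improvement from one update step."""
    history = []  # which strategy owns the reward subpopulation each period
    for _ in range(n_periods):
        improvements = [0.0] * len(strategies)
        for _ in range(period_len):
            for h, strategy in enumerate(strategies):
                improvements[h] += strategy()
        # Reward subpopulation goes to the strategy with the largest
        # accumulated improvement.
        best = max(range(len(strategies)), key=lambda h: improvements[h])
        history.append(best)
    return history
```

With deterministic stand-in strategies, e.g. three strategies improving by 0.1, 0.5, and 0.2 per step, the second strategy wins every period, which mirrors how a consistently good velocity-updating strategy accumulates the reward subpopulation.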
Secondly, MPEPSO is also compared with three advanced PSO variants developed by different methodologies: DMS-PSO-HS hybridizes PSO with harmony search, LIPS introduces a new topology structure, and SL-PSO introduces a new searching strategy. The results indicate that MPEPSO outperforms not only its component algorithms but also these advanced PSO variants. Moreover, the performance of MPEPSO is compared with that of five other metaheuristic algorithms. According to [52], FA and CS perform well on the CEC 2014 test suite compared with other nature-inspired algorithms, and SCA and GCMBO are recently proposed algorithms. MPEPSO performs better than FA, CS, SCA, and GCMBO. MPEPSO is not competitive with L-SHADE, the winner of the CEC 2014 competition. However, the objective of this paper is not to provide the most efficient algorithm for CEC 2014. The proposed MPEPSO outperforms its component PSO algorithms, which suggests that the multipopulation ensemble approach is a promising method; it has been successfully applied here to develop an efficient PSO algorithm and contributes to the development of efficient PSO algorithms. In the future, it is possible to explore the ensemble of the top-performing algorithms on CEC 2014, such as L-SHADE, by the multipopulation ensemble approach.
Thirdly, the computational complexity tests show that MPEPSO incurs competitive and reasonable computational costs. MPEPSO is essentially a dynamic combination of LDWPSO, UPSO, and CLPSO. In the tests, UPSO and CLPSO have higher computational costs than MPEPSO, while LDWPSO has relatively lower costs. MPEPSO ranks third in the computational complexity tests on both 10 and 30 dimensions, which means that the multipopulation ensemble approach in MPEPSO is not time-consuming.
Also, MPEPSO is applied to solve four real-world engineering design problems: the tension/compression spring design problem, the pressure vessel design problem, the cantilever beam design problem, and the gear train design problem. Its performance is compared with that of two recently proposed metaheuristic algorithms, SCA and WOA, and one recently proposed PSO variant, BLPSO. The results suggest that MPEPSO obtains the best results on all of the engineering design problems compared with SCA, WOA, and BLPSO, confirming the advantages of MPEPSO in solving real-world engineering design problems.
Overall, the experimental results demonstrate the advantages of the proposed algorithm, and MPEPSO is a promising algorithm.

Conclusion
In this paper, a multipopulation ensemble particle swarm optimizer (MPEPSO) is proposed to realize the ensemble of three efficient and simple PSO searching strategies: LDWPSO, UPSO, and CLPSO. The indicator subpopulations have a relatively small number of particles compared with the reward subpopulation. At the end of each learning period, the performance of each strategy is evaluated, and the reward subpopulation is allocated to the best-performing strategy. By dynamically allocating the reward subpopulation to the best-performing strategy over the learning periods, the appropriate PSO searching strategy has more particles with which to find better solutions.
Therefore, the performance of MPEPSO is enhanced. The performance of MPEPSO on the CEC 2014 test suite is compared with that of other PSO variants. The computational results show that the proposed MPEPSO outperforms the other PSO variants on these benchmark functions for both the 10-dimensional and the 30-dimensional problems. Furthermore, MPEPSO is applied to solve four real-world engineering design problems, on which it provides better results than several other algorithms.
There are several future research directions. First, this paper realizes the ensemble of three simple PSO searching strategies, and the proposed algorithm is still outperformed by L-SHADE on the CEC 2014 test suite. Further research might explore the ensemble of top-performing algorithms, such as L-SHADE [50], jSO [64], and CoBiDE [65], by the multipopulation ensemble approach. Second, a limited number of PSO strategies are incorporated in the proposed algorithm; a further study could investigate the effects of using more PSO variants [66][67][68].

Abbreviations
FA: Firefly algorithm
GCMBO: Monarch butterfly optimization with greedy strategy and crossover operator
L-SHADE: Success-history-based adaptive DE with linear population size reduction
SCA: Sine cosine algorithm
WOA: Whale optimization algorithm
BLPSO: Biogeography-based learning particle swarm optimization.

Notation

Indices
i: Index of particles
d: Index of dimension
h: Index of the PSO searching strategy.

Parameters
V_i^d: Velocity of the ith particle on the dth dimension
X_i^d: Position of the ith particle on the dth dimension
N: Total population size
D: Dimension number of the problem
ω: Inertia factor
c: Acceleration coefficients
pbest_i: Previous best position of the ith particle
gbest: Global best position of all the particles
rand: Random number in the interval [0, 1]
G_i^d: Velocity of particle i in the global-version PSO
L_i^d: Velocity of particle i in the local-version PSO
χ: Constriction factor
nbest_i: Personal best position of the best neighborhood of X_i
U_i^d: Velocity of the ith particle on the dth dimension in UPSO

Data Availability
The source codes are available at https://github.com/zi-ang-liu/MPEPSO. The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare no conflicts of interest.