Subpopulation Particle Swarm Optimization with a Hybrid Mutation Strategy

As real-world large-scale optimization problems grow increasingly complex, optimization algorithms must keep pace with them. Particle swarm optimization (PSO) is a proven tool for a wide range of optimization problems. The conventional PSO algorithm lets each particle learn from two positions, namely, its own historical best position and the best position found by the whole swarm. This makes the algorithm simple to implement and easy to understand, but it has a serious defect: it struggles to find the global optimal solution quickly and accurately. To address these defects of standard PSO, this paper proposes a particle swarm optimization algorithm based on a subpopulation hybrid-mutation strategy (SHMPSO) (using code available from https://gitee.com/mr-xie123234/code/tree/master/). SHMPSO adopts subpopulation coevolution: an elasticity-based candidate strategy finds a candidate exemplar and realizes information sharing and coevolution among the subpopulations, while a mean-dimension learning strategy makes the population converge faster and improves solution accuracy. Twenty-one benchmark functions and six industry-recognized PSO variants are used to verify the advantages of SHMPSO. The experimental results show that SHMPSO converges quickly, is robust, and obtains high-precision solutions.


Introduction
Optimization algorithms have long been a popular research topic. Particle swarm optimization (PSO) is a population-based optimization algorithm first proposed by Eberhart and Kennedy in 1995 [1, 2]. PSO is inspired by the foraging behavior of bird flocks and works as a search algorithm in which particles cooperate with each other; it belongs to the family of swarm intelligence algorithms. Besides PSO, common intelligent algorithms include differential evolution (DE) [3], ant colony optimization (ACO) [4], the artificial bee colony algorithm (ABC) [5], fast evolutionary programming (FEP) [6], and simulated annealing [7], with applications in areas such as neural networks [8], text clustering [9], resource allocation [10], and task allocation [11]. PSO has been widely accepted thanks to its rapid convergence, excellent robustness, and conceptual simplicity.
Although PSO has been extensively optimized and improved, some problems remain to be solved. In particular, as the problem dimension increases, PSO becomes extremely prone to premature convergence, which makes it hard to escape local optima in the final stage of the search. To further improve PSO, researchers have pursued four main directions: parameter adjustment, learning-strategy improvement, topology selection, and integration with other algorithms. These four aspects are described as follows. Parameter control: this includes setting the inertia weight, the acceleration coefficients, and the population size. Shi and Eberhart proposed a PSO algorithm with the inertia weight set to a constant value to balance the global exploration and local exploitation abilities of the algorithm [28]. A constant inertia weight, however, greatly limits the potential search ability of PSO. Zhan et al. therefore proposed the APSO algorithm, which achieves a stable balance between local and global search through adaptive control parameters [29].
Learning-strategy improvement: this includes comprehensive learning strategies, biogeography-based learning strategies, segment-based predominant learning strategies, neighborhood-based learning strategies, and dynamic neighborhood learning strategies. Yang et al. proposed the SPLSO algorithm, which uses segment-based predominant learning. This model first segments the dimensions of each poor-performing particle; each segment then learns from a better-performing particle, allowing the algorithm to capitalize on the information carried by the better particles and avoid premature convergence [30]. Liang and Suganthan proposed the DMS-PSO algorithm with a dynamic neighborhood structure, in which the learning of each particle is no longer limited to one subswarm but also draws on other subswarms [31].
Topology: a topology can effectively use the information of particles in a neighborhood, but it ignores the global optimal information to some extent. Common topologies include the ring, star, network, dynamic tree, and dynamic competition topologies. Li et al. proposed an adaptive PSO algorithm using a scale-free network topology; exploiting the power-law degree distribution of scale-free networks, the algorithm constructs a corresponding neighborhood for each particle [32]. Janson et al. constructed a dynamically changing tree topology in which each particle learns from its parent so that the information of each particle is used effectively [33].
Hybrid algorithm integration: mixing with other algorithms is one of the main research fields of particle swarm optimization algorithm, which can effectively improve the performance of the algorithm. Zhang et al. proposed the DSPSO algorithm by combining the differential mutation operation with the SLPSO algorithm [34]. Valdez et al. proposed a new hybrid approach for optimization by combining particle swarm optimization (PSO) and genetic algorithms (GAs) using fuzzy logic to integrate the results [35].
According to the above analysis, an excellent PSO algorithm must have strong local exploitation and global exploration capabilities. Achieving both requires population diversity, which prevents premature convergence. Because every optimization problem is different, a single evolutionary strategy can hardly suit them all; the literature has shown that combining several strategies improves the chance that a particle swarm converges to the global optimum. To adapt to more optimization problems, and inspired by the MPCPSO algorithm, this paper proposes a subpopulation cooperative PSO algorithm with a joint strategy. The swarm is initialized, and the initial fitness values of all particles are recorded [36]. According to these fitness values, the swarm is divided into two subpopulations: a dominant population (DP) and a poor population (PP). The two subpopulations adopt different evolutionary strategies: the poor population uses a candidate learning strategy, and the dominant population uses mean-dimension learning. Comparison with other algorithms shows that the combination of the two strategies is effective and yields accurate solutions.
The rest of this paper is organized as follows: Section 2 introduces related work. Section 3 describes the two learning strategies and SHMPSO in detail. In Section 4, the SHMPSO algorithm is tested on twenty-one benchmark functions against six well-known PSO variants. Finally, Section 5 gives the conclusions.

Related Work
This section introduces the work underpinning our study. The proposed algorithm is inspired by two particle swarm optimization algorithms: the classic PSO algorithm and the biogeography-based learning PSO algorithm. Essentially all PSO variants are improvements on the classic PSO.

Classic PSO.
The classical particle swarm optimization algorithm is an optimization algorithm based on a whole population. Each particle represents a possible solution. All particles update their positions according to their own historical best positions and the global historical best position. Generally, the search space of the swarm has D dimensions. The position vector of particle i at time t is X_i = (x_i1, x_i2, ..., x_iD), and its velocity vector is V_i = (v_i1, v_i2, ..., v_iD). All particles of the next-generation population are generated from these position and velocity vectors. The update equations for generating the optimized next-generation particles are

v_id = ω·v_id + c1·r1·(pbest_id − x_id) + c2·r2·(gbest_d − x_id),  (1)
x_id = x_id + v_id,  (2)

where ω is the inertia weight, c1 and c2 are acceleration factors, and r1 and r2 are random numbers in [0, 1]. These two equations, which update the velocity and the position, respectively, are the core of standard particle swarm optimization.
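The two update rules above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the sphere objective, bounds, and parameter values (ω = 0.7, c1 = c2 = 1.5) are our own illustrative choices:

```python
import numpy as np

def pso_step(X, V, pbest, pbest_fit, f, w=0.7, c1=1.5, c2=1.5):
    """One iteration of classic PSO: velocity update (1), position update (2)."""
    N, D = X.shape
    gbest = pbest[np.argmin(pbest_fit)]                 # global historical best
    r1, r2 = np.random.rand(N, D), np.random.rand(N, D)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # equation (1)
    X = X + V                                                    # equation (2)
    fit = np.apply_along_axis(f, 1, X)
    improved = fit < pbest_fit                          # refresh personal bests
    pbest[improved], pbest_fit[improved] = X[improved], fit[improved]
    return X, V, pbest, pbest_fit

# usage on the sphere function (illustrative setup)
sphere = lambda x: float(np.sum(x ** 2))
N, D = 30, 10
X = np.random.uniform(-5, 5, (N, D)); V = np.zeros((N, D))
pbest, pbest_fit = X.copy(), np.apply_along_axis(sphere, 1, X)
for _ in range(200):
    X, V, pbest, pbest_fit = pso_step(X, V, pbest, pbest_fit, sphere)
```

Because personal bests are only ever replaced by better positions, the best fitness found is monotonically non-increasing over iterations.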
Although particle swarm optimization has seen many improvements in recent decades, an algorithmic barrier remains hard to break through: in high-dimensional problems the algorithm struggles to locate the extremum and easily falls into local optima. Many strategies have been proposed to address these difficulties. The following introduces the comprehensive learning strategy and the biogeography-based learning strategy.

Particle Swarm Optimization
Algorithm with the Comprehensive Learning Strategy. Liang et al. proposed the CLPSO algorithm, which differs from the conventional PSO introduced earlier [37]. CLPSO does not learn from the global optimal particle; instead, each dimension of a particle learns from the historical best of an exemplar constructed within its neighborhood, with learning probability P_c. This comprehensive learning scheme helps CLPSO avoid local optima on some multimodal problems. The revised velocity update proposed by CLPSO is

v_id = ω·v_id + c·r·(pbest_{f_i(d),d} − x_id),  (3)

where ω is the inertia weight, c is an acceleration coefficient, and r is a random number in (0, 1). f_i(d) is the index of the particle whose personal historical best position guides the d-th dimension of particle i, and the vector f_i = (f_i(1), ..., f_i(D)) defines the exemplar of particle i. Equation (3) shows that each particle can learn from different particles in different dimensions. The core procedure of CLPSO is as follows: (1) Generate a random number P in (0, 1) and compare it with P_c. (2) If P > P_c, the current dimension of particle i learns from particle i's own best position.
(3) If P < P_c, two particles are randomly selected and their fitness values compared; the better one's personal best replaces the corresponding dimension of particle i's exemplar and guides particle i's update. The algorithm thus employs a comprehensive learning strategy whereby the historical best position of any other particle can serve as a learning paradigm, and each dimension of a particle may learn from a different paradigm. This gives particles more learning paradigms and a larger potential flight space.
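The exemplar-construction steps above can be sketched as follows. This is a hedged illustration of the dimension-wise tournament; the array layout and the per-particle learning probabilities `Pc` are our own assumptions:

```python
import numpy as np

def build_exemplar(i, pbest, pbest_fit, Pc):
    """Build the exemplar index f_i(d) for each dimension d of particle i (CLPSO-style)."""
    N, D = pbest.shape
    f_i = np.full(D, i)                   # default: learn from own personal best
    for d in range(D):
        if np.random.rand() < Pc[i]:      # this dimension learns from another particle
            a, b = np.random.choice([j for j in range(N) if j != i], 2, replace=False)
            f_i[d] = a if pbest_fit[a] < pbest_fit[b] else b    # tournament of two
    return f_i                            # pbest[f_i[d], d] then guides dimension d

# illustrative usage: exemplar for particle 0 in a 5-particle, 4-dimensional swarm
f0 = build_exemplar(0, np.random.rand(5, 4), np.random.rand(5), np.full(5, 0.5))
```

The returned vector is exactly the f_i used in equation (3): dimension d of particle i tracks pbest[f_i(d), d] instead of a single global best.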

Learning Particle Swarm Optimization Algorithm Based on Biogeography (BLPSO)
BLPSO is an improvement of CLPSO based on biogeographic migration; it is a particle swarm optimization algorithm with a new, biogeography-based learning strategy [38]. All particles in the population are updated by biogeographic migration, combining their own best locations with the best locations of other particles. Each particle i has an immigration rate α_i and an emigration rate β_i. The procedure is as follows: (1) Sort the particles, and calculate the migration rates α_i and β_i of each particle after sorting.
(2) Generate a random number r. When r < α_i, a particle index j is selected by roulette-wheel selection with probabilities proportional to the emigration rates β_j, and j is assigned to f_i; otherwise, i is assigned to f_i. (3) If f_i equals particle i itself, randomly select a particle j different from i and a dimension l, and assign dimension l of j to f_i(l).
In contrast, the biogeography-based learning strategy employs a ranking technique whereby particles learn more from particles with high-quality personal best positions, which effectively enhances the exploitation ability of the original CLPSO.
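The ranking-and-roulette step can be sketched as follows, under the assumption (standard in biogeography-based optimization, not stated explicitly here) that the migration rates vary linearly with rank:

```python
import numpy as np

def blpso_exemplar(i, rank, N):
    """Pick the exemplar index for particle i via biogeography-style migration.
    rank[i] is in {1..N} with 1 = best; alpha = immigration, beta = emigration."""
    alpha = rank / N                  # worse-ranked particles immigrate more often
    beta = 1.0 - alpha                # better-ranked particles emigrate more often
    if np.random.rand() < alpha[i]:                        # i accepts immigration
        return int(np.random.choice(N, p=beta / beta.sum()))   # roulette on beta_j
    return i                          # otherwise keep its own index

# illustrative usage for particle 2 in a swarm of 5 ranked particles
j = blpso_exemplar(2, np.arange(1, 6), 5)
```

A poorly ranked particle (large α_i) thus frequently imports a dimension from a well-ranked particle (large β_j), which is the exploitation bias described above.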

Particle Swarm Optimization Algorithm with the Hybrid Strategy of Seed Swarm Optimization
This section introduces the particle swarm optimization algorithm based on the subpopulation hybrid-mutation strategy. Section 3.1 introduces the mean-dimension learning strategy, and Section 3.2 introduces the elasticity-based candidate generation strategy.
The overall operation framework of the strategy is also given in Section 3.2, and at the end of this section, the whole process of SHMPSO is presented as pseudocode.

Learning Strategy of Mean Dimension.
Subpopulation particle swarm optimization has existed for a long time and has received excellent feedback on some problems. CMPSODMO, proposed by Liu et al. in 2017, handles multiobjective dynamic optimization problems in complex, changing environments using a multiswarm-based particle swarm optimization framework [39], and it has achieved excellent results. FTPSO, proposed by Yazdani et al. in 2013, explicitly uses multiple swarms to cover the multiple peaks of a multiobjective problem in a dynamic space [40]. TSLPSO, proposed by Xu et al. in 2019, adopts two populations that iterate with different strategies in the search space, obtaining significant results [41]. Yen and Daneshyari proposed a method for exchanging information among multiple swarms [42].
In the SHMPSO algorithm proposed in this paper, a population is first initialized and then divided into two subpopulations using a fitness ranking mechanism, with the ranking in ascending order. In each iteration, the particles with smaller fitness values form the dominant population, and the rest form the poor population; the proportion of dominant-population particles is s. The dominant population is optimized with the mean-dimension learning strategy, while the poor population learns with the elasticity-based candidate generation strategy. Applying different evolutionary strategies to different subpopulations effectively preserves population diversity and avoids local optima, while the dominant population guides the search direction of the whole swarm and accelerates convergence to a solution. PSO also exhibits a curious phenomenon known as "two steps forward, one step back" [43]: although the fitness of a particle improves, a few of its components may actually worsen. Overcoming this would require frequent changes to the evaluation function. Moreover, almost all PSO variants converge slowly in the later stage of a run. In SHMPSO, all particles learn from the dominant population to achieve a faster convergence speed.
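The fitness-ranking split into DP and PP described above can be sketched as follows (a minimal illustration for a minimization problem; the fitness values are made up for the example):

```python
import numpy as np

def split_population(X, fitness, s=0.5):
    """Split the swarm into a dominant population (DP, smaller fitness values)
    and a poor population (PP) by ascending fitness ranking."""
    order = np.argsort(fitness)          # ascending: best particles first
    n_dp = int(s * len(X))               # s = dominant-population proportion
    dp_idx, pp_idx = order[:n_dp], order[n_dp:]
    return dp_idx, pp_idx

# illustrative usage with made-up fitness values for 6 particles
fitness = np.array([3.2, 0.1, 7.5, 1.8, 5.0, 0.9])
dp, pp = split_population(np.zeros((6, 2)), fitness, s=0.5)
# dp holds the indices of the fittest half; pp holds the rest
```

With s = 0.5 and six particles, the three smallest fitness values (0.1, 0.9, 1.8) define the dominant population.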
Classical PSO uses two guiding particles: the particle's own historical best and the global historical best. Many algorithms, however, use only one guiding particle, such as the CLPSO and BLPSO mentioned above (Section 2). The advantage is a velocity update formula with fewer parameters that is easy to understand; the difficulty lies in how to construct the guiding particle. For the dominant population, this article uses a single guiding particle, whose evolution strategy is introduced below.
For the whole dominant population, when a particle's velocity is updated, the current particle is influenced only by the mbest generated by comprehensive dimension learning. The velocity update equation of particles is

v_id = ω·v_id + c·r·(mbest_id − x_id),  (4)

where ω, c, and r have the same meanings as in equation (3). Inspired by MPCPSO, mbest_id is obtained from equation (5); its purpose is to help particles escape the local optimal state. As equation (4) shows, mbest_id is the only learning paradigm used to guide particle motion, but if the algorithm falls into a local extremum, mbest_id alone can seldom help particles find better solutions.
In mean-dimension learning, each particle learns not only from other particles but also from other related dimensions, which greatly increases the generality of neighborhood learning.
where D represents the dimension of the particles, r is a random number in [0, 1], and N represents the total number of particles in the dominant population. Here, ρ is a dynamically defined value, computed by equation (6). The location update equation (7) involves a dynamic parameter φ, defined by equation (8), where N1 is the current size of the dominant population, ave1 is the average fitness of the dominant population after the first iteration, and iter is the current iteration number. Based on this iterative updating, the dominant population moves in a good direction; it can be seen from equation (7) that the convergence of the dominant population's particles is greatly accelerated. With faster convergence, however, mean-dimension learning easily falls into local optima. To avoid this, a differential mutation operator is introduced to increase population diversity [44, 45]. Two particles are randomly selected from the dominant population, both different from the current particle; the difference of these two particles forms a difference vector, which is scaled by the mutation factor F and then added to the global optimum P_g^d. The whole operation is described by the following equation:

X_i^d = P_g^d + F·(X_a^d − X_b^d),  (9)

where P_g^d is the global optimal position of the current population, F is the mutation coefficient, and a and b are the indices of the randomly selected particles, satisfying i ≠ a ≠ b.
The mutation operation not only improves the search ability of the algorithm within the dominant swarm but also expands diversity during the population search. Algorithm 1 lists the pseudocode of mean-dimension learning.
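The differential mutation step, as described in the text, can be sketched as follows. The search bounds and the clipping step are our own additions, and the exclusion of the current particle index is omitted for brevity:

```python
import numpy as np

def differential_mutation(DP, gbest, F=0.5, lo=-100.0, hi=100.0):
    """Mutate the global best with the scaled difference of two distinct,
    randomly chosen dominant-population particles: gbest + F * (X_a - X_b)."""
    n = len(DP)
    a, b = np.random.choice(n, 2, replace=False)   # guarantees a != b
    mutant = gbest + F * (DP[a] - DP[b])           # difference vector scaled by F
    return np.clip(mutant, lo, hi)                 # keep inside the search range

# illustrative usage on a 5-particle, 3-dimensional dominant population
mutant = differential_mutation(np.random.uniform(-10, 10, (5, 3)), np.zeros(3))
```

The difference vector injects direction information from the dominant population around the global best, which is what restores diversity when the swarm stagnates.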

Candidate Generation Strategy Based on Elasticity.
Next comes the strategy of the poor population: the elasticity-based candidate generation strategy. SHMPSO evolves its subpopulations with different strategies, which favors rapid convergence without losing population diversity. The core of the strategy is how to generate the candidate. The velocity update equation changes: instead of learning from the global optimum P_g^d as in traditional PSO, the particle learns from a candidate. The new velocity update equation is

v_id = ω·v_id + c1·r1·(pbest2_id − x_id) + c2·r2·(Candidate_id − x_id),  (10)

where ω, c1, c2, r1, and r2 have the same meanings as in equation (1), and the vector pbest2_id is the historical optimum of the current particle in the poor population. The generation of Candidate_id is given below. Inspired by the elastic force generated by spring compression, an elasticity coefficient prob is introduced; for the poor population it is set to prob = 0.5, a value chosen by the user. Like a spring, the mechanism can stretch or compress. Particles are sorted in ascending order of fitness value; the larger the value, the worse the particle. A random number r in [0, 1] is generated. If the elasticity coefficient prob is greater than r, then Candidate_id is generated by equation (11), where the vector gbest1_id is the global optimal position of the dominant population, N is the number of particles, and stepsize_id is a D-dimensional vector obtained by equation (17) and generated by a Levy flight. An introduction to Levy flight follows. Generating a Levy-flight random number involves two parts: choosing a random direction and generating a Levy-distributed step.
Random walks of this kind derive from Levy-stable distributions, which follow a simple power law:

L(s) ∼ s^(−1−β),  0 < β < 2,

where β is the stability index.

Definition 1.
To define the Levy distribution mathematically, it is given by the following equation:

L(s; μ, c) = sqrt(c / (2π)) · exp(−c / (2(s − μ))) / (s − μ)^(3/2),  s > μ,

where the parameter μ controls the displacement (location), and c > 0 is the scale parameter used to control the spread of the distribution.
Definition 2. Usually, the Levy distribution is defined via its Fourier transform:

F(k) = exp[−c|k|^β (1 − iα·sign(k)·tan(πβ/2))],

where α is a parameter in the interval [−1, 1], usually called the skewness factor, and the stability index β lies in (0, 2) and is commonly referred to as the Levy index. For most values of β, the analytical form of the inverse transform is unknown. For a random walk, the step S can be calculated by Mantegna's equation:

S = u / |v|^(1/β),  (13)

where u and v in equation (13) obey normal distributions, u ∼ N(0, σ_u²) and v ∼ N(0, 1), with

σ_u = [Γ(1 + β)·sin(πβ/2) / (Γ((1 + β)/2)·β·2^((β−1)/2))]^(1/β).

The step size is then obtained by scaling S with the factor 0.01, which comes from the typical step factor L/100, where L is a typical length scale; otherwise, the Levy flight becomes too aggressive and jumps out of the design region (wasting evaluations). When the elasticity factor prob < r, in order to ensure population diversity, two particles m and n are selected from the dominant population; these two particles and the global optimal particle P_g^d must all be distinct. The fitness values of X_m^d and X_n^d are compared, and the particle with the smaller fitness value is selected. Candidate_id is then obtained by equation (18), where stepsize_id and N have the same meanings as in equation (12). In this way, the candidate improves the diversity of the population. Algorithm 2 lists the pseudocode of the elasticity-based candidate generation strategy.
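Mantegna's procedure for generating a Levy-distributed step can be sketched as follows (standard formulas; β = 1.5 and the dimensionality are illustrative choices, and the 0.01 factor is the typical step factor mentioned above):

```python
import math
import numpy as np

def levy_step(D, beta=1.5):
    """Draw a D-dimensional Levy step S = u / |v|^(1/beta) via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, D)    # u ~ N(0, sigma_u^2)
    v = np.random.normal(0.0, 1.0, D)        # v ~ N(0, 1)
    S = u / np.abs(v) ** (1 / beta)          # heavy-tailed Levy step
    return 0.01 * S                          # typical step factor L/100

step = levy_step(10)
```

Most draws are small, but the heavy tail occasionally produces a long jump, which is exactly what lets the candidate escape a local basin.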

Overall Framework of the SHMPSO Algorithm.
Based on the improved strategies of Sections 3.1 and 3.2, SHMPSO is constructed. The specific steps of the SHMPSO algorithm are as follows: Step 1: initialize the population and set the parameters: the mutation factor F = 0.5, the population proportion scale factor s = 0.5, and the local-optimum counter threshold M = 6.
Step 2: calculate the fitness value f(X i ) and the global optimal value (P g d ) of all particles.
Step 3: sort the particles in ascending order of fitness value, taking s × N particles (of N total) as the dominant population (DP) and the rest as the poor population (PP). Step 4: optimize the poor population with the elasticity-based candidate strategy, using equations (2) and (10); optimize the DP subpopulation with mean-dimension learning, using equations (4) and (7).
Step 5: when M > 6, equation (9) is used to update the position of particles in the dominant population (DP).
Step 6: recalculate the fitness values of all particles, and update the current global optimal value (P g d ).
Step 7: repeat Steps 3-6 until the maximum number of iterations is reached. The algorithm flow chart of SHMPSO is shown in Figure 1. The algorithm mainly uses its global search ability to find a better search space: it starts from the global picture, finds the current global optimal position, and can converge to the global optimal solution faster.
Finally, the general framework of the algorithm is given in Algorithm 3.
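The seven steps can be condensed into a loop skeleton. This is a structural sketch only: the DP/PP updates are stand-in random perturbations, NOT the paper's equations (4), (7), and (10), and all bounds and parameter values are illustrative:

```python
import numpy as np

def shmpso_skeleton(f, N=100, D=30, max_gen=1000, s=0.5, F=0.5, lo=-100.0, hi=100.0):
    """High-level flow of SHMPSO (Steps 1-7) with placeholder subpopulation updates."""
    X = np.random.uniform(lo, hi, (N, D)); V = np.zeros((N, D))      # Step 1
    fit = np.apply_along_axis(f, 1, X)
    gbest, gfit = X[np.argmin(fit)].copy(), fit.min()                # Step 2
    M = 0                                  # counts iterations without improvement
    for _ in range(max_gen):                                         # Step 7
        order = np.argsort(fit)                                      # Step 3
        dp, pp = order[:int(s * N)], order[int(s * N):]
        X[pp] += np.random.uniform(-1, 1, (len(pp), D))   # Step 4 placeholder (PP)
        X[dp] += np.random.uniform(-1, 1, (len(dp), D))   # Step 4 placeholder (DP)
        if M > 6:                              # Step 5: differential mutation on DP
            a, b = np.random.choice(dp, 2, replace=False)
            X[dp] = gbest + F * (X[a] - X[b])
        X = np.clip(X, lo, hi)
        fit = np.apply_along_axis(f, 1, X)                           # Step 6
        if fit.min() < gfit:                   # improvement resets the counter
            gbest, gfit, M = X[np.argmin(fit)].copy(), fit.min(), 0
        else:
            M += 1
    return gbest, gfit

# illustrative run on a small sphere problem
best, val = shmpso_skeleton(lambda x: float(np.sum(x ** 2)), N=10, D=3, max_gen=30)
```

The key structural points carried over from the text are the per-iteration re-ranking into DP/PP, the stagnation counter M that triggers equation (9), and the monotone tracking of the global best.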

Experiment
In this section, to verify the reliability and efficiency of the proposed SHMPSO, twenty-one widely used benchmark functions are adopted, and SHMPSO is compared with other varieties of PSO. The experimental process is as follows.

Benchmark Function and Parameter Setting.
The twenty-one benchmark functions listed in Table 1 are used to demonstrate the superiority of SHMPSO. In Table 1, the first column gives the function number, the second the function name, the third its mathematical expression, the fourth its search range, and the fifth the minimum value of the function. The tested functions include 11 unimodal functions (f1-f2, f4-f12) and 10 multimodal functions (f3, f13-f21). The optimal value of every benchmark function tested is 0. For comparison, six PSO variants are selected: inertia-weight PSO [40], ACPSO [46], SLPSO [47], CLPSO [37], BLPSO [38], and MPCPSO [36].

ALGORITHM 1: Mean-dimension learning.
Input: dominant population DP. Output: position vector X_i, velocity vector V_i.
(1) for i = 1 → N1 do
(2)   Calculate the mean of X_i → X_mean_i
(3)   Calculate the mean of X^d → D_mean_d
(4)   Calculate mbest by equation (5)
(5)   if M > 6 then  % M is the number of times the algorithm has fallen into the local optimum
(6)     Update X_i, V_i by equations (4) and (9)
(7)   else
(8)     Update X_i, V_i by equations (4) and (7)
(9)   end if
(10) end for

ALGORITHM 2: Strategy of candidate generation based on elasticity.
(1) for i = 1 → N2 do
(2)   if prob_i > rand() then  % prob is the elastic factor
(3)     Update Candidate_i^d by equation (11)
(4)   else
(5)     Select two particles X_m^d, X_n^d randomly and calculate their fitness values
(6)     Choose the particle with the smaller fitness; % it must be different from P_g^d
(7)     Update Candidate_i^d by equation (18)
(8)   end if
(9)   Update X_i, V_i by equations (2) and (10)
(10) end for

ALGORITHM 3: Grouping-mixed-based particle swarm optimization algorithm.
(1) Initialization: particle positions X_i, velocities V_i, F = 0.5, M = 6
(2) Calculate each particle's fitness value f(X_i) and P_g^d; fit = P_g^d
(3) while iter < maxgen do
(4)   Set the subpopulation sizes: N1 = s × N, N2 = N − N1
(5)   Sort the population by fitness values into DP and PP
(6)   for i = 1 → N2 do
(7)     Update Candidate_i^d by equations (12) and (18)
(8)     Update X_i, V_i by equations (2) and (10)
(9)   end for
(10)  for i = 1 → N1 do
(11)    Calculate mbest by equation (5)
(12)    if M > 6 then
(13)      Update X_i, V_i by equations (4) and (9)
(14)    else
(15)      Update X_i, V_i by equations (4) and (7)
(16)    end if
(17)  end for
(18)  Recalculate fitness values and update P_g^d
(19)  if P_g^d < fit then
(20)    fit = P_g^d, M = 0
(21)  else
(22)    M = M + 1
(23)  end if
(24) end while

Selection of the Population Proportion Coefficient.
To obtain a fair comparison, all parameters in the experiments are the same, including the maximum number of independent runs (runNumber), the number of evaluations (maxFEs), and the maximum number of iterations (maxgen), where N denotes the total number of particles in the population. All the compared algorithms were tested on the twenty-one benchmark functions, and the means and standard deviations over 30 runs were recorded. The parameter settings of all PSO variants are given in Table 2. According to the experimental results, all the functions are ranked and compared. To determine the proportion of the dominant population in the SHMPSO algorithm, the parameter s is tested; the experimental results are as follows.
Through the overall analysis of Table 3 and the ranking on each function, it is concluded that the population optimization effect is best when s = 0.5, that is, when the proportion of the dominant population is kept at 50%. All the following comparisons are based on a population scale coefficient of 0.5.

Comparison of Experimental Data between SHMPSO and Other PSO Variants. Dimensions of 30, 50, and 100 are used to evaluate the given test functions. SLPSO defines its own population size; all other algorithms are tested with N = 100. When the particle dimension is 30, maxgen is set to 3 × 10^3 and the number of function evaluations maxFEs to 3 × 10^5. When the dimension is 50, maxgen is set to 5 × 10^3 and maxFEs to 5 × 10^5. When the dimension is 100, maxgen is set to 1 × 10^4 and maxFEs to 1 × 10^6.
Tables 4 and 5 show that the performance of the proposed SHMPSO is relatively stable in 30 and 50 dimensions, with similar results in both, demonstrating excellent convergence behavior. SHMPSO finds the optimal solution on 14 functions: f1, f3, f5-f10, f12, f14, f16, f17, f19, and f20, which shows that the proposed strategy applies well to these functions. Of course, the results on f15 and f21 worsen as the dimension increases, and SHMPSO's performance on some hard-to-optimize multimodal functions still needs improvement; on f18, SHMPSO is not as good as CLPSO, BLPSO, and SLPSO. Nevertheless, over all 21 test functions, SHMPSO achieves the best average ranks, 1.61 and 1.67, respectively.
The above analysis shows that SHMPSO outperforms the other six comparison algorithms.
To further verify the scalability and efficiency of SHMPSO, the proposed strategy is used to solve 100-dimensional problems with the same parameter settings as in Table 4. The experimental results are shown in the following table.
Table 6 shows that SHMPSO retains high convergence accuracy and good robustness when solving high-dimensional problems; SHMPSO and MPCPSO rank first on the 100-dimensional problems. On closer analysis, the performance on f15, f18, and f21 deteriorates drastically with increasing dimension, but performance on the other functions remains very good.
Figures 1 and 2 show that SHMPSO attains the global optimal value on most problems; on f3, f8, f13, f19, and f20, it not only ranks first in convergence accuracy but also converges fastest. There are three main reasons for this convergence behavior. First, information is highly shared among particles in the dominant population, and the guiding particle formed by this information sharing drives the dominant population to converge quickly to the global optimum. Second, to prevent the population from reaching a local optimum prematurely, a mutation operation is performed on the global optimal particle.
Third, using random dominant-population particles to guide the poor population not only speeds up the convergence of the poor population but also increases the diversity of the swarm. Particles of the dominant population, rather than the global optimum of the poor population, are selected as guides for two reasons: first, the fitness values of the poor-population particles are worse than those of the dominant population; second, relying on a single global optimal particle of the poor population does not help increase population diversity. According to the experiments in this paper, SHMPSO performs well on most functions. Its success mainly depends on the strategies proposed in this paper. First, in the elasticity-based candidate learning strategy, elite particles are selected from the dominant population for the poor-population particles to learn from, and the two populations share information, effectively escaping local optima. In addition, the mean-dimension learning strategy gives the particles a better search range on complex, changeable multimodal functions, greatly enriches the learning samples of the particles, and provides more effective information to all particles. Therefore, SHMPSO achieves excellent convergence accuracy.

Conclusion
Inspired by MPCPSO, this paper proposes a particle swarm optimization algorithm based on a subpopulation mixing strategy. The mean-dimension learning strategy ensures the search ability of the algorithm and the breadth of learnable samples, providing search potential for the whole population, while the candidate learning strategy improves population diversity and prevents the population from falling into local optima. SHMPSO is compared with six well-known PSO variants to verify its effectiveness. Of course, SHMPSO still has some shortcomings: on f2 and f11 it falls into local optima. Since the population's search ability needs improvement, our future work will focus on the global search ability of SHMPSO, and at the same time, we will study its practical applications in depth.

Data Availability
All data in this paper can be obtained from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.