Multi-Objective Particle Swarm Optimization with Multi-Archiving Strategy

Although the principle of multi-objective particle swarm optimization is simple and its operability is strong, it is still prone to local convergence and its convergence accuracy is not high. To solve these problems, we propose a multi-objective particle swarm optimization algorithm based on multiple strategies and archives. This algorithm is mainly divided into three important parts. Firstly, in the phase of sorting the optimal solutions, the solution set is stored in two different archives according to different conditions. Secondly, in order to increase the diversity of the optimal solutions, several strategies are adopted in updating the archives and maintaining their scale. Finally, a Gaussian perturbation strategy is applied to improve the distribution of particles and the quality of the optimal solution set. We compare the proposed algorithm with other algorithms and test it with different test indexes, Pareto graphs, and convergence graphs. The results show that the proposed algorithm has remarkable performance and clear advantages.


Introduction
With the continuous development of simulation computing, intelligent algorithms are advancing in a variety of directions. Because the principle of the bionic algorithm comes from natural biological behaviour and is easy to understand, a variety of algorithms have emerged in this category. They fall into two main types. One is the intelligent algorithm derived from the evolutionary behaviour of natural organisms. For example, Essam H. Houssein et al. [1] presented a multi-objective optimization algorithm based on the slime mould algorithm (SMA) called multi-objective SMA (MOSMA), and Yutao Yang et al. [2] proposed a general-purpose population-based optimization technique called Hunger Games Search (HGS). The other is the intelligent algorithm derived from the combination of two algorithms according to the different characteristics of different populations. For example, Zar Bakht Imtiaz et al. [3] proposed the Multi-Layer Ant Colony Optimization (MLACO), and Hongliang Zhang et al. [4] presented a chaotic salp swarm algorithm (SSA) with differential evolution (CDESSA). This kind of algorithm is mainly used to solve complex optimization problems in the engineering field due to its strong searching ability, such as the electric field [5] and the smart homes field [6]. The swarm intelligence algorithm, as a branch of the bionic algorithm, is also developing rapidly; among them, the development of particle swarm optimization (PSO) [7] is particularly notable. PSO is widely used in electric power [8], medical [9], satellite constellation design [10], and other industries because it is easy to understand and easy to operate. It was first proposed by Eberhart and Kennedy in 1995. Its principle is to find the optimal solutions of the first-order linear optimization model by simulating the behaviour of birds looking for food.
However, with the deepening of research, a novel PSO was proposed by Coello in 2004 [11] and applied to multi-objective optimization problems; thus, multi-objective particle swarm optimization (MOPSO) was produced. Up to now, the improvement methods of the algorithm have become increasingly diverse and their effects increasingly significant. These improvement methods are mainly divided into the following aspects: (1) Since the algorithm is prone to fall into local convergence, researchers help it escape this dilemma in various ways, such as improving the mutation strategy or enhancing the search strategy. In order to increase the quality of the optimal solution set in the decision space, Li Guoqing et al. [12] added a grid search strategy to search the particle population based on the particle position on the grid. They used a K-means clustering strategy to locate the Pareto optimal solution set in the population, and then searched for high-quality solutions in the decision space by grid search. This method improves the exploration ability of the algorithm, but due to the limitation of the K-means strategy, the parameter setting affects the result. Qu Boyang et al. [13] proposed a grid-guided particle swarm optimizer for solving multimodal multi-objective optimization problems. The researchers divided the mesh adaptively, calculated the mesh density, and randomly deleted particles that exceeded the grid density; in addition, the inertia weight and acceleration coefficients are adjusted adaptively. These methods effectively alleviate local convergence but greatly increase the computation of the algorithm. (2) To implement the particle swarm strategy, the population is divided into multiple subgroups, and the parallel operation of multiple subgroups enhances the computing capability of the algorithm. Yunfeng Zhang et al.
[14] divided the swarm adaptively into multiple subgroups and updated them with different strategies to increase the population diversity; the local optimal particles found in the subgroups guide the flight of ordinary particles. These methods can increase the diversity of particles to a certain extent. Zheng Jinhua et al. [15] proposed a dynamic MOPSO based on adversarial decomposition and neighbourhood evolution. The algorithm uses adversarial search directions to alternately update and coevolve two populations of the dynamic PSO, which increases the probability of finding the optimal solution set.
(3) PSO is combined with other algorithms to improve accuracy. A. Francis Saviour Devaraj et al. [16] combined the firefly algorithm (FA) with PSO to improve the speed and accuracy of the algorithm, selecting the best solution of the objective space by the distance between a point and a specified straight line. Kallol Biswas et al. [17] combined grey wolf optimization (GWO), the cellular automata (CA) technique, and PSO to solve optimization problems. These two new algorithms combine PSO with FA and GWO, respectively, to enhance their search and solving ability, and are applied to the engineering field to reduce resource loss. Based on the above research, in order to further improve the search ability of MOPSO, alleviate its tendency to fall into local convergence, and improve its convergence accuracy, this paper proposes a MOPSO with a multi-archiving strategy (MSMOPSO), which has two archives.
The purpose is to improve the performance of MOPSO by improving the exploration ability of its particles. Firstly, compared with MOPSO, the archive is expanded to increase the number of candidate solutions. Subsequently, the particles stored in the archives are selected in different ways: for the first archive, particles are selected by an elite selection strategy, which is also applied to update this archive in the iterative process; for the second archive, we select particles based on a self-organizing strategy and then update this archive with a species migration model. Finally, a Gaussian mutation strategy is utilized to perturb the positions of particles in the archives, aiming to improve the solving accuracy of the algorithm by enhancing the diversity of the particle population. The rest of the article consists of the following sections. The second part introduces the definition of multi-objective algorithms and describes the prior work for this paper. The third part describes the improved MOPSO in detail. The fourth part describes the experimental setup. The fifth part reports the experimental conclusions and the corresponding analysis. The sixth part summarizes the article.

Multi-Objective Particle Swarm Optimization.
In MOPSO, the velocity and position of each particle are initialized in the solution space in a random way; then particles search in the direction of global and individual optimality, so the whole search and update process follows the current optimal solutions. Assume that the total number of particles in the population is N and the search space is D-dimensional. The particles' positions are expressed as X_i = X_1, X_2, X_3, ..., X_N, with velocities V_i defined analogously; the search process is as follows:

V_ij^(t+1) = w * V_ij^(t) + c_1 * r_1 * (P_ij^(t) - X_ij^(t)) + c_2 * r_2 * (P_gj^(t) - X_ij^(t))  (1)
X_ij^(t+1) = X_ij^(t) + V_ij^(t+1)  (2)

V_ij^(t) and X_ij^(t) represent the velocity and position components of the ith particle in the jth dimension at generation t, respectively. w represents the inertia weight, which enables particles to expand the search space; its value is set to 0.729. c_1 and c_2 represent the cognitive and social learning coefficients, with a value of 0.7029. r_1 and r_2 are random numbers within (0, 1); P_ij^(t) and P_gj^(t) represent the local and global optimal positions experienced by the particle in the jth dimensional component, respectively.
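The update equations (1)-(2) can be sketched in a few lines. This is a minimal illustration (Python stands in for the paper's MATLAB implementation; the function name and the (N, D) array layout are assumptions), using the parameter values stated in the text:

```python
import numpy as np

def pso_update(X, V, pbest, gbest, w=0.729, c1=0.7029, c2=0.7029, rng=None):
    """One velocity/position update for all particles, per equations (1)-(2).

    X, V, pbest: (N, D) arrays; gbest: (D,) array.
    Parameter values follow the text (w = 0.729, c1 = c2 = 0.7029).
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(X.shape)   # independent r1, r2 in (0, 1) per component
    r2 = rng.random(X.shape)
    V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X_new = X + V_new          # position follows the new velocity
    return X_new, V_new
```

Note that when a particle already sits at both its personal and global best, the attraction terms vanish and only the inertia term w * V remains.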
Related definitions of multi-objective problems (MOPs) and the Pareto front have been proposed in Ref. [18]. The typical feature of MOPSO is that it has multiple optimal solutions. Therefore, it is necessary to add an external archive to store high-quality particles and form a candidate solution set. In addition, there are various methods to select the optimal solutions, but the core basis of selection is the Pareto ranking value. If a particle's sorting value is one, this particle has relatively high quality in the swarm. The overall process of the algorithm is as follows. Firstly, initialize the particle swarm information. Secondly, calculate the fitness values of the particles in the population in order to determine the optimal position of each individual particle. Thirdly, sort the fitness values of the particles by the Pareto ranking strategy and store the noninferior solutions in the external archive; at the same time, the particle position is determined according to the initial mesh partition of the objective function values. Fourthly, use the roulette method to select the optimal particle from the external archive and take it as the global optimal position particle (gbest). Fifthly, update the velocity and position of each particle, and update the optimal position of each individual particle (pbest) using the dominance relation. Finally, repeat steps 2-5 to update the external archive and retain the optimal noninferior solutions.
This process ends when MOPSO reaches the iteration stop condition.
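The Pareto ranking step above, which keeps the particles that no other particle dominates, can be sketched as follows (a minimal Python sketch for a minimization problem; function names are illustrative):

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_rank_one(F):
    """Indices of particles with Pareto ranking value one,
    i.e. the non-dominated set of the objective matrix F."""
    F = np.asarray(F)
    n = len(F)
    return [i for i in range(n)
            if not any(dominates(F[j], F[i]) for j in range(n) if j != i)]
```

For example, with objectives [[1, 2], [2, 1], [2, 2], [3, 3]], the first two vectors are mutually non-dominated and dominate the rest, so only indices 0 and 1 get sorting value one.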

Multiple Archives.
Since the external archive is used to store relatively high-quality particles in MOPSO, the quality of the external archive directly affects the quality of the optimal solution set. The external archive is maintained as follows: particles with a sorting value of one obtained at each iteration are stored in it, and the archive is then extended using the elite selection strategy. However, if these particles are too concentrated, the algorithm will inevitably fall into the "premature" phenomenon. Therefore, in this part, we use different archiving methods to store potential solutions according to different adaptive conditions.

The First Archive.
In the initialization stage, Pareto ranking is first carried out on the particles. If there is only one particle in the swarm whose sorting value is one, it is stored in the first archive. In the iterative process, we compare the fitness values of the particles in the population with the fitness values of the particles in the archive and then adopt the elite selection strategy to select the superior particles. If such particles exist, they are stored in the archive, and the inferior particles are deleted from it. In this way, the archive update is achieved.

The Second Archive.

During initialization, Pareto ranking is first carried out on the particles in the population. If there are multiple particles with a sorting value of one, they are stored in the second external archive. Then, in the iterative process, in order to increase the diversity of particles, these particles are regarded as the leaders. The specific operation of the second archive is to select particles around the leaders through the self-organizing grouping strategy; in other words, particles with excellent performance are selected as potential solutions and put into the archive to achieve archive expansion. In addition, the species migration model is used to update the archive to avoid excessive agglomeration of particles. The self-organizing grouping strategy and the species migration model are presented in the following two subsections.

Archive Expansion Based on Self-Organizing Strategy.
A large number of studies show that during multi-objective calculations, the decision variables can be divided into multiple subgroups according to certain rules, which can improve the efficiency of the algorithm and the quality of the solutions. Therefore, in this stage, we select high-quality particles in the swarm through the form of subgroups. The commonly used clustering algorithm is the K-means clustering algorithm [19], but K-means has three obvious shortcomings: (1) its solutions depend on the selection of the initial points; if the quality of these points is poor, the results will not reach the optimum; (2) the value of K needs to be set in advance, and its selection has a great influence on the final results; (3) during clustering, K-means is prone to producing empty groups, which makes it fall into local convergence. Thus, we utilize another strategy, namely, the self-organizing strategy. This strategy refers to the spontaneous formation of a certain structure of the objective variables in accordance with certain rules, and it is widely used in multimodal MOPs [19]. In MSMOPSO, this strategy is applied to select particles in the population. When applying it, there is no need to set the number of groups in advance; moreover, it can group the swarm adaptively, which effectively avoids the empty-group phenomenon. The clustering mechanism of the self-organization strategy applied to the algorithm is as follows: firstly, the entire population is dynamically grouped according to leaders. If there are multiple leaders, the groups are divided according to the Euclidean distance between the leaders and the other particles in the population. At the same time, if a large number of duplicate particles are generated in the archive because two leaders are close to each other, the duplicate particles in the second archive are deleted.
The results show that the effect is best when the group radius is 1/20 [20]. The specific situation is shown in Figure 1. Taking a two-dimensional test function as an example, the particles in the population are marked on the coordinate axes, and the vertical axis represents the particles' fitness values. It is assumed that there are three particles whose sorting value is one (marked by triangles in Figure 1); the Euclidean distance d between a particle in the population and each triangle is then calculated. If d is less than the previously set value, the particle and the triangle belong to the same subgroup (the circular region in the diagram represents the dynamic group established around the triangle by the Euclidean distance). If the value of d is too large, the particle is redundant, and no grouping operation is performed on it (such as the rhomboid particle in Figure 1, which was not assigned because its d exceeded the previously set value). In addition, if there are particles overlapping between subgroups, such as the quadrangle symbol in Figure 1, they are stored directly in the archive without special grouping. The storage procedure is shown in Figure 2: the rectangular table represents the second archive, and the particles in it indicate that they are saved at this time. It is obvious from Figure 2 that, according to the self-organizing clustering strategy, all particles around the leaders are stored.
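The distance-based grouping described above can be sketched directly: each particle is assigned to the subgroup of the one leader within the radius, stored directly if it overlaps several subgroups, or left ungrouped if it is near no leader. This is an illustrative Python sketch (function name and return structure are assumptions; `radius` would be set from the 1/20 rule in the text):

```python
import numpy as np

def self_organize(population, leaders, radius):
    """Self-organizing grouping sketch: dynamic subgroups form around the
    leaders by Euclidean distance, with no preset number of groups."""
    groups = {i: [] for i in range(len(leaders))}
    stored, ungrouped = [], []
    for p in population:
        near = [i for i, L in enumerate(leaders)
                if np.linalg.norm(np.asarray(p, float) - np.asarray(L, float)) < radius]
        if len(near) > 1:
            stored.append(p)            # overlap region: archived directly
        elif len(near) == 1:
            groups[near[0]].append(p)   # joins that leader's subgroup
        else:
            ungrouped.append(p)         # redundant particle, not grouped
    return groups, stored, ungrouped
```

Because groups are created around whichever leaders exist, no empty-group failure mode of the K-means kind can occur.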

External Archive Update Based on Species Migration Model.

The species migration model is used in Biogeography-Based Optimization, mainly to simulate the migration of species between multiple habitats. Experimental results show that this model has good convergence and is effective [21]. The model is mainly divided into two stages: species migration and species mutation. The migration rules are shown in Figure 3: S stands for the number of species, P represents the migration probability of the habitat, λ represents the immigration rate of the habitat, µ represents the emigration rate of the habitat, E represents the maximum of the emigration rate function, I represents the maximum of the immigration rate function, S_0 represents the equilibrium point, and S_max represents the maximum capacity of the habitat, that is, the number of species it can accommodate. According to Figure 3, when the species quantity S is 0, the emigration rate µ is 0, while the immigration rate λ reaches its maximum. When the species number reaches S_max, the immigration rate λ is 0, and the emigration rate µ reaches its maximum. When the species number is S_0, the immigration rate equals the emigration rate, indicating that the habitat is in equilibrium; at this point the migration effect is best and the habitat after migration is of the best quality. λ_S represents the immigration rate and µ_S the emigration rate under this model. Therefore, λ_S and µ_S are adopted in MSMOPSO. In the linear model of Figure 3 they are calculated as follows:

λ_S = I(1 − S/S_max),  µ_S = E · S/S_max,

where the meanings of the related parameters are the same as those in Figure 3.
In MSMOPSO, the second archive is regarded as the habitat, the particles stored in it are regarded as the species living in the habitat, and the initial update of this archive is completed by migrating the species in the habitat. The maximum value of both the immigration rate and the emigration rate in the above process is 1 (E = I = 1). Therefore, based on the above formulae, MSMOPSO calculates the emigration rate and immigration rate of each archived particle as follows, so as to complete the preliminary update of the second archive. As shown in Figure 3, the probability of habitat mutation is inversely proportional to the species count probability. The specific forms are

R_k = k/S_max,  (3)
P_k = 1 − k/S_max,  (4)

where R_k represents the emigration rate of the kth particle in the archive, P_k represents its immigration rate, S_max represents the number of species in the archive, and k represents the index of the species traversed. Besides, migration that only considers the immigration and emigration rates cannot improve the diversity of species in the habitat. Therefore, it is necessary to select a species in the migrated habitat for mutation disturbance and then move the disturbed species back into the habitat. Following this idea, MSMOPSO introduces a random disturbance factor a at this stage; that is, some random particles in this archive are mutated and disturbed to produce new incoming particles. This method can increase the diversity of the particle population and the possibility of obtaining the optimal solution set. The value of the random disturbance factor follows Ref. [22]. The migration disturbance strategy is shown in formula (5), where a is a random number in (0, 1.2) and X_ij represents the particle component disturbed by migration. This process is shown in Figure 4.
In Figure 4, the solid line represents the species migration process and the dotted line represents the species disturbance process. The archive on the left side of Figure 4 shows that, after archive expansion (the process shown in Figure 2) is completed, the associated particles are removed based on the individuals' emigration rates. At the same time, according to the cumulative probability of the particle emigration rates in the archive, the particles that need to be disturbed are determined; the disturbed particles are then randomly stored in the archive again. The right part of Figure 4 represents the archive after this process has been completed, from which two advantages of applying the species migration model can clearly be seen: first, the storage size of the archive is reduced; second, the diversity of particles in the archive is increased.
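The migrate-then-disturb update can be sketched as below. This is a hedged illustration: rates (3)-(4) with E = I = 1 are taken from the text, but since formula (5) is not reproduced above, the perturbation form X' = a · X used here is an assumption, as are the function and parameter names:

```python
import random

def migrate_archive(archive, a_max=1.2, seed=None):
    """Species-migration update sketch for the second archive (BBO-style).
    Each particle k has emigration rate R_k = k/S_max (formula (3)) and
    immigration rate P_k = 1 - k/S_max (formula (4)); particles selected
    via R_k are disturbed by a random factor a in (0, 1.2) and re-inserted."""
    if not archive:
        return []
    rng = random.Random(seed)
    s_max = len(archive)
    out = []
    for k, particle in enumerate(archive, start=1):
        r_k = k / s_max                       # emigration rate, formula (3)
        # p_k = 1 - k / s_max                 # immigration rate, formula (4)
        if rng.random() < r_k:                # selected for disturbance
            a = rng.uniform(0.0, a_max)       # random disturbance factor a
            particle = [a * x for x in particle]  # assumed form of formula (5)
        out.append(particle)
    return out
```

The archive size is preserved here; the scale-reduction step described in the maintenance subsection below is a separate operation.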

First Archive Scale Maintenance.
In the first archive, if the archive size exceeds the limit, its particles are placed in an adaptive grid, and the archive size is then maintained according to the density of the particles in the grid.
That is, if the particles in a grid cell reach the preset maximum density (set to 10 in MSMOPSO), we first judge whether there are duplicate entries; if so, the duplicates are eliminated; otherwise, particles in that cell are deleted.

Second Archive Scale Maintenance.
In the second archive phase, if the archive size exceeds the limit, the archive size is reduced in two ways. The first is to move some particles out of this archive through the emigration rate in the species migration model: a random number is generated and compared with the corresponding emigration rate of each particle within the habitat, and if the former is less than the latter, the particle is emigrated. The second is to randomly delete some particles in this archive if the size of the habitat still exceeds the limit after species migration; this process continues until the size of the archive does not exceed the set value.
The specific process is shown as the pseudocode in Table 1.
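The two-stage reduction just described can be condensed into a short sketch (illustrative only: the function name, the `limit` parameter, and the reuse of the emigration rate R_k = k/S_max from formula (3) are assumptions):

```python
import random

def shrink_second_archive(archive, limit, seed=None):
    """Second-archive scale maintenance sketch: first emigrate particles
    whose emigration rate exceeds a fresh random draw, then delete random
    particles until the archive fits within the size limit."""
    if not archive:
        return []
    rng = random.Random(seed)
    s_max = len(archive)
    # Stage 1: emigration-rate-based removal (r < R_k means the particle leaves)
    kept = [p for k, p in enumerate(archive, start=1)
            if not rng.random() < k / s_max]
    # Stage 2: random deletion until the limit is respected
    while len(kept) > limit:
        kept.pop(rng.randrange(len(kept)))
    return kept
```

Particles late in the traversal carry a higher emigration rate, so stage 1 already thins the archive before any purely random deletion is needed.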

Mutation Strategy.
The purpose of the mutation strategy is to perturb the positions of a certain number of particles, so as to increase the possibility of these particles finding unexplored regions and searching for a relatively high-quality solution set there. The Gaussian mutation strategy has strong global search ability, so it helps to alleviate local convergence. Therefore, MSMOPSO uses a Gaussian perturbation mutation strategy to mutate the position information of particles in order to increase the probability of obtaining optimal solutions. In this stage, a nonlinear function is first used to determine the number of variant particles in the archive, which varies with the number of iterations. Subsequently, the number of mutations of each particle is determined adaptively according to the number of iterations, and it is guaranteed not to exceed the preset bounds (i.e., the maximum and minimum limits on the number of mutations). Assuming that the number of iterations at this time is t, the number of mutant particles and the number of mutations required per mutant particle are computed by the following formulae, which are similar to Ref. [23].
In the above formulae, t represents the current number of iterations; tmax represents the maximum number of iterations in MSMOPSO; mutrate is the mutation rate with respect to the number of particles; nump represents the number of mutated particles; length(position, 1) is the number of particles in the population; unp represents the maximum number of disturbances of a particle; lnp represents the minimum number of disturbances of a particle; and np is the number of disturbances per variant particle. In addition, to increase the randomness of the mutation, the variation dimension of each particle is determined randomly. Similar to Ref. [24], in the dimensions to be changed, the position of each component of the decision variable is changed by combining the upper and lower proportions of perturbation with the Gaussian variation. It is worth noting that in MSMOPSO, ∆d is regarded as the perturbation range of the component position; that is, the variation range of particles is limited, so as to avoid producing particles that are too far away from the mutated particles. The calculation formulae are as follows: r_2 represents a random number generated from a Gaussian distribution, but its value is limited here so that it cannot exceed 1; rld refers to the lower limit of variation; rud refers to the upper limit of variation; X_jmin is the lower bound of the jth dimension of particle X; X_jmax represents its upper bound; ld refers to the lower disturbance bound of the decision component in the disturbance process; ud refers to its upper disturbance bound; and ∆d represents the perturbation step of the decision variable. In addition, in order to prevent the position of a disturbed component from exceeding the set boundary, any component beyond the boundary is reset.
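A much-simplified sketch of this perturbation is given below. The adaptive schedules for nump and np are not reproduced above, so this version perturbs every particle once; the clipped Gaussian r_2, the bounded step ∆d, and the boundary reset follow the text, while `delta_frac` and the function name are assumed stand-ins:

```python
import numpy as np

def gaussian_perturb(X, lb, ub, delta_frac=0.1, seed=None):
    """Gaussian mutation sketch: one randomly chosen dimension j of each
    particle is shifted by r2 * delta_d, where r2 is Gaussian clipped to
    [-1, 1] and delta_d is a fraction of the range of dimension j; any
    component pushed beyond its bounds is reset to the boundary."""
    rng = np.random.default_rng(seed)
    X = np.array(X, dtype=float)
    for i in range(len(X)):
        j = int(rng.integers(X.shape[1]))            # random variation dimension
        r2 = float(np.clip(rng.standard_normal(), -1.0, 1.0))  # |r2| <= 1
        delta_d = delta_frac * (ub[j] - lb[j])       # bounded perturbation step
        X[i, j] += r2 * delta_d
        X[i, j] = min(max(X[i, j], lb[j]), ub[j])    # reset out-of-bound component
    return X
```

Bounding the step by a fraction of the dimension's range keeps mutants near their parents, matching the stated purpose of ∆d.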
According to the above, MSMOPSO uses two archiving methods to store the better particles. Then, in the iteration process, the archived particles are updated, and these archived particles are used to guide the flight direction and step size of the other particles in the population. Therefore, the velocity and position of particles in MSMOPSO are updated through formulae of the same form as (1) and (2), with the guiding particle taken from the archive; rep stands for a particle in the archive. In the iteration process, if the Pareto ranking value of an archived particle increases, it indicates that the particle performs better than other particles after the archive update and the mutation strategy. Such particles are regarded as pbest particles, which guide other particles in flight.

Basic Framework of the Algorithm.

MOPSO puts potential solutions in a single archive, so the diversity of solutions is limited. This algorithm improves the diversity of the candidate solution set by adding an archiving strategy. Its steps are as follows:
Step 1. Initialize the particle population and set the related parameters;
Step 2. Calculate the position and velocity of the particles;
Step 3. Calculate the fitness values of the particles in the population, and perform Pareto ranking on the fitness values;
Step 4. Mark the particles whose sorting value is one, and select pbest according to the fitness values of the particles;
Step 5. Determine whether there are multiple particles with a sorting value of one. If there is more than one such particle, they are stored in the second archive; otherwise, the particle is stored in the first archive;
Step 6. Update the first and second archives, respectively. The first archive uses the elite selection strategy to select related particles from the particle population. In the second archive, the species migration model is used to carry out the in-migration, out-migration, and mutation operations on the particles;
Step 7. Select gbest particles in the corresponding archiving strategy, and guide the other particles' flight;
Step 8. Judge whether the algorithm has reached the iteration stop condition. If so, terminate the whole algorithm. Otherwise, apply the mutation strategy to the particles in the archive and return to Step 2.
The flow framework of the algorithm is shown in Figure 5.
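Steps 1-8 can be condensed into a heavily simplified, runnable sketch of the control flow. This is not the full MSMOPSO: the elite-selection and migration details are reduced to non-dominated filtering and random gbest selection, the archives are plain lists, and all names are illustrative:

```python
import numpy as np

def msmopso_sketch(f, lb, ub, n=20, iters=30, seed=0):
    """Control-flow sketch of the MSMOPSO loop (Steps 1-8), simplified."""
    rng = np.random.default_rng(seed)
    d = len(lb)
    X = rng.uniform(lb, ub, (n, d))                    # Step 1: initialize
    V = np.zeros((n, d))
    pbest = X.copy()
    pbest_f = np.array([f(x) for x in X])
    arch1, arch2 = [], []                              # the two archives

    def nondom(F):
        return [i for i in range(len(F))
                if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                           for j in range(len(F)) if j != i)]

    for _ in range(iters):
        F = np.array([f(x) for x in X])                # Step 3: fitness + ranking
        front = nondom(F)                              # Step 4: sorting value one
        target = arch1 if len(front) == 1 else arch2   # Step 5: archive choice
        target.extend(X[front].copy())                 # Step 6 (update simplified)
        pool = np.array(arch1 + arch2)
        gbest = pool[rng.integers(len(pool))]          # Step 7: select gbest
        r1, r2 = rng.random((n, d)), rng.random((n, d))
        V = 0.729 * V + 0.7029 * r1 * (pbest - X) + 0.7029 * r2 * (gbest - X)
        X = np.clip(X + V, lb, ub)                     # Step 2 for next iteration
        newf = np.array([f(x) for x in X])
        better = np.all(newf <= pbest_f, axis=1) & np.any(newf < pbest_f, axis=1)
        pbest[better] = X[better]                      # dominance-based pbest
        pbest_f[better] = newf[better]
    return arch1 + arch2                               # Step 8: candidate set
```

On a toy bi-objective problem such as f(x) = (x^2, (x - 1)^2) on [0, 1], the returned list accumulates non-dominated candidates from each generation; a real implementation would also prune and mutate the archives per the preceding subsections.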

Experimental Settings
In this experiment, MATLAB (R2020b) is used to test MSMOPSO; the simulations were run on a Windows 10 platform using an Intel i7-7700 3.60 GHz processor with 16 GB RAM. The remaining settings are as follows.
The recoverable lines of the Table 1 pseudocode read as follows:

  Apply the self-organizing strategy to extend the second archive
  for k = 1 : Max
      Calculate the particle emigration rate R_k in the second archive according to formula (3)
      Calculate the particle immigration rate P_k in the second archive according to formula (4)
      ...
      Delete the particle k in the archive
      ...
      Perturb the particle k and produce a new particle according to formula (5)
      In the second archive, replace the particle k with this new particle
  end
  Generate the updated second archive, Sbest1
  Delete the extra parts

The parameter settings are given in Table 2, where D represents the dimension of the given problems, N means the group size, FEs signifies the fitness evaluation number, and Problems means the test problems.
To fairly test the characteristics of the algorithms, each algorithm uses the same parameter settings when testing the benchmark functions. As shown in Table 3, the parameters for each algorithm are set to their default values. As reported by Arcuri and Fraser (2013), using default algorithm parameter values effectively reduces the risk of better-parameterization bias [12].

Comprehensive Evaluation Indexes.
In this algorithm, the Inverted Generational Distance (IGD) [40] and Hypervolume (HV) [41] evaluation indexes are adopted to comprehensively evaluate the performance of the algorithm.

IGD Evaluation Index.
This algorithm adopts the IGD evaluation index, which evaluates the convergence and distribution performance of the algorithm mainly by calculating the sum of the minimum distances between each individual on the real Pareto front and the individual set obtained by the algorithm. The smaller the value, the better the convergence and distribution performance. The IGD evaluation index is calculated as follows:

IGD(P, Q) = ( Σ_{v∈P} d(v, Q) ) / |P|,

where P is the set of points uniformly distributed on the true Pareto front, |P| is the number of individuals distributed on the true Pareto front, Q is the Pareto optimal solution set obtained by the algorithm, and d(v, Q) is the minimum Euclidean distance between the individual v in P and the population Q.
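The IGD definition above translates directly into code (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def igd(P, Q):
    """Inverted Generational Distance: for each reference point in P,
    take the minimum Euclidean distance to the obtained set Q, then
    average over |P|. Smaller is better."""
    P = np.asarray(P, float)
    Q = np.asarray(Q, float)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # |P| x |Q| distances
    return d.min(axis=1).sum() / len(P)
```

For example, with a reference front P = [[0, 0], [1, 1]] and an obtained set Q = [[0, 0]], the per-point minima are 0 and sqrt(2), so IGD = sqrt(2)/2.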

HV Evaluation Index.

The HV evaluation index refers to the hypervolume index, that is, the volume of the region in the objective space enclosed by the nondominated solution set obtained by the algorithm and the reference point. This index can simultaneously evaluate the convergence and distribution of the solution set in space: convergence refers to the degree of approximation between the solution set and the real Pareto front, and distribution refers to how widely the solution set is spread in the objective space. A larger HV value indicates that the algorithm dominates a larger area in the objective space and has better diversity. The formula of the HV index is as follows:

HV = δ( ∪_{i=1}^{|S|} v_i ),

where δ represents the Lebesgue measure, which is used to measure volume, |S| is the number of nondominated solutions, and v_i represents the hypervolume formed by the reference point and the ith solution in the nondominated solution set.

Comparison of Experimental Results
MSMOPSO amplifies the archiving strategy and selects different archiving strategies according to different adaptability conditions; then, in the second archive, the self-organizing dynamic clustering strategy and the species migration model are applied to update it; finally, the Gaussian perturbation strategy is applied to change the positions of particles in the population, and the optimal solution set is obtained. In order to further assess the performance of this algorithm, it is compared with other evolutionary algorithms, and the experimental results are described below. Table 4 shows the results of the IGD evaluation index obtained by this algorithm and other MOPSOs; the standard deviations are given in parentheses. Table 5 shows the results of the IGD evaluation index obtained by MSMOPSO and other MOEAs; the standard deviations are given in parentheses. As shown in Table 5, MSMOPSO achieves better results on 14 of the 22 test functions compared with the solving means of the other algorithms; ARMOEA has 3 better results, MOEAD has 1, NSGAIISDR has 2, gNSGAII has 0, RVEA has 1, and GrEA has 1. This shows that MSMOPSO has better convergence performance and particle distribution than the MOEAs. In addition, on the test functions DTLZ1, UF1, UF2, UF6-8, UF10, ZDT1, ZDT2, ZDT4, and ZDT6, the standard deviation of the IGD index of MSMOPSO is smaller than that of the other algorithms, indicating that the convergence and particle distribution performance of MSMOPSO are more stable than those of the other MOEAs.

Comparison of IGD Data.
Based on the comparison of the IGD indexes in Table 4 and Table 5, MSMOPSO performs relatively well against both the MOPSOs and the MOEAs, especially on the test functions DTLZ1, ZDT4, ZDT6, and the UF series (UFs). This indicates that the Gaussian mutation strategy increases the possibility of finding better solutions and that the multi-archiving mechanism stores more optimal solutions, which further improves the exploration ability of the algorithm.
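The IGD index used throughout these comparisons is the average Euclidean distance from each point of a reference sample of the true Pareto front to its nearest obtained solution; smaller values mean better convergence and spread. A minimal sketch:

```python
import math

def igd(reference_front, obtained_set):
    """Inverted Generational Distance: mean distance from each
    reference Pareto-front point to its nearest obtained solution.
    Smaller is better (captures both convergence and spread)."""
    total = sum(min(math.dist(r, s) for s in obtained_set)
                for r in reference_front)
    return total / len(reference_front)

ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
got = [(0.0, 1.0), (1.0, 0.0)]
print(igd(ref, got))  # nonzero: the middle front point is uncovered
```

Because the average runs over the reference front, a solution set that misses a whole region of the front is penalized even if its individual points are accurate, which is why IGD is read as a combined convergence-and-distribution measure here.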
This enables MSMOPSO to find as many noninferior solutions as possible. In addition, according to the statistical results, the variance of the IGD index of the proposed algorithm on the MOPs DTLZ1, UF2, UF7, UF8, ZDT4, and ZDT6 is smaller, indicating that the algorithm has a certain stability in solving specific optimization problems and some advantage in converging to the optimal solution set. Table 6 shows the HV evaluation indexes of MSMOPSO and the other MOPSOs, with the standard deviations given in parentheses. As shown in Table 6, on 10 of the 22 test functions the results of MSMOPSO are better than those of the other algorithms; SMPSO has 2 better results, MPSOD has 1, MOPSOCD has 0, NMPSO has 5, CMOPSO has 4, and dMOPSO has 0. This indicates that the results of MSMOPSO are more widely distributed in the objective space. In addition, on the test functions UF1, UF2, UF7, ZDT2, and ZDT3, the standard deviation of the HV index of MSMOPSO is smaller than that of the other algorithms, illustrating that MSMOPSO has better robustness. Table 7 shows the HV evaluation indexes of MSMOPSO and the other MOEAs, again with the standard deviations in parentheses. As shown in Table 7, on 13 of the 22 test functions this algorithm achieves a better mean than the other algorithms; ARMOEA has 4 better results, MOEAD has 1, NSGAIISDR has 1, GrEA has 1, and RVEA has 1. This shows that the spatial distribution of MSMOPSO is better than that of the other MOEAs. In addition, on the test functions UF1, UF2, UF7, ZDT1, ZDT2, and ZDT3, the standard deviation of the HV index of this algorithm is smaller than that of the other algorithms, indicating that this algorithm has good stability compared with the other algorithms.

Comparison of HV Data.
Based on Table 6 and Table 7, compared with the MOPSOs and MOEAs, the HV index of this algorithm is relatively large on the DTLZ1, UF1-3, UF7-10, and ZDT3-4 test functions. By the nature of the HV index, the larger the HV value, the better the spatial distribution of the obtained solutions. On this basis, the statistical results show that the variance of the HV index of MSMOPSO is smaller on the DTLZ1, UF7, and ZDT4 test functions, indicating better stability and significant solving ability on these kinds of MOPs. In conclusion, the Gaussian mutation strategy improves the spatial distribution of the particles in the proposed algorithm, and the multi-archive then stores the noninferior solutions; together these strategies effectively improve the algorithm's exploitation of the objective space and alleviate its tendency to fall into local convergence.
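The Gaussian mutation credited above can be sketched as adding zero-mean Gaussian noise to each decision variable and clipping back into the feasible box. This is a generic illustration, not the authors' exact operator; the mutation strength `sigma` and the per-variable application are assumptions:

```python
import random

def gaussian_perturb(position, sigma, lower, upper):
    """Perturb each decision variable with zero-mean Gaussian noise
    of standard deviation `sigma`, then clip into [lower, upper].
    `sigma` is an assumed tuning parameter, not taken from the paper."""
    return [min(upper, max(lower, x + random.gauss(0.0, sigma)))
            for x in position]

random.seed(0)
particle = [0.5, 0.5, 0.5, 0.5, 0.5]
print(gaussian_perturb(particle, sigma=0.1, lower=0.0, upper=1.0))
```

Small perturbations of this kind nudge particles off positions they share with a local attractor, which is the mechanism behind the claimed escape from local convergence.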

Comparison of Box Plots.
In this part, some test functions are taken as examples: the IGD index of each run of MSMOPSO is recorded, displayed as a box plot, and compared with the results of the other comparison algorithms. The distance between the upper and lower quartiles of MSMOPSO is smaller than that of the other algorithms, indicating that its experimental data are more concentrated, so MSMOPSO is more stable. Outliers in the data are marked by "+" in the box plots; the plots show fewer outliers in the experimental data of MSMOPSO, indicating better solution results. To sum up, the performance of MSMOPSO is more significant than that of the other algorithms, both in solution quality and in stability. The box plots are shown in Figure 6, where 1 stands for MPSOD, 2 for MOPSOCD, 3 for SMPSO, 4 for NMPSO, 5 for ARMOEA, 6 for MOEAD, 7 for NSGAIISDR, 8 for gNSGAII, and 9 for MSMOPSO.
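The quantities read off these box plots can be computed directly: the quartile spread measures concentration, and points beyond the whiskers are plotted as "+". The 1.5·IQR whisker rule below is the usual convention and is assumed here, not stated in the paper:

```python
import statistics

def box_stats(data):
    """Quartiles and outliers of a sample of IGD values, using the
    conventional 1.5*IQR whisker rule (an assumption; the paper does
    not state which rule its box plots use)."""
    q1, _, q3 = statistics.quantiles(data, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [x for x in data if x < lo or x > hi]
    return q1, q3, outliers

print(box_stats([1, 2, 3, 4, 100]))  # the 100 falls outside the whiskers
```

A smaller q3 - q1 gap corresponds to the tighter boxes reported for MSMOPSO, and a shorter outlier list to its fewer "+" markers.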
Comparing Figures 7 and 8, the solutions of MSMOPSO are evenly distributed on ZDT4, and its optimal solution set lies entirely on the Pareto front, while the solutions of the other tested algorithms do not attach to the front. In addition, because the solution values obtained by the other algorithms on some test functions are large, they differ greatly from the optimal solution set that forms the front and are therefore not shown in the other comparison diagrams. On DTLZ1, only MSMOPSO is evenly distributed on the Pareto front; the optimal solution sets of ARMOEA, MOEAD, and NSGAIISDR are too scattered to converge to the front, while the solution values of MPSOD, MOPSOCD, SMPSO, NMPSO, and gNSGAII are large and their solution sets too scattered, so no Pareto front appears.

Complexity of MSMOPSO.
In MSMOPSO, the first archive is updated by the elite selection strategy, the self-organizing clustering strategy and the species migration strategy are applied to expand and update the second archive, and finally the particles of the whole swarm are randomly disturbed by the Gaussian mutation strategy. Therefore, the time complexity of this algorithm mainly depends on these three operations. For the first archive, the elite strategy is applied to select particles from the archive and the swarm, so the time complexity of this step depends on the archive and swarm sizes.

In addition, the tic/toc commands were used to record the running time of MSMOPSO over 1000 evaluations, and the results were compared with the MOEAs and MOPSOs; the comparisons are shown in Table 8 and Table 9. As can be seen from the tables, the computation time of MSMOPSO on the different test functions is larger than that of the other algorithms, but the difference is relatively small. It is worth noting that running times depend on computer hardware, software, and other factors, so these results are for reference only.

Figure 9 shows the trajectory of the IGD index of each algorithm over 10000 evaluations. Figure 9(a) shows the IGD changes on the ZDT1 test function for MSMOPSO and the evolutionary algorithms ARMOEA, MOEAD, NSGAIISDR, and gNSGAII. It can be seen from Figure 9 that the trajectory of MSMOPSO reaches a stable state before 1000 iterations, indicating that its convergence rate is fast; the convergence rates of ARMOEA, MOEAD, and NSGAIISDR are similar. From the termination state of each algorithm, the IGD index of MSMOPSO is the minimum, which further illustrates its significant convergence performance. Figure 9(b) presents the IGD changes of MSMOPSO compared with MOPSOCD, NMPSO, SMPSO, and MPSOD.
It can be clearly seen from Figure 9(b) that MSMOPSO converges before the 1000th iteration and remains stable in the subsequent iterations, while the IGD index values of the other algorithms only stabilize slowly after the 7000th iteration. This shows that MSMOPSO converges quickly and has a certain stability.
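The tic/toc timing used for Tables 8 and 9 has a direct analogue in other languages: wrap one optimizer run in a wall-clock stopwatch. A sketch in Python, where `optimizer` is a hypothetical callable taking an evaluation budget:

```python
import time

def time_run(optimizer, n_evaluations=1000):
    """Wall-clock timing of one optimizer run, analogous to the
    tic/toc measurement in the paper.  `optimizer` is any callable
    taking an evaluation budget (a hypothetical interface)."""
    t0 = time.perf_counter()
    result = optimizer(n_evaluations)
    elapsed = time.perf_counter() - t0
    return result, elapsed

# Dummy "optimizer" standing in for a real run:
result, seconds = time_run(lambda budget: budget * 2, n_evaluations=1000)
```

As the paper notes, such wall-clock figures depend on hardware and software, so they support only relative comparisons made on the same machine.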

Conclusions
In this paper, we propose a new MOPSO, called MSMOPSO, to deal with MOPs. In MSMOPSO, different archiving strategies and a Gaussian mutation strategy are applied. Then, it was compared with MOPSOs and MOEAs on different test functions, and we used box plots, the IGD index, the HV index, and front diagrams to test its performance. Through the IGD and HV indexes, it was found that MSMOPSO has stronger convergence ability and more significant robustness; through the box plots and front diagrams, it is found that the solution quality of MSMOPSO is better. What is more, the IGD trajectory graph shows that the convergence rate of MSMOPSO is faster.
The results show that the different update strategies adopted for the different archives mitigate the tendency of MOPSO to fall into local convergence, and the Gaussian mutation strategy improves the distribution of the particles in the population.
Further research on MSMOPSO can be divided into two parts: algorithm optimization and practical application. Most researchers apply MOPSO to optimization problems in static environments before turning to MOPs in dynamic environments. Therefore, improving MSMOPSO and using it to solve optimization problems in dynamic environments will be one focus of future work. Another is to test the performance of MSMOPSO in real-world areas such as the smart grid [42].

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.