Research Multiobjective Particle Swarm Optimization Based on Ideal Distance

Recently, multiobjective particle swarm optimization (MOPSO) has been widely used in science and engineering; however, how to effectively improve the convergence and distribution of the algorithm has always been a hot research topic in multiobjective optimization problems (MOPs). To address this problem, we propose a multiobjective particle swarm optimization based on the ideal distance (IDMOPSO). In IDMOPSO, the adaptive grid and ideal distance are used to improve the selection of global learning exemplars and the size-control strategy of the external archive, and fine-tuning parameters are introduced to adjust particle flight in the swarm dynamically. Additionally, to prevent the algorithm from falling into a local optimum, a cosine factor is introduced to mutate the positions of the particles during the exploitation and exploration process. Finally, IDMOPSO, several other popular MOPSOs, and MOEAs were simulated on benchmark functions, and the performance of the proposed algorithm was assessed using the IGD and HV indicators. The experimental results show that IDMOPSO has better convergence, diversity, and solution ability than the other algorithms.


Introduction
Multiobjective optimization problems (MOPs) exist widely in engineering applications and scientific research [1]. Traditional methods have significant limitations when solving MOPs; therefore, scholars have proposed many excellent multiobjective optimization algorithms to solve complex MOPs [2][3][4]. PSO is an optimization algorithm based on swarm intelligence proposed by Kennedy et al. [5]. Due to its rapid convergence and few parameters, PSO has been successfully applied in the field of single-objective optimization. Coello et al. proposed multiobjective particle swarm optimization (MOPSO) [6] by integrating several strategies motivated by the needs of real problems.
At present, the challenges of using PSO to solve MOPs are as follows: the first is how to maintain the size of the external archive effectively; the second is how to select global learning exemplars that take both diversity and convergence into account; and the last is how to balance the convergence and diversity of the algorithm. In response to these challenges, scholars have made unremitting efforts and proposed many excellent improved MOPSOs [7][8][9][10][11]. For example, Han et al. [7] proposed an adaptive MOPSO (AMOPSO) based on a hybrid framework of solution distribution entropy and population interval information, in which a globally optimal solution selection mechanism based on entropy was first proposed. Its purpose is to analyze the current evolutionary trend and balance the convergence and diversity of AMOPSO. Zhang et al. [8] proposed a particle update strategy in which a mutation operator is used to balance the search ability, adjust the parameters, and update the global leader. Leng et al. [9] proposed a MOPSO based on the optimal grid distance (GDMOPSO), in which grid strategies are introduced into the maintenance of the external archive and the selection of global learning exemplars. The algorithm can effectively balance convergence and distribution; however, it lacks a strategy to prevent the algorithm from falling into a local optimum. Zhang et al. [10] proposed a MOPSO based on a competition mechanism (CMOPSO), in which elite particles guide the population evolution. The algorithm provides a reference for scholars seeking to improve MOPSO; however, its performance is not good when optimizing large-scale problems. Han et al. [11] proposed an adaptive gradient MOPSO (AGMOPSO) based on a multiobjective gradient method and an adaptive parameter adjustment mechanism to improve computational performance.
The algorithm has a fast convergence speed and higher accuracy, but at the cost of increased computational complexity. Altogether, these algorithms have improved MOPSO performance to some extent, yet according to the no-free-lunch theorem there is still considerable room for improvement. We propose a novel MOPSO (IDMOPSO) based on a new external archive update and global learning exemplar selection strategy using an adaptive grid and the ideal distance. In IDMOPSO, a cosine factor-based mutation strategy is introduced to explore the objective space of MOPs, and a fine-tuning parameter intervenes in the speed of the particles; these methods enhance convergence and distribution and balance local and global search capabilities. The main innovations of IDMOPSO are as follows: (1) introduce the grid coordinates and ideal distance into the external archive, and propose an external archive size control strategy based on them to improve the convergence and diversity of the algorithm; (2) introduce grid coordinates in the objective space, then use the grid density to evaluate the distribution of noninferior solutions and the ideal distance to predict the convergence of IDMOPSO; (3) use the cosine control factor to mutate individual positions, improving the ability to escape local optima during the optimization process, while fine-tuning parameters adjust the inertia weight reduction mechanism to maintain the exploitation and exploration ability of the algorithm.
In the remainder of this paper, related work and multiobjective particle swarm optimization (MOPSO) are discussed in Section 2. The details of the proposed IDMOPSO are presented in Section 3. Section 4 presents the comparative results on the DTLZ, UF, and ZDT benchmark problems. Finally, conclusions and future research are presented in Section 5.

Basic Definitions of MOPs.
Definition 1. A minimum MOP with n decision variables and m objectives is described as follows:

min F(x) = (f_1(x), f_2(x), ..., f_m(x)),
s.t. g_i(x) ≤ 0 (i = 1, 2, ..., p), h_j(x) = 0 (j = 1, 2, ..., q),

where x = (x_1, x_2, ..., x_n) ∈ X is the decision vector, g_i(x) ≤ 0 is the i-th constraint of p inequality constraints, and h_j(x) = 0 is the j-th constraint of q equality constraints.
Definition 2. Pareto-optimal: x* ∈ X is a Pareto-optimal solution (nondominated solution or noninferior solution) on X only if there is no x ∈ X such that f_k(x) ≤ f_k(x*) for all k = 1, 2, ..., m and f_k(x) < f_k(x*) for at least one k.

Definition 3. Pareto-optimal set: for a multiobjective optimization problem, the Pareto-optimal set is defined as P* = {x* ∈ X | x* is Pareto-optimal}.

Definition 4. Pareto-optimal front: the surface formed by the objective function values corresponding to all Pareto-optimal solutions in the Pareto-optimal set P*, i.e., PF* = {F(x*) | x* ∈ P*}, is called the Pareto front.

Multiobjective Particle Swarm Optimization.
As PSO has achieved good results in solving single-objective optimization problems, scholars have extended PSO to the field of multiobjective optimization by introducing the idea of external archiving and the principle of Pareto dominance. In contrast to single-objective optimization, in MOPSOs a set of noninferior solutions stored in the external archive is generated in each iteration, and these noninferior solutions are then selected as global learning exemplars. In summary, the critical points of adopting PSO to deal with MOPs are how to select the global learning exemplar and how to maintain the size of the external archive. In MOPSO, the particles update their velocity and position using equations (6) and (7), respectively:

v_i(t + 1) = w v_i(t) + c_1 r_1 (pbest_i − x_i(t)) + c_2 r_2 (gbest − x_i(t)),   (6)
x_i(t + 1) = x_i(t) + v_i(t + 1),   (7)

where t is the iteration number, w is the inertia weight that allows the particle to expand its search space, r_1 and r_2 are random numbers generated uniformly in the range [0, 1], c_1 and c_2 are the learning factors, pbest_i is the individual optimum of the i-th particle, and gbest is the global optimum of the swarm.
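The standard update in equations (6) and (7) can be sketched as follows; parameter values are illustrative defaults, not the values prescribed by the paper.

```python
import random

def update_particle(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0):
    """Standard MOPSO velocity and position update, equations (6) and (7)."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()  # uniform in [0, 1]
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)  # position update, equation (7)
    return new_x, new_v
```

With pbest = gbest, the stochastic terms pull the particle toward that common attractor, which is why exemplar selection dominates MOPSO behavior.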

Related Work.
To improve the performance of MOPSO, scholars have continuously explored many excellent strategies [12]. However, most current MOPSOs have difficulty selecting the optimal solution in the external archive, balancing local and global search, etc. Deng et al. [13] proposed three strategies: a differential mutation, a feasible solution space transformation strategy, and a new multi-group mutation mechanism. The proposed strategies improve diversity and convergence and accelerate the convergence speed. Zhang et al. [14] proposed a decomposition-based multiobjective archiving method (DAA) to speed up the convergence of the algorithm, where the entire objective space is evenly divided into subspaces according to a set of weights, and a distance-based standardization method combined with Pareto dominance is proposed to balance the algorithm's search. Deng et al. [15] designed a hybrid mutation strategy based on local domain advantages and SaNSDE. The mutation strategy is used in the early stage to ensure search efficiency, and SaNSDE is used in the later stage to prevent search stagnation. In addition, a differential evolution algorithm and a quantum evolution algorithm are used to enhance the diversity of the population. They use quantum rotation to speed up the convergence rate, which effectively balances the local and global search abilities of the algorithm. Owing to the simple operation and fast convergence of MOPSO, many improvements to MOPSOs have emerged [4]. These improvements can be summarized into the following categories: (1) MOPSOs based on Pareto dominance, which guide the population evolution by selecting individual and global optima as learning exemplars, such as CMOPSO [10]; (2) MOPSOs based on decomposition, such as MPSO/D [16], dMOPSO [17], and MMOPSO [18]; (3) MOPSOs based on indicators, such as R2HMOPSO [19]; (4) MOPSOs based on sorting, such as GMRMOPSO [2] and GDMOPSO [9]. In 2004, Coello et al.
proposed MOPSO [6], which provided a new method for PSO to solve MOPs. Deb et al. combined the crowding distance method in NSGAII [20] and proposed using the crowding distance to select global learning exemplars. Nebro et al. proposed SMPSO [21], in which a speed limit was added to optimize the population. Zhang and Li used the aggregation method in MOEAD [3] to decompose MOPs into single-objective optimization problems and proposed MOPSO/D [22]. Martinez and Coello Coello decomposed MOPs into multiple subproblems in dMOPSO [17] and optimized each subproblem separately. In [18], Chen et al. used the decomposition idea to transform MOPs into a single-objective optimization problem and proposed a new speed update mechanism. In addition to the MOPSOs mentioned above based on crowding distance and decomposition, many representative MOPSOs have appeared in recent years, such as GMRMOPSO [2], MOPSO-ASFS [23], AMPSO/ESE [24], MOPSONN [25], and others. In particular, in MOPSONN, the authors proposed an archive updating mechanism based on the nearest-neighbor method using the competition mechanism, where the archive is updated according to the nearest-neighbor distance. In the late iterations, the maximum cost rule and the sum-of-costs rule are used to update the archive. Current MOPSO improvements are effective in dealing with MOPs; however, these methods still leave room for improvement when optimizing certain problems. Motivated by the above-mentioned external archive and global learning exemplar selection strategies, we propose a novel MOPSO (IDMOPSO) based on the adaptive grid and ideal distance.

Grid Construction.
The diversity of the population is an essential criterion for evaluating the performance of an algorithm [6,11]. In our proposed IDMOPSO, we adopt a strategy that maps the objective function space to a grid, and the number of particles in a grid cell is regarded as the density of that cell [9]. The smaller the number of particles in the cell, the smaller the grid density, and vice versa. When the grid density of several cells is the same, these cells are considered to have the same diversity. Here, the density is zero if the cell contains no particle. Several definitions related to grid construction are provided as follows.

Definition 5.
The upper and lower boundaries of the grid coordinates: in the objective space, let max f_m(x) and min f_m(x) be the maximum and minimum values of the m-th objective. The upper boundary U_m and lower boundary L_m of the m-th objective are defined as

U_m = max f_m(x) + (max f_m(x) − min f_m(x)) / (2c),
L_m = min f_m(x) − (max f_m(x) − min f_m(x)) / (2c),

where c is the number of grid divisions, which can be selected according to the optimization problem.
Definition 6. Grid coordinates: if the function value of particle x on the m-th objective is f_m(x), the corresponding grid coordinate is

G_m(x) = ⌈(f_m(x) − L_m) / d_m⌉,

where d_m = (U_m − L_m)/c represents the grid width. Figure 1 shows a specific example of mapping an objective space to grid space, where the grid coordinates of particle p are (2, 3) and the grid widths are d_1 and d_2, respectively.
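Definitions 5 and 6 can be sketched together as follows. The half-cell boundary widening and the ceiling-based cell index are assumptions of this sketch, reconstructed from the grid width d_m = (U_m − L_m)/c.

```python
import math

def grid_coordinates(F, c):
    """Map objective vectors F to adaptive-grid coordinates with c divisions
    per objective (Definitions 5 and 6). Boundary widening by half a cell
    is an assumption of this sketch."""
    m = len(F[0])
    lows, highs = [], []
    for j in range(m):
        vals = [f[j] for f in F]
        fmin, fmax = min(vals), max(vals)
        pad = (fmax - fmin) / (2 * c)  # assumed half-cell widening
        lows.append(fmin - pad)
        highs.append(fmax + pad)
    coords = []
    for f in F:
        g = []
        for j in range(m):
            d = (highs[j] - lows[j]) / c  # grid width d_m
            g.append(min(c, max(1, math.ceil((f[j] - lows[j]) / d))))
        coords.append(tuple(g))
    return coords
```

Cells sharing a coordinate tuple share a density count, which is what the archive maintenance and exemplar selection steps below rely on.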

Ideal Distance and σ Dominance.
To evaluate the quality of the particles effectively, this study uses the position information of the particles in the grid to find the virtual positive ideal point, negative ideal point, and ultimate ideal point. This paper takes Z* = (f_1^min, ..., f_m^min) and Z−* = (f_1^max, ..., f_m^max) as the ideal positive and ideal negative points, respectively, and the ultimate ideal point is Z^0 = (f_1^0, ..., f_m^0), where f_j^min represents the minimum value of the j-th objective in the current evolution process, f_j^max represents the maximum value of the j-th objective in the current evolution process, and f_j^0 = 0 for the j-th objective.
Let the Euclidean distance from particle i to Z* be its positive ideal distance, the Euclidean distance from particle i to Z−* be its negative ideal distance, and the Euclidean distance from particle i to Z^0 be its ultimate ideal distance.
In this study, the ideal point is a virtual reference point that does not exist in actual problems.
Thus, the positive ideal point has relatively better properties, whereas the negative ideal point has the opposite. Therefore, if a solution is close to the positive ideal and ultimate ideal points, this solution is optimal. Equation (13) defines σ dominance in terms of the positive, negative, and ultimate ideal distances. For any particles P_1 and P_2 in the external archive, if P_1 σ-dominates P_2, then σ(P_1) < σ(P_2) under the rule of σ dominance. For example, Figure 2 shows particles p_1 to p_4 (with p_4 = (4.5, 2.5)) in the external archive, and the positive and negative ideal points are Z* = (2, 2.5) and Z−* = (4.5, 5), respectively. By calculation (all values are rounded to two decimal places), σ(p_1) = 5.38, σ(p_2) = 4.61, σ(p_3) = 5.39, and σ(p_4) = 5.15, so p_2 σ-dominates p_1, p_3, and p_4. When selecting noninferior solutions, because of their large number, they cannot all be compared with each other, so ideal points are usually introduced as a reference. Here, the ultimate and positive ideal points lie closer to the lower left of the coordinate axes than all noninferior solutions. Therefore, the solution closest to the positive and ultimate ideal points is the optimal solution, guiding the other particles to fly to the optimal position.
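The three ideal distances defined above can be computed directly; a minimal sketch follows, checked against particle p_4 = (4.5, 2.5) from the Figure 2 example. The σ aggregation of equation (13) itself is not reproduced here.

```python
import math

def ideal_distances(f, f_min, f_max):
    """Positive, negative, and ultimate ideal distances of one particle.
    Z* = f_min (positive ideal), Z-* = f_max (negative ideal),
    Z0 = origin (since f_j^0 = 0 for every objective)."""
    d_pos = math.dist(f, f_min)          # distance to positive ideal point
    d_neg = math.dist(f, f_max)          # distance to negative ideal point
    d_ult = math.dist(f, [0.0] * len(f)) # distance to ultimate ideal point
    return d_pos, d_neg, d_ult
```

For p_4 = (4.5, 2.5) with Z* = (2, 2.5) and Z−* = (4.5, 5), the positive and negative ideal distances are both 2.5 and the ultimate ideal distance is √26.5 ≈ 5.15.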

New Speed Update Mechanism and Mutation Operation.
Equations (6) and (7) are generally used to update the speed and position of the particles in most MOPSOs. However, because of specific differences among the particles, if the same method is used to update all particles, the advantages of some individuals for swarm evolution are lost. Therefore, we introduce fine-tuning parameters to adjust the speed of the particles, where the parameter h determines how much of the particle's previous speed is inherited, coordinating the global and local search of the algorithm. In addition, the cosine factor is introduced to mutate the positions of the particles to improve the ability to jump out of local optima. The specific speed and position update formulas are given in equations (14) and (15). Here, h = e^(−0.5T/maxT) is the fine-tuning parameter, T is the current number of iterations, maxT is the maximum number of iterations, and α is the adjustment factor for the mutation intensity. The value of α was determined by the following simulation tests: α was set to 1, 2, 3, 4, and 5 on five test functions, and the optimal value was determined by analyzing the influence of the different α values on HV [26] and IGD [27]. Table 1 shows the average IGD value obtained over 200 iterations and 30 independent runs on the ZDT1−4 and ZDT6 test functions (whose mathematical expressions are given in [28]) under different α values. From Table 1, when α = 3, α = 4, and α = 5, the IGD values are best for ZDT1, ZDT3, ZDT4, ZDT6, and ZDT2, respectively. Similarly, Table 2 shows the average HV value obtained over 200 iterations and 30 independent runs on ZDT1−4 and ZDT6 under different α values. From Table 2, we can see that when α = 3 the HV values are best for ZDT1, ZDT3, ZDT4, and ZDT6, and when α = 5 the HV value is best for ZDT2. From the above analysis, we can conclude that when α = 3 the comprehensive performance of the proposed algorithm is best, so α in equation (15) is set to three in this study.
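A hedged sketch of the modified update follows. The fine-tuning parameter h = e^(−0.5T/maxT) is taken from the text and is assumed to scale the inherited velocity term; the exact cosine-factor mutation of equation (15) is not restated in this section, so the perturbation form below is an assumption for illustration only.

```python
import math
import random

def idmopso_update(x, v, pbest, gbest, T, maxT,
                   w=0.4, c1=2.0, c2=2.0, alpha=3.0):
    """Sketch of equations (14) and (15): h scales the inherited velocity,
    and a cosine factor with intensity alpha perturbs the new position.
    The mutation form here is an assumed illustration, not the paper's
    exact equation (15)."""
    h = math.exp(-0.5 * T / maxT)  # fine-tuning parameter, decays with T
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = h * w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        # assumed cosine-factor position mutation with intensity alpha
        mut = alpha * math.cos(math.pi * T / maxT) * (random.random() - 0.5)
        new_v.append(vd)
        new_x.append(x[d] + vd + mut)
    return new_x, new_v
```

Since h decays from 1 toward e^(−0.5) as T approaches maxT, early iterations inherit more of the previous velocity (global search) and later iterations less (local refinement).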

Maintenance of External Archives Based on the σ Dominance Principle.
MOPSO usually adopts an external archive to store the noninferior solutions generated in each iteration while the algorithm runs, and then uses a specific mechanism to control its size. To provide dominant candidate solutions for the global learning exemplar, a maintenance strategy combining the adaptive grid and the σ dominance principle is adopted to control the size of the external archive. When the size of the external archive exceeds the preset maximum, some particles are deleted. First, the adaptive grid is used to evaluate the distribution of the noninferior solutions in the external archive and find the areas where particles are densely distributed. Next, the dominance of the noninferior solutions is evaluated using the σ dominance principle, and the particles with poor dominance are deleted. Figure 3 gives an example of the σ dominance principle: because particles 3, 4, 5, and 6 lie in the grid cell with the highest density, and particle 6 is dominated by particles 3, 4, and 5 under the σ dominance principle, particle 6 is deleted.
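The maintenance step just described can be sketched as follows; particle ids, σ values, and grid cells in the usage example are hypothetical, chosen to mirror the Figure 3 scenario.

```python
from collections import Counter

def prune_archive(archive, sigma, grid, max_size):
    """Sketch of archive maintenance: while the archive exceeds max_size,
    locate the densest grid cell and delete its worst member under the
    sigma dominance principle (a larger sigma value means a worse particle).
    archive: list of particle ids; sigma: id -> sigma value; grid: id -> cell."""
    archive = list(archive)
    while len(archive) > max_size:
        density = Counter(grid[p] for p in archive)
        densest_cell = max(density, key=density.get)
        members = [p for p in archive if grid[p] == densest_cell]
        archive.remove(max(members, key=lambda p: sigma[p]))
    return archive
```

In a Figure 3-like setting where particles 3-6 share the densest cell and particle 6 has the largest σ, particle 6 is the one removed.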
To further test the effectiveness of the σ dominance principle, Figure 4 shows the different results when applying the σ dominance principle and a random selection strategy to maintain the size of the external archive, where the algorithm performs 200 iterations over 30 independent runs on ZDT1 and ZDT2. As shown in Figure 4, the σ dominance principle yields a smaller GD [29] value than the random selection strategy, indicating that the adaptive grid and σ dominance principle maintain the external archive more effectively.

Selection Strategy of Global Learning Exemplar Based on the σ Dominance Principle.
The selection of the global learning exemplar is a key factor because the global learning exemplar leads the whole swarm toward the true Pareto front. If particles with poor convergence are selected, the convergence speed of the algorithm may slow down and the convergence accuracy may suffer; if particles with poor distribution are selected, the algorithm can easily fall into a local optimum. Here, the combined strategy of the adaptive grid and the σ dominance principle is adopted to select global learning exemplars. The selection steps are as follows:
Step 1. Select the grid with the smallest grid density. If several grids have the same density, the particles in these grids are randomly selected as candidate solutions for the global learning exemplar.
Step 2. Among the candidate solutions selected in Step 1, the candidate solution with the smallest σ value is selected as the global learning exemplar.
In Figure 3, comparing the grid densities, we can see that the cells containing particles 1 and 2 have the smallest density and the particle distribution there is more uniform. Therefore, the particles in these cells are used as candidate solutions for the global learning exemplar. Because particle 1 is better than particle 2 under the σ dominance principle, particle 1 is used as the global learning exemplar to guide the swarm. To test the effectiveness of the σ dominance principle in the selection strategy of the global learning exemplar, Figure 5 shows the different results obtained under the σ dominance principle and a random selection strategy, where the algorithm performs 200 iterations over 30 independent runs on ZDT1 and ZDT2. It can be seen from Figure 5 that the σ dominance principle yields learning exemplars with a smaller IGD value. The proposed strategy first uses the adaptive grid to analyze the density of particles and identify sparsely populated areas, from which the dominant particles are selected as learning exemplars. This strategy maintains the diversity of the swarm and effectively improves the probability of the swarm moving toward the true Pareto front.
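Steps 1 and 2 of the selection procedure can be sketched as follows; the ids, σ values, and cells in the usage example are hypothetical.

```python
from collections import Counter

def select_gbest(archive, sigma, grid):
    """Sketch of global-exemplar selection: gather candidates from the grid
    cell(s) with the smallest density (Step 1), then return the candidate
    with the smallest sigma value, i.e., the best particle under the sigma
    dominance principle (Step 2)."""
    density = Counter(grid[p] for p in archive)
    min_density = min(density.values())
    candidates = [p for p in archive if density[grid[p]] == min_density]
    return min(candidates, key=lambda p: sigma[p])
```

With two singleton cells as in the Figure 3 discussion, the candidate with the smaller σ wins, matching the choice of particle 1 over particle 2.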

IDMOPSO Algorithm Flow.
A flowchart of the IDMOPSO is shown in Figure 6. The specific process is as follows:
Step 1. Set the relevant parameters of IDMOPSO and initialize the population and velocity.
Step 2. Calculate the fitness value of the particles and perform Pareto nondominated sorting.
Step 3. For the swarm, determine the individual learning exemplar (pbest) for each particle.
Step 4. Establish an external archive. en, use the adaptive grid and σ dominance principle to control its size and update the particles in the archive.
Step 5. Use the adaptive grid and σ dominance principle to select a particle in the external archive as the global learning exemplars (gbest) to guide the swarm evolution.
Step 6. Use equations (14) and (15) to update the velocity and position of each particle, respectively.
Step 7. Determine whether the termination condition is met; if satisfied, the iteration ends; otherwise, return to Step 2.

Experimental Conditions and Parameter Settings.
To effectively evaluate the performance of IDMOPSO, we compared it with several popular and novel algorithms, including four MOPSOs (MPSO/D [16], dMOPSO [17], SMPSO [21], and NMPSO [30]) and four MOEAs (MOEA/D [3], MOEADD [31], MOMBI-II [32], and IMMOEA [33]). The swarm size of all algorithms was set to 200, and Table 3 lists the main parameter settings for all algorithms. In Table 3, div is the number of grid divisions; c_1 and c_2 are the parameters of the velocity equation used in the MOPSOs; p_c and p_m are the crossover and mutation probabilities, respectively; η_c and η_m are the distribution indices of SBX and PM, respectively; F and CR are the differential evolution parameters set by their authors; T is the number of divisions in the GA; K is the number of reference vectors; A is the threshold of variances; E is the tolerance threshold; and R is the record size of nadir vectors. The source code of the comparison algorithms was obtained from PlatEMO [34]. Twenty-two benchmark functions were selected as the test set, including ZDT1−4, ZDT6, DTLZ1−7 [28], and UF1−10 [35]. Table 4 lists the relevant parameters of each test function, where D represents the dimensionality of the decision variables, M the number of objectives, and FEs the maximum number of evaluations.

Evaluation Index.
To test the performance of all algorithms, the HV [26] and IGD [27] indices were used. The IGD measures the average minimum distance between the real Pareto front and the noninferior solutions obtained by the algorithm; the smaller the IGD value, the better the comprehensive performance of the algorithm. The HV measures the ratio of the space enclosed by the real Pareto front and the reference point (RP) that is covered by the noninferior solutions obtained by the algorithm; the larger the HV value, the better the comprehensive performance. Tables 5 and 6 show the averages and standard deviations of IGD and HV obtained over 30 independent runs. The best mean values obtained on each test function are indicated in bold. It can be concluded that the comprehensive performance of IDMOPSO is better than that of MPSO/D, SMPSO, NMPSO, and dMOPSO, where the numbers of best IGD (14) and HV (14) values of IDMOPSO are higher than those of the other algorithms. MPSO/D had no best IGD or HV values; SMPSO obtained three best IGD and four best HV values; NMPSO obtained three best IGD and four best HV values; and dMOPSO obtained two best IGD values.
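The IGD computation just described is straightforward to sketch (the HV computation, which involves hypervolume integration against a reference point, is omitted here):

```python
import math

def igd(reference_front, solutions):
    """IGD: mean, over points of the true Pareto front, of the minimum
    Euclidean distance to the obtained noninferior set (smaller is better)."""
    return sum(min(math.dist(r, s) for s in solutions)
               for r in reference_front) / len(reference_front)
```

A solution set equal to the reference front scores exactly 0; any deviation from the front, or any gap in coverage along it, increases the value, which is why IGD captures both convergence and distribution.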

Comparison of IDMOPSO with Four MOPSOs.
On UF1−10, IDMOPSO had the best performance, obtaining nine best IGD and ten best HV values. In the ZDT series, IDMOPSO obtained the best IGD values on ZDT1, ZDT2, and ZDT4, and the best HV values on ZDT1, ZDT2, ZDT3, and ZDT4. Even though its performance on ZDT6 is not the best, it is better than that of MPSO/D and dMOPSO. In the DTLZ series, IDMOPSO obtained the best IGD on DTLZ1 and DTLZ6 but did not perform best on the other DTLZ test functions. From the above experimental analysis, IDMOPSO achieved good results when optimizing ZDT and UF. When optimizing the irregular three-dimensional DTLZ2−4, IDMOPSO shows only average performance, which indicates that it needs further improvement on irregular MOPs. In other words, across the 22 test functions, IDMOPSO obtained better IGD and HV values than MPSO/D, SMPSO, NMPSO, and dMOPSO on most functions, which means that the overall performance of IDMOPSO is better than that of the other MOPSOs.
For further observation, Figures 7−10 show the spatial distributions of the noninferior solution sets obtained by MPSO/D, SMPSO, dMOPSO, NMPSO, and IDMOPSO. Figure 7 shows that the distribution and convergence of IDMOPSO and SMPSO are better than those of MPSO/D, dMOPSO, and NMPSO on ZDT1. Figure 8 shows that IDMOPSO has better convergence and distribution on ZDT2, while the other algorithms still need improvement in both respects. Additionally, Figures 9 and 10 show that the distribution and convergence of IDMOPSO are significantly better than those of the other four MOPSOs on UF9 and UF10. From the above analysis, it can be concluded that the convergence and diversity of IDMOPSO are better than those of the other four MOPSOs, which fully demonstrates the effectiveness of using an adaptive grid and the ideal distance to maintain the external archive.

Comprehensive Performance Analysis of the Algorithm.
To compare the comprehensive performance of IDMOPSO and the other eight algorithms, Table 9 ranks the IGD mean values on the 22 test functions in lexicographic order. Here, IDMOPSO obtained thirteen best IGD values, MOEA/D one, MOEADD two, MOMBI-II two, SMPSO two, and NMPSO one, whereas dMOPSO and MPSO/D obtained none. From Table 9, it can be concluded that the comprehensive performance of IDMOPSO is better than that of the other eight algorithms.

Conclusion
Multiobjective particle swarm optimization (MOPSO) has been widely used in science and engineering; however, providing a reasonable solution among many noninferior solutions is critical in all applications. In this paper, we propose a multiobjective particle swarm optimization based on the ideal distance (IDMOPSO), where an ideal point and the degree of approximation to it are introduced as criteria to measure the quality of a particle and are applied to the control of the external archive size and the selection of global learning exemplars. In addition, the cosine control factor is used to dynamically adjust the flight of particles in the swarm, which effectively improves the algorithm's optimization and convergence ability. Furthermore, the cosine factor is applied to mutate the positions of the particles to prevent the algorithm from falling into a local optimum.
In the experimental simulation, the nine algorithms IDMOPSO, MPSO/D, SMPSO, NMPSO, dMOPSO, MOEA/D, MOEADD, IMMOEA, and MOMBI-II were tested on the ZDT, UF, and DTLZ series functions. The experimental results show that IDMOPSO is more competitive than the other algorithms on the IGD and HV indicators. Therefore, IDMOPSO can be regarded as an effective algorithm for solving multiobjective optimization problems.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.