Multiobjective Particle Swarm Optimization Based on Cosine Distance Mechanism and Game Strategy

Optimization problems arise constantly in real life. They are divided into single-objective problems and multiobjective problems. Single-objective optimization has only one objective function, while multiobjective optimization has multiple objective functions that give rise to a Pareto set. Solving multiobjective problems is therefore a challenging task. This article proposes a multiobjective particle swarm optimization that combines a cosine distance measurement mechanism with a novel game strategy. The cosine distance measurement mechanism is adopted to update the Pareto optimal set in the external archive. At the same time, a candidate set is established so that solutions deleted from the external archive can be effectively replaced, which helps maintain the size of the external archive and improves the convergence and diversity of the swarm. To strengthen the selection pressure on the leader, a global leader selection strategy is proposed that integrates the game update mechanism with the cosine distance mechanism. In addition, mutation is used to maintain the diversity of the swarm and prevent it from converging prematurely before reaching the true Pareto front. The performance of the proposed competitive multiobjective particle swarm optimizer is verified by benchmark comparisons with several state-of-the-art multiobjective optimizers, including seven multiobjective particle swarm optimization algorithms and seven multiobjective evolutionary algorithms. Experimental results demonstrate the promising performance of the proposed algorithm in terms of optimization quality.


Introduction
In fields such as engineering, aviation scheduling, and optimal control, most optimization problems are multiobjective optimization problems (MOPs) [1]. MOPs differ from single-objective optimization problems: several objective functions must be optimized simultaneously, and these objectives conflict with or influence each other [2]. This means that it is impossible for all objective function values to be optimal at once; the optimal solution for one objective may be the worst solution for another. Therefore, a set of trade-off solutions, known as the Pareto optimal set, is adopted to represent the best possible compromises among objectives in MOPs. Practical problems are typically high-dimensional, nonlinear, and strongly constrained, so classic optimization algorithms (the conjugate gradient method [3], Newton's method [4], the simplex algorithm [5], etc.) can no longer solve MOPs effectively. With the development of science and technology, the emergence of intelligent control has brought multiobjective optimization to a more advanced stage. Approaches to optimal control can take different paths; for example, the deformable MEMS device introduced in [6] has a positive effect on improving optimal control. In addition, intelligent optimization algorithms, which belong to the family of bionic algorithms, have attracted the attention of researchers. Among them, the particle swarm optimization (PSO) algorithm [7], which has the advantages of simple operation, fast convergence, a wide application range, and few parameters to set, has become the focus of many researchers.
PSO, an evolutionary computation method based on swarm intelligence derived from the simulation of complex adaptive systems, was proposed by Kennedy and Eberhart in 1995. It was inspired by the social behavior of animal swarms such as bird flocks. In PSO, individuals are called particles, and each particle represents a potential solution. The swarm consists of a group of particles flying through the high-dimensional search space in search of the optimal solution, like birds searching for food. The position change of a particle in the search space is driven by its social and psychological intention to surpass other individuals. A particle can communicate with other individuals and change its structure and behavior through a process of "learning" or "accumulating experience." Therefore, changes in the velocity and position of a particle are affected by the experience of other particles.
With the development of intelligent algorithms, the relative simplicity and practical success of single-objective optimizers have motivated researchers to extend PSO from single-objective optimization problems to MOPs. In 2002, Coello et al. extended PSO from a single objective to multiple objectives, using it to solve MOPs for the first time [8]. In the research on multiobjective particle swarm optimization algorithms (MOPSOs), there are at least two fundamental issues to be addressed. The first is how to select, by some standard, an excellent global leader as the learning sample that guides the flight of all particles in the population. Given the leader's important influence on the search direction, randomly selecting global learning samples from the external archive may cause the algorithm to be trapped in a local optimum. At present, most dominance-based MOPSOs use an external archive to store nondominated solutions, so the maintenance and update of the external archive are also very important. The second issue is how to balance the convergence and diversity of the swarm. This is crucial to the performance of MOPSOs, because PSO-based multiobjective optimizers are very likely to be trapped in a local optimum (or one of many optima) of MOPs due to their fast convergence.
In this article, a novel multiobjective particle swarm optimization based on a cosine distance mechanism and a game strategy, called GCDMOPSO, is proposed. To maintain the update mechanism of the external archive, the cosine distance is used to delete the worst particles in the external archive. At the same time, the same number of particles is selected from the candidate set to replace the deleted particles, maintaining the external archive dynamically. The main contributions of this article are as follows: (1) Dynamic maintenance of the external archive. After each iteration of the algorithm, the nondominated solutions selected from the candidate set are added to the external archive. When the number of nondominated solutions in the external archive exceeds the maximum size, the cosine distance is used to compare the crowding degree of the nondominated solutions in the archive, and the most crowded solutions are deleted. The removed solutions are also identified, and the crowding degree of all solutions in their neighborhood is updated (i.e., after deleting the most crowded solution, the cosine distances of all other solutions are recalculated). This method achieves better diversity preservation.
(2) The method by which individuals are selected. In the update process of this algorithm, the fitness value of each individual is calculated through nondominated sorting, which produces individuals with the same ranking value. The individuals with the same ranking value are selected into the candidate set, and the Euclidean distance between each individual and the origin of the coordinates is calculated. The Euclidean distances from the individuals to the coordinate origin are then sorted in ascending order. To maintain the external archive dynamically, when particles are deleted from the external archive, the same number of individuals must be put into the archive.
(3) The selection of the global leader. Based on the recently developed competitive swarm optimizer combined with a game mechanism, this article proposes a novel global leader selection strategy based on the game mechanism. Two nondominated solutions are randomly selected from the external archive, and their cosine distances are compared. The winner is selected as the global leader, leading the other particles to fly. In this way, all obtained solutions can be kept converging along the true Pareto front. The remainder of this article is structured as follows. Section 2 briefly describes the related definitions of MOPs and MOPSOs, as well as the related works from which the main ideas for the new algorithm are drawn. The details of the proposed GCDMOPSO are then described in Section 3. Section 4 is the experimental part, in which GCDMOPSO is compared with selected MOPSOs and MOEAs. Finally, conclusions are drawn in Section 5.

MOPs.
In this section, the fundamentals of MOPs are defined. The mathematical form of a MOP (taking minimization as an example) is described as follows:

\min F(x) = (f_1(x), f_2(x), \ldots, f_m(x)), \quad \text{subject to } x \in X,

where $x = (x_1, x_2, \ldots, x_D)$ is the decision vector, $X$ is the feasible decision space, and $f_1, \ldots, f_m$ are the $m$ objective functions. MOPs differ from single-objective problems, so the same problem-solving approach cannot be applied. No single solution of a MOP can achieve optimal results for all objectives at the same time, and different solutions cannot be compared directly because of the different objective functions. Therefore, when solving a MOP, a set of solutions is usually obtained, and these solutions perform differently on different objective functions. The solutions in this set are called nondominated solutions or Pareto optimal solutions. The related concepts are introduced in detail below.

Definition 1. For Pareto dominance, a solution $x_1 \in X$ is said to dominate $x_2 \in X$, denoted $x_1 \prec x_2$, if and only if $f_k(x_1) \le f_k(x_2)$ for all $k = 1, \ldots, m$ and $f_k(x_1) < f_k(x_2)$ for at least one $k$.
Definition 2. For Pareto optimality, $x^* \in X$ is a Pareto optimal solution on $X$ if and only if there is no $x \in X$ such that $x \prec x^*$.
That is, there is no solution in the set $X$ better than $x^*$, so $x^*$ is an optimal solution in $X$; it is also called a nondominated solution or noninferior solution.
Definition 3. For the Pareto optimal set, the set of optimal solutions of a MOP can be defined as

PS = \{ x \in X \mid \nexists\, x' \in X : x' \prec x \}.

Definition 4. For the Pareto optimal front, the surface consisting of the objective function values corresponding to all Pareto optimal solutions in the Pareto optimal set is called the Pareto front:

PF = \{ F(x) \mid x \in PS \}.
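The dominance and Pareto-optimality concepts above can be sketched in code as follows; minimization is assumed, and the function names are illustrative rather than part of the original algorithm.

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    fa is no worse in every objective and strictly better in at least one."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def pareto_set(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, `pareto_set([[1, 3], [2, 2], [3, 1], [2, 3]])` keeps the first three vectors and discards `[2, 3]`, which is dominated by `[2, 2]`.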

Multiobjective Particle Swarm
Optimization. MOPSO is an improvement of PSO. In PSO, the individual birds in the population are abstracted as massless particles. Each particle has its own velocity and position; for particle $i$ these are expressed as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iN})$ and $V_i = (v_{i1}, v_{i2}, \ldots, v_{iN})$, respectively. The particles search for food in the $N$-dimensional space, and the food is considered the optimal solution. Particles are updated according to the following formulas:

v_{ij}(t+1) = w\, v_{ij}(t) + c_1 r_1 \left(pbest_{ij} - x_{ij}(t)\right) + c_2 r_2 \left(gbest_j - x_{ij}(t)\right), \quad (6)
x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1). \quad (7)

The right side of equation (6) consists of three parts. The first part is the inertia term, where $w$ is the inertia weight. Its size determines how much of the current velocity the particle inherits. If the value of $w$ is large, the global search capability of the algorithm is enhanced; if the value of $w$ is small, the local search capability is improved. $w$ is generally limited to a random number less than 1. The second part is the individual cognition term, which represents the movement of the individual toward the best position according to its own flight experience. Here, $pbest$ represents the optimal position found by the individual, $r_1$ is a random number uniformly distributed in the interval (0, 1), and $c_1$ is a learning factor representing the degree of particle learning. The third part is the social cognition term, which drives the particle toward the global optimal position. $gbest$ represents the global optimal position, $r_2$ is a random number uniformly distributed in the interval (0, 1), and $c_2$ is a learning factor; $c_1 = c_2 = 2$ is usually taken. The coordination of these three parts determines the overall performance of the algorithm.
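The velocity and position updates of equations (6) and (7) can be sketched as follows; the function name and the default parameter values (`w=0.4`, `c1=c2=2.0`) are illustrative choices, not prescribed by the original text.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0, rng=None):
    """One PSO update per equations (6)-(7):
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v.
    r1 and r2 are uniform random numbers in (0, 1)."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(), rng.random()
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

Note that when `pbest` and `gbest` coincide with the current position, only the inertia term remains, so the new velocity is exactly `w * v`.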
With the deepening of research, many scholars have extended PSO to MOPSO so that the algorithm is better suited to solving MOPs. In MOPs, the number of optimal solutions is not unique because of the multiple constrained objectives. Compared with PSO, MOPSO differs not only in the selection of the personal best position and the global leader under multiple objectives but also in their storage. Therefore, MOPSOs use an external archive mechanism to solve the storage problem, saving the nondominated solutions generated during the search of the entire swarm. The nondominated solutions in the external archive are not dominated by any other particles in the archive. Therefore, all nondominated solutions in the external archive should meet the two following requirements: (a) The nondominated solutions in the external archive have no mutual dominance relationship, and it is impossible to say which of them is better. (b) Newly introduced particles must be stronger than the solutions already in the external archive, and the weaker solutions in the archive should be eliminated.

Existing MOPSOs.
The first multiobjective PSO variant was proposed by Coello et al. [9]. The authors incorporated the concept of Pareto dominance into the PSO method: the local optima and the global optima in the swarm were determined by the Pareto dominance principle. For the first time, a secondary repository (i.e., the external archive) was used to store the nondominated solutions obtained after each iteration.
This was the first time that PSO had been used to solve MOPs. Compared with classic MOEAs such as NSGA-II [10] and PAES [11], this first MOPSO was more competitive in solving MOPs, but it was unable to solve MOPs with complex landscapes. To address this issue, Sierra and Coello [12] proposed an improved PSO-based multiobjective optimizer in which Pareto dominance and a crowding factor were used to select a list of available leading solutions; the swarm was simultaneously divided into three subswarms, and different mutation operators were suggested for the different subswarms, which were divided by users in advance. In addition, this algorithm used ε-dominance to fix the size of the external archive. Experimental results show that the improved optimizer is more competitive on MOPs with multiple local fronts. A speed-constrained MOPSO, called SMPSO, was proposed by Nebro et al. [13], in which the velocities of all particles were restricted in order to tackle MOPs with multimodal landscapes. SMPSO allowed new effective particle positions to be generated when the velocities became too large. Other features of SMPSO included polynomial mutation as a turbulence factor and an external archive comprising the nondominated solutions found during the search process. However, most MOPSOs could not solve MOPs effectively because the velocities in such algorithms grew too rapidly. The above MOPSOs used only a single search strategy to update particle velocities. Lin et al. therefore proposed a novel MOPSO based on multiple search strategies [14], which used a decomposition method to transform a MOP into a set of aggregation problems and then assigned each particle to optimize one aggregation problem. This algorithm designed two search strategies to update the velocity of each particle.
After that, all nondominated solutions visited by particles were preserved in an external archive, and an evolutionary search strategy was further executed to exchange useful information between them. These multiple search strategies enabled this novel MOPSO to handle various MOPs more effectively.
In contrast to the MOPSOs in which the global optimal solution is determined by dominance relations, Zhang and Li used the framework of MOEA/D [15] to embed the decomposition mechanism into PSO-based multiobjective optimization for the first time and proposed a MOPSO that decomposes a MOP into a number of single-objective optimization problems [16]. The algorithm used the PSO search method instead of genetic operators. Later, an improved version of this optimizer, called SDMOPSO [17], was proposed by Al Moubayed et al. In SDMOPSO, the global optima were selected only from the neighborhood of particles, and crowded archives were used to preserve the diversity of swarm leaders. Dai et al. divided the solution space into multiple subspaces and retained only one optimal solution in each subspace so that the nondominated solutions could be evenly distributed; this MOPSO was based on objective space decomposition [18]. Also based on the decomposition method, Martínez and Coello proposed a version of multiobjective optimization called dMOPSO [19], in which the global leader is determined according to the scalar aggregation value. Moreover, a memory reinitialization strategy was used when a particle's age reached a certain threshold. The main aim of this approach was to preserve diversity and to avoid being trapped in local fronts. Although this algorithm has a lower computational cost than most other MOPSOs, which often need to maintain an archive, it has difficulty converging to the true Pareto front when dealing with complex models.
In 2020, Alkebsi and Du proposed a novel MOPSO with an archive update mechanism based on the nearest-neighbor method, called MOPSONN [20]. In the early stage of this algorithm, the external archive is updated based on a nearest-distance measurement. In later generations, two new rules, the maximum cost rule and the cost sum rule, are used to update the archive. These two strategies together maintain the nondominated solutions in the archive.
In addition, a few scholars have improved MOPSOs from the aspect of parameter setting to make them more effective [21]. Based on an analysis of the abovementioned existing algorithms, this article combines a cosine distance update mechanism with a meshing strategy, and a novel multiobjective game particle swarm optimization based on the cosine distance update mechanism is proposed, which effectively improves convergence and diversity when solving MOPs. The following section describes the proposed algorithm in detail.

Acronyms in the GCDMOPSO.
To make the article easier to read, a table of acronyms is provided. The specific contents are shown in Table 1.

The Proposed GCDMOPSO
In this section, the details of our proposed GCDMOPSO are introduced. The algorithm generates a new population from randomly initialized individuals. The particles of this population are divided into levels according to their dominance relationships. The first-level individuals produced by nondominated sorting flow into the candidate set, and a new external archive is then created. Based on the grid technique and the cosine distance strategy, the individuals introduced into the candidate set are screened to dynamically maintain the external archive. At the same time, the nondominated solutions in the external archive are screened through the game strategy to act as the global leader and guide the other individuals to fly. After that, the program updates the velocity and position of the swarm according to equations (6) and (7).

Selection of Introduced Particles.
Any individual chooses only a suitable kind of "talent" as its learning object, and only outstanding individuals are selected into the external archive as leaders to guide the update and iteration of other individuals. In previous MOPSOs, the program calculated the fitness value of each individual and randomly selected individuals with the same ranking value as candidate solutions to enter the external archive and guide other individuals. Because individuals may share the same first-level ranking value after an iterative update, the random selection used in previous algorithms could not choose candidate solutions well. This article improves on this point. As shown in Figure 1, a candidate set is added in our algorithm. The fitness value of each individual is calculated, and the first-level individuals flow into the candidate set. At the same time, the candidate set is treated as a grid, and the Euclidean distance from the fitness value of each individual to the origin of the coordinates is computed. The distances are then sorted in ascending order, and individuals closer to the origin are selected into the external archive. If the nondominated solutions in the external archive have not reached the maximum size, all individuals in the candidate set are entered into the external archive according to their fitness ranking values and stored; if the nondominated solutions in the external archive have reached the maximum size, the individuals with smaller cosine distances in the external archive are eliminated.
In other words, to keep the number of particles in the external archive at a stable level, when a certain number of particles are deleted, the same number of particles is added from the candidate set.
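The candidate-set screening described above can be sketched as follows; `select_candidates` is an illustrative name, and the fitness matrix is assumed to hold only the first-rank individuals that flowed into the candidate set.

```python
import numpy as np

def select_candidates(fitness, k):
    """Sort candidate individuals by the Euclidean distance from their
    objective vector to the coordinate origin (ascending) and return the
    indices of the k closest -- the candidates admitted to the archive."""
    fitness = np.asarray(fitness, dtype=float)
    dist = np.linalg.norm(fitness, axis=1)   # distance to the origin
    return np.argsort(dist)[:k]
```

For instance, with fitness vectors `[[3, 4], [1, 1], [0, 2]]` the distances are 5, 1.41, and 2, so the two closest candidates are indices 1 and 2.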

Maintenance and Update of External Archives.
Archiving strategy is an important part of MOPSOs. Excellent maintenance capabilities can not only improve the search efficiency of the algorithm but also improve its convergence. This article mainly adopts the external archive scheme to store the nondominated solutions generated during the entire iterative update.

Table 1: Acronyms and their full names.

MOPs [1]: Multiobjective optimization problems
PSO [7]: Particle swarm optimization
MOPSOs: Multiobjective particle swarm optimization algorithms
MOEAs: Multiobjective evolutionary algorithms
GCDMOPSO: Multiobjective particle swarm optimization based on cosine distance mechanism and game strategy
MOPSO [9]: Handling multiple objectives with particle swarm optimization
NSGA-II [10]: A fast and elitist multiobjective genetic algorithm
PAES [11]: Approximating the nondominated front using the Pareto archived evolution strategy
SMPSO [13]: A new PSO-based metaheuristic for multiobjective optimization
MMOPSO [14]: A novel multiobjective particle swarm optimization with multiple search strategies
MOEA/D [15]: A multiobjective evolutionary algorithm based on decomposition
SDMOPSO [17]: A novel smart multiobjective particle swarm optimization using decomposition
dMOPSO [19]: A multiobjective particle swarm optimizer based on decomposition
MOPSONN [20]: A fast multiobjective particle swarm optimization algorithm based on a new archive updating mechanism
IGD [22]: Inverted generational distance
NMPSO [23]: Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems
MOPSOCD [24]: An effective use of crowding distance in multiobjective particle swarm optimization
MPSO/D [18]: A new multiobjective particle swarm optimization algorithm based on decomposition
NSGA-III [25]: An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints
MOEAIGDNS [26]: A multiobjective evolutionary algorithm based on an enhanced inverted generational distance metric
SPEAR [27]: A strength Pareto evolutionary algorithm based on reference direction for multiobjective and many-objective optimization
SPEA2 [28]: Improving the strength Pareto evolutionary algorithm
IBEA [29]: Indicator-based selection in multiobjective search
F: Parameter set by the author in differential evolution
CR: Parameter set by the author in differential evolution
div: The division number of grid cells
pbest: Personal best particle
gbest: Global best particle

The maintenance principle of the external archive mainly uses the cosine distance measurement mechanism, which is usually used in the field of text classification. Since the text space and the multiobjective space are both multidimensional spaces, they have certain similarities. Therefore, the cosine distance measurement mechanism is applied here to multiobjective optimization. If a solution is represented by a vector, each dimension of that vector can be regarded as a single objective, and the cosine distance between objective vectors can be used to determine the density relationship between individuals.
Definition 5. For weight ratio, suppose that the population size is $N$ and the objective function value of particle $i$ is expressed as $F(x_i) = (f_1(x_i), f_2(x_i), \ldots, f_m(x_i))$. For the $i$-th particle, the weight ratio of the objective function value in the $k$-th dimension is

W_{ik} = \frac{f_k(x_i)}{\sum_{j=1}^{m} f_j(x_i)}.

Definition 6. For cosine distance, suppose that the weight vector of any particle $i$ is expressed as $W_i = (W_{i1}, W_{i2}, \ldots, W_{im})$; according to the cosine formula, the cosine distance between two objective vectors is determined by the angle $\theta$ between their weight vectors:

\cos\theta = \frac{\sum_{k=1}^{m} W_{ik} W_{jk}}{\sqrt{\sum_{k=1}^{m} W_{ik}^2}\,\sqrt{\sum_{k=1}^{m} W_{jk}^2}}.

In this article, to better control the size of the external archive, the archive size is set to 200. As shown in Figure 2, the objective space is divided into $k$ subregions. Then the subregion with the highest density is selected, and the cosine distance between each nondominated solution in each subspace and its neighboring particles is compared.
The smaller the cosine distance between a nondominated solution and its neighboring particles, the greater the density of that solution and the poorer its distribution.
The GCDMOPSO calculates the cosine distance between each nondominated solution and its neighboring particles according to Definitions 5 and 6 and sorts the cosine distances in ascending order. Then the nondominated solution with the minimum cosine distance, minimum angle, and maximum density is selected for dynamic deletion. Only one nondominated solution is deleted at a time; the cosine distances of the remaining nondominated solutions are then recalculated, and the solution with the smallest cosine distance is deleted next. In Figure 2, the solid black dots are the remaining nondominated solutions and the hollow circles are the deleted individuals, with a deletion rate of 40%. At the same time, the same number of individuals is selected from the candidate set to replace the nondominated solutions deleted from the external archive, maintaining the update of the external archive.
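The delete-and-recalculate loop above can be sketched as follows, assuming minimization with strictly positive objective values; the function name and the capacity-driven loop are an illustrative reading of the mechanism, not the authors' exact implementation.

```python
import numpy as np

def cosine_crowding_prune(F, max_size):
    """Iteratively delete the most crowded archive member. Crowding is
    measured by the cosine distance (1 - cos(theta)) between the
    weight-ratio vectors W_i = f(x_i) / sum_k f_k(x_i) (Definition 5) of
    each solution and its nearest neighbor; distances are recomputed
    after every deletion, as described in the text."""
    F = np.asarray(F, dtype=float)
    keep = list(range(len(F)))
    W = F / F.sum(axis=1, keepdims=True)      # weight ratios (Definition 5)
    while len(keep) > max_size:
        Wk = W[keep]
        norm = np.linalg.norm(Wk, axis=1)
        cos = (Wk @ Wk.T) / np.outer(norm, norm)  # pairwise cos(theta)
        np.fill_diagonal(cos, -np.inf)            # ignore self-similarity
        nn_dist = 1.0 - cos.max(axis=1)           # distance to nearest neighbor
        keep.pop(int(np.argmin(nn_dist)))         # drop the most crowded one
    return keep
```

With four archive members whose objective vectors are `[1, 9]`, `[2, 8]`, `[2.1, 7.9]`, and `[9, 1]`, the two middle solutions are nearly parallel in weight-ratio space, so pruning to size 3 removes one of them while the well-separated solutions survive.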
Leaders guiding the optimization process are an effective way to design MOPSOs. Among the many strategies currently available, the search is guided in directions that prompt particles to explore potentially promising areas. The cosine distance strategy proposed in this article is quite different from the random strategies proposed in the past. Figure 3 shows a schematic comparison between the cosine distance strategy and the random strategy. First, both strategies are run 30 times on each test problem (ZDT1-ZDT4 and ZDT6, DTLZ1-DTLZ5, UF1-UF10). The evaluation indicator data from the 30 runs are sorted in descending order into 30 levels, and the evaluation indicators of each level are then averaged. The ordinate indicates the average of all evaluation indicators for each level, and the abscissa indicates that each strategy has been run 30 times. It can be seen from Figure 3 that, at the same level, the average of the cosine distance strategy is significantly better than that of the random strategy, which illustrates the feasibility of the cosine distance strategy. Figure 4 shows that the GCDMOPSO used the cosine distance strategy to detect the evolution state. Taking ZDT1 as an example, it was compared with seven state-of-the-art MOPSOs and seven classic MOEAs: (a) shows the convergence trajectory of the GCDMOPSO and the seven MOPSOs on ZDT1; (b) shows the convergence trajectory of the GCDMOPSO and the seven MOEAs on ZDT1. The experimental results indicate the promising convergence speed of the proposed GCDMOPSO in comparison with these algorithms on ZDT1.
As a further observation, Figure 5 presents the nondominated set associated with the best IGD value among 30 runs obtained by the GCDMOPSO and the compared MOPSOs and MOEAs on the multiobjective DTLZ1.

Selection Strategy of Global Leader.
In MOPSOs, each individual has position information and velocity information, as well as the ability to exchange information with other individuals. These individuals can learn from their personal best position (pbest) and the global best position (gbest), and their position and velocity are then updated through equations (6) and (7) in Section 2 to produce a new generation. The choice of the global optimal position (gbest) is closely related to the distribution of nondominated solutions: if many dense nondominated solutions are distributed in a certain area, the sparsely distributed particles are more likely to become the global optimal particles. To strengthen the selection pressure on gbest, it was combined with the game update mechanism. Thus, a novel global optimal selection strategy based on the game strategy was proposed. The original game-based group optimizer theory divides the population into two parts: game winners and game losers. The losing part learns from the winning part, and the population is updated iteratively on this basis; the specific game process can be found in [30]. The game strategy proposed in this article differs from the original game mechanism. Here, the game is played in the external archive: for each particle to be updated, two individuals are randomly selected from the external archive. The winner of the game becomes the leader, guiding the losing individuals to search for the optimal set, while the winning individuals maintain their original speed and direction. The specific update process is shown in Figure 6. The game individuals in this strategy are selected through nondominated sorting and grid optimal distance, and the outcome of the game is determined according to the cosine distance between the game individuals. The winner of the game acts as the global optimal individual to guide the other individuals in the population to fly.
In each pair of games, the individual to be updated randomly selects two nondominated solutions $a$ and $b$ from the external archive. The two nondominated solutions compete through the cosine distance, and the one with the smaller cosine distance wins the game. As shown in the pseudocode algorithm, when the cosine distance between nondominated solution $a$ and the individual $k$ to be updated is smaller, solution $a$ guides individual $k$ to update its velocity and position. The update formula is as follows:

v_i(t+1) = c_3 v_i(t) + c_4 \left(X_k - X_i(t)\right),
X_i(t+1) = X_i(t) + v_i(t+1),

where $c_3$ and $c_4$ are randomly generated vectors in $[0, 1]$, $X_k$ is the position of the winner of the game, $X_i$ is the current position of the particle, and $v_i$ is the current velocity of the particle. The whole process, from selecting from the external archive to comparing cosine distances, is called a game. Because the selected nondominated solutions are random, the individual to be updated cannot be sure which guide will be selected in the end. The attributes of the leader determine the effect of the individual's renewal, and a leader with better attributes leads the update better. The effect of an individual's update depends entirely on the leader, which is why it is called a game.
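The pairwise game and the loser update can be sketched as follows; `cos_dist`, `game_leader_update`, and all parameter names are illustrative, and the rule v ← c3·v + c4·(X_k − X_i) is a reconstruction from the terms described in the text.

```python
import numpy as np

def cos_dist(u, v):
    """Cosine distance 1 - cos(theta) between two objective vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def game_leader_update(x_i, v_i, f_i, archive_x, archive_f, rng=None):
    """One game: draw two archive members at random; the one with the
    smaller cosine distance to particle i's objective vector wins and
    becomes the leader X_k. The particle is then updated with
    v <- c3*v + c4*(X_k - X_i), x <- x + v, where c3 and c4 are random
    vectors in [0, 1]^D."""
    if rng is None:
        rng = np.random.default_rng()
    a, b = rng.choice(len(archive_x), size=2, replace=False)
    winner = a if cos_dist(archive_f[a], f_i) < cos_dist(archive_f[b], f_i) else b
    x_k = np.asarray(archive_x[winner], float)
    c3 = rng.random(x_i.shape)
    c4 = rng.random(x_i.shape)
    v_new = c3 * v_i + c4 * (x_k - x_i)
    return x_i + v_new, v_new
```

The cosine distance is 0 for parallel objective vectors and 1 for orthogonal ones, so the winner is the archive member pointing in the direction most similar to the particle being updated.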

3.4. Steps of the GCDMOPSO. For MOPs, the objectives are mutually constrained. In MOPSOs, blindness is inevitable when maintaining external archives and selecting the global optimum. This article proposes a novel strategy for external archive updates and global leader selection. The main flow chart is shown in Figure 7, and the main steps of GCDMOPSO are as follows: Step 1. The population was initialized, and the acceleration constants c1 and c2 were set along with the other parameters.
Step 2. The fitness value of each individual was calculated, and nondominated sorting was performed by comparing the fitness value in the current iteration with the best historical fitness value.
Step 3. Whether the termination conditions were met was determined. If so, the results were output and the algorithm terminated; otherwise, continue to the next step.
Step 4. A candidate set was created. By calculating the Euclidean distance from the origin of the coordinates to each individual, individuals with a shorter Euclidean distance were selected into the external archive.
Step 5. An external archive was created, and the worst part of its solutions was deleted using the cosine distance measurement mechanism. At the same time, the candidate set was added as a storage mechanism for the screened advantageous individuals, which replaced the deleted solutions.
Step 6. The global optimal sample was selected. Using roulette selection combined with the game update mechanism, a game strategy incorporating the cosine distance measurement mechanism was designed to select the global optimal sample.
Step 7. The velocity and position of each particle were updated under the guidance of the selected leader.
Step 8. The fitness values of the current individuals were evaluated and ranked.
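The archive maintenance of Step 5 might be sketched as follows. This is a minimal sketch under stated assumptions: the exact cosine distance criterion is not fully specified in the text, so scoring each archive member by the distance to its most similar neighbour and deleting a 40% fraction are assumptions, and all names are hypothetical.

```python
import numpy as np

def cos_dist(u, v):
    """Cosine distance between two objective vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def update_archive(archive_F, candidate_F, frac_delete=0.4):
    """Delete the most crowded frac_delete of archive members (smallest cosine
    distance to their most similar neighbour) and refill the freed slots from
    the candidate set, keeping the archive size constant."""
    n = len(archive_F)
    scores = np.empty(n)
    for i in range(n):
        scores[i] = min(cos_dist(archive_F[i], archive_F[j])
                        for j in range(n) if j != i)
    n_del = int(frac_delete * n)
    keep = np.argsort(scores)[n_del:]   # smallest score = most crowded = deleted
    return np.vstack([archive_F[keep], candidate_F[:n_del]])
```

Refilling from the candidate set is what keeps the archive size stable while the worst-scoring members are removed, as described in Step 5.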

Test Problems.
Comprehensive and diverse test problems were employed to assess the performance of GCDMOPSO. First, the ZDT test problems were adopted. The ZDT series alone, however, cannot demonstrate the superior performance of GCDMOPSO. Therefore, other more difficult MOPs with complex characteristics, the UF test problems, are also used. In order to further test the performance of GCDMOPSO on MOPs with three objectives, the DTLZ1-DTLZ5 and UF8-UF10 test problems are used in this article. These test problems cover most of the challenges in this area, such as many local Pareto fronts, convergence deviations, concavities, and discontinuities. The relevant settings of these test problems are given in Table 2.
Among them, N represents the size of the population, M represents the number of objectives, D represents the dimension of the decision variable, and FEs represents the maximum number of evaluations.

Performance
Measures. The goal of MOPs is to find a uniformly distributed solution set that is as close to the true Pareto front as possible. In order to compare with other algorithms, this article uses the inverted generational distance (IGD) [22] to evaluate the performance of GCDMOPSO. This performance indicator reflects not only the convergence of the algorithm but also the distribution of the final solutions. The true Pareto fronts for computing IGD were downloaded from http://jmetal.sourceforge.net/problems.html.
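A minimal IGD computation consistent with this description can be written as follows (the helper name `igd` is hypothetical):

```python
import numpy as np

def igd(reference_front, obtained_set):
    """Inverted generational distance: the mean Euclidean distance from each
    point of the true Pareto front to its nearest obtained solution.
    Lower values indicate better convergence and distribution."""
    ref = np.asarray(reference_front, dtype=float)
    obt = np.asarray(obtained_set, dtype=float)
    # pairwise distances: reference point i versus obtained point j
    d = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

An obtained set that covers the reference front exactly yields an IGD of zero, which is why the metric rewards both closeness to and coverage of the true front.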

Experimental Settings.
In the experiment, in order to verify the performance of GCDMOPSO in a convincing way, it was compared with seven state-of-the-art MOPSOs (i.e., dMOPSO [19], MOPSO [9], NMPSO [23], SMPSO [13], MOPSOCD [24], MPSO/D [18], and MMOPSO [14]) and seven classic MOEAs (i.e., NSGA-II [10], NSGA-III [25], MOEA/D [15], MOEAIGDNS [26], SPEAR [27], SPEA2 [28], and IBEA [29]). For fair comparison, all relevant parameters of the compared algorithms are set according to their original references, as shown in Table 3. In Table 3, pc and pm are the crossover and mutation probabilities, respectively; ηc and ηm are the distribution indexes of SBX and PM, respectively; F and CR are the parameters of differential evolution; T is the number of divisions in the genetic algorithm; div is the number of grid divisions; and w, c1, and c2 are the parameters of the velocity update equation used in the MOPSOs. The population size N of each algorithm is set to 200 for both two-objective and three-objective problems, the maximum number of fitness evaluations is fixed at 10000, and the size of the external archive is set equal to N. In order to draw statistical conclusions, each test experiment is run independently 30 times. The average and standard deviation (std) of IGD are collected in Tables 4 and 5 for performance comparison. In addition, to determine statistical significance, a Wilcoxon rank-sum test was further carried out on the difference between the results obtained by GCDMOPSO and those obtained by the other algorithms at α = 0.05. All experimental results were obtained on a PC with a 2.3 GHz CPU and 8 GB memory. All source codes of the competing algorithms are provided in the platform PlatEMO [34].
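The significance test used here can be sketched as a self-contained rank-sum implementation. The sketch uses the normal approximation without tie correction, which for 30-run samples closely follows the standard Wilcoxon rank-sum test; the helper name is hypothetical, and the "+"/"−"/"=" encoding mirrors the symbols used in the tables.

```python
import numpy as np
from math import erfc, sqrt

def ranksum_sign(igd_a, igd_b, alpha=0.05):
    """Wilcoxon rank-sum test (normal approximation, ties ignored for brevity);
    returns '+', '-', or '=' for algorithm A versus B, lower IGD being better."""
    x = np.concatenate([igd_a, igd_b])
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)        # simple ranks
    n1, n2 = len(igd_a), len(igd_b)
    r1 = ranks[:n1].sum()                          # rank sum of sample A
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    p = erfc(abs((r1 - mu) / sigma) / sqrt(2))     # two-sided p-value
    if p >= alpha:
        return '='                                 # statistically similar
    return '+' if np.mean(igd_a) < np.mean(igd_b) else '-'
```

Feeding in two vectors of 30 IGD values per test instance yields the per-instance symbols summarized in the last rows of Tables 4 and 5.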

Comparisons of GCDMOPSO with Seven State-of-the-Art
MOPSOs. In GCDMOPSO, seven MOPSOs and seven MOEAs are selected for comparison, and Table 4 reports the average and standard deviation of the IGD values on ZDT1-ZDT4 and ZDT6, DTLZ1-DTLZ5, and UF1-UF10. The pseudocode of the game strategy reads, in part:

    Input: X, Vel, E, gencount, Maxgen
    Output: NP
    1: Let NP = Ø
    2: for X_i ∈ X do
    3:   Randomly select two particles X_a, X_b from E
    4:   Calculate the cosine distances CD_a and CD_b between particles a, b and k
    5:   ...

Moreover, the Wilcoxon rank-sum test is adopted at a significance level of 0.05, where the symbols "+," "−," and "=" in the last row of the tables indicate that the result is significantly better than, significantly worse than, or statistically similar to that obtained by GCDMOPSO, respectively. The best average for each test instance is shown in bold. It can be directly observed that the performance of the proposed GCDMOPSO is significantly better than that of the seven compared MOPSOs, that is, dMOPSO, MOPSO, NMPSO, SMPSO, MOPSOCD, MPSO/D, and MMOPSO. Of all 20 test instances, GCDMOPSO achieved statistically significantly better IGD values on 12 test instances, far more than the competing MOPSOs: the numbers of optimal IGD values for dMOPSO, MOPSO, and MOPSOCD are zero, MPSO/D has one, NMPSO and SMPSO have two each, and MMOPSO has five.
For the two-objective ZDT2, ZDT4, and ZDT6, the proposed GCDMOPSO can obtain a set of nondominated solutions that approximates the entire Pareto front well and maintains a good distribution. For the three-objective DTLZ1, GCDMOPSO can still achieve competitive performance, but on the three-objective DTLZ2-DTLZ5 its performance is less ideal. It is worth noting that MMOPSO performed best on the two-objective ZDT1 and ZDT3, because it adopts the crossover and mutation operators of MOEAs in addition to the updating strategies of PSO. On UF1-UF10, the performance of GCDMOPSO is far better than that of the other compared algorithms. Generally speaking, compared with the existing MOPSOs, the proposed GCDMOPSO shows the best overall performance. At the same time, for 30 independent runs of the different algorithms, partial statistical box plots of the IGD indicator for GCDMOPSO and the compared algorithms are shown in Figure 8 and are consistent with Table 4.
From the above empirical results, we can conclude that, compared with the existing MOPSOs, GCDMOPSO has promising application prospects in solving MOPs.

Comparisons of GCDMOPSO with Seven Competitive
MOEAs. Table 5 presents the mean and standard deviation of the IGD values of NSGA-II, NSGA-III, MOEA/D, MOEAIGDNS, SPEAR, SPEA2, and IBEA on ZDT1-ZDT4 and ZDT6, DTLZ1-DTLZ5, and UF1-UF10, where the Wilcoxon rank-sum test is also adopted and the best mean for each test instance is shown in bold. It can be observed that the performance of the proposed GCDMOPSO is significantly better than that of the seven compared MOEAs (i.e., NSGA-II, NSGA-III, MOEA/D, MOEAIGDNS, SPEAR, SPEA2, and IBEA) in terms of benchmark testing: GCDMOPSO achieves the statistically best performance on 12 of the 20 test instances.

On the three-objective DTLZ series, the compared MOEAs are obviously better than GCDMOPSO, because genetic operators are more suitable for solving MOPs with local Pareto fronts; this is the main reason why more researchers suggest using genetic operators on such MOPs. At the same time, for 30 independent runs of the different algorithms, partial statistical box plots of the IGD indicator for GCDMOPSO and the compared algorithms are shown in Figure 9 (1, 2, 3, 4, 5, 6, 7, and 8 represent NSGA-II, NSGA-III, MOEA/D, MOEAIGDNS, SPEAR, SPEA2, IBEA, and GCDMOPSO, respectively). As shown in Figure 9, GCDMOPSO recorded the minimum values on ZDT1, ZDT2, ZDT4, ZDT6, DTLZ1, UF1-UF4, UF7, UF9, and UF10. It can be clearly seen from Figure 9 that GCDMOPSO obtains the best nondominated solutions compared with the other MOEAs. The results are consistent with the qualitative analysis in Table 5.
From the above empirical results, we can conclude that, compared with the existing MOEAs, GCDMOPSO has promising application prospects in solving MOPs.

Complexity of the GCDMOPSO.
The complexity of the proposed GCDMOPSO depends on the complexity of its components, that is, the game strategy and the cosine distance mechanism. The complexity analysis of GCDMOPSO is as follows.
Suppose that the population size is N and that there are m nondominated individuals. In general, assume that k (m ≤ k ≤ N) games have been played. According to the game strategy, one individual is eliminated after each game; therefore, a total of k individuals were eliminated, including q (0 < q ≤ m) dominated individuals and p (0 ≤ p < N − m) nondominated individuals. After k games, (N − k) of the N individuals remain winners. For convenience of analysis, it is assumed that in each game the (N − k) dominated individuals have the same probability of being selected. The time complexity of the game strategy is then T(n) = O(N²). At the same time, in the process of updating the external archive, the computational complexity of the cosine distance is O(M × (2N)log₂(2N)).
The total complexity of GCDMOPSO follows by combining these two terms. (Table 5: IGD values of the proposed GCDMOPSO and seven MOEAs on ZDT1-ZDT4 and ZDT6, DTLZ1-DTLZ5, and UF1-UF10 test problems.) In addition, this article uses the MATLAB functions tic and toc to measure the runtime (in seconds) of each algorithm at 10000 evaluations. It can be seen from Tables 6 and 7 that, even though GCDMOPSO uses the cosine distance measurement mechanism and the game strategy, its runtime is of the same order of magnitude as that of the other compared algorithms on ZDT1-ZDT4 and ZDT6, DTLZ1-DTLZ5, and UF1-UF10.
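The tic/toc timing described above has a direct analogue in Python's time.perf_counter, for readers reproducing the runtime comparison outside MATLAB. The harness below is a hypothetical sketch, not the authors' code.

```python
import time

def timed(fn, *args):
    """Measure the wall-clock runtime of one call, analogous to MATLAB tic/toc."""
    t0 = time.perf_counter()          # tic
    result = fn(*args)
    elapsed = time.perf_counter() - t0  # toc, in seconds
    return result, elapsed
```

Wrapping one full optimizer run (fixed at 10000 evaluations) in `timed` reproduces the per-algorithm runtime entries of Tables 6 and 7.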

Conclusions
This paper has proposed a novel multiobjective particle swarm optimization algorithm based on a cosine distance mechanism and a game strategy to solve MOPs. The cosine distance measurement mechanism was used to update the Pareto set in the external archive, and a candidate set was added as a storage mechanism for screened advantageous individuals. This drives the Pareto optimal set close to the true Pareto optimal front while maintaining the diversity of the swarm. To further improve performance, this article combined the game update strategy to design a global optimal selection strategy based on the cosine distance measurement mechanism. The experimental studies have shown that the proposed GCDMOPSO performs better than several state-of-the-art MOPSOs and competitive MOEAs.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.