A Many-Objective Optimization Algorithm Based on Weight Vector Adjustment

In order to improve the convergence and distribution of many-objective evolutionary algorithms, this paper proposes an improved NSGA-III algorithm based on weight vector adjustment (called NSGA-III-WA). First, an adaptive weight vector adjustment strategy is proposed that decomposes the objective space into several subspaces. According to the density of each subspace, the weight vectors are adjusted to be sparser or denser, ensuring a uniform distribution of the weight vectors on the Pareto front. Second, an evolutionary model that combines a new differential evolution strategy with a genetic evolution strategy is proposed to generate new individuals and enhance the exploration ability of the weight vectors in each subspace. The proposed algorithm is tested on optimization problems with 3–15 objectives from the DTLZ standard test set and the WFG test instances and is compared with five well-performing algorithms. The Wilcoxon (Mann–Whitney) rank-sum test is used to assess the statistical significance of the results. The experimental results show that NSGA-III-WA performs well in terms of both convergence and distribution.


Introduction
Many-objective optimization problems (MAOPs) [1] refer to optimization problems with more than three objectives that must be optimized simultaneously. As the number of objectives increases, the number of nondominated solutions grows exponentially [2]: most solutions become mutually nondominated, the relative quality of solutions becomes harder to evaluate, and consequently many-objective evolutionary algorithms (MOEAs) suffer poor convergence. In addition, the sensitivity of the Pareto front [3] increases with the spatial dimension, which makes it difficult to maintain the distribution among individuals.
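The dominance comparison underlying this observation is straightforward to state in code. The following sketch (illustrative only, not part of the proposed algorithm) implements Pareto dominance for minimization and shows that the nondominated fraction of a random population rises sharply with the number of objectives:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(points):
    """Fraction of points not dominated by any other point in the set."""
    nd = [p for p in points
          if not any(dominates(q, p) for q in points if q is not p)]
    return len(nd) / len(points)

random.seed(0)
# With more objectives, a random population contains far more nondominated points.
for m in (2, 10):
    pop = [[random.random() for _ in range(m)] for _ in range(200)]
    print(m, nondominated_fraction(pop))
```

For 200 random points, the nondominated fraction is a few percent with 2 objectives but close to 1 with 10, which is exactly the loss of selection pressure described above.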
In order to ensure the convergence and distribution of MOEAs [4], scholars have proposed the following three kinds of solutions to the aforementioned problems: (1) Change the dominance relationship [5] to increase the selection pressure and thereby improve the convergence of the algorithm. In 2005, Laumanns et al. proposed ε-domination [6], which scales each individual's fitness by a factor of (1 − ε) before the Pareto dominance comparison. In 2014, the ϵ-dominance mechanism proposed by Hernández-Díaz et al. [7] added an acceptable threshold to the comparison of individual fitness. In 2014, Yuan et al. proposed θ-domination [8] to maintain the balance between convergence and diversity in evolutionary many-objective optimization (EMO). In 2015, Jingfeng et al. proposed a simple Pareto adaptive ε-domination differential evolution algorithm for multiobjective optimization [9]. In 2016, Yue et al. proposed a grid-based evolutionary algorithm [10] which modifies the dominance criterion to increase the convergence speed of EMO. In 2017, Lin et al. proposed an evolutionary many-objective optimization algorithm based on alpha dominance, which provides strict Pareto stratification [11] to remove most of the dominance-resistant solutions. The methods above can enhance the selection pressure, but such relaxed strategies are only effective when the number of objectives is small, and their parameters are difficult to tune for different optimization problems. (2) Methods based on decomposition [12], which decompose the objective space into several subspaces without changing the dimension of the objectives, thus transforming a MAOP into single-objective or many-objective subproblems. In 2007, Zhang and Li first proposed the multiobjective evolutionary algorithm based on decomposition [13] (MOEA/D), whose convergence is significantly better than that of MOGLS and NSGA-II. In 2016, Yuan et al. proposed a distance-based update strategy, MOEA/DD [14], which maintains diversity during evolution by exploiting the perpendicular distance between solutions and weight vectors. In 2017, Segura and Miranda proposed the decomposition-based MOEA/D-EVSD [15] evolutionary algorithm, which uses a steady-state form and reference directions to guide the search. In 2017, Xiang et al. proposed the VAEA framework [16] based on angle decomposition; the algorithm does not require reference points, and convergence and diversity in the many-objective space are well balanced. However, although the self-adjusting characteristics of the algorithms mentioned above improve the convergence speed, they also make these algorithms fall into local optima more easily, and the distribution remains unsatisfactory. (3) Reference point methods, which decompose a MAOP into a group of many-objective optimization subproblems with simple frontier surfaces [17]. Unlike pure decomposition methods, each subproblem is solved using a many-objective optimization method. In 2013, Wang et al.
[18] proposed the PICEA-w-r algorithm, in which the set of weight vectors coevolves with the population and each weight vector adjusts adaptively according to its own optimal solution. In 2014, Qi et al. adopted an enhanced weight vector adjustment method in the MOEA/D-AWA algorithm [19]. In 2014, Deb and Jain proposed a nondominated-sorting evolutionary many-objective optimization algorithm based on reference points [20] (NSGA-III), whose reference points are uniformly distributed throughout the objective space. In the same year, Liu et al. proposed the MOEA/D-M2M method, in which the entire PF is divided into multiple segments and solved separately by dividing the objective space into multiple subspaces; each segment corresponds to a many-objective optimization subproblem [21], which improves the distribution of the solution set. In 2016, Bi and Wang proposed an improved NSGA-III many-objective optimization algorithm [22] (NSGA-III-OSD) based on objective space decomposition. The uniformly distributed weight vectors are partitioned into several subspaces through a clustering approach, so that each weight vector specifies a unique subregion. A smaller objective space helps overcome the invalidity of the Pareto dominance relationship in many-objective spaces [23]; however, because the subspaces are fixed, the solutions near each subspace edge become sparse, which degrades the distribution and uniformity on the solution surface. In 2016, Cheng et al. proposed a reference-vector-guided evolutionary algorithm for solving MAOPs [24] (RVEA), whose adaptive strategy dynamically adjusts the weight vectors according to the form of the objective functions. The weight vectors generated by the methods above are uniformly distributed, but a uniform distribution of the reference points on the solution surface cannot be guaranteed, and convergence may also be lost.
In order to further improve the convergence and distribution of many-objective algorithms, a many-objective optimization algorithm based on weight vector adjustment (NSGA-III-WA), built on NSGA-III, is proposed. First, in order to enhance the exploration ability of the solutions with respect to the weight vectors, an evolutionary model integrating a novel differential evolution strategy and a genetic evolution strategy is used to generate new individuals. Then, in order to ensure a uniform distribution of the weight vectors on the solution surface, the objective space is divided into several subspaces by clustering the objective vectors, and the weights of each subspace are adjusted to improve the distribution in the objective space. Simulation experiments are carried out on the DTLZ standard test set [25] and the WFG standard test set [26], comparing the proposed algorithm with five currently well-performing algorithms on optimization problems with 3 to 15 objectives, using GD, IGD, and HV as performance indicators. The experimental results show that NSGA-III-WA performs well in both convergence and distribution. The rest of the paper is organized as follows. Section 2 introduces the original algorithm. Section 3 describes the proposed many-objective evolutionary algorithm. Section 4 compares the similarities and differences between this algorithm and similar algorithms. Section 5 gives the experimental parameters of each algorithm together with comprehensive experiments and analysis. Finally, Section 6 summarizes the paper and points out issues for future study.

NSGA-III
The NSGA-III algorithm is similar to the NSGA-II algorithm in that it selects individuals based on nondominated sorting; the difference lies in how individuals are chosen after the nondominated sorting. The NSGA-III algorithm proceeds as follows: First, a parent population P_t of size N is set up and operated on by genetic operators (selection, crossover, and mutation) to obtain an offspring population Q_t of the same size; P_t and Q_t are then merged to obtain a population R_t of size 2N.

Computational Intelligence and Neuroscience

The population R_t is subjected to nondominated sorting to obtain layers of individuals with nondominated levels (F_1, F_2, and so on). Individuals are added level by level to the set S_t of the next generation until the size of S_t is greater than or equal to N; the nondominated level reached at this point is denoted F_l. K individuals are picked from F_l so that K plus the sizes of all previous levels equals N. Before this selection, the objective values are normalized using the ideal point and the extreme points. After normalization, the ideal point of S_t is the zero vector, and the supplied reference points lie exactly on the normalized hyperplane. The perpendicular distance between each individual in S_t and each weight vector (the line connecting the origin to a reference point) is calculated, and each individual in S_t is then associated with the reference point of minimum perpendicular distance.
Finally, the niche operation is used to select members from F_l. A reference point may be associated with one or more objective vectors, or with none at all. The purpose of the niche operation is to select the K individuals from the F_l layer that are closest to their reference points into the next generation. First, the number of individuals associated with each reference point in the S_t \ F_l population is counted; ρ_j denotes the number of individuals associated with the jth reference point. The specific operation is as follows: When ρ_j is equal to zero, the next step depends on whether any individual in F_l is associated with that reference point. If one or more individuals are associated with the reference vector, the one with the smallest distance is extracted, added to the next generation, and ρ_j is set to ρ_j + 1. If no individual in F_l is associated with the reference point, the reference point is removed for this generation. If ρ_j > 0, a member of F_l associated with the reference point is chosen. This process repeats until the population size reaches N.

The Proposed Algorithm
3.1. The Proposed Algorithm Framework. In order to further improve the convergence speed and distribution of the NSGA-III algorithm, a many-objective optimization algorithm based on weight vector adjustment (NSGA-III-WA) is proposed. Algorithm 1 gives the framework of NSGA-III-WA. The algorithm improves on NSGA-III in two main aspects: the evolution strategy and the weight vectors. This paper also adds a condition for enabling the weight vector adjustment, which speeds up the algorithm without affecting its performance. First, we initialize the population P_t with population size N and the weight vector set W_unit. Second, we enter the iteration process: the population Q_t is generated by applying the differential operator to P_t, and the population R_t of size 2N is obtained by combining P_t and Q_t. R_t is then updated through the environmental selection strategy to obtain the next generation P_{t+1}. Lastly, the weight vectors are adjusted and the termination condition is checked: if it is satisfied, the current result is output and the algorithm terminates; otherwise, iteration continues.
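The control flow above can be sketched as follows. This is an illustrative skeleton, not the paper's implementation: the three operators are passed in as callables (their real definitions are the subject of the following subsections), and all function names here are hypothetical.

```python
def nsga3_wa_loop(P, W, make_offspring, environmental_selection,
                  adjust_weights, gen_max):
    """Illustrative skeleton of Algorithm 1 (NSGA-III-WA).
    make_offspring(P, gen, gen_max)  -> offspring Q_t (differential operators)
    environmental_selection(R, W, N) -> next population of size N
    adjust_weights(W, P)             -> adjusted weight vector set
    Weight adjustment is only enabled after half of gen_max iterations,
    the enabling condition described later in this section."""
    N = len(P)
    for gen in range(gen_max):
        Q = make_offspring(P, gen, gen_max)   # generate Q_t from P_t
        R = P + Q                             # merged population, size 2N
        P = environmental_selection(R, W, N)  # pick next generation P_{t+1}
        if gen >= gen_max // 2:               # enabling condition
            W = adjust_weights(W, P)
    return P, W
```

Passing the operators in as parameters makes the generational loop testable independently of the concrete variation and selection strategies.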

Initialization.
The initial population is randomly generated, with size equal to the number of weight vectors in the space. This article uses Das and Dennis's systematic method [27] to set the weight vectors W_unit = {w_1, w_2, . . . , w_N}. The total number of weight vectors is N = C(H + M − 1, M − 1), where H is the number of divisions along each objective and M is the number of objective functions. The initialized weight vectors (reference points) are uniformly distributed in the objective space, and each weight vector w = (w_1, . . . , w_M) satisfies w_i ≥ 0, i = 1, . . . , M, and Σ_{i=1}^{M} w_i = 1.
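The systematic approach can be sketched with a stars-and-bars enumeration; the following is an illustration consistent with the count N = C(H + M − 1, M − 1) above, with the function name chosen here for clarity:

```python
from itertools import combinations
from math import comb

def das_dennis_weights(M, H):
    """Systematic approach of Das and Dennis: all M-component weight
    vectors with entries from {0, 1/H, ..., H/H} summing to 1.
    Yields C(H+M-1, M-1) uniformly spaced points on the unit simplex."""
    weights = []
    # Choose M-1 'bar' positions among H+M-1 slots (stars and bars);
    # the gaps between bars give the numerators of the components.
    for bars in combinations(range(H + M - 1), M - 1):
        prev, w = -1, []
        for b in bars:
            w.append((b - prev - 1) / H)
            prev = b
        w.append((H + M - 1 - prev - 1) / H)
        weights.append(w)
    return weights

W = das_dennis_weights(M=3, H=4)
print(len(W), comb(4 + 3 - 1, 3 - 1))
```

With M = 3 objectives and H = 4 divisions, this produces C(6, 2) = 15 weight vectors, each summing to 1.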

Evolutionary Strategy.
The evolutionary strategy is essential to the convergence speed and the accuracy of the solutions, because it directly determines the quality of the new solutions to the subproblems during the evolutionary process. In order to improve the convergence speed, this paper proposes a new differential evolution strategy to replace the original strategy. The pseudocode is shown in Algorithm 2. Every individual performs the same operations, described below.

Variation. It is mainly divided into two parts:
(a) Select three individuals x_r1, x_r2, x_r3 randomly from the population. A new individual x_v is obtained by applying the parental vector variation of (1), which maintains population diversity. At the beginning of the algorithm, the mutation rate should be relatively large so that individuals differ from each other; this improves the search ability of the algorithm and prevents individuals from falling into local optima. As the number of iterations increases and the solutions approach the Pareto optimal front (PF) [28], the mutation rate should decrease to accelerate convergence toward the optimum; this both improves the convergence speed and reduces the complexity of the algorithm. Based on this analysis, this paper proposes the adaptive mutation factor F = 0.5 + 0.5 cos(π × gen/gen_max). As the number of iterations increases, the mutation factor decreases: at the start of the algorithm it enhances an individual's ability to jump out of local optima and find superior individuals, and as it becomes smaller the algorithm tends toward stability. To maintain population diversity, individuals x_r1 and x_r2 are also used to generate two new individuals (line 3 in Algorithm 2) through the simulated binary crossover operator, (2) and (3). In (4), u is a random number in [0, 1] and η is a constant fixed at 20.
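Since equations (1)–(4) did not survive extraction here, the following sketch assumes the classic DE/rand/1 mutation for (1) and the standard SBX formulas for (2)–(4); only the adaptive factor F is taken verbatim from the text.

```python
import math
import random

def adaptive_F(gen, gen_max):
    """Adaptive mutation factor from the text: decays from 1.0 to 0.0
    following a cosine schedule as iterations progress."""
    return 0.5 + 0.5 * math.cos(math.pi * gen / gen_max)

def de_mutation(x_r1, x_r2, x_r3, F):
    """Classic DE/rand/1 mutation (assumed form of equation (1)):
    x_v = x_r1 + F * (x_r2 - x_r3)."""
    return [a + F * (b - c) for a, b, c in zip(x_r1, x_r2, x_r3)]

def sbx_pair(x_r1, x_r2, eta=20.0):
    """Simulated binary crossover on two parents (assumed form of
    equations (2)-(4)); returns two children x_c1, x_c2."""
    c1, c2 = [], []
    for a, b in zip(x_r1, x_r2):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1.append(0.5 * ((1 + beta) * a + (1 - beta) * b))
        c2.append(0.5 * ((1 - beta) * a + (1 + beta) * b))
    return c1, c2

print(adaptive_F(0, 100), adaptive_F(50, 100), adaptive_F(100, 100))
```

Note that F starts at 1.0, passes through 0.5 at the halfway point, and decays to 0.0, matching the schedule described above; SBX preserves the parents' per-dimension mean.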
(b) Select one individual by probabilistic selection from x_v, x_c1, and x_c2 (generated in step (a)) to enter the crossover operation (lines 4 to 12 in Algorithm 2). The specific operation is as follows: Generate a random number in [0, 1] and compare it with p. If the number is smaller than p, execute lines 5–9 of the pseudocode in Algorithm 2; otherwise, select x_v to enter the crossover operation. (The detailed operation of lines 5–9 in Algorithm 2 is: generate a random number in [0, 1] and compare it with 0.5; if it is smaller than 0.5, select x_c2 to enter the crossover operation; otherwise, select x_c1.) Here, p is set to 0.5.

Crossover.
This article selects the single-point crossover method, located in lines 13–20 of the pseudocode in Algorithm 2; in this way, individual selectivity is enhanced. Generate a random number in [0, 1]. If the number is less than or equal to the crossover operator CR, then select a dimension of the individual at random and execute the crossover operation at the selected point. A large number of experiments have confirmed that a CR value of 0.4 works well. The generated individuals and their fitness values then replace the original ones.
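The single-point crossover described above can be sketched as follows; the exact tail-swap semantics are an assumption of this sketch, since the text only says that a random dimension is selected as the crossover point.

```python
import random

def single_point_crossover(x_a, x_b, CR=0.4):
    """Single-point crossover sketch (lines 13-20 of Algorithm 2, as
    described in the text). With probability CR, pick a random dimension
    and swap the tails of the two vectors at that point; otherwise return
    unchanged copies. Assumes vectors of length >= 2."""
    child_a, child_b = list(x_a), list(x_b)
    if random.random() <= CR:
        point = random.randrange(1, len(x_a))   # crossover point
        child_a[point:], child_b[point:] = x_b[point:], x_a[point:]
    return child_a, child_b
```

Whatever the crossover point, the multiset of genes across the two children equals that of the two parents, which makes the operator easy to verify.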

Environmental Selection.
The purpose of the environmental selection operation is to select the next generation of individuals. The framework of Algorithm 3 includes the following steps: (1) Nondominated sorting is conducted on the population R_t, and individuals of rank 1, 2, 3, . . . are added in order to the offspring collection S_t. (2) When the size of S_t first reaches or exceeds N, the nondominated level F_l at that point is recorded; the procedure either terminates (|S_t| = N) or enters the next step (|S_t| > N). (3) Individuals in S_t \ F_l enter P_{t+1}, and further individuals are selected until its size is N. The specific operations are discussed below.

Normalize Objective.
Since the magnitudes of the respective objective values differ, the objective values must be normalized for the sake of fairness. First, the minimum value z_i of each objective function over the population is calculated; the vector of these z_i values constitutes the ideal point. All individuals are then normalized according to (5), where a_i is the intercept on each axis, calculated from the achievement scalarizing function (ASF) shown in (6).
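A minimal sketch of this normalization step is given below. It follows the ideal-point translation and the ASF-based extreme points, but as a simplification uses each extreme point's own coordinate as the intercept estimate instead of solving for the exact hyperplane through all M extreme points.

```python
def ideal_point(F):
    """Per-objective minimum over the population (the z_i values)."""
    return [min(f[i] for f in F) for i in range(len(F[0]))]

def asf(f, z, axis, eps=1e-6):
    """Achievement scalarizing function along one objective axis, in the
    spirit of equation (6): max_i (f_i - z_i) / w_i, where w is the axis
    direction with zeros replaced by a small eps."""
    return max((fi - zi) / (1.0 if i == axis else eps)
               for i, (fi, zi) in enumerate(zip(f, z)))

def normalize(F, eps=1e-6):
    """Sketch of the normalization of equation (5): translate by the
    ideal point, then divide by per-axis intercepts. Simplification: the
    intercept of axis i is taken as the i-th coordinate of the extreme
    point that minimizes the ASF along axis i."""
    z = ideal_point(F)
    M = len(z)
    a = []
    for i in range(M):
        ext = min(F, key=lambda f: asf(f, z, i, eps))  # extreme point of axis i
        a.append(max(ext[i] - z[i], eps))              # intercept estimate
    return [[(fi - zi) / ai for fi, zi, ai in zip(f, z, a)] for f in F]
```

After this step the ideal point maps to the zero vector and the extreme points sit near 1 on their own axes, as required for the reference points to lie on the normalized hyperplane.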

Associate Each Member of S_t with a Reference Point.
In order to associate each individual in S_t with the reference points after normalization, a reference line is defined for each reference point on the hyperplane. In the normalized objective space, each population member is associated with the reference point whose reference line is at the shortest perpendicular distance.
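The association step can be sketched as follows, computing the perpendicular distance from each normalized member to each reference line through the origin:

```python
import math

def perpendicular_distance(f, w):
    """Distance from normalized objective vector f to the reference line
    through the origin and reference point w."""
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    proj = sum(fi * wi for fi, wi in zip(f, w)) / norm_w
    return math.sqrt(max(sum(fi * fi for fi in f) - proj * proj, 0.0))

def associate(F, W):
    """For each member of F, the index of the reference point in W with
    the smallest perpendicular distance, plus that distance."""
    out = []
    for f in F:
        j = min(range(len(W)), key=lambda k: perpendicular_distance(f, W[k]))
        out.append((j, perpendicular_distance(f, W[j])))
    return out
```

A member lying exactly on a reference line has perpendicular distance zero to that line, so it is always associated with that reference point.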

Compute the Niche Count of the Reference Point.
Traversing every individual in the population, calculate the distances between it and all reference points, and record the number of individuals associated with each reference point as ρ_j, the niche count of the jth reference point.

Niche Preservation Operation.
If the number of population members associated with a reference point is zero but there is an individual associated with that reference point in F_l, then the individual with the smallest distance is extracted from F_l and joins the selected next-generation population, and the niche count of that reference point is increased by one. If no individual in F_l is associated with the reference point, the reference point is deleted. If the niche count is not zero, an individual in F_l associated with the reference point is selected, and this continues until the population size is N.
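The niche preservation loop described above can be sketched as follows; the helper's argument names are illustrative, not from the paper.

```python
import random

def niche_selection(S_counts, Fl, assoc_Fl, dist_Fl, K, rng=random):
    """Sketch of the niching loop described above.
    S_counts[j]  -- niche count rho_j of reference point j over S_t minus F_l
    Fl           -- indices of candidate individuals in the last front
    assoc_Fl[i]  -- reference point associated with candidate i
    dist_Fl[i]   -- its perpendicular distance to that reference point
    Picks K candidates, always serving the least-crowded reference point."""
    chosen, Fl = [], list(Fl)
    counts = dict(S_counts)
    active = set(counts)
    while len(chosen) < K and active:
        # Least-crowded reference point; random tie-break.
        j = min(active, key=lambda r: (counts[r], rng.random()))
        members = [i for i in Fl if assoc_Fl[i] == j]
        if not members:
            active.discard(j)          # no candidate serves j: retire it
            continue
        if counts[j] == 0:
            pick = min(members, key=lambda i: dist_Fl[i])  # closest first
        else:
            pick = rng.choice(members)
        chosen.append(pick)
        Fl.remove(pick)
        counts[j] += 1
    return chosen
```

When a reference point has niche count zero, the closest candidate is taken first, which is what pulls the population toward empty regions of the front.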

Weight Adjustment.
Even though the weight vectors are uniformly distributed in the space, uniformity on the solution surface cannot be achieved once the algorithm reaches a certain stable state. This is because of the complexity caused by the irregular shape of the PFs of the objective functions. The distribution of the weight vectors is particularly important when all individuals are mutually indistinguishable and lie on the first nondominated level.
Therefore, in order to improve the distribution of many-objective algorithms, a weight vector adjustment strategy is proposed, whose framework is shown in Algorithm 4. The distribution of the weight vectors is adjusted appropriately according to the shape of the nondominated front. To prevent the weight vector adjustment in high-dimensional space from concentrating on a single objective, the K-means clustering method is used to divide the weight vectors into different subspaces. The specific operations are described below.
First, each weight vector is associated with a population member (line 1 in Algorithm 4). Second, the solution space is decomposed into several subspaces using the K-means clustering method (lines 2–5 in Algorithm 4), as shown in Figure 1. To prevent errors caused by excessive differences within the solution set, a subspace should be neither too large nor too small; the space is divided according to the population size, and a large number of experiments confirmed that C = 13 usually gives good results. The solution set is thus decomposed into ⌈N/C⌉ cluster spaces, and the weight vectors are adjusted by comparing the density of the entire objective space with the densities of the subspaces (lines 6–17 in Algorithm 4). As shown in Figure 2, w2 should move away from w1 and approach w3. Finally, the number of weight vectors is adjusted so that it matches the original number: if it is greater than N, a weight vector is deleted at the densest position in the objective space; if it is less than N, a weight vector is added at a sparse position (lines 18–20 in Algorithm 4). Spatial density is defined by averaging the distances between similar individuals in the population. The minimum spatial density is defined as h_1·ρ_o and the maximum spatial density as h_2·ρ_o; under normal circumstances, h1 = 0.2 and h2 = 1.3 give relatively good results. Here ρ_o is the density of the overall objective space and ρ_i is the density of a subspace. The adjustment process is divided into the two situations described below: (1) When the subspace density is less than the objective space density, determine whether the subspace density is too small. If the density of the subspace is less than the minimum spatial density h_1·ρ_o, the subspace is considered too sparse; in this case, the weight vectors should be dispersed.
At this point, the two nearest neighboring weight vectors are taken, their sum vector is added to the set of weight vectors, and the two parent vectors are deleted. Otherwise, the weight vectors should only be fine-tuned to achieve uniformity across the objective plane: according to the density difference, the two nearest weight vectors in the subspace are adjusted by (7) and (8). Here, W_unit(k) and W_unit(l) are the closest pair of weight vectors, mt is the minimum distance, ρ_i is the density value, and W_unit(gwk) and W_unit(gwl) are the neighboring weights of the respective weight vectors.
(2) When the subspace density is greater than the objective space density, determine whether the subspace density is too large. If the density of the subspace is greater than the maximum spatial density h_2·ρ_o, the subspace is considered too dense; in this case, the weight vectors should be aggregated. At this point, the two furthest neighboring weight vectors are taken and their sum vector is added to the set of weight vectors. Otherwise, according to the density difference, the weight vectors are adjusted by (9) and (10). Here, W_unit(k) and W_unit(l) are the furthest pair of weight vectors, mx is the maximum distance, and ρ_i is the density value.
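The density test driving cases (1) and (2) can be sketched as follows. The paper only says that density is obtained by averaging distances of similar individuals; taking density as the inverse of the mean nearest-neighbour distance is an assumption of this sketch.

```python
import math

def mean_nn_distance(points):
    """Average nearest-neighbour distance within a set of weight vectors.
    Assumes at least two distinct points."""
    total = 0.0
    for i, p in enumerate(points):
        total += min(math.dist(p, q) for j, q in enumerate(points) if j != i)
    return total / len(points)

def classify_subspace(sub, full, h1=0.2, h2=1.3):
    """Sketch of the density test in Algorithm 4, under the assumption
    that density = 1 / mean nearest-neighbour distance. Returns 'sparse'
    (weights should be dispersed), 'dense' (weights should be aggregated),
    or 'ok' (fine-tune only), using the thresholds h1*rho_o and h2*rho_o."""
    rho_o = 1.0 / mean_nn_distance(full)   # overall objective-space density
    rho_i = 1.0 / mean_nn_distance(sub)    # subspace density
    if rho_i < h1 * rho_o:
        return 'sparse'
    if rho_i > h2 * rho_o:
        return 'dense'
    return 'ok'
```

With the default thresholds h1 = 0.2 and h2 = 1.3 from the text, a subspace whose points sit far closer together than the overall average is flagged dense, and one far sparser is flagged sparse.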
It is worth emphasizing that the edge vectors are immovable; otherwise, the search range of the algorithm would be affected. Half of the maximum number of iterations is used as the enabling condition for the weight vector adjustment.

Discussion
The previous section described the NSGA-III-WA algorithm in detail. In this section, we compare the similarities and differences between NSGA-III-WA, NSGA-III, and VAEA.

The Similarities and Differences between NSGA-III-WA and VAEA
(a) Both algorithms use Pareto dominance to select individuals. (b) Both algorithms need to normalize the population. The difference is that VAEA normalizes the population according to the ideal point and the nadir point of the population, while NSGA-III-WA obtains the intercept on each objective axis by calculating the ASF and then normalizes the population. The latter is more universal and more reasonable.

Simulation Results
In order to verify the performance of the proposed algorithm on MAOPs, this paper selects the general test function sets DTLZ and WFG. This section first gives the corresponding parameter settings for each algorithm and then presents, compares, and analyzes the experimental results.

General Experimental Settings.
The number of decision variables for all DTLZ test functions is V = M + k − 1, where M is the number of objective functions, k = 5 for DTLZ1, and k = 10 for DTLZ2–DTLZ6. The number of decision variables for all WFG test functions is V = k + l, where the position-variable count is k = M − 1, M is the objective dimension, and the distance-variable count is l = 10.
The population sizes of NSGA-III and RVEA are related to the scale of the uniformly distributed weights and are determined by the number of objectives M and the number of divisions p on each objective. The double-layer distribution method in [13] is adopted to tackle this problem. The specific parameter settings are given in Table 1. For a fair comparison, the population size is the same as in the other three algorithms. Because the objective dimensions of the problems differ, the maximum number of function evaluations (MFE) also differs; following [16], the specific settings are shown in Table 2. The maximum number of iterations is calculated as gen_max = MFE/N. The parameter settings of NSGA-III and MOEA/D are shown in Table 3. In addition, the number of MOEA/D neighbor weights T is 10, the RVEA penalty factor change rate α is 2, and the VAEA angle threshold is δ = (π/2)/(N + 1). This section presents the results and analysis of the GD, IGD, and HV performance data on the DTLZ1–6 test functions.

Results and Analysis of DTLZ Series Functions.
The experimental results are shown in Tables 4–6. They are the averages and standard deviations of 30 independent runs. The best results are shown in bold, the values in parentheses indicate the standard deviation, and the number in square brackets is the algorithm's performance rank, based on the Wilcoxon (Mann–Whitney) rank-sum test [33]. To investigate whether NSGA-III-WA is statistically superior to the other algorithms, the rank-sum test is performed at the 0.05 significance level between NSGA-III-WA and each competing algorithm on each test case. The test results are given at the end of each cell by the symbols "+," "=," and "−," which indicate that NSGA-III-WA performs significantly better than, equal to, or worse than the algorithm in the corresponding column. The last row of Tables 4–6 summarizes the number of test instances on which NSGA-III-WA is significantly better than, equal to, and worse than its competitors. Tables 7–9 show the comparison of the NSGA-III-WA algorithm with the other five algorithms under different objective numbers.
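The significance test used here can be sketched in a few lines via the normal approximation to the rank-sum statistic, which is adequate for 30 runs per algorithm; this sketch assumes no tied values between the two samples.

```python
import math

def rank_sum_p(a, b):
    """Approximate two-sided Wilcoxon rank-sum (Mann-Whitney) p-value
    using the normal approximation. Assumes no ties across samples."""
    n1, n2 = len(a), len(b)
    pooled = sorted(list(a) + list(b))
    r1 = sum(pooled.index(v) + 1 for v in a)          # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0                     # mean of R1 under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0) # std of R1 under H0
    z = (r1 - mu) / sigma
    # Two-sided p-value from the standard normal CDF.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

A p-value below 0.05 yields a "+" or "−" entry depending on which algorithm has the better metric; otherwise the comparison is recorded as "=".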
It can be seen from the experimental results in Table 4 that NSGA-III-WA is superior to the other five algorithms on most instances; on DTLZ5 it is superior only on the 8-, 10-, and 15-objective instances, and on DTLZ6 NSGA-III obtains the best results. This shows that in solving many-objective problems, the convergence of NSGA-III-WA is more effective than that of the NSGA-III algorithm and better than that of the other algorithms. Table 7 shows a summary of the statistical test results from Table 4: NSGA-III-WA is compared with the five other advanced many-objective algorithms, counting the numbers of wins (+), ties (=), and losses (−). As can be seen from the table, NSGA-III-WA is clearly superior to the five selected state-of-the-art algorithms.
From Table 5, NSGA-III-WA obtains the best results on the 5-, 10-, and 15-objective DTLZ1 instances, the 8-, 10-, and 15-objective DTLZ2 and DTLZ3 instances, the 3-, 5-, and 8-objective DTLZ4 instances, and the 5-, 8-, and 15-objective DTLZ5 instances. Moreover, it achieves the second-best results on the 3- and 8-objective DTLZ1 instances, the 3- and 5-objective DTLZ2 instances, the 3-objective DTLZ3 instance, and the 10-objective DTLZ5 instance. Nevertheless, on DTLZ5 and DTLZ6, the results of NSGA-III-WA are not significant, because these two functions test the ability to converge to a curve: NSGA-III-WA needs to build a hypersurface, M extreme points cannot be found in the later stages of the algorithm to construct the hypersurface, and so it cannot converge to a curve well. On the other test functions, NSGA-III-WA performs noticeably well and is a stable and relatively comprehensive algorithm. Table 8 shows a summary of the statistical test results from Table 5; it can be seen that NSGA-III-WA outperforms the other five algorithms.
From Table 6, it can be seen that NSGA-III-WA effectively handles most test problems. It obtains the best results on the 3-, 8-, 10-, and 15-objective DTLZ1 instances, the 5- and 10-objective DTLZ2 instances, the 3-, 5-, 10-, and 15-objective DTLZ3 instances, the 10- and 15-objective DTLZ4 instances, the 5-, 8-, and 10-objective DTLZ5 instances, and the 3-, 5-, and 8-objective DTLZ6 instances. Moreover, it achieves the second-best results on the 5-objective DTLZ1 instance, the 15-objective DTLZ2 instance, the 3- and 5-objective DTLZ4 instances, the 3- and 15-objective DTLZ5 instances, and the 10- and 15-objective DTLZ6 instances. However, its performance on the 3-objective DTLZ2 instance and the 8-objective DTLZ3 instance is poor. Although NSGA-III, VAEA, RVEA, and MOEA/D can each obtain optimal values for particular dimensions of particular functions, NSGA-III-WA has the best overall performance considering all dimensional objective results. Table 9 shows a summary of the statistical test results from Table 6; as can be seen, the hypervolume performance of NSGA-III-WA is better than that of the other five algorithms.
In order to show the effect of the algorithms more intuitively, their performance is presented in the form of box plots. Due to space limitations, only the box plots of the algorithms under five objectives and fifteen objectives are given here. Figures 3–8 show the performance box plots under five objectives, and Figures 9–14 show those under fifteen objectives. From Figures 3–8, it can be seen that NSGA-III-WA achieves better results on most test problems: its convergence and breadth are significantly better than those of the other five algorithms. The overall performance indicators achieve the best results on DTLZ1 and DTLZ4 and the second-best results on DTLZ2 and DTLZ3. Although the minimum value is obtained on DTLZ5 and DTLZ6, there are outliers, indicating that the algorithm is relatively unstable there. This is because DTLZ5 and DTLZ6 test the ability of the algorithm to converge to a curve, while NSGA-III-WA needs to build a hypersurface and so cannot converge to a curve well. Nevertheless, considering all test function results, the overall robustness of NSGA-III-WA is relatively good.
From Figures 9–14, it can be seen that NSGA-III-WA can handle most problems under 15 objectives. Its convergence on DTLZ1–DTLZ5 is significantly better than that of the other five algorithms, and its overall performance obtains the best results on DTLZ1–DTLZ3, DTLZ5, and DTLZ6. Under 15 objectives, NSGA-III-WA obtains the minimum on DTLZ5 and DTLZ6 but is relatively unstable there. Its breadth achieves the best results on DTLZ1, DTLZ3, and DTLZ4 and the second-best results on DTLZ2, DTLZ5, and DTLZ6, with some outliers. On 15 objectives, the outliers of every algorithm evidently increase; this is explained by the fact that the stability of algorithms in high-dimensional space declines as the spatial breadth increases. Judging from the results on all test functions, NSGA-III-WA has better stability.
In order to visually reflect the distribution of the solution set in the high-dimensional objective space, parallel coordinates are used to visualize the high-dimensional data, as shown in Figure 15.
From Figure 15, it can be seen that the final solution sets found by NSGA-III-WA and RVEA on this problem are similar in convergence and distribution. In contrast, the distributions of MOEA/D-M2M and NSGA-III are slightly worse. The distribution of the solutions found by VAEA is poor. MOEA/D produces concentrated solutions, loses the extreme solutions at the 12th objective, and its distribution is seriously deficient.

Testing and Analysis of WFG Series Functions.
The performance indicators for the WFG test functions are mainly IGD and HV. From the results in Table 10, it can be seen that NSGA-III-WA handles most of the considered instances well. In particular, it achieves the best overall performance on the 3-, 5-, and 10-objective WFG2 instances and the 5-, 8-, and 15-objective WFG3 instances; in addition, it achieves the best performance on the 15-objective WFG8 instance and the 3- and 8-objective WFG9 instances. VAEA performs well on the 8-objective WFG1 and WFG2 instances and also achieves good results on the 3- and 10-objective WFG3 instances and the 4-objective WFG3 instance. RVEA obtains the best IGD value on the 3-objective WFG1 instance and the 5- and 10-objective WFG4 instances; it is worth noting that RVEA performs poorly on the WFG2 and WFG3 instances, although relatively well compared with the NSGA-III and MOEA/D algorithms. NSGA-III and MOEA/D-M2M typically show moderate performance on most WFG problems and achieve good results only on specific WFG test instances. MOEA/D does not produce satisfactory results on any WFG test instance, and as the number of objectives increases, its results gradually deteriorate. Table 11 shows a summary of the statistical test results from Table 10; it can be seen that the performance of NSGA-III-WA is significantly better than that of the other five algorithms.
From the results in Table 12, it can be seen that NSGA-III-WA obtains the best performance on most of the high-dimensional objective problems. NSGA-III works well on the WFG1 and WFG2 test instances, and VAEA also gets good results on the 10- and 15-objective WFG3 instances. RVEA obtains the best HV value on the 3-, 5-, and 10-objective WFG4 instances. MOEA/D and MOEA/D-M2M are not very effective on these five instances. Table 13 shows a summary of the statistical test results from Table 12. The three-objective performance of the NSGA-III-WA algorithm is not very prominent.
Its performance on eight objectives is the same as that of the RVEA algorithm, but the NSGA-III-WA algorithm achieves better performance on high-dimensional objective problems. In general, the NSGA-III-WA algorithm outperforms the other five algorithms on this indicator.
In summary, after comparing the GD, IGD, and HV test results, the performance of the NSGA-III-WA algorithm is superior overall.

Conclusion
This paper proposes a many-objective optimization algorithm based on weight vector adjustment, which increases the individuals' ability to evolve through a new differential evolution strategy and, at the same time, dynamically adjusts the weight vectors by means of K-means clustering so that the weight vectors are distributed as evenly as possible on the objective surface. The NSGA-III-WA algorithm has good convergence ability and good distribution. To prove its effectiveness, NSGA-III-WA was experimentally compared with five other state-of-the-art algorithms on the DTLZ test set and the WFG test instances. The experimental results show that the proposed NSGA-III-WA performs well on the DTLZ and WFG test instances studied, and the obtained solution sets have good convergence and distribution. However, the proposed algorithm has high complexity, and it only alleviates, rather than eliminates, the difficulty of sensitive frontiers. Further research will address these problems.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.