Adaptive Strategies Based on Differential Evolutionary Algorithm for Many-Objective Optimization

The decomposition-based algorithm, for example, the multiobjective evolutionary algorithm based on decomposition (MOEA/D), has proved effective and useful in a variety of multiobjective optimization problems (MOPs). On the basis of MOEA/D, MOEA/D-DE replaces the simulated binary crossover (SBX) operator with the differential evolution (DE) operator to enhance the diversity of the solutions more effectively. However, the amplification factor and the crossover probability are fixed in MOEA/D-DE, which can lead to a low convergence rate and a higher likelihood of falling into local optima. To overcome this prematurity problem, this paper proposes three different adaptive operators in DE that adjust the crossover probability and amplification factor adaptively. We incorporate these three adaptive operators into MOEA/D-DE and MOEA/D-PaS to solve MOPs and many-objective optimization problems (MaOPs), respectively. This paper also designs a sensitivity experiment for the tunable parameter η in the proposed adaptive operators to explore how η affects the convergence of the proposed algorithms. These adaptive algorithms are tested on many benchmark problems, including the ZDT, DTLZ, WFG, and MaF test suites. The experimental results illustrate that the three proposed adaptive algorithms perform better on most benchmark problems.


Introduction
In fields such as industrial production and scientific research, many practical problems are formulated as multiobjective optimization problems. There remain many challenges in MOPs, which means that there is still room for improvement. An MOP, the main object of study in this paper, is formulated as follows:

minimize F(x) = (f_1(x), f_2(x), ..., f_m(x)), subject to x = (x_1, ..., x_n) ∈ Ω,

where (x_1, ..., x_n) is a decision vector from the search space Ω (n is the number of decision variables) and f_1(x), ..., f_m(x) are m objective functions. Because these objectives conflict with one another, the algorithm produces not a single optimal solution, as in single-objective optimization, but a group of solutions that balance the m objective functions f_1(x), ..., f_m(x): any improvement in one objective impairs at least one other objective. Such a group of solutions is the Pareto optimal set (PS). The image of the PS in the objective space is defined as the Pareto optimal front (PF). Decision-makers can select suitable solutions from the PF [1]. The effectiveness of multiobjective evolutionary algorithms (MOEAs) in solving MOPs has been well demonstrated.
Many real-world problems, which contain more than three objectives [19][20][21][22], are usually named many-objective optimization problems (MaOPs). Compared with MOPs, MaOPs usually have more complex PFs and place higher performance requirements on algorithms. Because of the loss of selection pressure [23], traditional MOEAs degrade when solving MaOPs [24]. For decomposition-based MOEAs, specifying a set of weight vectors in a high-dimensional objective space is difficult, and their performance depends heavily on the consistency between the weight vectors and the shape of the PF; for Pareto-dominance-based algorithms, it is difficult to provide efficient selection pressure toward the PF when dealing with a large number of obtained solutions [18]. As for indicator-based algorithms, they usually need a lot of computational resources. To handle these problems, many many-objective evolutionary algorithms (MaOEAs) have been proposed for solving MaOPs in the past few decades. Depending on the strategy for handling convergence enhancement and diversity maintenance, they can be generally divided into three classes [25]. The first class involves decomposition-based algorithms. MOEA/DD [26] combines dominance- and decomposition-based strategies to solve MaOPs. MOEA/D-CRU used a chain-reaction solution update strategy to improve the diversity of the solutions. In MOEA/D-PaS [27], a Pareto adaptive scalarizing method was proposed to approximate the optimal value. In MOEA/D-LWS [14], a weighted sum method was applied in a local manner. The second class is the Pareto-based algorithms. Reference [28] proposed an ensemble fitness ranking method to balance convergence and diversity in solving MaOPs. In [29], a shift-based density estimation method in SPEA2-SDE was proposed to reduce the loss of selection pressure. SPEA/R used a reference direction-based density estimator to solve MOPs and MaOPs. The third category involves indicator-based algorithms.
In [30][31][32], several methods were proposed to calculate HV more efficiently. For MaOPs with many objectives, other performance indicators, R2 [33,34], Two_Arch2 [35], and SRA [36], were proposed. In recent years, many new algorithms have been proposed to solve MaOPs. Liang [37] proposed a two-round selection strategy to generate good solutions that balance population diversity and convergence. Ma et al. [38] designed an adaptive localized decision variable analysis approach to solve MaOPs. A bottleneck objective learning strategy was proposed by Liu et al. to balance diversity and convergence [39]. Zhang et al. proposed the DECAL algorithm to increase the diversity of the population when solving unconstrained MaOPs [40]. Ma et al. [41] proposed an adaptive reference vector reinforcement learning approach for decomposition-based algorithms applied to industrial copper burdening optimization. An orthogonal learning framework was proposed by Ma et al. [42] to improve the learning mechanism of brain storm optimization for solving complex problems.
In this study, we design three strategies, linear variation, power-function transformation, and exponential transformation, to adjust the crossover probability and amplification factor in DE adaptively, and we incorporate these three adaptive operators into MOEA/D-DE to solve MOPs and into MOEA/D-PaS to solve MaOPs. We run these algorithms on the ZDT, DTLZ, WFG, and MaF test functions.
The experimental results illustrate that the proposed methods have advantages on most of the test functions.
The rest of this paper is organized as follows: we expound some basic knowledge in Section 2 and elaborate on the three adaptive algorithms in Section 3. Experimental studies and result analyses are detailed in Section 4. The conclusions of this paper and some future works are presented in Section 5.

Basic Definitions.
There are some basic definitions in multiobjective optimization described as follows.

The Tchebycheff decomposition used in MOEA/D is

g^te(x | λ, z*) = max_{1 ≤ i ≤ m} λ_i |f_i(x) − z_i*|,

where z* = (z_1*, ..., z_m*) represents the ideal point, whose i-th component is the minimum value of the i-th objective. More details can be found in [3,15].

Differential Evolutionary Algorithms. Kenneth Price and Rainer Storn proposed a variety of variants of the differential evolution (DE) algorithm [44,45]. The variants are named in the form DE/X/Y/Z. X denotes the selection of the base vector (the individual to be mutated) in the mutation operation: "rand" means an individual chosen at random from the population, and "best" means the individual with the best fitness. Y is the number of difference vectors. Z stands for the crossover scheme, and the binomial experiment denoted "bin" is usually used for the crossover operation.
Some frequently used variants are described as follows:

DE/rand/1: v = x_{r1} + F · (x_{r2} − x_{r3}),
DE/best/1: v = x_{best} + F · (x_{r1} − x_{r2}),
DE/rand/2: v = x_{r1} + F · (x_{r2} − x_{r3}) + F · (x_{r4} − x_{r5}),
DE/best/2: v = x_{best} + F · (x_{r1} − x_{r2}) + F · (x_{r3} − x_{r4}),

where r_1, r_2, r_3, r_4, and r_5 are randomly selected distinct integers from the set {1, 2, . . ., N} and x_{best} is the individual with the best fitness. F is an amplification factor, which scales the difference vector.

DE in MOEA/D. Li and Zhang [15] replaced the SBX [46] operator in MOEA/D with the DE/rand/1/bin operator and proposed MOEA/D-DE. The algorithm uses three randomly selected individuals, r_1, r_2, and r_3, to generate a new solution from the neighborhood P:

y_k = x_{r1,k} + F · (x_{r2,k} − x_{r3,k}) if rand < CR or k = k_rand; otherwise y_k = x_{r1,k},

where CR is the parameter that controls the crossover rate, F represents the amplification factor, and rand is a random number between 0 and 1. The polynomial mutation in DE is described as follows:

y_k' = y_k + δ_k · (b_k − a_k) with probability p_m; otherwise y_k' = y_k,
δ_k = (2u)^{1/(1+ω)} − 1 if u < 0.5; δ_k = 1 − (2(1 − u))^{1/(1+ω)} otherwise,

where the distribution index ω and the mutation probability p_m are the two parameters of the operator, u is a uniform random number in [0, 1], a_k represents the lower boundary, and b_k is the upper boundary of the k-th decision variable.
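To make the operator concrete, the following is a minimal NumPy sketch of DE/rand/1/bin followed by polynomial mutation, assuming real-coded decision vectors; the function names and the seeded generator are illustrative, not from the original paper.

```python
import numpy as np

def de_rand_1_bin(x_r1, x_r2, x_r3, F=0.5, CR=1.0, rng=None):
    """DE/rand/1/bin: mutate x_r1 with the scaled difference of x_r2 and x_r3,
    then apply binomial crossover against x_r1 with rate CR."""
    rng = rng or np.random.default_rng()
    n = len(x_r1)
    v = x_r1 + F * (x_r2 - x_r3)          # mutant vector
    j_rand = rng.integers(n)               # guarantee at least one mutated gene
    mask = rng.random(n) < CR
    mask[j_rand] = True
    return np.where(mask, v, x_r1)

def polynomial_mutation(y, a, b, p_m=None, eta=20, rng=None):
    """Polynomial mutation with distribution index eta; each gene mutates with
    probability p_m and is clipped to its bounds [a_k, b_k]."""
    rng = rng or np.random.default_rng()
    n = len(y)
    p_m = p_m if p_m is not None else 1.0 / n
    y = y.copy()
    for k in range(n):
        if rng.random() < p_m:
            u = rng.random()
            if u < 0.5:
                delta = (2 * u) ** (1 / (1 + eta)) - 1
            else:
                delta = 1 - (2 * (1 - u)) ** (1 / (1 + eta))
            y[k] = np.clip(y[k] + delta * (b[k] - a[k]), a[k], b[k])
    return y
```

With CR = 1 the child is exactly the mutant vector x_r1 + F(x_r2 − x_r3), which makes the operator easy to check in isolation.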

MOEA/D-PaS.
In the last few decades, many MOEAs have demonstrated their effectiveness in solving MOPs and MaOPs. Among these MOEAs, decomposition-based algorithms use evenly distributed weight vectors to maintain population diversity [37]. MOEA/D-PaS was proposed by Wang et al. [27], using a Pareto adaptive scalarizing method to maximize the search ability of the algorithm and enhance its robustness to the PF shape. In MOEA/D, a variety of scalarizing methods can be used to decompose an MOP. A weighted L_p scalarizing method can be described as follows:

g(x | λ, p) = ( Σ_{i=1}^{m} λ_i |f_i(x) − z_i*|^p )^{1/p}.

When p = 1, the above formula reduces to the weighted sum method, and when p = ∞, it becomes the Tchebycheff method [27].
Different values of p affect the search speed in the objective space. MOEA/D-PaS maintains a set of p values and selects a suitable scalarizing method to find the optimal solutions. More details can be found in [27]. The framework of the adaptive scalarizing method is described as follows.
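The weighted L_p scalarizing function above can be sketched directly; this is an illustrative helper, not the full PaS selection procedure, and the function name is an assumption.

```python
import numpy as np

def weighted_lp_scalarize(f, lam, z_star, p):
    """Weighted L_p scalarizing value of objective vector f under weight lam
    and ideal point z_star; p = 1 gives the weighted sum, p = inf the
    Tchebycheff method. Smaller values are better (minimization)."""
    d = np.asarray(lam) * np.abs(np.asarray(f) - np.asarray(z_star))
    if np.isinf(p):
        return d.max()
    return (d ** p).sum() ** (1.0 / p)
```

MOEA/D-PaS evaluates such functions over a set of p values (e.g. {1, 2, ..., ∞}) and picks a suitable one per subproblem; only the single-evaluation step is shown here.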

Adaptive Strategies.
Comparisons between MOEA/D-DE and other algorithms show that MOEA/D-DE has lower time complexity and a fast convergence speed [47]. However, MOEA/D-DE has weaknesses such as rough population dispersion and inefficient local search capability. Besides, the amplification factor and the crossover probability are fixed in MOEA/D-DE, which can lead to a low convergence rate and a higher likelihood of falling into local optima. To overcome these shortcomings, we design three adaptive operators in MOEA/D-DE.
In our approach, the crossover probability (CR) and the amplification factor (F) evolve as the iterations increase. The traditional differential evolution (DE) algorithm keeps F and CR fixed, which converges slowly and has difficulty finding the global optimum as a result of premature convergence [47]. Therefore, we design three adaptive operators to adjust the values of F and CR dynamically.
The first method uses linear variation and is called MOEA/D-DE-LAD:

CR = CR_0 + η · (gen / maxGen), F = F_0 − η · (gen / maxGen).

The second method uses power-function transformation and is called MOEA/D-DE-PAD:

CR = CR_0 + η · (gen / maxGen)^2, F = F_0 − η · (gen / maxGen)^2. (9)

The third method uses exponential transformation and is called MOEA/D-DE-EAD:

CR = η^(1 − gen/maxGen), F = η^(gen/maxGen).
Compared with traditional DE, these three methods use different strategies to adjust the values of CR and F dynamically, so CR and F change with the generations. η is a user-defined parameter varied from 0.1 to 0.9 in steps of 0.1. We will discuss how the value of η influences the three proposed algorithms in the following text. Algorithm 2 describes how the LAD adaptive strategy generates a new solution. When using the PAD strategy, we replace lines 2 and 3 with CR = CR_0 + η · (gen/maxGen)^2 and F = F_0 − η · (gen/maxGen)^2. Likewise, when using the EAD strategy, we replace lines 2 and 3 with CR = η^(1 − gen/maxGen) and F = η^(gen/maxGen). The influence of η on the proposed algorithms will be discussed in detail in the following text. When η = 0.5, the values of CR and F under the three strategies are described as follows.
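The three schedules above can be sketched in a few lines; this is a minimal illustration of the formulas in the text, with the default initial values CR_0 = 1.0 and F_0 = 0.5 assumed rather than taken from the paper's parameter table.

```python
def adaptive_cr_f(strategy, gen, max_gen, cr0=1.0, f0=0.5, eta=0.1):
    """Return (CR, F) for the current generation under the three adaptive
    strategies described in the text: linear (LAD), power-function (PAD),
    and exponential (EAD)."""
    t = gen / max_gen                     # normalized generation in [0, 1]
    if strategy == "LAD":                 # linear variation
        return cr0 + eta * t, f0 - eta * t
    if strategy == "PAD":                 # power-function transformation
        return cr0 + eta * t ** 2, f0 - eta * t ** 2
    if strategy == "EAD":                 # exponential transformation
        return eta ** (1 - t), eta ** t
    raise ValueError(f"unknown strategy: {strategy}")
```

For example, under LAD at gen = 0 the pair is simply (CR_0, F_0), and under EAD the pair moves from (η, 1) toward (1, η) as the run progresses.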
As shown in the figure, the three strategies produce different CR and F schedules over the generations. The adaptive scalarizing method in MOEA/D-PaS can select the suitable p value to enhance search ability, and the adaptive DE operators can generate better solutions as the generations proceed. In the following text, we will discuss the advantages of the adaptive operators through the experiments.

For MOPs.
To identify the effectiveness of these operators in solving MOPs, we first run the algorithms on 2- and 3-objective test problems. The details are as follows.

Benchmark Problems.
In this section, to test the performance of the proposed algorithms in solving multiobjective optimization problems, the ZDT [48] test suite (ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6) and the DTLZ [49] test suite (DTLZ1, DTLZ2, DTLZ3, and DTLZ4) are adopted in the experiments.

Parameter Settings.
The settings of the parameters of the four algorithms are described in Table 1.
Each algorithm is run 30 times on each test problem; each run uses 30,000 function evaluations for the 2-objective problems and 60,000 for the 3-objective problems.
ALGORITHM 1: Adaptive scalarizing method (fragment).
(1) if rand > gen/maxGen
(2) for each p_k in P do
...
Set p to be the smallest p_k with z = z_k
(7) end

F and CR are fixed parameters in MOEA/D-DE. F_0 is the initial value of F, and CR_0 is the initial value of CR. The probability of mutation and its corresponding distribution index are set as p_m = 1/n and η_m = 20.
All the experiments are run on a computer with an AMD Ryzen 5-4600H CPU (3.0 GHz), 16 GB RAM, and Windows 10.

Performance Metrics.
(1) IGD Metric. The inverted generational distance (IGD) [50] is adopted to evaluate the quality of a solution set P in the experiments of this paper. Assuming that P* is a set of points sampled from the true Pareto front and P is the front found by an algorithm, the distance between P* and P is defined as

IGD(P*, P) = ( Σ_{v ∈ P*} d(v, P) ) / |P*|,

where d(v, P) is the minimum distance between the point v and the set P. Algorithms whose solutions have smaller IGD values are considered to perform better.
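The IGD formula translates directly into a short NumPy routine; this is a straightforward sketch of the standard definition, with a brute-force pairwise distance matrix (fine for the set sizes used in such experiments).

```python
import numpy as np

def igd(ref_front, approx_front):
    """Inverted generational distance: mean over reference points v in P* of
    the minimum Euclidean distance from v to the obtained set P.
    Smaller is better."""
    P_star = np.asarray(ref_front, dtype=float)
    P = np.asarray(approx_front, dtype=float)
    # pairwise distances, shape (|P*|, |P|)
    dists = np.linalg.norm(P_star[:, None, :] - P[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

An obtained set identical to the reference set yields IGD = 0; missing regions of the true front increase the average minimum distance.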
(2) HV Metric. The HV metric evaluates MOEA performance by calculating the hypervolume of the region of objective space between the nondominated solution set and a reference point. It can be defined as

HV(S) = λ( ∪_{x ∈ S} [f_1(x), z_1^r] × · · · × [f_m(x), z_m^r] ),

where λ stands for the Lebesgue measure, which is used to measure volume, and z^r = (z_1^r, ..., z_m^r) is the reference point.

ALGORITHM 2: Adaptive DE operator.
Input: current generation gen, maximum generation maxGen, three individuals r_1, r_2, r_3.
Output: the new solution y′.
(1) Using the LAD strategy:
(2) CR = CR_0 + η · (gen/maxGen)
(3) F = F_0 − η · (gen/maxGen)
(4) Calculate the values of CR and F according to the chosen adaptive strategy
...

We first set different values of η to see how IGD and HV change, and then we run the proposed algorithms with the most suitable η on more test functions against MOEA/D-DE to see which performs better. The suitable value of η is the one for which most algorithms obtain their best IGD and HV values on these test functions. More details can be found as follows.
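For the 2-objective case the Lebesgue measure above reduces to a sum of rectangles, which can be sketched as follows; this simple sweep assumes a minimization problem and a nondominated input set, and is not the general m-objective HV computation used for the higher-dimensional experiments.

```python
import numpy as np

def hv_2d(front, ref_point):
    """Hypervolume of a 2-objective nondominated set (minimization) with
    respect to a reference point: sweep left to right over f1 and sum the
    rectangles each point adds below the previous f2 level."""
    pts = np.asarray(front, dtype=float)
    pts = pts[np.argsort(pts[:, 0])]      # sort by f1 ascending
    hv, prev_f2 = 0.0, ref_point[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                  # skip dominated points
            hv += (ref_point[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For the front {(0, 1), (1, 0)} with reference point (2, 2), the dominated region is the 2x1 rectangle from (0, 1) plus the 1x1 rectangle from (1, 0), giving HV = 3.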
To explore the influence of the parameter η on the values of IGD and HV, we vary η from 0.1 to 0.9. From Figure 2, we can see the following: (1) On ZDT1 and ZDT2, the IGD values of LAD and PAD increase as η increases; however, LAD increases more sharply than PAD. On these two test problems, PAD performs best among the three algorithms. When η = 0.1, the IGD values of LAD and PAD are at their minimum, while the IGD values of EAD are minimal when η = 0.9. For these 2-objective problems, PAD keeps a lower value than the other two algorithms.
(2) On the DTLZ1 and DTLZ2 test problems, there is no obvious relationship between IGD and η. When η = 0.7, EAD performs best compared with LAD and PAD. For 3-objective problems, the behavior is much more complex than for 2-objective problems. (3) The IGD values of EAD are complicated and changeable, with no obvious pattern compared with LAD and PAD. Moreover, on the 2-objective test problems, LAD and PAD change more regularly than on the 3-objective problems.

ALGORITHM 3: Adaptive strategies in MOEA/D.
Input: the number of individuals N, the number of weight vectors in the neighborhood T, and the maximum generation maxGen.
Output: optimal solutions.
(1) Initialization:
(2) Generate N weight vectors λ_1, λ_2, . . ., λ_N evenly from the objective space
(3) Calculate the distances between every pair of weight vectors; on the basis of the distances, find the T closest weight vectors of each vector
...
Randomly select three solutions x_{r1}, x_{r2}, x_{r3} from the mating pool Q
(13) Generate a new solution y′ by the adaptive DE operators in Algorithm 2
(14) If a dimension of y′ is beyond the search boundary, replace it with a value chosen at random within the boundary
(15) Update the ideal point z*
(16) Update the solutions: if g(y′ | λ_j, z*) ≤ g(x_j | λ_j, z*), then set x_j = y′
(17) end
(18) Find the nondominated solutions of every generation
(19) gen ← gen + 1
(20) end
(21) Output the nondominated solutions as the optimal solutions
From Figure 3 and the other figures shown in this paper, we can see that the values of IGD and HV change dynamically with η. The algorithms perform better on these test functions with η = 0.1, so we set η = 0.1 in the proposed algorithms. We run these algorithms and MOEA/D-DE on the ZDT test problems. The performance of the four algorithms on the 2-objective test functions is shown in the figure. In the following text, the true PF is the real Pareto front of the test functions. Through comparison with the true PF, we can identify which algorithm performs better among the proposed algorithms.
As shown in Figure 4, we enlarge some areas to explore which algorithm performs better by comparing the distance between the true PF and the PF obtained by each algorithm: the smaller the distance, the better the performance. In the enlarged area, the horizontal axis represents the value of f1, and the vertical axis represents the value of f2. Compared with MOEA/D-DE, the proposed algorithms perform better on these test functions. On ZDT1, PAD converges to the PF almost completely. LAD has the best convergence performance on ZDT2. On ZDT3, the proposed algorithms perform similarly. Besides, EAD performs best among the four algorithms on ZDT4. The PFs of the three proposed algorithms are much closer to the real PF than that of MOEA/D-DE. On these 2-objective test problems, the adaptive strategies bring the PFs of the algorithms much closer to the real PF.
For the 2-objective problems, the figures comparing the true PF with the PFs of the four algorithms make it easy to distinguish which algorithm is better. However, for 3-objective problems, it is difficult to recognize which performs better from a comparison of the four algorithms in one figure. Besides, the performance metrics can analyze the performance of an algorithm from various aspects. So we select IGD and HV as the criteria to judge which algorithm performs better on these test functions.
To further explore the performance of the four algorithms on the test problems, we report the statistics of IGD and HV in the tables.
It can be seen from Table 2 and the accompanying table that the three proposed algorithms have advantages on most of the test functions. MOEA/D-DE only has the best values of IGD and HV on ZDT6; however, LAD and PAD are close to it there.

Discrete Dynamics in Nature and Society
From the results shown in the tables, PAD has advantages over LAD and EAD, and these proposed adaptive strategies are effective on these test problems.

For MaOPs.
For MaOPs, we run these algorithms on some 4-, 7-, and 10-objective test functions to explore which method can help MOEA/D-PaS get more nondominated solutions.

Benchmark Functions and Performance Measures.
We use two test suites, the WFG test functions WFG1–WFG8 [51] and the MaF test problems MaF1–MaF8 [52], to test the performance of the algorithms. The WFG suite is a classic set of test problems with differently scaled objectives for MaOPs and is widely used [18,25,38,40,53–55]. The WFG suite covers different PF shapes, including nonseparable, disconnected, and biased PFs. MaF is a newer test suite with complicated PFs and is more challenging for MOEAs. Each test function is tested in its 4-, 7-, and 10-objective instances. The settings of the other parameters can be found in the table.
The hypervolume (HV) has desirable theoretical properties [56–58], so we use HV values to assess the performance of the proposed algorithms. The HV value reflects the quality of a solution set by calculating the volume of the region of objective space bounded between the nondominated solution set and a reference point. A larger HV value means that the obtained solution set is closer to the true PF.

Parameter Settings.
We run the algorithms on the WFG and MaF test suites to explore which adaptive DE operators perform better on MaOPs. The value of η is 0.1. For a fair comparison, the population size is set to 200 for 4-objective problems, 240 for 7-objective problems, and 280 for 10-objective problems. The numbers of function evaluations are set to 100,000, 168,000, and 196,000 for the 4-, 7-, and 10-objective test functions, respectively. All the algorithms are implemented in PlatEMO [59].
To make the experimental results more convincing, every algorithm is run thirty times on each test function independently. All the experimental results are shown in the following tables and figures. Table 4 shows that MOEA/D-PaS has the best performance only on WFG3 and WFG8. For WFG1, with a convex, biased, and mixed PF, PaS-PAD has the best performance in all cases. Regarding WFG2, with a disconnected PF, PaS-EAD is the best in four objectives, the three proposed algorithms perform closely in seven objectives, and in ten objectives, PaS-LAD and PaS-PAD perform better than the other two algorithms. For the WFG3 test problem with a linear PF, PaS-LAD has the best performance in four and seven objectives, while MOEA/D-PaS obtains the best result in ten objectives. Considering WFG4, with a convex PF, PaS-PAD shows an advantage over the other three algorithms. PaS-PAD also shows its advantages on WFG5 in four and seven objectives, while in ten objectives, PaS-EAD performs best. PaS-EAD also has the best performance on WFG6 in four and seven objectives, while PaS-LAD and PaS-PAD perform better in ten objectives. Concerning WFG7, with a convex PF, PaS-PAD performs best in seven and ten objectives, and in four objectives, PaS-PAD performs closely to PaS-LAD. On the WFG8 test function, MOEA/D-PaS performs best in ten objectives, while PaS-PAD has superior performance in four and seven objectives.

Experiments Analysis.
To better visualize the distribution of the final population, we plot the final population of the four algorithms on the WFG1 test problem.
It can be seen from Figure 5 that, on the WFG1 test problem with flat bias and a mixed structure of the PF, the proposed algorithms with adaptive operators have better convergence and diversity compared to MOEA/D-PaS.
Although MOEA/D-PaS can still converge to the PF, it loses the diversity of the solutions. Among the proposed algorithms, PaS-PAD has better-distributed solutions than PaS-LAD and PaS-EAD.
On the WFG test set, PaS-PAD has the better performance overall. MOEA/D-PaS only has the best results in 2 out of 8 cases in ten objectives. PaS-LAD and PaS-EAD also have advantages on some test functions compared with MOEA/D-PaS.
To further explore which adaptive operator outperforms its competitors, we also run the algorithms on the newer MaF benchmark suite. The HV values on MaF1–MaF8 are shown in the table. Table 5 collects the HV comparison results of the four algorithms on MaF1–MaF8 with 4, 7, and 10 objectives. PaS-PAD has the best performance in 9 out of 24 cases; MOEA/D-PaS, PaS-EAD, and PaS-LAD perform best in 6, 5, and 4 out of 24 cases, respectively. Compared with the other three algorithms, PaS-PAD has superior performance. The PF of MaF1 is obtained by inverting the DTLZ1 PF [51]. For four and ten objectives, MOEA/D-PaS performs better than the proposed algorithms, while in seven objectives, PaS-EAD performs better. MaF2 is derived from DTLZ2 by increasing the difficulty of convergence: to reach the real PF, all the objectives of MaF2 must be optimized simultaneously. On MaF2, PaS-EAD has superior performance in four and seven objectives, and PaS-PAD has the best performance in ten objectives. Regarding MaF3, with a convex PF and many local PFs, PaS-PAD shows an advantage over its competitors in four, seven, and ten objectives. The PF of MaF4 is obtained by inverting the DTLZ3 PF shape. MOEA/D-PaS performs best on MaF4 in seven and ten objectives, while in four objectives, PaS-PAD and PaS-LAD perform better than the other two algorithms. On MaF5, PaS-PAD has superior performance in four and ten objectives, while in seven objectives, PaS-LAD performs best. Since MaF6 has a degenerate PF, PaS-EAD performs better than the other algorithms in seven and ten objectives, and the four algorithms perform closely in four objectives. For MaF7, with a disconnected PF, PaS-LAD has advantageous performance in four and seven objectives, while in ten objectives, PaS-LAD performs closely to MOEA/D-PaS. On MaF8, the proposed algorithms obtain HV values larger than 0, while the HV values of MOEA/D-PaS are 0 in four, seven, and ten objectives. No algorithm could cover the whole PF, but as the results above show, the adaptive operators still help the algorithms obtain some solutions. Besides, we plot the final population of the four algorithms on the MaF3 test problem. From Figure 6, we can see that, on the MaF3 test problem with a convex PF, MOEA/D-PaS and PaS-EAD cannot obtain the solutions, whereas PaS-LAD and PaS-PAD achieve better-distributed solutions than the other two algorithms. PaS-PAD has the best convergence and diversity among the algorithms.
On the MaF test suite, PaS-PAD still performs better than the other algorithms, but the other two proposed algorithms do not perform as well as MOEA/D-PaS overall. The adaptive operators can help the algorithms obtain more solutions on some test problems, and the PAD method is better than the other two methods overall. Table 6 reports the runtime of every algorithm. It can be seen from the table that the adaptive strategies cost extra computing resources to obtain the optimal solutions: MOEA/D-PaS is fastest in 14 out of 24 cases, while PaS-LAD, PaS-PAD, and PaS-EAD are fastest in 2, 5, and 2 out of 24 cases, respectively. So the runtimes of the adaptive strategies are longer than that of MOEA/D-PaS. The values of CR and F in the three proposed strategies change with the generations, and recalculating them in every generation consumes extra computing resources. But from the HV values shown in Tables 3 and 4, we know that it is worth spending some extra computing resources to get better results.

Conclusion
MOEA/D and MOEA/D-DE have been demonstrated to be effective and useful in solving MOPs. However, the parameters are fixed, which would affect the convergence of the algorithm.
This paper proposes three algorithms, based on MOEA/D-DE, that use different self-adaptive DE operators to automatically adjust the settings of the parameters CR and F for different problems. The experiments demonstrate that the adaptive strategies are effective on the 2-objective and 3-objective test functions. Moreover, the tables of IGD and HV values show that PAD performs better than LAD and EAD. For MaOPs, we incorporate the adaptive DE methods into MOEA/D-PaS and run the algorithms on the WFG and MaF test suites with 4-, 7-, and 10-objective problems. According to the HV values, on the WFG problems, the proposed algorithms have clear advantages over MOEA/D-PaS, and PaS-PAD has the best performance among the adaptive methods; PaS-LAD and PaS-EAD do not perform as well as PaS-PAD but still perform better than MOEA/D-PaS. Besides, on the MaF test functions, PaS-PAD still has advantages over the other algorithms. In conclusion, for solving MOPs and MaOPs, the adaptive methods help the algorithms obtain solutions that converge much closer to the real PFs, and among the adaptive methods, the PAD method has the best performance.
However, some problems remain open; for example, the effectiveness of the adaptive methods on many real-world problems has yet to be demonstrated, so we need to run these adaptive algorithms on more complex problems to verify their effectiveness.
For further study, firstly, we need to use these three adaptive strategies to run more complex test functions to identify if the PAD method still has advantages over LAD and EAD. Secondly, we would like to use niching technologies in population to accelerate the pace of converging to the optimal solution in the three adaptive algorithms.
Thirdly, we would like to apply the proposed algorithms to practical problems such as community detection and recommendation systems [60].

Data Availability

The data used to support the findings of this study are included within the article.
Disclosure

The work on multiobjective optimization problems has been accepted at the Second International Conference, NCAA 2021 [60]. In this paper, the authors develop new algorithms to solve many-objective optimization problems (MaOPs), which are not included in the conference paper: the authors incorporate the three adaptive strategies into MOEA/D-PaS to solve MaOPs and run the algorithms on 4-, 7-, and 10-objective test problems to verify the effectiveness of the adaptive strategies. The HV values on the WFG and MaF test suites demonstrate the advantages of the proposed strategies.

Conflicts of Interest
The authors declare no conflicts of interest.