A Knee Point-Driven Many-Objective Evolutionary Algorithm with Adaptive Switching Mechanism



Introduction
Multiobjective optimization problems (MOPs) are ubiquitous in the real world, such as wireless sensor networks [1], engineering design [2], workflow applications [3], and robot path planning [4]. In MOPs, the objectives usually conflict with each other. Generally, an MOP with M conflicting objectives can be defined as follows:

minimize F(x) = (f_1(x), f_2(x), ⋯, f_M(x)), subject to x ∈ Ω,

where x = (x_1, x_2, ⋯, x_n) is an n-dimensional decision vector in the decision space Ω and M is the number of objective functions. MOPs with more than three objectives (i.e., M > 3) are also referred to as many-objective optimization problems (MaOPs) [5,6].
Generally, many-objective evolutionary algorithms (MaOEAs) still adopt Pareto dominance as the criterion for selecting solutions. However, in high-dimensional spaces, MaOEAs may face two significant challenges due to the curse of dimensionality. The first challenge is the phenomenon of dominance resistance: the number of nondominated solutions in the population rapidly increases as the number of objectives increases, which makes solutions incomparable [7,8]. The resulting solutions, which may survive for many generations in the population, are called dominance resistance solutions (DRSs). The presence of DRSs further expands the search space, which causes convergence deterioration in the local search space. The second challenge is the contradiction between maintaining population convergence and diversity [9,10].
To address these challenges, many novel MaOEAs have been proposed [11,12]. These methods can be broadly classified into the following four categories.
The first category involves redefining or relaxing the Pareto dominance relationship. Examples of improved dominance relationships include angle dominance [13], grid dominance [14], the scalarization-based dominance relationship [15], the RPS-dominance relationship [16], and the reinforced dominance relationship [17]. NSGA-II+AD [13] proposes an angle dominance criterion that is insensitive to parameters; the involved angle is formed by the solution and each objective axis. This dominance relationship cleverly expands the dominance region of a solution, which increases the convergence pressure. In CDR-MOEA [18], a dominance relationship based on a convergence metric (CDR) is proposed, which combines the convergence metric with an adaptive parameter λ based on cosine similarity. This dominance relationship effectively balances the convergence and diversity of the nondominated solution set. MOEA/D-AED [19] uses adaptive epsilon dominance to control the number of nondominated solutions, which effectively addresses problems with complex Pareto fronts (PFs).
The second category involves decomposition-based algorithms, whose core idea is to decompose an MOP into many single-objective subproblems. MOEA/D [20] is a typical decomposition-based algorithm that guides candidate solutions toward the Pareto front by defining a set of uniformly distributed weight vectors. However, this approach relies on the degree of match between the adopted reference vectors and the true Pareto front. To address problems with irregular PFs, various adaptive decomposition-based methods have emerged. For example, AMOEA/D [21] adopts an adaptive addition-first-and-deletion-second strategy to dynamically adjust the weight vectors. In addition, reference points are widely used in most decomposition-based algorithms, such as NSGA-III [22], MOEA/DD [23], and SPEA/R [24]. SPEA/R proposes a density estimation technique based on reference vectors to maintain population diversity. NSGA-III, as an extension of NSGA-II [25], combines nondominated sorting with reference vectors. MOEA/DD, as a variant of MOEA/D, introduces Pareto dominance and adaptively associates solutions with subregions. In summary, decomposition-based methods are suitable for solving MaOPs, but appropriate weight vectors and scalar functions must inevitably be preset to guarantee their performance.
The third category is the indicator-based algorithms. Unlike the Pareto dominance-based and decomposition-based algorithms, these methods usually evaluate the quality of solutions using predefined performance indicators. Some well-known indicators are used as criteria in environmental selection, including the inverted generational distance (IGD), hypervolume (HV), and generational distance (GD). Indicator-based methods can provide more selection pressure toward the Pareto front. Some recently popular indicator-based algorithms include HypE [26], R2HCA-EMOA [27], and MaOEA/IGD [28].
HypE adopts the Lebesgue measure to calculate the fitness values of solutions. To reduce computational complexity, R2HCA-EMOA uses the R2 indicator to approximate the value of HV; however, this method cannot effectively balance convergence and diversity. MaOEA/IGD proposes a novel nondominance comparison strategy based on the IGD indicator. This strategy compares solutions to reference points, which improves the quality of population mating.
The final category is the knee point-driven algorithms, which aim to improve the performance of MaOEAs by identifying knee points that contribute to a larger HV [29]. These algorithms employ strategies such as the reflex angle [30], extended angle dominance [8], niching-based methods [31], distance-based strategies [32], and offspring generation strategies [33] to identify knee points, which increases selection pressure and improves convergence. KnEA [34] identifies knee points based on the distance from the solutions to a hyperplane, and the number of knee points involved in this strategy is controlled by a threshold parameter T. However, the performance of KnEA is greatly influenced by the variation of the parameter T. Pi-MOEA [35] uses the average ranking method to identify pivot solutions within a neighborhood, which is insensitive to parameters. k-NSGAII [36] applies a distance-based metric to identify convex and edge knee points. To improve the accuracy of the knee-detection method, k-NSGAII designs a specific parameter η to control the width of the knee region. However, these strategies emphasize convergence too much and neglect the importance of diversity. In contrast, KnMAPIO [29] designs a novel environmental selection based on knee point-oriented dominance, which uses extreme and boundary points to select solutions from the critical layer. This selection strategy effectively balances convergence and diversity.
In the past few years, many novel strategies for addressing MaOPs have been proposed. However, most MaOEAs still cannot meet the performance requirements. There are two main challenges.
First, without any prior knowledge, the diversity-first-convergence-second principle adopted by most Pareto dominance-based MaOEAs leads to the loss of population convergence.
Second, a strategy that overly emphasizes diversity may select many DRSs into the next-generation population, which leads to the population gradually moving away from the PF.
To address the above challenges, a knee point-driven many-objective evolutionary algorithm with an adaptive switching mechanism (KPEA) is proposed in this paper, which aims to achieve a balance between convergence and diversity. Specifically, to increase the probability of generating excellent offspring, KPEA incorporates knee points and weighted distances into its mating strategy. During environmental selection, an adaptive switching mechanism between angle-based selection and penalty is designed to balance convergence and diversity. The main contributions of this study can be summarized as follows: (1) Unlike methods that indirectly identify DRSs based on extreme solutions, an interquartile range method based on population convergence is used to detect and eliminate DRSs in the population, which enhances the quality of population mating and ensures even scaling of each objective. (2) Compared with selection mechanisms based on convergence-first or diversity-first principles, the adaptive switching mechanism between angle-based selection and penalty in the environmental selection process can maintain a balance between convergence and diversity instead of excessively emphasizing either of them. The remainder of this paper is organized as follows. The related works and the motivation of this study are presented in Section 2. A detailed description of the framework and core strategies of KPEA is presented in Section 3. The experimental setup is presented in Section 4. The experimental results and related analysis are presented in Section 5. Further investigation and discussion on KPEA are provided in Section 6. Finally, the conclusions of this paper and future work are given in Section 7.

Related Work and Motivation
In this section, the relevant research on dominance resistance solutions (DRSs) and angle-based selection strategies is reviewed, and the research motivation is briefly elaborated.

Impact of DRSs.
In general, Pareto-based algorithms regard nondominated solutions as suitable candidates for the next generation. When some solutions are barely dominated but clearly inferior to others, they will not be removed by Pareto dominance-based strategies [7,37,38]. This phenomenon is called dominance resistance, and such solutions are called DRSs.
Regarding the specific definition of DRSs, researchers have provided various descriptions. In the literature [7], "DRSs are extremely inferior to others in at least one objective, and therefore, they are apart from Pareto optimums." DRSs were rephrased in the literature [38] as "These are points in the nondominated set that are extremely poor in at least one of the objectives, while being extremely good in some others." Since DRSs are located on certain boundaries of the objective space, most may have the best density values, which causes these solutions to always be reserved for the next generation [39]. Furthermore, the proportion of nondominated solutions in a population grows exponentially as the number of objectives increases [40]. Meanwhile, the number of DRSs may also increase with the number of objectives, which results in more intense dominance resistance [5]. Even worse, DRSs can seriously inflate the nadir point in specific axial directions, leading to uneven scaling of the objectives [8].
Figure 1 illustrates the situation of DRSs in the population in biobjective and triobjective spaces. The solid purple dots represent the nondominated solutions in the search space. In Figure 1, each axis shows the fitness value on each objective, and the nondominated solutions within the red dashed box are DRSs. As the number of objectives increases, the presence of DRSs further expands the search space, which causes convergence deterioration in the local search space. Some researchers have tried to identify the extreme solutions directly with specific methods. In many-objective particle swarm optimization (MaOPSO) [41], the solutions in the external archive that minimize the achievement scalarizing function (ASF) [42] are considered the extreme solutions. In the multiphase balance of diversity and convergence in many-objective optimization (B-NSGA-III) [43], the extreme-LS operator is used to identify the extreme solutions in the objective directions. Nevertheless, because MaOPSO and B-NSGA-III are both Pareto-based algorithms, their effectiveness is significantly impacted by DRSs. Instead of directly determining the precise extreme solutions, some researchers are committed to deleting the DRSs in the current population to determine the extreme solutions indirectly.
Bhattacharjee et al. [38] propose a six-sigma-based method to eliminate DRSs. In this method, a solution is classified as a DRS when its value on an objective exceeds the average value plus six times the standard deviation of all nondominated solutions on that objective. However, this strategy still has some drawbacks. Xiang et al. [44] demonstrate experimentally that the method produces solutions with very poor convergence for some problems. Furthermore, Zhou et al. [45] propose a method to eliminate DRSs by modifying the current candidate extreme solutions. Unfortunately, when there are several DRSs on an objective axis, this strategy cannot eliminate all of them.
In summary, the elimination of DRSs is beneficial for finding the exact extreme solutions. As Bhattacharjee et al. stated in the literature [38], "the detection and elimination of DRSs is far from trivial, and further research is needed to address this issue." In this paper, an interquartile range (IQR) method is introduced to eliminate those nondominated solutions in the population whose convergence values exceed the upper boundary derived from the third quartile (Q3).

Angle-Based Selection Strategy.
Based on the embedding position of the angle selection criterion, angle-based evolutionary algorithms can be classified into three categories. The first category applies angle selection to environmental selection. The decomposition-based ranking and angle-based selection evolutionary algorithm (MOEA/D-SAS) [46] employs the angle information between solutions in the objective space to maintain population diversity. The vector angle-based evolutionary algorithm (VaEA) [47] uses the maximum-vector-angle-first strategy to select elite individuals. However, in certain cases, the selected solutions may have poor convergence. Thus, VaEA adopts the worse-elimination principle and conditionally uses alternative solutions to replace poorly converging ones. The scalar projection and perspective-based evolutionary algorithm (PAEA) [45] uses the minimum angle formed between a solution and a unit vector as a control threshold to maintain a balance between convergence and diversity. However, determining the threshold for the search direction in these strategies is usually an extremely complex problem.
The second category incorporates angle selection into the dominance relationship. Wang et al. [37] propose the generalized Pareto optimum (GPO), which uses a parameter ϕ_i, the extension angle on the ith objective, to redefine the dominance relationship. Yuan et al. [13] propose an angle dominance criterion that is insensitive to parameters; the involved angle is formed by the solution and each objective axis. This dominance relationship cleverly expands the dominance region of a solution, which increases the convergence pressure.
The third category considers angle selection in both environmental selection and the dominance relationship. He and Yen [48] propose a coordinated selection strategy to solve MaOPs. This strategy uses the ASF distance as the convergence metric and the angle formed between individuals as the diversity metric, both applied to environmental selection and mating selection. During the evolutionary process, this strategy can balance the convergence and diversity of the whole population. Li et al. [49] use an archive population to update the reference points and construct a distance scaling function based on the angle between individuals and reference points. This function is employed in mating selection, which improves the quality of offspring in the population.
Based on the above research, this paper presents a novel environmental selection strategy that uses the relationship between the population size and the number of nondominated solutions as the criterion for switching between two selection modes. The knee points determined by an adaptive strategy are introduced not only into mating selection but also into environmental selection.

Motivation of This Paper.
When dealing with many-objective optimization problems (MaOPs), dominance-based algorithms often face difficulties caused by DRSs and active diversity promotion [50,51]. Decomposition-based algorithms can avoid the problem of insufficient selection pressure but require a predefined set of weight vectors. However, uniformly distributed weight vectors cannot guarantee well-distributed solutions. In contrast, angle-based selection strategies perform well in enhancing population diversity, especially when dealing with MaOPs with complex Pareto fronts. The studies in Section 2.2 indicate that many algorithms using angle-based niche protection techniques seem effective, but there are still many unresolved problems.
The strategy of protecting prominent solutions proposed by the hyperplane-assisted evolutionary algorithm (hpaEA) [52] is not suitable for MaOPs with nonlinear Pareto fronts; on this type of problem, the strategy is unable to maintain the diversity of the population. The strategy proposed by VaEA overemphasizes diversity and lacks a reasonable design for the angle thresholds, which makes it difficult to achieve effective convergence on certain high-dimensional problems.
To address the above issues, this paper proposes a knee point-driven many-objective evolutionary algorithm with an adaptive switching mechanism (KPEA). It incorporates knee points and weighted distances into its mating strategy to increase the probability of generating excellent offspring. An interquartile range method is introduced to detect and eliminate DRSs in the population. During environmental selection, an adaptive switching mechanism between angle-based selection and penalty is designed to balance convergence and diversity.
Up to now, most selection mechanisms focus on either convergence or diversity optimization. Generally, without any prior knowledge, a single strategy cannot offer universal and acceptable performance on most MaOPs.
With this motivation, the adaptive switching mechanism between angle-based selection and penalty proposed in this paper can keep a balance between convergence and diversity. The selection mechanism first preserves the knee points identified by adaptive recognition and then selects individuals based on the maximum-vector-angle-first principle. The adaptive penalty mechanism selects the individuals with the best convergence one by one and then adaptively adjusts the penalty set based on the distribution of solutions, which achieves comprehensive coverage of the Pareto front. As mentioned above, KPEA not only eliminates the influence of DRSs throughout the entire evolutionary process but also achieves a balance between diversity and convergence.

Proposed Algorithm
This section provides a detailed description of the proposed KPEA. Firstly, the main framework of KPEA is given. Subsequently, the quartile detection method is used as a preprocessing tool to detect DRSs, which enhances the quality of population mating and ensures even scaling of each objective. In addition, in the mating selection, knee points and weighted distances are introduced as secondary selection criteria. Finally, an environmental selection strategy with an adaptive switching mechanism is proposed, which can adaptively switch according to the relationship between the population size and the number of nondominated solutions.
3.1. General Framework. A common Pareto-based MaOEA framework is employed in the proposed KPEA, as shown in Algorithm 1. First of all, an initial parent population P of size N is randomly generated. Then, after individuals are selected from P using the binary tournament selection strategy, an offspring population Q is generated using mutation (lines 5 and 6). Three metrics, namely, the dominance relation, the knee point criterion, and the weighted distance metric, are employed as the selection criteria for the binary tournament selection. If any DRSs are detected in the combined population S, a preprocessing mechanism based on the IQR is used to eliminate them (line 8). Next, nondominated sorting is performed on the preprocessed combined population P (line 9), and an adaptive strategy is introduced to identify the knee points of each nondominated front in P (line 10). Finally, with the adaptive switching mechanism, N individuals are selected from the population based on the relationship between the population size and the number of nondominated solutions. As a vital contribution of this study, the niche protection operation, different from that of VaEA [47], considers both the convergence and diversity of each solution in the population.
To clearly illustrate the framework of KPEA, the specific flow of KPEA is shown in Figure 2. Specifically, KPEA consists of five evolutionary parts: mating selection, preprocessing, nondominated sorting, identification of knee points, and environmental selection. During environmental selection, an adaptive switching mechanism between angle-based selection and penalty is designed to balance convergence and diversity. In addition, the switching conditions and the specific evolutionary process of this novel adaptive switching mechanism are visualized.

Preprocessing Operation.
Preprocessing is one of the most important steps in data mining, which aims at eliminating outliers [53]. In fact, during the optimization process of MaOPs, there are many DRSs (i.e., outliers) within the population. As discussed in Section 2, these DRSs have a negative impact on the quality of the final solutions. Thus, removing these DRSs is a crucial step to improve the performance of the algorithm.
In this paper, an interquartile range (IQR) method is introduced to detect and eliminate DRSs. The IQR is a descriptive statistic for identifying outliers in any dataset. Given an ordered one-dimensional dataset R, the lower quartile (Q1) and upper quartile (Q3) of the dataset are determined first, and then the IQR and the boundaries are calculated as follows:

IQR = Q3 − Q1, UB = Q3 + r × IQR, LB = Q1 − r × IQR, (2)
where UB and LB denote the upper and lower boundaries of the dataset, respectively. The value of the parameter r is set to 1.5, which will be comparatively analyzed in Section 4. In addition, Q1 and Q3 are defined as

Q1 = R_i, i = ⌈|R| × 1/4⌉; Q3 = R_j, j = ⌈|R| × 3/4⌉, (3)

where ⌈·⌉ indicates rounding up and |·| represents the size of the set. The specific process of preprocessing on the dataset R = {48, 32, ⋯, 53, 17}, which is composed of the convergence metrics of the solutions, is visually illustrated in Figure 3. First, the convergence metrics of all solutions are calculated and sorted in ascending order. According to the quartile method, the index positions i and j are located at 1/4 and 3/4 of the sorted convergence metrics, respectively. In Figure 3, i = ⌈13 × 1/4⌉ = 4 and j = ⌈13 × 3/4⌉ = 10. Q1 and Q3 are calculated according to Eq. (3): in Figure 3, Q1 = 32 and Q3 = 55. Then, the lower and upper boundaries are calculated according to Eq. (2). In this paper, a solution is considered a DRS when its convergence metric is greater than the upper boundary. Finally, the blue point with a convergence metric of 92 will be regarded as a DRS by the preprocessing mechanism and eliminated from the population.
Algorithm 2 presents the specific process of the preprocessing operation. In the first step, the convergence metric of each solution is calculated and stored in the set CM (line 1). Then, CM is sorted in ascending order (line 2). Subsequently, the index position i of Q1 is calculated according to the quartile method, and then the value of Q1 in the ordered CM is obtained using Eq. (3) (line 3). In the same way, Q3 is calculated (line 4). To avoid the situation in which the number of candidate solutions after preprocessing is less than the population size N, Q1 and Q3 may be adjusted: if j > N, the upper and lower quartiles of CM are adjusted accordingly (lines 5-7). After determining the quartiles, UB is calculated using Eq. (2) (line 9). Finally, any solution with a convergence metric larger than UB is eliminated, leading to a new set of solutions P' (line 10). P' is an auxiliary set for detecting whether there are DRSs in P. The disparity indicator I, which calculates the diagonal length of the hypercube formed by the extreme objective values in the population (line 11), is defined as

I = sqrt( Σ_{i=1}^{m} ( max_x f_i(x) − min_x f_i(x) )² ),

where m represents the number of objectives and x is a candidate solution belonging to the population. After calculating the disparity indicators I_P and I_P' of P and P', the difference ratio γ between the two sets is calculated as

γ = I_P / I_P'.

Obviously, if γ is smaller than or equal to 1, there are no DRSs in P. On the contrary, if γ is greater than 1, one or more DRSs exist in P, and P will be updated to P', which contains no DRSs (lines 12 and 13).
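The preprocessing steps above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the convergence metric is assumed here to be the Euclidean distance to the ideal point, and the j > N quartile adjustment (lines 5-7) is approximated by simply clamping j to N.

```python
import numpy as np

def preprocess_iqr(objs, N, r=1.5):
    """Sketch of IQR-based DRS elimination (Algorithm 2).
    objs: (|P|, M) objective matrix; N: population size; r: IQR factor."""
    objs = np.asarray(objs, dtype=float)
    # Assumed convergence metric: Euclidean distance to the ideal point.
    cm = np.linalg.norm(objs - objs.min(axis=0), axis=1)
    order = np.argsort(cm)                       # sort the metrics ascending
    n = len(cm)
    i = int(np.ceil(n / 4))                      # index of Q1 (Eq. (3))
    j = min(int(np.ceil(3 * n / 4)), N)          # index of Q3, clamped to N
    q1, q3 = cm[order[i - 1]], cm[order[j - 1]]
    ub = q3 + r * (q3 - q1)                      # upper boundary UB (Eq. (2))
    kept = objs[cm <= ub]                        # candidate set P' after elimination

    def disparity(o):                            # diagonal of the bounding hypercube
        return float(np.sqrt(((o.max(axis=0) - o.min(axis=0)) ** 2).sum()))

    gamma = disparity(objs) / disparity(kept)    # difference ratio between P and P'
    return kept if gamma > 1 else objs           # replace P only if DRSs were found
```

On a population lying near a linear front plus one far-out solution, the outlier's metric exceeds UB and it is dropped, while γ > 1 confirms it was a DRS; on a clean population γ = 1 and P is returned unchanged.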

Adaptive Strategy of Identifying Knee Points.
In KPEA, the knee points play a crucial role. According to the literature [34], knee point-based selection has more advantages than HV-based selection.

(Algorithm 2: Preprocessing(P, N); its inputs and steps correspond to the description in Section 3.2.)

The core idea of determining the knee point is demonstrated in Figure 4. In biobjective minimization problems, the extreme solutions with the maximum value on the respective objective axes f_1 and f_2 are first determined. Then, the extreme line L passing through the two extreme solutions is defined as

a × f_1 + b × f_2 + c = 0.

Finally, the distance D from each solution A(x_A, y_A) to L is calculated as

D(A) = |a × x_A + b × y_A + c| / sqrt(a² + b²). (7)

Within the neighborhood range, the solution with the maximum distance to L is identified as the knee point. In many-objective optimization problems, L refers to a hyperplane [54].
As shown in Figure 4, solutions A, B, and C are located within the neighborhood enclosed by the dashed lines. Among these, solution A has the maximum distance to L. Therefore, A is considered the knee point within the neighborhood. When there is only one solution within a neighborhood (e.g., solution H), that solution is also considered a knee point.
The results of identifying knee points are significantly affected by the size of the neighborhood of the solutions. Solutions B, E, and H are identified as knee points based on the neighborhood size defined in Figure 4. However, when all solutions are included in the same neighborhood, only solution E is identified as the knee point. To address this issue, a strategy is introduced to adaptively adjust the neighborhood size of the solutions.
Assume that at the gth generation, there are N_F nondominated fronts within the population, each composed of a set of nondominated solutions F_i. The neighborhood of a solution is defined by a hypercube whose side length on the jth objective is V_j^g, where 1 ≤ j ≤ M and M denotes the number of objectives. Specifically, the neighborhood size V_j^g on the jth objective is calculated as follows:

V_j^g = r_g × (f_j^max(g) − f_j^min(g)), (8)

where f_j^min(g) and f_j^max(g) represent the minimum and maximum values of the nondominated set F_i on the jth objective at the gth generation, respectively. The parameter r_g denotes the ratio of the size of the neighborhood to the span of the jth objective in the nondominated front F_i at the gth generation, which is defined as

r_g = r_{g−1} × e^{−(1 − t_{g−1}/T)/M}, (9)

where r_{g−1} represents the ratio of the size of the neighborhood to the span of the jth objective in F_i at the (g − 1)th generation, and t_{g−1} indicates the ratio of knee points to the number of nondominated solutions in F_i at the (g − 1)th generation. The threshold T (0 < T < 1) is used to control the proportion of knee points in F_i.
The main steps for identifying the knee points are presented in Algorithm 3. The knee points are identified starting from the first nondominated front (line 2). First, the extreme points in F_i are identified, and the hyperplane L is calculated (lines 3 and 4). Next, the size of the neighborhood for the solutions in F_i is calculated (lines 5-8). The distances from each solution to L are calculated and sorted in descending order (lines 9 and 10). Then, the knee points within each neighborhood are identified and added into the knee point set K (lines 12-15). Finally, the ratio between the number of knee points and the number of nondominated solutions in F_i is updated (line 17).
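For the biobjective case, the knee-identification idea of Figure 4 can be sketched as below. This is an illustrative reduction under two assumptions: the neighborhood ratio r is passed in as a fixed value rather than adapted by Eq. (9), and knee points are picked greedily in descending order of distance to L, each new knee excluding the solutions inside its hypercube neighborhood.

```python
import numpy as np

def find_knee_points(front, r=0.5):
    """Identify knee points of one biobjective nondominated front.
    front: (n, 2) objective values; r: neighborhood-to-span ratio."""
    f = np.asarray(front, dtype=float)
    e1 = f[f[:, 0].argmax()]                     # extreme solution on f1
    e2 = f[f[:, 1].argmax()]                     # extreme solution on f2
    # Extreme line L through e1 and e2: a*f1 + b*f2 + c = 0
    a, b = e2[1] - e1[1], e1[0] - e2[0]
    c = e2[0] * e1[1] - e1[0] * e2[1]
    # Signed distance to L (Eq. (7)), oriented toward the ideal point
    d = np.sign(c) * (f @ np.array([a, b]) + c) / np.hypot(a, b)
    v = r * (f.max(axis=0) - f.min(axis=0))      # neighborhood size (Eq. (8))
    knees = []
    for idx in np.argsort(-d):                   # largest distance first
        # keep idx only if it lies outside every chosen knee's hypercube
        if all(np.any(np.abs(f[idx] - f[k]) > v / 2) for k in knees):
            knees.append(idx)
    return f[knees]
```

With a large ratio (one neighborhood covering the whole front) only the most protruding solution survives, mirroring the single-knee case discussed above; smaller ratios yield more knees.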

Binary Tournament Mating Selection.
In KPEA, a binary tournament selection based on three metrics, namely, Pareto dominance, the knee point criterion, and the weighted distance, is designed, as shown in Algorithm 4.
Two individuals, x and y, are randomly selected from the parent population P. If y is dominated by x, then x is preferred (lines 4-7). If x and y are nondominated with respect to each other, the individual belonging to the knee point set K is selected (lines 9-12). If both or neither of x and y belongs to the knee point set K, the weighted distances are used to compare x and y. The weighted distances DW of x and y are calculated as follows:

Algorithm 3: Finding_knee_point (F, T, r, t).
Input: F (sorted population), T (rate of knee points in the population), r, t (adaptive parameters)
K ← ∅ /* knee points */
For all fronts F_i in F
  Update r by Eq. (9)
  fmax ← maximum value of each objective in F_i
  fmin ← minimum value of each objective in F_i
  Calculate V by Eq. (8)
  Calculate the distance from each solution in F_i to L by Eq. (7)
  Sort F_i in descending order according to the distances
End for
Return K, r, and t
where x_i is the ith nearest neighbor of x in the population, w_{x_i} is the weight of x_i, dis(x, x_i) is the Euclidean distance between x and x_i, and r_{x_i} is the rank of the distance dis(x, x_i) among all distances dis(x, x_j), 1 ≤ j ≤ k.
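The three-criterion tournament of Algorithm 4 can be sketched as follows. Since the exact DW formula is not reproduced in this excerpt, the sketch assumes a weighted sum over the k nearest neighbors with rank-reciprocal weights w_{x_i} = 1/r_{x_i}; the larger DW (i.e., the sparser neighborhood) wins the final tie-break.

```python
import random
import numpy as np

def weighted_distance(objs, idx, k=3):
    """Assumed diversity metric DW(x) = sum_i dis(x, x_i) / r_{x_i}
    over the k nearest neighbors x_i of solution idx."""
    d = np.linalg.norm(objs - objs[idx], axis=1)
    d[idx] = np.inf                              # exclude the solution itself
    nearest = np.sort(d)[:k]                     # dis(x, x_i), ranked 1..k
    return float(np.sum(nearest / np.arange(1, k + 1)))

def tournament(objs, rank, knees, rng=random):
    """Binary tournament: Pareto rank, then knee membership, then DW."""
    x, y = rng.sample(range(len(objs)), 2)
    if rank[x] != rank[y]:                       # dominance criterion
        return x if rank[x] < rank[y] else y
    if (x in knees) != (y in knees):             # knee point criterion
        return x if x in knees else y
    return x if weighted_distance(objs, x) >= weighted_distance(objs, y) else y
```

An isolated solution accumulates larger neighbor distances and hence a larger DW than one in a crowded region, so the tie-break favors underexplored areas of the front.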

Environmental Selection.
In this paper, a novel environmental selection strategy is proposed, which aims to enhance the selection pressure while balancing convergence and diversity.
The details of the novel environmental selection strategy are provided in Algorithm 5. Firstly, the sorted population F is normalized (line 1). The objective vector F(x_j) of a solution x_j in the sorted population F can be normalized using the following formula [55,56]:

f_i'(x_j) = (f_i(x_j) − z_i^min) / (z_i^max − z_i^min), 1 ≤ i ≤ M,

where z_i^min denotes the ith component of the ideal point, that is, z_i^min = min_{x ∈ F} f_i(x), and z_i^max is defined analogously from the maximum values. When the number of solutions in F_1 exceeds the population size N, the solutions are incomparable based on the Pareto dominance relationship. In this situation, the convergence and diversity of KPEA are maintained alternately by the penalty mechanism (introduced in Section 3.5.1), which also decreases the sensitivity to the distribution threshold (lines 2 and 3).
In contrast, when the number of nondominated solutions in F_1 is less than the population size, knee points are first preserved by the selection mechanism (introduced in Section 3.5.2), which increases selection pressure and improves convergence. Then, the remaining solutions are selected with the association and niching functions based on the maximum-vector-angle-first principle, which maintains diversity (lines 4 and 5). Detailed explanations of these methods are provided in the following sections.
3.5.1. Penalty. When the number of solutions in F_1 is greater than the population size N, the function Penalty is triggered in KPEA, as shown in Algorithm 6. The excellent solutions for the next generation are selected based on the penalty principle, which considers both the diversity and convergence of the population. First, the angles between the solutions are computed and stored in S_DM = {DM_1, DM_2, ⋯, DM_|F|} in ascending order (lines 2 and 3). The distance CM_i from each solution p_i to the ideal point z* is computed and stored in S_CM = {CM_1, CM_2, ⋯, CM_|F|} (lines 4 and 5). Then, the solution with the minimum value in S_CM is selected from the candidate solution set and put into the next-generation population P (lines 7 and 8). Once a solution is selected, its neighbors N_i are identified, where μ_j^i < α and μ_j^i ∈ {μ_1^i, μ_2^i, ⋯, μ_{|F|−1}^i}. These neighbors N_i are put into the penalty set E (lines 9 and 10). The size of E is determined by the size of the initial candidate set S and the desired population size N. If the number of solutions in E exceeds its capacity limit, the penalty set E must be updated based on the similarity between the penalized solutions N_i and the selected solution set P. Specifically, the closer a solution p_i in E is to a candidate solution, the smaller the angle value DM_i (stored in S_DM) between p_i and that candidate solution. Therefore, among the neighbors of the candidate solutions, the top N_j = |S| − N neighbors with smaller angle values are kept in the penalty set (lines 11-13). Then, the remaining well-distributed solutions with larger angle values (stored in S_DM) are moved from E back to the candidate set F (line 14).
The penalty mechanism in a 2D objective space is illustrated in Figure 5. In this example, there are 9 nondominated solutions in F_1, and the population size N is 5. First, solution A with min(S_CM) is selected. Then, since the angles between solutions B and C and solution A are smaller than α, B and C are penalized, as shown in Figure 5(b). Next, in the same way, solution I with min(S_CM) is selected; as the number of solutions in the penalty set does not exceed the limit, the neighbor H of I is penalized directly, as shown in Figure 5(c). Subsequently, solution F with min(S_CM) is selected. At this point, solutions A, I, and F are contained in the population P, and solutions B, C, and H are included in the penalty set. Because the penalty set can only accommodate four solutions, the neighbors E and G cannot be penalized directly; the penalty set must be updated by removing solution H with max(S_DM). After the update, the penalty set consists of solutions B, C, E, and G. A selected solution may have no neighbors, meaning that no angle between the candidate solutions and the selected solutions is smaller than α; in this case, no solution is penalized. Finally, following the above steps, solutions D and H are selected, as shown in Figures 5(e) and 5(f).

3.5.2. Selection.
Once the number of nondominated solutions in F_1 is less than the population size, the selection operation is triggered, as given in Algorithm 7. The knee points of each nondominated front are selected first and added into P, which increases selection pressure (line 2). Then, the remaining nondominated solutions from F_1 ∪ ⋯ ∪ F_{l−1} are selected and added into P. If the number of solutions in P then exceeds the population size N, F_l is considered the critical front, and the |P| − N knee points with the minimum distance to the hyperplane are removed (lines 4 and 5). Otherwise, the association function, which constructs search directions based on P, and the niching function, which ensures population diversity, are used to select N − |P| solutions one by one from F_l (lines 6-8). The association and niching functions are described in detail in the following sections.
3.5.3. Association. The angle-based association process is described in this section. After the knee points are selected into P, if P is not full, N − |P| solutions are selected from F_l based on the vector angles. Before describing the association operation, some relevant formulas are presented first.
The vector angle between two solutions is computed in the normalized objective space. The objective vector F(x_j) of each solution in the sorted population F has been normalized to F′(x_j), as introduced at the beginning of Section 3.5. The norm of solution x_j in the sorted population is defined as

‖F′(x_j)‖ = sqrt( Σ_{i=1}^{M} F′_i(x_j)² ).

The vector angle μ between solutions x_j and c_k is defined as

μ(x_j, c_k) = arccos( (F′(x_j) · F′(c_k)) / (‖F′(x_j)‖ ‖F′(c_k)‖) ),   (15)

where the inner product is F′(x_j) · F′(c_k) = Σ_{i=1}^{M} F′_i(x_j) F′_i(c_k). The main steps of the association process are provided in Algorithm 8. For each remaining member x_j in F_l, the minimum vector angle between x_j and the solutions in P is calculated and denoted θ(x_j), and the index of the solution in P that attains this minimum is denoted γ(x_j).
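The norm, angle, and association steps above translate directly into code. The following is a minimal sketch assuming already-normalized minimization objectives; the function names are illustrative.

```python
import numpy as np

def vector_angle(f1, f2):
    """Eq. (15): mu = arccos(<f1, f2> / (||f1|| * ||f2||)).
    A small epsilon guards against zero-norm vectors."""
    eps = 1e-12
    cos = np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def associate(P_objs, Fl_objs):
    """Association sketch (Algorithm 8): for each remaining x_j in F_l,
    theta(x_j) = min angle to P, gamma(x_j) = index attaining that minimum."""
    theta, gamma = [], []
    for f in Fl_objs:
        angles = [vector_angle(f, c) for c in P_objs]
        k = int(np.argmin(angles))
        theta.append(angles[k])
        gamma.append(k)
    return theta, gamma
```

A solution lying on the same ray as a member of P gets θ ≈ 0, which is exactly why such solutions are never prioritized by the niching step later.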
First, for each x_j ∈ F_l, θ(x_j) and γ(x_j) are initialized to ∞ and −1, respectively (lines 1-3). Then, for each c_k ∈ P, the angle μ between x_j and c_k is calculated (lines 4 and 5). If μ < θ(x_j), then θ(x_j) and γ(x_j) are updated accordingly (lines 6-8).
The association process in a 2D objective space is illustrated in Figure 6. In this example, the remaining nondominated solutions x_1 to x_5 in F_l are associated with the solutions already selected into the next-generation population P. Based on the vector angle association, x_1 and x_2 are associated with c_1; x_3 and x_4 are associated with c_2; and x_5 is associated with c_5. Taking x_3 as an example, θ(x_3) and γ(x_3) are β and 2, respectively.

3.5.4. Niche Preservation.
A niche-based diversity maintenance strategy, based on the principle of maximum-vector-angle-first, is described in Algorithm 9. First, each solution in F_l has a flag value indicating whether it has been added into P; these flags are initialized to false at the beginning of the process (lines 1 and 2). Subsequently, for each k ≤ N − |P|, the index of the solution in F_l that has the maximum vector angle to P is computed and denoted by ρ (lines 3 and 4).
Finally, based on the principle of maximum-vector-angle-first, the best solution is selected and added to P, as specified in Algorithm 10. When ρ is null, all solutions in F_l have already been added into P. Otherwise, x_ρ is added into P and the corresponding flag is updated to true (lines 4 and 5). When adding x_ρ to the population P, the vector angles between the remaining solutions in F_l and the solutions in P must be adjusted accordingly. This adjustment is achieved by computing the vector angle between x_ρ and each solution in F_l whose flag value is false (lines 7-11). The specific updating process has been described in Section 3.5.3.

Input: F (sorted population), K (set of knee points), N (population size); Output: P (next population)
1 P = ∅, i = 1
2 P ← P ∪ (K ∩ F_l) /* Select the knee points and the solutions in the first fronts */
...
5 Delete |P| − N solutions from K ∩ F_l, which have the minimum distances to the hyperplane
6 Else If |P| < N then
7 Associate each member of F_l with a solution in P: Association(P, F_l)
8 Choose N − |P| solutions one by one from F_l to construct the final P: P = Niching(P, F_l, N − |P|)
10 End If
11 Return P
Algorithm 7: Selection (F, K, and N).

...
4 For each c_k ∈ P
5 Calculate the angle μ between x_j and c_k by Eq. (15)
...
10 End For
11 End For
Algorithm 8: Association (P, F_l).

As shown in Figure 6, compared with the other remaining solutions, the angle between the solution c_1 in P and x_2 in F_l is the maximum. Based on the principle of maximum-vector-angle-first, x_2 will be added into P. Obviously, compared with solutions such as x_1, x_3, and x_4, adding x_2 is more likely to improve the distribution of the population. In addition, potential solutions may be obtained along the direction of x_2. Assuming that two more solutions need to be added into P, x_1 and x_3 will be chosen.
In contrast, x_5 can never be considered, because the vector angle between x_5 and its associated solution c_5 is zero. This means that c_5, which has the same direction as x_5, already provides good convergence in P.
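The niching loop of Algorithms 9 and 10 can be sketched as follows. This is a simplified, hypothetical rendering of the textual description (no explicit flag-null handling), assuming normalized minimization objectives.

```python
import numpy as np

def niching(P_objs, Fl_objs, n_needed):
    """Maximum-vector-angle-first niching sketch: repeatedly add the F_l
    member whose minimum angle to P is largest, then update the remaining
    members' angles against the newly added solution."""
    def angle(a, b):
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    theta = [min(angle(f, c) for c in P_objs) for f in Fl_objs]  # association
    flag = [False] * len(Fl_objs)
    P = [np.asarray(c, float) for c in P_objs]
    for _ in range(n_needed):
        pool = [j for j in range(len(Fl_objs)) if not flag[j]]
        if not pool:                              # all of F_l already added
            break
        rho = max(pool, key=lambda j: theta[j])   # maximum-vector-angle-first
        P.append(np.asarray(Fl_objs[rho], float))
        flag[rho] = True
        for j in pool:                            # update angles to grown P
            if not flag[j]:
                theta[j] = min(theta[j], angle(Fl_objs[j], Fl_objs[rho]))
    return P
```

Note how a member collinear with a solution already in P keeps θ = 0 and is therefore only ever picked last, matching the x_5/c_5 observation above.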
3.6. Time Complexity Analysis. During each evolutionary generation, KPEA performs five main operations: mating selection, preprocessing, nondominated sorting, identification of the knee points, and environmental selection. The preprocessing operation takes O(N log N) time. Nondominated sorting takes O(N log^{m−2} N) time. The identification of the knee points requires at most O(mN²) time: it takes at most O(mN) time to compute the distance between each nondominated solution and the hyperplane, and O(mN²) time to identify the knee points in the nondominated solution set.
During environmental selection, the time complexity of KPEA is mainly determined by the adaptive switching strategy described in Algorithm 5. When the number of nondominated solutions in F_1 exceeds the population size N, the penalty operation is triggered: computing the cosine similarities requires O(mN²) time, and identifying the elite solutions takes O(N²) time. When the number of nondominated solutions in F_1 is less than N, the selection operation is triggered: identifying the knee points for the next generation takes O(N log N) time, and the association and niching functions require at most O(mN²) time.
Therefore, the environmental selection stage requires at most O(mN²) time. Overall, in each generation, the worst-case time complexity of KPEA is max{O(N log^{m−2} N), O(mN²)}.

4. Experimental Setup
The experiments to validate the performance of the proposed KPEA on MaOPs are set up in this section. First, in Section 4.1, five representative algorithms are briefly introduced, namely, k-NSGAII [36], KnEA [34], Pi-MOEA [35], hpaEA [52], and VaEA [47]. Then, the specific characteristics of the test problems in the MaF and WFG benchmark suites are listed in Section 4.2. Finally, the performance metrics and parameter settings are presented.

Input: P (population), F_l (critical front), N − |P| (number of solutions to be chosen); Output: P (next population)
1 T = |F_l| /* |F_l| returns the cardinality of F_l */
2 flag(x_j) = false, x_j ∈ F_l, j = 1, 2, ..., T
3 For k = 1 to (N − |P|) /* Find the index with the maximum vector angle */
4 ρ = argmax{θ(x_j) | x_j ∈ F_l ∧ flag(x_j) == false}
5 P = Maximum-Vector-Angle-First(P, ρ, T)
6 End For
7 Return P
Algorithm 9: Niching (P, F_l, and N − |P|).

4.1. Comparison Algorithms.
To fully validate the effectiveness of KPEA, five state-of-the-art algorithms are selected for performance comparison. The core ideas of these five algorithms are as follows.
(i) k-NSGAII [36]. Based on NSGA-II, this algorithm introduces a knee point identification strategy that uses a user-defined parameter η to control the width of the neighborhood of knee points. This strategy can improve the coverage of the population on the Pareto front.

(ii) KnEA [34]. This algorithm adopts an adaptive strategy to identify knee points in each nondominated front and uses a custom threshold parameter T to control the number of knee points recognized. This strategy not only enhances selection pressure toward the Pareto front but also promotes population diversity.

(iii) Pi-MOEA [35]. This algorithm uses an average ranking method to identify pivot solutions within an adaptive neighborhood, aiming to achieve a balance between convergence and diversity.

(iv) VaEA [47]. Solutions are selected based on the principles of maximum-vector-angle-first and worse-elimination, without requiring additional weight vectors or parameters.

(v) hpaEA [52]. With the assistance of hyperplanes, prominent solutions are selected first during both mating selection and environmental selection, which increases the selection pressure.

4.2. Description of Benchmark Problems. In this paper, 20 scalable test problems from two well-known benchmark suites, WFG [57] and MaF [58], are used to verify the performance of KPEA. The characteristics of the test problems are detailed in Table 1.

4.3. Performance Indicators.
Two widely used performance metrics, namely, hypervolume (HV) [59] and inverted generational distance (IGD) [60, 61], are considered, as they can simultaneously evaluate the convergence and diversity of solution sets. The IGD measures the average distance from the reference set P*, uniformly sampled along the true Pareto front (PF), to the obtained nondominated solution set P [26]. IGD is defined as follows:

IGD(P, P*) = ( Σ_{v ∈ P*} d(v, P) ) / |P*|,

where d(v, P) is the minimum distance between the reference point v and its closest solution in P. Therefore, the smaller the IGD value, the closer the solution set is to the true PF.
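The IGD definition above is a few lines of array code; this sketch assumes P and P* are given as arrays of objective vectors.

```python
import numpy as np

def igd(P, P_star):
    """Inverted generational distance: average, over reference points v in
    P*, of the distance d(v, P) from v to its nearest solution in P."""
    P, P_star = np.asarray(P, float), np.asarray(P_star, float)
    # pairwise Euclidean distances, shape |P*| x |P|
    d = np.linalg.norm(P_star[:, None, :] - P[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

Because the average runs over P*, a solution set that misses a region of the true PF is penalized even if every obtained point is individually well converged.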
HV measures the volume of the region in the objective space enclosed by a reference point and the nondominated solution set P, and it is a common indicator of algorithm performance. It can be defined as follows:

HV(P) = volume( ∪_{i ∈ P} v_i ),

where v_i is the hypercube enclosed by the reference point z^r = (1, 1, ⋯, 1) and the individual i in P. The larger the HV value, the better the quality of the solution set.
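For two objectives the union of hypercubes reduces to a sum of rectangular slices, which makes HV easy to compute exactly. The sketch below assumes a minimization problem and a nondominated input set; it is for illustration, not a general M-objective HV routine.

```python
def hv_2d(P, ref=(1.0, 1.0)):
    """Exact hypervolume of a 2-objective nondominated set P (minimization)
    w.r.t. reference point ref: sort by f1 ascending and sum the rectangle
    each point adds below the previous point's f2 level."""
    pts = sorted(tuple(p) for p in P if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                       # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

With z^r = (1, 1), a single point at (0.5, 0.5) yields HV = 0.25, and adding points closer to the PF or filling gaps can only increase the value.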

4.4. Parameter Settings.

(i) Variation Operators. In this paper, simulated binary crossover (SBX) [64] and polynomial mutation [65] are adopted by all the compared MaOEAs to generate offspring solutions. The crossover probability p_c is set to 1, and the mutation probability p_m is set to 1/D, where D is the number of decision variables. Following the recommendations in [23], the distribution indexes η_c and η_m are set to 20 and 30, respectively. The other parameter settings of the algorithms are consistent with their respective original publications.

(ii) Population Size. This paper considers 3, 5, 8, and 10 objectives. The population size N of the six algorithms for different numbers of objectives M is summarized in Table 2.

(iii) The Number of Runs and Termination Criterion.
For each test instance, each algorithm is run 20 times independently. The termination criterion for each run is a predefined maximum number of generations (maxGen). For WFG1 and WFG2, maxGen is set to 1000; for WFG3-WFG9, maxGen is set to 250; and for the MaF test suite, maxGen is set to 250. When the number of generations reaches maxGen, the algorithm terminates.
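The two variation operators named above are standard; the following is a minimal sketch of both with the stated settings (η_c = 20, η_m = 30, p_c = 1, p_m = 1/D), assuming decision variables scaled to [0, 1] and omitting the variable-wise child swap some implementations add.

```python
import numpy as np

rng = np.random.default_rng(1)

def sbx(p1, p2, eta_c=20.0):
    """Simulated binary crossover: spread factor beta drawn from the SBX
    polynomial distribution, applied component-wise (p_c = 1)."""
    u = rng.random(len(p1))
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta_c + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta_c + 1)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return np.clip(c1, 0, 1), np.clip(c2, 0, 1)

def poly_mutation(x, eta_m=30.0):
    """Polynomial mutation with p_m = 1/D per variable."""
    D = len(x)
    y = x.copy()
    for i in range(D):
        if rng.random() < 1.0 / D:
            u = rng.random()
            delta = ((2 * u) ** (1 / (eta_m + 1)) - 1 if u < 0.5
                     else 1 - (2 * (1 - u)) ** (1 / (eta_m + 1)))
            y[i] = np.clip(y[i] + delta, 0, 1)
    return y
```

Larger distribution indexes concentrate offspring near their parents, which is why η_c = 20 and η_m = 30 give fairly local variation.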

5. Experimental Results
In this section, according to the experimental design described in Section 4, KPEA is compared with five state-of-the-art algorithms on the WFG and MaF test suites. The statistical results of the IGD and HV metrics obtained by the six algorithms are listed in Tables 3-6, where the best result for each problem instance is annotated with a dark gray background. Then, in Section 5.3, the sensitivity of KPEA to the parameter r is discussed. Additionally, the performance of KPEA and its variants is compared in Section 5.4. Finally, in Section 5.5, the overall performance of the six algorithms is analyzed.

5.1. Analysis of WFG Test Problems. The mean and standard deviation values of the IGD and HV results obtained by the five compared algorithms and KPEA on the WFG test suite are listed in Tables 3 and 4. Performance is challenged by the multiple complex features of the WFG problems, which include scalability, deception, and separability. As shown in Table 3, KPEA significantly outperforms the other algorithms: among the 36 instances, KPEA outperforms all compared algorithms on 18. The performance of KPEA is significantly better than that of the other five algorithms on the WFG1, WFG2, and WFG8 problems, which have scaled and concave-convex Pareto fronts. This can be attributed to the novel adaptive switching mechanism of KPEA, which balances diversity and convergence. In terms of the HV indicator, KnEA performs better on the WFG5 and WFG9 problems, which have deceptive and concave Pareto fronts; this is because KnEA sets a small threshold T in its adaptive knee point identification strategy, which helps it perform well on scalable problems. Pi-MOEA performs better than KPEA on the three-objective WFG problems. Furthermore, the performance of VaEA is considerably worse than that of KPEA, because VaEA emphasizes diversity too much while neglecting convergence.
In Table 4, in terms of IGD values, KPEA and VaEA achieve better results than the other algorithms on most of the WFG test problems. Out of the 36 test instances, KPEA achieves the best IGD result on 20, while Pi-MOEA obtains the best result on 8. The reason why KPEA performs well on some WFG test problems may be that it combines the knee points with the maximum-vector-angle-first principle to guide the evolution of the population. Therefore, unlike KnEA and k-NSGAII, which lack a diversity maintenance mechanism, KPEA does not easily become trapped in local optima on problems with biased Pareto fronts such as WFG1, WFG8, and WFG9. Pi-MOEA uses a threshold T defined on the average ranking to control the neighborhood range of solutions, which helps the algorithm maintain a balance between convergence and diversity. It is important to note that, on the WFG test suite, k-NSGAII and hpaEA perform relatively the worst in both HV and IGD metrics. For k-NSGAII, the effectiveness of knee point-driven algorithms is heavily influenced by the approach used to detect the knee points; the approach proposed by k-NSGAII focuses on only a small portion of the PF region when identifying knee points, which results in poor diversity during the search. For hpaEA, this is mainly due to its prominent-solution protection strategy, which is unable to maintain good diversity on the linear Pareto fronts of the WFG test suite.
Taking WFG1 and WFG9 with 10 objectives as examples, the final solutions obtained by the six algorithms in a single run are shown in Figures 7 and 8, respectively. For the WFG1 problem with 10 objectives, each algorithm encounters some difficulty, as shown in Figure 7. In terms of distribution and convergence, both hpaEA and VaEA perform poorly.

5.2. Analysis of MaF Test Problems.
Although the WFG problems are widely used to evaluate the performance of many-objective optimization algorithms, their benchmark functions are rather regular and monotonous, and they provide limited validation for complex and uncertain real-world problems. In contrast, the MaF problems, which focus on evaluating diversity and convergence, have higher complexity and practical significance. In this study, tests are therefore conducted on the MaF problems to further validate the performance of KPEA.
The statistical results of the HV values obtained by the six algorithms on the MaF test suite are presented in Table 5. KPEA outperforms the other five algorithms on 27 of the 44 test instances. Furthermore, KPEA also demonstrates strong competitiveness on MaF4 with a convex Pareto front, MaF8, MaF11 with a disconnected Pareto front, and MaF13 with irregular mixed Pareto fronts.
The IGD values obtained by the considered algorithms on the MaF test suite are presented in Table 6. Out of the 44 test instances, KPEA achieves the best IGD value on 26. Additionally, Pi-MOEA also performs well, obtaining the best IGD value on 6 test instances. Conversely, the performance of k-NSGAII and KnEA is not as good as that of the other algorithms.
To illustrate the convergence and distribution of solutions more intuitively, the final populations of the six algorithms on several representative problems are plotted in Figures 9-12. KPEA, Pi-MOEA, hpaEA, and VaEA maintain a uniform population distribution on MaF1 with an irregular PF shape, as shown in Figure 9. In contrast, the distribution of solutions obtained by k-NSGAII and KnEA is relatively poor. The distribution of solutions obtained by the different algorithms on the three-objective MaF5 is shown in Figure 10. The uniformly distributed solutions obtained by Pi-MOEA and KPEA are closest to the true Pareto front of MaF5. Compared with k-NSGAII and hpaEA, the solutions of KnEA and VaEA appear to have good convergence; however, both KnEA and VaEA struggle to maintain population diversity on MaF5. From Figure 11, the solutions obtained by KnEA, VaEA, and KPEA are closest to the true Pareto front when the number of objectives is 10. Moreover, as shown in Figures 10 and 11, unlike the other algorithms, the performance of KPEA becomes more prominent as the number of objectives increases. MaF8 is a multipoint distance minimization problem [43]. From Figure 12, KPEA demonstrates the best convergence and diversity on MaF8, whereas the distributions obtained by k-NSGAII, Pi-MOEA, and KnEA are unsatisfactory, and the solutions obtained by hpaEA and VaEA fail to converge to the Pareto front. This further demonstrates the effectiveness of the proposed KPEA.

5.3. Sensitivity of Parameter r.
To investigate the sensitivity of the parameter r used in the preprocessing of KPEA, the performance with r varying from 0 to 5 is evaluated on test problems with 3, 5, and 10 objectives. Seven representative problems, listed in Table 7, are selected for testing and comparison, and the average IGD values over 30 runs are calculated for each problem. From Figure 13, on the MaF2 and WFG9 problems with concave and scaled Pareto fronts, the performance of KPEA is insensitive to r. On MaF1 with a degenerate Pareto front, the analysis depends on the number of objectives. As shown in Figures 13(a) and 13(b), when the number of objectives is less than 10, the IGD value of KPEA fluctuates only slightly as r increases. However, when the number of objectives reaches 10, the variation of r has a significant impact on the performance of KPEA: as r increases from 0 to 1.5, the IGD value consistently decreases, and as r continues to increase, the IGD value gradually increases. Moreover, on WFG1 and MaF9, the IGD value also fluctuates slightly as r increases; however, regardless of these fluctuations, the IGD values of KPEA are always the smallest at r = 1.5. On WFG2 and MaF6, the IGD values decrease rapidly until r = 0.5 and then tend to stabilize as r increases.
Therefore, r = 1.5 is the best choice for KPEA, and this setting significantly enhances the applicability of the algorithm.
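The recommended r = 1.5 coincides with the classic interquartile-range outlier factor. A minimal sketch of the upper bound used by the preprocessing (cf. Figure 3: UB = Q3 + r·IQR over the convergence metrics), with hypothetical names:

```python
import numpy as np

def iqr_upper_bound(cm, r=1.5):
    """Upper bound UB = Q3 + r * (Q3 - Q1) over the convergence metrics CM;
    solutions with CM > UB are treated as DRS candidates."""
    q1, q3 = np.percentile(cm, [25, 75])
    return q3 + r * (q3 - q1)

cm = np.array([1.0, 1.1, 1.2, 1.3, 9.0])   # one obvious outlier
ub = iqr_upper_bound(cm)                   # 1.3 + 1.5 * 0.2 = 1.6
drs_candidates = cm[cm > ub]               # only the 9.0 solution
```

Smaller r tightens the bound and risks discarding boundary solutions; larger r loosens it and lets DRSs survive, which matches the U-shaped IGD behavior observed on 10-objective MaF1.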

5.4. Performance Comparison of KPEA and Its Variants. The performance of KPEA is compared with three of its variants in this section. The experimental results are used to validate the effectiveness of the penalty mechanism, the effectiveness of the preprocessing, and the necessity of knee point guidance. KPEA-N is a variant of KPEA in which, during environmental selection, solutions are selected based on the worse-elimination principle of VaEA rather than the penalty mechanism; this variant contains only the preprocessing strategy and knee point guidance. KPEA-NR denotes a variant of KPEA that does not eliminate DRSs. KPEA-NK denotes a variant of KPEA without the knee point-driven strategy. The IGD and HV results of KPEA, KPEA-N, KPEA-NR, and KPEA-NK on the MaF test suite are given in Tables 8 and 9, respectively.
Specifically, in terms of HV values, KPEA outperforms the other three variants on all test instances of the MaF4 and MaF11 problems. In terms of IGD values, KPEA outperforms KPEA-N, KPEA-NR, and KPEA-NK on 29 of the 44 test instances. As shown in Table 8, for the MaF8 distance minimization problem, there is no need to use preprocessing to eliminate DRSs: on problems with nonseparable Pareto fronts, the solutions obtained by KPEA-NR are very close to the final Pareto optimal front. In other words, the preprocessing strategy of KPEA may not be beneficial on such problems.
As shown in Table 9, for the MaF7 problem with a concave-convex Pareto front, KPEA-NK performs significantly better than KPEA. This may be because MaF7 is a multimodal problem containing a large number of segmented Pareto-optimal fronts, which causes the knee point-driven KPEA to fall into local Pareto optimality more easily.
It is important to note that, on the MaF test suite, KPEA-N performs relatively the worst in both HV and IGD metrics, and it is difficult for this variant to achieve comparatively good performance on any problem. The adaptive penalty mechanism selects the individuals with the best convergence one by one, which achieves comprehensive coverage of the Pareto front. Compared with the worse-elimination principle in KPEA-N, the penalty mechanism in KPEA thus better balances the convergence and diversity of the population.
However, for both IGD and HV values, KPEA has a significant advantage over the variants KPEA-N, KPEA-NR, and KPEA-NK on most test instances. Overall, the combination of the penalty mechanism, the knee point-driven strategy, and the DRS elimination strategy is suitable for most problems.

5.5. Overall Performance Analysis. In this section, two statistical tests (the Wilcoxon rank-sum test and the Friedman test) are used to validate the comprehensive performance of KPEA in terms of convergence and diversity.
The statistical results based on the Wilcoxon rank-sum test are listed in Table 10. Specifically, in terms of the IGD metric, the proportions of test instances on which KPEA outperforms k-NSGAII, KnEA, Pi-MOEA, hpaEA, and VaEA with statistical significance are 50/80, 62/80, 69/80, 77/80, and 60/80, respectively. The corresponding proportions for the HV metric are also reported in Table 10. Furthermore, to quantify the overall performance of each algorithm, the Friedman test is also employed. The Friedman test ranks based on the IGD and HV results are presented in Table 11: KPEA ranks first in both HV and IGD metrics on the combined test suite. In addition, the bar chart in Figure 15 intuitively shows that KPEA performs much better than the other five state-of-the-art algorithms in terms of the Friedman test ranking.
Both the results of the Wilcoxon rank-sum test and the average performance ranks indicate that KPEA has significant advantages in terms of convergence and diversity.

Discussions
In KPEA, a good balance between convergence and diversity is achieved through an adaptive switching mechanism. The knee point-driven selection mechanism increases selection pressure, while the penalty mechanism maintains convergence and diversity alternately. The experimental results in Section 5 demonstrate that KPEA can effectively solve optimization problems with different PFs.
In KPEA, when the number of nondominated solutions is less than the population size, knee points are first preserved by the selection mechanism, which increases selection pressure and improves convergence; the remaining solutions are then selected based on the principle of maximum-vector-angle-first, which maintains diversity. It is worth mentioning that both KnEA and k-NSGAII are knee point-driven algorithms that use adaptive neighborhood strategies to identify knee points. In addition, Pi-MOEA uses the average ranking method to identify pivot solutions within a neighborhood, which is similar to identifying knee points. However, in KnEA and Pi-MOEA, the remaining nondominated solutions are selected based on the distance of each solution to the hyperplane; this selection strategy emphasizes convergence too much and neglects diversity. Similarly, k-NSGAII uses the parameter η to control the width of the neighborhood, which makes it difficult to maintain satisfactory diversity.
Compared with the worse-elimination principle in VaEA, the penalty mechanism in KPEA better balances the convergence and diversity of the population. However, the penalty mechanism uses a fixed neighborhood range to penalize solutions, which may not be suitable for problems with mixed PFs.
Setting an appropriate neighborhood range for a solution is a challenging task, because thresholds that are too high or too low degrade the performance of the algorithm. Estimating the shape of the PF or adaptively adjusting the neighborhood range may be a feasible approach.
Furthermore, recent research has pointed out that DRSs can significantly affect the performance of MaOEAs. Various strategies have been proposed to eliminate DRSs, such as the direct identification of DRSs and their indirect elimination by identifying boundary solutions. However, some boundary solutions of an extremely convex Pareto front (ECPF) are often mistakenly identified as DRSs, and indirect elimination cannot guarantee the identification of all boundary solutions. In real-world applications, such as unmanned aerial vehicle route planning [66] and software product line configuration [67], it is crucial to preserve the boundary solutions of the ECPF. Therefore, it is necessary to accurately identify DRSs on the ECPF.

Conclusions
As the number of objectives increases, the performance of algorithms based on Pareto dominance is severely affected. Recently, angle-based selection strategies have been demonstrated to effectively address this problem. Motivated by this, this paper proposes a knee point-driven many-objective evolutionary algorithm with an adaptive switching mechanism (KPEA). KPEA incorporates knee points and a weighted distance into its mating strategy to increase the probability of generating excellent offspring, and an interquartile range method is introduced to detect and eliminate DRSs in the population. Moreover, in the environmental selection, an adaptive switching mechanism between angle-based selection and the penalty mechanism is designed to balance convergence and diversity.
The performance of KPEA is compared with five state-of-the-art algorithms, namely, k-NSGAII, KnEA, Pi-MOEA, hpaEA, and VaEA, on the WFG and MaF problems. The experimental results indicate that, in terms of IGD and HV, KPEA outperforms its competitors on most test instances. Additionally, this paper discusses the sensitivity of the parameter r and provides recommendations for its setting.
KPEA still has some issues that require further research. First, when dealing with MaOPs with complex PFs, the neighborhood range of solutions affects the coverage of the population on the Pareto front; adaptively adjusting the neighborhood range during the evolutionary process is a feasible research direction. In addition, the DRS elimination strategy may face a dilemma between eliminating DRSs and preserving boundary solutions on some problems. Future research will focus on designing an effective strategy to differentiate between DRSs and boundary solutions.

Figure 1: The existence of DRSs in the population for different numbers of objectives.

Figure 3: An illustration of identifying the DRSs with the interquartile range method, where LB and UB represent the lower and upper boundaries, respectively, and Q_1 and Q_3 represent the lower and upper quartiles, respectively.

8 End
9 Compute the UB of the ordered set CM
10 Eliminate the solutions whose convergence metrics are greater than UB → P′
11 Calculate the disparity indicator of P and P′ /* Detect whether there are DRSs in P */
12 If γ > 1 then
13 P = P′
14 End

Figure 4: Description of identifying the knee points in KPEA.
Input: F (sorted population), K (set of knee points), N (population size); Output: P (next population)
1 Normalization(F)
2 If |F_1| > N then
3 P ← Penalty(F_1, N) /* Solutions are selected one by one using the penalty mechanism. */
4 Else
5 P ← Selection(F, K, N) /* The knee points are first preserved, and then each solution is selected based on the principle of maximum-vector-angle-first. */
6 End
7 Return P
Algorithm 5: Environmental_selection (F, K, and N).

Input: F (sorted population), N (population size); Output: P (next population)
1 While |P| < N do
2 Compute the angles ...

Figure 5: An illustration of selecting solutions in the penalty mechanism. Five superior solutions need to be selected from the 9 nondominated solutions A-I, while the penalty set can accommodate only four solutions. The threshold α represents the neighborhood range of each solution. (a) There are 9 nondominated solutions in the population. (b) A with min(S_CM) is selected; as ∠Bz*A < α and ∠Cz*A < α, B and C are penalized. (c) I is selected, and then H is penalized. (d) F is selected, then E and G are penalized. (e) D is selected. (f) H is selected.

Figure 6: Description of updating angles under the maximum-vector-angle-first principle.

Figure 14: A comparison of the Wilcoxon rank-sum test performance on benchmark problems between KPEA and the state-of-the-art algorithms.

Figure 15: The average ranks of the Friedman test among all considered MaOEAs on the WFG and MaF problems. (a) Average ranks of all considered MaOEAs based on all IGD results. (b) Average ranks of all considered MaOEAs based on all HV results.

Input: P (population), ρ (solution index), T (number of solutions)
...
4 P = P ∪ {x_ρ} /* x_ρ is orderly added */
5 flag(x_ρ) = true
6 For j = 1 to T
7 If flag(x_j) == false /* x_j is a member of F_l */
8 Calculate the angle between x_j and x_ρ by Eq. (15)
9 If angle < θ(x_j) /* x_j is associated with x_ρ */
10 θ(x_j) = angle /* Update the vector angle from x_j to P */
11 γ(x_j) = |P| /* Update the corresponding index */
Algorithm 10: Maximum-Vector-Angle-First (P, ρ, T).

Table 1: Characteristics of the benchmark problems.

Table 2: Settings of the population size.

Table 3: Mean and standard deviation values of HV for KPEA and the state-of-the-art algorithms on the WFG problems.

Table 4: Mean and standard deviation values of IGD for KPEA and the state-of-the-art algorithms on the WFG problems.

Table 7: The properties of the test problems.

Table 11: Overall performance comparison based on the Friedman test ranking of the HV and IGD metrics on the entire benchmark set.