An Improved Hybrid Algorithm Based on Biogeography / Complex and Metropolis for Many-Objective Optimization

1Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, Shanghai University, Shanghai 200072, China 2Department of Mechanical Engineering, Hubei University of Automotive Technology, Shiyan 442002, China 3School of Materials, Manchester University, Sackville St Blg, Manchester M13 9PL, UK 4Department of Production and Quality Engineering, Norwegian University of Science and Technology, 7491 Trondheim, Norway


Introduction
In scientific research and engineering practice, multiple objectives usually need to be optimized simultaneously. Because the objectives in multiobjective optimization conflict with one another, improving one subobjective may degrade another; only through compromise can all objectives approach their optima as closely as possible. The set of all Pareto optimal solutions is known as the Pareto set (PS), and the corresponding objective vectors in the objective space form the Pareto front (PF) [1]. The purpose of most multiobjective evolutionary algorithms (MOEAs) is to determine a good approximation of the PF and PS. Although many MOEAs have effectively solved multiobjective optimization problems (MOPs) with only two or three objectives [2], MOPs with more than three objectives are too difficult for most existing MOEAs [3]. In the recent literature, MOPs with more than three objectives are often described as many-objective optimization problems (MaOPs) [4,5].
It is generally agreed that evolutionary algorithms (EAs) are well suited to MOPs, because their population-based strategy yields an approximation of the PF in a single run. EAs obtain a Pareto approximation set by pursuing the entire PF while maximizing the diversity of solutions, and the balance between convergence and diversity in an EA depends fundamentally on the selection operator [6]. Popular MOEAs, including the Hypervolume Estimation Algorithm for multiobjective optimization (HypE) [7], the nondominated sorting genetic algorithm II (NSGA-II) [8], and the grid-based evolutionary algorithm (GrEA) [9], can effectively handle optimization problems with two or three objectives. However, these MOEAs all face great difficulties in many-objective optimization. As the objective space grows, the first problem is that almost all solutions in the population become nondominated with respect to one another, a phenomenon known as dominance resistance [10]. This severely weakens the selection pressure toward the PF and considerably slows the evolutionary process, mainly because most EMO algorithms use Pareto dominance as the selection criterion. The second problem is that the conflict between diversity and convergence worsens, mainly because most current diversity operators (such as crowding distance) tend to select dominance-resistant solutions. The third problem is computational complexity and efficiency: as the number of objectives increases, the complexity of an EMO algorithm grows significantly and its efficiency drops substantially. To tackle these problems, many methods have been proposed [11,12], which can be divided roughly into four categories: (1) Dimensionality-reduction-based methods: by analyzing the relationships between objectives or using feature selection techniques, these methods reduce the number of objectives. By discarding unimportant objectives, they decrease the difficulty of the original problem. However, they assume that the MaOP has redundant objectives, an assumption that may restrict their application [13].
(2) Relaxed-Pareto-dominance-based methods: these enhance the selection pressure toward the Pareto front through relaxed dominance relations such as those proposed in [14] and [15]. Their main drawback is that they involve one or more parameters that are difficult to choose. (3) Indicator-based approaches: an indicator-based method does not suffer from the selection-pressure problem, since it does not rely on Pareto dominance to push the population toward the PF. However, it suffers from the curse of dimensionality [16]. Among the available metrics, Hypervolume is probably the most popular one employed for multiobjective search, but its computation cost grows exponentially with the number of objectives [17].
(4) Decomposition-based methods: these develop a decomposition strategy together with a neighborhood concept. One of the most popular such algorithms, the multiobjective evolutionary algorithm based on decomposition (MOEA/D), was proposed by Zhang and Li (2007) [18]. An aggregation function is used to compare solutions, and uniformly distributed weight vectors preserve both the convergence and the diversity of the solutions.
MOEA/D achieves good convergence and diversity with low computational complexity, making it an effective method. With the decomposition option of the BBO/Complex algorithm, each subsystem has multiple objectives and multiple constraints [19], which offers more flexible decomposition options than traditional decomposition-based methods. BBO/Complex is explained in detail in Section 2. Despite the advances in adapting MOEAs to MaOPs, very little work has been reported on improving BBO/Complex to solve MaOPs while balancing convergence and diversity simultaneously. We present a new algorithm, the hybrid Metropolis BBO/Complex (Hmp/BBO), for many-objective optimization. Under the basic framework of the BBO/Complex algorithm, Hmp/BBO improves convergence and diversity in many-objective optimization by introducing the decomposition strategy and the PBI aggregation function of MOEA/D. Furthermore, selection in BBO/Complex plays a major role in the information exchange among subsystems: useful information should be forwarded to the subsystems it can improve without misleading unsuitable subsystems. In the original version of BBO/Complex, roulette wheel selection based on the emigration rates is used to select the emigrating islands, and during within-subsystem migration each SIV in an immigrating island has a chance to be replaced by an SIV from an emigrating island. However, it is not clear whether the new emigrating islands are suitable for these subsystems. During the within-subsystem migration phase, high-quality solutions found in the early search stages are easily replaced by more recent solutions, so the subsystems tend to become trapped in local convergence. The simulated annealing (SA) algorithm was proposed by Kirkpatrick et al.
[20] and Černý [21]. SA is an intelligent algorithm based on probabilistic stochastic search. It can jump out of local basins because it accepts not only improving solutions but, with some probability, inferior ones as well; this effectively avoids premature convergence to a local minimum and preserves solution diversity. We are inspired by the Metropolis criterion of the SA algorithm to solve the problem posed above. Details about SA are given in the next section.
On the other hand, when sharing information within a subsystem that has numerous objectives and constraints, a new method is needed to reduce the computation time of the central processor. Based on the discussion above, this paper focuses on the Hmp/BBO algorithm, which improves convergence and diversity in many-objective optimization. Our contributions are summarized as follows: (1) We design a new framework, the Hmp/BBO algorithm, for many-objective optimization.
(2) We introduce the PBI aggregation function to improve the convergence and diversity of Hmp/BBO.
(3) We adopt the Metropolis criterion to improve the balance between exploration and exploitation.
(4) We introduce a checking unit that ensures new islands are generated only within Hmp/BBO, saving a considerable amount of CPU time.
The remainder of this paper is organized as follows. The principles of BBO/Complex and SA are covered in Section 2. The hybrid Metropolis BBO/Complex algorithm (Hmp/BBO) is presented in Section 3. In Section 4, experimental studies are given to demonstrate the efficiency of the proposed method, together with some discussion. Finally, Section 5 concludes the paper and proposes future research directions.

BBO/Complex and Simulated Annealing
2.1. BBO/Complex. The biogeography-based optimization algorithm BBO/Complex is an innovative algorithm, introduced for the first time in 2013; according to [19] it provides optimization performance competitive with NSGA-II [8], differential evolution (DE) [22], ant colony optimization (ACO) [23], and many other algorithms. BBO/Complex extends the BBO algorithm to a system of multiple subsystems, each containing multiple objectives and multiple constraints. The BBO/Complex framework comprises one archipelago per subsystem, and each archipelago contains many islands; these islands represent candidate solutions to the problem. The BBO/Complex framework is distinct from other MOEAs in that it combines a system framework with an optimization algorithm, as shown in Figure 1. It provides an efficient model of communication between subsystems and a new way of migrating to share information both within subsystems and across subsystems.
The classical BBO/Complex algorithm can be described as follows: (1) Initialize the control parameters: population size, stopping condition, and mutation probability. The population is initialized with randomly generated individuals.
(2) Compute the objective and constraint similarity levels between all pairs of subsystems.
(3) Obtain the rank of the islands in each subsystem.
(4) Perform within-subsystem migration.
(5) Perform cross-subsystem migration.
(6) Perform mutation.
(7) Replace the worst islands in the population with the generation's good islands.
(8) If the termination condition is not met, go to step (3); otherwise, terminate.

Simulated Annealing Algorithm.
The algorithm is built on a metaheuristic analogy with the thermodynamics of material annealing [24][25][26]. The process starts at a high temperature, which is gradually cooled to a minimum; the objective is to minimize the cost function, which is expected to reach its lowest value at the freezing temperature. At each stage of the process the temperature decreases and a new state is created. Simulated annealing is grounded in statistical mechanics: in 1953, [27] adopted Boltzmann's probability distribution, which states that if a system maintains thermal equilibrium at temperature T, the probability distribution P(E) of its energy E is given by [27]

P(E) ∝ exp(−E / (k_B T)),

where k_B is Boltzmann's constant. The energy difference ΔE is the difference in cost function between the past and current iterations,

ΔE = f(x_new) − f(x_old).

For minimization problems, ΔE ⩽ 0 means f(x_new) ⩽ f(x_old), so the new point is accepted directly. Otherwise, the Metropolis criterion decides whether to accept or reject x_new; for ΔE > 0, acceptance is treated probabilistically, for example according to the relation p = 1 / (1 + exp(ΔE / T)). The acceptance probability is therefore governed by the temperature of the accepting process: at high temperature even a much worse state may be accepted, which prevents the search from falling into local optima, while as the temperature decreases the algorithm accepts only states that reduce the cost. Thus, in the iterative process, the way the temperature is reduced, known as the cooling schedule, is one of the key parameters. Once one cooling cycle is complete, the next cycle begins with a smaller temperature value. Therefore, the number of cooling cycles and the number of iterations per cycle are crucial settings: large values lead to better solutions but a longer run time, and vice versa, so a compromise between solution quality and processing speed must be chosen. The Metropolis criterion helps the algorithm balance exploration and exploitation.

Algorithm 1: Hmp/BBO pseudocode.
Output: population P
(1) Initialize all the parameters
(2) Generate the parent population, the weight vectors, and the neighborhood index set
(3) Apply the decomposition strategy
(4) while the termination condition is not met do
(5) Calculate the nondominated ranking in each subsystem
(6) Perform within-subsystem migration
(7) Perform cross-subsystem migration
(8) Perform mutation
(9) Clear the duplicate solutions
(10) Replace the worst solutions with good islands
(11) end while
(12) return P
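As a concrete illustration of the acceptance rule and cooling schedule described above, here is a minimal Python sketch. The function names and the geometric cooling schedule are our own illustrative choices, not prescriptions from the paper; the acceptance test is the classic Metropolis rule exp(−ΔE/T).

```python
import math
import random

def metropolis_accept(old_cost, new_cost, temperature, rng=random):
    """Decide whether to accept a candidate solution (minimization).

    An improving move (delta <= 0) is always accepted; a worsening move
    is accepted with probability exp(-delta / T), so worse states remain
    reachable at high temperature, which helps escape local optima.
    """
    delta = new_cost - old_cost
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

def cooled_temperatures(t0, alpha, n_cycles):
    """Geometric cooling schedule: T_k = alpha**k * t0."""
    t = t0
    for _ in range(n_cycles):
        yield t
        t *= alpha
```

At high temperature almost any move passes the test; as the schedule cools, only improving moves survive, which is exactly the exploration-to-exploitation transition the section describes.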

The Hybrid Metropolis BBO/Complex Algorithm
3.1. Framework of the Proposed Algorithm. Hmp/BBO uses the original BBO/Complex framework but extends it to a multiple-subsystem environment to accommodate many-objective optimization problems. First, the many-objective problem is decomposed into multiple subsystems. Generally speaking, there are two decomposition approaches, one based on the system requirements and the other based on the physical system; the number of subsystems is set by the user according to the decomposition strategy. The original BBO/Complex decomposes the problem based on system requirements. Because the objective space here is high-dimensional, a new decomposition method is needed: as illustrated in Figure 2, a PBI aggregation function decomposition enhances the convergence and diversity of the algorithm on many-objective problems. After this step, Hmp/BBO migration is divided into two categories, within-subsystem and cross-subsystem. During the within-subsystem migration phase, migrated solutions are accepted or rejected probabilistically via the Metropolis criterion, so that the algorithm can jump out of local optima. During the cross-subsystem migration stage, we use the roulette wheel method to select good solutions. Then we perform mutation and clear duplicate solutions. Our algorithm does not currently handle constraints; constraint handling is left for future work. Algorithm 1 presents the general framework of Hmp/BBO.

Generating Weight Vectors.
We set the weight vectors so that the optimal solutions of the subsystems are uniformly distributed along the PF. Several approaches for generating weight vectors for different decomposition strategies in MOEA/D have been suggested [28]; we use Das's method [29], with a predefined integer H that controls the number of divisions along each objective axis. For an m-objective problem, the total number of such vectors is N = C(H + m − 1, m − 1), and the uniformly distributed reference points have coordinates h_i / H with h_i ∈ {0, 1, . . ., H}. Taking a three-objective problem with H = 3 as an example, there are C(5, 2) = 10 reference points.
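The simplex-lattice construction above can be sketched in Python with a minimal stars-and-bars enumeration (the function name is ours; each returned tuple sums to 1 and its entries are multiples of 1/H):

```python
from itertools import combinations
from math import comb

def das_dennis_weights(m, H):
    """All weight vectors (h1,...,hm)/H with nonnegative integers hi summing to H."""
    vectors = []
    # choose m-1 "bar" positions among H+m-1 slots; gap sizes give the hi
    for bars in combinations(range(H + m - 1), m - 1):
        h, prev = [], -1
        for b in bars:
            h.append(b - prev - 1)
            prev = b
        h.append(H + m - 2 - prev)
        vectors.append(tuple(x / H for x in h))
    assert len(vectors) == comb(H + m - 1, m - 1)  # C(H+m-1, m-1) vectors
    return vectors
```

For m = 3 and H = 3 this produces the C(5, 2) = 10 reference points mentioned in the text.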

Decomposition Strategy.
The Hmp/BBO initialization generates N_w weight vectors, and the PBI aggregate function [30] is employed to divide the weight vectors into a set of L clusters (C_1, C_2, . . ., C_L). Because the weight vectors are uniformly distributed in the objective space, the number of weight vectors in one subsystem is approximately N_w/L. Therefore, the whole objective space is divided into L subsystems.
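A minimal sketch of how the N_w weight vectors might be grouped into L subsystems. Here each vector is assigned to the nearest of L given cluster centers; the helper name and the nearest-center rule are illustrative assumptions standing in for the paper's PBI-based partition.

```python
import math

def assign_to_subsystems(weights, centers):
    """Assign each weight vector to the subsystem whose center direction
    is nearest in Euclidean distance (a sketch of the clustering step)."""
    groups = {i: [] for i in range(len(centers))}
    for w in weights:
        best = min(range(len(centers)),
                   key=lambda i: math.dist(w, centers[i]))
        groups[best].append(w)
    return groups
```

With uniformly spread weight vectors and well-spaced centers, each group ends up with roughly N_w/L vectors, matching the text.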

The Nondominated Ranking System (NDRS).
NDRS was introduced in [31] as the ranking system of the multiobjective genetic algorithm (MOGA). It uses possibly nonconsecutive integers as ranks to reflect the relative performance of each individual in a population. Assume that we have a subsystem in which r_i is the rank of the i-th island, where a lower rank is better.

Algorithm 2: Within-subsystem migration pseudocode.
(1) Probabilistically choose the immigrating islands based on the NDRS.
(2) Use the Metropolis criterion, based on the emigration rates, to select the emigrating islands.
(3) Each SIV in an immigrating island has a chance to be replaced by an SIV from an emigrating island.

Algorithm 3: Cross-subsystem migration pseudocode.
(1) Randomly select indices from the neighborhood index set (choosing similar subsystems) and select the immigrating islands based on immigration rates from the population.
(2) Calculate the Euclidean distance between the islands of neighboring subsystems.
(3) Use roulette wheel selection based on the emigration rates to select the emigrating islands.
(4) Each SIV in an immigrating island has a chance to be replaced by an SIV from an emigrating island.
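The MOGA-style ranking used by NDRS, in which an individual's rank is one plus the number of individuals that dominate it (so ranks need not be consecutive), can be sketched as follows. Minimization is assumed and the function names are ours:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def moga_ranks(objectives):
    """Fonseca-Fleming ranking: rank = 1 + number of dominating individuals.
    Lower is better; nondominated individuals all receive rank 1."""
    return [1 + sum(dominates(other, f) for other in objectives)
            for f in objectives]
```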

PBI Aggregate Function.
In MOEA/D, several methods have been proposed for decomposing an MOP into single-objective optimization subproblems, such as the weighted sum method, the Tchebycheff method, and the Penalty-based Boundary Intersection (PBI) method; it was shown in [32] that MOEA/D-PBI is the best suited to many-objective optimization. This paper therefore uses the PBI approach. The scalar optimization function is defined as

g^pbi(x | λ, z*) = d_1 + θ d_2,  with  d_1 = ||(F(x) − z*)^T λ|| / ||λ||  and  d_2 = ||F(x) − (z* + d_1 λ / ||λ||)||,

where θ is the penalty parameter. In Figure 2, z* is the ideal point, d_1 is the Euclidean distance between the ideal point and the foot of the perpendicular drawn from the solution to the reference direction, and d_2 is the perpendicular distance of the solution from the reference direction. The effect of the θ value on the performance of Hmp/BBO is presented in Section 4.3.
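The standard PBI scalarization from MOEA/D can be written compactly as follows (a sketch assuming minimization; θ = 5 is a commonly used default in the MOEA/D literature, not a value prescribed by this paper):

```python
import math

def pbi(f, weight, z_star, theta=5.0):
    """Penalty-based Boundary Intersection aggregation g = d1 + theta * d2:
    d1 is the length of the projection of (f - z*) onto the reference
    direction, d2 the perpendicular distance to that direction."""
    diff = [fi - zi for fi, zi in zip(f, z_star)]
    norm_w = math.sqrt(sum(w * w for w in weight))
    d1 = abs(sum(d * w for d, w in zip(diff, weight))) / norm_w
    d2 = math.sqrt(sum((d - d1 * w / norm_w) ** 2
                       for d, w in zip(diff, weight)))
    return d1 + theta * d2
```

A point lying exactly on the reference direction incurs only the d1 term, while off-axis points pay the θ-weighted perpendicular penalty, which is how PBI trades convergence against diversity.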

3.7. Cross-Subsystem Migration. During the cross-subsystem migration stage, it is desirable that migrating islands be chosen from neighboring subsystems as much as possible.
In Hmp/BBO, each island is uniquely specified by a weight vector, and each weight vector is assigned to a neighborhood based on Euclidean distance, so islands can be selected from neighboring subsystems via the neighborhood index set. First, we select islands between subsystems based on the neighborhood index set. Second, emigrating islands are selected based on the PBI distance. Furthermore, poor candidate islands must be eliminated to improve diversity between subsystems: an inferior migrated island is not selected unless it passes the roulette wheel selection. With this restriction on cross-subsystem migration, the diversity performance of Hmp/BBO is enhanced. The procedure is described in Algorithm 3.

Mutation Algorithm.
In Hmp/BBO, these events are modeled as SIV mutation. The mutation rate m_i is determined from the species-count probability P_i by

m_i = m_max (1 − P_i / P_max),

where m_max is a user-predefined maximum mutation rate that m_i can reach and P_max = max(P_i). The ISI is a variable and a function of the SIVs. The mutation process is described in Algorithm 4.
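A minimal sketch of the probability-dependent mutation rate above (symbols follow the formula; the default ceiling of 0.05 is an illustrative assumption, not a value from the paper):

```python
def mutation_rate(p_i, p_max, m_max=0.05):
    """BBO-style mutation rate m_i = m_max * (1 - P_i / P_max): solutions
    with low species-count probability p_i mutate more, up to the user-set
    ceiling m_max."""
    return m_max * (1.0 - p_i / p_max)
```

Highly probable (medium-fitness) solutions thus mutate least, while improbable ones near the extremes are perturbed most, which is the usual BBO rationale for this rule.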
3.9. Clearing Duplicates. Clearing duplicated SIVs increases the diversity of solutions and avoids duplicating features across islands. The procedure is described in Algorithm 5.
In conclusion, Hmp/BBO improves convergence and diversity by introducing a new framework for many-objective optimization. In addition, whereas SA operates on a single individual, Hmp/BBO is population based: instead of running SA with one island, it runs with all islands in the subsystems. Thus, the internal search loops of SA within each cooling cycle can be omitted in Hmp/BBO without affecting solution quality, saving a significant amount of CPU time. Through these methods, the exploration and exploitation of the Hmp/BBO algorithm are much improved. The Hmp/BBO framework is described in Algorithm 6.

Computational Complexity of One Generation of Hmp/BBO.
In this section, we discuss the computational complexity of Hmp/BBO. For a population of size N and a problem with M objectives, the major computational costs lie in steps (5), (6), and (7) of Algorithm 1. Step (5), the nondominated ranking (NDRS), requires O(MN²) computations. Within-subsystem migration (step (6)) needs O(C N₁²) computations, where C is the number of Metropolis cycles and N₁ is the size of the parent population. Cross-subsystem migration (step (7)) needs O(L N₁²) computations, where L is the number of subsystems. Summing the above, the overall complexity of one generation of Hmp/BBO is O(MN² + C N₁² + L N₁²).

Simulation Results
In this section, to prove the validity of Hmp/BBO, we compare it with five state-of-the-art algorithms: BBO/Complex, NSGA-III, MOEA/D-PBI, HypE, and GrEA. NSGA-III is based on the Pareto dominance relation, but population diversity is maintained using a group of uniformly distributed reference points. MOEA/D-PBI is a representative decomposition-based approach and keeps solution diversity via a series of predefined weight vectors. HypE is a Hypervolume-based evolutionary algorithm for many-objective optimization. GrEA uses the NSGA-II framework and introduces two concepts (grid dominance and grid difference), three grid-based criteria, and a fitness adjustment strategy. We describe the test problems in Section 4.1 and the quality indicators in Section 4.2.
Then we present the five state-of-the-art algorithms used for comparison and the corresponding parameter settings in Section 4.3. Finally, the results are discussed in Section 4.4.

Test Problems.
To verify the proposed algorithm, the well-known DTLZ1–DTLZ4 test functions [33] and the whole WFG test suite [34] are used; their definitions are listed in Table 1. We consider only DTLZ1–4 from the DTLZ test suite, because the nature of the PFs of DTLZ5 and DTLZ6 is unclear beyond three objectives [35]. DTLZ1 is linear and multimodal; DTLZ2 is concave; DTLZ3 is concave and multimodal; DTLZ4 is concave and biased, as summarized in Table 1.

Quality Indicators.
In our empirical study, the following three widely used quality indicators are examined. The first reflects only the convergence of an algorithm; the second and third capture convergence and diversity simultaneously.
(1) Generational Distance (GD). Let P be the final set of nondominated points obtained in the objective space and P* a set of points evenly distributed over the true PF. GD reflects only the convergence of an algorithm; a smaller value means better quality. GD is defined as

GD = (Σ_{i=1}^{|P|} d_i²)^{1/2} / |P|,

where d_i is the Euclidean distance from the i-th point of P to its nearest point in P*.

(2) Inverted Generational Distance (IGD). Let P* be a set of points uniformly sampled over the true efficient front (EF) and P the set of solutions obtained by an algorithm. IGD measures convergence and diversity jointly; a smaller value means better quality. IGD is defined as

IGD = Σ_{v∈P*} d(v, P) / |P*|,

where d(v, P) is the minimum Euclidean distance from v to the points of P.

(3) Hypervolume (HV) Indicator. The HV indicator measures both the convergence and the diversity of a solution set. HV is used for the WFG test suite; the larger the HV value, the better the quality of the solution set for the PF. HV is defined as

HV(A) = volume( ∪_{x∈A} [f_1(x), r_1] × · · · × [f_m(x), r_m] ),

where m is the number of objectives, A is the set of nondominated points obtained in the objective space by an algorithm, and r = (r_1, r_2, . . ., r_m) is a reference point in the objective space dominated by all Pareto optimal points.

Algorithm 6: Hmp/BBO framework pseudocode.
(1) Initialize all the parameters: the N_w weight vectors, the cluster centers of the weight vectors, the number of subsystems L, and the neighborhood set of the weight vectors.
(2) Apply the decomposition strategy.
(3) for g ← 1 to G do {G = number of generations}
(4) Calculate the rank of the islands in each subsystem (NDRS).
(5) Probabilistically select the immigrating islands based on the island ranks; probabilistically select the emigrating islands based on the emigration rates.
(6) if g = 1 then
(7) find the initial temperature T_1
(8) else
(9) update the temperature for the g-th generation
(10) end if
(11) Save the vectors of all the populations (before migration) in matrix mm1 of size n1 × n2 and their cost functions in vector vv1 of length n.
(12) Do within-subsystem migration.
(13) Save the vectors of all the populations (after migration) in matrix mm2 of size n1 × n2 and their cost functions in vector vv2 of length n.
(14) for each island I do
(15) compute ΔE = vv2(I) − vv1(I)
(16) if ΔE ⩽ 0 then
(17) accept mm2(I, 1→n)
(18) else if the Metropolis criterion accepts ΔE at the current temperature then
(19) Accept the mm2(I, 1→n) vector of matrix mm2 as the updated population for the ISI
(20) else
(21) Reselect the previous mm1(I, 1→n) vector of matrix mm1 as the updated population for the ISI
(22) end if
(23) end if
(24) Randomly select indices from the neighborhood index set and select the immigrating islands based on immigration rates.
(25) if g^pbi(x_new | λ, z*) ⩽ g^pbi(x | λ, z*) then
(26) save the best islands x_new for emigration
(27) else
(28) Save the vectors of all the populations (before mutation) in matrix mm3 of size n3 × n4 and their cost functions in vector vv3 of length n.
(29) Do cross-subsystem migration.
(30) Use the roulette wheel method to select a population of length n.
(31) Do mutation.
(32) Do clear duplicated SIVs.
(33) if g > 1 then
(34) Replace the worst ISI with the good ISI saved in the elitism stage.
(35) end if
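The GD and IGD indicators can be sketched as follows (one common variant of each definition; smaller is better for both, and the function names are ours):

```python
import math

def gd(P, P_star):
    """Generational Distance: root of summed squared nearest-neighbor
    distances from the obtained set P to the reference set P*, over |P|."""
    return math.sqrt(sum(min(math.dist(p, q) for q in P_star) ** 2
                         for p in P)) / len(P)

def igd(P_star, P):
    """Inverted Generational Distance: mean distance from each reference
    point in P* to its nearest obtained point in P."""
    return sum(min(math.dist(q, p) for p in P) for q in P_star) / len(P_star)
```

Note the asymmetry: GD averages over the obtained set (convergence only), while IGD averages over the reference set, so an obtained set that misses part of the front is penalized, which is why IGD also reflects diversity.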

Parameters Setting.
The parameters for the six MOEAs considered in this study are listed below.
(8) Number of points in Monte Carlo sampling: set to 1,000,000 to ensure accuracy.

Result and Discussion
In this section, the GD metric is used to compare the convergence capability of the proposed Hmp/BBO with the other five algorithms. Table 4 shows the best, median, and worst GD values obtained by the six algorithms on DTLZ1 to DTLZ4 with different numbers of objectives. Figure 3 shows the parallel coordinates of the PFs obtained by Hmp/BBO and the other five MOEAs on the five-objective DTLZ1; the run shown is the one whose result is closest to the mean IGD value. A parallel coordinates plot is a visual representation of the overall performance of an algorithm in a high-dimensional space, where each polyline represents one point of that space. If the lines for each objective lie within [0, 1], the algorithm converges well; if, in addition, the lines are evenly distributed over [0, 1], the algorithm also distributes its solutions well. In Figure 4, the average performance scores for different numbers of objectives and different test problems are summarized; the score of Hmp/BBO is smaller than those of the other algorithms for each number of objectives.

Conclusions
In this paper, we propose a novel many-objective optimization algorithm called Hmp/BBO. It addresses many-objective optimization using an adaptive hierarchical mechanism: the problem is decomposed into many subsystems through the PBI aggregation function. By combining the advantages of MOEA/D, Hmp/BBO enhances both convergence and diversity. As shown by the results in Section 4, the performance of Hmp/BBO benefits from the PBI aggregation function and from controlling the migration flow of SIVs between rich and poor islands; within this process, old features are not always overwritten by features newly emigrated from other islands. Hmp/BBO also offers more flexible decomposition options than the other five state-of-the-art algorithms.
Future work will study Hmp/BBO on problems with more than five objectives and extend it to constrained many-objective problems. Constrained many-objective problems remain a new challenge for MaOPs, and we will try to extend our algorithm to solve them by combining it with constraint handling techniques. We would also like to apply Hmp/BBO to more real-world problems.

Figure 2 :
Figure 2: Illustration of decomposition strategy and PBI distance.

Figure 4 :
Figure 4: Average performance score over all test problems for different number of objectives.The smaller the score, the better the performance in the corresponding number of objectives.
An inferior migrated island is selected only if it passes the Metropolis criterion of SA. With this restriction on within-subsystem migration, the Hmp/BBO algorithm can avoid falling into local optima. The procedure is described in Algorithm 2.
with the new values that come from the probabilistically selected original islands mentioned above. Instead, the SIVs of the islands are kept in two temporary matrices of size n1 × n2, where each row represents one individual. The old independent variables are used again if and only if the modified individual exhibits lower solution quality and fails the Metropolis criterion.
Algorithm 4: Mutation pseudocode.
(1) for i ← 1 to L do {L = number of islands}
(2) Calculate the probability P_i based on the immigration and emigration rates {by an iterative method}
(3) Calculate the mutation rate m_i
(4) if rand < m_i (subject to the predefined mutation range) then
(5) Replace the SIV vector of ISI_i with a randomly generated one
(6) end if
(7) end for

Algorithm 5: Clear duplication pseudocode.
Check all SIVs on all ISIs:
(1) while there is a duplicated SIV do
(2) for i ← 1 to L do {L = number of islands}
(3) if any duplicated SIV is detected then
(4) Replace the duplicated SIVs in ISI_i with randomly generated ones
(5) end if
(6) end for
(7) end while

Table 1 :
Test functions utilized in this experiment.

Table 2 :
Maximum number of fitness evaluation for different function.

Table 3 :
Settings of grid division div in GREA.
(2) Stopping condition: each algorithm is run 20 times independently on each test problem. All algorithms are run on a desktop computer with a 2.6 GHz CPU, 8 GB RAM, and Windows 10. The stopping condition is the maximum number of fitness evaluations, as listed in Table 2.

Table 4 :
Best, median, and worst GD values obtained by DTLZ1 to DTLZ4 with six algorithms by different number of objectives.

Table 5 .
It shows the mean and standard deviation of the IGD values over 20 independent runs for the six compared MOEAs, with the best values marked in bold. Furthermore, Table 6 presents the average and standard deviation of the Hypervolume over 20 independent runs on the WFG problems, with the best performance highlighted in bold. The quality of the solution sets obtained by the six algorithms on all WFG test problems was compared in terms of Hypervolume. From the experimental results on the DTLZ and WFG test problems, we find that Hmp/BBO performs better than the other five algorithms.

Table 5 :
Best, median, and worst IGD values obtained on DTLZ1 to DTLZ4 by the six algorithms for different numbers of objectives. Performance scores are introduced to rank the global performance of the compared algorithms. For a given test problem, suppose there are k algorithms al_1, al_2, . . ., al_k; let δ_{i,j} be 1 if al_j outperforms al_i in terms of the IGD metric, and 0 otherwise. The performance score indicates how many other algorithms are better than al_i on that test problem, so the smaller the value, the better the algorithm. The performance score is defined as

P(al_i) = Σ_{j=1, j≠i}^{k} δ_{i,j}.

Table 6 :
Average and standard deviation of the HV values on WFG1 to WFG9 with six algorithms by different number of objectives.