An Enhanced Differential Evolution Based Algorithm with Simulated Annealing for Solving Multiobjective Optimization Problems

An enhanced differential evolution based algorithm, named multiobjective differential evolution with simulated annealing (MODESA), is presented for solving multiobjective optimization problems (MOPs). The proposed algorithm exploits simulated annealing to guide the search into more regions of the search space, yielding better convergence to the true Pareto-optimal front. In the proposed simulated annealing approach, a new domination-based acceptance probability function is introduced, and some potential solutions are assigned a life cycle that gives them priority to be selected into the next generation. Moreover, the algorithm incorporates an efficient diversity maintenance approach, which prunes the obtained nondominated solutions to produce a well-distributed Pareto front. The feasibility of the proposed algorithm is investigated on a set of five biobjective and two triobjective optimization problems, and the results are compared with those of three other algorithms. The experimental results illustrate the effectiveness of the proposed algorithm.


Introduction
Many real-world problems are MOPs, which has prompted extensive research on MOPs over the past few decades. A large number of multiobjective evolutionary algorithms (MOEAs) have been proposed, such as the well-known algorithms in [1][2][3][4]. For comprehensive overviews, the reader can refer to [5][6][7].
In the present study, we focus on differential evolution (DE) for solving MOPs. DE, one of the most popular evolutionary algorithms, was initially presented by Storn and Price [8,9]. DE has a simple principle and is easy to implement. It was originally used for solving single-objective optimization problems [10]. In 2001, Abbass et al. [11] presented a Pareto-frontier differential evolution algorithm (PDE) for solving MOPs. In 2002, Abbass [12] further improved the PDE with self-adaptive crossover and mutation operators. In the same year, the nondominated sorting and ranking concepts used in NSGA-II were incorporated into DE to solve MOPs in [13]. Xue et al. [14] proposed an algorithm called Pareto-based multiobjective differential evolution (MODE), which incorporates the concept of Pareto optimality into the mutation operator. Parsopoulos et al. [15] presented a vector evaluated differential evolution (VEDE) for solving MOPs, in which a domination-based selection operator is adopted to improve performance. Kukkonen and Lampinen [16,17] proposed a generalized differential evolution (GDE) and an improved version of GDE for solving MOPs with constraints. In 2006, Hernández-Díaz et al. [18] combined DE with rough set theory for MOPs. Wang et al. [19] proposed a self-adapted differential evolution algorithm, which adopts an external elitist archive and a crowding entropy diversity measure. Huang et al. [20,21] extended their self-adaptive differential evolution (SaDE) to MOPs as a multiobjective adaptive differential evolution (MOSaDE), and further extended MOSaDE with objective-wise learning strategies, naming it WO-MOSaDE. Ali et al. [22] proposed a modified differential evolution (MDE) for solving MOPs. In 2012, they [23] continued this research with an enhanced version of MDE, named the multiobjective differential evolution algorithm (MODEA), which incorporates opposition-based learning [24] and the concept of random localization in mutation.
Besides the DE method, in this paper the concept of simulated annealing is utilized for controlling the acceptance of candidate solutions. Simulated annealing, proposed by Kirkpatrick et al. [25], is a popular probabilistic metaheuristic for solving optimization problems. It is inspired by annealing in metallurgy, a technique that lowers the temperature of a material from a high level to a low one in a controlled way in order to increase the size of its crystals and reduce their defects. The simulated annealing algorithm seeks minimal-cost solutions by minimizing an associated energy function. In the process, the temperature starts at a high level (T_max) and then decreases gradually; if the cooling is sufficiently slow, the global minimum will be reached, as proven by S. Geman and D. Geman in [26]. There have been several attempts to adopt simulated annealing for MOPs. In particular, a few studies utilize the concept of Pareto dominance in multiobjective simulated annealing; the reader can refer to [27] for a review of this research. In [28], an archived multiobjective simulated annealing algorithm named AMOSA is proposed; it incorporates the concept of an archive and adopts a domination status to determine the acceptance of a new solution.
The rest of the paper is organized as follows. In Section 2, the problem formulation is introduced. The proposed algorithm is presented in Section 3. In Section 4, numerical examples are given to compare the performance of the proposed algorithm with that of the other algorithms, and the conclusion is drawn in Section 5.

Problem Formulation
We assume that the MOPs considered in this paper are minimization problems; any maximization problem can be transformed into a minimization problem. A MOP can be formally described as follows:

minimize F(x) = (f_1(x), f_2(x), . . ., f_m(x)), (1)
subject to g_i(x) ≤ 0, i = 1, 2, . . ., p, (2)
h_j(x) = 0, j = 1, 2, . . ., q, (3)

where m is the number of objective functions and f_i(x) is the ith objective function to be optimized. g_i(x) and h_j(x) are the inequality and equality constraints, respectively; p is the number of inequality constraints and q is the number of equality constraints. x = (x_1, x_2, . . ., x_n) is the vector of variables, called a solution of the MOP, where n is the number of variables. That is to say, solving a MOP means finding all x = (x_1, x_2, . . ., x_n) that satisfy (2) and (3). Given two solutions x = (x_1, x_2, . . ., x_n) and y = (y_1, y_2, . . ., y_n), x is said to dominate y if f_i(x) ≤ f_i(y) for all i ∈ {1, 2, . . ., m} and f_j(x) < f_j(y) for at least one j ∈ {1, 2, . . ., m}; this is denoted x ≺ y. A solution x is called Pareto-optimal if there is no other solution x′ satisfying x′ ≺ x. The set of all Pareto-optimal solutions is called the Pareto-optimal set (PS), and the set of all Pareto-optimal objective vectors is called the Pareto front (PF), such that PF = {F(x) | x ∈ PS} [29].
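The dominance test above is mechanical and worth pinning down concretely. The following minimal Python sketch (an illustration, not part of the original paper) checks whether one objective vector dominates another under minimization:

```python
def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(fx, fy))
    strictly_better = any(a < b for a, b in zip(fx, fy))
    return no_worse and strictly_better
```

For example, (1, 2) dominates (2, 3), while (1, 3) and (2, 2) are mutually incomparable.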

The Proposed Algorithm
The proposed algorithm is an extension of MODEA algorithm proposed by Ali et al. [23].In the proposed MODESA, we focus on improving the convergence of the obtained solution towards the Pareto-optimal front, and the diversity and distribution of the solutions.Besides using the simulated annealing process to control the acceptance of the candidate solutions, an efficient diversity maintenance mechanism [30] is also adopted to improve the distribution of these obtained solutions.

Population Initialization.
In MODESA, the initial population is generated in the same way as that of MODEA [23]. Because no a priori information about the solution is available, its corresponding opposite estimate is considered in order to obtain a better solution. Here, the concept of an opposite solution is given. Suppose that the current solution is x = (x_1, x_2, . . ., x_j, . . ., x_n) and the lower and upper bounds of x_j are v_{j,min} and v_{j,max}, respectively. Then the opposite solution x′ = (x′_1, x′_2, . . ., x′_j, . . ., x′_n) of x is calculated as x′_j = v_{j,min} + v_{j,max} − x_j [23].
The advantage of doing this is that the better of the solution and its opposite can be chosen as the initial solution. The initialization procedure used in this paper is as follows. A population of N solutions is generated using a uniform random distribution, and a population containing their opposite solutions is constructed as well. Next, the two populations are combined, and the best N solutions of the combined population are selected using the nondominated sorting of NSGA-II [2] and the diversity maintenance mechanism based on the vicinity distance [30].
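The opposition-based initialization can be sketched as follows in Python. This is a hedged illustration: only the formula x′_j = v_{j,min} + v_{j,max} − x_j comes from [23], the function names and uniform sampling are our own, and the subsequent reduction to the best N solutions by nondominated sorting is omitted.

```python
import random

def opposite(x, lo, hi):
    # Component-wise opposite solution: x'_j = v_j,min + v_j,max - x_j.
    return [l + h - xj for xj, l, h in zip(x, lo, hi)]

def init_population(n, lo, hi):
    """Generate n uniform random solutions plus their opposites (2n candidates)."""
    pop = [[random.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(n)]
    return pop + [opposite(x, lo, hi) for x in pop]
```

The 2N combined candidates would then be reduced to N by the nondominated sorting and vicinity distance pruning described later.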

Differential Evolution (DE).
Differential evolution, presented by Storn and Price [8,9], is a simple and efficient evolutionary algorithm. DE maintains a population of N solutions and contains three main operators: mutation, crossover, and selection.
The mutation operation is defined as

z = x_{r1} + F · (x_{r2} − x_{r3}), (4)

where x_{r1}, x_{r2}, and x_{r3} are randomly selected from the current population with r1 ≠ r2 ≠ r3 ≠ i, and F is a control parameter set in (0, 2] by Storn and Price [8,9]. For convenience of description, the solution to which the perturbation is added is called the base vector; x_{r1} (the base vector) is perturbed by adding to it the product of F and the difference between x_{r2} and x_{r3}. z = (z_1, z_2, . . ., z_n) is the resulting perturbed solution. Note that if z_j is out of bounds, it must be repaired. As mentioned above, the lower and upper bounds of variable x_j are v_{j,min} and v_{j,max}, respectively; in this study, if z_j is less than v_{j,min}, it is set to v_{j,min}, and if z_j exceeds v_{j,max}, it is set to v_{j,max}. Ali et al. [23] have shown that using the fittest of the three randomly selected individuals as the base vector has two advantages. First, the choice is neither purely random nor purely greedy. Second, it offers a localized effect that helps the search find more distinct regions of the search space around promising solutions [23]. Therefore, in MODESA, after choosing the three individuals, the best one of them is selected as the base vector.
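A minimal Python sketch of this mutation step follows. It assumes each individual carries a scalar rank (lower is better) as a stand-in for the fitness comparison used to pick the best of the three; for brevity the sketch does not exclude the target index from the sample.

```python
import random

def mutate(pop, ranks, F, lo, hi):
    # Pick three mutually distinct individuals at random.
    r1, r2, r3 = random.sample(range(len(pop)), 3)
    # Random localization: the best-ranked of the three becomes the base vector.
    r1, r2, r3 = sorted((r1, r2, r3), key=lambda i: ranks[i])
    z = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
         for j in range(len(pop[r1]))]
    # Repair out-of-bounds components by clipping to the variable bounds.
    return [min(max(zj, l), h) for zj, l, h in zip(z, lo, hi)]
```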
After the perturbed solution is generated, a trial solution w is produced by the crossover operation, defined for each variable j as

w_j = z_j, if rand_j ≤ cr or j = k; w_j = x_j, otherwise, (5)

where k ∈ {1, . . ., n} is a randomly chosen index, rand_j is a uniform random number in [0, 1], and cr ∈ [0, 1] is the crossover rate. In general DE, the selection operator decides whether the trial solution w replaces the target solution x:

x = w, if w is better than x; otherwise x is retained. (6)

In the proposed MODESA, the target individual x is not simply replaced by the trial solution w; instead, the simulated annealing procedure introduced in Section 3.3 is employed to determine the acceptance of the trial solution.
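The binomial crossover of (5) can be sketched as follows (a hedged illustration; the index k forced to come from the perturbed vector guarantees the trial solution differs from the target in at least one component):

```python
import random

def crossover(x, z, cr):
    """Binomial crossover: each component is taken from the perturbed
    vector z with probability cr; index k is always taken from z."""
    k = random.randrange(len(x))
    return [zj if (random.random() < cr or j == k) else xj
            for j, (xj, zj) in enumerate(zip(x, z))]
```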
3.3. Simulated Annealing Procedure. After the mutation phase, the crossover operator is performed, and the algorithm enters the selection phase. In MODESA, we incorporate the principle of simulated annealing into the selection phase. Before introducing the proposed selection phase, we first present the concept of domination used in the simulated annealing.

The Concept of Domination.
To calculate the acceptance probability of a new solution, the amount of domination used in AMOSA [28] is adopted here. Given two solutions x and y, the amount of domination is calculated by

Δdom_{x,y} = ∏_{i=1, f_i(x) ≠ f_i(y)}^{m} (|f_i(x) − f_i(y)| / R_i),

where m is the number of objectives and R_i is the range of the ith objective. When R_i is unknown, it is estimated from the solutions in the current population.
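As a concrete reading of the amount-of-domination formula, the following Python sketch (our illustration, following the AMOSA definition) multiplies the normalized objective gaps over the objectives where the two solutions differ:

```python
def amount_of_domination(fa, fb, ranges):
    """Δdom as in AMOSA: product of |f_i(a) - f_i(b)| / R_i over the
    objectives where the two objective vectors differ."""
    prod = 1.0
    for a, b, r in zip(fa, fb, ranges):
        if a != b:
            prod *= abs(a - b) / r
    return prod
```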

Simulated Annealing Used in the Selection Operator of the Proposed MODESA.
As shown in (6), if the trial solution dominates the current solution, general DE simply replaces the current solution with the trial solution. However, doing this may make the algorithm over-converge and easily get trapped in local optima. As is known, there are three possibilities when comparing two solutions, the current solution x and the new solution y: (1) x dominates y; (2) x is dominated by y; (3) x and y are incomparable.
If x dominates y, it is obvious that y is not better than x, so y can simply be discarded. If x is dominated by y, the new solution y is accepted outright. If x and y are incomparable, y may have the potential to guide the search towards an unreached area of the search space; therefore, in the DE process, simulated annealing is used to accept y with a certain probability. Meanwhile, this acceptance probability must be controlled: if it is not, all the potential solutions that are not strictly good will enter the next generation, which will confuse the algorithm and prevent it from maintaining a clear direction towards the Pareto-optimal front. That is to say, the acceptance probability must decrease over time. In simulated annealing, the acceptance probability decreases along with the temperature, which prevents the algorithm from producing too many potential solutions in the later stages. It is worth mentioning that if some potential solutions are admitted during the procedure, the next nondominated sorting may immediately remove them again, defeating the purpose of letting them explore more areas of the search space. Therefore, in the proposed MODESA, each individual is assigned a prior life cycle. The prior life cycle indicates how many times the individual can be selected preferentially in the nondominated sorting. For convenience, the life cycle value of solution x_i is denoted as l_i. Each time the individual is selected preferentially in the nondominated sorting, its prior life cycle is decreased by one as long as it is greater than zero. Note that, in the population initialization, the prior life cycle of each individual is set to zero.
In this paper, given a temperature temp, the new solution is selected with a probability

prob = exp(−Δdom_avg / temp), (9)

where Δdom_avg = (Σ_{i=1}^{k} Δdom_{x_i,y}) / k is the average amount of domination between the new solution y and the k solutions x_1, . . ., x_k that dominate it. If Δdom_avg is held constant, the acceptance probability decreases as the temperature decreases. The simulated annealing used in the selection operator of MODESA is shown in Algorithm 1.
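Putting the two pieces together, here is a hedged Python sketch of the acceptance probability (the helper repeats the Δdom computation so the block is self-contained; treating a candidate with no dominators as accepted with probability 1 is our reading of the text):

```python
import math

def amount_of_domination(fa, fb, ranges):
    # Δdom as in AMOSA: product of |f_i(a) - f_i(b)| / R_i over differing objectives.
    prod = 1.0
    for a, b, r in zip(fa, fb, ranges):
        if a != b:
            prod *= abs(a - b) / r
    return prod

def acceptance_probability(new_f, dominators_f, ranges, temp):
    """prob = exp(-Δdom_avg / temp), with Δdom_avg the mean amount of
    domination between the candidate and the k solutions dominating it."""
    if not dominators_f:
        return 1.0  # no solution dominates the candidate
    total = sum(amount_of_domination(f, new_f, ranges) for f in dominators_f)
    return math.exp(-(total / len(dominators_f)) / temp)
```

As the temperature falls, the probability of admitting a dominated candidate decays towards zero, matching the cooling behavior described above.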

Diversity Maintenance Mechanism.
The diversity maintenance mechanism adopted in this paper was presented by Kukkonen and Deb [30] for pruning sets of nondominated solutions. Instead of using the crowding distance of Deb et al. [2], the method proposes a new crowding estimate called the vicinity distance, based on nearest neighbors. In this paper, we call this method the vicinity distance based pruning method for convenience. The basic process of the method is to remove the most crowded solutions from the nondominated set one by one; after each removal, the vicinity distance values of the remaining solutions must be updated. The vicinity distance estimates the density of solutions surrounding a particular solution by computing the product of the distances to its k nearest neighbors. The solutions with the smallest vicinity distance are considered the most crowded. In [30], the value of k is set to the number of objectives, that is, k = m. The reader can refer to [30] for more details about this method.
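The pruning loop can be sketched as follows in Python. This is our illustration of the idea in [30]: the naive full recomputation after every removal shown here is simple but not the efficient bookkeeping of the original method.

```python
import math

def vicinity_distance(i, objs, k):
    """Product of Euclidean distances from solution i to its k nearest
    neighbours in objective space (k is taken as the number of objectives)."""
    d = sorted(math.dist(objs[i], objs[j]) for j in range(len(objs)) if j != i)
    prod = 1.0
    for dj in d[:k]:
        prod *= dj
    return prod

def prune(objs, target):
    """Remove the most crowded solutions one at a time until `target` remain,
    recomputing vicinity distances after every removal."""
    objs = list(objs)
    k = len(objs[0])  # k = number of objectives, as in [30]
    while len(objs) > target:
        worst = min(range(len(objs)), key=lambda i: vicinity_distance(i, objs, k))
        objs.pop(worst)
    return objs
```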

The Framework of the Proposed MODESA Algorithm.
As mentioned before, the proposed algorithm first uses the population initialization phase to generate a population of N solutions. After the nondominated sorting, each solution has a ranking value. Next, the main loop begins. We describe the t-th generation of MODESA in the following.
At the t-th generation, an empty temporary population is created first; it will store the good new solutions generated by DE during the loop. Then, for each individual in the current population, the DE operators are used to generate a trial individual. Next, there are three possibilities when comparing the current individual with the trial individual. If the trial individual dominates the current individual, the current solution is simply replaced by the trial solution. If the trial individual and the current solution are incomparable, the proposed simulated annealing process determines whether the trial individual is accepted. If the trial individual is dominated by the current solution, the trial individual is still put into the temporary population, as MODEA does. After all the individuals are handled, the current population and the temporary population are combined into a union population. Note that if the current generation is the last one, the members of the union population that are dominated by any other individual must be removed. The reason is that we allow some potential solutions, which are not strictly nondominated but have a positive life cycle, to exist in the population; at the end of the run these solutions must be removed, or they would degrade the quality of the final results. Finally, the nondominated sorting and the vicinity distance based pruning approach are used to select the best N solutions.
In this paper, the parameters that need to be set a priori are listed in Table 1, and the pseudocode of MODESA is shown in Pseudocode 1.

The Computational Complexity Analysis.
The proposed MODESA approach is simple to implement. In the following, the overall computational complexity of MODESA is analyzed. Basic operations and their worst-case complexities are given as follows.
(3) The procedure of checking the domination status of the trial solution and the current solution is O(M × N), and the procedure of simulated annealing is O((M + M × N) × N).
Here, M and N represent the number of objectives and the population size, respectively. The overall worst-case complexity of MODESA is O(MN² log N + MN²).

The Difference between MODESA and MODEA.
Considering that the proposed algorithm MODESA is an improvement of MODEA, it is necessary to clarify the differences between them. There are two main differences, as follows.
(1) The handling of the comparison between the new solution and the current solution differs. MODEA has two processing paths, depending on the result of the comparison: if the new solution dominates the current solution, MODEA replaces the current solution with the new solution and puts the current solution into the temporary population; otherwise, the new solution is put into the temporary population. After every solution in the population has been handled, the two populations are combined, which means that all new solutions are retained. MODESA, in contrast, distinguishes three cases. If the new solution dominates the current solution, the current solution is replaced by the new solution and discarded directly; a solution worse than the new solution is not retained. If the new solution and the current solution are incomparable, the proposed simulated annealing procedure determines the acceptance of the new solution; moreover, the use of a prior life cycle enables some potential solutions to be selected preferentially in the next generation. Otherwise, the new solution is added to the temporary population and the current solution is retained in the current population.
(2) The diversity maintenance mechanism is different.
In MODEA, the diversity preserving strategy based on the crowding distance of NSGA-II is utilized, while in MODESA, the vicinity distance based pruning method of [30] is adopted.

Results and Discussion
4.1. Test Problems. The performance of the proposed algorithm is tested on a set of five biobjective and two triobjective optimization problems. The first five problems are ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6, described in [31]. The two triobjective optimization problems, DTLZ1 and DTLZ2, are taken from Deb [32]. None of these problems has any constraints. All objective functions are to be minimized.
The number of variables, their bounds, the Pareto-optimal solutions, and the characteristics of the Pareto-optimal front of these test problems are also shown in Table 2 [2].For the test problems DTLZ1 and DTLZ2, the number of objectives is set to 3. Therefore, the number of variables for DTLZ1 and DTLZ2 is equal to 7 and 12, respectively.

Performance Measures.
In this section, three performance measures are introduced to compare the effectiveness of the proposed algorithm MODESA with that of the other algorithms. Here, let P* and P denote the Pareto-optimal front and the Pareto front obtained by the algorithm, respectively. In this study, the three performance measures evaluate the convergence, the diversity, and both of them, separately. It is worth mentioning that all three measures can be used only when the Pareto-optimal front of the particular test problem is known. In this paper, 500 evenly distributed solutions in the PF are selected as P* for each biobjective problem, and 990 solutions for each triobjective problem, the same as in [33].
The reader can download these solutions from the website http://dces.essex.ac.uk/staff/qzhang/index.html.

Inverted Generational Distance (IGD).
The concept of inverted generational distance (IGD) was proposed by Coello and Cortés in [34]. The metric can be formally described as

IGD(P*, P) = (Σ_{x ∈ P*} d(x, P)) / |P*|,

where d(x, P) is the minimum Euclidean distance between the solution x and the solutions in P. The IGD-metric value evaluates both the convergence and the diversity of P to a certain degree. Algorithms with smaller IGD values are preferable.
Generational Distance (GD).
The GD metric calculates the distance between P and P*. It can be defined as

GD(P, P*) = (Σ_{x ∈ P} d(x, P*)) / |P|,

where d(x, P*) is the minimum Euclidean distance between the solution x and the solutions in P*. GD evaluates the closeness of the obtained Pareto front to the Pareto-optimal front. Algorithms with smaller GD values are desirable.
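Both metrics reduce to averages of nearest-neighbor distances and can be sketched in a few lines of Python (an illustration; the simple arithmetic-mean form shown here mirrors the definitions above, though some papers use a root-mean-square variant of GD):

```python
import math

def igd(pf_true, pf_approx):
    """Inverted generational distance: mean distance from each reference
    point in the true front to its nearest point in the obtained front."""
    return sum(min(math.dist(x, y) for y in pf_approx) for x in pf_true) / len(pf_true)

def gd(pf_approx, pf_true):
    """Generational distance: mean distance from each obtained point to
    its nearest point in the true front."""
    return sum(min(math.dist(x, y) for y in pf_true) for x in pf_approx) / len(pf_approx)
```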

Spread (Δ).
The spread metric Δ was first proposed by Deb et al. [2] to measure the spread of the nondominated solutions obtained by an algorithm. However, it is only suitable for biobjective problems. Wang et al. [19] extended the metric to make it suitable for problems with more than two objectives. The improved Δ can be stated as

Δ = (Σ_{i=1}^{m} d(E_i, Ω) + Σ_{x ∈ Ω} |d(x, Ω) − d̄|) / (Σ_{i=1}^{m} d(E_i, Ω) + |Ω| · d̄),

where Ω is the set of obtained solutions, m is the number of objectives, E_1, . . ., E_m are the m extreme solutions of the true Pareto front (PF), d(x, Ω) is the distance from x to its nearest neighbor in Ω, and d̄ is the mean of these nearest-neighbor distances. The smaller the value of Δ, the better the distribution and diversity of the obtained nondominated solutions.
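A hedged Python sketch of the generalized spread follows (our reading of the formula above; `extremes` stands for the extreme points of the true front, and nearest-neighbor distances are computed within the obtained set):

```python
import math

def spread(front, extremes):
    """Generalized spread Δ: combines the distance of the obtained front to
    the true extreme points with the variation of nearest-neighbour
    distances inside the front. Smaller is better."""
    # Nearest-neighbour distance of each solution within the front.
    d = [min(math.dist(x, y) for y in front if y is not x) for x in front]
    d_mean = sum(d) / len(d)
    # Distance from each true extreme point to the obtained front.
    d_ext = sum(min(math.dist(e, x) for x in front) for e in extremes)
    return (d_ext + sum(abs(di - d_mean) for di in d)) / (d_ext + len(front) * d_mean)
```

A perfectly uniform front that reaches the extreme points yields Δ ≈ 0, while clustering inflates the metric.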

Experimental Setup.
In this paper, the population size and the number of generations are set to 100 and 250, respectively. The other parameters of NSGA-II and MODEA remain the same as in their original studies [3,23]. MODESA has four additional parameters: T_max, T_min, the cooling rate α, and the prior life cycle LC. Here we set T_max = 100, T_min = 10^-7, α = 0.6, and LC = 1 for this study. Each algorithm runs 10 times independently on each problem. All the test problems have been executed on Microsoft Windows XP with an Intel Core 2 Duo CPU E8400 at 3.00 GHz and 2 GB RAM. All methods have been implemented in C++.

Results of Biobjective Optimization Problems.
Tables 3 and 4 show the mean and standard deviation of the IGD-metric and Δ-metric values of the nondominated solutions obtained in 10 runs on the ZDT problems. It is obvious from Tables 3 and 4 that the IGD-metric values of MODESA are the best among the three algorithms except on the test problem ZDT4.
The test problems ZDT1 and ZDT2 may be the simplest problems in the ZDT series. The Pareto-optimal front of ZDT1 is convex, while that of ZDT2 is nonconvex. It is clear from Tables 3 and 4 that both the IGD-metric and spread-metric values of MODESA are better than those of NSGA-II and MODEA. Figures 1 and 2 demonstrate that MODESA converges to the Pareto-optimal front and has a good spread over the entire front on ZDT1 and ZDT2. It can be concluded that the proposed MODESA handles these two kinds of test problems well.
The third test problem, ZDT3, differs from the above two. Its Pareto-optimal front consists of five disjoint curves. From Figure 3, it is obvious that the nondominated solutions obtained by MODESA almost converge to the Pareto-optimal front and have a good distribution over the entire front. From Tables 3 and 4, MODESA obtains the best IGD-metric and spread-metric values among the three algorithms.
The fourth test problem, ZDT4, is perhaps the most difficult of the five. It has 21^9 different local Pareto-optimal fronts that can mislead a multiobjective optimization algorithm [2]. From Tables 3 and 4, NSGA-II obtains the best IGD-metric and spread-metric values among the three algorithms. In fact, in the 10 independent runs, some results of MODEA and MODESA are better than those of NSGA-II; however, MODEA and MODESA obtain poor results in a few runs, which affects the mean value over the 10 runs. Indeed, the best IGD-metric value of MODESA in the 10 runs is 0.0044915, which is better than that of NSGA-II. Figure 4 shows the nondominated solutions with the lowest IGD-metric values found by MODESA in the 10 runs. It is clear from Figure 4 that MODESA can achieve quite good convergence and distribution over the Pareto-optimal front, which reveals that the proposed algorithm is a promising but unstable algorithm for ZDT4. We leave this problem for future work.
The last test problem, ZDT6, features a thin density and a nonuniform spread of solutions [23]. We can observe from Tables 3 and 4 that MODESA achieves the best convergence and spread in comparison with the other two algorithms. To show the superiority of MODESA, plots of the nondominated solutions found by MODESA and MODEA on ZDT6 are shown in Figures 5 and 6, respectively. It is clear from Figures 5 and 6 that MODESA obtains a better spread than MODEA.

Results of Triobjective Optimization Problems.
In this section, to show the effectiveness of the proposed MODESA on problems with more than two objectives, we choose the two triobjective optimization problems DTLZ1 and DTLZ2 for comparison. The results of NSGA-II, MODEA, and MODESA are given in Table 5. It is obvious from Table 5 that MODESA performs better than the other two algorithms in terms of both the IGD-metric and the spread-metric. Figures 7, 8, 9, and 10 show the nondominated solutions obtained by MODEA and MODESA, and it is obvious that MODESA outperforms MODEA on both triobjective optimization problems. In particular, MODESA has a better distribution than MODEA; the reason is that the vicinity distance based pruning method maintains the diversity and distribution of the nondominated solutions better than the crowding distance of Deb.

Effect of the Proposed Simulated Annealing Procedure.
As mentioned in Section 3.5.3, one of the main differences between MODEA and MODESA is the employment of the proposed simulated annealing procedure. To show its effect, a variant of MODESA, MODESA-CD (multiobjective differential evolution with crowding distance), is constructed for comparison. MODESA-CD is identical to MODESA except that the vicinity distance based pruning method is replaced with the crowding distance based approach of NSGA-II. In other words, the only difference between MODEA and MODESA-CD is the use of the proposed simulated annealing procedure. Table 6 shows the mean and standard deviation of the GD-metric values of the nondominated solutions in 10 runs, where the population size is 100 and the number of generations is 250. It is obvious from Table 6 that all the GD-metric values of MODESA-CD are better than those of MODEA. The reason for the worse performance of MODEA is that MODEA always admits both the current and the new solutions to the next generation, which makes the search chaotic and unable to maintain a clear direction towards the Pareto-optimal front. In MODESA-CD, the bad solutions are removed and only some potential solutions are accepted, with a time-varying probability.

Conclusion
In this paper, an enhanced differential evolution based algorithm with simulated annealing, named MODESA, is presented for solving MOPs. During selection, MODESA employs a simulated annealing procedure to control the acceptance of each candidate solution, and the concept of a prior life cycle is proposed to allow some potential solutions to enter the next generation and escape from local optima. Finally, a fast and efficient diversity maintenance mechanism is adopted. The proposed algorithm is tested on five biobjective and two triobjective optimization problems in terms of the IGD metric and the spread metric, and the experimental results show that MODESA outperforms the other two algorithms on these test problems except on ZDT4. Furthermore, the convergence behavior is also compared with that of MODEA, and the computational results show the superiority of the proposed MODESA.
The performance of the proposed MODESA on the test problem ZDT4 can be improved further. In summary, MODESA is a simple and efficient method for solving MOPs.

Figure 1: Plots of the nondominated solutions with the lowest IGD-metric values found by MODESA in 10 runs in the objective space on ZDT1, where the population size is 100 and the number of generations is 250.

Figure 3: Plots of the nondominated solutions with the lowest IGD-metric values found by MODESA in 10 runs in the objective space on ZDT3, where the population size is 100 and the number of generations is 250.

Figure 5 :
Figure 5: Plots of the nondominated solutions with the lowest IGD-metric values found by MODESA in 10 runs in the objective space on ZDT6, where the population size is 100 and the number of generations is 250.

Figure 6 :
Figure 6: Plots of the nondominated solutions with the lowest IGD-metric values found by MODEA in 10 runs in the objective space on ZDT6, where the population size is 100 and the number of generations is 250.

Figure 7: Plots of the nondominated solutions with the lowest IGD-metric values found by MODEA in 10 runs in the objective space on DTLZ1, where the population size is 100 and the number of generations is 250.

Figure 9: Plots of the nondominated solutions with the lowest IGD-metric values found by MODEA in 10 runs in the objective space on DTLZ2, where the population size is 100 and the number of generations is 250.
Algorithm 1: The proposed simulated annealing selection.
Input: current individual x_i, trial individual w_i, current temperature T, minimum temperature T_min, temporary population P_temp, prior life cycle value LC.
If (T > T_min) {
    Compute the value of Δdom_avg by (9).
    Calculate the acceptance probability prob: prob = exp(−Δdom_avg / T).
    With a probability prob Do
        Put the current individual x_i into the temporary population: P_temp = P_temp ∪ {x_i}.
        Replace the current individual x_i by the trial individual w_i: x_i = w_i.
        Assign the life value LC to the new individual x_i: l_i = LC, where l_i represents the life cycle value of solution x_i.
    otherwise
        Put the trial individual w_i into the temporary population: P_temp = P_temp ∪ {w_i}.
} // End If

Table 1 :
The parameters used in the proposed algorithm.

Table 2 :
Test problems used in the paper.

Table 3 :
The mean and standard deviation of the IGD-metric values of the nondominated solutions in 10 runs on the ZDT problems, where the population size is 100 and the number of generations is 250.

Table 4 :
The mean and standard deviation of the Δ-metric values of the nondominated solutions in 10 runs on the ZDT problems, where the population size is 100 and the number of generations is 250.

Table 5 :
The mean and standard deviation of the IGD-metric and Δ-metric values of the nondominated solutions in 10 runs on DTLZ1 and DTLZ2, where the population size is 100 and the number of generations is 250.

Table 6 :
The mean and standard deviation of the GD-metric values of the nondominated solutions in 10 runs, where the population size is 100 and the number of generations is 250.
In the near future, applying MODESA to constrained MOPs and real-life application problems will be our work.