Disruption-Based Multiobjective Equilibrium Optimization Algorithm

Nature-inspired computing has attracted substantial attention since its origin, especially in the field of multiobjective optimization. This paper proposes a disruption-based multiobjective equilibrium optimization algorithm (DMOEOA). A novel mutation operator, named the layered disruption method, is integrated into the proposed algorithm to enhance the exploration and exploitation abilities of DMOEOA. To demonstrate the advantages of the proposed algorithm, it is evaluated on various benchmarks against five other multiobjective optimization algorithms. The test results indicate that DMOEOA exhibits better performance on these problems, with a better balance between convergence and distribution. In addition, the proposed algorithm is applied to the structural optimization of an elastic truss alongside the same five multiobjective optimization algorithms. The obtained results demonstrate that DMOEOA not only performs well on benchmark problems but is also expected to find wide application in real-world engineering optimization problems.


Introduction
Conventional mathematical optimization methods suffer from the disadvantage of becoming trapped in local optima on nonlinear optimization problems. Moreover, such optimization algorithms are often highly complex and specialized. Inspired by the idea of biological evolution in nature, metaheuristic optimization algorithms have attracted substantial attention due to their advantages of local-optima avoidance and easy implementation. Research on optimization algorithms has developed rapidly owing to the emergence of metaheuristics. Many nature-inspired optimization algorithms have been proposed in the past few decades, including particle swarm optimization (PSO) [1], ant colony optimization (ACO) [2], evolution strategies (ES) [3], the genetic algorithm (GA) [4], the artificial bee colony algorithm (ABC) [5], the gravitational search algorithm (GSA) [6], the bat algorithm (BA) [7], the flower pollination algorithm (FPA) [8], the grey wolf optimizer (GWO) [9], the whale optimization algorithm (WOA) [10], disruption particle swarm optimization (DPSO) [11], and the equilibrium optimization algorithm (EO) [12]. Most of them are designed to handle single-objective optimization problems.
However, real-world optimization problems usually involve more than one objective to be optimized; multiobjectivity is a common characteristic of real problems. In contrast to a single-objective problem, a multiobjective problem considers several conflicting objectives simultaneously. Instead of a single optimal solution, a multiobjective optimization problem typically admits a set of alternative trade-offs between the objectives, called Pareto optimal solutions [13]. Traditional methods for multiobjective optimization sometimes fail to produce well-distributed solutions along the Pareto front and may have difficulty finding Pareto optimal solutions in nonconvex regions [14]. In 1985, Schaffer [15] proposed the vector evaluated genetic algorithm (VEGA) and applied it to an optimization problem involving multiple objectives for the first time. Then, a series of early multiobjective optimization algorithms based on Pareto optimality were proposed in succession, such as the multiple objective genetic algorithm (MOGA) [16], the niched Pareto genetic algorithm (NPGA) [17], and the nondominated sorting genetic algorithm (NSGA) [18]. These multiobjective optimization algorithms are characterized by individual selection based on nondominated ranking and by population-diversity maintenance based on a fitness-sharing mechanism. Owing to the effectiveness of nature-inspired optimization algorithms, research on multiobjective optimization algorithms has attracted considerable attention in the past few decades.
Some classical multiobjective evolutionary algorithms have been proposed, including the nondominated sorting genetic algorithm version 2 (NSGA-II) [19], region-based selection in evolutionary multiobjective optimization (PESA2) [20], the improved strength-Pareto evolutionary algorithm (SPEA2) [21], the multiobjective evolutionary algorithm based on decomposition (MOEA/D) [22], multiobjective particle swarm optimization (MOPSO) [23], and the multiobjective simulated-annealing algorithm (MOSA) [24]. In recent years, various novel multiobjective optimization algorithms have been proposed, such as the multiobjective gravitational search algorithm (MOGSA) [25], the grid-based evolutionary algorithm (GrEA) [26], the multiobjective grey wolf algorithm (MOGWO) [27], the multiobjective ant lion optimizer (MOALO) [28], and the multiobjective whale optimization algorithm (MOWOA) [29].
Building on the effective search mechanism of the equilibrium optimization algorithm and a novel mutation operator proposed in this work, this paper presents a disruption-based multiobjective equilibrium optimization algorithm (DMOEOA) capable of handling multiobjective optimization problems. The novel mutation operator, named the layered disruption method, is first proposed here to enhance the exploration and exploitation abilities of DMOEOA. In addition, according to the No Free Lunch theorem [30], no single optimization algorithm can solve all optimization problems effectively. This theorem also provides researchers with the opportunity and motivation to propose new multiobjective optimization algorithms.
In this paper, the basic concepts of multiobjective optimization problems and the grid mechanism are given in Section 2. The equilibrium optimization operator and the layered disruption method are introduced in Section 3. Section 4 provides experimental results and analysis of DMOEOA on benchmark functions against five multiobjective optimization algorithms; the analysis of the layered disruption method and a parametric study are also conducted in this section. In addition, the application of DMOEOA to the structural optimization of an elastic truss is presented in Section 5. Finally, concluding remarks are given in Section 6.

Basic Concepts
In this section, the concepts of multiobjective optimization problems (MOPs) are given first; then some definitions of the grid mechanism are provided.

Multiobjective Optimization Problems.
The optimization of a problem with more than one objective is called multiobjective optimization. Without loss of generality, a MOP can be formulated as a minimization problem as follows:

\min_{X} F(X) = (f_1(X), f_2(X), \ldots, f_K(X)), \quad \text{s.t. } L_s \le x_s \le J_s, \; s = 1, 2, \ldots, n,

where X = (x_1, x_2, \ldots, x_n) refers to the decision vector in the search space R^n, f_k(X) denotes the k-th objective to be optimized in the objective space R^K, and L_s and J_s represent the lower limit and upper limit of the s-th decision variable, respectively.
Definition 1 (Pareto dominance). Given two decision vectors X and Y with objective vectors F(X) = (f_1(X), \ldots, f_K(X)) and F(Y) = (f_1(Y), \ldots, f_K(Y)), X is said to dominate Y (denoted X ≺ Y) iff f_k(X) \le f_k(Y) for all k \in \{1, \ldots, K\} and f_k(X) < f_k(Y) for at least one k.

Definition 2 (Pareto optimality). An obtained solution X is Pareto optimal iff ∄Y ∈ R^n : Y ≺ X.
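As a quick illustration of Definition 1, a minimal Pareto-dominance check for minimization problems can be sketched as follows (an illustrative helper, not the paper's code):

```python
import numpy as np

def dominates(fx, fy):
    """Return True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))
```

For example, `dominates([1, 2], [2, 3])` holds, while two mutually nondominated vectors such as `[1, 3]` and `[2, 2]` dominate neither one another nor themselves.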
Definition 3 (Pareto optimal set). The set of Pareto optimal solutions is called the Pareto optimal set (PS), defined as follows: PS = \{X ∈ R^n \mid ∄Y ∈ R^n : Y ≺ X\}.
Definition 4 (Pareto optimal front). Given a Pareto optimal set (PS), the Pareto optimal front is defined as follows: PF = \{F(X) \mid X ∈ PS\}.

Grid Mechanism.
The grid mechanism [23, 26, 29] is introduced into DMOEOA owing to its conciseness and high efficiency. In this mechanism, each individual is assigned a grid location in each dimension of the objective space. The grid mechanism reflects both the diversity and the convergence of the obtained solutions. Some definitions of the grid mechanism used in this work are as follows [26]:

Definition 5 (Grid boundary). Let min f_i(x) and max f_i(x) represent the minimum and maximum values of the i-th objective, respectively; the lower limit l_i and the upper limit u_i of the grid in the i-th objective space are as follows:

l_i = \min f_i(x) - \frac{\max f_i(x) - \min f_i(x)}{2 \cdot div}, \quad u_i = \max f_i(x) + \frac{\max f_i(x) - \min f_i(x)}{2 \cdot div},

where div represents the number of divisions (i.e., grids) of the objective space in each dimension.
Definition 6 (Grid location). The grid location of an individual can be determined as follows:

G_i(x) = \left\lceil \frac{f_i(x) - l_i}{d_i} \right\rceil, \quad d_i = \frac{u_i - l_i}{div},

where d_i is the width of the grid in the i-th objective and \lceil \cdot \rceil denotes rounding up. For example, in Figure 1, the grid locations of individuals B and C are (1, 4) and (2, 3), respectively.
Definition 7 (Grid ranking). The grid ranking (GR) of an individual is defined as the summation of its grid locations over all objectives:

GR(x) = \sum_{i=1}^{K} G_i(x).

The smaller the value of GR(x), the more individuals in the obtained solutions are dominated by individual x. As shown in Figure 1, the grid ranking of A is 4 whereas that of D is 6, which means that individual A is closer to the true Pareto front than individual D. The normalized Euclidean distance between an individual and the minimal boundary point of its grid cell is called the grid coordinate point distance (GCPD), defined as follows:

GCPD(x) = \sqrt{\sum_{i=1}^{K} \left( \frac{f_i(x) - l_i - (G_i(x) - 1)\, d_i}{d_i} \right)^2}.

For individuals with the same grid ranking, the one with the smaller GCPD value should be selected first. For example, in Figure 1, individuals E and F have the same GR value; however, the GCPD of individual E is smaller than that of individual F, so individual E is preferred. The general framework of the grid mechanism is shown in Algorithm 1.
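The grid quantities of Definitions 5-7 can be sketched together in a few lines; note that the boundary padding of (max − min)/(2·div) follows the GrEA-style grid of [26] and is an assumption about the exact variant used here:

```python
import numpy as np

def grid_stats(F, div=10):
    """F: (N, K) matrix of objective values. Returns grid locations (Def. 6),
    grid ranking GR (Def. 7), and GCPD for each individual.
    The padding of (max - min) / (2 * div) is a GrEA-style assumption [26]."""
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    pad = (fmax - fmin) / (2 * div)
    l, u = fmin - pad, fmax + pad           # grid lower/upper limits (Def. 5)
    d = (u - l) / div                       # grid width per objective
    loc = np.ceil((F - l) / d).astype(int)  # grid location, rounded up (Def. 6)
    loc = np.clip(loc, 1, div)
    GR = loc.sum(axis=1)                    # grid ranking (Def. 7)
    corner = l + (loc - 1) * d              # minimal boundary point of each cell
    GCPD = np.sqrt((((F - corner) / d) ** 2).sum(axis=1))
    return loc, GR, GCPD
```

With `div = 10`, every location index lies in 1..10, and GR is simply the per-row sum of those indices.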

Equilibrium Optimizer (EO).
The equilibrium optimization algorithm was first proposed by Faramarzi et al. [12]. The equilibrium optimizer is inspired by the control-volume mass balance model used to estimate dynamic and equilibrium states. In EO, each individual (solution), with its concentration C (position), is regarded as a search agent: an individual corresponds to a solution, and its concentration plays the role of a particle's position in the particle swarm optimization algorithm [1]. More details on EO can be found in [12]. Owing to its simple principle, easy implementation, and fast convergence, EO has been widely applied to various single-objective optimization problems, including economic dispatch [31], structural design optimization [32], and image segmentation [33]. The position-updating formulation of EO is as follows [12]:

C = C_e + (C - C_e) \cdot F + \frac{G}{\lambda V} \cdot (1 - F),

where V is taken as unity, C_e refers to the equilibrium candidate, F and G represent the exponential term and the generation rate, respectively, and λ = (λ_1, λ_2, \ldots, λ_n)^T is a random vector in the interval [0, 1], with n the number of dimensions of the individual's concentration C.
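The position update above can be sketched as a one-line function (a minimal illustration of the update rule of [12], with V taken as unity):

```python
import numpy as np

def eo_update(C, Ce, lam, F, G, V=1.0):
    """One EO concentration update [12]: move toward the equilibrium
    candidate Ce plus a generation-rate contribution; V is taken as unity.
    C, Ce, lam, F, G are arrays of equal dimension."""
    return Ce + (C - Ce) * F + (G / (lam * V)) * (1.0 - F)
```

Two limiting cases make the behaviour clear: with F = 1 and G = 0 the individual keeps its position, and with F = 0 and G = 0 it jumps onto the equilibrium candidate.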

Equilibrium Pool C_{e,pool} and Equilibrium Candidate C_e.
The equilibrium state indicates the final convergence state of EO. At the beginning of the search process, there is no knowledge about the final equilibrium state, so the equilibrium candidate C_e is used to guide the search of the individuals in the population. In the equilibrium optimizer, the equilibrium candidates are the four best individuals found so far according to their fitness values, together with one individual whose concentration is the average of those four; the equilibrium pool thus consists of five individuals.
However, for multiobjective optimization problems there is usually a set of alternative trade-offs between the objectives, so the solutions cannot be sorted by a single fitness value. Therefore, in DMOEOA, the classical external repository Rep of MOPSO [23] is used to construct the equilibrium pool, and the solutions in the external repository are regarded as the equilibrium candidates. The equilibrium pool in DMOEOA is given by

C_{e,pool} = Rep,

where Rep represents the external repository, which keeps a historical record of the nondominated solutions found along the whole search process. In each iteration, each individual updates its concentration C (position) using roulette-wheel selection among the equilibrium candidates C_e: the more equilibrium candidates share the same GR value in the equilibrium pool, the less likely each of them is to be selected to guide the particles in the population. This selection method maintains the diversity of the obtained solutions during the search.
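The roulette-wheel selection described above can be sketched as follows; the specific weighting (inverse of the number of candidates sharing a GR value) is an assumption consistent with the text, not necessarily the authors' exact rule:

```python
import numpy as np

rng = np.random.default_rng(1)

def select_candidate(GR, rng=rng):
    """Roulette-wheel selection of an equilibrium-candidate index from Rep.
    Assumption (consistent with the text): a candidate's weight is inversely
    proportional to how many candidates share its GR value, so crowded GR
    levels of the pool are picked less often."""
    GR = np.asarray(GR)
    counts = np.array([(GR == g).sum() for g in GR])  # pool crowding per candidate
    w = 1.0 / counts
    p = w / w.sum()
    return int(rng.choice(len(GR), p=p))
```

With GR values [2, 2, 2, 5], the lone candidate with GR = 5 carries half the total selection probability, while each of the three crowded candidates carries one sixth.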

Exponential Term F.
The concentration-updating rule is mainly controlled by the exponential term F:

F = e^{-\lambda (t - t_0)},

where t is a function of the iteration number that decreases as iterations proceed:

t = \left(1 - \frac{iter}{IT}\right)^{a_2 (iter / IT)},

with iter and IT the current iteration and the maximum iteration, respectively, and a_2 a constant that controls the exploitation ability of EO.
To achieve high convergence by slowing down the search speed, t_0 is defined as follows:

t_0 = \frac{1}{\lambda} \ln\left(-a_1 \, \mathrm{sign}(r_0 - 0.5)\left[1 - e^{-\lambda t}\right]\right) + t,

where a_1 is a constant that affects the exploration ability, sign(r_0 − 0.5) controls the direction of exploration and exploitation, and r_0 is a random number in [0, 1]. In this work, a_1 and a_2 are set to 2 and 1, respectively, consistent with the original EO algorithm. Therefore, the exponential term F can be formulated as follows:

F = a_1 \, \mathrm{sign}(r_0 - 0.5)\left(e^{-\lambda t} - 1\right).

Generation Rate G.
The generation rate plays an important role in the equilibrium algorithm; it is used to improve the exploitation ability of EO.
The generation rate is defined as

G = G_0 \, e^{-\kappa (t - t_0)},

where G_0 represents the initial value:

G_0 = GCP \cdot (C_e - \lambda C), \quad GCP = \begin{cases} 0.5\, r_1, & r_2 \ge GP, \\ 0, & r_2 < GP, \end{cases}

where GCP is the generation-rate control probability, GP is the generation probability, set to 0.5 according to the original EO algorithm, r_1 and r_2 are two random numbers in [0, 1], and κ indicates the decay vector. This study assumes κ = λ; thus, the generation rate can be formulated as G = G_0 F.
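The exponential term F and generation rate G of the original EO [12] can be sketched as follows (a minimal illustration; the random draws are the only free choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def exponential_term(lam, it, IT, a1=2.0, a2=1.0, rng=rng):
    """Exponential term F of EO [12]: t shrinks as iterations proceed,
    and sign(r0 - 0.5) flips the search direction per dimension."""
    t = (1.0 - it / IT) ** (a2 * it / IT)
    r0 = rng.random(lam.shape)
    return a1 * np.sign(r0 - 0.5) * (np.exp(-lam * t) - 1.0)

def generation_rate(C, Ce, lam, F, GP=0.5, rng=rng):
    """Generation rate G = G0 * F with control probability GCP and kappa = lam."""
    r1, r2 = rng.random(), rng.random()
    GCP = 0.5 * r1 if r2 >= GP else 0.0
    G0 = GCP * (Ce - lam * C)
    return G0 * F
```

At the first iteration t = 1, so |F| = a_1 (1 − e^{−λ}); at the last iteration t = 0 and F vanishes, which is what drives the transition from exploration to exploitation.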

Layered Disruption Method (LDM).
Inspired by the disruption phenomenon in astrophysics, a novel operator named "disruption" and its variants have been introduced into single-objective evolutionary algorithms [11, 34, 35]. In this paper, a layered disruption method is integrated into the multiobjective equilibrium optimization algorithm to enhance its exploration and exploitation abilities.

Disruption Phenomenon.
"When a swarm of gravitationally bound particles having a total mass, m, approaches too close to a massive object, M, the swarm tends to be torn apart. The same thing can happen to a solid body held together by gravitational forces when it approaches a much more massive object" [36]. This is called the disruption phenomenon, which originates from astrophysics [36]. As shown in Figure 2, the swarm is torn apart roughly when the tidal pull of M across the swarm exceeds the swarm's self-gravity [11], i.e., when

\frac{M}{R^3} > \frac{m}{r^3},

where R is the distance between the center of mass of the swarm m and the mass M, and r represents the radius of the swarm m.

Layered Disruption Condition.
To simulate the disruption phenomenon, individuals in the population Pop with the same GR value are treated as one group, and different groups have different disruption conditions. This differs from Liu et al. [11], Sarafrazi et al. [34], and Ding et al. [35], where all individuals are treated as a single group. Here, we define the disruption coefficient Q_i in terms of c, the number of groups, and i, the index of the group after sorting all groups in increasing order of their GR values.
In the i-th group, the individuals with the S_i smallest GCPD values are treated as a whole and denoted as the mass M; the remaining individuals, which will be disrupted, have the total mass m. S_i is defined in terms of U_i, the number of individuals in the i-th group, using the rounding-up function \lceil \cdot \rceil.
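The layered grouping and the M/m split can be sketched as follows; since the exact S_i formula is not recoverable from the text, S_i = ⌈U_i/2⌉ is a hypothetical placeholder used purely for illustration:

```python
import numpy as np

def split_layers(GR, GCPD):
    """Partition the population into layers by GR value, sort each layer by
    GCPD, and split it into the 'massive' part M (the S_i smallest GCPD
    values) and the part m to be disrupted.
    NOTE: S_i = ceil(U_i / 2) is a hypothetical choice; the paper's exact
    S_i formula is not reproduced in the text."""
    GR, GCPD = np.asarray(GR), np.asarray(GCPD)
    layers = []
    for g in np.sort(np.unique(GR)):            # groups in increasing GR order
        idx = np.where(GR == g)[0]
        idx = idx[np.argsort(GCPD[idx])]        # ascending GCPD within the layer
        S_i = int(np.ceil(len(idx) / 2))        # hypothetical S_i
        layers.append((idx[:S_i], idx[S_i:]))   # (mass M, individuals m)
    return layers
```

Each layer thus yields its own disruption candidates, in line with the per-group conditions described above.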

Disruption Operator.
When an individual satisfies the disruption condition, a random number obeying the Cauchy distribution (denoted cauchyrnd) is used to disrupt it. The disruption equation updates the position vector C_j of individual j using the current iteration iter and the maximum iteration IT; Cau refers to the disruption operator, a matrix consisting of a set of Cauchy random numbers. It is worth noting that different dimensions of individual j receive different Cauchy random numbers, unlike Liu et al. [11], where all dimensions of individual j share the same cauchyrnd.
We can observe that an individual with large GR and GCPD values is more likely to be disrupted and to explore a wide region at the early stage; as the number of iterations increases, the individual increasingly exploits its surrounding area. Therefore, the disruption method proposed in this paper enhances the exploration and exploitation abilities of the proposed algorithm. The general framework of the layered disruption method is shown in Algorithm 2.
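The per-dimension Cauchy disruption can be sketched as below; the linear decay schedule (1 − iter/IT) is an assumed stand-in for the paper's disruption equation, which is not reproduced in the text:

```python
import numpy as np

rng = np.random.default_rng(3)

def disrupt(Cj, it, IT, scale=1.0, rng=rng):
    """Disrupt position Cj with one Cauchy random number per dimension.
    ASSUMPTION: the shrink factor (1 - it / IT) is a hypothetical decay
    schedule mimicking the stated behaviour (wide exploration early,
    local exploitation late); it is not the paper's exact equation."""
    cau = rng.standard_cauchy(Cj.shape)  # heavy-tailed, per-dimension draws
    shrink = 1.0 - it / IT               # assumed decay toward exploitation
    return Cj + scale * shrink * cau
```

The heavy tails of the Cauchy distribution occasionally produce very large jumps early on, while the shrinking factor makes late-iteration perturbations vanish.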

The Pseudocode of the DMOEOA Algorithm.
The pseudocode of the DMOEOA algorithm is shown in Figure 3.

Computational Complexity Analysis of the DMOEOA Algorithm.
The computational complexity of an algorithm indicates the resources required to run it and can reflect the algorithm's performance. Let N be the number of individuals in the population and K the number of objectives. The computational complexity of the main steps of DMOEOA is shown in Table 1.
Therefore, the computational complexity of DMOEOA is O(KN^2), the same as that of the algorithms compared with DMOEOA in this paper: MOPSO, MOALO, NSGA-II, MOWOA, and MOGWO.

Parameter Setting and Instances.
In this section, three standard benchmark test suites, the ZDT [37], DTLZ [38], and UF [39] suites, are utilized to validate the performance of the proposed DMOEOA algorithm. The optimal Pareto fronts of these test functions include continuous, discontinuous, convex, and concave shapes. Five multiobjective optimization algorithms, MOPSO, MOALO, MOWOA, NSGA-II, and MOGWO, are employed for comparison with DMOEOA. The algorithm parameters listed in Table 2 are chosen in accordance with the original algorithms. For all of the following simulation experiments, the maximum number of iterations and the population size are set to 300 and 200, respectively. For the ZDT [37] and UF [39] suites, the dimension of the search space is set to 30; for the DTLZ suites [38], it is set to 12. To mitigate the randomness of the results, each algorithm is run 30 times on each benchmark test function.

Performance Metrics.
To quantify both the distance of the Pareto front produced by DMOEOA from the optimal Pareto front and the diversity of the solutions found, two performance metrics are employed: the Inverted Generational Distance (IGD) [40] and the Delta (Δ) metric [19]. The IGD metric is formulated as

IGD = \frac{1}{P} \sum_{i=1}^{P} D_i,

where P represents the number of true Pareto optimal solutions and D_i indicates the Euclidean distance between the i-th true Pareto optimal solution and its nearest solution in the external repository; the Δ metric is the spread measure of [19]. In addition to reflecting the convergence of the obtained solutions, IGD reflects their uniformity and coverage: the smaller the IGD value, the better the coverage and convergence of the obtained solutions.
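The IGD metric as defined above can be computed directly as an average nearest-neighbour distance:

```python
import numpy as np

def igd(true_front, archive):
    """Inverted Generational Distance [40]: the average, over the P true
    Pareto points, of the Euclidean distance D_i to the nearest solution
    in the archive (external repository)."""
    T = np.asarray(true_front, float)   # (P, K) reference points
    A = np.asarray(archive, float)      # (M, K) obtained solutions
    # pairwise distances (P, M), then nearest archive solution per reference point
    D = np.sqrt(((T[:, None, :] - A[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return D.mean()
```

An archive identical to the reference front scores exactly zero, and moving every archive point one unit away raises the score to one.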
(Fragment of Algorithm 2: calculate the GCPD of each individual in Pop by equation (11); sort the individuals in the i-th group in increasing order of GCPD; for j = S_i + 1, ..., U_i, disrupt the j-th individual in the i-th group by the disruption equation (27).)

Discussion and Analysis.
This section provides the statistical results of DMOEOA and the five compared multiobjective optimization algorithms (MOPSO, MOALO, MOWOA, NSGA-II, and MOGWO) for the IGD and Delta metrics. The results obtained by the six algorithms on the test functions are shown in Tables 3 and 4 and Figures 4 and 5, with the best values in bold. In addition, the Wilcoxon rank-sum test at a significance level of 0.05 is employed to compare the IGD results of DMOEOA with those of the five competitors. The IGD results for the two-objective and three-objective test functions are shown in Tables 3 and 4, respectively, where "+/=/−" indicates that the proposed algorithm is better than, similar to, or worse than the corresponding competitor. The summary "w/t/l" means that, compared with a competitor, DMOEOA wins on w test functions, ties on t, and loses on l. The statistical results in Table 3 indicate that DMOEOA provides better convergence and coverage than MOPSO, MOALO, MOWOA, and NSGA-II. From Figure 5, we can observe that MOPSO shows better diversity of obtained solutions than the other five algorithms on ZDT1. The statistical results on ZDT2 and ZDT3 for IGD in Table 3 show that the proposed DMOEOA algorithm provides better average and standard-deviation IGD values than the other five algorithms; since IGD reflects both convergence and coverage, DMOEOA therefore provides better convergence and coverage of the obtained solutions on ZDT3. In Figure 5, the boxplot of Delta on ZDT2 indicates that DMOEOA and MOALO show similar diversity of obtained solutions, and the boxplot of Delta on ZDT3 suggests that DMOEOA has better diversity than MOALO, MOWOA, and MOGWO.
As shown in Table 3, the statistical results on the ZDT4 and ZDT6 test problems for IGD show that the proposed DMOEOA algorithm outperforms the other five algorithms on average, and the Wilcoxon rank-sum test results indicate its superiority in both coverage and convergence. From Figure 5, we observe that although MOPSO and NSGA-II outperform the other four algorithms in diversity on ZDT4 and ZDT6, both show poor convergence on these test functions. The statistical results on UF1 and UF2 for IGD in Table 3 show that DMOEOA provides better average and standard-deviation IGD values than the other five algorithms, i.e., better convergence and coverage on UF1 and UF2. As shown in Figure 5, the boxplot of Delta on UF1 indicates that DMOEOA provides better diversity than MOALO, NSGA-II, and MOGWO; in contrast, the boxplot of Delta on UF2 indicates that MOPSO shows better diversity than the other five algorithms. The best average and standard-deviation IGD results for UF3 belong to MOGWO and MOWOA, respectively (see Table 3). The statistical results on UF4 for IGD (see Table 3) indicate that DMOEOA shows better convergence and coverage than the other five algorithms. The boxplots of Delta on UF3 and UF4 in Figure 5 indicate that DMOEOA has better diversity than MOALO, MOWOA, and NSGA-II.
The UF5 test function has a discontinuous Pareto optimal front. As shown in Figure 4, the best obtained Pareto fronts of the six algorithms on UF5 suggest that the nondominated solutions obtained by DMOEOA are more uniformly distributed than those of the other five algorithms. According to the Wilcoxon rank-sum test results, the convergence ability of the proposed algorithm on UF5 is similar to that of MOGWO. The statistical results for the IGD metric on UF6 (see Table 3) show that the convergence and coverage performance of DMOEOA is similar to that of MOPSO, MOALO, and MOWOA.
The UF7 benchmark has a linear Pareto optimal front; compared with test functions having disconnected Pareto optimal fronts, it is easier for algorithms to obtain well-distributed solutions on the UF7 test problem. The statistical results for the IGD metric on UF7 (see Table 3) show that DMOEOA has better convergence and coverage than MOPSO, MOALO, and NSGA-II. As depicted in Figure 5, the boxplot of Delta on UF7 suggests that DMOEOA shows better diversity of obtained nondominated solutions than MOALO, MOGWO, and MOWOA. UF8, UF9, and UF10 are triobjective test problems with complex Pareto optimal fronts, which makes them challenging for all six algorithms. As shown in Figure 4, the best obtained Pareto front of DMOEOA on UF8 is better distributed than those of the other five algorithms, and the statistical results in Table 4 show that DMOEOA provides better average and standard-deviation IGD values; it can therefore be stated that the proposed DMOEOA algorithm has better convergence and distribution than the other five algorithms on the UF8 test problem. From Figure 4, we observe that all six algorithms show poor convergence and distribution on UF9. On UF10, DMOEOA is superior in both coverage and convergence of obtained solutions according to the Wilcoxon rank-sum test results shown in Table 4.
DTLZ1 and DTLZ2 are triobjective test problems with multiple local Pareto optimal fronts. The statistical results for IGD on DTLZ1 in Table 4 show that the proposed DMOEOA algorithm converges better than the other five algorithms. As shown in Figure 4, the best obtained Pareto front of NSGA-II on DTLZ2 is far from the true Pareto optimal front; in contrast, the obtained Pareto fronts of DMOEOA and MOPSO are more uniformly distributed than those of the other four algorithms. Meanwhile, the statistical results for IGD on DTLZ2 in Table 4 confirm that DMOEOA and MOPSO have superior convergence ability.
Both DTLZ3 and DTLZ4 have concave Pareto optimal fronts. The statistical results on DTLZ3 and DTLZ4 for IGD (see Table 4) show that DMOEOA achieves better average and standard-deviation IGD values than the other five algorithms, while MOPSO and MOGWO provide better diversity of obtained solutions than the other four algorithms on both DTLZ3 and DTLZ4 (see Figure 5). DTLZ5 and DTLZ6 are both three-objective test problems with degenerate Pareto optimal fronts. As shown in Table 4, the statistical results for IGD on DTLZ5 indicate that DMOEOA performs similarly in convergence and coverage to MOPSO and MOGWO. On DTLZ6, MOPSO shows better diversity and convergence of obtained solutions than DMOEOA, MOALO, and NSGA-II (see Table 4 and Figure 5).
DTLZ7 is disconnected in both the Pareto optimal set and the Pareto optimal front. In Table 4, the statistical results for IGD on DTLZ7 suggest that DMOEOA provides better average and standard-deviation IGD values than the other five algorithms, i.e., superior convergence and coverage on DTLZ7, and the boxplot of Delta on DTLZ7 indicates that DMOEOA shows better diversity of obtained solutions than MOALO, MOWOA, and NSGA-II (see Figure 5). The above results demonstrate that the DMOEOA algorithm achieves competitive and promising results on multiobjective test functions, especially three-objective problems, with a better balance between convergence and distribution. The statistical results for IGD demonstrate the high convergence ability of DMOEOA. The layered disruption method plays an important role in improving the convergence and distribution performance of DMOEOA: LDM prompts the population to search extensively in each iteration, and, as the number of iterations increases, each individual increasingly exploits its surrounding area. Thus, the exploration and exploitation abilities of the proposed algorithm are enhanced.

Analysis of Layered Disruption Method (LDM).
In this work, the layered disruption method (LDM) is introduced into DMOEOA to enhance its exploration and exploitation abilities; it is therefore important to investigate the impact of the LDM on DMOEOA. In this section, the ZDT3, ZDT4, ZDT6, and DTLZ2 test problems, whose Pareto optimal fronts include discontinuous, convex, and concave shapes, are employed as test instances, with the remaining parameters as in Table 1. Simulation results are depicted in Figures 6-9. The results on ZDT3 in Figure 6 show that DMOEOA finds the true optimal Pareto front after the 60th generation; in contrast, MOEOA cannot completely converge to the true optimal solutions even by the 300th generation. As shown in Figure 7, DMOEOA converges to the true Pareto front after the 80th generation on ZDT4, whereas MOEOA shows poor convergence and distribution ability on ZDT4. Similarly, the simulation results on ZDT6 in Figure 8 indicate that DMOEOA converges to the optimal Pareto front after the 60th generation, while MOEOA still retains some poor solutions at the 300th generation; moreover, as shown in Figure 8, the distribution of the best obtained Pareto solutions of DMOEOA is better than that of MOEOA. Compared with MOEOA, DMOEOA also finds the optimal solutions of DTLZ2 faster (Figure 9). From the simulation results depicted in Figures 6-9, we observe that the LDM enhances the exploration and exploitation ability of the proposed algorithm.

Parametric Study.
The proposed DMOEOA algorithm introduces the grid mechanism, on which the layered disruption method is based, and the number of grid divisions is a key parameter of this mechanism. It is therefore necessary to investigate the effect of the number of grid divisions on the performance of the DMOEOA algorithm. In this section, the UF{5, 6, 7, 8, 9, 10} test instances are utilized, with the Inverted Generational Distance (IGD) as the performance metric. We performed runs with different numbers of grid divisions, ranging from 5 to 15, to see the effect of this parameter; the other parameters of the DMOEOA algorithm are the same as in Table 1. To mitigate the randomness of the results, for each number of divisions, the proposed algorithm is run 30 times on each benchmark test function. As shown in Figure 10, as the number of grid divisions increases from 5 to 9, the mean IGD values of DMOEOA on the UF{5, 6, 8, 9} instances decrease gradually, and then increase with further divisions. For the UF7 and UF10 instances, the mean IGD values increase slowly as the number of grid divisions grows from 9 to 15. From Figure 10, we observe that too many or too few divisions degrade the performance of the algorithm, whereas an appropriate number of grid divisions improves its convergence and coverage ability. In general, DMOEOA performs well at div ∈ [9, 11] for both biobjective and triobjective test problems.

Application in Structural Optimization of an Elastic Truss
In this section, the proposed DMOEOA algorithm is applied to the structural optimization of a 4-bar elastic truss as a demonstration. The 4-bar elastic truss design optimization problem is a well-known engineering problem in the structural optimization field [41]. The structure of the 4-bar truss is shown in Figure 11. The truss is designed with the joint displacement and the structural volume as objectives, and the areas of the member cross sections as design variables; the mathematical formulation of this engineering problem follows [41]. The statistical results of the six algorithms on this structural optimization problem are shown in Table 5 and Figure 12. The statistical results in Table 5 suggest that DMOEOA provides better best, average, and standard-deviation IGD values than the other five algorithms.
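For reference, a commonly cited formulation of the four-bar truss problem (the version reproduced in many metaheuristics papers, with F = 10 kN, E = 2×10^5 kN/cm², L = 200 cm, σ = 10 kN/cm²) can be sketched as follows; whether the paper uses exactly these constants and expressions is an assumption, not something stated in the text:

```python
import numpy as np

# Constants of the classical four-bar truss problem [41] (assumed values):
# load F_load (kN), Young's modulus E (kN/cm^2), bar length L (cm).
F_load, E, L = 10.0, 2.0e5, 200.0

def truss_objectives(x):
    """x = (x1, x2, x3, x4): member cross-section areas.
    Returns (structural volume f1, joint displacement f2) for the
    commonly cited formulation; this is a sketch under assumptions,
    not necessarily the paper's exact model."""
    x1, x2, x3, x4 = x
    f1 = L * (2 * x1 + np.sqrt(2) * x2 + np.sqrt(2) * x3 + x4)
    f2 = (F_load * L / E) * (2 / x1 + 2 * np.sqrt(2) / x2
                             - 2 * np.sqrt(2) / x3 + 2 / x4)
    return f1, f2
```

Both objectives conflict: thicker members reduce the displacement f2 but increase the volume f1, which is what produces the Pareto front shown in Figure 12.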
Although MOPSO obtains the best worst-case IGD value, the superiority of DMOEOA in convergence is significant. From Figure 12, we observe that DMOEOA converges to the true optimal Pareto front, whereas NSGA-II and MOALO show poor distribution of obtained solutions on this structural optimization problem.

Conclusion
This paper proposes a disruption-based multiobjective equilibrium optimization algorithm (DMOEOA), which integrates a layered disruption method (LDM) proposed in this work to enhance the algorithm's exploration and exploitation abilities. To validate the effectiveness of DMOEOA, three benchmark test suites were used to compare it with five other multiobjective optimization algorithms, including well-known and state-of-the-art methods. The test results suggest that DMOEOA performs well on these test problems, with a better balance between convergence and distribution. The impact of the layered disruption method was analyzed, and the influence of the number of grid divisions on the performance of the proposed DMOEOA algorithm was discussed. Moreover, the proposed algorithm was also applied to the structural optimization problem of a four-bar elastic truss; compared with the other five optimizers, the results show that DMOEOA not only performs well on benchmark test functions but is also expected to find wide application in engineering design optimization problems. Future research should focus on applying the proposed DMOEOA algorithm to constrained real-world engineering problems and many-objective optimization problems (Table 6).

Data Availability
All data included in this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare no conflicts of interest.