A Fusion Multiobjective Empire Split Algorithm

In the last two decades, swarm intelligence optimization algorithms have been widely studied and applied to multiobjective optimization problems. In multiobjective optimization, reproduction operations and the balance of convergence and diversity are two crucial issues. The imperialist competitive algorithm (ICA) and the sine cosine algorithm (SCA) are two promising algorithms for single-objective optimization problems, but research on them in multiobjective optimization is scarce. In this paper, a fusion multiobjective empire split algorithm (FMOESA) is proposed. First, an initialization operation based on an opposition-based learning strategy is employed to generate a good initial population. A new reproduction of offspring is introduced, which combines ICA and SCA. Besides, a novel power evaluation mechanism is proposed to assess individual performance, which takes into account both the convergence and the diversity of the population. Experimental studies on several benchmark problems show that FMOESA is competitive with state-of-the-art algorithms. Given its good performance and desirable properties, the proposed algorithm could be an alternative tool for multiobjective optimization problems.


Introduction
With the continuous innovation of technology and the rapid development of industrial production, the multiobjective optimization problem (MOP) has gradually become a research focus in the scientific and engineering fields [1]. The subobjectives in a multiobjective optimization problem usually constrain one another. There is no single absolutely optimal solution, and only the dominance relation among solutions can be adopted to evaluate their relative quality. Therefore, intelligent algorithms are usually used to approximate the Pareto front, yielding a set of Pareto optimal solutions. Swarm intelligence evolutionary algorithms optimize in the form of swarms. In the past few decades, they have been continuously developed and extended, including the genetic algorithm (GA) [2], differential evolution (DE) [3], evolution strategies (ES) [4], ant colony optimization (ACO) [5], and particle swarm optimization (PSO) [6]. They have proven to be very effective single-objective optimization algorithms and are widely used in engineering practice, such as the water cycle [7], the distributed assembly permutation flow shop problem [8], a fuel cell/battery/supercapacitor hybrid power train [9], and workforce planning [10]. One of their most significant features is that they can obtain multiple solutions in one iteration. Therefore, they have the advantages of simple implementation, low computational cost, and high efficiency. Recently, the employment of swarm intelligence evolutionary algorithms to deal with multiobjective optimization problems has been widely studied, and a large number of swarm intelligence-based algorithms have been proposed. According to their candidate solution selection mechanisms, multiobjective evolutionary algorithms (MOEAs) can be roughly divided into the following three categories. The first type is domination-based algorithms, represented by methods such as NSGAII [11] and SPEA2 [12].
Their basic idea is to use Pareto dominance relationships to classify the population and then calculate the density of individuals in each class. Based on the dominance relationship and density estimation, the population is fully sorted, and the relatively superior individuals are selected to enter the next generation. Crowding distance, k-nearest neighbor, ε-domination, and grid division are often used to estimate the density of individuals. Since MOEAs based on the Pareto dominance relationship have the advantages of simple principles, easy understanding, and few parameters, this type of algorithm has attracted many researchers' in-depth research and extensive applications. The second type is decomposition-based MOEAs. MOEA based on decomposition (MOEA/D) [13] is another classic MOEA framework. MOEA/D decomposes a MOP into a series of subproblems and solves these subproblems simultaneously to obtain an approximation of the Pareto solution set. Because the algorithm runs only once, it can achieve high computing efficiency. In MOEA/D, each subproblem is a single-objective optimization problem, so the MOEA/D framework can accommodate various types of single-objective optimization methods and local search methods.
Thus, it naturally establishes a connection between heuristic algorithms and traditional mathematical programming methods. The MOEA/D framework has attracted more and more researchers' attention [14][15][16]. However, this algorithm requires a predefined set of weight vectors. Without prior knowledge, uniformly defined weight vectors cannot handle Pareto fronts with special shapes, which adds unnecessary difficulty to solving MOPs [17]. The third type is indicator-based MOEAs.
This type of algorithm is represented by algorithms such as IBEA [18] and HypE [19]. The basic idea is to optimize the original MOP indirectly by directly optimizing a performance indicator of the Pareto approximation set.
This actually converts a multiobjective optimization problem into a single-objective optimization problem. In this type of method, the hypervolume indicator is the most commonly used. This type of MOEA avoids the large number of comparisons required by Pareto dominance, but, due to the heavy computational cost of the hypervolume, it is necessary to introduce methods such as Monte Carlo estimation to improve the calculation speed [19]. In addition, as the number of objectives increases, the computational cost of such methods grows exponentially.
In the past two decades, a variety of intelligent optimization algorithms have been introduced to deal with multiobjective optimization problems and have achieved the expected results, such as particle swarm optimization, grey wolf optimization [20], the imperialist competitive algorithm [21], and the sine cosine algorithm [22]. Among them, the imperialist competitive algorithm (ICA) [21] is an evolutionary algorithm based on the imperialist colonial competition mechanism, proposed by Atashpaz-Gargari and Lucas in 2007. It is a socially inspired stochastic optimization search method. At present, ICA has been successfully applied to a variety of optimization problems, such as ship design optimization [23], water distribution networks [24], and production scheduling problems [25]. ICA has a fast convergence speed but easily falls into local optima. However, there is little research on ICA for multiobjective optimization problems. In single-objective optimization, multiple colonies obtain the final optimal solution by approaching the empire. It should be noted that, at the end of the algorithm, only one empire remains in the population, and the position of that empire is the optimal solution. However, when dealing with multiobjective optimization problems, a set of trade-off solutions is needed at the end of the algorithm. Therefore, inspired by ICA, a reverse ICA algorithm is studied in this paper. While the algorithm is running, the nondominated solutions are regarded as empire individuals. In the end, the algorithm obtains an approximate Pareto solution set composed of a group of empire individuals.
To prevent the population from falling into local optima, mutation is a commonly used strategy in multiobjective evolutionary algorithms. However, mutation operations may change the original trajectories of some excellent solutions. Therefore, to improve the global search ability and the diversity of the proposed algorithm, a hybrid generation method is introduced. The sine cosine algorithm (SCA) [26] is a recent metaheuristic algorithm: a numerical optimization method based on self-organization and swarm intelligence, built on the sine and cosine functions. It was first proposed by Mirjalili. In this paper, SCA is used to update empire individuals over the iterations to improve their global search capability.
In addition, an archive strategy is adopted by many algorithms [27][28][29]. An archive can preserve the historical information generated while the algorithm runs, so as to obtain better final candidate solutions. However, the archive update operation often occupies additional computing resources and increases the computational burden of the algorithm. In this paper, the archive strategy is abandoned. The excellent solutions of each generation are regarded as empire individuals and used to update the evolutionary direction of the colonies. In other words, the candidate solutions after each iteration directly guide the evolution of the next generation.
In this paper, a fusion multiobjective empire split algorithm (FMOESA) is proposed. Inspired by the imperialist competitive algorithm, when dealing with multiobjective optimization, excellent solutions are preserved through the behavior of empire splitting. The approach of colonial individuals toward empire individuals is also exploited when producing offspring. In addition, to balance convergence and diversity, a new individual power assessment mechanism is proposed. Finally, to verify the effectiveness of the proposed algorithm, FMOESA is compared with state-of-the-art MOEAs on various well-known benchmark MOPs. The experimental results show that FMOESA is capable of obtaining high-quality solutions, and the power assessment mechanism maintains a good balance between population diversity and convergence.
The main contributions of this paper are highlighted as follows.
(i) A new reproduction of offspring is introduced, which combines three operators: ICA, SCA, and GA. For empire individuals, offspring are generated by performing sine and cosine operators with other empire individuals. For colonial individuals, offspring are generated through crossover and mutation operations with their empire. In other words, the idea of colonial individuals approaching empire individuals through crossover and mutation is similar to that in ICA.

(ii) A novel power evaluation mechanism is proposed to assess individual performance. When evaluating an individual, convergence and diversity are considered simultaneously. This evaluation mechanism is used not only to distinguish between imperial and colonial individuals but also to eliminate redundant individuals in the population. This strategy balances the convergence and diversity of the entire population.

(iii) No archive strategy is adopted in this paper. The outstanding individuals selected after each iteration constitute a candidate solution set and, at the same time, serve as the parent population that directly guides the next iteration. This saves computing resources and reduces unnecessary computational complexity.
The remainder of this paper is organized as follows. Section 2 presents the main concepts of multiobjective optimization and the basics of ICA and SCA. The proposed algorithm is described in Section 3. Section 4 presents the experimental design and results. Finally, Section 5 presents conclusions and future research directions.

2.1. Multiobjective Optimization Problems. In general, a multiobjective optimization problem (MOP) can be defined as

minimize F(X) = (f_1(X), f_2(X), ..., f_m(X)), subject to X ∈ Ω,

where X = (x_1, x_2, ..., x_D) is a decision vector in the decision space Ω and F(X) is an m-dimensional objective vector.

Definition 1. A solution X is said to dominate a solution X′ (denoted X ≺ X′) if f_k(X) ≤ f_k(X′) for every k = 1, 2, ..., m and f_k(X) < f_k(X′) for at least one k.

Definition 2. A solution X* ∈ Ω is Pareto optimal if there exists no X ∈ Ω such that X ≺ X*.

Definition 3. The set PS of all Pareto optimal solutions is called the Pareto optimal set, PS = {X ∈ Ω | ¬∃X′ ∈ Ω: X′ ≺ X}.

Definition 4. The set that contains all the Pareto optimal objective vectors is called the Pareto front, denoted by PF = {F(X) | X ∈ PS}.

2.2. The Traditional Imperialist Competitive Algorithm. The traditional imperialist competitive algorithm (ICA) is mainly used to deal with single-objective optimization problems. It is a sociopolitical evolutionary algorithm proposed by Atashpaz-Gargari and Lucas [21] in 2007 that simulates the colonial competition process of human society. The main process of the imperialist competitive algorithm is as follows.
(i) Step 1: form empires. The imperialist competitive algorithm first generates initial countries by a random method, and each country represents a solution to the problem. These countries are divided into two categories, imperialist countries and colonies, according to the size of their power (the quality of the solution). The most powerful countries become imperialist countries. Then the remaining countries are assigned as colonies to the imperialist countries in order of the imperialists' power: the more powerful an imperialist country, the more colonies it is assigned. An imperialist country and the colonies belonging to it are collectively called an empire.

(ii) Step 2: assimilation and revolution. After the formation of the empires, the economic, cultural, and language attributes of the colonies inevitably drift toward those of their imperialist countries. This process is called assimilation. The goal of assimilation is to improve the solution quality of all countries, which increases the influence of imperialist countries on their colonies. To be consistent with history, the movement of a colony toward its imperialist always has a certain deviation; in extreme cases, there may even be a reverse deviation, that is, a colonial revolution. If, during assimilation and revolution, the power of a colony surpasses that of its imperialist country, the colony replaces the imperialist country and establishes a new empire.

(iii) Step 3: imperialist competition. There is competition between empires. As the power of weak empires declines, their colonies are taken by powerful empires until the weak empires disappear. At the same time, new empires continue to appear in the competition. After generations of assimilation, revolution, and competition, ideally only one empire remains, and all countries are members of that empire. Through such a series of evolutionary operations, the algorithm finally finds the global optimal solution.
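As an illustration of Step 2, the assimilation move can be sketched in a few lines of NumPy. The function name and the value of the assimilation coefficient beta are illustrative choices, not part of the original ICA specification:

```python
import numpy as np

def assimilate(colony, empire, beta=2.0, rng=None):
    """ICA assimilation sketch: a colony moves a random fraction of the
    way toward its imperialist; beta > 1 allows overshooting the empire."""
    rng = np.random.default_rng() if rng is None else rng
    step = beta * rng.uniform(size=colony.shape)  # per-dimension random step
    return colony + step * (empire - colony)
```

With beta = 2, each coordinate of the new colony lies between its old position and twice the distance toward the empire, which is what lets colonies occasionally pass their imperialist.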

2.3. The Standard Sine Cosine Algorithm. The standard sine cosine algorithm (SCA) was proposed by the Australian scholar Mirjalili in 2016 [26]. The algorithm starts with a set of random solutions and continuously approaches the global optimal solution through alternating exploration and exploitation phases. After the positions of the solutions are initialized, they are updated as

X_ij^(t+1) = X_ij^t + r_1 · sin(r_2) · |r_3 · P_gj − X_ij^t|, if r_4 < 0.5,
X_ij^(t+1) = X_ij^t + r_1 · cos(r_2) · |r_3 · P_gj − X_ij^t|, if r_4 ≥ 0.5, (2)

where X_ij^t is the position of the i-th individual in the j-th dimension of the search space at the t-th iteration and r_4 is a random number in [0, 1]. The parameter r_1 is a linearly decreasing function updated as

r_1 = a − a · t / T_max, (3)

where a is a constant and a = 2; T_max is the maximum number of iterations; r_2 is a random number in the range [0, 2π]; r_3 is a random number in the range [−2, 2]; and P_gj is the j-th component of the global best of the population.
As shown in (2), there are four main parameters in SCA: r_1, r_2, r_3, and r_4. The parameter r_1 determines the region (or moving direction) of the next position; it can be either the region between the current solution and the target solution or a region beyond the two. Parameter r_2 defines how far the current solution moves toward or away from the target solution. Parameter r_3 randomly assigns a weight to the target solution in order to strengthen (r_3 > 1) or weaken (r_3 < 1) its influence on the defined distance. Parameter r_4 switches, with equal probability, between the sine and cosine updates in (2). Given a sufficiently large candidate solution set and number of iterations, the theoretical advantages of SCA in finding the global optimum are as follows: (i) Like other swarm intelligence optimization algorithms, and in contrast to single-candidate-solution algorithms, a candidate solution set of a certain size achieves a stronger search ability and a better ability to escape local optima, and SCA has sufficient random search ability. (ii) The optimal candidate solution is always retained during the iteration process and used as the basis for updating the candidate solution set, so the search tends to move toward the optimal region of the solution space. (iii) The randomness of its global search and local exploitation gives it stronger adaptability and stability than methods that search and exploit in two separate stages.
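A minimal NumPy sketch of the update in (2) and (3) may make the roles of the four parameters concrete; the function name is an illustrative choice, and this is a single vectorized update step, not a full optimizer:

```python
import numpy as np

def sca_step(X, P_g, t, T_max, a=2.0, rng=None):
    """One SCA position update for a whole population.

    X: (N, D) current positions; P_g: (D,) global best; t: iteration index.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    r1 = a - a * t / T_max                    # linearly decreasing step scale, eq. (3)
    r2 = rng.uniform(0, 2 * np.pi, (N, D))    # movement direction
    r3 = rng.uniform(-2, 2, (N, D))           # random weight on the target solution
    r4 = rng.uniform(0, 1, (N, D))            # sine/cosine switch
    sin_move = X + r1 * np.sin(r2) * np.abs(r3 * P_g - X)
    cos_move = X + r1 * np.cos(r2) * np.abs(r3 * P_g - X)
    return np.where(r4 < 0.5, sin_move, cos_move)
```

Note that at t = T_max the factor r_1 vanishes, so the population stops moving; this is the exploitation end of the schedule.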

Proposed Algorithm
In this section, the details of the proposed fusion multiobjective empire split algorithm (FMOESA) are documented. Particularly, the framework of the proposed algorithm is presented first in Section 3.1. In the following, the different key stages in the algorithm are described.

3.1. Framework of the Proposed Algorithm. The first part of the algorithm is the initialization phase (lines 2-8 in Algorithm 1). Consistent with most optimization algorithms, N individuals are randomly generated. Immediately after, opposition-based learning is performed to obtain a reverse population. Then, the power of the existing 2N individuals is evaluated, and the empires and colonies are divided according to the power values (lines 7-8 in Algorithm 1). The power assessment mechanism is described in detail in Section 3.4. At the end of the first phase, the N individuals with the greatest power values constitute the final initial population. The second part of the algorithm is the iterative update phase (lines 10-26 in Algorithm 1). The iterative cycle continues until the maximum number of fitness evaluations (FE_max) is reached. First of all, the offspring population is generated through a fusion reproduction strategy (line 11 in Algorithm 1), whose detailed steps are described in Section 3.3. After fitness and power evaluation, the empires and colonies are divided (lines 12-16 in Algorithm 1). If there are fewer than N empire individuals, the colony individuals are sorted in descending order of power value, and (N − |empire|) of them are selected as candidate solutions (lines 17-19 in Algorithm 1). If the number of empire individuals equals N, all empire individuals are assigned to X as the parent population of the next iteration (line 24 in Algorithm 1). If the number of empire individuals is greater than N, the population is pruned to obtain the N empires with the best performance (lines 21-23 in Algorithm 1). The detailed empire reduction strategy is exhibited in Section 3.6.

3.2. Initialization Based on Opposition-Based Learning.
Most traditional algorithms generate the initial population randomly. Due to the lack of prior knowledge, the probability of the population searching a better area is greatly reduced. Opposition-based learning (OBL) [30] is an important method for enhancing the performance of stochastic optimization algorithms: it greedily selects, according to the objective fitness value, between each current solution and its opposite solution. In this way, the diversity of the population is enhanced, and the ability of the algorithm to approach the global optimal solution is improved.
In this paper, a reverse population is obtained via OBL after randomly generating an initial population. From the original population and the reverse population, the best N candidate solutions are selected to form the initial population of the algorithm. This strategy can obtain more suitable initial candidate solutions without prior knowledge, thereby increasing the probability of the population exploring a better region. For an individual X_i = (X_i1, X_i2, ..., X_iD) in the population, its reverse solution is generated according to

X′_ij = a_j + b_j − X_ij, (4)

where j = 1, 2, ..., D and a_j and b_j are the minimum and maximum values of the j-th decision variable, respectively. All the solutions in the original population X are reversed to generate a reverse population X′. From these two populations, N individuals are selected to form the final initial population based on individual power assessment. The detailed operation of individual power assessment is described in Section 3.4.
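The opposition-based initialization above can be sketched as follows. For illustration, selection uses a single scalar fitness; in the paper itself, the power assessment of Section 3.4 plays this role. Function and parameter names are illustrative:

```python
import numpy as np

def obl_initialize(N, lower, upper, fitness, rng=None):
    """Opposition-based initialization sketch: generate N random points,
    form their opposites via x' = a + b - x, and keep the best N of the
    combined pool according to a scalar fitness (smaller is better)."""
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    X = rng.uniform(lower, upper, (N, lower.size))
    X_opp = lower + upper - X                 # opposite population, eq. (4)
    pool = np.vstack([X, X_opp])              # 2N candidates
    order = np.argsort([fitness(x) for x in pool])
    return pool[order[:N]]                    # greedy selection of the best N
```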

3.3. Fusion Reproduction Strategy.
In the algorithm, the population X consists of two parts: the empire individuals and their colony individuals. In general, empire individuals are superior to colony individuals. Therefore, these two parts are updated with different evolution operators. In this paper, the sine and cosine operator (SCA) is employed to generate offspring for empire individuals. For colonial individuals, the goal is to move closer to the empire and gain greater power. Therefore, a colony obtains better convergence through a crossover operation with its empire and then performs a mutation operation to increase diversity. The crossover operation adopts simulated binary crossover (SBX) [31], and the mutation operation uses polynomial mutation (PM) [3]. The detailed steps are shown in Algorithm 2. It should be noted that each colony belongs to an empire; in the SBX operation, the colony crosses with its corresponding empire (line 6 in Algorithm 2).
Through the fusion reproduction strategy, sufficient information is exchanged not only between empires but also between colonies and empires. The former focuses on diversity, while the latter focuses on convergence.
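The two operators applied to colony individuals can be sketched as follows. SBX and polynomial mutation are the standard operators cited above; the distribution indices (eta = 20) are common defaults rather than values prescribed by this paper:

```python
import numpy as np

def sbx_crossover(p1, p2, eta=20, rng=None):
    """Simulated binary crossover (SBX): here p1 is a colony and p2 its
    empire. Returns two children; their mean equals the parents' mean."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=p1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

def polynomial_mutation(x, lower, upper, eta=20, pm=None, rng=None):
    """Polynomial mutation (PM), applied per variable with probability pm
    (default 1/D); the perturbation is scaled by the variable's range."""
    rng = np.random.default_rng() if rng is None else rng
    pm = 1.0 / x.size if pm is None else pm
    u = rng.uniform(size=x.shape)
    delta = np.where(u < 0.5,
                     (2 * u) ** (1 / (eta + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (eta + 1)))
    mask = rng.uniform(size=x.shape) < pm     # which variables mutate
    child = x + mask * delta * (upper - lower)
    return np.clip(child, lower, upper)
```

A useful sanity check on SBX is that the two children are symmetric about the parents' midpoint, which keeps the population mean stable under crossover.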

3.4. Empire Power Evaluation Mechanism.
The empire power evaluation mechanism includes two different evaluation methods: the assessment of an individual's power during the initialization phase and the assessment during the update iterations.
In the initialization phase, convergence is generally poor, so the assessment of individual power is based mainly on convergence. For the i-th individual X_i in the population X, its power is calculated according to

power(X_i) = (C_max − C_i′) / (C_max − C_min), (5)

where C_i′ = f_1^i + f_2^i + ... + f_m^i, m is the number of objective functions, and C_max and C_min are the maximum and minimum values of C′ over the population. In the second stage, not only does the entire population need to improve convergence, but diversity is also important. Therefore, when evaluating individual power, both convergence and diversity must be considered. During the update iteration phase, for the i-th individual X_i in the population X, its power is calculated according to

power(X_i) = C_i + D_i, (6)

where C_i = (C_max − C_i′) / (C_max − C_min), D_i = (D_i′ − D_min) / (D_max − D_min), C_i′ = f_1^i + f_2^i + ... + f_m^i, and C_max and C_min are the maximum and minimum values of C, respectively. Similarly, D_max and D_min are the maximum and minimum values of D, respectively. D_i′ is defined by

D_i′ = min over all empires X_e of angle(F(X_i), F(X_e)), (7)

where angle(A, B) means the angle between A and B; that is, D_i′ is the minimum angle between X_i and all empires. It is worth noting that, before calculating individual power, the convergence index (C) and the diversity index (D) need to be normalized to the same order of magnitude. In other words, the value range of C_i and D_i is [0, 1]; thus, the value range of power is [0, 2]. Greater power of X_i means that X_i has better convergence and diversity.
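A NumPy sketch of a power computation in the spirit of (6), combining a normalized objective-sum convergence term with a normalized minimum-angle diversity term. The small epsilon guards against division by zero and the function name are illustrative choices:

```python
import numpy as np

def power(F, F_empire, eps=1e-12):
    """Power of each individual: normalized convergence (smaller objective
    sum is better) plus normalized diversity (larger minimum angle, in
    objective space, to any empire is better).

    F: (N, m) objective values; F_empire: (E, m) empire objective values.
    """
    C = F.sum(axis=1)                                   # convergence index C'
    conv = (C.max() - C) / (C.max() - C.min() + eps)    # in [0, 1], larger = better
    # minimum angle between each individual and every empire vector
    Fn = F / (np.linalg.norm(F, axis=1, keepdims=True) + eps)
    En = F_empire / (np.linalg.norm(F_empire, axis=1, keepdims=True) + eps)
    cos = np.clip(Fn @ En.T, -1.0, 1.0)
    D = np.arccos(cos).min(axis=1)                      # diversity index D'
    div = (D - D.min()) / (D.max() - D.min() + eps)     # in [0, 1]
    return conv + div                                   # in [0, 2]
```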

3.5. Empire Distribution Strategy.
In the population, each individual has its own role: empire or colony. In FMOESA, individuals are assigned to roles according to their nondomination level and power. First, all nondominated solutions in the population are selected as empires.
Then, the remaining individuals are sorted in descending order according to their power values. Finally, colonial individuals are allocated among the remaining population. The specific method is as follows: we pick the individual with the largest power each time and assign it to the nearest empire, until (N − |empire|) individuals have been selected and the assignment is complete.
It should be noted that the number of colonies contained in each empire is not fixed. That is, some empires may have no colonial individuals, while others may contain more than one colony.
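The colony assignment described above can be sketched as follows. The use of Euclidean distance in decision space to determine the nearest empire is an assumption for illustration, and the function name is hypothetical:

```python
import numpy as np

def assign_colonies(X, power, empire_idx, N):
    """Assign the strongest non-empire individuals, one at a time, to the
    nearest empire, until (N - |empire|) colonies have been placed.

    X: (n, D) decision vectors; power: per-individual power values;
    empire_idx: indices of empire individuals; N: population size.
    """
    colonies = {e: [] for e in empire_idx}
    empire_set = set(empire_idx)
    rest = [i for i in range(len(X)) if i not in empire_set]
    rest.sort(key=lambda i: power[i], reverse=True)      # largest power first
    for i in rest[: N - len(empire_idx)]:
        nearest = min(empire_idx, key=lambda e: np.linalg.norm(X[i] - X[e]))
        colonies[nearest].append(i)
    return colonies
```

Because assignment is per-individual, the sketch naturally reproduces the remark above: an empire may end up with zero, one, or several colonies.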

3.6. Empire Reduction Strategy.
In the later stages of the algorithm, the number of empires may exceed N. At this point, a reduction strategy is essential. The main goal is to select the N better-performing solutions as the parent population of the next iteration. In this paper, the N empires with the highest power are selected. The power of an empire is evaluated according to (6). The empire with the lowest power is deleted, one at a time, until the number of empires is N. The detailed steps are shown in Algorithm 3.
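A sketch of the reduction loop follows. Here power_fn stands in for the evaluation in (6) and is re-applied after each deletion, on the assumption that removing an empire changes the minimum-angle diversity term of the survivors:

```python
def reduce_empires(empires, power_fn, N):
    """Drop the weakest empire one at a time until only N remain.

    empires: list of empire individuals; power_fn: maps the current list
    of empires to one power value per empire; N: target count.
    """
    empires = list(empires)
    while len(empires) > N:
        p = list(power_fn(empires))          # re-evaluate after each deletion
        empires.pop(p.index(min(p)))         # remove the lowest-power empire
    return empires
```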

Experiments and Discussion
To demonstrate the effectiveness of the proposed framework, we compare its results with those obtained by MOEA/D [13], dMOPSO [32], NSGAII [11], MOEA/D-STM [33], and MOEA/D-ACD [34]. Firstly, the selected performance metrics and benchmarks are introduced, and then the experimental setup is presented. The corresponding parameter settings are shown in Table 1.
Then, the performance comparison and convergence analysis of these MOEAs are demonstrated.

Performance Metric. In our experimental study, the widely used inverted generational distance (IGD) [37] and hypervolume (HV) metrics are chosen to evaluate the performance of each algorithm. They are used as performance metrics because they jointly measure both the convergence and the diversity of the obtained solutions.

The IGD metric is defined as follows. If P* is a set of nondominated points uniformly distributed along the true Pareto front in the objective space and P is the approximation set of nondominated solutions obtained by a MOEA in the objective space, the IGD value of the approximation set is calculated by

IGD(P, P*) = (Σ over z* ∈ P* of dist(z*, P)) / |P*|,

where dist(z*, P) is the minimum Euclidean distance between z* and the points in P and |P*| is the cardinality of P*. If |P*| is large enough to represent the Pareto front, both the diversity and the convergence of the approximation set P can be measured using IGD(P, P*). Smaller values of this metric are obtained when the approximation set P is close to the true Pareto front and does not miss any part of the whole PF. The advantages of the IGD measure include two aspects: its computational efficiency and its generality. For a MOEA, a smaller IGD value is desirable, indicating that the obtained solution set is close to the true Pareto front.

The hypervolume (HV) metric is defined as the volume of the region in the objective space enclosed by the reference point and the vectors in the Pareto approximation set P. It is mathematically defined as

HV(P) = VOL(∪ over s_i ∈ P of v_i),

where s_i is a nondominated vector from the Pareto approximation set P and v_i is the hypercube formed by the reference point z_ref and the nondominated vector s_i in the objective space. The HV metric assesses both the convergence and the maximum spread of the solutions in the Pareto approximation set obtained by any MOEA. Larger values of this metric indicate that the solutions are closer to the true PF and cover a wider extension of it. In this paper, the HV metric is calculated with respect to a given reference point z_ref.

Table 2 shows the mean and standard deviation (std) results on the four ZDT and four DTLZ test instances in terms of the IGD metric.
The Wilcoxon rank-sum test results are recorded as p values in Table 2, and the tests have been conducted on the IGD values at the 0.95 confidence level to assess statistical significance. In Table 2, the symbols "+," "=," and "−," respectively, indicate that FMOESA performs statistically better than, equivalently to, and slightly worse than the compared algorithm.

Results and Analysis.
As shown, FMOESA is the most effective algorithm in terms of the number of best results it obtains. MOEA/D performs very competitively with FMOESA.
For the two-objective test functions, FMOESA obtains the best performance on ZDT2 and ZDT3. The performance of dMOPSO is poor compared to that of its competitors, and it performs best only on ZDT1 and ZDT6.

As shown in Table 3, the experimental results for the HV metric are similar to those for the IGD metric. FMOESA obtains the best value on four test functions, namely, ZDT2, ZDT3, DTLZ2, and DTLZ7. FMOESA performs slightly worse than dMOPSO on ZDT1 and ZDT6. On DTLZ1 and DTLZ4, the performance of MOEA/D and FMOESA is very competitive.
Except for ZDT6, the performance of MOEA/D-ACD is worse than that of FMOESA.
To further compare the differences between FMOESA and the other algorithms, the Pareto fronts on some of the adopted test problems are shown in Figures 1-4 for clearer discrimination. In these figures, the blue dots represent the approximate Pareto front obtained by each algorithm, while the solid red line represents the true PF of the test function. These figures demonstrate the abilities of the algorithms to converge to the true Pareto front. As seen from the performance diagrams of ZDT3, DTLZ1, and DTLZ7, our algorithm converges well to the true PF; on DTLZ1 in particular, dMOPSO and MOEA/D-STM may be trapped in local optima and cannot converge well to the true PF. Besides, it can be observed that FMOESA produces a better distribution than the other algorithms, especially on ZDT2, ZDT3, and DTLZ7.

Conclusion and Future Work
In this paper, a fusion multiobjective empire split algorithm has been proposed. In the proposed algorithm, opposition-based learning is adopted in the initialization phase. In this way, the diversity of the initial population is enhanced, and the ability of the algorithm to approach the global optimal solution is improved. Then, a fusion reproduction strategy is utilized to produce a high-quality offspring population. Inspired by ICA, individuals with better convergence and diversity are identified as empire individuals in the selection mechanism. This process marks the demise of old empires and the birth of new ones. Finally, a novel power evaluation mechanism is proposed to select candidate solutions with superior performance. To quantify the performance of the proposed algorithm, a series of well-designed experiments were performed against five state-of-the-art MOEA competitors on two widely used benchmark test suites with two and three objectives. The experimental results demonstrate that FMOESA is quite competitive on the majority of the test instances. In particular, FMOESA shows significant improvement over the peer algorithms in terms of the distribution of the obtained solutions.
In future research, we will investigate more effective mechanisms for the selection process. We will also put effort into MOPs with irregularly shaped Pareto fronts.

Data Availability
The data used to support the findings of this study are included within the article. The test data are also available from the corresponding author upon request via email.

Conflicts of Interest
The author declares no conflicts of interest with respect to the research, authorship, and/or publication of this article.