Evolutionary algorithms have proved efficient at pursuing optimal solutions of multiobjective optimization problems with three or fewer objectives. However, their search performance degenerates on high-dimensional objective optimizations. In this paper we propose an algorithm for many-objective optimization with particle swarm optimization as the underlying metaheuristic. In the proposed algorithm, the objectives are decomposed and reconstructed using a discrete decoupling strategy, and the subgroup procedures are integrated into a unified coevolution strategy. The proposed algorithm consists of inner and outer evolutionary processes, together with an adaptive factor
Many-objective optimization problems (MaOPs) are optimization problems that involve a large number of objectives. Generally, when the number of objectives in a multiobjective optimization problem (MOP) reaches four, the problem is classified as a MaOP.
For MOPs, many evolutionary algorithms have been successfully applied to optimization problems with two or three objectives. Commonly used evolutionary algorithms such as NSGA-II, SPEA2, MOPSO, and their improved or hybrid variants are devoted to finding solutions uniformly distributed along the Pareto front (PF) while maintaining population diversity during the evolution [
Generally, approaches for evolutionary many-objective optimization (EMO) proceed in three directions to face the high-dimensionality challenge. The first is devising more efficient dominance relationships based on multicriteria decision-making, such as narrowing the search area with a user-specified reference point [
The second, and most recently discussed, strategy for MaOPs is the decomposition-based method. One common way of decomposing is to generate subproblems mapped onto several scalars, transforming the multiple objectives into a single objective along a directional vector. As the fundamental algorithm, MOEA/D [
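The scalarizing step of such decomposition-based methods can be sketched as follows with the weighted Tchebycheff function commonly used in MOEA/D; the function name and the numeric values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tchebycheff(f, weights, z_star):
    """Weighted Tchebycheff scalarization used in MOEA/D-style methods:
    collapses an objective vector f into one scalar (to be minimized)
    along the search direction given by `weights`; z_star is the ideal point."""
    return np.max(np.asarray(weights) * np.abs(np.asarray(f) - np.asarray(z_star)))

f = np.array([0.4, 0.7, 0.2])   # an objective vector (illustrative values)
w = np.array([0.5, 0.3, 0.2])   # one weight (direction) vector
z = np.zeros(3)                 # ideal point
print(tchebycheff(f, w, z))     # ~0.21, the largest weighted deviation
```

Each weight vector defines one single-objective subproblem; a population of such vectors covers the Pareto front.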
The other well-known line of modification strives to keep elite individuals by improving the diversity-management mechanism and the congestion strategy. Elite preservation can also be found in popular evolutionary algorithms such as NSGA-II and SPEA2. In this direction, Chen et al. developed the idea of congestion control by relative positions and increased the evolutionary rate in sparse regions [
Particle swarm optimization (PSO) is a well-known metaheuristic optimization technique inspired by the social behavior of birds looking for food. Its advantages of fast convergence and high diversity are fully demonstrated in MOPs. MOPSO, proposed by Coello et al., is the most popular PSO-based MOP method; it introduced a secondary repository of particles to guide the individuals, while using adaptive hypercubes in archive maintenance. Building on this, Tripathi et al. introduced adaptive weight and acceleration parameters to improve efficiency [
In this paper, a novel strategy, named decomposition-based unified evolutionary algorithm using particle swarm optimization (UMOPSO-D), is proposed for many-objective problems. UMOPSO-D optimizes the many objectives through decomposition and coevolution. Benefiting from the objective decomposition, the MaOP is transformed into several MOP subproblems with three or fewer objectives, on which the PSO method takes advantage (proved in Section ). A novel framework for the MaOP algorithm is proposed, integrated with the idea of unified PSO. An adaptive unification factor is designed to dynamically balance convergence and diversity according to the incremental entropy, which represents swarm stability; the entropy is calculated through a hypercube strategy. Two grouping strategies, relevant coupling and conflict coupling, are introduced and compared; the simulation results illustrate the different advantages of the two techniques. The objective decomposition raises problems in population regrouping and repository management. In UMOPSO-D, population regrouping is based on the leading objectives assigned to each particle, and the repository-management strategy has two aspects, applied to the local and global parts, respectively. As a PSO-based algorithm, best selection is essential; in this paper, different strategies are assigned for the personal best, local best, and global best, respectively.
The remainder of this paper is organized as follows. Section
To the best of our knowledge, decomposition-based methods can be divided into two levels. The first, named scalar-based decomposition, changes the search space through several scalars and approaches the optimization in scalar space. The second divides the objectives into subsets following a specified criterion, changing a many-objective problem into a collection of multiobjective problems; this is named objective-based decomposition. The main challenges at both levels lie in the grouping criterion and the coevolution strategy. In this section, we briefly review current methods toward these two challenges and other MOP techniques in the state-of-the-art literature as background for this paper.
In objective-based decomposition, Gong et al. decomposed the original objectives into several subgroups and added an aggregating function in each group [
Density is the most commonly used criterion for archive management. It can be based on several measures, such as the following: the kernel approach, based on the sum of values from a distance function in either genotypic or phenotypic space [
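A minimal sketch of such a kernel-based density estimate in phenotypic (objective) space follows; the Gaussian kernel and the bandwidth `sigma` are illustrative assumptions, not the specific kernel of any cited method.

```python
import numpy as np

def kernel_density(archive, sigma=0.5):
    """Kernel density of each archive member: the sum of a decreasing
    kernel of its distances to all other members. Higher value means a
    more crowded neighborhood, so that member is a candidate for pruning."""
    A = np.asarray(archive, dtype=float)
    d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)  # pairwise distances
    K = np.exp(-(d / sigma) ** 2)       # Gaussian kernel (assumed form)
    np.fill_diagonal(K, 0.0)            # exclude self-distance
    return K.sum(axis=1)
```

When the archive overflows, the member with the highest density can be removed first, preserving isolated (well-spread) solutions.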
Clustering is another proposal for maintaining an archive. When the population in the archive reaches the size constraint, a clustering operation distributes the external nondominated solutions into several subsets and chooses a representative from each subset.
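The clustering-based truncation described above can be sketched as follows; the use of plain k-means and the "closest to centroid" representative rule are assumptions chosen for illustration, not the specific clustering of any cited algorithm.

```python
import numpy as np

def cluster_truncate(archive, k, iters=20, seed=0):
    """Reduce an overfull archive to k representatives: cluster the
    nondominated points with a simple k-means pass, then keep, from each
    cluster, the member closest to its centroid."""
    A = np.asarray(archive, dtype=float)
    rng = np.random.default_rng(seed)
    centers = A[rng.choice(len(A), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(A[:, None] - centers[None], axis=-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = A[labels == j].mean(axis=0)
    reps = []
    for j in range(k):
        members = np.where(labels == j)[0]
        if members.size:
            d = np.linalg.norm(A[members] - centers[j], axis=1)
            reps.append(members[d.argmin()])
    return A[reps]
```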
Particle swarm optimization (PSO) is an evolutionary computation technique that originated from the study of bird foraging behavior; similar to the genetic algorithm, PSO approaches the optimal value in the solution space through continual learning and iteration. Each solution is seen as an idealized particle with no mass or volume, updating its velocity and position in each iteration and gradually approaching the optimum of the fitness function. If in
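The canonical single-objective update just described can be sketched as follows; the inertia weight and acceleration coefficients are typical textbook values, and the sphere-function demo is an illustrative assumption.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity-and-position update of canonical global-best PSO.
    x, v, pbest: (n_particles, dim) arrays; gbest: (dim,) array."""
    rng = rng if rng is not None else np.random.default_rng(0)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Demo: minimize the 2-D sphere function with 20 particles.
rng = np.random.default_rng(1)
x = rng.uniform(-5.0, 5.0, (20, 2))
v = np.zeros_like(x)
pbest, pcost = x.copy(), (x ** 2).sum(axis=1)
for _ in range(100):
    gbest = pbest[pcost.argmin()]            # swarm-wide best position
    x, v = pso_step(x, v, pbest, gbest, rng=rng)
    cost = (x ** 2).sum(axis=1)
    improved = cost < pcost                  # keep each particle's best memory
    pbest[improved], pcost[improved] = x[improved], cost[improved]
print(pcost.min())  # close to 0
```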
However, these positive features usually bring a rapid decline in diversity and make the algorithm greedy, leading to premature convergence in complicated algorithms such as MOPSO. As the number of objectives increases, the deterioration becomes more serious: the particles become confused about which best leader to follow, resulting in frequent fluctuation. Moreover, with archive maintenance based on crowding distance, an excess of nondominated solutions means some excellent solutions may be deleted, setting back the evolution. The convergence and diversity metric curves of MOPSO evolution with 3, 4, and 5 objectives are shown in Figure
Convergence and diversity metric curves in MOPSO evolutions.
Unified particle swarm optimization (UPSO) was proposed as a scheme that harnesses the global and local PSO variants, combining their exploration and exploitation properties [
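A sketch of the UPSO velocity rule follows: the global-variant and local-variant updates are computed separately and blended by the unification factor. The constriction coefficient 0.729 and coefficients 2.05 are the values commonly used with UPSO in the literature; the function name is an assumption.

```python
import numpy as np

def upso_velocity(x, v, pbest, gbest, lbest, u, chi=0.729, c1=2.05, c2=2.05, rng=None):
    """Unified PSO velocity: blend of the global-best (G) and local-best (L)
    constriction updates, weighted by unification factor u in [0, 1].
    u -> 1 favors the global variant (exploitation / fast convergence),
    u -> 0 favors the local variant (exploration / diversity)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    r1, r2, r3, r4 = (rng.random(x.shape) for _ in range(4))
    G = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    L = chi * (v + c1 * r3 * (pbest - x) + c2 * r4 * (lbest - x))
    return u * G + (1.0 - u) * L
```

In the proposed algorithm the factor u is not fixed but adapted from the swarm-stability entropy, so the blend shifts between exploration and exploitation during the run.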
The proposed algorithm is based on the idea of coevolution through the concept of unified particle swarm optimization.
In light of the deterioration of MOPSO when the number of objectives rises, in the proposed technique, the many-objective optimization problems described in Section
Dimensionality-reduction techniques usually combine objectives based on correlation and coupling. As strengthened nondominance pressure is applied to the subgroups, the Pareto-optimal set grows quickly and retains some extreme directions, which secures the search area in the early iterations. Yet in most MaOPs the connections between the many objectives cannot be defined clearly, so a farfetched decomposition and recombination may yield an incomplete description of the original problem. We introduce correlation-based decoupling for many-objective optimization. Because the coupling degree is related to the correlation degree, correlation-based decoupling techniques can be divided into relevant decoupling and conflict decoupling. On the one hand, in relevant decoupling the relevant objectives show similar increasing and decreasing trends; as a result, the optimal solutions gather together, which allows the particles to search in a smaller neighborhood. On the other hand, in conflict decoupling the conflicting objective space holds a larger number of nondominated solutions, which guarantees search diversity in the low-dimensional spaces. In this paper, both correlation-based decoupling methods are considered in the simulations. According to the comparison result in Section
There are several correlation coefficient measures; the most common are Pearson's correlation coefficient and Spearman's rank correlation. Pearson's correlation coefficient is sensitive only to a linear relationship between two variables and places demands on the variance ranges, so it is not appropriate for MaOP decomposition. In comparison, rank correlation is a nonparametric measure of statistical dependence between two variables; it is less sensitive to nonnormality in distributions and gives a good description of consistency and tendency. Spearman's rank correlation coefficient between objective functions
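Spearman's coefficient over two objectives evaluated on the same population can be computed as below (the standard rank-difference formula, without tie correction); the sample vectors are illustrative.

```python
import numpy as np

def spearman(f_i, f_j):
    """Spearman's rank correlation between two objective-value vectors
    evaluated over the same population: 1 - 6 * sum(d^2) / (n(n^2 - 1)),
    where d is the per-solution difference of ranks (no tie correction)."""
    n = len(f_i)
    rank = lambda a: np.argsort(np.argsort(np.asarray(a)))
    d = rank(f_i) - rank(f_j)
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))

print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0  (fully relevant objectives)
print(spearman([1, 2, 3, 4], [40, 30, 20, 10]))  # -1.0 (fully conflicting objectives)
```

Values near +1 indicate relevant (co-varying) objectives; values near -1 indicate conflicting ones, matching the two decoupling modes above.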
With this technique, the coupling degrees change along with the particles' fitnesses; in other words, the coupling relations depend on the particles' evolution procedure. To track the evolutionary process and express the nonlinear objectives, the decoupling is performed before every decomposition procedure.
To balance the computational load among subgroups, the relation between the number of subgroups
(1)
(2)
(3) Put the last objectives into one subgroup
(4)
(5) Exit the while loop
(6)
(7) Calculate the rank correlation between objectives
(8) Select the two with the largest (relevant decoupling) or smallest (conflict decoupling) correlation
(9) Put
(10) Rearrange the remaining objectives
(11)
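The grouping loop above can be sketched as follows: repeatedly seed a subgroup with the most-correlated (or most-conflicting) pair and fill it greedily, leaving the last objectives as one subgroup. The subgroup size of three and the greedy fill rule are assumptions, since some symbols of the original listing were lost.

```python
import numpy as np

def group_objectives(F, group_size=3, mode="relevant"):
    """Greedy correlation-based grouping of objectives.
    F: (n_particles, m) matrix of objective values over the population.
    mode='relevant' pairs the most correlated objectives;
    mode='conflict' pairs the least correlated (most conflicting)."""
    m = F.shape[1]
    rank = np.argsort(np.argsort(F, axis=0), axis=0)
    R = np.corrcoef(rank.T)                       # Spearman = Pearson on ranks
    remaining, groups = list(range(m)), []
    while len(remaining) > group_size:
        sub = R[np.ix_(remaining, remaining)].copy()
        np.fill_diagonal(sub, np.nan)             # never pair an objective with itself
        pick = np.nanargmax(sub) if mode == "relevant" else np.nanargmin(sub)
        i, j = np.unravel_index(pick, sub.shape)
        seed = [remaining[i], remaining[j]]
        rest = [o for o in remaining if o not in seed]
        score = [R[seed[0], o] + R[seed[1], o] for o in rest]
        extra = rest[int(np.argmax(score))] if mode == "relevant" else rest[int(np.argmin(score))]
        groups.append(seed + [extra])
        remaining = [o for o in rest if o != extra]
    groups.append(remaining)                      # the last objectives form one subgroup
    return groups
```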
In the updating formulation of PSO, the global best
In the UPSO, both the subgroup local best
The personal best is defined as the best position a particle has ever reached during the search process, which raises a challenge in MOPs, and
the current operating group
(1) Update personal repository
(2) Select the memories in  that belong to the best candidates. The number of candidates is
(3) from
(4) in
(5)
The high-dimensional objectives lead to a large area of
The maintenance of global repository
The algorithm regroups the objectives and population after several iterations according to the stability. After regrouping, the repositories of the new subgroups become empty again. We propose a repository-recompose strategy that assigns each solution in
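A sketch of this recompose step follows: every solution in the global repository is routed to the new subgroup that owns its leading objective. Defining the leading objective as the objective on which the solution ranks best is an assumption, since the paper's exact assignment rule is not reproduced here.

```python
import numpy as np

def recompose_repositories(rep_global, F_rep, groups):
    """Refill the (emptied) subgroup repositories after regrouping.
    rep_global: list of solutions; F_rep: (n, m) objective values of those
    solutions; groups: list of objective-index lists (the new subgroups)."""
    rank = np.argsort(np.argsort(F_rep, axis=0), axis=0)  # per-objective ranks (0 = best)
    leading = np.argmin(rank, axis=1)                     # objective where each solution ranks best
    owner = {obj: g for g, objs in enumerate(groups) for obj in objs}
    sub_repos = [[] for _ in groups]
    for s, lead in enumerate(leading):
        sub_repos[owner[int(lead)]].append(rep_global[s])
    return sub_repos
```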
As a simple example, the distributed hypercube indexes and the leading objective indexes of the known Pareto front of
Solutions’ leading objectives, hypercube indexes, and potential indexes.
Solutions | Objective values | Leading objective | Hypercube index | PHS
---|---|---|---|---
(table entries not preserved in this extraction)
In order to give a visual representation of the solutions' routes and express the feasible detailed designs in the coevolution process, we propose a novel hypercube technique. For the proposed algorithm itself, this hypercube technique helps assess the convergence of subgroups, balance the learning directions of particles, and set rules for the parameters.
The hypercubes are used to grid the solutions in the subgroup archives and are adaptive, similarly to [
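An adaptive hypercube grid of this kind, together with an entropy of its occupancy as a stability signal, can be sketched as follows; the number of divisions per objective and the Shannon-entropy form are assumptions (the paper's incremental-entropy formula may differ).

```python
import numpy as np

def hypercube_indexes(F, n_div=5):
    """Adaptive hypercube grid over the current archive: grid bounds follow
    the archive's per-objective min/max, so cells adapt whenever the archive
    changes. Returns one integer cell index per solution."""
    F = np.asarray(F, dtype=float)
    lo, hi = F.min(axis=0), F.max(axis=0)
    width = np.where(hi > lo, hi - lo, 1.0)            # avoid zero-width axes
    cell = np.minimum((n_div * (F - lo) / width).astype(int), n_div - 1)
    # flatten the per-objective cell coordinates into a single index
    return np.ravel_multi_index(cell.T, (n_div,) * F.shape[1])

def swarm_entropy(F, n_div=5):
    """Shannon entropy of the hypercube occupancy: low entropy means the
    swarm has concentrated into few cells (more stable/converged)."""
    idx = hypercube_indexes(F, n_div)
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())
```

Such an entropy can drive the adaptive unification factor: as it drops, the algorithm can shift weight toward exploration to restore diversity.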
In a previous paper [
After the decomposition, the solutions in
Illustration of the particle trace in UMOPSO-D.
The algorithm of UMOPSO-D is detailed in Algorithm
number of inner iterations
(1) Decompose the objectives according to Algorithm
(2) Randomly initialize the position of each particle; initialize the velocity
(3) Store the nondominated particles into the subgroup repository; update
(4) Initialize the unification factor
(5) Initialize each particle's best position
(6)
(7)
(8) (a) Update the velocity and position according to unified particle swarm optimization; the selection procedures of  follow the algorithm in Section
(9) (b) If a particle's position lies beyond the decision space, the corresponding variable is replaced by the value of the boundary.
(10) (c) Evaluate the new cost of each particle in
(11) (d) Remove the dominated solutions. If the number of particles exceeds the capacity of
(12) (e) Update the capacity of
(13) (f) Calculate
(14) (g)
(15)
(16) Regroup the objectives according to Algorithm
(17) Recompose the subgroup repositories.
(18)
In this section, the benchmark function set DTLZ is employed as the test problem, and three metrics are selected to evaluate the performance of the seven algorithms MOPSO, MOPSO/D, NSGA-II, MOEA/D, Gong,
We select performance metrics that evaluate convergence, diversity, and running efficiency, to provide straightforward statistical quantification for the comparison of MOEAs.
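The convergence metric used below, Generational Distance (GD), can be computed as in this sketch (the arithmetic-mean variant; some papers use a root-mean-square variant instead):

```python
import numpy as np

def generational_distance(A, P):
    """Generational Distance: average Euclidean distance from each obtained
    solution in A to its nearest point in the reference set P sampled from
    the true Pareto front. Lower is better (measures convergence)."""
    A, P = np.asarray(A, dtype=float), np.asarray(P, dtype=float)
    d = np.linalg.norm(A[:, None, :] - P[None, :, :], axis=-1).min(axis=1)
    return float(d.mean())

print(generational_distance([[0.1, 0.0], [0.0, 0.1]], [[0.0, 0.0]]))  # 0.1
```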
Setting of reference point.
Function | Reference point
---|---
DTLZ1 |
DTLZ2 |
DTLZ3 |
DTLZ4 |
In this section, the detailed comparison results are organized as follows. The experimental results, including the mean and standard deviation of the performance metric GD over 15 independent simulations on each test instance, are summarized in Table
Comparisons of GD between proposed algorithm and other MOEAs.
Functions | MOPSO | MOPSO/D | NSGA-II | MOEA/D | Gong | DEMO | | |
---|---|---|---|---|---|---|---|---|---
(mean and standard deviation rows for DTLZ1–DTLZ4 at each tested objective number; numeric entries not preserved in this extraction)
On the GD metric, we first focus on the comparison between UMOPSO-D and the peer algorithms. Both relevant decoupling and conflict decoupling contribute to the UMOPSO-D performance. In the experiments with 9 objectives, UMOPSO-D achieves the best results on DTLZ1, DTLZ2, and DTLZ3, and on DTLZ4 its deviation from the best performance is tiny. MOEA/D shows good convergence performance on DTLZ4. For the results on
Second, the GD comparison between relevant decoupling and conflict decoupling in UMOPSO-D is as follows. Overall, conflict decoupling yields lower GD values in most situations, especially on DTLZ1 and DTLZ4; the differences between the two techniques on DTLZ2 and DTLZ3 are relatively small.
The HV metrics of the simulations are shown in Table
Comparisons of HV between proposed algorithm and other MOEAs.
Functions | MOPSO | MOPSO/D | NSGA-II | MOEA/D | Gong | DEMO | | |
---|---|---|---|---|---|---|---|---|---
(mean and standard deviation rows for DTLZ1–DTLZ4 at each tested objective number; numeric entries not preserved in this extraction)
Furthermore, the HV metric indicates that the relative performance of relevant decoupling and conflict decoupling depends on the formulation of the given test instance. Conflict decoupling has an obvious advantage on DTLZ3 but relatively poor performance on DTLZ2. When GD is also taken into consideration, however, conflict decoupling holds a slight advantage over relevant decoupling.
The ARE metrics of the simulations are shown in Table
Comparisons of ARE between proposed algorithm and other MOEAs.
 | MOPSO | NSGA-II | MOEA/D | Gong | DEMO | UMOPSO-D
---|---|---|---|---|---|---
(entries not preserved in this extraction)
Figure
Performance curves on DTLZs in high dimension experiments.
From the above statistical perspective and analysis, UMOPSO-D guarantees better, or at least competitive, performance in both convergence and diversity compared with the other six experimental algorithms on high-dimensional DTLZ problems, and conflict coupling has a slight advantage over relevant coupling in objective decomposition. The computational efficiency is relatively low compared to
MaOPs challenge most MOEAs: the loss of selection pressure results in poor convergence and diversity. In this paper, we introduce a decomposition-based unified evolutionary algorithm and merge it with MOPSO to improve convergence and diversity. The innovations include five points: (1) decomposing the many objectives with discrete decoupling; (2) making connections between subgroups through a unified coevolutionary strategy; (3) adaptive factor
UMOPSO-D is an attempt to solve MaOPs through reduction without discarding information; in other words, the decomposition and coevolution processes preserve the original demands and objectives, which guarantees the authenticity, integrity, and originality of the optimization problem. According to the no-free-lunch principle, complexity and data size pay for the good performance of UMOPSO-D, and its reaction speed is slow compared with some state-of-the-art algorithms. Further experiments and statistical analysis recording performance on scaled or other benchmark problems should follow. In addition, probability-based strategies and coevolution-based multiagent structures can reduce and share computational cost, and will be applied to UMOPSO-D to achieve better quality in our subsequent research.
The authors declare that they have no competing interests.
This work was sponsored by the National Natural Science Foundation of China under Grant no. 61503287, no. 71371142, and no. 61203250, Program for Young Excellent Talents in Tongji University (2014KJ046), Program for Shanghai Natural Science Foundation (14ZR1442700), Program for New Century Excellent Talents in University of Ministry of Education of China, and Ph.D. Programs Foundation of Ministry of Education of China (20100072110038).