Mathematical Problems in Engineering, Hindawi Publishing Corporation, Volume 2016, Article ID 6012805, doi:10.1155/2016/6012805

Research Article: A Novel Multiobjective Optimization Method Based on Sensitivity Analysis

Tiane Li, Xiaoying Sun, Zhengzheng Lu, and Yue Wu
Key Lab of Structures Dynamic Behavior and Control of the Ministry of Education, Harbin Institute of Technology, Harbin 150090, China

Academic Editor: Antonino Laudani

Received 5 May 2016; Revised 22 August 2016; Accepted 4 September 2016; Published 11 October 2016

Copyright © 2016 Tiane Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

For multiobjective optimization problems, different optimization variables have different influences on the objectives, which implies that attention should be paid to the variables according to their sensitivity. However, previous optimization studies have either not considered variable sensitivity or have conducted sensitivity analysis independently of the optimization. In this paper, an integrated algorithm is proposed, which combines the optimization method SPEA (Strength Pareto Evolutionary Algorithm) with the sensitivity analysis method SRCC (Spearman Rank Correlation Coefficient). In the proposed algorithm, the optimization variables serve as samples for sensitivity analysis, and the resulting sensitivities are used to guide the optimization process by changing the evolutionary parameters. Three cases, including a mathematical problem, an airship envelope optimization, and a truss topology optimization, are used to demonstrate the computational efficiency of the integrated algorithm. The results show that this algorithm is able to simultaneously obtain parameter sensitivities and a well-distributed Pareto optimal set, without greatly increasing the computational time in comparison with the SPEA method.

National Natural Science Foundation of China 51678192
1. Introduction

Multiobjective optimization is widely used in many practical engineering problems. Instead of a single optimal solution, a multiobjective optimization problem (MOOP) with conflicting subobjectives provides a set of compromise solutions, known as the Pareto optimal set. In the Pareto optimal set, corresponding to the Pareto front, no solution can be considered better than any other, which means that none can be improved in one objective without degrading another. There are two main ways to obtain the Pareto optimal set. One way is to introduce a strategy based on weight coefficients that converts the multiple objectives into a single objective. Only one optimal solution, corresponding to the defined weights, can be obtained in a single run, so multiple optimization runs with varying objective weights are needed to obtain the solution set. Furthermore, this method cannot find Pareto optimal solutions in problems having a nonconvex Pareto optimal front. The second way obtains the Pareto optimal set in a single run and has been emphasized in recent years. As the basis of decision-making, the Pareto optimal set provides the decision maker with insight into the characteristics of the problem before a final solution is chosen.

For most multiobjective optimization methods that can obtain the Pareto optimal set in a single run, attention is focused on preventing local optima or on designing individual sorting or fitness assignments. However, one overlooked issue is that the influences of different parameters on the model can be disparate, so it may be uneconomical to spend much time on the secondary parameters. A wiser approach is to give higher priority to those parameters with significant influence on the optimization objectives (i.e., parameters with high sensitivity).

In this study, a new strategy which combines the methods of parameter sensitivity analysis and multiobjective optimization is proposed. In the process of optimization, parameter sensitivity is updated in real time with no extra analysis sample and then guides optimization by setting the parameter priority.

The rest of this paper consists of the following. Section 2 presents the drawbacks of conventional methods. Section 3 describes the multiobjective optimization method and the parameter sensitivity analysis method. Furthermore, some improvements are made for these basic methods. Then, Section 4 introduces the integrated algorithm. To validate the effectiveness of the proposed algorithm, Section 5 examines a mathematical problem, an airship envelope optimization, and a truss topology optimization. Finally, Section 6 provides the concluding remarks.

2. Problem Representation

MOOP is characterized by many optimization variables. Consider a multiobjective optimization model with five parameters. If each parameter is allocated 8 binary bits, the gene is 40 bits long, and its sample space reaches 2^40 ≈ 10^12. If this were to be calculated directly, the computational effort would be astronomical. Even if only 0.01% of the total is calculated, that is still 100 million individuals. However, if sensitivity analysis finds that only three of the parameters have significant influence on the model, while the other two parameters have little influence, higher priority can be allocated to the first three parameters. A considerable amount of time can be saved by decreasing the time spent on the insensitive parameters. One extreme case is when the sensitivity of the three parameters is 33.33% each and the sensitivity of the other two parameters is 0%. Then the sample space decreases to 2^24 ≈ 1.6 × 10^7. The reduced sample space means a significant improvement in optimization efficiency.

To consider parameter sensitivity in optimization process, a traditional approach is to conduct a sensitivity analysis at the first step, then to ignore insensitive parameters and retain the sensitive parameters for optimization. However, this approach has two obvious defects:

Complicated and Duplicated Calculations. A large number of samples is needed in global sensitivity analysis to obtain accurate parameter sensitivities. Analyzing these samples is time-consuming, and they cannot be utilized in the later optimization: when the sensitivity analysis is completed, the samples are discarded. If the samples from the sensitivity analysis could be shared with the optimization process, considerable computing time could be saved.

Single Choice. In the traditional approach, the answer to the question of whether a parameter should be set as an optimization variable is either "yes" or "no." For parameters with low sensitivity that nevertheless cannot be ignored, there is no intermediate answer. Another situation in a multiobjective optimization model is that some parameters are very sensitive to objective function A but insensitive to objective function B. In this case, reducing the calculation effort requires two separate targeted analytical models, one for objective A and one for objective B. This is not a good choice, not only because it is complex, but also because the Pareto optimal solutions over all objectives cannot be obtained. Creating two or more models in an optimization process should be avoided, or fatal differences may be unknowingly introduced by the modifications.

In order to eliminate the above defects, a new strategy which integrates sensitivity analysis and optimization is proposed. In this strategy, the sample resources are shared between the sensitivity analysis and the optimization. Meanwhile, results of sensitivity analysis can be directly used to guide the optimization process.

3. Basic Analysis Methods

3.1. Optimization Method

3.1.1. Strength Pareto Evolutionary Algorithm (SPEA)

Evolutionary algorithms (EA), which are random-exploration optimization algorithms based on the idea of biological evolution, are widely used and well suited for finding the global optimum in MOOP. The genetic algorithm (GA), which simulates natural selection and survival of the fittest, is a typical example of an EA. Many algorithms have been proposed based on the basic concept of the GA, such as the Vector Evaluated Genetic Algorithm (VEGA), Multiobjective Genetic Algorithm (MOGA), Nondominated Sorting Genetic Algorithm (NSGA), Niched Pareto Genetic Algorithm (NPGA), Strength Pareto Evolutionary Algorithm (SPEA), Nondominated Sorting Genetic Algorithm II (NSGA-II), Pareto Envelope-based Selection Algorithm (PESA), Pareto Archived Evolution Strategy (PAES), and Micro-Genetic Algorithm (Micro-GA). The core of these methods is the GA; the differences lie primarily in the selection mechanism and fitness evaluation. Apart from EA, swarm intelligence, which is derived from the concept of social and collective behavior, has emerged recently and mainly includes Ant Colony Optimization (ACO) [20, 21] and Particle Swarm Optimization (PSO).

In this paper, the SPEA is employed, as it has been recommended as one of the most efficient multiobjective evolutionary algorithms [25, 26]. The characteristic of this algorithm is the definition of an external population P̄ for storing the nondominated solutions amongst all the solutions considered so far. The nondominated solutions over the entire search space constitute the Pareto optimal set. The basic steps of the algorithm are shown schematically in Figure 1 and can be summarized as follows: initialization; updating the external set; fitness assignment; selection; crossover and mutation; termination.

Flowchart of SPEA.

The first key technique of the SPEA is the clustering calculation during the update of the external population. The nondominated solutions of each generation are stored in the external population, whose size must be limited to keep it from growing without bound during the iteration. When the size of the external population exceeds the limit capacity N, an elimination strategy based on fuzzy cluster analysis is initiated. Its first step is to obtain the distances between clusters in the external nondominated solution set and then to merge the two closest clusters into one. This process is repeated until the number of clusters falls below the limit capacity N. Each time two clusters are merged, the distances to the remaining clusters are recalculated. The computational complexity is O(mn²), where n represents the actual size of the external solution set and m denotes the excess above the limit capacity of the external nondominated solution set.
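The merging loop described above can be sketched as follows. This is a simplified illustration of the idea (average-linkage merging until the archive fits the capacity, then keeping one representative per cluster), not the authors' exact implementation; the name `prune_archive` and the Euclidean metric are assumptions:

```python
import itertools

def prune_archive(points, limit):
    """Reduce an archive of objective vectors to at most `limit` entries by
    repeatedly merging the two closest clusters (average linkage), then
    keeping the point nearest each cluster centroid as representative."""
    clusters = [[p] for p in points]

    def dist(c1, c2):
        # average pairwise Euclidean distance between two clusters
        s = sum(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                for p in c1 for q in c2)
        return s / (len(c1) * len(c2))

    while len(clusters) > limit:
        # find and merge the closest pair of clusters
        i, j = min(itertools.combinations(range(len(clusters)), 2),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i].extend(clusters.pop(j))

    reps = []
    for c in clusters:
        centroid = [sum(x) / len(c) for x in zip(*c)]
        reps.append(min(c, key=lambda p: sum((a - b) ** 2
                                             for a, b in zip(p, centroid))))
    return reps
```

Recomputing every pairwise distance inside the loop is what yields the O(mn²) cost noted above; Section 3.1.2 replaces this with cached distances.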

Another key issue, namely, the calculation of individuals' fitness, is handled as follows. Firstly, the fitness of an external nondominated individual is defined as the percentage of population individuals covered by it, expressed as a real value in [0, 1) (also called strength). Subsequently, the fitness of each other individual in the population P is defined as the sum of the strengths of all external Pareto solutions by which it is covered. This fitness assignment preserves the diversity of both the dominated set and the nondominated set.
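The strength-based fitness assignment above can be sketched in a few lines, assuming standard Pareto dominance for "covering" and minimization of all objectives; `spea_fitness` is an illustrative name, not the authors' code:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a covers b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def spea_fitness(external, population):
    """Sketch of SPEA fitness assignment (lower fitness is better).
    Strength of an external solution = fraction of population members it
    covers; fitness of a population member = 1 + sum of the strengths of
    the external solutions covering it."""
    n = len(population)
    strength = {e: sum(dominates(e, p) for p in population) / (n + 1)
                for e in external}
    fitness = {p: 1 + sum(s for e, s in strength.items() if dominates(e, p))
               for p in population}
    return strength, fitness
```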

A tournament selection mechanism is adopted to choose individuals, from both the external population P̄ and the evolutionary population P, to take part in the evolution. Then the crossover and mutation operations are performed to generate the next generation, or the optimization process exits if the generation counter exceeds the defined value.

3.1.2. Improvement of SPEA

The elimination strategy of fuzzy cluster analysis is initiated when the number of clusters exceeds the limit capacity N. In this process, a removed individual cannot be recycled into the external nondominated set. Some optimal individuals generated by later generations may be worse than those removed in previous generations, which means that the Pareto front can retreat. A simple example of Pareto retreating is shown in Figure 2. In an earlier generation, individual B is removed because it is too close to individual A. In the next generation, a new individual B′ joins the nondominated set. However, B′ is dominated by B; in other words, if B still existed, B′ would never have the chance of being an optimal solution. In this way, the Pareto front retreats; that is, it cannot be guaranteed that the individuals of the external nondominated set are optimal solutions. This violates the concept of optimality and is more pronounced for a linear model with a small external population capacity.

Recession of the Pareto front.

To prevent this phenomenon, the SPEA is modified slightly: (1) the external nondominated set is stored in order, using dichotomy, according to some objective function; (2) a mapped distance collection is used to store the cluster distances to avoid duplicate calculation; (3) a backup collection of external nondominated solutions is adopted to recycle the individuals removed because of overcrowding. This is similar to the ε-Pareto front selection strategy proposed by Laumanns et al. With these improvements, the removed individuals have the chance to move back into the external nondominated set, which guarantees no retreat of the Pareto front. Furthermore, the time complexity is decreased to O(n log n), where n represents the size of the external nondominated set.

3.2. Sensitivity Analysis Method

3.2.1. Evaluation of Traditional Methods

Sensitivity analysis is used to qualitatively or quantitatively evaluate the influence of parameters on the output variables. Multiparameter sensitivity analysis methods include local sensitivity analysis and global sensitivity analysis. Local sensitivity analysis obtains the influence of a parameter on the output variable by changing that parameter while the other parameters remain unchanged. The essence of this method is single-parameter sensitivity analysis, which does not consider the correlation between parameters; moreover, the result is unstable for a nonlinear model. In global sensitivity analysis, all the input parameters are varied to obtain the output variable; that is, correlation between parameters is considered. Traditional global sensitivity analysis methods include multivariate regression, Morris's method, Sobol's method, Fourier Amplitude Sensitivity Analysis, and Extended Fourier Amplitude Sensitivity Analysis.

However, these existing global sensitivity analysis methods cannot be directly embedded in an optimization process. The main reasons are as follows:

Requirement for samples: randomness and unbiasedness are two basic properties of the samples used by traditional sensitivity analysis methods. But an optimization process based on the GA can only provide biased samples, which tend towards the optimal set.

Requirement for parameters: the analysis parameters of the traditional methods must follow certain rules. Assuming the sensitivities of parameters x1, x2, x3 to a function f are analyzed by Sobol's method, combined parameter sets such as (x1′, x2, x3), (x1, x2′, x3), (x1, x2, x3′), and (x1′, x2′, x3′) should be extracted from two sample groups, namely, (x1, x2, x3) and (x1′, x2′, x3′). But in the optimization process, the parameters (optimization variables) of each generation cannot satisfy this rule.

Time-consumption: taking Sobol's method as an example, the two sample groups would include thousands of parameter values, implying that thousands of combined parameter sets are needed to determine parameter sensitivity.

In mathematical statistics, parameter sensitivity can be considered to reflect the correlation between the input parameters and output variables. Therefore, the correlation concept is applied in sensitivity analysis to overcome the above disadvantages.

3.2.2. Rank Correlation Coefficient

The correlation coefficient can be used for sensitivity analysis, as mentioned in the literature [35, 36]. Related evaluation methods include the linear correlation coefficient, based on fixed variable distances, and the rank correlation coefficient, based on fixed variable order, which is also called the sequential correlation coefficient.

In the present investigation, the Spearman Rank Correlation Coefficient (SRCC) is used. The concept of the SRCC is inherited from the Pearson Product-Moment Correlation Coefficient (PMCC). In statistics, both are frequently used as tools to analyze the correlation between an input variable X and an output variable Y. For the PMCC, the X-Y pairs must follow a normal distribution; however, this assumption is not feasible for each generation of optimization variables. The SRCC obtains the correlation coefficient from the parameter ranks rather than from the raw values as the PMCC does. This operation is described as rank transformation. It linearizes monotonic nonlinear relationships between variables and reduces the effects of extreme values. The transformation converts the sensitivity measure from one of linearity to one of monotonicity and is widely used in parameter screening and sensitivity analysis of model output. Furthermore, the sample distribution has no influence on the SRCC result, which makes the SRCC calculation feasible using optimization variables.

Monotonically increasing transformation invariance and robustness are two important characteristics of the rank correlation coefficient. Monotonically increasing transformation invariance means that the value of the rank correlation coefficient is unaffected by either a linear or a nonlinear increasing transformation, as long as the transformation of the variables is increasing. This differs from the linear correlation coefficient, whose value is stable only under a linearly increasing transformation. Assume that two variables x, y, whose ranges are [0, 1], follow a uniform distribution and satisfy the relationship x = y. Both the linear correlation coefficient and the rank correlation coefficient are 1.0. Some samples are extracted and transformed according to x1 = x and y1 = y³. In this case, the linear correlation coefficient changes to 0.8985, but the rank correlation coefficient is still 1.0. Robustness means strong resistance to abnormal values: the method can reproduce its results (e.g., the order of importance of the input parameters) when the analysis is repeated on different samples of the same population.
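The invariance claim can be reproduced with a short script; the sample points below are arbitrary (and tie-free), and the rank/correlation helpers are minimal illustrations:

```python
def pearson(xs, ys):
    """Linear (Pearson) correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def ranks(xs):
    """Ascending 1-based ranks (no ties in this demo)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    """Rank correlation: Pearson's formula applied to the ranks."""
    return pearson(ranks(xs), ranks(ys))

x = [0.05, 0.20, 0.35, 0.50, 0.65, 0.80, 0.95]
y_linear = x                    # y = x
y_cubic = [v ** 3 for v in x]   # monotonically increasing, nonlinear

print(spearman(x, y_linear))      # 1.0
print(spearman(x, y_cubic))       # 1.0 (invariant under the monotone transform)
print(pearson(x, y_cubic) < 1.0)  # True (linear coefficient drops below 1.0)
```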

Rank is defined as the increasing (or descending) sort value of the raw parameters. If two parameters have the same sort value, an average value will be adopted. Table 1 gives a simple example of ranking.

An example of ranking.

Raw Xi Rank Final rank xi
9.1 1 1
7.0 2 ( 2 + 3 ) / 2 = 2.5
2.6 4 4
7.0 3 ( 2 + 3 ) / 2 = 2.5

Assume that there exist random variables X = {X_1, X_2, X_3, …, X_n} and Y = {Y_1, Y_2, Y_3, …, Y_n}; then the SRCC, namely, ρ_s, is given by

(1) \rho_s = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2 \cdot \sum_{i=1}^{n}(y_i - \bar{y})^2}},

where x_i and y_i are the ranks of the original values X_i and Y_i, and \bar{x} = (1/n)\sum_{i=1}^{n} x_i, \bar{y} = (1/n)\sum_{i=1}^{n} y_i.
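Equation (1), together with the tie-averaging rule of Table 1, can be sketched in a few lines. Function names are illustrative; the descending sort mirrors the ranking direction used in Table 1 (the coefficient itself is unaffected by the sort direction, as long as both variables use the same one):

```python
def average_ranks(values):
    """Descending-sort ranks with tied values replaced by the average of
    their raw ranks, reproducing the example of Table 1."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    raw = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        raw[i] = rank
    final = []
    for v in values:
        tied = [raw[j] for j, w in enumerate(values) if w == v]
        final.append(sum(tied) / len(tied))
    return final

def srcc(X, Y):
    """Spearman Rank Correlation Coefficient per equation (1):
    Pearson's formula applied to the ranks of X and Y."""
    x, y = average_ranks(X), average_ranks(Y)
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    num = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    den = (sum((xi - xb) ** 2 for xi in x) *
           sum((yi - yb) ** 2 for yi in y)) ** 0.5
    return num / den
```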

3.2.3. Evaluation of SRCC

With the average-rank assignment strategy, the SRCC must be recalculated whenever a new individual is generated; for tens of thousands of analyzed individuals, this recalculation is a heavy burden. An alternative approach is to apply dichotomy when sorting a new value into the rank list, instead of the average-rank strategy for duplicated variables, which reduces the time complexity from O(n) to O(log n), where n is the number of analyzed individuals.
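The dichotomy idea can be illustrated with Python's `bisect` module: locating the rank position of a new value in an already sorted archive takes O(log n) comparisons. This is a sketch of the search step only (the list insertion itself still shifts elements):

```python
import bisect

sorted_values = []  # archive of previously seen raw values, kept sorted

def insert_and_rank(value):
    """Locate the rank position of a new value by binary search and insert
    it, instead of rescanning the whole archive."""
    pos = bisect.bisect_left(sorted_values, value)
    sorted_values.insert(pos, value)
    return pos + 1  # 1-based ascending rank at insertion time
```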

The value of SRCC ranges from −1.0 to 1.0, as shown in Figure 3. Greater absolute value of SRCC means greater influence of x on y.

Relationship of SRCC value versus variable distribution.

Another characteristic of SRCC is that all the straight lines with different slopes have the same SRCC value, which is 1.0 or −1.0, as shown in Figure 4.

SRCC value of straight lines.

According to this SRCC characteristic, the two straight lines with different slopes in Figure 5 have the same sensitivity. In fact, when x1 and x2 change by the same extent, the Δy of the two curves is different; based on the principle of single-parameter sensitivity analysis, the influences of x1 and x2 on y are different. Superficially, then, the SRCC cannot confirm the true sensitivity. Fortunately, this awkward situation only occurs in a model with a single input parameter. For bivariate or multivariate models, the relationship between an insensitive input parameter x and the output variable y is strongly perturbed by the other, sensitive parameters. In other words, the relationship between x and y will look like Figure 6, rather than a straight line of slope 0 with SRCC = 1.0.

Curves with different slope.

Output y versus insensitive parameter x.

The sensitivity based on the SRCC is determined as follows: firstly, obtain an input parameter matrix by random variable technology; secondly, change the input parameters simultaneously and obtain the corresponding output variables; thirdly, statistically analyze the influence of the input parameters on the output variables. The essence of this method is therefore global, rather than local, sensitivity analysis.

4. The Integrated Algorithm

Crossover probability (pc) and mutation probability (pm) are the two basic evolutionary parameters in an EA. Smaller values of these parameters imply a smaller probability of generating new genes [39, 40]. In the integrated algorithm SRCC-SPEA, the results of sensitivity analysis are used to guide the activity of the variables in the optimization process by modifying pc and pm.

For an optimization model, which has four variables (A,B,C,D) and three objectives (f1,f2,f3), the modified pc and pm are obtained as follows:

Extract the normalized sensitivity, meaning that the variable sensitivities for each specified objective sum to 1.0, as listed in Table 2.

Sum the sensitivity of each variable. In this way, the influence of a variable on all the objectives is considered.

It is well known that the evolutionary parameters should not be too large, to avoid nonconvergence of the optimization process. Therefore, the correction coefficient is normalized with respect to the most sensitive variable, so that the evolutionary parameters never exceed their defined values.

A brief example of modifying evolutionary parameters.

Variables f1 f2 f3 Sum Correction coefficient pc pm
A 0.56 0.52 0.36 1.44 1.00 0.400 0.020
B 0.13 0.30 0.32 0.75 0.52 0.208 0.010
C 0.25 0.09 0.23 0.57 0.40 0.160 0.008
D 0.06 0.09 0.09 0.24 0.17 0.068 0.003

Assume that the original global pc and pm are 0.400 and 0.020, respectively. The actual probability for each variable is obtained by multiplying pc and pm by its correction coefficient, as reported in Table 2. The most sensitive variable is A, whose correction coefficient is 1.0, so its evolutionary parameters remain at the defined values. Variables with low sensitivity are assigned low evolutionary parameters, corresponding to fewer opportunities in the subsequent genetic evolution. Thus, more optimizing computation is allocated to the variables with higher sensitivity, which effectively accelerates the optimization process.
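The modification steps above can be reproduced from the numbers in Table 2; this is a sketch with illustrative names, and it assumes (to match the tabulated pc and pm values) that the correction coefficient is rounded to two decimals before scaling:

```python
# Normalized sensitivities per objective (rows: variable -> (f1, f2, f3)),
# taken from Table 2.
sensitivity = {
    "A": (0.56, 0.52, 0.36),
    "B": (0.13, 0.30, 0.32),
    "C": (0.25, 0.09, 0.23),
    "D": (0.06, 0.09, 0.09),
}
PC, PM = 0.400, 0.020  # global crossover / mutation probabilities

# Step 2: sum each variable's sensitivities over all objectives.
sums = {v: sum(s) for v, s in sensitivity.items()}
# Step 3: normalize by the most sensitive variable (Table 2 rounds here).
top = max(sums.values())
coeff = {v: round(s / top, 2) for v, s in sums.items()}
# Per-variable evolutionary parameters.
pc = {v: PC * c for v, c in coeff.items()}
pm = {v: PM * c for v, c in coeff.items()}

for v in sensitivity:
    print(v, coeff[v], round(pc[v], 3), round(pm[v], 3))
```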

In the improved algorithm SRCC-SPEA, the individuals obtained from the optimization process serve as the source samples for sensitivity analysis, and the SRCC results set the optimization priority of the variables. When the sensitivity analysis sample is small, the SRCC results can be unstable and deviate far from the true values. Therefore, the SRCC should reach stabilization before guiding the optimization process; it can be safely considered stable when the deviation across 5 consecutive generations is less than 5%. Figure 7 depicts the process of the SRCC-SPEA.
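The stability criterion can be sketched as follows, under the assumption that "deviation less than 5%" means the per-variable SRCC values of the last five generations span a range below 0.05 (one plausible reading; the paper does not give the exact formula):

```python
def srcc_stable(history, window=5, tol=0.05):
    """Sketch of the stability test: for every variable, the SRCC values of
    the last `window` generations must deviate from each other by less than
    `tol` before sensitivity starts guiding the evolution.

    `history` is a list of per-generation SRCC vectors (one entry per
    optimization variable)."""
    if len(history) < window:
        return False
    recent = history[-window:]
    for k in range(len(recent[0])):
        vals = [gen[k] for gen in recent]
        if max(vals) - min(vals) >= tol:
            return False
    return True
```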

Process flowchart for integrated algorithm.

5. Case Studies

Three cases are carried out to verify the practical applicability and superiority of the integrated algorithm SRCC-SPEA.

5.1. Mathematical Problem

In this section, a mathematical problem is used to verify the accuracy of the optimization method SRCC-SPEA and the sensitivity analysis method SRCC. The mathematical problem is defined as

(2) f_1 = \sin x + 0.3y, \quad f_2 = \cos x - 0.3y; \qquad x, y \in [0, \pi/2]; \qquad \min(f_1, f_2).

All the evolutionary parameters are listed in Table 3.

The evolutionary parameters.

Number of each generation Number of external population Max generation Crossover probability Mutation probability
50 50 50 0.40 0.02

The theoretical solution set of this model in the first quadrant is f_1^2 + f_2^2 = 1, with f_1, f_2 \in [0, 1]. From Figure 8, the Pareto optimal set of SRCC-SPEA shows a satisfactory agreement with the theoretical result.

Comparison of Pareto optimal set between SRCC-SPEA and theory.

Figure 9 shows the comparison of parameter sensitivities between the SRCC and Sobol's method. The values obtained by the two methods are similar, and the sensitivity ordering agrees. This result demonstrates the effectiveness of the SRCC as the sensitivity analysis used to guide the optimization. Checking the final data shows that x is considerably more sensitive than y: the correction coefficients of x and y are 1.0 and 0.19, so a higher evolution priority is allocated to x.

Comparison of parameter sensitivity between SRCC and Sobol’s.

5.2. Optimization of an Airship Envelope

A multiobjective optimization model of an airship envelope, whose geometry is shown in Figure 10, is presented in this section.

Profile of a biaxial ellipsoid envelope.

5.2.1. Variables and Constraints

The optimization variables are listed in Table 4, in which t, P, and E represent material thickness, differential pressure, and elastic modulus, respectively. In addition to the constraints on the optimization variables, the envelope volume, which is directly related to the payload capacity and maximum operating altitude, is fixed at 12.6 × 10^4 m³. The value of b is calculated from the values of a1 and a2.

The optimization variables.

Optimization variables Reference value Constraints
a1 /m 59.5 55.0–64.0
a2 /m 84.0 80.0–88.0
P /Pa 400 350–450
t /mm 0.20 0.10–0.30
E /GPa 12.0 11.0–13.0
5.2.2. Objectives

The objective function is expressed as min(f1, f2, f3), where f1, f2, and f3 represent the volume of material, the structural strain energy, and the maximum envelope stress, respectively. With the material density fixed at 550 kg/m³, the volume of material reflects the envelope self-weight. The structural strain energy and maximum envelope stress are defined to indicate the stiffness and ultimate strength of the envelope. The evolutionary parameter values are those of Table 3.

5.2.3. Results

The three-dimensional Pareto optimal set of the fiftieth generation is shown in Figure 11. To further clarify the advantage of SRCC-SPEA, Figure 12 shows the Pareto optimal set in the (f1-f3) plane, and Figure 13 shows the optimal parameters (t-E) corresponding to the Pareto optimal set. The Pareto optimal set of SRCC-SPEA clearly shows better uniformity than that of SPEA after the 50th iteration, which indicates that SPEA needs more iteration steps to obtain a comparable result.

Pareto optimal set of SRCC-SPEA and SPEA.

Pareto optimal set (f1-f3) of SRCC-SPEA and SPEA.

Optimal parameters (t-E) of SRCC-SPEA and SPEA.

The time consumption, which depends on the computer capacity, is about 5 hours for both SPEA and SRCC-SPEA. But the sensitivity of the optimization variables is additionally obtained by SRCC-SPEA; the sensitivities and correction coefficients are listed in Table 5. Theoretically, variables P and E have no influence on the volume of material f1; the sensitivities of P and E obtained by the SRCC are 0.90% and 0.81%, which are indeed close to zero. The most sensitive variable is the material thickness t, which indicates that the optimization computation will focus on the evolution of t, while fewer genetic opportunities are given to the other variables.

Sensitivities and correction coefficients of the optimization variables.

Variables f1 f2 f3 Correction coefficient
a 1 7.47% 8.48% 7.08% 0.10
a 2 2.94% 0.59% 0.89% 0.02
P 0.90% 17.36% 9.17% 0.12
t 87.87% 63.51% 72.89% 1.00
E 0.81% 10.05% 9.96% 0.09
5.3. Truss Topology Optimization

In this section, the proposed algorithm is demonstrated on a truss topology optimization with discrete variables. The truss, simply supported at both ends (see Figure 14), has a length of 3 m and a regular-triangle cross section with a side length of 0.3 m. The cross-sectional area of each rod is 0.006 m². A concentrated force of 900 N is applied at the middle of the top chord, and the material density is set as 1700 kg/m³.

Schematic diagram of the truss.

The optimization variables are the truss segment number N, the truss type marked as TYPE, and the nominal length of each segment chord. The value of N can be 6, 7, 8, or 9. TYPE, whose values are listed in Table 6, is defined to describe the difference in segment number between the three chords (A, B, C). The nominal lengths of the segment chords are stored in three arrays: A(i), B(i), and C(i), i = 1, 2, 3, …, 9. The actual length of each chord segment is decided by the proportional weight of its nominal length. The total number of optimization variables is 29.

The value of optimization variable TYPE.

TYPE value 0 1 2 3
Segment number
A N N + 1 N N
B N N N + 1 N
C N N N N + 1

The two optimization objectives are minimizing deformation (f1) and minimizing weight (f2). The evolutionary parameters are listed in Table 7.

The evolutionary parameters.

Number of each generation Number of external population Max generation Crossover probability Mutation probability
60 60 1000 0.40 0.02

Figure 15 displays the optimization processes of SRCC-SPEA and SPEA. For the SRCC-SPEA method, the Pareto optimal set changes little after the 500th iteration, but for the SPEA method, deviation still exists between the 750th and the 1000th iterations. This indicates that the termination generation can be set as 500 for SRCC-SPEA, saving much time.

Optimization process of SRCC-SPEA and SPEA.

Sensitivities of the 29 optimization variables to all the objectives are also obtained, as listed in Table 8. The two most sensitive variables to the weight f2 are N and TYPE. The truss segment number N has the most obvious influence on the deformation f1, and the nominal segment lengths along chord A have a greater influence on deformation than those of the other two chords, B and C. After modification, more optimization computation focuses on the evolution of N.

Sensitivities and correction coefficients of the optimization variables.

Variables f1 f2 Correction coefficient
N 13.63% 18.95% 1.00
TYPE 5.03% 10.13% 0.47
A 1 0.19% 2.00% 0.07
A 2 1.28% 5.14% 0.20
A 3 11.52% 6.66% 0.56
A 4 3.23% 0.95% 0.13
A 5 9.89% 3.78% 0.42
A 6 12.58% 0.19% 0.39
A 7 6.51% 1.32% 0.24
A 8 3.44% 1.06% 0.14
A 9 3.76% 2.20% 0.18
B 1 0.30% 4.05% 0.13
B 2 0.59% 4.08% 0.14
B 3 2.56% 2.03% 0.14
B 4 3.04% 2.64% 0.17
B 5 0.08% 1.18% 0.04
B 6 1.09% 1.21% 0.07
B 7 3.17% 0.18% 0.10
B 8 0.81% 3.14% 0.12
B 9 3.09% 2.01% 0.16
C 1 2.30% 2.79% 0.16
C 2 2.20% 3.88% 0.19
C 3 2.49% 5.85% 0.26
C 4 1.08% 1.10% 0.07
C 5 0.61% 0.67% 0.04
C 6 1.99% 0.63% 0.08
C 7 1.83% 7.69% 0.29
C 8 1.29% 0.73% 0.06
C 9 0.43% 3.76% 0.13

A higher number of optimization variables implies a greater benefit from reallocating the optimization computation, and hence a more obvious advantage for the integrated algorithm SRCC-SPEA. That is why SRCC-SPEA obtains an evenly distributed Pareto optimal set much more quickly than SPEA on the truss topology optimization problem with 29 variables. Moreover, conducting a separate sensitivity analysis for 29 variables using a traditional global sensitivity analysis method would require considerable computational effort, whereas the time consumption of SRCC-SPEA was almost equal to that of SPEA, which means that a lot of computational time is saved.

6. Conclusions

In this paper, a novel integrated algorithm SRCC-SPEA was proposed based on the improvements of the optimization method SPEA and sensitivity analysis method SRCC. The elimination strategy “Fuzzy cluster analysis” of SPEA is improved to avoid the retreat of the Pareto front and reduce the time complexity. Dichotomy replaces the average-rank-strategy for SRCC rank assignment to reduce the time complexity.

In contrast with the traditional evolutionary algorithm SPEA, the characteristics of SRCC-SPEA can be summarized as follows: (1) guided by the sensitivity analysis results, SRCC-SPEA effectively improves the survival prospects of sensitive variables by changing the evolutionary parameters; (2) SRCC-SPEA obtains the parameter sensitivities and the Pareto optimal set simultaneously in a single run, without extra samples, because the individuals of the optimization serve as the samples for sensitivity analysis, saving considerable computational cost compared with conducting sensitivity analysis and optimization separately; (3) SRCC-SPEA obtains an evenly distributed Pareto optimal set more quickly than SPEA, and this advantage becomes more pronounced for optimization models with more variables.
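Characteristic (1), changing the evolutionary parameters in favor of sensitive variables, could take many concrete forms; one plausible form is to scale each variable's mutation probability by its correction coefficient, so the search spends more effort on influential variables such as N. The sketch below illustrates this idea only; the paper's exact update rule is not reproduced here, and the names `mutate`, `base_rate`, and `sigma` are assumptions.

```python
import random


def mutate(individual, coefficients, base_rate=0.1, sigma=0.05):
    """Gaussian mutation whose per-variable rate grows with sensitivity.

    individual:   {variable name: value}
    coefficients: {variable name: correction coefficient in (0, 1]}
    """
    child = dict(individual)
    for name, value in child.items():
        # Sensitive variables (coefficient near 1) mutate most often.
        rate = base_rate * coefficients.get(name, 1.0)
        if random.random() < rate:
            child[name] = value + random.gauss(0.0, sigma)
    return child


random.seed(0)  # deterministic for the demonstration
parent = {"N": 8.0, "TYPE": 2.0, "A3": 1.5}
coeffs = {"N": 1.00, "TYPE": 0.47, "A3": 0.56}  # from Table 8
child = mutate(parent, coeffs)
print(child)
```

Under this scheme N is mutated at the full base rate while, for example, C5 (coefficient 0.04) is mutated only 4% as often, concentrating the computation as described above.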

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work has been supported by the National Natural Science Foundation of China (Grant no. 51678192).
