Highly time-consuming computation has become a defining characteristic of the modern multidisciplinary design optimization (MDO) solving procedure. To reduce the computing cost and improve the solving environment of traditional MDO solution methods, this article introduces a novel universal MDO framework supported by adaptive discipline surrogate models that are asymptotically corrected through discriminative sampling. The MDO solving procedure is decomposed into three parts: the framework level, the architecture level, and the discipline level. The framework level controls the MDO solving procedure and carries out convergence estimation; the architecture level executes the MDO solution method with discipline surrogate models; the discipline level analyzes discipline models to establish adaptive discipline surrogate models based on a stochastic asymptotical sampling method. The MDO solving procedure is executed iteratively, comprising discipline surrogate model correction, MDO solving, and discipline analysis. These are accomplished by the iteration process control at the framework level, the MDO decomposition at the architecture level, and the discipline surrogate model update at the discipline level. The framework executes these three parts separately in a hierarchical and modularized way. The discipline models and the disciplinary design point sampling process are all independent, so parallel computing can be used to increase computing efficiency in a parallel environment. Several MDO benchmarks are tested in this framework. Results show that the number of discipline evaluations in the framework is half or less of that of the original MDO solution method, making the framework very useful and suitable for complex high-fidelity MDO problems.

The MDO solution method, also called the MDO architecture or MDO strategy, is the most essential part of the MDO solving procedure [

Wang et al. [

In this article, a novel universal MDO framework based on the idea of the stochastic asymptotical sampling method is presented. First, the idea and composition of the universal MDO framework are introduced and the elementary theory of the MPS algorithm is briefly reviewed. Then, based on the MPS iteration process, the solving procedure of the universal MDO framework is proposed. The detailed implementation of all three levels is discussed and used to construct the universal MDO framework. Finally, several benchmarks are used to verify and validate its effectiveness and efficiency.

The MPS algorithm converges quickly, which is helpful for time-consuming MDO problems. An MDO problem is a combination of various disciplines with interdisciplinary coupling, and the MDO solution method decouples these interdisciplinary relationships, directly or indirectly, to construct the optimization problem. Figure

The comparison of MDO coupling structure and MPS iteration structure.

MDO coupling structure

MPS iteration structure

The MPS iteration structure gives fast convergence for the optimization problem; the results in [

In this article, the discipline model is regarded as the optimization problem of the MPS iteration structure shown in Figure

Based on the above design pattern, the MDO framework is decomposed into three parts: the framework level, the architecture level, and the discipline level. The design skeleton of the universal MDO framework is shown in Figure

Framework level: the framework level is the system level of the MDO framework. It is responsible for managing the organizational relationships and implementation process of the framework and guiding the convergence process of the MDO problem. Its main functions include global convergence detection and MDO solution method construction

Architecture level: the architecture level decouples the MDO problem into several optimization problems and solves them, based on the discipline surrogate models, under the control of the framework level. This process repeats continuously as the adaptive discipline surrogate models are updated. The computing efficiency of a single MDO solving procedure is greatly improved by replacing each discipline model with a discipline surrogate model

Discipline level: the discipline level executes discipline analyses at sample points, based on the MPS sampling process and the current optimal solution, and updates the adaptive discipline surrogate models. The two-way interaction between the discipline model at the discipline level and the MDO solution method at the architecture level is simplified to a one-way relation by introducing the framework level, which provides great flexibility for running as a distributed and parallel system

Design skeleton of the MDO framework.

Based on the above definition, the whole MDO solving procedure is converted into a process of discipline surrogate model updating, MDO solution method solving, and discipline model analyzing. The MPS sampling method is used to establish and update the adaptive discipline surrogate models. Combining these three levels, a novel MDO framework is established, called the disciplinary model pursuing sampling MDO framework (DMPS-MDOF). A detailed introduction is given in Section

The MPS algorithm was proposed by Wang et al. as a method to search for the global optimum of black-box function problems [

The MPS algorithm contains three main parts: random selection, a global surrogate model, and a global convergence criterion. First, some design points are generated around the current minimum while statistically covering the entire search space; then, a surrogate model is built from these points to fit the design space; finally, a quadratic response surface is evaluated to check convergence and, meanwhile, assign a preference probability to the sampling process. The main procedure of MPS is as follows.

Set the number of uniformly distributed basic sample points

Evaluate the unevaluated design points in the DPS and fit a surrogate model with all design points in the DPS; then we have

Uniformly create

GF is a function bounded between 0 and 1 and can be regarded as the probability density function (PDF) of the objective function. The cumulative distribution function (CDF) is also evaluated from the sample points and revised by the speed control factor

Evaluate the subregion for convergence checking and build the local quadratic response surface. If the convergence criterion is not satisfied, update the speed control factor

Resample
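The sampling steps above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the candidate counts, the specific guidance function, and the surrogate passed in as `predict` are all illustrative assumptions.

```python
import numpy as np

def mps_sample(predict, bounds, n_cheap=1000, n_expensive=6, power=2.0, rng=None):
    """One MPS-style sampling step: draw cheap candidate points uniformly,
    rank them with a surrogate prediction, and resample a few points with
    probability biased toward low predicted objective values. Here `power`
    plays the role of the speed control factor: larger values sharpen the
    preference for the current best region."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    # Cheap points: only the surrogate is evaluated, never the true model.
    cand = rng.uniform(lo, hi, size=(n_cheap, lo.size))
    f = predict(cand)
    # Guidance function in [0, 1]: high where the predicted objective is low.
    g = (f.max() - f) / (f.max() - f.min() + 1e-12)
    p = g ** power
    p /= p.sum()
    idx = rng.choice(n_cheap, size=n_expensive, replace=False, p=p)
    return cand[idx]

# Usage: bias sampling toward the minimum of a simple quadratic surrogate.
pts = mps_sample(lambda x: ((x - 0.3) ** 2).sum(axis=1),
                 bounds=([0.0, 0.0], [1.0, 1.0]),
                 rng=np.random.default_rng(0))
```

The expensive true-model evaluations would then be run only at the few returned points, which is what makes the scheme attractive for costly disciplines.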

For better solution performance and execution control, a novel hierarchical design pattern with the framework level, architecture level, and discipline level is established in DMPS-MDOF to decompose the MDO solving procedure. The detailed implementation of these three levels and the executing process of DMPS-MDOF are introduced in this section.

We define the discipline model as a black-box function with input variable

A discipline model has no objective function, so a preference function is needed to guide the direction of the MPS sampling. The approximation capability of the discipline surrogate model has a direct effect on the convergence accuracy and efficiency of the MDO framework, so the following factors should be considered:

Improve fitting accuracy by placing the newly sampled design points as close to the current optimal region as possible

The preference function must be continuous and smooth, which enhances the stability of the MPS sampling

The MPS sampling process of each discipline must be independent, with no dependence on the other disciplines

Given the requirements above, the following quadratic function is used as the preference function of the discipline surrogate model:

With the above definition, the MPS sampling of each discipline moves closer to the optimal region, in a probabilistic sense, as the iteration proceeds.
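Since the quadratic preference function itself is not reproduced above, the following sketch uses a hypothetical stand-in that satisfies the three listed requirements: the squared distance to the current optimal disciplinary input, which is small near the optimum, smooth everywhere, and depends only on this discipline's own variables.

```python
import numpy as np

def quadratic_preference(x, x_best):
    """Hypothetical quadratic preference function: smaller values near the
    current optimal disciplinary input x_best, continuous and smooth, and
    independent of the other disciplines."""
    x = np.atleast_2d(np.asarray(x, float))
    return ((x - x_best) ** 2).sum(axis=1)

# Points closer to x_best receive smaller (more preferred) values.
vals = quadratic_preference([[0.0, 0.0], [1.0, 1.0]], x_best=np.array([1.0, 1.0]))
```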

Normalization standardizes the design space and unifies all design variables to the same scale, which improves computing efficiency and convergence precision. The following linear transformation is used to normalize the input variables of each discipline:
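The transformation itself is not reproduced above; a common choice consistent with the description is the min-max mapping of each variable from its bounds onto [0, 1], sketched here as an assumption:

```python
import numpy as np

def normalize(x, lb, ub):
    """Map each disciplinary input variable linearly from [lb, ub] to [0, 1]."""
    return (np.asarray(x, float) - lb) / (ub - lb)

def denormalize(z, lb, ub):
    """Inverse linear transform back to the original design space."""
    return lb + np.asarray(z, float) * (ub - lb)

# Example: a variable with bounds [0, 10] maps its midpoint to 0.5.
z = normalize(5.0, 0.0, 10.0)
```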

For better understanding, some terminology must be clarified. We define a sample point as a cheap point, which only requires evaluating the preference function in (

Sample

Divide the

Sample disciplinary design points based on the rules in [

The local quadratic response surface is used to evaluate the disciplinary convergence precision. The disciplinary convergence criterion will be carried out by the framework level based on the multiple correlation coefficient
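The exact form of the multiple correlation coefficient is elided above; for reference, a standard coefficient-of-determination computation between reference responses and surrogate predictions can be sketched as follows (an illustrative stand-in, not necessarily the authors' exact formula):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2: 1.0 means the surrogate (or local
    quadratic response surface) reproduces the reference values exactly."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    ss_res = ((y_true - y_pred) ** 2).sum()   # residual sum of squares
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
    return 1.0 - ss_res / ss_tot
```

A value close to 1 over the local region would indicate that the discipline surrogate has converged there.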

The updating of the speed control factor of each discipline surrogate model is the same as the MPS algorithm, which is controlled by

The flowchart of the discipline level is illustrated in Figure

Sample

Calculate the preference function

Generate the disciplinary discrete design space by sampling

Calculate CDF and correct it by a speed control factor, execute the MPS sampling process by the fixed

Rebuild the discipline surrogate model by DPS.

Generate the local quadratic response surface by uniform sampling in the local region around the newly sampled design points and compare it with the discipline surrogate model to evaluate the disciplinary convergence precision

Update the speed control factor by

Flowchart of the adaptive discipline surrogate model.

The architecture level achieves high efficiency by replacing the discipline models with discipline surrogate models. The complex interaction between the MDO solution method and the discipline models is avoided, and the continuity and smoothness of the discipline surrogate models yield better convergence performance. The architecture level is designed as two parts, the architecture configuration and the architecture solving, as shown in Figure

Architecture configuration: according to the detailed architecture configuration of the selected MDO solution method, the MDO problem is transformed into a system-level optimization/iteration problem and several subsystem-level optimization/iteration problems, from which the MDO model is generated

Architecture solving: after the architecture configuration is finished, the MDO model is solved based on the latest updated discipline surrogate models, and the current optimal result is returned to the framework level; a single MDO iteration is then complete

Flowchart of architecture level.

The architecture solving has low computational requirements, so success rate and convergence precision are the primary goals of the architecture level. Detailed illustrations of some common MDO solution methods can be found in [

Framework level initializes the MDO solving procedure, manages the iteration of the discipline level and the architecture level, and provides the basic support with auxiliary modules and convergence criterion.

The flowchart of the framework level is shown in Figure

Flowchart of framework level.

Both the local response surface fitting precision of the discipline surrogate models and the state of the current MDO optimal solution are considered in the convergence criteria of DMPS-MDOF. The multiple correlation coefficient

The convergence criterion of the discipline surrogate model is defined as

The convergence criterion of the MDO problem is defined as

When

When

When

The above hybrid multistep convergence criterion helps accelerate convergence, improve convergence precision, and avoid premature convergence as much as possible.
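The specific thresholds and case conditions are elided above; the combined idea can be sketched as follows, with illustrative tolerances (the names `r2_tol` and `f_tol` are assumptions, not the paper's symbols):

```python
def converged(r2_list, f_hist, r2_tol=0.999, f_tol=1e-4):
    """Hybrid convergence check (illustrative): every discipline surrogate
    must fit its local quadratic response surface well (R^2 near 1), AND
    the MDO objective must have stopped changing between iterations."""
    surrogates_ok = all(r2 >= r2_tol for r2 in r2_list)
    if len(f_hist) < 2:
        return False  # need at least two MDO solutions to measure change
    rate = abs(f_hist[-1] - f_hist[-2]) / max(abs(f_hist[-2]), 1e-12)
    return surrogates_ok and rate <= f_tol
```

Requiring both conditions is what guards against premature convergence: a stalled objective on a still-inaccurate surrogate does not terminate the iteration.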

With the three level design pattern, the flowchart of the DMPS-MDOF is shown in Figure

Initialize MDO problem. Define initial value and boundary of the disciplinary input and output variables, set objective function and constraint of the MDO problem, build disciplinary coupling relationship, and select and set up the MDO solution method and optimization algorithm.

Reconstruct MDO framework. Depending on the MDO problem and the selected MDO solution method defined in Step

Generate initial design points. Sample

Disperse disciplinary design space. Each discipline sample

MPS sampling. Each discipline sample

Analyze discipline models. Analyze all discipline models at the points newly added into the DPS. This is the most time-consuming part of the MDO solving procedure, but because each design point is independent, it can be greatly accelerated by parallel computing.

Set up/update the discipline surrogate model. Set up or update the discipline surrogate model based on the DPS and calculate the multiple correlation coefficient

Solve the MDO problem. Solve the MDO problem with the MDO framework constructed in Step

Update the speed control factor. The speed control factor

Check the convergence criterion. The framework level carries out convergence checking based on the multiple correlation coefficient and the rate of change of the objective function; if the convergence criterion is satisfied, the MDO iteration exits; otherwise, go to Step
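The overall iteration described by the steps above can be sketched as a high-level loop. Every callable here is a placeholder for a framework module, not the authors' actual API: `sample` stands for the MPS sampling step, `update_surrogate` for the surrogate rebuild, and `solve_mdo` for the selected MDO solution method running on the surrogates.

```python
def dmps_mdof(disciplines, solve_mdo, sample, update_surrogate, converged,
              max_iter=50):
    """High-level sketch of one DMPS-MDOF run over a dict of discipline
    models, looping discipline-level sampling/analysis, architecture-level
    solving, and framework-level convergence checking."""
    surrogates = {name: None for name in disciplines}
    x, f, f_hist = None, None, []
    for _ in range(max_iter):
        # Discipline level: sample, analyze, and update each surrogate.
        for name, model in disciplines.items():
            pts = sample(name, surrogates[name])
            data = [(p, model(p)) for p in pts]  # expensive analyses; parallelizable
            surrogates[name] = update_surrogate(name, data)
        # Architecture level: solve the MDO problem on the cheap surrogates.
        x, f = solve_mdo(surrogates)
        f_hist.append(f)
        # Framework level: hybrid convergence check.
        if converged(surrogates, f_hist):
            break
    return x, f
```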

For some simple discipline models, there is no need to use the discipline surrogate model. So, the discipline surrogate model can be replaced with the true discipline model in Step

Flowchart of the DMPS-MDOF.

In this section, we will give a performance analysis to the proposed DMPS-MDOF with some typical MDO benchmarks.

First, a single MDO example with two disciplines is used to illustrate the DMPS-MDOF solving procedure. The optimization problem is as follows:

Discipline 1:

Discipline 2:

The parameters of DMPS-MDOF are configured as follows:

Iteration history on the objective function.

Comparison between DMPS-MDOF with MDF and the original MDF method.

| | DMPS-MDOF with MDF | Original MDF method |
|---|---|---|
| Objective function | −6.75000016 | −6.75 |
| Consistency | 3.20408 | 2.41121 |
| | 3.00000066 | 2.99999986 |
| | 0.99999843 | 0.99999922 |
| | 0.99999952 | 0.99999992 |
| Number of discipline 1 evaluations | 48 | 603 |
| Number of discipline 2 evaluations | 48 | 603 |
| CPU time (s) | 3.16 | 0.5 |

The iteration history on the distribution of discrete design points can be found in Figure

Iteration history on the distribution of discrete design points of DMPS-MDOF.

Discipline 1

Discipline 2

Iteration history on variables of discipline 1 and discipline 2.

| Iteration | Marks | Discipline 1 evaluations | Discipline 2 evaluations |
|---|---|---|---|
| 0 | O (black) | 24 | 24 |
| 1 | + (yellow) | 6 | 6 |
| 2 | X (green) | 6 | 6 |
| 3 | . (blue) | 6 | 6 |
| 4 | | 6 | 6 |

The first iteration in Table

The above single example gives an intuitive understanding of the characteristics of DMPS-MDOF. Three further test cases are used to analyze comprehensive performance. The first is an analytic example previously solved by Sellar et al. [

Analytic problem

Discipline 1:

Discipline 2:

Golinski’s speed reducer

Discipline 1:

Discipline 2:

Discipline 3:

Aircraft problem

Discipline 1:

Discipline 2:

Discipline 3:

The parameters of DMPS-MDOF are set as follows:

Results of the three test cases are listed in Tables

Result comparison of the analytic problem.

| Benchmark | MDO solution method | Framework | Discipline 1 evaluations | Discipline 2 evaluations | Optimal value |
|---|---|---|---|---|---|
| Analytic problem | MDF | Origin | 203 | 203 | 3.18341 |
| | | DMPS-MDOF | 78 | 81 | 3.18339 |
| | IDF | Origin | 139 | 139 | 3.18339 |
| | | DMPS-MDOF | 57 | 60 | 3.18339 |
| | CSSO | Origin | 339 | 432 | 3.18340 |
| | | DMPS-MDOF | 258 | 264 | 3.18339 |

Result comparison of the Golinski’s speed reducer problem.

| Benchmark | MDO solution method | Framework | Discipline 1 evaluations | Discipline 2 evaluations | Discipline 3 evaluations | Optimal value |
|---|---|---|---|---|---|---|
| Golinski’s speed reducer | MDF | Origin | 73 | 73 | 73 | 2994.35 |
| | | DMPS-MDOF | 54 | 51 | 54 | 2994.35 |
| | IDF | Origin | 73 | 73 | 73 | 2994.35 |
| | | DMPS-MDOF | 54 | 51 | 54 | 2994.35 |
| | CSSO | Origin | 773 | 1021 | 1071 | 2994.35 |
| | | DMPS-MDOF | 54 | 54 | 54 | 2994.35 |

Result comparison of the aircraft problem.

| Benchmark | MDO solution method | Framework | Discipline 1 evaluations | Discipline 2 evaluations | Discipline 3 evaluations | Optimal value |
|---|---|---|---|---|---|---|
| Aircraft problem | MDF | Origin | 638 | 638 | 638 | 875.654 |
| | | DMPS-MDOF | 210 | 219 | 231 | 875.654 |
| | IDF | Origin | 213 | 213 | 213 | 875.654 |
| | | DMPS-MDOF | 87 | 78 | 81 | 875.654 |
| | CSSO | Origin | 5667 | 5401 | 6815 | 875.654 |
| | | DMPS-MDOF | 69 | 81 | 77 | 875.654 |

From the results, we can see that, at the same precision level, DMPS-MDOF with discipline surrogate models has higher computing efficiency than the original MDO solution methods using the discipline models directly. The number of discipline evaluations of DMPS-MDOF is half or less of that of the original MDO solution method in all three test cases. For distributed architectures such as CSSO in particular, there is a significant reduction. The MDO solving process becomes computationally lightweight because the time-consuming original discipline models are fully replaced by discipline surrogate models, and the discipline models can be evaluated separately from the MDO solving process at the independent discipline level. This is helpful for complex MDO optimization with high-fidelity discipline models.

DMPS-MDOF is a hierarchical framework built on independent discipline models. The discipline models and disciplinary design points are all mutually independent, so parallel computing can be used to improve computing efficiency. For example, consider an MDO problem with 5 disciplines in which each iteration uses 6 design points per discipline, so 30 discipline model evaluations are needed per iteration. If every discipline evaluation costs 5 minutes, one iteration costs two and a half hours of serial CPU time. But since the 30 evaluations in each iteration are all independent, an iteration needs only about 5 minutes given sufficient computing resources. The computing efficiency of DMPS-MDOF can therefore be improved greatly by parallel computing.
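Because the evaluations are independent, a standard worker pool is enough to realize this speedup. A minimal sketch using Python's `concurrent.futures` (the `expensive_analysis` stand-in is an assumption; a real discipline would run CFD/FEA or similar):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def expensive_analysis(point):
    """Stand-in for one discipline model evaluation of roughly uniform cost."""
    time.sleep(0.01)  # a real analysis would run a solver here
    return sum(point)

def evaluate_all(points, max_workers=None):
    """All design points are independent, so evaluate them concurrently;
    with enough workers, wall time approaches one analysis, not len(points)."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order, so results align with the design points.
        return list(pool.map(expensive_analysis, points))

results = evaluate_all([(1, 2), (3, 4), (5, 6)], max_workers=3)
```

For CPU-bound solvers a `ProcessPoolExecutor` or a cluster scheduler would replace the thread pool, but the independence argument is identical.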

Another obvious phenomenon is that the reduction in discipline evaluations between DMPS-MDOF and the original MDO solution method is not consistent across solution methods. For example, in the analytic problem, CSSO is the least efficient method under both the original procedure and DMPS-MDOF, whereas in the aircraft problem, CSSO is the least efficient method under the original procedure but the most efficient one under DMPS-MDOF. Thanks to the high computing efficiency of DMPS-MDOF, it is practical to try and select the most suitable MDO solution method for a given MDO problem.

The MDO solution method, approximation method, and optimization algorithm are kept separate from the discipline models; they are designed as algorithm libraries with generic interfaces in DMPS-MDOF. It is convenient to add new methods to these libraries, and it is also convenient for designers to participate in the MDO solving procedure. Designers can offer the following assistance: (1) they can modify, add, or remove design points in each iteration according to the current convergence state; (2) the MDO solution method, approximation method, and optimization algorithm can be changed manually between iterations when the current one converges poorly or has low computing efficiency; (3) designers can even adjust the MDO problem itself between iterations, making it easier to solve by reducing the design space or removing inactive constraints.

In this article, adaptive discipline surrogate models with the MPS sampling process are used to replace the time-consuming discipline models, and a new universal MDO framework is developed to implement an effective and parallel MDO solving environment. This framework is based on a universal MDO model with a standard coupling relationship definition, which is discussed in greater detail in [

Based on the adaptive discipline surrogate models, DMPS-MDOF has excellent computing capability and a clear framework architecture. The independent disciplines and MPS sampling process adapt remarkably well to parallel computing environments, which brings further improvement in computing efficiency. It is easy to change MDO solution methods or approximation methods in DMPS-MDOF, so designers can flexibly choose the most appropriate one for better performance. Designers' experience and decision-making capacity can also be incorporated easily. These attractive characteristics make DMPS-MDOF extremely useful for solving complex MDO problems with high-fidelity, time-consuming discipline models.

There are also some limitations. DMPS-MDOF solves the MDO problem more efficiently and can run in parallel with various MDO solution methods, but its convergence characteristics do not differ fundamentally from those of the original MDO solving procedure, so DMPS-MDOF cannot give a correct answer to a problem that the original MDO solution method cannot solve. A possible way to improve optimization performance is to carry out the MDO iteration process with different MDO solution methods in a nested manner; this gives a more flexible way to organize the iteration process and may further increase convergence performance. For simple MDO problems with analytic discipline models, DMPS-MDOF will likely cost more CPU time because of surrogate model generation and the multiple optimization processes, so it is more appropriate for complex MDO problems with high-fidelity discipline models.

Future research will focus on hybrid processes combining monolithic and distributed architectures. More tests will be carried out to evaluate the performance of DMPS-MDOF, especially on complex MDO problems with high-fidelity discipline models. A distributed parallel computing environment with workstation clusters will be set up to test and verify the parallel solving performance of DMPS-MDOF.

The data used to support the findings of this study are available from the corresponding author upon request.

The authors declare that they have no conflicts of interest.

This research was supported by the funding from the National Natural Science Foundation of China (no. 51505385), the Shanghai Aerospace Science Technology Innovation Foundation (no. SAST2015010), and the Defense Basic Research Program (JCKY2016204B102 and JCKY2016208C001). The authors are thankful to the National Key Laboratory of Aerospace Flight Dynamics of NPU.