Bilevel Optimal Control, Equilibrium, and Combinatorial Problems with Applications to Engineering

1Department of Systems & Industrial Engineering, Tecnológico de Monterrey (ITESM), Campus Monterrey, Ave. Eugenio Garza Sada 2501 Sur, Monterrey, NL 64849, Mexico
2Department of Experimental Economics, Central Economics & Mathematics Institute (CEMI), Russian Academy of Sciences, Nakhimovsky Pr. 47, Moscow 117418, Russia
3Department of Computer Science, Sumy State University, Rimsky-Korsakov St. 2, Sumy 40007, Ukraine
4Department of Optimization, Institute for Numerical Mathematics and Optimization, TU Bergakademie Freiberg, 09596 Freiberg, Germany
5Department of Mathematics, Wayne State University, 656 West Kirby Avenue, Detroit, MI 48202, USA
6Kharkiv Institute of Banking, University of Banking of the National Bank of Ukraine, Peremogy Avenue 55, Kharkiv 61174, Ukraine

In recent years, a large literature on bilevel programming in finite-dimensional spaces has emerged, much of it focusing on optimality conditions, theory, and algorithms for certain classes of such problems. This special issue goes one step further by treating bilevel programming problems extended to infinite-dimensional spaces at the upper level, at the lower level, or at both. Such classes of optimal control problems often arise in engineering applications: robotics, water and waste control, ecology in various environments, and so on.
Bilevel Optimal Control Problems are a special class of optimization problems combining bilevel programming and optimal control theory. They generalize optimal control problems to settings with more than one decision maker. However, this new type of problem is much more complicated than an ordinary optimal control problem in the sense that what constitutes a solution, for example, in the particular case of optimistic/pessimistic Stackelberg differential games, is no longer obvious. Many new ideas and approaches have been developed in this area by active researchers throughout the world; see, for example, [1][2][3][4][5][6], among other publications.
A general formulation for Bilevel Optimal Control Problems includes an optimal control problem at both levels, commonly linked through the dynamical system. Nevertheless, other less investigated problems in this field of research can be treated. Hence, this special issue also invited contributions dealing with a special class of optimistic Bilevel Optimal Control Problems where only the upper level involves an optimal control problem, whereas the lower level is a finite-dimensional problem whose parameter is the final state of the upper level state function. Recall that the optimistic approach models a cooperative behavior of leader and follower, while the pessimistic one depicts a competitive situation. Various optimality conditions can thus be derived by replacing the lower level problem by either its primal-dual optimality conditions or its optimal value function.
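The replacement of the lower level by its solution can be seen on a toy instance. The following sketch (a minimal illustration with invented functions and numbers, not a problem from this issue) substitutes the follower's closed-form best response into the leader's objective, turning the bilevel problem into a single-level one:

```python
# Toy optimistic bilevel problem, solved by substituting the lower level's
# solution map (available here in closed form) into the upper level.
# Upper level (leader):   min_x  F(x, y) = (x - 1)^2 + (y - 2)^2
# Lower level (follower): y(x) = argmin_y  f(x, y) = (y - x)^2  ->  y(x) = x

def follower_best_response(x):
    # Closed-form argmin of the strictly convex lower-level objective.
    return x

def leader_objective(x):
    y = follower_best_response(x)
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

# Coarse grid search over the leader's variable (enough for this smooth toy).
grid = [i / 1000.0 for i in range(-2000, 4001)]
x_opt = min(grid, key=leader_objective)
y_opt = follower_best_response(x_opt)
print(x_opt, y_opt)  # optimum at x = 1.5, y = 1.5
```

The reduction is only this clean when the follower's problem has a unique, explicitly solvable optimum; otherwise one falls back on the value-function or primal-dual (KKT) reformulations mentioned above.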
Although a wide range of applications fit the bilevel optimal control framework, real-life implementations are scarce, due mainly to the lack of efficient algorithms for tackling medium- and large-scale bilevel programming problems to which Bilevel Optimal Control Problems are often reduced. Solving a bilevel (more generally, hierarchical) optimization problem, even in its simplest form, is a difficult task. Many alternative methods may be applied depending on the structure of the problem analyzed, but there is no general method that guarantees convergence, performance, or optimality for every type of problem.
Mixed-integer bilevel programming problems (with some of the variables at the upper and/or lower level being integer/Boolean) are even harder for the well-known conventional optimization techniques. For instance, the usual replacement of the lower level optimization problem with the corresponding KKT conditions may not work if some lower level variables are not continuous. Therefore, a solid theoretical base including elements of combinatorial methods needs to be developed in order to propose efficient algorithmic procedures for finding local or global solutions of such problems.
Last but not least, many new applied problems in the area of energy networks have recently arisen that can be efficiently solved only as mixed-integer bilevel optimal control programs. Among them are the natural gas cash-out problem, the deregulated electricity market equilibrium problem, biofuel problems, the design of coupled energy carrier networks, and wastewater control problems, to mention only a few such applications. Bilevel models describing migration processes are also among the most popular new themes in the area of bilevel programming.
Engineering applications of bilevel optimization and combinatorial problems also include facility location, environmental regulation, energy and agricultural policies, hazardous materials management, and optimal designs for chemical and biotechnological processes.
This special volume comprises papers dealing with three main topics: bilevel programming and optimal control, equilibrium models, and combinatorial (integer programming) problems and their applications to engineering.
The volume opens with the research paper "Defense Strategy of Aircraft Confronted with IR Guided Missile" by H. Huang et al., which proposes and studies a model of an aircraft and a surface-type infrared (IR) decoy. The latter can simulate the IR characteristics of the target aircraft, making it one of the most effective countermeasures against IR guided missiles. The authors investigate the aircraft maneuver and radiation models based on real flight data and exhaust system radiation at different altitudes and speeds. The most effective avoidance maneuver is then simulated for a missile approaching from the front of the target aircraft. Finally, combining maneuvers with decoys, the best defense strategy is analyzed for this scenario. The simulation results, based on authentic data, help the aircraft avoid the missile and improve its survivability.
The next paper, "Capacity Allocation and Revenue Sharing in Airline Alliances: A Combinatorial Auction-Based Modeling" by Y.-J. Gu and J.-F. Zhu, attempts to establish a framework that helps airline alliances effectively allocate their seat capacity with the aim of maximizing the alliance's revenue. Treating the airline alliance as an auctioneer and the seat capacity on an itinerary as lots, a combinatorial auction model is constructed to optimize the seat allocation. The authors then propose a revenue sharing method that distributes revenue among the partners with the aid of Vickrey-Clarke-Groves (VCG) tools. The numerical study shows that the seat capacity allocation is effective even without complete information exchange, and that the twofold revenue sharing method is more efficient for the airlines.
The contribution titled "Embodied Energy and Cost Optimization of RC Beam under Blast Load" was proposed by R. Yu et al. The authors note that reinforced concrete (RC) structures not only consume a lot of resources but also cause enormous continuing pollution. Sustainable design of beams, however, could make RC structures more environment-friendly. One of the important indices for environmental impact assessment is embodied energy. The aim of the presented study is to optimize the embodied energy and the cost of an RC beam subjected to blast loads. First, a general optimization procedure is described. Then, this procedure is used to optimize the embodied energy and the costs of RC beams, and the optimal results for the two objectives are compared. It is demonstrated that the optimization results are strongly affected by the cost ratio (the ratio of the steel price to the price of concrete per unit volume) and the embodied energy ratio (the ratio of the embodied energy of steel to the embodied energy of concrete per unit volume). An optimal design that minimizes both the embodied energy and the cost simultaneously is obtained whenever these two ratios are close enough.
An interesting piece of research titled "Multidimensional Dynamic Programming Algorithm for N-Level Batching with Hierarchical Clustering Structure" has been presented by J.-G. Kim et al. Their study focuses on the N-level batching problem with a hierarchical clustering structure. Clustering is the task of grouping a set of item types in such a way that the item types in the same cluster are, in a certain sense, closer to each other than to those in other clusters. In a hierarchical clustering structure, more and more different item types are clustered together as the level of the hierarchy increases. N-level batching is the process by which items of different types are grouped into several batches passing from level 1 to level N sequentially. This batching is completed for a given hierarchical clustering structure such that the batches at each level satisfy the maximum and minimum batch size requirements of that level. The authors consider two types of processing costs for the batches: the unit processing cost and the batch processing cost. They then formulate the N-level batching problem with a hierarchical clustering structure as a nonlinear integer programming model with the objective of minimizing the total processing cost. To solve the optimization problem, the authors propose a multidimensional dynamic programming algorithm illustrated with an example.
An important practical problem is addressed by Q. Liu et al. in their study "Stability Analysis of Water-Resistant Strata in Karst Tunnel Based on Releasable Elastic Strain Energy." In this paper, the energy instability criterion of water-resistant strata and the rock mass failure index (RMFI) are proposed based on the releasable elastic strain energy U_e. The RMFI is employed to represent the damage extent of water-resistant strata: when RMFI < 1.0, the rock mass is stable; when RMFI = 1.0, the rock mass is in the critical instability state; and when RMFI > 1.0, the rock mass is unstable. A numerical procedure to determine the releasable elastic strain energy U_e and the index RMFI is proposed and implemented in the FISH programming language of the Flac3D software. Afterward, the authors apply Flac3D to analyze the distribution law of the releasable elastic strain energy and the form of the failure zone for different widths of a concealed karst cave. Finally, combined with the numerical analysis, a case study is carried out to illustrate the rationality, effectiveness, and feasibility of using the RMFI to predict the safe thickness of the water-resistant strata.
The paper "Data Analytics Based Dual-Optimized Adaptive Model Predictive Control for the Power Plant Boiler" by Z. Tang et al. deals with the control of the furnace temperature of a power plant boiler. In order to do that with a high level of precision, a dual-optimized adaptive model predictive control (DoAMPC) method is designed based on the data analytics. In the proposed DoAMPC, an accurate predictive model is constructed adaptively by the hybrid algorithm of the least squares support vector machine and differential evolution method. Then, an optimization problem is constructed based on the predictive model with many constraints. To control the boiler furnace temperature, the differential evolution method is utilized, which finds the best values of the control variables by solving the optimization problem in question. The proposed algorithm can adapt to the time-varying situation by updating the sample data. The experimental results based on practical data illustrate that the DoAMPC can control the boiler furnace temperature with errors less than 1.5%, which can readily meet the requirements of the real production process.
The volume closes with the paper "A New Calculation Method of Dynamic Kill Fluid Density Variation during Deep Water Drilling" by H. Fan and coauthors. It is well-known that deep water drilling involves plenty of uncertainties and enormous challenges due to complicated shallow flows and deep strata of high temperature and pressure. This paper investigates the density of a dynamic kill fluid and the optimum density during the kill operation, whose dynamic kill process can be divided into two stages: the dynamic stable stage and the static stable stage. The dynamic kill fluid consists of a single liquid phase and different solid phases, the liquid phase being a mixture of water and oil. A new method for calculating the temperature and pressure field of a deep water wellbore is therefore proposed. The paper calculates the changing trend of the kill fluid density under different temperature and pressure values by means of a superposition method, a nonlinear regression, and a segment processing technique. The improved model of kill fluid density is then used to study a deep water kill operation in a well. A comparison shows that the calculated density results are in line with the field data. The model proposed in this paper proves to be satisfactory in optimizing dynamic kill operations while ensuring safety in deep water.

Introduction
According to Wikipedia, the clustering problem is the task of grouping a set of item types in such a way that item types in the same cluster are more similar (in some sense or another) to each other than to those in other clusters. It is the main task of exploratory data mining and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, bioinformatics, data compression, and computer graphics. Hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types: agglomerative and divisive. Agglomerative clustering is a bottom-up approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. On the contrary, divisive clustering is a top-down approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy. In general, the merges and splits are determined in a greedy manner. The results of hierarchical clustering are usually presented in a dendrogram.
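As a concrete illustration of the bottom-up (agglomerative) strategy, the following sketch clusters one-dimensional points by repeatedly merging the closest pair of clusters under single linkage; the data and the stopping rule (a target number of clusters) are invented for illustration:

```python
# Minimal bottom-up (agglomerative) clustering of 1-D points with single
# linkage: greedily merge the two closest clusters until the requested
# number of clusters remains. For sorted 1-D data, the single-linkage
# distance between adjacent clusters is simply the gap between them.

def single_linkage(points, n_clusters):
    clusters = [[p] for p in sorted(points)]  # each point starts in its own cluster
    while len(clusters) > n_clusters:
        # pick the adjacent pair with the smallest gap (greedy merge step)
        best = min(
            range(len(clusters) - 1),
            key=lambda i: clusters[i + 1][0] - clusters[i][-1],
        )
        clusters[best] = clusters[best] + clusters.pop(best + 1)
    return clusters

print(single_linkage([1.0, 1.2, 5.0, 5.1, 9.0], 3))
# -> [[1.0, 1.2], [5.0, 5.1], [9.0]]
```

Recording the sequence of merges, instead of stopping at a fixed count, would yield exactly the dendrogram mentioned above.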
The hierarchical clustering problem has been studied for several decades in a wide range of fields including the manufacturing, biotechnology, information technology (IT), logistics and transportation, financial, and postal industries. In the manufacturing sector, hierarchical clustering has been used to form manufacturing cells and processing batches. Vakharia and Wemmerlöv [1] investigated the performance of seven hierarchical agglomerative clustering techniques and eight dissimilarity measures in the context of cell formation in the cellular manufacturing system. Chen et al. [2] proposed a constrained agglomerative clustering algorithm for the single batch processing machine scheduling problem often encountered in semiconductor manufacturing and metal heat treatment. Hierarchical clustering is also one of the most commonly used classification methods in biotechnology. Cheng et al. [3] suggested hierarchical model-based clustering of DNA sequences by upgrading Bayesian model-based clustering. Cameron et al. [4] proposed hierarchical clustering of gene expression patterns consistent with the lineage of differentiating excitatory neurons during early neocortical development. Saunders et al. [5] used Markov clustering and hierarchical clustering to classify protein families of rust pathogens and rank them according to their likelihood of being effectors. Barzinpour et al. [6] proposed a spectral approach to community detection, in which the multiplex is mapped onto Euclidean space (using the top few eigenvectors) and k-means clustering is applied. See Andreopoulos et al. [7] for a review of the clustering algorithms applied in bioinformatics.
Clustering is one of the most important techniques for image segmentation and data analytics in the IT industry. Arifin and Asano [8] presented a histogram thresholding algorithm using hierarchical cluster analysis for image segmentation. Nunez-Iglesias et al. [9] proposed an active machine learning approach for performing hierarchical agglomerative clustering from superpixels to improve segmentation of 2D and 3D images. See Zaitoun and Aqel [10] for a survey of image segmentation techniques. In relation to data analytics, Bouguettaya et al. [11] proposed a set of agglomerative hierarchical clustering methods, and Costa et al. [12] proposed a hierarchical approach for clustering XML documents with multiple forms of structural components. Hierarchical clustering also has been successfully applied to the logistics and transportation sector. Özdamar and Demir [13] proposed a multilevel clustering algorithm that groups demand nodes into smaller clusters at each planning level for coordinating vehicle routing in large-scale postdisaster distribution and evacuation activities. Zhu and Guo [14] extended the traditional hierarchical clustering method by generalizing flows to different hierarchical levels to aggregate and map large taxi flow data in an urban area. The hierarchical clustering problem arises in the postal industry as well. Lim et al. [15] studied the three-level presorting loading problem which occurs in the commercial bulk mail service. They considered the problem as a three-level hierarchical clustering problem and proposed an optimal solution algorithm. For the financial sector application, Aghabozorgi and Teh [16] suggested a novel three-phase clustering model to categorize companies based on the similarity in the shape of their stock markets. See Murtagh and Contreras [17] for an extensive survey on the agglomerative hierarchical clustering algorithms.
In this study, we consider an N-level batching problem with an agglomerative hierarchical clustering structure in which the highest possible level of the hierarchy is N. N-level batching is the process by which items of different types are grouped into several batches passed from level 1 to level N sequentially for a given hierarchical clustering structure, such that the batches at each level of the hierarchy satisfy the maximum and minimum batch size requirements of that level. We assume that the types of items that can be clustered together are given at each level (i.e., the hierarchical clustering structure). Also, we assume that there exist maximum and minimum batch size requirements at each level of the hierarchy. We consider two kinds of costs for processing batched items: the batch processing cost and the unit processing cost. If the items in a batch are closely related, they can be processed as a batch simultaneously; hence, a batch processing cost is incurred for the batch. On the other hand, if the items in a batch are loosely related, they have to be processed separately; hence, a unit processing cost is incurred for each item in the batch. The objective of the problem is to minimize the total cost of processing all items.

N-Level Batching Problem with Hierarchical Clustering Structure

Now we describe the N-level batching problem (NLBP) with agglomerative hierarchical clustering structure considered in this study. The paper develops an integer nonlinear programming model for the NLBP using the notation summarized in the Notations section.
Constraints (8)-(10) restrict the decision variables to nonnegative integers. The objective function (1) to be minimized denotes the total processing cost for all batched items; both the unit processing cost and the batch processing cost are involved in the total cost. Constraint (2) balances the number of items to be batched, the number of items batched, and the number of items not batched for each hierarchical cluster. Constraint (3) ensures that the total number of items to be batched at any cluster equals the number of items not batched in the clusters at the immediately preceding level. Constraint (4) ensures that no item remains unbatched after level N. Constraints (5)-(7) require the items batched at any cluster to satisfy both the minimum and the maximum batch size requirements. Figure 1 provides an example of an NLBP with N = 3 and nine original item types. As shown in the figure, the NLBP can be represented as a network flow problem. The network consists of nine source nodes holding the items to be batched through 3-level batching: nine level-1 clusters where the first-level batches are formed subject to the minimum and maximum batch size requirements of level 1; three level-2 clusters where different types of items are batched together (for example, four different item types are batched at the first level-2 cluster); one level-3 cluster where all nine item types can be batched together; and finally one destination node. In the network, items are taken out to form batches passed from the level-1 clusters to the level-3 cluster sequentially with the objective of minimizing the total processing cost of batched items. In general, the processing cost of a batched item increases with the level of the cluster. The minimum and maximum batch size requirements may also differ across clusters. The quantity of items to be batched at level n (1 ≤ n ≤ 3) is the total number of items not batched at level n − 1.
Lim et al. [15] developed an optimal solution algorithm for a special type of 3-level batching problem that has a tapering discount structure in the unit processing cost of batched items, that is, a unit processing cost that decreases at deeper levels. In this study, we tackle a more general problem than that of Lim et al. [15] by extending the hierarchical level to N and considering a more general cost structure for batched items. This paper develops a dynamic programming solution algorithm for the NLBP to obtain an optimal N-level batching with hierarchical clustering structure.

Dynamic Programming Algorithm for the NLBP

In the dynamic programming algorithm for the NLBP, stage n (n = 1, 2, ..., N) is represented by the level, and the state at stage n is the vector of the numbers of items of each cluster not yet batched through level n. The possible alternatives at stage n are the numbers of items of each cluster batched at level n, satisfying the balancing constraints (2)-(4) and the minimum and maximum batch size constraints (5)-(7). Two value functions are used in the forward DP recursion: the minimum processing cost for items batched during levels 1 through n, given the numbers of items not batched at level n, and the processing cost for the items batched at level n, given the numbers of items batched at that level. At every stage of the recursion, the batches formed must satisfy the batch size constraints of the corresponding level.
It is necessary to reduce the number of states for computational efficiency. We derive the range of states that needs to be considered in the DP recursive equations to find an optimal solution of the NLBP when a unit processing cost is charged for batched items.
Proof. It is obvious from Property 1.
The next property gives the range of states that needs to be considered in the DP recursive equations to find an optimal solution of the NLBP when a batch processing cost is charged instead of a unit processing cost.
Proof. Consider a cluster at level n and compare two cases. In Case 1, the number of the cluster's items not batched at level n lies strictly below the candidate bound, and an upper bound on the cluster's processing cost is obtained from the items batched at level n together with the remaining items. In Case 2, the number of items not batched is at least the candidate bound, and a lower bound on the processing cost is obtained in the same way. The difference between the maximum cost of Case 1 and the minimum cost of Case 2 shows that a state at or above the bound can never improve on a state below it. In other words, it is sufficient to consider states strictly below the bound to obtain an optimal solution of the NLBP.
Let Φ denote the set of states that needs to be considered in the DP recursive equations to obtain an optimal solution of the NLBP when a batch processing cost is charged for batched items.

An Example for the Dynamic Programming Algorithm
In this section, we give an example to explain how to solve the NLBP with the DP recursive equations. Note that this example is the same as that given in Figure 1. Problem data is as follows. Here, we assume that all batched items are charged by unit processing cost.
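Since the numeric data of the example are not reproduced here, the following sketch illustrates the flavor of the DP recursion on a deliberately simplified instance: a single cluster chain with two levels, per-level unit costs, and per-level batch size windows (all numbers invented, and the state is the pair (level, items still unbatched) rather than the authors' multidimensional state):

```python
from functools import lru_cache

# Simplified one-cluster N-level batching solved by dynamic programming.
# State: (level, items still unbatched). Decision: how many items to batch
# at this level; that quantity must split into batches whose sizes lie in
# the level's [BMIN, BMAX] window. Everything must be batched by the last
# level, and batched items are charged the level's unit processing cost.

BMIN, BMAX = [4, 3], [5, 5]   # per-level batch size windows (2 levels)
COST = [1.0, 1.6]             # per-level unit processing cost (deeper = dearer)

@lru_cache(maxsize=None)
def splittable(k, bmin, bmax):
    # Can k items be partitioned into batches of size bmin..bmax?
    if k == 0:
        return True
    return any(splittable(k - b, bmin, bmax) for b in range(bmin, min(bmax, k) + 1))

@lru_cache(maxsize=None)
def best(level, remaining):
    if remaining == 0:
        return 0.0
    if level == len(COST):            # no levels left but items remain
        return float("inf")
    return min(
        COST[level] * k + best(level + 1, remaining - k)
        for k in range(remaining + 1)
        if splittable(k, BMIN[level], BMAX[level])
    )

# 7 items: batching 4 cheaply at level 1 and passing 3 down is optimal here,
# because 7 cannot be split into batches of size 4..5 at level 1 alone.
print(best(0, 7))
```

Minimizing over all feasible batch quantities at each stage is exactly the forward-recursion idea described above; the full NLBP carries one such quantity per cluster in the state, which is why reducing the state space matters.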

Concluding Remarks
In this study, we considered the N-level batching problem (NLBP) with a hierarchical clustering structure for minimizing the total cost of processing all items. In the N-level batching problem, given items of different types can be grouped into several batches at each level, and this batching process is performed from level 1 to level N (from the shallower to the deeper level) sequentially in the given hierarchical clustering structure until all given items are grouped. We assume that a lower processing cost is incurred for batches at shallower levels, since more similar items are required to be grouped in a batch at a shallower level. Both the unit processing cost and the batch processing cost are considered for batches at each level for real-world applications. We formulated the NLBP as a nonlinear integer programming model, proposed a multidimensional dynamic programming algorithm for the NLBP, and developed several optimality properties by which the number of states is efficiently reduced in the proposed DP algorithm. For a clear understanding of the proposed DP algorithm and the properties, we provided a tangible example of the NLBP and its solution. In further research, we will apply the proposed algorithm to real-world settings, such as the batching processes of semiconductor wafer fabrication, to reduce the manufacturing cost. In addition, it is necessary to develop more efficient heuristic algorithms for the NLBP, since the time and space complexity of the proposed DP algorithm is too high to solve large-sized problem instances.

Notations

n: Index of levels (n = 0, 1, 2, ..., N).
Cluster index at level n: note that any cluster includes item types whose items should be batched. Each cluster at level n is composed of clusters at level n − 1 (i.e., the agglomerative hierarchical clustering structure). The item quantity of a level-0 cluster is the number of items in that cluster to be batched.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.

Introduction
Worldwide, RC structures consume a lot of energy [1] and give off large amounts of greenhouse gases [2]. In addition, the maintenance, repair, and refurbishment during the life of buildings also consume a lot of energy. Even when a building is abandoned, the resulting solid construction waste is difficult to recycle. All of this burdens the environment. To make RC structures friendlier to the environment, many efforts have been made to reduce the influence of RC buildings on the environment during their service life [3]. Recently, many researchers have focused on the optimization of RC structures considering environmental factors [4][5][6][7]. Paya-Zaforteza et al. [4] minimized the carbon dioxide (CO2) emissions and the cost using simulated annealing. Their research showed that the CO2 emissions optimization and the cost optimization were highly related. Yeo and Gabbai [5] explored the implications of using the embodied energy as the objective function to be minimized from the point of view of cost. Their results showed that a reduction of 10% in the embodied energy leads to an increase of 10% in the cost. In their study, fixed values of the embodied energy were used, which is not realistic because the exact value of embodied energy is not known [8,9]. Medeiros and Kripka [6] proposed an optimal method to minimize the monetary and environmental costs of RC columns. In their study, four environmental impact assessment indices (CO2, global warming potential (GWP), energy consumption, and Eco-indicator) were taken into consideration, but the cost was not considered. Yepes et al. [7] suggested a design method to optimize the cost and the CO2 emission of precast prestressed concrete U-beam road bridges. Their analysis showed that a reduction of costs by 1 Euro can save up to 1.75 kg in CO2 emissions. Park et al. [10] provided structural design guidelines that reduce the CO2 emission and the cost of RC columns.
A parametric study was conducted to investigate the influences of design factors on the CO2 emission and the cost.
In conclusion, those studies optimized RC structures under normal operating conditions considering environmental factors. Extreme loading conditions, such as explosion, earthquake, and hurricane, were not considered. In the last century, designing to resist extreme loads was not common. With rapid economic development, such design has become widely accepted in the engineering field, and an increasing number of buildings are designed with extreme loading conditions in mind. Therefore, studies on the sustainable design of RC structures under extreme loading conditions are necessary.
In this study, a sustainable design of an RC beam under blast loads was discussed through minimizing the embodied energy; the cost was used as an objective function for comparison. First, a general optimal design procedure was presented. Then, parametric studies were conducted to analyze the influences of the material strength and sectional geometry on the optimal design. The cost and embodied energy of the particular building materials (reinforcement and concrete) were taken as variables, and their influences on the optimal design were presented. The optimization results for the cost and the embodied energy were compared, and the correlation between them was discussed, which may be helpful for the sustainable design of RC structural members against blast.

Problem Description.
In this study, an interior beam subjected to air blast load was considered. This problem was an illustrative example of UFC 3-340-02 [11]. Both ends of the beam were fixed. The beam was assumed to deform in a flexural shape. The cross section is shown in Figure 1. A uniform blast load is applied on the upper surface of the beam. The beam was designed according to the restrictions and guidelines found in the UFC 3-340-02 code.
There are two response indices that are usually used to define the damage levels of blast-loaded structural components in a flexural response regime: the support rotation angle (θ) and the ductility ratio (μ) [12]. In the present study, both indices were used as design constraints.
Four design parameters were considered in this study: the height of the beam h, the width b, the longitudinal reinforcement ratio ρ1, and the stirrup ratio ρ2.

Objective Functions.
The total cost and the total embodied energy were taken as the objective functions to be minimized. The total cost is defined as [13]

C = C_c V_c + C_s (V_s1 + V_s2), (1)

where C is the total cost of the beam, C_c is the cost of concrete per unit volume, V_c is the volume of concrete, C_s is the cost of reinforcing steel per unit volume, and V_s1 and V_s2 are the volumes of longitudinal reinforcement and shear reinforcement, respectively.
It should be noted that the optimal values are affected only by the relative values of the objective function. Dividing (1) by C_c gives

W = V_c + m (V_s1 + V_s2), (2)

where W = C/C_c and m = C_s/C_c is the cost ratio. The value of m varies from country to country; thus, (2) is more convenient than (1) for studying the minimum cost of an RC beam [13].
Similarly to the cost objective function, the embodied energy objective function is written as

W_E = V_c + e (V_s1 + V_s2), (3)

where W_E = E/E_c, E is the total embodied energy, E_s is the embodied energy of steel per unit volume, e = E_s/E_c is the energy ratio, and E_c is the embodied energy of concrete per unit volume.
Comparing (2) with (3), the energy objective function and the cost objective function have the same form. Thus, for convenience of the optimization calculation, both are represented by

W = V_c + r (V_s1 + V_s2), (4)
where W denotes C/C_c or E/E_c and r denotes m or e. In the following analysis, the optimal designs for cost and embodied energy are unified as the optimal design for W.
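As a minimal illustration of the unified objective, the sketch below evaluates W for a rectangular beam. The volume expressions (gross section times span, with both reinforcement ratios taken with respect to the gross section) are simplifying assumptions for illustration, not the paper's exact formulas.

```python
# Sketch of the unified objective W = V_c + r * (V_s1 + V_s2).
# The volume expressions below are illustrative assumptions.

def unified_objective(b, h, rho1, rho2, span, r):
    """Relative cost (r = m) or embodied energy (r = e) of a rectangular RC beam."""
    v_concrete = b * h * span       # gross concrete volume
    v_long = rho1 * b * h * span    # longitudinal steel volume (assumed rho1 = A_s/(b*h))
    v_shear = rho2 * b * h * span   # stirrup steel volume (assumed analogous)
    return v_concrete + r * (v_long + v_shear)

# Example: 15 in x 24 in section, 200 in span, cost ratio m = 50
w_cost = unified_objective(15.0, 24.0, 0.01, 0.003, 200.0, 50.0)
```

Because W is dimensionless in steel-to-concrete terms, the same function serves for either objective by swapping r between m and e.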

Design Constraints.
The RC beam under blast load is simplified to an SDOF system (see Figure 2). The design constraints are given as mathematical expressions, so the optimal design is transformed into a constrained optimization problem.
The design constraints were defined in accordance with the UFC 3-340-02 code [11]. The constraints for an RC rectangular-section beam under blast load are expressed as follows.
(1) Performance limits: θ ≤ θ_a and μ ≤ μ_a. (2) Constraints on the longitudinal reinforcement ratio ρ1 and the stirrup ratio ρ2. (3) Limit of the direct shear stress. (4) Limit of the diagonal tension stress. (5) Limit of the unreinforced web shear capacity. Here θ_a is the allowable support rotation angle, μ_a is the allowable ductility ratio, f_c is the concrete compressive strength, f_dc is the dynamic concrete compressive strength, f_y is the yield stress of steel, f_dy is the dynamic yield stress of steel, β1 is a coefficient associated with f_c, b is the width of the beam, d is the distance from the extreme compression fiber to the centroid of the longitudinal tension reinforcement, R_u is the bending resistance of the RC beam, and L is the span length.
Some explanations of the design constraints follow. Equations (5) are the deformation requirements; values of θ_a and μ_a can be found in the literature [11,14,15]. Equations (7)-(9) ensure that shear failure does not occur.
It should be noted that the parameters appearing in (5)-(9) are interrelated. This interrelation is reflected in the calculation of ρ2, μ, R_u, and X_m, which is presented below.
where the shear stresses v_u and v_c are calculated by (11) and (12), respectively.
(2) Ductility ratio μ. The ductility ratio is defined as μ = X_m / X_E, where X_m is the maximum midspan displacement and X_E is the elastic midspan displacement.
X_E is calculated as X_E = R_u / K_E, where K_E is the equivalent elastic stiffness, given by (15) for the clamped boundary condition.
where E_c is the concrete modulus of elasticity and I_a is the average moment of inertia of the beam, given by (16). The coefficient F in (16) is evaluated by [16]

F = (3320.3 ρ1^3 − 181.98 ρ1^2 + 5.8624 ρ1)(E_s / E_c), (17)

where E_s is the steel modulus of elasticity.

Mathematical Problems in Engineering
(3) Resistance R_u. The resistance of the beam with fixed ends under uniform load is given by [11]

R_u = 8 (M_N + M_P) / L^2, (18)

where M_N is the ultimate negative moment capacity at the support and M_P is the ultimate positive moment capacity at midspan.
Values of M_N and M_P are calculated by (19) for a symmetrically reinforced concrete beam.
(4) Maximum displacement X_m. X_m is calculated by [17] (20), where ω is the natural frequency, t_d is the load duration, and P_0 is the peak pressure. The natural frequency ω is calculated by

ω = sqrt(K_E / (K_LM M)), (21)

where M is the mass of the beam plus 20% of the slab spans perpendicular to the beam and K_LM is the load-mass factor.
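As a rough numerical stand-in for the chart/formula solution of [17], the sketch below integrates an equivalent SDOF system with an elastic resistance capped at R_u under a triangular blast pulse and reports X_m and μ = X_m/X_E. All numerical values, and the simple resistance cap (which ignores hysteresis on unloading), are illustrative assumptions, not the code procedure.

```python
# Sketch: elastic SDOF with resistance capped at r_u under a triangular pulse.
# Explicit time stepping; all parameter values are illustrative assumptions.

def sdof_ductility(k_e, r_u, mass, k_lm, p0, t_d, dt=1e-5, t_end=0.1):
    """Return (x_max, mu) for the equivalent SDOF system."""
    m_eff = k_lm * mass                       # effective mass via load-mass factor
    x_e = r_u / k_e                           # elastic limit displacement X_E
    x, v, x_max, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        p = p0 * max(0.0, 1.0 - t / t_d)      # triangular load decay
        resistance = max(-r_u, min(r_u, k_e * x))   # capped at r_u (no hysteresis)
        a = (p - resistance) / m_eff
        v += a * dt
        x += v * dt
        x_max = max(x_max, x)
        t += dt
    return x_max, x_max / x_e

# Illustrative SI values: stiffness N/m, resistance N, mass kg, load N, duration s
x_m, mu = sdof_ductility(k_e=5.0e6, r_u=4.0e4, mass=500.0, k_lm=0.77,
                         p0=8.0e4, t_d=0.02)
```

In a design loop, the returned μ would be checked against the allowable value μ_a, and θ would be recovered from X_m and the half-span geometry.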

Optimization Technique and Verification.
Many methods have been successfully used in the optimal design of RC structures, such as the Generalized Reduced Gradient (GRG) method [13], heuristic optimization methods [18,19], discretized continuum-type optimality criteria (DCOC) [20], and the Lagrange multiplier method [21]. In this study, the sequential quadratic programming (SQP) method was used, since its efficiency was demonstrated in an extensive comparative study by Schittkowski [22]. Moreover, SQP is readily available through the constrained nonlinear optimization solver "fmincon" in MATLAB.
In order to find the global optimum, many random starting points were used. The starting points were generated by Latin hypercube sampling (LHS) [23]. LHS produces an N × K matrix, where N is the number of sampling points to be examined and K is the number of design parameters. Each of the K columns, containing the sampling points 1, 2, . . . , N, is coupled to form the Latin hypercube. This generates random sample points while ensuring that all portions of the design space are represented.
The flowchart of optimum design method is shown in Figure 3.
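The multi-start procedure above can be sketched in Python, with scipy's SLSQP solver standing in for MATLAB's fmincon and scipy's qmc module providing the Latin hypercube starting points. The toy objective and mock capacity constraint below are placeholders for the beam model, not the paper's formulation.

```python
# Sketch of LHS multi-start + SQP local search.
# Toy problem: minimize b*h subject to b*h**2 >= 4000 (a mock capacity limit).
from scipy.stats import qmc
from scipy.optimize import minimize

def objective(x):
    b, h = x
    return b * h                    # stand-in for the unified objective W

def resistance_constraint(x):
    b, h = x
    return b * h**2 - 4000.0        # >= 0 means feasible

bounds = [(10.0, 20.0), (18.0, 25.0)]      # b and h ranges (illustrative)
sampler = qmc.LatinHypercube(d=2, seed=0)  # d = number of design parameters
starts = qmc.scale(sampler.random(n=20),
                   [lo for lo, _ in bounds], [hi for _, hi in bounds])

best = None
for x0 in starts:                    # one SQP run per LHS starting point
    res = minimize(objective, x0, method="SLSQP", bounds=bounds,
                   constraints=[{"type": "ineq", "fun": resistance_constraint}])
    if res.success and (best is None or res.fun < best.fun):
        best = res
```

Keeping only the best feasible local solution across the LHS starts is what gives the method its (practical, not guaranteed) global character.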

Example 1: Verification of the Optimum Design Method
Values of the design parameters in this example are listed in Table 1, where ρ_c is the density of concrete. The ranges of b and h are specified for this example. The design variables obtained from the standard design approach and from the optimal design are shown in Table 2.
From Table 2, it can be seen that h and b are larger after optimization; an increase of h and b raises the bending resistance of the RC beam. After optimal design, W is smaller by 16.9%. If W is interpreted as the cost, the optimal design saves 16.9% of the cost, which is very economical. The optimal design result in Table 2 thus demonstrates the effectiveness of the optimal design method described above.

Example 2: Effect of Section Size (b, h) on the Optimal Design
The energy ratio e and the cost ratio m vary from country to country, so their ranges should be obtained before the parameter analysis. Considering the uncertainties of e and m, it is reasonable to extend the ranges shown in Tables 3 and 4; therefore, 60 ≤ e ≤ 200 and 10 ≤ m ≤ 150 were used in this study. Set θ_a = 2° and μ ≤ 10; the other design parameters are the same as in Example 1. When analyzing the effect of b on the optimal design, set h = 20 in and 10 in ≤ b ≤ 20 in (Example 2(a)). When analyzing the effect of h, set b = 15 in and 18 in ≤ h ≤ 25 in (Example 2(b)).
In order to compare the optimal results conveniently, the optimal design results at the smallest values of the ratios were selected as the reference results, and the optimal results were normalized by (23), where the subscript denotes the reference results. When the difference between the values of m and e is large, an increase in cost results in a reduction in embodied energy. However, when the value of m is close to the value of e, the cost-optimal results reduce the embodied energy simultaneously.
It should be noted that two parameters were used as deformation constraints: θ and μ. In most cases only one of them reached its allowed value. Taking Example 2(a) as an example, the optimal values of θ/θ_a and μ/μ_a are shown in Figure 6. When the value of r is small, θ reaches the allowed value θ_a. As r increases, θ/θ_a decreases while μ/μ_a increases until μ reaches the allowed value μ_a. Two ranges were therefore defined: the θ range (in which θ = θ_a) and the μ range (in which μ = μ_a). The trends differ between the two ranges, as shown in Figure 7. As the value of r increases, the resistance changes from curve A to curve E; curve A to curve C is the θ range and curve C to curve E is the μ range. In the θ range, the resistance decreases as r increases; in the μ range, it increases. This shows that the optimal results differ substantially when different deformation constraints are adopted. To obtain a safe design, the constraints on θ and μ should be considered simultaneously.

Example 3: Effect of Material Strength on the Optimal Design
Values of the design parameters were the same as those of Example 2(a). The optimal design results for r = 10 were chosen as the reference results, and the optimal design results were normalized by (23). First, the effect of the concrete compressive strength on the optimization effectiveness Δ was studied (see Figure 8). The optimization effectiveness is closely related to the ratio r; however, the relationship between the concrete compressive strength and the optimization effectiveness is not clear, which differs from the conclusion under conventional loads [29].
Then, the effect of the yield stress of steel on the optimization effectiveness Δ was investigated (see Figure 9). As the yield stress of steel increases, Δ decreases regardless of r, which means that the optimal design is most effective when the yield stress of steel is small.

Conclusions
The sustainable design and the blast-resistant design were combined in this paper. A general optimization procedure to minimize the cost and the embodied energy of an RC rectangular cross-section beam was presented. The optimal design was cast as a nonlinearly constrained optimization problem and solved using Latin hypercube sampling and the sequential quadratic programming (SQP) method. Several examples were presented to investigate the optimization effectiveness. The major conclusions are as follows: (1) The optimal results differ when different deformation constraints are adopted. To obtain a safe design, the constraints on θ and μ should be considered simultaneously.
(2) The optimization results are closely related to the cost ratio m and the embodied energy ratio e. When the value of m is close to the value of e, the cost-optimal results reduce the embodied energy simultaneously.
(3) The optimal design is more effective when the yield stress of steel is small. In the present study, when the yield stress of steel is decreased from 70000 psi to 60000 psi, the efficiency of the optimal design increases significantly for both cost (from 5.6% to 11.5%) and embodied energy (from 4.2% to 9.8%).

Introduction
Dynamic kill is one of the most efficient ways to deal with the low fracture pressure of shallow formations in the seabed. In this method, kill fluid with an initial density is first pumped into the well, making the bottom-hole pressure equal to or exceed the formation pore pressure so as to prevent further invasion of formation fluid into the borehole. Then a heavier kill fluid is gradually added in order to complete the kill and reach a static stable state. Unlike conventional kill methods, which generate back-pressure at the wellhead to balance the formation pressure, dynamic kill balances the formation pore pressure with the bottom-hole pressure by exploiting the annulus flow resistance. Several well control methods have been established so far. A mathematical model for well flow control was developed; although simple, it is inconsistent with the actual situation, as it ignores the frictional loss of the annulus fluid [1]. Based on the Nickens theory, a model of deep water well control was put forward, which has been widely used in deep water well control in previous studies; however, dynamic kill was not addressed in this model [2]. An iterative method was proposed that comprehensively considered factors such as borehole shape, gas invasion, and the kill process [3]. Li et al. studied two-phase flow in the well and solved the partial differential equations of the flow [4]. Later, Sun et al. proposed a multiphase flow calculation method on the basis of dynamic kill well control in deep water [5].
In the previous models, only relatively simple factors, such as friction in the choke line, were considered; actual applications require a comprehensive analysis of the various fluid components in the wellbore. Despite its comprehensiveness, the multiphase flow model is not convenient for engineering applications, owing to the lack of detail in the calculation of well control parameters and the complexity of its seven components. Therefore, a practical and simple calculation method is needed. This paper proposes an improved model for the calculation of kill fluid density in which the water phase, oil phase, gas phase, and solid phase are all taken into account.
One key parameter of dynamic kill is the kill fluid volume of a well, whose calculation requires accurate prediction of the kill fluid density. Research on kill fluid density variation therefore plays an important role in dynamic kill analysis and operation. The conventional methods for kill fluid density calculation are the additional stress method, the pressure ratio method, and the additional equivalent density method. In these methods, p_a is the additional pressure, MPa; p_p is the pore pressure, MPa; ρ_f is the formation fluid density, g/cm³; h is the depth of the kill well, m; h_0 is the half depth of the oil layer, m; and G_p is the gradient of pore pressure, MPa/m. Although these methods are convenient, they do not account for the kill fluid problems arising from well structure and depth, especially the differences between the deep water dynamic kill process and onshore drilling in geological and engineering conditions. These methods are used to monitor the formation pressure in order to determine the drilling fluid density, but they are hardly applicable in engineering, simply because it is difficult to measure the formation pressure during gas invasion or blowout in the drilling process.
The principle of the U-shaped tube was used for kill fluid density calculation, but the effects of fluid temperature and pressure were not taken into account in this method [8]. The effects of low temperature on the rheological parameters of the drilling fluid and on the annulus circulating pressure loss of each well section were considered in calculating the dynamic kill drilling fluid density; however, the multiple phases and components of dynamic kill in deep water were not taken into account [9]. The gas migration rule of gas-liquid two-phase flow was adopted to obtain a safe kill fluid density for shallow wells, but the effect of temperature was not considered [10]. Based on the gas-liquid two-phase flow pressure and the pressure characteristics of the whole wellbore derived from a Y-shaped tube, the calculation of the fluid density of each well section was derived and a design method for kill fluid density was introduced, again without considering the influence of temperature and pressure [11]. A new method to calculate the dynamic kill fluid density was also proposed, which determines the instantaneous density of the inlet kill fluid mixture from the instantaneous displacement of seawater and weighted drilling fluid at a given time, and then obtains the density at each point in the borehole by integration. Although this method yields instantaneous densities and improves the calculation accuracy, the changes of temperature and pressure in deep water, which may significantly affect kill fluid density, are not considered [12]. Improvement through further research is therefore desired.
At present, the same method as in on-land drilling is adopted for the calculation of deep water kill fluid density. However, deep water well kill faces narrow fluid density windows and difficulty in determining the fracture pressure, and the density change during deep water drilling is not taken into account in the current calculation method [13]. The existing well kill methods mainly adopt the initial seawater density, neglecting the fluid density variation in a complicated environment. Given the low temperature of deep water and the high temperature and pressure of deep oil and gas reservoirs, predicting the fluid density variation with temperature and pressure is crucial for deep water drilling development, for avoiding deep water drilling accidents, and for ensuring safety. Therefore, a multiphase, multicomponent dynamic kill fluid density model accounting for the temperature and pressure variations in deep water is proposed in this paper.
The formation temperature is given by [15] T_f = T_0 + g_T h, where T_0 is a representative surface temperature, °C; g_T is the geothermal gradient, °C/m; and h is the depth, m.

Wellbore Temperature Field Model in Dynamic Kill.
The equation for the temperature field in the drill string is given in [16], where T_fr is the temperature rise caused by pressure loss in the drill string, °C; c is the specific heat capacity of the drilling or kill fluid, J/(kg·°C); r_p is the outside radius of the drill string, m; h_ap is the heat transfer coefficient from the annulus to the drill string, J/(m²·s·K); T_a is the temperature in the annulus; and T_p is the internal temperature of the drill string.
The annulus temperature field is governed by a similar equation, in which the temperature rise caused by fluid pressure loss is calculated analogously; λ_f is the formation thermal conductivity, J/(m²·s·°C); r_w is the well radius, m; and h_w is the heat transfer coefficient between the drilling fluid in the wellbore and the well wall, J/(m²·s·K).

Wellbore Pressure Field in Dynamic Kill.
The bottom-hole pressure can be represented as

p_bhc = p_hy + Δp_a + p_wh,

where p_bhc is the bottom-hole circulating pressure; p_hy is the hydrostatic fluid column pressure; Δp_a is the annular pressure loss; p_wh is the surface casing pressure; Δp_al is the total pressure loss of the power-law fluid in the annulus laminar flow sections; and Δp_at is the total pressure loss of the power-law fluid in the annulus turbulent flow sections. Assume that there are n_1 laminar flow sections and n_2 turbulent flow sections in the drill string and n_1 laminar flow sections in the annulus, the length of each section being L_i, its cross-sectional area A_i, and the annulus external and inner diameters of each section D_i and d_i, respectively. The pressure distribution in the wellbore then follows, where A is the annulus cross-sectional area, m²; L is the section length used when calculating the annulus pressure loss, m; D_b is the drill bit diameter, m; v_p is the rate of penetration, m/s; v_s is the cuttings slip velocity, m/s; v_a is the upward velocity of fluid in the annulus, m/s; C_v is a velocity correction factor; D_w is the well diameter, m; D_p is the drill string external diameter; D_hy is the annulus hydraulic diameter, m; and Δp_pt is the total pressure loss of the power-law turbulent fluid in the circular pipe.
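The bottom-hole pressure balance described above can be sketched as a sum over wellbore sections. The segment data and friction-loss values below are illustrative assumptions, not field data.

```python
# Sketch of p_bhc = p_hy + annulus friction losses + surface casing pressure.
# Each annulus segment contributes a hydrostatic term and a friction-loss term.

G = 9.81  # gravitational acceleration, m/s^2

def bottom_hole_pressure(segments, p_wh):
    """segments: list of (fluid_density kg/m3, length m, friction_loss Pa)."""
    p_hy = sum(rho * G * length for rho, length, _ in segments)  # hydrostatic column
    dp_a = sum(dp for _, _, dp in segments)                      # annulus pressure loss
    return p_hy + dp_a + p_wh

# Two illustrative annulus sections: riser (seawater) and open hole (kill fluid)
segments = [(1030.0, 1500.0, 2.0e5),   # density, length, friction loss
            (1450.0, 2000.0, 6.0e5)]
p_bhc = bottom_hole_pressure(segments, p_wh=0.0)   # Pa
```

In the dynamic kill itself, the friction-loss terms would come from the power-law laminar/turbulent correlations referenced in the text rather than fixed numbers.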

Kill Fluid Density Model
The kill fluid density is calculated based on the composite model [17]:

ρ_kill = ρ_oi φ_o + ρ_wi φ_w + ρ_s φ_s + ρ_ca φ_ca, (10)

where ρ_o is the oil phase density of the kill fluid, kg/m³; ρ_w is the water phase density, kg/m³; ρ_s is the solid phase density, kg/m³; ρ_ca is the chemical additive density, kg/m³; φ_o, φ_w, φ_s, and φ_ca are the volume fractions of the oil, water, solid, and chemical additive phases, respectively; ρ_oi is the oil phase density after the volume changes caused by temperature and pressure, kg/m³; and ρ_wi is the corresponding water phase density, kg/m³.
The kill fluid is considered as solid and liquid phases, in which the liquid phase is water or oil and the solid phase comprises all other immiscible materials except cuttings.
The solid phase is assumed to consist of n materials with volume ratios x_1, x_2, . . . , x_n, and the remaining liquid phase volume ratio is x_l. Combining (10) and (11) yields (16), where ρ_l(p_0, T_0) is the density of the kill fluid liquid phase at atmospheric temperature and pressure, kg/m³, and ρ_l(p, T) is the density at a given temperature and pressure (p, T), kg/m³. Equation (16) makes it clear that the key to calculating the kill fluid density at a given temperature and pressure is knowing how the liquid density varies with temperature and pressure.
The density of the solid phase is not affected by temperature and pressure during the kill process, whereas the liquid phase clearly is: temperature causes thermal expansion and pressure causes compression. As a result, the downhole density is no longer equal to the surface density.
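The composite model reduces to a volume-fraction-weighted sum of phase densities. A minimal sketch follows; the phase densities and fractions are illustrative, and in the full model the oil and water densities are first corrected for temperature and pressure as described in the next subsections.

```python
# Sketch of the volume-fraction-weighted composite density model.
# Phase densities and fractions below are illustrative assumptions.

def kill_fluid_density(phases):
    """phases: list of (density kg/m3, volume_fraction); fractions must sum to 1."""
    total = sum(frac for _, frac in phases)
    if abs(total - 1.0) > 1e-9:
        raise ValueError("volume fractions must sum to 1")
    return sum(rho * frac for rho, frac in phases)

phases = [(850.0, 0.10),    # oil phase
          (1020.0, 0.60),   # water phase
          (4200.0, 0.25),   # weighting solids (e.g., barite)
          (1100.0, 0.05)]   # chemical additives
rho_kill = kill_fluid_density(phases)   # kg/m3
```

Because the solid-phase densities are pressure- and temperature-insensitive, only the first two entries would be re-evaluated at downhole conditions.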

Liquid Water Density.
At a given temperature, the liquid water density at different pressures can be expressed as

ρ = ρ_0 exp[c (p − p_0)], (17)

where c is the compression coefficient, Pa⁻¹; ρ_0 is the water density at the initial state, kg/m³; and p_0 is the initial pressure, 1 atm. Similarly, at a given pressure, the water density at different temperatures is expressed as

ρ = ρ_0 exp[−γ (T − T_0)], (18)

where γ is the expansion coefficient, °C⁻¹, and T_0 is the initial temperature.
For different temperatures and pressures, liquid water density can be calculated using (17) and (18). The specific calculation process is shown in Table 1.
The compression coefficient can be obtained by fitting the compression coefficient data at normal water temperature (20 °C) and different pressures. With a superposition method, the expression for the water density at 20 °C and a given pressure is obtained, where ρ(p, 20) is the water density at pressure p and 20 °C, kg/m³. Similarly, at a given pressure, the water density at different temperatures can be obtained by fitting the expansion coefficient data from Table 2. Using the superposition method, the water density at the initial pressure of 1 atm can be calculated, where ρ(1, T) is the density of water when the pressure is 1 atm and the temperature is T °C, kg/m³, and γ(T) is the expansion coefficient of water at 1 atm and T °C, 1/°C.
According to (20) and (21), the calculated water density is shown in Figure 1. In this figure, the actual operating conditions are considered, with temperatures ranging from 20 °C to 200 °C and pressures from 10 atm to 1200 atm.
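The pressure and temperature corrections of (17)-(18) combine multiplicatively. The sketch below uses assumed constant coefficients; in the paper, the compression and expansion coefficients are fitted piecewise from tabulated data rather than held fixed.

```python
import math

# Sketch: water density corrected for pressure (compression) and
# temperature (thermal expansion). Coefficient values are assumptions;
# c is expressed per atm here for readability (the text uses Pa^-1).

def water_density(p_atm, t_c, rho0=998.2, p0=1.0, t0=20.0,
                  c=4.5e-5, gamma=3.0e-4):
    """c: compressibility per atm (assumed); gamma: expansion coeff per degC (assumed)."""
    return rho0 * math.exp(c * (p_atm - p0)) * math.exp(-gamma * (t_c - t0))

rho = water_density(p_atm=600.0, t_c=90.0)   # a deep-well-like condition, kg/m3
```

The model reproduces the qualitative trends in Figure 1: density falls with temperature at fixed pressure and rises with pressure at fixed temperature.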
To improve the precision of the calculation, the water density can also be obtained by the nonlinear regression method. At a given pressure, the relationship between water density and temperature can be represented by (22), whose coefficients are constants characteristic of water that can be determined by nonlinear regression. For a more detailed characterization of the density changes of liquid water, a segmentation method can be used with pressure steps of 10 atm; (22) then becomes (23). By fitting the data in Figure 1, the water density at different pressures and temperatures can be calculated with (24).

Oil Phase Density.
The oil phase density also changes with temperature and pressure. An empirical correlation with good precision is given in [18,19]. A modified correlation is proposed here by considering the oil phase density in different borehole sections, and a section-by-section summation method is adopted, where ρ_o,i(p_i, T_i) is the oil phase density of section i, kg/m³, and p_i and T_i are the pressure, Pa, and temperature, °C, of section i.
Through the above formula, the density change of the oil phase can be obtained, as shown in Figure 2.

Determination of Kill Fluid Density.
In general, the liquid phase of kill fluid is a mixture of water and oil. Accordingly, this paper calculates the kill fluid density by considering the water and oil volume ratio coefficients. In the liquid phase, if the aqueous phase volume ratio is f_1 and the oil phase volume ratio is (1 − f_1), the liquid density is

ρ_l(p, T) = f_1 ρ_w(p, T) + (1 − f_1) ρ_o(p, T).

The following equation can then be obtained from (23).

Analysis of Influencing Factors in the Model.
Dynamic kill is a relatively new well kill method in deep water drilling.
Here, the kill data of a well are chosen for a case study; the related parameters are shown in Table 3. During the initial "dynamic stable stage," the seawater density can be taken as the initial kill fluid density.
During the "static stable stage," the temperature and pressure below the surface can be calculated from (8) and (9), on the condition that the surface temperature is 21 °C and the relative pressure is zero; the temperature and pressure of the deep water environment are thereby taken into account as the cause of the kill fluid density change.
With (28), the density of the kill fluid can be calculated at a given temperature and pressure. The composition of the selected kill fluid is shown in Table 4. The trend of the kill fluid density in the static stable stage is shown in Figure 3 for temperatures of 26.67 °C, 93.33 °C, and 176.67 °C. It can be seen that, at the same pressure, the density decreases as the temperature rises; at higher temperatures the density changes with pressure are insignificant. The calculated values were compared with measured values for error analysis, as shown in Figure 4. The error is larger at low temperature and pressure and decreases as temperature and pressure increase, indicating that the model is more applicable under high temperature and high pressure conditions.

Model Application.
A kill operation cannot be conducted above the seabed wellhead; therefore, only the fluid density below it is calculated in this example. First, the wellbore temperature field is derived from the data in Figure 3; the temperature field model obtained from (8) is shown in Figure 5.
As shown in Figure 5, the temperature distribution is obtained from the boundary condition that the bottom temperatures are equal. The lowest geothermal gradient occurs at the seabed. The annulus temperature rises with increasing depth but decreases when approaching the bottom, whereas the temperature in the drill string continues to increase. From (28) and (29), the density distribution during circulation can be derived and compared with that in the static condition.
As shown in Figure 6, the comparison indicates that temperature and pressure exert a great influence on the annulus density and on the density variation in the drill string. Compared with Fu's method [13], the proposed model proves to be more accurate, as shown in Figure 6. The density in the drill string declines sharply toward the bottom of the well, although at high temperature the decline slows or even stops; meanwhile, the density in the annulus keeps increasing, indicating that the bottom pressure plays a decisive role.

Conclusions
(1) In this paper, all phases of the kill fluid in a dynamic well kill operation are taken into account, and the process is further divided into a dynamic stable stage and a static stable stage. Mixing the initial kill fluid and the heavy kill fluid together would greatly increase the difficulty of the calculation. The model developed in this paper predicts the fluid density more accurately than previous models, since it fully considers the initial kill fluid and the heavy kill fluid flowing in the wellbore with a section-by-section (grading) method.
(2) The kick tolerance decreases with increasing water depth, which requires accurate control of the kill fluid density. Given the characteristics of temperature and pressure in the deep water environment, the kill fluid density is affected by the coupling of temperature and pressure.
(3) The kill fluid density is inversely related to temperature and directly related to pressure, and the effect of pressure on the density change is more significant at higher temperatures. The oil phase density likewise decreases with temperature and increases with pressure, although its change is relatively small.
(4) With increasing temperature, the kill fluid density in the drill string declines gradually and then remains nearly constant near the bottom, while the density in the annulus below the seafloor increases. However, the influence of the kill timing variation on the density is not considered in this paper; further research on this subject will be conducted in the future.

Introduction
In practical engineering, most water inrush and mud burst disasters in karst tunnels are caused by insufficient safety thickness of the water-resistant strata between a concealed karst cave and the tunnel [1][2][3]. Thus, during karst tunnel construction, rationally determining the thickness of the water-resistant strata is of great engineering significance. For example, when the "energy-releasing and pressure-reducing" method is adopted to address a water-filled karst cavity, a certain thickness of rock plug is required to prevent the plug from being crushed, which would cause a water inrush disaster. In addition, during karst tunnel construction, the construction procedure must be changed and a reasonable safe distance determined when approaching small and medium filled karst caves. Generally, if the thickness of the water-resistant strata determined by a stability criterion is too small, the difficulty and cost of engineering treatment decrease, but the probability of instability of the water-resistant strata increases. On the contrary, if the thickness is too large, it obviously causes unnecessary economic waste; moreover, it may be impossible to relocate the tunnel. It can thus be seen that selecting a reasonable stability criterion for the water-resistant strata to determine the thickness is directly related to tunnel design, construction, and operation safety.
At present, simplified mechanical models and numerical simulation are adopted to analyze the stability of water-resistant strata and determine their thickness. Generally, the water-resistant strata are simplified as a mechanical model of a beam or slab [4][5][6][7][8]; the calculation is simple and clear, but of limited use because the influence factors of karst tunnel stability are not considered. Numerical simulation can be employed to evaluate the stability of water-resistant strata while considering these influence factors. Generally, the size of the plastic zone is used to estimate the stability of the water-resistant strata [2][3][4][5][6][7][8][9][10]. In fact, the occurrence of a plastic zone in a rock mass does not mean that failure occurs; failure occurs only when the equivalent plastic strain reaches a certain value [11], and the plastic zone is given only as a range, which is a single piece of information. Information such as the damage degree of the rock mass in the plastic zone, the most likely damaged area, and the evolution law of the damage degree is very difficult to obtain through the single index of plastic zone size [12]. On the whole, the plastic zone is intuitive but not objective.
It can be seen from the laws of thermodynamics that energy conversion is an essential characteristic of physical processes in materials, and material damage is a kind of state instability driven by energy. During the whole process from loading to failure, the rock mass exchanges energy with the surrounding rock all the time. Therefore, researching the energy change law during the deformation and failure of karst limestone, and an instability criterion for water-resistant strata based on the releasable elastic strain energy, is more conducive to reflecting the essential characteristics of global failure of karst tunnel rock mass. Regarding the energy principles of rock mass stability, many scholars have carried out research and achieved many results [12][13][14][15][16][17][18][19]. However, most of this research focuses on rock burst prediction in mining engineering, which nevertheless provides a line of thought for the stability analysis of karst tunnels based on releasable elastic strain energy. Currently, energy principles are seldom applied to the stability analysis of karst tunnels. We therefore introduce the energy principle into the stability analysis of rock mass excavated in karst areas, embedding the energy-principle-based failure criterion in finite element or finite difference software, which is of practical interest for ensuring the stability of water-resistant strata. If the instability criterion program for water-resistant strata based on releasable elastic strain energy is embedded into numerical analysis software, one can obtain not only the scope of the damage zone, but also the damage degree and its change law at each point.
Overall, the safety thickness of the water-resistant strata is determined through analyzing the stability of the karst tunnel based on the releasable elastic strain energy, which is of great theoretical significance and engineering value in guiding design and construction, preventing tunnel water and mud inrush disasters, and ensuring construction safety.

Calculation of Releasable Elastic Strain Energy of Rock Mass.
Taking a unit volume of the rock mass for energy analysis, the law of conservation of energy gives

U = U^d + U^e, (1)

where U is the work done by the outside on the rock, that is, the total energy absorbed by the rock from outside; U^e is the releasable elastic strain energy; and U^d is the dissipation energy. The relationship between U^d and U^e is shown in Figure 1: the shaded area U^e is the releasable elastic strain energy, and U^d is the dissipation energy, which mainly includes internal damage energy and plastic strain energy. The releasable elastic strain energy can be written as

U^e = (1/2)(σ₁ε₁^e + σ₂ε₂^e + σ₃ε₃^e), (2)

where σᵢ (i = 1, 2, 3) are the principal stresses in the three directions, εᵢ (i = 1, 2, 3) are the total strains, εᵢ^e are the elastic strains, ν is Poisson's ratio, and Eᵢ are the unloaded elastic moduli in the three directions. Substituting the elastic strains εᵢ^e = [σᵢ − ν(σⱼ + σₖ)]/Eᵢ (i, j, k = 1, 2, 3) into (2) gives

U^e = Σᵢ σᵢ[σᵢ − ν(σⱼ + σₖ)]/(2Eᵢ). (3)

Assuming the modulus is not affected by damage, the initial elastic modulus E₀ is usually taken instead of Eᵢ to simplify the calculation, so (3) can be rewritten as [14]

U^e = [σ₁² + σ₂² + σ₃² − 2ν(σ₁σ₂ + σ₂σ₃ + σ₁σ₃)]/(2E₀). (4)
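The final expression for the releasable elastic strain energy, U^e = [σ₁² + σ₂² + σ₃² − 2ν(σ₁σ₂ + σ₂σ₃ + σ₁σ₃)]/(2E₀) [14], is straightforward to evaluate numerically. The following sketch uses our own variable names (s1, s2, s3 for the principal stresses, E0 for the initial elastic modulus, nu for Poisson's ratio):

```python
def releasable_strain_energy(s1, s2, s3, E0, nu):
    """Releasable elastic strain energy per unit volume:
    U^e = [s1^2 + s2^2 + s3^2 - 2*nu*(s1*s2 + s2*s3 + s1*s3)] / (2*E0).
    Stresses and E0 in Pa give an energy density in J/m^3."""
    return (s1 ** 2 + s2 ** 2 + s3 ** 2
            - 2.0 * nu * (s1 * s2 + s2 * s3 + s1 * s3)) / (2.0 * E0)

# Example: uniaxial compression of 7 MPa with E0 = 17 GPa, nu = 0.27
u_e = releasable_strain_energy(7.0e6, 0.0, 0.0, 17.0e9, 0.27)
```

In the uniaxial case the cross terms vanish, so the result reduces to σ₁²/(2E₀), which is a useful sanity check.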

Establishment of RMFI (Rock Mass Failure Index) Based on Releasable Elastic Strain Energy.
Most of the energy input to the rock mass from outside is transformed into releasable elastic strain energy and stored in the rock mass. When the releasable elastic strain energy reaches the surface energy U₀ of the rock mass unit, global failure will occur. It should be noted that the releasable elastic strain energy is most likely to be released in the direction of the minor principal stress σ₃ [14]. In order to unify signs, compressive stress is taken as positive and tensile stress as negative in this paper. Xie et al. give the global failure criterion of the rock mass unit [14], which is elaborated as follows.

Global Failure Criterion of Rock Mass Unit under Triaxial Compression.
Most engineering rock mass units are in a triaxial compression state (σ₁ > σ₂ > σ₃ ≥ 0). The releasable elastic strain energy releases most easily in the direction of σ₃, that is, along the direction of the principal stress difference σ₁ − σ₃, and the release rate is proportional to σ₁ − σ₃. If the strain energy release rate in the i-th direction is defined as Gᵢ, then

Gᵢ = Kᵢ(σ₁ − σ₃),

where Kᵢ is a material constant. The maximum energy release rate occurs in the direction of the minimum compressive stress, G₃ = K₃(σ₁ − σ₃). When the maximum strain energy release rate G₃ exceeds the critical release rate G_c, the releasable elastic strain energy stored in the rock mass unit is released in this direction first. When the releasable elastic strain energy of the rock mass unit reaches the surface energy U₀ required for the global failure of the unit, global failure of the rock mass will occur; this condition is criterion (10), in which U^e is the releasable elastic strain energy of the rock mass, determined by (4), and σ_c is a material constant that can be determined by a uniaxial compression test.

Global Failure Criterion of Rock Mass Unit under Tension at Least in One Direction.
Tensile stress often appears in engineering rock mass, for example in unloaded rock mass, and represents a situation in which global failure is prone to occur. When the rock mass unit is under tension in at least one direction (σ₃ < 0), the tensile stress in any direction promotes the release of the releasable elastic strain energy, so the strain energy release rate is distributed according to the value of the principal stress:

Gᵢ = Kσᵢ (i = 1, 2, 3).

The maximum strain energy release rate occurs in the direction of the maximum tensile stress. Failure of the rock mass then satisfies criterion (16), in which U^e is the releasable elastic strain energy of the rock mass, determined by (4), and σ_t is a material constant that can be determined by a uniaxial tension test.

Establishment of RMFI (Rock Mass Failure Index).
When formulas (10) and (16) are employed to evaluate the stability of engineering rock mass, σ_t is the tensile strength of the rock mass (negative) and σ_c is the uniaxial compressive strength of the rock mass (positive). We define the damage degree of the rock mass as the rock mass failure index (RMFI), given by formula (17). When RMFI < 1.0, the unit is in a safe state; when RMFI = 1.0, the unit is in a critical state; when RMFI > 1.0, the unit is in a failure state. In this paper, RMFI is employed to characterize the failure degree of the water-resistant strata, and it also serves as the energy criterion for their instability. From formula (17), RMFI corresponds to the stress state of the rock mass unit: the greater the RMFI, the greater the failure degree of the unit. Therefore, once the stress state of each element of the rock mass is acquired, the RMFI of each unit can be determined, and the instability and failure state of the water-resistant strata can be judged.
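The three-way classification above can be sketched directly; this is a minimal illustration, with `classify_unit` and the tolerance being our own names rather than anything from the paper:

```python
def classify_unit(rmfi, tol=1e-9):
    """Map a rock mass failure index value to the unit's state:
    RMFI < 1.0 -> safe, RMFI = 1.0 -> critical, RMFI > 1.0 -> failure."""
    if rmfi < 1.0 - tol:
        return "safe"
    if rmfi > 1.0 + tol:
        return "failure"
    return "critical"

states = [classify_unit(v) for v in (0.4, 1.0, 1.7)]  # ['safe', 'critical', 'failure']
```

The tolerance only guards against floating-point noise around the critical value 1.0.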

Stability Analysis of Water-Resistant Strata Based on RMFI (Rock Mass Failure Index)
After the excavation of a tunnel in a karst area, every rock mass unit is in a specific stress state, while the compressive and tensile strength of the rock mass are definite values; the failure degree of each rock mass unit therefore corresponds to its stress state. Thus the numerical simulation software FLAC3D can be applied, and the instability discrimination program for water-resistant strata based on releasable elastic strain energy and RMFI is compiled in the FISH programming language embedded in FLAC3D. FISH is a scripting language built into FLAC3D that combines functional constructs with imperative (procedural) execution. In this way, we can obtain distribution nephograms of the releasable elastic strain energy and RMFI, showing the RMFI of every point in the rock mass; the area with RMFI > 1.0 indicates the rock mass failure zone.

3.1. Modeling. The Qiyueshan Tunnel of the Liwan highway is selected as the background engineering in this paper. The rock mass mechanical parameters of section GK2480 are shown in Table 1 [19]. The mesh of the model is shown in Figure 2, and the positions of the monitoring points are shown in Figure 3.

Elastic Strain Energy's Distribution Characteristics Analysis of Water-Resistant Strata in Karst Tunnel.
The display program for the releasable elastic strain energy of the rock mass is written in the FISH programming language, and its distribution nephograms are shown in Figures 4 and 5.
The releasable elastic strain energy values at each monitoring point of the water-resistant strata under different cave spans are shown in Table 2.
The distribution curves of the releasable elastic strain energy and the minimum principal stress at the monitoring points of the water-resistant strata under different spans are shown in Figure 6.
It can be seen from Figure 6(a) that the elastic strain energy at each monitoring point increases with the span (except the 1st unit at the bottom of the cave). At spans of 4 m and 8 m, the releasable elastic strain energy is mainly distributed at the bottom of the cave (about 0.088 MJ/m³ at the 4 m span). As the span increases, the releasable elastic strain energy gradually transfers to the top of the tunnel, reaching about 0.141 MJ/m³. The minimum principal stress distribution curve in Figure 6(b) shows that the water-resistant strata are under compression as a whole, and the minimum principal stress at the top of the tunnel is the lowest; its amplification is only 20% with increasing cave span. Triaxial test results for limestone show that the elastic energy storage capacity limit of the rock is proportional to the minimum principal stress [20, 21], and this limit is low at the top of the tunnel. Nevertheless, for spans greater than 8 m, the releasable elastic strain energy at the top of the tunnel is larger than at other locations; it increases by 281% with increasing tunnel span, which indicates that failure is prone to occur in the rock mass at the top of the tunnel and that the damage will be severe.

Distribution Characteristics Analysis of RMFI for Water-Resistant Strata in Karst Tunnel.
The FISH programming language was employed to write the RMFI display program, and its distribution nephogram is shown in Figure 7. To observe the extent of the damage area of the water-resistant strata, a conditional statement is used in the simulation: if RMFI < 1.0, the displayed RMFI is set to 0; if RMFI > 1.0, the specific RMFI value is displayed normally. The region with RMFI > 1.0 is thus the rock damage area, as shown in Figure 8.
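The same thresholding used in the FISH display program can be illustrated outside FLAC3D. The following NumPy sketch (our own code, not the authors') masks stable units so that only the damage area remains visible:

```python
import numpy as np

def damage_zone_display(rmfi):
    """Zero out stable units (RMFI < 1.0) so that only the damage
    area (RMFI > 1.0) remains visible in the contour plot."""
    rmfi = np.asarray(rmfi, dtype=float)
    return np.where(rmfi < 1.0, 0.0, rmfi)

field = np.array([[0.3, 0.9],
                  [1.2, 2.5]])
masked = damage_zone_display(field)  # stable cells become 0.0
```

Mapping stable cells to a constant keeps the color scale of the nephogram focused on the failed region.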
The RMFI values at the monitoring points of the water-resistant strata under different spans are shown in Table 3.
The distribution curves of RMFI at each monitoring point of the water-resistant strata under different cave spans are shown in Figure 9.
The paper takes the karst cave span as an example influence factor and analyzes the stability of the water-resistant strata and the distribution law of the damage area based on releasable elastic strain energy and RMFI. The approach unifies intuitiveness and objectivity: the releasable elastic strain energy and RMFI of each point in the rock mass are obtained quantitatively, and the failure zone of the water-resistant strata can be observed directly. If the critical extent of the failure zone of the water-resistant strata is defined as the safety thickness [2], then, by combining the RMFI display program with continuous adjustment of the distance between the tunnel and the karst cave, the safety thickness of the water-resistant strata can be determined vividly.

Engineering Application
Yichang-Wanzhou Railway Dazhiping tunnel is a two-lane tunnel. There is a water-rich karst cave at mileage DK137 + 768.5∼DK137 + 783.6. The internal water pressure is 0.73∼ 0.89 MPa. The cave span is 7.7∼9.2 m, and the cave height is 5.5∼7.8 m. The longitudinal span along the axial direction of the tunnel is about 13∼15 m. The relationship of the location between the cave and the tunnel is shown in Figure 10.
The karst cave can be simplified as an ellipse with a major axis of 8.45 m and a minor axis of 6.65 m. The depth of the tunnel is 110 m. Rock parameters are selected from the limestone test data obtained by Wang et al. [4]. The tensile strength and compressive strength of the rock mass are 0.570 MPa and 7.081 MPa, respectively, and the lateral pressure coefficient is 1.3. The unit weight of the rock mass is 26.5 kN/m³, the bulk modulus is 17.371 GPa, the shear modulus is 7.808 GPa, and the Poisson ratio is 0.27. We adjust the distance between the karst cave and the tunnel repeatedly until the critical location of the failure zone of the water-resistant strata is found (the critical contact position where RMFI > 1.0). The distance between the top of the tunnel and the bottom of the karst cave is then the safe distance (Figure 11). By this criterion, the water-resistant strata are unstable. In the actual construction, pregrouting and advanced pipe-roof support were employed, and the section influenced by the karst cave was finally completed successfully.

Conclusions
(1) The energy instability criterion of water-resistant strata based on releasable elastic strain energy is established, and the rock mass failure index (RMFI) is defined. It is employed as an energy criterion to characterize the damage degree of water-resistant strata: if RMFI < 1.0, the rock mass is stable; if RMFI = 1.0, it is in the critical state; if RMFI > 1.0, it is unstable.
(2) The releasable elastic strain energy and RMFI program was compiled in the FISH programming language of FLAC3D. Combined with this program, FLAC3D is applied to analyze the influence of a concealed karst cave on the failure zone.
(3) Using the releasable elastic strain energy and the RMFI defined in this paper, the safety thickness of the water-resistant strata in the Dazhiping tunnel is determined by the RMFI instability discrimination program. The result is close to the practical engineering observations. Therefore, this method can be used to cross-check other prediction methods or as a basis for engineering decisions.

Introduction
Fossil-fuel-fired power plants produce stable and controllable energy. Despite the rapid development of sustainable power generation, such as solar and wind power systems, fossil-fuel-fired generation is and will remain an important part of the power system. In the meantime, more effort must be made to reduce the waste gas emissions of the thermal power production process in order to protect the environment. To achieve this, the boiler furnace temperature should be controlled to follow certain curves; however, several challenges need to be overcome.
To start with, there is a lack of a dynamic and accurate model of the boiler furnace temperature. Several mechanism models have been constructed for the boiler. Gao and Dai [1] developed a new linear model of the steam unit; the model showed good performance in dynamic analysis of the power system. Alobaid et al. [2] presented numerical models at different steady-state operation points; the models achieve a relative error of less than 1% at steady state. They perform well in a given situation, but when the situation changes, the accuracy of the model drops significantly. Wu et al. [3] presented data-driven modeling based on a subspace method; it shows good performance in simulation, but the modeling method must be selected by experience before application. Many other nonlinear models have been constructed for the boiler-turbine unit, such as [4, 5], but they consider the unit as a whole system, with no standalone boiler model. Motivated by the successful use of data-mining methods in boiler wall temperature prediction [6], a predictive model is constructed in this paper based on the least squares support vector machine. To improve the predictive accuracy, a differential evolution algorithm is employed to optimize its parameters dynamically for different problems.
Second, a modern control method should be tailored to fit the boiler furnace temperature control problem. The widely used control method in thermal power plants is PID control. Although this method is easy to use and simple to understand, its control accuracy cannot meet the requirement. Besides PID, many other methods have been applied to boiler control. Park et al. [7] presented a robust controller for cogeneration plants. Keshavarz et al. [8] presented a discrete-time piecewise affine model and a model predictive control method for the boiler-turbine unit. Heo et al. [9] and Li et al. [10] proposed intelligent control methods based on particle swarm optimization and the genetic algorithm, respectively. These methods mainly focus on the application of modern control techniques, and little attention is paid to high-accuracy modeling. Wu et al. [3] presented a predictive control method and demonstrated its effectiveness; however, the rolling optimization problem in the model predictive control method was solved by a simple mathematical method that is inefficient for nonlinear problems. In [11, 12], model predictive control methods such as dynamic matrix control and nonlinear model predictive control were utilized, respectively. Liu et al. [13] presented two alternative methods for exploiting the performance of model predictive control. These methods obtained good performance, but they paid little attention to the optimization of the predictive model. With the development of model predictive methods, a generalized predictive control method was proposed and applied to boiler steam temperature control [14]; it provides a more flexible structure and fewer constraints on the predictive model.
Motivated by the two issues mentioned above, we propose a dual-optimized adaptive model predictive control (DoAMPC) method. Compared with previous research, the proposed DoAMPC has the following main feature: it adopts LSSVM to construct a predictive model of the boiler furnace temperature with high accuracy and speed, considering the main state variables and control variables; additionally, to improve the prediction accuracy, differential evolution (DE) is used to optimize the parameters of the LSSVM for each problem. The remainder of this paper is organized as follows. Section 2 briefly describes background knowledge of the boiler control problem, LSSVM, and DE. The details of the proposed DoAMPC are provided in Section 3. In Section 4, practical cases are used to verify the performance of the proposed DoAMPC by comparison with other common methods. The conclusions are drawn in Section 5.

Boiler Temperature Control Problem.
The production process is shown in Figure 1. The boiler turns fossil energy into thermal energy, and the steam generated by the boiler is transferred to the turbine for further use. There are dozens of variables in this process, which can be separated briefly into two types. The control variables, such as feedwater flow, are variables that the operator can set directly; setting these control variables properly is the main purpose of this paper. The state variables, such as oxygen content, are detected to monitor the production process and reflect its situation. All of these variables carry information that influences the boiler furnace temperature, so they are employed as input variables in the proposed DoAMPC.

LSSVM. Suykens [15] proposed LSSVM to improve the calculation efficiency of SVM. LSSVM is a supervised learning method, so a training dataset is needed to construct a model. Assume that S = {(x_num, y_num)}, num = 1, 2, ..., n, is the training dataset, where n denotes the number of samples, x_num ∈ R^nf denotes the input, y_num ∈ R denotes the output, and nf is the number of input features. Then the predictive model can be described as formula (1):

f(x) = Σ_num α_num K(x_num, x) + b, (1)

where f(x) is the predictive value for a new input x and K(x_num, x) is the kernel function, which provides an outstanding ability to deal with nonlinear problems. The main task of LSSVM is to obtain the coefficients α_num and b, after which the predictive model is established. Since the selection of the kernel parameters influences the predictive accuracy significantly, DE is employed to optimize the parameters dynamically and maintain accuracy across situations.

DE. DE was proposed by Storn and Price [16]. To solve an optimization problem with DE, the search area is specified first and the swarm is then initialized randomly. Assume that x_i (i = 1, 2, ..., np) is the i-th particle of an iteration. The fitness value of x_i is calculated from the objective function of the optimization problem, and the particles are updated following certain rules to improve the quality of the swarm. After a number of iterations, which depends on the optimization problem, the swarm converges to a relatively small area. Finally, the particle with the best fitness value is taken as the solution of the optimization problem.

Dual-Optimized Adaptive Model Predictive Control (DoAMPC) Method
The dual-optimized model predictive control method is proposed in this section. In Section 3.1, the relationship between the differential evolution algorithm and LSSVM and the main procedure are shown. Then the data preparation method used in this paper is given in Section 3.2. Next, the predictive model based on optimized LSSVM is described in Section 3.3. The optimization model is constructed and solved in Section 3.4. Finally, the feedback correction method is given in Section 3.5.

The Main Procedure of DoAMPC.
The main procedure of DoAMPC is shown in Figure 2.
Firstly, the practical data (PD) collected from power industry are preprocessed to reduce the noise and guarantee the effectiveness of the sample data (SD).
Then, to realize the predictive control of the boiler furnace temperature, a predictive model of the boiler furnace temperature must be constructed. Due to the nonlinear and time-varying nature of the boiler furnace temperature, a black-box model based on data analytics is built. The control variables u and the state variables x are used as inputs to obtain the boiler furnace temperature bt_t; the subscript t denotes the time index of each variable. In this model, the DE algorithm is utilized to improve the model accuracy by solving an optimization problem.
Thirdly, an optimization problem based on the black-box model and several constraint conditions is formulated and solved by DE. The solution is the optimized set value of the control variables u(t).
Finally, a model correction method is proposed to improve the control accuracy.

Preprocess the Practical Data.
The practical data are read from the database that stores the real process variables. These data reflect the state of the boiler and are precious for constructing the prediction model. However, due to the tough process environment, there are missing data and outliers in the practical data. To ensure the accuracy of the prediction model, the practical data must be preprocessed. In this paper, we use two different methods to deal with the missing data and the outliers, respectively.
First, to deal with the missing data, cubic spline interpolation is employed. The state variables of the boiler are continuous, and cubic spline interpolation is well suited to this kind of problem: it recovers the missing data with relatively smooth values, is easy to program, and has good stability and convergence properties.
Second, the outliers are deleted and replaced by cubic spline interpolation. The 3σ criterion is used to detect outliers: 1000 continuous samples are used to calculate the mean (μ) and standard deviation (σ) of every variable, and if a value falls outside μ ± 3σ, it is considered an outlier. This method is chosen because it handles a large amount of data with high efficiency. To ensure its effectiveness, the amount of data should be large; in fact, a large amount of practical data is collected from the production process every minute.
After the preprocessing of the practical data, the sample data are utilized in the following procedure. The preprocessing is done repetitively when there are new data collected.
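The two preprocessing steps above can be sketched together. This is a hedged illustration with our own function name, applied to a one-dimensional series; the production code would operate per variable over 1000-sample windows:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def preprocess(values):
    """Flag 3-sigma outliers (and NaN gaps) in a 1-D series and
    replace them by cubic-spline interpolation over the rest."""
    values = np.asarray(values, dtype=float)
    mu, sigma = np.nanmean(values), np.nanstd(values)
    bad = np.isnan(values) | (np.abs(values - mu) > 3.0 * sigma)
    good = ~bad
    # Spline fitted on the indices of the retained samples
    spline = CubicSpline(np.flatnonzero(good), values[good])
    out = values.copy()
    out[bad] = spline(np.flatnonzero(bad))
    return out
```

Because boiler state variables are continuous, interpolating over the sample index gives a smooth, plausible replacement for each rejected point.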

Construct the Predictive Mode.
As mentioned before, the traditional boiler furnace temperature models are not good at dealing with dynamic situations and cannot be used in the model predictive control method directly. So, a black-box model based on DE and LSSVM is constructed.
DE is a swarm-based optimization algorithm which was proposed by Storn in 1995. In this algorithm, several particles made up of parameters that need to be determined are initialized with random values within a constraint. Then, the fitness of each particle is calculated. And the particles are updated following certain formulas considering their fitness. Finally, the swarm converges to a global best location which is the solution of the optimization problem.
LSSVM was proposed by Suykens and Vandewalle as a variant of SVM. Instead of the quadratic programming problem of standard SVM, a system of linear equations is solved in LSSVM, which accelerates the calculation while inheriting the ability to solve nonlinear problems. Additionally, LSSVM shows outstanding generalization ability. The selection of the kernel function parameters and the regularization parameter is the key to maintaining predictive accuracy; due to the complexity of this selection, an optimization problem is constructed and then solved by DE.
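Training an LSSVM regressor amounts to solving one linear system in the coefficients α and b, namely [[0, 1ᵀ], [1, K + I/γ]][b; α] = [0; y]. A compact sketch with our own helper names, assuming an RBF kernel, is:

```python
import numpy as np

def rbf_kernel(X, Z, sigma2):
    """RBF kernel matrix K[i, j] = exp(-||X[i] - Z[j]||^2 / sigma2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma2)

def lssvm_fit(X, y, sigma2, gamma):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma2) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(Xnew, X, alpha, b, sigma2):
    """Evaluate f(x) = sum_q alpha_q K(x, x_q) + b."""
    return rbf_kernel(Xnew, X, sigma2) @ alpha + b
```

With a large penalty factor the fit nearly interpolates the training data, which is a quick way to check the implementation.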
Assume that the sample data used to train the predictive model are SD = {(in_q, bt_q)}, q = 1, 2, ..., ns, where in_q denotes the input of the prediction model, made up of the control variables u and the state variables x, bt_q denotes the boiler furnace temperature, and ns denotes the number of samples. Then the predictive model obtained by LSSVM is

f(in) = Σ_q α_q K(in, in_q) + b, (1)

where f(in) is the prediction for a new input in, α_q and b are the coefficients calculated by LSSVM, and K(in, in_q) is the kernel. In this paper, we use the radial basis function kernel shown in (2):

K(in, in_q) = exp(−‖in − in_q‖² / σ²). (2)

Whenever LSSVM is used, the kernel parameter σ² and the penalty factor γ influence the prediction accuracy significantly, and the best parameters vary from problem to problem. To obtain the best parameters dynamically, DE is employed: in the hybrid of DE and LSSVM, DE generates candidate parameter pairs while LSSVM supplies the fitness value of each candidate. To test the accuracy of the predictive model, new data different from SD are prepared. Assume that TD = {(in_p, bt_p)}, p = 1, 2, ..., nt, is the test dataset, where nt denotes the number of test samples. The optimization problem for selecting the best parameters is to choose the parameters that minimize the prediction error:

min_{σ², γ} (1/nt) Σ_p (f(in_p) − bt_p)². (3)

To solve this problem with DE, the following steps are carried out.

Step 1 (initialize the particles). Each particle encodes a candidate (σ², γ) pair and is initialized randomly within the specified search range.
Step 2 (calculate the fitness of particles). To calculate the fitness of a particle, the kernel parameter σ² and the penalty factor γ are first set from that particle. Then a predictive model is constructed from the sample data, and the objective function in (3) is evaluated; its value is taken as the fitness of the corresponding particle. Finally, the global best location g is set to the best particle.
Step 3 (exchange the information among different particles).
To update the particles, each particle changes its location according to the DE update equation, in which x_i(t+1) denotes the new candidate particle; λ₁ and λ₂ denote the scale factors that control the update velocity; r₁, r₂ = rand[0, 1] are random numbers uniformly distributed in [0, 1]; and p₃ and p₄ are random integers in the range [1, np], different from i. These random numbers are regenerated in every iteration.
Step 4 (select the particles). After the update, run Step 2 to calculate the fitness of the candidate particles. Compare the fitness of the candidate x_i(t+1) with that of x_i(t), and let the better of the two become x_i(t+1).
Step 5. If one of the termination conditions is met, stop the algorithm and output the predictive model. If not, go to Step 3.
The termination conditions used in this paper are as follows: (1) the fitness of the global best particle g is smaller than 0.1%, or (2) the iteration counter reaches a specified integer T. The outputs, stored in a specified file for further application, include the best parameters, the coefficients α and b, and the sample data.
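Steps 1-5 amount to a standard DE loop. The sketch below uses the generic rand/1 mutation with binomial crossover (not necessarily the exact update rule of Step 3) and a toy sphere objective standing in for the LSSVM validation error:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_minimize(obj, lo, hi, np_=30, F=0.7, CR=0.9, iters=200):
    """Differential evolution: mutate with x_a + F*(x_b - x_c),
    binomial crossover, then greedy one-to-one selection."""
    dim = len(lo)
    pop = lo + rng.random((np_, dim)) * (hi - lo)       # Step 1: init
    fit = np.array([obj(p) for p in pop])               # Step 2: fitness
    for _ in range(iters):
        for i in range(np_):
            a, b, c = rng.choice([j for j in range(np_) if j != i],
                                 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])     # Step 3: mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            f = obj(trial)
            if f <= fit[i]:                             # Step 4: selection
                pop[i], fit[i] = trial, f
    best = int(np.argmin(fit))                          # Step 5: output
    return pop[best], fit[best]

# Toy stand-in objective with minimum at (1, 2)
x_best, f_best = de_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                             np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```

For the parameter-selection problem of Section 3.3, the objective would instead train an LSSVM with the candidate (σ², γ) and return the test-set error of (3).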

Construct and Solve a Rolling Optimization Problem.
To control the boiler furnace temperature, we attempt to make the temperature follow a certain curve. This curve, named the reference curve, is given according to the practical requirement; in other words, the objective is to minimize the error between the outputs and the reference curve. After discretization of the reference curve, assume that r_k is the reference value at time k. Then the objective function of the rolling optimization problem is

min Σ_{k=1}^{P} (f_k(in) − r_k)², (6)

where P denotes the predictive length and f_k(in) denotes the predictive value of the k-th model for input in. To realize prediction over P steps, P predictive models need to be constructed. The input in has two parts: the first contains the proposed control variables u, and the second contains the state variables at the prediction moment; u is the solution that needs to be optimized. Considering the predictive models constructed in Section 3.3 and the bound constraints on u, the rolling optimization problem is

min_u Σ_{k=1}^{P} (f_k(u, x) − r_k)²  subject to  u_L ≤ u ≤ u_U, (7)

where α_k and b_k denote the coefficients of the k-th predictive model, and u_U and u_L denote the upper and lower bounds of the control variables.
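Under the notation above, the rolling objective handed to DE can be sketched as follows; the step-wise models here are toy stand-ins, not the trained LSSVM models:

```python
import numpy as np

def rolling_objective(u, state, models, reference):
    """Sum of squared errors between the P step-wise model
    predictions and the reference curve over the horizon."""
    inp = np.concatenate([u, state])     # [control vars | state vars]
    return sum((m(inp) - r) ** 2 for m, r in zip(models, reference))

# Toy stand-in models: the step-k "prediction" is (k+1) * sum(inputs)
models = [lambda z, k=k: (k + 1.0) * z.sum() for k in range(3)]
val = rolling_objective(np.array([0.5]), np.array([0.5]),
                        models, reference=[1.0, 2.0, 3.0])
```

DE then searches over `u` (within its bounds) to minimize this value, while `state` stays fixed at the current measurements.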
To solve the problem, DE is utilized to obtain the optimized control variables. The flowchart of the procedure is shown in Figure 3. The content of each step is given in Figure 3.
Initialize the swarm. As in Section 3.3, assume that x_i (i = 1, 2, ..., np) is a particle; however, its component x_{i,j} now represents the j-th control variable, which differs from Section 3.3. x_{i,j} is initialized randomly by

x_{i,j} = u_{L,j} + rand[0, 1] · (u_{U,j} − u_{L,j}),

where rand[0, 1] is a random number obeying a uniform distribution, drawn independently for each j.
To calculate the fitness of each particle, the predictive values are calculated. Then, the error between the predictive outputs and reference curve is calculated by formula (6).
The terminal conditions and the method to update the particle are the same as the ones in Section 3.3.
The result outputted by DE includes the optimized control variables and the objective value for further analysis.

Model Correction. The state of the boiler is dynamic.
And the boiler load should change with the requirement. In this case, the prediction model needs to be corrected if the predictive error exceeds the acceptable range. In this paper, considering that the modeling can be done in seconds, a reconstruction strategy is employed.
If the relative predictive error is larger than 1% for a number of consecutive sample periods, the prediction model is reconstructed with updated sample data.
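The reconstruction trigger can be sketched as follows; the consecutive-period count `n` is our placeholder, since the text leaves the exact count unspecified:

```python
class CorrectionTrigger:
    """Signal model reconstruction once the relative prediction
    error exceeds 1% for n consecutive sample periods."""

    def __init__(self, n, threshold=0.01):
        self.n, self.threshold, self.streak = n, threshold, 0

    def update(self, predicted, measured):
        rel_err = abs(predicted - measured) / abs(measured)
        self.streak = self.streak + 1 if rel_err > self.threshold else 0
        return self.streak >= self.n   # True -> rebuild the model

trig = CorrectionTrigger(n=3)
flags = [trig.update(p, 100.0) for p in (100.5, 102.0, 103.0, 104.0)]
```

Resetting the streak on any in-tolerance sample avoids rebuilding on isolated spikes, which matters because reconstruction, though fast, still takes seconds.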

Computational Results
To verify the performance of the proposed algorithm, experiments based on practical production data are carried out. Both the accuracy of the predictive model and the effectiveness of the DoAMPC method are tested. All algorithms are implemented in VC++, and all experiments are run on a PC with an Intel Core i7-4712MQ CPU (2.30 GHz), 4.00 GB RAM, and the Windows 10 operating system.

Experiment Design.
The data utilized in the following experiments are collected from a Chinese thermal power plant. The details of the data are shown in Table 1. The inputs include the control variables (such as feedwater flow, pulverized coal flow, and wind flow) and the state variables (the boiler furnace temperature, oxygen content, etc.). The control object is the temperature of the boiler. Case 1 includes a step change in the boiler furnace temperature which can illustrate the sudden change in practical production. Case 2 and case 3 represent the frequent continuous change of boiler furnace temperature.
To use DE to solve the two optimization problems, some DE parameters need to be set. By the commonly used trial-and-error method, the parameters are set as follows: np = 100, λ₁ = 1.4, λ₂ = 1.9, and T = 1000.

The Experimental Results of the Predictive Model.
To verify the accuracy of the proposed black-box model, several experiments based on the datasets mentioned in Section 4.1 are carried out, with three widely used methods, radial basis function neural network (RBFNN), multilayer perceptron (MLP), and linear programming (LP), as comparison methods. 5-fold cross-validation is employed: the dataset is divided randomly into 5 subsets, and 4 of the 5 subsets (400 instances) are used as training data while the remaining subset (100 instances) is used as test data. The results in subplots (a)-(c) illustrate that (1) the prediction errors of the proposed method are smaller than those of the comparative methods, which shows the effectiveness of the proposed predictive method, and (2) the predictive errors of the proposed method are smaller than 10, which can meet the requirement of practical control. We also note that when the load of the unit changes, the prediction error becomes larger than in a stable situation. This may be caused by a lack of training data reflecting the load change: most of the training data are collected in a stable situation, so the number of samples containing load-change information is smaller than that of stable samples. The results in subplots (d)-(f) indicate that (1) the proposed method obtains stable, high accuracy in all three cases and (2) its relative predictive errors are below 1.0%.
All the results illustrate that the proposed modeling method can construct an accurate model for nonlinear problems. From subplots (a)-(c), the following can be concluded: (1) the control results of the DoAMPC method in the three different situations follow the reference curves dynamically, which means the DoAMPC is effective; (2) compared with the PID method, the control accuracy of the proposed method is higher on all the datasets. From subplot (d), we find that the control errors of the proposed method are less than 1%, and the average error is below 1.5%.
Similar to the prediction results, the control errors are larger at the load change points than in a stable situation. This may be caused by the lag of model reconstruction; that is, the predictive model needs several sampling periods to be reconstructed. Nevertheless, the good performance of the reconstructed model demonstrates the effectiveness of the model correction strategy.
All the results demonstrate that the DoAMPC can handle nonlinear and dynamic problems with good performance, and its generalization ability is outstanding. With updated sample data, the method can be applied to other boilers in other thermal power plants.

Conclusion
A dual-optimized adaptive model predictive control (DoAMPC) method based on a black-box model and the DE algorithm is proposed in this paper in order to control the boiler furnace temperature accurately. One major feature of the proposed DoAMPC method is that it uses a black-box model based on LSSVM, and the experiments show its advantages compared to some commonly used algorithms. In the future, we will make more effort to improve the prediction accuracy and try to utilize other data-mining methods in this structure.

Motivation of Research.
With the globalization of the economy, many airline companies choose to join airline alliances to extend their aviation networks and increase load factors. In an airline alliance, airlines can combine the legs they operate separately and create additional itineraries through code-sharing agreements. In this paper, code-sharing means that an airline's product is offered to the cooperating airline, and the ticket for the product may be sold by both airlines regardless of which one operates it. The airline revenue management problem is concerned with deciding which fare classes to make available for sale during the booking period. In an airline alliance, because of antitrust law, the cooperating airlines cannot use each other's revenue management systems; the main research problems are therefore how many tickets of each class should be allocated to the airlines so as to maximize the revenue of the alliance, and how to share that revenue in a way that is not only fair but also incentive-providing.
In this study, we focus on developing a method to allocate the seat capacity efficiently and an incentive-providing revenue sharing mechanism for alliances. We assume that the airline alliance acts as a third party: it may know the relevant information, while the individual member airlines may not. Combining combinatorial auctions with the EMSR method [1], we establish a combinatorial auction-based seat capacity allocation model for airline alliances and a twofold revenue sharing model under the VCG mechanism. To a certain extent, the allocation may be regarded as centralized while remaining in line with antitrust law.

Literature Review of Related Topics.
Compared with the central decisions of a single airline's network, the decisions in an airline alliance are made by multiple parties. Boyd [2] considered that the marginal-seat-benefit balance principle can maximize alliance revenue in a discrete revenue management system. He mentioned that the ideal way to maximize airline alliance revenue is to treat the alliance as an independent company, but such a centralized decision-making system seems impossible for airline alliances on account of legal barriers and technical requirements. Similarly, Vinod [3] surveyed the state of research on airline alliance revenue management and the network equilibrium conditions: if an airline alliance wants to maximize revenue, it must ask its members to share the seats on the itineraries and then redistribute those shipping spaces; he also argued that airline alliances should treat the buying price as a control measure for the shipping spaces. At the 2011 AGIFORS Cargo and Revenue Management Study Group meeting in Taipei, Ratliff and Weatherford summarized issues in code sharing and airline alliance revenue management [4], concluding that members of an airline alliance have a large effect on their cooperative partners' revenue management, and they analyzed present research issues and future research directions. Houghtalen et al. [5] consider carrier collaborations; they develop models that determine transfer prices between the partners so that the decisions of the individual carriers in the alliance coincide with the alliance-optimal decisions, and then analyze whether the resulting allocations are desirable to the alliance partners.
To study the seat capacity allocation and control problem of a multi-segment alliance network, Fernandez de la Torre [6] discussed space allocation methods under the code-sharing principle. One method lets the operator control a fixed or variable number of shipping spaces for its partner; this form of code sharing has failed to be widely used. The other is the free-sale agreement, which is the most commonly used at present. He also pointed out the potential losses airlines may incur by overestimating code sharing and advised distributing revenue in proportion to avoid individual losses. Oum and Park [7] list further incentives for airlines to join strategic alliances. Capacity control procedures are employed to allocate seat capacity; for individual airlines outside an alliance, capacity control has attracted much attention. Netessine and Shumsky [8] analyze the seat allocation decisions of airlines in horizontal and vertical competition. Topaloglu [9] assumed that airlines make seat capacity control decisions independently in a multi-segment alliance network, and only the airline selling the ticket decides whether to accept an arriving passenger demand; he established a linear programming model of inventory control for a single airline and derived each airline's revenue share using a duality approach.
In the field of revenue sharing, Wright et al. [10] analyzed the revenue sharing mechanism between two airlines operating multiple legs in the same alliance and found that static and dynamic allocation mechanisms influence both the airlines' revenues and the total revenue of the alliance. They emphasized the complexity of the space allocation mechanism based on discrete dynamic allocation and used a similar method: first they assumed that, in each period, besides the transfer price of every flight, each airline knows the fare class levels, future passenger arrival rates, and revenue shares of its cooperative airlines; then they relaxed these restrictions so that airlines only need to consider their own network booking situation, with the transfer price serving as the only linking information. The advantage of this method is reduced calculation and a more accurate result. Wright [11] assumed incomplete information between the alliance partners, since they were unable or unwilling to share certain information concerning code sharing; he introduced a decomposition rule for the central dynamic program to determine approximate bid prices for the individual partners in an airline alliance. Çetiner and Kimms [12] analyzed the revenue sharing mechanism in airline alliances with cooperative game theory; they use the proposed nucleolus allocations as a benchmark to evaluate different mechanisms applied in a decentralized setting, with the evaluation performed through a fairness measure. Graf and Kimms [13] used an option method to study the internal income distribution of airline alliances and established a revenue sharing linear programming model introducing the option price and striking price, although the recovery price of the option method did not appear in the model.
Grauberger and Kimms [14] analyzed competition models in parallel and complementary alliances with the Nash equilibrium method and were the first to consider multiple fare classes of seats in such a model.
Besides, Shumsky [15] observes that major traditional carriers are forced by low-cost competitors to process an increasing share of their traffic through airline alliances. Passengers recognize strategic alliances when they book a code-sharing flight: a code-sharing agreement allows an airline to sell, under its own brand, flight tickets provided by its partners. Airlines have incentives to cooperate within a strategic alliance because of the new revenue potential created by larger airline networks, coordinated flight schedules, and access to protected markets; moreover, there are cost-cutting potentials from higher load factors, and building strategic alliances can also create market entry barriers. Transchel and Shumsky [16] introduced a closed-loop dynamic pricing game for alliance partners that operate parallel and substitutable flights: the competitors compete horizontally on these flights while also setting prices for their local and code-shared products. Belobaba and Jain [17] described the technical difficulties of information sharing faced by alliance revenue management and proposed information sharing mechanisms to overcome them.
Auctions in which bidders may bid on bundles of items, with a bidder receiving every item in the bundle if the bid wins and none at all if it loses, are called combinatorial auctions (CA). In recent years, many auctions have involved the sale of a variety of distinct assets; examples are airport time slots, network routing, and delivery routes. Ledyard et al. [18] describe the design and use of a combinatorial auction to select carriers, where the objects bid upon were delivery routes and it was profitable for bidders to have their trucks full on the return journey. CAs are often applied to allocating resources: Kuo and Miller-Hooks [19] solved the problem of allocating residual track capacity among multiple competing carriers, where infrastructure ownership and train operations are vertically separated, to facilitate delivery by train.
1.3. Structure. The organization of this paper is as follows. In Section 2, we establish a combinatorial auction model to allocate seat capacity to the airlines. In Section 3, we formulate a twofold revenue sharing method for the airlines of the alliance. In Section 4, we describe the relevant concepts and the computational procedure. In Section 5, we test the model with an example. Finally, conclusions are presented in Section 6.

Problem Description.
Cooperation among airline alliance members is usually based on bilateral and multilateral agreements. From the perspective of revenue management, one of the most important problems during cooperation is how to allocate seat capacity among the members so as to maximize the total revenue of the alliance. Because of the antitrust laws and the lack of technical support, the airlines are not aware of the cabin classes and demands of their cooperative partners.
In the combinatorial-auction-based method of airline alliance seat capacity allocation, the total seat capacity of all airlines is treated as the resources (commodities), and each member airline buys its optimal combination of commodities, interpreted as the seat capacity allocated to it on the different O-Ds, so that the airline alliance can maximize revenue.
This study is based on the following assumptions: (1) Each member airline in the airline alliance only operates one leg.
(2) The airline alliance implements the code-sharing principle, so each member airline can "sell" seat capacity belonging to the other airlines.
(3) Each airline only knows its own company's fare classes and passenger demands.
(4) According to the antitrust laws, the airline alliance, as a third party, cannot disclose any airline's pre-alliance sales quantities to any other airline.
(5) The passenger demands are independent and comply with the normal distribution.

Parameter Meaning.
n is the number of airlines (bidders) in the alliance; i is the index of an airline, i = 1, ..., n; c_i is the number of seats belonging to the i-th airline; m + 1 is the number of nodes of the itinerary operated cooperatively; (o, d) denotes an O-D pair, where o is the origin and d is the destination, o = 1, ..., m, d = 2, ..., m + 1, d > o; S is a combinatorial bidding strategy, and S_ij is the j-th combinatorial bidding strategy of the i-th airline; each strategy records the demand quantity for every O-D pair it covers, together with an indicator of whether the strategy uses a given airline's seat capacity on each segment or not; b(S) is the bid paid for strategy S; R is the total revenue of the airline alliance from the seat capacity auction; x_S = 1 if strategy S is selected and zero otherwise.

Model Establishment
The objective function (3) maximizes the expected revenue of the airline alliance; constraint (4) forces the total seat capacity on the different O-Ds that the member airlines bid for not to exceed the operating airlines' transport capacity; constraint (5) ensures that each airline wins exactly one bid; in constraint (6), the decision variable equals 1 if the combinatorial bidding strategy is selected and zero otherwise.
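A brute-force sketch may clarify this winner-determination model: pick exactly one strategy per airline (constraint (5)) so that seat usage on each leg stays within capacity (constraint (4)) and the total bid (objective (3)) is maximal. The bids, seat vectors, and capacities below are hypothetical, and a real instance would use an ILP solver rather than enumeration:

```python
from itertools import product

def winner_determination(bids, capacity):
    """bids[i] is a list of (price, seats_used_per_leg) strategies for
    airline i; capacity[l] is the seat capacity of leg l. Enumerate all
    strategy combinations and keep the feasible one with the largest
    total bid (illustrative brute force)."""
    best_val, best_choice = float("-inf"), None
    for choice in product(*[range(len(b)) for b in bids]):
        use = [0] * len(capacity)
        val = 0
        for i, j in enumerate(choice):
            price, seats = bids[i][j]
            val += price
            for leg, s in enumerate(seats):
                use[leg] += s
        if all(u <= c for u, c in zip(use, capacity)) and val > best_val:
            best_val, best_choice = val, choice
    return best_val, best_choice

# two airlines, two legs with 300 and 250 seats; strategies are hypothetical
bids = [
    [(50000, (200, 0)), (80000, (250, 100))],   # airline 1's strategies
    [(60000, (0, 200)), (40000, (50, 150))],    # airline 2's strategies
]
value, choice = winner_determination(bids, capacity=(300, 250))
```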

Combinatorial Auction-Based Twofold Revenue Sharing Model
The revenue sharing theories proposed in the literature are rarely used; at present, the most common method is proportional distribution in accordance with the airline alliance agreement: the increased revenue generated by the cooperating airlines after joining the alliance is distributed in the proportions stipulated in the agreement. However, this allocation method is not entirely reasonable, because distributing at a fixed proportion decreases the airlines' enthusiasm: no matter which member airline sells a ticket, all members share the same percentage of the fare from that ticket. Therefore, drawing on the combinatorial auction space allocation model above, we establish a twofold revenue allocation for airlines. In the new model, the airlines in the alliance obtain revenue in two ways: one from the traditional proportional sharing corresponding to the agreement, and the other from selling tickets for the seat capacity they were allocated. Thus, the more expensive the tickets an airline sells, the more it earns. The method is a kind of encouragement for all the airlines, and the model requires the sharing proportions to lie in (0, 1).
In formula (7) of this model, one quantity denotes the proportion of the increased revenue shared with each member airline; another denotes the airline's revenue from the seats allocated by combinatorial auction after joining the alliance; h denotes the payment price for the member airline's selected strategy; a further term denotes each airline's revenue before joining the airline alliance; and the last represents each airline's revenue from selling tickets for the allocated seat capacity, that is, the expected revenue, which equals the bid price in this paper. Constraints (8) and (9) ensure that the proportions lie between 0 and 1 and sum to 1.
Each airline's increased revenue satisfies Δ > 0; that is, an airline's revenue increases when it participates in the alliance, which ensures the stability of the airline alliance.

Determination of the Bid Price.
When airlines bid for a combinatorial strategy of seat capacity, they do not know the other airlines' bidding prices for the different combinations. Information is incomplete among buyers for two reasons: (a) the brand values, fare levels, and passenger demands of the airlines differ, and (b) the antitrust laws apply.
In the model, we take the predicted value of the seat capacity strategy as the bidder's maximum willingness to pay. After joining the airline alliance, an airline can forecast the passengers' demands for the multiple seat classes on the different itineraries. Through the EMSR method, we can compute the airline's expected value of the seat capacity and treat it as the bidding price.
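As an illustration of how EMSR-style values can be computed under the normal-demand assumption (5), the following sketch implements the EMSR-b protection-level heuristic; the paper does not specify which EMSR variant it uses, and the fares and demand parameters below are hypothetical:

```python
import math

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF via bisection on erf (stdlib only)."""
    cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def emsr_b(fares, mu, sigma):
    """EMSR-b protection levels for fare classes sorted from highest to
    lowest fare, with independent normal demands. Returns, for each j,
    the protection level for classes 1..j against class j+1. A sketch
    only; the paper's exact EMSR variant is not spelled out."""
    levels = []
    for j in range(len(fares) - 1):
        m = sum(mu[: j + 1])                                  # aggregate mean
        s = math.sqrt(sum(x * x for x in sigma[: j + 1]))     # aggregate std
        fbar = sum(f * d for f, d in zip(fares, mu[: j + 1])) / m
        z = norm_ppf(1.0 - fares[j + 1] / fbar)
        levels.append(m + s * z)
    return levels

# hypothetical three-class example: fares, mean demands, std deviations
levels = emsr_b([500, 300, 150], [40, 60, 100], [10, 15, 25])
```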

Realization Process of the Seat Capacity Allocation Model
Step 1. The combinatorial auction is an NP-hard problem, so it is impossible for an airline to bid on every strategy in a real-world application. To stay close to reality and remain computationally convenient, airlines use an individual value acting as the minimum reference unit to generate all viable strategies; take an alliance of two airlines as an example. Step 2. Using the EMSR method [5], we compute the corresponding bid price of every strategy.
Step 3. According to the bid prices and strategies submitted by the airlines, the airline alliance solves the 0-1 program to obtain the optimal solution. The seat constraint is derived as follows: take an alliance of two airlines, and assume airline 1 offers two bidding strategies and airline 2 offers three; the left-hand side of constraint (4) can then be expanded accordingly.
Step 4. The optimal solution is based on a minimum reference unit, so it is a second-best solution.
The unit of price is $; in the demand columns the first number represents the average demand and the second the standard deviation. The historical data are collected from airlines.

The VCG Mechanism of Revenue Allocation.
In this paper, we use the VCG mechanism to determine the payment price of each airline: an airline pays the loss that its participation imposes on the other bidders. VCG is a direct mechanism with built-in incentives; its advantage is that it serves each bidder's individual interest while sparing bidders complicated strategic calculation. Under the VCG mechanism, if all airlines adopt their dominant strategies (i.e., the dominant-strategy equilibrium is reached), the utility of the seat capacity reaches its maximum. The bidders therefore gain the profits produced by the VCG mechanism and increase their independent profits.
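The payment rule can be sketched concretely: each airline pays the welfare loss its participation imposes on the others, i.e. (the others' best value without it) minus (the others' value in the chosen outcome). The bids and capacities below are hypothetical, and the brute-force allocation is for illustration only:

```python
from itertools import product

def alloc_value(bids, capacity, skip=None):
    """Best total bid when each airline (except `skip`) picks exactly one
    strategy subject to leg capacities; brute force over all choices."""
    active = [i for i in range(len(bids)) if i != skip]
    best, best_choice = 0, None
    for choice in product(*[range(len(bids[i])) for i in active]):
        use = [0] * len(capacity)
        val = 0
        for i, j in zip(active, choice):
            price, seats = bids[i][j]
            val += price
            for leg, s in enumerate(seats):
                use[leg] += s
        if all(u <= c for u, c in zip(use, capacity)) and val > best:
            best, best_choice = val, dict(zip(active, choice))
    return best, best_choice

def vcg_payments(bids, capacity):
    """VCG payment sketch: pay_i = (others' optimum without i)
    - (total optimum - i's accepted bid)."""
    total, choice = alloc_value(bids, capacity)
    pay = {}
    for i in choice:
        without_i, _ = alloc_value(bids, capacity, skip=i)
        others_with_i = total - bids[i][choice[i]][0]
        pay[i] = without_i - others_with_i
    return choice, pay

bids = [
    [(50000, (200, 0)), (80000, (250, 100))],  # hypothetical strategies
    [(60000, (0, 200)), (40000, (50, 150))],
]
choice, pay = vcg_payments(bids, capacity=(300, 250))
```

Note that each payment is at most the winning bid, consistent with the observation below that the total VCG payments fall short of the total bids.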

The Empirical Study
There are two airlines in an airline alliance: airline 1 operates Phoenix (PHX)-Chicago (ORD) and airline 2 operates Chicago (ORD)-Frankfurt (FRA). Airline 1 provides 300 seats and airline 2 provides 250 seats. According to the alliance agreement, the increased revenue is shared between airlines 1 and 2 in the proportion 30% to 70%. Each airline offers several fare classes for each itinerary; an airline product is defined by an O-D pair with a specific fare class and is referred to as an "ODF." Forecasts of the fares and demands of the two airlines' ODFs before and after joining the alliance are shown in Tables 1 and 2: Table 1 shows each airline's fares and demands before joining the alliance, and Table 2 shows them for the different O-D pairs after joining. Based on the model calculation, Table 3 lists the relevant results of the combinatorial auction, including the airlines' seat capacity allocations, their bid prices, the prices to be paid, and the allotment of the increased revenue under the traditional mechanism.
Under the VCG mechanism, the calculation shows that the sum of the payment prices is smaller than the sum of the winning bids, and the sharing proportion of the increased revenue is close to the contribution proportion of ticket selling (as shown in Figure 1), which shows that this distribution method pays more attention to the airlines' sales volumes. Using the model can motivate the airlines and promote the development of the whole airline alliance.

Conclusion
In this paper, we establish a combinatorial auction model to allocate seat capacity on the different O-Ds to the different airlines. Without breaking the antitrust laws, this method can maximize the revenue of the airline alliance and ensure that every airline obtains a seat capacity allocation on the different O-Ds. Compared to the traditional revenue distribution method, the combinatorial-auction-based method not only lets airlines gain revenue from the alliance but also lets them profit independently from the seat capacity they bid for; that is to say, the higher the ticket prices they sell at, the more they earn. This stimulates the airlines to increase their own revenue by bidding for more seat capacity and selling at higher prices. The VCG mechanism determines the final payment price and the allocation of the increased revenue among the airlines. Experimental results show that the mathematical model established in this paper is in line with the actual needs of airline alliances, and the algorithm can satisfy the seat capacity allocation needs of airline alliances.
There are also some limitations. In common computing software, the computation time can be long because of the large amount of calculation and the complexity of airline alliances. Besides, owing to commercial confidentiality, we could not obtain real data for an airline alliance. How to compute the space allocation results for real applications more quickly and more accurately is a direction for further research.

Introduction
With the development of IR technology, the IR guided missile has gradually replaced the radar guided missile as the main weapon in air combat. In the wars of the past 20 years, about 90% of aircraft were damaged by IR guided missiles [1], so studying IR defense strategy has become necessary. In recent years, much research has been carried out on air combat, especially optimal flight paths [2] and multiaircraft cooperative air combat [3], but little on IR defense strategy. IR defense is the key to IR countermeasures: by performing reasonable and effective means of defense, the aircraft can escape from the missile and the enemy aircraft and even counter them. The head-on case is particularly dangerous, because the closing motion between aircraft and missile leaves the pilot little time to react before the aircraft is shot down. All in all, it is important to design reasonable and effective means of defense, which improve aircraft survivability and protect pilots' lives.
Arthur establishes a model of decoys and the vertical-S maneuver and finds that the best defensive strategy is for the aircraft to deploy decoys and perform the vertical-S maneuver simultaneously [4]. However, the missile considered there tracks the center of energy and has very weak antijamming performance; such missiles have been replaced by a new generation in modern air combat, so that defense strategy offers little guidance today. Fumiaki Imado studies the defense strategy of barrel roll maneuvers against a proportional navigation guidance missile and concludes that a high barrel roll maneuver generally produces a larger miss distance than a split-S maneuver [5], but most of that research concerns the avoidance maneuver alone and does not analyse the interference effect of decoys. Xinhui et al. establish a mathematical model of the barrel roll maneuver and study its effect on terminal evasion [6], obtaining that the miss distance first becomes bigger and then smaller as the barrel roll angle rate changes; however, they do not contrast other maneuvers, and the conclusions are not accurate.
In this paper, the best avoidance maneuver is obtained by contrasting the horizontal-S maneuver with the barrel roll maneuver. Then, taking the antijamming performance of the missile into account and applying IR decoys within the avoidance maneuver, different defense strategies are compared and the best one is obtained, which helps the aircraft avoid missiles and improves its survivability.
The paper is constructed as follows. Firstly, a simulation model containing the aircraft and surface-type IR decoys is established. Secondly, measured data are put to good use, which ensures the reliability of the model. Finally, the best defensive strategy against IR guided missiles incoming from the front of the aircraft is acquired by simulation and analysis, considering the interference of maneuvers and surface-type IR decoys comprehensively.

Analysis of Defensive Strategy of Aircraft Confront IR Guided Missile
When an IR guided missile flies head-on toward the aircraft, it is infeasible to avoid the missile by consuming its energy through reciprocating maneuvers. To avoid the missile, the following points are taken into consideration.

Weaken Radiation of Aircraft Skin.
The skin radiation from the front of the aircraft is the lowest [7]. As the distance grows, the IR radiation no longer reaches the threshold of the IR detector in the seeker [8]. Thus, in a head-on attack, the missile cannot capture the target aircraft at long range, and the hit ratio drops sharply. To avoid a head-on missile, the aircraft should obviously keep its nose pointing at the missile all the time.
If the aircraft already points at the incoming missile, the pilot does not need to adjust; in most cases it does not, and the pilot must adjust the aircraft constantly to keep it pointing at the missile. However, as the missile-target distance shortens, the seeker eventually captures enough radiation from the aircraft skin to hit it. Weakening the skin radiation alone is therefore unrealistic for avoiding the missile and must be combined with decoys; this paper will not pursue it further.

Perform the Maximum Overload Evadable Maneuver.
During the attack, the speed of the missile is more than twice that of the aircraft. When the missile is close to the aircraft, the limit on the missile's own overload makes its turn radius very large, and the turn rate may not be large enough to track the aircraft [9]. There is thus a good chance of avoiding the attack by performing a maximum overload evasive maneuver, especially the snake maneuver and the barrel roll maneuver [10]. The maximum overload of the missile is assumed to be 30 g in this paper. The barrel roll maneuver makes the image point of the aircraft move in continuous circles on the focal plane of the seeker; the missile must change its direction of motion all the time and may lose the target because of its overload limit [11]. The snake maneuver, also called the S maneuver, mainly comprises the vertical-S and horizontal-S maneuvers, which have no essential difference [12]. The horizontal-S maneuver is simulated in detail in this paper; its principle of avoiding the missile is similar to that of the barrel roll maneuver. The aircraft motion could be described by six-degree-of-freedom equations, but it is difficult to obtain the aerodynamic coefficients and control quantities, such as rudder angle, which makes the equations hard to solve. To ensure the credibility of the aircraft trajectory, actual flight data from the flight data recorder are adopted.

Establishment of Model
The flight data were collected from 200 sorties of an aircraft and the maneuvers identified [13]; from these, 50 maneuvers were sampled to build the maneuver database. One record of the maneuver database, such as a barrel roll maneuver, is chosen. To ensure the authenticity of the aircraft trajectory, the initial time is assumed known, and the flight information of the aircraft (such as position, state, and speed) is obtained by interpolation during the simulation.

Model of Aircraft Radiation.
Aircraft radiation comprises plume radiation, hot-parts radiation, skin radiation, reflected sunshine, reflected earthshine, and reflected skyshine [14]. Plume radiation and hot-parts radiation are collectively called exhaust system radiation, and the reflected radiation is so weak that it can be ignored [15]; we therefore analyse only skin radiation and exhaust system radiation in this paper [16].
Calculating the exhaust system radiation from theoretical formulas is not only complicated but also differs from real data. Therefore, the measured radiation intensity of the exhaust system at different speeds and heights is adopted in this paper, and the relationship between sight angle and exhaust system radiation intensity is stored in a database.
Suppose the skin radiation intensity and the measured exhaust system radiation intensity are given. Firstly, the coordinate systems are established. The ground coordinate system, fixed on the ground, is an inertial coordinate system whose origin is the projection of the aircraft position onto the horizontal plane at the time of foil deployment; one axis points vertically upward, and the other two lie in the horizontal plane and complete a right-handed system. The body coordinate system's origin is the aircraft centroid; one axis is the longitudinal axis of the aircraft (forward positive), a second lies in the plane of symmetry (upward positive), and the third completes the right-handed system. The relationship between the coordinate systems is shown in Figure 1.
Secondly, the angle between the line of sight and the body axis is calculated. Suppose the aircraft coordinate is (x0, y0, z0) and the missile coordinate is (x1, y1, z1) in the ground coordinate system, and let L_bg denote the transition matrix from the ground coordinate system to the body coordinate system (Figure 1 shows the relationship of the coordinate systems). The radiation intensity of the exhaust system in the direction of the line of sight is then obtained by interpolating in the database relating exhaust system radiation intensity to sight angle.
Thirdly, the radiation intensity of the aircraft skin is calculated as a band-limited term: it depends on the emissivity of the skin, the included angle between the sight direction and the outer normal of each skin surface element, the element area, and the skin temperature, integrated over the wave band of interest. The stagnation temperature is mainly related to the atmospheric temperature and the flight velocity, and the average temperature of the aircraft skin surface is approximately proportional to it. The selection of the proportionality factor should consider aerodynamic heating, environmental radiation (solar and atmospheric), and internal heating (engine, jet nozzle, and electronic cabin); it is determined by the combined action of thermal convection, heat conduction, and thermal radiation [17]. When the aircraft surface is smooth (without external weapons), is not exposed to sunshine, and is well insulated, the factor is small; its range is from 0.7 to 0.95. Lastly, the total radiation intensity is the sum of the skin term and the exhaust system radiation multiplied by a correction factor that accounts for the skin covering.
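As a numerical illustration of a band-limited skin-radiation term of this kind, the following sketch integrates Planck's law over an assumed 3-5 um band; the emissivity, temperatures, band limits, and the Lambertian-element assumption are illustrative, not the paper's exact expression:

```python
import math

C1 = 3.741771e-16  # first radiation constant, W*m^2
C2 = 1.438777e-2   # second radiation constant, m*K

def band_exitance(T, lam1, lam2, n=2000):
    """Blackbody radiant exitance (W/m^2) in the band [lam1, lam2] (m),
    integrating Planck's law with the trapezoidal rule."""
    def planck(lam):
        return C1 / (lam ** 5 * (math.exp(C2 / (lam * T)) - 1.0))
    h = (lam2 - lam1) / n
    s = 0.5 * (planck(lam1) + planck(lam2)) + sum(
        planck(lam1 + i * h) for i in range(1, n)
    )
    return s * h

def skin_band_intensity(T, eps, area, lam1=3e-6, lam2=5e-6):
    """Band intensity of a Lambertian skin element: eps * M_band * A / pi.
    The viewing-angle cosine factor is omitted for brevity."""
    return eps * band_exitance(T, lam1, lam2) * area / math.pi

# hotter skin radiates more in the assumed 3-5 um band
I_cold = skin_band_intensity(280.0, 0.85, 1.0)
I_hot = skin_band_intensity(320.0, 0.85, 1.0)
```

As a sanity check, integrating over a very wide band recovers the Stefan-Boltzmann total exitance to within a few percent.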

Model of Surface-Type IR Decoy.
Surface-type IR decoy consists of many foils, which are compressed in a sealed barrel and isolated from the air. Once exposed to the air, the foils self-ignite rapidly. The continuous burning time of the foils is about two or three seconds (sec), with the temperature maintained in a certain range that changes slowly compared with a point-type IR decoy. At the same time the temperature will not be too high, so the radiation characteristics are closer to those of the aircraft [18]. Surface-type IR decoy is one of the most effective pieces of IR interference equipment: it can produce large-area IR radiation and shield the image of the target's IR radiation, seriously interfering with the discrimination of the seeker. After deployment, the surface-type IR decoy drops sharply under gravity and aerodynamic drag, following a trajectory similar to a parabola.

Motion Model of Surface-Type IR Decoy.
Under the influence of gravity, aerodynamic force, internal force, random factors, and so on, the motion laws of the individual foils are different and complex; therefore, we regard the surface-type IR decoy as a whole and study the shape of its spread and the motion trajectory of its centroid instead of the motion law of every foil.
It can be known from the Bernoulli (drag) equation that a(t) = C_D ρ S V^2(t) / (2m), where V(t) is the velocity of the decoy centroid at time t; V_x(t), V_y(t), and V_z(t) are the components of V(t) along the x, y, and z axes; C_D is the drag coefficient; S is the frontal area of the spread surface-type IR decoy; ρ is the air density; m is the decoy mass; and a(t) is the deceleration generated by aerodynamic drag.
The decoy velocity is much less than the local speed of sound, so C_D is assumed invariant during the movement of the decoy. During the flight of the decoy, once the drag and gravity are equal and opposite, V(t) is invariant and the decoy descends at an even speed V_d, determined by the balance m g = (1/2) C_D ρ S V_d^2, that is, V_d = sqrt(2 m g / (C_D ρ S)), where V_d is the uniform decline speed, which can be obtained from a low-altitude deployment experiment with the surface-type IR decoy.
The decoy centroid coordinate at time t is obtained by integrating the velocity components from the deployment point. (2) Spread Shape Equations of Decoy. The spreading shape of the surface-type IR decoy approximates an ellipsoid whose long axis is roughly aligned with the motion direction of the decoy centroid. The long axis ranges from ten to twenty meters and the short axis from three to five meters [19], and both are related to the velocity. The relationship between the lengths of the long and short axes and the initial speed of the decoy is shown in Figure 2. Fitting the curve in Figure 2 gives approximate fitting equations for a and b, where a and b are the long axis and short axis, respectively, V_s is the local speed of sound, V_0 < V_s, and V_0 is the initial speed of the decoy.
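A minimal sketch of the centroid motion, assuming gravity along -y in the ground frame and recovering the combined drag constant from the measured uniform decline speed via the terminal-velocity balance m g = (1/2) C_D ρ S V_d^2; the linear spread-axis fit is hypothetical, since the paper reports only the axis ranges from Figure 2:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def decoy_trajectory(p0, v0, v_d, dt=0.01, t_end=3.0):
    """Integrate the decoy centroid motion with quadratic aerodynamic
    drag. The drag constant C_D*rho*S/(2m) is recovered as g/v_d^2
    from the measured uniform decline speed v_d (an assumption)."""
    k = G / v_d**2                       # combined drag constant, 1/m
    p, v = np.asarray(p0, float), np.asarray(v0, float)
    traj = [p.copy()]
    for _ in range(int(t_end / dt)):
        speed = np.linalg.norm(v)
        a = -k * speed * v               # drag opposes velocity
        a[1] -= G                        # gravity along -y (assumed)
        v = v + a * dt
        p = p + v * dt
        traj.append(p.copy())
    return np.array(traj)

def spread_axes(v0, v_sound=308.0):
    """Long and short axis of the ellipsoidal cloud versus initial
    speed v0: a hypothetical linear fit spanning the reported ranges
    of 10-20 m and 3-5 m (v_sound approximates 8000 m conditions)."""
    r = min(v0 / v_sound, 1.0)
    a = 10.0 + 10.0 * r                  # long axis, m
    b = 3.0 + 2.0 * r                    # short axis, m
    return a, b
```

The trajectory array can then be combined with the spread axes to place the ellipsoidal decoy image in the seeker's field of view at each time step.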

Radiation Model of Surface-Type IR Decoy.
The radiation characteristics of the surface-type IR decoy are decided by the radiation characteristics of the foils diffusing in the air. The foils are coated with magnesium and Teflon [20], which generate much heat rapidly when exposed to the air [21]. A sampling test is done to measure the variation of the foils' temperature during the whole deployment process, and the curve of the foils' temperature varying with time is then drawn after interpolation and smoothing. The medium-wave-band radiation intensity of the surface-type IR decoy ranges from 1000 W/sr to 2500 W/sr. Under the condition of 8000 m height and 1 Ma speed, the interpolation curve of the dynamic radiation intensity of the surface-type IR decoy is shown in Figure 3, and the gray-level distributions of the surface-type IR decoy are shown in Figures 4 and 5. The maneuver simulation results are shown in Figures 6-9. The barrel roll radius is the radius of the cylindrical curved surface generated by the spiral trajectory of the barrel roll maneuver, which is not the real radius of the motion trajectory.
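The interpolation of the measured decoy intensity curve can be sketched as follows; the sample points are hypothetical placeholders, since only the 1000-2500 W/sr range and the roughly 2-3 sec burn time are reported (Figure 3):

```python
import numpy as np

# Hypothetical samples of the decoy's dynamic mid-wave radiation
# intensity (W/sr) versus time (s); the real curve comes from the
# sampling test described in the text.
t_samples = np.array([0.0, 0.3, 0.8, 1.5, 2.2, 3.0])
I_samples = np.array([1000.0, 2100.0, 2500.0, 2300.0, 1700.0, 1000.0])

def decoy_intensity(t):
    """Piecewise-linear interpolation of the measured intensity
    samples, as done for the temperature curve in the text."""
    return float(np.interp(t, t_samples, I_samples))
```

Values outside the sampled interval are clamped to the endpoint intensities by np.interp.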

Horizontal-S Maneuver Simulation.
Suppose that the target aircraft and the attacker are both at 8000 m height and fly in opposite directions at a speed of 200 m/s. The attacker launches the missile at a distance of 6000 m, and the target aircraft performs the horizontal-S maneuver at the same time [22]. The simulation results are shown in Figures 10 and 11.
Keeping the initial conditions invariant and repeating both maneuvers 2000 times, the hitting results are shown in Table 1. It is concluded from Table 1 that when the coming missile approaches from the front of the target aircraft, the aircraft should prefer the barrel roll maneuver to the snake (horizontal-S) maneuver to avoid the missile.
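The 2000-repetition experiments behind Table 1 are Monte Carlo tallies; a minimal harness might look like this, with run_engagement a placeholder for the full missile-aircraft engagement simulation:

```python
import random

def monte_carlo_hit_rate(run_engagement, n=2000, seed=1):
    """Repeat an engagement simulation n times and tally the hit
    rate, mirroring the 2000-run experiments of Table 1.
    run_engagement is any callable taking an RNG and returning True
    on a hit; here it stands in for the full simulation."""
    rng = random.Random(seed)
    hits = sum(run_engagement(rng) for _ in range(n))
    return hits / n
```

Running the harness once per maneuver (barrel roll versus horizontal-S) with identical initial conditions reproduces the comparison made in Table 1.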

Airborne IR Jamming Simulation.
It is concluded from the maneuver simulation that a barrel roll maneuver with small radius and large overload is more beneficial for the aircraft to avoid the missile. In the airborne IR jamming simulation of this section, the target performs a small-radius barrel roll maneuver or a small-radius snake maneuver while deploying the surface-type IR decoy at the proper time to interfere with the missile. Assume that the target aircraft performs a barrel roll maneuver of 125 m barrel roll radius and a horizontal-S maneuver of 500 m radius, respectively, that the missile launch time is time zero, and that the surface-type IR decoy is deployed after 0.5 sec; the other conditions are the same as in the maneuver simulation. The simulation results are shown in Figures 12 and 13.
The target aircraft deploys one and four surface-type IR decoys (with a time interval of 0.1 sec), respectively, upon receiving the airborne alarm signal, and both simulations are repeated 2000 times. The simulation results are shown in Table 2.

Maneuver Simulation Analysis.
It is concluded from the maneuver simulation that when the target aircraft performs the barrel roll maneuver to avoid the missile, the trajectory of the missile approximates the shape of the barrel roll, forcing the missile to fly with large overload all the time. When the target flies opposite to the missile, although the missile consumes much energy because of the high-overload barrel roll maneuver, the remaining fuel is enough for hitting the target. Therefore, the target aircraft can avoid the missile only by exploiting the maximum-overload limit of the missile so as to disappear from the missile's field of view. When the target aircraft performs a barrel roll maneuver of 200 m barrel roll radius (the real trajectory radius is about 800 m), the missile hits the target; when it performs a barrel roll maneuver of 125 m barrel roll radius (the real trajectory radius is about 500 m), the missile misses the target.
When the target aircraft performs a small-radius barrel roll maneuver, the missile must reduce its own radius to fly toward the target, which makes the missile fly with large overload; even so, the radius of the missile remains larger than that of the target. When the distance between the target and the missile is short, the missile has to reduce the radius of its barrel roll at the right time to hit the target [23]. The overload computed by the missile computer is then likely to exceed the maximum available overload, resulting in a miss.
For the horizontal-S maneuver, the probability of avoiding the missile is higher at 500 m radius than at 1000 m radius. Whatever the radius is, however, the effect of avoiding the missile is not good, as the hitting rate remains high.
When the target aircraft performs the horizontal-S maneuver to avoid the missile, the trajectory of the missile, while tracking the target, approximates the shape of the horizontal-S maneuver. Compared with the barrel roll maneuver, the overload of the horizontal-S maneuver is only along the horizontal direction, which means that the maximum radius in the horizontal direction is smaller. When the distance between the target and the missile is short, the missile only needs to make a sharp turn in the horizontal direction, then quickly point to the target and hit it.
It is concluded from the simulation that, as far as the effect of avoiding the missile is concerned, the horizontal-S maneuver of 500 m radius is similar to the barrel roll maneuver of 200 m radius. All in all, the smaller the snake-maneuver radius is, the more successful the avoidance of the missile. Consider, however, that the maximum overload the pilot can bear for a short time is about 10 g, which limits the minimum turning radius.
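The 10 g limit translates into a minimum turning radius through the level-turn approximation r = V^2 / (n_max g), a simplification of the barrel-roll geometry; at the 200 m/s flight speed used in the simulations this gives roughly 408 m, of the same order as the real trajectory radii quoted above.

```python
G = 9.81  # gravitational acceleration, m/s^2

def min_turn_radius(speed, n_max):
    """Minimum turning radius (m) when the centripetal acceleration
    is limited to n_max * g (level-turn simplification)."""
    return speed**2 / (n_max * G)

print(min_turn_radius(200.0, 10.0))  # about 408 m at 10 g and 200 m/s
```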
A smaller radius of the barrel roll maneuver is propitious to avoiding the missile, and the barrel roll maneuver of 125 m radius is the most effective.

Airborne IR Jamming Simulation Analysis.
It is concluded from the simulation that the effect of defense is best when the target aircraft performs a barrel roll maneuver of small radius (125 m to 150 m) while deploying multiple surface-type IR decoys [24].
The initial deployment speed of the surface-type IR decoy is much slower than the speed of the target aircraft, so the decoy flies approximately along the tangent line of the maneuver trajectory and its velocity decreases sharply. When the target aircraft deploys the decoy, the missile enters the antijamming state and memorizes the radiation characteristics of the target image from before that state. When the decoy separates from the target aircraft, owing to its slow speed, the separation position of the decoy on the focal plane of the seeker is close to the position of the target image before the antijamming state. At the same time the missile exits the antijamming state.
Firstly, the seeker chooses the target that is most similar to the radiation characteristics of the target image in its memory. The radiation characteristics of the surface-type IR decoy are similar to those of the target aircraft, which interferes with the missile. Moreover, the missile judges by distance and selects, for tracking, the point closest to the position of the true target memorized before entering the antijamming state. Because the speed of the surface-type IR decoy decreases sharply, its position stays near the position in the missile's memory. Therefore, the target aircraft performing a small-radius barrel roll maneuver accompanied by the decoy is more likely to avoid the missile.
The spread shape of the surface-type IR decoy is approximately an ellipse, whose outline differs from the aircraft, while the radiation intensity of the decoy is larger than that of the aircraft. Therefore, the missile can distinguish the aircraft from the surface-type IR decoy based on the outline and radiation intensity of the memorized target image, and the effect of deploying just one decoy is not obvious. However, when four decoys are deployed, some decoys are still in the launching state and their IR image coincides with the aircraft when the missile exits the antijamming state, so the missile takes the radiation characteristics of the coincident IR image into memory. When the IR image of the surface-type IR decoy separates from the target aircraft, the image characteristics of both are close to the memorized target image of the missile, which jams the missile seriously. The missile is then likely to track the surface-type IR decoy and miss the target [25].
So the best defensive strategy for the target aircraft is performing a small-radius barrel roll maneuver while deploying two to four decoys with a deployment time interval of 0.1 sec. The earlier the deployment time is, the better the jamming effect. Meanwhile the pilot should observe the trajectory of the missile all the time [26]. If the missile tracks the decoys instead of the target aircraft after the decoys are deployed, the target aircraft should exit the barrel roll maneuver and fly away from the sight direction of the missile.
If the missile still tracks the target aircraft, the target should deploy the decoys once again, in the same way as the first time, while the pilot observes the trajectory of the missile again.

Conclusions
Firstly, the models of the aircraft and the surface-type IR decoy are established on realistic flight and exhaust-system radiation data and therefore have high reliability. Secondly, the most effective maneuver for avoiding the missile when it comes from the front of the target aircraft is found by simulation. Finally, on the basis of the most effective maneuver, the aircraft deploys surface-type IR decoys at the same time, in search of the best strategy of defense.
The defense strategy acquired by simulation is of high reliability and fits real air combat, which is propitious to improving the survivability of the aircraft. The main conclusions can be drawn as follows: (1) When the missile comes from the front of the aircraft, the best strategy of defense is performing a small-radius barrel roll maneuver while deploying multiple decoys.
(2) When receiving the alarm signal from the alarm system, the pilot should deploy the surface-type IR decoys rapidly, and multiple decoys are better than one. The time interval between decoys should not be large; 0.1 sec is suitable.
(3) The pilot should observe the direction of the coming missile while performing the maneuver. After deploying one set of decoys, if the missile tracks the decoys, the target aircraft exits the barrel roll maneuver and flies away from the sight direction of the missile. Otherwise the target aircraft should deploy another set of decoys, with a time interval similar to the first time, while the pilot observes the trajectory of the missile again until the jamming succeeds.
However, the decoy model established here is relatively simple, leaving some gap from reality, and the jamming effect on the missile of multiple decoys with different deployment times and deployment intervals is not analysed. These will be studied in further research.