Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique finds the network weights and biases directly by solving a system of linear equations, without the need for training. Nonlinear flight dynamics systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses knowledge of the linear system to speed up the training process. The technique is tested on different flight/space dynamics models and shows promising results.


Introduction
An artificial neural network (ANN), usually abbreviated as neural network (NN), is a mathematical model inspired by the structure of the brain's biological neural networks. A neural network consists of an interconnected group of elements called neurons, arranged in different layers, and it processes information by propagating data through these elements. Neural networks use "weights" to change the parameters of the connections to the neurons. These weights are calculated through a learning process on the available data. Simply speaking, neural networks are nonlinear statistical data modeling tools used to model complex relationships between inputs and outputs. Two challenges face the neural network designer: the selection of the network topology (i.e., the number of layers and the number of neurons) and the initialization of the weights. It is well recognized that a reliable way to decide an appropriate architecture and assign initial values to start neural network training has yet to be established. A common criticism of neural networks stems from these two challenges, since a wrong selection of topology or of initial weight values leads to slow operation [1][2][3][4][5].
A lot of work has already been devoted to the application of neural networks (NNs) in dynamics and control, both with respect to controller design and to system modeling and identification [6][7][8][9][10]. Modeling dynamic systems or controllers by neural networks consumes much of the network designer's time in selecting the appropriate structure and in training the network, and it has to be done offline. The network size also affects the execution time and has a high impact on network design and selection, especially for real-time problems.
The network design technique given in this paper is systematic, relieving the network designer from the intensive task of finding an appropriate structure for the neural model. For linear systems, based on the work done by Kaiser [11] and Kassem and Sameh [12], the network weights and biases are found analytically, so that the neural network can work directly without the need for training.
Nonlinear systems can be modeled by training their linearized models about a specific operating point, without the need to change the size or the structure of the network. The training is fast and can be done online, as it uses knowledge of the linear system to speed up the training process [12].

International Journal of Aerospace Engineering
A modified version of the time-delay neural network is presented in this work; combined with the analytical modeling, it can quickly and accurately model linear and nonlinear dynamic systems. The technique is tested on different flight and space dynamics models and shows good agreement with the linear and nonlinear equations.

NN Analytical Model with One Hidden Neuron
This section presents the mathematical equations for the analytic construction of a neural network model consisting of an input neuron, a hidden neuron, and an output neuron (Figure 1). Assuming that y = ax + b is to be approximated for x ∈ [x0, x1], x1 > x0, within an error limit ε, ε > 0, select z0, z1 on the sigmoid horizontal axis with a constant q ∈ R. In other words, this step selects the interval on the horizontal axis of the sigmoid where the sigmoid function can be seen as linear.
Define a function f, f : [x0, x1] → [z0, z1], as in (1). Finally, further modifications (see Kaiser [11] for details) yield the network parameters. Taking p2 as the weight of the corresponding input unit, p1 as the bias of the hidden neuron, w as the weight of the link between the hidden neuron and the output unit, and w0 as the bias of the output unit results in the desired approximation. This analytic approximation forms the base for the algorithm that will be used to model the dynamics of linear systems. Figure 1 shows the model used in this paper for approximating the linear function.
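The construction described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes the standard logistic sigmoid, uses the small-interval approximation sigmoid(z) ≈ 1/2 + z/4 near z = 0, and the function names and the choice q = 0.05 are ours.

```python
import math

def analytic_linear_neuron(a, b, x0, x1, q=0.05):
    """Weights and biases of a one-hidden-neuron NN approximating y = a*x + b
    on [x0, x1]. q sets the sigmoid interval [-q, q] treated as linear."""
    p2 = 2.0 * q / (x1 - x0)          # input weight: f(x) = p2*x + p1
    p1 = -q - p2 * x0                 # hidden-neuron bias, so that f(x0) = -q
    # Near z = 0, sigmoid(z) ~ 1/2 + z/4, so choose w, w0 to recover a*x + b.
    w = 4.0 * a / p2                  # hidden-to-output weight
    w0 = b - w / 2.0 - w * p1 / 4.0   # output bias
    return p2, p1, w, w0

def nn_output(x, p2, p1, w, w0):
    """Network output: scaled sigmoid of the affine-mapped input."""
    return w / (1.0 + math.exp(-(p2 * x + p1))) + w0

# Approximate y = 2x + 1 on [0, 1]; the error stays small because the
# sigmoid is only exercised in its near-linear region around zero.
params = analytic_linear_neuron(a=2.0, b=1.0, x0=0.0, x1=1.0)
err = max(abs(nn_output(i / 100, *params) - (2.0 * i / 100 + 1.0))
          for i in range(101))
```

Shrinking q reduces the residual cubic error of the sigmoid at the cost of larger weights, which is the trade-off behind the error limit ε in the construction.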

Modified Time-Delay Neural Networks
Time-delay neural networks (TDNNs), introduced by Waibel et al. [13], form a class of neural networks with a special topology. They are used for position-independent recognition of features within a larger pattern. In order to recognize patterns in a place- or time-invariant manner, older activation and connection values of the feature units have to be stored. This is performed by making a copy of the feature units, with all their outgoing connections, in each time step before updating the original units. The total number of time steps saved by this procedure is called the delay.
In this work, the TDNN is modified to fit exactly the general difference equation for a linear nth-order dynamic system with constant coefficients [14]:

y(n + k) + a_{n-1} y(n + k - 1) + ... + a_0 y(k) = f(k),

where y(i), i = k, k + 1, ..., k + n, denotes the discrete variable y at the ith instant and f(k) denotes the input value, which could itself be given by another difference equation.
The output y(n + k) is separated on one side of the equation, while its delayed values and the input are moved to the other side:

y(n + k) = f(k) - a_{n-1} y(n + k - 1) - ... - a_0 y(k).

Now, the output value is just a summation of linear relations of the form y = ax, each of which can be modeled analytically as an NN with one hidden neuron. This way, the one-hidden-neuron NN is used as a building block, and the complete network is built systematically as shown in Figure 2.
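This building-block idea can be sketched as follows (our illustration with a hypothetical second-order difference equation and made-up coefficients, not a model from the paper): each right-hand-side term c·y(·) or c·f(k) becomes one one-hidden-neuron block, and a summing junction adds the block outputs.

```python
import math

def linear_block(a, lo, hi, q=0.05):
    """One-hidden-neuron block approximating y = a*x on [lo, hi],
    with the sigmoid used in its near-linear region as in Section 2."""
    p2 = 2.0 * q / (hi - lo)
    p1 = -q - p2 * lo
    w = 4.0 * a / p2
    w0 = -w / 2.0 - w * p1 / 4.0
    return lambda x: w / (1.0 + math.exp(-(p2 * x + p1))) + w0

# Hypothetical 2nd-order equation: y(k+2) = 1.5*y(k+1) - 0.6*y(k) + 0.1*f(k).
# One block per right-hand-side term, each valid on the range [-5, 5].
blocks = [linear_block(1.5, -5, 5), linear_block(-0.6, -5, 5),
          linear_block(0.1, -5, 5)]

def step(y_k1, y_k, f_k):
    """Modified-TDNN output: the delayed outputs and the input feed
    parallel one-hidden-neuron blocks joined by a summing junction."""
    return blocks[0](y_k1) + blocks[1](y_k) + blocks[2](f_k)

# Compare one step of the NN against the exact difference equation.
y2_nn = step(1.0, 0.5, 1.0)
y2_exact = 1.5 * 1.0 - 0.6 * 0.5 + 0.1 * 1.0
```

The delayed values would normally come from the network's own earlier outputs, which is what the time-delay copies in the TDNN store.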

The Algorithm
First, the nonlinear system is linearized about an operating point (trim condition) using any linearization technique, such as Taylor series or small-disturbance theory. Second, an analytic NN model is built using this linear model after discretization. Finally, the model is trained with the nonlinear model data points, with initial values for the weights and biases taken from the linear analytic NN model. The detailed modeling algorithm is shown in Algorithm 1.
For linear systems, only steps 2, 3, and 4 apply, as there is no linearization and there is no need for training.

Test Cases
Three test cases are given in this section illustrating the application of the proposed modeling approach.

Figure 2: NN for modeling the difference equation (one-hidden-neuron NN building blocks combined through a summing junction).
(1) Start with the nonlinear model in the form ẋ(t) = f(x, u, t), where x is the state vector and u is the input vector.
(2) Linearize the nonlinear model around an operating point and represent it in state-space form as ẋ(t) = Ax + Bu, where A and B are the system and input matrices, respectively.
(3) Discretize the continuous linearized system using finite differences; the equation becomes x_{i+1} = (I + AΔt)x_i + BΔt u_i, where I is the identity matrix and Δt is the time step.
(4) Build an analytic NN to replicate the linear model using the proposed linear NN given in Section 2 and Figure 2.

The first two test cases, in Sections 5.1 and 5.2, show the results of modeling unstable linear dynamics without the details of the proposed algorithm. The third, nonlinear test case, presented in Section 5.3, is accompanied by more details.
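Steps 2 and 3 can be sketched in code. This is our illustration with a placeholder second-order system, not one of the paper's test models; it applies the forward-difference rule x_{i+1} = (I + AΔt)x_i + BΔt u_i and marches the discrete model forward, which is exactly the recursion the analytic NN of step 4 reproduces.

```python
# Placeholder linearized system x' = A x + B u (illustrative values only).
A = [[0.0, 1.0],
     [-2.0, -3.0]]
B = [0.0, 1.0]
dt = 0.004

# Discrete-time matrices Ad = I + A*dt and Bd = B*dt (step 3).
Ad = [[(1.0 if i == j else 0.0) + A[i][j] * dt for j in range(2)]
      for i in range(2)]
Bd = [b * dt for b in B]

def step(x, u):
    """One step of the discretized system; each nonzero entry of Ad and Bd
    maps to one one-hidden-neuron block in the analytic NN of step 4."""
    return [sum(Ad[i][j] * x[j] for j in range(2)) + Bd[i] * u
            for i in range(2)]

x = [1.0, 0.0]
for _ in range(1000):     # 4 s of simulated time at dt = 0.004 s
    x = step(x, 1.0)      # unit step input
# For this stable example, x approaches the steady state [0.5, 0].
```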

Helicopter Longitudinal Dynamics.
The state-space linear model of the longitudinal motion of a helicopter is given in [15], where q, θ, and v are the pitch rate, the pitch angle of the fuselage, and the horizontal forward velocity, respectively, and the control input δ is the rotor tilt angle. The system is unstable, with both zeros and poles in the right half plane; for example, the system poles are p_{1,2} = 0.3776 ± 0.3445j and p_3 = -0.375. Using the linear algorithm, the NN fits the system's three states very accurately for a unit step δ input, as shown in Figure 3.

F-16 Longitudinal Dynamics.
The equations for the longitudinal motion of the F-16 fighter are considered for a trimmed flight condition with U = 502 ft/s, α = 0.0393 rad, θ = 0.0393 rad, and q = 0 rad/s, where U, α, θ, and q are the forward velocity, angle of attack, pitch angle, and pitch rate, respectively. The single control input is the elevator deflection δ_E in degrees. The state equation in matrix form is given in [16]. Using the linear algorithm, the NN fits the system's four states very accurately for a unit step input δ_E, as shown in Figure 4.

Generic Fighter Nonlinear Flight Dynamics.
Consider the flight dynamics model of a high-performance aircraft for the short-period mode [17][18][19], where the motion involves rapid changes in the angle of attack and pitch attitude at roughly constant airspeed. The model covers both linear and nonlinear flight behavior over an extensive range of angle of attack. The model is mathematically described as [18]

α̇ = q + 9.168 C_z - 1.834(δ_e + 7) + 7.362,

where the aerodynamic coefficient C_z is a nonlinear function of the angle of attack given in [18]. It is obvious that a single linear model cannot represent this nonlinear behavior, so this example shows the strength of the proposed algorithm: it starts with a single linear NN model, trains it to capture the nonlinear behavior, and then adds a second model at a different operating point to improve the accuracy. The linearized model for α < 14.36 deg is given in (12), and the system is discretized using Δt = 0.004 s. Using the linear algorithm, the NN fits the system's two states very accurately for the unit step input δ_E, as shown in Figure 5. This assumes that the angle of attack does not exceed 14.6 deg. The difference when using the linear NN model to predict the behavior of the nonlinear system once the angle of attack exceeds 14.6 deg can be seen in Figure 6; therefore, the NN needs some training. A step input of δ_E = -11 applied for five seconds is used to train the network, and a fifteen-second step input with the same value (δ_E = -11) is used for testing; the results are shown in Figure 7. The initial weights and biases were taken from the linear NN to speed up convergence.
When the NN learning is initiated with zero or random weights and biases, it has been noticed that it either does not converge to the correct values or, in the best case, converges after about five times the number of iterations required when the linear NN weights and biases are used.
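The effect of initialization can be illustrated with a small experiment. This is our sketch, not the paper's model: a scalar target y = 2x + 0.3 sin(3x), one sigmoid neuron, and plain batch gradient descent. Starting from the analytic linear weights for y = 2x, training only has to learn the small nonlinear residual.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(params, data):
    p2, p1, w, w0 = params
    return sum((w * sigmoid(p2 * x + p1) + w0 - t) ** 2
               for x, t in data) / len(data)

def train(params, data, lr=0.02, iters=500):
    """Batch gradient descent on the model w*sigmoid(p2*x + p1) + w0."""
    p2, p1, w, w0 = params
    n = len(data)
    for _ in range(iters):
        gp2 = gp1 = gw = gw0 = 0.0
        for x, t in data:
            s = sigmoid(p2 * x + p1)
            e = w * s + w0 - t          # prediction error
            gw0 += e
            gw += e * s
            g = e * w * s * (1.0 - s)   # back-propagated hidden gradient
            gp1 += g
            gp2 += g * x
        p2 -= lr * gp2 / n
        p1 -= lr * gp1 / n
        w -= lr * gw / n
        w0 -= lr * gw0 / n
    return p2, p1, w, w0

# Mildly nonlinear target on [0, 1]; its linear part is roughly y = 2x.
data = [(i / 20, 2 * (i / 20) + 0.3 * math.sin(3 * i / 20))
        for i in range(21)]

# Analytic initialization for the linear part y = 2x (Section 2 construction).
q = 0.5
p2 = 2 * q; p1 = -q; w = 4 * 2.0 / p2; w0 = -w / 2 - w * p1 / 4
linear_init = (p2, p1, w, w0)

trained = train(linear_init, data)
```

Because the linear initialization already places the network close to the target, the descent starts from a small loss and only refines the residual, which is the mechanism behind the reported speed-up.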
It is clear that the NN works relatively well given its size (4 sigmoid elements). The model can be improved by adding another model at a different operating point. The same procedure is used to produce the NN model for this linear system. Adding the two NN models, we get an NN of double the size (8 sigmoid elements). After training the concatenated model, we obtain an improved model that produces much better results, as shown in Figure 8. This improved model captures the limit cycle very well, so there is no need to add another NN model in the region between 14.36 and 19.6 degrees.

Satellite Attitude Dynamics

This final case study shows a space-related problem and also illustrates the effectiveness of the proposed NN for systems with a sparse dynamics matrix A. In this case study, the linearized attitude dynamics of a symmetric satellite, under the small-angle and gravity-gradient torque approximations, are given by [20] ẋ = Ax + Bu (16). The satellite altitude is 700 km, its moments of inertia are I_x = 80, I_y = 82, and I_z = 9 kg·m², and the initial attitude is θ(0) = 5°, φ(0) = 5°, and ψ(0) = 5°. Figure 9 shows the initial-condition response of the satellite using the state-space and NN models. The NN model is built by direct substitution, without the need to train the network, and the results show a good match. The NN model is also very efficient, as it uses only eight one-hidden-neuron NNs for matrix A, representing only the nonzero elements.
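The sparsity argument can be made concrete with a sketch (our placeholder sparse matrix, not the satellite's actual A): only the nonzero couplings of the system need a one-hidden-neuron block, so the network size tracks the number of nonzero entries rather than n².

```python
# Placeholder sparse 4x4 dynamics matrix (illustrative values only): each
# nonzero entry is one y = a*x coupling, hence one one-hidden-neuron block.
A = [
    [0.0,  1.0,  0.0, 0.0],
    [-1.2, 0.0,  0.0, 0.3],
    [0.0,  0.0,  0.0, 1.0],
    [0.0, -0.3, -2.1, 0.0],
]

n_blocks = sum(1 for row in A for a in row if a != 0.0)  # blocks needed
dense_blocks = len(A) * len(A[0])                        # blocks if dense
```

For this example, 6 blocks replace the 16 that a dense matrix would require, mirroring the eight-block network reported for the satellite's A matrix.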

Conclusions
The procedure and the test cases described in this paper show that the linear neural network based on the analytic one-hidden-neuron NN is well suited as a starting point for fast-learning nonlinear networks. They also show that the modified time-delay neural network combined with the analytic network forms a very efficient algorithm for approximating flight dynamics equations. The presented algorithm relieves the network designer from the work-intensive task of finding a suitable structure for the neural network, given that the designer has at least some qualitative knowledge about the modeling task. It also provides a systematic way of increasing the size of the neural network to improve the accuracy without the need for guessing. The algorithm guides the learning process and helps the network escape local minima by using an initial guess from the linear model. This illustrates the importance and usefulness of this algorithm in the field of neural network modeling of nonlinear flight dynamics.

Figure 1 :
Figure 1: The model used for approximating the function y = ax + b.

(5) Train the linear analytic NN with the nonlinear model data, with initial weights and biases taken from the linear analytic NN.
(6) If the final NN model is not satisfactory, increment the model size by adding another linear model at a different operating point, repeat steps 2 to 5, and concatenate the two models to get a better approximation. The number of final concatenated models depends on the required accuracy and the acceptable execution time.
(7) Stop.

Algorithm 1: NN algorithm for modeling nonlinear systems.

Figure 5 :
Figure 5: Linear NN model versus state-space model for fighter linearized dynamics.

Figure 6 :
Figure 6: NN trained model versus nonlinear models for fighter dynamics.