
In this paper, an advanced method for correcting the navigation system of a spacecraft using an error prediction model of the system is proposed. Measuring complexes are applied to determine the motion parameters of the spacecraft, and the signals from multiple measurement systems are processed jointly. Under interference conditions in flight, when the signals of an external system (such as GPS) disappear, the navigation system is corrected in autonomous mode using the error prediction model. To build the prediction model, a modified Volterra neural network based on the self-organization algorithm is proposed; the modification speeds up the neural network. Three approaches for accelerating the neural network are developed, and the speeds of sequential and parallel implementations of the system using the improved algorithm are compared. In addition, a simulation of a spacecraft returning to the atmosphere is performed to verify the effectiveness of the proposed algorithm for the correction of the navigation system.

The autonomous navigation system of a spacecraft is used to control the spacecraft without relying on ground-based support and to determine its position, velocity, and attitude in real time using measurement equipment on board. As a core element of space engineering, the navigation system mainly provides information during orbit entry, reentry, orbit changes, and large altitude maneuvers, and its performance depends significantly on the data processing ability of the system algorithm [

Neural networks consist of a large number of interconnected processing elements which are called neurons, operating as microprocessors [

The neural network has indeed been applied to the adaptive control of aircraft in recent years. Wang Qing et al. proposed an antiwindup adaptive control method for aircraft based on a neural network and pseudocontrol hedging, addressing the inability of conventional adaptive control to handle an actuator with magnitude and rate saturation [

The structure of this paper is presented as follows. An algorithm of building a prediction model of compensation for autonomous INS errors is developed in Section

Generally, different approaches to prediction differ in the amount of a priori information about the object under study that is necessary for the prediction. For an autonomous INS functioning over a long period (more than 6 hours), it is not possible to correct the INS from external devices and systems.

The main task considered herein is to compensate for the errors of the autonomous INS using only internal information. It is also assumed that the autonomous operation mode of the INS is preceded by a period of system operation in the correction mode with the satellite system. The structure diagram of the INS with the algorithm of building a prediction model (APM) when external sensors are disconnected is shown in Figure

Structure diagram of the INS with the application of APM when external sensors are disconnected.

In addition, dynamic objects usually move in space along different trajectories to perform their tasks effectively. When designing control systems for dynamic objects operating in an actively counteracting environment, it is, as a rule, possible not only to perform various maneuvers but also to control the object on the basis of a prediction of its state.

In practical applications, predicting the state of a maneuvering object using a priori mathematical models is neither feasible nor reliable. When a dynamic object functions under stochastic conditions, the amount of a priori information about the object is usually minimal. Therefore, it is advisable to use the self-organization approach for extrapolation.

The self-organization algorithm allows building a mathematical model without an a priori specification of the rules governing the object. The developer of the mathematical model sets an ensemble of selection criteria (self-organization criteria), after which the mathematical model of optimal complexity is selected automatically. Furthermore, the self-organization algorithm is assumed to be implemented on board the dynamic object. Such algorithms are typically subject to fairly strict requirements on speed, compactness, and ease of implementation in a computer. These requirements are especially important when predicting the state of highly maneuverable dynamic objects.

The principle of the self-organization algorithm for models is formulated as follows: with a gradual increase in the complexity of models, the value of the internal criteria (in the presence of noise) decreases monotonically, whereas, under the same conditions, the values of the external criteria pass through minima (extremums), making it possible to determine the model of optimal complexity, which is unique for each external criterion.

For the self-organization method, the following three conditions must be met:

An initial organization (a set of support functions).

A mechanism for random changes (mutations) of this organization (a set of models-applicants).

A selection mechanism by which these mutations can be evaluated in terms of their usefulness for improving the organization (self-organization algorithm).

To a large extent, the success of self-organization modeling depends on the choice of the class of reference functions. If the class of reference functions is such that the structure of the object cannot be recovered from a combination of particular models, the approximation problem is still solved; however, the result is often suitable only for prediction and not for object identification, since it is not a physical model of the object. The task of selecting a description is solvable if the class of reference functions is chosen to be sufficiently general. The available a priori information allows us to restrict attention to a few types of reference functions and the model structures derived from them.

In the self-organization model, such reference functions as power polynomials, trigonometric functions, and exponential functions can be applied. If several types at the same time are included in the system of reference functions, then mixed functions containing the sum or product of power polynomials and exponential functions can be obtained.

According to Gödel's principle of the external complement, it is necessary to choose a criterion for selecting the model of optimal complexity. To solve the problem, the data table is divided into two parts, A and B. Part A is a training sample and part B is a test sample. Sometimes a third part C, an examination sample, is also separated; it is used to evaluate various models and may also serve to select the optimal division into training and verification sequences. With such a partition, candidate models are fitted on the training sequence, and the one or two best functions are selected by testing the criterion.
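The data division and the behavior of the criteria described above can be sketched in a few lines. In the sketch below, polynomial degree stands in for "model complexity," and an interleaved split plays the role of parts A and B; both choices are illustrative assumptions, not the paper's exact reference functions or partition.

```python
# Sketch: fit candidate models of growing complexity on training part A,
# judge them on test part B (the external criterion), and pick the model
# at the minimum of the external criterion.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 60)
x = t**3 - 2.0 * t + 0.05 * rng.standard_normal(t.size)  # noisy cubic

A, B = slice(0, None, 2), slice(1, None, 2)   # interleaved split into A and B

internal, external = [], []
for degree in range(1, 9):
    coeffs = np.polyfit(t[A], x[A], degree)    # fit on training part A
    internal.append(np.mean((np.polyval(coeffs, t[A]) - x[A]) ** 2))
    external.append(np.mean((np.polyval(coeffs, t[B]) - x[B]) ** 2))

best_degree = 1 + int(np.argmin(external))     # minimum of the external criterion
print(best_degree)
```

The training (internal) error shrinks monotonically with complexity, while the test (external) error passes through a minimum, exactly as the selection principle states.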

The following criteria are most commonly used.

One of the criteria is as follows:

When the method of self-organization is used, the predictive model is written as a linear combination of basis functions φ_n drawn from the parametrized set F_p.

As the model of optimal complexity, the one with the smaller number of arguments and a simpler reference function is chosen.

The criterion is defined as

The criterion of model simplification helps to significantly simplify the implementation of the self-organization algorithm in the special on-board computer of the spacecraft. To reduce computational costs and obtain compact models, an original model-simplification criterion is included in the ensemble of selection criteria; among models with similar values of the ensemble criteria, it favors the more compact one. Using the constructed nonlinear model, the state of the object (the INS errors) is predicted in the autonomous mode, i.e., in the absence of measurements from external sources.
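The model-simplification idea above can be sketched as a tie-breaking rule: among candidates whose selection-criterion values are within a tolerance of the best one, prefer the model with the fewest terms. The function name, the tolerance, and the candidate values below are illustrative assumptions.

```python
# Sketch of the model-simplification criterion: near-optimal models are
# collected first, and the most compact of them is returned.
def select_compact(models, tol=0.05):
    """models: list of (criterion_value, n_terms, model_id) tuples."""
    best = min(value for value, _, _ in models)
    # keep models whose criterion is within `tol` of the best value
    near = [m for m in models if m[0] <= best * (1.0 + tol)]
    # among those, prefer the model with the fewest terms
    return min(near, key=lambda m: m[1])

candidates = [(0.102, 7, "full"), (0.104, 3, "reduced"), (0.150, 2, "tiny")]
print(select_compact(candidates))
```

Here the 3-term model wins: its criterion value is within tolerance of the best, while the 2-term model is excluded because its criterion is too poor.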

To predict the state of the object under study, a mathematical model should be formed that contains all the necessary information about its parameters and about how its state changes during a given period of time. In particular, if a sensor reading is taken at certain (not necessarily equal) intervals, the measurement results can be written down as pairs (t_i, x_i), where t_i < t_j when i < j.

The essence of forecasting is to build a model (or select one from a set) that best meets the specified criteria and then to calculate its values at the future points t_n. The process of building such a model can be formally divided into separate stages; the first stage is to define the parametrized class of models in which the search is performed. Examples include methods that search for a function belonging to a selected set and depending on a certain parameter vector, and the method of sequential identification described below. Methods based on building impulse responses (weight functions) are also widely used, and most of these methods rely heavily on the theory of statistics and random processes.

Here we introduce the criterion for identifying basis models, where φ_i is a basis function from F_p and (t_k, x_k) are the sample points:

The identification of the first basis model is the process of minimizing the identification criterion for the first basis function in frequency and amplitude:

After the random search, it is assumed that the point found lies in a unimodal vicinity of the global minimum, after which it is refined by a gradient method. When solving the identification problem, a model of the form

It is also noticed that if a linear function minimized the criterion in the first step, the further process can no longer change the overall trend, while a completely different situation is observed when selective algorithms are applied; for example, the curve obtained for the same sample using the proposed method does not have a dominant linear trend. Self-organization algorithms are multirow algorithms based on the selection hypothesis, which states that models that do not pass the self-selection threshold (if the corresponding criterion is chosen optimally) do not participate in the formation of the best models in the next row.

Assume that the first selection series consists of

In each new row, models are built as a linear combination of two different models from the previous row and a constant. Thus, combinations of the following type are formed, where x_i are the sample values and the model values are computed at the points t_i. A description of the methods for dividing the original sample can be found in the literature. Suppose that
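The multirow selection described above can be sketched as follows: each row forms linear combinations a·m1 + b·m2 + c of pairs of models from the previous row, fits a, b, c on the training sample, and passes the combinations with the smallest test-sample error on to the next row. The data, the first-row candidate models, and the row width are illustrative assumptions.

```python
# Minimal multirow (GMDH-style) selection sketch with a train/test split.
import itertools
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 80)
y = np.sin(3.0 * t) + 0.02 * rng.standard_normal(t.size)
train, test = slice(0, None, 2), slice(1, None, 2)

# first row: simple candidate models evaluated on the grid
row = [t, t**2, np.cos(t), np.ones_like(t)]
best_history = []
for _ in range(3):                              # three selection rows
    scored = []
    for m1, m2 in itertools.combinations(range(len(row)), 2):
        X = np.column_stack([row[m1], row[m2], np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        model = X @ coef                        # a*m1 + b*m2 + c on the grid
        scored.append((np.mean((model[test] - y[test]) ** 2), model))
    scored.sort(key=lambda s: s[0])             # rank by test-sample error
    row = [m for _, m in scored[:4]]            # pass the best models on
    best_history.append(scored[0][0])
print(best_history)
```

The recorded best test error drops quickly because combining simple models reproduces richer structures (e.g., the pair t and t² already yields a full quadratic).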

In practice, usually none of the above criteria is used alone; instead, a so-called ensemble of criteria is formed. In many problems, ensembles of the following type have proved themselves well, where each coefficient is the weight of the relevant criterion:

Applying this type of criterion selection allows changing the weights of its individual components during the operation of the algorithm and performing corrections level by level in the process of work.
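A weighted ensemble of criteria can be sketched in one function. The criterion names and weight values below are illustrative assumptions, not those of the paper.

```python
# Sketch: the overall score of a model is a weighted sum of its
# individual selection criteria; lower is better for every criterion.
def ensemble_score(criteria, weights):
    return sum(weights[name] * value for name, value in criteria.items())

weights = {"regularity": 0.6, "unbiasedness": 0.3, "simplicity": 0.1}
model_a = {"regularity": 0.10, "unbiasedness": 0.20, "simplicity": 0.50}
model_b = {"regularity": 0.12, "unbiasedness": 0.15, "simplicity": 0.20}
print(ensemble_score(model_a, weights), ensemble_score(model_b, weights))
```

Changing the weights between levels of the algorithm reorders the models without recomputing the individual criteria.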

The first step of the algorithm consists in identifying the basis functions by the corresponding criterion, where the subscripts 0 and 1 denote the model numbers from the previous level and ω_n is the frequency,

It is obvious that there are four variables for each pair of models; thus, to find the final model, we need to solve the following problem for the coefficients, since they enter linearly:

As a result, we obtain the coefficients for models of (

In this section, an algorithm for building a dynamic object model is developed; it can adequately set the initial values of the weight coefficients of a neural network, which significantly accelerates the learning process. An algorithm for optimizing the structure of the Volterra network is also considered.

The control of various dynamic objects usually involves the use of their mathematical models. When the model of a dynamic object is a priori unknown, it has to be built, for example using a neural network. Neural networks allow building models of the investigated objects with sufficiently high accuracy, but they require a long time for the learning process. When synthesizing control systems for dynamic objects, especially various aircraft, the time for model building is limited. Therefore, the task of accelerating the work of a neural network is extremely important.

The main task of building and training a neural network in the case under study is the approximation of a function. Based on a training sample of input data and function values, the weights of the neural network must be determined so that the output of the network on each vector of input variables is as close as possible to the specified function value (training value) for that vector.

In the process of implementing the neural network training, the following procedures are performed in turn for all input vectors:

After that, the stopping condition of the algorithm is checked, i.e., how much the performance of the neural network deviates from the target values. If the condition has not yet been fulfilled, the algorithm returns to the second step. If the deviation from the original sample satisfies the conditions specified in the algorithm a priori, the neural network is considered trained.
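The training procedure with an a priori stopping condition can be sketched as a simple gradient-descent loop. The model (linear in its weights), the learning rate, and the threshold are illustrative assumptions.

```python
# Sketch: repeated passes over the sample, gradient updates of the
# weights, and a stop once the deviation falls below an a priori threshold.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3))
w_true = np.array([1.5, -0.7, 0.3])
y = X @ w_true                        # training values

w = np.zeros(3)                       # initial weights
lr, threshold = 0.05, 1e-6
for epoch in range(10_000):
    err = X @ w - y
    loss = np.mean(err ** 2)
    if loss < threshold:              # a priori stopping condition
        break
    w -= lr * 2.0 * X.T @ err / len(y)   # gradient step on the MSE
print(epoch, loss)
```

With a well-conditioned sample the loop stops long before the iteration limit, and the recovered weights match the generating ones.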

The method of self-organization is very similar to the neural network, but it is not the same. The method of self-organization determines the connection weights using Gaussian normalization, and for each combination of functions a model of the following form is constructed:

In the transition from one step to the next, several best models are selected (in accordance with the Gabor principle). The combination continues as long as the error on the test sample decreases. After the algorithm is completed, it is necessary to go through all the steps in reverse order and determine the weights of the basis functions.

Thus, although the method of self-organization has the same structure as the neural network, it operates in a completely different way: the former is based on the Gaussian normalization method and the selection of the best results, while the neural network is based on back propagation and gradient descent. The main disadvantage of the neural network is the random selection of the initial weight values, which leads to long network training. From this aspect, the main task was to combine the advantages of the method of self-organization in speed of work and of the neural network in building a model of better approximation.

It is proposed first to search for an approximate minimum of the error using the self-organization method, then to initialize the connection weights of the neural network with the values obtained from the self-organization method, and finally to find a more accurate approximation by neural network training. At the first stage, it is necessary to find a suitable network structure, one that can easily be matched with the method of self-organization among all types of networks:

If a function is applied to the sum of the products of the values of the elements of the previous step and the connection weights, it becomes difficult to initialize the connection weights with values from the self-organization method. Similarly, it is difficult to distribute the weights if a chain of elements has several links with different weights.

The method of self-organization provides one weight for each basis function, and it is not possible to divide these weights into components. A type of neural network that has a suitable structure for combination with the method of self-organization is the Volterra network. This neural network allows using the result of the method of self-organization as a starting point for learning. Accordingly, the weight coefficients of a function can be defined in the following form:

Input and output signals of Volterra network.
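The combination just described can be sketched as a second-order Volterra polynomial whose weights are taken directly from coefficients obtained by the self-organization method, one weight per basis function. The second-order truncation and the coefficient values are illustrative assumptions.

```python
# Sketch: evaluate a second-order Volterra polynomial over inputs x,
# with weights initialised from self-organization coefficients a_i.
from itertools import combinations_with_replacement

def volterra_output(x, weights):
    """Second-order Volterra polynomial over inputs x[0..n-1]."""
    terms = [()]                                        # the constant term
    terms += list(combinations_with_replacement(range(len(x)), 1))
    terms += list(combinations_with_replacement(range(len(x)), 2))
    y = 0.0
    for idx, w in zip(terms, weights):
        prod = 1.0
        for i in idx:
            prod *= x[i]                                # product of inputs
        y += w * prod
    return y

# weights = [constant, w0, w1, w00, w01, w11] taken from self-organization
weights = [0.5, 1.0, -2.0, 0.0, 3.0, 0.0]
print(volterra_output([2.0, 1.0], weights))
```

Because every basis function carries exactly one weight, the assignment from the self-organization result to the network is a direct copy.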

If we expand the brackets in (

Examples are given for the value

Volterra’s neural network (for

To avoid repetition of the basis functions, the following method is used: the product is ordered by the indices of the participating signals

Rules for constructing non-repeating combinations of indices for a Volterra network.

| 1 | 2 | 3 |
|---|---|---|
| 0 | 00 | 000 |
| 1 | 01; 11 | 001 |
| 2 | 02; 12; 22 | 002 |
| 3 | 03; 13; 23; 33 | 003 |
|   |   | 011; 111 |
|   |   | 012; 112 |
|   |   | 013; 113 |
|   |   | 022; 122; 222 |
|   |   | 023; 123; 223 |
|   |   | 033; 133; 233; 333 |

The table discloses combinations for
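The non-repeating index combinations of the table correspond to combinations with repetition taken in non-decreasing order. The sketch below generates them and compares their count with the full product count n**k that an unreduced Volterra layer would require (n = 4 signals, matching the table's indices 0..3).

```python
# Sketch: non-repeating Volterra index combinations via
# combinations_with_replacement, compared with the full product count.
from itertools import combinations_with_replacement

n = 4
for order in (1, 2, 3):
    reduced = list(combinations_with_replacement(range(n), order))
    print(order, len(reduced), n ** order)   # reduced count vs full count
```

For n = 4 this gives 4, 10, and 20 reduced terms against 4, 16, and 64 full products, matching the 10 and 20 entries listed in columns 2 and 3 of the table.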

Thus, in order to use the self-organization method to accelerate the operation of the Volterra network, the basis functions must be selected in a special way. With the basis functions of the self-organization method defined in this way, coefficients a_i will be obtained, which then need to be assigned to the weight factors

The structure of Volterra’s network without the use of repetitive products is presented in Figure

The reduced Volterra network.

It should be noted that an additional element, a constant, is introduced into the network structure; this does not contradict the developed theory and at the same time gives full correspondence with the method of self-organization.

This reduction of the network structure leads to a sharp decrease in the number of input elements of the network. For the case

Based on the training sample, it was possible to build a mathematical model (the output of the network) with the help of the reduced neural network. It can be seen that the accuracy of the model built by the reduced neural network is almost identical to that of the model produced by an ordinary neural network, while the speed of the reduced neural network is significantly higher.

Thus, an algorithm for building a mathematical model based on a neural network has been developed. To accelerate its work, it is proposed to determine the coefficients of the network by the method of self-organization. The Volterra neural network is presented, and a reduced structure of this network is developed. The reduced neural network can significantly reduce the time needed to build a mathematical model.

Another approach that speeds up the process of building a model is the parallelization of calculations in the implementation of a neural network. The operation of each layer of the neural network can be realized as a set of parallel threads whose number equals the product of the number of neurons in the current layer and the number of neurons in the previous layer.
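Layer-level parallelism can be sketched by evaluating each neuron of a layer in its own thread, since the neurons of one layer are mutually independent. The layer sizes, weights, and tanh activation are illustrative assumptions.

```python
# Sketch: one thread per neuron of a layer; the parallel result must
# match the sequential one exactly, since the neurons are independent.
from concurrent.futures import ThreadPoolExecutor
import math

def neuron(weights, inputs, bias=0.0):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(s)                       # activation function

inputs = [0.5, -1.0, 2.0]
layer_weights = [[0.1, 0.2, 0.3], [-0.4, 0.5, 0.0], [1.0, 1.0, 1.0]]

sequential = [neuron(w, inputs) for w in layer_weights]
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(neuron, layer_weights, [inputs] * len(layer_weights)))
print(parallel)
```

In a real network the same pattern is applied layer by layer, so the thread count per layer is bounded by the product of adjacent layer sizes mentioned above.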

Similarly, the neural network can also be parallelized for the error back propagation algorithm. The genetic algorithm involves the step-by-step development of generations: evaluation of the individuals of the current generation and formation of a new generation from the best individuals of the previous one. It is impossible to calculate the next generation until the previous one has been formed, but it is possible to work in parallel with individuals of the same generation. Parallel assessment of the quality of the individuals of the current generation and parallel formation of the next generation help reduce the time of each cycle and of the algorithm as a whole.
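The within-generation parallelism described above can be sketched as follows: the fitness of all individuals of one generation is evaluated concurrently, then the best individuals are carried over and mutated to form the next generation. The fitness function and population parameters are illustrative assumptions.

```python
# Sketch: generations evolve sequentially, but fitness evaluation inside
# each generation runs in parallel; elitism keeps the best individuals.
from concurrent.futures import ThreadPoolExecutor
import random

def fitness(ind):
    return sum((g - 0.5) ** 2 for g in ind)    # lower is better

rng = random.Random(3)
population = [[rng.random() for _ in range(4)] for _ in range(12)]

best_per_generation = []
for _ in range(5):
    with ThreadPoolExecutor() as pool:          # parallel evaluation
        scores = list(pool.map(fitness, population))
    ranked = [ind for _, ind in sorted(zip(scores, population))]
    best_per_generation.append(min(scores))
    parents = ranked[:4]                        # elitism: keep the best
    population = parents + [
        [g + rng.gauss(0.0, 0.05) for g in rng.choice(parents)]
        for _ in range(8)                       # mutated offspring
    ]
print(best_per_generation)
```

Because the parents survive unchanged, the best fitness can only improve or stay the same from one generation to the next.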

Considering that each network neuron can be calculated independently of the other neurons of its layer, as well as the fact that each individual neural network actively interacts with its parameters (synaptic weights) [

The speed tests of the parallelized and sequential implementation of the system are performed; the results are shown in Table

Comparative characteristics of the speed of sequential and parallel implementation.

| Type of calculation | Number of networks | Number of iterations | Number of training cycles | Work time |
|---|---|---|---|---|
| Sequential | 1 | 10 | 1 000 | 0.51 s |
| | 2 | 10 | 1 000 | 1.07 s |
| | 3 | 10 | 1 000 | 1.66 s |
| Parallel | 1 | 10 | 1 000 | 0.47 s |
| | 2 | 10 | 1 000 | 0.53 s |
| | 3 | 10 | 1 000 | 0.58 s |

It can be seen from Table

In this section, an example of application and simulations are presented for well-known INS error models, the INS being installed on a spacecraft returning to the atmosphere. As a test model, a typical error model of the platform INS is used in the following form:

Then the error model of the northern channel of INS can be written as
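The specific channel equation is not reproduced here; as a hedged illustration, a classic single-channel INS error loop (velocity error coupled with platform tilt, i.e., the Schuler loop) can be simulated as d(dV)/dt = -g·phi + nabla and d(phi)/dt = dV/R + eps, where nabla is an accelerometer bias and eps a gyro drift. This is an assumed textbook form and the numeric values are illustrative, not necessarily the model used in the paper.

```python
# Sketch: semi-implicit Euler integration of a single-channel INS error
# loop; the velocity error oscillates with roughly the Schuler period.
import math

g, R = 9.81, 6.371e6                 # gravity, Earth radius (SI units)
nabla, eps = 1e-3, 1e-7              # assumed bias and drift values
dV, phi, dt = 0.0, 0.0, 1.0

velocity_error = []
for _ in range(6000):                # ~100 minutes of flight, 1 s steps
    dV += (-g * phi + nabla) * dt    # velocity error driven by tilt and bias
    phi += (dV / R + eps) * dt       # tilt driven by velocity error and drift
    velocity_error.append(dV)
print(max(velocity_error), min(velocity_error))
```

The oscillation period here is close to 2π·sqrt(R/g), about 84 minutes, which is why an autonomous prediction model is needed over long correction-free intervals.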

In Figure

INS errors in determining velocity obtained with a real INS and by various models.

It can be seen that Figure

In Figure

Angle of deviation of GPS from the plane of horizon and the model built by reduced Volterra neural network.

Angle of deviation of GPS from the plane of horizon and the model built by Volterra neural network.

From the simulation results, we notice that the reduced Volterra network accelerates the building of models of a given accuracy by 7-10% on average in comparison with the standard Volterra neural network. The accuracy of the model built over the correction interval averages 85% of the nominal.

This paper presents an advanced algorithmic method for increasing the accuracy of the INS of a spacecraft. Three approaches for speeding up the work of a neural network are suggested, which are extremely important for building mathematical models of the INS correction system. The autonomous correction of the INS is performed using the predictive error model constructed by the Volterra neural network modified by the self-organization algorithm, and the modification is validated to speed up the work of the neural network.

The accuracy of a model built using the reduced neural network is practically the same as that of a model built by an ordinary neural network, while the speed of the reduced neural network is significantly higher. In this paper, an algorithm for building a mathematical model based on a neural network has been developed. To speed up its work, it is proposed to determine the network coefficients by self-organization; a Volterra neural network is presented, and a reduced structure of this network is developed that can significantly reduce the time of building a mathematical model. The proposed acceleration methods affect the process of building mathematical models of various dynamic objects, in particular the INS error model of a spacecraft. The simulation results show that the idea of combining a neural network with the navigation algorithm is feasible and has wide application prospects. Further research is related to the development of algorithms for constructing models with desired properties, for example, models with enhanced characteristics of observability, identifiability, and sensitivity.

The data used to support the findings of this study are available from the corresponding author upon request.

There are no conflicts of interest regarding the publication of this paper.

This work is supported by the “Intelligent Ammunition System Theory and Key Technology Innovation Induction Base” funded by the Chinese Ministry of Education.