
Nonlinear function fitting is an essential research issue. At present, the main function fitting methods are statistical methods and artificial neural networks, but statistical methods carry many inherent strict assumptions in application, and the widely used back propagation (BP) neural network has too many parameters to optimize. To address these gaps in existing research, the FOA-GRNN model was proposed and compared with the GRNN, GA-BP, PSO-BP, and BP models on three nonlinear functions of increasing complexity to verify its accuracy and robustness. The experimental results showed that the FOA-GRNN had the best fitting precision and the fastest convergence speed; moreover, its predictions were stable and reliable on the Mexican Hat and Rastrigin functions. On the most complex function, the Griewank function, the predictions of FOA-GRNN became unstable and the model did not outperform a GRNN tuned by equal step length searching, but its performance remained superior to that of GA-BP, PSO-BP, and BP. This paper presents a new approach to optimizing the GRNN parameter and also provides a new nonlinear function fitting method with better fitting precision, faster calculation speed, fewer adjustable parameters, and stronger processing ability for small samples. The capacity of FOA for handling highly complex nonlinear functions needs to be further improved and developed in future study.

In various fields of science and technology, the relationship between different variables is usually described by a function. Some functional relations can be derived from classical theoretical analysis, which not only provides a theoretical basis for further analysis and research, but also facilitates the solution of practical engineering problems. In many engineering problems, however, it is difficult to derive the functional expression between variables directly, or the obtainable expression is so complex that it hinders further analysis and calculation. When the functional relationship between such variables is still needed, a fitting method can be used to obtain an approximate functional expression by combining the known experimental data with mathematical methods.

According to the fitting problem, fitting methods are generally divided into linear fitting methods and nonlinear fitting methods. For the linear data fitting problem, linear fitting methods usually adopt a set of simple, linearly independent basis functions to approximate the experimental data. However, linear fitting methods carry many inherent strict assumptions in practical application: the independent variables are assumed to be normally distributed, uncorrelated, and independent, and the functional form relating the independent variables to the dependent variable must be specified in advance and be linear [

The generalized regression neural network (GRNN) offers excellent noise tolerance, good approximation ability, fast convergence, and the ability to learn autonomously. In addition, when handling small-sample or unstable data, the GRNN performs particularly well [

Although the GRNN does not need a predefined function model, its SPREAD parameter has a significant influence on prediction performance. Therefore, in order to select an appropriate SPREAD parameter, researchers have proposed multiple evolutionary algorithms for optimizing neural networks, such as the genetic algorithm (GA) [

However, traditional evolutionary algorithms share common disadvantages, such as complex implementations, difficulty of understanding, and slow convergence, so the scholar Pan proposed the fruit fly optimization algorithm (FOA) in 2012 [

This paper utilizes the fruit fly optimization algorithm (FOA) to optimize the SPREAD parameter of the GRNN, establishing the FOA-GRNN model. Three nonlinear functions of increasing complexity were then designed, and the FOA-GRNN was compared with the GRNN, GA-BP, PSO-BP, and BP models under the same experimental conditions to verify its accuracy and robustness.

The rest of this study is organized as follows. The principles of GRNN, FOA, GA, PSO, and BP are described in Section

In 1991, American scholar Donald F. Specht proposed the generalized regression neural network (GRNN), which is a special form of radial basis function (RBF) neural network [

The network structure of GRNN is composed of an input layer, a pattern layer, a summation layer, and an output layer, as shown in Figure

The topological structure of GRNN.

The number of neurons in the input layer is equal to the dimension of the input vector; these neurons simply pass the input variables on to the pattern layer.

The number of neurons in the pattern layer equals the number of learning samples, $n$; the transfer function of the $i$-th neuron in the pattern layer is a Gaussian kernel:

$$P_i = \exp\left[-\frac{(X - X_i)^T (X - X_i)}{2\sigma^2}\right], \quad i = 1, 2, \ldots, n$$

where $P_i$ is the output of the $i$-th neuron in the pattern layer, $X$ is the network input vector, $X_i$ is the learning sample corresponding to the $i$-th neuron, and $\sigma$ is the smoothing parameter (SPREAD).

The summation layer contains a denominator summation unit and a numerator summation unit. The denominator summation unit performs an arithmetic sum of the outputs of all pattern-layer neurons; the connection weight between every pattern-layer neuron and this summation neuron is 1, so its transfer function is the plain sum of the pattern-layer outputs $P_i$: $S_D = \sum_{i=1}^{n} P_i$.

The numerator summation unit performs a weighted summation of the pattern-layer outputs, $S_N = \sum_{i=1}^{n} y_i P_i$, where the weight $y_i$, the output value of the $i$-th learning sample, is the connection weight between the $i$-th pattern-layer neuron and the numerator summation neuron.

The output layer divides the numerator summation by the denominator summation to produce the network's predicted output.

When using a GRNN for modeling, once the training samples are determined, the corresponding network structure and the connection weights between neurons are determined accordingly, so the GRNN is very stable; its training process is actually the process of optimizing the smoothing parameter SPREAD.
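The forward computation described above can be sketched in a few lines. This is an illustrative implementation assuming the common Gaussian-kernel form of the GRNN; the function and variable names are our own, not from the paper:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread):
    """GRNN forward pass sketch (Gaussian-kernel form).

    Each training sample acts as one pattern-layer neuron; the output is
    a kernel-weighted average of the training targets.
    """
    X_train = np.atleast_2d(X_train)
    preds = []
    for x in np.atleast_2d(X_query):
        # Pattern layer: Gaussian activation per training sample.
        d2 = np.sum((X_train - x) ** 2, axis=1)
        p = np.exp(-d2 / (2.0 * spread ** 2))
        # Summation layer: denominator (plain sum) and numerator (weighted sum).
        denom = np.sum(p)
        num = np.sum(p * y_train)
        # Output layer: ratio of the two summation units.
        preds.append(num / denom if denom > 0 else float(np.mean(y_train)))
    return np.array(preds)
```

With a very small `spread`, the prediction at a training point collapses to that point's target; with a large `spread`, it approaches the mean of all targets, which is why tuning SPREAD matters so much.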

In order to obtain the ideal SPREAD parameter and give the prediction model good generalization ability, the fruit fly optimization algorithm (FOA) is adopted to optimize the smoothing parameter SPREAD of the GRNN model.

The fruit fly optimization algorithm (FOA) is a global optimization algorithm that simulates the foraging behavior of fruit flies; it has the advantages of a simple structure, low computational complexity, and high execution efficiency. It was proposed by the scholar Pan Wen-Chao in 2012 [

The parameter optimization process of FOA-GRNN model.

The specific optimization steps of FOA-GRNN are as follows.

Randomly initialize the fruit fly swarm location, and set the iteration number and population size.

Randomly set the direction and distance of each fruit fly.

Calculate the distance D_i between each fruit fly and the origin, and take its reciprocal as the smell concentration determination value S_i = 1/D_i.

Establish the fitness function. One smell concentration determination value S represents one fruit fly position; it is regarded as a candidate SPREAD parameter of the GRNN and substituted into the fitness function to obtain the fitness of that fruit fly's position. In this study, the reciprocal of the root mean square error (RMSE) between the predicted values and the actual values is used as the fitness function.

Regenerate a new fruit fly population. The best fruit fly, with the highest fitness, is picked out; new fruit flies are then generated around it. The direction and distance of each new fruit fly are adjusted based on the position of the best fruit fly of the previous generation.

Repeat Steps

Judge whether the termination condition is met; if so, the optimal SPREAD parameter and the corresponding GRNN model are obtained.
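The steps above can be sketched as follows. This is a minimal illustrative version assuming the standard two-dimensional FOA formulation (smell concentration S = 1/D), with a toy fitness function standing in for the GRNN-based reciprocal-RMSE fitness; all names and the seed are illustrative:

```python
import numpy as np

def foa_optimize(fitness, pop_size=20, generations=20, seed=0):
    """Minimal fruit fly optimization algorithm (FOA) sketch.

    `fitness` maps a candidate smell-concentration value S (used here as
    the SPREAD parameter) to a score; larger is better.
    """
    rng = np.random.default_rng(seed)
    # Step 1: randomly initialize the swarm location on a 2-D plane.
    x_axis, y_axis = rng.uniform(0.0, 1.0, size=2)
    best_s, best_fit = None, -np.inf
    for _ in range(generations):
        # Step 2: each fly searches in a random direction and distance.
        x = x_axis + rng.uniform(-1.0, 1.0, pop_size)
        y = y_axis + rng.uniform(-1.0, 1.0, pop_size)
        # Step 3: distance to the origin, then smell concentration S = 1/D.
        d = np.sqrt(x ** 2 + y ** 2)
        s = 1.0 / d
        # Step 4: evaluate the fitness of each candidate S.
        fits = np.array([fitness(si) for si in s])
        # Step 5: the swarm flies toward the best fly found so far.
        i = int(np.argmax(fits))
        if fits[i] > best_fit:
            best_fit, best_s = fits[i], s[i]
            x_axis, y_axis = x[i], y[i]
    return best_s, best_fit

# Toy fitness: reciprocal of the error to a known optimum SPREAD of 0.5
# (in the paper, the fitness is the reciprocal RMSE of the GRNN model).
best_spread, _ = foa_optimize(lambda s: 1.0 / (abs(s - 0.5) + 1e-9))
```

The swarm relocation in step 5 is what lets FOA converge faster than an equal step length sweep: later generations concentrate their random searches around the best position found so far.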

Back propagation (BP) neural network is a multilayer feedforward network proposed by Rumelhart and McClelland in 1986, and it is one of the most widely applied neural network models. It can self-learn, self-organize, and fit any nonlinear function. During training, the BP neural network takes the minimum sum of squared errors as its learning goal and continuously adjusts the weights and thresholds via the steepest descent method to approach the desired output [

Genetic algorithm (GA) is an evolutionary algorithm proposed by the American professor Holland in 1962. It follows the biological inheritance and evolutionary mechanism of "survival of the fittest" to search for the global optimal solution. During optimization, the genetic algorithm first encodes the candidate solutions as population individuals and initializes the population randomly, with each individual representing one candidate solution. It then calculates the fitness of each individual, retains superior individuals with high fitness, and eliminates inferior individuals with low fitness through selection, crossover, and mutation operations; evolution repeats until the best individual is found, which is decoded to obtain the optimal solution [

Particle swarm optimization (PSO) is an evolutionary computation technique based on swarm intelligence. It stems from the simulation of bird flocks searching for food and was proposed by Kennedy and Eberhart in 1995. The PSO algorithm maps the solution space to a particle swarm, and each particle is characterized by a fitness, a position, and a velocity. The fitness distinguishes superior from inferior particles; the velocity determines the direction and distance of a particle's movement. The best position a particle has passed through is called its "personal best position", and the best position found by the entire swarm is called the "global best position". As particles move through the search space, each particle's movement is guided by its personal best position as well as the global best position. Better particles are found around these best positions, which are updated accordingly until the optimal solution, represented by the global best position, is obtained [
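A minimal sketch of the PSO update just described, shown on a toy sphere function. The acceleration constant 1.49445 matches the value this paper uses for c1 and c2; the other names, bounds, and the seed are illustrative:

```python
import numpy as np

def pso_minimize(f, dim, pop=20, iters=20, c1=1.49445, c2=1.49445,
                 v_max=1.0, seed=0):
    """Minimal PSO sketch: particles track personal and global bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (pop, dim))          # positions
    v = rng.uniform(-v_max, v_max, (pop, dim))      # velocities
    pbest = x.copy()                                # personal best positions
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()        # global best position
    for _ in range(iters):
        r1 = rng.random((pop, dim))
        r2 = rng.random((pop, dim))
        # Velocity update: each particle is pulled toward its personal
        # best and the swarm's global best, scaled by random factors.
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -v_max, v_max)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())

# Usage: minimize the 2-D sphere function, whose optimum is at the origin.
gbest, fbest = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=2)
```

In GA-BP and PSO-BP, the position vector plays the role of the flattened BP weights and thresholds, and `f` would evaluate the resulting network's RMSE.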

Weights and thresholds of BP neural network have great influence on the fitting precision. Therefore, in order to continuously enhance fitting precision, genetic algorithm and particle swarm optimization algorithm are introduced to BP neural network for obtaining the optimal weights and thresholds, and finally, the BP neural network optimized by genetic algorithm (GA-BP) and BP neural network optimized by particle swarm optimization algorithm (PSO-BP) are established.

Three nonlinear functions were designed in order of increasing complexity, and each prediction model was run under the same experimental conditions. For each nonlinear function, 10,000 data points were randomly generated; the first 8,000 were used as training data and the remaining 2,000 as testing data.
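The data preparation can be sketched as follows, using a placeholder two-input nonlinear surface standing in for the paper's benchmark functions; the seed, input range, and target function are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)  # illustrative seed, not from the paper

# Randomly generate 10,000 two-input samples; the target here is a
# placeholder nonlinear surface, not one of the paper's exact benchmarks.
X = rng.uniform(-5.0, 5.0, size=(10_000, 2))
y = np.sin(np.sqrt(X[:, 0] ** 2 + X[:, 1] ** 2))

# First 8,000 points for training, remaining 2,000 for testing.
X_train, y_train = X[:8000], y[:8000]
X_test, y_test = X[8000:], y[8000:]
```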

To verify the accuracy and robustness of the FOA-GRNN model, the FOA-GRNN is compared with the GRNN, GA-BP, PSO-BP, and BP models. The experimental conditions are identical: the input and output variables, training and testing datasets, fitness function, population size, and number of iterations are all the same.

In FOA-GRNN, the SPREAD parameter of the GRNN is the optimization target of FOA; one fruit fly position represents one candidate parameter. In GA-BP and PSO-BP, the weights and thresholds of the BP network are encoded as the population individuals of GA and PSO. The reciprocal of the root mean square error (RMSE) between predicted and actual values is used as the individual fitness function. The population size and number of generations of each evolutionary algorithm are set to 20 and 20, respectively.

Moreover, the crossover and mutation probabilities of the genetic algorithm are set to 0.2 and 0.1. In PSO, the particle position maximum, particle position minimum, velocity maximum, velocity minimum, and acceleration constants c1 and c2 are set to 0.55, 0.05, 1, -1, and 1.49445, respectively. For the BP neural network, the network topology is 2-4-1, the convergence criterion is an RMSE of at most 0.0001, and the maximum number of iterations, learning rate, and training function are 100, 0.1, and traingd (Table

Parameters setting of algorithms.

| Model Type | Training and Testing Samples | Algorithm Parameters |
|---|---|---|
| FOA-GRNN | Training samples: 8000; testing samples: 2000 | Individual: the SPREAD parameter; population size: 20; generations: 20 |
| GA-BP | Same | Individual: the weights and thresholds of the BP network; population size: 20; generations: 20; crossover probability: 0.2; mutation probability: 0.1 |
| PSO-BP | Same | Individual: the weights and thresholds of the BP network; population size: 20; generations: 20; c1 = c2 = 1.49445 |
| GRNN | Same | Individual: the SPREAD parameter (equal step length searching) |
| BP | Same | Individual: the weights and thresholds of the BP network; topology: 2-4-1; learning rate: 0.1; maximum iterations: 100 |

The root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) are used to assess the fitting precision of models. The smaller the value of RMSE, MAE, or MAPE is, the less the forecasting error is, the better the model fit is, and vice versa.

The formulae of the three performance criteria are as follows:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}$$

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|$$

$$\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{\hat{y}_i - y_i}{y_i}\right|$$

where $y_i$ is the actual value, $\hat{y}_i$ is the predicted value, and $n$ is the number of testing samples.
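These three criteria are straightforward to implement. A sketch follows; note that MAPE here is the fractional form (no factor of 100), which matches the magnitudes reported in this paper's tables, and it assumes no actual value is zero:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error.
    return float(np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)))

def mae(y_true, y_pred):
    # Mean absolute error.
    return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true))))

def mape(y_true, y_pred):
    # Mean absolute percentage error, fractional form (assumes y_true != 0).
    y_true = np.asarray(y_true, dtype=float)
    return float(np.mean(np.abs((np.asarray(y_pred) - y_true) / y_true)))
```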

The first nonlinear function for testing is Mexican Hat function [
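A common textbook form of the two-variable Mexican Hat (sombrero) surface is sin(r)/r; this sketch is for illustration only, since the paper's exact parameterization is not reproduced here:

```python
import numpy as np

def mexican_hat(x, y):
    """Sombrero form sin(r)/r with r = sqrt(x^2 + y^2).

    np.sinc(t) computes sin(pi*t)/(pi*t), so sinc(r/pi) equals sin(r)/r
    and handles r = 0 (value 1) without a division by zero.
    """
    r = np.sqrt(np.asarray(x, dtype=float) ** 2 + np.asarray(y, dtype=float) ** 2)
    return np.sinc(r / np.pi)
```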

The function graph of first nonlinear function.

In formula (

To search for the best population individual, FOA-GRNN, GRNN, GA-BP, and PSO-BP evolved for 20 generations; the fitness value of the best individual in each generation is shown in Figure

The optimization process of first testing function.

As shown in Figure

The predictive comparison of first testing function.

| Model | Total Error | RMSE | MAE | MAPE | Stability |
|---|---|---|---|---|---|
| FOA-GRNN | 57.2698 | 0.0412 | 0.0286 | 0.1468 | Yes |
| GRNN | 138.3150 | 0.0961 | 0.0692 | 0.3719 | Yes |
| GA-BP | 833.1369 | 0.5256 | 0.4166 | 2.0451 | No |
| PSO-BP | 768.0228 | 0.4824 | 0.3840 | 1.7451 | No |
| BP | 861.4234 | 0.5436 | 0.4307 | 2.3267 | No |

In Table

In addition, with the same training and testing data, the programs of the FOA-GRNN, GRNN, GA-BP, PSO-BP, and BP models were run repeatedly several times. Each run of the FOA-GRNN and GRNN models returned the same result, but the predicted results of the GA-BP, PSO-BP, and BP models differed across runs within the same test scenario. This indicates that the FOA-GRNN and GRNN models had been trained sufficiently and had reached the optimal solution, so their predictions were stable and reliable. For the GA-BP, PSO-BP, and BP models, however, the sample size was too small for thorough training, so the optimal solution could not be obtained and the models remained undertrained and unstable.

The second nonlinear function for testing is the Rastrigin function [
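The standard two-variable Rastrigin form is shown below for illustration; the paper's exact parameterization may differ:

```python
import numpy as np

def rastrigin(x, y):
    """Standard two-variable Rastrigin function: global minimum 0 at (0, 0),
    with a dense grid of local minima created by the cosine terms."""
    return (20.0
            + x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)
            + y ** 2 - 10.0 * np.cos(2.0 * np.pi * y))
```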

The function graph of second nonlinear function.

In formula (

Figure

The predictive comparison of second testing function.

| Model | Total Error | RMSE | MAE | MAPE | Stability |
|---|---|---|---|---|---|
| FOA-GRNN | 2865.8925 | 1.9038 | 1.4329 | 0.0514 | Yes |
| GRNN | 3252.8647 | 2.0885 | 1.6264 | 0.0613 | Yes |
| GA-BP | 16520.2570 | 10.1086 | 8.2601 | 0.3346 | No |
| PSO-BP | 16482.0882 | 10.1015 | 8.2410 | 0.3373 | No |
| BP | 16505.6928 | 10.1119 | 8.2528 | 0.3362 | No |

The optimization process of second testing function.

Table

The third nonlinear function for testing is Griewank function [
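The standard two-variable Griewank form is shown below for illustration; the paper's exact parameterization may differ:

```python
import numpy as np

def griewank(x, y):
    """Standard two-variable Griewank function: global minimum 0 at (0, 0);
    the cosine product adds a large number of shallow local minima."""
    return (1.0
            + (x ** 2 + y ** 2) / 4000.0
            - np.cos(x) * np.cos(y / np.sqrt(2.0)))
```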

The function graph of third nonlinear function.

In formula (

The third testing function is more complex, with more local extreme values. Figure

The predictive comparison of testing function 3.

| Model | Total Error | RMSE | MAE | MAPE | Stability |
|---|---|---|---|---|---|
| FOA-GRNN | 686.0692 | 0.4455 | 0.3430 | 0.1971 | No |
| GRNN | 685.7656 | 0.4455 | 0.3429 | 0.1969 | Yes |
| GA-BP | 823.8674 | 0.5083 | 0.4119 | 0.2415 | No |
| PSO-BP | 822.9429 | 0.5080 | 0.4115 | 0.2431 | No |
| BP | 825.5087 | 0.5090 | 0.4128 | 0.2430 | No |

The optimization process of third testing function.

In response to the gaps and shortcomings of current prediction methods, this study established the FOA-GRNN model and compared it with four prediction models proposed by previous scholars (GRNN, GA-BP, PSO-BP, and BP). Simulation results showed that the proposed FOA-GRNN model has advantageous properties such as better fitting precision, faster calculation speed, fewer adjustable parameters, and stronger nonlinear fitting capability. The main conclusions are summarized as follows.

First, compared with the BP model, the GRNN model has fewer adjustable parameters and better fitting precision and is more suitable for small samples. The parameters sought by the BP neural network model are all the network weights and thresholds, so their number is large. For example, in this study the BP neural network has 2 input nodes, 4 hidden nodes, and 1 output node, so 17 parameters must be sought (2 × 4 + 4 × 1 = 12 weights plus 4 + 1 = 5 thresholds). The BP neural network model therefore struggles to reach the global optimum, easily falls into local extrema, and converges relatively slowly. Moreover, BP needs a larger training dataset; small or incomplete sample data lead to insufficient training and uncertain predictions. In contrast, the GRNN model has only one parameter to adjust, so it has better fitting precision and faster convergence and is more suitable for small sample data.

Second, FOA can find the global optimum more rapidly and greatly enhances the convergence rate of the GRNN model. FOA is an artificial intelligence algorithm derived from research on and observation of the foraging behavior of fruit fly swarms. Compared with random assignment or equal step length searching for the parameter, FOA obtains the best SPREAD parameter faster and improves the convergence speed of the GRNN model.

Third, the fitting precision of the optimized BP neural network models is better than that of the plain BP neural network model. In GA and PSO, population individuals are evaluated according to fitness: superior individuals with high fitness are retained and inferior individuals with low fitness are eliminated; each new generation inherits from, and improves on, the previous one, and this evolutionary cycle repeats until the termination conditions are met and the fittest individual is obtained. The GA and PSO algorithms obtain the optimal weights and thresholds of the BP neural network more easily and quickly than random search. Therefore, the predictions of GA-BP and PSO-BP are more accurate than those of the BP neural network model.

Nevertheless, more research is needed to further improve prediction performance. As the nonlinear function becomes more complex and the number of local extrema increases, the improvement offered by FOA diminishes, which indicates that FOA's capacity for handling complex nonlinear functions needs to be further improved and developed. The same problem also exists in GA and PSO. Moreover, the FOA-GRNN model should be further tested and developed through application in other fields of science, mathematics, technology, and engineering.

The data used to support the findings of this study are available from the corresponding author upon request.

The authors declare no conflicts of interest.

This research was funded by the National Social Science Fund of China (Grant no. 17BGL202).