Prediction of Chaotic Time Series Based on BEN-AGA Model

Aiming at the prediction problem of chaotic time series, this paper proposes a brain emotional network combined with an adaptive genetic algorithm (BEN-AGA) model to predict chaotic time series. First, we improve the brain emotional learning (BEL) model by using an activation function to convert its two linear structures, the amygdala and the orbitofrontal cortex, into nonlinear structures, and thereby establish the brain emotional network (BEN) model. The brain emotional network model has stronger nonlinear calculation ability and generalization ability. Next, we use the adaptive genetic algorithm to optimize the parameters of the brain emotional network model. The weights to be optimized in the model are coded as chromosomes. We design dynamic crossover and mutation probabilities to control the crossover and mutation processes, and the optimal parameters are selected by evaluating the chromosomes with a fitness function. In this way, we improve the approximation capability of the model and increase its calculation speed. Finally, we reconstruct the phase space of the observation sequence based on the short-term predictability of the chaotic time series; we then establish a brain emotional network model, optimize its parameters with the adaptive genetic algorithm, and perform single-step prediction on the optimized model to obtain the prediction error. The model proposed in this paper is applied to the prediction of the Rossler chaotic time series and the sunspot chaotic time series. The experimental results verify the effectiveness of the BEN-AGA model and show that it has higher prediction accuracy and better stability than other methods.


Introduction
Deterministic irregular movement in a system is called chaos. A chaotic time series is the manifestation of chaos in discrete form: a time series with chaotic characteristics generated by a chaotic model is a chaotic time series, and rich dynamic information is implied in it. It can be said that the chaotic time series is a bridge from chaos theory to the real world and an important application field of chaos. The research and application of chaotic time series prediction are therefore becoming more and more important. Choosing a single time series as the original data for time series prediction is a conventional way to study time series. A single time series presents a simple relationship between time and the forecast quantity. In practice, however, observation shows that this relationship is not a simple linear one. Therefore, when making time series forecasts, we pay more attention to time series generated by nonlinear systems, especially time series produced by chaotic systems. The theory of chaotic time series holds that the evolution of a single variable in a time series contains all the information about the long-term evolution of all variables in the system. Through the phase space reconstruction theory proposed by Packard, we can use any long-term evolutionary variable in the system to recover the dynamics of the whole system and study its chaotic behavior [1]. Chaos theory provides new ideas for time series prediction. Chaotic time series are widely used in geomagnetic storm warning [2], short-term traffic flow prediction [3], sunspot data prediction [4], and so on.
In recent years, scholars at home and abroad have proposed a variety of chaotic prediction models, such as the moving average model [5], the autoregressive model [6], the autoregressive moving average model [7], neural networks (feedforward, recurrent, and constructive neural networks) [8], and support vector machines [9]. The moving average model, autoregressive model, and autoregressive moving average model are proven and mature models that are widely used in linear time series forecasting. They have the advantages of high forecasting accuracy and short forecasting time, but their accuracy on nonlinear time series is not high, which limits their use in time series forecasting. A neural network is a powerful tool for nonlinear time series forecasting. It has the advantages of good nonlinear operation ability and high prediction accuracy. For example, Shen et al. [10] improved the extreme learning machine to predict chaotic time series and verified the strong robustness of the proposed model. Similarly, Wang et al. [11] used a hybrid convolutional and long short-term memory neural network combined with a genetic algorithm to predict photovoltaic power generation, and the results showed that their method had better prediction performance. However, because the neural network is limited by the number of training samples, there are problems of overfitting and falling into local optimal solutions, and the structure needs to be manually specified. Also, as the complexity of the system increases, the computing time increases significantly, which limits the applicability of neural networks. Support vector machines are also generally used for nonlinear time series. Their advantages are that they follow the principle of structural risk minimization, have better generalization performance, and can obtain global optimal solutions.
For example, Guo [12] used a genetic algorithm to optimize support vector machines for time series prediction and achieved higher prediction accuracy than before. Similarly, Yan et al. [13] used a support vector machine model optimized by a genetic algorithm to predict short-term wind speed and obtained predicted values of great practical significance. However, the support vector machine has disadvantages: its calculation speed is slow, its prediction accuracy depends heavily on the choice of parameters, and there is no unified method for determining the model parameters. Therefore, it is necessary to further study new models with simple structure, good generalization performance, accuracy, and efficiency. The brain emotional learning (BEL) model [9] is based on the structure of the limbic system of the mammalian brain and simulates the mechanism by which the amygdala and the orbitofrontal cortex generate emotions, consolidate memory, and avoid repeated learning. It has the advantages of simple structure, low computational complexity, and fast calculation speed. Due to these advantages, the BEL model has a wide range of applications in time series forecasting, such as the prediction of geomagnetic storms and the forecasting of short-term traffic flow. For example, Ying et al. [14] used the BEL model to predict chaotic time series and applied it to the field of geomagnetic alarms. Likewise, Yin and Tan [15] classified data with the BEL model and found that it had higher classification accuracy and faster operation speed than the traditional algorithm. However, when facing nonlinear time series, the prediction accuracy of the BEL model decreases; moreover, its prediction results depend on the setting of the reward signal parameters, for which there is no uniform standard.
In the past, researchers proposed different reward signal settings for different problems. This approach limits the applicability of the BEL model. In order to improve the nonlinear time series prediction accuracy of the BEL model and avoid the problem of reward signal setting, in this paper we improve the BEL model based on existing research results to build a brain emotional network (BEN) model: the linear amygdala and orbitofrontal cortex structures in the model are changed into nonlinear structures with an activation function and weights, and weights are added to the output part to describe their different effects. To address the problem of the genetic algorithm falling into local optimal solutions when facing nonlinear systems, we construct an adaptive genetic algorithm with dynamic crossover and mutation probabilities to improve the flexibility of the genetic algorithm and enhance its global and local search capabilities. We combine the adaptive genetic algorithm and the brain emotional network model to build the BEN-AGA model, using the adaptive genetic algorithm to optimize the parameters of the brain emotional network model. First, the weights to be optimized are encoded as chromosomes; then the crossover and mutation processes are controlled through the dynamic crossover and mutation probabilities; finally, the optimal parameters are selected by evaluating the chromosomes with the fitness function.
The BEN-AGA model proposed in this paper is applied to Rossler chaotic time series prediction and sunspot sequence prediction.
The experimental results show that the BEN-AGA model has high prediction accuracy and fast calculation speed and can effectively reflect the trend of chaotic time series. Compared with other models, it shows obvious superiority.

Chaotic Time Series Prediction
The observed signal obtained in time series forecasting applications is often a single time series, such as y(t), t = 1, 2, . . . , n. However, there is generally no simple linear relationship between the predicted object and the observed signal, for example between time and power consumption. Therefore, chaotic time series generated by nonlinear systems, especially chaotic systems, are very important in time series prediction. The theory of chaotic time series tells us that the long-term evolution of a single variable in the time series contains all the information about the long-term evolution of the entire system.

Phase Space Reconstruction.
The state of the system at a certain moment is called the phase, and the space formed by all phases is called the phase space, which is the geometric space that determines the state of the system. The phase space of a chaotic system is generally high dimensional. The original single-variable time series is actually the final reflection of the interaction of many influencing factors and contains the traces of all the variables participating in this interaction. Packard proposed the theory of phase space reconstruction to study chaotic systems (the chaotic behavior of the system) by using a single long-term evolution time series of the system. Phase space reconstruction embeds the time series into a higher dimensional phase space so that the information contained in the original time series can be fully revealed. It is a very important step in processing chaotic time series, and its quality directly affects the prediction results. The theorem put forward by Takens [16] is very instructive for phase space reconstruction. It holds that the development and change of any influencing factor in the system are determined by the development and change of the other influencing factors related to it. The principle is as follows: for a time series {x(t), t = 1, 2, 3, . . . , n}, where n is the length of the time series, the reconstructed phase-space point can be expressed as

X(t) = [x(t), x(t + τ), . . . , x(t + (m − 1)τ)],

where X(t) is a point in the phase space, m is the embedding dimension, and τ is the delay time. It is obvious from the above formula that the determination of the embedding dimension and the delay time is the key to phase space reconstruction. A good embedding dimension and delay time allow the reconstructed phase space to preserve the mutual influence of the original data without generating redundant information and noise.
Selecting feature vectors in the reconstructed phase space for calculation and prediction can greatly improve the reliability and accuracy of prediction. In this paper, the complex autocorrelation method [17] is used to solve for the delay time τ, and the method of Cao [18] is used to solve for the embedding dimension m.
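As an illustration, the delay embedding described above can be sketched in a few lines; this is a minimal sketch, and the function name and interface are our own rather than the paper's:

```python
import numpy as np

def reconstruct_phase_space(x, m, tau):
    """Embed a scalar series x into an m-dimensional phase space with
    delay tau: X(t) = [x(t), x(t + tau), ..., x(t + (m - 1) * tau)]."""
    x = np.asarray(x, dtype=float)
    n_points = len(x) - (m - 1) * tau  # number of reconstructed phase points
    if n_points <= 0:
        raise ValueError("series too short for this (m, tau)")
    # Column i holds the series shifted by i * tau; each row is one phase point.
    return np.column_stack([x[i * tau : i * tau + n_points] for i in range(m)])

# Example: a length-10 series embedded with m = 3, tau = 2;
# the first reconstructed point is [x(0), x(2), x(4)].
X = reconstruct_phase_space(np.arange(10), m=3, tau=2)
```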

x̂(t + 1) = f′(X(t)), (3)

e(t) = x(t + 1) − x̂(t + 1). (4)

Equation (3) is the single-step prediction model we need, and equation (4) is the prediction error of the model. From equation (3), the selection of the approximate mapping f′ directly affects the prediction effect, so choosing a good mapping f′ is essential.

Brain Emotional Model.
The brain emotional learning (BEL) model, inspired by neurophysiology and proposed by Buzug and Pfister [20], has many applications in the prediction of chaotic time series. The three key parts of the emotional learning model are the amygdala, the orbitofrontal cortex, and the reward signal. The amygdala is an important structure that controls emotional responses. It can receive different sensory stimuli, generate emotions, form memories, and consolidate memories. Information from the outside world is transformed into nerve stimulation through the human senses and reaches the amygdala through long and short reflex arcs to become emotion. In the short reflex arc, sensory stimuli reach the thalamus and are directly transmitted to the amygdala; in the long reflex arc, sensory stimuli pass through the thalamus to the visual cortex, where they are processed and then transferred to the amygdala. The orbitofrontal cortex can also receive sensory stimulation through long and short reflex arcs, but it mainly adjusts the amygdala and assists it in judging emotions. The amygdala and the orbitofrontal cortex are the main parts of the model, and emotional learning occurs in these two parts. The amygdala is responsible for generating emotions and output according to stimuli, and the orbitofrontal cortex is responsible for regulating the learning of the amygdala, avoiding over- and underlearning. The reward signal, as an emotion naturally produced by the brain, can act directly on the amygdala and the orbitofrontal cortex. The model is shown in Figure 1.

Complexity
In Figure 1, the sensory input signal is X; the reward signal is R_ew; A_th is the maximum sensory input signal; A is the output value of the amygdala; O is the output value of the orbitofrontal cortex; v is the weight between the sensory cortex and the amygdala; w is the weight between the sensory cortex and the orbitofrontal cortex; and the output value is recorded as T. It can be seen that the amygdala receives X, A_th, and R_ew; the orbitofrontal cortex receives X and R_ew; and the output of the emotional learning model is the response to the sensory input signals. The emotional learning process is the weight adjustment process of the model. According to the definition of the brain emotional learning model [20], the weight adjustment laws of the amygdala and the orbitofrontal cortex are

Δv_i = α · X_i · max(0, R_ew − Σ_j A_j),

Δw_i = β · X_i · (T − R_ew),

where Δv_i represents the weight change rate of the amygdala; Δw_i represents the weight change rate of the orbitofrontal cortex; A_i and O_i represent the output nodes of the amygdala and the orbitofrontal cortex, respectively; and α and β represent the learning rates of the amygdala and the orbitofrontal cortex, respectively. Observing the two formulas above, we can see that the reward signal R_ew plays an important role in the adjustment of the weights of the amygdala and the orbitofrontal cortex. Therefore, the prediction result of the model depends on the choice of the reward signal, and since there is no unified standard for this choice, the versatility of the model is not high. The general expression of the model is

T = Σ_i A_i − Σ_i O_i.
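One BEL learning step can be sketched as follows; this is a minimal illustration assuming the standard Moren-Balkenius-style update rules, and the function name and learning-rate values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def bel_step(X, v, w, R_ew, alpha=0.1, beta=0.05):
    """One learning step of a BEL-style model (illustrative sketch)."""
    A = X * v                       # amygdala node outputs A_i = X_i * v_i
    O = X * w                       # orbitofrontal node outputs O_i = X_i * w_i
    T = A.sum() - O.sum()           # model output T = sum(A) - sum(O)
    # Amygdala weights only grow toward the reward (monotonic learning);
    # the orbitofrontal cortex corrects the output toward the reward.
    dv = alpha * X * max(0.0, R_ew - A.sum())
    dw = beta * X * (O.sum() - R_ew)
    return v + dv, w + dw, T
```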

BEN Model Establishment
The emotional learning model has been widely used in short-term traffic flow prediction, geomagnetic storm warning, and other fields and has achieved certain effects in these applications. However, due to the lack of a unified standard for reward signals, the model still suffers from low generality, and its prediction accuracy needs further improvement. Considering these shortcomings, we propose the BEN model to improve the prediction accuracy and avoid the problem of setting reward signals, so that the established BEN model has stronger universality in related fields. The amygdala and the orbitofrontal cortex are the core of the emotional learning model: the sensory signal input is processed by the amygdala and the orbitofrontal cortex, whose interaction becomes the emotional output. The original model uses simple linear weighting functions in the amygdala and the orbitofrontal cortex to describe the emotional learning process, but it is more realistic to describe this process with a nonlinear model. Because the amygdala and the orbitofrontal cortex are simple linear weighting functions, we regard them as two simple neural networks and apply activation functions to them to change the simple linear structure into a nonlinear structure. Considering that the influence of the amygdala and that of the orbitofrontal cortex in the emotional learning process are not equal, on the basis of the activation function they are given output weights to describe their influence, so as to establish the BEN model. The BEN model has stronger nonlinear computing capability and a better model structure and can better describe the emotional learning process. The prediction results of the original model are too dependent on the reward signal R_ew, and there is no unified standard for R_ew.
Therefore, the reward signal R_ew is initialized randomly, and the adaptive genetic algorithm is used to iteratively search for the optimal reward signal R_ew, avoiding the effect of artificially set reward signals on the accuracy of the prediction results. The structure of the BEN model is shown in Figure 2.
Here, the two neural networks of the amygdala and the orbitofrontal cortex are net1 and net2, respectively; linear addition is used internally, and a linear output is selected as the output. The final output is the weighted linear sum of the two neural networks.
In the BEN model, the original signal becomes the sensory input signal after phase space reconstruction:

X = [X_1, X_2, . . . , X_m],

where m is the phase space reconstruction dimension and also the number of sensory input signals. The maximum value of the sensory input signals directly acts on the output result:

A_th = max(X_1, X_2, . . . , X_m).

The amygdala and the orbitofrontal cortex are now regarded as two neural networks, net1 and net2, respectively. For each sensory input signal X_i, net1 and net2 have corresponding acceptance points, where v_i represents the weight between the nodes of net1 and w_i represents the weight between the nodes of net2. The internal output T_a of net1 can be linearly expressed as

T_a = Σ_{i=1}^{m} v_i X_i + b_a,

where b_a is a bias term whose initial value is randomly given. The internal output T_o of net2 can be linearly expressed as

T_o = Σ_{i=1}^{m} w_i X_i + b_o,

where b_o is a bias term whose initial value is randomly given. The entire output T can then be expressed linearly as

T = N_1 T_a + N_2 T_o, (14)

where N_1 and N_2 represent the output weights of the amygdala and the orbitofrontal cortex, respectively. From equation (14), the current model is a linear relationship. We improve the model to strengthen its nonlinear calculation ability and generalization ability so as to better describe chaotic time series. For the output of the entire model, we still adopt a linear function, marked as g(·). The hidden layer functions in net1 and net2 are also linear functions, marked as f_1(·) and f_2(·), respectively. In the two separate neural networks, net1 and net2, an activation function h(x) is used to make the network nonlinear and improve its generalization ability. After the activation function is applied, the outputs of the hidden layers of net1 and net2 are h(T_a) and h(T_o). So the expression of the whole model can be written as

T = g(N_1 h(f_1(v · x + b_a)) + N_2 h(f_2(w · x + b_o))), (15)

where v and w are the weight vectors from the input layer to the hidden layer and x is the sensory input signal vector. Since the output function g(·) and the hidden layer functions f_1(·) and f_2(·) are linear, equation (15) can be written as

T = N_1 h(v · x + b_a) + N_2 h(w · x + b_o). (17)

Since we choose the sigmoid function as the activation function, equation (17) can be written in detail as

T = N_1 / (1 + e^{−(v·x + b_a)}) + N_2 / (1 + e^{−(w·x + b_o)}). (18)

Equation (18) is the established BEN model. The input of the BEN model is x_1, x_2, . . . , x_m; the output of the BEN model is T; and the parameters of the BEN model are v, w, b_a, b_o, N_1, and N_2. We consider using evolutionary algorithms to optimize the parameters of the network model. Common evolutionary algorithms are the genetic algorithm (GA) and the differential evolution (DE) algorithm.
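A sigmoid-based forward pass of this form can be sketched as below; this is a hedged illustration, and `ben_forward` and its argument names are our own:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ben_forward(x, v, w, b_a, b_o, N1, N2):
    """Equation (18)-style output: T = N1*h(v.x + b_a) + N2*h(w.x + b_o)."""
    T_a = sigmoid(np.dot(v, x) + b_a)  # amygdala sub-network (net1)
    T_o = sigmoid(np.dot(w, x) + b_o)  # orbitofrontal sub-network (net2)
    return N1 * T_a + N2 * T_o         # weighted combination of both parts
```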
Comparing the genetic algorithm with the differential evolution algorithm: in terms of iteration, the genetic algorithm generates new offspring from the parent generation, while the differential evolution algorithm evolves the parent generation itself. In selection, the genetic algorithm eliminates inferior individuals probabilistically, while the differential evolution algorithm eliminates them absolutely. As for the core of the algorithm, the core of the genetic algorithm is the crossover operation, while that of the differential evolution algorithm is the mutation operation. The robustness and convergence speed of the genetic algorithm are lower than those of the differential evolution algorithm, but it has stronger global search ability. Considering that the BEN network constructed in this paper is used to study the prediction of chaotic time series, the genetic algorithm, with its stronger global search ability, is adopted as the learning algorithm to optimize the parameters of the network model.

BEN Model Combined with Adaptive Genetic Algorithm (BEN-AGA Model)

Adaptive Genetic Algorithm.
The genetic algorithm is a randomized search method derived from the evolutionary laws of the biological world (survival of the fittest and genetic mechanisms). It was first proposed by Professor J. Holland in the United States in 1975 [21]. Its main feature is that it operates directly on structural objects without the limitations of differentiability and function continuity; it has inherent implicit parallelism and good global optimum search capability; and, using probabilistic optimization methods, it can automatically acquire and guide the optimized search space and adaptively adjust the search direction without requiring definite rules. These properties have made the genetic algorithm widely used in combinatorial optimization [21], machine learning [22], signal processing [19], adaptive control [23], and other fields. It is a key technology in modern intelligent computing. The entire process of the genetic algorithm is divided into chromosome encoding [24], fitness value calculation [25], selection operation [12], crossover operation [13], and mutation operation [26]; the last three operations are collectively referred to as genetic operators. The process of the genetic algorithm is shown in Figure 3.

Adaptive Genetic Operator.
(1) In the selection operation, roulette and the optimal-individual retention strategy are used to select the mating group. In the roulette method, the probability of each chromosome being selected is proportional to its fitness value. If the fitness value of the i-th chromosome is fit_i = k · F(ch_i), then the individual selection probability p_i is expressed as

p_i = fit_i / Σ_{j=1}^{pop} fit_j,

where pop is the population size, k is the correlation coefficient, and F(ch_i) is the fitness value calculated by the fitness function from the current chromosome.

(2) We use the maximum fitness value of the group and the median of the group fitness to describe the dynamic adaptive probabilities. In the crossover operation, the dynamic crossover probability p_cross is designed as

p_cross = p_cross_max − (p_cross_max − p_cross_min) · (fit − fit_mid) / (fit_max − fit_mid), if fit ≥ fit_mid,
p_cross = p_cross_max, if fit < fit_mid,

where p_cross_max and p_cross_min are the preset maximum and minimum crossover probabilities, respectively; fit_max and fit_mid are the group's maximum fitness value and median fitness value, respectively; and fit is the larger fitness value of the two current parents. Using the arithmetic crossover method, the offspring are obtained as

X_1′ = r X_1 + (1 − r) X_2,
X_2′ = (1 − r) X_1 + r X_2,

where X_1 and X_2 represent the chromosomes of the two parents, X_1′ and X_2′ represent the chromosomes of the two offspring, and r is a random number between 0 and 1.
In the mutation operation, the adaptive mutation probability p_mut is designed as

p_mut = p_mut_max − (p_mut_max − p_mut_min) · (fit′ − fit_mid) / (fit_max − fit_mid), if fit′ ≥ fit_mid,
p_mut = p_mut_max, if fit′ < fit_mid,

where p_mut_max and p_mut_min are the preset maximum and minimum mutation probabilities, respectively; fit_max and fit_mid are the maximum fitness value and the median fitness value of the group, respectively; and fit′ is the fitness value of the individual to be mutated.
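The adaptive probabilities and the arithmetic crossover can be sketched as follows; the linear ramp between the median and maximum fitness is an assumption in the spirit of standard adaptive genetic algorithms, and the function names are illustrative:

```python
import numpy as np

def adaptive_prob(fit, fit_max, fit_mid, p_max, p_min):
    """Adaptive crossover/mutation probability: below-median individuals get
    the maximum probability; above-median individuals are ramped down
    linearly toward p_min as their fitness approaches the group maximum."""
    if fit < fit_mid:
        return p_max
    if fit_max == fit_mid:          # degenerate group: avoid division by zero
        return p_min
    return p_max - (p_max - p_min) * (fit - fit_mid) / (fit_max - fit_mid)

def arithmetic_crossover(x1, x2, rng):
    """Arithmetic crossover: offspring are complementary convex
    combinations of the parents, so their sum is preserved."""
    r = rng.random()                # random number in [0, 1)
    return r * x1 + (1 - r) * x2, (1 - r) * x1 + r * x2
```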

Combination of BEN Model and Adaptive Genetic Algorithm.
In order to further improve the approximation ability of the BEN model and increase its calculation speed, the BEN model and the adaptive genetic algorithm are combined to construct the brain emotional network adaptive genetic algorithm (BEN-AGA) model. After the combination, the adaptive genetic algorithm is used to optimize the parameters of the BEN model, so that the calculation of the model is more accurate and efficient, and the problem of artificially set reward signals affecting the prediction accuracy is avoided. The combination is as follows. The parameters in the BEN model are α, β, c_1, and c_2, where α denotes the hidden-layer parameters and β the output-layer weights. First, the parameters of the BEN model are chromosome-coded so as to correspond to the genetic operators of the adaptive genetic algorithm. The parameter-encoding chromosome of the BEN model is

ch = [α_1, α_2, . . . , α_{m+1}, β_1, . . . , β_m, c_1, c_2],

where α_1, α_2, . . . , α_{m+1} and β_1, . . . , β_m represent the weights between the neural nodes in the hidden layer, c_1 and c_2 are the thresholds between neurons, and m is the number of sensory signals. Then the corresponding fitness function is established to guide the search process of the adaptive genetic algorithm. The design of the fitness function influences the convergence speed and prediction accuracy of the algorithm. Generally speaking, the fitness function is converted from the objective function. For the BEN-AGA model, the prediction error is naturally taken as the objective function.
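The chromosome layout just described, together with a fitness evaluation, can be sketched as below; using the reciprocal of the mean absolute error as a larger-is-better fitness is our assumption (roulette selection needs fitness to grow as the error shrinks), and all names are illustrative:

```python
import numpy as np

def encode(alpha, beta, c1, c2):
    """Concatenate the (m+1) alpha weights, m beta weights, and the two
    thresholds into one flat chromosome vector."""
    return np.concatenate([alpha, beta, [c1, c2]])

def decode(ch, m):
    """Split a chromosome back into (alpha, beta, c1, c2) for m sensory signals."""
    return ch[:m + 1], ch[m + 1:2 * m + 1], ch[2 * m + 1], ch[2 * m + 2]

def fitness(y_pred, y_true):
    """Larger-is-better fitness from the mean absolute prediction error
    (illustrative choice); a perfect prediction gives fitness 1.0."""
    return 1.0 / (1.0 + np.mean(np.abs(y_pred - y_true)))
```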
The smaller the error, the better the parameters encoded by the corresponding chromosome, which also means that the BEN-AGA model has selected better parameters. Therefore, the corresponding fitness function is

F(X) = (1/L) Σ_{l=1}^{L} |ŷ_l − y_l|,

where ŷ_l is the predicted value under the l-th input mode, y_l represents the expected output value under the l-th input mode, and F(X) represents the average prediction error when chromosome X supplies the model parameters. Finally, according to the chromosome and fitness function established for the BEN-AGA model, the optimal chromosome (the chromosome with the best fitness) is selected by the adaptive genetic operations; this chromosome corresponds to the optimal parameter combination of the BEN-AGA model. It is then loaded into the BEN-AGA model, and the prediction result is obtained through single-step prediction. The flow of the model is shown in Figure 4. As the figure shows, the specific steps are as follows:

Implementation Steps of the BEN-AGA Model.
Step 1: To normalize the chaotic time series and reconstruct the phase space to determine the embedding dimension and delay time

Step 2: To establish a BEN-AGA model and initialize its parameters ω randomly
Step 3: To encode the required weights and initialize the chromosomes in the population
Step 4: To input the observed values into the BEN-AGA model, carry out the fitness evaluation, and select the optimal chromosome by the adaptive genetic algorithm
Step 5: To iterate to the maximum number of generations, confirm the optimal chromosome, and update the parameter ω of the BEN-AGA model
Step 6: To obtain the predicted value of the chaotic time series by single-step prediction through the BEN-AGA model with the confirmed parameters
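Steps 2-5 can be sketched as a compact, generic evolutionary loop; the toy fitness function below stands in for the BEN prediction-error fitness, and the fixed crossover and mutation settings here are simplifying assumptions rather than the paper's adaptive scheme:

```python
import numpy as np

def toy_fitness(ch):
    """Stand-in fitness (must be positive for roulette): maximized at ch == 0."""
    return 1.0 / (1.0 + np.sum(ch ** 2))

def evolve(fitness, n_genes, pop_size=30, generations=60, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1, 1, size=(pop_size, n_genes))   # Step 3: init chromosomes
    best = pop[0].copy()
    for _ in range(generations):                          # Step 5: iterate
        fits = np.array([fitness(ch) for ch in pop])      # Step 4: evaluate
        best = max([best, pop[np.argmax(fits)].copy()], key=fitness)
        probs = fits / fits.sum()                         # roulette-wheel selection
        pop = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        for i in range(0, pop_size - 1, 2):               # arithmetic crossover
            r = rng.random()
            a, b = pop[i].copy(), pop[i + 1].copy()
            pop[i], pop[i + 1] = r * a + (1 - r) * b, (1 - r) * a + r * b
        mut = rng.random(pop.shape) < 0.05                # mutation (fixed rate here)
        pop[mut] += rng.normal(0.0, 0.1, size=mut.sum())
        pop[0] = best                                     # elitist retention
    return best
```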

Parameter Setting.
The Rossler system is a classical model for studying chaotic systems. It is a three-dimensional first-order ordinary differential equation in the time functions x, y, and z. The parameter values in the equation are selected as A = 10, b = 8/3, and c = 28; the initial values are x(0) = 1, y(0) = 1, and z(0) = 1; and the sampling time is t = 0.01. The fourth-order Runge-Kutta method is used to generate 10,000 data points, and the first component x is taken as the chaotic time series, recorded as x(t). The first 2,000 points are discarded to ensure that the system is completely chaotic; 4,000 points are taken as training samples, and the last 4,000 points are used as test samples. Phase space reconstruction is performed on the sample data generated from the Rossler chaotic time series, and the required embedding dimension and delay time are calculated. The delay time τ obtained by the complex autocorrelation method [17] is 7; the embedding dimension obtained by the method of Cao [18] is 6. According to the second section of this article, the single-step prediction method is used, with the prediction step size set to η = 1. A six-dimensional delay vector X(t) is constructed from x(t) as the input of the BEN-AGA model, and the output of the model is the one-dimensional value x(t + 1).
Since the input data are six-dimensional vectors, the number of input nodes of the BEN model is 6; the number of output nodes is 1; the number of hidden layer nodes is determined by cross-validation to be 12; and the initial network weights are random numbers in [−1, 1]. Next, the parameters of the adaptive genetic algorithm are set. The chromosome length is determined by the number of BEN network parameters. In order to get the best performance from the genetic algorithm, its hyperparameters are worth studying.
There are four key parameters in the genetic algorithm: population size, number of iterations, crossover probability, and mutation probability. In this paper, full factorial analysis is used to obtain a reasonable combination of the four genetic algorithm parameters. The levels of the four parameters are designed as follows: two levels for population size ∈ {50, 100}; two levels for number of iterations ∈ {50, 100}; three levels for crossover probability (maximum/minimum) ∈ {0.8/0.3, 0.75/0.3, 0.9/0.25}; and three levels for mutation probability (maximum/minimum) ∈ {0.01/0.001, 0.1/0.01, 0.02/0.009}. A full factorial analysis needs 2^2 · 3^2 = 36 experiments. Some experimental results are shown in Table 1. The reasonable parameter combination finally selected is shown in Table 2.
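The full factorial design above can be enumerated directly; reading each probability level as a (maximum, minimum) pair is our interpretation of the level lists:

```python
from itertools import product

# Hyperparameter levels for the full factorial design (2 * 2 * 3 * 3 = 36 runs).
pop_sizes = [50, 100]
iterations = [50, 100]
crossover = [(0.8, 0.3), (0.75, 0.3), (0.9, 0.25)]      # (p_cross_max, p_cross_min)
mutation = [(0.01, 0.001), (0.1, 0.01), (0.02, 0.009)]  # (p_mut_max, p_mut_min)

# Every combination of the four factors, one tuple per experiment.
grid = list(product(pop_sizes, iterations, crossover, mutation))
```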

Simulation Result.
This article compares and analyzes the prediction results of the BEL model [9], the BEN-AGA model, and the brain emotional network combined with the genetic algorithm (BEN-GA) model. Figures 5-7 show the prediction results and errors of the BEL, BEN-GA, and BEN-AGA models on the sample chaotic time series data.
From Figure 5, the BEL model can roughly predict the trend of the chaotic time series, but the prediction deviation at the peaks and fluctuations of the curve is too large, and the error range is within [−2.5, 2.5]. Figure 6 shows that when the BEN model and the genetic algorithm are used to predict the chaotic time series, there is a significant improvement. The forecast deviation at peaks and fluctuation points is significantly reduced, and the error range is also reduced to [−0.5, 0.5], but the forecast curve still cannot clearly reflect the trend of the chaotic time series, which is caused by the genetic algorithm falling into a local optimal solution when optimizing a nonlinear model. From Figure 7, when the BEN model is combined with the adaptive genetic algorithm for prediction, it is obvious that the prediction curve fully reflects the trend of the chaotic time series and fits the observation curve best; the prediction deviation at peaks and fluctuations is small, and the error range is further reduced to [−0.25, 0.25]. In summary, the prediction effect of the BEN-AGA model is better than that of the other two models, and it has higher prediction accuracy.
The smaller prediction error of the BEN-AGA model indicates that the optimization ability of the adaptive genetic algorithm is higher than that of the genetic algorithm and that its convergence speed is faster. Figure 8 presents the iterative trajectories of the fitness values of the adaptive genetic algorithm and the genetic algorithm.
It can be seen from Figure 8 that the fitness value of the adaptive genetic algorithm has converged by the 50th generation, reaching 0.0052, whereas the fitness value of the genetic algorithm does not converge until the 88th generation, reaching 0.0061. The adaptive genetic algorithm converges faster and attains a better fitness value, so the BEN-AGA model has a faster iteration speed, higher prediction efficiency, and higher accuracy.
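The exact form of the dynamic crossover and mutation probabilities is not given in this section; a common adaptive scheme in this style (shown here only as an illustrative sketch, assuming a larger-is-better fitness; for the error-based fitness used in this paper the comparison would be inverted) interpolates between the maximum and minimum probabilities according to how fit a chromosome is relative to the population:

```python
def adaptive_prob(p_max, p_min, f, f_avg, f_max):
    """Interpolate a crossover/mutation probability between p_max and p_min.

    Chromosomes better than the population average (f closer to f_max) get a
    lower probability, which preserves good solutions; below-average
    chromosomes keep the maximum probability to encourage exploration.
    """
    if f <= f_avg or f_max == f_avg:
        return p_max
    return p_max - (p_max - p_min) * (f - f_avg) / (f_max - f_avg)

# Example with hypothetical values: fitness 0.9 in a population whose
# average fitness is 0.5 and best fitness is 1.0.
p_c = adaptive_prob(0.7, 0.1, 0.9, 0.5, 1.0)  # 0.7 - 0.6 * 0.8 = 0.22
```

A below-average chromosome would instead receive the full `p_max`, which is what keeps the search from stagnating in a local optimum.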

Error Analysis.
In order to better evaluate the effectiveness of the three models, two error evaluation indicators are introduced, namely, the mean square error (MSE) and the mean absolute percentage error (MAPE):

MSE = (1/N) ∑ (y(t) − ŷ(t))²,

MAPE = (100%/N) ∑ |y(t) − ŷ(t)| / |y(t)|,

where y(t) is the actual observation, ŷ(t) is the predicted value, and N is the number of data points. The error evaluation results of the three models are shown in Figure 9, which lists the statistical comparison of the MSE and MAPE of the BEL, BEN-GA, and BEN-AGA models under different sample sizes. From the figure, it can be clearly seen that the BEL model performs poorly at every sample size, and its prediction error is much larger than those of the BEN-AGA and BEN-GA models. Its minimum mean square error, 0.4784, occurs at a sample size of 4,000, and its minimum mean absolute percentage error, 35.34%, is also obtained at a sample size of 4,000; the model's prediction error is unstable with respect to the sample size. The BEN-GA model performs better than the BEL model: its minimum mean square error is 0.1074, its minimum mean absolute percentage error is 15.63%, and its prediction error is relatively stable and less affected by the number of samples. The BEN-AGA model performs best: its minimum mean square error of 0.0460 is obtained at a sample size of 1,000, and its minimum mean absolute percentage error of 11.59% at a sample size of 4,000; the stability of its prediction error is not affected by the sample size.
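The two indicators are standard and can be computed directly from the observation and prediction sequences; a minimal sketch (the sample values are illustrative, not the paper's data):

```python
def mse(y_true, y_pred):
    """Mean square error: average squared deviation between observation and prediction."""
    n = len(y_true)
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    n = len(y_true)
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y_true, y_pred)) / n

obs = [1.0, 2.0, 4.0]
pred = [1.1, 1.8, 4.4]
print(round(mse(obs, pred), 4))   # mean of [0.01, 0.04, 0.16] = 0.07
print(round(mape(obs, pred), 2))  # mean of [10%, 10%, 10%] = 10.0
```

Note that MAPE is undefined when an observation is exactly zero, which is why the sunspot data are normalized before training but denormalized before computing errors.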
As shown in Figure 10, the error curve of the BEN-AGA model fluctuates within a small range near zero for the different samples, without large changes, indicating that the prediction is good: the prediction trend can be accurately restored, and no excessive errors arise at extreme values. The error curve of the BEL model fluctuates violently and changes significantly, indicating that its prediction is only average: the prediction trend cannot be restored well, and over- or underfitting easily causes large errors at extreme values. The BEN-GA model lies between the two, showing some fluctuation near zero, and can roughly restore the forecast trend.
In summary, compared with the BEL and BEN-GA models, the BEN-AGA model has higher prediction accuracy and smaller prediction error and can more accurately reflect the prediction trend.

Prediction Analysis of Sunspot
To apply the model to the prediction of the sunspot chaotic time series, we select the annual sunspot data from 1700 to 1998 as the sample and the annual sunspot number index as the sample observations. The sunspot observations are normalized, and the phase space is then reconstructed. The delay time τ obtained by the complex autocorrelation method [17] is 3; the embedding dimension obtained by the method of Brown [18] is 4. After phase space reconstruction, 280 sets of data are generated: the first 180 sets are selected as training samples, and the last 100 sets are used as test samples.
After phase space reconstruction, the input data is a four-dimensional vector, so the number of input nodes of the BEN model is 4, the number of output nodes is 1, the number of hidden-layer nodes determined by cross-validation is 12, and the initial network weights are random. The chromosome length, determined by the number of BEN network parameters, is 11. The other parameters are determined through tenfold cross-validation and experience: the population size is 100, the maximum evolutionary generation is 50, the maximum crossover probability is p_c,max = 0.7, the minimum crossover probability is p_c,min = 0.1, the maximum mutation probability is p_m,max = 0.1, and the minimum mutation probability is p_m,min = 0.003. The chromosome is initially coded in the range [−1, 1]. After BEN-AGA model training and prediction, the obtained data are denormalized to obtain the sunspot prediction result.
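The phase space reconstruction step can be sketched as a standard delay embedding; this is a minimal illustration, assuming the common convention that the next value of the series serves as the one-step-ahead target:

```python
def delay_embed(series, dim, tau):
    """Reconstruct the phase space of a scalar series with delay tau and
    embedding dimension dim. Each row [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]
    is one input vector; the following value is the one-step-ahead target.
    """
    span = (dim - 1) * tau
    inputs, targets = [], []
    for t in range(len(series) - span - 1):
        inputs.append([series[t + k * tau] for k in range(dim)])
        targets.append(series[t + span + 1])
    return inputs, targets

# With the sunspot settings from the text (tau = 3, dim = 4), a series of
# length N yields N - (dim-1)*tau - 1 input/target pairs.
X, y = delay_embed(list(range(20)), dim=4, tau=3)
print(len(X), X[0], y[0])  # 10 [0, 3, 6, 9] 10
```

Each four-dimensional row of `X` corresponds to one input vector of the BEN model, which is why the number of input nodes is 4 and the output node is 1.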

Sunspot Series Prediction Result.
In the sunspot sequence prediction, Figure 11 shows the prediction curve and prediction error curve for the 180 training data. It can be seen from Figure 11(a) that the red prediction curve fits the blue actual observation curve very well, indicating that the BEN-AGA model describes the changing trend of the sunspot number well. From the error curve in Figure 11(b), the prediction error at each point fluctuates within a small range near zero, indicating that the prediction error of the BEN-AGA model is small. Figure 12 shows the prediction curve and prediction error curve for the 100 test data. It can be seen from Figure 12(a) that the red prediction curve fits the blue actual observation curve well and, correspondingly, from Figure 12(b) that the error again fluctuates near zero. Based on Figures 11 and 12, the prediction curves for both the training data and the test data fit the observation curves well, indicating that the BEN-AGA model reflects the changing trend of the sunspot sequence well, and the error curves all fluctuate near zero, indicating that the BEN-AGA model has high prediction accuracy. In order to further illustrate the fitting ability of the BEN-AGA model, the correlation coefficient between the predicted values and the observed values is calculated. The correlation coefficients on the training set and the test set are 0.8986 and 0.8918, respectively, showing that the BEN-AGA model fits the sunspot sequence well.
In order to compare and verify the performance of the BEN-AGA model in terms of prediction accuracy, stability, and speed, under the same experimental parameter settings the BEL model [9], the BEL-GA model [14], and the BEN-GA model are used for single-step prediction of the sunspot sequence. Each model was run independently 20 times, and the averages of M_se, M_ad, M_ape, and the running time were obtained statistically. M_se, M_ad, and M_ape serve as prediction accuracy indicators, and stability is measured by the variance of M_ape. The results are shown in Table 3.
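As a concrete illustration of how the accuracy and stability indicators are aggregated over repeated runs (the run values below are hypothetical, not the paper's results):

```python
from statistics import mean, variance

# Hypothetical MAPE values from repeated independent runs of one model.
mape_runs = [11.2, 11.8, 11.5, 12.0, 11.4]

# Accuracy indicator: the average MAPE over all runs.
avg_mape = mean(mape_runs)
# Stability indicator: the variance of MAPE over runs (smaller = more stable).
mape_var = variance(mape_runs)
print(round(avg_mape, 2), round(mape_var, 4))
```

A model whose MAPE variance is small across the 20 independent runs is insensitive to its random initialization, which is exactly the stability property claimed for BEN-AGA.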
It can be seen from Table 3 that, in terms of the accuracy indicators M_se, M_ad, and M_ape, the statistical values of the BEN-AGA model are the smallest of all the models, indicating that the BEN-AGA model has the highest prediction accuracy. In terms of time complexity, the BEL model has the lowest complexity, while the other three models are comparable. In terms of running time, the BEL model is the fastest; because the BEN-AGA model adds the iterative calculation of the adaptive genetic algorithm, its running time is slightly longer than that of the basic BEL model, but the difference is small. From the perspective of stability, the variance of M_ape of the BEN-AGA model is the smallest, so the BEN-AGA model is more stable than the other models. In summary, the accuracy and stability of the BEN-AGA model in predicting the sunspot number sequence are better than those of the other models, and its computation time is also better than that of most of them. Therefore, the BEN-AGA model is effective for sunspot sequence prediction and superior to the other comparison models.

Comparison of Performance of Different Models.
In order to assess the prediction performance of the model proposed in this paper, it is compared with a deep learning network model, an SVM model, an LM-BP neural network, an MLP-BP neural network, the basic BEL model [14], and a dendritic neuron model [27, 28]. The Rossler chaotic system is selected to produce the original chaotic sequence, and single-step predictions are made for that sequence. The model parameters were set according to the experiments above. Each model was run independently 25 times, and the averages of MSE, MAD, MAPE, and the running time were obtained by statistical analysis of the results. MSE, MAD, and MAPE serve as prediction accuracy indicators, and stability is measured by the variance of MAPE. The forecast results of the different models for the chaotic sequence are shown in Table 4, which lists the statistical indicators of the BEN model, the LM-BP neural network model, the basic BEL model, the SVM model, and the deep learning network. We found that the MSE, MAD, and MAPE of the BEN model were all smaller than those of the MLP-BP neural network model, the LM-BP neural network model, the basic BEL model, and the SVM model. This shows that the prediction accuracy of the BEN model is very good, higher than that of common neural network models, though still below that of the deep learning network. From the perspective of prediction stability, the variance of MAPE of the BEN model is very small, indicating very good prediction stability; compared with the other models, the stability of the BEN model is lower than that of the deep learning network but better than that of the other common neural network models. In terms of running time, the BEL model runs far faster than the other models, and the BEN model also runs faster than most of them; however, the BEN model runs longer than the BEL model because it performs the iterative calculation of the genetic algorithm on top of the BEL model.
For chaotic time series prediction, the BEN model is superior to traditional network models in terms of accuracy, stability, and running time. Its prediction accuracy is slightly lower than that of the deep learning network and the dendritic neuron model, but its structure is simpler and its computation time shorter.


Conclusion
Based on research into the existing brain emotional learning (BEL) model, this paper proposes the BEN model by changing the linear structure into a nonlinear one and assigning output weights to describe the influence of the different structures. We then propose the BEN-AGA model by combining the adaptive genetic algorithm with the BEN model. The weights to be optimized are encoded as chromosomes; the crossover and mutation processes are controlled by the designed dynamic crossover and mutation probabilities; and the chromosomes are evaluated by the fitness function to select the best parameters. Compared with the original model, the BEN-AGA model has stronger approximation and calculation ability as well as higher prediction accuracy and stability. The model is applied to Rossler chaotic time series prediction and sunspot sequence prediction, and the experimental results show that the BEN-AGA model proposed in this paper can effectively predict the sunspot sequence. Compared with the model before improvement, it offers higher prediction accuracy, faster convergence, and higher stability. The experimental results objectively verify the effectiveness and rationality of the improved method and the established model. Comparing the chaotic time series prediction results with those of other models shows that the BEN model has better prediction accuracy, less computation time, and stronger stability than traditional neural network models. Compared with the deep learning network, its prediction accuracy is still slightly lower, but its structure is simpler and its computation time shorter. In summary, the BEN model has the advantages of high prediction accuracy, strong stability, and low running time, and the model is simple and easy to apply in practical prediction systems. It has great potential and broad prospects for application in geomagnetic early warning and short-term traffic flow prediction.
Data Availability
The simulation experiment data are generated by solving the Rossler chaotic system with the fourth-order Runge-Kutta method. The experiment also uses the annual sunspot data from 1700 to 1998 as a sample. The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.