Nonlinear ARIMA Models with Feedback SVR in Financial Market Forecasting



Introduction
Financial markets are volatile and unpredictable, and all market participants are exposed to uncertainty about profit and loss. The factors affecting their risk are equally complex and volatile, making patterns difficult to grasp. Over the past decades, there have been several financial crises of a global nature, such as the Asian financial crisis in 1997, the United States subprime mortgage crisis in 2008, and the global economic crisis. Financial crises have a contagious linkage effect, affecting a wide range of countries and industries, and the occurrence of a financial crisis has a catastrophic impact on the operation of an entire national economic system. The advent of the era of big data has created a revolution in information technology, especially the widespread use of cloud computing, artificial intelligence, and big data technologies in traditional industries [1]. The "Internet+" model is blossoming everywhere, and artificial intelligence empowers all industries. Internet financial data is massive and rapidly updated, and traditional analysis methods can no longer meet the demand, so the original statistical models need to be innovated. Facing this huge volume of data, maximizing the value extracted from it is a necessity for today's data-driven decision-makers, and the key lies in how to extract information useful for decision-making. The core development potential of the financial industry will in the future depend significantly on innovation in data extraction, transformation, analysis, and mining [2]. At this stage, artificial intelligence technologies are maturing, and methods such as machine learning, artificial neural networks, and reinforcement learning have been successfully applied to imaging, speech, and autonomous driving.
Deep learning techniques essentially comprise a new generation of artificial neural networks that use statistical modeling arguments to overcome problems that plagued traditional networks, such as the vanishing gradient problem and the overfitting problem. Specifically, forecasting generally refers to estimating a regression model from existing sample data and then using that model to calculate the possible values of new sample data. The development of deep learning techniques has led to their widespread use in the study of financial market series and related problems, where deep learning's excellent nonlinear mapping capability and fitted generalization make it better able to predict and handle the volatility, nonlinearity, correlation, and time dependence of financial markets in dynamic forecasting.
When financial risk arises, it affects not only the procurement and management of monetary funds in the areas of production and distribution, but also all aspects of social redistribution.
Therefore, effective prevention of financial risks can maintain the smooth operation of the economy and society. Risk hazards drive the application of science and technology, the application of science and technology drives the development of financial innovation, and the application of big data and artificial intelligence to financial innovation is of landmark significance for the defense against risk [3]. In this paper, a combined linear and nonlinear model is used to forecast and empirically analyze the financial market: a linear ARIMA model and a nonlinear SVR model are selected for combined forecasting and compared with each single model.

Related Work
Time series analysis has evolved from linear to nonlinear models and from parametric to nonparametric models. ARIMA models have been widely used among linear models because of their desirable statistical properties and the well-accepted Box-Jenkins modeling method, and they have been applied to a huge number of time series. Literature [4] successfully predicted the monthly consumer price index (CPI) by constructing an ARIMA model, which can provide a quantitative basis for the effective implementation of price control policies at the macro level. Literature [3] applied the SARIMA model to forecast the trend of the CPI in China. Mao Youfang predicted the future return and direction of "BalancePay" through an ARIMA model based on its return per 10,000 shares. In the paper [5], the average price of second-hand houses in Guangzhou and Shenzhen was taken as the object of study, and an ARIMA model was used to make rolling forecasts of future house prices with high prediction accuracy, providing continuous forecasts of the average price of second-hand houses and a reference for house buyers and sellers. In the literature [6], three ARIMA models were used to forecast the global average temperature: the basic ARIMA model, the trend-based ARIMA model, and the ARIMA model based on the wavelet transform, among which the trend-based ARIMA model had the best prediction performance. In studying exchange rate series, literature [7] used the ARMA model and processed the data by differencing to eliminate the noise component and improve the prediction accuracy of the ARMA model. Xin Ouyang used the ARIMA model to forecast two exchange rate series of different lengths and found that the ARIMA model was effective for both, with the short sample giving higher forecasting accuracy than the long sample.
In the paper [8], the weekly average of the RMB-USD exchange rate was forecast by an ARIMA model; it was found that the RMB tended toward appreciation, and the prediction performance was satisfactory. Literature [9] takes the series of midpoint prices of the RMB-USD exchange rate as the research object and forecasts its short-term trend with an ARIMA model, and the prediction result reflects the short-term fluctuation of the exchange rate. Literature [10] was the first to use support vector machines to analyze financial time series correlation problems. Literature [11] proved that the support vector machine method is fully applicable to analyzing and forecasting financial time series. Since then, more and more researchers have used the support vector machine approach for forecasting financial time series. Among them, in the forecasting study of the SSE Composite Index in literature [12], the original data were denoised by the wavelet transform, and the support vector machine method was then used for multistep forecasting of the SSE Composite Index series; both the direct forecasting method and the recursive forecasting method obtained good results. Literature [13] utilized a new nonparametric support vector regression (SVR) method to predict exchange rate time series variables based on a nonlinear ARI model and compared the prediction results with those of maximum likelihood estimation (MLE) and an artificial neural network (ANN). Literature [14] selected the daily exchange rates of four currencies, those of China, India, Switzerland, and Korea, and predicted the exchange rate series based on the nonlinear ARI model using the SVR model; comparing the prediction results with MLE and ANN showed that the SVR model balances fitting and prediction, while MLE and ANN are more inclined toward in-sample fitting, and thus SVR gives better prediction results.
Literature [15] constructs a new exchange rate forecasting index system based on traditional exchange rate determination theory, uses the support vector machine method to forecast and empirically study the RMB-USD exchange rate, and compares its forecasting results with those of an autoregressive model and a BP neural network model; the proposed system has a more complete theoretical basis and better reflects the economic factors affecting exchange rate changes. Literature [16] categorizes the value-at-risk (VaR) measurement methods of financial markets into three types: the first is parametric measurement based on RiskMetrics and the GARCH model family; the second is semiparametric measurement based on the CAViaR model and extreme value theory; the third is nonparametric measurement, mainly based on two major methods, the historical simulation method and the Monte Carlo simulation method. Among these, except for the CAViaR model, which directly calculates the VaR at a specific quantile through regression techniques, the methods compute the VaR indirectly, and among the indirect methods the GARCH model family is the most widely used. The root of financial market risk measurement is that the volatility of financial product or portfolio returns is not a fixed constant but is time-varying: the return series presents different volatilities (standard deviations of the distribution) at different points in time, and this volatility changes over time. Thus, volatility, which reflects the shape of the distribution of financial returns, is the key variable in how financial asset pricing models can effectively construct portfolios and supervise and manage the risks in financial markets. SVM can approximate arbitrary nonlinear mapping relations, and the use of kernel functions can effectively avoid the computational complexity caused by the high dimensionality of the feature space.
In addition to this, VaR cannot be separated from the choice of financial asset return volatility models.

Nonlinear ARIMA Models with Feedback SVR in Financial Market Forecasting

Nonlinear ARIMA Model with Fused Feedback SVR.
A time series is a series of values of the same statistical indicator arranged in chronological order of occurrence. Its form is very simple, but it is used very widely; the development of many things and phenomena can be expressed by it, for example, annual precipitation in meteorological statistics, the number of finished products in each quarter of industrial production, and the annual gross domestic product of the national economy; the fields of study involving time series are innumerable. Research on time series generally has two purposes: one is to build a model through which to understand the mechanism that generates the series, and the other is to predict the level the series may reach in the next period based on its historical data and other factors influencing the series. The ARIMA model is one of the most common methods of time series analysis. There are two main methods for testing the stationarity of a series: one is the graphical test, in which the judgment is derived from the characteristics shown by the autocorrelation plot and the time series plot; the other is hypothesis testing by constructing a test statistic. Because of its simplicity and ease of observation, the graphical test is widely used for judging stationarity. However, the graphical test is easily influenced by subjective views, so it is better to use the statistical test as an auxiliary judgment alongside it. A level forecast is a forecast of the possible future values and trends of economic and financial variables; the difference between the predicted value and the true value is the forecast error. The most widely used stationarity test at present is the unit root test.
The nature of a stationary time series is that its mean and variance are constants, so a stationary series always fluctuates up and down around a constant value, and the fluctuation range is bounded. If a series shows a significant trend or periodicity, it is usually nonstationary. The horizontal and vertical axes of the autocorrelation plot indicate the number of lag periods and the autocorrelation coefficient, respectively. Usually, a stationary series has a certain degree of short-term correlation in terms of the autocorrelation coefficient; that is, the autocorrelation coefficient of a stationary series tends to decay rapidly toward zero as the number of lag periods k increases. In contrast, for a nonstationary time series, the autocorrelation coefficient decays to zero much more slowly.
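This contrast in autocorrelation decay can be checked numerically. The sketch below, assuming NumPy is available, computes the sample autocorrelation function for a stationary AR(1) series and for a random walk built from the same noise; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelation coefficients for lags 1..nlags."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

rng = np.random.default_rng(0)
eps = rng.standard_normal(2000)

# Stationary AR(1) with coefficient 0.5: the ACF decays roughly like 0.5**k
ar1 = np.zeros(2000)
for t in range(1, 2000):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]

# Random walk (unit root): the ACF drifts toward zero only very slowly
rw = np.cumsum(eps)

acf_ar1 = sample_acf(ar1, 10)
acf_rw = sample_acf(rw, 10)
print(acf_ar1[9], acf_rw[9])
```

By lag 10 the AR(1) autocorrelation is essentially zero, while the random walk's remains close to one, which is exactly the pattern the graphical test looks for.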
Firstly, there exist objects to be predicted, and each object is linked to its corresponding time to form an ordered sequence of data (a random sequence); this random sequence is described by a reasonably accurate mathematical model. This model can then be identified, and once identified, it can predict future values of the sequence based on its past and present values; this is the basic idea of the ARIMA model [17]. The model is usually abbreviated as the ARIMA(p, d, q) model. A plain ARMA model is only applicable when the time series is stationary; if the series is nonstationary or seasonal, the ARMA model is not applicable and cannot be used for prediction. The main idea of the ARIMA model is to build a differenced autoregressive moving average model, written ARIMA(p, d, q), where d denotes the number of differences performed during the construction of the model, p denotes the order of the autoregressive part, and q denotes the order of the moving average part.
The ARMA model is combined with the difference calculation to form a new combined model, called the autoregressive integrated moving average model: if a nonstationary series, after differencing to an appropriate order, becomes a stationary series, then an ARMA model can be fitted to the differenced series.
Usually, if there is a clear linear trend in the time series, a first-order difference can generally achieve stationarity; if there is a clear curvilinear trend, its influence can be removed by the difference calculation; and if there is a fixed period in the time series, the difference is generally taken at the period length.
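These rules can be illustrated with a short NumPy sketch (the series and names are illustrative): one first-order difference turns a linear trend into a constant series, and differencing at the period length removes a fixed seasonal pattern.

```python
import numpy as np

t = np.arange(24, dtype=float)

# Linear trend: a single first-order difference yields a constant series
linear = 3.0 + 2.0 * t
d1 = np.diff(linear)  # every value equals the slope, 2.0

# Fixed period of length 12: differencing at lag 12 removes the cycle
seasonal = np.tile([5.0, 1.0, -2.0, 0.0, 5.0, 1.0, -2.0, 0.0, 5.0, 1.0, -2.0, 0.0], 2)
d12 = seasonal[12:] - seasonal[:-12]  # all zeros

print(d1[:3], d12[:3])
```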
If there is some pattern among the disturbance terms of the time series, it means the useful information in the original series has not been sufficiently extracted; this leads to inaccurate standard deviations calculated by ordinary least squares, and if a lagged dependent variable is present, the OLS estimates become biased and inconsistent, and the significance tests of the regression's parameter estimates are no longer trustworthy. The serial correlation test for time series can be done by the LM test. In general, the LM test remains valid in the presence of lagged variables in the series, and the Lagrange multiplier method is still a good test for higher-order autocorrelation.
Theoretically, repeated differencing of a nonstationary time series can make it stationary and allow useful information to be extracted from the original series efficiently, but not an infinite number of times: too many orders of differencing affect the valid data information, as some of it is lost.
As shown in Figure 1, the steps in ARIMA modeling are as follows.
(1) Determine the stationarity of the series using the most basic tests for time series. If the test is not passed, the series is nonstationary, and a difference calculation is required to reach a stationary state. (2) If the original data series is nonstationary with a clear trend, differencing must be applied; if there is heteroskedasticity in the data, further technical processing is required. Correlation tests and analysis are then performed on the series formed by the d-order difference, and the values of p and q are determined from the plots of the test results, choosing as few parameters as possible: if the partial autocorrelation function of the differenced stationary series is truncated and the autocorrelation function is trailing, choose an AR model; if the partial autocorrelation function is trailing and the autocorrelation function is truncated, choose an MA model; if both are trailing, choose an ARMA model [18]. (3) Input the AR(p) and MA(q) variables to estimate the equations. (4) Test the residual series of the model results to diagnose whether they are consistent with randomness and within the confidence band requirement. (5) Conduct hypothesis tests to diagnose whether the residual series is white noise. (6) Predict the data based on the tested model, observed through existing precise evaluation indicators and judged by the accuracy of the model's predictions.
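Step (3), estimating the AR part, can be sketched minimally by regressing the series on its own lags. In practice a library such as statsmodels would handle the full ARIMA(p, d, q) fit; the bare NumPy version below, whose names are all assumptions, shows only the least-squares idea on a noise-free AR(1) series.

```python
import numpy as np

def fit_ar(x, p):
    """Estimate AR(p) coefficients by least squares on lagged regressors."""
    x = np.asarray(x, dtype=float)
    # Column j of the design matrix holds the series lagged by j+1 steps
    X = np.column_stack([x[p - j - 1 : len(x) - j - 1] for j in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Noise-free AR(1) with coefficient 0.8: least squares recovers it exactly
x = [1.0]
for _ in range(50):
    x.append(0.8 * x[-1])
coef = fit_ar(x, 1)
print(coef)  # ~[0.8]
```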
The Support Vector Machine Regression (SVR) model is the model obtained when the Support Vector Machine (SVM) model analyzed above is used for regression, and it is one of the models chosen for this paper. The development of quantitative forecasting methods can be roughly divided into three phases in order of appearance: the structural econometric model stage, the time series analysis stage, and the intelligent forecasting stage. The specific construction principle of the SVR model is, for a given data set {(x_i, y_i), i = 1, ..., n}, to find a regression function f(x) whose deviation from each observed y_i stays within a prescribed tolerance while the function remains as flat as possible.
In general, an ordinary regression model calculates the loss from the difference between the fitted and true values, while the support vector machine regression model allows a certain tolerance deviation, and the loss is counted only if the absolute difference between the fitted and true values exceeds the tolerance [19]. Intuitively, an interval band of total width 2ε is constructed around the original function f(x), bounded by f(x) + ε and f(x) − ε; the prediction is considered correct if the fitted value falls within this band, and only if it falls outside the band is the prediction considered biased and a loss calculated. Thus, with slack variables ξ_i and ξ_i* for deviations above and below the band, the SVR can be expressed as

min_{w, b, ξ, ξ*} (1/2)||w||^2 + C Σ_{i=1}^{n} (ξ_i + ξ_i*)
s.t. y_i − (w · x_i + b) ≤ ε + ξ_i, (w · x_i + b) − y_i ≤ ε + ξ_i*, ξ_i ≥ 0, ξ_i* ≥ 0.

Finally, the kernel function K(·, ·) is introduced into support vector machine regression; that is, the SVR model can eventually be expressed as

f(x) = Σ_{i=1}^{n} (α_i − α_i*) K(x_i, x) + b,

where α_i and α_i* are the Lagrange multipliers of the dual problem. In the modeling process of SVR, the choice of kernel function type, the size of the penalty parameter C, and the value of the insensitive loss function parameter ε play a crucial role in the final model. Different kernel functions form different algorithms; the commonly used kernels are the linear kernel, the polynomial kernel, the Gaussian kernel, and the Sigmoid kernel. The insensitive loss parameter affects the number of support vectors: the smaller its value, the higher the model accuracy. A larger penalty factor C represents less tolerance for error, and overfitting may occur as C tends to infinity; if C is particularly small or tends to 0, underfitting may occur. In practice, C defaults to 1 and generally requires cross-validation to choose a suitable value; usually, C needs to be smaller when there are more noise points.
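The ε-insensitive loss described above can be written directly. The tiny sketch below is a pedagogical stand-in, not the full SVR optimization (which libraries such as scikit-learn implement); it shows only how deviations inside the tolerance band contribute no loss.

```python
def eps_insensitive_loss(y_true, y_pred, eps):
    """Zero loss inside the band |y - f(x)| <= eps; linear loss outside it."""
    return max(0.0, abs(y_true - y_pred) - eps)

# Inside the band: no loss incurred
inside = eps_insensitive_loss(1.0, 1.05, eps=0.1)
# Outside the band: only the excess beyond eps is penalized
outside = eps_insensitive_loss(1.0, 1.30, eps=0.1)
print(inside, outside)  # 0.0 and approximately 0.2
```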
There are three general algorithms for parameter search in SVR, namely, grid search, the GA algorithm, and the PSO algorithm. Grid search tries every possibility, finding the best-performing parameters among all candidates by traversing them in a loop. For example, when searching over two parameters, list their possible values; if parameter a has four possibilities and parameter b has three, a 4 × 3 table is formed in which each cell contains one combination of the two parameters, and the loop traversal obtains the optimal result by evaluating the cells one by one. The final result of the grid search depends strongly on the initial data division, so cross-validation is often needed to reduce the element of chance.
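The 4 × 3 grid traversal described above can be sketched as follows, with an assumed quadratic scoring function standing in for the cross-validated model error; all names and values are illustrative.

```python
from itertools import product

# Candidate values: four options for parameter a, three for parameter b
a_values = [0.1, 1.0, 10.0, 100.0]
b_values = [0.01, 0.1, 1.0]

def validation_error(a, b):
    """Stand-in for a cross-validated error; its true optimum is a=1.0, b=0.1."""
    return (a - 1.0) ** 2 + (b - 0.1) ** 2

# Traverse every cell of the 4 x 3 grid and keep the best-scoring combination
best = min(product(a_values, b_values), key=lambda ab: validation_error(*ab))
print(best)  # (1.0, 0.1)
```

With a real model, `validation_error` would train and score the SVR on held-out folds for each (C, ε) cell.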
In practice, to improve the applicability of models, researchers usually choose multiple models to predict the same problem and select the optimal one; sometimes they also combine models to improve prediction accuracy. Traditional structural econometric models and time series forecasting methods have difficulty uncovering and analyzing large amounts of historical data of this nature and discovering the laws contained therein, whereas intelligent forecasting methods have an adaptive, self-organizing learning mechanism that is useful in this regard. For example, this paper chooses to forecast linear and nonlinear problems in financial markets by combining a feedback SVR model with a nonlinear ARIMA model. Combined model forecasting refers to using multiple forecasting methods on the same problem. The purpose of the combination is to take advantage of the strengths of the various methods and the information they provide, maximizing forecasting accuracy while reducing the risk associated with choosing a single forecasting model. Combined forecasting has a clear advantage over a single model in the confidence of the forecast results.
Both ARIMA and SVR are important models for time series forecasting, and to improve accuracy, some studies use a combination of the two. In the tandem method, ARIMA is used first to model the linear part of the time series, and SVR is then used to model the nonlinear part to obtain the combined forecast. Regarding the order in which the two models are used, some studies also suggest building the SVR model first and then fitting the ARIMA model to the SVR residuals. Existing studies in other forecasting areas, such as demand and electricity load, obtain more accurate corrected residuals when building a tandem ARIMA-SVR combination model by correcting the residuals with input variables other than the prices that affect the financial market. Firstly, the SVR model is used to forecast the nonlinear component of the series data; then the ARIMA model is used to analyze the forecast error of the SVR model, which contains linear characteristics; finally, the sum of the two single-model forecasts is the final forecast, and the process is shown in Figure 2.
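The tandem scheme can be sketched abstractly: one model fits the series, a second model fits the first model's residuals, and the final forecast is the sum of the two. In the toy version below, simple stand-ins replace SVR and ARIMA (a linear trend fit and a residual mean), so only the combination logic of Figure 2 is illustrated; every name is an assumption.

```python
import numpy as np

def tandem_forecast(y, t_next):
    """Stage 1 fits the series, stage 2 fits stage-1 residuals; forecasts sum."""
    t = np.arange(len(y), dtype=float)
    # Stage 1 (stand-in for the first model): linear trend via least squares
    slope, intercept = np.polyfit(t, y, 1)
    stage1_fit = intercept + slope * t
    residuals = y - stage1_fit
    # Stage 2 (stand-in for the residual model): mean of the residuals
    stage2_forecast = residuals.mean()
    stage1_forecast = intercept + slope * t_next
    return stage1_forecast + stage2_forecast

y = np.array([1.0, 2.1, 2.9, 4.2, 5.0])
forecast = tandem_forecast(y, 5)
print(forecast)  # close to 6, extending the trend
```

In the paper's setup, stage 1 would be the SVR fit to the nonlinear component and stage 2 the ARIMA model on its forecast errors.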

Application of Nonlinear ARIMA Models Incorporating Feedback SVR in Financial Market Forecasting.
The continuous development and evolution of financial markets and the existence of uncertainty make their predictive analysis difficult. The selection of appropriate forecasting methods for the analysis and forecasting of individual stocks or financial markets is a popular topic of common concern among researchers in various countries. Model parameters are fitted on sample data to make forecasts for out-of-sample data. The core of evaluating whether the selected model performs up to standard, reduces the forecast error, or improves forecast accuracy is a ranking comparison of the models' forecasting strength under an appropriate evaluation metric. Moreover, in actual financial market forecasting and analysis, a model may not perfectly portray the real ups and downs of a stock index. Therefore, the difficulty is not constructing the profit and loss function but building a suitable forecasting model that achieves a more accurate and scientific output under the evaluation index. The ups and downs of financial market time series and the factors affecting financial markets exhibit highly nonlinear characteristics. Therefore, a quality financial market forecasting method should have powerful features for dealing with nonlinear problems. Unfortunately, traditional forecasting models are mostly methods for solving linear problems, which are inadequate for such a complex nonlinear system as financial markets; their forecasting results are not very satisfactory and need further improvement.
The general steps to build a forecasting model for the financial market are as follows: first, obtain the transaction data and represent them as a time series; then extract features from the financial time series; finally, feed them into the forecasting model to obtain the results. The financial market is a market where the supply and demand of funds are traded through financial instruments, and trading in the financial market achieves capital circulation, price discovery, and reduction of financing costs. Financial instruments, i.e., the objects of transactions in financial markets, are mainly of two types: basic financial instruments, including stocks, and derivative financial instruments, including futures. Considering that the structure of the financial market system is developing toward diversification and complexity, and to cover financial transaction data more comprehensively, this study selects trading price data of three kinds according to the trading objects: global stock indices, the foreign exchange market, and commodities. A global stock index, i.e., the stock price index of a country, is compiled by a stock exchange or financial institution through a comprehensive analysis of individual stocks in the country, including weighted averages. Stock price indices reflect the overall situation of a country's stock market and the overall trend of the country. Currency is used to measure the value of commodities and, to some extent, reflects the capital position of countries and individuals. In the face of capital globalization, monetary policy directly affects a country's foreign and domestic trade and indirectly affects all aspects of people's lives. The volatility of exchange rates directly reflects and influences the formulation of monetary policy.
Commodities refer to basic raw material commodities, including energy, metals, and agricultural products, which are essential for human production and life. The global stock index, the foreign exchange market, and commodities are each represented. In simple terms, financial markets are where the exchange of commodities is achieved: commodities contain the basic materials for production and life, foreign exchange is the medium and contract of that exchange, and stock indices represent the overall economic situation of a country. In the study of macroeconomic and financial time series, especially in long-term analysis, the natural logarithm transformation is often used, which can reduce the degree of nonstationarity of the time series and reduce the severity of autocorrelation and multicollinearity in the time series model. Thus, the trading data of these three major markets provide a more comprehensive picture of the financial markets as a whole. This paper describes the coupling relationships between financial markets in three dimensions: the homogeneous relationship (coupling between the same kind of financial market in different countries), the nonhomogeneous relationship (coupling between different financial markets), and the autoregressive relationship (coupling between financial markets at past time nodes and at the current time node); Figure 3 shows the coupling relationships between financial markets.

Trading data in the financial markets are divided into two main categories: basic trading data, such as opening and closing prices, volume, and high and low prices; and technical indicator data, such as MACD, RSI, and KDJ. Technical indicators are designed by researchers to analyze the basic trading data. Elliott wave theory studies the inherent characteristics of the financial market and describes its basic trading data; an Elliott wave consists of the highest and lowest prices. For this reason, the data used in this study are the opening, closing, highest, and lowest prices of the global financial markets, foreign exchange, and commodities. The core of deep learning is extracting high-level feature information from data layer by layer. The prediction model built in this paper using deep belief networks achieves two functions: feature extraction and classification. In this paper, the trading price information is abstracted as a time series, and features are extracted by the deep learning model to finally output the Elliott wave pattern corresponding to the original series. Since the time to complete a wave structure is usually uncertain, the time series of the same wave pattern vary in length, but the length of the input sequence needs to be uniform. Therefore, the raw time series must be preprocessed to a fixed length before being fed into the model. This operation can also be considered a dimensionality reduction and compression of the time series; it must be performed so that the basic structural features of the data are preserved and the discarded part is the one with less impact on the global structure.
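The fixed-length preprocessing just described can be sketched with linear interpolation, assuming NumPy: variable-length wave series are resampled onto a common grid so the global shape survives while fine detail is compressed away. The function and series names are illustrative.

```python
import numpy as np

def to_fixed_length(series, target_len):
    """Resample a variable-length series to target_len points by linear interpolation."""
    series = np.asarray(series, dtype=float)
    old_grid = np.linspace(0.0, 1.0, num=len(series))
    new_grid = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(new_grid, old_grid, series)

# Two waves of different durations mapped onto the same input length
short_wave = [1.0, 3.0, 2.0, 4.0]
long_wave = [1.0, 1.5, 3.0, 2.5, 2.0, 3.0, 4.0]
a = to_fixed_length(short_wave, 8)
b = to_fixed_length(long_wave, 8)
print(len(a), len(b))  # 8 8
```

The endpoints of each wave are preserved exactly, which keeps the wave's overall start-to-end structure intact.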

Experimental Verification and Conclusions
At present, research applying artificial intelligence techniques such as the SVR and ARIMA models to Elliott wave theory to predict future movements in financial markets is still in its infancy, and therefore a standard dataset of Elliott wave series has not yet emerged. In this study, a total of 5822 Elliott wave samples were drawn from the historical price data of 18 traded instruments in three types of markets: global stock indices, foreign exchange markets, and commodities. Among them, global stock indices account for 26.7% of the total sample, the foreign exchange market for 30.7%, and commodities for 42.6%. In the sample set, the sample sizes of the eight types of Elliott waves are 121, 92, 126, 118, 83, 96, 72, and 84, respectively. As mentioned in Section 4.5, adjustment waves tend to be longer in duration than driving waves, hence the higher sample size for mode 6. Given the timing of adjustment waves and the commonness of sawtooth shapes, mode 3 has the largest sample size. Figure 4 illustrates the sample statistics for the four specific types of wave patterns.
The SVR-ARIMA model's training input variable input_train is a matrix of 3400 rows and 44 columns, and the training output vector contains only the subsequent wave pattern samples. The training samples are input into the BPNN model for learning, and the simulation results of the SVR-ARIMA model's fit to the training samples are shown in Figure 5. The blue curve is the prediction fitting curve of the training sample, the red curve is the actual closing-price curve of the CSI 300 index, and the yellow curve is the training-sample prediction error of the BPNN model. Comparing the curves shows that the training effect of the SVR-ARIMA model is good. Because of their effectiveness in characterizing the historical patterns of time series data, models with small lag lengths have to a considerable extent replaced structural econometric models as popular approaches to estimating and forecasting macroeconomic and financial time series. From Figure 5, it can be seen that the SVR-ARIMA model generates better-quality prediction functions after training, and the fitting error on the training samples is smaller. The traditional support vector machine approach of resampling the sample and weighting each transaction datum equally has proved in practice to spend too much time on model training; because financial market data are time-sensitive, the model requires continuous retraining on the market to remain valid. This is too slow relative to the applied analysis and high-frequency trading timescales of machine learning to meet the needs of the trading market. Therefore, this paper constructs a nonlinear ARIMA model with feedback SVR, which meets the timeliness requirements of financial market data by time-weighting the sample data, while also reducing the complexity, difficulty, and time of model training.
When using a weighted support vector machine for model fitting, no resampling of the data is required. As shown in Figure 6, the model can automatically perform sample selection through time weighting, which suits the high timeliness requirements of financial markets: data closer to the present receive higher weights, while data farther from the present receive lower weights or even no weight. This reduces the data-preprocessing requirements of model training and also allows the real market data structure to be observed. The weighted support vector machine is also a binary model that must be transformed into a three-class classification model, so the samples use the one-against-one approach for multiclass training, and the model only needs to construct three binary models. The one-class feature system and the three-class feature system are both used for training the input data, which reduces the complexity of the feature system and ensures training efficiency while making full use of the feature information. Observing the training results above, we find that their pattern closely resembles that of the traditional support vector machine. That is, under a uniform feature system, the prediction effect worsens as the prediction period grows; and under a uniform prediction period, the three-class feature system consistently trains better than the one-class feature system. The shorter the period, the better the fitting and analysis effect under a uniform feature system.
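The one-against-one scheme mentioned above trains k(k-1)/2 binary SVMs for k classes, so with three classes exactly three binary models are needed, as the text states. A minimal sketch with scikit-learn (the three synthetic market-state classes are hypothetical):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Three toy classes (e.g. up / sideways / down market states), 40 samples each.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(40, 4)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 40)

# One-against-one: for k classes, k*(k-1)/2 pairwise binary SVMs are trained.
# With k = 3 that is exactly the three binary models mentioned in the text.
clf = SVC(kernel="rbf", decision_function_shape="ovo")
clf.fit(X, y)

# One decision column per pairwise classifier: (0 vs 1), (0 vs 2), (1 vs 2).
scores = clf.decision_function(X[:5])
print(scores.shape)   # (5, 3)
```

Each test point is then assigned to the class that wins the most pairwise votes, which is how scikit-learn's `SVC.predict` resolves the three binary decisions into a single label.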
Comparing traditional and weighted support vector machines under the same conditions, the model fitting effects do not differ much: the fitting accuracy, recall, and F1 scores of the weighted support vector machines are all slightly lower than those of the traditional support vector machines, but the gap is not large. At the same time, the fitting effect of all the weighted support vector machine models is better than that of the random forest algorithm.
The batch size, epochs, and steps are selected according to the SVR algorithm, the SVR-ARIMA model is constructed after experimentation, and dropout is added to the SVR-ARIMA model to prevent overfitting. The loss function value is MSE = 0.408040, and the fitting result is shown in Figure 7, where the horizontal coordinate is time, the vertical coordinate is the daily minimum price, red represents the true value, and yellow represents the predicted value. Comparing the predicted and true values shows that the SVR-ARIMA model fits the trend of change and that the predicted value curve approximates the true value well. Comparing the PSO-LSTM model and the SVR-ARIMA model on 5-day, 10-day, and 20-day prediction shows that, at every horizon, the MSE, RMSE, and MAE values of the SVR-ARIMA model are smaller than those of the PSO-LSTM model. Specifically, the indicators of the SVR-ARIMA model are reduced by 9.9324%, 5.0961%, and 7.2591% for 5-day forecasting; by 14.5792%, 7.5766%, and 7.5598% for 10-day forecasting; and by 5.8732%, 2.9810%, and 6.5514% for 20-day forecasting, in that order. It can be seen that the ARIMA model is indeed effective in correcting the SVR model, and the hybrid model achieves better prediction results. After the univariate model is correctly identified, the second step is to estimate the model. Here the lagged explanatory variables are endogenous, which violates the strict exogeneity of the explanatory variables required by ordinary least squares.
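The residual-correction idea behind the hybrid model can be illustrated with a minimal sketch: fit the SVR on lagged features, then fit a simple autoregression to the SVR residuals (a least-squares AR(1) stands in here for the full ARIMA step, which is an assumption of this sketch), and add the residual forecast back to the SVR output.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

# Synthetic daily lows: linear drift + nonlinear cycle + noise (hypothetical).
t = np.arange(300)
series = 0.02 * t + 0.5 * np.sin(t / 8.0) + rng.normal(scale=0.1, size=t.size)

window = 5
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# Step 1: SVR captures the nonlinear component.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
resid = y - svr.predict(X)

# Step 2: a least-squares AR(1) on the residuals (stand-in for ARIMA)
# captures leftover linear autocorrelation.
phi = np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])
resid_hat = np.concatenate([[0.0], phi * resid[:-1]])

# Step 3: combined forecast = SVR fit + residual correction.
hybrid = svr.predict(X) + resid_hat

mse_svr = float(np.mean((y - svr.predict(X)) ** 2))
mse_hybrid = float(np.mean((y - hybrid) ** 2))
```

Because the AR coefficient is the least-squares optimum on the residuals, the in-sample MSE of the corrected series can never exceed that of the SVR alone, which mirrors the error reductions reported above.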
Comparing the LSTM, PSO-LSTM, and SVR-ARIMA models cross-sectionally on 5-day, 10-day, and 20-day prediction shows that the MSE, RMSE, and MAE values all gradually grow as the number of prediction days increases. Considering the data division method used in this paper, with the amount of data in the undivided training set held constant, the divided training sets gradually shrink as the number of prediction days increases, creating a risk of inadequate training. Figure 8 shows the training error curve of the SVR-ARIMA model in the classification phase. The error curve of the SVR-ARIMA model is smoother than that of the SVR network during the fine-tuning phase, indicating that the SVR-ARIMA model is more stable than the SVR model. Although the training error curve of the ARIMA model also reflects high stability and convergence speed, it ends with a higher training error than the SVR-ARIMA model, possibly because the network falls into a local optimum; this shows that the SVR-ARIMA model is more reliable than ARIMA and can effectively alleviate the local-optimum problem caused by the batch use of the BP algorithm. Comparing the curves in this figure shows that the SVR-ARIMA model converges faster and is more accurate than the two shallow network models. The experimental results show that, during the iterations of classification training, the SVR-ARIMA network converges faster, reaches a lower final training error, and is smoother than the reference model. They demonstrate that the SVR-based ARIMA deep learning model proposed in this paper for identifying Elliott wave patterns in financial time series is highly reliable.
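The three evaluation criteria used throughout these comparisons are straightforward to compute; a small self-contained sketch (the sample true/predicted values are hypothetical, chosen to resemble the exchange-rate range discussed later):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(mse(y_true, y_pred)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Hypothetical 5-day horizon: true closes vs. model forecasts.
y_true = [6.91, 6.93, 6.92, 6.95, 6.94]
y_pred = [6.90, 6.94, 6.93, 6.94, 6.96]

print(mse(y_true, y_pred), rmse(y_true, y_pred), mae(y_true, y_pred))
```

Note that RMSE is always at least as large as MAE, so the two criteria can rank models differently only when error magnitudes are unevenly distributed across days.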
The prediction performance of the model outperforms five reference models, Elliott wave identification models based on other machine learning models, and two types of deep learning models that do not incorporate Elliott wave theory, in terms of four evaluation criteria: MSE, RMSE, MAE, and ER.

Conclusion
For nonnormal, nonstationary, nonlinear, multiscale financial time series, this paper improves financial market forecasting through a nonlinear ARIMA model combined with feedback SVR, mining the essential laws of the financial market, providing decision support for investors, helping investors reduce investment risk, helping government departments prevent market risks and strengthen supervision of the securities market, and enriching and improving the research methods of stock price forecasting. Through research on and collation of the various financial forecasting models available at the present stage, this paper summarizes the relevant research results, mainly from nonlinear methods. After studying Elliott wave theory, this paper puts forward the important view that the Elliott wave is an advanced characteristic representation of financial time series. This paper verifies that the combined model has a better forecasting effect than the single ARIMA and SVR models. The test results show that the feedback method outperforms the other three benchmark methods in both the absolute level of prediction and the correctness of the predicted direction, which illustrates the theoretical merits of the nonparametric method. This confirms that financial market series contain both linear and nonlinear components of information; to fully exploit the best performance of a model, one cannot choose only a single linear or nonlinear model, since doing so may lose some potentially valuable information about the financial market and yield unsatisfactory predictions.
The combined SVR-ARIMA model takes into account both the linear characteristics of the financial market, effectively extracting its linear components, and its nonlinear characteristics, extracting valuable information from the residuals to further improve prediction accuracy. For time series with low volatility, the ARIMA model forecasts well. The actual value of the exchange rate in the prediction interval selected in this paper fluctuates within the narrow range of 6.90-6.97, and the ARIMA model fits it well, which shows that ARIMA has a good ability to capture the linear components of a time series.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.