An Online Kernel Adaptive Filtering-Based Approach for Mid-Price Prediction

The idea of multivariate and online stock price prediction via the kernel adaptive filtering (KAF) paradigm is proposed in this article. Stock price prediction is traditionally done with regression and classification, thereby requiring a large set of batch-oriented and independent training samples. This is problematic given the nonstationary nature of a financial time series. In this research, we propose an online kernel adaptive filtering-based approach for stock price prediction to overcome this challenge. To examine a stock's performance and demonstrate the work's superiority, we use ten different algorithms from the KAF family. The results are analyzed over nine time windows: one day, sixty minutes, thirty minutes, twenty-five minutes, twenty minutes, fifteen minutes, ten minutes, five minutes, and one minute. To the best of our knowledge, we are the first to experiment with several time windows for all fifty stocks on the Indian National Stock Exchange. It should be noted that the experiments are performed on the stocks making up the main index: Nifty-50. In terms of performance and compared to existing methods, we have a 66% probability of correctly predicting a stock's next upward or downward movement. This number clearly shows the edge that the proposed method has in actual deployment. Furthermore, the experimental findings show that KAF is not only a better option for predicting stock prices but may also serve as an alternative in high-frequency trading owing to its low latency.


Introduction
Time-series prediction is prevalent in economics and investment research. Stock price prediction is one of the most popular applications of time-series prediction. Its success stems from its ability to reduce asset management costs, market impacts, and volatility risks [1]. It is a commonly held notion that stock markets are complex, volatile, and chaotic [2]. The markets, in our perspective, are made up of a variety of factors that influence stock movement. Predicting a stock's value at any given time in the future is, therefore, an important problem for both academia and industry. Previous studies [3] have shown that the prediction of stock prices, particularly given the nonstationary and nonlinear nature of the underlying asset, is challenging. In this regard, several models have been proposed, but the problem is nowhere near its end [4], and substantial improvement is required. In addition, studies have extended the problem to predicting option prices, volatility [5], and so on. This significant body of work demonstrates that stock price prediction remains an important issue requiring solutions to a wide range of problems.
As discussed in the previous paragraph, stock price prediction is a significant challenge. In this regard, a plethora of techniques have been used for predicting stock prices, such as neural networks (NN), support vector machines (SVM), genetic algorithms, fuzzy logic, and Bayesian models [6]. However, an optimal solution remains a long way off. During our literature review, we discovered that current research has overlooked kernel adaptive filtering (KAF) and has not thoroughly investigated this paradigm for financial time-series forecasting, especially stock prediction. Although there are a few introductory studies [7], a large-scale comprehensive evaluation is absent from the literature. With this shortcoming in mind, we would like to emphasize that KAF can be an effective stock prediction tool. The following observations serve as the foundation for our argument: first, KAF-based algorithms have a faster convergence rate; that is, the algorithm requires fewer iterations. Second, KAF has demonstrated excellent performance in nonstationary time-series prediction [8].
Third, KAF algorithms exhibit universal function approximation properties useful in highly dynamic environments [9]. Lastly, KAF has been used extensively in chaotic time-series prediction [10,11]. Hence, the idea is also worth exploring in financial time-series prediction. Therefore, in this article, we use the concept of KAF to examine and comprehend the real-time movement of stock prices. In addition to KAF being one of the largest unexplored paradigms in stock prediction, the literature review revealed one more issue: the reliance on batch learning. We believe that sequential learning, rather than batch learning, is the best tool for financial time-series forecasting. This is mainly because a financial time series is nonstationary. We further argue that expecting a model trained on offline samples to perform excellently in a real situation is a slippery slope. The rationale here is supported by work presented in the literature, which claims that online learning is the best way to understand and interpret nonstationary data behavior [12]. Consequently, studies have shown that online learning can be an effective method [13]. It is based on the concept of sequential measurement (training is performed sample-by-sample and in real time). Various scenarios can easily be added, and the algorithm adjusts the weight vector to provide accurate predictions. As a result, in order to solve the issue stated in this article, we enhance the KAF idea with online learning.
With respect to the challenges and ideas discussed in this section, we present online KAF algorithms to predict stock prices. The application of KAF techniques to stock price prediction is still limited [7,14]. This study takes that precedent as its basis and builds upon it to extend the application of KAF to a broader range of environments and contexts. With data taken from Nifty-50, the Indian stock index, we first build our dataset consisting of prices collected at time windows of one day, sixty minutes, thirty minutes, twenty-five minutes, twenty minutes, fifteen minutes, ten minutes, five minutes, and one minute. These windows are chosen as they are some of the most common windows looked at by day traders. It should be noted that the prices are collected for a total of fifty companies (they make up the main index: Nifty-50). Subsequently, we apply the ideas to each of the time windows and predict the next potential value of the stock's "mid-price." Through comprehensive numerical investigation, we have found that the proposed trading algorithm has an extra 16% edge in the field, thereby making it an effective method capable of generating good returns in the long run. The following are the paper's key contributions: (1) A novel KAF-based online method for forecasting a stock's mid-price is introduced. We look at two situations in which the mid-price is measured as (high + low)/2 and (open + close)/2, respectively. The main motivation for looking into the mid-price was that the mid-price time series is less noisy than the close-price time series. (2) Through a comprehensive investigation performed on nine different time windows, we discover the best window for predicting stock prices. In the literature, several authors have focused on predicting daily prices [15,16]. We, however, show that focusing efforts on other time windows could also be optimal.
(3) In this article, ten different KAF algorithms are used, and a detailed analysis is presented to validate the work. To the best of our knowledge, an investigation of this magnitude is absent from the literature.
The remainder of this paper is organized as follows. The methods proposed by various researchers in the field of stock prediction are discussed in Section 2. The proposed methodology is described in Section 3. The experiments performed with the different KAF algorithms, and their results, are included in Section 4. Finally, in Section 5, the conclusions and future scope are described.

Related Work
The work of other authors in the field of stock prediction is discussed in this section. Predicting stocks has remained one of the nontrivial issues in the literature [17]. Previous studies have shown that the prediction of stock prices is difficult due to the inherent nonstationary behavior of the data [18]. Several studies [5] have shown that stock prediction is challenging and noisy. Various linear techniques such as correlations, discriminant analysis, autoregressive models, and moving averages have also been studied in the past [19]. Machine learning (ML) has been a popular field in time-series prediction in recent years. ML-based techniques are explored heavily as they can recognize complex patterns in stock prices [20]. Due to the nonlinear and time-varying nature of time series, there has recently been a surge in demand for online prediction algorithms [21]. Online algorithms use sequential calculation to achieve reliable and faster outcomes [13]. In this regard, several techniques have been developed, such as online support vector regression (SVR), NN [8], and KAF algorithms [10]. NN methods take a lot of processing power and have a slow convergence rate [22]. SVR provides superior applicability; however, it is not appropriate for huge datasets. Furthermore, the multifilter neural network (MFNN) has been investigated, and it was discovered that MFNN outperforms SVR, random forests, and other neural network-based approaches. The use of convolutional neural networks (CNN) has also been explored to predict next-day prices [23]. CNN has outperformed other approaches for multimodality images in the biomedical domain [24,25]. Furthermore, for stock price prediction, long short-term memory (LSTM) has been applied [26]. The authors used an LSTM network with a single layer and 200 nodes in [27]. In [28], the network employs a single-layer LSTM with 140 nodes.
In contrast, a deep architecture with four LSTM layers and 96 nodes in the hidden layers was used in [29], where each LSTM layer was further followed by a dropout layer.
Adaptive filtering has been proven to be a preferable choice for streaming data exhibiting nonstationary behavior [11,30,31]. For sequential stock prediction, KAF can be used by exploiting market interdependence. Fast convergence, low computational complexity, and nonparametric behavior make KAF a preferable choice [10,32]. One study [33] focuses on adaptive asynchronous differential evolution, in which a trigonometric mutation modifies the mutation operation and adaptive parameters improve the convergence speed and diversity. In [34], the authors proposed meta-cognitive recurrent kernel online learning for multistep prediction of stocks. Although these studies show the potential that KAF has, KAF has not been investigated thoroughly in the context of stock price prediction. Though there are a few studies in the literature focusing on the area, a large-scale investigation is absent. Nevertheless, we must point out that the work presented in [14] proposes a two-phase method for stock prediction. First, sequential learning using KAF was applied to learn the underlying model for each stock separately. In the second phase, to improve prediction, real-time models are learned from different stocks. In [7], the authors proposed the idea of multikernel adaptive filters for online options trading.
The method was applied to the Taiwan composite stock index. Garcia-Vega et al. [35] presented a multikernel learning approach to overcome the two primary concerns with KAF: kernel size and step size. Despite the fact that these papers concentrate on using the KAF paradigm to forecast stock prices, none of them validates the paradigm's effectiveness on a large-scale dataset. Moreover, the impact of multiple time windows is not considered. In our opinion, testing the method on the multiple time windows that traders commonly look at is of prime importance.

Brief Discussion on KAF.
We work with online learning-based KAF techniques, as discussed in Section 1. The purpose of KAF is to learn an input-output mapping f: S → R from a sequence of data ((s_1, d_1), (s_2, d_2), . . ., (s_i, d_i)), where S ⊆ R^L is the input space, s_i, i = 1, . . ., n, is the system input at sample time i, and d_i is the desired response. KAF transforms the data into points in a reproducing kernel Hilbert space (RKHS) F, where the problem can then be solved using inner products.
Owing to the famous "kernel trick," there is no need to perform expensive computations in the high-dimensional space. In KAFs, the computation generally involves the use of a kernel. The Gaussian kernel is a typical example:

κ(s, s′) = exp(−‖s − s′‖² / (2σ²)),

where σ represents the kernel width.
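As a concrete illustration, a minimal pure-Python sketch of a Gaussian kernel is given below; the function name and the list-of-floats vector representation are our own illustrative choices.

```python
import math

def gaussian_kernel(s, s_prime, sigma=1.0):
    """Gaussian kernel: kappa(s, s') = exp(-||s - s'||^2 / (2 * sigma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(s, s_prime))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))
```

Note that the kernel equals 1 when its arguments coincide and decays smoothly with distance, which is what makes it a valid "unit-norm" similarity measure for the dictionary-based algorithms described below.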

Kernel Adaptive Filtering Algorithms.
In this subsection, we briefly describe the ten different KAF methods.

Least Mean Square (LMS).
The LMS algorithm, according to [36], employs a finite impulse response (FIR) filter, also known as a transversal filter, whose output is a linear combination of the input:

y_i = ϖ_{(i−1)}^T s_i,

where ϖ_{(i−1)} represents the weight vector at iteration (i − 1). The main idea of the LMS algorithm is captured by the error computation

e_i = d_i − ϖ_{(i−1)}^T s_i,

where η and e_i stand for the step size and the prior error, respectively. The resulting weight-update equation is

ϖ_i = ϖ_{(i−1)} + η e_i s_i.

Unrolling this recursion expresses the output on a new input purely through inner products:

y_i = η Σ_{j=1}^{i−1} e_j (s_j^T s_i).
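The linear LMS recursion above can be sketched in a few lines of Python; the function name and the choice to initialize the weights at zero are our own.

```python
def lms(inputs, desired, eta=0.1):
    """Linear LMS: output y_i = w^T s_i, prior error e_i = d_i - y_i,
    weight update w <- w + eta * e_i * s_i."""
    w = [0.0] * len(inputs[0])
    errors = []
    for s, d in zip(inputs, desired):
        y = sum(wj * sj for wj, sj in zip(w, s))          # filter output
        e = d - y                                          # prior error
        w = [wj + eta * e * sj for wj, sj in zip(w, s)]   # weight update
        errors.append(e)
    return w, errors
```

On a linearly generated target such as d = 2s, the weight converges toward the true coefficient and the prior error shrinks, provided the step size satisfies the usual stability condition η < 2/‖s‖².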

Kernel Least Mean Square (KLMS).
To derive KLMS [36], the input s_i is transformed into F as ϕ(s_i). Using LMS, we can now rewrite the input-output mapping as

f_i = f_{i−1} + η e_i ϕ(s_i),

where e_i is the prediction error, η is the step size, and ϕ(s_i) is the transformed filter input at iteration i. The result is computed via equation (7), where we can use the famous kernel trick, ϕ(s_j)^T ϕ(s_i) = κ(s_j, s_i). Consequently, the model now becomes

f_i = η Σ_{j=1}^{i} e_j κ(s_j, ·).

In KLMS, a new kernel unit is assigned to every new sample point, with η e_i as the coefficient value. Following the radial basis function (RBF) described in this section, the system output is

y_i = Σ_{j=1}^{i−1} o_j κ(s_j, s_i).

The coefficients o(i) and the centres C(i) = {s(j)}_{j=1}^{i} are saved in storage during the training process.
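A minimal sketch of the KLMS recursion, assuming a Gaussian kernel; the function name and default hyper-parameters are illustrative, not the paper's tuned values.

```python
import math

def klms(inputs, desired, eta=0.5, sigma=1.0):
    """KLMS: every sample becomes a kernel centre with coefficient eta * e_i;
    the prediction is the kernel expansion over all stored centres."""
    def kernel(a, b):
        return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * sigma ** 2))
    centres, coeffs, preds = [], [], []
    for s, d in zip(inputs, desired):
        y = sum(c * kernel(s, ctr) for c, ctr in zip(coeffs, centres))  # expansion
        e = d - y                                                       # prediction error
        centres.append(list(s))   # new kernel unit for this sample
        coeffs.append(eta * e)    # with coefficient eta * e_i
        preds.append(y)
    return preds, centres, coeffs
```

The dictionary grows by one centre per sample, which is exactly the redundancy problem that QKLMS and FBQKLMS, described below, are designed to curb.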

Kernel Affine Projection Algorithm (KAPA).
KAPA [37] is used when we want to improve performance degraded by gradient noise. In KAPA, we estimate the weight vector ϖ by minimising the cost function over the sequences d_1, d_2, . . . and ϕ(1), ϕ(2), . . .:

min_ϖ Σ_i (d_i − ϖ^T ϕ(i))². (9)

We replace the covariance matrix and cross-covariance vector by a local approximation computed directly from the data using stochastic gradient descent, summarized in

ϖ(i) = ϖ(i − 1) + η Ψ(i)[d(i) − Ψ(i)^T ϖ(i − 1)], (10)

where Ψ(i) = [ϕ(i − K + 1), . . . , ϕ(i)] and K is the number of most recent observations retained in the regressor.

Leaky Kernel Affine Projection Algorithm (LKAPA).
LKAPA [37] is an extension of KAPA, discussed in Section 3.2.3. Depending on the selected kernel, the feature space can be infinite-dimensional, in which case the weight-update task is difficult. The common solution is to modify equation (10), giving equation (11). The weight vector in equation (11) is calculated using a criterion that, from the perspective of empirical risk minimization, reduces the regularized objective function in equation (12). The updated weight then follows as shown in equation (13).

Normalized Online Regularized Risk Minimization Algorithm (NORMA).
Like LKAPA [37], NORMA is an extension related to KAPA, discussed in Section 3.2.3. It additionally incorporates regularization and a functional (nonparametric) approach.

Quantized Kernel Least Mean Square Algorithm (QKLMS).
Quantization techniques are used in various applications such as digitization, data compression, and speech and image coding. QKLMS is a well-known algorithm proposed in [11] that deals with the issue of data redundancy. The computational complexity of QKLMS and KLMS is nearly identical. The main difference between the two algorithms is that, for redundant data, QKLMS updates the coefficient of the closest centre in real time instead of allocating a new one. The main idea is expressed using the quantization operator:

ϖ_i = ϖ_{i−1} + η e_i ϕ(Q[s_i]),

where Q[·] signifies the quantization in the feature space F (induced, in practice, by quantizing the input space). The learning rule for QKLMS is then summarised as

f_i = f_{i−1} + η e_i κ(Q[s_i], ·).
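The quantization idea can be sketched as follows, assuming a Gaussian kernel and Euclidean quantization in the input space; the function name and the `eps_q` threshold are our illustrative choices.

```python
import math

def qklms(inputs, desired, eta=0.5, sigma=1.0, eps_q=0.3):
    """QKLMS sketch: if the new input lies within eps_q of an existing
    centre, merge it (update that centre's coefficient) instead of
    adding a new kernel unit."""
    def kernel(a, b):
        return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * sigma ** 2))
    centres, coeffs, preds = [], [], []
    for s, d in zip(inputs, desired):
        y = sum(c * kernel(s, ctr) for c, ctr in zip(coeffs, centres))
        e = d - y
        if centres:
            j = min(range(len(centres)), key=lambda k: math.dist(s, centres[k]))
            nearest = math.dist(s, centres[j])
        if centres and nearest <= eps_q:
            coeffs[j] += eta * e        # redundant input: update nearest centre
        else:
            centres.append(list(s))     # novel input: allocate a new centre
            coeffs.append(eta * e)
        preds.append(y)
    return preds, centres
```

When the input stream revisits the same region repeatedly, the dictionary stays small while the per-sample update cost remains essentially that of KLMS.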

Fixed Budget Quantized Kernel Least Mean Square Algorithm (FBQKLMS).
The FBQKLMS [38] addresses the growing network size of online kernel approaches. The suggested algorithm uses a significance-measure-based pruning criterion built on the weighted contribution of the existing data centres.

Kernel Adaptive Filtering with Maximum Correntropy Criterion (KMCC).
The fundamental goal of the method is to maximise the correntropy between the desired output d_i and the actual output y_i [39]. Using the MCC technique [39] and SGD, the algorithm's update can be written as

f_i = f_{i−1} + η exp(−e_i² / (2σ²)) e_i κ(s_i, ·),

where η is the step size and σ is the kernel width. The total error and prediction calculation can be summarized in equation (17).
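A sketch of the KMCC-style update follows: it is a KLMS recursion whose coefficient is scaled by exp(−e²/(2σ²)), so large (outlier) errors are down-weighted. The function name and the separate kernel/correntropy widths are our illustrative assumptions.

```python
import math

def kmcc(inputs, desired, eta=0.5, sigma_kernel=1.0, sigma_mcc=1.0):
    """KMCC sketch: KLMS update with a correntropy-induced weight
    exp(-e^2 / (2 * sigma_mcc^2)) that shrinks for outlier errors."""
    def kernel(a, b):
        return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * sigma_kernel ** 2))
    centres, coeffs, preds = [], [], []
    for s, d in zip(inputs, desired):
        y = sum(c * kernel(s, ctr) for c, ctr in zip(coeffs, centres))
        e = d - y
        centres.append(list(s))
        # coefficient damped by the Gaussian of the error (robustness to outliers)
        coeffs.append(eta * math.exp(-e * e / (2 * sigma_mcc ** 2)) * e)
        preds.append(y)
    return preds
```

The damping factor is what distinguishes this from plain KLMS: a spike in the desired signal produces an almost-zero coefficient rather than a large corrupting one.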

Multikernel Normalized Least Mean Square (MKNLMS).
According to [30], the KNLMS algorithm creates dictionaries based on a coherence criterion. Here, we look at KNLMS from the perspective of MKNLMS-CS (the multikernel normalised least mean square algorithm with coherence-based sparsification). At the initial stage, the dictionary is empty (J^cs_0 := ∅), so H_0 is an empty matrix. For simplification, consider a Hilbertian unit-norm kernel, κ(s, s) = 1 for all s ∈ S, which is satisfied by the Gaussian kernel. The index n is added to J^cs_n when the coherence condition in equation (18) holds:

‖κ‖_max := max_{m ∈ M} max_{j ∈ J^cs_n} κ_m(s_n, s_j) ≤ δ, n ∈ N, (18)

where η ∈ [0, 2] and Λ > 0 denote the step size and regularization parameter, respectively, and δ > 0 is the coherence threshold.

Probabilistic Least-Mean Square Filter (PROB-LMS).
PROB-LMS [31] gives an adaptable step size to the LMS algorithm of Section 3.2.1 and applies in both stationary and nonstationary environments. The LMS filter can be approximated effectively using a probabilistic approach, which yields an adaptable step-size LMS algorithm together with a measure of estimation uncertainty, while maintaining the standard LMS's linear complexity.

Problem Formulation.
Our main objective is to predict the stock's mid-price, as stated in Section 1. The motive of stock price prediction is to calculate a stock's future values from its historical values. For this, we measured the percentage change in the mid-price. As a result, we used the concept of order-n autoregression to predict future stock price changes. The sample regression equation is shown in Table 1. Multivariate financial time-series estimation often employs this formulation [40,41] to predict future values of a time series. The formulation shown in Table 1 considers day-wise mid-prices; the same procedure was used for all time windows. As a result, the problem was rephrased as autoregression-based prediction of the next percentage change. The exact mid-price of the stock can then be easily recovered from the predicted percentage change. Figure 1 depicts the proposed approach's overall methodology. The Nifty-50 dataset was used in the experiments. We consider two different definitions of the mid-price: (i) (high + low)/2 and (ii) (open + close)/2. From this calculation, we created the dataset and preprocessed it using nine prediction windows (one minute, five minutes, ten minutes, fifteen minutes, twenty minutes, twenty-five minutes, thirty minutes, sixty minutes, and one day). Further, the percentage change was calculated for each time window, and min-max normalization was applied. The selection of the embedding dimension (M) is a difficult task. We chose different M ∈ {2, 3, 4, 5, 6, 7} and set the maximum dictionary size for the algorithms requiring one to 500, with a Gaussian kernel for each time window. In Table 1, we show an example considering the one-day time window (stock: TITAN). The error estimation was performed with the help of the ten different KAF algorithms for each time window, and the performance of each algorithm was analyzed to find which embedding dimension produces the best result.
After obtaining the best embedding dimension for each algorithm, the embedding dimension producing the best overall result, along with the corresponding algorithm, is selected.
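The preprocessing pipeline described above (percentage change of the mid-price, min-max normalization, then an order-M autoregressive embedding) can be sketched as follows; the function name is illustrative.

```python
def make_samples(mid_prices, M=3):
    """Build order-M autoregressive samples: predict the normalized
    percentage change at time t from the previous M changes."""
    # percentage change of the mid-price series
    pct = [(b - a) / a * 100.0 for a, b in zip(mid_prices, mid_prices[1:])]
    # min-max normalization to [0, 1]
    lo, hi = min(pct), max(pct)
    norm = [(p - lo) / (hi - lo) for p in pct]
    # embedding: inputs are the last M changes, target is the next change
    X = [norm[t - M:t] for t in range(M, len(norm))]
    y = [norm[t] for t in range(M, len(norm))]
    return X, y
```

Each (X, y) pair is then fed sample-by-sample to a KAF algorithm, and the predicted change can be denormalized and applied to the last known mid-price to recover the forecast price.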

Dataset Description.
This section explores the specifics of the dataset used to test the applicability of the proposed method. For this study, we used data from the National Stock Exchange (NSE) of India. The main index of NSE, Nifty-50, comprises 50 stocks. Based on the average and total daily turnover for equity shares, NSE is India's largest stock exchange. We collected data between January 01, 2021, and May 31, 2021, from 9:15 a.m. to 3:30 p.m. In addition, the experimentation data are available at https://shorturl.at/lnvF2. The original data consisted of open, high, low, and close (OHLC) prices and were available at one-minute resolution. The dataset was generated and preprocessed in accordance with the nine prediction windows (one minute, five minutes, ten minutes, fifteen minutes, twenty minutes, twenty-five minutes, thirty minutes, sixty minutes, and one day). The number of data samples differs according to the time window. As pointed out in Section 1, we predict the mid-price under two different scenarios: (high + low)/2 and (open + close)/2. For this, we first calculated the percentage change of the mid-price. All data values were normalized to the zero-to-one range. The ten different KAF algorithms were then applied to the final preprocessed data for each stock to analyse their comparative performance.
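Building the nine window resolutions from one-minute OHLC bars, and the two mid-price definitions, can be sketched as below; the function names and the dict-per-bar representation are our own, not the paper's implementation.

```python
def aggregate(minute_bars, window):
    """Aggregate one-minute OHLC bars into a larger time window:
    first open, max high, min low, last close."""
    out = []
    for i in range(0, len(minute_bars) - window + 1, window):
        chunk = minute_bars[i:i + window]
        out.append({
            "open":  chunk[0]["open"],
            "high":  max(b["high"] for b in chunk),
            "low":   min(b["low"] for b in chunk),
            "close": chunk[-1]["close"],
        })
    return out

def mid_price(bar, mode="hl"):
    """The two mid-price definitions used in this study."""
    if mode == "hl":
        return (bar["high"] + bar["low"]) / 2.0
    return (bar["open"] + bar["close"]) / 2.0
```

A production implementation would also respect session boundaries (9:15 a.m. to 3:30 p.m.) rather than aggregating across days; that detail is omitted here for brevity.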

Evaluation Criterion.
To measure and analyse the efficacy of the various KAF algorithms, standard assessment criteria are used. In Table 2, y_i and d_i represent the predicted and actual outputs, respectively, and n is the number of time steps. The evaluation metrics were calculated on Nifty-50 as follows.
(1) The parameters listed in Table 3 were tuned manually. The parameter descriptions for the ten different algorithms are presented in Table 3. These values were found after multiple rounds of experimentation.
(2) For the error values, we applied the methods to all stocks and quantified the predictive performance via the metrics discussed in this section. In total, we get 50 × 3 error values (one set per stock) for MSE, MAE, and DS, respectively. (3) Then, for each of the 50 stocks, error estimation was performed using the nine different prediction windows for the ten different KAF algorithms. (4) Finally, we averaged the fifty per-stock error metrics for a single time window to reach the final value, which is presented in Tables 4 and 5.
The reported numbers thus represent the models' overall predictive capacity across all 50 stocks.
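The three metrics used above (MSE, MAE, and directional symmetry) can be sketched as follows; the sign-agreement form of DS is a common definition and assumed here to match Table 2.

```python
def mse(actual, predicted):
    """Mean squared error."""
    return sum((d - y) ** 2 for d, y in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(d - y) for d, y in zip(actual, predicted)) / len(actual)

def directional_symmetry(actual, predicted):
    """Percentage of steps whose predicted move has the same sign
    as the actual move."""
    hits = sum(1 for t in range(1, len(actual))
               if (actual[t] - actual[t - 1]) * (predicted[t] - predicted[t - 1]) > 0)
    return 100.0 * hits / (len(actual) - 1)
```

MSE and MAE quantify the closeness of the forecast, while DS captures the trading-relevant question of whether the next up or down move was called correctly.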

Prediction, Convergence, and Residual Analysis.
In this subsection, we examine prediction, convergence, and residual analyses with the help of the KAF algorithms. Regarding this, we show the prediction graphs with the KAPA algorithm (discussed in Section 3.2.3) for one stock (TITAN). Figure 2 shows the results for (high + low)/2, while Figure 3 shows the results for (open + close)/2. As can be seen from the prediction graphs, the predicted curve fits well against the original curve. It is worth noting that we have given results for only one prediction window (thirty minutes) with one stock (TITAN); however, the other stocks in the dataset produced similar results. The prediction graphs clearly show that the predictions are not exact, although they are close. To be precise, the numbers for MSE and MAE are presented in Tables 4 and 5. We must point out that obtaining exact values in financial time-series forecasting is tough; the goal has always been to get close enough. Therefore, the results we achieved show the good predictive capability of the work. Figures 4 and 5 show the convergence graphs of the mid-price for (high + low)/2 and (open + close)/2, respectively. We again provide results using the KAPA algorithm with one prediction window (thirty minutes) and one stock (TITAN). As evidenced by the graphs, the algorithm converges quickly, by roughly the 1000th data point, demonstrating the KAF algorithms' capacity to adapt and converge rapidly. One more important point to note from the convergence graphs is that, although there is some fluctuation, it is nevertheless acceptable: there will always be noise in new data, and minor changes are inevitable. In addition to the results discussed so far, we complement the analysis by presenting the distribution of error residuals in Figures 6 and 7. It can be seen from the figures that the residuals follow a normal distribution. Moreover, there are few outliers.
Furthermore, the residuals' variance is low, demonstrating the KAF algorithms' superior prediction capability and their potential for predicting the next immediate mid-price. Directional symmetry is used to determine the agreement between actual and predicted prices in terms of stock movement; it is a measure of a model's ability to predict a stock's direction. We examined the ten different algorithms mentioned in Section 3 to better understand the behavior of a stock's movement. The experiments revealed that, using KNLMS, we have a 66% chance of accurately predicting the next up or down movement. This is shown in Table 5. The best result is obtained at the ten-minute window, and the worst at the one-minute window. From the table, it is also visible that there is a large gap between the number obtained for the one-minute window and those for the rest of the windows. This is expected, as there is much noise within a minute, which affects prediction. It should be noted that the literature often ignores these different time windows, mostly focusing on predicting daily prices [42,43]. We found a suitable balance by experimenting with various time windows. Furthermore, when trading, it is recommended to strike a balance between error minimization and directional symmetry.

Comparative Evaluation of KAF Algorithms.
Since we have used ten algorithms in our experimentation, it becomes essential to compare their performance. In this context, we present the topic in two separate situations. First, we analyze the results considering the mid-price as (high + low)/2 to find the best algorithm. In the second scenario, we tried the mid-price as (open + close)/2. Tables 4 and 5 show the outcome of this experiment. In terms of MSE and MAE, the tables show that KAPA outperforms the other algorithms. When it comes to directional symmetry, however, we see a contradiction: here, NORMA and KNLMS give the best performance.

Comparison with Methods of a Similar Kind.
We have also compared the results with other existing techniques, namely [28,29] and [44]. These are some of the most recent deep learning (DL)-based algorithms for predicting stock prices. It should be noted that these methods were trained and tested using 80:20 splits for 25 epochs, and the time taken to train and make predictions was recorded. Specifically, the methods [28,29] and [44] were reimplemented based on the architecture details and hyperparameter settings found in the respective papers. The Nifty-50 dataset was used to train all of the methods. To ensure consistency across the different methods, we use sixty-minute time windows for all fifty stocks. All of the methods' results were then compared to the proposed method.

Experimentation with Dictionary Size.
We also conducted experiments with various dictionary sizes. The result of this test is shown in Table 8. For this test, we used a thirty-minute time window with the KMCC algorithm. It is visible from Table 8 that increasing the dictionary size leads to an improvement in the system's performance. It should be noted, however, that at a size of 1000 the performance falls. The reason for this behavior could be the erratic behavior of the stock, the presence of noise, or too much irrelevant data; the exact reason is unknown. It is worth noting that, with a dictionary size of 500, the execution time for forecasting a single stock is 0.82 seconds. This low number clearly shows the advantage one can achieve in high-frequency trading.

Important Note: Error Minimization and Profitability.
We obtain an MSE of the order of 10^−4 as the lowest error. We can observe from Tables 4 and 5 that KAPA gives the best results in terms of MSE and MAE. It is important to note that the minimum error value was reached in the one-minute time window. From Tables 4 and 5, we can also see that, going down the columns (for MSE and MAE only), the results improve, with the one-minute time window giving the best figures. However, because the time window is one minute, volatility is low enough that decreasing error will not yield much benefit; moreover, there is too much noise while trading at the one-minute window. Put another way, one-minute volatility is lower, resulting in very close predictions, but in a low-volatility environment the chances of taking a position and making a highly profitable trade are also low.

Conclusion and Future Work
This paper focuses on predicting a stock's mid-price. Predicting a financial nonstationary time series is a fundamental, open, and nontrivial problem in the literature. To address this, we proposed a framework based on online learning-driven KAF algorithms. In the proposed work, ten different KAF algorithms were evaluated and analyzed on the Indian National Stock Exchange (Nifty-50). In contrast to existing methods, experiments were performed on nine different time windows. This was done keeping in mind the method's applicability in intraday and swing trading. Previous studies often underestimated the importance of intraday time windows; we therefore tried to bridge this gap through the work presented here. The experimental results show the superiority and predictive capabilities of the work. The KAF class of algorithms was also found not only to be efficient in execution time but also to provide the best error-minimization results, demonstrating its importance in high-frequency trading. The goal of the research was to propose a KAF-based method for predicting a stock's mid-price. The empirical results on the Nifty-50 dataset show that the proposed method achieved superior performance over existing stock prediction methods. Under a voting scheme, KAPA showed the best prediction performance across all time windows, while NORMA and KNLMS gave the best performance in terms of directional symmetry. It is worth noting that every KAF-based algorithm is hyperparameter-sensitive. As a result, in the future, we will experiment with various hyperparameter optimization approaches in order to enhance the framework's predictive capabilities.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Table 6: The proposed research is compared to different state-of-the-art stock prediction approaches.