An Empirical Study of Machine Learning Algorithms for Stock Daily Trading Strategy

According to forecasts of stock price trends, investors trade stocks. In recent years, many researchers have focused on adopting machine learning (ML) algorithms to predict stock price trends. However, their studies were carried out on small stock datasets with limited features, short backtesting periods, and no consideration of transaction cost, and their experimental results lacked statistical significance tests. In this paper, we synthetically evaluate various ML algorithms on large-scale stock datasets and observe the daily trading performance of stocks with and without transaction cost. Specifically, we use two large datasets of 424 S&P 500 index component stocks (SPICS) and 185 CSI 300 index component stocks (CSICS) from 2010 to 2017 and compare six traditional ML algorithms and six advanced deep neural network (DNN) models on these two datasets, respectively. The experimental results demonstrate that traditional ML algorithms have better performance in most of the directional evaluation indicators. Unexpectedly, the performance of some traditional ML algorithms is not much worse than that of the best DNN models without considering transaction cost. Moreover, the trading performance of all ML algorithms is sensitive to changes in transaction cost. Compared with the traditional ML algorithms, DNN models have better performance when transaction cost is considered. Meanwhile, transparent transaction cost and implicit transaction cost have different impacts on trading performance. Our conclusions are significant for choosing the best algorithm for stock trading in different markets.


Introduction
The stock market plays a very important role in modern economic and social life. Investors want to maintain or increase the value of their assets by investing in the stocks of listed companies with higher expected earnings. For a listed company, issuing stocks is an important tool to raise funds from the public and expand the scale of its business. In general, investors make stock investment decisions by predicting the future direction of stocks' ups and downs. In the modern financial market, successful investors are good at making use of high-quality information to make investment decisions, and, more importantly, they can make quick and effective decisions based on the information they already have. Therefore, the field of stock investment attracts the attention not only of financial practitioners and ordinary investors but also of researchers in academia [1].
In the past many years, researchers mainly constructed statistical models describing the time series of stock prices and trading volumes to forecast the trends of future stock returns [2][3][4]. It is worth noting that intelligent computing methods, represented by ML algorithms, have also shown vigorous development momentum in stock market prediction with the advance of artificial intelligence technology. The main reasons are as follows. (1) Multisource heterogeneous financial data are easy to obtain, including high-frequency trading data, rich and diverse technical indicator data, macroeconomic data, industry policy and regulation data, market news, and even social network data. (2) The research on intelligent algorithms has deepened. From early linear models, support vector machines, and shallow neural networks to DNN models and reinforcement learning algorithms, intelligent computing methods have made significant improvements. They have been effectively applied to the fields of image recognition and text analysis. In some papers, the authors argue that these advanced algorithms can capture the dynamic changes of the financial market, simulate the trading process of stocks, and make automatic investment decisions.
(3) The rapid development of high-performance computing hardware, such as Graphics Processing Units (GPUs) and large servers, can provide powerful storage space and computing power for the use of financial big data. High-performance computing equipment, accurate and fast intelligent algorithms, and financial big data together can provide decision-making support for programmed and automated stock trading, which has gradually been accepted by industry practitioners. Therefore, the power of financial technology is reshaping the financial market and changing the landscape of finance.
In this paper, we select 424 SPICS and 185 CSICS from 2010 to 2017 as research objects. The SPICS and CSICS represent the industry development of the world's top two economies and are attractive to investors around the world. The stock symbols are shown in the "Data Availability". For each stock in SPICS and CSICS, we construct 44 technical indicators as shown in the "Data Availability". The label on the t-th trading day is the sign of the return of the (t+1)-th trading day relative to the t-th trading day; that is, if the return is positive, the label value is set to 1, and otherwise to 0. For each stock, we choose the 44 technical indicators of the 2000 trading days before December 31, 2017, to build a stock dataset. After the dataset of a stock is built, we choose the walk-forward analysis (WFA) method to train the ML models step by step. In each step of training, we use 6 traditional ML methods, which are support vector machine (SVM), random forest (RF), logistic regression (LR), naïve Bayes model (NB), classification and regression tree (CART), and eXtreme Gradient Boosting algorithm (XGB), and 6 DNN models widely used in the fields of text analysis and voice translation, namely, Multilayer Perceptron (MLP), Deep Belief Network (DBN), Stacked Autoencoders (SAE), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), to train and forecast the trends of stock prices based on the technical indicators. Finally, we use the directional evaluation indicators, such as accuracy rate (AR), precision rate (PR), recall rate (RR), F1-Score (F1), and Area Under Curve (AUC), and the performance evaluation indicators, such as winning rate (WR), annualized return rate (ARR), annualized Sharpe ratio (ASR), and maximum drawdown (MDD), to evaluate the trading performance of these various algorithms or strategies.
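The labeling rule above can be stated in a few lines. The paper's pipeline is written in R; the following Python sketch (function name ours) only illustrates the rule that day t is labeled by the sign of the day-(t+1) close-to-close return:

```python
# Illustrative sketch of the labeling rule: label on day t is 1 if the
# return from day t to day t+1 is positive, else 0.

def make_labels(closes):
    """Return one binary label per day t = 0 .. len(closes)-2."""
    labels = []
    for t in range(len(closes) - 1):
        ret = (closes[t + 1] - closes[t]) / closes[t]   # next-day return
        labels.append(1 if ret > 0 else 0)
    return labels

prices = [100.0, 101.5, 101.0, 103.0]
print(make_labels(prices))  # [1, 0, 1]
```

Note that, as in the paper, the last trading day of a dataset carries no label, because its next-day return is unknown.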
From the experiments, we can find that the traditional ML algorithms have a better performance than DNN algorithms in all directional evaluation indicators except for PR in SPICS; in CSICS, DNN algorithms have a better performance in AR, PR, and F1, but not in RR and AUC. (1) Trading performance without transaction cost is as follows. The WR of traditional ML algorithms are better than those of DNN algorithms in both SPICS and CSICS. The ARR and ASR of all ML algorithms are significantly greater than those of the benchmark index (S&P 500 index and CSI 300 index) and the BAH strategy; the MDD of all ML algorithms are significantly greater than that of the benchmark index but significantly less than that of the BAH strategy. Among all ML algorithms, there are always some traditional ML algorithms whose trading performance (ARR, ASR, MDD) is comparable to the best DNN algorithms. Therefore, DNN algorithms are not always the best choice; the performance of some traditional ML algorithms shows no significant difference from that of DNN algorithms, and some traditional ML algorithms even perform well in ARR and ASR. (2) Trading performance with transaction cost is as follows. The trading performance (WR, ARR, ASR, and MDD) of all ML algorithms decreases with the increase of transaction cost, as in the actual trading situation. Under the same transaction cost structure, the performance reductions of DNN algorithms, especially MLP, DBN, and SAE, are smaller than those of traditional ML algorithms, which shows that DNN algorithms have a stronger tolerance of and risk control ability against changes in transaction cost. Moreover, the impact of transparent transaction cost on SPICS is greater than that of slippage, while the opposite is true on CSICS. Through multiple comparative analysis of the different transaction cost structures, the performance of the trading algorithms is significantly worse than that without transaction cost, which shows that trading performance is sensitive to transaction cost. The contribution of this paper is that we use nonparametric statistical test methods to compare the differences in trading performance between different ML algorithms in both cases of transaction cost and no transaction cost. Therefore, it is helpful for us to select the most suitable algorithm from these ML algorithms for stock trading both in the US stock market and in the Chinese A-share market.
The remainder of this paper is organized as follows. Section 2 describes the architecture of this work. Section 3 gives the parameter settings of the ML models and the algorithm for generating trading signals based on the ML models mentioned in this paper. Section 4 gives the directional evaluation indicators, performance evaluation indicators, and backtesting algorithms. Section 5 uses nonparametric statistical test methods to analyze and evaluate the performance of these different algorithms in the two markets. Section 6 analyzes the impact of transaction cost on the performance of ML algorithms for trading. Section 7 discusses the differences in trading performance among different algorithms from the perspectives of data, algorithms, and transaction cost, and gives suggestions for algorithmic trading. Section 8 provides a comprehensive conclusion and future research directions.

Architecture of the Work
The general framework of predicting the future price trends of stocks, the trading process, and backtesting based on ML algorithms is shown in Figure 1. This work is organized into data acquisition, data preparation, intelligent learning algorithms, and trading performance evaluation. In this study, data acquisition is the first step. Where to get the data and what software to use to get the data quickly and accurately are questions we need to consider. In this paper, we use the R language for all computational procedures. Meanwhile, we obtain SPICS and CSICS from Yahoo Finance and Netease Finance, respectively. Secondly, the task of data preparation includes adjusting the acquired data for dividends and rights issues, generating a large number of well-recognized technical indicators as features, and applying max-min normalization to the features, so that the preprocessed data can be used as the input of the ML algorithms [34]. Thirdly, the trading signals of stocks are generated by the ML algorithms. In this part, we train the DNN models and the traditional ML algorithms by the WFA method; the trained ML models then predict the direction of the stocks in a future period, which is taken as the trading signal. Fourthly, we give some widely used directional evaluation indicators and performance evaluation indicators and adopt a backtesting algorithm for calculating the indicators. Finally, we use the trading signals to implement the backtesting algorithm of the stock daily trading strategy and then apply statistical test methods to evaluate whether there are statistically significant differences among the performance of these trading algorithms in both cases of transaction cost and no transaction cost.
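Among the preparation steps, max-min normalization is the numeric transformation named above. A minimal sketch (Python for illustration; the paper's code is in R, and the function name is ours) of scaling one feature column to [0, 1]:

```python
# Illustrative max-min normalization: x -> (x - min) / (max - min).

def min_max_normalize(column):
    """Scale a feature column to [0, 1]; a constant column maps to 0.0."""
    lo, hi = min(column), max(column)
    if hi == lo:                      # avoid division by zero
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]

print(min_max_normalize([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```

In practice the scaling would be applied column by column to each of the 44 technical indicators before they are fed to the classifiers.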

ML Algorithms
3.1. ML Algorithms and Their Parameter Settings. Given a training dataset D, the task of an ML algorithm is to classify class labels correctly. In this paper, we will use six traditional ML models (LR, SVM, CART, RF, NB, and XGB) and six DNN models (MLP, DBN, SAE, RNN, LSTM, and GRU) as classifiers to predict the ups and downs of the stock prices [34]. The main model parameters and training parameters of these ML algorithms are shown in Tables 1 and 2.
In Tables 1 and 2, features and class labels are set according to the input format of the various ML algorithms in the R language. Matrix(m, n) represents a matrix with m rows and n columns; Array(p, m, n) represents a tensor in which each layer is a Matrix(m, n) and the height of the tensor is p; c(h1, h2, h3, ...) represents a vector, where the length of the vector is the number of hidden layers and the i-th element of c is the number of neurons of the i-th hidden layer. In the experiment, m = 250 represents that we use the data of the past 250 trading days as training samples in each round of WFA, and n = 44 represents that the data of each day have 44 features. In Table 2, the parameters of the DNN models such as activation function, learning rate, batch size, and epoch are all default values in the R implementations. WFA [35] is a rolling training method: we use the latest data instead of all past data to train the model. In this paper, we use ML algorithms and the WFA method to do stock price trend predictions as trading signals. In each step, we use the data from the past 250 trading days (one year) as the training set and the data for the next 5 trading days (one week) as the test set. Each stock contains data of 2,000 trading days, so it takes (2000-250)/5 = 350 training rounds to produce a total of 350 * 5 = 1,750 predictions, which are the trading signals of the daily trading strategy. The WFA method is shown in Figure 2.
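The rolling scheme above (train on 250 days, predict the next 5, then slide the window forward one week) can be sketched as an index generator. This is an illustrative Python version (names ours); the paper implements WFA in R:

```python
# Illustrative walk-forward split generator: each round trains on the
# previous 250 trading days and tests on the following 5.

def walk_forward_splits(n_days, train_len=250, test_len=5):
    """Return a list of (train_range, test_range) index pairs, one per WFA round."""
    splits = []
    start = 0
    while start + train_len + test_len <= n_days:
        train = range(start, start + train_len)
        test = range(start + train_len, start + train_len + test_len)
        splits.append((train, test))
        start += test_len                      # roll the window forward one week
    return splits

splits = walk_forward_splits(2000)
print(len(splits))                             # 350 training rounds
print(sum(len(test) for _, test in splits))    # 1750 predictions in total
```

The counts reproduce the paper's arithmetic: (2000 - 250) / 5 = 350 rounds and 350 * 5 = 1,750 out-of-sample predictions per stock.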

The Algorithm Design of Trading Signal.
In this part, we use ML algorithms as classifiers to predict the ups and downs of the stocks in SPICS and CSICS and then use the prediction results as trading signals of daily trading. We use the WFA method to train each ML algorithm. We give the generating algorithm of trading signals according to Figure 2, which is shown in Algorithm 1.

Evaluation Indicators and Backtesting Algorithm
4.1. Directional Evaluation Indicators. In this paper, we use ML algorithms to predict the direction of stock prices, so the main task of the ML algorithms is to classify returns. Therefore, it is necessary for us to use directional evaluation indicators to evaluate the classification ability of these algorithms.
The actual label values of the dataset are sequences over the set {DOWN, UP}. Therefore, there are four combinations of predicted label values and actual label values, which are expressed as TU, FU, FD, and TD. TU denotes the number of cases where the actual label value is UP and the predicted label value is also UP; FU, FD, and TD are defined analogously. In this paper, "UP" is the profit source of our trading strategies, and the classification ability of an ML algorithm is evaluated by whether the algorithm can recognize "UP". Therefore, it is necessary to use PR and RR to evaluate the classification results. These two evaluation indicators were initially applied in the field of information retrieval to evaluate the relevance of retrieval results.
PR is the ratio of the number of correctly predicted UP to all predicted UP. That is as follows:

PR = TU / (TU + FU).
High PR means that ML algorithms can focus on "UP" rather than "DOWN".
RR is the ratio of the number of correctly predicted "UP" to the number of actually labeled "UP". That is as follows:

RR = TU / (TU + FD).
A high RR means that a large number of "UP" cases are captured and effectively identified. In fact, it is very difficult for an algorithm to achieve high PR and high RR at the same time. Therefore, it is necessary to measure the classification ability of an ML algorithm by using evaluation indicators that combine PR with RR. F1-Score is the harmonic average of PR and RR and is a more comprehensive evaluation indicator. That is as follows:

F1 = 2 * PR * RR / (PR + RR).

Here, it is assumed that the weights of PR and RR are equal when calculating F1, but this assumption is not always correct. It is feasible to calculate F1 with different weights for PR and RR, but determining the weights is a very difficult challenge.
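PR, RR, and F1 reduce to simple arithmetic on the confusion-matrix counts. An illustrative Python sketch (the paper's code is in R; the function name is ours):

```python
# Directional indicators from confusion-matrix counts:
#   PR = TU / (TU + FU), RR = TU / (TU + FD), F1 = 2*PR*RR / (PR + RR).

def precision_recall_f1(tu, fu, fd):
    """Return (PR, RR, F1) given the counts TU, FU, FD."""
    pr = tu / (tu + fu)               # precision on the "UP" class
    rr = tu / (tu + fd)               # recall on the "UP" class
    f1 = 2 * pr * rr / (pr + rr)      # harmonic mean of PR and RR
    return pr, rr, f1

pr, rr, f1 = precision_recall_f1(tu=60, fu=20, fd=40)
print(round(pr, 3), round(rr, 3), round(f1, 3))  # 0.75 0.6 0.667
```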
AUC is the area under the ROC (Receiver Operating Characteristic) curve. The ROC curve is often used to check the tradeoff between finding TU and avoiding FU. Its horizontal axis is the FU rate and its vertical axis is the TU rate. Each point on the curve represents the proportion of TU under a different FU threshold [36]. AUC reflects the classification ability of a classifier: the larger the value, the better the classification ability. It is worth noting that two different ROC curves may lead to the same AUC value, so qualitative analysis should be carried out in combination with the ROC curve when using the AUC value. In this paper, we use the R package "ROCR" to calculate AUC.
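Besides the "ROCR" route used in the paper, AUC can also be computed directly from its probabilistic meaning: the chance that a randomly chosen "UP" case receives a higher score than a randomly chosen "DOWN" case, with ties counted as one half. A small Python sketch of this Mann-Whitney formulation (quadratic time, which is fine for illustration):

```python
# AUC via the Mann-Whitney formulation: P(score of a random positive
# exceeds score of a random negative), ties counted as 0.5.

def auc_rank(scores, labels):
    """AUC for binary labels (1 = UP, 0 = DOWN) and real-valued scores."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.2]
labels = [1, 1, 0, 1]
print(auc_rank(scores, labels))   # 2/3: one positive scores below the negative
```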

Performance Evaluation Indicators.
Performance evaluation indicators are used for evaluating the profitability and risk control ability of trading algorithms. In this paper, we use the trading signals generated by the ML algorithms to conduct the backtesting and apply the WR, ARR, ASR, and MDD to do the trading performance evaluation [34]. WR is a measure of the accuracy of trading signals; ARR is the theoretical rate of return of a trading strategy; ASR is a risk-adjusted return, which represents the return from taking a unit of risk [37], and the risk-free return or benchmark is set to 0 in this paper; MDD is the largest decline in the price or value over the investment period, which is an important risk assessment indicator.
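All four indicators can be computed from a series of daily strategy returns. A hedged Python sketch (the paper computes these in R; the function name, the 252-day annualization factor, and the sample-variance estimator are our illustrative choices, while the risk-free rate of 0 matches the paper):

```python
# Illustrative WR, ARR, ASR, and MDD from daily strategy returns.

def performance_indicators(daily_returns, periods_per_year=252):
    """Return (WR, ARR, ASR, MDD) for a list of daily returns."""
    n = len(daily_returns)
    wr = sum(1 for r in daily_returns if r > 0) / n          # winning rate
    growth = 1.0
    for r in daily_returns:
        growth *= 1.0 + r
    arr = growth ** (periods_per_year / n) - 1.0             # annualized return rate
    mean = sum(daily_returns) / n
    var = sum((r - mean) ** 2 for r in daily_returns) / (n - 1)
    asr = 0.0 if var == 0 else mean / var ** 0.5 * periods_per_year ** 0.5
    peak, mdd, equity = 1.0, 0.0, 1.0                        # max drawdown on the equity curve
    for r in daily_returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        mdd = max(mdd, (peak - equity) / peak)
    return wr, arr, asr, mdd

wr, arr, asr, mdd = performance_indicators([0.01, -0.005, 0.02, -0.01])
print(round(wr, 2))  # 0.5
```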

Backtesting Algorithm.
Using historical data to implement a trading strategy is called backtesting. In the research and development phase of a trading model, researchers usually use a new set of historical data to do backtesting. Furthermore, the backtesting period should be long enough, because a large amount of historical data can ensure that the trading model minimizes the sampling bias of the data. We can theoretically obtain the statistical performance of trading models by backtesting. In this paper, we get 1750 trading signals for each stock. If tomorrow's trading signal is 1, we will buy the stock at today's closing price and then sell it at tomorrow's closing price; otherwise, we will not trade. Finally, we get AR, PR, RR, F1, AUC, WR, ARR, ASR, and MDD by implementing the backtesting algorithm based on these trading signals. The core return calculations of the backtesting algorithm (Algorithm 2, in R) are:

    DRR[t] = (P[t] - P[t-1]) / P[t-1]   # DRR is the daily return rate, i.e., the daily return of the BAH strategy
    TDRR[t] = lag(TS)[t] * DRR[t]       # TDRR is the daily return through trading
    Table = ConfusionMatrix(TS, Label)

For each evaluation indicator j, we test the following hypotheses:

Hja: the evaluation indicator j of all strategies is the same;
Hjb: the evaluation indicator j of all strategies is not all the same.

It is worth noting that no evaluation indicator of the trading algorithms or strategies conforms to the basic assumptions of the analysis of variance; that is, the assumptions that the variances of any two groups of samples are the same and that each group of samples obeys a normal distribution are violated. Therefore, it is not appropriate to use the t-test in the analysis of variance, and we take a nonparametric statistical test method instead. In this paper, we use the Kruskal-Wallis rank sum test [38] to carry out the analysis of variance. If the alternative hypothesis is established, we further apply the Nemenyi test [39] to do the multiple comparisons between trading strategies.
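The daily trading rule above, TDRR_t = lag(TS)_t * DRR_t, can be sketched as follows (Python for illustration; the paper's Algorithm 2 is in R, and the function name is ours):

```python
# Illustrative daily backtest: if the signal produced on day t-1 (for day t)
# is 1, earn day t's close-to-close return; otherwise stay out of the market.

def backtest(closes, signals):
    """Return the list of daily strategy returns (TDRR in the paper's notation).
    signals[t-1] is the trading signal produced on day t-1 for day t."""
    tdrr = []
    for t in range(1, len(closes)):
        drr = (closes[t] - closes[t - 1]) / closes[t - 1]   # DRR_t, the BAH daily return
        tdrr.append(drr if signals[t - 1] == 1 else 0.0)    # TDRR_t = lag(TS)_t * DRR_t
    return tdrr

closes = [100.0, 102.0, 101.0, 103.0]
signals = [1, 0, 1, 1]          # the last signal is unused within this window
print(backtest(closes, signals))
```

Setting all signals to 1 recovers the BAH return series, which is why the paper can feed TDRR, BDRR, or DRR into the same performance functions.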

The remaining steps of Algorithm 2 (R code) compute the performance indicators:

    ARR[i] = Return.annualized(TDRR)        # TDRR, BDRR, or DRR can be used
    ASR[i] = SharpeRatio.annualized(TDRR)   # TDRR, BDRR, or DRR can be used
    MDD[i] = maxDrawdown(TDRR)              # TDRR, BDRR, or DRR can be used
    Performance = cbind(AR, PR, RR, F1, AUC, WR, ARR, ASR, MDD)
    return(Performance)

Algorithm 2: Backtesting algorithm of daily trading strategy in R language.

Comparative Analysis of Performance of Different Trading Strategies in SPICS.
Table 4 reports the performance of the various trading algorithms in AR, PR, RR, F1, AUC, WR, ARR, ASR, and MDD. We can see that the AR, RR, F1, and AUC of XGB are the greatest among all trading algorithms. The WR of NB is the greatest among all trading strategies. The ARR of MLP is the greatest among all trading strategies including the benchmark index (S&P 500 index) and the BAH strategy. The ASR of RF is the greatest among all trading strategies. The MDD of the benchmark index is the smallest among all trading strategies.
It is worth noting that the ARR and ASR of all ML algorithms are greater than those of BAH strategy and the benchmark index.
(1) Through the hypothesis test analysis of H1a and H1b, we can obtain p value < 2.2e-16. Therefore, there are statistically significant differences between the AR of all trading algorithms, and we need to make further multiple comparative analysis, as shown in Table 5. The number in the table is the p value of the Nemenyi test for any two algorithms. From Tables 5 and 4, we can see that the AR of all DNN models are significantly lower than those of all traditional ML models. The AR of MLP, DBN, and SAE are significantly greater than those of RNN, LSTM, and GRU. There are no significant differences among the AR of MLP, DBN, and SAE, and none among the AR of RNN, LSTM, and GRU.
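All omnibus comparisons in this section use the Kruskal-Wallis rank sum test [38]. For intuition, the H statistic can be written in a few lines; this illustrative Python sketch omits the tie correction that a full implementation (such as R's kruskal.test, which the paper's toolchain would rely on) applies:

```python
# Kruskal-Wallis H statistic (no tie correction) for k independent samples.
# Large H means at least one group's distribution differs from the others.

def kruskal_wallis_h(groups):
    """H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), with R_i the rank sum of group i."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)) - 3 * (n + 1)

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 9]
print(round(kruskal_wallis_h([a, b, c]), 3))  # 7.2
```

The p value then comes from comparing H against a chi-squared distribution with k-1 degrees of freedom; when it is below the significance level, the pairwise Nemenyi test [39] locates which strategies differ.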
(2) Through the hypothesis test analysis of H2a and H2b, we can obtain p value < 2.2e-16. So, there are statistically significant differences between the PR of all trading algorithms, and we need to make further multiple comparative analysis, as shown in Table 6. The number in the table is the p value of the Nemenyi test for any two algorithms. From Tables 6 and 4, we can see that the PR of MLP, DBN, and SAE are significantly greater than those of the other trading algorithms. The PR of LSTM is not significantly different from that of GRU and NB. The PR of GRU is significantly lower than that of all traditional ML algorithms. The PR of NB is significantly lower than that of the other traditional ML algorithms.

(3) Through the hypothesis test analysis of H3a and H3b, we can obtain p value < 2.2e-16. So, there are statistically significant differences between the RR of all trading algorithms, and we need to make further multiple comparative analysis, as shown in Table 7. The number in the table is the p value of the Nemenyi test for any two algorithms. From Tables 7 and 4, we can see that there is no significant difference among the RR of all DNN models, but the RR of any DNN model is significantly lower than that of all traditional ML models. The RR of NB is significantly lower than that of the other traditional ML algorithms. The RR of CART is significantly lower than that of the other traditional ML algorithms except for NB.
(4) Through the hypothesis test analysis of H4a and H4b, we can obtain p value < 2.2e-16. So, there are statistically significant differences between the F1 of all trading algorithms, and we need to make further multiple comparative analysis, as shown in Table 8. The number in the table is the p value of the Nemenyi test for any two algorithms. From Tables 8 and 4, we can see that there is no significant difference among the F1 of MLP, DBN, and SAE. The F1 of MLP, DBN, and SAE are significantly greater than those of RNN, LSTM, GRU, and NB, but significantly smaller than those of RF, LR, SVM, and XGB. The F1 of GRU and LSTM have no significant difference, but they are significantly smaller than those of all traditional ML algorithms. The F1 of XGB is significantly greater than that of all other trading algorithms.

(5) Through the hypothesis test analysis of H5a and H5b, we can obtain p value < 2.2e-16. So, there are statistically significant differences between the AUC of all trading algorithms, and we need to make further multiple comparative analysis, as shown in Table 9. The number in the table is the p value of the Nemenyi test for any two algorithms. From Tables 9 and 4, we can see that there is no significant difference among the AUC of all DNN models. The AUC of all DNN models are significantly smaller than that of any traditional ML model.

(6) Through the hypothesis test analysis of H6a and H6b, we can obtain p value < 2.2e-16. So, there are statistically significant differences between the WR of all trading algorithms, and we need to make further multiple comparative analysis, as shown in Table 10 (the p value of any two trading strategies with a significant difference is in boldface). From Tables 4 and 10, we can see that the WR of MLP, DBN, and SAE have no significant difference, but they are significantly higher than those of BAH and the benchmark index and significantly lower than those of the other trading algorithms. The WR of RNN, LSTM, and GRU have no significant difference, but they are significantly higher than that of CART and significantly lower than those of NB and RF. The WR of LR is not significantly different from those of RF, SVM, and XGB.

(7) Through the analysis of the hypothesis test of H7a and H7b, we obtain p value < 2.2e-16. Therefore, there are significant differences between the ARR of all trading strategies including the benchmark index and BAH. We need to do further multiple comparative analysis, as shown in Table 11. From Tables 4 and 11, we can see that the ARR of the benchmark index and BAH are significantly lower than those of all ML algorithms. The ARR of MLP, DBN, and SAE are significantly greater than those of RNN, LSTM, GRU, NB, and LR, but not significantly different from those of CART, RF, SVM, and XGB; there is no significant difference between the ARR of MLP, DBN, and SAE. The ARR of RNN, LSTM, and GRU are significantly less than that of CART, but they are not significantly different from those of the other traditional ML algorithms. Among all traditional ML algorithms, the ARR of CART is significantly greater than those of NB and LR; otherwise, there is no significant difference between the ARR of any other two algorithms.
(8) Through the hypothesis test analysis of H8a and H8b, we obtain p value < 2.2e-16. Therefore, there are significant differences between the ASR of all trading strategies including the benchmark index and BAH. The results of our multiple comparative analysis are shown in Table 12. From Tables 4 and 12, we can see that the ASR of the benchmark index and BAH are significantly smaller than those of all ML algorithms. The ASR of MLP and DBN are significantly greater than that of CART and significantly smaller than those of NB, RF, and XGB, but there is no significant difference between MLP, DBN, and the other algorithms. The ASR of SAE is significantly greater than that of CART and significantly less than those of RF and XGB, but there is no significant difference between SAE and the other algorithms. The ASR of RNN and LSTM are significantly greater than that of CART and significantly less than that of RF, but there is no significant difference between RNN, LSTM, and the other algorithms. The ASR of GRU is significantly greater than that of CART, but there is no significant difference between GRU and the other traditional ML algorithms. Among the traditional ML algorithms, the ASR of all algorithms are significantly greater than that of CART; otherwise, there is no significant difference between the ASR of any other two algorithms.
(9) Through the hypothesis test analysis of H9a and H9b, we obtain p value < 2.2e-16. Therefore, there are significant differences between the MDD of all trading strategies including the benchmark index and BAH. The results of the multiple comparative analysis are shown in Table 13. From Tables 4 and 13, we can see that the MDD of any ML algorithm is significantly greater than that of the benchmark index but significantly smaller than that of the BAH strategy. The MDD of MLP and DBN are significantly smaller than those of GRU, RF, and XGB, but there is no significant difference between MLP, DBN, and the other algorithms. The MDD of SAE is significantly smaller than that of XGB, but there is no significant difference between SAE and the other algorithms. Otherwise, there is no significant difference between the MDD of any other two algorithms.
In a word, traditional ML algorithms such as NB, RF, and XGB have good performance in most directional evaluation indicators such as AR, PR, and F1. DNN algorithms such as MLP have good performance in PR and ARR. Among the traditional ML algorithms, the ARR of CART, RF, SVM, and XGB are not significantly different from those of MLP, DBN, and SAE; the ARR of CART is significantly greater than those of LSTM, GRU, and RNN, and otherwise the ARR of all traditional ML algorithms are not significantly worse than those of LSTM, GRU, and RNN. The ASR of all traditional ML algorithms except CART are not significantly worse than those of the six DNN models; the ASR of NB, RF, and XGB are even significantly greater than those of some DNN algorithms. The MDD of RF and XGB are significantly less than those of MLP, DBN, and SAE; the MDD of all traditional ML algorithms are not significantly different from those of LSTM, GRU, and RNN. The ARR and ASR of all ML algorithms are significantly greater than those of BAH and the benchmark index; the MDD of any ML algorithm is significantly greater than that of the benchmark index but significantly less than that of the BAH strategy.

Comparative Analysis of Performance of Different Trading Strategies in CSICS.
The analysis methods of this part are similar to those in Section 5.2. From Table 14, we can see that the AR, PR, and F1 of MLP are the greatest among all trading algorithms. The RR, AUC, WR, and ASR of LR are the greatest among all trading algorithms. The ARR of NB is the highest among all trading strategies. The MDD of the CSI 300 index (benchmark index) is the smallest among all trading strategies. The WR, ARR, and ASR of all ML algorithms are greater than those of the benchmark index and the BAH strategy.
(1) Through the hypothesis test analysis of H1a and H1b, we can obtain p value < 2.2e-16. Therefore, there are significant differences between the AR of all trading algorithms, and we need to do further multiple comparative analysis; the results are shown in Table 15. The number in the table is the p value of the Nemenyi test for any two algorithms. From Tables 14 and 15, we can see that the AR of MLP, DBN, and SAE have no significant difference, but they are significantly greater than those of all other trading algorithms except for SVM. The AR of GRU is significantly smaller than that of all traditional ML algorithms. There is no significant difference between the AR of any two traditional ML algorithms except for CART and SVM.

(2) Through the hypothesis test analysis of H2a and H2b, we can obtain p value < 2.2e-16. Therefore, there are significant differences between the PR of all trading algorithms, and we need to do further multiple comparative analysis; the results are shown in Table 16. The number in the table is the p value of the Nemenyi test for any two algorithms. From Tables 14 and 16, we can see that the PR of MLP, DBN, and SAE are significantly greater than those of all other trading algorithms, and the PR of MLP, DBN, and SAE have no significant difference. The PR of SVM is significantly greater than those of all other traditional ML algorithms, which have no significant difference between any two of them except for SVM. The PR of RNN is significantly greater than those of all traditional ML algorithms except for SVM. The PR of GRU and LSTM are not significantly different from those of all traditional ML algorithms except for SVM and LR.

(3) Through the hypothesis test analysis of H3a and H3b, we can obtain p value < 2.2e-16. Therefore, there are significant differences between the RR of all trading algorithms, and we need to do further multiple comparative analysis; the results are shown in Table 17. The number in the table is the p value of the Nemenyi test for any two algorithms. From Tables 14 and 17, we can see that the RR of all DNN models are not significantly different, and there is no significant difference among the RR of all traditional ML algorithms. The RR of RNN, GRU, and LSTM are significantly smaller than that of any traditional ML algorithm except for CART.
(4) Through the hypothesis test analysis of H4a and H4b, we can obtain p value < 2.2e-16. Therefore, there are significant differences between the F1 of all trading algorithms, and we need to do further multiple comparative analysis; the results are shown in Table 18. From Tables 14 and 18, we can see that the F1 of MLP, DBN, and SAE have no significant difference, but they are significantly greater than those of all other trading algorithms. There is no significant difference among the traditional ML algorithms except SVM, and the F1 of SVM is significantly greater than that of all other traditional ML algorithms.
(5) Through the hypothesis tests of H5a and H5b, we obtain p value < 2.2e-16, so there are significant differences between the AUC of all trading algorithms and further multiple comparative analysis is needed; the results are shown in Table 19. Each number in the table is the p value of the Nemenyi test between two algorithms. From Tables 14 and 19, we can see that the AUC of all DNN models have no significant difference, and there is no significant difference between the AUC of all traditional ML algorithms. The AUC of all traditional ML algorithms except CART are significantly greater than that of any DNN model. There is no significant difference among the AUC of MLP, SAE, DBN, RNN, and CART.
(6) Through the hypothesis tests of H6a and H6b, we obtain p value < 2.2e-16, so there are significant differences between the WR of all trading algorithms and further multiple comparative analysis is needed; the results are shown in Table 20. Each number in the table is the p value of the Nemenyi test between two algorithms. From Tables 14 and 20, we can see that the WR of BAH and the benchmark index have no significant difference, but they are significantly smaller than that of any ML algorithm. The WR of MLP, DBN, and SAE are significantly smaller than those of the other trading algorithms, but there is no significant difference between the WR of MLP, DBN, and SAE. The WR of LSTM and GRU have no significant difference, but they are significantly smaller than that of XGB and significantly greater than those of CART and NB. Among the traditional ML models, the WR of NB and CART are significantly smaller than those of the other algorithms. The WR of XGB is significantly greater than that of all other ML algorithms.
(7) Through the hypothesis tests of H7a and H7b, we obtain p value < 2.2e-16.
Therefore, there are significant differences between the ARR of all trading strategies including the benchmark index and the BAH strategy, and further multiple comparative analysis is needed; the results are shown in Table 21. From Tables 14 and 21, we can see that the ARR of the benchmark index and BAH are significantly smaller than those of all trading algorithms. The ARR of MLP is significantly higher than that of RF, but there is no significant difference between MLP and the other algorithms. The ARR of SAE and DBN are significantly higher than those of RF and XGB, but they are not significantly different from the ARR of the other algorithms. The ARR of NB is significantly higher than those of RF, SVM, and XGB. Otherwise, there is no significant difference between any other two algorithms. Therefore, the ARR of most traditional ML models are not significantly worse than that of the best DNN model.
(8) Through the hypothesis tests of H8a and H8b, we obtain p value < 2.2e-16, so there are significant differences between the ASR of all trading strategies including the benchmark index and the BAH strategy. The results of multiple comparative analysis are shown in Table 22. From Tables 14 and 22, we can see that the ASR of the benchmark index and BAH are significantly smaller than those of all trading algorithms. The ASR of all ML algorithms are significantly higher than those of CART and NB, while there is no significant difference between the ASR of CART and NB. Beyond that, there is no significant difference between any other two algorithms. Therefore, the ASR of all traditional ML models except NB and CART are not significantly worse than that of any DNN model.
(9) Through the hypothesis tests of H9a and H9b, we obtain p value < 2.2e-16, so there are significant differences between the MDD of these trading strategies including the benchmark index and the BAH strategy. The results of multiple comparative analysis are shown in Table 23. From Tables 14 and 23, we can see that the MDD of the benchmark index is significantly smaller than those of the other trading strategies including the BAH strategy. The MDD of BAH is significantly greater than those of all trading algorithms except NB. The MDD of MLP, DBN, and SAE are significantly lower than that of NB but significantly higher than those of RNN, LSTM, GRU, LR, and XGB. The MDD of NB is significantly greater than that of all other trading algorithms. Beyond that, there is no significant difference between any other two algorithms. Therefore, all ML algorithms except NB, especially LSTM, RNN, GRU, LR, and XGB, can play a role in controlling trading risk.
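The two-stage procedure used throughout this section, an omnibus Friedman test followed by Nemenyi pairwise comparisons, can be sketched as follows. The data are synthetic and the q value is the standard studentized-range critical value at alpha = 0.05, so this illustrates the mechanics of the test rather than reproducing Tables 15 to 23.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Synthetic indicator values: rows = stocks, columns = 3 trading algorithms.
rng = np.random.default_rng(0)
n, k = 185, 3
scores = rng.normal(loc=[0.50, 0.52, 0.55], scale=0.02, size=(n, k))

# Step 1: omnibus Friedman test across the k algorithms.
stat, p = friedmanchisquare(*scores.T)
print(f"Friedman p value: {p:.3g}")

# Step 2: Nemenyi post hoc test via average ranks and the critical difference.
avg_ranks = rankdata(scores, axis=1).mean(axis=0)
q_05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728}[k]  # studentized-range q at alpha = 0.05
cd = q_05 * np.sqrt(k * (k + 1) / (6.0 * n))
print("average ranks:", np.round(avg_ranks, 3), "critical difference:", round(cd, 3))

# Two algorithms differ significantly when their average ranks differ by more than cd.
for i in range(k):
    for j in range(i + 1, k):
        diff = abs(avg_ranks[i] - avg_ranks[j])
        print(f"algo {i} vs algo {j}: significant = {diff > cd}")
```

In practice each column would hold one evaluation indicator (e.g., the PR of one algorithm over all 185 CSICS), and a p-value matrix like Table 16 is just this pairwise comparison tabulated for every algorithm pair.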
In a word, some DNN models such as MLP, DBN, and SAE have good performance in AR, PR, and F1, while traditional ML algorithms such as LR and XGB have good performance in AUC and WR. The ARR of some traditional ML algorithms such as CART, NB, LR, and SVM are not significantly different from those of the six DNN models. The ASR of the six DNN algorithms are not significantly different from those of all traditional ML models except NB and CART. The MDD of LR and XGB are significantly smaller than those of MLP, DBN, and SAE and are not significantly different from those of LSTM, GRU, and RNN. The ARR and ASR of all ML algorithms are significantly greater than those of the BAH strategy and the benchmark index; the MDD of all ML algorithms are significantly greater than that of the benchmark index but significantly smaller than that of the BAH strategy.
From the above analysis and evaluation, we can see that the directional evaluation indicators of some DNN models are very competitive in CSICS, while those of some traditional ML algorithms have excellent performance in SPICS. Whether in SPICS or CSICS, the ARR and ASR of all ML algorithms are significantly greater than those of the benchmark index and the BAH strategy, respectively. Among all ML algorithms, there are always some traditional ML algorithms that are not significantly worse than the best DNN model for any performance evaluation indicator (ARR, ASR, and MDD). Therefore, if we do not consider transaction cost and other factors affecting trading, DNN models are an alternative but not necessarily the best choice when applied to stock trading.
In the same period, the ARR of any ML algorithm in CSICS is significantly greater than that of the same algorithm in SPICS (p value < 0.001 in the Nemenyi test). Meanwhile, the MDD of any ML algorithm in CSICS is significantly greater than that of the same algorithm in SPICS (p value < 0.001 in the Nemenyi test). The results show that quantitative trading algorithms can more easily obtain excess returns in the Chinese A-share market, but the volatility risk of trading in the Chinese A-share market was significantly higher than that in the US stock market over the past 8 years.

The Impact of Transaction Cost on Performance of ML Algorithms
Transaction cost can affect the profitability of a stock trading strategy. Transaction cost that can be ignored in long-term strategies is significantly magnified in daily trading. However, many algorithmic trading studies assume that transaction cost does not exist ([10, 17], etc.). In practice, frictions such as transaction cost distort the market from the perfect model in textbooks. Costs known prior to trading activity are referred to as transparent, such as commissions, exchange fees, and taxes. Costs that have to be estimated are known as implicit, comprising the bid-ask spread, latency or slippage, and the related market impact. This section focuses on transparent and implicit costs and how they affect trading performance in daily trading.

Experimental Settings and Backtesting Algorithm.
In this part, the transparent transaction cost is calculated as a certain percentage of transaction turnover for convenience. The implicit transaction cost is very complicated to calculate, and it is necessary to make a reasonable estimate for the random changes of the market environment and stock prices; therefore, we only discuss the impact of slippage on trading performance.
The transaction cost structures of American stocks are similar to those of Chinese A-shares. We assume that the transparent transaction cost is calculated as a percentage of turnover, such as less than 0.5% [40, 41] and 0.2% or 0.5% in the literature [42]. The estimation of slippage is different.
In some quantitative trading simulation software such as JoinQuant [43] and Abuquant [44], the slippage is set to 0.02. Both the transparent transaction cost and the implicit transaction cost are charged in both directions, when buying and when selling. It is worth noting that the transparent transaction cost varies across brokers, while the implicit transaction cost is related to market liquidity, market information, network status, trading software, etc.
We set slippage s = {s0 = 0, s1 = 0.01, s2 = 0.02, s3 = 0.03, s4 = 0.04} and transparent transaction cost c = {c0 = 0, c1 = 0.001, c2 = 0.002, c3 = 0.003, c4 = 0.004, c5 = 0.005}. For different {s, c} combinations, we study the impact of different transaction cost structures on trading performance. We assume that buying and selling positions are one unit, so the turnover is the corresponding stock price. When buying stocks, we not only pay a certain percentage cost of the purchase price but also an uncertain slippage cost; that is, we buy at a price higher than the real-time price C_{t-1}. Likewise, when selling stocks, we not only pay a certain percentage cost of the selling price but also an uncertain slippage cost; generally speaking, we sell at a price lower than the real-time price C_t. It is worth noting that our trading strategy is self-financing. If the ML algorithm predicts the continuous occurrence of buying signals or selling signals, i.e., |T_t - T_{t-1}| = 0, we continue to hold or do nothing, so the transaction cost at this time is 0. When |T_t - T_{t-1}| = 1, the position may change from holding to selling or from an empty position to buying, and we pay transaction cost due to the trading operation. Finally, we obtain the real yield R_t, where C_t denotes the t-th closing price, T_t denotes the t-th trading signal, P_t denotes the t-th executing price, and R_t denotes the t-th return rate.
We propose a backtesting algorithm with transaction cost based on the above analysis, as shown in Algorithm 3.
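Algorithm 3 is not reproduced in this excerpt, but the cost model described above, a proportional transparent cost c plus a fixed slippage s charged only when |T_t - T_{t-1}| = 1, can be sketched as a minimal long/flat backtest. The prices, signals, and function name here are illustrative assumptions, not the paper's own data.

```python
import numpy as np

def backtest_with_cost(close, signals, c=0.003, s=0.02):
    """Daily returns of a one-unit long/flat strategy.

    close   : closing prices C_t
    signals : 0/1 trading signals T_t (1 = hold a long position)
    c       : transparent cost as a fraction of turnover, charged per side
    s       : slippage in price units, charged per side
    """
    close = np.asarray(close, dtype=float)
    signals = np.asarray(signals)
    rets = np.zeros(len(close))
    for t in range(1, len(close)):
        # Raw daily return while a position is held (signal decided at t-1).
        if signals[t - 1] == 1:
            rets[t] = close[t] / close[t - 1] - 1.0
        # Cost is paid only when the position changes, i.e., |T_t - T_{t-1}| = 1.
        if signals[t] != signals[t - 1]:
            rets[t] -= c + s / close[t]
    return rets

close = [100, 101, 103, 102, 104, 106]
signals = [1, 1, 0, 0, 1, 1]          # buy, hold, sell, flat, buy, hold
gross = backtest_with_cost(close, signals, c=0.0, s=0.0)
net = backtest_with_cost(close, signals, c=0.003, s=0.02)
print("gross cumulative:", round(float(np.prod(1 + gross) - 1), 4))
print("net cumulative:  ", round(float(np.prod(1 + net) - 1), 4))
```

The self-financing property shows up as the zero return on flat days with unchanged signals; only the two signal switches in this toy series incur cost.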

Analysis of Impact of Transaction Cost on the Trading Performance of SPICS.
Transaction cost is one of the most important factors affecting trading performance. In US stock trading, transparent transaction cost can be charged as a fixed fee per order or per month, or as a floating fee based on the volume and turnover of each transaction. Sometimes, customers can also negotiate with brokers to determine the transaction cost, and the transaction cost charged by different brokers varies greatly. Meanwhile, implicit transaction cost is not known beforehand, and its estimation is very complex. Therefore, for ease of calculation, we take a percentage of turnover as the transparent transaction cost. For implicit transaction cost, we only consider the impact of slippage on trading performance.
(1) Analysis of Impact of Transaction Cost on WR. As can be seen from Table 24, WR decreases with the increase of transaction cost for any trading algorithm, which is intuitive. When the transaction cost is set to (s, c) = (0.04, 0.005), the WR of each algorithm is the lowest. Compared with the setting (s, c) = (0, 0), the WR of MLP, DBN, SAE, RNN, LSTM, GRU, CART, NB, RF, LR, SVM, and XGB are reduced by 5.80%, 5.97%, 5.91%, 15.83%, 18.04%, 13.95%, 21.71%, 16.04%, 22.16%, 18.54%, 18.50%, and 25.97%, respectively. Therefore, MLP, DBN, and SAE are more tolerant to transaction cost. Generally speaking, the DNN models have a stronger capacity to accommodate transaction cost than the traditional ML models. Taking a single trading algorithm such as MLP, if we do not consider slippage, i.e., s = 0, the average WR of MLP is 0.5510 under the transaction cost structures {(s0, c1), (s0, c2), (s0, c3), (s0, c4), (s0, c5)}; if we do not consider transparent transaction cost, i.e., c = 0, the average WR of MLP is 0.5618 under the transaction cost structures {(s1, c0), (s2, c0), (s3, c0), (s4, c0)}; so the transparent transaction cost has a greater impact than slippage. Through multiple comparative analysis, the WR under the transaction cost structure (s1, c0) is not significantly different from the WR without transaction cost for MLP, DBN, and SAE, while the WR under all other transaction cost structures are significantly smaller than the WR without transaction cost. For all trading algorithms except MLP, DBN, and SAE, the WR under the transaction cost structures {(s1, c0), (s2, c0)} are not significantly different from the WR without transaction cost, and the WR under all other transaction cost structures are significantly smaller than the WR without transaction cost.
(2) Analysis of Impact of Transaction Cost on ARR. As can be seen from Table 25, ARR decreases with the increase of transaction cost for any trading algorithm. Undoubtedly, when the transaction cost is set to (s, c) = (0.04, 0.005), the ARR of each algorithm is the lowest. Compared with the settings without transaction cost, the ARR of MLP, DBN, and SAE are reduced by 40.31%, 41.57%, and 40.93%, respectively, while the ARR of the other trading algorithms decrease by more than 100%. Therefore, excessive transaction cost can lead to serious losses in accounts. For a general setting of s and c, i.e., (s, c) = (0.02, 0.003), the ARR of MLP, DBN, and SAE decrease by 23.26%, 24.00%, and 23.61%, respectively, while the ARR of the other algorithms decrease by more than 50% and those of CART and XGB decrease by more than 100%. Therefore, MLP, DBN, and SAE are more tolerant to high transaction cost. Taking a single trading algorithm such as RNN, if we do not consider slippage, i.e., s = 0, the average ARR of RNN is 0.1434 under the transaction cost structures {(s0, c1), (s0, c2), (s0, c3), (s0, c4), (s0, c5)}.
Through the analysis of the performance evaluation indicators in Table 27, we find that trading performance after considering transaction cost is worse than that without considering transaction cost, as in actual trading situations. It is noteworthy that the performance changes of the DNN algorithms, especially MLP, DBN, and SAE, are very small after considering transaction cost, which shows that these three algorithms have good tolerance to changes of transaction cost. In particular, the MDD of the three algorithms show no significant difference from those with no transaction cost, so we can consider applying them in actual trading. Meanwhile, we conclude that the transparent transaction cost has a greater impact on trading performance than the slippage for SPICS, because the prices of SPICS are high when the transparent transaction cost is set to a certain percentage of turnover. In actual transactions, special attention needs to be paid to the fact that the trading performance under most transaction cost structures is significantly lower than the trading performance without considering transaction cost. It is worth noting that the performance of the traditional ML algorithms is not worse than that of the DNN algorithms without considering transaction cost, while the performance of the DNN algorithms is better than that of the traditional ML algorithms after considering transaction cost.
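The (s, c) grid itself is easy to sweep once a cost-aware ARR is defined. The sketch below uses synthetic prices and signals, so the degradation numbers are illustrative only and will not match Table 25.

```python
import numpy as np

def net_arr(close, signals, c, s, days_per_year=252):
    """Annualized return rate of a one-unit long/flat strategy under (s, c)."""
    close = np.asarray(close, dtype=float)
    signals = np.asarray(signals)
    # Daily return while long (signal decided the previous day), else 0.
    rets = np.where(signals[:-1] == 1, close[1:] / close[:-1] - 1.0, 0.0)
    # Charge proportional cost c plus slippage s only on position changes.
    switch = signals[1:] != signals[:-1]
    rets = rets - switch * (c + s / close[1:])
    total = np.prod(1.0 + rets)
    return total ** (days_per_year / len(rets)) - 1.0

rng = np.random.default_rng(1)
close = 100 * np.cumprod(1 + rng.normal(0.0005, 0.01, 2000))
signals = (rng.random(2000) < 0.55).astype(int)   # synthetic daily signals

base = net_arr(close, signals, c=0.0, s=0.0)
for s in (0.0, 0.01, 0.02, 0.03, 0.04):
    for c in (0.0, 0.001, 0.003, 0.005):
        arr = net_arr(close, signals, c=c, s=s)
        drop = (base - arr) / abs(base) * 100.0
        print(f"s={s:.2f} c={c:.3f} ARR={arr:+.3f} degradation={drop:7.1f}%")
```

Because the synthetic signals switch on roughly half of the days, even modest per-side costs compound into degradations well over 100%, which mirrors the qualitative finding above for high-turnover daily strategies.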

Analysis of Impact of Transaction Cost on the Trading Performance of CSICS.
Similar to Section 6.2, we discuss the impact of transaction cost on the trading performance of CSICS in the following. In the Chinese A-share market, the transparent transaction cost is usually set to a certain percentage of turnover.
It is difficult to find highly profitable trading algorithms in the presence of transaction cost. Financial data, which are generated in a changing financial market, are characterized by randomness, low signal-to-noise ratio, nonlinearity, and high dimensionality. Therefore, it is difficult to find inherent patterns in financial big data using these algorithms. In this paper, we also prove this point.
When using ML algorithms to predict stock prices, the directional evaluation indicators are not as good as expected. For example, the AR, PR, and RR of LSTM and RNN are about 50%-55%, which is only slightly better than random guessing. In contrast, some traditional ML algorithms such as XGB have a stronger ability in directional prediction of stock prices. Such simple models are less likely to overfit when capturing intrinsic patterns of financial data and can make better predictions about the directions of stock price changes. Actually, we assume that sample data are independent and identically distributed when using ML algorithms for classification tasks. DNN algorithms such as LSTM and RNN make full use of the autocorrelation of financial time series data, which is doubtful given the characteristics of financial data. Therefore, the prediction ability of these algorithms may be weakened by the noise of historical lag data.
From the perspective of trading algorithms, traditional ML models map the feature space to the target space with quite few parameters, so the learning goal can be accomplished well with less data. The DNN models connect neurons into multiple layers to form a complex network structure, through which the mapping relationships between input and output are established. As the number of neural network layers increases, the weight parameters can be automatically adjusted to extract advanced features. Compared with the traditional ML models, DNN models have more parameters, so their performance tends to increase as the amount of data grows, and complex DNN models need a lot of data to avoid underfitting and overfitting. However, we only use the data of 250 trading days (one year) as the training set to construct the trading model and then predict stock prices in the next week. Too few data may therefore lead to poor performance in the directional and performance predictions.
In the aspect of transaction cost, it is unexpected that DNN models, especially MLP, DBN, and SAE, have stronger adaptability to transaction cost than traditional ML models. In fact, the higher PR of MLP, DBN, and SAE indicate that they can identify more trading opportunities with higher positive returns. At the same time, DNN models can adapt well to changes of transaction cost structures; that is, compared with traditional ML models, the reductions of ARR and ASR of DNN models are very small when transaction cost increases. In particular, there is no significant difference between the MDD of DNN models under most transaction cost structures and that without considering transaction cost, which is further proof that DNN models can effectively control downside risk. Therefore, DNN algorithms are better choices than traditional ML algorithms in actual transactions. In this paper, we divide transaction cost into transparent transaction cost and implicit transaction cost. In different markets, the impacts of the two kinds of transaction cost on performance are different. Transparent transaction cost has a larger impact than implicit transaction cost in SPICS, while they are just the opposite in CSICS, because the prices of SPICS are higher than those of CSICS. While we have taken full account of the actual situation in real trading, the assumption of transaction cost in this paper is relatively simple. Therefore, we can consider the impact of opportunity cost and market impact cost on trading performance in future research work.
This paper makes a multiple comparative analysis of the trading performance of different ML algorithms by means of nonparametric statistical testing. We comprehensively discuss whether there are significant differences among the algorithms under different evaluation indicators in both cases of transaction cost and no transaction cost. We show that the DNN algorithms have better performance in terms of profitability and risk control ability in the actual environment with transaction cost. Therefore, DNN algorithms can be used as choices for algorithmic trading and quantitative trading.

Conclusion
In this paper, we apply 424 SPICS in the US market and 185 CSICS in the Chinese market as research objects, select the data of 2000 trading days before December 31, 2017, build 44 technical indicators as the input features for the ML algorithms, and then predict the trend of each stock price as the trading signal. Further, we formulate trading strategies based on these trading signals and perform backtesting. Finally, we analyze and evaluate the trading performance of these algorithms in both cases of transaction cost and no transaction cost.
Our contribution is to compare the significant differences between the trading performance of the DNN algorithms and the traditional ML algorithms in the Chinese and American stock markets. The experimental results in SPICS and CSICS show that some traditional ML algorithms have better performance than the DNN algorithms in most of the directional evaluation indicators. The DNN algorithms with the best performance indicators (WR, ARR, ASR, and MDD) among all ML algorithms are not significantly better than the traditional ML algorithms without considering transaction cost. With the increase of transaction cost, the trading performance of all ML algorithms becomes worse and worse. Under the same transaction cost structure, the DNN algorithms, especially MLP, DBN, and SAE, show lower performance degradation than the traditional ML algorithms, indicating that the DNN algorithms have a strong tolerance to changes of transaction cost. Meanwhile, the transparent transaction cost and the implicit transaction cost have different impacts on SPICS and CSICS. The experimental results also reveal that the trading performance of all ML algorithms is sensitive to transaction cost, and more attention is needed in actual transactions. Therefore, it is essential to select competitive algorithms for stock trading according to the trading performance, adaptability to transaction cost, and risk control ability of the algorithms in both the American stock market and the Chinese A-share market.

Mathematical Problems in Engineering 29
With the rapid development of ML technology and the convenient access to financial big data, future research can be carried out from the following aspects: (1) using ML algorithms to implement dynamic optimal portfolios among different stocks; (2) using ML algorithms for high-frequency trading and statistical arbitrage; (3) considering the impact of more complex implicit transaction cost, such as opportunity cost and market impact cost, on stock trading performance. The solutions of these problems will help to develop an advanced and profitable automated trading system based on financial big data, including dynamic portfolio construction, transaction execution, cost control, and risk management according to the changes of market conditions and even of the investor's risk preferences over time.

Figure 1 :
Figure 1: The framework for predicting stock price trends based on ML algorithms.

Table 1 :
Main parameter settings of traditional ML algorithms.
LR: input Matrix(250,44), output Matrix(250,1); the model link function is logit.
SVM: input Matrix(250,44), output Matrix(250,1); the kernel function is the Radial Basis kernel; the cost of constraints violation is 1.
CART: input Matrix(250,44), output Matrix(250,1); the maximum depth of any node of the final tree is 20; the splitting index is the Gini coefficient.
RF: input Matrix(250,44), output Matrix(250,1); the number of trees is 500; the number of variables randomly sampled as candidates at each split is 7.
NB: input Matrix(250,44), output Matrix(250,1); the prior probabilities of class membership are the class proportions of the training set.
XGB: input Matrix(250,44), output Matrix(250,1); the maximum depth of a tree is 10; the max number of iterations is 15; the learning rate is 0.3.
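For readers who want to reproduce these settings, they map naturally onto scikit-learn estimators. This translation is our own and fills unspecified options with scikit-learn defaults; XGB lives in the separate xgboost package, so it is left as a comment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

models = {
    # LR: the logit link is implicit in logistic regression.
    "LR": LogisticRegression(max_iter=1000),
    # SVM: Radial Basis kernel, cost of constraints violation C = 1.
    "SVM": SVC(kernel="rbf", C=1.0),
    # CART: maximum tree depth 20, Gini splitting index.
    "CART": DecisionTreeClassifier(max_depth=20, criterion="gini"),
    # RF: 500 trees, 7 variables sampled as candidates at each split.
    "RF": RandomForestClassifier(n_estimators=500, max_features=7),
    # NB: GaussianNB estimates priors from training-set class proportions.
    "NB": GaussianNB(),
    # XGB would be xgboost.XGBClassifier(max_depth=10, n_estimators=15,
    # learning_rate=0.3); omitted to keep this sketch scikit-learn only.
}

# One training window as in Table 1: 250 trading days x 44 technical indicators.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(250, 44)), rng.integers(0, 2, 250)
for name, model in models.items():
    model.fit(X, y)
    print(name, "train accuracy:", round(model.score(X, y), 3))
```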

Table 2 :
Main parameter settings of DNN algorithms.
The model is trained on the in-sample data (training set) and then applied to predict the out-of-sample data (testing dataset) of the future time period. After that, a new training set, which is the previous training set walked one step forward, is used for the next round of training. WFA can improve the robustness and the confidence of the trading strategy in real-time trading.
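The walk-forward scheme described above, training on 250 trading days and testing on the following week before sliding forward, can be sketched as an index generator. The function name and the one-week test length of 5 trading days are our reading of the setup, not code from the paper.

```python
def walk_forward_splits(n_days, train_len=250, test_len=5):
    """Yield (train_indices, test_indices) pairs that walk forward through time.

    Each round trains on `train_len` consecutive days and tests on the next
    `test_len` days (one trading week); the window then slides by `test_len`.
    """
    start = 0
    while start + train_len + test_len <= n_days:
        train = range(start, start + train_len)
        test = range(start + train_len, start + train_len + test_len)
        yield train, test
        start += test_len

# 2000 trading days, as in the dataset description.
splits = list(walk_forward_splits(n_days=2000))
first_train, first_test = splits[0]
print("rounds:", len(splits))
print("first round trains on days", first_train[0], "-", first_train[-1],
      "and tests on days", first_test[0], "-", first_test[-1])
```

Unlike random cross-validation, every test day here lies strictly after its training window, which is what keeps the backtest free of look-ahead bias.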

Table 3 :
Confusion matrix of two classification results of ML algorithm.
Table 4 shows the average value of each evaluation indicator.
Input: TS  # TS is the trading signals of a stock.
Output: AR, PR, RR, F1, AUC, WR, ARR, ASR, MDD
(1) N = length of Stock Code List  # 424 SPICS and 185 CSICS.
(2) B_t = Benchmark Index["Closing Price"]  # B is the closing price of the benchmark index.
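The indicators listed in the Output line split into two groups: directional ones (AR, PR, RR, F1) computed from the confusion matrix of Table 3, and performance ones (WR, ARR, ASR, MDD) computed from the daily return series. A minimal numpy version with hypothetical inputs might look like this.

```python
import numpy as np

def directional_indicators(y_true, y_pred):
    """AR (accuracy), PR (precision), RR (recall), and F1 from 0/1 labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    ar = np.mean(y_true == y_pred)
    pr = tp / (tp + fp) if tp + fp else 0.0
    rr = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * pr * rr / (pr + rr) if pr + rr else 0.0
    return ar, pr, rr, f1

def performance_indicators(daily_returns, days_per_year=252):
    """WR (win rate), ARR, ASR (annualized Sharpe), and MDD from daily returns."""
    r = np.asarray(daily_returns, dtype=float)
    wr = np.mean(r > 0)
    arr = np.prod(1 + r) ** (days_per_year / len(r)) - 1
    asr = np.sqrt(days_per_year) * r.mean() / r.std()
    equity = np.cumprod(1 + r)
    mdd = np.max(1 - equity / np.maximum.accumulate(equity))
    return wr, arr, asr, mdd

ar, pr, rr, f1 = directional_indicators([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
wr, arr, asr, mdd = performance_indicators([0.01, -0.02, 0.015, 0.005, -0.01])
print(f"AR={ar:.2f} PR={pr:.2f} RR={rr:.2f} F1={f1:.2f}")
print(f"WR={wr:.2f} ARR={arr:.2%} ASR={asr:.2f} MDD={mdd:.2%}")
```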

Table 4 :
Trading performance of different trading strategies in the SPICS. Best performance of all trading strategies is in boldface.

Table 5 :
Multiple comparison analysis between the AR of any two trading algorithms. The p value of the two trading strategies with significant difference is in boldface.

Table 6 :
Multiple comparison analysis between the PR of any two trading algorithms. The p value of the two trading strategies with significant difference is in boldface.

Table 5. The number in the table is the p value of the Nemenyi test between any two algorithms. When p value < 0.05, we think that the two trading algorithms have a significant difference; otherwise, we cannot reject the null hypothesis that the mean values of the AR of the two algorithms are equal. From Tables

Table 7 :
Multiple comparison analysis between the RR of any two trading algorithms. The p value of the two trading strategies with significant difference is in boldface.

Table 8 :
Multiple comparison analysis between the F1 of any two trading algorithms. The p value of the two trading strategies with significant difference is in boldface.

Table 9 :
Multiple comparison analysis between the AUC of any two trading algorithms. The p value of the two trading strategies with significant difference is in boldface.

Table 11 :
Multiple comparison analysis between the ARR of any two trading strategies. The p value of the two trading strategies with significant difference is in boldface.

Table 12 :
Multiple comparison analysis between the ASR of any two trading strategies. The p value of the two trading strategies with significant difference is in boldface.

Table 13 :
Multiple comparison analysis between the MDD of any two trading strategies. The p value of the two trading strategies with significant difference is in boldface.

Table 14 :
Trading performance of different trading strategies in CSICS. Best performance of all trading strategies is in boldface.

Table 15 :
Multiple comparison analysis between the AR of any two trading algorithms. The p value of the two trading strategies with significant difference is in boldface.

Table 16 :
Multiple comparison analysis between the PR of any two trading algorithms. The p value of the two trading strategies with significant difference is in boldface.

Table 17 :
Multiple comparison analysis between the RR of any two trading algorithms. The p value of the two trading strategies with significant difference is in boldface.

Table 18 :
Multiple comparison analysis between the F1 of any two trading algorithms. The p value of the two trading strategies with significant difference is in boldface.

Table 19 :
Multiple comparison analysis between the AUC of any two trading algorithms. The p value of the two trading strategies with significant difference is in boldface.

Table 20 :
Multiple comparison analysis between the WR of any two trading algorithms. The p value of the two trading strategies with significant difference is in boldface.

Table 21 :
Multiple comparison analysis between the ARR of any two trading strategies. The p value of the two trading strategies with significant difference is in boldface.

Table 22 :
Multiple comparison analysis between the ASR of any two trading strategies. The p value of the two trading strategies with significant difference is in boldface.

Table 23 :
Multiple comparison analysis between the MDD of any two trading strategies. The p value of the two trading strategies with significant difference is in boldface.

Table 24 :
The WR of SPICS for daily trading with different transaction cost. The result that there is no significant difference between performance without transaction cost and that with transaction cost is in boldface.

Table 25 :
The ARR of SPICS for daily trading with different transaction cost. The result that there is no significant difference between performance without transaction cost and that with transaction cost is in boldface.

(4) Analysis of Impact of Transaction Cost on MDD. As can be seen from Table 27, MDD increases with the increase of transaction cost for any trading algorithm. Undoubtedly, when the transaction cost is set to (s, c) = (0.04, 0.005), the MDD of each algorithm increases to the highest level. In this case, compared with the settings without transaction cost, the MDD of MLP, DBN, and SAE increase by 9.32%, 11.08%, and 10.32%, respectively. The MDD of the other trading algorithms increase by more than 80% compared with those without considering transaction cost. Therefore, excessive transaction cost can cause serious potential losses to the account. For a general setting of s and c, i.e., (s, c) = (0.02, 0.003), the MDD of MLP, DBN, and SAE increase by 4.83%, 5.80%, and 5.

Table 26 :
The ASR of SPICS for daily trading with different transaction cost. The result that there is no significant difference between performance without transaction cost and that with transaction cost is in boldface.