Research Article

Modeling and Simulation of Gas Emission Based on Recursive Modified Elman Neural Network

Lin Wei,¹ Yongqing Wu,¹ Hua Fu,² Yuping Yin,² and Qian Zhang¹
¹ Department of Basic Education, Liaoning Technical University, Huludao, China
² School of Electrical and Control Engineering, Liaoning Technical University, Huludao, China

Mathematical Problems in Engineering, vol. 2018, Article ID 9013839. doi:10.1155/2018/9013839. Received 13 October 2017; revised 3 January 2018; accepted 22 January 2018.

Copyright © 2018 Lin Wei et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

To achieve more effective prediction of the absolute gas emission quantity, this paper puts forward a new model based on the hidden recurrent feedback Elman neural network. The recursive part of the classic Elman network is fixed and cannot be adjusted, which limits the network's approximation ability to a certain extent. This paper therefore adds correction factors to the recursive part and uses error feedback to determine the parameters. The stability of the recursive modified Elman neural network is proved in the sense of Lyapunov stability theory, and the optimal learning rate is given. Experiments and analysis on historical data from actual mine monitoring show that, compared with the classic Elman prediction model, the recursive modified Elman neural network can effectively predict gas emission and improve both the accuracy and the efficiency of prediction.

Foundation of Liaoning Educational Committee LJ2017QL021 National Natural Science Foundation of China 61304173
1. Introduction

In the daily management of mine safety, an effective method for the prevention and control of mine gas disasters is the scientific analysis of the gas emission data provided by the monitoring system. Gas is one of the most important factors threatening safe production in mines. Most recent work focuses on different methods for improving the prediction performance of the absolute gas emission quantity, such as grey theory, principal component regression analysis, partial least squares support vector machines, virtual state variables with Kalman filtering, BP neural networks, and RBF neural networks.

In recent years, intelligent computing methods have developed rapidly in dynamic system identification [9, 10], time series prediction [11, 12], and other fields. In fact, many factors influence the absolute gas emission quantity, such as coal seam gas content, burying depth, and coal seam thickness [13, 14]. This means the gas emission prediction model is a multidimensional complex dynamic system, and it is difficult to predict the gas emission quantity accurately. A recurrent neural network is a highly nonlinear dynamical system that exhibits complex behaviors and a good ability to process dynamic information. As is well known, recurrent neural networks have wide applications in various areas [16, 17]. It is therefore expected that a recurrent neural network possesses better performance than a feedforward neural network (such as BP or RBF) in modeling and predicting the gas emission quantity. In particular, the Elman neural network (ENN) has proved successful in gas emission prediction [18, 19], and further works on improving the performance of gas emission prediction using the ENN can be found in the literature. However, a common drawback of the above gas emission prediction models based on the classic Elman neural network is that the recursive part of the hidden layer is fixed and cannot be adjusted. This drawback limits the nonlinear approximation ability of the classic Elman neural network.

From the above observation, this paper proposes a novel strategy of adding correction factors in recursive part of ENN, resulting in a new model called recursive modified Elman neural network (RMENN). The stability and convergence of RMENN model are theoretically proved, and some meaningful results are obtained in this paper. In practice, through the analysis of the main factors affecting coal gas emission, this paper puts forward the gas emission prediction model based on recursive modified Elman neural network.

The rest of this paper is organized as follows. The establishment of RMENN model is described in Section 2. The learning algorithms of RMENN model are described in Section 3. The performance analysis and flowchart of RMENN model are described in Section 4. Experiment analysis results on the gas emission prediction are presented in Section 5. Finally, the paper is concluded in Section 6.

2. Establishment of RMENN Model

As discussed in Section 1, we aim to propose a specific architecture that overcomes the aforementioned drawback of a fixed recursive structure and improves the nonlinear approximation ability of the ENN. As shown in Figure 1, this paper adds correction factors in the context layer and the output layer to adjust the values of the recursive parts. Let $u(k)\in\mathbb{R}^r$ and $y(k)\in\mathbb{R}^m$ denote the network input and output vectors at discrete time $k$, respectively. Let $w^1\in\mathbb{R}^{n\times n}$, $w^2\in\mathbb{R}^{n\times r}$, $w^3\in\mathbb{R}^{m\times n}$, and $w^4\in\mathbb{R}^{n\times m}$ denote the context-hidden, input-hidden, hidden-output, and output-hidden weight matrices, respectively. Let $x_c(k)\in\mathbb{R}^n$ and $x(k)\in\mathbb{R}^n$ denote the output vectors of the context layer and the hidden layer at discrete time $k$, respectively. Let $\beta x(k-1)$ be the correction part of the hidden context layer and $\varphi y(k-1)$ be the correction part of the output context layer. $f(\cdot)$ and $g(\cdot)$ are the activation functions; in general, $f(\cdot)$ is the sigmoid function and $g(\cdot)$ is the linear function. Let $y_c(k)\in\mathbb{R}^m$ be the output vector of the output context layer at discrete time $k$.

Topology of the RMENN.

With this feature, the new model, RMENN, is able to improve the update power of the classic ENN and exhibits rapid convergence and high prediction accuracy. The relationship between input and output of the RMENN can be expressed as
$$\begin{aligned} x(k) &= f\big(w^1 x_c(k) + w^2 u(k-1) + w^4 y_c(k)\big),\\ x_c(k) &= \alpha x_c(k-1) + \beta x(k-1),\\ y_c(k) &= \gamma y_c(k-1) + \varphi y(k-1),\\ y(k) &= g\big(w^3 x(k)\big), \end{aligned} \tag{1}$$
where $\alpha,\beta$ ($0\le\alpha<1$, $0<\beta\le 1$) are the feedback factor and correction factor of the context layer, respectively, and $\gamma,\varphi$ ($0\le\gamma<1$, $0\le\varphi\le 1$) are the feedback factor and correction factor of the output layer, respectively. In particular, when $\alpha=0$, $\beta=1$, $\gamma=0$, $\varphi=0$, this model reduces to the classic Elman neural network.
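As a concrete illustration, one forward step of (1) can be sketched in a few lines of NumPy. The function name and the layout of the state tuple are our own choices; the sigmoid is used for $f(\cdot)$ and the identity for $g(\cdot)$, as stated above.

```python
import numpy as np

def rmenn_forward(u_prev, state, W1, W2, W3, W4, alpha, beta, gamma, phi):
    """One forward step of the RMENN defined by Eq. (1).

    state = (x_prev, xc_prev, yc_prev, y_prev): previous hidden output,
    hidden-context, output-context, and network output vectors.
    """
    x_prev, xc_prev, yc_prev, y_prev = state
    # Context layers: leaky feedback plus corrected copies of the last outputs.
    xc = alpha * xc_prev + beta * x_prev       # hidden context x_c(k)
    yc = gamma * yc_prev + phi * y_prev        # output context y_c(k)
    # Hidden layer: sigmoid f(.), fed by both context layers and the input.
    x = 1.0 / (1.0 + np.exp(-(W1 @ xc + W2 @ u_prev + W4 @ yc)))
    # Output layer: linear g(.).
    y = W3 @ x
    return y, (x, xc, yc, y)
```

With $\alpha=0$, $\beta=1$, $\gamma=\varphi=0$ this step reduces to the classic ENN, since the hidden context then equals the previous hidden output and the output context vanishes.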

The topology of the RMENN is shown in Figure 1.

3. Learning Algorithm for RMENN

The main objective of the learning algorithm is to minimize a predefined energy function by adaptively adjusting the network parameters on a given set of input-output pairs. The energy function used in the RMENN is
$$e(k) = \tfrac{1}{2}\big(y_{qw}(k) - y(k)\big)^{T}\big(y_{qw}(k) - y(k)\big), \tag{2}$$
where $y_{qw}(k)\in\mathbb{R}^m$ is the desired output associated with the input pattern and $y(k)\in\mathbb{R}^m$ is the inferred output at discrete time $k$.

The weights of the RMENN are updated along the negative gradient of the energy function:
$$\Delta w^3_{ij} = \eta_3(k)\,\delta^o_i\,x_j(k), \quad i=1,\dots,m;\ j=1,\dots,n; \tag{3}$$
$$\Delta w^2_{jq} = \eta_2(k)\,\delta^h_j\,u_q(k-1), \quad j=1,\dots,n;\ q=1,\dots,r; \tag{4}$$
$$\Delta w^1_{jl} = \eta_1(k)\sum_{i=1}^{m}\delta^o_i w^3_{ij}\,\frac{\partial x_j(k)}{\partial w^1_{jl}}, \quad j=1,\dots,n;\ l=1,\dots,n; \tag{5}$$
$$\Delta w^4_{js} = \eta_4(k)\sum_{i=1}^{m}\delta^o_i w^3_{ij}\,\frac{\partial x_j(k)}{\partial w^4_{js}}, \quad j=1,\dots,n;\ s=1,\dots,m, \tag{6}$$
where $\eta_1(k),\eta_2(k),\eta_3(k),\eta_4(k)$ are the learning rates of $w^1,w^2,w^3,w^4$, respectively. The terms $\delta^o_i$ and $\delta^h_j$ are calculated as
$$\delta^o_i = y_{qw,i}(k) - y_i(k), \qquad \delta^h_j = \sum_{i=1}^{m}\delta^o_i w^3_{ij} f_j'(\cdot), \tag{7}$$
and the derivatives $\partial x_j(k)/\partial w^1_{jl}$ and $\partial x_j(k)/\partial w^4_{js}$ satisfy the recursions
$$\frac{\partial x_j(k)}{\partial w^1_{jl}} = \alpha\,\frac{\partial x_j(k-1)}{\partial w^1_{jl}} + \beta f_j'(\cdot)\,x_l(k-1), \tag{8}$$
$$\frac{\partial x_j(k)}{\partial w^4_{js}} = \gamma\,\frac{\partial x_j(k-1)}{\partial w^4_{js}} + \varphi f_j'(\cdot)\,y_s(k-1). \tag{9}$$
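The update rules (3)-(9) can be sketched as a single training step. This is an illustrative implementation under our own naming (`net`, `P1`, `P4` for the recursive derivative tables of (8)-(9)); for brevity it uses one common learning rate in place of the four separate rates of the paper.

```python
import numpy as np

def rmenn_grad_step(net, u_prev, y_target, lr):
    """One gradient update following Eqs. (3)-(9).

    `net` is a dict holding W1..W4, the factors alpha/beta/gamma/phi, the
    running state, and the recursive derivative tables P1 = dx/dW1 (n x n)
    and P4 = dx/dW4 (n x m) from Eqs. (8)-(9).
    """
    W1, W2, W3, W4 = net["W1"], net["W2"], net["W3"], net["W4"]
    a, b, g, p = net["alpha"], net["beta"], net["gamma"], net["phi"]
    x_prev, xc_prev, yc_prev, y_prev = net["state"]

    # ---- forward pass, Eq. (1) ----
    xc = a * xc_prev + b * x_prev
    yc = g * yc_prev + p * y_prev
    x = 1.0 / (1.0 + np.exp(-(W1 @ xc + W2 @ u_prev + W4 @ yc)))
    y = W3 @ x
    fprime = x * (1.0 - x)                      # sigmoid derivative f'_j

    # ---- error terms, Eq. (7) ----
    delta_o = y_target - y                      # output delta
    delta_h = (W3.T @ delta_o) * fprime         # hidden delta

    # ---- recursive derivatives, Eqs. (8)-(9) ----
    net["P1"] = a * net["P1"] + b * np.outer(fprime, x_prev)
    net["P4"] = g * net["P4"] + p * np.outer(fprime, y_prev)

    # ---- weight updates, Eqs. (3)-(6); s_j = sum_i delta_o_i * w3_ij ----
    s = W3.T @ delta_o
    W3 += lr * np.outer(delta_o, x)             # Eq. (3)
    W2 += lr * np.outer(delta_h, u_prev)        # Eq. (4)
    W1 += lr * s[:, None] * net["P1"]           # Eq. (5)
    W4 += lr * s[:, None] * net["P4"]           # Eq. (6)

    net["state"] = (x, xc, yc, y)
    return y, 0.5 * float(delta_o @ delta_o)    # output and energy e(k)
```

Repeatedly applying the step to a fixed input-target pair drives the energy $e(k)$ of (2) down, which is the behavior the stability analysis in Section 4 guarantees for suitable learning rates.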

4. Performance Analysis 4.1. Convergence and Stability

An appropriate learning rate makes the learning algorithm converge faster. Based on the Lyapunov stability theory, the conditions on $\eta_1(k)$ and $\eta_4(k)$ are proved in detail below; the proofs for $\eta_2(k)$ and $\eta_3(k)$ are similar.

Theorem 1.

Let the weights of RMENN be updated by (3)–(9).

(1) If $0<\eta_1(k)<32(1-\alpha)^2\beta^{-2}n^{-2}\big(\max_{i,j}|w^3_{ij}(k)|\big)^{-2}$, the iterative learning of $w^1$ by (5) is stable and convergent.

(2) If $0<\eta_2(k)<8\big(nr\max_k|u_k(k)|\max_{i,j}|w^3_{ij}(k)|\big)^{-1}$, the iterative learning of $w^2$ by (4) is stable and convergent.

(3) If $0<\eta_3(k)<2/n$, the iterative learning of $w^3$ by (3) is stable and convergent.

(4) If $0<\eta_4(k)<32(1-\gamma)^2(mn)^{-1}\big(\varphi M\max_{i,j}|w^3_{ij}(k)|\big)^{-2}$, the iterative learning of $w^4$ by (6) is stable and convergent.

Proof.

(1) Let the energy function be described by (2).

Since
$$\Delta e(k) = e(k+1) - e(k) = \frac{1}{2}\sum_{i=1}^{m}\big(e_i^2(k+1) - e_i^2(k)\big), \tag{10}$$
where
$$e_i(k+1) = e_i(k) + \sum_{j=1}^{n}\sum_{l=1}^{n}\frac{\partial e_i(k)}{\partial w^1_{jl}}\,\Delta w^1_{jl} = e_i(k) - \sum_{j=1}^{n}\sum_{l=1}^{n}\frac{\partial y_i(k)}{\partial w^1_{jl}}\,\Delta w^1_{jl}, \tag{11}$$
it follows that
$$\Delta e(k) = \frac{1}{2}\sum_{i=1}^{m}e_i^2(k)\left[\left(1-\eta_1(k)\left(\frac{\partial y_i(k)}{\partial w^1}\right)^{T}\frac{\partial y_i(k)}{\partial w^1}\right)^{2} - 1\right] = \frac{1}{2}\sum_{i=1}^{m}e_i^2(k)\left[\left(1-\eta_1(k)\left\|\frac{\partial y_i(k)}{\partial w^1}\right\|^{2}\right)^{2} - 1\right], \tag{12}$$
where $w^1$ is an $n\times n$ matrix and $\|\cdot\|$ is the 2-norm.

Since
$$\frac{\partial y_i(k)}{\partial w^1_{jl}} = \frac{\partial y_i(k)}{\partial x_j(k)}\,\frac{\partial x_j(k)}{\partial w^1_{jl}} = w^3_{ij}(k)\,\frac{\partial x_j(k)}{\partial w^1_{jl}}, \quad i=1,\dots,m;\ j,l=1,\dots,n, \tag{13}$$
then, according to (8) with the initial condition
$$\frac{\partial x_j(0)}{\partial w^1_{jl}} = 0, \quad j,l=1,\dots,n, \tag{14}$$
we obtain
$$\left|\frac{\partial x_j(k)}{\partial w^1_{jl}}\right| \le \beta\sum_{t=1}^{k}\alpha^{t-1}f_j'(k-t+1)\,x_l(k-t+1), \quad j,l=1,\dots,n. \tag{15}$$
Since $0<f_j'(\cdot)\le 1/4$, $0\le\alpha<1$, $0<\beta\le 1$, and $0<x_l(k-t+1)<1$, we conclude that
$$\left|\frac{\partial x_j(k)}{\partial w^1_{jl}}\right| \le \beta\sum_{t=1}^{k}\alpha^{t-1}f_j'(k-t+1)\,x_l(k-t+1) < \frac{\beta}{4(1-\alpha)}, \quad j,l=1,\dots,n. \tag{16}$$
Therefore
$$\left|\frac{\partial y_i(k)}{\partial w^1_{jl}}\right| = \left|w^3_{ij}(k)\,\frac{\partial x_j(k)}{\partial w^1_{jl}}\right| < \frac{\beta}{4(1-\alpha)}\max_{i,j}\left|w^3_{ij}(k)\right|, \qquad \left\|\frac{\partial y(k)}{\partial w^1}\right\| < \frac{n\beta}{4(1-\alpha)}\max_{i,j}\left|w^3_{ij}(k)\right|. \tag{17}$$
Since $0<\eta_1(k)<32(1-\alpha)^2/\big(n^2\beta^2(\max_{i,j}|w^3_{ij}(k)|)^2\big)$, we have $\Delta e(k)=e(k+1)-e(k)<0$ for $e(k)\ne 0$, which ensures that the iterative learning of $w^1$ is stable and convergent.

(4) The proof is similar to that of part (1).

Since
$$\Delta e(k) = \frac{1}{2}\sum_{i=1}^{m}e_i^2(k)\left[\left(1-\eta_4(k)\left\|\frac{\partial y_i(k)}{\partial w^4}\right\|^{2}\right)^{2} - 1\right], \tag{18}$$
then, according to (9) with the initial condition
$$\frac{\partial x_j(0)}{\partial w^4_{js}} = 0, \quad j=1,\dots,n;\ s=1,\dots,m, \tag{19}$$
we obtain
$$\left|\frac{\partial x_j(k)}{\partial w^4_{js}}\right| \le \varphi\sum_{t=1}^{k}\gamma^{t-1}f_j'(k-t+1)\,y_s(k-t+1), \quad j=1,\dots,n;\ s=1,\dots,m. \tag{20}$$
Let $M=\max_k|y_s(k)|$.

Then, since $0\le\gamma<1$, we have $\left|\partial x_j(k)/\partial w^4_{js}\right| < \varphi M/\big(4(1-\gamma)\big)$.

Hence,
$$\left|\frac{\partial y_i(k)}{\partial w^4_{js}}\right| = \left|w^3_{ij}(k)\,\frac{\partial x_j(k)}{\partial w^4_{js}}\right| < \frac{\varphi M}{4(1-\gamma)}\max_{i,j}\left|w^3_{ij}(k)\right|, \qquad \left\|\frac{\partial y(k)}{\partial w^4}\right\| < \frac{\varphi M\sqrt{mn}}{4(1-\gamma)}\max_{i,j}\left|w^3_{ij}(k)\right|. \tag{21}$$
Since $0<\eta_4(k)<32(1-\gamma)^2(mn)^{-1}\big(\varphi M\max_{i,j}|w^3_{ij}(k)|\big)^{-2}$, we have $\Delta e(k)<0$ for $e(k)\ne 0$, which ensures that the iterative learning of $w^4$ is stable and convergent.

The proofs for $\eta_2(k)$ and $\eta_3(k)$ are similar to those for $\eta_1(k)$ and $\eta_4(k)$.

This completes the proof.

4.2. Adaptive Learning Rate of RMENN

As explained before, we can get the optimal learning rate as follows.

If $1-\eta_1(k)\left\|\partial y_i(k)/\partial w^1\right\|^{2} = 0$, then $\Delta e(k)$ attains its minimum negative value and the convergence speed of the RMENN is fastest.

Taking $\left\|\partial y(k)/\partial w^1\right\| \approx \dfrac{n\beta}{4(1-\alpha)}\max_{i,j}|w^3_{ij}(k)|$, we obtain the optimal learning rate
$$\eta_1(k) = 16(1-\alpha)^2\Big(n\beta\max_{i,j}\left|w^3_{ij}(k)\right|\Big)^{-2}. \tag{22}$$
Similarly,
$$\eta_2(k) = 4\Big(nr\max_k\left|u_k(k)\right|\max_{i,j}\left|w^3_{ij}(k)\right|\Big)^{-1}, \qquad \eta_3(k) = \frac{1}{n}, \qquad \eta_4(k) = 16(1-\gamma)^2(mn)^{-1}\Big(\varphi M\max_{i,j}\left|w^3_{ij}(k)\right|\Big)^{-2}, \tag{23}$$
where $\eta_1(k),\eta_2(k),\eta_3(k),\eta_4(k)$ are the optimal adaptive learning rates of $w^1,w^2,w^3,w^4$, respectively.
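The closed forms (22)-(23) are straightforward to evaluate at each epoch. The helper below is a sketch under the assumption that $\max_k|u_k(k)|$ and $M=\max_k|y_s(k)|$ are estimated from the recorded input and output histories; the function name is our own.

```python
import numpy as np

def optimal_rates(W3, u_history, y_history, n, r, m, alpha, beta, gamma, phi):
    """Approximate optimal learning rates from Eqs. (22)-(23).

    M is taken as the largest recorded output magnitude and w3max as the
    largest magnitude in W3, following the bounds derived in Section 4.1.
    """
    w3max = np.max(np.abs(W3))
    umax = np.max(np.abs(u_history))
    M = np.max(np.abs(y_history))
    eta1 = 16.0 * (1 - alpha) ** 2 / (n * beta * w3max) ** 2        # Eq. (22)
    eta2 = 4.0 / (n * r * umax * w3max)                             # Eq. (23)
    eta3 = 1.0 / n                                                  # Eq. (23)
    eta4 = 16.0 * (1 - gamma) ** 2 / (m * n * (phi * M * w3max) ** 2)
    return eta1, eta2, eta3, eta4
```

Each optimal rate is half its corresponding stability bound in Theorem 1, which is why updating the rates as $W^3$, the inputs, and the outputs evolve keeps the iteration inside the convergent region.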

The training algorithm procedures of RMENN are shown in Figure 2.

Training algorithm procedures of RMENN.

5. Model Test 5.1. Data Selection and Preliminary Analysis

China is the largest coal consumer among the developing countries. The safety situation of China's coal mines is grim, especially with respect to gas disaster accidents, which cause heavy casualties and property losses and have attracted close attention from the government. Precise prediction of gas emission is important for mineral production safety in China, and gas emission prediction models based on small samples have long been a significant subject in the coal mine gas research field.

This paper uses the absolute gas emission quantity recorded at a working face of the Qianjiaying Mine of the Kailuan Mining Group from May 2007 to December 2008. The main factors, shown in Table 1, are coal seam gas content ($x_1$), burying depth ($x_2$), coal seam thickness ($x_3$), coal seam dip angle ($x_4$), mining height ($x_5$), daily work progress ($x_6$), working face length ($x_7$), production rate ($x_8$), adjacent layer gas content ($x_9$), adjacent layer thickness ($x_{10}$), adjacent layer spacing ($x_{11}$), mining intensity ($x_{12}$), interlayer lithology ($x_{13}$), and gas emission quantity ($y$).

The statistical data of coalface gas emission and influencing factors.

Number x1/(m³⋅t⁻¹) x2/m x3/m x4/(°) x5/m x6/(m⋅d⁻¹) x7/m x8/% x9/(m³⋅t⁻¹) x10/m x11/m x12/(t⋅d⁻¹) x13 y/(m³⋅min⁻¹)
(1) 1.92 408 2.0 10 2.0 4.42 155 96 2.02 1.50 20 1 5.03 3.34
(2) 2.14 421 1.8 11 1.8 4.13 145 95 2.64 1.62 19 1 4.75 3.56
(3) 2.58 450 2.3 10 2.3 4.67 150 95 2.41 1.48 18 2 4.91 3.67
(4) 2.40 456 2.2 15 2.2 4.51 160 94 2.55 1.75 20 2 4.63 4.17
(5) 3.22 516 2.8 13 2.8 3.45 180 93 2.21 1.72 12 2 4.78 4.60
(6) 2.80 527 2.5 17 2.5 3.28 180 94 2.81 1.81 11 1 4.51 4.92
(7) 3.23 517 2.8 13 2.8 3.46 180 93 2.23 1.71 12 2 4.76 4.61
(8) 3.35 531 2.9 9 2.9 3.68 165 93 1.88 1.42 13 2 1.82 4.78
(9) 3.61 550 2.9 12 2.9 4.02 155 92 2.12 1.60 14 2 4.83 5.23
(10) 3.71 573 3.2 11 3.2 2.92 175 91 3.11 1.46 13 2 4.63 5.62
(11) 4.21 590 5.9 8 5.9 2.85 170 79 3.40 1.50 18 3 4.77 7.24
(12) 4.03 604 6.2 9 6.2 2.64 180 81 3.15 1.80 16 3 4.70 7.80
(13) 4.80 630 6.5 9 6.16 2.77 165 78 3.02 1.74 17 3 4.62 7.68
(14) 4.67 640 6.3 11 6.3 2.75 175 80 2.56 1.75 15 3 4.60 7.95
(15) 2.43 450 2.7 11 2.7 4.32 165 93 2.35 1.85 16 2 4.58 5.06
(16) 3.16 544 2.7 17 2.7 3.81 165 93 2.81 1.79 13 2 4.90 4.93
(17) 4.62 629 6.4 13 6.4 2.80 170 80 3.35 1.61 19 3 4.63 8.04
(18) 4.53 635 6.2 9 6.2 2.73 160 72 2.94 1.73 17 3 4.61 7.56
(19) 3.87 580 3.9 11 3.9 2.85 170 92 3.02 1.39 14 2 4.72 5.82
(20) 3.24 509 2.5 14 2.5 4.40 160 93 2.79 1.72 13 2 4.65 4.36

In order to reduce the influence of differing dimensions, the experimental data are normalized to values between the lower and upper limits of each factor by the formula $x' = (x - x_{\min})/(x_{\max} - x_{\min})$, where $x$ is the experimental datum shown in Table 1, $x_{\min}$ and $x_{\max}$ are the lower and upper limits of the experimental data, respectively, and $x'$ is the standardized value. The formula $x = (x_{\max} - x_{\min})x' + x_{\min}$ is then used to restore the data. In particular, we use the first 16 samples for training and the remaining 4 for validation. Through experiments, the optimal topological structure of the classic ENN was found to be 13-16-1, so the RMENN uses the same topology for comparison, with $\alpha=0.4$, $\beta=0.6$, $\gamma=0.1$, $\varphi=0.3$. The training error goal is set to 0.01.
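The normalization and restoration formulas above can be sketched column-wise over the factor matrix (the function names are ours; a constant column would need special handling, since its range is zero):

```python
import numpy as np

def minmax_scale(X):
    """Column-wise min-max normalization x' = (x - x_min) / (x_max - x_min)."""
    X = np.asarray(X, dtype=float)
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    return (X - xmin) / (xmax - xmin), xmin, xmax

def minmax_restore(Xs, xmin, xmax):
    """Inverse transform x = (x_max - x_min) * x' + x_min."""
    return (xmax - xmin) * np.asarray(Xs, dtype=float) + xmin
```

Applying `minmax_scale` to the 20 x 14 matrix of Table 1 maps every column into [0, 1]; `minmax_restore` with the stored limits recovers the original physical units for the predicted gas emission.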

5.2. Training Results

After 50 independent simulations, we compare the training performance of the two models. From the point of view of the training error traces, Figures 3(a)-3(c) show that the RMENN has stronger update power than the classic ENN, whereas the classic ENN obviously lacks update power and does not even get sufficiently close to the target error, as shown in Figure 3(c). This is because the recursive parts of the classic ENN cannot be adjusted, while the recursive parts of the RMENN can be adjusted and its learning rate can be dynamically tuned to improve update power.

Comparison of convergence from the classic ENN and the RMENN.

The best training error traces

The average training error traces in 50 independent simulations

The worst training error traces

From the point of view of learning speed, the RMENN converges faster than the classic ENN. On average, the training error of the RMENN meets the requirement after 348 epochs, whereas the classic ENN fails to meet the requirement even in the best training error trace shown in Figure 3(a) (the mean square error of the classic ENN is 0.010193).

Figure 4 shows the state of relative error distribution in the training process. The maximum relative error, the minimum relative error, and the average relative error of the RMENN are 10.04%, 0.87%, and 3.54%, respectively. However the maximum relative error, the minimum relative error, and the average relative error of the classic ENN are 15.09%, 2.14%, and 5.21%, respectively. It demonstrates that the RMENN has higher training accuracy than the classic ENN.

Comparison of the relative error from the classic ENN and the RMENN.

Figure 5 shows that the RMENN achieves a better average approximation effect than the classic ENN over the 50 independent simulations.

The contrast of gas emission average prediction in 50 times independent simulations.

5.3. Comparison of Model Prediction Ability

The mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) are used as indicators of prediction precision. They are defined as
$$\mathrm{MSE} = \frac{1}{T}\sum_{k=1}^{T}\big(y_k - \hat{y}_k\big)^2, \qquad \mathrm{MAE} = \frac{1}{T}\sum_{k=1}^{T}\left|y_k - \hat{y}_k\right|, \qquad \mathrm{MAPE} = \frac{1}{T}\sum_{k=1}^{T}\left|\frac{y_k - \hat{y}_k}{y_k}\right| \times 100\%, \tag{24}$$
where $y_k$ and $\hat{y}_k$ denote the real and predicted values at time $k$, respectively.
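The three indicators in (24) take only a few lines; applied to the actual values and the ENN outputs listed in Table 2, the snippet below reproduces approximately 0.1280, 0.3385, and 5.50%.

```python
import numpy as np

def mse(y, yhat):
    """Mean squared error over T samples, first term of Eq. (24)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    """Mean absolute error, second term of Eq. (24)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs(y - yhat)))

def mape(y, yhat):
    """Mean absolute percentage error in percent, third term of Eq. (24)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs((y - yhat) / y))) * 100.0
```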

Table 2 shows the relative error distribution in the prediction process. The maximum, minimum, and average relative errors of the RMENN are 5.48%, 0.32%, and 3.43%, respectively, whereas those of the classic ENN are 9.12%, 2.48%, and 5.50%; the relative errors of the RMENN are clearly smaller. The MSE, MAE, and MAPE of the RMENN are 0.0620, 0.2181, and 3.43%, respectively, against 0.1280, 0.3385, and 5.50% for the classic ENN. This demonstrates that the proposed RMENN model performs better as evaluated by MSE, MAE, and MAPE.

Results of the error analysis based on MSE, MAE, and MAPE.

Number Actual data ENN RMENN
Output results Relative error/% Output results Relative error/%
(17) 8.04 7.5926 5.56 7.7611 3.47
(18) 7.56 7.9246 4.82 7.8958 4.44
(19) 5.82 5.6757 2.48 5.8015 0.32
(20) 4.36 4.7576 9.12 4.5990 5.48

MSE (m3⋅min−1) 0.1280 0.0620

MAE (m3⋅min−1) 0.3385 0.2181

MAPE (%) 5.50 3.43

To comprehensively evaluate the performance of the two prediction models and the significance of their differences, the Diebold-Mariano (DM) test is adopted with three loss functions: MSE, MAE, and MAPE. The DM test focuses on predictive accuracy and can be used to compare the prediction performance of the proposed model with that of other models. The DM statistic is
$$\mathrm{DM} = \frac{\sum_{i=1}^{T}\big(\mathrm{Loss}(\varepsilon_i^1) - \mathrm{Loss}(\varepsilon_i^2)\big)/T}{\sqrt{S^2/T}}, \tag{25}$$
where $\mathrm{Loss}(\cdot)$ is the loss function, $\varepsilon_i^1$ and $\varepsilon_i^2$ are the prediction errors of the two models, and $S^2$ is an estimator of the variance of $\mathrm{Loss}(\varepsilon_i^1)-\mathrm{Loss}(\varepsilon_i^2)$. The hypothesis test is
$$H_0:\ \mathbb{E}\big[\mathrm{Loss}(\varepsilon_i^1) - \mathrm{Loss}(\varepsilon_i^2)\big] = 0; \qquad H_1:\ \mathbb{E}\big[\mathrm{Loss}(\varepsilon_i^1) - \mathrm{Loss}(\varepsilon_i^2)\big] \ne 0; \tag{26}$$
that is, the null hypothesis is that the two models have the same accuracy. Under the null hypothesis, the DM statistic is asymptotically $N(0,1)$ distributed. If $|\mathrm{DM}| > z_{\alpha/2}$, the null hypothesis is rejected and the two models are significantly different.
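A minimal sketch of the DM statistic in (25) follows; the sample variance is used as the estimator $S^2$ (an assumption on our part, since the text does not specify the estimator), and the loss function is passed in so the same routine covers the MSE-, MAE-, and MAPE-style comparisons.

```python
import numpy as np

def dm_test(e1, e2, loss=np.square):
    """Diebold-Mariano statistic of Eq. (25) for two prediction-error series.

    d_i = Loss(e1_i) - Loss(e2_i);  DM = mean(d) / sqrt(var(d) / T).
    A positive DM means model 1 has the larger average loss.
    """
    d = loss(np.asarray(e1, float)) - loss(np.asarray(e2, float))
    T = d.size
    return float(d.mean() / np.sqrt(d.var(ddof=1) / T))
```

Rejection at level $\alpha$ then amounts to checking $|\mathrm{DM}| > z_{\alpha/2}$ (e.g. 1.96 at the 5% level). With only $T=4$ validation points, as here, the asymptotic normal approximation is rough, which is worth bearing in mind when reading Tables 3 and 5.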

Table 3 shows that the DM value evaluated with MSE exceeds the critical value at the 5% significance level, the DM value evaluated with MAE exceeds the critical value at the 0.5% significance level, and the DM value evaluated with MAPE exceeds the critical value at the 1% significance level.

DM values based on MSE, MAE, and MAPE.

 MSE MAE MAPE
DM 2.1383∗ 3.2718∗∗∗ 2.6912∗∗

∗: 5% significance level; ∗∗: 1% significance level; ∗∗∗: 0.5% significance level.

To further verify the validity of the RMENN model, four sample sets are randomly selected for validation and the remaining data are used for training. Table 4 shows the error analysis of the prediction results: the relative error of the ENN is larger than that of the RMENN for every sample except the tenth, and the MSE, MAE, and MAPE of the ENN are all larger than those of the RMENN. Table 5 shows that the DM value evaluated with MSE does not exceed the critical value at the 10% significance level, while the DM values evaluated with MAE and MAPE exceed their critical values. Overall, the tests indicate that the RMENN model is significantly better than the ENN model.

Results of the error analysis based on MSE, MAE, and MAPE.

Number Actual data ENN RMENN
Output results Relative error/% Output results Relative error/%
(2) 3.56 3.6555 2.68 3.6006 1.14
(3) 3.67 3.8832 5.81 3.7566 2.36
(10) 5.62 5.4835 2.43 5.4687 2.69
(13) 7.68 7.9757 3.85 7.7862 1.38

MSE (m3⋅min−1) 0.0402 0.0108

MAE (m3⋅min−1) 0.1852 0.0962

MAPE (%) 3.69 1.89

DM values based on MSE, MAE, and MAPE.

 MSE MAE MAPE
DM 1.4139 1.7445∗ 1.9719∗∗

∗: 10% significance level; ∗∗: 5% significance level.

6. Conclusion

In this paper, we analyze a drawback of the classic ENN and propose a novel network architecture, the RMENN, for gas emission prediction. In theory, the convergence and stability of the RMENN learning algorithm are proved and the approximately optimal learning rates are given. In practice, experimental results on gas emission prediction demonstrate that the RMENN achieves a better convergence rate and higher prediction accuracy than the classic ENN, at the cost of a slightly heavier structure (the correction factors). The RMENN therefore has practical application value and promise.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by Foundation of Liaoning Educational Committee (Grant no. LJ2017QL021) and the National Natural Science Foundation of China (Grant no. 61304173).

References

[1] Fu H., Shao L. S. The Characteristics of Mining Gas Disaster in Coal Mine and the Fusion Prediction. Beijing, China: Science Press, 2011.
[2] He M. C., Ren X. L., Gong W. L. Experimental analysis of mine pressure influence on gas emission and control. Journal of China Coal Society, 2016, 41(1): 7-13.
[3] Li G. Z., Li X. J., Meng Z. J. Prediction on gas emission value from unmined block of mine based on grey theory. Coal Engineering, 2010, 29(9): 85-87.
[4] Liang B., Sun W.-J., Wang Y. Gas emission quantity prediction of working face based on principal component regression analysis method. Journal of the China Coal Society, 2012, 37(1): 113-116.
[5] Wei L., Bai T. L., Fu H. New gas concentration dynamic prediction model based on the EMD-LSSVM. Journal of Safety and Environment, 2016, 16(2): 119-123.
[6] Wang X. L., Liu J., Lu J. J. Gas emission quantity forecasting based on virtual state variables and Kalman filter. Journal of China Coal Society, 2011, 36(1): 80-85.
[7] Wang T., Wang Y., Guo C., Zhang J. Application of QGA-RBF for predicting the amount of mine gas emission. Chinese Journal of Sensors and Actuators, 2012, 25(1): 119-123.
[8] Gao B. B., Pan J. Y. PLS combined with BP neural network for different-source gas emission prediction model of working face. Journal of Hunan University of Science & Technology (Natural Science Edition), 2015, 30(4): 14-20.
[9] Zhang Z., Tang Z., Gao S., Yang G. An algorithm of chaotic dynamic adaptive local search method for Elman neural network. International Journal of Innovative Computing, Information and Control, 2011, 7(2): 647-656.
[10] Zhang Z., Tang Z., Gao S., Yang G. Training Elman neural network for dynamic system identification using an adaptive local search algorithm. International Journal of Innovative Computing, Information and Control, 2010, 6(5): 2233-2243.
[11] Vairappan C., Tamura H., Gao S., Tang Z. Batch type local search-based adaptive neuro-fuzzy inference system (ANFIS) with self-feedbacks for time-series prediction. Neurocomputing, 2009, 72(7-9): 1870-1877.
[12] Zhou T., Gao S., Wang J., Chu C., Todo Y., Tang Z. Financial time series prediction using a dendritic neuron model. Knowledge-Based Systems, 2016, 105: 214-224.
[13] Cui H. W. Site measurement study on influence factors of gas emission value from mechanized longwall coal mining face. Coal Science and Technology, 2011, 39(11): 70-72.
[14] He L. W., Li X. B., Lin H. Modeling and simulation of gas emission system based on CML model. Journal of Central South University (Science and Technology), 2012, 43(12): 4801-4806.
[15] Pham D. T., Liu X. Training of Elman networks and dynamic system modelling. International Journal of Systems Science, 1996, 27(2): 221-226.
[16] Jiang S., Fu G., Huang L., Tang Y., Cai P., Peng H. Forecasting on ash fusion temperatures of bituminous coal and biomass co-firing based on Elman neural network. Journal of Central South University (Science and Technology), 2016, 47(12): 4240-4247.
[17] Li P.-H., Chai Y., Xiong Q.-Y. Quantum gate Elman neural network and its quantized extended gradient back-propagation training algorithm. Acta Automatica Sinica, 2013, 39(9): 1511-1522.
[18] Fu H., Xie S., Xu Y.-S., Chen Z.-C. Gas emission dynamic prediction model of coal mine based on ACC-ENN algorithm. Journal of the China Coal Society, 2014, 39(7): 1296-1301.
[19] Li R. Q., Shi S. L., Wu A. Research on coal mining workface gas emission prediction method based on EMD-Elman and its application. China Safety Science Journal, 2014, 24(6): 51-56.
[20] Fu H., Dai W. Dynamic prediction of gas emission based on LLE and BA-ENN. Chinese Journal of Sensors and Actuators, 2016, 29(9): 1383-1388.
[21] Chen W. H., Yan X. H., Fu H. On the innovated Elman neural network for forecasting the mining gas emission. Journal of Safety and Environment, 2015, 15(3): 19-24.
[22] Wei L., Fu H., Yin Y. P. Gas emission prediction model of coal mine based on hidden recurrent feedback Elman. China Safety Science Journal, 2016, 26: 42-46.
[23] Xie D., Feng T., Zhu C. Attributed measurement prediction model of entropy value for coal face gas emission and its application. Journal of Central South University (Science and Technology), 2013, 44(6): 2482-2487.