An Evaluation Model for Tailings Storage Facilities Using Improved Neural Networks and Fuzzy Mathematics

With the development of the mining industry, the tailings storage facility (TSF), as an important mining facility, has attracted increasing attention for its safety problems. However, current TSF safety evaluation models often suffer from low accuracy and slow operation. This paper establishes a reasonable TSF safety evaluation index system and puts forward a new TSF safety evaluation model by combining the analytic hierarchy process (AHP) with an improved back-propagation (BP) neural network algorithm. Cross validation with varying proportions was carried out, demonstrating that this method has better evaluation performance, higher learning efficiency, and faster convergence, and avoids the oscillation that occurs in the training process of the traditional BP neural network and other basic neural network methods. The analysis shows that the combination of the two methods increases the accuracy and reliability of the safety evaluation and can be well applied to TSF safety evaluation.


Introduction
According to Dixon-Hardy and Engels [1], the safety problem caused by tailings has become one of the most serious problems in mine engineering. TSFs, as man-made sources of debris-flow danger with high potential energy, cause waste of resources, loss of life and property, and environmental pollution because of currently limited production technology and equipment, as well as limited safety awareness [2,3]. With industrial development, the number of TSFs has exceeded 12,000 [4]; research on them therefore has not only academic value but also economic and social benefits. The stability analysis of TSFs is thus indispensable for research on mining [4].
TSFs are used for piling up tailings and other industrial waste residues and can usually be divided into five systems: the tailings storage system, flood control system, water return system, transportation system, and safety management system. This paper establishes a reasonable TSF safety evaluation index system comprising seventeen evaluation indexes and, through a case study, ranks their influence on the general objective using the AHP methodology.
Furthermore, several evaluation indexes in this system were chosen as the input vectors of an improved BP neural network (BPNN) to build a new TSF safety evaluation model, which adopts a variable learning rate, introduces an improved back-propagation mechanism, refines the adjustment of the weights, and accelerates the convergence of the error function.
According to the simulation experiments, the improved BPNN algorithm has better evaluation performance, higher learning efficiency, and faster convergence, and avoids the oscillation that occurs in the training process of the traditional BPNN algorithm. The analysis shows that combining the improved BPNN algorithm with the fuzzy AHP methodology increases the accuracy and reliability of the safety evaluation, and the combined model can be well applied to TSF safety evaluation.

BPNN Algorithm
3.1.1. Network Frame. The traditional BPNN frame [12-14] consists of an input layer, a hidden layer, and an output layer. It sometimes contains more than one hidden layer; the outputs of each layer are fed directly to every neuron of the next layer [15]; it may also contain a bias neuron that produces constant outputs but receives no inputs. A frame with a single hidden layer is called a one-hidden-layer, or three-layer, BP neural network frame, as shown in Figure 1.
Figure 1: Three-layer BP neural network frame.

According to the activation function of the hidden layer, the output value of each hidden neuron can be calculated from the input values and the connection weights between the input layer and the hidden layer, as expressed below:

$y_j = f\left(\sum_i w_{ij} x_i\right)$. (1)

Likewise, the output values of the output-layer neurons can be calculated from

$o_k = f\left(\sum_j w_{jk} y_j\right)$. (2)

The expression of the global error is given as [15]

$E = \frac{1}{2} \sum_p \sum_k \left(d_k^{(p)} - o_k^{(p)}\right)^2$. (3)
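As an illustration of the forward pass and global error defined by (1)-(3), the following is a minimal Python sketch; the paper's implementation is in MATLAB, and the sigmoid activation and variable names here are assumptions for illustration only.

```python
import math

def sigmoid(x):
    # assumed activation function f(.) for hidden and output layers
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_ih, w_ho):
    """One forward pass of a three-layer BP network.
    x: input vector; w_ih: input->hidden weights (one row per hidden unit);
    w_ho: hidden->output weights (one row per output unit)."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_ih]   # eq. (1)
    o = [sigmoid(sum(w * hj for w, hj in zip(row, h))) for row in w_ho]   # eq. (2)
    return h, o

def global_error(outputs, targets):
    """E = 1/2 * sum_k (d_k - o_k)^2 for one sample, as in eq. (3)."""
    return 0.5 * sum((d - o) ** 2 for d, o in zip(targets, outputs))
```

With zero weights, each sigmoid unit outputs 0.5, which makes the sketch easy to sanity-check by hand.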

BPNN Algorithm Implementation.
After the network frame is determined, the prepared sample data should be trained in the network with the following training steps [16,17].
BPNN is based on solid and rigorous theoretical derivation; however, its training process has many limitations, including a slow convergence rate, the emergence of local extrema, and limited practical applicability. In view of these drawbacks, the improved BP algorithm was proposed: it uses a variable learning rate, adjusts the connection weights between different nodes dynamically, and improves the convergence rate of the training process [18]. Figure 2 shows the propagation mechanism of the improved BP algorithm. The implementation of the improved BPNN is described in detail as follows. The threshold of the network can be adjusted with the weights and folded into the weight matrix [19]. Suppose $W_1$ is the connection weight matrix between the input layer and the hidden layer, $W_2$ is the connection weight matrix between the hidden layer and the output layer, $X$ is the input vector, $Y$ is the output vector of the network, $P$ is the number of samples, $f(\cdot)$ is the activation function, and $D$ is the anticipated output vector. The forward-propagation process of the improved BPNN with the new adaptive learning rate algorithm is similar to that of the traditional one [20]. The general ideas of the weight adjustment are as follows [21,22].
(1) Suppose $W_1^{rs}$ is the row-spread matrix of $W_1$ and $W_2^{rs}$ is the row-spread matrix of $W_2$. (2) Suppose $W$ is the whole weight matrix of the BPNN, $W = [W_1^{rs}, W_2^{rs}]$. To simplify the back-propagation calculation, we define $W^*$ as the reversed-order weight matrix of $W$, $W^* = [W_2^{rs}, W_1^{rs}]$. If $w_i$ and $w^*_i$ denote the $i$th rows of the two matrices, there exists the relationship $w^*_i = w_{n-i+1}$, where $n = (n_1 + 1) \times n_2 + (n_2 + 1) \times n_3$ and $m = (n_1 + 1) \times n_2$ for a network with $n_1$ input, $n_2$ hidden, and $n_3$ output nodes (the added 1 accounts for the threshold folded into the weight matrix).
If $W(k)$ is the weight matrix of the BPNN after $k$ learning iterations and $W^*(k)$ is its reversed-order weight matrix, then the initial weight matrix and the initial reversed-order weight matrix can be expressed as $W(0)$ and $W^*(0)$. When adjusting the weights of $W^*$, $w_i(k)$ is the value of $w_i$ at the $k$th learning iteration.
In the forward-propagation stage, the learning termination condition of $W(k)$ is $E(k) \le \varepsilon$. Here, $E(k)$ is the error at the $k$th learning iteration and $\varepsilon$ is the preset precision.
If the termination condition is not satisfied, learning continues into the back-propagation error-adjustment stage.
To clarify the back-propagation process of the new algorithm, take any weight $w_i$ in $W^*$; the adjustment process is as follows: (1) If the gradient of $w_i$ equals zero, $\partial E / \partial w_i = 0$, then adjust the next weight $w_{i+1}$; if $\partial E / \partial w_i \ne 0$, the weight is updated by (10) at the $k$th learning iteration. (2) In the forward-propagation stage, if $E(k+1)$ is smaller than $E(k)$, that is, the error decreases, then the learning rate is increased; along the negative gradient direction of $w_i(k)$, $w_i(k+1)$ is recalculated with the new learning rate $\eta_i(k)$.
In fact, when the learning rate is too low, the training time becomes longer and convergence slower; when the learning rate is too high, oscillation and divergence emerge, making the system unstable. After the adjustment process, a new weight matrix $W(k+1)$ and a new reversed-order weight matrix $W^*(k+1)$ are obtained. $W(k+1)$ is the optimal weight matrix of the network if the termination condition (9) is met; otherwise, $W^*(k+1)$ enters a new round of adjustment until condition (9) is met.
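The learning-rate behaviour described here, enlarging the rate while the error falls and shrinking it when the error rises, can be sketched as follows; the growth and shrink factors 1.05 and 0.7 are illustrative assumptions, not values from the paper.

```python
def adapt_learning_rate(eta, prev_error, new_error, grow=1.05, shrink=0.7):
    """Heuristic adaptive learning-rate update (factors are illustrative):
    if the error fell, the step was safe, so enlarge eta;
    if the error rose, shrink eta to damp oscillation."""
    if new_error < prev_error:
        return eta * grow
    return eta * shrink
```

This is the mechanism that lets the improved algorithm take larger steps on smooth error surfaces while backing off before the oscillation seen with a fixed, too-large rate.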

The Improved BPNN Algorithm Implementation.
The improved BPNN with the new adaptive learning rate algorithm should be implemented step by step, from the algorithm derivation to the computer programming. Its general training steps are as follows [23].
(1) Initialization of the BP neural network: set up the network structure and determine the expected inputs and outputs of the samples. Randomly select a small initial weight matrix $W$ and bias vector, and set the error precision $\varepsilon$ of the network learning.
(2) Input a sample with forward-propagation: record the forward-propagation error $E(0)$.
(4) Seek the optimal weight in the gradient direction: adjust the weight according to (12) and calculate the error $E(1)$ with forward-propagation. Compare $E(1)$ and $E(0)$. If the network error decreases, increase the learning rate, readjust the weight, and recalculate the error with forward-propagation until the error no longer decreases. If the network error increases, decrease the learning rate, readjust the weight, and recalculate the error with forward-propagation until the error no longer decreases.
(5) Forward-propagation with the new weight: update the network error $E(1)$. If the error is less than the preset precision $\varepsilon$, the network exits the whole propagation; stop the learning and go to step (6). If the error is larger than the preset precision $\varepsilon$, set $i = i + 1$. If $i < n$, meaning some node weights have not yet been adjusted, go to step (2) and adjust another weight. If $i = n$, meaning all node weights have been adjusted but the error still does not meet the accuracy requirement, set $i = 0$ and start a new round of iterative learning from the first node. (6) Finish the BP neural network learning and record the final weight matrix.

The common failure modes of TSFs include foundation failure, structure failure, overtopping failure, and seepage failure; we analyzed the risk factors of these failure modes and established the evaluation index system in Figure 3.
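The step-by-step scheme of forward propagation, precision check, and gradient-direction adjustment with an adaptive rate can be illustrated on a toy single-weight linear unit; this is a Python sketch under assumed values, not the paper's MATLAB code.

```python
def train_single_weight(x, d, w=0.0, eta=0.1, eps=1e-8, max_iter=1000):
    """Toy instance of the training steps: forward-propagate, test E
    against the preset precision eps, otherwise move the weight along
    the negative gradient with an adaptive learning rate and repeat."""
    for _ in range(max_iter):
        o = w * x                           # forward propagation (linear toy unit)
        err = 0.5 * (d - o) ** 2            # global error E
        if err < eps:                       # step (5): termination test
            break
        grad = -(d - o) * x                 # dE/dw
        new_w = w - eta * grad              # step (4): step along -gradient
        new_err = 0.5 * (d - new_w * x) ** 2
        eta = eta * 1.05 if new_err < err else eta * 0.7  # adaptive rate
        w = new_w
    return w
```

On the toy target $d = 2$ with input $x = 1$, the weight converges to the least-error solution $w \approx 2$ well within the iteration cap.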

AHP Procedures.
The fuzzy AHP methodology is effective for risk assessment and has been successfully applied in many fields [24-27]. In this paper, we analyzed the safety situation of a typical TSF case in China based on the fuzzy AHP methodology. We collected the perspectives and suggestions of several experts from our research organisation, assessed the evaluation indexes in this system according to the investigation reports and regional inspections, and finally ranked them by their influence on the general objective of the system [28]. The total sorting for the hierarchy structure is shown in Table 1.
Our results indicate that each judgment matrix has satisfactory consistency because each CR is less than 0.10 [24]. Overall, the riskiest index is the stability and reliability of the flood control system; it may depend on the design of the dam drainage and flood control installation, which is the second riskiest index. The third riskiest index, the proportion of the side slope and the saturation line observations, may increase dam slumping through piping and dam-slope seepage. The fourth riskiest index is the stability and reliability of the water return system, which should be emphasised because it directly affects the flood control system and even the entire safe operation of the TSF. In addition, dam seismic capacity, regular and casual safety inspection and maintenance, and the sturdiness of the drainage tunnel are also important in this example. In fact, no evaluation index in this system should be neglected, because the indexes may interact with each other. Therefore, relevant precautions should be taken in a timely manner in order of priority and urgency to avoid accidents and dangers.
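The CR < 0.10 consistency check on a judgment matrix can be reproduced with a short sketch; the row-sum eigenvalue approximation and the standard Saaty random indices used here are assumptions of this illustration, not taken from the paper.

```python
def consistency_ratio(matrix):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1).
    lambda_max is estimated from the priority vector obtained by
    normalizing columns and averaging rows (standard AHP practice);
    RI values are the usual Saaty random indices."""
    n = len(matrix)
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    norm = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    w = [sum(norm[i]) / n for i in range(n)]            # priority vector
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n       # lambda_max estimate
    ci = (lam - n) / (n - 1) if n > 1 else 0.0
    return ci / RI[n] if RI[n] else 0.0
```

A perfectly consistent matrix (every entry $a_{ij} = w_i / w_j$) yields $\lambda_{\max} = n$ and hence CR = 0; real expert judgments are accepted when CR stays below 0.10.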
On the basis of the total sorting, the top five evaluation indexes are chosen as the input vectors $x_1$ to $x_5$ of the improved BP neural network to build the new TSF safety evaluation model, namely, stability and reliability of the flood control system ($x_1$), design of dam drainage and flood control installation ($x_2$), the proportion of the side slope and the saturation line observations ($x_3$), stability and reliability of the water return system ($x_4$), and dam seismic capacity ($x_5$) [29].

4.3.1. Training Sample. TSF safety evaluation is, in essence, a pattern recognition problem. We compared the measured values of one or several groups of TSF safety risk factors with the standard values and identified the safety evaluation level closest to the measured values; that is the recognition result of the BPNN model. Based on the principles of the improved BP algorithm, this paper integrated the safety posture grades of TSFs to construct training samples for the network from the chosen indexes. To enrich the samples, the section interpolation method was adopted to extend the training sample set. Comparative analysis of linear interpolation and random interpolation showed that the network model trained on the extended samples was more stable. We selected three typical TSF examples in Hunan province, China, and, according to the investigation reports and regional inspections of all indexes in this model from 2009 to 2013, obtained 63 training samples to form the training sample set; a total of 78 samples were used in this model.

Test Sample.
We selected 5 groups of monitoring data consisting of 15 testing samples. To obtain more accurate simulation results, cross validations were carried out; that is, 15 testing samples were chosen randomly from the 78 total samples repeatedly until each sample had been tested once.
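The cross-validation partitioning described here, random test folds of 15 drawn from the 78 samples until every sample has been tested once, might be implemented as follows; the seed and the handling of the undersized final fold are illustrative assumptions.

```python
import random

def cross_validation_folds(n_total=78, fold_size=15, seed=0):
    """Draw random test folds of fold_size samples without replacement
    across rounds until every sample has been tested once; the last
    fold is smaller when fold_size does not divide n_total."""
    rng = random.Random(seed)
    remaining = list(range(n_total))
    rng.shuffle(remaining)          # random order = random fold membership
    folds = []
    while remaining:
        folds.append(remaining[:fold_size])
        remaining = remaining[fold_size:]
    return folds
```

With 78 samples and folds of 15, this produces five full folds and one fold of 3, so each sample is tested exactly once.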

Determination of BP Network
Topology. The settings of the network topology include the number of network layers and hidden layers and the numbers of nodes in the input, output, and hidden layers. The reasonability of these settings directly affects the precision and objectivity of the evaluation and the application value of the model [23]. Since increasing the number of hidden-layer nodes can improve the network precision and decrease the error, a three-layer network model was built in this paper.

Nodes Number.
The numbers of nodes in the input and output layers are mainly determined by the practical situation of the research. Here, we chose five input-layer nodes corresponding to the five chosen indexes and one output-layer node corresponding to the recognition result of the TSF safety evaluation. Finally, ten hidden-layer nodes were chosen as optimal using the golden section method.
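A golden-section search over the hidden-node count, as referenced above, could look like the following sketch; the error function, the search range, and the final integer refinement are assumptions for illustration, since the paper does not detail its procedure.

```python
import math

def golden_section_nodes(err, lo=2, hi=20, tol=1):
    """Golden-section search over the hidden-node count, treating the
    validation error err(n) as unimodal; err is a user-supplied function
    (e.g. mean test error after training with n hidden nodes)."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = float(lo), float(hi)
    while b - a > tol:
        c = b - (b - a) * phi
        d = a + (b - a) * phi
        if err(round(c)) < err(round(d)):
            b = d
        else:
            a = c
    # pick the best integer in the narrowed bracket
    return min(range(int(a), int(b) + 2), key=err)
```

The search needs only $O(\log)$ error evaluations, which matters when each evaluation means retraining the network.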

Performance Analysis of the Improved Algorithm.
To verify the effectiveness of the improved model, numerical experiments were made to compare the actual results of the improved BPNN and the traditional one [30,31].
In our study, MATLAB 7.0 was used to implement the neural network model and the algorithm, using the nonlinear approximation property of neural networks to deal with this complex nonlinear function.
A nonlinear test function was constructed for the comparison. Two models with ten hidden-layer nodes and 63 training samples were simulated in the same experimental environment. The Sigmoid function was used as the transfer function of the hidden layer and the output layer; the anticipated training precision was set to 0.0001 and the maximum number of iterations to 10000. The training errors of both models are shown in Figures 4 and 5, together with the fitting curves in Figures 6 and 7.
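The spirit of this comparison, fixed-rate versus adaptive-rate training under the paper's precision 0.0001 and iteration cap 10000, can be mimicked on a toy problem; the linear unit and the rate factors 1.05/0.7 are illustrative assumptions, not the MATLAB experiment itself.

```python
def epochs_to_converge(adaptive, eta=0.01, target=2.0, x=1.0,
                       eps=1e-4, max_epochs=10000):
    """Count epochs needed to drive E = 0.5*(target - w*x)^2 below eps,
    with either a fixed learning rate (traditional BPNN stand-in) or
    the adaptive rule (improved BPNN stand-in)."""
    w = 0.0
    for epoch in range(1, max_epochs + 1):
        o = w * x
        err = 0.5 * (target - o) ** 2
        if err < eps:
            return epoch
        grad = -(target - o) * x
        new_w = w - eta * grad
        if adaptive:
            new_err = 0.5 * (target - new_w * x) ** 2
            eta = eta * 1.05 if new_err < err else eta * 0.7
        w = new_w
    return max_epochs
```

On this toy problem the adaptive rule reaches the preset precision in far fewer epochs than the fixed rate, mirroring the qualitative outcome reported in Figures 4 and 5.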
From these figures, the improved BPNN algorithm makes the learning of the network weights more efficient, speeds up convergence, and avoids the oscillation of the traditional BPNN algorithm. The experimental results confirm the feasibility of the TSF safety evaluation model proposed in this paper.

Results and Discussion
We applied the model to a real example of TSF safety evaluation through simulation experiments. Here, the danger coefficient is the target value, usually divided into four levels [12]: 0.1, 0.2, 0.3, and 0.4, corresponding to the four result grades: safe, defective, seriously defective, and extremely dangerous.
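Decoding the network's danger coefficient into the four grades might be done with midpoint thresholds between the target levels 0.1/0.2/0.3/0.4; these thresholds are an assumed decoding, not specified in the paper.

```python
def safety_grade(danger):
    """Map a danger coefficient to the four result grades used in the
    paper; the midpoint cutoffs 0.15/0.25/0.35 are an assumption."""
    if danger < 0.15:
        return "safe"
    if danger < 0.25:
        return "defective"
    if danger < 0.35:
        return "seriously defective"
    return "extremely dangerous"
```

This step turns the continuous network output into the discrete recognition result reported for each test sample.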


Figure 3: Evaluation structure for the safe operation of the TSF.
The hidden-layer output $H$ is calculated from the input vector $X$ and the connection weights $w_{ih}$; the output vector $O = \{o_1, o_2, \ldots, o_m\}$ is then calculated from the hidden-layer output $H$ and the connection weights $w_{ho}$. The total error $E$ is obtained from the error function, comparing the actual output values with the anticipated ones; if $E < \varepsilon$, the training ends; otherwise, go to step (3). The weight corrections $\Delta w_{ho}$ between the output layer and the hidden layer and $\Delta w_{ih}$ between the input layer and the hidden layer are calculated from the weight-correction function and the total error $E$; then $w_{ho}$ and $w_{ih}$ are adjusted with the corresponding formulas.

Table 1: Total sorting for the hierarchy structure.
4.4.1. Network Layer. As described before, if the hidden layer uses the Sigmoid function and the activation function between the input layer and the output layer is linear, a multilayer feedforward neural network with a single hidden layer can approximate any rational function with any precision. It follows that the training effect can easily be reached by increasing the number of hidden-layer nodes.