Parameter Sensitivity Analysis on Deformation of Composite Soil-Nailed Wall Using Artificial Neural Networks and Orthogonal Experiment

Based on the error back-propagation (BP) algorithm of artificial neural networks (ANNs), this paper establishes an intelligent model for predicting the maximum lateral displacement of a composite soil-nailed wall. Parameters such as soil cohesion, soil friction angle, anchor-cable prestress, soil-nail spacing, soil-nail diameter, and soil-nail length are considered in the model. The network is trained on in situ test data from a composite soil-nailed wall reinforcement project, and the prediction errors are analyzed, demonstrating that the method is applicable and feasible for predicting the lateral displacement of an excavation retained by a composite soil-nailed wall. Extended calculations are then conducted with the well-trained model. Using orthogonal-table test theory, 25 sets of trials are designed to analyze the sensitivity of the factors affecting the maximum lateral displacement. The results show that these factors, in descending order of sensitivity, are anchor-cable prestress, soil friction angle, soil cohesion, soil-nail spacing, soil-nail length, and soil-nail diameter. The results can serve as a reference for similar reinforcement projects.


Introduction
Soil nailing was first devised in France in the early 1970s and offers low cost, versatility, and the ability to negotiate curves and corners easily; furthermore, it does not require large equipment [1]. When combined with prestressed anchor cables or micropiles, the displacement of a soil-nailed wall can be reduced [2]. The lateral displacement of a soil-nailed wall depends on multiple factors, such as the friction angle, cohesion, and tensile strength of the soil mass; the diameter, spacing, inclination angle, and length of the nails; the strength of the grout; and the prestress of the anchor cables [3]. The same viewpoint was presented by Yang [2] based on 7 typical case histories of composite soil-nailed walls with prestressed anchors in deep excavations; he concluded that the lateral displacements were influenced by the site geology and by the design parameters of the soil nails and prestressed anchors. For a specific project these factors are uncertain, so two questions are often of practical concern: how strongly does the possible range of each uncertain factor influence the lateral displacement of the excavation facing, and to which factor is the lateral displacement most sensitive?
In 1986, Hohenbichler and Rackwitz [4] studied the sensitivity of reliability to random variables; Madsen et al. [5], Bjerager and Krenk [6], and Karamchandani and Cornell [7] further investigated this issue. To date, however, little research has addressed the parameter sensitivity of composite soil-nailed walls in China, and the existing sensitivity studies have been confined to the finite element method (FEM) [8][9][10][11][12][13]. The obvious drawback of FEM is that the computational cost becomes very large when many parameters are involved; moreover, it is difficult to determine the combinations of multiple parameters to analyze.
Based on the in situ tests of a soil-nailed wall, and considering factors such as soil cohesion, soil friction angle, anchor-cable prestress, soil-nail spacing, soil-nail diameter, and soil-nail length, an intelligent model for predicting the maximum lateral displacement of the wall is established using the error back-propagation (BP) algorithm of artificial neural networks (ANNs) and the MATLAB ANN toolbox [14][15][16]. On this basis, the sensitivity of the maximum lateral displacement to each parameter is analyzed by means of orthogonal-table test theory [16,17]. The results provide a reference for the practical application of similar reinforcement projects.

Overview: Artificial Neural Networks
Artificial neural networks (ANNs) are the result of academic investigations that use mathematical formulations to model the operations of biological nervous systems. ANNs are highly parallel computational systems comprising interconnected artificial neurons or processing units. Neural networks use logical parallelism combined with serial operations, as information in one layer is transferred to neurons in another layer [18]. A typical multilayer perceptron consists of an input layer, one or more hidden layers, and one output layer; the structure of an artificial neural network [19] is shown in Figure 1. A neuron, the basic unit of an ANN, receives, processes, and transmits information (Figure 2). Input layers receive inputs from sources external to the system under study; output layers send signals out of the system, while hidden layers are those whose inputs and outputs lie within the system [20]. Hidden layers are necessary for the network to learn interdependencies in the model. The units of a neural network are connected by links, each of which has an associated numeric weight. Weights are the primary means of long-term storage in neural networks, and learning occurs through changes to the weights.
Neural networks require a computational method of weight adjustment. Back-propagation, proposed by Rumelhart and McClelland in 1985, is a gradient descent algorithm that compares actual outputs with desired outputs. If an error exists, it is reduced by backpropagating the error through the network and adjusting the weights. The algorithm divides the calculation of the gradient among the units, so the change in each weight can be calculated by the unit to which the weight is attached using only local information [21].
Each layer's computation is split into two components. First, a linear component, called the input function, computes the weighted sum of the layer's input values. Second, a nonlinear component, called the activation function (Figure 3), transforms the weighted sum into the layer's final activation value.
The net input to a processing unit $j$ is given by

$$net_j = \sum_i w_{ij} o_i + b_j,$$

where $o_i$ is the output from the previous layer, $w_{ij}$ is the weight of the link connecting unit $i$ to unit $j$, and $b_j$ is the bias of unit $j$, which determines the location of the logsig function on the horizontal axis. The activation value (output) of unit $j$ is given by

$$o_j = \mathrm{logsig}(net_j) = \frac{1}{1 + e^{-net_j}}.$$

In its simplest form, the weight update is a scaled step in the opposite direction of the gradient. Hence, the weight-update rule is

$$\Delta w_{ij}(t) = -\eta \frac{\partial E}{\partial w_{ij}} + \alpha\, \Delta w_{ij}(t-1),$$

where $\alpha$ is the momentum term, which determines the influence of the previous iteration on the present one, and $\eta \in (0, 1)$ is a parameter that determines the step size, called the learning rate. The total error is given by

$$E = \frac{1}{2} \sum_{p=1}^{P} \sum_{k=1}^{K} \left(t_{pk} - y_{pk}\right)^2,$$

where $t_{pk}$ and $y_{pk}$ are the target and actual response values of output neuron $k$ for pattern $p$, $P$ is the number of training patterns, and $K$ is the number of output units. This error information is propagated backwards through the ANN and the weights are adjusted. After some number of iterations, training stops when the calculated output values best approximate the desired values.

The learning course consists of two phases: forward-propagation and back-propagation. During forward-propagation, the state of each layer affects only the next layer. If the outputs of the output layer are not the expected targets, that is, if the error between actual outputs and expected targets exceeds the given tolerance, the course shifts to back-propagation. During back-propagation, the error signal travels back along the original path, and the connection weights of each neuron are adjusted as the course propagates back to the input layer. Forward-propagation then restarts. These two courses repeat until the error between actual outputs and expected targets is less than the given tolerance [22].
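The forward pass, weight-update rule, and error measure above can be sketched in a minimal NumPy implementation. This is an illustrative toy, not the paper's MATLAB model; the class name `TinyBP` and all hyperparameter values are assumptions for demonstration:

```python
import numpy as np

def logsig(x):
    """Log-sigmoid activation: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

class TinyBP:
    """Minimal one-hidden-layer BP network with momentum (illustrative sketch)."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.1, momentum=0.8, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))   # input -> hidden weights
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))  # hidden -> output weights
        self.b2 = np.zeros(n_out)
        self.lr, self.mom = lr, momentum
        self.dW1 = np.zeros_like(self.W1)                # previous updates (momentum)
        self.dW2 = np.zeros_like(self.W2)

    def forward(self, X):
        self.h = logsig(X @ self.W1 + self.b1)           # hidden activations
        return logsig(self.h @ self.W2 + self.b2)        # network output

    def train_step(self, X, T):
        Y = self.forward(X)
        # Error signals: derivative of E = 1/2 * sum (t - y)^2 through logsig
        delta2 = (Y - T) * Y * (1 - Y)
        delta1 = (delta2 @ self.W2.T) * self.h * (1 - self.h)
        # Gradient step plus momentum term from the previous iteration
        self.dW2 = -self.lr * self.h.T @ delta2 + self.mom * self.dW2
        self.dW1 = -self.lr * X.T @ delta1 + self.mom * self.dW1
        self.W2 += self.dW2
        self.b2 += -self.lr * delta2.sum(axis=0)
        self.W1 += self.dW1
        self.b1 += -self.lr * delta1.sum(axis=0)
        return 0.5 * np.sum((T - Y) ** 2)                # total error E before the update
```

Repeated calls to `train_step` implement the alternating forward/back-propagation cycle until the error falls below the chosen tolerance.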

Determination of Parameters of the Hidden Layer.
According to Kolmogorov's theorem, with a rational structure and appropriate weights, a three-layer feed-forward network can be trained to approximate any continuous function to any desired level of precision. The three-layer BP network established in this paper consists of an input layer, a hidden layer, and an output layer. The network can perform any n-dimensional to m-dimensional mapping; with n = 6 input variables, the number of hidden-layer nodes is 2n + 1 = 13.

Prediction Model.
According to the numbers of nodes in the input, hidden, and output layers and the number of hidden layers, the architecture of the BP forecast model built in this paper is a 6-13-1 network (Figure 4).
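The 6-13-1 sizing can be written out directly; variable names below are illustrative, not from the paper:

```python
# Six input factors: cohesion, friction angle, anchor-cable prestress,
# nail spacing, nail diameter, nail length.
n_inputs = 6
n_hidden = 2 * n_inputs + 1   # Kolmogorov-style heuristic: 2n + 1 = 13
n_outputs = 1                 # maximum lateral displacement
architecture = (n_inputs, n_hidden, n_outputs)
```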

Model Training and Testing.
The constraints of this problem are formulated as follows.

Training Samples.
Ten sets of data obtained from the in situ test of a composite soil-nailed wall were selected as learning samples, as shown in Table 1. Figure 5 shows the cross section of the composite soil-nailed wall.

Training of the Samples.
Before training, in order to accelerate the convergence of the network and enhance computational efficiency, we normalize the input and output matrices to the range [−1, 1] using the "premnmx" function of the MATLAB neural network toolbox.
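The [−1, 1] scaling and its inverse (used later for denormalization) can be sketched in Python; the function names mirror MATLAB's `premnmx`/`postmnmx` but this is an assumed equivalent, not the toolbox code:

```python
import numpy as np

def premnmx(x):
    """Scale each column of x linearly to [-1, 1]; return the parameters
    needed for the inverse mapping. Assumes no constant column."""
    xmin, xmax = x.min(axis=0), x.max(axis=0)
    xn = 2.0 * (x - xmin) / (xmax - xmin) - 1.0
    return xn, xmin, xmax

def postmnmx(xn, xmin, xmax):
    """Invert the [-1, 1] scaling to recover original units."""
    return (xn + 1.0) * (xmax - xmin) / 2.0 + xmin
```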
The constructed model is trained with the normalized data. The training function is "trainlm" (Levenberg-Marquardt), the default of the MATLAB neural network toolbox, and the ANN has one hidden layer of 13 neurons. The number of epochs is set by net.trainParam.epochs = 1000; the maximum training time and the performance goal are set by net.trainParam.time = inf and net.trainParam.goal = 0.01, respectively. The network was trained on each dataset to produce the lowest errors. A learning rate of 0.01 is set by net.trainParam.lr = 0.01, and the number of epochs between displays is set by net.trainParam.show = 50.
As the iteration proceeds, the error decreases from its initial high value (Figure 6), attaining a value of about 0.01 after only 14 iterations.

Predicted Results and Error Analysis.
After training, the network output matches the targets; the nonlinear mapping inherent in the samples is stored in the weights and thresholds of the layers. To assess the prediction accuracy of the established network model, five groups of in situ test data were selected as testing samples. Predicted values were obtained with the "sim" function and then denormalized with the "postmnmx" function to obtain the final forecast values. Finally, the predicted values were compared with the in situ test values, as shown in Table 2. The average relative error of the predictions is 2.09%, which meets engineering needs and indicates that the model is feasible and practical for forecasting the maximum lateral displacement of the pit.
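The average relative error reported above can be computed as follows (a generic sketch; the function name and the sample values in the usage note are illustrative, not the paper's Table 2 data):

```python
import numpy as np

def mean_relative_error(predicted, measured):
    """Average of |predicted - measured| / |measured|, expressed in percent."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean(np.abs(predicted - measured) / np.abs(measured)) * 100.0)
```

For example, predictions of 102 and 98 against measured values of 100 and 100 give an average relative error of 2%.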

Sensitivity Analysis of the Factors.
As mentioned before, the trained network is feasible and practical for predicting the maximum lateral displacement of the excavation side.
For the project described above, we predict the maximum lateral displacement of another excavation side and analyze the sensitivity of the factors. Here, the weighted average of the soil cohesion is 32.4 kPa, the weighted average of the soil friction angle is 22.3°, the nail spacing is 12 m, the anchor-cable prestress is 300 kN, the nail length is 12 m, and the nail diameter is 20 mm. We predicted each of the 25 trial samples with the trained network model; the results are shown in Table 5.
The average value of the index for each factor at each level is obtained by averaging the results of the five trials at that level. The range (extreme difference) for each factor is then obtained as the difference between the maximum and minimum of these level averages, as shown in Table 6. The experimental results show that the order of sensitivity of the factors affecting the maximum lateral displacement of the excavation side is as follows: anchor-cable prestress, soil friction angle, soil cohesion, soil-nail spacing, soil-nail length, and soil-nail diameter.
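The level-averaging and range computation described above can be sketched for a single factor; the level and result arrays below are hypothetical, not the values of Table 5:

```python
import numpy as np

def range_analysis(levels, results):
    """Orthogonal-table range analysis for one factor.

    levels  : level index of this factor in each trial (e.g., 0..4 for 5 levels)
    results : the index value (here: maximum lateral displacement) of each trial
    Returns (per-level means, range R = max(mean) - min(mean)).
    """
    levels = np.asarray(levels)
    results = np.asarray(results, dtype=float)
    means = np.array([results[levels == k].mean() for k in np.unique(levels)])
    return means, float(means.max() - means.min())
```

Ranking the factors by their ranges R in descending order yields the sensitivity order reported in Table 6.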

Conclusion
Artificial neural networks (ANNs) have good self-adaptability and self-organization, strong learning capability, associative memory, fault tolerance, and anti-interference ability. By introducing ANNs into the composite soil-nailed wall system, many of the factors affecting the lateral displacement of excavation sides can be considered simultaneously, enabling an intelligent prediction.
The prediction model of the maximum lateral displacement of the excavation established in this paper with ANNs builds a highly nonlinear mapping between the influencing factors and the lateral displacement. With the in situ displacement data of an excavation side, we have verified the feasibility and applicability of ANNs in excavation engineering, providing a new reference method for predicting the lateral displacement of similar excavations.
Combining ANNs with orthogonal trial design, we performed a sensitivity analysis of the factors affecting the maximum displacement of the excavation side. The results show that the sensitivity of these factors, in descending order, is anchor-cable prestress, soil friction angle, soil cohesion, soil-nail spacing, soil-nail length, and soil-nail diameter.