Pressure Prediction of Coal Slurry Transportation Pipeline Based on Particle Swarm Optimization Kernel Function Extreme Learning Machine

For the coal slurry pipeline blockage prediction problem, analysis of the actual scene shows that pressure prediction at each measuring point is the premise of pipeline blockage prediction. The kernel function of the support vector machine is introduced into the extreme learning machine, the parameters are optimized by the particle swarm algorithm, and a blockage prediction method based on the particle swarm optimization kernel function extreme learning machine (PSOKELM) is put forward. Actual test data from the HuangLing coal gangue power plant are used for simulation experiments and compared against the support vector machine prediction model optimized by the particle swarm algorithm (PSOSVM) and the kernel function extreme learning machine prediction model (KELM). The results prove that the mean square error (MSE) of the prediction model based on PSOKELM is 0.0038 and the correlation coefficient is 0.9955; the model is superior to the prediction model based on PSOSVM in speed and accuracy and superior to the KELM prediction model in accuracy.


Introduction
In a gangue power plant, coal slurry, as a waste, is transported through pipelines to a circulating fluidized bed boiler for mixed burning. On the one hand, this makes full use of low calorific value energy and disposes of the coal slurry; on the other hand, pollution is prevented, which brings good economic and social benefits for coal mine enterprises. At present, the coal slurry is transported through pipelines using wet transportation technology, which solves the secondary pollution problem caused by the transportation process [1]. If the moisture content of the coal slurry is too low, or if miscellaneous objects enter the coal slurry during transfer and storage, the coal quality becomes unstable and pipeline blockage can occur. When a blockage happens, the pipeline pressure at every point upstream of the choke point is higher than normal, while the pressure downstream of the choke point is lower, even dropping to zero. Therefore, the pressure at the pipeline measuring points can serve as a feature for blockage prediction. In this paper, pressure prediction for the coal slurry pipeline is studied, which lays a foundation for blockage prediction.
Artificial neural networks and support vector machines (SVM) deal well with nonlinear regression problems and are widely applied in data prediction. However, many studies have found that feedforward neural networks, of which the BP algorithm is representative, suffer from slow learning speed, a tendency to fall into local minima, and sensitivity to parameter selection [2][3][4]. The support vector machine (SVM) is a small-sample learning algorithm; although it has certain advantages in small-sample learning, it also learns slowly, and its performance is sensitive to the selection of the kernel parameter. When SVM meets large data sets, its computational complexity is very high and it consumes much time [5][6][7][8]. In this paper, based on current research on forecasting methods and combining the advantages of the particle swarm optimization (PSO) algorithm and the extreme learning machine, a pressure prediction method based on the kernel function extreme learning machine optimized by the particle swarm algorithm is proposed to predict the coal slurry pipeline pressure.

Kernel Function Extreme Learning Machine
Guangbin Huang proposed a new learning algorithm called the extreme learning machine (ELM) [9], which is based on the single hidden layer feedforward neural network. First, all parameters of the hidden layer nodes (the input weights and the thresholds of the hidden layer nodes) are randomly selected; then the network output weights are calculated analytically. All the parameters of the hidden layer nodes in the ELM are independent of the objective function and the training samples and do not need iterative adjustment. In theory, this algorithm can achieve good generalization performance at an extremely fast learning speed.
ELM has shown great potential in solving problems such as data regression and classification, overcoming some challenging limits of feedforward neural networks and other intelligent algorithms. Compared with the more popular Back-Propagation (BP) network and Support Vector Machine (SVM), ELM not only inherits the various advantages of the neural network and the support vector machine but also has many outstanding characteristics:
(1) ELM is easy to use, requiring little human intervention.
(2) ELM has a faster learning speed. Some training can be completed in a few seconds or minutes.
(3) ELM has better generalization performance. In most cases, ELM can obtain better generalization performance than BP and close or better generalization performance than SVM.
(4) ELM applies to most nonlinear activation functions.
(5) The algorithm is simpler. ELM is a three-step algorithm that does not need tuning, and simple mathematics suffices for its use.
(6) ELM avoids problems such as local minima, inappropriate learning rates, and overfitting, which traditional classical learning algorithms can face.
Figure 1 is the structure of extreme learning machine.
It can be seen that the extreme learning machine still adopts a three-layer feedforward neural network structure. Different from the BP neural network, whose connection weights are constantly adjusted during iterative training, the initial weights of the extreme learning machine are set randomly before training and need not be readjusted during training; only the output weights need to be solved, which can be done by solving a generalized inverse matrix once.
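As a concrete sketch of this scheme (random, fixed hidden layer parameters; output weights solved once by a generalized inverse), the following minimal Python/NumPy example fits a toy sine curve. The network size, sigmoid activation, and data here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def elm_train(X, T, n_hidden=30, seed=0):
    """Basic ELM: hidden weights/biases are random and fixed; only the
    output weights beta are solved, via the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))  # input weights a_i
    b = rng.uniform(-1.0, 1.0, n_hidden)                # hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))              # sigmoid hidden layer output
    beta = np.linalg.pinv(H) @ T                        # beta = H^+ T, solved once
    return A, b, beta

def elm_predict(X, A, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return H @ beta

# toy regression: approximate sin(x) on [0, pi]
X = np.linspace(0.0, np.pi, 50).reshape(-1, 1)
T = np.sin(X)
A, b, beta = elm_train(X, T)
pred = elm_predict(X, A, b, beta)
```

There is no iterative weight adjustment: training reduces to a single linear solve, which is the source of ELM's speed.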

The hidden layer output of each node of the extreme learning machine is shown in formula (1):

$$\sum_{i=1}^{L} \boldsymbol{\beta}_i\, g\left(\mathbf{a}_i \cdot \mathbf{x}_j + b_i\right) = \mathbf{t}_j, \quad j = 1, \ldots, N, \tag{1}$$

where $\mathbf{x}_j$ is the input sample, $\mathbf{x}_j = [x_{j1}, x_{j2}, \ldots, x_{jn}]^{T} \in \mathbb{R}^{n}$, and $\mathbf{t}_j$ is the output sample; $\mathbf{a}_i$ is the input weight vector connecting the $i$th hidden layer node, $\mathbf{a}_i = [a_{i1}, a_{i2}, \ldots, a_{in}]^{T} \in \mathbb{R}^{n}$; $b_i$ is the bias of the $i$th hidden layer node; $\boldsymbol{\beta}_i$ is the output weight vector connecting the $i$th hidden layer node, $\boldsymbol{\beta}_i = [\beta_{i1}, \beta_{i2}, \ldots, \beta_{im}]^{T}$; and $g(\cdot)$ is the excitation function.
Formula (1) can also be given in matrix form as formula (2):

$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T}, \tag{2}$$

where $\mathbf{H}$ is the hidden layer output matrix:

$$\mathbf{H} = \begin{bmatrix} g(\mathbf{a}_1 \cdot \mathbf{x}_1 + b_1) & \cdots & g(\mathbf{a}_L \cdot \mathbf{x}_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(\mathbf{a}_1 \cdot \mathbf{x}_N + b_1) & \cdots & g(\mathbf{a}_L \cdot \mathbf{x}_N + b_L) \end{bmatrix}. \tag{3}$$

Statistical learning theory considers both empirical risk and structural risk. The extreme learning machine not only minimizes the empirical error (the training error) but should also consider structural risk minimization. Considering only the minimum error easily produces overfitting: although the training error is minimal, the optimum test effect cannot be obtained. A good model therefore requires a compromise between these two kinds of risk, that is, between minimizing the output weights and minimizing the error. The optimization problem is constructed as formula (4):

$$\min \; L = \frac{1}{2}\left\|\boldsymbol{\beta}\right\|^{2} + \frac{C}{2}\sum_{i=1}^{N}\left\|\boldsymbol{\xi}_i\right\|^{2}, \tag{4}$$

subject to the constraints of formula (5):

$$h(\mathbf{x}_i)\boldsymbol{\beta} = \mathbf{t}_i^{T} - \boldsymbol{\xi}_i^{T}, \quad i = 1, \ldots, N. \tag{5}$$

In formula (5), $\|\boldsymbol{\beta}\|^{2}$ stands for the structural risk; $\|\boldsymbol{\xi}_i\|^{2}$, the error square sum, stands for the empirical risk; $\boldsymbol{\xi}_i = [\xi_{i,1}, \ldots, \xi_{i,m}]$ is the error between the network output and the real sample value $\mathbf{t}_i$; $C$ is the penalty factor; and $h(\cdot)$ is the hidden layer mapping function.
According to the KKT conditions, a Lagrange function can be used to solve the above problem, that is, through formula (6):

$$L = \frac{1}{2}\left\|\boldsymbol{\beta}\right\|^{2} + \frac{C}{2}\sum_{i=1}^{N}\left\|\boldsymbol{\xi}_i\right\|^{2} - \sum_{i=1}^{N}\sum_{j=1}^{m} \alpha_{i,j}\left(h(\mathbf{x}_i)\boldsymbol{\beta}_j - t_{i,j} + \xi_{i,j}\right), \tag{6}$$

where $\alpha_{i,j}$ is a nonnegative Lagrange multiplier. The corresponding optimality conditions are

$$\frac{\partial L}{\partial \boldsymbol{\beta}} = 0 \;\Longrightarrow\; \boldsymbol{\beta} = \mathbf{H}^{T}\boldsymbol{\alpha}, \tag{7a}$$

$$\frac{\partial L}{\partial \boldsymbol{\xi}_i} = 0 \;\Longrightarrow\; \boldsymbol{\alpha}_i = C\boldsymbol{\xi}_i, \quad i = 1, \ldots, N, \tag{7b}$$

$$\frac{\partial L}{\partial \boldsymbol{\alpha}_i} = 0 \;\Longrightarrow\; h(\mathbf{x}_i)\boldsymbol{\beta} - \mathbf{t}_i^{T} + \boldsymbol{\xi}_i^{T} = 0, \quad i = 1, \ldots, N, \tag{7c}$$

where $\mathbf{H}$ is the hidden layer output matrix mentioned above; it depends only on the number of samples and the number of hidden layer nodes and has nothing to do with the number of output nodes (for classification problems, it has no connection with the number of categories). Substituting (7a) and (7b) into (7c) gives

$$\mathbf{H}\mathbf{H}^{T}\boldsymbol{\alpha} - \mathbf{T} + \frac{\boldsymbol{\alpha}}{C} = 0, \tag{8}$$

so the above formulas can be combined as

$$\left(\frac{\mathbf{I}}{C} + \mathbf{H}\mathbf{H}^{T}\right)\boldsymbol{\alpha} = \mathbf{T}, \tag{9}$$

$$\boldsymbol{\alpha} = \left(\frac{\mathbf{I}}{C} + \mathbf{H}\mathbf{H}^{T}\right)^{-1}\mathbf{T}. \tag{10}$$

In the end, formula (11) can be derived:

$$\boldsymbol{\beta} = \mathbf{H}^{T}\left(\frac{\mathbf{I}}{C} + \mathbf{H}\mathbf{H}^{T}\right)^{-1}\mathbf{T}. \tag{11}$$

So the approximating function of the extreme learning machine can be written as

$$f(\mathbf{x}) = h(\mathbf{x})\boldsymbol{\beta} = h(\mathbf{x})\mathbf{H}^{T}\left(\frac{\mathbf{I}}{C} + \mathbf{H}\mathbf{H}^{T}\right)^{-1}\mathbf{T}. \tag{12}$$

In addition, in order to improve the nonlinear classification performance of the extreme learning machine, the principle of the support vector machine (SVM) can be combined, and a nonlinear kernel mapping can be introduced into the extreme learning machine. Let

$$h(\mathbf{x}_i) = \left[g(\mathbf{a}_1 \cdot \mathbf{x}_i + b_1), \ldots, g(\mathbf{a}_L \cdot \mathbf{x}_i + b_L)\right]. \tag{13}$$

The hidden layer output of each sample, $h(\mathbf{x}_i)$, can be regarded as a nonlinear mapping of the sample $\mathbf{x}_i$, and the mapping can take the form $g(\mathbf{a} \cdot \mathbf{x} + b)$ or an RBF form.
Then

$$f(\mathbf{x}) = h(\mathbf{x})\mathbf{H}^{T}\left(\frac{\mathbf{I}}{C} + \mathbf{H}\mathbf{H}^{T}\right)^{-1}\mathbf{T} = \begin{bmatrix} h(\mathbf{x})\, h(\mathbf{x}_1)^{T} \\ \vdots \\ h(\mathbf{x})\, h(\mathbf{x}_N)^{T} \end{bmatrix}^{T} \left(\frac{\mathbf{I}}{C} + \mathbf{H}\mathbf{H}^{T}\right)^{-1}\mathbf{T}. \tag{14}$$

It can be seen from formula (14) that $h(\mathbf{x})$ appears only in inner products; the specific form of $h(\mathbf{x})$ does not need to be known. Although the mapping $h(\mathbf{x}_i)$ is unknown, according to the theory of kernel functions an implicit mapping can be constructed to replace it; that is, a kernel function can be constructed to replace $\mathbf{H}\mathbf{H}^{T}$, as shown in formula (15):

$$\boldsymbol{\Omega}_{\mathrm{ELM}} = \mathbf{H}\mathbf{H}^{T}, \quad \Omega_{i,j} = h(\mathbf{x}_i) \cdot h(\mathbf{x}_j) = K(\mathbf{x}_i, \mathbf{x}_j). \tag{15}$$

So formula (16) can be deduced:

$$f(\mathbf{x}) = \begin{bmatrix} K(\mathbf{x}, \mathbf{x}_1) \\ \vdots \\ K(\mathbf{x}, \mathbf{x}_N) \end{bmatrix}^{T} \left(\frac{\mathbf{I}}{C} + \boldsymbol{\Omega}_{\mathrm{ELM}}\right)^{-1}\mathbf{T}, \tag{16}$$

and the solution formula of the extreme learning machine can be written as

$$f(\mathbf{x}) = h(\mathbf{x})\mathbf{H}^{T}\left(\mathbf{H}\mathbf{H}^{T} + \frac{\mathbf{I}}{C}\right)^{-1}\mathbf{T} = \left[K(\mathbf{x}, \mathbf{x}_1), \ldots, K(\mathbf{x}, \mathbf{x}_N)\right]\left(\boldsymbol{\Omega}_{\mathrm{ELM}} + \frac{\mathbf{I}}{C}\right)^{-1}\mathbf{T}. \tag{17}$$

The kernel function extreme learning machine has a stronger nonlinear approximation ability; therefore, the kernel function of the support vector machine (SVM) is introduced into the extreme learning machine, and the blockage prediction method for the coal slurry pipeline based on the kernel function extreme learning machine (KELM) is proposed in this paper.
The parameters of the kernel function are closely related to its complexity; generally speaking, the Gaussian kernel function is one of the preferred choices. For the predictions based on the extreme learning machine and the support vector machine in this paper, the Gaussian function serves as the kernel function, as shown in formula (18):

$$K(\mathbf{x}_i, \mathbf{x}_j) = \exp\left(-\frac{\left\|\mathbf{x}_i - \mathbf{x}_j\right\|^{2}}{\sigma^{2}}\right), \tag{18}$$

where $\sigma > 0$ is the key parameter of the Gaussian kernel function.

Establishing the Prediction Model Based on the Particle Swarm Optimization Kernel Function Extreme Learning Machine
Particle swarm optimization is an optimization algorithm based on group search [5, 10]. Each particle mainly has two sources of flight information: one is the particle's own flying experience, including its current position and its historical optimal position; the other is the shared flying experience of the swarm. Through iteration, each particle obtains its individual optimal value and the global optimal value, continuously updates its direction and speed, and follows the current optimal particle; in the end, a positive feedback mechanism of group optimization is formed.
The particle swarm algorithm has many advantages, such as a fast convergence rate and strong robustness; it is easy to implement and easy to combine with other algorithms to improve performance. In view of this, the particle swarm algorithm is used to optimize the parameters of the support vector machine (SVM) and the extreme learning machine, and the two optimization processes are similar. The pressure prediction workflow based on the particle swarm optimization kernel function extreme learning machine (PSOKELM) is given here, as shown in Figure 2.
The main steps are as follows:
(1) The original data are preprocessed and divided into a training set and a validation set.
(2) Initialize the particle swarm; the particle swarm algorithm is used to optimize the penalty factor $C$ and the kernel parameter $\sigma$ of the kernel function extreme learning machine.
(3) K-fold cross-validation is used to calculate the fitness of each particle, and the particle velocity and position are updated.
(4) Record and save the best fitness value.
(5) When the stopping conditions (the number of iterations or the fitness value) are met, terminate the iteration; otherwise, return to step (3). When the iteration is completed, the optimal parameters of the extreme learning machine are obtained.
(6) The pressure prediction model based on PSOKELM with the best parameters is established for pipeline pressure prediction.
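The steps above can be sketched as follows. This is a simplified illustration on synthetic data standing in for the pipeline pressure samples: a standard PSO searches the penalty factor C and kernel parameter sigma of the KELM; for brevity a single held-out validation set replaces the K-fold cross-validation of step (3), and the swarm settings and search bounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_kernel(Xa, Xb, sigma):
    d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sigma ** 2)

def kelm_fit_predict(Xtr, Ttr, Xte, C, sigma):
    """Train a KELM on (Xtr, Ttr) and predict for Xte."""
    Omega = gauss_kernel(Xtr, Xtr, sigma)
    alpha = np.linalg.solve(np.eye(len(Xtr)) / C + Omega, Ttr)
    return gauss_kernel(Xte, Xtr, sigma) @ alpha

def fitness(p, Xtr, Ttr, Xva, Tva):
    """Fitness of a particle p = (C, sigma): MSE on the validation set."""
    C, sigma = p
    pred = kelm_fit_predict(Xtr, Ttr, Xva, C, sigma)
    return float(np.mean((pred - Tva) ** 2))

# synthetic multivariable data standing in for the pressure samples
X = rng.uniform(-2.0, 2.0, (80, 2))
T = np.sin(X[:, :1]) + 0.1 * X[:, 1:]
Xtr, Ttr, Xva, Tva = X[:60], T[:60], X[60:], T[60:]

# PSO settings (assumed): swarm size, iterations, inertia, acceleration
n, iters, w, c1, c2 = 15, 30, 0.7, 1.5, 1.5
lo, hi = np.array([1.0, 0.1]), np.array([500.0, 5.0])  # bounds on (C, sigma)
pos = rng.uniform(lo, hi, (n, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pfit = np.array([fitness(p, Xtr, Ttr, Xva, Tva) for p in pos])
gbest = pbest[pfit.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    fit = np.array([fitness(p, Xtr, Ttr, Xva, Tva) for p in pos])
    better = fit < pfit                   # update each particle's best
    pbest[better], pfit[better] = pos[better], fit[better]
    gbest = pbest[pfit.argmin()].copy()   # update the global best

best_C, best_sigma = gbest                # parameters for the final model
```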
Coal slurry pipeline pressure is related to many factors, including the flow rate, the coal slurry water content, the pump outlet pressure, the coal slurry temperature, and the distance to the pump outlet; coal slurry pipeline pressure prediction is thus a multivariable prediction problem. The coal slurry pipeline pressure prediction method based on the kernel function extreme learning machine with particle swarm optimization (PSOKELM) proposed in this paper is compared with the prediction method based on the support vector machine optimized by the particle swarm algorithm, and the accuracy and prediction speed advantages of the PSOKELM method are verified.
The effect of the designed prediction model is generally evaluated by the MSE (mean square error) and the correlation coefficient:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - f(x_i)\right)^{2}, \tag{19}$$

$$R = \frac{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)\left(f(x_i) - \bar{f}\right)}{\sqrt{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^{2}\,\sum_{i=1}^{n}\left(f(x_i) - \bar{f}\right)^{2}}}, \tag{20}$$

where $n$ is the number of samples and $f(x_i)$ is the prediction value of $y_i$.
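For reference, the two evaluation measures can be computed as follows; the numbers in the example are made up for illustration, not taken from the paper's data.

```python
import numpy as np

def mse(y, y_pred):
    """Mean square error (formula (19))."""
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    return float(np.mean((y - y_pred) ** 2))

def corr(y, y_pred):
    """Correlation coefficient (formula (20)), via the sample correlation matrix."""
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    return float(np.corrcoef(y, y_pred)[0, 1])

y_true = [1.0, 2.0, 3.0, 4.0]   # measured values (made up)
y_hat  = [1.1, 1.9, 3.2, 3.8]   # predicted values (made up)
```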

Simulation Verification
The coal transportation system of the HuangLing coal gangue power plant served as the experimental platform. A PLC is the control core of the whole system, realizing data acquisition, centralized monitoring, management, and automatic sequence control. The prediction is based on the data recorded from the number 2 pump of this coal transportation system from May 26 to May 31, 2012. 110 training samples and 25 test samples were randomly selected. The input variables include the flow rate, the main pump current, the oil temperature, the distance to the pump outlet, and the water content; the output variable is the pressure at the measured point. Taking the pump outlet pressure prediction as an example, the computer simulation is carried out in MATLAB. A kernel function is used as the nonlinear mapping for both the kernel function extreme learning machine (KELM) and the support vector machine (SVM), and the kernel parameters and penalty factor all have certain effects on their performance. Here the effects of the kernel parameter $\sigma$ and the penalty factor $C$ on the MSE are compared for the two algorithms; the Gaussian kernel function is used in both.
The simulation results are shown in Figures 3 and 4.
As can be seen from Figures 3 and 4, the penalty factor and the kernel parameter influence both algorithms, but the error surface of the kernel function extreme learning machine is relatively flat with respect to parameter selection.
In addition, the computational complexity of the kernel function extreme learning machine is far lower than that of the support vector machine (SVM), so the training time of the extreme learning machine should be shorter. A training time comparison has been made for the two algorithms with different training set sizes; the PC used in the simulation has an Intel i3 processor. It can be seen from Figure 5 that the training time consumed by the kernel function extreme learning machine is far less than that of the support vector machine, and its training speed is very fast. The reason is that the kernel function extreme learning machine does not need to solve a complex convex quadratic optimization problem, which is different from the support vector machine (SVM).
In addition, the optimum parameters of the SVM and the KELM are obtained by the particle swarm algorithm, and the fitness curves are shown in Figure 6. As can be seen from Figure 6, the KELM algorithm converges faster.
In order to examine the optimization effect of the particle swarm algorithm, PSOKELM and KELM (without the optimization algorithm) are used, respectively, to forecast the data sequence; the predicted results are shown in Figure 7.
According to the fitting degree and prediction effect evaluation of formulas (19) and (20), the MSE and $R^2$ evaluation results based on PSOKELM and KELM are shown in Table 1.
It can be seen from Figure 7 and Table 1 that, compared with KELM, PSOKELM has better prediction accuracy.
In order to further contrast the prediction effects of KELM and SVM, PSOKELM and PSOSVM are used, respectively, to predict the data sequence; the prediction results are shown in Figures 8 and 9 and Table 2.
It can be seen from Figures 8 and 9 and Table 2 that the mean square error of the PSOKELM prediction model is 0.0038 and the correlation coefficient is 0.9955; the mean square error of the KELM prediction model is 0.00487 and the correlation coefficient is 0.9428; and the mean square error of the PSOSVM prediction model is 0.0057 and the correlation coefficient is 0.9228. Therefore, the prediction effect based on PSOKELM is much better than that based on PSOSVM.

Conclusion

Figure 1 :
Figure 1: The structure of extreme learning machine.

Figure 2 :
Figure 2: The flowchart of KELM algorithms optimized by particle swarm algorithm.

Figure 9 :
Figure 9: The relative error comparison of prediction results between PSOSVM and PSOKELM.

Table 1 :
Table 1: The MSE and $R^2$ comparison between PSOKELM and KELM.

Table 2: The MSE and $R^2$ comparison between PSOKELM and PSOSVM.

(1) For the pressure prediction problem of the coal slurry transportation pipeline, the kernel function of the support vector machine is introduced into the extreme learning machine, the parameters are optimized by the particle swarm algorithm, and the pressure prediction method for the coal slurry transportation pipeline based on PSOKELM is put forward and compared with the PSOSVM prediction model and the KELM prediction model. (2) Simulation results prove that the prediction model based on PSOKELM is superior to the prediction model based on PSOSVM in both speed and accuracy and superior to the KELM prediction model in accuracy; the relative prediction error is within 4%, which confirms the validity of the prediction model. (3) The research and validation of the pressure prediction method for the coal slurry transportation pipeline lay a foundation for research on pipeline blockage.