Regularized Least Squares Recursive Algorithm with Forgetting Factor for Identifying Parameters in the Grinding Process



Introduction
Parameter identification is one of the most important areas in system modeling and signal processing [1], and the related identification methods have attracted many scholars. As one of the most important parameter identification methods, the least squares method (LSM) has been applied in various fields. In [2], to identify a ship's linear sway-yaw manoeuvring coefficients and drag-area parameters in current and wind, Wayne and Gash developed a simple least squares technique. In [3], the loss in localization accuracy induced by time difference of arrival noises and velocity errors is reduced by the constrained total least squares method. In [4], to deal with the problem of overenhancement results, an image enhancement scheme based on weighted least squares is proposed. In [5], due to the complex biochemical characteristics of the wastewater treatment process, an adaptive dynamic nonlinear partial least squares model is proposed to improve the prediction performance and stability of effluent quality indexes. In [6], Ramezani applied the collocated discrete least squares meshless method to improve the node moving technique. In order to enhance the performance of LSM, some improved algorithms have been developed. Reference [7] presents an iteratively reweighted LSM to improve the anti-outlier performance of the least squares support vector machine. Reference [8] combines the partial least squares with the attention mechanism in a neural network named the attention-PLS. Zhang et al. [9] proposed the Lagrange energy-least squares similitude method to deduce output scaling laws. Furthermore, some scholars study the recursive least squares (RLS) method with the forgetting factor. In [10], Paleologu et al. pointed out that the performance of the recursive least squares algorithm is governed by the forgetting factor and proposed a variable forgetting factor RLS (VFF-RLS) algorithm for system identification.
Reference [11], based on the framework of recursive least squares-temporal difference, proposed a new reinforcement learning method by using the forgetting factor. Sun et al. [12] presented an adaptive forgetting factor RLS method for online identification of the second-order resistor-capacitance equivalent circuit model parameters. Meanwhile, the regularization method is used to improve the LSM. Wang et al. [13] introduced the least squares regularization method to solve the ill-posed problem of the multiplicative error model. Zhou et al. [14] investigated the method of the anti-ill-conditioned population-weighted median based on the least squares regularization method. Jin et al. [15] used the kurtosis regularization algorithm for circuit joint optimization in neural network training to increase the information entropy of neural network weight data. Bai et al. [16] proposed a generic model for least squares nonnegative matrix factorizations with Tikhonov regularization. However, in the process of parameter identification, the data saturation phenomenon and the ill-posed problem often occur simultaneously. Therefore, this paper presents a regularized least squares recursive algorithm with forgetting factor (RLSRAFF).
At the same time, the grinding process [17][18][19], as one of the most important procedures in mineral processing, extracts the valuable minerals from the discernible gangue after physical grinding and classification. This process dissociates different useful minerals from each other and benefits the subsequent sorting process. The grinding and classification process has complex characteristics, such as large inertia, time-varying parameters, and nonlinearity. Recently, to realize automatic production, some control technology strategies [20][21][22][23][24] for the grinding process have attracted the attention of many scientific researchers. As the core of the control system, the accuracy of the mathematical model of the grinding process plays a vital role. Reference [25] identified the parameters of the prediction model of the steel ball wear law in the grinding process, and [26] used a nonlinear parameter identification method based on the improved differential evolution algorithm to solve for the parameters of the nonlinear model in the bauxite grinding classification process. Furthermore, Chen [27] investigated the least squares recursive algorithm with forgetting factor to systematically identify the unknown parameters of the grinding process model.
Due to its complexity and nonlinearity, parameter identification in the mathematical model of the grinding process can become ill-posed. The above literature does not consider the effect of ill-posedness on the identification results. Regularization [28] has proved to be an efficient approach for the inverse problem. Therefore, this paper develops RLSRAFF for identifying the parameters in the mathematical model of the grinding process. The main contributions can be summarized as follows: (i) The ill-posed problem is considered for identifying the parameters in the mathematical model of the grinding process. (ii) RLSRAFF, which combines the forgetting factor with the regularization parameter, is presented. (iii) The recursive calculation of the criterion function, the effect of the calculation error from the gain matrix, and the convergence of the proposed algorithm are analyzed.
The rest of the paper is organized as follows. In Section 2, the grinding process is described and the parameter identification model is introduced. In Section 3, RLSRAFF is proposed, and the calculation error of the gain matrix and the convergence of the algorithm are analyzed. In Section 4, two experiments are carried out to verify the proposed method. In Section 5, the conclusion is given.

Grinding Process and Parameter Identification Model

2.1. Grinding Process. The grinding process (see Figure 1) can be briefly described as follows.
The fresh ores are first sent into the ball mill by the conveyor belt and then crushed ceaselessly by the steel balls to produce pulp within a certain concentration. Meanwhile, a certain amount of water is added into the mill to adjust the concentration of the ore pulp within limits. After grinding, the mixed ore pulp is continuously discharged from the mill into the spiral classifier for classification, and a certain amount of water is added into the spiral classifier to adjust the concentration. The substandard ore pulp returns to the first stage of the ball mill for regrinding, and the standard ore, extracted from the classifier, enters the pump sump for the next process [27].

2.2. Mathematical Model of the Grinding Process Parameter Identification. The mathematical model of the grinding process mainly includes the ball mill model and the spiral classifier model. The ball mill model [29] has been detailed in a variety of studies. It describes the relationship between the ore feeding quantity, the water feeding quantity, the return sand quantity, and the mill pulp concentration. The spiral classifier model gives the relationship between the classifier overflow concentration, the pulp fineness, the ore discharge quantity, and the return sand quantity.
However, these models only give the internal mechanism of each part of the grinding process; they do not reflect the multivariable coupling and the time delay between the primary ball mill and the classifier.
Thus, the mathematical model [27] between the ball mill and the classifier can be described by equation (1), where C_c is the actual value of the classifier overflow concentration (%); Q_D is the actual value of the return sand quantity (m³/h); U_1 is the actual value of the ore feeding quantity (t/h); and U_3 is the actual value of the classifier adding water quantity (m³/h). By analyzing model (1), we know that U_1 and U_3 are the input variables and C_c and Q_D are the output variables. The identification model between the ball mill and the classifier is described by equation (2), where G′_11(S), G′_12(S), G′_21(S), and G′_22(S) each consist of an inertia link and a delay link. Different identification methods are used to identify the parameters of these two links: the inertia link is identified by the least squares algorithm, and the delay link is identified by the cross-correlation function algorithm [27]. The major study in this paper focuses on identification of the parameters of the inertia link, to deal with the data saturation phenomenon and the ill-posed problem. For the inertia link, equations (3) and (4) hold, and from them equations (5) and (6) follow. Thus, the input and output observation data are used to estimate the unknown parameters in equations (5) and (6).

Remark 1.
For the delay link, the cross-correlation function [27] can be used to identify the delay times (τ_1, τ_2, τ_3, τ_4). Because the main work of this paper is to identify the unknown parameters, the details of identifying the delay times are not given.

Regularized Least Squares Recursive Algorithm with Forgetting Factor and Property Analysis

3.1. Regularized Least Squares Recursive Algorithm with Forgetting Factor. In order to identify the parameters in model (5), U_3(z) and U_1(z) are defined as the input variables, and C_c(z) is defined as the output variable. The difference equation (7) is used to discretize the linear system, where k = 2, 3, ..., 1 + N; ε(k) is an uncorrelated random variable subject to the N(0, 1) distribution; U_3(k − 1) and U_1(k − 1) are the actual input signals of the recorded data; C_c(k) is the actual output signal of the recorded data; and C_c(k − 1) is the value of the output signal at the previous sampling period. From equation (7), we obtain equation (8). It can be seen from (8) that the above parameters can be estimated by N sets of observation equations, as given in equation (9). Equation (9) is defined as an observation equation and can be written in the vector form of equation (10).

Journal of Mathematics
We define φ(i) and z(i) as in equation (11), where i = 1, 2, ..., N, so that the observation equation (9) can be expressed as equation (12). The least squares method minimizes the sum of squared residuals of the observation equation (9); thus, the sum of squared residuals is given by equation (14). Substituting equation (10) into equation (14), taking the derivative with respect to θ, and setting this derivative to zero yields equation (15), from which we obtain the estimate in equation (16). Due to the ill-posedness of the term (Φ^T Φ)^{−1}, the regularization term is introduced as in equation (17), θ(N) = (Φ^T Φ + λI)^{−1} Φ^T z, where λ is the regularization parameter. Because the phenomenon of data saturation, in which the parameters to be identified no longer improve with the arrival of new sampling values, occurs in the least squares method, this paper presents the regularized least squares recursive algorithm with forgetting factor (RLSRAFF). RLSRAFF combines the regularization parameter with the forgetting factor, as given in equation (19), where N is the number of data groups and μ is the forgetting factor (0 < μ ≤ 1). The smaller μ is chosen, the larger the forgetting rate of old data. Therefore, the forgetting factor μ is adjusted according to the process characteristics.
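As a concrete illustration (not the authors' implementation), the recursive update underlying equation (19) can be sketched in Python. Here the regularization parameter λ is assumed to enter through the initialization P(0) = (λI)^{−1}, so that for μ = 1 and θ(0) = 0 the recursion reproduces the batch regularized estimate (Φ^T Φ + λI)^{−1} Φ^T z; the function name and the simulated first-order model are hypothetical.

```python
import numpy as np

def rlsraff(phi, z, lam=1.0, mu=0.98):
    """Sketch of regularized recursive least squares with forgetting factor.

    phi : (N, n) regressor matrix; z : (N,) measured outputs;
    lam : regularization parameter lambda; mu : forgetting factor (0 < mu <= 1).
    """
    n = phi.shape[1]
    theta = np.zeros(n)                   # initial parameter estimate
    P = np.eye(n) / lam                   # regularized start: P(0) = (lam * I)^(-1)
    for k in range(phi.shape[0]):
        p = phi[k]
        K = P @ p / (mu + p @ P @ p)      # gain vector
        theta = theta + K * (z[k] - p @ theta)
        P = (P - np.outer(K, p) @ P) / mu # covariance update with forgetting
    return theta

# Hypothetical usage: identify y(k) = 0.8 y(k-1) + 0.5 u(k-1) + noise
rng = np.random.default_rng(0)
u = rng.choice([-1.0, 1.0], size=500)     # binary excitation of amplitude 1
y = np.zeros(501)
for k in range(1, 501):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()
Phi = np.column_stack([y[:500], u])       # regressors [y(k-1), u(k-1)]
theta_hat = rlsraff(Phi, y[1:], lam=1.0, mu=0.98)
```

With μ close to 1 the algorithm weights old data almost uniformly, while a smaller μ forgets old data faster, matching the discussion above.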
Remark 2. The innovation of this paper mainly focuses on the combination of the regularization parameter λ and the forgetting factor μ. The regularization parameter λ, added in the term P(N) = (Φ^T Φ + λI)^{−1}, can eliminate the ill-posedness caused by (Φ^T Φ)^{−1} and reduce the error.

Property Analysis of RLSRAFF.
The properties of recursive least squares include the following: the recursive calculation of the criterion function, the effect of the calculation error from the gain matrix, and the convergence of the proposed algorithm. In this section, these properties of RLSRAFF are analyzed.

Theorem 1. Assume that J(N) and J(N + 1), described by the following expressions, are the criterion functions of the parameter estimation at times N and N + 1, respectively. On the basis of RLSRAFF, the recurrence equation of the criterion function can be given as follows.
Proof. By analyzing Section 3.1, we have the following relations, where ρ (0 < ρ ≤ 1) is a factor, which lead to equation (26). Using equation (19), the third and fourth terms of equation (26) can be transformed into equations (27) and (28). Substituting equations (27) and (28) into equation (26) and using φ^T(N)C_C1(N + 1) = 0 completes the proof.

The calculation error ΔP(N) can be transmitted to the gain matrix again through equation (19); if this cycle continues, the final identified result will be affected.
Proof. According to equation (19), P(N + 1) can be expanded as follows, so that the expression for ΔP(N + 1) is obtained. From this expression, we know that ΔP(N + 1) is only related to the quadratic form of ΔK(N), which can effectively reduce the transmission of the calculation error and ensure the identification accuracy. However, if the number of parameters is less than 10, this correction is not necessary, so as to avoid increasing the amount of calculation.

Theorem 3. If ε(k) is an uncorrelated random variable with zero mean, the parameter estimate θ(N) given in (19) is uniformly convergent, and the following holds, where θ_0 is the true value of the model parameters.
Proof. We define θ_1(N) = θ_0 − θ(N) and introduce the assumption in (32). Then, according to equation (12), and based on equation (19), the recursion for the estimation error can be obtained. We define the matrix A(N) accordingly. Using (32), θ_1(N + 1) can be expressed in terms of A(N). Further, let η be an eigenvalue of the matrix A(N); then the corresponding eigenvalue equation holds, where x is a nonzero eigenvector. Substituting (32) into (34), we obtain a bound on η. Since μ > 0, P^{−1}(N) and φφ^T are both positive definite matrices, which completes the proof.

Simulation Experiment
In this section, the proposed RLSRAFF is compared with other methods in two experiments. First, the algorithm is compared with LSRAFF [30], the least squares method (LSM) [30], and the regularized least squares method (RLSM) [31] in Sections 4.1 and 4.2. Then, the paper tests the performance of RLSRAFF in the parameter identification of the grinding process in Section 4.3.

Comparison of the Algorithm Performance.
In this simulation, we choose the following second-order model to identify: z(k) + a′_1 z(k − 1) + a′_2 z(k − 2) = b′_1 u(k − 1) + b′_2 u(k − 2) + ε(k), where a′_1 = −1.5, a′_2 = 0.7, b′_1 = 1.0, b′_2 = 0.5; ε(k) is a random variable subject to the N(0, 1) distribution; u(k) is the system input, which is generated by fourth-order M-sequences (the amplitude is 1); z(k) is the system output; and the length of the observation data is L = 400. Meanwhile, the forgetting factor is chosen as μ = 0.98 in this simulation.
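The setup above can be sketched as follows; this is an illustrative reconstruction under stated assumptions (a pseudo-random binary signal stands in for the M-sequence, the noise variance is reduced for clarity, and the batch regularized estimate of equation (17) is used rather than the full recursion), so the numbers will not reproduce Tables 1 and 2 exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 400
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5     # true parameters from Section 4.1

# Pseudo-random binary input of amplitude 1 (stand-in for the M-sequence)
u = rng.choice([-1.0, 1.0], size=L)

# Simulate z(k) + a1 z(k-1) + a2 z(k-2) = b1 u(k-1) + b2 u(k-2) + eps(k)
z = np.zeros(L)
eps = 0.1 * rng.standard_normal(L)
for k in range(2, L):
    z[k] = -a1 * z[k - 1] - a2 * z[k - 2] + b1 * u[k - 1] + b2 * u[k - 2] + eps[k]

# Batch regularized least squares: theta = (Phi^T Phi + lam I)^(-1) Phi^T y
Phi = np.column_stack([-z[1:L - 1], -z[0:L - 2], u[1:L - 1], u[0:L - 2]])
y = z[2:]
lam = 9.0                                # the value found best in Table 1
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(4), Phi.T @ y)
# theta estimates [a1, a2, b1, b2]
```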
In this section, we choose 13 regularization parameters λ for the proposed RLSRAFF method to find the best identification results. The parameter identification results for the different values of λ are listed in Table 1. We can see from this table that the best result is given when λ = 9. Next, the proposed RLSRAFF, LSM [30], RLSM [31], and LSRAFF [30] methods are used to identify the parameters; the identification results are given in Table 2. From this table, we can find that the proposed RLSRAFF method has better performance than the others. The parameter estimation process is shown in Figure 2. By observing the parameter estimation process in Figure 2, we can find that the curves of LSM and RLSM change very little in the later stage. However, the curves of RLSRAFF and LSRAFF [30] fluctuate all the time, and the error between the RLSRAFF curve and its asymptote is smaller than for the LSRAFF curve. The output results of the system for these four algorithms are given in Figure 3, and the relative errors between the estimated value and the true value of the system output are shown in Figure 4. From these figures, it can be seen that the estimated values of the system output obtained by the proposed RLSRAFF match the true values very well. The average and maximum relative errors of the system output for these four algorithms are described in Table 3; the average and maximum relative errors of RLSRAFF are 5.93% and 74.49%, respectively, which are smaller than those of the others.

Statistical Results and Analysis.
In this section, the performance of the proposed RLSRAFF method is tested statistically. In this experiment, we select 10 groups of different parameters (a′_1, a′_2, b′_1, b′_2), the true values of which are given in Table 3. The proposed RLSRAFF, LSM [30], RLSM [31], and LSRAFF [30] methods are used to identify these parameters, and the results are shown in Table 4. From the identification results of Table 4, we can see that the proposed RLSRAFF method gives a better result. By analyzing the RLSRAFF algorithm, we can see that the forgetting factor and the regularization parameter are the key elements of this method. When the forgetting factors are the same, the identification results of the algorithm with the regularization parameter (RLSRAFF) are better than those of the algorithm without it (LSRAFF [30]). Furthermore, the forgetting factor of the proposed RLSRAFF method can eliminate the data saturation phenomenon.
Remark 3. According to equation (19), the proposed RLSRAFF method is equivalent to LSRAFF [30] when λ = 0. By analyzing the statistical results, it can be seen that the regularization parameter not only solves the ill-posed problem but also improves the identification results when an appropriate value of λ is chosen.
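The batch analogue of the equivalence in Remark 3 can be checked numerically; the sketch below, using an arbitrary hypothetical data matrix, confirms that the regularized estimate (Φ^T Φ + λI)^{−1} Φ^T z with λ = 0 coincides with the ordinary least squares solution.

```python
import numpy as np

rng = np.random.default_rng(2)
Phi = rng.standard_normal((50, 4))            # arbitrary full-rank regressor matrix
z = Phi @ np.array([1.0, -0.5, 0.3, 2.0]) + 0.01 * rng.standard_normal(50)

def regularized_ls(Phi, z, lam):
    """theta = (Phi^T Phi + lam I)^(-1) Phi^T z, as in equation (17)."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ z)

theta_reg0 = regularized_ls(Phi, z, lam=0.0)          # lambda = 0
theta_ols = np.linalg.lstsq(Phi, z, rcond=None)[0]    # ordinary least squares
# the two solutions agree up to numerical precision
```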

Application of Parameter Identification in the Grinding Process. In this section, data from the grinding process are used to identify the parameters in the model by the proposed RLSRAFF. The sampling time is set to 10 seconds and 70 groups of data are collected. The sampling data are shown in Table 5 [27]. In this table, the time runs from 11:45:14 to 11:56:54; the process data of the ore feeding quantity range from 25 t/h to 233 t/h; the process data of the classifier adding water quantity range from 5.5 m³/h to 35 m³/h; the process data of the return sand quantity range from 22 m³/h to 105 m³/h; the process data of the classifier overflow concentration range from 42% to 92%; the forgetting factor is 0.3; and noise of 0.5 * randn is added to the sampling data [27]. The observation data of the input and output variables are selected for identifying the parameters in the grinding process model, and RLSRAFF is applied. The average relative error between the calculated value and the actual value for different regularization parameters λ is used to test the performance of RLSRAFF, and the results are given in Table 6. The best result is given when λ = 8. The experiment compares RLSRAFF and LSRAFF [27] (we select the obtained parameters θ^T = [−0.303, −0.409, 0.236, 0.136] as the parameter identification results of LSRAFF in [27]); the output contrasting curves obtained by RLSRAFF and LSRAFF are given in Figure 5. It is observed from this figure that RLSRAFF achieves a better performance than LSRAFF. Further, the relative errors of the RLSRAFF method and the LSRAFF method are given in Figure 6, and the relative error of RLSRAFF is smaller than that of LSRAFF. Meanwhile, the average relative error of RLSRAFF is 3.83% and the maximum relative error is 8.02%, while for LSRAFF they are 8.79% and 18.32%, respectively. Therefore, RLSRAFF gives a better performance.
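The average and maximum relative errors quoted above can be computed as in the following sketch; the per-sample definition |ŷ(k) − y(k)| / |y(k)| is an assumption, since the paper does not state the formula explicitly, and the sample values are hypothetical.

```python
import numpy as np

def relative_errors(y_true, y_est):
    """Per-sample relative error |y_est - y_true| / |y_true| (assumed definition);
    returns the average and the maximum over all samples."""
    y_true = np.asarray(y_true, dtype=float)
    y_est = np.asarray(y_est, dtype=float)
    rel = np.abs(y_est - y_true) / np.abs(y_true)
    return rel.mean(), rel.max()

# Hypothetical overflow-concentration samples (%) and model outputs
avg, mx = relative_errors([50.0, 60.0, 70.0], [52.0, 57.0, 70.7])
```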

Conclusion
In the actual industrial production process, a large amount of data is often obtained by online real-time measurement, and the measured data often contain some errors. Thus, the data saturation phenomenon and the ill-posed problem often occur simultaneously. The original LSM seldom considers the above effects on the identification of parameters. Therefore, this paper investigates the identification of parameters in the grinding process while considering the data saturation phenomenon and the ill-posed problem.
In order to solve the above problems, this paper presents the RLSRAFF algorithm, which combines the forgetting factor with the regularization parameter. Furthermore, to analyze the performance of the RLSRAFF algorithm, this paper derives the recursive calculation of the criterion function, describes the effect of the calculation error from the gain matrix, and proves the convergence of the proposed algorithm. Finally, the effectiveness of RLSRAFF is verified by the simulation experiments and grinding data. Compared with other algorithms, the simulation and statistical results show that RLSRAFF gives a better result and eliminates the ill-posed problem.
RLSRAFF is a universal method, the aim of which is to solve the data saturation phenomenon and the ill-posed problem. Both problems often occur in the parameter identification process, for example, in lithium-ion batteries [32] and the wastewater treatment process [5]. Therefore, the RLSRAFF method and its extensions can also be applied in other fields. Furthermore, for LSM-based identification methods, noise can cause biased identification results. Some noise-compensated methods [33][34][35] can be used to reduce these biases. In future research work, the noise-compensated methods can be considered to improve the RLSRAFF method and be applied in the grinding process (see Table 7 for a list of symbols).
Data Availability

The first experiment data used to support the findings of this study are included within the article. Previously reported (Research on Modeling and Control Method in Grinding and Classification Process) data were used to support this study and are available at DOI or other persistent identifier. These prior studies (and datasets) are cited at relevant places within the text as references [26].

Table 7: A list of symbols.

G′_11(S), G′_12(S), G′_21(S), G′_22(S): transfer functions with delay link and inertia link
G_11(S), G_12(S), G_21(S), G_22(S): transfer functions with inertia link
U_1: the actual value of the ore feeding quantity (t/h)
U_3: the actual value of the classifier adding water quantity (m³/h)
Q_D: the actual value of the return sand quantity (m³/h)
C_c: the actual value of the classifier overflow concentration (%)
k: observed times (k = 2, 3, ..., 1 + N)
i: i = 1, 2, ..., N
U_3(k), U_1(k): the actual input signals of the recorded data
C_c(k): the actual output signal of the recorded data
a_1, b_1, c_1, d_1; a_2, a_3, b_2, b_3, c_2, d_2: parameters to be identified
ε(k): uncorrelated random variable subject to the N(0, 1) distribution
N(0, 1): normal distribution
θ(N): the parameter estimate vector, θ(N) = [0.5a_1, 0.5d_1, 0.5b_1, …

Conflicts of Interest
The authors declare that they have no conflicts of interest.