Data Filtering Based Recursive Least Squares Algorithm for Two-Input Single-Output Systems with Moving Average Noises

Xianling Lu,1,2 Wei Zhou,2 and Wenlin Shi2

1 Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, China
2 School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China

Journal of Applied Mathematics, vol. 2014, Article ID 694053. doi:10.1155/2014/694053. Received 26 September 2013; revised 11 February 2014; accepted 27 February 2014; published 26 March 2014. Academic editor: Hak-Keung Lam.

Copyright © 2014 Xianling Lu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper studies identification problems of two-input single-output controlled autoregressive moving average systems by using an estimated noise transfer function to filter the input-output data. Through data filtering, we obtain two simple identification models, one containing the parameters of the system model and the other containing the parameters of the noise model. Furthermore, we deduce a data filtering based recursive least squares method for estimating the parameters of these two identification models, respectively, by replacing the unmeasurable variables in the information vectors with their estimates. The proposed algorithm has high computational efficiency because the dimensions of its covariance matrices become small. The simulation results indicate that the proposed algorithm is effective.

1. Introduction

Studies on identification methods have been active in recent years. The recursive least squares algorithm is a popular and important identification method for many different systems. Recently, Wang and Ding presented an input-output data filtering based recursive least squares parameter estimation algorithm for CARARMA systems; Wang et al. proposed a data filtering based recursive least squares algorithm for Hammerstein systems using the key-term separation principle; Ding and Duan presented a two-stage parameter estimation algorithm for Box-Jenkins systems; and Hu proposed an iterative and recursive least squares estimation algorithm for moving average systems.

The filtering technique has received much attention in the fields of system identification [7, 11, 12] and signal processing [13, 14]. For example, Xie et al. studied recursive least squares parameter estimation methods for nonuniformly sampled systems based on data filtering; Wang et al. discussed a filtering based recursive least squares algorithm for Hammerstein nonlinear FIR-MA systems; Wang proposed a filtering and auxiliary model-based recursive least squares identification algorithm for output error moving average systems; Shi and Fang developed a recursive algorithm for parameter estimation by modifying the Kalman filter-based algorithm after designing a missing output estimator; and Wang et al. derived a hierarchical generalized stochastic gradient algorithm and a filtering based hierarchical stochastic gradient algorithm to estimate the parameter vectors and parameter matrices of multivariable colored noise systems by using the hierarchical identification principle.

For several decades, multiple-input single-output systems or multiple-input multiple-output systems [19, 20] have attracted researchers' attention, but most of the work has focused on single-input single-output systems. For example, Li proposed parameter estimation for Hammerstein controlled autoregressive moving average systems based on the Newton iteration. Yao and Ding derived a two-stage least squares based iterative identification algorithm for controlled autoregressive moving average (CARMA) systems; the basic idea is to decompose a CARMA system into two subsystems and to identify each subsystem, respectively. This paper considers the identification problems of two-input single-output controlled autoregressive moving average systems by using input-output data filtering and derives a data filtering based recursive least squares method. The proposed algorithm has high computational efficiency because the dimensions of its covariance matrices become small. Although this paper focuses on two-input single-output systems, the proposed method can be extended to multiple-input single-output systems.

The rest of the paper is organized as follows. Section 2 proposes a data filtering based recursive least squares algorithm for a two-input single-output system with moving average noise. Section 3 introduces the recursive extended least squares algorithm for comparison. In Section 4, we give an example to demonstrate the effectiveness of the proposed algorithm. Finally, concluding remarks are given in Section 5.

2. Data Filtering Based Recursive Least Squares Algorithm

Consider the two-input single-output system, described by the following controlled autoregressive moving average model, depicted in Figure 1:
(1) $A(z)y(t) = B_1(z)u_1(t) + B_2(z)u_2(t) + D(z)v(t)$,
where $\{u_1(t), u_2(t)\}$ are the input sequences of the system, $\{y(t)\}$ is the output sequence of the system, $\{v(t)\}$ is a white noise sequence with zero mean and variance $\sigma^2$, and $A(z)$, $B_1(z)$, $B_2(z)$, and $D(z)$ are polynomials in the unit backward shift operator $z^{-1}$ [i.e., $z^{-1}y(t) = y(t-1)$], defined by
(2)
$A(z) := 1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_{n_a} z^{-n_a}$,
$B_1(z) := b_{11} z^{-1} + b_{12} z^{-2} + \cdots + b_{1 n_1} z^{-n_1}$,
$B_2(z) := b_{21} z^{-1} + b_{22} z^{-2} + \cdots + b_{2 n_2} z^{-n_2}$,
$D(z) := 1 + d_1 z^{-1} + d_2 z^{-2} + \cdots + d_{n_d} z^{-n_d}$.
Assume that the degrees $n_a$, $n_1$, $n_2$, and $n_d$ are known and that $y(t) = 0$, $u_1(t) = 0$, $u_2(t) = 0$, and $v(t) = 0$ for $t \le 0$.

Figure 1: The two-input single-output system with moving average noise.
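As a concrete illustration of model (1)-(2), the following NumPy sketch simulates the system, using the coefficient values of the example in Section 4; the helper `past`, the signal length, and the noise level are our own choices for illustration, not prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Coefficients from the example in Section 4:
# A(z) = 1 + 0.5 z^-1 + 0.8 z^-2, B1(z) = 0.4 z^-1 + 0.3 z^-2,
# B2(z) = 0.5 z^-1 + 0.6 z^-2,   D(z) = 1 - 0.4 z^-1
a  = np.array([0.5, 0.8])
b1 = np.array([0.4, 0.3])
b2 = np.array([0.5, 0.6])
d  = np.array([-0.4])

N = 200
u1 = rng.standard_normal(N)
u2 = rng.standard_normal(N)
v  = 0.1 * rng.standard_normal(N)

def past(x, t, k):
    # x(t-k), with x(t) = 0 for t <= 0 (zero initial conditions)
    return x[t - k] if t - k >= 0 else 0.0

y = np.zeros(N)
for t in range(N):
    # A(z)y(t) = B1(z)u1(t) + B2(z)u2(t) + D(z)v(t), solved for y(t)
    y[t] = (-sum(ai * past(y, t, i + 1) for i, ai in enumerate(a))
            + sum(bi * past(u1, t, i + 1) for i, bi in enumerate(b1))
            + sum(bi * past(u2, t, i + 1) for i, bi in enumerate(b2))
            + v[t] + sum(di * past(v, t, i + 1) for i, di in enumerate(d)))
```

Since the roots of $z^2 + 0.5z + 0.8$ lie inside the unit circle, the simulated output remains bounded.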

Define the parameter vector $\theta$ and the information vector $\varphi_0(t)$ as
(3)
$\theta := [\theta_s^T, \theta_n^T]^T \in \mathbb{R}^n$, $n := n_a + n_1 + n_2 + n_d$,
$\theta_s := [a_1, a_2, \ldots, a_{n_a}, b_{11}, b_{12}, \ldots, b_{1n_1}, b_{21}, b_{22}, \ldots, b_{2n_2}]^T \in \mathbb{R}^{n_a + n_1 + n_2}$,
$\theta_n := [d_1, d_2, \ldots, d_{n_d}]^T \in \mathbb{R}^{n_d}$,
$\varphi_0(t) := [\varphi_s^T(t), \varphi_n^T(t)]^T \in \mathbb{R}^n$,
$\varphi_s(t) := [-y(t-1), -y(t-2), \ldots, -y(t-n_a), u_1(t-1), u_1(t-2), \ldots, u_1(t-n_1), u_2(t-1), u_2(t-2), \ldots, u_2(t-n_2)]^T \in \mathbb{R}^{n_a + n_1 + n_2}$,
$\varphi_n(t) := [v(t-1), v(t-2), \ldots, v(t-n_d)]^T \in \mathbb{R}^{n_d}$.
The goal of this paper is to apply the data filtering technique and to develop a new recursive least squares algorithm for estimating the system parameters.
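The information vector $\varphi_s(t)$ in (3) can be assembled directly from the recorded signals; a minimal sketch (the function name `phi_s` and the zero-padding convention for $t - k \le 0$ are ours) is:

```python
import numpy as np

def phi_s(y, u1, u2, t, na, n1, n2):
    """Build phi_s(t) of (3) from signals indexed from 0,
    with x(t-k) = 0 whenever t-k < 0 (zero initial conditions)."""
    past = lambda x, k: x[t - k] if t - k >= 0 else 0.0
    return np.array([-past(y, i) for i in range(1, na + 1)]
                    + [past(u1, i) for i in range(1, n1 + 1)]
                    + [past(u2, i) for i in range(1, n2 + 1)])
```

The noise information vector $\varphi_n(t)$ would be built the same way from the (estimated) noise sequence.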

If we use the rational fraction $1/D(z)$ (a linear filter) to filter the input-output data, we obtain a simple equation error model that is easy to identify, and the recursive least squares algorithm can then be applied. Because $1/D(z)$ is unknown, we use its estimate $1/\hat{D}(t,z)$ to filter the input-output data. The identification method based on this approach is referred to as the data filtering based recursive least squares (F-RLS) method.

For the model in (1), define the filtered inputs $u_{f1}(t)$ and $u_{f2}(t)$, the filtered output $y_f(t)$, and the filtered information vector $\varphi_f(t)$ as
(4)
$u_{f1}(t) := \dfrac{1}{D(z)}u_1(t)$, $u_{f2}(t) := \dfrac{1}{D(z)}u_2(t)$, $y_f(t) := \dfrac{1}{D(z)}y(t)$,
$\varphi_f(t) := [-y_f(t-1), -y_f(t-2), \ldots, -y_f(t-n_a), u_{f1}(t-1), u_{f1}(t-2), \ldots, u_{f1}(t-n_1), u_{f2}(t-1), u_{f2}(t-2), \ldots, u_{f2}(t-n_2)]^T \in \mathbb{R}^{n_a + n_1 + n_2}$.
Dividing both sides of (1) by $D(z)$ gives
(5) $A(z)\dfrac{1}{D(z)}y(t) = B_1(z)\dfrac{1}{D(z)}u_1(t) + B_2(z)\dfrac{1}{D(z)}u_2(t) + v(t)$,
which can be written as
(6) $A(z)y_f(t) = B_1(z)u_{f1}(t) + B_2(z)u_{f2}(t) + v(t)$.
This filtered model is an equation error model and can be rewritten in vector form as
(7) $y_f(t) = [1 - A(z)]y_f(t) + B_1(z)u_{f1}(t) + B_2(z)u_{f2}(t) + v(t) = -\sum_{i=1}^{n_a} a_i y_f(t-i) + \sum_{i=1}^{n_1} b_{1i} u_{f1}(t-i) + \sum_{i=1}^{n_2} b_{2i} u_{f2}(t-i) + v(t) = \varphi_f^T(t)\theta_s + v(t)$.
Define the inner variable
(8) $w(t) := D(z)v(t) = \varphi_n^T(t)\theta_n + v(t)$.
For the two identification models (7) and (8), we can obtain the following recursive least squares algorithm for computing the estimates $\hat\theta_s(t)$ and $\hat\theta_n(t)$ of $\theta_s$ and $\theta_n$:
(9) $\hat\theta_s(t) = \hat\theta_s(t-1) + L_f(t)[y_f(t) - \varphi_f^T(t)\hat\theta_s(t-1)]$,
(10) $L_f(t) = \dfrac{P_f(t-1)\varphi_f(t)}{1 + \varphi_f^T(t)P_f(t-1)\varphi_f(t)}$,
(11) $P_f(t) = [I - L_f(t)\varphi_f^T(t)]P_f(t-1)$,
(12) $\hat\theta_n(t) = \hat\theta_n(t-1) + L_n(t)[w(t) - \varphi_n^T(t)\hat\theta_n(t-1)]$,
(13) $L_n(t) = \dfrac{P_n(t-1)\varphi_n(t)}{1 + \varphi_n^T(t)P_n(t-1)\varphi_n(t)}$,
(14) $P_n(t) = [I - L_n(t)\varphi_n^T(t)]P_n(t-1)$.
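Both recursions (9)-(11) and (12)-(14) are instances of the standard recursive least squares update; a generic single-step helper (our own sketch, not code from the paper) is:

```python
import numpy as np

def rls_step(theta, P, phi, y):
    """One recursive least squares update in the form of (9)-(11):
    gain L = P phi / (1 + phi^T P phi), then update theta and P."""
    phi = phi.reshape(-1, 1)
    L = P @ phi / (1.0 + float(phi.T @ P @ phi))
    theta = theta + L.ravel() * (y - float(phi.T @ theta.reshape(-1, 1)))
    P = (np.eye(len(theta)) - L @ phi.T) @ P
    return theta, P
```

Feeding it noise-free regression data $y = \varphi^T\theta_0$ drives the estimate to $\theta_0$; the two models (7) and (8) would each get their own `theta`/`P` pair.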
Note that the filtered inputs $u_{f1}(t)$ and $u_{f2}(t)$ and the filtered output $y_f(t)$ are unknown because the polynomial $D(z)$ is unknown, and the noise terms $v(t-i)$ in the information vector $\varphi_n(t)$ and the inner variable $w(t)$ are unmeasurable. So it is impossible to implement the algorithm in (9)-(14) directly. The solution adopted here is to replace the unknown variables with their estimates, according to the auxiliary model identification idea.

From (1), we get
(15) $w(t) = A(z)y(t) - B_1(z)u_1(t) - B_2(z)u_2(t) = y(t) - \varphi_s^T(t)\theta_s$.
Substituting (8) into the above equation gives
(16) $y(t) = \varphi_s^T(t)\theta_s + w(t) = \varphi_0^T(t)\theta + v(t)$.
Replacing $\theta_s$ on the right-hand side of (15) with its estimate $\hat\theta_s(t-1)$, the estimate $\hat w(t)$ can be computed by $\hat w(t) = y(t) - \varphi_s^T(t)\hat\theta_s(t-1)$. Let $\hat v(t)$ be the estimate of $v(t)$ and construct the estimate of $\varphi_n(t)$ as
(17) $\hat\varphi_n(t) := [\hat v(t-1), \hat v(t-2), \ldots, \hat v(t-n_d)]^T \in \mathbb{R}^{n_d}$.
From (8), we have $v(t) = w(t) - \varphi_n^T(t)\theta_n$. Replacing $w(t)$, $\varphi_n(t)$, and $\theta_n$ with $\hat w(t)$, $\hat\varphi_n(t)$, and $\hat\theta_n(t)$, the estimate $\hat v(t)$ can be computed by $\hat v(t) = \hat w(t) - \hat\varphi_n^T(t)\hat\theta_n(t)$.

Using the parameter estimate of the noise model,
(18) $\hat\theta_n(t) = [\hat d_1(t), \hat d_2(t), \ldots, \hat d_{n_d}(t)]^T \in \mathbb{R}^{n_d}$,
construct the estimate of $D(z)$:
(19) $\hat D(t,z) := 1 + \hat d_1(t)z^{-1} + \hat d_2(t)z^{-2} + \cdots + \hat d_{n_d}(t)z^{-n_d}$.
Filtering $u_1(t)$, $u_2(t)$, and $y(t)$ with $1/\hat D(t,z)$ gives the estimates of $u_{f1}(t)$, $u_{f2}(t)$, and $y_f(t)$:
(20) $\hat D(t,z)\hat u_{f1}(t) = u_1(t)$, $\hat D(t,z)\hat u_{f2}(t) = u_2(t)$, $\hat D(t,z)\hat y_f(t) = y(t)$.
From the above equations, $\hat u_{f1}(t)$, $\hat u_{f2}(t)$, and $\hat y_f(t)$ can be computed recursively by
(21)
$\hat u_{f1}(t) = -\hat d_1(t)\hat u_{f1}(t-1) - \hat d_2(t)\hat u_{f1}(t-2) - \cdots - \hat d_{n_d}(t)\hat u_{f1}(t-n_d) + u_1(t)$,
$\hat u_{f2}(t) = -\hat d_1(t)\hat u_{f2}(t-1) - \hat d_2(t)\hat u_{f2}(t-2) - \cdots - \hat d_{n_d}(t)\hat u_{f2}(t-n_d) + u_2(t)$,
$\hat y_f(t) = -\hat d_1(t)\hat y_f(t-1) - \hat d_2(t)\hat y_f(t-2) - \cdots - \hat d_{n_d}(t)\hat y_f(t-n_d) + y(t)$.
Construct the estimate $\hat\varphi_f(t)$ of $\varphi_f(t)$:
(22) $\hat\varphi_f(t) := [-\hat y_f(t-1), -\hat y_f(t-2), \ldots, -\hat y_f(t-n_a), \hat u_{f1}(t-1), \hat u_{f1}(t-2), \ldots, \hat u_{f1}(t-n_1), \hat u_{f2}(t-1), \hat u_{f2}(t-2), \ldots, \hat u_{f2}(t-n_2)]^T \in \mathbb{R}^{n_a + n_1 + n_2}$.
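The recursion (21) amounts to filtering each signal through $1/\hat D(t,z)$. Assuming, for simplicity, a fixed coefficient vector (in the algorithm, $\hat d_i(t)$ changes at every step) and zero initial conditions, a sketch of the filtering step is:

```python
import numpy as np

def filter_by_inv_D(x, d_hat):
    """Filter x(t) through 1/D(z) with D(z) = 1 + d1 z^-1 + ... + dnd z^-nd,
    via the recursion (21): xf(t) = x(t) - sum_i d_i * xf(t-i).
    d_hat is held fixed here; the F-RLS algorithm uses the current
    estimate d_hat(t) at each step."""
    nd = len(d_hat)
    xf = np.zeros(len(x))
    for t in range(len(x)):
        acc = x[t]
        for i in range(1, nd + 1):
            if t - i >= 0:
                acc -= d_hat[i - 1] * xf[t - i]
        xf[t] = acc
    return xf
```

For example, with $D(z) = 1 - 0.4z^{-1}$ the recursion gives $\hat x_f(t) = x(t) + 0.4\hat x_f(t-1)$.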
Replacing the unknown information vector $\varphi_f(t)$ in (9)-(11) with $\hat\varphi_f(t)$, $y_f(t)$ in (9) with $\hat y_f(t)$, $\varphi_n(t)$ in (12)-(14) with $\hat\varphi_n(t)$, and the unknown noise term $w(t)$ in (12) with $\hat w(t)$, we obtain the data filtering based recursive least squares (F-RLS) algorithm for estimating the parameter vectors $\theta_s$ and $\theta_n$ of the two-input single-output system:
(23) $\hat\theta_s(t) = \hat\theta_s(t-1) + L_f(t)[\hat y_f(t) - \hat\varphi_f^T(t)\hat\theta_s(t-1)]$,
(24) $L_f(t) = \dfrac{P_f(t-1)\hat\varphi_f(t)}{1 + \hat\varphi_f^T(t)P_f(t-1)\hat\varphi_f(t)}$,
(25) $P_f(t) = [I - L_f(t)\hat\varphi_f^T(t)]P_f(t-1)$, $P_f(0) = p_0 I$,
(26) $\hat\varphi_f(t) = [-\hat y_f(t-1), \ldots, -\hat y_f(t-n_a), \hat u_{f1}(t-1), \ldots, \hat u_{f1}(t-n_1), \hat u_{f2}(t-1), \ldots, \hat u_{f2}(t-n_2)]^T$,
(27) $\hat y_f(t) = -\hat d_1(t)\hat y_f(t-1) - \hat d_2(t)\hat y_f(t-2) - \cdots - \hat d_{n_d}(t)\hat y_f(t-n_d) + y(t)$,
(28) $\hat u_{f1}(t) = -\hat d_1(t)\hat u_{f1}(t-1) - \hat d_2(t)\hat u_{f1}(t-2) - \cdots - \hat d_{n_d}(t)\hat u_{f1}(t-n_d) + u_1(t)$,
(29) $\hat u_{f2}(t) = -\hat d_1(t)\hat u_{f2}(t-1) - \hat d_2(t)\hat u_{f2}(t-2) - \cdots - \hat d_{n_d}(t)\hat u_{f2}(t-n_d) + u_2(t)$,
(30) $\hat\theta_n(t) = \hat\theta_n(t-1) + L_n(t)[\hat w(t) - \hat\varphi_n^T(t)\hat\theta_n(t-1)]$,
(31) $L_n(t) = \dfrac{P_n(t-1)\hat\varphi_n(t)}{1 + \hat\varphi_n^T(t)P_n(t-1)\hat\varphi_n(t)}$,
(32) $P_n(t) = [I - L_n(t)\hat\varphi_n^T(t)]P_n(t-1)$, $P_n(0) = p_0 I$,
(33) $\hat\varphi_n(t) = [\hat v(t-1), \hat v(t-2), \ldots, \hat v(t-n_d)]^T$,
(34) $\hat w(t) = y(t) - \varphi_s^T(t)\hat\theta_s(t-1)$,
(35) $\hat v(t) = \hat w(t) - \hat\varphi_n^T(t)\hat\theta_n(t)$,
(36) $\varphi_s(t) = [-y(t-1), \ldots, -y(t-n_a), u_1(t-1), \ldots, u_1(t-n_1), u_2(t-1), \ldots, u_2(t-n_2)]^T$,
(37) $\hat\theta_s(t) = [\hat a_1(t), \hat a_2(t), \ldots, \hat a_{n_a}(t), \hat b_{11}(t), \hat b_{12}(t), \ldots, \hat b_{1n_1}(t), \hat b_{21}(t), \hat b_{22}(t), \ldots, \hat b_{2n_2}(t)]^T$,
(38) $\hat\theta_n(t) = [\hat d_1(t), \hat d_2(t), \ldots, \hat d_{n_d}(t)]^T$.
The data filtering based recursive least squares algorithm has high computational efficiency because the dimensions of its covariance matrices are small, and it can generate more accurate parameter estimates. To initialize the algorithm, we take
(39) $\hat\theta_s(i) = \mathbf{1}_{n_a + n_1 + n_2}/p_0$, $\hat\theta_n(i) = \mathbf{1}_{n_d}/p_0$ for $i \le 0$, $P_f(0) = p_0 I_{n_a + n_1 + n_2}$, $P_n(0) = p_0 I_{n_d}$, $p_0 = 10^6$.
The steps of the F-RLS algorithm are listed as follows.

1. Set $y(t) = 0$, $u_1(t) = 0$, $u_2(t) = 0$ for $t \le 0$.

2. Let $t = 1$; set the initial values of the parameter estimation vectors and the covariance matrices according to (39), and $\hat y_f(i) = 1/p_0$, $\hat u_{f1}(i) = 1/p_0$, $\hat u_{f2}(i) = 1/p_0$, $\hat w(i) = 1/p_0$, $\hat v(i) = 1/p_0$ for $i \le 0$.

3. Collect the input-output data $u_1(t)$, $u_2(t)$, and $y(t)$, and construct the information vectors $\varphi_s(t)$ by (36), $\hat\varphi_f(t)$ by (26), and $\hat\varphi_n(t)$ by (33).

4. Compute $\hat w(t)$ by (34), the gain vector $L_n(t)$ by (31), and the covariance matrix $P_n(t)$ by (32).

5. Update the parameter estimate $\hat\theta_n(t)$ by (30).

6. Compute $\hat v(t)$ by (35), $\hat y_f(t)$ by (27), $\hat u_{f1}(t)$ by (28), and $\hat u_{f2}(t)$ by (29).

7. Compute the gain vector $L_f(t)$ by (24) and the covariance matrix $P_f(t)$ by (25).

8. Update the parameter estimate $\hat\theta_s(t)$ by (23).

9. Increase $t$ by 1 and go to Step 3.
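The steps above can be sketched end to end as follows. This is our own minimal NumPy implementation of (23)-(38), not code from the paper; for simplicity it uses zero initial conditions for the filtered signals and noise estimates rather than the $1/p_0$ values of Step 2:

```python
import numpy as np

def f_rls(y, u1, u2, na, n1, n2, nd, p0=1e6):
    """Sketch of the F-RLS algorithm (23)-(39). Variable names follow
    the paper: th_s/th_n are theta_s/theta_n estimates, Pf/Pn the
    covariance matrices, vh the noise estimates v_hat."""
    ns = na + n1 + n2
    th_s = np.ones(ns) / p0          # initialization (39)
    th_n = np.ones(nd) / p0
    Pf = p0 * np.eye(ns)
    Pn = p0 * np.eye(nd)
    N = len(y)
    yf = np.zeros(N); uf1 = np.zeros(N); uf2 = np.zeros(N)
    vh = np.zeros(N)
    past = lambda x, t, k: x[t - k] if t - k >= 0 else 0.0

    for t in range(N):
        # information vectors (36) and (33)
        phi_s = np.array([-past(y, t, i) for i in range(1, na + 1)]
                         + [past(u1, t, i) for i in range(1, n1 + 1)]
                         + [past(u2, t, i) for i in range(1, n2 + 1)])
        phi_n = np.array([past(vh, t, i) for i in range(1, nd + 1)])
        # noise-model update: (34), (31), (32), (30), (35)
        wh = y[t] - phi_s @ th_s
        Ln = Pn @ phi_n / (1.0 + phi_n @ Pn @ phi_n)
        th_n = th_n + Ln * (wh - phi_n @ th_n)
        Pn = (np.eye(nd) - np.outer(Ln, phi_n)) @ Pn
        vh[t] = wh - phi_n @ th_n
        # filtering recursions (27)-(29) with the current d-hat
        yf[t]  = y[t]  - sum(th_n[i-1] * past(yf,  t, i) for i in range(1, nd + 1))
        uf1[t] = u1[t] - sum(th_n[i-1] * past(uf1, t, i) for i in range(1, nd + 1))
        uf2[t] = u2[t] - sum(th_n[i-1] * past(uf2, t, i) for i in range(1, nd + 1))
        phi_f = np.array([-past(yf, t, i) for i in range(1, na + 1)]
                         + [past(uf1, t, i) for i in range(1, n1 + 1)]
                         + [past(uf2, t, i) for i in range(1, n2 + 1)])
        # system-model update: (24), (25), (23)
        Lf = Pf @ phi_f / (1.0 + phi_f @ Pf @ phi_f)
        th_s = th_s + Lf * (yf[t] - phi_f @ th_s)
        Pf = (np.eye(ns) - np.outer(Lf, phi_f)) @ Pf
    return th_s, th_n
```

On data simulated from the example system of Section 4, the returned estimates approach the true parameter values as the data length grows.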

3. The RELS Algorithm

To show the advantages of the proposed algorithm, we give the recursive extended least squares (RELS) algorithm for comparison.

Let $\hat\theta(t) = [\hat\theta_s^T(t), \hat\theta_n^T(t)]^T$ be the estimate of $\theta = [\theta_s^T, \theta_n^T]^T$. Based on the identification model in (16), replacing the unknown variables $v(t-i)$ in the information vector $\varphi_0(t)$ with their estimates $\hat v(t-i)$ gives the following recursive extended least squares algorithm for identifying the parameter vector $\theta$:
(40)
$\hat\theta(t) = \hat\theta(t-1) + L(t)[y(t) - \hat\varphi^T(t)\hat\theta(t-1)]$,
$L(t) = \dfrac{P(t-1)\hat\varphi(t)}{1 + \hat\varphi^T(t)P(t-1)\hat\varphi(t)}$,
$P(t) = [I - L(t)\hat\varphi^T(t)]P(t-1)$,
$\hat\varphi(t) = [-y(t-1), \ldots, -y(t-n_a), u_1(t-1), \ldots, u_1(t-n_1), u_2(t-1), \ldots, u_2(t-n_2), \hat v(t-1), \ldots, \hat v(t-n_d)]^T$,
$\hat v(t) = y(t) - \hat\varphi^T(t)\hat\theta(t)$,
$\hat\theta(t) = [\hat a_1(t), \ldots, \hat a_{n_a}(t), \hat b_{11}(t), \hat b_{12}(t), \ldots, \hat b_{1n_1}(t), \hat b_{21}(t), \hat b_{22}(t), \ldots, \hat b_{2n_2}(t), \hat d_1(t), \ldots, \hat d_{n_d}(t)]^T$.
In this RELS algorithm, the forgetting factor used is 1.
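A minimal sketch of the RELS recursion (40), under the same conventions as before (our own code, zero initial conditions): unlike F-RLS, the full parameter vector is estimated with one covariance matrix of dimension $n_a + n_1 + n_2 + n_d$.

```python
import numpy as np

def rels(y, u1, u2, na, n1, n2, nd, p0=1e6):
    """Sketch of the RELS algorithm (40): theta = [theta_s; theta_n]
    is estimated jointly; vh holds the noise estimates v_hat."""
    n = na + n1 + n2 + nd
    th = np.ones(n) / p0
    P = p0 * np.eye(n)
    vh = np.zeros(len(y))
    past = lambda x, t, k: x[t - k] if t - k >= 0 else 0.0
    for t in range(len(y)):
        phi = np.array([-past(y, t, i) for i in range(1, na + 1)]
                       + [past(u1, t, i) for i in range(1, n1 + 1)]
                       + [past(u2, t, i) for i in range(1, n2 + 1)]
                       + [past(vh, t, i) for i in range(1, nd + 1)])
        L = P @ phi / (1.0 + phi @ P @ phi)
        th = th + L * (y[t] - phi @ th)
        P = (np.eye(n) - np.outer(L, phi)) @ P
        vh[t] = y[t] - phi @ th
    return th
```

The F-RLS algorithm instead runs two smaller recursions, which is where its computational saving comes from.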

4. Example

Consider the following example:
(41)
$A(z)y(t) = B_1(z)u_1(t) + B_2(z)u_2(t) + D(z)v(t)$,
$A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} = 1 + 0.50z^{-1} + 0.80z^{-2}$,
$B_1(z) = b_{11}z^{-1} + b_{12}z^{-2} = 0.40z^{-1} + 0.30z^{-2}$,
$B_2(z) = b_{21}z^{-1} + b_{22}z^{-2} = 0.50z^{-1} + 0.60z^{-2}$,
$D(z) = 1 + d_1 z^{-1} = 1 - 0.40z^{-1}$,
$\theta = [a_1, a_2, b_{11}, b_{12}, b_{21}, b_{22}, d_1]^T = [0.50, 0.80, 0.40, 0.30, 0.50, 0.60, -0.40]^T$.

The inputs $\{u_1(t)\}$ and $\{u_2(t)\}$ are taken as two uncorrelated persistent excitation signal sequences with zero mean and unit variance, and $\{v(t)\}$ as a white noise sequence with zero mean and variance $\sigma^2 = 0.50^2$ or $\sigma^2 = 0.10^2$; the corresponding noise-to-signal ratios are $\delta_{ns} = 59.70\%$ and $\delta_{ns} = 11.94\%$, respectively. Applying the RELS and F-RLS algorithms to estimate the parameters of the system, the parameter estimates and their errors are shown in Tables 1 and 2, and the estimation errors $\delta := \|\hat\theta - \theta\|/\|\theta\|$ versus $t$ are shown in Figure 2 for $\sigma^2 = 0.10^2$.

Table 1: The parameter estimates and their errors ($\sigma^2 = 0.50^2$, $\delta_{ns} = 59.70\%$).

| Algorithm | $t$ | $a_1$ | $a_2$ | $b_{11}$ | $b_{12}$ | $b_{21}$ | $b_{22}$ | $d_1$ | $\delta$ (%) |
|---|---|---|---|---|---|---|---|---|---|
| F-RLS | 100 | 0.49783 | 0.82362 | 0.47259 | 0.33889 | 0.45701 | 0.67571 | −0.31256 | 10.87062 |
| F-RLS | 200 | 0.53237 | 0.83678 | 0.42607 | 0.30153 | 0.50390 | 0.70802 | −0.26142 | 13.33636 |
| F-RLS | 500 | 0.51680 | 0.83056 | 0.43167 | 0.29518 | 0.50208 | 0.66555 | −0.35787 | 6.59957 |
| F-RLS | 1000 | 0.50285 | 0.81789 | 0.42489 | 0.28951 | 0.50703 | 0.61872 | −0.35247 | 4.41100 |
| F-RLS | 2000 | 0.50405 | 0.81391 | 0.42283 | 0.27871 | 0.49694 | 0.62908 | −0.38038 | 3.56288 |
| F-RLS | 3000 | 0.50565 | 0.80834 | 0.41436 | 0.29579 | 0.49903 | 0.62586 | −0.39109 | 2.37173 |
| RELS | 100 | 0.53926 | 0.86216 | 0.45846 | 0.28535 | 0.44404 | 0.76552 | −0.23374 | 18.60645 |
| RELS | 200 | 0.54959 | 0.85622 | 0.40695 | 0.27414 | 0.46667 | 0.76896 | −0.23790 | 18.13522 |
| RELS | 500 | 0.51636 | 0.83069 | 0.42841 | 0.27674 | 0.48978 | 0.67084 | −0.38132 | 6.46314 |
| RELS | 1000 | 0.50859 | 0.81827 | 0.42018 | 0.27842 | 0.50739 | 0.62544 | −0.36367 | 4.24112 |
| RELS | 2000 | 0.50723 | 0.81444 | 0.42236 | 0.27730 | 0.49418 | 0.62936 | −0.38827 | 3.47473 |
| RELS | 3000 | 0.50794 | 0.80920 | 0.41212 | 0.28907 | 0.49876 | 0.62707 | −0.39464 | 2.48087 |
| True values | | 0.50000 | 0.80000 | 0.40000 | 0.30000 | 0.50000 | 0.60000 | −0.40000 | |

Table 2: The parameter estimates and their errors ($\sigma^2 = 0.10^2$, $\delta_{ns} = 11.94\%$).

| Algorithm | $t$ | $a_1$ | $a_2$ | $b_{11}$ | $b_{12}$ | $b_{21}$ | $b_{22}$ | $d_1$ | $\delta$ (%) |
|---|---|---|---|---|---|---|---|---|---|
| F-RLS | 100 | 0.48293 | 0.79642 | 0.41760 | 0.30069 | 0.49608 | 0.58878 | −0.40118 | 1.99087 |
| F-RLS | 200 | 0.51101 | 0.81127 | 0.40743 | 0.30352 | 0.50298 | 0.61790 | −0.37052 | 2.81593 |
| F-RLS | 500 | 0.50157 | 0.81012 | 0.40710 | 0.29963 | 0.50121 | 0.61021 | −0.39941 | 1.17016 |
| F-RLS | 1000 | 0.49914 | 0.80525 | 0.40512 | 0.29771 | 0.50204 | 0.60182 | −0.38316 | 1.35563 |
| F-RLS | 2000 | 0.50007 | 0.80429 | 0.40469 | 0.29566 | 0.49964 | 0.60499 | −0.39422 | 0.78466 |
| F-RLS | 3000 | 0.49969 | 0.80258 | 0.40296 | 0.29869 | 0.49998 | 0.60404 | −0.40137 | 0.43055 |
| RELS | 100 | 0.51462 | 0.82354 | 0.41087 | 0.30152 | 0.48869 | 0.64030 | −0.14920 | 18.69550 |
| RELS | 200 | 0.51683 | 0.81951 | 0.40067 | 0.29934 | 0.49279 | 0.63819 | −0.18953 | 15.98366 |
| RELS | 500 | 0.50478 | 0.81270 | 0.40486 | 0.29674 | 0.49758 | 0.61544 | −0.33064 | 5.07263 |
| RELS | 1000 | 0.50338 | 0.80708 | 0.40371 | 0.29683 | 0.50111 | 0.60612 | −0.34207 | 4.35710 |
| RELS | 2000 | 0.50298 | 0.80552 | 0.40417 | 0.29637 | 0.49871 | 0.60680 | −0.37711 | 1.81881 |
| RELS | 3000 | 0.50223 | 0.80344 | 0.40231 | 0.29826 | 0.49969 | 0.60581 | −0.38867 | 0.98907 |
| True values | | 0.50000 | 0.80000 | 0.40000 | 0.30000 | 0.50000 | 0.60000 | −0.40000 | |

Figure 2: The estimation errors $\delta$ versus $t$ ($\sigma^2 = 0.10^2$).

From Tables 1 and 2 and Figure 2, we can draw the following conclusions.

The parameter estimation errors become (generally) smaller as the data length $t$ increases, which shows that the proposed algorithm is effective.

The F-RLS algorithm is more accurate than the RELS algorithm; that is, the proposed F-RLS algorithm has better identification performance.

The parameter estimates given by the F-RLS algorithm converge to their true values faster than those given by the RELS algorithm.

The F-RLS algorithm has a higher computational efficiency than the RELS algorithm because the dimensions of its covariance matrices are smaller than the dimension of the covariance matrix in the RELS algorithm.

5. Conclusions

A data filtering based recursive least squares algorithm for two-input single-output systems with moving average noise has been proposed by means of the data filtering technique. The proposed algorithm requires less computational load and gives more accurate parameter estimates than the recursive extended least squares algorithm. The proposed method can be extended to nonuniformly sampled systems and nonlinear systems, and the convergence analysis of the proposed filtering based algorithm is worth further study. The proposed method can also be combined with the multi-innovation identification methods, the hierarchical identification methods, the auxiliary model identification methods, the iterative identification methods [48, 49], and other identification methods to study identification and adaptive control problems for linear or nonlinear, single-rate or dual-rate, and scalar or multivariable systems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities (no. JUSRP21129) and a Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).