This paper presents two identification methods for dual-rate sampled-data nonlinear output-error systems: a missing output estimation based stochastic gradient identification algorithm and an auxiliary model based stochastic gradient identification algorithm. Unlike the polynomial transformation based identification methods, the two methods proposed here estimate the unknown parameters directly. A numerical example confirms the effectiveness of the proposed methods.
1. Introduction
System identification plays an important role in many engineering applications [1–6]. Many identification methods assume that input-output data are available at every sampling instant, both for linear systems [7–11] and for nonlinear systems [12–20], which is often not the case in practice. When the input and output signals of a system are sampled at different rates, the system is usually called an irregularly sampled-data system [21–27]; dual-rate and multirate systems [28–30] are typical examples. Dual-rate/multirate systems, in which the input and the output are sampled at different frequencies, arise widely in robust filtering and control [31–33], adaptive control [34–37], and system identification [38–43]. In the literature on dual-rate system identification, the so-called polynomial transformation technique is often used to transform the dual-rate model [44, 45].
As far as we know, identification methods based on the polynomial transformation technique cannot estimate the parameters of the dual-rate system directly, and the number of unknown parameters to be estimated exceeds the number of unknown parameters of the original dual-rate system.
A nonlinear system consisting of a static nonlinear block followed by a linear dynamic subsystem is called a Hammerstein system [46–49]. The nonlinearity of a Hammerstein system is usually expressed by known basis functions [50, 51] or by a piecewise polynomial function [52, 53]. To the best of our knowledge, there is no work on identification of dual-rate Hammerstein systems with a preload nonlinearity. The main contribution of this paper is to present two methods that estimate the parameters of such dual-rate systems directly. The proposed methods can be combined with the auxiliary model identification methods [54–57], the iterative identification methods [58–62], the multi-innovation identification methods [63–70], the hierarchical identification methods [71–83], and the two-stage or multistage identification methods [84, 85] to study identification problems for other linear systems [86–90] or nonlinear systems [91–97].
The rest of this paper is organized as follows. Section 2 introduces the dual-rate nonlinear output-error systems. Section 3 gives a missing output identification model based stochastic gradient algorithm. Section 4 provides an auxiliary model based stochastic gradient algorithm. Section 5 introduces an illustrative example. Finally, concluding remarks are given in Section 6.
2. Problem Formulation
Let “X=:A” or “A:=X” stand for “A is defined as X,” let the norm of a column vector X be defined by ∥X∥^2 := tr[X^T X], and let the superscript T denote the matrix transpose.
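For a column vector, this trace definition is just the squared Euclidean norm; a quick NumPy check (the example vector is arbitrary):

```python
import numpy as np

# For a column vector X, tr(X^T X) equals the sum of squared entries,
# so the norm defined above is the ordinary Euclidean norm.
X = np.array([[3.0], [4.0]])          # arbitrary example column vector
norm_sq = np.trace(X.T @ X)           # ||X||^2 := tr[X^T X]
assert norm_sq == 3.0**2 + 4.0**2     # = 25.0
```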
Consider the following dual-rate nonlinear output-error system with colored noise:
(1) y(t) = \frac{B(z)}{A(z)} f(u(t)) + v(t),
where y(t) is the system output, u(t) is the system input, v(t) is stochastic white noise with zero mean, and A(z) and B(z) are polynomials in the unit backward shift operator z^{-1} [z^{-1} y(t) = y(t-1)]:
(2) A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_n z^{-n}, \quad B(z) = b_1 z^{-1} + b_2 z^{-2} + \cdots + b_n z^{-n},
and f(u(t)) is the preload nonlinearity shown in Figure 1, which can be expressed as [98, 99]
(3) f(u(t)) = \begin{cases} u(t) + m_1, & u(t) > 0, \\ 0, & u(t) = 0, \\ u(t) - m_2, & u(t) < 0, \end{cases}
where m_1 and -m_2 are the two preload points.
Figure 1: The preload characteristics.
For the dual-rate sampled-data system, all the input data {u(t): t = 0, 1, 2, …} and only the scarce output data {y(tq): t = 0, 1, 2, …} (q ⩾ 2) are available. The intersample or missing outputs y(tq+j), j = 1, 2, …, q-1, are unavailable.
Define a sign function
(4) \operatorname{sgn}(u(t)) := \begin{cases} 1, & u(t) > 0, \\ 0, & u(t) = 0, \\ -1, & u(t) < 0. \end{cases}
Then the function f(u(t)) can be expressed as
(5) f(u(t)) = u(t) + \frac{m_1 + m_2}{2} \operatorname{sgn}(u(t)) + \frac{m_1 - m_2}{2} \operatorname{sgn}(u^2(t)).
Let
(6) g_1 = \frac{m_1 + m_2}{2}, \quad g_2 = \frac{m_1 - m_2}{2}.
Hence, we have
(7) f(u(t)) = u(t) + g_1 \operatorname{sgn}(u(t)) + g_2 \operatorname{sgn}(u^2(t)).
Once g1 and g2 are estimated, the preload points can be recovered as m1 = g1 + g2 and m2 = g1 - g2.
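The equivalence between the piecewise form (3) and the sign-function form (7), and the recovery of the preload points, can be verified numerically; a small Python check using the preload values m1 = 0.5, m2 = 0.3 from the example in Section 5:

```python
def f_piecewise(u, m1, m2):
    # Preload nonlinearity as defined in (3).
    if u > 0:
        return u + m1
    if u < 0:
        return u - m2
    return 0.0

def sgn(x):
    # Sign function (4); note sgn(u^2) is 1 for u != 0 and 0 for u == 0.
    return (x > 0) - (x < 0)

def f_sign(u, g1, g2):
    # Decomposition (7): f(u) = u + g1*sgn(u) + g2*sgn(u^2).
    return u + g1 * sgn(u) + g2 * sgn(u * u)

m1, m2 = 0.5, 0.3                       # preload values from Section 5
g1, g2 = (m1 + m2) / 2, (m1 - m2) / 2   # definition (6)
for u in (-2.0, -0.1, 0.0, 0.1, 2.0):
    assert abs(f_piecewise(u, m1, m2) - f_sign(u, g1, g2)) < 1e-12
# Recovering the preload points from g1 and g2:
assert abs((g1 + g2) - m1) < 1e-12 and abs((g1 - g2) - m2) < 1e-12
```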
3. The Missing Outputs Identification Model Based Stochastic Gradient Algorithm
Substituting (7) into (1) gives
(8) A(z) y(t) = B(z)\left[u(t) + g_1 \operatorname{sgn}(u(t)) + g_2 \operatorname{sgn}(u^2(t))\right] + A(z) v(t).
Define the parameter vector θ and information vector φ1(t) as
(9) \theta := [a_1, a_2, \ldots, a_n, b_1, b_2, \ldots, b_n, b_1 g_1, b_2 g_1, \ldots, b_n g_1, b_1 g_2, b_2 g_2, \ldots, b_n g_2]^T \in \mathbb{R}^{4n},
(10) \varphi_1(t) := [-y(t-1)+v(t-1), -y(t-2)+v(t-2), \ldots, -y(t-n)+v(t-n), u(t-1), u(t-2), \ldots, u(t-n), \operatorname{sgn}(u(t-1)), \operatorname{sgn}(u(t-2)), \ldots, \operatorname{sgn}(u(t-n)), \operatorname{sgn}(u^2(t-1)), \operatorname{sgn}(u^2(t-2)), \ldots, \operatorname{sgn}(u^2(t-n))]^T \in \mathbb{R}^{4n}.
From (8)–(10), we get
(11) y(t) = \varphi_1^T(t)\theta + v(t)
or
(12) y(tq) = \varphi_1^T(tq)\theta + v(tq).
Let θ^(t) be the estimate of θ. Defining and minimizing the cost function
(13) J(\theta) := [y(tq) - \varphi_1^T(tq)\theta]^2
give the following stochastic gradient (SG) algorithm for estimating θ:
(14) \hat\theta(tq) = \hat\theta(tq-q) + \frac{\hat\varphi_1(tq)}{r_1(tq)} e_1(tq),
(15) \hat\theta(tq-i) = \hat\theta(tq-q), \quad i = q-1, q-2, \ldots, 1,
e_1(tq) = y(tq) - \hat\varphi_1^T(tq)\hat\theta(tq-q),
(16) \hat\varphi_1(tq) = [-y(tq-1)+\hat v(tq-1), -y(tq-2)+\hat v(tq-2), \ldots, -y(tq-n)+\hat v(tq-n), u(tq-1), u(tq-2), \ldots, u(tq-n), \operatorname{sgn}(u(tq-1)), \operatorname{sgn}(u(tq-2)), \ldots, \operatorname{sgn}(u(tq-n)), \operatorname{sgn}(u^2(tq-1)), \operatorname{sgn}(u^2(tq-2)), \ldots, \operatorname{sgn}(u^2(tq-n))]^T,
(17) \hat v(tq-i) = y(tq-i) - \hat\varphi_1^T(tq-i)\hat\theta(tq-i),
(18) r_1(tq) = r_1(tq-q) + \|\hat\varphi_1(tq)\|^2, \quad r_1(0) = 1.
Since the information vector φ^1(tq) on the right-hand side of (16) contains the unknown terms -y(tq-i)+v^(tq-i), i = q-1, q-2, …, 1, the SG algorithm in (14)–(18) cannot be implemented. In this section, we use the missing outputs identification (MOI) model to overcome this difficulty: the unknown terms -y(tq-i)+v^(tq-i) are replaced with the estimates -y^(tq-i)+v^(tq-i) produced by an MOI model,
(19) -\hat y(tq-i) + \hat v(tq-i) = -\hat\varphi_1^T(tq-i)\hat\theta(tq-i), \quad i = q-1, q-2, \ldots, 1,
\hat\varphi_1(tq-i+1) = [-\hat y(tq-i)+\hat v(tq-i), -\hat y(tq-i-1)+\hat v(tq-i-1), \ldots, -\hat y(tq-q+1)+\hat v(tq-q+1), -y(tq-q)+\hat v(tq-q), \ldots, -\hat y(tq-i+1-n)+\hat v(tq-i+1-n), u(tq-i), u(tq-i-1), \ldots, u(tq-i+1-n), \operatorname{sgn}(u(tq-i)), \operatorname{sgn}(u(tq-i-1)), \ldots, \operatorname{sgn}(u(tq-i+1-n)), \operatorname{sgn}(u^2(tq-i)), \operatorname{sgn}(u^2(tq-i-1)), \ldots, \operatorname{sgn}(u^2(tq-i+1-n))]^T,
where -y^(tq-i)+v^(tq-i) denotes the estimate of -y(tq-i)+v(tq-i) at time tq-i, θ^(tq-i) denotes the estimate of θ at time tq-i, and φ^1(tq-i) denotes the estimate of φ1(tq-i).
Thus, we have the following missing output estimates based SG (MOE-SG) algorithm for estimating the parameter vector θ in (9):
(20) \hat\theta(tq) = \hat\theta(tq-q) + \frac{\hat\varphi_1(tq)}{r_1(tq)} e_1(tq),
(21) \hat\theta(tq-i) = \hat\theta(tq-q), \quad i = q-1, q-2, \ldots, 1,
(22) -\hat y(tq-i) + \hat v(tq-i) = -\hat\varphi_1^T(tq-i)\hat\theta(tq-i),
(23) \hat\varphi_1(tq-i+1) = [-\hat y(tq-i)+\hat v(tq-i), -\hat y(tq-i-1)+\hat v(tq-i-1), \ldots, -\hat y(tq-q+1)+\hat v(tq-q+1), -y(tq-q)+\hat v(tq-q), \ldots, -\hat y(tq-i+1-n)+\hat v(tq-i+1-n), u(tq-i), u(tq-i-1), \ldots, u(tq-i+1-n), \operatorname{sgn}(u(tq-i)), \operatorname{sgn}(u(tq-i-1)), \ldots, \operatorname{sgn}(u(tq-i+1-n)), \operatorname{sgn}(u^2(tq-i)), \operatorname{sgn}(u^2(tq-i-1)), \ldots, \operatorname{sgn}(u^2(tq-i+1-n))]^T,
(24) e_1(tq) = y(tq) - \hat\varphi_1^T(tq)\hat\theta(tq-q),
(25) r_1(tq) = r_1(tq-q) + \|\hat\varphi_1(tq)\|^2, \quad r_1(0) = 1.
The steps of computing the parameter estimate θ^(tq) by the MOE-SG algorithm are listed as follows.
(1) Let u(-j) = 0, y(-j) = 0, j = 0, 1, 2, …, n-1, and give a small positive number ε.
(2) Let t = 1, r1(0) = 1, and θ^(0) = 1/p0, with 1 being a column vector whose entries are all unity and p0 = 10^6.
(3) Collect the input data u(tq), u(tq-1), …, u(tq-n) and the output data y(tq).
(4) Let i = q-1 and compute -y^(tq-i)+v^(tq-i) by (22).
(5) Form φ^1(tq-i+1) by (23).
(6) Decrease i by 1; if i ⩾ 1, go to step (4); otherwise, go to the next step.
(7) Compute e1(tq) and r1(tq) by (24) and (25), respectively.
(8) Update the parameter estimate θ^(tq) by (20).
(9) Compare θ^(tq) with θ^(tq-q): if ∥θ^(tq)-θ^(tq-q)∥ ⩽ ε, terminate the procedure and obtain θ^(tq); otherwise, increase t by 1 and go to step (3).
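The parameter update in (20), together with (24) and (25), is a normalized stochastic-gradient step. A minimal sketch in NumPy, stripped of the missing-output bookkeeping (the function name `sg_update` and its signature are ours, chosen for illustration):

```python
import numpy as np

def sg_update(theta, r, phi, y):
    """One normalized SG step in the form of (20), (24), (25):
    e = y - phi^T theta;  r <- r + ||phi||^2;  theta <- theta + (phi / r) e."""
    e = y - phi @ theta          # innovation against the previous estimate
    r = r + phi @ phi            # accumulated squared norm of the regressors
    theta = theta + (phi / r) * e
    return theta, r, e
```

Iterated over the sampled instants tq with the information vector filled in by the MOI model (22)-(23), this reproduces the recursion (20), (24), (25).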
The flowchart of computing the MOE-SG parameter estimate θ^(tq) is shown in Figure 2.
Figure 2: The flowchart of computing the estimate θ^(tq).
4. The Auxiliary Model Based Stochastic Gradient Algorithm
Define
(26) x(t) = \frac{B(z)}{A(z)}\left[u(t) + g_1 \operatorname{sgn}(u(t)) + g_2 \operatorname{sgn}(u^2(t))\right].
From (1) and (26), we have
(27) y(t) = x(t) + v(t).
Define the information vector φ2(t) as
(28) \varphi_2(t) := [-x(t-1), -x(t-2), \ldots, -x(t-n), u(t-1), u(t-2), \ldots, u(t-n), \operatorname{sgn}(u(t-1)), \operatorname{sgn}(u(t-2)), \ldots, \operatorname{sgn}(u(t-n)), \operatorname{sgn}(u^2(t-1)), \operatorname{sgn}(u^2(t-2)), \ldots, \operatorname{sgn}(u^2(t-n))]^T \in \mathbb{R}^{4n}.
Then we get
(29) x(t) = \varphi_2^T(t)\theta,
(30) y(t) = \varphi_2^T(t)\theta + v(t).
Assume that t is an integer multiple of q and rewrite (30) as
(31) y(tq) = \varphi_2^T(tq)\theta + v(tq).
Let θ^(t) be the estimate of θ. Defining and minimizing the cost function
(32) J(\theta) := [y(tq) - \varphi_2^T(tq)\theta]^2
give the following SG algorithm for estimating θ:
(33) \hat\theta(tq) = \hat\theta(tq-q) + \frac{\varphi_2(tq)}{r_2(tq)} e_2(tq),
(34) e_2(tq) = y(tq) - \varphi_2^T(tq)\hat\theta(tq-q),
(35) \varphi_2(tq) = [-x(tq-1), -x(tq-2), \ldots, -x(tq-n), u(tq-1), u(tq-2), \ldots, u(tq-n), \operatorname{sgn}(u(tq-1)), \operatorname{sgn}(u(tq-2)), \ldots, \operatorname{sgn}(u(tq-n)), \operatorname{sgn}(u^2(tq-1)), \operatorname{sgn}(u^2(tq-2)), \ldots, \operatorname{sgn}(u^2(tq-n))]^T,
(36) r_2(tq) = r_2(tq-q) + \|\varphi_2(tq)\|^2, \quad r_2(0) = 1.
Because the information vector φ2(tq) in (35) contains the unknown variables x(tq-i), the SG algorithm in (33)–(36) cannot be implemented. In this section, we use an auxiliary model to overcome this difficulty: the unknown x(tq-i) are replaced with the outputs xa(tq-i) of the auxiliary model,
(37) x_a(tq-i) = \theta_a^T(tq-i)\varphi_a(tq-i),
where θa(tq-i) is taken as the estimate θ^(tq-i) of θ and φa(tq-i) as the estimate φ^2(tq-i) of φ2(tq-i). This yields the following auxiliary model based stochastic gradient (AM-SG) algorithm:
(38) \hat\theta(tq) = \hat\theta(tq-q) + \frac{\hat\varphi_2(tq)}{r_2(tq)} e_2(tq),
(39) \hat\theta(tq-i) = \hat\theta(tq-q), \quad i = q-1, q-2, \ldots, 1,
(40) x_a(tq-i) = \hat\theta^T(tq-i)\hat\varphi_2(tq-i),
(41) \hat\varphi_2(tq-i+1) = [-x_a(tq-i), -x_a(tq-i-1), \ldots, -x_a(tq-i+1-n), u(tq-i), u(tq-i-1), \ldots, u(tq-i+1-n), \operatorname{sgn}(u(tq-i)), \operatorname{sgn}(u(tq-i-1)), \ldots, \operatorname{sgn}(u(tq-i+1-n)), \operatorname{sgn}(u^2(tq-i)), \operatorname{sgn}(u^2(tq-i-1)), \ldots, \operatorname{sgn}(u^2(tq-i+1-n))]^T,
(42) e_2(tq) = y(tq) - \hat\varphi_2^T(tq)\hat\theta(tq-q),
(43) r_2(tq) = r_2(tq-q) + \|\hat\varphi_2(tq)\|^2, \quad r_2(0) = 1.
The steps of computing the parameter estimate θ^(tq) by the AM-SG algorithm are listed as follows.
(1) Let u(-j) = 0, y(-j) = 0, x(-j) = 0, j = 0, 1, 2, …, n-1, and give a small positive number ε.
(2) Let t = 1, r2(0) = 1, and θ^(0) = 1/p0, with 1 being a column vector whose entries are all unity and p0 = 10^6.
(3) Collect the input data u(tq), u(tq-1), …, u(tq-n) and the output data y(tq).
(4) Let i = q-1 and compute xa(tq-i) by (40).
(5) Form φ^2(tq-i+1) by (41).
(6) Decrease i by 1; if i ⩾ 1, go to step (4); otherwise, go to the next step.
(7) Compute e2(tq) and r2(tq) by (42) and (43), respectively.
(8) Update the parameter estimate θ^(tq) by (38).
(9) Compare θ^(tq) with θ^(tq-q): if ∥θ^(tq)-θ^(tq-q)∥ ⩽ ε, terminate the procedure and obtain θ^(tq); otherwise, increase t by 1 and go to step (3).
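The AM-SG recursion (38)–(43) can be sketched on an illustrative first-order special case (n = 1, q = 2, preload terms omitted); the variable names below are ours, not the paper's, and the script is a sketch of the idea rather than the full algorithm:

```python
import numpy as np

# Illustrative first-order output-error system:
#   x(t) = -a1*x(t-1) + b1*u(t-1),  y(t) = x(t) + v(t),
# with y(t) measured only when t is a multiple of q = 2.
rng = np.random.default_rng(1)
a1, b1, q, N = 0.5, 0.8, 2, 4000
u = rng.standard_normal(N)
x = np.zeros(N)
for t in range(1, N):
    x[t] = -a1 * x[t - 1] + b1 * u[t - 1]
y = x + 0.01 * rng.standard_normal(N)        # noisy, scarcely sampled output

theta = np.zeros(2)   # estimate of [a1, b1]
r = 1.0               # r_2 in (43)
xa = np.zeros(N)      # auxiliary-model outputs replacing the unknown x(t)
for t in range(1, N):
    phi = np.array([-xa[t - 1], u[t - 1]])   # information vector as in (41)
    xa[t] = phi @ theta                      # auxiliary model (37)/(40)
    if t % q == 0:                           # update only at sampled outputs
        e = y[t] - phi @ theta               # innovation (42)
        r += phi @ phi                       # (43)
        theta = theta + (phi / r) * e        # (38)
```

After the run, theta has moved from the zero initialization toward [a1, b1]; accuracy improves slowly with the data length, mirroring the SG behavior reported in Section 5.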
The flowchart of computing the AM-SG parameter estimate θ^(tq) is shown in Figure 3.
Figure 3: The flowchart of computing the estimate θ^(tq).
Remark 1.
Compared with the polynomial transformation technique, the MOE-SG method and the AM-SG method can estimate the unknown parameters directly.
5. Example
Consider the following nonlinear output-error system with the updating period q=2:
(44) y(t) = \frac{B(z)}{A(z)} f(u(t)) + v(t),
A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} = 1 + 0.49 z^{-1} - 0.2 z^{-2},
B(z) = b_1 z^{-1} + b_2 z^{-2} = 0.2 z^{-1} + 0.4 z^{-2},
f(u(t)) = u(t) + \frac{m_1+m_2}{2} \operatorname{sgn}(u(t)) + \frac{m_1-m_2}{2} \operatorname{sgn}(u^2(t)) = u(t) + \frac{0.5+0.3}{2} \operatorname{sgn}(u(t)) + \frac{0.5-0.3}{2} \operatorname{sgn}(u^2(t)) = u(t) + g_1 \operatorname{sgn}(u(t)) + g_2 \operatorname{sgn}(u^2(t)) = u(t) + 0.4 \operatorname{sgn}(u(t)) + 0.1 \operatorname{sgn}(u^2(t));
the input {u(t)} is taken as a persistent excitation signal sequence with zero mean and unit variance, and {v(t)} is a white noise sequence with zero mean and variance σ^2 = 0.10^2. The unknown parameters are as follows:
(45) \theta = [a_1, a_2, b_1, b_2, b_1 g_1, b_2 g_1, b_1 g_2, b_2 g_2]^T = [0.49, -0.2, 0.2, 0.4, 0.08, 0.16, 0.02, 0.04]^T.
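The mapping from the coefficients in (44) to the parameter vector (45) can be checked numerically (a small sketch in Python):

```python
# Assemble the true parameter vector (45) from the coefficients in (44).
a1, a2 = 0.49, -0.2
b1, b2 = 0.2, 0.4
m1, m2 = 0.5, 0.3
g1, g2 = (m1 + m2) / 2, (m1 - m2) / 2   # g1 = 0.4, g2 = 0.1 by (6)
theta = [a1, a2, b1, b2, b1 * g1, b2 * g1, b1 * g2, b2 * g2]
expected = [0.49, -0.2, 0.2, 0.4, 0.08, 0.16, 0.02, 0.04]
assert all(abs(t - e) < 1e-12 for t, e in zip(theta, expected))
```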
The MOE-SG algorithm and the AM-SG algorithm are applied to estimate the parameters. The parameter estimates and their errors are shown in Tables 1 and 2, and the parameter estimation errors δ := ∥θ^-θ∥/∥θ∥ versus t are shown in Figures 4 and 5.
Table 1: The MOE-SG algorithm estimates and errors.

t     | 1000     | 2000    | 3000    | 4000    | 5000    | True values
a1    | 0.30790  | 0.43409 | 0.48162 | 0.49513 | 0.49505 | 0.49000
a2    | -0.16601 | -0.20319 | -0.20626 | -0.20656 | -0.20341 | -0.20000
b1    | 0.19508  | 0.19548 | 0.19462 | 0.19665 | 0.19816 | 0.20000
b2    | 0.36487  | 0.39043 | 0.39879 | 0.40105 | 0.39987 | 0.40000
b1g1  | 0.09729  | 0.09384 | 0.08995 | 0.08769 | 0.08705 | 0.08000
b2g1  | 0.13565  | 0.14818 | 0.15401 | 0.15931 | 0.15867 | 0.16000
b1g2  | 0.02161  | 0.02602 | 0.02558 | 0.02764 | 0.02770 | 0.02000
b2g2  | 0.02641  | 0.03181 | 0.03127 | 0.03378 | 0.03385 | 0.04000
δ (%) | 26.70140 | 8.46344 | 2.72656 | 2.15284 | 1.91759 |
Table 2: The AM-SG algorithm estimates and errors.

t     | 1000     | 2000    | 3000    | 4000    | 5000    | True values
a1    | 0.39201  | 0.46141 | 0.50310 | 0.49802 | 0.48917 | 0.49000
a2    | -0.18980 | -0.19696 | -0.19784 | -0.20113 | -0.20307 | -0.20000
b1    | 0.18974  | 0.19349 | 0.19872 | 0.20192 | 0.20281 | 0.20000
b2    | 0.40122  | 0.41674 | 0.39648 | 0.40109 | 0.40350 | 0.40000
b1g1  | 0.09799  | 0.08924 | 0.08427 | 0.08475 | 0.08276 | 0.08000
b2g1  | 0.14716  | 0.15484 | 0.15489 | 0.16514 | 0.16040 | 0.16000
b1g2  | 0.02005  | 0.02781 | 0.02034 | 0.02761 | 0.02600 | 0.02000
b2g2  | 0.02674  | 0.03708 | 0.02712 | 0.03682 | 0.03467 | 0.04000
δ (%) | 14.27547 | 5.08770 | 2.79209 | 1.91002 | 1.41209 |
Figure 4: The parameter estimation errors δ versus t (MOE-SG).
Figure 5: The parameter estimation errors δ versus t (AM-SG).
From Tables 1 and 2 and Figures 4 and 5, we can draw the following conclusions.
Both the MOE-SG algorithm and the AM-SG algorithm can estimate the unknown parameters directly.
The parameter estimation errors decrease and approach zero as t increases.
6. Conclusions
Two identification methods for dual-rate nonlinear output-error systems are presented; they estimate the unknown parameters directly and avoid estimating more parameters than the original systems contain. Furthermore, the two methods can also be extended to other systems such as
(46) y(t) = \frac{B(z)}{A(z)} f(u(t)) + \frac{D(z)}{C(z)} v(t), \qquad A(z) y(t) = B(z) f(u(t)) + D(z) v(t).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China and the Natural Science Foundation of Jiangsu Province (no. BK20131109).