This paper presents two identification methods for dual-rate sampled-data nonlinear output-error systems: a missing output estimation based stochastic gradient identification algorithm and an auxiliary model based stochastic gradient identification algorithm. Unlike the polynomial transformation based identification methods, the two methods proposed here estimate the unknown parameters directly. A numerical example confirms the effectiveness of the proposed methods.
1. Introduction
System identification plays an important role in many engineering applications [1–6]. Many identification methods assume that the input-output data at every sampling instant are available, both for linear systems [7–11] and for nonlinear systems [12–20], which is usually not the case in practice. When the input and output signals of a system are sampled at different rates, the system is usually called an irregularly sampled-data system [21–27]; dual-rate and multirate systems [28–30] are examples. Dual-rate/multirate systems, in which the input and the output are sampled at different frequencies, arise widely in robust filtering and control [31–33], adaptive control [34–37], and system identification [38–43]. In the literature on dual-rate system identification, the so-called polynomial transformation technique is often used to transform the dual-rate model [44, 45].
As far as we know, identification methods based on the polynomial transformation technique cannot estimate the parameters of the dual-rate system directly, and the number of unknown parameters they must estimate exceeds the number of unknown parameters in the original dual-rate system.
The nonlinear system consisting of a static nonlinear block followed by a linear dynamic subsystem is called a Hammerstein system [46–49]. The nonlinearity of a Hammerstein system is usually expressed by known basis functions [50, 51] or by a piecewise polynomial function [52, 53]. To the best of our knowledge, there is no work on identification of dual-rate Hammerstein systems with a preload nonlinearity. The main contribution of this paper is to present two methods that estimate the parameters of the dual-rate system directly. The proposed methods can be combined with the auxiliary model identification methods [54–57], the iterative identification methods [58–62], the multi-innovation identification methods [63–70], the hierarchical identification methods [71–83], and the two-stage or multistage identification methods [84, 85] to study identification problems for other linear systems [86–90] or nonlinear systems [91–97].
The rest of this paper is organized as follows. Section 2 introduces the dual-rate nonlinear output-error systems. Section 3 gives a missing output identification model based stochastic gradient algorithm. Section 4 provides an auxiliary model based stochastic gradient algorithm. Section 5 introduces an illustrative example. Finally, concluding remarks are given in Section 6.
2. Problem Formulation
Let "A =: X" or "X := A" stand for "X is defined as A," let the norm of a column vector X be defined by ∥X∥² := tr[XᵀX], and let the superscript T denote the matrix/vector transpose.
Consider the following dual-rate nonlinear output-error system with colored noise:
$$y(t)=\frac{B(z)}{A(z)}\,f(u(t))+v(t),\tag{1}$$
where y(t) is the system output, u(t) is the system input, v(t) is a stochastic white noise with zero mean, and A(z) and B(z) are polynomials in the unit backward shift operator $z^{-1}$ [$z^{-1}y(t)=y(t-1)$]:
$$A(z)=1+a_1z^{-1}+a_2z^{-2}+\cdots+a_nz^{-n},\qquad B(z)=b_1z^{-1}+b_2z^{-2}+\cdots+b_nz^{-n},\tag{2}$$
and f(u(t)) is a preload nonlinearity shown in Figure 1 and can be expressed as [98, 99]
$$f(u(t))=\begin{cases}u(t)+m_1,&u(t)>0,\\[2pt]0,&u(t)=0,\\[2pt]u(t)-m_2,&u(t)<0,\end{cases}\tag{3}$$
where $m_1$ and $-m_2$ are the two preload points.
Figure 1: The preload characteristics.
For the dual-rate sampled-data system, all the input data {u(t), t=0,1,2,…} and only the scarce output data {y(tq), t=0,1,2,…} (q⩾2) are available. The intersample or missing outputs y(tq+j), j=1,2,…,q-1, are unavailable.
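The dual-rate measurement pattern described above can be illustrated with a small sketch (our notation, not from the paper): every input u(t) is recorded, but only every q-th output y(tq) is measured.

```python
# Dual-rate data availability for q = 2: inputs at every instant,
# outputs only at multiples of q; the rest are missing outputs.
q, T = 2, 10
measured_output_times = [t for t in range(T + 1) if t % q == 0]
missing_output_times = [t for t in range(T + 1) if t % q != 0]
# measured_output_times = [0, 2, 4, 6, 8, 10]
# missing_output_times  = [1, 3, 5, 7, 9]
```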
Define a sign function
$$\mathrm{sgn}(u(t)):=\begin{cases}1,&\text{if }u(t)>0,\\[2pt]0,&\text{if }u(t)=0,\\[2pt]-1,&\text{if }u(t)<0.\end{cases}\tag{4}$$
Then the function f(u(t)) can be expressed as
$$f(u(t))=u(t)+\frac{m_1+m_2}{2}\,\mathrm{sgn}(u(t))+\frac{m_1-m_2}{2}\,\mathrm{sgn}(u^2(t)).\tag{5}$$
Let
$$g_1=\frac{m_1+m_2}{2},\qquad g_2=\frac{m_1-m_2}{2}.\tag{6}$$
Hence, we have
$$f(u(t))=u(t)+g_1\,\mathrm{sgn}(u(t))+g_2\,\mathrm{sgn}(u^2(t)).\tag{7}$$
Once $g_1$ and $g_2$ are estimated, the preload points can be recovered as $m_1=g_1+g_2$ and $m_2=g_1-g_2$.
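The equivalence of the piecewise form (3) and the sign-function form (7), and the recovery of m1 and m2 from g1 and g2, can be checked numerically; a minimal sketch assuming NumPy (the function names are ours):

```python
import numpy as np

def f_piecewise(u, m1=0.5, m2=0.3):
    # Piecewise preload nonlinearity (3): u+m1 for u>0, 0 at u=0, u-m2 for u<0.
    return np.where(u > 0, u + m1, np.where(u < 0, u - m2, 0.0))

def f_sign_form(u, m1=0.5, m2=0.3):
    # Equivalent form (7): f(u) = u + g1*sgn(u) + g2*sgn(u^2),
    # with g1 = (m1+m2)/2 and g2 = (m1-m2)/2; note sgn(u^2) = 1 for u != 0.
    g1, g2 = (m1 + m2) / 2, (m1 - m2) / 2
    return u + g1 * np.sign(u) + g2 * np.sign(u**2)

u = np.linspace(-2.0, 2.0, 101)           # includes u = 0
assert np.allclose(f_piecewise(u), f_sign_form(u))

# Recovering the preload points from g1, g2 as in the text:
g1, g2 = 0.4, 0.1
m1, m2 = g1 + g2, g1 - g2                  # m1 = 0.5, m2 = 0.3 (up to rounding)
```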
3. The Missing Outputs Identification Model Based Stochastic Gradient Algorithm
Substituting (7) into (1) gives
$$A(z)y(t)=B(z)\bigl(u(t)+g_1\,\mathrm{sgn}(u(t))+g_2\,\mathrm{sgn}(u^2(t))\bigr)+A(z)v(t).\tag{8}$$
Define the parameter vector θ and information vector φ1(t) as
$$\theta:=[a_1,a_2,\dots,a_n,\;b_1,b_2,\dots,b_n,\;b_1g_1,b_2g_1,\dots,b_ng_1,\;b_1g_2,b_2g_2,\dots,b_ng_2]^T\in\mathbb{R}^{4n},\tag{9}$$
$$\varphi_1(t):=[-y(t-1)+v(t-1),\dots,-y(t-n)+v(t-n),\;u(t-1),\dots,u(t-n),\;\mathrm{sgn}(u(t-1)),\dots,\mathrm{sgn}(u(t-n)),\;\mathrm{sgn}(u^2(t-1)),\dots,\mathrm{sgn}(u^2(t-n))]^T\in\mathbb{R}^{4n}.\tag{10}$$
From (9) and (10), we get
$$y(t)=\varphi_1^T(t)\theta+v(t)\tag{11}$$
or
$$y(tq)=\varphi_1^T(tq)\theta+v(tq).\tag{12}$$
Let θ^(t) be the estimate of θ. Defining and minimizing the cost function
$$J(\theta):=\bigl[y(tq)-\varphi_1^T(tq)\theta\bigr]^2\tag{13}$$
give the following stochastic gradient (SG) algorithm for estimating θ:
$$\hat\theta(tq)=\hat\theta(tq-q)+\frac{\hat\varphi_1(tq)}{r_1(tq)}e_1(tq),\tag{14}$$
$$\hat\theta(tq-i)=\hat\theta(tq-q),\quad i=q-1,q-2,\dots,1,\tag{15}$$
$$e_1(tq)=y(tq)-\hat\varphi_1^T(tq)\hat\theta(tq-q),$$
$$\hat\varphi_1(tq)=[-y(tq-1)+\hat v(tq-1),\dots,-y(tq-n)+\hat v(tq-n),\;u(tq-1),\dots,u(tq-n),\;\mathrm{sgn}(u(tq-1)),\dots,\mathrm{sgn}(u(tq-n)),\;\mathrm{sgn}(u^2(tq-1)),\dots,\mathrm{sgn}(u^2(tq-n))]^T,\tag{16}$$
$$\hat v(tq-i)=y(tq-i)-\hat\varphi_1^T(tq-i)\hat\theta(tq-i),\tag{17}$$
$$r_1(tq)=r_1(tq-q)+\|\hat\varphi_1(tq)\|^2,\quad r_1(0)=1.\tag{18}$$
Since the information vector φ^1(tq) on the right-hand side of (16) contains the unknown terms -y(tq-i)+v^(tq-i), i=q-1,q-2,…,1 (the intersample outputs y(tq-i) are not measured), the SG algorithm in (14)–(18) cannot be implemented. In this section, we use a missing outputs identification (MOI) model to overcome this difficulty: the unknown terms -y(tq-i)+v^(tq-i) are replaced with the estimates -y^(tq-i)+v^(tq-i) produced by the MOI model,
$$-\hat y(tq-i)+\hat v(tq-i)=-\hat\varphi_1^T(tq-i)\hat\theta(tq-i),\quad i=q-1,q-2,\dots,1,\tag{19}$$
$$\hat\varphi_1(tq-i+1)=[-\hat y(tq-i)+\hat v(tq-i),\,-\hat y(tq-i-1)+\hat v(tq-i-1),\dots,-\hat y(tq-q+1)+\hat v(tq-q+1),\,-y(tq-q)+\hat v(tq-q),\dots,-\hat y(tq-i+1-n)+\hat v(tq-i+1-n),\;u(tq-i),u(tq-i-1),\dots,u(tq-i+1-n),\;\mathrm{sgn}(u(tq-i)),\dots,\mathrm{sgn}(u(tq-i+1-n)),\;\mathrm{sgn}(u^2(tq-i)),\dots,\mathrm{sgn}(u^2(tq-i+1-n))]^T,$$
where -y^(tq-i)+v^(tq-i) represents the estimate of -y(tq-i)+v(tq-i) at time tq-i, θ^(tq-i) represents the estimate of θ at time tq-i, and φ^1(tq-i) represents the estimate of φ1(tq-i).
Thus, we have the following missing output estimates based SG (MOE-SG) algorithm for estimating the parameter vector θ in (9):
$$\hat\theta(tq)=\hat\theta(tq-q)+\frac{\hat\varphi_1(tq)}{r_1(tq)}e_1(tq),\tag{20}$$
$$\hat\theta(tq-i)=\hat\theta(tq-q),\quad i=q-1,q-2,\dots,1,\tag{21}$$
$$-\hat y(tq-i)+\hat v(tq-i)=-\hat\varphi_1^T(tq-i)\hat\theta(tq-i),\tag{22}$$
$$\hat\varphi_1(tq-i+1)=[-\hat y(tq-i)+\hat v(tq-i),\,-\hat y(tq-i-1)+\hat v(tq-i-1),\dots,-\hat y(tq-q+1)+\hat v(tq-q+1),\,-y(tq-q)+\hat v(tq-q),\dots,-\hat y(tq-i+1-n)+\hat v(tq-i+1-n),\;u(tq-i),u(tq-i-1),\dots,u(tq-i+1-n),\;\mathrm{sgn}(u(tq-i)),\dots,\mathrm{sgn}(u(tq-i+1-n)),\;\mathrm{sgn}(u^2(tq-i)),\dots,\mathrm{sgn}(u^2(tq-i+1-n))]^T,\tag{23}$$
$$e_1(tq)=y(tq)-\hat\varphi_1^T(tq)\hat\theta(tq-q),\tag{24}$$
$$r_1(tq)=r_1(tq-q)+\|\hat\varphi_1(tq)\|^2,\quad r_1(0)=1.\tag{25}$$
The steps of computing the parameter estimate θ^(tq) by the MOE-SG algorithm are listed as follows.
(1) Let u(-j)=0, y(-j)=0, j=0,1,2,…,n-1, and give a small positive number ε.
(2) Let t=1, r1(0)=1, and θ^(0)=1/p0 with 1 being a column vector whose entries are all unity and p0=10^6.
(3) Collect the input data u(tq),u(tq-1),…,u(tq-n), and collect the output data y(tq).
(4) Let i=q-1.
(5) Compute -y^(tq-i)+v^(tq-i) by (22) and form φ^1(tq-i+1) by (23).
(6) Decrease i by 1; if i⩾1, go to step (5); otherwise, go to the next step.
(7) Compute e1(tq) and r1(tq) by (24) and (25), respectively.
(8) Update the parameter estimate θ^(tq) by (20).
(9) Compare θ^(tq) with θ^(tq-q): if ∥θ^(tq)-θ^(tq-q)∥⩽ε, terminate the procedure and obtain θ^(tq); otherwise, increase t by 1 and go to step (3).
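The core stochastic-gradient update shared by the steps above can be sketched compactly; this is our own minimal illustration of the update equations (20), (24), and (25), not the authors' implementation, and building φ^1 from the missing-output model (22)–(23) is omitted here.

```python
import numpy as np

def sg_step(theta, r, phi, y):
    # One stochastic-gradient update in the style of (20), (24), (25):
    # innovation e = y - phi^T theta; r accumulates ||phi||^2;
    # the step size is 1/r, so corrections shrink over time.
    e = y - phi @ theta
    r = r + phi @ phi
    theta = theta + (phi / r) * e
    return theta, r, e

# Toy usage: estimate a scalar parameter (true value 2.0) from noiseless data.
theta, r = np.zeros(1), 1.0
for _ in range(1000):
    theta, r, _ = sg_step(theta, r, np.array([1.0]), 2.0)
# theta is now close to 2.0
```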
The flowchart of computing the MOE-SG parameter estimate θ^(tq) is shown in Figure 2.
Figure 2: The flowchart of computing the estimate θ^(tq).
4. The Auxiliary Model Based Stochastic Gradient Algorithm
Define
$$x(t)=\frac{B(z)}{A(z)}\bigl(u(t)+g_1\,\mathrm{sgn}(u(t))+g_2\,\mathrm{sgn}(u^2(t))\bigr).\tag{26}$$
From (8) and (26), we have
$$y(t)=x(t)+v(t).\tag{27}$$
Define the information vector φ2(t) as
$$\varphi_2(t):=[-x(t-1),-x(t-2),\dots,-x(t-n),\;u(t-1),u(t-2),\dots,u(t-n),\;\mathrm{sgn}(u(t-1)),\dots,\mathrm{sgn}(u(t-n)),\;\mathrm{sgn}(u^2(t-1)),\dots,\mathrm{sgn}(u^2(t-n))]^T\in\mathbb{R}^{4n}.\tag{28}$$
Then we get
$$x(t)=\varphi_2^T(t)\theta,\tag{29}$$
$$y(t)=\varphi_2^T(t)\theta+v(t).\tag{30}$$
Considering only the sampling instants tq at which the output is measured, (30) can be rewritten as
$$y(tq)=\varphi_2^T(tq)\theta+v(tq).\tag{31}$$
Let θ^(t) be the estimate of θ. Defining and minimizing the cost function
$$J(\theta):=\bigl[y(tq)-\varphi_2^T(tq)\theta\bigr]^2\tag{32}$$
give the following SG algorithm for estimating θ:
$$\hat\theta(tq)=\hat\theta(tq-q)+\frac{\varphi_2(tq)}{r_2(tq)}e_2(tq),\tag{33}$$
$$e_2(tq)=y(tq)-\varphi_2^T(tq)\hat\theta(tq-q),\tag{34}$$
$$\varphi_2(tq)=[-x(tq-1),-x(tq-2),\dots,-x(tq-n),\;u(tq-1),u(tq-2),\dots,u(tq-n),\;\mathrm{sgn}(u(tq-1)),\dots,\mathrm{sgn}(u(tq-n)),\;\mathrm{sgn}(u^2(tq-1)),\dots,\mathrm{sgn}(u^2(tq-n))]^T,\tag{35}$$
$$r_2(tq)=r_2(tq-q)+\|\varphi_2(tq)\|^2,\quad r_2(0)=1.\tag{36}$$
Because φ2(tq) in (35) contains the unknown noise-free outputs x(tq-i), the SG algorithm in (33)–(36) cannot be implemented. In this section, we use the auxiliary model: the unknown x(tq-i) are replaced with the outputs xa(tq-i) of an auxiliary model,
$$x_a(tq-i)=\theta_a^T(tq-i)\varphi_a(tq-i),\tag{37}$$
where θa(tq-i) := θ^(tq-i) is the parameter estimate of θ and φa(tq-i) := φ^2(tq-i) is the estimate of φ2(tq-i). This yields the following auxiliary model based stochastic gradient (AM-SG) algorithm:
$$\hat\theta(tq)=\hat\theta(tq-q)+\frac{\hat\varphi_2(tq)}{r_2(tq)}e_2(tq),\tag{38}$$
$$\hat\theta(tq-i)=\hat\theta(tq-q),\quad i=q-1,q-2,\dots,1,\tag{39}$$
$$x_a(tq-i)=\hat\theta^T(tq-i)\hat\varphi_2(tq-i),\tag{40}$$
$$\hat\varphi_2(tq-i+1)=[-x_a(tq-i),-x_a(tq-i-1),\dots,-x_a(tq-i+1-n),\;u(tq-i),u(tq-i-1),\dots,u(tq-i+1-n),\;\mathrm{sgn}(u(tq-i)),\dots,\mathrm{sgn}(u(tq-i+1-n)),\;\mathrm{sgn}(u^2(tq-i)),\dots,\mathrm{sgn}(u^2(tq-i+1-n))]^T,\tag{41}$$
$$e_2(tq)=y(tq)-\hat\varphi_2^T(tq)\hat\theta(tq-q),\tag{42}$$
$$r_2(tq)=r_2(tq-q)+\|\hat\varphi_2(tq)\|^2,\quad r_2(0)=1.\tag{43}$$
The steps of computing the parameter estimate θ^(tq) by the AM-SG algorithm are listed as follows.
(1) Let u(-j)=0, y(-j)=0, x(-j)=0, j=0,1,2,…,n-1, and give a small positive number ε.
(2) Let t=1, r2(0)=1, and θ^(0)=1/p0 with 1 being a column vector whose entries are all unity and p0=10^6.
(3) Collect the input data u(tq),u(tq-1),…,u(tq-n), and collect the output data y(tq).
(4) Let i=q-1.
(5) Compute xa(tq-i) by (40) and form φ^2(tq-i+1) by (41).
(6) Decrease i by 1; if i⩾1, go to step (5); otherwise, go to the next step.
(7) Compute e2(tq) and r2(tq) by (42) and (43), respectively.
(8) Update the parameter estimate θ^(tq) by (38).
(9) Compare θ^(tq) with θ^(tq-q): if ∥θ^(tq)-θ^(tq-q)∥⩽ε, terminate the procedure and obtain θ^(tq); otherwise, increase t by 1 and go to step (3).
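The key step that distinguishes the AM-SG algorithm is forming the information vector (41) from past auxiliary-model outputs instead of the unknown noise-free outputs; a minimal sketch assuming NumPy (the helper name is ours):

```python
import numpy as np

def build_phi2_hat(xa_hist, u_hist, n):
    # Estimated information vector in the style of (41): the unknown noise-free
    # outputs x(.) are replaced by the auxiliary-model outputs x_a(.).
    # xa_hist and u_hist are ordered oldest -> newest, so hist[-k] is the
    # value k steps in the past.
    past_xa = np.array([xa_hist[-k] for k in range(1, n + 1)])
    past_u = np.array([u_hist[-k] for k in range(1, n + 1)])
    return np.concatenate([-past_xa, past_u,
                           np.sign(past_u), np.sign(past_u ** 2)])

# With n = 1, u(t-1) = 2.0 and x_a(t-1) = 3.0:
phi = build_phi2_hat([3.0], [2.0], 1)
# phi = [-3.0, 2.0, 1.0, 1.0]; the auxiliary output (40) is then phi @ theta_hat
```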
The flowchart of computing the AM-SG parameter estimate θ^(tq) is shown in Figure 3.
Figure 3: The flowchart of computing the estimate θ^(tq).
Remark 1.
Compared with the polynomial transformation technique, the MOE-SG method and the AM-SG method can estimate the unknown parameters directly.
5. Example
Consider the following nonlinear output-error system with the updating period q=2:
$$y(t)=\frac{B(z)}{A(z)}f(u(t))+v(t),$$
$$A(z)=1+a_1z^{-1}+a_2z^{-2}=1+0.49z^{-1}-0.2z^{-2},$$
$$B(z)=b_1z^{-1}+b_2z^{-2}=0.2z^{-1}+0.4z^{-2},$$
$$f(u(t))=u(t)+\frac{m_1+m_2}{2}\mathrm{sgn}(u(t))+\frac{m_1-m_2}{2}\mathrm{sgn}(u^2(t))=u(t)+\frac{0.5+0.3}{2}\mathrm{sgn}(u(t))+\frac{0.5-0.3}{2}\mathrm{sgn}(u^2(t))=u(t)+0.4\,\mathrm{sgn}(u(t))+0.1\,\mathrm{sgn}(u^2(t));\tag{44}$$
the input {u(t)} is taken as a persistent excitation signal sequence with zero mean and unit variance, and {v(t)} is a white noise sequence with zero mean and variance σ² = 0.10². The unknown parameters are as follows:
$$\theta=[a_1,a_2,b_1,b_2,b_1g_1,b_2g_1,b_1g_2,b_2g_2]^T=[0.49,\,-0.2,\,0.2,\,0.4,\,0.08,\,0.16,\,0.02,\,0.04]^T.\tag{45}$$
Applying the MOE-SG algorithm and the AM-SG algorithm to estimate the parameters yields the estimates and errors shown in Tables 1 and 2; the parameter estimation errors δ := ∥θ^-θ∥/∥θ∥ versus t are shown in Figures 4 and 5.
Table 1: The MOE-SG algorithm estimates and errors.

  t       1000       2000       3000       4000       5000       True values
  a1       0.30790    0.43409    0.48162    0.49513    0.49505    0.49000
  a2      -0.16601   -0.20319   -0.20626   -0.20656   -0.20341   -0.20000
  b1       0.19508    0.19548    0.19462    0.19665    0.19816    0.20000
  b2       0.36487    0.39043    0.39879    0.40105    0.39987    0.40000
  b1g1     0.09729    0.09384    0.08995    0.08769    0.08705    0.08000
  b2g1     0.13565    0.14818    0.15401    0.15931    0.15867    0.16000
  b1g2     0.02161    0.02602    0.02558    0.02764    0.02770    0.02000
  b2g2     0.02641    0.03181    0.03127    0.03378    0.03385    0.04000
  δ (%)   26.70140    8.46344    2.72656    2.15284    1.91759
Table 2: The AM-SG algorithm estimates and errors.

  t       1000       2000       3000       4000       5000       True values
  a1       0.39201    0.46141    0.50310    0.49802    0.48917    0.49000
  a2      -0.18980   -0.19696   -0.19784   -0.20113   -0.20307   -0.20000
  b1       0.18974    0.19349    0.19872    0.20192    0.20281    0.20000
  b2       0.40122    0.41674    0.39648    0.40109    0.40350    0.40000
  b1g1     0.09799    0.08924    0.08427    0.08475    0.08276    0.08000
  b2g1     0.14716    0.15484    0.15489    0.16514    0.16040    0.16000
  b1g2     0.02005    0.02781    0.02034    0.02761    0.02600    0.02000
  b2g2     0.02674    0.03708    0.02712    0.03682    0.03467    0.04000
  δ (%)   14.27547    5.08770    2.79209    1.91002    1.41209
Figure 4: The parameter estimation errors δ versus t (MOE-SG).
Figure 5: The parameter estimation errors δ versus t (AM-SG).
From Tables 1 and 2 and Figures 4 and 5, we can draw the following conclusions.
Both the MOE-SG algorithm and the AM-SG algorithm can estimate the unknown parameters directly.
The parameter estimation errors decrease and approach zero as t increases.
6. Conclusions
Two identification methods for dual-rate nonlinear output-error systems are presented; they estimate the unknown parameters directly and avoid estimating more parameters than the original system contains. Furthermore, the two methods can also be extended to other systems, such as
$$y(t)=\frac{B(z)}{A(z)}f(u(t))+\frac{D(z)}{C(z)}v(t),\qquad A(z)y(t)=B(z)f(u(t))+D(z)v(t).\tag{46}$$
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China and supported by the Natural Science Foundation of Jiangsu Province (no. BK20131109).
References

- Ding F.
- Ding F., Liu Y., Xiao Y., Zhao X., Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model.
- Liu Y., Xie L., Ding F., An auxiliary model based recursive least-squares parameter estimation algorithm for non-uniformly sampled multirate systems.
- Liu Y., Sheng J., Ding R. F., Convergence of stochastic gradient estimation algorithm for multivariable ARX-like systems.
- Liu Y. J., Ding R., Consistency of the extended gradient identification algorithm for multi-input multi-output systems with moving average noises.
- Ding F., Chen T., Performance bounds of forgetting factor least-squares algorithms for time-varying systems with finite measurement data.
- Ding F., Shi Y., Chen T., Performance analysis of estimation algorithms of nonstationary ARMA processes.
- Ding F., Chen T., Qiu L., Bias compensation based recursive least-squares identification algorithm for MISO systems.
- Ding F., Yang H., Liu F., Performance analysis of stochastic gradient algorithms under weak conditions.
- Ding F., Coupled-least-squares identification for multivariable systems.
- Ding F., Liu X. G., Chu J., Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification principle.
- Wang D., Ding F., Least squares based and gradient based iterative identification for Wiener nonlinear systems.
- Wang D. Q., Ding F., Hierarchical least squares estimation algorithm for Hammerstein-Wiener systems.
- Wang D. Q., Ding F., Chu Y. Y., Data filtering based recursive least squares algorithm for Hammerstein systems using the key-term separation principle.
- Wang D. Q., Ding F., Liu X. M., Least squares algorithm for an input nonlinear system with a dynamic subspace state space model.
- Chen J., Zhang Y., Ding R. F., Auxiliary model based multi-innovation algorithms for multivariable nonlinear systems.
- Chen J., Wang X., Ding R. F., Gradient based estimation algorithm for Hammerstein systems with saturation and dead-zone nonlinearities.
- Chen J., Ding F., Least squares and stochastic gradient parameter estimation for multivariable nonlinear Box-Jenkins models based on the auxiliary model and the multi-innovation identification theory.
- Chen J., Zhang Y., Ding R. F., Gradient-based parameter estimation for input nonlinear systems with ARMA noises based on the auxiliary model.
- Ding F., Qiu L., Chen T., Reconstruction of continuous-time systems from their non-uniformly sampled discrete-time systems.
- Ding F., Ding J., Least-squares parameter estimation for systems with irregularly missing data.
- Ding F., Liu P. X., Liu G., Multiinnovation least-squares identification for system modeling.
- Liu Y. J., Ding F., Shi Y., Least squares estimation for a class of non-uniformly sampled systems based on the hierarchical identification principle.
- Ding F., Liu G., Liu X. P., Partially coupled stochastic gradient identification methods for non-uniformly sampled systems.
- Ding J., Ding F., Liu X. P., Liu G., Hierarchical least squares identification for linear SISO systems with dual-rate sampled-data.
- Ding F., Liu G., Liu X. P., Parameter estimation with scarce measurements.
- Chen J., Several gradient parameter estimation algorithms for dual-rate sampled systems.
- Chen J., Ding R., An auxiliary-model-based stochastic gradient algorithm for dual-rate sampled-data Box-Jenkins systems.
- Ding J., Shi Y., Wang H., Ding F., A modified stochastic gradient based parameter estimation algorithm for dual-rate sampled-data systems.
- Shi Y., Yu B., Output feedback stabilization of networked control systems with random delays modeled by Markov chains.
- Shi Y., Fang H., Kalman filter-based identification for systems with randomly missing measurements in a network environment.
- Shi Y., Yu B., Robust mixed H2/H∞ control of networked control systems with random time delays in both forward and backward communication links.
- Ding F., Chen T., Least squares based self-tuning control of dual-rate systems.
- Ding F., Chen T., A gradient based adaptive control algorithm for dual-rate systems.
- Ding F., Chen T., Iwai Z., Adaptive digital control of Hammerstein nonlinear systems with limited output sampling.
- Zhang J., Ding F., Shi Y., Self-tuning control based on multi-innovation stochastic gradient parameter estimation.
- Shi Y., Ding F., Chen T., Multirate crosstalk identification in xDSL systems.
- Liu Y. J., Ding F., Shi Y., An efficient hierarchical identification method for general dual-rate sampled-data systems.
- Ding F., Chen T., Identification of dual-rate systems based on finite impulse response models.
- Ding F., Chen T., Combined parameter and output estimation of dual-rate systems using an auxiliary model.
- Ding F., Chen T., Parameter estimation of dual-rate stochastic systems by using an output error method.
- Ding F., Chen T., Hierarchical identification of lifted state-space models for general dual-rate systems.
- Ding F., Liu P. X., Shi Y., Convergence analysis of estimation algorithms for dual-rate stochastic systems.
- Ding F., Liu P. X., Yang H., Parameter identification and intersample output estimation for dual-rate systems.
- Li J., Ding F., Maximum likelihood stochastic gradient estimation for Hammerstein systems with colored noise based on the key term separation technique.
- Li X. L., Zhou L. C., Ding R., Shing J., Recursive least-squares estimation for Hammerstein nonlinear systems with nonuniform sampling.
- Ding F., Liu X. P., Liu G., Identification methods for Hammerstein nonlinear systems.
- Ding F., Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling.
- Li J. H., Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration.
- Wang D., Chu Y., Ding F., Auxiliary model-based RELS and MI-ELS algorithm for Hammerstein OEMA systems.
- Vörös J., Modeling and parameter identification of systems with multisegment piecewise-linear characteristics.
- Vörös J., Modeling and identification of systems with backlash.
- Ding F., Shi Y., Chen T., Auxiliary model-based least-squares identification methods for Hammerstein output-error systems.
- Ding F., Liu P. X., Liu G., Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises.
- Ding F., Gu Y., Performance analysis of the auxiliary model based least squares identification algorithm for one-step state delay systems.
- Ding F., Gu Y., Performance analysis of the auxiliary model-based stochastic gradient parameter estimation algorithm for state space systems with one-step state delay.
- Ding F., Liu Y., Bao B., Gradient-based and least-squares-based iterative estimation algorithms for multi-input multi-output systems.
- Ding F., Liu P. X., Liu G., Gradient based and least-squares based iterative identification methods for OE and OEMA systems.
- Ding F., Decomposition based fast least squares algorithm for output error systems.
- Liu Y., Wang D., Ding F., Least squares based iterative algorithms for identifying Box-Jenkins models with finite measurement data.
- Wang D. Q., Least squares-based recursive and iterative estimation for output error moving average systems using data filtering.
- Ding F., Chen T., Performance analysis of multi-innovation gradient type identification methods.
- Ding F., Several multi-innovation identification methods.
- Ding F., Chen H., Li M., Multi-innovation least squares identification methods based on the auxiliary model for MISO systems.
- Han L., Ding F., Multi-innovation stochastic gradient algorithms for multi-input multi-output systems.
- Wang D., Ding F., Performance analysis of the auxiliary models based multi-innovation stochastic gradient estimation algorithm for output error systems.
- Xie L., Liu Y. J., Yang H. Z., Ding F., Modelling and identification for non-uniformly periodically sampled-data systems.
- Liu Y., Yu L., Ding F., Multi-innovation extended stochastic gradient algorithm and its performance analysis.
- Han L., Ding F., Identification for multirate multi-input systems using the multi-innovation identification theory.
- Ding F., Chen T., Hierarchical gradient-based identification of multivariable discrete-time systems.
- Ding F., Chen T., Hierarchical least squares identification methods for multivariable systems.
- Han H., Xie L., Ding F., Liu X., Hierarchical least-squares based iterative identification for multivariable systems with moving average noises.
- Zhang Z., Ding F., Liu X., Hierarchical gradient based iterative parameter estimation algorithm for multivariable output error moving average systems.
- Wang D. Q., Ding R., Dong X. Z., Iterative parameter estimation for a class of multivariable systems based on the hierarchical identification principle and the gradient search.
- Ding F., Chen T., On iterative solutions of general coupled matrix equations.
- Ding F., Liu P. X., Ding J., Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle.
- Ding F., Chen T., Gradient based iterative algorithms for solving a class of matrix equations.
- Ding F., Chen T., Iterative least-squares solutions of coupled Sylvester matrix equations.
- Ding F., Transformations between some special matrices.
- Xie L., Ding J., Ding F., Gradient based iterative solutions for general linear matrix equations.
- Ding J., Liu Y., Ding F., Iterative solutions to matrix equations of the form AiXBi=Fi.
- Xie L., Liu Y., Yang H., Gradient based and least squares based iterative algorithms for matrix equations AXB+CXTD=F.
- Ding F., Two-stage least squares based iterative estimation algorithm for CARARMA system modeling.
- Ding F., Duan H. H., Two-stage parameter estimation algorithms for Box-Jenkins systems.
- Ding J., Ding F., Bias compensation-based parameter estimation for output error moving average systems.
- Zhang Y., Unbiased identification of a class of multi-input single-output systems with correlated disturbances using bias compensation methods.
- Zhang Y., Cui G., Bias compensation methods for stochastic systems with colored noise.
- Ding F., Combined state and least squares parameter estimation algorithms for dynamic systems.
- Ding F., Liu X. M., Chen H. B., Yao G. Y., Hierarchical gradient based and hierarchical least squares based iterative parameter identification for CARARMA systems.
- Hu P. P., Ding F., Multistage least squares based iterative estimation for feedback nonlinear systems with moving average noises using the hierarchical identification principle.
- Li J., Ding F., Yang G., Maximum likelihood least squares identification method for input nonlinear finite impulse response moving average systems.
- Ding F., Chen T., Identification of Hammerstein nonlinear ARMAX systems.
- Ding F., Shi Y., Chen T., Gradient-based identification methods for Hammerstein nonlinear ARMAX models.
- Li J. H., Ding F., Hua L., Maximum likelihood Newton recursive and the Newton iterative estimation algorithms for Hammerstein CARAR systems.
- Luan X., Shi P., Liu F., Stabilization of networked control systems with random delays.
- Luan X. L., Zhao S. Y., Liu F., H-infinity control for discrete-time Markov jump systems with uncertain transition probabilities.
- Chen J., Lu L. X., Ding R., Parameter identification of systems with preload nonlinearities based on the finite impulse response model and negative gradient search.
- Chen J., Lv L., Ding R. F., Multi-innovation stochastic gradient algorithms for dual-rate sampled systems with preload nonlinearity.