This paper deals with the parameter estimation problem for multivariable nonlinear systems described by MIMO state-space Wiener models. Recursive parameter and state estimation algorithms are presented using the least squares technique, the adjustable model, and Kalman filter theory. The basic idea is to estimate jointly the parameters, the state vector, and the internal variables of MIMO Wiener models, based on a specific decomposition technique that extracts the internal vector and avoids problems related to the invertibility assumption. The effectiveness of the proposed algorithms is shown by an illustrative simulation example.
1. Introduction
Over the last years, modeling, identification, and parameter estimation theories have received much attention from various research teams [1–4]. Block-oriented nonlinear models, in particular, which consist of interconnected linear dynamic subsystems and memoryless nonlinear elements, have been widely used for modeling a large variety of nonlinear systems in fields as diverse as mechanical dynamics [5], chemical processes [6], biotechnologies [7], signal filtering [8], and so on. In fact, this class of nonlinear models is able to describe the dynamics of complex systems with a relatively simple structure. It can even simplify identification, control, or diagnostic problems [9–12]. In addition, several approaches developed in the linear case can be applied with appropriate practical implementations [13–19]. In recent years, many identification methods have been studied for block-oriented systems and a large number of works have been published in the literature. For example, Vörös [20] proposed a least squares based iterative algorithm for Hammerstein-Wiener systems with a backlash output; Hu et al. [21] developed an extended least squares parameter estimation algorithm for Wiener systems based on the overparameterization method; Ding et al. [22] used the hierarchical identification principle to identify Hammerstein systems; Mao and Ding [8] proposed a multi-innovation stochastic gradient algorithm for Hammerstein systems using the key term separation principle; Li [23] studied the maximum likelihood estimation algorithm for Hammerstein CARARMA systems; Guo and Bretthauer [24] proposed a recursive identification method for Wiener models based on the prediction error method. Furthermore, Chaudhary et al. [25] explored an adaptive algorithm based on fractional signal processing for parameter estimation of Hammerstein autoregressive models; Wu et al.
[26] presented a robust Hammerstein adaptive filtering algorithm based on the maximum correntropy criterion, which aims at maximizing the similarity between the model output and the real response; Falck et al. [27] proposed an identification method for NARX Wiener-Hammerstein models using a kernel-based estimation technique; Kibangou and Favier [28] developed a new approach for estimating a parallel-cascade Wiener system, using a joint diagonalization of the pth-order Volterra kernel slices to identify the linear subsystems and the least squares algorithm to identify the nonlinear subsystems. This approach has been extended to other block-oriented models with polynomial nonlinearities [29].
Recently, much attention has been paid to block-oriented state-space systems, which have been successfully used in control algorithms, identification schemes, and signal filtering [30, 31]. However, parameter estimation has become more difficult because block-oriented models include not only the unknown parameters of the linear and nonlinear subsystems but also the unmeasurable state variables [32–35]. In this framework, Wang and Ding proposed recursive parameter and state estimation for Hammerstein state-space systems [36] and for Hammerstein-Wiener state-space systems [37], using the hierarchical principle; Wang et al. [38] discussed an iterative identification algorithm for Hammerstein state-space systems, combining iterative least squares and the hierarchical identification method. However, for Wiener state-space models, there are few contributions in the literature that address the parameter estimation or state estimation problems. In fact, Westwick and Verhaegen [39] proposed a subspace identification method for MIMO Wiener systems with odd and even nonlinearities and a Gaussian input; Bruls et al. [40] derived separable least squares algorithms for a state-space Wiener model with a Chebyshev polynomial nonlinearity; Lovera et al. [41] developed a recursive subspace identification method for Wiener state-space models using the singular-value decomposition technique; Glaria Lopez and Sbarbaro [42] proposed an observer design for a Wiener model with known parameters.
The main difficulty in the identification of Wiener models is that the internal variables, acting between the linear and nonlinear blocks, are usually unavailable, and the available input-output data do not provide all the information on these unknown variables. To overcome this difficulty, most published works addressing the identification of Wiener systems adopt one of the following assumptions: invertibility of the unknown nonlinear element [43], a priori knowledge of the nonlinearity [44], approximation of the nonlinearity by a piecewise linear function [44], or a specific input signal [39]. However, these assumptions, and especially the invertibility assumption, severely limit the applicability of Wiener models because, in several real cases, the output nonlinearity is noninvertible or its inverse is quite complicated to find, especially for multivariable systems.
This paper introduces a recursive identification method for MIMO Wiener models. The model is characterised by a linear dynamic block in observer state-space form and a nonlinear block composed of combined and arbitrary (invertible or noninvertible) nonlinearities. A recursive algorithm combining the least squares technique, the adjustable model, and the Kalman filter principle is developed to solve the parameter and state estimation problem with low computational effort and a fast convergence rate. Indeed, in the proposed method, the parameters of the linear part and of the nonlinear part of the MIMO Wiener model are estimated separately, in order to decrease the dimension of the unknown parameter matrices and reduce parameter redundancy. Moreover, a modified Kalman filter and a specific decomposition technique are developed to extract and estimate the unknown internal vector without any search for the inverse nonlinear functions.
The remainder of this paper is organized as follows. Section 2 describes the problem formulation for MIMO Wiener state-space models. The least squares based and adjustable model based recursive parameter estimation algorithm and a new recursive state estimation algorithm based on Kalman filter theory are presented in Section 3. Section 4 provides an illustrative example to show the efficiency of the proposed algorithms. Finally, some concluding remarks are given in Section 5.
2. Problem Formulation
Consider the MIMO discrete-time Wiener model (Figure 1), where the linear dynamic part is given by the following state-space equation:
$$x(k+1)=A(k)x(k)+B(k)U(k)+W(k),\qquad Z(k)=C(k)x(k)+D(k)U(k)+V(k), \tag{1}$$
where $x^{T}(k)=[x_{1}(k)\ x_{2}(k)\ \cdots\ x_{n_X}(k)]$, $U^{T}(k)=[u_{1}(k)\ u_{2}(k)\ \cdots\ u_{n_U}(k)]$, and $Z^{T}(k)=[z_{1}(k)\ z_{2}(k)\ \cdots\ z_{n_Z}(k)]$ are, respectively, the state vector, the input vector, and the internal vector at discrete time $k$; $W^{T}(k)=[w_{1}(k)\ \cdots\ w_{n_X}(k)]$ and $V^{T}(k)=[v_{1}(k)\ \cdots\ v_{n_Z}(k)]$ are two noise vectors; and $A(k)\in\mathbb{R}^{n_X\times n_X}$, $B(k)\in\mathbb{R}^{n_X\times n_U}$, $C(k)\in\mathbb{R}^{n_Z\times n_X}$, and $D(k)\in\mathbb{R}^{n_Z\times n_U}$ are defined, respectively, by
$$A(k)=\begin{bmatrix}a_{11}(k)&a_{12}(k)&\cdots&a_{1n_X}(k)\\ a_{21}(k)&a_{22}(k)&\cdots&a_{2n_X}(k)\\ \vdots&\vdots&\ddots&\vdots\\ a_{n_X1}(k)&a_{n_X2}(k)&\cdots&a_{n_Xn_X}(k)\end{bmatrix},\qquad
B(k)=\begin{bmatrix}b_{11}(k)&b_{12}(k)&\cdots&b_{1n_U}(k)\\ b_{21}(k)&b_{22}(k)&\cdots&b_{2n_U}(k)\\ \vdots&\vdots&\ddots&\vdots\\ b_{n_X1}(k)&b_{n_X2}(k)&\cdots&b_{n_Xn_U}(k)\end{bmatrix},$$
$$C(k)=\begin{bmatrix}c_{11}(k)&c_{12}(k)&\cdots&c_{1n_X}(k)\\ c_{21}(k)&c_{22}(k)&\cdots&c_{2n_X}(k)\\ \vdots&\vdots&\ddots&\vdots\\ c_{n_Z1}(k)&c_{n_Z2}(k)&\cdots&c_{n_Zn_X}(k)\end{bmatrix},\qquad
D(k)=\begin{bmatrix}d_{11}(k)&d_{12}(k)&\cdots&d_{1n_U}(k)\\ d_{21}(k)&d_{22}(k)&\cdots&d_{2n_U}(k)\\ \vdots&\vdots&\ddots&\vdots\\ d_{n_Z1}(k)&d_{n_Z2}(k)&\cdots&d_{n_Zn_U}(k)\end{bmatrix}. \tag{2}$$
Figure 1: The MIMO Wiener model.
Assume that the orders $n_X$, $n_U$, and $n_Z$ are known and that the internal vector $Z(k)$ and the state vector $x(k)$ are unmeasurable. The static nonlinear function of the MIMO Wiener model is defined as
$$Y(k)=f(Z(k))+E(k)=\begin{bmatrix}f_{1}(z_{1}(k),z_{2}(k),\ldots,z_{n_Z}(k))\\ f_{2}(z_{1}(k),z_{2}(k),\ldots,z_{n_Z}(k))\\ \vdots\\ f_{n_Y}(z_{1}(k),z_{2}(k),\ldots,z_{n_Z}(k))\end{bmatrix}+\begin{bmatrix}e_{1}(k)\\ e_{2}(k)\\ \vdots\\ e_{n_Y}(k)\end{bmatrix}, \tag{3}$$
where $Y^{T}(k)=[y_{1}(k)\ y_{2}(k)\ \cdots\ y_{n_Y}(k)]$ is the system output vector, $f(Z(k))$ is the nonlinear function vector depending on the unknown internal vector $Z(k)$, and $E(k)$ is an error vector.
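To make the model structure concrete, the following minimal Python sketch simulates a noise-free MIMO Wiener model (a state-space linear block followed by a static nonlinearity); the matrices and the nonlinear function are illustrative assumptions, loosely based on the example in Section 4, not part of the derivation.

```python
import numpy as np

def simulate_wiener(A, B, C, D, f, U, x0):
    """Simulate x(k+1) = A x(k) + B U(k), Z(k) = C x(k) + D U(k),
    Y(k) = f(Z(k)); the noise terms W, V, E of (1) and (3) are omitted."""
    x = np.asarray(x0, dtype=float)
    Y = []
    for u in U:
        z = C @ x + D @ u        # internal (unmeasured) vector Z(k)
        Y.append(f(z))           # static nonlinearity on the output
        x = A @ x + B @ u        # linear state update
    return np.array(Y)

# Illustrative 2-state, 2-input, 2-output system (assumed values)
A = np.array([[0.0, 0.5], [-0.4, 0.0]])
B = np.array([[0.0, 0.85], [-0.6, 0.0]])
C = np.eye(2)
D = np.zeros((2, 2))
f = lambda z: np.array([z[0] + 0.3 * z[1], 0.3 * z[0] ** 2 + z[1] + 1.0])
U = [np.array([1.0, -1.0])] * 5
Y = simulate_wiener(A, B, C, D, f, U, x0=np.zeros(2))  # Y[0] = f(0) = [0, 1]
```

Note that only U(k) and Y(k) would be measurable in practice; Z(k) and x(k) are exactly the quantities the algorithms below must reconstruct.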
In the remainder of this paper, we propose to rewrite the system output vector Y(k) in two submodel forms.
The first submodel is given by the following equation:
$$Y(k)=\Theta\,\Psi(k,Z(k))+E(k), \tag{4}$$
where $\Theta$ and $\Psi(k,Z(k))$ are defined by
$$\Theta=\begin{bmatrix}\theta_{11}&\cdots&\theta_{1n_\Psi}\\ \theta_{21}&\cdots&\theta_{2n_\Psi}\\ \vdots&\ddots&\vdots\\ \theta_{n_Y1}&\cdots&\theta_{n_Yn_\Psi}\end{bmatrix},\qquad
\Psi(k,Z(k))=\begin{bmatrix}\psi_{1}(k,z_{1}(k),\ldots,z_{n_Z}(k))\\ \psi_{2}(k,z_{1}(k),\ldots,z_{n_Z}(k))\\ \vdots\\ \psi_{n_\Psi}(k,z_{1}(k),\ldots,z_{n_Z}(k))\end{bmatrix}. \tag{5}$$
The second submodel, however, is based on a decomposition of the nonlinear functions $f_i(\cdot)$ given in (3):
$$f_{i}(z_{1}(k),z_{2}(k),\ldots,z_{n_Z}(k))=z_{j}(k)+f_{i}^{*}(z_{1}(k),z_{2}(k),\ldots,z_{n_Z}(k)), \tag{6}$$
with $i=1,\ldots,n_Y$ and $j=1,\ldots,n_Z$.
Using (6), the system output vector $Y(k)$ can be written as
$$Y(k)=\Gamma Z(k)+\Theta^{\star}\Psi^{\star}(k)+E(k), \tag{7}$$
where $\Gamma\in\mathbb{R}^{n_Y\times n_Z}$ (if $n_Y=n_Z$, $\Gamma=I_{n_Y\times n_Y}$, with $I$ an identity matrix) and $\Theta^{\star}\Psi^{\star}(k)$ is defined by
$$\Theta^{\star}\Psi^{\star}(k)=\begin{bmatrix}\theta_{11}^{*}&\cdots&\theta_{1n_\Psi}^{*}\\ \theta_{21}^{*}&\cdots&\theta_{2n_\Psi}^{*}\\ \vdots&\ddots&\vdots\\ \theta_{n_Y1}^{*}&\cdots&\theta_{n_Yn_\Psi}^{*}\end{bmatrix}\begin{bmatrix}\psi_{1}^{*}(k,z_{1}(k),\ldots,z_{n_Z}(k))\\ \psi_{2}^{*}(k,z_{1}(k),\ldots,z_{n_Z}(k))\\ \vdots\\ \psi_{n_\Psi}^{*}(k,z_{1}(k),\ldots,z_{n_Z}(k))\end{bmatrix}. \tag{8}$$
It is worth noting that applying this decomposition technique to the Wiener model leads to a useful model form, because it extracts the inaccessible internal vector Z(k), which makes the identification and control schemes easier and more efficient.
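The decomposition (6)–(8) can be checked numerically. The sketch below uses the two-output nonlinearity of the example in Section 4 (y1 = z1 + 0.3 z2, y2 = 0.3 z1² + z2 + 1) and verifies that the decomposed form Γ Z(k) + Θ*Ψ*(k) reproduces f(Z(k)) exactly:

```python
import numpy as np

# Decomposition (6): each f_i is split as f_i(Z) = z_j + f_i*(Z), so the
# output (7) reads Y = Gamma Z + Theta* Psi*(k).
def f_full(z):
    return np.array([z[0] + 0.3 * z[1], 0.3 * z[0] ** 2 + z[1] + 1.0])

Gamma = np.eye(2)                       # nY = nZ, so Gamma = I
Theta_star = np.array([[0.0, 0.3, 0.0],
                       [0.3, 0.0, 1.0]])
psi_star = lambda z: np.array([z[0] ** 2, z[1], 1.0])

z = np.array([0.7, -1.2])               # an arbitrary internal vector
y_decomposed = Gamma @ z + Theta_star @ psi_star(z)
assert np.allclose(y_decomposed, f_full(z))   # both forms agree
```

The point of the rearrangement is visible in the code: Z(k) appears linearly (through Γ) and can therefore be isolated without inverting f.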
From (1) and (4), it is clear that the number of unknown parameters is quite large, which causes difficulties in the implementation of the identification algorithm. In order to reduce these difficulties and obtain precise estimates, we propose to estimate the parameters of the linear dynamic part and those of the static nonlinear part in two separate recursive steps. The objective of this paper is to present a new recursive method to estimate jointly the parameters (aij, bij, cij, dij), the state vector x(k), and the internal vector Z(k) using the measured input-output data {U(k), Y(k)}.
3. The Recursive Parametric and State Estimation Algorithm
In order to simplify the formulation of the parametric and state estimation problem, this section is divided into two subsections. The basic idea is to estimate recursively the dynamic linear part parameters and the static nonlinear part parameters of the considered MIMO Wiener model and then to estimate the state vector xk and the internal vector Zk.
3.1. The Parameter Estimation Algorithm
If the state vector x(k) and the internal vector Z(k) are known, then the following adjustable model can be applied to generate the estimate of the linear dynamic subsystem (1):
$$x_{p}(k+1)=\hat{A}(k+1)x(k)+\hat{B}(k+1)U(k),\qquad Z_{p}(k)=\hat{C}(k)x(k)+\hat{D}(k)U(k). \tag{9}$$
Using the least squares principle and minimizing the prediction error functions
$$\delta_{x}(k)=x(k)-x_{p}(k),\qquad \delta_{Z}(k)=Z(k)-Z_{p}(k), \tag{10}$$
we obtain the following recursive algorithm:
$$\begin{aligned}
\hat{A}(k)&=\hat{A}(k-1)+\xi_{x}(k-1)\,G_{x}\,\delta_{x}(k-1)\,x^{T}(k-2),\\
\hat{B}(k)&=\hat{B}(k-1)+\xi_{x}(k-1)\,G_{x}\,\delta_{x}(k-1)\,U^{T}(k-2),\\
\hat{C}(k)&=\hat{C}(k-1)+\xi_{Z}(k-1)\,G_{Z}\,\delta_{Z}(k-1)\,x^{T}(k-1),\\
\hat{D}(k)&=\hat{D}(k-1)+\xi_{Z}(k-1)\,G_{Z}\,\delta_{Z}(k-1)\,U^{T}(k-1),\\
\delta_{x}(k-1)&=x(k-1)-\hat{A}(k-1)x(k-2)-\hat{B}(k-1)U(k-2),\\
\delta_{Z}(k-1)&=Z(k-1)-\hat{C}(k-1)x(k-1)-\hat{D}(k-1)U(k-1),\\
\xi_{x}(k-1)&=\frac{l_{x}}{\lambda_{G_{x}}\left[x^{T}(k-2)x(k-2)+U^{T}(k-2)U(k-2)\right]},\\
\xi_{Z}(k-1)&=\frac{l_{Z}}{\lambda_{G_{Z}}\left[x^{T}(k-1)x(k-1)+U^{T}(k-1)U(k-1)\right]},
\end{aligned} \tag{11}$$
where Gx and GZ are positive definite symmetric matrices and λGx and λGZ are, respectively, the maximum eigenvalues of Gx and GZ. In order to guarantee the convergence of the parameter estimate matrices Â(k), B̂(k), Ĉ(k), and D̂(k), the gain parameters lx and lZ must be chosen such that 0 < lx < 2 and 0 < lZ < 2. Note that these gains can be chosen as time-varying parameters in order to improve the parameter estimation quality.
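As a sanity check of the adjustable-model update (11), the sketch below runs the Â, B̂ recursion on noise-free data from an assumed true system, with Gx = I and lx = 1 (so the update reduces to a normalized-gradient, Kaczmarz-type step) and with the states taken as measurable, as (9)–(11) assume:

```python
import numpy as np

def adjust_AB(A_hat, B_hat, x_prev, u_prev, x_curr, Gx, lx):
    """One step of the adjustable-model update (11) for A_hat and B_hat,
    driven by the state prediction error delta_x."""
    delta_x = x_curr - A_hat @ x_prev - B_hat @ u_prev
    lam = np.max(np.linalg.eigvalsh(Gx))     # largest eigenvalue of Gx
    xi = lx / (lam * (x_prev @ x_prev + u_prev @ u_prev))
    A_hat = A_hat + xi * Gx @ np.outer(delta_x, x_prev)
    B_hat = B_hat + xi * Gx @ np.outer(delta_x, u_prev)
    return A_hat, B_hat

# Assumed true system for the sketch (values from the Section 4 example)
A_true = np.array([[0.0, 0.5], [-0.4, 0.0]])
B_true = np.array([[0.0, 0.85], [-0.6, 0.0]])
rng = np.random.default_rng(0)
A_hat, B_hat = np.zeros((2, 2)), np.zeros((2, 2))
x = np.zeros(2)
for _ in range(2000):
    u = rng.uniform(-1.5, 1.5, 2)
    x_next = A_true @ x + B_true @ u         # noise-free state data
    A_hat, B_hat = adjust_AB(A_hat, B_hat, x, u, x_next, np.eye(2), lx=1.0)
    x = x_next
```

With a persistently exciting input and 0 < lx < 2, the estimates approach the true matrices, consistent with the convergence condition stated above.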
To avoid the parameter redundancy problem and construct the parameter estimates of the static nonlinear part (4), we suggest the following recursive least squares (RLS) algorithm:
$$\begin{aligned}
\hat{\Theta}(k)&=\hat{\Theta}(k-1)+\Xi(k)\left[P(k)\Psi(k,Z(k))\right]^{T},\\
P(k)&=P(k-1)-\frac{P(k-1)\Psi(k,Z(k))\Psi^{T}(k,Z(k))P(k-1)}{1+\Psi^{T}(k,Z(k))P(k-1)\Psi(k,Z(k))},\\
\Xi(k)&=Y(k)-\hat{\Theta}(k-1)\Psi(k,Z(k)),
\end{aligned} \tag{12}$$
where Θ̂(k) represents the estimate of Θ in the regression equation (4) and the initial value P(0) = p0 Ip×p is a positive definite symmetric matrix, with Ip×p an identity matrix and p0 generally taken as a large positive number, for example, p0 = 10⁵.
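A minimal sketch of the RLS recursion (12) on noise-free data; the true parameter matrix and the regressor (chosen generic and full-rank for identifiability) are assumptions for the demo, not the paper's example:

```python
import numpy as np

def rls_step(Theta_hat, P, psi, y):
    """One recursive least squares update in the spirit of (12):
    Theta_hat (nY x nPsi) is the parameter estimate, psi the regressor."""
    P = P - np.outer(P @ psi, P @ psi) / (1.0 + psi @ P @ psi)  # covariance
    xi = y - Theta_hat @ psi                                    # innovation
    Theta_hat = Theta_hat + np.outer(xi, P @ psi)               # parameters
    return Theta_hat, P

rng = np.random.default_rng(1)
Theta_true = np.array([[1.0, 0.3, 0.0, 0.0, 0.0],
                       [0.0, 1.0, 0.3, 0.0, 1.0]])   # assumed values
Theta_hat = np.zeros((2, 5))
P = 1e5 * np.eye(5)                                  # large p0, as suggested
for _ in range(300):
    z = rng.normal(size=2)
    psi = np.array([z[0], z[1], z[0] ** 2, z[1] ** 2, 1.0])
    y = Theta_true @ psi                             # noise-free output
    Theta_hat, P = rls_step(Theta_hat, P, psi, y)
```

Because the covariance update is applied first, `P @ psi` equals the usual RLS gain P(k−1)Ψ / (1 + ΨᵀP(k−1)Ψ), so the two common ways of writing the recursion coincide.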
In practice, the regression matrix Ψ(k, Z(k)) contains the unmeasurable internal vector Z(k), so the parameter estimation algorithms (11) and (12) cannot be implemented directly. In addition, the state vector x(k) is also unknown. The proposed solution is to replace x(k) and Z(k) in (11) and (12) with their estimates x̂(k) and Ẑ(k), defined as
$$\hat{x}(k)=\begin{bmatrix}\hat{x}_{1}(k)\\ \hat{x}_{2}(k)\\ \vdots\\ \hat{x}_{n_X}(k)\end{bmatrix},\qquad \hat{Z}(k)=\begin{bmatrix}\hat{z}_{1}(k)\\ \hat{z}_{2}(k)\\ \vdots\\ \hat{z}_{n_Z}(k)\end{bmatrix}. \tag{13}$$
3.2. Modified Kalman Filter
In the area of state estimation, the extended Kalman filter (EKF) is the most widely used nonlinear estimation method [45]. The EKF is based on the linearization of nonlinear models and the calculation of Jacobian matrices at each iteration, which may cause significant implementation difficulties and high estimation errors. Therefore, we propose to use a new filtering technique, developed in our previous work [15], based on the linear Kalman filter (KF). This filter yields the same performance as the KF and can be applied to block-oriented models without any linearization step. Let x̂(k) and x̂0(k) represent, respectively, the a priori and a posteriori estimates of x(k) at discrete time k, and let Ẑ(k) represent the a priori estimate of Z(k).
This part is based essentially on the Kalman filter principle, which is defined by the following.
Theorem 1.
Let $\alpha$ and $\beta$ be two random vectors such that $\Lambda=\begin{bmatrix}\alpha\\ \beta\end{bmatrix}$ is jointly Gaussian, with mean vector $\begin{bmatrix}\bar{\alpha}\\ \bar{\beta}\end{bmatrix}$ and variance-covariance matrix $\begin{bmatrix}\Sigma_{\alpha}&\Sigma_{\alpha\beta}\\ \Sigma_{\alpha\beta}^{T}&\Sigma_{\beta}\end{bmatrix}$. Then the conditional probability density $p(\alpha\mid\beta)$ is Gaussian, with mean and variance given, respectively, by
$$\hat{\alpha}=\bar{\alpha}+\Sigma_{\alpha\beta}\Sigma_{\beta}^{-1}\left(\beta-\bar{\beta}\right),\qquad \Sigma_{\alpha\mid\beta}=\Sigma_{\alpha}-\Sigma_{\alpha\beta}\Sigma_{\beta}^{-1}\Sigma_{\alpha\beta}^{T}. \tag{14}$$
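Theorem 1 can be verified numerically on a linear-Gaussian model α = Mβ + ε with β and ε independent: then Σαβ = MΣβ, the conditional-mean gain ΣαβΣβ⁻¹ recovers M, and the conditional covariance reduces to Cov(ε). All matrices below are assumptions for the check:

```python
import numpy as np

# Check of Theorem 1 on alpha = M beta + eps (beta, eps independent):
# the gain Sigma_ab Sigma_b^{-1} must equal M and the conditional
# covariance must equal Cov(eps).
M = np.array([[0.8, -0.2], [0.1, 0.5]])
Sigma_b = np.diag([2.0, 1.0])          # covariance of beta
Sigma_e = 0.1 * np.eye(2)              # covariance of eps
Sigma_ab = M @ Sigma_b                 # cross-covariance Cov(alpha, beta)
Sigma_a = M @ Sigma_b @ M.T + Sigma_e  # covariance of alpha

gain = Sigma_ab @ np.linalg.inv(Sigma_b)
cond_cov = Sigma_a - Sigma_ab @ np.linalg.inv(Sigma_b) @ Sigma_ab.T
assert np.allclose(gain, M)            # regression gain recovers M
assert np.allclose(cond_cov, Sigma_e)  # residual covariance is Cov(eps)
```

This is exactly the mechanism the Kalman-type updates below exploit: the correction gain is a cross-covariance times an inverse covariance.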
In a first step, applying this theorem to the linear state-space equation (1) yields
$$\begin{aligned}
\hat{x}_{0}(k)&=\hat{x}(k)+K_{x}(k)\left[Z(k)-\hat{C}(k)\hat{x}(k)-\hat{D}(k)U(k)\right],\\
K_{x}(k)&=P_{x}(k)\hat{C}^{T}(k)\left[Q_{Z}+\hat{C}(k)P_{x}(k)\hat{C}^{T}(k)\right]^{-1},\\
P_{x_0}(k)&=P_{x}(k)-K_{x}(k)\hat{C}(k)P_{x}(k),\\
\hat{x}(k+1)&=\hat{A}(k)\hat{x}_{0}(k)+\hat{B}(k)U(k),\\
P_{x}(k+1)&=\hat{A}(k)P_{x_0}(k)\hat{A}^{T}(k)+Q_{x},
\end{aligned} \tag{15}$$
where $Q_{x}$ and $Q_{Z}$ are the variance-covariance matrices of the noise vectors $W(k)$ and $V(k)$, respectively, and $P_{x}(k)=E[(x(k)-\hat{x}(k))(x(k)-\hat{x}(k))^{T}]$ is the state estimation error variance-covariance matrix.
However, the a posteriori estimate x̂0(k) contains the unknown internal vector Z(k), so algorithm (15) cannot be implemented directly. The solution is to replace the unknown internal vector Z(k) with its estimate Ẑ(k). For this purpose, we use the second form (7) of the system output vector Y(k) and apply the KF to the following state-space model:
$$\begin{aligned}
Z(k+1)&=C(k)x(k+1)+D(k)U(k+1)+V(k+1),\\
Y(k)&=\Gamma Z(k)+\Theta^{\star}\Psi^{\star}(k)+E(k).
\end{aligned} \tag{16}$$
Using Theorem 1 gives the following equations to generate the recursive estimate of Z(k):
$$\begin{aligned}
\hat{Z}(k+1)&=\hat{C}(k)\hat{A}(k)\hat{x}_{0}(k)+\hat{C}(k)\hat{B}(k)U(k)+\hat{D}(k)U(k+1)\\
&\quad+K_{Z}(k)\left[Y(k)-\Gamma\hat{C}(k)\hat{x}(k)-\Gamma\hat{D}(k)U(k)-\hat{\Theta}^{\star}(k)\tilde{\Psi}^{\star}(k)\right],\\
K_{Z}(k)&=\hat{C}(k)\hat{A}(k)P_{x}(k)\left[\Gamma\hat{C}(k)\right]^{T}\left[\Gamma\hat{C}(k)P_{x}(k)\left[\Gamma\hat{C}(k)\right]^{T}+Q_{Y}+\hat{\Theta}^{\star}(k)G(k)\hat{\Theta}^{\star T}(k)\right]^{-1},
\end{aligned} \tag{17}$$
where $Q_{Y}$ is the variance-covariance matrix of the noise vector $E(k)$ and $G(k)=E[(Y(k)-\hat{Y}(k))(Y(k)-\hat{Y}(k))^{T}]$ is the output error variance-covariance matrix.
Combining (15) and (17), the recursive algorithm generating the estimates x̂(k) and Ẑ(k) of the state vector x(k) and the internal vector Z(k) can be summarized as
$$\begin{aligned}
\hat{x}_{0}(k)&=\hat{x}(k)+K_{x}(k)\left[\hat{Z}(k)-\hat{C}(k)\hat{x}(k)-\hat{D}(k)U(k)\right],\\
K_{x}(k)&=P_{x}(k)\hat{C}^{T}(k)\left[Q_{Z}+\hat{C}(k)P_{x}(k)\hat{C}^{T}(k)\right]^{-1},\\
P_{x_0}(k)&=P_{x}(k)-K_{x}(k)\hat{C}(k)P_{x}(k),\\
\hat{x}(k+1)&=\hat{A}(k)\hat{x}_{0}(k)+\hat{B}(k)U(k),\\
P_{x}(k+1)&=\hat{A}(k)P_{x_0}(k)\hat{A}^{T}(k)+Q_{x},\\
\hat{Z}(k+1)&=\hat{C}(k)\hat{A}(k)\hat{x}_{0}(k)+\hat{C}(k)\hat{B}(k)U(k)+\hat{D}(k)U(k+1)\\
&\quad+K_{Z}(k)\left[Y(k)-\Gamma\hat{C}(k)\hat{x}(k)-\Gamma\hat{D}(k)U(k)-\hat{\Theta}^{\star}(k)\tilde{\Psi}^{\star}(k)\right],\\
K_{Z}(k)&=\hat{C}(k)\hat{A}(k)P_{x}(k)\left[\Gamma\hat{C}(k)\right]^{T}\left[\Gamma\hat{C}(k)P_{x}(k)\left[\Gamma\hat{C}(k)\right]^{T}+Q_{Y}+\hat{\Theta}^{\star}(k)G(k)\hat{\Theta}^{\star T}(k)\right]^{-1}.
\end{aligned} \tag{18}$$
The details and the convergence analysis of (18) are treated in [15].
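The following sketch exercises the recursion (18) in the simplified case Γ = I with a linear output (Θ* = 0) and noise-free data, where it reduces to a standard Kalman-type filter; the system matrices and the tuning values are assumptions. With exact model parameters, the state and internal-vector estimates converge to the true trajectories.

```python
import numpy as np

def rpse_state_step(x_hat, Px, Z_hat, U_k, U_k1, Y_k, A, B, C, D, Qx, QZ, QY):
    """One pass of (18) with Gamma = I and Theta* = 0 (linear output)."""
    Kx = Px @ C.T @ np.linalg.inv(QZ + C @ Px @ C.T)
    x0 = x_hat + Kx @ (Z_hat - C @ x_hat - D @ U_k)        # a posteriori state
    Px0 = Px - Kx @ C @ Px
    x_next = A @ x0 + B @ U_k                              # time update
    Px_next = A @ Px0 @ A.T + Qx
    KZ = C @ A @ Px @ C.T @ np.linalg.inv(C @ Px @ C.T + QY)
    Z_next = (C @ A @ x0 + C @ B @ U_k + D @ U_k1
              + KZ @ (Y_k - C @ x_hat - D @ U_k))          # internal vector
    return x_next, Px_next, Z_next

A = np.array([[0.0, 0.5], [-0.4, 0.0]])
B = np.array([[0.0, 0.85], [-0.6, 0.0]])
C = np.eye(2)
D = np.zeros((2, 2))
Q = 1e-4 * np.eye(2)                   # assumed noise covariances
rng = np.random.default_rng(2)
U = rng.uniform(-1.0, 1.0, (201, 2))
x = np.zeros(2)                        # true state (noise-free data)
x_hat, Px, Z_hat = np.ones(2), np.eye(2), np.zeros(2)
for k in range(200):
    Y = C @ x                          # linear output for this check
    x_hat, Px, Z_hat = rpse_state_step(x_hat, Px, Z_hat, U[k], U[k + 1], Y,
                                       A, B, C, D, Q, Q, Q)
    x = A @ x + B @ U[k]               # true state update
```

In the full algorithm, the nonlinear term Θ̂*(k)Ψ̃*(k) is subtracted from Y(k) in the innovation, which is what removes the nonlinearity without inverting it.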
Thus, we can replace x(k) and Z(k) in (11) and (12) with their estimates x^(k) and Z^(k).
Combining (11), (12), and (18), we can form a recursive parameter and state estimation algorithm for state-space MIMO Wiener systems (abbreviated as the RPSE algorithm). To initialize the RPSE algorithm, we take x̂(0), Ẑ(0), Â(0), B̂(0), Ĉ(0), D̂(0), and Θ̂(0) as small real matrices and vectors; for example, Â(0) = 10⁻³·𝟏, where 𝟏 is an nX × nX matrix whose elements are all equal to 1. We also take Gx = gx I, GZ = gZ I, P(0) = p0 I, and Px(0) = px I, with gx, gZ, p0, and px large positive numbers and I an identity matrix of appropriate size. Note that the initial conditions and the different gains must be chosen adequately in order to improve the estimation quality and the convergence speed of the various parameters.
The procedure for computing the parameter, state, and internal vector estimates using the RPSE algorithm is listed as follows:
Step 1: To initialize, let k = 1 and set x̂(1), x̂(2), Ẑ(1), Â(1), B̂(1), Ĉ(1), D̂(1), Θ̂(1), Gx, GZ, P(1), lx, lZ, Qx, QZ, Px(1), and G.
Step 2: Collect the input-output vectors U(k) and Y(k).
Step 3: Form ξx(k−1) and ξZ(k−1), compute δx(k−1) and δZ(k−1), and update Â(k), B̂(k), Ĉ(k), and D̂(k) using (11).
Step 4: Compute the covariance matrix P(k) and construct Θ̂(k) using (12).
Step 5: Compute the gains Kx(k) and KZ(k) and the covariance matrix Px(k+1), and update the state estimate x̂(k+1) and the internal vector estimate Ẑ(k+1) using (18).
Step 6: Increase k by 1 and go to Step 2.
The flowchart of the recursive algorithms used for the parameter estimation of nonlinear MIMO Wiener models is shown in Figure 2.
Figure 2: Flowchart for the recursive estimation of MIMO Wiener models.
4. Example
Consider the following state-space model:
$$x(k+1)=\begin{bmatrix}0&0.5\\-0.4&0\end{bmatrix}x(k)+\begin{bmatrix}0&0.85\\-0.6&0\end{bmatrix}U(k)+W(k),\qquad Z(k)=\begin{bmatrix}1&0\\0&1\end{bmatrix}x(k)+V(k), \tag{19}$$
and the system outputs are defined as
$$y_{1}(k)=z_{1}(k)+0.3\,z_{2}(k)+e_{1}(k),\qquad y_{2}(k)=0.3\,z_{1}^{2}(k)+z_{2}(k)+1+e_{2}(k). \tag{20}$$
These two outputs can be grouped into the following matrix form:
$$Y(k)=\Theta\Psi(k)+E(k), \tag{21}$$
where $\Theta$ and $\Psi(k)$ are given by
$$\Theta=\begin{bmatrix}1&0.3&0&0&0\\0&0&0.3&1&1\end{bmatrix},\qquad \Psi^{T}(k)=\begin{bmatrix}z_{1}(k)&z_{2}(k)&z_{1}^{2}(k)&z_{2}(k)&1\end{bmatrix}. \tag{22}$$
Using the decomposition technique, (21) can be written in a second matrix form:
$$Y(k)=Z(k)+\Theta^{*}\Psi^{*}(k)+E(k), \tag{23}$$
where $\Theta^{*}$ and $\Psi^{*}(k)$ are defined as
$$\Theta^{*}=\begin{bmatrix}0&0.3&0\\0.3&0&1\end{bmatrix},\qquad \Psi^{*T}(k)=\begin{bmatrix}z_{1}^{2}(k)&z_{2}(k)&1\end{bmatrix}. \tag{24}$$
In simulation, the inputs $u_{1}(k)$ and $u_{2}(k)$ are taken as two square sequences of levels $[-1.2,\ 1.2]$ and $[-1.5,\ 1.5]$, respectively; the variance-covariance matrices are $Q_{x}=\begin{bmatrix}0.005&0\\0&0.004\end{bmatrix}$, $Q_{Z}=\begin{bmatrix}0.015&0\\0&0.015\end{bmatrix}$, and $Q_{Y}=\sigma_{e}^{2}\begin{bmatrix}1&0\\0&1\end{bmatrix}$. The RPSE algorithm is applied to estimate the parameters, the state, and the internal variables of this system, with properly chosen gain parameters and initial conditions. The internal variables $z_{1}(k)$ and $z_{2}(k)$ and their estimates $\hat{z}_{1}(k)$ and $\hat{z}_{2}(k)$, together with the outputs $y_{1}(k)$ and $y_{2}(k)$ and the predicted outputs $\tilde{y}_{1}(k)$ and $\tilde{y}_{2}(k)$, are shown in Figure 3. The estimation errors $\delta_{z_1}(k)$, $\delta_{z_2}(k)$, $\delta_{y_1}(k)$, and $\delta_{y_2}(k)$ are shown in Figure 4. The evolution of the variances $\sigma_{y_1}^{2}(k)$ and $\sigma_{y_2}^{2}(k)$ of the system outputs $y_{1}(k)$ and $y_{2}(k)$ is given in Figure 5, where
$$\sigma_{y_i}^{2}(k)=\frac{\sum_{k=k_{i}}^{k_{f}}\left(\delta_{y_i}(k)-\bar{m}_{\delta_{y_i}}\right)^{2}}{k_{f}-k_{i}+1}, \tag{25}$$
with $\bar{m}_{\delta_{y_i}}$ the statistical mean of the output prediction error and $k_{i}$ and $k_{f}$ the initial and final discrete times. The parameter estimates and their estimation errors for different data lengths and noise variances are shown in Tables 1 and 2, where the parameter estimation error is defined by
$$\delta(k)=\left[\frac{\sum_{i=1}^{2}\sum_{j=1}^{2}\left(a_{ij}-\hat{a}_{ij}(k)\right)^{2}+\sum_{i=1}^{2}\sum_{j=1}^{2}\left(b_{ij}-\hat{b}_{ij}(k)\right)^{2}+\sum_{i=1}^{2}\sum_{j=1}^{2}\left(c_{ij}-\hat{c}_{ij}(k)\right)^{2}+\sum_{r=1}^{2}\sum_{s=1}^{5}\left(\theta_{rs}-\hat{\theta}_{rs}(k)\right)^{2}}{\sum_{i=1}^{2}\sum_{j=1}^{2}a_{ij}^{2}+\sum_{i=1}^{2}\sum_{j=1}^{2}b_{ij}^{2}+\sum_{i=1}^{2}\sum_{j=1}^{2}c_{ij}^{2}+\sum_{r=1}^{2}\sum_{s=1}^{5}\theta_{rs}^{2}}\right]^{0.5}\times 100. \tag{26}$$
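The error index (26) can be computed with a small helper; the perturbation applied to the estimates below is an arbitrary assumption, used only to illustrate the formula:

```python
import numpy as np

def param_error(true_mats, est_mats):
    """Relative parameter estimation error in the spirit of (26):
    stacked entrywise errors normalized by the true parameter norm, in %."""
    num = sum(np.sum((t - e) ** 2) for t, e in zip(true_mats, est_mats))
    den = sum(np.sum(t ** 2) for t in true_mats)
    return 100.0 * np.sqrt(num / den)

A = np.array([[0.0, 0.5], [-0.4, 0.0]])
Theta = np.array([[1.0, 0.3, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.3, 1.0, 1.0]])
A_hat = A + 0.01                      # hypothetical estimate, offset by 0.01
Theta_hat = Theta.copy()              # assumed exact here
err = param_error([A, Theta], [A_hat, Theta_hat])   # about 1.06 %
```

The tabulated δ(k) values below were obtained with the full set of estimated matrices (A, B, C, Θ); this sketch only illustrates the normalization.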
Table 1: The recursive parameter estimates and errors with σe² = 0.01.

| Parameter (true value) | M = 100 | M = 200 | M = 500 | M = 1000 |
|---|---|---|---|---|
| a11 = 0.0000 | −0.0166 | −0.0008 | −0.0002 | −0.0001 |
| a12 = 0.5000 | 0.4769 | 0.4989 | 0.5000 | 0.5001 |
| a21 = −0.4000 | −0.3965 | −0.3998 | −0.4000 | −0.4000 |
| a22 = 0.0000 | 0.0048 | 0.0002 | 0.0001 | 0.0000 |
| b11 = 0.0000 | 0.0010 | 0.0022 | 0.0001 | 0.0000 |
| b12 = 0.9220 | 0.8976 | 0.8991 | 0.9108 | 0.9219 |
| b21 = −0.7746 | −0.2654 | −0.5653 | −0.7731 | −0.7744 |
| b22 = 0.0000 | −0.1022 | 0.0101 | 0.0105 | −0.0002 |
| c11 = 1.0000 | 1.0012 | 1.0000 | 1.0000 | 1.0000 |
| c12 = 0.0000 | 0.0019 | 0.0000 | 0.0000 | 0.0000 |
| c21 = 0.0000 | −0.0007 | −0.0000 | −0.0000 | −0.0000 |
| c22 = 1.0000 | 0.9989 | 1.0000 | 1.0000 | 1.0000 |
| θ11 = 1.0000 | 0.9573 | 0.9902 | 1.0027 | 1.0000 |
| θ12 = 0.3000 | 0.1502 | 0.1853 | 0.2889 | 0.3001 |
| θ13 = 0.0000 | 0.0167 | −0.0027 | 0.0016 | 0.0010 |
| θ14 = 0.0000 | 0.1202 | 0.1153 | 0.1108 | 0.0979 |
| θ15 = 0.0000 | 0.0027 | 0.0062 | 0.0015 | 0.0001 |
| θ21 = 0.0000 | −0.0793 | −0.0046 | 0.0003 | 0.0000 |
| θ22 = 0.0000 | 0.0036 | 0.0043 | 0.0014 | 0.0002 |
| θ23 = 0.3000 | 0.2306 | 0.2490 | 0.2913 | 0.3001 |
| θ24 = 1.0000 | 0.3797 | 0.5472 | 0.9894 | 0.9989 |
| θ25 = 1.0000 | 0.9941 | 0.9526 | 1.0008 | 1.0001 |
| δ(k) (%) | 31.6697 | 19.9767 | 4.2708 | 3.6902 |
Table 2: The recursive parameter estimates and errors with σe² = 0.05.

| Parameter (true value) | M = 100 | M = 200 | M = 500 | M = 1000 |
|---|---|---|---|---|
| a11 = 0.0000 | −0.0166 | −0.0008 | 0.0000 | 0.0000 |
| a12 = 0.5000 | 0.4769 | 0.4989 | 0.5002 | 0.5000 |
| a21 = −0.4000 | −0.3965 | −0.3998 | −0.4000 | −0.4000 |
| a22 = 0.0000 | 0.0048 | 0.0002 | 0.0000 | 0.0001 |
| b11 = 0.0000 | 0.0012 | 0.0011 | 0.0010 | 0.0010 |
| b12 = 0.9220 | 0.8974 | 0.8996 | 0.8999 | 0.9105 |
| b21 = −0.7746 | −0.2654 | −0.5651 | −0.7659 | −0.7699 |
| b22 = 0.0000 | −0.1022 | −0.0101 | −0.0101 | −0.0091 |
| c11 = 1.0000 | 1.0012 | 1.0000 | 1.0000 | 1.0000 |
| c12 = 0.0000 | 0.0019 | 0.0000 | 0.0000 | 0.0000 |
| c21 = 0.0000 | −0.0007 | 0.0000 | 0.0000 | 0.0000 |
| c22 = 1.0000 | 0.9989 | 0.9995 | 1.0000 | 1.0000 |
| θ11 = 1.0000 | 0.9071 | 0.9567 | 1.0082 | 0.9976 |
| θ12 = 0.3000 | 0.1710 | 0.1773 | 0.2705 | 0.2995 |
| θ13 = 0.0000 | 0.0123 | −0.0052 | 0.0013 | 0.0010 |
| θ14 = 0.0000 | 0.1410 | 0.0473 | 0.1205 | 0.0992 |
| θ15 = 0.0000 | −0.0597 | 0.0073 | −0.0038 | −0.0025 |
| θ21 = 0.0000 | −0.0016 | −0.0075 | −0.0010 | 0.0013 |
| θ22 = 0.0000 | 0.0044 | 0.0044 | 0.0014 | 0.0001 |
| θ23 = 0.3000 | 0.4817 | 0.3529 | 0.2921 | 0.2997 |
| θ24 = 1.0000 | 0.4528 | 0.5773 | 0.9541 | 0.9903 |
| θ25 = 1.0000 | 0.9679 | 0.9486 | 1.0023 | 1.0010 |
| δ(k) (%) | 31.4375 | 18.7664 | 5.1010 | 3.8043 |
Figure 3: The internal variables z1(k) and z2(k) and their estimates ẑ1(k) and ẑ2(k), and the outputs y1(k) and y2(k) with the predicted outputs ỹ1(k) and ỹ2(k).
Figure 4: The estimation errors δz1(k), δz2(k), δy1(k), and δy2(k).
Figure 5: The output variances σ²y1(k) and σ²y2(k) with σe² = 0.01 and σe² = 0.05.
From the simulation results in Tables 1, 2, and 3 and Figures 3, 4, 5, and 6, we can draw the following conclusions:
The estimated parameters converge to the true ones, and the parameter estimation errors given by the RPSE algorithm become smaller as k increases and as the output variances decrease; see Tables 1 and 2.
The estimated internal outputs ẑ1(k) and ẑ2(k) and the predicted system outputs ỹ1(k) and ỹ2(k) can track the actual outputs with small estimation errors, without any computation of the inverse nonlinear function; see Figures 3 and 4.
The output variances σ²y1(k) and σ²y2(k) rapidly drop to low values when the noise variances decrease; see Figure 5.
The estimation quality is better when the parametric gains lx and lZ are chosen as time-varying parameters; see Figure 6.
The variances of the parameter estimates in the Monte Carlo simulation are small, which confirms the effectiveness of the RPSE algorithm; see Table 3.
The proposed algorithm can achieve a satisfactory estimation quality through an appropriate choice of the parametric gains and the innovation length.
Table 3: The Monte Carlo estimates and variances with σe² = 0.01.

| M | â11 | â12 | â21 | â22 | b̂11 | b̂12 | b̂21 |
|---|---|---|---|---|---|---|---|
| 500 | −0.0002 ± 0.0001 | 0.5000 ± 0.0045 | −0.4000 ± 0.0135 | 0.0001 ± 0.0006 | 0.0001 ± 0.0004 | 0.9108 ± 0.0089 | −0.7731 ± 0.0077 |
| 1000 | −0.0001 ± 0.00005 | 0.5001 ± 0.0026 | −0.4000 ± 0.00256 | 0.0000 ± 0.0002 | 0.0000 ± 0.00001 | 0.9219 ± 0.006897 | −0.7744 ± 0.00755 |
| True values | 0.0000 | 0.5000 | −0.4000 | 0.0000 | 0.0000 | 0.9220 | −0.7746 |

| M | b̂22 | ĉ11 | ĉ12 | ĉ21 | ĉ22 | θ̂11 | θ̂12 |
|---|---|---|---|---|---|---|---|
| 500 | 0.0105 ± 0.0006 | 1.0000 ± 0.0090 | 0.0000 ± 0.00018 | −0.0000 ± 0.0004 | 1.0000 ± 0.009814 | 1.0027 ± 0.02173 | 0.2889 ± 0.03178 |
| 1000 | −0.0002 ± 0.0001 | 1.0000 ± 0.0090 | 0.0000 ± 0.0001 | −0.0000 ± 0.00008 | 1.0000 ± 0.00968 | 1.0000 ± 0.0211 | 0.3001 ± 0.0233 |
| True values | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 1.0000 | 1.0000 | 0.3000 |

| M | θ̂13 | θ̂14 | θ̂15 | θ̂21 | θ̂22 | θ̂23 | θ̂24 | θ̂25 |
|---|---|---|---|---|---|---|---|---|
| 500 | 0.0016 ± 0.0008 | 0.1108 ± 0.0086 | 0.0015 ± 0.0160 | 0.0003 ± 0.0039 | 0.0014 ± 0.0005 | 0.2913 ± 0.0253 | 0.9894 ± 0.099 | 1.0008 ± 0.0195 |
| 1000 | 0.0010 ± 0.0002 | 0.0979 ± 0.0081 | 0.0001 ± 0.0100 | 0.0000 ± 0.0033 | 0.0002 ± 0.0002 | 0.3001 ± 0.0230 | 0.9989 ± 0.017 | 1.0001 ± 0.0130 |
| True values | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.3000 | 1.0000 | 1.0000 |
Figure 6: The state estimates x̂1(k) and x̂2(k) with constant parametric gains (lx, lZ) and time-varying parametric gains (lx(k), lZ(k)).
5. Conclusions
This paper presents a recursive parameter and state estimation algorithm combining the least squares technique, the adjustable model, and the Kalman filter principle for jointly estimating the parameters, the state vector, and the internal variables of MIMO Wiener state-space models. By estimating the parameters of the linear and nonlinear parts separately and using a specific decomposition technique, we can remove the redundant parameters and avoid problems related to computing the inverse nonlinear functions. The proposed algorithm can be combined with adaptive control schemes and extended to other block-oriented models.
Competing Interests
The authors declare that there are no competing interests regarding the publication of this paper.
Acknowledgments
This work was supported by the Ministry of Higher Education and Scientific Research of Tunisia.
References
[1] H. Han, L. Xie, F. Ding, and X. Liu, "Hierarchical least-squares based iterative identification for multivariable systems with moving average noises," 2010.
[2] R. Q. Fuentes, I. Chairez, A. Poznyak, and T. Poznyak, "3D nonparametric neural identification," 2012.
[3] F. Ding, L. Qiu, and T. Chen, "Reconstruction of continuous-time systems from their non-uniformly sampled discrete-time systems," 2009.
[4] Y. Liu, F. Ding, and Y. Shi, "An efficient hierarchical identification method for general dual-rate sampled-data systems," 2014.
[5] A. Fathi and A. Mozaffari, "Identification of a dynamic model for shape memory alloy actuator using Hammerstein-Wiener gray box and mutable smart bee algorithm," 2013.
[6] A. D. Kalafatis, L. Wang, and W. R. Cluett, "Linearizing feedforward-feedback control of pH processes based on the Wiener model," 2005.
[7] A. Bhattacharjee and A. Sutradhar, "Online identification and internal model control for regulating hemodynamic variables in congestive heart failure patient," 2015.
[8] Y. Mao and F. Ding, "Multi-innovation stochastic gradient identification for Hammerstein controlled autoregressive autoregressive systems based on the filtering technique," 2015.
[9] W. Yu, D. Wilson, and B. Young, "Control performance assessment for block-oriented nonlinear systems," in Proceedings of the 8th IEEE International Conference on Control and Automation (ICCA '10), Xiamen, China, June 2010.
[10] S. I. Biagiola and J. L. Figueroa, "Wiener and Hammerstein uncertain models identification," 2009.
[11] F. Guo, Karlsruhe University, 2004.
[12] M. Salimifard, M. Jafari, and M. Dehghani, "Identification of nonlinear MIMO block-oriented systems with moving average noises using gradient based and least squares based iterative algorithms," 2012.
[13] H. Salhi, S. Kamoun, N. Essounbouli, and A. Hamzaoui, "Adaptive discrete-time sliding-mode control of nonlinear systems described by Wiener models," 2016.
[14] H. Salhi and S. Kamoun, "A recursive parametric estimation algorithm of multivariable nonlinear systems described by Hammerstein mathematical models," 2015.
[15] H. Salhi and S. Kamoun, "State and parametric estimation of nonlinear systems described by Wiener state-space mathematical models," 2014.
[16] S. I. Biagiola and J. L. Figueroa, "Identification of uncertain MIMO Wiener and Hammerstein models," 2011.
[17] S. Lakshminarayanan, S. L. Shah, and K. Nandakumar, "Identification of Hammerstein models using multivariate statistical tools," 1995.
[18] F. Ding, "Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling," 2013.
[19] Z. Zhang, F. Ding, and X. Liu, "Hierarchical gradient based iterative parameter estimation algorithm for multivariable output error moving average systems," 2011.
[20] J. Vörös, "Iterative identification of nonlinear dynamic systems with output backlash using three-block cascade models," 2015.
[21] Y. Hu, B. Liu, Q. Zhou, and C. Yang, "Recursive extended least squares parameter estimation for Wiener nonlinear systems with moving average noises," 2014.
[22] F. Ding, X. G. Liu, and J. Chu, "Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification principle," 2013.
[23] J. H. Li, "Parameter estimation for Hammerstein CARARMA systems based on the Newton iteration," 2013.
[24] F. Guo and G. Bretthauer, "Identification of MISO Wiener and Hammerstein systems," in Proceedings of the 7th European Control Conference, University of Cambridge, September 2003.
[25] N. I. Chaudhary, M. A. Z. Raja, J. A. Khan, and M. S. Aslam, "Identification of input nonlinear control autoregressive systems using fractional signal processing approach," 2013.
[26] Z. Wu, S. Peng, B. Chen, and H. Zhao, "Robust Hammerstein adaptive filtering under maximum correntropy criterion," 2015.
[27] T. Falck, P. Dreesen, K. De Brabanter, K. Pelckmans, B. De Moor, and J. A. K. Suykens, "Least-squares support vector machines for the identification of Wiener-Hammerstein systems," 2012.
[28] A. Y. Kibangou and G. Favier, "Identification of parallel-cascade Wiener systems using joint diagonalization of third-order Volterra kernel slices," 2009.
[29] A. Y. Kibangou and G. Favier, "Tensor analysis-based model structure determination and parameter estimation for block-oriented nonlinear systems," 2010.
[30] X. Wang and F. Ding, "Modelling and multi-innovation parameter identification for Hammerstein nonlinear state space systems using the filtering technique," 2016.
[31] F. Ding, X. Liu, and X. Ma, "Kalman state filtering based least squares iterative parameter estimation for observer canonical state space systems using decomposition," 2016.
[32] F. Ding, "Combined state and least squares parameter estimation algorithms for dynamic systems," 2014.
[33] X. Ma and F. Ding, "Gradient-based parameter identification algorithms for observer canonical state space systems using state estimates," 2015.
[34] F. Ding and T. Chen, "Hierarchical identification of lifted state-space models for general dual-rate systems," 2005.
[35] X. Ma and F. Ding, "Recursive and iterative least squares parameter estimation algorithms for observability canonical state space systems," 2015.
[36] X. Wang and F. Ding, "Recursive parameter and state estimation for an input nonlinear state space system using the hierarchical identification principle," 2015.
[37] D.-Q. Wang and F. Ding, "Hierarchical least squares estimation algorithm for Hammerstein-Wiener systems," 2012.
[38] D. Wang, F. Ding, and L. Ximei, "Least squares algorithm for an input nonlinear system with a dynamic subspace state space model," 2014.
[39] D. Westwick and M. Verhaegen, "Identifying MIMO Wiener systems using subspace model identification methods," 1996.
[40] J. Bruls, C. T. Chou, B. R. J. Haverkamp, and M. Verhaegen, "Linear and non-linear system identification using separable least-squares," 1999.
[41] M. Lovera, T. Gustafsson, and M. Verhaegen, "Recursive subspace identification of linear and non-linear Wiener state-space models," 2000.
[42] T. Á. Glaria Lopez and D. Sbarbaro, "Observer design for nonlinear processes with Wiener structure," in Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC '11), Orlando, Fla, USA, December 2011.
[43] E.-W. Bai, "Identification of linear systems with hard input nonlinearities of known structure," 2002.
[44] J. Vörös, "Parameter identification of Wiener systems with discontinuous nonlinearities," 2001.
[45] K. Xiong, C. L. Wei, and L. D. Liu, "Robust Kalman filtering for discrete-time nonlinear systems with parameter uncertainties," 2012.