Instrumental Variable-Based OMP Identification Algorithm for Hammerstein Systems

Hammerstein systems consist of a static nonlinear block followed by a dynamic linear block. To overcome the parameterization difficulty caused by the coupling between the nonlinear part and the linear part of a Hammerstein system, an instrumental variable method is studied to parameterize the system. To identify the parameters and orders of the Hammerstein system simultaneously and to improve the computational efficiency of the identification algorithm, a sparsity-seeking orthogonal matching pursuit (OMP) optimization method from compressive sensing is extended to this task. The idea is to transform the Hammerstein system, by the filtering technique and the instrumental variable method, into a simple form with a separated nonlinear expression, to parameterize the system as an autoregressive model, and then to apply an instrumental variable-based orthogonal matching pursuit (IV-OMP) identification method. Simulation results illustrate that the investigated method is effective and has the advantages of simplicity and efficiency.


Introduction
Nonlinear system modeling and identification are very important in both theory and application [1][2][3][4][5][6]; block-oriented nonlinear systems, which combine nonlinear and linear blocks in various arrangements, are a typical representation of nonlinear systems. Hammerstein systems, a static nonlinear block followed by a dynamic linear block, are a commonly used class of block-oriented nonlinear systems, and much work has been contributed to identification methods for them. The traditional identification methods for Hammerstein systems mainly include the overparameterization model-based methods [7][8][9][10] and the recursive/iterative identification methods [11][12][13][14]. The highly efficient identification methods include the key-term separation principle-based identification methods [15][16][17], the hierarchical identification methods [18, 19], the filtering technique-based identification methods [20][21][22], the maximum likelihood estimation methods [23][24][25], and the evolutionary optimization methods [26][27][28]. Recently, Chen et al. investigated a particle swarm optimization method [29], Krishnanathan et al. discussed the identification of continuous-time nonlinear systems using approximate Bayesian computation [30], and Wang et al. studied model recovery for Hammerstein systems using the auxiliary model-based orthogonal matching pursuit method [31], among others.
It is difficult to parameterize Hammerstein systems because of the parameter coupling between the nonlinear part and the linear part. To solve this problem, an instrumental variable-based method is studied to parameterize Hammerstein systems. Further, to identify parameters and orders simultaneously and to improve the computational efficiency of the estimation method, a sparsity-seeking orthogonal matching pursuit optimization method from compressive sensing is extended to identify the parameters and orders of Hammerstein systems. The idea is, by the filtering technique and the instrumental variable method, to transform the Hammerstein system into a simple form with a separated nonlinear expression, so as to easily parameterize the system into an autoregressive form, and then to perform the instrumental variable-based orthogonal matching pursuit (IV-OMP) identification method.
In compressive sensing theory [32][33][34], there are mainly two sparse signal recovery approaches: the orthogonal matching pursuit algorithms and the basis pursuit algorithms. The orthogonal matching pursuit (OMP) algorithm is a greedy parameter recovery method [35][36][37]: at each step, it selects the column of the measurement matrix that best fits the current residual, together with the corresponding entry of the sparse signal. Because each new residual is made orthogonal to the previously selected columns, the OMP algorithm has a lower computational complexity than the basis pursuit algorithms [38][39][40].
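The greedy selection and orthogonal least-squares update can be sketched as follows — a minimal OMP implementation on synthetic data (all names, dimensions, and coefficient values here are illustrative, not the paper's example):

```python
import numpy as np

def omp(Phi, Y, K):
    """Recover a K-sparse vector theta from Y = Phi @ theta (noiseless sketch)."""
    m, N = Phi.shape
    residual = Y.copy()
    support = []
    theta = np.zeros(N)
    coef = np.zeros(0)
    for _ in range(K):
        # Greedy step: pick the atom with the largest normalized inner
        # product with the current residual.
        scores = np.abs(Phi.T @ residual) / np.linalg.norm(Phi, axis=0)
        support.append(int(np.argmax(scores)))
        # Orthogonal step: refit all selected atoms by least squares,
        # which makes the new residual orthogonal to them.
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, Y, rcond=None)
        residual = Y - sub @ coef
    theta[support] = coef
    return theta

rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 60))        # measurement matrix (synthetic)
theta_true = np.zeros(60)
theta_true[[3, 17, 42]] = [1.5, -2.0, 0.7]  # 3-sparse parameter vector
theta_hat = omp(Phi, Phi @ theta_true, K=3)
```

In the noiseless case above, the three nonzero entries and their positions are recovered exactly after K iterations.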
Recently, Mao et al. investigated parameter estimation algorithms for Hammerstein time-delay systems based on the OMP scheme, estimating the system parameters and the time delay [41] by means of compressed sensing recovery theory and the auxiliary model identification idea. In this paper, an instrumental variable-based orthogonal matching pursuit (IV-OMP) algorithm is investigated to simultaneously estimate the orders and parameters of a Hammerstein system. The contributions lie in four aspects: (i) To solve the parameterization difficulty caused by the coupling between the nonlinear part and the linear part in Hammerstein systems, a filtering technique-based instrumental variable method is studied to parameterize the Hammerstein systems.
(ii) To identify parameters and orders simultaneously and to improve the computational efficiency, an instrumental variable-based orthogonal matching pursuit (IV-OMP) optimization method from compressive sensing is extended to identify the parameters and orders of the systems.
(iii) The advantage of the proposed IV-OMP method over the traditional methods is that, owing to the sparsity principle, it does not require a large amount of data or a heavy computational effort for parameter identification.
(iv) Simulation results illustrate that the investigated method is effective and has the advantages of simplicity and efficiency.
The rest of the paper is organized as follows. Section 2 formulates the identification problem of a Hammerstein system. Section 3 studies the parameterization of the Hammerstein system. Section 4 presents the IV-OMP identification algorithm using the instrumental variables. Section 5 provides a numerical example for the proposed method. Finally, concluding remarks are given in Section 6.

The Problem Formulation
For narrative convenience, we define some notation: "M ≔ N" means "M is defined as N"; z denotes the unit forward shift operator, zy(t) = y(t + 1) and z^{-1}y(t) = y(t − 1); and x̂(t) stands for the estimate of x(t). The input nonlinear block and the output linear block of the Hammerstein CARMA system in Figure 1 are expressed as

x(t) = f(u(t)) = c_1 f_1(u(t)) + c_2 f_2(u(t)) + ⋯ + c_{n_c} f_{n_c}(u(t)),   (1)

A(z)y(t) = B(z)x(t) + D(z)v(t),   (2)

where u(t) and y(t) are the system input and output, x(t) is an internal variable, and v(t) is stochastic white noise with zero mean; the input nonlinearity f is modeled as a linear combination of the basis functions f_k; n_c is the number of basis functions; the linear block is a CARMA model; and A(z), B(z), and D(z) are polynomials in the unit backward shift operator z^{-1} [z^{-1}y(t) = y(t − 1)], defined by

A(z) ≔ 1 + a_1 z^{-1} + a_2 z^{-2} + ⋯ + a_{n_a} z^{-n_a},
B(z) ≔ 1 + b_1 z^{-1} + b_2 z^{-2} + ⋯ + b_{n_b} z^{-n_b},   (3)
D(z) ≔ 1 + d_1 z^{-1} + d_2 z^{-2} + ⋯ + d_{n_d} z^{-n_d}.

If the order n_a is known and the orders n_b, n_c, and n_d are unknown, then we face two parameterization problems for the Hammerstein system: (i) The coupling between the nonlinear part and the linear part makes the parameterization of the Hammerstein system difficult.
(ii) Because the orders are unknown, simultaneous and efficient identification of the system parameters and orders is difficult to achieve.
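For concreteness, a Hammerstein CARMA system of this form can be simulated as below. The coefficients and the polynomial basis f_k(u) = u^k are hypothetical choices for illustration, not the paper's example system:

```python
import numpy as np

# Sketch: simulate A(z)y(t) = B(z)x(t) + D(z)v(t) with x(t) = f(u(t)).
rng = np.random.default_rng(0)

a = [1.0, -0.5]   # A(z) = 1 - 0.5 z^{-1}
b = [1.0, 0.3]    # B(z) = 1 + 0.3 z^{-1}
d = [1.0, 0.2]    # D(z) = 1 + 0.2 z^{-1}
c = [1.0, 0.5]    # nonlinearity: x(t) = 1.0*u(t) + 0.5*u(t)^2

m = 200
u = rng.standard_normal(m)        # persistently exciting input
v = 0.1 * rng.standard_normal(m)  # zero-mean white noise
x = c[0] * u + c[1] * u**2        # static nonlinear block
y = np.zeros(m)
for t in range(m):
    # Expand the difference equation term by term.
    y[t] = sum(b[i] * x[t - i] for i in range(len(b)) if t - i >= 0)
    y[t] += sum(d[i] * v[t - i] for i in range(len(d)) if t - i >= 0)
    y[t] -= sum(a[i] * y[t - i] for i in range(1, len(a)) if t - i >= 0)
```

The chosen A(z) is stable, so the simulated output stays bounded for this input.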

The Parameterization of the Hammerstein CARMA System
Dividing both sides of (2) by the filtering function B(z), we get

[A(z)/B(z)] y(t) = x(t) + [D(z)/B(z)] v(t).   (4)

Define the instrumental variables z(t) and w(t) as

z(t) ≔ [A(z)/B(z)] y(t),   (5)
w(t) ≔ [D(z)/B(z)] v(t).   (6)

Then, (4) can be written as

z(t) = x(t) + w(t).   (7)

Replacing the expressions of A(z) and B(z) in (5), and of B(z) and D(z) in (6), respectively, we can get

z(t) = y(t) + Σ_{i=1}^{n_a} a_i y(t − i) − Σ_{i=1}^{n_b} b_i z(t − i),   (8)
w(t) = v(t) + Σ_{i=1}^{n_d} d_i v(t − i) − Σ_{i=1}^{n_b} b_i w(t − i).   (9)

Substituting z(t) and w(t) in (8) and (9) and x(t) in (1) into (7) gives

y(t) = −Σ_{i=1}^{n_a} a_i y(t − i) + Σ_{i=1}^{n_b} b_i z(t − i) + Σ_{k=1}^{n_c} c_k f_k(u(t)) + Σ_{i=1}^{n_d} d_i v(t − i) − Σ_{i=1}^{n_b} b_i w(t − i) + v(t).   (10)

Since the orders n_b, n_c, and n_d are unknown, we assume a sufficiently large order bound L for n_b, n_c, and n_d. Then, (10) can be rewritten as (11), with the corresponding sums extended to length L. Define the information vectors φ(t), φ_u(t), and φ_v(t) and the parameter vectors Θ, θ_u, and θ_v by stacking the regressors and coefficients in (11) accordingly, with Θ ∈ R^N. Then, (11) can be written as

y(t) = φ^T(t)Θ + v(t).   (14)

Sampling m sets of data (t = 1, 2, …, m) and substituting them into (14) gives (15). Define the accumulated output vector Y ≔ [y(1), …, y(m)]^T, the noise vector V ≔ [v(1), …, v(m)]^T, and the information matrix Φ ≔ [φ(1), …, φ(m)]^T ∈ R^{m×N}; then (15) can be described as

Y = ΦΘ + V.   (17)

If there are enough measurements, that is, if m reaches several thousand, we can get the least squares estimate of Θ as follows:

Θ̂ = (Φ^T Φ)^{-1} Φ^T Y.   (18)

But with a large assumed order bound L (≥ 5) and dimension N, the above least squares algorithm carries a heavy computational burden and is not suitable for estimating the system parameters and orders online. To avoid this problem and obtain an efficient identification algorithm, we choose a method more advantageous than the traditional least squares method; the aim is to identify the parameter vector Θ with fewer observations (K < m < N) by using the OMP theory based on the instrumental variable method above.
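As a reference point, the least squares estimate can be sketched as follows on synthetic data (the dimensions and coefficients are hypothetical; `np.linalg.lstsq` is used instead of forming the inverse of Φ^T Φ explicitly):

```python
import numpy as np

# Least squares baseline: with m stacked measurements Y = Phi @ Theta + V,
# the LS estimate is Theta_hat = (Phi^T Phi)^{-1} Phi^T Y.
rng = np.random.default_rng(2)
m, N = 500, 20
Phi = rng.standard_normal((m, N))          # stacked information matrix
Theta = rng.standard_normal(N)             # true parameter vector
Y = Phi @ Theta + 0.01 * rng.standard_normal(m)

# lstsq solves the normal equations in a numerically stable way.
Theta_ls, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
```

With many measurements and low noise the LS estimate is accurate, but the cost of solving the N-dimensional problem grows with the assumed order bound, which motivates the sparse OMP route.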
According to CS theory, we can regard the parameter vector Θ as a sparse signal. Let ‖Θ‖_0 = K be the number of nonzero entries in Θ, that is, the sparsity level of Θ; then the identification problem can be described as an orthogonal matching pursuit (OMP) problem:

min ‖Θ̂‖_0  subject to  ‖Y − ΦΘ̂‖_2 ≤ ε,

where Θ̂ is the estimate of Θ and ε (ε > 0) is the error tolerance.

The IV-OMP Identification Algorithm
For the sparse parameter vector Θ, if K = ‖Θ‖_0 (K < N), there are K nonzero scalar components in Θ. The task of this identification method is to pick out and recover the valid entries of Θ from Φ by using the OMP theory.
Define φ_i as the ith column vector of Φ (a column φ_i is also called an atom) and θ_i as the ith element of Θ. Then, the vector Y in (17) can be written as

Y = Σ_{i=1}^{N} θ_i φ_i + V.   (21)

Obviously, the output vector Y contains a linear combination of all atoms φ_i (i = 1, 2, …, N). The main idea is to find the K nonzero entries and the corresponding valid atoms on the right-hand side of (21).
To describe the IV-OMP recovery algorithm, we need some notation. Let k = 1, 2, … be the iteration number and λ_k be the index of the support atom φ_{λ_k} in Φ selected at the kth iteration; Λ_k is the set composed of λ_i, i = 1, 2, …, k; Φ_{Λ_k} is the submeasurement matrix composed of the k columns of Φ indexed by Λ_k; r_k denotes the residual at the kth iteration; and Θ_{Λ_k} is the parameter estimate at the kth iteration. The algorithm is initialized as r_0 = Y, Λ_0 = ∅, and Θ_{Λ_0} = 0.
Define a cost function at the kth iteration:

J(θ_i) = ‖r_{k−1} − θ_i φ_i‖².   (22)

Remark 1. Minimizing J(θ_i) with respect to θ_i means setting the derivative of J(θ_i) with respect to θ_i equal to 0, that is,

∂J(θ_i)/∂θ_i = −2φ_i^T (r_{k−1} − θ_i φ_i) = 0,

and we get

θ_i = φ_i^T r_{k−1} / ‖φ_i‖².

Substituting the above θ_i into (22) gives the minimized J(θ_i):

min J(θ_i) = ‖r_{k−1}‖² − (φ_i^T r_{k−1})² / ‖φ_i‖².

Remark 2. This result says that minimizing J(θ_i) is equivalent to seeking the largest inner product between the residual r_{k−1} and the normalized column vector φ_i of Φ.
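Reading the cost as J(θ_i) = ‖r_{k−1} − θ_i φ_i‖², the closed-form minimizer and minimum in Remarks 1 and 2 can be checked numerically on synthetic vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
phi = rng.standard_normal(50)  # one column (atom) of Phi
r = rng.standard_normal(50)    # current residual r_{k-1}

theta_star = (phi @ r) / (phi @ phi)          # minimizer of J(theta)
J_min = np.sum((r - theta_star * phi) ** 2)   # cost at the minimizer
closed_form = r @ r - (phi @ r) ** 2 / (phi @ phi)

# The two values agree, so minimizing J over theta_i is equivalent to
# maximizing |phi_i^T r| / ||phi_i|| over the atoms.
```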
The next step is to find the index i and the column φ_i corresponding to the largest inner product between the residual r_{k−1} and the normalized column vectors φ_i / ‖φ_i‖ and to assign them to λ_k and φ_{λ_k} at the kth step.

Remark 3. Minimizing the least squares cost function ‖Y − Φ_{Λ_k} Θ_{Λ_k}‖² yields Φ_{Λ_k}^T r_k = 0; that is, the residual r_k is orthogonal to the subinformation matrix Φ_{Λ_k}. Therefore, the index corresponding to the largest inner product between the residual r_{k−1} and the normalized column vector φ_i is computed by

λ_k = arg max_i |φ_i^T r_{k−1}| / ‖φ_i‖.

Update the support set Λ_k = Λ_{k−1} ∪ {λ_k} and the subinformation matrix Φ_{Λ_k} = [Φ_{Λ_{k−1}}, φ_{λ_k}]. With the obtained index set Λ_k and the corresponding subinformation matrix Φ_{Λ_k}, the least squares estimate of the nonzero parameters at the kth iteration can be achieved. Define a cost function:

J(Θ_{Λ_k}) = ‖Y − Φ_{Λ_k} Θ_{Λ_k}‖².

Minimizing J(Θ_{Λ_k}) would give the least squares estimates of the nonzero parameters. But the information vector φ(t) of Φ contains the unknown instrumental variables z(t − i) and w(t − i) and the unmeasurable noise term v(t − i), so the parameter vector Θ_{Λ_k} cannot be estimated by the standard OMP methods of CS theory. The remedy is to replace these unknowns with their estimates ẑ(t − i), ŵ(t − i), and v̂(t − i), computed as follows: (i) Expand the orders n_b, n_c, and n_d in (1), (2), (8), and (9) to the bound L.
(ii) ẑ(t − i) is computed by replacing a_i and b_i in (8) with their estimates â_{i,k} and b̂_{i,k}.
(iii) v̂(t − i) is computed by replacing a_i, b_i, c_i, and d_i in (2), with (1) inserted, with their estimates â_{i,k}, b̂_{i,k}, ĉ_{i,k}, and d̂_{i,k}.
(iv) ŵ(t − i) is computed by replacing b_i and d_i in (9) with their estimates b̂_{i,k} and d̂_{i,k}. Accordingly, the estimates of φ_v(t), φ(t), and Φ at iteration k are formed from ẑ(t − i), ŵ(t − i), and v̂(t − i). Replacing the unknown variables with their estimates, the least squares estimate Θ̂_{Λ_k} of the parameter vector Θ_{Λ_k} at the kth step is given by

Θ̂_{Λ_k} = (Φ̂_{Λ_k}^T Φ̂_{Λ_k})^{-1} Φ̂_{Λ_k}^T Y.

Remark 4. At the beginning, the IV-OMP algorithm, using the estimates ẑ(t − i), ŵ(t − i), and v̂(t − i) in the subinformation matrix Φ̂_{Λ_k}, may select inaccurate support atoms.
As the iteration k increases, these estimated variables become more accurate; the wrongly selected support atoms fail to meet the threshold requirement, and the corresponding elements of the estimated parameter support take only small nonzero values. Thus, we set an appropriately small threshold ε to filter the parameter estimate:

θ̂_{i,k} = 0  if  |θ̂_{i,k}| < ε.

Briefly, the proposed IV-OMP algorithm can be implemented as in Algorithm 1.
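The ε-thresholding step can be sketched as follows (the estimate values and the threshold are hypothetical):

```python
import numpy as np

def threshold_filter(theta_hat, eps=0.05):
    """Zero out support entries whose estimated magnitude falls below eps,
    discarding wrongly selected atoms with near-zero coefficients."""
    out = theta_hat.copy()
    out[np.abs(out) < eps] = 0.0
    return out

# Example estimate: two entries are tiny artifacts of misselected atoms.
est = np.array([1.52, -0.003, 0.71, 0.01, -2.04])
filtered = threshold_filter(est)
```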

Algorithm 1 (IV-OMP algorithm): iterate for k = 1, 2, … until either k reaches a fixed positive integer greater than K or ‖r_k‖ < ε; output the estimated parameter vector Θ̂_k, with the below-threshold entries of Θ̂_{Λ_k} set to 0.

Example

In the simulation, the input u(t) is taken as an uncorrelated persistently exciting signal sequence with zero mean and unit variance, and v(t) is taken as a white noise sequence with zero mean and variance σ² = 0.10². Applying the proposed IV-OMP algorithm to estimate the parameters and orders of this system, the parameter estimates and their errors are shown in Table 1 and Figures 2 and 3; Table 1 lists the recursive parameter estimates and errors obtained by the different algorithms. The relative error of the parameters is

δ ≔ ‖Θ̂_k − Θ‖ / ‖Θ‖.

From Table 1 and Figures 2 and 3, we can conclude the following: (1) The parameter estimation errors generally become smaller and smaller as the iteration k increases.
(2) There exist 3 wrongly selected atoms, corresponding to the parameters ĉ_{4,k}, ĉ_{5,k}, and d̂_{3,k} (black lines in Figure 3). As the iteration k increases, the wrongly chosen atoms are deleted, and the parameter estimation errors become smaller.

Conclusions
This paper explored the identification of both the parameters and the orders of the Hammerstein CARMA system with limited sampling data. To solve the parameterization difficulty caused by the coupling between the nonlinear part and the linear part of a Hammerstein system, we first separated the parameter coupling between the linear block and the nonlinear block by filtering the equation of the linear block with the coefficient polynomial of the controlled term. Moreover, by using two instrumental variables, the Hammerstein system was parameterized into an autoregressive form. To identify parameters and orders simultaneously and to improve the computational efficiency of the identification algorithm, an instrumental variable-based orthogonal matching pursuit (IV-OMP) optimization method from compressive sensing was extended to identify the parameters and orders of the Hammerstein system. Simulation results illustrate that the investigated method is effective and has the advantages of simplicity and efficiency. The proposed IV-OMP optimization method can be extended to colored noise systems, networked dynamic systems [42][43][44][45][46], and so on.

Figure 1: A system described by the Hammerstein CARMA model.