Least-Squares-Based Iterative Identification Algorithm for Wiener Nonlinear Systems

This paper focuses on the identification problem of Wiener nonlinear systems. The application of the key-term separation principle provides a simplified form of the estimated parameter model. To solve the identification problem of Wiener nonlinear systems with unmeasurable variables in the information vector, a least-squares-based iterative algorithm is presented that replaces the unmeasurable variables in the information vector with their corresponding iterative estimates. The simulation results indicate that the proposed algorithm is effective.


Introduction
Wiener systems are typical nonlinear systems [1], which represent a nonlinear dynamic system as a dynamic linear block followed by a static nonlinear function. Wiener systems have been used in modeling a glutamate fermentation process [2]. Recently, great attention has been paid to the identification issues for Wiener systems, and many studies have been performed. Much of the existing work assumes that the nonlinear part of the Wiener system has an invertible function representation over the operating range of interest [3]. Wang and Ding presented least-squares-based and gradient-based iterative identification algorithms for Wiener nonlinear systems [4]; Chen studied identification problems for Wiener systems with saturation and dead-zone nonlinearities [5]. Zhou et al. derived an auxiliary-model-based gradient iterative algorithm for Wiener nonlinear output-error systems by using the key-term decomposition principle [6].
Ding et al. developed a least-squares-based iterative algorithm to estimate the parameters of a multi-input multi-output system with colored ARMA noise from input-output data [23]. On the basis of the work in [6, 24–26], this paper presents a least-squares-based iterative estimation algorithm for Wiener nonlinear systems.
The rest of this paper is organized as follows. Section 2 derives the identification model of Wiener nonlinear systems. Section 3 presents a least-squares-based iterative algorithm for Wiener nonlinear systems. Section 4 provides an example to illustrate the effectiveness of the proposed algorithm. The conclusions of the paper are summarized in Section 5.

Problem Formulation
Let us first introduce some notation [27, 28]. The superscript T denotes the matrix transpose; I stands for an identity matrix of appropriate size; 1_n represents an n-dimensional column vector whose elements are all 1; the norm of a matrix X is defined by ‖X‖² := tr[XXᵀ]; X̂(t) stands for the estimate of X at time t.
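As a quick sanity check on the notation, the matrix norm defined above is the (squared) Frobenius norm: tr[XXᵀ] equals the sum of the squared entries of X. A minimal numerical illustration (the matrix values are chosen arbitrarily):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
norm_sq = np.trace(X @ X.T)   # ‖X‖² := tr[X Xᵀ]
frob_sq = np.sum(X**2)        # sum of squared entries (squared Frobenius norm)
print(norm_sq, frob_sq)       # both equal 30.0
```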
The Wiener nonlinear system consists of a linear dynamic subsystem followed by a static nonlinear block, as shown in Figure 1 [25, 26]. The linear dynamic subsystem can be given as

x(t) = [B(z)/A(z)]u(t), (1)

where u(t) and x(t) are the input and the inner output, respectively, and A(z) and B(z) are polynomials in the unit backward shift operator z⁻¹, defined as

A(z) := 1 + a_1 z⁻¹ + a_2 z⁻² + ⋯ + a_{n_a} z^{−n_a},
B(z) := b_1 z⁻¹ + b_2 z⁻² + ⋯ + b_{n_b} z^{−n_b}. (2)

The static nonlinear block is generally assumed to be the sum of nonlinear basis functions of a known basis f := (f_1, f_2, …, f_{n_c}) as follows:

f(x(t)) = c_1 f_1(x(t)) + c_2 f_2(x(t)) + ⋯ + c_{n_c} f_{n_c}(x(t)). (3)

In this paper, we assume that the nonlinear function f(⋅) can be expressed by a polynomial of order n_c as follows:

f(x(t)) = c_1 x(t) + c_2 x²(t) + ⋯ + c_{n_c} x^{n_c}(t), (4)

and the polynomial order n_c is known. Without loss of generality, we introduce a stochastic white noise v(t) with zero mean and variance σ² to the system output and have

y(t) = f(x(t)) + v(t). (5)

The linear block output x(t) is identical with the nonlinear block input. A direct substitution of x(t) from (1) into (5) would result in a very complex expression containing cross-multiplied parameters and variables. To simplify this problem, the key-term separation principle can be applied [29].
We fix a coefficient of the nonlinear block. For example, let the first coefficient of f be unity; that is, c_1 = 1. Equation (1) can be rewritten as

x(t) = [1 − A(z)]x(t) + B(z)u(t), (6)

and then substituting (6) into (5) for the separated x(t), the system output is given in the form

y(t) = [1 − A(z)]x(t) + B(z)u(t) + Σ_{i=2}^{n_c} c_i x^i(t) + v(t). (7)

Define the information vectors and the parameter vectors

ψ(t) := [−x(t−1), …, −x(t−n_a), u(t−1), …, u(t−n_b)]ᵀ,
φ(t) := [ψᵀ(t), x²(t), …, x^{n_c}(t)]ᵀ,
ϑ := [a_1, …, a_{n_a}, b_1, …, b_{n_b}]ᵀ,
θ := [ϑᵀ, c_2, …, c_{n_c}]ᵀ. (8)

Thus, (6) can be written in the vector form

x(t) = ψᵀ(t)ϑ. (9)

Using (9), from (7) we can obtain the following identification model:

y(t) = φᵀ(t)θ + v(t). (10)

The objective of this paper is to present a least-squares-based iterative algorithm to estimate the parameters a_i, b_i, c_i of the Wiener nonlinear system.
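To make the model structure concrete, the following sketch simulates a second-order Wiener system of this form with the key term c_1 = 1. The coefficient values and the orders (n_a = n_b = n_c = 2) are illustrative assumptions, not the paper's example system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients for illustration only:
# A(z) = 1 + a1 z^-1 + a2 z^-2, B(z) = b1 z^-1 + b2 z^-2, f(x) = x + c2 x^2
a = [0.3, -0.4]
b = [1.0, 0.5]
c2 = 0.25

N = 200
u = rng.standard_normal(N)        # persistent excitation input
v = 0.1 * rng.standard_normal(N)  # zero-mean white measurement noise

x = np.zeros(N)                   # inner (unmeasurable) output of the linear block
y = np.zeros(N)                   # measured system output
for t in range(2, N):
    # A(z)x(t) = B(z)u(t)  =>  x(t) = -a1 x(t-1) - a2 x(t-2) + b1 u(t-1) + b2 u(t-2)
    x[t] = -a[0]*x[t-1] - a[1]*x[t-2] + b[0]*u[t-1] + b[1]*u[t-2]
    # key-term separated output: y(t) = x(t) + c2 x^2(t) + v(t), with c1 = 1
    y[t] = x[t] + c2*x[t]**2 + v[t]
```

Note that only u(t) and y(t) would be available to an identification algorithm; x(t) is internal, which is exactly the difficulty the iterative algorithm of the next section addresses.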

The Least-Squares-Based Iterative Algorithm
Based on the methods in [18, 20, 24] for linear systems and Hammerstein nonlinear systems, we derive a least-squares-based iterative algorithm for the Wiener model. Define the stacked output vector Y(L), the stacked information matrix Φ(L), and the white noise vector V(L) as

Y(L) := [y(1), y(2), …, y(L)]ᵀ,
Φ(L) := [φ(1), φ(2), …, φ(L)]ᵀ,
V(L) := [v(1), v(2), …, v(L)]ᵀ. (13)

Define the quadratic criterion function

J(θ) := ‖Y(L) − Φ(L)θ‖². (14)

To minimize J(θ), setting its partial derivative with respect to θ to zero gives the least-squares estimate of θ as follows:

θ̂ = [Φᵀ(L)Φ(L)]⁻¹Φᵀ(L)Y(L). (15)

Since Φ(L) in (13) contains the unknown inner variable x(t), the least-squares estimate θ̂ cannot be computed directly. To overcome this difficulty, the approach here is based on the auxiliary model idea [30–32]: the unmeasurable x(t) in the information vector is replaced by its iterative estimate computed from the parameter estimates of the preceding iteration.
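The least-squares step itself can be sketched as follows. The helper name `least_squares_estimate` is hypothetical, and the normal equations are solved with `numpy.linalg.lstsq` for numerical robustness rather than by forming the explicit inverse:

```python
import numpy as np

def least_squares_estimate(Phi, Y):
    """Minimize J(theta) = ||Y - Phi @ theta||^2 (the normal-equation solution)."""
    theta_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return theta_hat

# quick check on synthetic data with a known parameter vector
rng = np.random.default_rng(1)
Phi = rng.standard_normal((100, 3))
theta_true = np.array([1.0, -0.5, 0.3])
Y = Phi @ theta_true + 0.01 * rng.standard_normal(100)
print(least_squares_estimate(Phi, Y))  # close to [1.0, -0.5, 0.3]
```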
The flowchart for computing the parameter estimate θ̂ is shown in Figure 2. The steps involved in computing the parameter estimate θ̂ in the LSI algorithm using a fixed data batch with data length L are listed as follows. (1) Collect the input-output data {u(t), y(t) : t = 0, 1, 2, …, L} and form the required data vector by (21).
(6) For some preset small ε, if ‖θ̂_k − θ̂_{k−1}‖² ⩽ ε, then terminate the procedure and obtain the iteration number k and the estimate θ̂_k; otherwise, increment k by 1 and go to step 3.

Example
Consider a Wiener nonlinear system of the form (1)–(5) as a simulation example. For this example system, {u(t)} is taken as a persistent excitation signal sequence with zero mean and unit variance, and {v(t)} as a white noise process with zero mean and constant variances σ² = 0.10² and σ² = 0.20², separately. Taking the data length L = 1000, we apply the proposed LSI algorithm in (19)–(24) to estimate the parameters (a_i, b_i, c_i) of this system. The parameter estimates and their errors under the different noise variances are shown in Tables 1 and 2, and the parameter estimation errors δ versus k are shown in Figure 3, where δ := ‖θ̂(k) − θ‖/‖θ‖. From Tables 1 and 2 and Figure 3, we can draw the following conclusions.
(i) The parameter estimation errors given by the proposed approach gradually become smaller as the iteration number k increases; see the error curves in Figure 3 and the estimation errors in the last columns of Tables 1 and 2.
(ii) As the noise variance decreases, the parameter estimation errors given by the proposed approach become smaller; see Tables 1 and 2.
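The error measure δ used above is straightforward to compute; the parameter values below are hypothetical, chosen only to illustrate the calculation:

```python
import numpy as np

def estimation_error(theta_hat, theta_true):
    """Relative parameter estimation error delta := ||theta_hat - theta|| / ||theta||."""
    return np.linalg.norm(theta_hat - theta_true) / np.linalg.norm(theta_true)

# hypothetical true and estimated parameter vectors, for illustration
theta_true = np.array([0.3, -0.4, 1.0, 0.5, 0.25])
theta_hat = np.array([0.28, -0.38, 1.03, 0.48, 0.27])
print(f"delta = {estimation_error(theta_hat, theta_true):.4%}")  # prints delta = 4.0000%
```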