Complexity, vol. 2021, Hindawi. https://doi.org/10.1155/2021/5515888

Research Article

Two Identification Methods for a Nonlinear Membership Function

Yuejiang Ji (https://orcid.org/0000-0002-6580-5315) and Lixin Lv (https://orcid.org/0000-0002-1571-6007)
Wuxi Vocational College of Science and Technology, Wuxi 214122, China

Academic Editor: Quanmin Zhu

Copyright © 2021 Yuejiang Ji and Lixin Lv. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper proposes two parameter identification methods for a nonlinear membership function. An equation-conversion method is introduced to turn the nonlinear function into a concise regression model. Then a stochastic gradient algorithm and a gradient-based iterative algorithm are provided to estimate the unknown parameters of the nonlinear function. A numerical example shows that the proposed algorithms are effective.

Natural Science Foundation of Jiangsu Province (BK20131109)
1. Introduction

Parameter estimation has many applications in system modelling and signal processing. Many parameter estimation approaches exist, such as recursive least squares (RLS) methods, stochastic gradient (SG) methods, and iterative identification methods. For example, Chen et al. proposed an iterative method for Hammerstein systems with saturation and dead-zone nonlinearities. Li et al. presented a gradient iterative estimation algorithm and a Newton iterative estimation algorithm for a nonlinear function.

In past years, the approximation capability of fuzzy systems has received much attention, especially for control purposes. The approximation theory of fuzzy systems is often applied to approximate the unknown functions of control systems, and the identification of membership functions plays an essential role in ensuring the system capability. There are many different types of membership functions in the approximation theory of fuzzy systems, such as the Gaussian membership functions and other membership functions [23, 24].

Since membership functions usually have complex nonlinear structures, traditional least squares may be infeasible for identifying them because the derivative equations often have no analytical solution. The gradient descent (GD) algorithm avoids solving for an analytical solution and can therefore be extended to complex nonlinear membership functions. The GD algorithm generates a sequence of estimates through an iterative function that consists of two parts: the negative gradient direction and the step size [28, 29]. With a correct direction and an optimal step size, the GD algorithm ensures that the estimates converge to the true values.

In this paper, we propose two methods, both based on the GD method, to estimate the unknown parameters of a nonlinear membership function. First, a gradient-based iterative algorithm is proposed, which estimates the parameters from all the collected data at every iteration and therefore requires heavy computation. To reduce the computational effort, we then transform the model of the nonlinear function into a regression model and use two identification algorithms to estimate the unknown parameters.

The rest of the paper is organized as follows. Section 2 introduces the nonlinear function and the gradient-based iterative algorithm. Section 3 develops the model transformation-based stochastic gradient method and iterative method. Section 4 provides an illustrative example. Finally, concluding remarks are given in Section 5.

2. The Nonlinear Membership Function and the Gradient-Based Iterative Algorithm

Let us introduce some notations first. The symbol $I$ stands for an identity matrix of appropriate size; the norm of a matrix $X$ is defined through $\|X\|^2:=\mathrm{tr}(X^TX)$; and the superscript $T$ denotes the matrix transpose.

Consider the nonlinear membership function

(1) $y_i=\dfrac{1}{1+e^{-(x_i+a)/b}}-\dfrac{1}{1+e^{-(x_i+a)/b+a}},$

where $x_i$ and $y_i$, $i=1,2,3,\ldots,n$ $(n\geqslant 3)$, are the measured data contaminated with noise, and $a$ and $b$ are the unknown parameters to be estimated. When the parameters $a$ and $b$ are known, this nonlinear membership function is often used in single-input single-output fuzzy systems.
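As a concrete reference point, the function in (1) can be evaluated directly. The sketch below is a minimal illustrative implementation, not the authors' code; the sample values $a=0.6$ and $b=1.2$ are taken from the example in Section 4.

```python
import math

def membership(x, a, b):
    """Nonlinear membership function (1): the difference of two
    logistic terms sharing the shifted argument (x + a)/b."""
    s = -(x + a) / b        # exponent of the first logistic term
    r = -(x + a) / b + a    # exponent of the second logistic term
    return 1.0 / (1.0 + math.exp(s)) - 1.0 / (1.0 + math.exp(r))

# example evaluation with the Section 4 parameters
print(membership(0.0, a=0.6, b=1.2))
```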

Define the cost function and the parameter vector $\theta_1$ as

(2) $J_1(a,b):=\dfrac{1}{2}\sum_{i=1}^{n}\left[y_i-\dfrac{1}{1+e^{-(x_i+a)/b}}+\dfrac{1}{1+e^{-(x_i+a)/b+a}}\right]^2,\qquad \theta_1:=[a,\,b]^T.$

The gradient of $J_1(\theta_1)=J_1(a,b)$ with respect to $\theta_1$ is

(3) $\nabla J_1(\theta_1)=\begin{bmatrix}\partial J_1(\theta_1)/\partial a\\[1mm] \partial J_1(\theta_1)/\partial b\end{bmatrix}=\begin{bmatrix}\displaystyle\sum_{i=1}^{n}\left[-\frac{e^{-(x_i+a)/b}}{b\,(1+e^{-(x_i+a)/b})^2}+\Big(\frac{1}{b}-1\Big)\frac{e^{-(x_i+a)/b+a}}{(1+e^{-(x_i+a)/b+a})^2}\right]\varepsilon_i\\[3mm]\displaystyle\sum_{i=1}^{n}\left[\frac{(x_i+a)\,e^{-(x_i+a)/b}}{b^2(1+e^{-(x_i+a)/b})^2}-\frac{(x_i+a)\,e^{-(x_i+a)/b+a}}{b^2(1+e^{-(x_i+a)/b+a})^2}\right]\varepsilon_i\end{bmatrix},$

where $\varepsilon_i:=y_i-\dfrac{1}{1+e^{-(x_i+a)/b}}+\dfrac{1}{1+e^{-(x_i+a)/b+a}}$.

Let $k$ be the iteration variable and $\hat\theta_1(k)=[\hat a(k),\,\hat b(k)]^T$ be the estimate of $\theta_1$ at iteration $k$. We obtain the following gradient-based iterative algorithm:

(4) $\hat\theta_1(k+1)=\hat\theta_1(k)-\mu_1(k)\nabla J_1(\hat\theta_1(k)),$

$\varepsilon_{1i}(k)=y_i-\dfrac{1}{1+e^{-(x_i+\hat a(k))/\hat b(k)}}+\dfrac{1}{1+e^{-(x_i+\hat a(k))/\hat b(k)+\hat a(k)}},$

where $\nabla J_1(\hat\theta_1(k))$ is the gradient (3) evaluated at $a=\hat a(k)$ and $b=\hat b(k)$ with the residuals $\varepsilon_{1i}(k)$, and

$P_1(k)=-\nabla J_1(\hat\theta_1(k)),\qquad \mu_1(k)=\arg\min_{\mu_1\geqslant 0}J_1(\hat\theta_1(k)+\mu_1 P_1(k)).$

There exist many methods for computing the step size $\mu_1(k)$, e.g., the steepest descent method, the stochastic gradient method, and the projection method.
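The iteration in (4) can be sketched in a few lines. The snippet below is an illustrative implementation under stated assumptions, not the authors' code: it approximates the gradient (3) by central finite differences, and it replaces the exact line search for $\mu_1(k)$ with simple backtracking, which still guarantees that the cost (2) decreases at every accepted step.

```python
import math

def logistic(z):
    """1 / (1 + e^z); the exponent is clipped for numerical safety."""
    z = max(-50.0, min(50.0, z))
    return 1.0 / (1.0 + math.exp(z))

def model(x, a, b):
    """Model (1)."""
    b = b if b != 0 else 1e-9   # guard against a degenerate candidate
    return logistic(-(x + a) / b) - logistic(-(x + a) / b + a)

def j1(theta, data):
    """Cost function (2)."""
    a, b = theta
    return 0.5 * sum((y - model(x, a, b)) ** 2 for x, y in data)

def gi_step(theta, data, h=1e-6):
    """One gradient-based iteration (4) with a backtracking step size
    standing in for the exact line search (an assumption)."""
    grad = []
    for j in range(2):                      # finite-difference gradient
        tp, tm = list(theta), list(theta)
        tp[j] += h
        tm[j] -= h
        grad.append((j1(tp, data) - j1(tm, data)) / (2.0 * h))
    base, mu = j1(theta, data), 1.0
    while mu > 1e-12:                       # backtrack until the cost drops
        cand = [theta[j] - mu * grad[j] for j in range(2)]
        if j1(cand, data) < base:
            return cand
        mu *= 0.5
    return theta                            # gradient is (numerically) zero

# noise-free data generated from the true parameters a = 0.6, b = 1.2
data = [(-2.0 + 0.2 * i, model(-2.0 + 0.2 * i, 0.6, 1.2)) for i in range(21)]
theta = [0.3, 1.0]                          # initial guess for (a, b)
for _ in range(300):
    theta = gi_step(theta, data)
```

Because every accepted step strictly reduces (2), the iteration drives the cost down monotonically from the initial guess.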

Define

(5) $a_1:=-\dfrac{1}{b},\qquad a_2:=-\dfrac{a}{b},\qquad a_3:=-\dfrac{a}{b}+a.$

Then, equation (1) can be simplified as

(6) $y(t)=\dfrac{1}{1+e^{a_1x(t)+a_2}}-\dfrac{1}{1+e^{a_1x(t)+a_3}}.$
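The substitution (5) can be verified numerically: evaluating (6) with $a_1=-1/b$, $a_2=-a/b$, and $a_3=-a/b+a$ reproduces (1) exactly. A small self-check, using the Section 4 values $a=0.6$, $b=1.2$:

```python
import math

def original_form(x, a, b):
    """Model (1) in the (a, b) parameterization."""
    return 1/(1 + math.exp(-(x + a)/b)) - 1/(1 + math.exp(-(x + a)/b + a))

def reparam_form(x, a1, a2, a3):
    """Model (6) in the (a1, a2, a3) parameterization."""
    return 1/(1 + math.exp(a1*x + a2)) - 1/(1 + math.exp(a1*x + a3))

a, b = 0.6, 1.2
a1, a2, a3 = -1/b, -a/b, -a/b + a   # substitution (5)
```

The two forms agree for every $x$ because the exponents coincide term by term.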

Define the cost function and the parameter vector $\theta_2$ as

(7) $J_2(a_1,a_2,a_3):=\dfrac{1}{2}\sum_{i=1}^{n}\left[y_i-\dfrac{1}{1+e^{a_1x_i+a_2}}+\dfrac{1}{1+e^{a_1x_i+a_3}}\right]^2,\qquad \theta_2:=[a_1,\,a_2,\,a_3]^T.$

The gradient of $J_2(\theta_2)=J_2(a_1,a_2,a_3)$ with respect to $\theta_2$ is

(8) $\nabla J_2(\theta_2)=\begin{bmatrix}\partial J_2(\theta_2)/\partial a_1\\[1mm]\partial J_2(\theta_2)/\partial a_2\\[1mm]\partial J_2(\theta_2)/\partial a_3\end{bmatrix}=\begin{bmatrix}\displaystyle\sum_{i=1}^{n}\left[\frac{x_ie^{a_1x_i+a_2}}{(1+e^{a_1x_i+a_2})^2}-\frac{x_ie^{a_1x_i+a_3}}{(1+e^{a_1x_i+a_3})^2}\right]\varepsilon_i\\[3mm]\displaystyle\sum_{i=1}^{n}\frac{e^{a_1x_i+a_2}}{(1+e^{a_1x_i+a_2})^2}\,\varepsilon_i\\[3mm]\displaystyle-\sum_{i=1}^{n}\frac{e^{a_1x_i+a_3}}{(1+e^{a_1x_i+a_3})^2}\,\varepsilon_i\end{bmatrix},$

where $\varepsilon_i:=y_i-\dfrac{1}{1+e^{a_1x_i+a_2}}+\dfrac{1}{1+e^{a_1x_i+a_3}}$.

Let $k$ be the iteration variable and $\hat\theta_2(k)=[\hat a_1(k),\,\hat a_2(k),\,\hat a_3(k)]^T$ be the estimate of $\theta_2$ at iteration $k$. We obtain the following gradient-based iterative algorithm:

(9) $\hat\theta_2(k+1)=\hat\theta_2(k)-\mu_2(k)\nabla J_2(\hat\theta_2(k)),$

$\varepsilon_{2i}(k)=y_i-\dfrac{1}{1+e^{\hat a_1(k)x_i+\hat a_2(k)}}+\dfrac{1}{1+e^{\hat a_1(k)x_i+\hat a_3(k)}},$

where $\nabla J_2(\hat\theta_2(k))$ is the gradient (8) evaluated at the current estimates with the residuals $\varepsilon_{2i}(k)$, and

$P_2(k)=-\nabla J_2(\hat\theta_2(k)),\qquad \mu_2(k)=\arg\min_{\mu_2\geqslant 0}J_2(\hat\theta_2(k)+\mu_2P_2(k)).$

The above iterative algorithms carry a heavy computational burden because they update the parameters with all the collected data at every iteration. To reduce the computational effort, we propose two modified methods in the next section.

3. The Model Transformation-Based Stochastic Gradient Method and Iterative Method

Convert the nonlinear function (6) into an identification model:

(10) $y=-(e^{a_2}+e^{a_3})\,y\,e^{a_1x}+(e^{a_3}-e^{a_2})\,e^{a_1x}-e^{a_2+a_3}\,y\,e^{2a_1x}.$

Then define the parameter vector $\theta_3$ and the information vector $\varphi_3(t)$ as

(11) $\theta_3:=\begin{bmatrix}e^{a_2}+e^{a_3}\\ e^{a_3}-e^{a_2}\\ e^{a_2+a_3}\end{bmatrix},\qquad \varphi_3(t):=\begin{bmatrix}-y(t)e^{a_1x(t)}\\ e^{a_1x(t)}\\ -y(t)e^{2a_1x(t)}\end{bmatrix}.$

Without loss of generality, a noise term $v(t)$ with zero mean is introduced into the identification model in (10), which is then put into the concise form

(12) $y(t)=\varphi_3^T(t)\theta_3+v(t).$

Let $\hat\theta_3(i)$ be the $i$th element of the vector $\hat\theta_3$; we have $\hat a_2=\ln[(\hat\theta_3(1)-\hat\theta_3(2))/2]$, $\hat a_3=\ln[(\hat\theta_3(1)+\hat\theta_3(2))/2]$, and $\hat a=\hat a_3-\hat a_2$. Finally, with the estimates $\hat a$ and $\hat a_2$, we obtain $\hat b=-\hat a/\hat a_2$.
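The back-substitution described above is easy to check numerically. The helper below is an illustrative sketch: it recovers $a_2$, $a_3$, $a$, and $b$ from a $\theta_3$ estimate, and it round-trips exactly on a $\theta_3$ built from the true values $a_2=-0.5$, $a_3=0.1$ of Section 4.

```python
import math

def recover_parameters(theta3):
    """Invert theta3 = [e^{a2}+e^{a3}, e^{a3}-e^{a2}, e^{a2+a3}]:
    half the difference/sum of the first two entries gives e^{a2}, e^{a3}."""
    a2 = math.log((theta3[0] - theta3[1]) / 2.0)
    a3 = math.log((theta3[0] + theta3[1]) / 2.0)
    a = a3 - a2              # since a3 - a2 = a
    b = -a / a2              # since a2 = -a/b
    return a2, a3, a, b
```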

Let $\hat\theta_3(t)$ and $\hat\varphi_3(t)$ be the estimates of $\theta_3$ and $\varphi_3(t)$, defined as

(13) $\hat\theta_3(t)=\begin{bmatrix}e^{\hat a_2(t)}+e^{\hat a_3(t)}\\ e^{\hat a_3(t)}-e^{\hat a_2(t)}\\ e^{\hat a_2(t)+\hat a_3(t)}\end{bmatrix},\qquad \hat\varphi_3(t)=\begin{bmatrix}-y(t)e^{\hat a_1(t-1)x(t)}\\ e^{\hat a_1(t-1)x(t)}\\ -y(t)e^{2\hat a_1(t-1)x(t)}\end{bmatrix}.$

Replacing $\theta_3$ and $\varphi_3(t)$ in (12) with their estimates and applying a stochastic gradient search give the model transformation-based stochastic gradient (MT-SG) algorithm:

(15) $\hat\theta_3(t)=\hat\theta_3(t-1)+\mu_3(t)\hat\varphi_3(t)\left[y(t)-\hat\varphi_3^T(t)\hat\theta_3(t-1)\right],$

(16) $\hat a_1(t-1)=\dfrac{\hat a_2(t-1)}{\hat a_3(t-1)-\hat a_2(t-1)},$

(17) $\hat\varphi_3(t)=\left[-y(t)e^{\hat a_1(t-1)x(t)},\;e^{\hat a_1(t-1)x(t)},\;-y(t)e^{2\hat a_1(t-1)x(t)}\right]^T,$

(18) $\mu_3(t)=\dfrac{1}{r(t)},\qquad r(t)=r(t-1)+\|\hat\varphi_3(t)\|^2,\qquad r(0)=1.$

The steps of computing the parameter estimation vector $\hat\theta_3(t)$ by the MT-SG algorithm in (15)–(18) are listed as follows:

Collect the measured data $\{x(t), y(t): t=0,1,2,\ldots\}$

To initialize, let $t=1$ and $\hat\theta_3(0)=\mathbf{1}/p_0$, with $\mathbf{1}$ being a column vector whose entries are all unity and $p_0=10^6$

Compute $\hat a_1(t-1)$ according to (16)

Compute $\hat\varphi_3(t)$ by (17)

Choose $\mu_3(t)$ according to (18)

Update the parameter estimation vector $\hat\theta_3(t)$ by (15), and compare $\hat\theta_3(t)$ and $\hat\theta_3(t-1)$: if they are sufficiently close, i.e., for some preset small $\varepsilon$, $\|\hat\theta_3(t)-\hat\theta_3(t-1)\|\leqslant\varepsilon$, then terminate the procedure and obtain the estimate $\hat\theta_3(t)$; otherwise, increase $t$ by 1 and go to step 3

In general, the MT-SG algorithm is suitable for online identification, while the iterative algorithm is used for offline identification. The iterative algorithm updates the estimate $\hat\theta_3$ using a fixed data batch of finite length $L$ and thus has higher estimation accuracy than the SG algorithm. Next, we use the finite measurement input-output data $\{x_i, y_i: i=0,1,\ldots,L\}$ and iterate with subscript $k$.

Define the stacked output vector $Y_L$ and the stacked information matrix $\Phi_L$ as

(19) $Y_L:=[y(L),\,y(L-1),\,y(L-2),\ldots,y(1)]^T\in\mathbb{R}^{L},\qquad \Phi_L:=[\varphi(L),\,\varphi(L-1),\,\varphi(L-2),\ldots,\varphi(1)]^T\in\mathbb{R}^{L\times 3}.$

Let $\hat\theta_3(k)$ and $\hat\varphi_{3k}(t)$ be the iterative estimates of $\theta_3$ and $\varphi_3(t)$ at iteration $k=1,2,3,\ldots$; $\hat\theta_3(k)$, $\hat\varphi_{3k}(t)$, and $\hat\Phi_{3k}(L)$ are defined as

(20) $\hat\theta_3(k)=\left[e^{\hat a_2(k)}+e^{\hat a_3(k)},\;e^{\hat a_3(k)}-e^{\hat a_2(k)},\;e^{\hat a_2(k)+\hat a_3(k)}\right]^T,$

$\hat\varphi_{3k}(t)=\left[-y(t)e^{\hat a_1(k-1)x(t)},\;e^{\hat a_1(k-1)x(t)},\;-y(t)e^{2\hat a_1(k-1)x(t)}\right]^T,$

$\hat\Phi_{3k}(L)=\left[\hat\varphi_{3k}(L),\,\hat\varphi_{3k}(L-1),\,\hat\varphi_{3k}(L-2),\ldots,\hat\varphi_{3k}(1)\right]^T\in\mathbb{R}^{L\times 3}.$

Define the criterion function

(21) $J_4(\theta_3):=\dfrac{1}{2}\left\|Y_L-\hat\Phi_{3k}(L)\theta_3\right\|^2.$

Minimizing $J_4(\theta_3)$ and using the negative gradient search lead to the model transformation-based gradient iterative (MT-GI) algorithm for computing $\theta_3$:

(22) $\hat\theta_3(k)=\hat\theta_3(k-1)+\mu_3(k)\hat\Phi_{3k}^T(L)\left[Y_L-\hat\Phi_{3k}(L)\hat\theta_3(k-1)\right],$

(23) $\hat a_1(k-1)=\dfrac{\hat a_2(k-1)}{\hat a_3(k-1)-\hat a_2(k-1)},$

(24) $\hat\varphi_{3k}(t)=\left[-y(t)e^{\hat a_1(k-1)x(t)},\;e^{\hat a_1(k-1)x(t)},\;-y(t)e^{2\hat a_1(k-1)x(t)}\right]^T,\quad t=1,2,\ldots,L,$

(25) $\hat\Phi_{3k}(L)=\left[\hat\varphi_{3k}(L),\,\hat\varphi_{3k}(L-1),\,\hat\varphi_{3k}(L-2),\ldots,\hat\varphi_{3k}(1)\right]^T\in\mathbb{R}^{L\times 3},$

(26) $0<\mu_3(k)\leqslant\dfrac{2}{\lambda_{\max}\left[\hat\Phi_{3k}^T(L)\hat\Phi_{3k}(L)\right]}.$

The steps of computing the parameter estimation vector $\hat\theta_3(k)$ by the MT-GI algorithm are listed as follows:

Collect the input-output data $\{x(t), y(t): t=0,1,2,\ldots,L\}$

To initialize, let $k=1$ and $\hat\theta_3(0)=\mathbf{1}/p_0$, with $\mathbf{1}$ being a column vector whose entries are all unity and $p_0=10^6$

Compute $\hat a_1(k-1)$ by (23)

Form $\hat\varphi_{3k}(t)$ according to (24) and $\hat\Phi_{3k}(L)$ according to (25)

Choose a step size $\mu_3(k)$ satisfying (26)

Compute $\hat\theta_3(k)$ by (22)

Compare $\hat\theta_3(k)$ and $\hat\theta_3(k-1)$: if they are sufficiently close, i.e., for some preset small $\varepsilon$, $\|\hat\theta_3(k)-\hat\theta_3(k-1)\|\leqslant\varepsilon$, then terminate the procedure and obtain the iteration count $k$ and the estimate $\hat\theta_3(k)$; otherwise, increase $k$ by 1 and go to step 3
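The MT-GI steps can be sketched in the same way. The snippet below is an illustrative implementation under stated assumptions: the step size is taken as $\mu_3(k)=1/\mathrm{tr}[\hat\Phi^T\hat\Phi]$, a conservative admissible choice since the trace upper-bounds $\lambda_{\max}$ in (26), and the start-up fallback for $\hat a_1$ is an assumption, as before.

```python
import math, random

def a1_from(theta):
    """(23): a1 = a2/(a3 - a2); falls back to -1.0 while the logs
    needed to recover a2, a3 are undefined (start-up assumption)."""
    lo = (theta[0] - theta[1]) / 2.0
    hi = (theta[0] + theta[1]) / 2.0
    if lo <= 0.0 or hi <= 0.0 or lo == hi:
        return -1.0
    a2, a3 = math.log(lo), math.log(hi)
    return a2 / (a3 - a2)

def phi_hat(x, y, a1):
    """(24), with the exponent clipped for numerical safety."""
    u = math.exp(max(-50.0, min(50.0, a1 * x)))
    return [-y * u, u, -y * u * u]

def mt_gi(data, iters=200, p0=1e6):
    """Batch iteration (22): theta(k) = theta(k-1) + mu * Phi^T (Y - Phi theta)."""
    theta = [1.0 / p0] * 3
    for _ in range(iters):
        a1 = a1_from(theta)
        Phi = [phi_hat(x, y, a1) for x, y in data]      # (24)-(25)
        trace = sum(p * p for row in Phi for p in row)  # tr(Phi^T Phi) >= lambda_max
        mu = 1.0 / trace                                # satisfies (26)
        resid = [y - sum(p * t for p, t in zip(row, theta))
                 for (x, y), row in zip(data, Phi)]
        grad = [sum(row[j] * r for row, r in zip(Phi, resid)) for j in range(3)]
        theta = [t + mu * g for t, g in zip(theta, grad)]
    return theta

# noise-free batch of length L = 20 from the Section 4 parameters
random.seed(1)
def truth(x):
    return 1/(1 + math.exp(-(x + 0.6)/1.2)) - 1/(1 + math.exp(-(x + 0.6)/1.2 + 0.6))
data = [(x, truth(x)) for x in (random.gauss(0.0, 1.0) for _ in range(20))]
theta = mt_gi(data)
```

Because each pass reuses the whole batch, the residual of the regression model shrinks steadily across iterations.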

4. Example

Consider the nonlinear function

(27) $y=\dfrac{1}{1+e^{-(x+a)/b}}-\dfrac{1}{1+e^{-(x+a)/b+a}}+v=\dfrac{1}{1+e^{-(x+0.6)/1.2}}-\dfrac{1}{1+e^{-(x+0.6)/1.2+0.6}}+v,$

where $a_1=-1/b=-0.83$, $a_2=-a/b=-0.5$, and $a_3=-a/b+a=0.1$; then we can conclude

(28) $\theta_3=\left[e^{a_2}+e^{a_3},\;e^{a_3}-e^{a_2},\;e^{a_2+a_3}\right]^T=[1.712,\;0.499,\;0.670]^T.$

Define

(29) $[c_1,\,c_2,\,c_3,\,c_4]:=\left[e^{a_2}+e^{a_3},\;e^{a_3}-e^{a_2},\;e^{a_2+a_3},\;a_1\right]=[1.712,\;0.499,\;0.670,\;-0.83].$
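The numbers in (28) and (29) can be reproduced directly from $a_2=-0.5$ and $a_3=0.1$:

```python
import math

a2, a3 = -0.5, 0.1
theta3 = [math.exp(a2) + math.exp(a3),   # e^{a2} + e^{a3}
          math.exp(a3) - math.exp(a2),   # e^{a3} - e^{a2}
          math.exp(a2 + a3)]             # e^{a2+a3}
print([round(v, 3) for v in theta3])     # the values reported in (28)
```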

Assume that the input $x(t)$ is taken as a persistent excitation signal with zero mean and unit variance and that $v(t)$ is white noise with zero mean and variance $\sigma^2=0.10^2$. Applying the MT-SG and MT-GI algorithms to estimate the parameters of this system, the parameter estimates and their errors are shown in Tables 1 and 2, and the parameter estimation errors $\delta:=\|\hat\theta_3-\theta_3\|/\|\theta_3\|$ versus $t$ and $k$ are shown in Figures 1 and 2.

Table 1. The MT-SG estimates and errors.

t      c1        c2        c3        c4         δ (%)
10     0.71448   0.48237   0.95045   −0.83293   49.87382
20     1.13511   0.48237   1.07447   −0.83293   33.91638
30     1.20307   0.48237   0.83073   −0.83293   25.69778
50     1.44768   0.48237   0.77121   −0.83293   13.64527
100    1.67539   0.48237   0.68940   −0.83293   2.15313
150    1.70569   0.48237   0.67428   −0.83293   0.89147
200    1.71124   0.48237   0.67039   −0.83293   0.81363
250    1.71176   0.48237   0.67022   −0.83293   0.81274
300    1.71176   0.48237   0.67022   −0.83293   0.81274
True   1.71200   0.49900   0.67000   −0.83000

Table 2. The MT-GI estimates and errors with L = 20.

k      c1        c2        c3        c4         δ (%)
10     1.54756   0.05391   0.49679   −0.83293   24.30909
20     1.69311   0.08402   0.65245   −0.83293   20.00987
30     1.71353   0.10939   0.67933   −0.83293   18.75623
50     1.71681   0.15797   0.68337   −0.83293   16.42697
100    1.71564   0.27616   0.68055   −0.83293   10.73834
150    1.71441   0.39041   0.67768   −0.83293   5.24201
200    1.71322   0.50085   0.67491   −0.83293   0.29525
True   1.71200   0.49900   0.67000   −0.83000

Figure 1. The parameter estimation errors $\delta$ versus $t$ (MT-SG).

Figure 2. The parameter estimation errors $\delta$ versus $k$ (MT-GI).

From Tables 1 and 2 and Figures 1 and 2, we can draw the following conclusions:

The MT-GI algorithm has higher estimation accuracy than the MT-SG algorithm

The parameter estimation errors given by the MT-SG algorithm become smaller and settle at a small value as $t$ increases

The parameter estimation errors given by the MT-GI algorithm become smaller as the iteration number $k$ increases

5. Conclusions

This paper presents two identification methods for a nonlinear membership function. An equation-conversion method is proposed to convert the nonlinear function into a concise regression model, and then an MT-SG algorithm and an MT-GI algorithm are provided to identify the nonlinear function. The simulation results verify the effectiveness of the proposed algorithms.

Data Availability

The simulation data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Natural Science Foundation of Jiangsu Province (no. BK20131109).