The robust stability of uncertain discrete-time recurrent neural networks with time-varying delay is investigated. By decomposing some connection weight matrices, new Lyapunov-Krasovskii functionals are constructed, and a series of new, improved stability criteria is derived. These criteria are formulated in the form of linear matrix inequalities (LMIs). Compared with some previous results, the new results are less conservative. Three numerical examples are provided to demonstrate the reduced conservatism and the effectiveness of the proposed method.
1. Introduction
In recent years, recurrent neural networks (see [1–7]), such as Hopfield neural networks and cellular neural networks, have been widely investigated and successfully applied in many areas of science, such as pattern recognition, image processing, and fixed-point computation. However, because of the finite switching speed of neurons and amplifiers, time delay is unavoidable in nature and technology, and it can have significant effects on the stability of dynamic systems. Thus, studies on stability are of great significance. There has been growing research interest in stability analysis problems for delayed neural networks, and many excellent papers and monographs are available. On the other hand, during the design of a neural network and its hardware implementation, the convergence of the network may be destroyed by unavoidable uncertainty due to modeling error, deviation of vital data, and so on. Therefore, the study of robust convergence of delayed neural networks has become an active research direction. Up to now, many sufficient conditions, either delay-dependent or delay-independent, have been proposed to guarantee global robust asymptotic or exponential stability for different classes of delayed neural networks (see [8–13]).
It is worth pointing out that most studied neural networks have been in continuous time, but few in discrete time. In practice, discrete-time neural networks are more applicable to problems that are inherently temporal in nature or related to biological realities, and they can ideally preserve the dynamic characteristics, functional similarity, and even the physical or biological reality of the continuous-time networks under mild restrictions. Thus, stability analysis problems for discrete-time neural networks have received more and more interest, and some stability criteria have been proposed in the literature (see [14–25]). In [14], Liu et al. studied a class of discrete-time RNNs with time-varying delay and proposed a delay-dependent condition guaranteeing global exponential stability. By using a technique similar to that in [21], the result obtained in [14] was improved by Song and Wang in [15]. The results in [15] were further improved by Zhang et al. in [16] by introducing some useful terms. In [17], Yu et al. proposed a result less conservative than that obtained in [16] by constructing a new augmented Lyapunov-Krasovskii functional.
In this paper, the connection weight matrix C is decomposed, and some new Lyapunov-Krasovskii functionals are constructed. Combined with the linear matrix inequality (LMI) technique, a series of new, improved stability criteria is derived. Numerical examples show that these new criteria are less conservative than those obtained in [14–17].
Notation 1.
The following notations are used throughout the paper except where otherwise specified. ∥·∥ denotes a vector or matrix norm; ℝ and ℝn are the set of real numbers and the n-dimensional real vector space, respectively; ℕ+ is the set of nonnegative integers; I is the identity matrix; * represents the elements below the main diagonal of a symmetric block matrix; for a real matrix P, P>0 (P<0) denotes that P is positive definite (negative definite); ℕ[a,b]={a,a+1,…,b}; λmin(·) (λmax(·)) denotes the minimum (maximum) eigenvalue of a real symmetric matrix.
2. Preliminaries
Consider a discrete-time recurrent neural network with time-varying delays [17] described by
Σ:x(k+1)=C(k)x(k)+A(k)f(x(k))+B(k)f(x(k-τ(k)))+J,k=1,2,…,
where x(k)=[x1(k),x2(k),…,xn(k)]T∈ℝn denotes the neural state vector; f(x(k))=[f1(x1(k)),f2(x2(k)),…,fn(xn(k))]T and f(x(k-τ(k)))=[f1(x1(k-τ(k))),f2(x2(k-τ(k))),…,fn(xn(k-τ(k)))]T are the neuron activation functions; J=[J1,J2,…,Jn]T is the external input vector; the positive integer τ(k) represents the transmission delay and satisfies 0<τm≤τ(k)≤τM, where τm,τM are known positive integers representing the lower and upper bounds of the delay; C(k)=C+ΔC(k), A(k)=A+ΔA(k), B(k)=B+ΔB(k). Here C=diag(c1,c2,…,cn) with |ci|<1 describes the rate with which the ith neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs; C,A,B∈ℝn×n represent the weight matrices; ΔC(k),ΔA(k),ΔB(k) denote the time-varying structured uncertainties, which are of the following form:
[ΔC(k),ΔA(k),ΔB(k)]=KF(k)[Ec,Ea,Eb],
where K,Ec,Ea,Eb are known real constant matrices with appropriate dimensions, and F(k) is an unknown time-varying matrix function satisfying FT(k)F(k)≤I for all k∈ℕ+.
The nominal system Σ0 of Σ is defined as
Σ0:x(k+1)=Cx(k)+Af(x(k))+Bf(x(k-τ(k)))+J,k=1,2,…,
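As a concrete illustration of how the recurrence Σ0 evolves, the sketch below iterates it for a hypothetical two-neuron network; the matrices, delay law, input, and initial history here are illustrative assumptions, not parameters taken from this paper.

```python
import numpy as np

# Hypothetical two-neuron parameters (illustrative assumptions only)
C = np.diag([0.5, 0.6])                    # |c_i| < 1, state feedback
A = np.array([[0.1, -0.2], [0.05, 0.1]])   # weights for f(x(k))
B = np.array([[-0.1, 0.0], [0.2, -0.1]])   # weights for the delayed term
J = np.array([0.1, -0.1])                  # external input
f = np.tanh                                # activation function

tau_m, tau_M = 1, 3

def tau(k):
    # a time-varying delay satisfying tau_m <= tau(k) <= tau_M
    return tau_m + (k % (tau_M - tau_m + 1))

rng = np.random.default_rng(0)
x = {k: rng.standard_normal(2) for k in range(-tau_M, 1)}  # initial history
for k in range(50):
    x[k + 1] = C @ x[k] + A @ f(x[k]) + B @ f(x[k - tau(k)]) + J

print(x[50])  # state after 50 steps
```

Note that the state update needs the whole history segment x(k-τM),…,x(k), which is why the iteration keeps past states indexed back to -τM.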
To obtain our main results, we need to introduce the following assumption, definition and lemmas.
Assumption 1.
For any x,y∈ℝ, x≠y,
σi-≤fi(x)-fi(y)x-y≤σi+,i=1,2,…,n,
where σi-,σi+ are known constant scalars.
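For instance, the choice fi=tanh, used later in Example 4.1, satisfies Assumption 1 with σi-=0 and σi+=1. A quick numerical spot-check of the difference quotients (an illustrative sketch, not part of any proof):

```python
import numpy as np

# Sample random argument pairs and check the sector condition of
# Assumption 1 for f = tanh (sigma^- = 0, sigma^+ = 1). Illustrative only.
rng = np.random.default_rng(1)
x = rng.uniform(-5, 5, 1000)
y = rng.uniform(-5, 5, 1000)
mask = x != y

# difference quotients (f(x) - f(y)) / (x - y)
q = (np.tanh(x[mask]) - np.tanh(y[mask])) / (x[mask] - y[mask])

print(q.min(), q.max())
assert q.min() >= 0.0 and q.max() <= 1.0 + 1e-9
```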
As pointed out in [16], under Assumption 1, system (2.3) has equilibrium points. Assume that x*=[x1*,x2*,…,xn*]T is an equilibrium point of (2.3), and let yi(k)=xi(k)-xi*, gi(yi(k))=fi(yi(k)+xi*)-fi(xi*). Then system (2.3) can be transformed into the following form:
y(k+1)=Cy(k)+Ag(y(k))+Bg(y(k-τ(k))),k=1,2,…,
where y(k)=[y1(k),y2(k),…,yn(k)]T, g(y(k))=[g1(y1(k)),g2(y2(k)),…,gn(yn(k))]T, g(y(k-τ(k)))=[g1(y1(k-τ(k))),g2(y2(k-τ(k))),…,gn(yn(k-τ(k)))]T. From Assumption 1, for any x,y∈ℝ, x≠y, functions gi(·) satisfy σi-≤(gi(x)-gi(y))/(x-y)≤σi+,i=1,2,…,n, and gi(0)=0.
Remark 2.1.
Assumption 1 is widely used for dealing with the stability problem for neural networks. As pointed out in [13, 14, 16, 17, 26, 27], the constants σi-,σi+ (i=1,2,…,n) can be positive, negative, or zero. Thus, this assumption is less restrictive than the traditional Lipschitz condition.
Definition 2.2.
The delayed discrete-time recurrent neural network in (2.5) is said to be globally exponentially stable if there exist two scalars α>0 and 0<β<1 such that
∥y(k)∥≤α·βksups∈ℕ[-τM,0]∥y(s)∥,∀k≥1.
Lemma 2.3 (Tchebychev Inequality [28]).
For any given vectors vi∈ℝn,i=1,2,…,n, the following inequality holds:
[∑i=1nvi]T[∑i=1nvi]≤n∑i=1nviTvi.
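A direct numerical spot-check of Lemma 2.3 with random vectors (illustrative only; the inequality is a consequence of the Cauchy-Schwarz inequality):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
v = [rng.standard_normal(n) for _ in range(n)]  # vectors v_1, ..., v_n in R^n

s = sum(v)                                      # sum of the vectors
lhs = float(s @ s)                              # [sum v_i]^T [sum v_i]
rhs = n * sum(float(vi @ vi) for vi in v)       # n * sum of v_i^T v_i

print(lhs, rhs)
assert lhs <= rhs + 1e-9  # the inequality of Lemma 2.3 holds
```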
Lemma 2.4 (see [29]).
For given matrices Q=QT, H, E, and R=RT>0 of appropriate dimensions,
Q+HFE+ETFTHT<0,
for all F satisfying FTF≤R, if and only if there is an ε>0, such that
Q+ε-1HHT+εETRE<0.
Lemma 2.5 (see [16]).
If Assumption 1 holds, then for any positive-definite diagonal matrix D=diag(d1,d2,…,dn)>0, the following inequality holds:
g(y(k))TDg(y(k))-yT(k)D(∏1+∏2)g(y(k))+yT(k)∏1D∏2y(k)≤0,k∈ℕ+,
where ∏1=diag(σ1-,σ2-,…,σn-), ∏2=diag(σ1+,σ2+,…,σn+).
Lemma 2.6 (see [30]).
Given constant symmetric matrices Σ1,Σ2,Σ3, where Σ1=Σ1T and Σ2=Σ2T>0, we have Σ1+Σ3TΣ2-1Σ3<0 if and only if
(Σ1 Σ3T; Σ3 -Σ2)<0, or (-Σ2 Σ3; Σ3T Σ1)<0.
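The equivalence in Lemma 2.6 can be spot-checked numerically: below, Σ1 is constructed so that Σ1+Σ3TΣ2-1Σ3=-I<0, and the corresponding block matrix is confirmed to be negative definite as well (an illustrative sketch with random data).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
S3 = rng.standard_normal((n, n))                 # Sigma_3
M = rng.standard_normal((n, n))
S2 = M @ M.T + n * np.eye(n)                     # Sigma_2 = Sigma_2^T > 0
S1 = -S3.T @ np.linalg.inv(S2) @ S3 - np.eye(n)  # Sigma_1 + Sigma_3^T Sigma_2^-1 Sigma_3 = -I

def is_neg_def(X):
    # negative definiteness via eigenvalues of the symmetric part
    return bool(np.all(np.linalg.eigvalsh((X + X.T) / 2) < 0))

cond = is_neg_def(S1 + S3.T @ np.linalg.inv(S2) @ S3)
block = np.block([[S1, S3.T], [S3, -S2]])
print(cond, is_neg_def(block))  # both True: the two formulations agree
assert cond == is_neg_def(block)
```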
Lemma 2.7 (see [13]).
Let N and E be real constant matrices with appropriate dimensions, and let the matrix F(k) satisfy FT(k)F(k)≤I. Then, for any ϵ>0, EF(k)N+NTFT(k)ET≤ϵ-1EET+ϵNTN, k∈ℕ+.
3. Main Results
To obtain our main results, we decompose the connection weight matrix C as follows:
C=C1+C2.
Then, we can get the following stability results.
Theorem 3.1.
For any given positive integers τm<τM, under Assumption 1, system (2.5) is globally exponentially stable for any time-varying delay τ(k) satisfying τm≤τ(k)≤τM, if there exist positive-definite matrices P1,Q1,Q2,Q3, positive-definite diagonal matrices D1,D2,Q4,Q5,Q6, and arbitrary matrices Pi,Hi,i=2,3,…,21 with appropriate dimensions, such that the following LMI holds:
Ξ≜(Ξ11Ξ12Ξ13Ξ14Ξ15Ξ16Ξ17Ξ18Ξ19Ξ1,10*Ξ22Ξ23Ξ24Ξ25Ξ26Ξ27Ξ28Ξ29Ξ2,10**Ξ33Ξ34Ξ35Ξ36Ξ37Ξ38Ξ39Ξ3,10***Ξ44Ξ45Ξ46Ξ47Ξ48Ξ49Ξ4,10****Ξ55Ξ56Ξ57Ξ58Ξ59Ξ5,10*****Ξ66Ξ67Ξ68Ξ69Ξ6,10******Ξ77Ξ78Ξ79Ξ7,10*******Ξ88Ξ89Ξ8,10********Ξ99Ξ9,10*********Ξ10,10)<0,
where
Ξ11=2C1TP1C1-C1TP12C2-C2TP12TC1-2P1+H12C2+C2TH12T+Q2+Q3+(τM-τm+1)Q1+(1+τm)Q4+(1+τM)Q5+(τM-τm)Q6-2∏1D1∏2,
Ξ12=C2T(H13-P13)T,
Ξ13=C1T(P1+P12)-H12+C2T(H14-P14-P12)T+C1TP1T,
Ξ14=C1TP2-H2+C2T(H15-P15)T,
Ξ15=-C1TP2+H2+C2T(H16-P16)T,
Ξ16=-C1TP12A+H12A+C2T(H17-P17)T+D1(∏1+∏2),
Ξ17=-C1TP12B+H12B+C2T(H18-P18)T,
Ξ18=-C1TP2+H2+C2T(H19-P19)T,
Ξ19=C1TP2-H2+C2T(H20-P20)T,
Ξ1,10=C1TP2-H2+C2T(H21-P21)T,
Ξ22=-Q1-2∏1D2∏2,
Ξ23=P13-H13,
Ξ24=P3-H3,
Ξ25=-P3+H3,
Ξ26=-P13A+H13A,
Ξ27=-P13B+H13B+D2(∏1+∏2),
Ξ28=-P3+H3,
Ξ29=P3-H3,
Ξ2,10=P3-H3,
Ξ33=P12+P14-H14+P12T+P14T-H14T+2P1,
Ξ34=P2+P4-H4+P15T-H15T,
Ξ35=H4-P2-P4+P16T-H16T,
Ξ36=(H14-P12-P14)A+P17T-H17T,
Ξ37=(H14-P12-P14)B+P18T-H18T,
Ξ38=H4-P2-P4+P19T-H19T,
Ξ39=P4+P2-H4+P20T-H20T,
Ξ3,10=P4+P2-H4+P21T-H21T,
Ξ44=P5-H5+P5T-H5T-Q3,
Ξ45=H5-P5+P6T-H6T,
Ξ46=H15A-P15A+P7T-H7T,
Ξ47=H15B-P15B+P8T-H8T,
Ξ48=H5-P5+P9T-H9T,
Ξ49=P5-H5+P10T-H10T,
Ξ4,10=P5-H5+P11T-H11T,
Ξ55=H6-P6+H6T-P6T-Q2,
Ξ56=H16A-P16A+H7T-P7T,
Ξ57=H16B-P16B+H8T-P8T,
Ξ58=H6-P6+H9T-P9T,
Ξ59=P6-H6+H10T-P10T,
Ξ5,10=P6-H6+H11T-P11T,
Ξ66=H17A-P17A+ATH17T-ATP17T-D1-D1T,
Ξ67=H17B-P17B+ATH18T-ATP18T,
Ξ68=H7-P7+ATH19T-ATP19T,
Ξ69=P7-H7+ATH20T-ATP20T,
Ξ6,10=P7-H7+ATH21T-ATP21T,
Ξ77=H18B-P18B+BTH18T-BTP18T-D2-D2T,
Ξ78=H8-P8+BTH19T-BTP19T,
Ξ79=P8-H8+BTH20T-BTP20T,
Ξ7,10=P8-H8+BTH21T-BTP21T,
Ξ88=H9-P9+H9T-P9T-(1+τM)-1Q5,
Ξ89=P9-H9+H10T-P10T,
Ξ8,10=P9-H9+H11T-P11T,
Ξ99=P10-H10+P10T-H10T-(1+τm)-1Q4,
Ξ9,10=P10-H10+H11T-P11T,
Ξ10,10=P11-H11+P11T-H11T-(τM-τm)-1Q6.
Proof.
Construct a new augmented Lyapunov-Krasovskii functional candidate as follows:
V(k)=V1(k)+V2(k)+V3(k)+V4(k)+V5(k)+V6(k),
where
V1(k)=2YT(k)diag(P1,0,…,0)10n×10nY(k), where diag(P1,0,…,0)10n×10n denotes the 10n×10n block matrix with P1 in its leading n×n block and zeros elsewhere, Y(k)=[yT(k),yT(k-τ(k)),ηT(k),yT(k-τM),yT(k-τm),gT(y(k)),gT(y(k-τM)),∑i=k-τMkyT(i),∑i=k-τmkyT(i),∑i=k-τM+1k-τmyT(i)]T, and η(k)=y(k+1)-C1y(k):
V2(k)=∑i=k-τ(k)k-1yT(i)Q1y(i),V3(k)=∑i=k-τmk-1yT(i)Q2y(i)+∑i=k-τMk-1yT(i)Q3y(i),V4(k)=∑j=k-τmk-1∑i=jk-1yT(i)Q4y(i)+∑j=k-τMk-1∑i=jk-1yT(i)Q5y(i),V5(k)=∑j=k-τM+1k-τm∑i=jk-1yT(i)Q1y(i),V6(k)=∑j=k-τM+1k-τm∑i=jk-1yT(i)Q6y(i).
Set Ỹ(k+1)=[yT(k+1),yT(k-τ(k)),ηT(k),yT(k-τM),yT(k-τm),gT(y(k)),gT(y(k-τM)),∑i=k-τMkyT(i),∑i=k-τmkyT(i),∑i=k-τM+1k-τmyT(i)]T=[yT(k)C1T+ηT(k),yT(k-τ(k)),ηT(k),yT(k-τM),yT(k-τm),gT(y(k)),gT(y(k-τM)),∑i=k-τMkyT(i),∑i=k-τmkyT(i),∑i=k-τM+1k-τmyT(i)]T. Define ΔV(k)=V(k+1)-V(k). Then, along the solution of system (2.5), we have
ΔV1(k)=2YT(k+1)diag(P1,0,…,0)Y(k+1)-2YT(k)diag(P1,0,…,0)Y(k)=2ỸT(k+1)diag(P1,0,…,0)Ỹ(k+1)-2YT(k)diag(P1,0,…,0)Y(k)≜2I1-2I2,
I1=YT(k)(C1T 0 0 0 0 0 0 0 0 0; 0 I 0 0 0 0 0 0 0 0; I 0 I 0 0 0 0 0 0 0; 0 0 0 I 0 0 0 0 0 0; 0 0 0 0 I 0 0 0 0 0; 0 0 0 0 0 I 0 0 0 0; 0 0 0 0 0 0 I 0 0 0; 0 0 0 0 0 0 0 I 0 0; 0 0 0 0 0 0 0 0 I 0; 0 0 0 0 0 0 0 0 0 I)diag(P1,0,…,0)Ỹ(k+1)=YT(k)(C1T 0 0 0 0 0 0 0 0 0; 0 I 0 0 0 0 0 0 0 0; I 0 I 0 0 0 0 0 0 0; 0 0 0 I 0 0 0 0 0 0; 0 0 0 0 I 0 0 0 0 0; 0 0 0 0 0 I 0 0 0 0; 0 0 0 0 0 0 I 0 0 0; 0 0 0 0 0 0 0 I 0 0; 0 0 0 0 0 0 0 0 I 0; 0 0 0 0 0 0 0 0 0 I)(P1 P2 P12; 0 P3 P13; 0 P4 P14; 0 P5 P15; 0 P6 P16; 0 P7 P17; 0 P8 P18; 0 P9 P19; 0 P10 P20; 0 P11 P21)(y(k+1); 0; 0).
On the other hand, since η(k)-C2y(k)-Ag(y(k))-Bg(y(k-τ(k)))=0 and ∑i=k-τmky(i)-∑i=k-τMky(i)+∑i=k+1-τMk-τmy(i)-y(k-τm)+y(k-τM)=0, we have
(y(k+1); 0; 0)=(C1y(k)+η(k); ∑i=k-τmky(i)-∑i=k-τMky(i)+∑i=k+1-τMk-τmy(i)-y(k-τm)+y(k-τM); η(k)-C2y(k)-Ag(y(k))-Bg(y(k-τ(k))))=(C1 0 I 0 0 0 0 0 0 0; 0 0 0 I -I 0 0 -I I I; -C2 0 I 0 0 -A -B 0 0 0)Y(k),
I2=YT(k)(P1 H2 H12; 0 H3 H13; 0 H4 H14; 0 H5 H15; 0 H6 H16; 0 H7 H17; 0 H8 H18; 0 H9 H19; 0 H10 H20; 0 H11 H21)(y(k); 0; 0)=YT(k)(P1 H2 H12; 0 H3 H13; 0 H4 H14; 0 H5 H15; 0 H6 H16; 0 H7 H17; 0 H8 H18; 0 H9 H19; 0 H10 H20; 0 H11 H21)(I 0 0 0 0 0 0 0 0 0; 0 0 0 I -I 0 0 -I I I; -C2 0 I 0 0 -A -B 0 0 0)Y(k).
ΔV2(k)≤yT(k)Q1y(k)-yT(k-τ(k))Q1y(k-τ(k))+∑i=k+1-τMk-τmyT(i)Q1y(i),
ΔV3(k)=yT(k)(Q2+Q3)y(k)-yT(k-τm)Q2y(k-τm)-yT(k-τM)Q3y(k-τM).
From Lemma 2.3 we can obtain
ΔV4(k)=∑j=k+1-τmk∑i=jkyT(i)Q4y(i)-∑j=k-τmk-1∑i=jk-1yT(i)Q4y(i)+∑j=k+1-τMk∑i=jkyT(i)Q5y(i)-∑j=k-τMk-1∑i=jk-1yT(i)Q5y(i)=∑j=k-τmk-1∑i=j+1kyT(i)Q4y(i)-∑j=k-τmk-1∑i=jk-1yT(i)Q4y(i)+∑j=k-τMk-1∑i=j+1kyT(i)Q5y(i)-∑j=k-τMk-1∑i=jk-1yT(i)Q5y(i)=∑j=k-τmk-1(yT(k)Q4y(k)-yT(j)Q4y(j))+∑j=k-τMk-1(yT(k)Q5y(k)-yT(j)Q5y(j))≤(1+τm)yT(k)Q4y(k)-∑j=k-τmkyT(j)Q4y(j)+(1+τM)yT(k)Q5y(k)-∑j=k-τMkyT(j)Q5y(j)≤(1+τm)yT(k)Q4y(k)-(1/(1+τm))[∑j=k-τmky(j)]TQ4[∑j=k-τmky(j)]+(1+τM)yT(k)Q5y(k)-(1/(1+τM))[∑j=k-τMky(j)]TQ5[∑j=k-τMky(j)],
ΔV5(k)=∑j=k+2-τMk+1-τm∑i=jkyT(i)Q1y(i)-∑j=k+1-τMk-τm∑i=jk-1yT(i)Q1y(i)=∑j=k+1-τMk-τm∑i=j+1kyT(i)Q1y(i)-∑j=k+1-τMk-τm∑i=jk-1yT(i)Q1y(i)=∑j=k+1-τMk-τm(yT(k)Q1y(k)-yT(j)Q1y(j))=(τM-τm)yT(k)Q1y(k)-∑j=k+1-τMk-τmyT(j)Q1y(j).
Similarly,
ΔV6(k)=(τM-τm)yT(k)Q6y(k)-∑j=k+1-τMk-τmyT(j)Q6y(j)≤(τM-τm)yT(k)Q6y(k)-(1/(τM-τm))[∑j=k+1-τMk-τmy(j)]TQ6[∑j=k+1-τMk-τmy(j)].
From Lemma 2.5, for any positive-definite diagonal matrices D1,D2, it follows that
2yT(k-τ(k))D2(∏1+∏2)g(y(k-τ(k)))-2gT(y(k-τ(k)))D2g(y(k-τ(k)))-2yT(k-τ(k))∏1D2∏2y(k-τ(k))≥0,
2yT(k)D1(∏1+∏2)g(y(k))-2gT(y(k))D1g(y(k))-2yT(k)∏1D1∏2y(k)≥0.
Combining (3.7)–(3.17), we get
ΔV(k)≤YT(k)ΞY(k).
If the LMI (3.2) holds, it follows that there exists a sufficiently small scalar ε>0 such that
ΔV(k)≤-ε∥y(k)∥2.
On the other hand, it is easy to see that
V(k)≤2λmax(P1)∥y(k)∥2+λmax(Q1)∑i=k-τ(k)k-1∥y(i)∥2+λmax(Q2)∑i=k-τmk-1∥y(i)∥2+λmax(Q3)∑i=k-τMk-1∥y(i)∥2+λmax(Q4)∑j=k-τmk∑i=jk-1∥y(i)∥2+λmax(Q5)∑j=k-τMk∑i=jk-1∥y(i)∥2+λmax(Q1)∑j=k-τMk-τm∑i=jk-1∥y(i)∥2+λmax(Q6)∑j=k+1-τMk-τm∑i=jk-1∥y(i)∥2≤2λmax(P1)∥y(k)∥2+λ∑i=k-τMk-1∥y(i)∥2,
where λ=λmax(Q1)+λmax(Q2)+λmax(Q3)+(1+τm)λmax(Q4)+(1+τM)λmax(Q5)+(1+τM-τm)(λmax(Q1)+λmax(Q6)). Choose a scalar θ>1 such that -εθ+2(θ-1)λmax(P1)+(θ-1)λ·τMθτM=0. Then by (3.19) and (3.20), we get
θk+1V(k+1)-θkV(k)=θk+1ΔV(k)+θk(θ-1)V(k)≤ε1θk∥y(k)∥2+ε2θk∑i=k-τMk-1∥y(i)∥2,
where ε1=-εθ+2λmax(P1)(θ-1), ε2=λ(θ-1). Therefore, for an arbitrary positive integer N≥τM+1, summing up both sides of (3.21) from 0 to N-1, we can obtain
θNV(N)-V(0)≤ε1∑k=0N-1θk∥y(k)∥2+ε2∑k=0N-1∑i=k-τMk-1θk∥y(i)∥2≤ε2τM(τM+1)θτMsupi∈ℕ[-τM,0]∥y(i)∥2+(ε1+ε2τMθτM)∑k=0N-1θk∥y(k)∥2.
Noting that
V(N)≥λmin(P1)∥y(N)∥2,V(0)≤(λτM+2λmax(P1))supi∈ℕ[-τM,0]∥y(i)∥2.
It follows that ∥y(N)∥≤α·βNsupi∈ℕ[-τM,0]∥y(i)∥, where β=1/√θ and α=√((λτM+2λmax(P1)+ε2τM(τM+1)θτM)/λmin(P1)). By Definition 2.2, system (2.5) is globally exponentially stable, which completes the proof of Theorem 3.1.
Remark 3.2.
By constructing the new augmented Lyapunov functional, the free-weighting matrices Pi,Hi,i=2,3,…,21 are introduced so as to reduce the conservatism of the delay-dependent result. Moreover, the decomposition C=C1+C2 further reduces the conservatism of the stability criterion, since the elements of the matrices C1,C2 are no longer restricted to (-1,1).
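In practice, criteria of this kind are verified numerically with an LMI/SDP solver. As a minimal sketch of the underlying mechanics (far simpler than the full LMI (3.2), and only an illustration), the delay-free Lyapunov condition P1>0, CTP1C-P1<0 can be checked by solving the discrete Lyapunov equation P1-CTP1C=I through a Kronecker-product identity; C below is taken from Example 4.1.

```python
import numpy as np

C = np.array([[0.8, 0.0], [0.0, 0.9]])  # C from Example 4.1, spectral radius < 1
n = C.shape[0]

# Solve P1 - C^T P1 C = I via (I - C^T kron C^T) vec(P1) = vec(I)
K = np.kron(C.T, C.T)
vecP = np.linalg.solve(np.eye(n * n) - K, np.eye(n).flatten())
P = vecP.reshape(n, n)

# P1 > 0 and C^T P1 C - P1 = -I < 0: the Lyapunov condition is feasible
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
print(np.round(P, 4))
```

The full criteria involve unknown matrices entering the LMI linearly, so in practice they are handed to a dedicated solver (e.g., an LMI toolbox) rather than solved in closed form as above.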
Remark 3.3.
Since Theorem 3.1 holds for arbitrary matrices C1,C2 satisfying C1+C2=C, by taking C1=0 or C2=0, respectively, we can easily obtain the following simplified corollaries.
Corollary 3.4.
For any given positive integers τm<τM, under Assumption 1, system (2.5) is globally exponentially stable for any time-varying delay τ(k) satisfying τm≤τ(k)≤τM, if there exist positive-definite matrices P1,Q1,Q2,Q3, positive-definite diagonal matrices D1,D2,Q4,Q5,Q6, and arbitrary matrices Pi,Hi,i=2,3,…,21 with appropriate dimensions, such that the following LMI holds:
Ξ̃≜(Ξ̃11Ξ̃12Ξ̃13Ξ̃14Ξ̃15Ξ̃16Ξ̃17Ξ̃18Ξ̃19Ξ̃1,10*Ξ22Ξ23Ξ24Ξ25Ξ26Ξ27Ξ28Ξ29Ξ2,10**Ξ33Ξ34Ξ35Ξ36Ξ37Ξ38Ξ39Ξ3,10***Ξ44Ξ45Ξ46Ξ47Ξ48Ξ49Ξ4,10****Ξ55Ξ56Ξ57Ξ58Ξ59Ξ5,10*****Ξ66Ξ67Ξ68Ξ69Ξ6,10******Ξ77Ξ78Ξ79Ξ7,10*******Ξ88Ξ89Ξ8,10********Ξ99Ξ9,10*********Ξ10,10)<0,
where
Ξ̃11=-2P1+H12C+CTH12T+Q2+Q3+(τM-τm+1)Q1+(1+τm)Q4+(1+τM)Q5+(τM-τm)Q6-2∏1D1∏2,Ξ̃12=CT(H13-P13)T,Ξ̃13=CT(H14-P14-P12)T-H12,Ξ̃14=CT(H15-P15)T-H2,Ξ̃15=H2+CT(H16-P16)T,Ξ̃16=H12A+CT(H17-P17)T+D1(∏1+∏2),Ξ̃17=H12B+CT(H18-P18)T,Ξ̃18=H2+CT(H19-P19)T,Ξ̃19=CT(H20-P20)T-H2,Ξ̃1,10=CT(H21-P21)T-H2.
Corollary 3.5.
For any given positive integers τm<τM, under Assumption 1, system (2.5) is globally exponentially stable for any time-varying delay τ(k) satisfying τm≤τ(k)≤τM, if there exist positive-definite matrices P1,Q1,Q2,Q3, positive-definite diagonal matrices D1,D2,Q4,Q5,Q6, and arbitrary matrices Pi,Hi,i=2,3,…,21 with appropriate dimensions, such that the following LMI holds:
Ξ̂≜(Ξ̂110Ξ̂13Ξ̂14Ξ̂15Ξ̂16Ξ̂17Ξ̂18Ξ̂19Ξ̂1,10*Ξ22Ξ23Ξ24Ξ25Ξ26Ξ27Ξ28Ξ29Ξ2,10**Ξ33Ξ34Ξ35Ξ36Ξ37Ξ38Ξ39Ξ3,10***Ξ44Ξ45Ξ46Ξ47Ξ48Ξ49Ξ4,10****Ξ55Ξ56Ξ57Ξ58Ξ59Ξ5,10*****Ξ66Ξ67Ξ68Ξ69Ξ6,10******Ξ77Ξ78Ξ79Ξ7,10*******Ξ88Ξ89Ξ8,10********Ξ99Ξ9,10*********Ξ10,10)<0,
where
Ξ̂11=2CTP1C-2P1+Q2+Q3+(τM-τm+1)Q1+(1+τm)Q4+(1+τM)Q5+(τM-τm)Q6-2∏1D1∏2,Ξ̂13=CT(P1+P12)-H12+CTP1T,Ξ̂14=CTP2-H2,Ξ̂15=-CTP2+H2,Ξ̂16=-CTP12A+H12A+D1(∏1+∏2),Ξ̂17=-CTP12B+H12B,Ξ̂18=-CTP2+H2,Ξ̂19=CTP2-H2,Ξ̂1,10=CTP2-H2.
Remark 3.6.
It is worth pointing out that Theorem 3.1 and Corollary 3.4 can be easily extended to robust exponential stability conditions. As for the stability of system (2.1), according to Lemma 2.4, we can obtain the following robust stability results.
Theorem 3.7.
For any given positive integers τm<τM, under Assumption 1, system (2.1) is robustly globally exponentially stable for any time-varying delay τ(k) satisfying τm≤τ(k)≤τM, if there exist positive-definite matrices P1,Q1,Q2,Q3, positive-definite diagonal matrices D1,D2,Q4,Q5,Q6, arbitrary matrices Pi,Hi,i=2,3,…,21 with appropriate dimensions, and a positive scalar ϵ such that the following LMI holds:
Ξ′≜(Ξ ξ1 ϵξ2T; * -ϵI 0; * * -ϵI)<0,
where ξ1T=[KT(H12-C1TP12)T,KT(H13-P13)T,KT(H14-P12-P14)T,KT(H15-P15)T,KT(H16-P16)T,KT(H17-P17)T,KT(H18-P18)T,KT(H19-P19)T,KT(H20-P20)T,KT(H21-P21)T], ξ2=[Ec,0,0,0,0,Ea,Eb,0,0,0].
Proof.
Replacing A, B, and C2 in inequality (3.2) with A+KF(k)Ea, B+KF(k)Eb, and C2+KF(k)Ec, respectively, inequality (3.2) for system (2.1) is equivalent to Ξ+ξ1F(k)ξ2+ξ2TFT(k)ξ1T<0. From Lemmas 2.6 and 2.7, we can easily obtain this result, which completes the proof. Similarly, we have the following.
Theorem 3.8.
For any given positive integers τm<τM, under Assumption 1, system (2.1) is robustly globally exponentially stable for any time-varying delay τ(k) satisfying τm≤τ(k)≤τM, if there exist positive-definite matrices P1,Q1,Q2,Q3, positive-definite diagonal matrices D1,D2,Q4,Q5,Q6, arbitrary matrices Pi,Hi,i=2,3,…,21 with appropriate dimensions, and a positive scalar ϵ such that the following LMI holds:
Ξ̃′≜(Ξ̃ ξ1 ϵξ2T; * -ϵI 0; * * -ϵI)<0.
Theorem 3.9.
For any given positive integers τm<τM, under Assumption 1, system (2.1) is robustly globally exponentially stable for any time-varying delay τ(k) satisfying τm≤τ(k)≤τM, if there exist positive-definite matrices P1,Q1,Q2,Q3, positive-definite diagonal matrices D1,D2,Q4,Q5,Q6, and arbitrary matrices Pi,Hi,P¯j,H¯j,i=2,3,…,21,j=1,2,…,6 with appropriate dimensions, such that the following LMI holds:
Ξ′′≜(Ξ11′′Ξ12Ξ13Ξ14Ξ15Ξ16Ξ17Ξ18Ξ19Ξ1,10Ξ1,11′′*Ξ22Ξ23Ξ24Ξ25Ξ26Ξ27Ξ28Ξ29Ξ2,10Ξ2,11′′**Ξ33Ξ34Ξ35Ξ36Ξ37Ξ38Ξ39Ξ3,10Ξ3,11′′***Ξ44Ξ45Ξ46Ξ47Ξ48Ξ49Ξ4,10Ξ4,11′′****Ξ55Ξ56Ξ57Ξ58Ξ59Ξ5,10Ξ5,11′′*****Ξ66′′Ξ67Ξ68Ξ69Ξ6,10Ξ6,11′′******Ξ77′′Ξ78Ξ79Ξ7,10Ξ7,11′′*******Ξ88Ξ89Ξ8,10Ξ8,11′′********Ξ99Ξ9,10Ξ9,11′′*********Ξ10,10Ξ10,11′′**********Ξ11,11′′)13n×13n<0,
where
Ξ11′′=Ξ11+EcTEc,
Ξ66′′=Ξ66+EaTEa,
Ξ77′′=Ξ77+EbTEb,
Ξ1,11′′=(H12-C1TP12)K¯+C2T(H¯23-P¯23)T,
Ξ2,11′′=(H13-P13)K¯,
Ξ3,11′′=(H14-P14-P12)K¯+P¯23T-H¯23T,
Ξ4,11′′=(H15-P15)K¯+P¯22T-H¯22T,
Ξ5,11′′=(H16-P16)K¯-P¯22T-H¯22T,
Ξ6,11′′=(H17-P17)K¯-ATP¯23T+ATH¯23T,
Ξ7,11′′=(H18-P18)K¯-BTP¯23T+BTH¯23T,
Ξ8,11′′=(H19-P19)K¯-P¯22T+H¯22T,
Ξ9,11′′=(H20-P20)K¯+P¯22T-H¯22T,
Ξ10,11′′=(H21-P21)K¯+P¯22T-H¯22T,
Ξ11,11′′=(H¯23-P¯23)K¯+K¯T(H¯23-P¯23)T-I¯,
K¯=[K K K], P¯22=(P¯1 P¯2 P¯3), P¯23=(P¯4 P¯5 P¯6), H¯22=(H¯1 H¯2 H¯3), H¯23=(H¯4 H¯5 H¯6), I¯=diag(I,I,I).
Proof.
Replacing A, B, and C in system (2.5) with A+KF(k)Ea, B+KF(k)Eb, and C+KF(k)Ec, respectively, system (2.5) can be transformed into the following equivalent form:
y(k+1)=Cy(k)+Ag(y(k))+Bg(y(k-τ(k)))+KF(k)Ecy(k)+KF(k)Eag(y(k))+KF(k)Ebg(y(k-τ(k)))=C1y(k)+C2y(k)+Ag(y(k))+Bg(y(k-τ(k)))+K¯Υ(k),
where
Υ(k)=(F(k)Ecy(k); F(k)Eag(y(k)); F(k)Ebg(y(k-τ(k))))=diag(F(k),F(k),F(k))diag(Ec,Ea,Eb)(y(k); g(y(k)); g(y(k-τ(k)))).
Construct a new augmented Lyapunov-Krasovskii functional candidate as follows:
V(k)=V¯1(k)+V2(k)+V3(k)+V4(k)+V5(k)+V6(k),
where
V¯1(k)=2Y¯T(k)diag(P1,0,…,0)13n×13nY¯(k), where diag(P1,0,…,0)13n×13n denotes the 13n×13n block matrix with P1 in its leading n×n block and zeros elsewhere, Y¯(k)=[yT(k),yT(k-τ(k)),ηT(k),yT(k-τM),yT(k-τm),gT(y(k)),gT(y(k-τM)),∑i=k-τMkyT(i),∑i=k-τmkyT(i),∑i=k-τM+1k-τmyT(i),ΥT(k)]T, η(k)=y(k+1)-C1y(k); V2(k),V3(k),…,V6(k) are the same as in Theorem 3.1.
Set Y¯¯(k+1)=[yT(k+1),yT(k-τ(k)),ηT(k),yT(k-τM),yT(k-τm),gT(y(k)),gT(y(k-τM)),∑i=k-τMkyT(i),∑i=k-τmkyT(i),∑i=k-τM+1k-τmyT(i),ΥT(k)]T=[yT(k)C1T+ηT(k),yT(k-τ(k)),ηT(k),yT(k-τM),yT(k-τm),gT(y(k)),gT(y(k-τM)),∑i=k-τMkyT(i),∑i=k-τmkyT(i),∑i=k-τM+1k-τmyT(i),ΥT(k)]T. Define ΔV(k)=V(k+1)-V(k). Then, along the solution of system (3.32), we have
ΔV¯1(k)=2Y¯T(k+1)diag(P1,0,…,0)Y¯(k+1)-2Y¯T(k)diag(P1,0,…,0)Y¯(k)=2Y¯¯T(k+1)diag(P1,0,…,0)Y¯¯(k+1)-2Y¯T(k)diag(P1,0,…,0)Y¯(k)≜2I¯1-2I¯2.
I¯1=Y¯T(k)(C1T 0 0 0 0 0 0 0 0 0 0; 0 I 0 0 0 0 0 0 0 0 0; I 0 I 0 0 0 0 0 0 0 0; 0 0 0 I 0 0 0 0 0 0 0; 0 0 0 0 I 0 0 0 0 0 0; 0 0 0 0 0 I 0 0 0 0 0; 0 0 0 0 0 0 I 0 0 0 0; 0 0 0 0 0 0 0 I 0 0 0; 0 0 0 0 0 0 0 0 I 0 0; 0 0 0 0 0 0 0 0 0 I 0; 0 0 0 0 0 0 0 0 0 0 I¯)(P1 P2 P12; 0 P3 P13; 0 P4 P14; 0 P5 P15; 0 P6 P16; 0 P7 P17; 0 P8 P18; 0 P9 P19; 0 P10 P20; 0 P11 P21; 0 P¯22 P¯23)(y(k+1); 0; 0).
On the other hand, since η(k)-C2y(k)-Ag(y(k))-Bg(y(k-τ(k)))-K¯Υ(k)=0 and ∑i=k-τmky(i)-∑i=k-τMky(i)+∑i=k+1-τMk-τmy(i)-y(k-τm)+y(k-τM)=0, we have
(y(k+1); 0; 0)=(C1y(k)+η(k); ∑i=k-τmky(i)-∑i=k-τMky(i)+∑i=k+1-τMk-τmy(i)-y(k-τm)+y(k-τM); η(k)-C2y(k)-Ag(y(k))-Bg(y(k-τ(k)))-K¯Υ(k))=(C1 0 I 0 0 0 0 0 0 0 0; 0 0 0 I -I 0 0 -I I I 0; -C2 0 I 0 0 -A -B 0 0 0 -K¯)Y¯(k),
I¯2=Y¯T(k)(P1 H2 H12; 0 H3 H13; 0 H4 H14; 0 H5 H15; 0 H6 H16; 0 H7 H17; 0 H8 H18; 0 H9 H19; 0 H10 H20; 0 H11 H21; 0 H¯22 H¯23)(I 0 0 0 0 0 0 0 0 0 0; 0 0 0 I -I 0 0 -I I I 0; -C2 0 I 0 0 -A -B 0 0 0 -K¯)Y¯(k).
Noting that
ΥT(k)Υ(k)≤[yT(k),gT(y(k)),gT(y(k-τ(k)))](EcTEc 0 0; 0 EaTEa 0; 0 0 EbTEb)(y(k); g(y(k)); g(y(k-τ(k)))).
Combining (3.12)–(3.17) and (3.36)–(3.40) and proceeding as in the proof of Theorem 3.1, one can easily obtain this result, which completes the proof.
Remark 3.10.
Compared with the augmented Lyapunov functional constructed in Theorem 3.1, this new augmented Lyapunov functional includes the term Υ(k), which further reduces the conservatism of the stability criterion (for details, see Example 4.2).
4. Numerical Examples
In this section, three numerical examples will be presented to show the validity of the main results derived above.
Example 4.1.
For the convenience of comparison, let us consider a delayed discrete-time recurrent neural network in (2.5) with parameters given by
C=(0.8 0; 0 0.9), A=(0.001 0; 0 0.005), B=(-0.1 0.01; -0.2 -0.1).
The activation functions are given by g1(x)=g2(x)=tanh(x). It is easy to see that they satisfy Assumption 1 with σ1-=σ2-=0, σ1+=σ2+=1. For τm=2,4,6,8,10,20, references [15–17] reported the corresponding allowable upper bounds τM of the time-varying delay. Decompose the matrix C as C=C1+C2, where
C1=(0.4 0; 0 0.5), C2=(0.4 0; 0 0.4).
Table 1 shows that our results are less conservative than those previous results.
Table 1. Allowable upper bounds τM for given τm.

Cases            τm=2    τm=4    τm=6    τm=8    τm=10   τm=20
By [15]          11      11      12      13      14      21
By [16]          11      12      13      14      16      23
By [17]          13      13      17      19      21      31
By Theorem 3.1   τM>0    τM>0    τM>0    τM>0    τM>0    τM>0
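A simulation of this example is consistent with Table 1, although it only illustrates, rather than proves, stability: the sketch below iterates (2.5) with a random time-varying delay in [2, 20], beyond the upper bounds certified by [15–17] at τm=2, and checks that the trajectory remains bounded (the delay law, seed, horizon, and initial history are illustrative assumptions).

```python
import numpy as np

# Parameters of Example 4.1
C = np.array([[0.8, 0.0], [0.0, 0.9]])
A = np.array([[0.001, 0.0], [0.0, 0.005]])
B = np.array([[-0.1, 0.01], [-0.2, -0.1]])
g = np.tanh  # activations, sector [0, 1]

tau_m, tau_M = 2, 20  # beyond the bounds reported in [15-17] at tau_m = 2
rng = np.random.default_rng(4)

y = {k: np.array([1.0, -1.0]) for k in range(-tau_M, 1)}  # constant initial history
for k in range(400):
    tau = int(rng.integers(tau_m, tau_M + 1))  # random admissible delay
    y[k + 1] = C @ y[k] + A @ g(y[k]) + B @ g(y[k - tau])

print(np.linalg.norm(y[400]))  # the state norm stays bounded
```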
Example 4.2.
Consider an uncertain delayed discrete-time recurrent neural network in (2.1) with parameters given by
C=(0.25 0; 0 0.1), A=(0.12 0.24; -0.15 0.2), B=(-0.25 0.1; 0.02 0.09), K=(0.2 0; 0 0.3), Ec=(0.15 0.1; 0 -0.7), Ea=(0.1 0.3; -0.2 0.05), Eb=(0.13 0.06; -0.05 0.15), J=(0; 0).
The activation functions are given by f1(x)=tanh(0.55x)+sin(0.45x), f2(x)=tanh(0.65x)+sin(0.45x). It is easy to see that they satisfy Assumption 1 with σ1-=0.1, σ2-=0.2, σ1+=1, σ2+=1.1. For τm=2,4,6,8,10, references [16, 17] reported the corresponding allowable upper bounds τM of the time-varying delay. Decompose the matrix C as C=C1+C2, where
C1=(0.2 -1; 0.012 0.05), C2=(0.05 1; -0.012 0.05).
Setting ϵ=50 and using the MATLAB toolbox, the allowable upper bounds τM for given τm are shown in Table 2. Obviously, our results are less conservative than those previous results.
Table 2. Allowable upper bounds τM for given τm.

Cases            τm=2    τm=4    τm=6    τm=8    τm=10
By [16]          6       8       10      12      14
By [17]          10      12      14      16      18
By Theorem 3.7   17      19      21      23      25
By Theorem 3.9   22      24      26      28      30
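The robust case can be illustrated similarly: the sketch below simulates the network of this example under a randomly drawn admissible uncertainty F(k) with FT(k)F(k)≤I and a random delay in [2, 22] (the τm=2 column of Table 2), checking boundedness of the trajectory. The uncertainty realization, delay law, seed, and initial history are illustrative assumptions; a simulation does not replace the LMI test.

```python
import numpy as np

# Parameters of Example 4.2 (J = 0)
C  = np.array([[0.25, 0.0], [0.0, 0.1]])
A  = np.array([[0.12, 0.24], [-0.15, 0.2]])
B  = np.array([[-0.25, 0.1], [0.02, 0.09]])
K  = np.array([[0.2, 0.0], [0.0, 0.3]])
Ec = np.array([[0.15, 0.1], [0.0, -0.7]])
Ea = np.array([[0.1, 0.3], [-0.2, 0.05]])
Eb = np.array([[0.13, 0.06], [-0.05, 0.15]])

def f(x):
    # activations of Example 4.2
    return np.array([np.tanh(0.55 * x[0]) + np.sin(0.45 * x[0]),
                     np.tanh(0.65 * x[1]) + np.sin(0.45 * x[1])])

tau_m, tau_M = 2, 22
rng = np.random.default_rng(5)

x = {k: np.array([2.0, -2.0]) for k in range(-tau_M, 1)}  # initial history
for k in range(300):
    Fk = rng.uniform(-1.0, 1.0) * np.eye(2)       # admissible: F^T F <= I
    Ck = C + K @ Fk @ Ec
    Ak = A + K @ Fk @ Ea
    Bk = B + K @ Fk @ Eb
    tau = int(rng.integers(tau_m, tau_M + 1))     # random admissible delay
    x[k + 1] = Ck @ x[k] + Ak @ f(x[k]) + Bk @ f(x[k - tau])

print(np.linalg.norm(x[300]))  # trajectory remains bounded
```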
Example 4.3.
Consider an uncertain delayed discrete-time recurrent neural network in (2.1) with parameters given by
C=(0.8 0; 0 0.9), A=(0.07 0.1; 0 0.05), B=(-0.1 0.01; -0.2 -0.1), K=(0.02 0; 0 0.03), Ec=(0.15 0.1; 0 -0.7), Ea=(0.1 0.3; -0.2 0.05), Eb=(0.13 0.06; -0.05 0.15), J=(0; 0).
The activation functions are the same as in Example 4.2. Decompose the matrix C as C=C1+C2, where
C1=(0.4 0; 0 0.5), C2=(0.4 0; 0 0.4).
Setting ϵ=50 and using the MATLAB toolbox, the allowable upper bounds τM for given τm are shown in Table 3.
The free-weighting matrices are obtained as follows when τm=2,τM=60:
Q1=(0.0074 -0.0002; -0.0002 0.0007), Q2=(0.0627 -0.0191; -0.0191 0.0094), Q3=(0.0627 -0.0191; -0.0191 0.0094),
D1=(0.8703 0; 0 0.2672), Q4=(0.0079 0; 0 0.0012), Q5=1.0e-003·(0.2326 0; 0 0.0303), Q6=1.0e-003·(0.2493 0; 0 0.0320),
P1=(6.3936 -0.9854; -0.9854 0.2599), P5=(-304.8096 -16.0366; -131.0952 -930.9992), D2=(0.0612 0; 0 0.0083),
P6=1.0e+003·(1.1931 0.1097; 0.3365 0.0269), P9=1.0e+003·(-1.4764 -0.1509; -0.0791 1.0626),
P10=(-350.0434 -852.4925; -26.8470 904.2654), P11=1.0e+003·(-0.5682 -0.0257; -0.0413 1.1255),
P12=1.0e+004·(-0.6548 -1.6295; 0.9399 4.7692), P13=1.0e+003·(2.2001 -3.7370; -0.3034 -0.3479),
P14=1.0e+004·(0.3186 0.7851; -0.4972 -2.3726), P17=(273.4789 -200.5789; -165.3251 -31.3854),
P18=1.0e+003·(0.0888 -3.8454; 1.6080 4.3515),
H5=(-304.4622 -16.0335; -131.0922 -930.6433), H6=1.0e+003·(1.1927 0.1097; 0.3365 0.0265),
H9=1.0e+003·(-1.4768 -0.1509; -0.0791 1.0622), H10=(-349.6840 -852.4933; -26.8478 904.6230),
H11=1.0e+003·(-0.5678 -0.0257; -0.0413 1.1259), H12=1.0e+004·(-0.2616 -0.6522; 0.4698 2.3832),
H13=1.0e+003·(2.2001 -3.7370; -0.3034 -0.3479), H14=1.0e+004·(-0.3344 -0.8443; 0.4422 2.4005),
H17=1.0e+004·(274.5315 -200.9016; -167.3579 -33.2092), H18=1.0e+003·(0.0893 -3.8440; 1.5970 4.3553),
P2=P3=P4=P7=P8=P15=P16=P19=P20=P21=H2=H3=H4=H7=H8=H15=H16=H19=H20=H21=0.
Table 3. Allowable upper bounds τM for given τm.

Cases            τm=2    τm=4    τm=6    τm=8    τm=10   τm=20
By Theorem 3.7   88      90      92      94      96      105
5. Conclusion
By decomposing some connection weight matrices and combining this with the linear matrix inequality (LMI) technique, some new augmented Lyapunov-Krasovskii functionals have been constructed, and a series of new, improved sufficient conditions ensuring exponential stability or robust exponential stability has been obtained. Numerical examples show that the new criteria derived in this paper are less conservative than some previous results in the cited references.
Acknowledgments
This work was supported by the program for New Century Excellent Talents in University (NCET-06-0811) and the Research Fund for the Doctoral Program of Guizhou College of Finance and Economics (200702).
References

[1] J. Yu, K. Zhang, S. Fei, and T. Li, "Simplified exponential stability analysis for recurrent neural networks with discrete and distributed time-varying delays," 2008, vol. 205, no. 1, pp. 465–474. doi:10.1016/j.amc.2008.08.022.
[2] J. Wang, L. Huang, and Z. Guo, "Dynamical behavior of delayed Hopfield neural networks with discontinuous activations," 2009, vol. 33, no. 4, pp. 1793–1802. MR2488248.
[3] Y. Xia, Z. Huang, and M. Han, "Exponential p-stability of delayed Cohen-Grossberg-type BAM neural networks with impulses," 2008, vol. 38, no. 3, pp. 806–818. doi:10.1016/j.chaos.2007.01.009.
[4] M. S. Ali and P. Balasubramaniam, "Stability analysis of uncertain fuzzy Hopfield neural networks with time delays," 2009, vol. 14, no. 6, pp. 2776–2783. doi:10.1016/j.cnsns.2008.09.024.
[5] H. Wu and C. Shan, "Stability analysis for periodic solution of BAM neural networks with discontinuous neuron activations and impulses," 2009, vol. 33, pp. 2564–2574.
[6] Y. Li and X. Fan, "Existence and globally exponential stability of almost periodic solution for Cohen-Grossberg BAM neural networks with variable coefficients," 2009, vol. 33, no. 4, pp. 2114–2120. MR2488268.
[7] O. M. Kwon and J. H. Park, "Improved delay-dependent stability criterion for neural networks with time-varying delays," 2009, vol. 373, no. 5, pp. 529–535. doi:10.1016/j.physleta.2008.12.005.
[8] W. Xiong, L. Song, and J. Cao, "Adaptive robust convergence of neural networks with time-varying delays," 2008, vol. 9, no. 4, pp. 1283–1291. doi:10.1016/j.nonrwa.2007.02.017.
[9] H. J. Cho and J. H. Park, "Novel delay-dependent robust stability criterion of delayed cellular neural networks," 2007, vol. 32, no. 3, pp. 1194–1200. doi:10.1016/j.chaos.2005.11.040.
[10] Q. Song and J. Cao, "Global robust stability of interval neural networks with multiple time-varying delays," 2007, vol. 74, no. 1, pp. 38–46. doi:10.1016/j.matcom.2006.06.030.
[11] T. Li, L. Guo, and C. Sun, "Robust stability for neural networks with time-varying delays and linear fractional uncertainties," 2007, vol. 71, pp. 421–427.
[12] V. Singh, "Improved global robust stability of interval delayed neural networks via split interval: generalizations," 2008, vol. 206, no. 1, pp. 290–297. doi:10.1016/j.amc.2008.08.036.
[13] Y. Liu, Z. Wang, and X. Liu, "Robust stability of discrete-time stochastic neural networks with time-varying delays," 2008, vol. 71, pp. 823–833.
[14] Y. Liu et al., "Discrete-time recurrent neural networks with time-varying delays: exponential stability analysis," 2007, vol. 362, pp. 480–488.
[15] Q. Song and Z. Wang, "A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays," 2007, vol. 368, pp. 134–145.
[16] B. Zhang, S. Xu, and Y. Zou, "Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays," 2008, vol. 72, pp. 321–330.
[17] J. Yu, K. Zhang, and S. Fei, "Exponential stability criteria for discrete-time recurrent neural networks with time-varying delay," Nonlinear Analysis: Real World Applications, in press. doi:10.1016/j.nonrwa.2008.10.053.
[18] H. Zhao and L. Wang, "Stability and bifurcation for discrete-time Cohen-Grossberg neural network," 2006, vol. 179, no. 2, pp. 787–798. doi:10.1016/j.amc.2005.11.148.
[19] X. Liu, "Discrete-time BAM neural networks with variable delays," 2007, vol. 367, pp. 322–330.
[20] H. Zhao, L. Wang, and C. Ma, "Hopf bifurcation and stability analysis on discrete-time Hopfield neural network with delay," 2008, vol. 9, no. 1, pp. 103–113. doi:10.1016/j.nonrwa.2006.09.005.
[21] H. Gao and T. Chen, "New results on stability of discrete-time systems with time-varying state delay," 2007, vol. 52, no. 2, pp. 328–334. doi:10.1109/TAC.2006.890320.
[22] X. Liao and S. Guo, "Delay-dependent asymptotic stability of Cohen-Grossberg models with multiple time-varying delays," 2007, vol. 2007, Article ID 28960, 17 pages. doi:10.1155/2007/28960.
[23] Q. Zhang, X. Wei, and J. Xu, "On global exponential stability of discrete-time Hopfield neural networks with variable delays," 2007, vol. 2007, Article ID 67675, 9 pages. doi:10.1155/2007/67675.
[24] Y. Chen, W. Bi, and Y. Wu, "Delay-dependent exponential stability for discrete-time BAM neural networks with time-varying delays," 2008, vol. 2008, Article ID 421614, 14 pages. doi:10.1155/2008/421614.
[25] S. Stević, "Permanence for a generalized discrete neural network system," 2007, vol. 2007, Article ID 89413, 9 pages. doi:10.1155/2007/89413.
[26] Z. Wang, H. Shu, Y. Liu, D. W. C. Ho, and X. Liu, "Robust stability analysis of generalized neural networks with discrete and distributed time delays," 2006, vol. 30, no. 4, pp. 886–896. doi:10.1016/j.chaos.2005.08.166.
[27] Y. Liu, Z. Wang, and X. Liu, "Global exponential stability of generalized recurrent neural networks with discrete and distributed delays," 2006, vol. 19, pp. 667–675.
[28] T. N. Lee and U. L. Radovic, "General decentralized stabilization of large-scale linear continuous and discrete time-delay systems," 1987, vol. 46, no. 6, pp. 2127–2140. doi:10.1080/00207178708934039.
[29] L. Xie, "Output feedback H∞ control of systems with parameter uncertainty," 1996, vol. 63, no. 4, pp. 741–750. doi:10.1080/00207179608921866.
[30] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, vol. 15 of SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994, xii+193 pages. MR1284712.