The purpose of this paper is to investigate delay-dependent stability analysis for discrete-time neural networks with interval time-varying delays. Based on the Lyapunov method, improved delay-dependent criteria for the stability of the networks are derived in terms of linear matrix inequalities (LMIs) by constructing a suitable Lyapunov-Krasovskii functional and utilizing the reciprocally convex approach. Also, a new activation condition, which has not been considered in the literature, is proposed and utilized in the derivation of the stability criteria. Two numerical examples are given to illustrate the effectiveness of the proposed method.
1. Introduction
Neural networks have received increasing attention from researchers in various fields of science and engineering such as moving image reconstruction, signal processing, pattern recognition, and fixed-point computation. In hardware implementations of such systems, time delays naturally arise due to the finite information processing speed and the finite switching speed of amplifiers. It is well known that time delay often causes undesirable dynamic behaviors such as performance degradation, oscillation, or even instability. Since ensuring the stability of neural networks is a prerequisite for their application to various fields such as information science and biological systems, the stability of neural networks with time delay has been a challenging issue [1–10]. Also, these days, most systems are implemented with digital computers (usually microprocessors or microcontrollers) together with the necessary input/output hardware. A fundamental characteristic of the digital computer is that it produces computed answers at discrete steps. Therefore, discrete-time modeling with time delay plays an important role in many fields of science and engineering applications. In this regard, various approaches to delay-dependent stability criteria for discrete-time neural networks with time delay have been investigated in the literature [11–16].
In the field of delay-dependent stability analysis, one of the hot issues attracting researchers' attention is how to enlarge the feasible region of stability criteria. The most widely used index for checking the conservatism of stability criteria is the maximum delay bound guaranteeing the global exponential stability of the concerned networks. Thus, many researchers have devoted considerable effort to new approaches that enhance the feasible region of stability conditions. In this regard, Liu et al. [11] proposed a unified linear matrix inequality approach to establish sufficient conditions for the discrete-time neural networks to be globally exponentially stable by employing a Lyapunov-Krasovskii functional. In [12, 13], the existence and stability of the periodic solution for discrete-time recurrent neural networks with time-varying delays were studied under a more general description of the activation functions by utilizing the free-weighting matrix method. Based on the idea of delay partitioning, a new stability criterion for discrete-time recurrent neural networks with time-varying delays was derived in [14]. Recently, some novel delay-dependent sufficient conditions for guaranteeing the stability of discrete-time stochastic recurrent neural networks with time-varying delays were presented in [15] by introducing the midpoint of the time delay’s variational interval. Very recently, via a new Lyapunov functional, a novel stability criterion for discrete-time recurrent neural networks with time-varying delays was proposed in [16], and its improvement on the feasible region of the stability criterion was shown through numerical examples. However, there is still room for further improvement in delay-dependent stability criteria for discrete-time neural networks with time-varying delays.
Motivated by the above discussions, the problem of new delay-dependent stability criteria for discrete-time neural networks with time-varying delays is considered in this paper. It should be noted that delay-dependent analysis has received more attention than delay-independent analysis because the sufficient conditions for delay-dependent analysis make use of the information on the size of the time delay [17, 18]. That is, the former is generally less conservative than the latter. By construction of a suitable Lyapunov-Krasovskii functional and utilization of the reciprocally convex approach [19], a new stability criterion is derived in Theorem 3.1. Based on the results of Theorem 3.1 and motivated by the work of [20], a further improved stability criterion will be introduced in Theorem 3.4 by applying zero equalities to the results of Theorem 3.1. Finally, two numerical examples are included to show the effectiveness of the proposed method.
Notation.
ℝn is the n-dimensional Euclidean space, and ℝm×n denotes the set of all m×n real matrices. For symmetric matrices X and Y, X>Y (resp., X≥Y) means that the matrix X-Y is positive definite (resp., nonnegative). X⊥ denotes a basis for the null space of X. I denotes the identity matrix with appropriate dimensions. ∥·∥ refers to the Euclidean vector norm or the induced matrix norm. diag{⋯} denotes the block diagonal matrix. ⋆ represents the elements below the main diagonal of a symmetric matrix.
2. Problem Statements
Consider the following discrete-time neural networks with interval time-varying delays:
(2.1)y(k+1)=Ay(k)+W0g(y(k))+W1g(y(k-h(k)))+b,
where n denotes the number of neurons in a neural network, y(k)=[y1(k),…,yn(k)]T∈ℝn is the neuron state vector, g(k)=[g1(k),…,gn(k)]T∈ℝn denotes the neuron activation function vector, b=[b1,…,bn]T∈ℝn is a constant external input vector, A=diag{a1,…,an}∈ℝn×n(0≤ai<1) is the state feedback matrix, Wi∈ℝn×n(i=0,1) are the connection weight matrices, and h(k) is the interval time-varying delay satisfying
(2.2)0<hm≤h(k)≤hM,
where hm and hM are known positive integers.
In this paper, it is assumed that the activation functions satisfy the following assumption.
Assumption 2.1.
The neuron activation functions, gi(·), are continuous and bounded, and for any u,v∈ℝ with u≠v,
(2.3)ki-≤gi(u)-gi(v)u-v≤ki+,i=1,2,…,n,
where ki- and ki+ are known constant scalars.
As usual, a vector y*=[y1*,…,yn*]T is said to be an equilibrium point of system (2.1) if it satisfies y*=Ay*+W0g(y*)+W1g(y*)+b. From [10], under Assumption 2.1, it is not difficult to ensure the existence of an equilibrium point of the system (2.1) by using Brouwer’s fixed-point theorem. In the sequel, we will establish a condition ensuring that the equilibrium point y* of system (2.1) is globally exponentially stable; that is, there exist two constants α>0 and 0<β<1 such that ∥y(k)-y*∥≤αβksup-hM≤s≤0∥y(s)-y*∥. To confirm this, refer to [16]. For simplicity, in the stability analysis of the network (2.1), the equilibrium point y*=[y1*,…,yn*]T is shifted to the origin by the transformation x(k)=y(k)-y*, which transforms the network (2.1) into the following form:
(2.4)x(k+1)=Ax(k)+W0f(x(k))+W1f(x(k-h(k))),
where x(k)=[x1(k),…,xn(k)]T∈ℝn is the state vector of the transformed network, and f(x(k))=[f1(x1(k)),…,fn(xn(k))]T∈ℝn is the transformed neuron activation function vector with fi(xi(k))=gi(xi(k)+yi*)-gi(yi*) and fi(0)=0. From Assumption 2.1, it should be noted that the activation functions fi(·)(i=1,…,n) satisfy the following condition [10]:
(2.5)ki-≤fi(u)-fi(v)u-v≤ki+,∀u,v∈ℝ,u≠v,
which is equivalent to
(2.6)[fi(u)-fi(v)-ki-(u-v)][fi(u)-fi(v)-ki+(u-v)]≤0,
and if v=0, then the following inequality holds:
(2.7)[fi(u)-ki-(u)][fi(u)-ki+(u)]≤0.
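As a quick sanity check, the sector condition (2.7) can be verified numerically for a standard sigmoidal nonlinearity; the choice f(u) = tanh(u) with k⁻ = 0 and k⁺ = 1 below is purely illustrative and not taken from the examples of this paper.

```python
import numpy as np

# Illustrative activation: f(u) = tanh(u) lies in the sector [k_minus, k_plus] = [0, 1].
k_minus, k_plus = 0.0, 1.0
f = np.tanh

rng = np.random.default_rng(0)
u = rng.uniform(-5.0, 5.0, size=10_000)

# Condition (2.7): [f(u) - k_minus*u] * [f(u) - k_plus*u] <= 0 for every u.
lhs = (f(u) - k_minus * u) * (f(u) - k_plus * u)
assert np.all(lhs <= 1e-12)
```

The same check applies to the incremental form (2.6) by replacing u with u − v and f(u) with f(u) − f(v).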
Here, the aim of this paper is to investigate the delay-dependent stability analysis of the network (2.4) with interval time-varying delays. In order to do this, the following definition and lemmas are needed.
Definition 2.2 (see [16]).
The discrete-time neural network (2.4) is said to be globally exponentially stable if there exist two constants α>0 and 0<β<1 such that
(2.8)∥x(k)∥≤αβksup-hM≤s≤0∥x(s)∥.
Lemma 2.3 ((Jensen inequality) [21]).
For any constant matrix 0<M=MT∈ℝn×n, integers hm and hM satisfying 1≤hm≤hM, and vector function x(k)∈ℝn, the following inequality holds:
(2.9)-(hM-hm+1)∑k=hmhMxT(k)Mx(k)≤-(∑k=hmhMx(k))TM(∑k=hmhMx(k)).
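The Jensen-type bound of Lemma 2.3 can be checked numerically in the equivalent form (Σx)ᵀM(Σx) ≤ (hM−hm+1)Σ xᵀMx; the dimensions and delay bounds below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, hm, hM = 3, 2, 7           # illustrative dimension and summation bounds
N = hM - hm + 1               # number of summands

# Random positive definite M = M^T > 0.
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)

x = rng.standard_normal((N, n))           # x(hm), ..., x(hM)
s = x.sum(axis=0)                         # sum_k x(k)

lhs = s @ M @ s                           # (Σ x)^T M (Σ x)
rhs = N * sum(xi @ M @ xi for xi in x)    # (hM - hm + 1) Σ x^T M x
assert lhs <= rhs + 1e-9                  # Lemma 2.3 holds
```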
Lemma 2.4 ((Finsler’s lemma) [22]).
Let ζ∈ℝn, Φ=ΦT∈ℝn×n, and Γ∈ℝm×n such that rank(Γ)<n. Then the following statements are equivalent:
(i) ζTΦζ<0 for all ζ≠0 such that Γζ=0,
(ii) Γ⊥TΦΓ⊥<0,
(iii) there exists 𝒳∈ℝn×m such that Φ+𝒳Γ+ΓT𝒳T<0.
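A small numerical illustration of Lemma 2.4, sketching the chain (iii) ⇒ (ii) ⇒ (i): Φ is constructed so that statement (iii) holds for one particular 𝒳, after which the projected inequality and the constrained quadratic form are both checked. The dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 2
Gamma = rng.standard_normal((m, n))        # rank(Gamma) = m < n (a.s.)

# Orthonormal basis of the null space of Gamma via SVD.
_, _, Vt = np.linalg.svd(Gamma)
G_perp = Vt[m:].T                          # n x (n-m); Gamma @ G_perp ≈ 0
assert np.allclose(Gamma @ G_perp, 0, atol=1e-10)

# Build Phi so that (iii) holds with this particular X:
# Phi + X Gamma + Gamma^T X^T = -I < 0.
X = rng.standard_normal((n, m))
Phi = -np.eye(n) - X @ Gamma - Gamma.T @ X.T

# (ii): G_perp^T Phi G_perp < 0 (the Gamma-terms vanish on the null space).
P = G_perp.T @ Phi @ G_perp
assert np.max(np.linalg.eigvalsh(P)) < 0

# (i): zeta^T Phi zeta < 0 for zeta != 0 with Gamma zeta = 0.
zeta = G_perp @ rng.standard_normal(n - m)
assert zeta @ Phi @ zeta < 0
```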
3. Main Results
In this section, new stability criteria for the network (2.4) will be proposed. For the sake of simplicity on matrix representation, ei∈ℝ10n×n(i=1,…,10) are defined as block entry matrices (e.g., e2=[0,I,0,…,0︸8]T). The notations of several matrices are defined as
(3.1)hd=hM-hm,ζ(k)=[xT(k),xT(k-hm),xT(k-h(k)),xT(k-hM),ΔxT(k),ΔxT(k-hm),ΔxT(k-hM),fT(x(k)),fT(x(k-h(k))),fT(x(k+1))]T,χ(k)=[xT(k),xT(k-hm),xT(k-hM),fT(x(k))]T,ξ(k)=[xT(k),ΔxT(k)]T,Γ=[(A-I),0,0,0,-I,0,0,W0,W1,0],Π1=[e1+e5,e2+e6,e4+e7,e10],Π2=[e1,e2,e4,e8],Π3=[e1,e5],Π4=[e2,e6],Π5=[e4,e7],Π6=[e2-e3,e3-e4],Π7=[e1,e8],Π8=[e3,e9],Π9=[e1+e5,e10],Ξ1=Π1RΠ1T-Π2RΠ2T,Ξ2=Π3NΠ3T+Π4(M-N)Π4T-Π5MΠ5T,Ξ3=e5(hm2Q1)e5T+e5(hd2Q2)e5T+e1(hmP1)e1T-e2(hmP1)e2T+hd∑i=23(eiPieiT-ei+1Piei+1T),Ξ4=-(e1-e2)(Q1+P1)(e1-e2)T-Π6[Q2+P2S⋆Q2+P3]Π6T,Ξ5=Π3(hm2Q3)Π3T+Π3(hd2Q4)Π3T,Φ=∑i=15Ξi,Θ=∑i=13Π6+i[-2KmHiKp(Km+Kp)Hi⋆-2Hi]Π6+iT.
Now, the first main result is given by the following theorem.
Theorem 3.1.
For given positive integers hm and hM, diagonal matrices Km=diag{k1-,…,kn-} and Kp=diag{k1+,…,kn+}, the network (2.4) is globally exponentially stable for hm≤h(k)≤hM, if there exist positive definite matrices R∈ℝ4n×4n, M∈ℝ2n×2n, N∈ℝ2n×2n, Qi∈ℝn×n, Qi+2∈ℝ2n×2n(i=1,2), positive diagonal matrices Hi∈ℝn×n(i=1,2,3), any symmetric matrices Pi∈ℝn×n(i=1,2,3), and any matrix S∈ℝn×n satisfying the following LMIs:
(3.2)[Γ⊥]T(Φ+Θ)[Γ⊥]<0,(3.3)[Q2+P2S⋆Q2+P3]≥0,(3.4)Q3+[0P1⋆0]>0,Q4+[0P2⋆0]>0,Q4+[0P3⋆0]>0,
where Φ, Θ, and Γ are defined in (3.1).
Proof.
Define the forward difference of x(k) and V(k) as
(3.5)Δx(k)=x(k+1)-x(k),ΔV(k)=V(k+1)-V(k).
Let us consider the following Lyapunov-Krasovskii functional candidate as
(3.6)V(k)=V1(k)+V2(k)+V3(k)+V4(k),
where
(3.7)V1(k)=χT(k)Rχ(k),V2(k)=∑s=k-hmk-1ξT(s)Nξ(s)+∑s=k-hMk-hm-1ξT(s)Mξ(s),V3(k)=hm∑s=-hm-1∑u=k+sk-1ΔxT(u)Q1Δx(u)+hd∑s=-hM-hm-1∑u=k+sk-1ΔxT(u)Q2Δx(u),V4(k)=hm∑s=-hm-1∑u=k+sk-1ξT(u)Q3ξ(u)+hd∑s=-hM-hm-1∑u=k+sk-1ξT(u)Q4ξ(u).
The forward differences of V1(k) and V2(k) are calculated as
(3.8)ΔV1(k)=χT(k+1)Rχ(k+1)-χT(k)Rχ(k)=[x(k)+Δx(k)x(k-hm)+Δx(k-hm)x(k-hM)+Δx(k-hM)f(x(k+1))]TR[x(k)+Δx(k)x(k-hm)+Δx(k-hm)x(k-hM)+Δx(k-hM)f(x(k+1))]-χT(k)Rχ(k)=ζT(k)(Π1RΠ1T-Π2RΠ2T)ζ(k)=ζT(k)Ξ1ζ(k),(3.9)ΔV2(k)=ξT(k)Nξ(k)-ξT(k-hm)Nξ(k-hm)+ξT(k-hm)Mξ(k-hm)-ξT(k-hM)Mξ(k-hM)=ζT(k)(Π3NΠ3T+Π4(M-N)Π4T-Π5MΠ5T)ζ(k)=ζT(k)Ξ2ζ(k).
By calculating the forward differences of V3(k) and V4(k), we get
(3.10)ΔV3(k)=hm2ΔxT(k)Q1Δx(k)-hm∑s=k-hmk-1ΔxT(s)Q1Δx(s)+hd2ΔxT(k)Q2Δx(k)-hd∑s=k-hMk-hm-1ΔxT(s)Q2Δx(s),(3.11)ΔV4(k)=hm2ξT(k)Q3ξ(k)-hm∑s=k-hmk-1ξT(s)Q3ξ(s)+hd2ξT(k)Q4ξ(k)-hd∑s=k-hMk-hm-1ξT(s)Q4ξ(s).
For any matrix P, integers l1 and l2 satisfying l1<l2, and a vector function x(s):[k-l2,k-l1-1]→ℝn where k is the discrete time, the following equality holds:
(3.12)xT(k-l1)Px(k-l1)-xT(k-l2)Px(k-l2)=∑s=k-l2k-l1-1(xT(s+1)Px(s+1)-xT(s)Px(s)).
It should be noted that
(3.13)xT(s+1)Px(s+1)-xT(s)Px(s)=(Δx(s)+x(s))TP(Δx(s)+x(s))-xT(s)Px(s)=ΔxT(s)PΔx(s)+2xT(s)PΔx(s).
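The telescoping identities (3.12)-(3.13) are elementary; a one-line numerical check of (3.13) for a random symmetric P (dimension illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
P = rng.standard_normal((n, n))
P = P + P.T                                  # any symmetric P

x_s = rng.standard_normal(n)                 # x(s)
x_s1 = rng.standard_normal(n)                # x(s+1)
dx = x_s1 - x_s                              # Δx(s)

# (3.13): x(s+1)^T P x(s+1) - x(s)^T P x(s) = Δx^T P Δx + 2 x^T P Δx.
lhs = x_s1 @ P @ x_s1 - x_s @ P @ x_s
rhs = dx @ P @ dx + 2 * x_s @ P @ dx
assert np.isclose(lhs, rhs)
```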
From the equalities (3.12) and (3.13), by choosing (l1,l2) as (0,hm), (hm,h(k)) and (h(k),hM), the following three zero equations hold with any symmetric matrices P1, P2, and P3:
(3.14)0=xT(k)(hmP1)x(k)-xT(k-hm)(hmP1)x(k-hm)-hm∑s=k-hmk-1(ΔxT(s)P1Δx(s)+2xT(s)P1Δx(s)),(3.15)0=xT(k-hm)(hdP2)x(k-hm)-xT(k-h(k))(hdP2)x(k-h(k))-hd∑s=k-h(k)k-hm-1(ΔxT(s)P2Δx(s)+2xT(s)P2Δx(s)),(3.16)0=xT(k-h(k))(hdP3)x(k-h(k))-xT(k-hM)(hdP3)x(k-hM)-hd∑s=k-hMk-h(k)-1(ΔxT(s)P3Δx(s)+2xT(s)P3Δx(s)).
By adding three zero equalities into the results of ΔV3(k), we have
(3.17)ΔV3(k)=ζT(k)(e5(hm2Q1)e5T+e5(hd2Q2)e5T+e1(hmP1)e1T-e2(hmP1)e2T+hd∑i=23(eiPieiT-ei+1Piei+1T))ζ(k)+Σ+Υ=ζT(k)Ξ3ζ(k)+Σ+Υ,
where
(3.18)Σ=-hm∑s=k-hmk-1ΔxT(s)(Q1+P1)Δx(s)-hd∑s=k-h(k)k-hm-1ΔxT(s)(Q2+P2)Δx(s)-hd∑s=k-hMk-h(k)-1ΔxT(s)(Q2+P3)Δx(s),(3.19)Υ=-hm∑s=k-hmk-12xT(s)P1Δx(s)-hd∑s=k-h(k)k-hm-12xT(s)P2Δx(s)-hd∑s=k-hMk-h(k)-12xT(s)P3Δx(s).
By Lemma 2.3, when hm<h(k)<hM, the sum term Σ in (3.18) is bounded as
(3.20)Σ≤-(∑s=k-hmk-1Δx(s))T(Q1+P1)(∑s=k-hmk-1Δx(s))-(∑s=k-h(k)k-hm-1Δx(s))T(11-α(k))(Q2+P2)(∑s=k-h(k)k-hm-1Δx(s))-(∑s=k-hMk-h(k)-1Δx(s))T(1α(k))(Q2+P3)(∑s=k-hMk-h(k)-1Δx(s))=-ζT(k)(e1-e2)(Q1+P1)(e1-e2)Tζ(k)-ζT(k)Π6[11-α(k)(Q2+P2)0⋆1α(k)(Q2+P3)]Π6Tζ(k),
where α(k)=(hM-h(k))/hd.
By the reciprocally convex approach [19], if the inequality (3.3) holds, then the following inequality is satisfied:
(3.21)[-α(k)1-α(k)I0⋆1-α(k)α(k)I][Q2+P2S⋆Q2+P3][-α(k)1-α(k)I0⋆1-α(k)α(k)I]≥0,
which implies
(3.22)[11-α(k)(Q2+P2)0⋆1α(k)(Q2+P3)]≥[Q2+P2S⋆Q2+P3].
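The lower-bounding step (3.21)-(3.22) of the reciprocally convex approach can be checked numerically: whenever the block matrix of (3.3) is positive semidefinite, the matrix inequality (3.22) holds for every α(k)∈(0,1). The blocks below are illustrative stand-ins for Q2+P2, Q2+P3, and S, chosen so that (3.3) holds.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

# Stand-ins for Q2+P2, Q2+P3 and the coupling matrix S in (3.3).
A = 2 * np.eye(n)
B = 2 * np.eye(n)
S = rng.uniform(-0.5, 0.5, (n, n))              # small enough that (3.3) holds

block = np.block([[A, S], [S.T, B]])
assert np.min(np.linalg.eigvalsh(block)) >= 0   # LMI (3.3): block >= 0

# Then (3.22): diag(A/(1-alpha), B/alpha) >= block for every alpha in (0,1).
for alpha in (0.1, 0.5, 0.9):
    big = np.block([
        [A / (1 - alpha), np.zeros((n, n))],
        [np.zeros((n, n)), B / alpha],
    ])
    assert np.min(np.linalg.eigvalsh(big - block)) >= -1e-10
```

The two conditions are in fact equivalent: a congruence transformation with diag(√(α/(1−α))I, √((1−α)/α)I) maps the difference matrix back to the block matrix of (3.3).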
It should be pointed out that when h(k)=hm or h(k)=hM, we have ∑s=k-h(k)k-hm-1Δx(s)=x(k-hm)-x(k-h(k))=0 or ∑s=k-hMk-h(k)-1Δx(s)=x(k-h(k))-x(k-hM)=0, respectively. Thus, the following inequality still holds:
(3.23)Σ≤ζT(k)(-(e1-e2)(Q1+P1)(e1-e2)T-Π6[Q2+P2S⋆Q2+P3]Π6T)ζ(k)=ζT(k)Ξ4ζ(k).
Then, ΔV3+ΔV4 has an upper bound as follows:
(3.24)ΔV3+ΔV4≤ζT(k)(Ξ3+Ξ4+Π3(hm2Q3)Π3T+Π3(hd2Q4)Π3T︸Ξ5)ζ(k)-hm∑s=k-hmk-1ξT(s){Q3+[0P1⋆0]}ξ(s)-hd∑s=k-h(k)k-hm-1ξT(s){Q4+[0P2⋆0]}ξ(s)-hd∑s=k-hMk-h(k)-1ξT(s){Q4+[0P3⋆0]}ξ(s).
Here, if the inequalities (3.4) hold, then ΔV3+ΔV4 is bounded as
(3.25)ΔV3+ΔV4≤ζT(k)(Ξ3+Ξ4+Ξ5)ζ(k).
From (2.7), for any positive diagonal matrices Hi=diag{hi1,…,hin}(i=1,2,3), the following inequality holds:
(3.26)0≤-2∑i=1nh1i[fi(xi(k))-ki-xi(k)][fi(xi(k))-ki+xi(k)]-2∑i=1nh2i[fi(xi(k-h(k)))-ki-xi(k-h(k))][fi(xi(k-h(k)))-ki+xi(k-h(k))]-2∑i=1nh3i[fi(xi(k+1))-ki-xi(k+1)][fi(xi(k+1))-ki+xi(k+1)]=ζT(k)(∑i=13Π6+i[-2KmHiKp(Km+Kp)Hi⋆-2Hi]Π6+iT)ζ(k)=ζT(k)Θζ(k).
Therefore, from (3.8), (3.9), (3.25), and (3.26), and by application of the S-procedure [23], ΔV has a new upper bound as
(3.27)ΔV≤ζT(k)(∑i=15Ξi︸Φ+Θ)ζ(k),
where Φ and Θ are defined in (3.1).
Also, the system (2.4) with the augmented vector ζ(k) can be rewritten as
(3.28)Γζ(k)=0,
where Γ is defined in (3.1).
Then, a delay-dependent stability condition for the system (2.4) is
(3.29)ζT(k)(Φ+Θ)ζ(k)<0subjecttoΓζ(k)=0.
Finally, by utilizing Lemma 2.4, the condition (3.29) is equivalent to the following inequality
(3.30)[Γ⊥]T(Φ+Θ)[Γ⊥]<0.
Hence, if the LMIs (3.2)–(3.4) hold, then the inequality (3.30) is satisfied. From the equivalence of (ii) and (iii) in Lemma 2.4, the stability condition (3.29) holds if and only if there exists a free matrix 𝒳 with appropriate dimensions such that
(3.31)Φ+Θ+𝒳Γ+ΓT𝒳T︸Ψ<0.
Therefore, from (3.31), there exists a sufficiently small scalar ρ>0 such that
(3.32)ΔV≤ζT(k)Ψζ(k)<-ρ∥x(k)∥2.
By using a method similar to that of [11, 12], the system (2.4) is globally exponentially stable in the sense of Definition 2.2 for any time-varying delay hm≤h(k)≤hM. This completes the proof.
Remark 3.2.
In Theorem 3.1, the stability condition is derived by utilizing a new augmented vector ζ(k) including f(x(k+1)). This state vector f(x(k+1)), which may provide more information on the dynamic behavior of the system (2.4), has not been utilized as an element of the augmented vector ζ(k) in the existing literature. Correspondingly, the state vector f(x(k+1)) is also included in (3.26).
Remark 3.3.
As mentioned in [10], the activation functions of the transformed system (2.4) also satisfy the condition (2.6). In Theorem 3.4, by choosing (u,v) in (2.6) as (x(k),x(k-h(k))), (x(k-h(k)),x(k+1)), and (x(k+1),x(k)), more information on cross-terms among the states f(x(k)), f(x(k-h(k))), f(x(k+1)), x(k), and x(k-h(k)) will be utilized, which may lead to less conservative stability criteria. In the stability analysis of discrete-time neural networks with time-varying delays, this consideration has not been proposed in the existing literature. Through two numerical examples, it will be shown that the newly proposed activation condition may enhance the feasible region of the stability criterion, by comparing the maximum delay bounds with those obtained by Theorem 3.1.
As mentioned in Remark 3.3, from (2.6), we add the following new inequality, which holds for any positive diagonal matrices Hi=diag{hi1,…,hin}(i=4,5,6):
(3.33)0≤-2∑i=1nh4i[fi(xi(k))-fi(xi(k-h(k)))-ki-(xi(k)-xi(k-h(k)))]×[fi(xi(k))-fi(xi(k-h(k)))-ki+(xi(k)-xi(k-h(k)))]-2∑i=1nh5i[fi(xi(k-h(k)))-fi(xi(k+1))-ki-(xi(k-h(k))-xi(k)-Δxi(k))]×[fi(xi(k-h(k)))-fi(xi(k+1))-ki+(xi(k-h(k))-xi(k)-Δxi(k))]-2∑i=1nh6i[fi(xi(k+1))-fi(xi(k))-ki-Δxi(k)]×[fi(xi(k+1))-fi(xi(k))-ki+Δxi(k)]=ζT(k)(∑i=13Π9+i[-2KmH3+iKp(Km+Kp)H3+i⋆-2H3+i]Π9+iT)ζ(k)=ζT(k)Ωζ(k),
where Π10=[e1-e3,e8-e9], Π11=[e3-e1-e5,e9-e10], and Π12=[e5,e10-e8]. We will add this inequality (3.33) in Theorem 3.4. Now, we have the following theorem.
Theorem 3.4.
For given positive integers hm and hM, diagonal matrices Km=diag{k1-,…,kn-} and Kp=diag{k1+,…,kn+}, the network (2.4) is globally exponentially stable for hm≤h(k)≤hM, if there exist positive definite matrices R∈ℝ4n×4n, M∈ℝ2n×2n, N∈ℝ2n×2n, Qi∈ℝn×n, Qi+2∈ℝ2n×2n(i=1,2), positive diagonal matrices Hi∈ℝn×n(i=1,…,6), any symmetric matrices Pi∈ℝn×n(i=1,2,3), and any matrix S∈ℝn×n satisfying the following LMIs:
(3.34)[Γ⊥]T(Φ+Θ+Ω)[Γ⊥]<0,(3.35)[Q2+P2S⋆Q2+P3]≥0,(3.36)Q3+[0P1⋆0]>0,Q4+[0P2⋆0]>0,Q4+[0P3⋆0]>0,
where Φ, Γ, and Θ are defined in (3.1), and Ω is defined in (3.33).
Proof.
With the same Lyapunov-Krasovskii functional candidate as in (3.6), using a procedure similar to (3.8)–(3.26) together with the additional inequality (3.33), the derivation of the conditions (3.34)–(3.36) is straightforward from the proof of Theorem 3.1, so it is omitted.
4. Numerical Examples
In this section, we provide two numerical examples to illustrate the effectiveness of the proposed criteria in this paper.
Example 4.1.
Consider the discrete-time neural network (2.4) where
(4.1)A=[0.40000.30000.3],W0=[0.2-0.20.10-0.30.2-0.2-0.1-0.2],W1=[-0.20.10-0.20.30.10.1-0.20.3].
The activation functions satisfy Assumption 2.1 with
(4.2)Km=diag{0,-0.4,-0.2},Kp=diag{0.6,0,0}.
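Although the stability certificate itself comes from the LMIs of Theorems 3.1 and 3.4, the conclusion can be illustrated by simulating (2.4) directly. The activation functions below (scaled tanh components whose slopes stay inside the sectors given by Km and Kp) and the random delay sequence are illustrative choices, not taken from the paper.

```python
import numpy as np

# System matrices of Example 4.1.
A = np.diag([0.4, 0.3, 0.3])
W0 = np.array([[0.2, -0.2, 0.1],
               [0.0, -0.3, 0.2],
               [-0.2, -0.1, -0.2]])
W1 = np.array([[-0.2, 0.1, 0.0],
               [-0.2, 0.3, 0.1],
               [0.1, -0.2, 0.3]])

# One activation vector compatible with Km = diag{0,-0.4,-0.2}, Kp = diag{0.6,0,0}:
# each component's slope stays inside its sector (illustrative choice).
def f(x):
    return np.array([0.6 * np.tanh(x[0]),
                     -0.4 * np.tanh(x[1]),
                     -0.2 * np.tanh(x[2])])

rng = np.random.default_rng(5)
hm, hM, steps = 2, 14, 2000                 # delay range from the hm = 2 column
hist = [rng.uniform(-1, 1, 3) for _ in range(hM + 1)]  # x(-hM), ..., x(0)

for k in range(steps):
    h = rng.integers(hm, hM + 1)            # arbitrary h(k) in [hm, hM]
    x_new = A @ hist[-1] + W0 @ f(hist[-1]) + W1 @ f(hist[-1 - h])
    hist.append(x_new)

final_norm = np.linalg.norm(hist[-1])       # decays toward 0: origin is attracting
```

Here the crude contraction estimate ‖A‖ + ‖W0·diag(0.6,0.4,0.2)‖ + ‖W1·diag(0.6,0.4,0.2)‖ < 1 already guarantees geometric decay for this particular activation choice, consistent with the LMI-based conclusion.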
For various hm, the comparison of maximum delay bounds (hM) obtained by Theorems 3.1 and 3.4 with those of [12, 16] is conducted in Table 1. From Table 1, it can be confirmed that the results of Theorem 3.1 give a larger delay bound than those of [12] and are equal to the results of [16]. However, the results obtained by Theorem 3.4 are better than the results of [16] and Theorem 3.1, which supports the effectiveness of the proposed idea mentioned in Remark 3.3.
Maximum bounds hM with different hm (Example 4.1).

| Methods            | hm = 2 | hm = 4 | hm = 6 | hm = 10 |
| Song and Wang [12] | 6      | 8      | 10     | 14      |
| Wu et al. [16]     | 12     | 14     | 16     | 20      |
| Theorem 3.1        | 12     | 14     | 16     | 20      |
| Theorem 3.4        | 14     | 16     | 18     | 22      |
Example 4.2.
Consider the discrete-time neural network (2.4) having the following parameters:
(4.3)A=[0.800a],W0=[0.001000.005],W1=[-0.10.01-0.2-0.1],Km=0,Kp=I.
When a=0.9, for different values of hm, maximum delay bounds obtained by [12–14, 16] and our Theorems are listed in Table 2. From Table 2, it can be confirmed that all the results of Theorems 3.1 and 3.4 provide larger delay bounds than those of [12–14]. Also, our results are better than or equal to the results of [16]. For the case of a=0.7, another comparison of our results with those of [15, 16] is conducted in Table 3, which shows all the results obtained by Theorems 3.1 and 3.4 give larger delay bounds than those of [15, 16].
Maximum bounds hM with different hm and a = 0.9 (Example 4.2).

| Methods            | hm = 2 | hm = 4 | hm = 6 | hm = 8 | hm = 10 | hm = 15 |
| Song and Wang [12] | 11     | 11     | 12     | 13     | 14      | 17      |
| Zhang et al. [13]  | 11     | 12     | 13     | 14     | 16      | 19      |
| Song et al. [14]   | 15     | 16     | 17     | 18     | 19      | 22      |
| Wu et al. [16]     | 16     | 18     | 18     | 20     | 20      | 22      |
| Theorem 3.1        | 18     | 18     | 19     | 20     | 20      | 23      |
| Theorem 3.4        | 18     | 18     | 19     | 20     | 21      | 23      |
Maximum bounds hM with different hm and a = 0.7 (Example 4.2).

| Methods           | hm = 2 | hm = 4 | hm = 6 | hm = 8 | hm = 10 | hm = 15 | hm = 20 | hm = 100 | hm = 1000 |
| Zhang et al. [15] | 20     | 22     | 24     | 26     | 28      | 33      | 38      | 118      | 1018      |
| Wu et al. [16]    | 24     | 26     | 28     | 30     | 32      | 37      | 42      | 122      | 1022      |
| Theorem 3.1       | 29     | 31     | 32     | 34     | 36      | 41      | 46      | 126      | 1026      |
| Theorem 3.4       | 29     | 31     | 32     | 34     | 36      | 41      | 46      | 126      | 1026      |
5. Conclusions
In this paper, improved delay-dependent stability criteria were proposed for discrete-time neural networks with time-varying delays. In Theorem 3.1, by constructing a suitable Lyapunov-Krasovskii functional and utilizing some recent results introduced in [19, 20], a sufficient condition for guaranteeing the global exponential stability of discrete-time neural networks with interval time-varying delays was derived. Based on the results of Theorem 3.1, by constructing new inequalities for the activation functions, a further improved stability criterion was presented in Theorem 3.4. Via two numerical examples, the improvement of the proposed stability criteria has been successfully verified.
Acknowledgments
This paper was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012-0000479), and by a grant of the Korea Healthcare Technology R & D Project, Ministry of Health & Welfare, Republic of Korea (A100054).
References
[1] O. Faydasicok and S. Arik, "Equilibrium and stability analysis of delayed neural networks under parameter uncertainties," Applied Mathematics and Computation, vol. 218, no. 12, pp. 6716–6726, 2012. doi:10.1016/j.amc.2011.12.036
[2] O. Faydasicok and S. Arik, "Further analysis of global robust stability of neural networks with multiple time delays," Journal of the Franklin Institute, vol. 349, no. 3, pp. 813–825, 2012. doi:10.1016/j.jfranklin.2011.11.007
[3] T. Ensari and S. Arik, "New results for robust stability of dynamical neural networks with discrete time delays," Expert Systems with Applications, vol. 37, no. 8, pp. 5925–5930, 2010. doi:10.1016/j.eswa.2010.02.013
[4] C. Li, C. Li, and T. Huang, "Exponential stability of impulsive high-order Hopfield-type neural networks with delays and reaction-diffusion," International Journal of Computer Mathematics, vol. 88, no. 15, pp. 3150–3162, 2011. doi:10.1080/00207160.2011.594884
[5] C. J. Li, C. D. Li, T. Huang, and X. Liao, "Impulsive effects on stability of high-order BAM neural networks with time delays," Neurocomputing, vol. 74, no. 10, pp. 1541–1550, 2011. doi:10.1016/j.neucom.2010.12.028
[6] D. J. Lu and C. J. Li, "Exponential stability of stochastic high-order BAM neural networks with time delays and impulsive effects," Neural Computing and Applications, in press. doi:10.1007/s00521-012-0861-1
[7] C. D. Li, C. J. Li, and C. Liu, "Destabilizing effects of impulse in delayed BAM neural networks," Modern Physics Letters B, vol. 23, no. 29, pp. 3503–3513, 2009. doi:10.1142/S0217984909021569
[8] C. D. Li, S. Wu, G. G. Feng, and X. Liao, "Stabilizing effects of impulses in discrete-time delayed neural networks," IEEE Transactions on Neural Networks, vol. 22, no. 2, pp. 323–329, 2011. doi:10.1109/TNN.2010.2100084
[9] H. Wang and Q. Song, "Synchronization for an array of coupled stochastic discrete-time neural networks with mixed delays," Neurocomputing, vol. 74, no. 10, pp. 1572–1584, 2011. doi:10.1016/j.neucom.2011.01.014
[10] Y. Liu, Z. Wang, and X. Liu, "Global exponential stability of generalized recurrent neural networks with discrete and distributed delays," Neural Networks, vol. 19, no. 5, pp. 667–675, 2006. doi:10.1016/j.neunet.2005.03.015
[11] Y. Liu, Z. Wang, A. Serrano, and X. Liu, "Discrete-time recurrent neural networks with time-varying delays: exponential stability analysis," Physics Letters A, vol. 362, no. 5-6, pp. 480–488, 2007. doi:10.1016/j.physleta.2006.10.073
[12] Q. Song and Z. Wang, "A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays," Physics Letters A, vol. 368, no. 1-2, pp. 134–145, 2007. doi:10.1016/j.physleta.2007.03.088
[13] B. Zhang, S. Xu, and Y. Zou, "Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays," Neurocomputing, vol. 72, no. 1–3, pp. 321–330, 2008. doi:10.1016/j.neucom.2008.01.006
[14] C. Song, H. Gao, and W. X. Zheng, "A new approach to stability analysis of discrete-time recurrent neural networks with time-varying delay," Neurocomputing, vol. 72, no. 10–12, pp. 2563–2568, 2009. doi:10.1016/j.neucom.2008.11.009
[15] Y. Zhang, S. Xu, and Z. Zeng, "Novel robust stability criteria of discrete-time stochastic recurrent neural networks with time delay," Neurocomputing, vol. 72, no. 13–15, pp. 3343–3351, 2009. doi:10.1016/j.neucom.2009.01.014
[16] Z. Wu, H. Su, J. Chu, and W. Zhou, "Improved delay-dependent stability condition of discrete recurrent neural networks with time-varying delays," IEEE Transactions on Neural Networks, vol. 21, no. 4, pp. 692–697, 2010. doi:10.1109/TNN.2010.2042172
[17] S. Xu and J. Lam, "A survey of linear matrix inequality techniques in stability analysis of delay systems," International Journal of Systems Science, vol. 39, no. 12, pp. 1095–1113, 2008. doi:10.1080/00207720802300370
[18] H. Y. Shao, "Improved delay-dependent stability criteria for systems with a delay varying in a range," Automatica, vol. 44, no. 12, pp. 3215–3218, 2008. doi:10.1016/j.automatica.2008.09.003
[19] P. Park, J. W. Ko, and C. Jeong, "Reciprocally convex approach to stability of systems with time-varying delays," Automatica, vol. 47, no. 1, pp. 235–238, 2011. doi:10.1016/j.automatica.2010.10.014
[20] S. H. Kim, "Improved approach to robust H∞ stabilization of discrete-time T–S fuzzy systems with time-varying delays," IEEE Transactions on Fuzzy Systems, vol. 18, no. 5, pp. 1008–1015, 2010. doi:10.1109/TFUZZ.2010.2062523
[21] X. L. Zhu and G. H. Yang, "Jensen inequality approach to stability analysis of discrete-time systems with time-varying delay," in Proceedings of the American Control Conference (ACC '08), Seattle, Wash, USA, June 2008, pp. 1644–1649. doi:10.1109/ACC.2008.4586727
[22] M. C. de Oliveira and R. E. Skelton, Springer, Berlin, Germany, 2001.
[23] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, vol. 15 of SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994. doi:10.1137/1.9781611970777