We present a dynamical analysis of discrete-time delayed neural networks with impulsive effects. We derive some new criteria for the invariance and attractivity of such networks by using a decomposition approach and delay difference inequalities. Our results improve or extend existing ones.

1. Introduction

As we know, the mathematical model of a neural network consists of four basic components: an input vector, a set of synaptic weights, a summing function with an activation (or transfer) function, and an output. From the viewpoint of mathematics, an artificial neural network corresponds to a nonlinear transformation of some inputs into certain outputs. Due to their promising potential for tasks of classification, associative memory, parallel computation, and solving optimization problems, neural network architectures have been extensively researched and developed [1–25]. Most neural models can be classified as either continuous-time or discrete-time ones. For related work, we refer to [20, 24, 26–28].

However, besides the delay effect, an impulsive effect likewise exists in a wide variety of evolutionary processes, in which states are changed abruptly at certain moments of time [5, 29]. As is well known, stability is one of the major problems encountered in applications and has attracted considerable attention due to its important role there. However, under impulsive perturbation, an equilibrium point may not exist in many physical systems, especially nonlinear dynamical systems. Therefore, an interesting subject is to discuss the invariant sets and the attracting sets of impulsive systems. Significant progress has been made in the techniques and methods for determining the invariant sets and attracting sets of delay difference equations with discrete variables, delay differential equations, and impulsive functional differential equations [30–37]. Unfortunately, the corresponding problems for discrete-time neural networks with impulses and delays have not been considered. Motivated by the above-mentioned papers, we here make a first attempt to establish results on the invariant sets and attracting sets of discrete-time neural networks with impulses and delays.

2. Preliminaries

In this paper, we consider the following discrete-time neural networks with impulsive effects:

xi(k)=e-aihxi(k-1)+((1-e-aih)/ai)∑j=1nbijfj(xj(k-τ))+((1-e-aih)/ai)ci, k≥k0, k≠kl,
xi(kl)=Iil(xi(kl-)), i=1,2,…,n, l=1,2,…,
where bij, ci (i,j=1,2,…,n) are real constants; ai (i=1,2,…,n), h, and τ are positive real numbers with τ>1; kl (l=1,2,…) is an impulsive sequence such that k1<k2<···<kl<··· and liml→∞kl=∞; and fj, Iil: R→R are real-valued functions.

By a solution of (2.1), we mean a piecewise continuous real-valued function xi(k) defined on the interval [k0-τ,∞) which satisfies (2.1) for all k≥k0.

In the sequel, by Φi we denote the set of all continuous real-valued functions xi(k) defined on the interval [k0-τ,∞) which satisfy the compatibility condition:

By the method of steps, one can easily see that, for any given initial function ϕi∈Φi, there exists a unique solution xi(k)(i=1,2,…,n) of (2.1) which satisfies the initial condition:

xi(k)=ϕi(k),fork∈[k0-τ,k0],
This function will be called the solution of the initial problem (2.1)–(2.3).

For convenience, we rewrite (2.1) and (2.3) in the following vector form:

x(k)=Ax(k-1)+Bf(x(k-τ))+C,k≥k0,k≠kl,x(kl)=Il(x(kl-)),l=1,2,…,x(k)=ϕ(k),k∈[k0-τ,k0],
where x(k)=(x1(k),x2(k),…,xn(k))T, A=diag{e-a1h,e-a2h,…,e-anh}, B=(((1-e-aih)/ai)bij)n×n, C=(((1-e-a1h)/a1)c1,((1-e-a2h)/a2)c2,…,((1-e-anh)/an)cn)T, f(x)=(f1(x1),f2(x2),…,fn(xn))T, Il(x)=(I1l(x1),I2l(x2),…,Inl(xn))T, and ϕ=(ϕ1,ϕ2,…,ϕn)T∈Φ, in which Φ=Φ1×Φ2×⋯×Φn.
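For readers who wish to experiment numerically, the iteration (2.4) can be sketched in code. The following is a minimal illustration only; the parameter values, the absence of impulses, and the choice f = tanh are our own assumptions, not part of the model:

```python
import numpy as np

def simulate(A, B, C, f, impulse_maps, phi, tau, k0, k_end):
    """Iterate x(k) = A x(k-1) + B f(x(k-tau)) + C for k > k0,
    applying x(k_l) = I_l(x(k_l^-)) at impulse times k_l.

    phi          : dict mapping each k in [k0 - tau, k0] to the initial vector
    impulse_maps : dict mapping an impulse time k_l to its map I_l
    """
    hist = dict(phi)                       # history buffer indexed by discrete time k
    for k in range(k0 + 1, k_end + 1):
        x = A @ hist[k - 1] + B @ f(hist[k - tau]) + C
        if k in impulse_maps:              # k = k_l: impulse acts on the pre-impulse value
            x = impulse_maps[k](x)
        hist[k] = x
    return hist

# Toy instance (assumed values): n = 2, a_i = 1, h = 1, tau = 2, no impulses.
a = np.array([1.0, 1.0]); h = 1.0
A = np.diag(np.exp(-a * h))
B = ((1 - np.exp(-a * h)) / a)[:, None] * np.array([[0.2, 0.1], [0.1, 0.2]])
C = ((1 - np.exp(-a * h)) / a) * np.array([0.1, 0.1])
phi = {k: np.array([0.5, -0.5]) for k in range(-2, 1)}   # k0 = 0, so phi is given on [-2, 0]
hist = simulate(A, B, C, np.tanh, {}, phi, tau=2, k0=0, k_end=50)
```

Since tanh is 1-Lipschitz and the spectral radius of A + B is below one for these values, the trajectory stays bounded, in line with the attractivity results of Section 3.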

In what follows, we will introduce some notations and basic definitions.

Let Rn be the space of n-dimensional real column vectors and let Rm×n denote the set of m×n real matrices. E denotes the identity matrix of appropriate dimension. For A,B∈Rm×n or A,B∈Rn, A≥B (A>B) means that each pair of corresponding elements of A and B satisfies the inequality ≥ (>). In particular, A is called a nonnegative matrix (written A∈R+m×n) if A≥0, and z is called a positive vector if z>0. ρ(A) denotes the spectral radius of A.

C[X,Y] denotes the space of continuous mappings from the topological space X to the topological space Y.

PC[I,Rn]≜{φ:I→Rn ∣ φ(k+) and φ(k-) exist for k∈I; φ(k+)=φ(k) for k∈I; and φ(k-)=φ(k) except at the points kl∈I}, where I⊂R is an interval, and φ(k+) and φ(k-) denote the right and left limits of the function φ(k), respectively. In particular, let PC=PC[[k0-τ,k0],Rn].

Definition 2.1.

The set S⊂Rn is called a positive invariant set of (2.4) if, for any initial function ϕ with ϕ(k)∈S for k∈[k0-τ,k0], the solution satisfies x(k)∈S for all k≥k0.

Definition 2.2.

The set S⊂Rn is called a global attracting set of (2.4), if for any initial value ϕ∈PC, the solution x(k) converges to S as k→+∞. That is,
dist(x(k),S)→0,ask→+∞,
where dist(x,S)=infy∈S d(x,y) and d(x,y)=supk≥k0-τ|x(k)-y(k)|. In particular, when S={0}, the zero solution is said to be asymptotically stable.

Following [33], we split the matrices A, B, and C into two parts each:

A=A+-A-, B=B+-B-, C=C+-C-,
with ai+=max{ai,0}, ai-=max{-ai,0}, bij+=max{((1-e-aih)/ai)bij,0}, bij-=max{-((1-e-aih)/ai)bij,0}, ci+=max{((1-e-aih)/ai)ci,0}, and ci-=max{-((1-e-aih)/ai)ci,0}.
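In code, this splitting is just an elementwise positive-part/negative-part operation. A small sketch (the example matrix is assumed for illustration):

```python
import numpy as np

def split(M):
    """Return (M_plus, M_minus) with M = M_plus - M_minus, both parts nonnegative."""
    return np.maximum(M, 0.0), np.maximum(-M, 0.0)

B = np.array([[0.3, -0.2], [-0.1, 0.4]])   # assumed example matrix
B_plus, B_minus = split(B)
assert np.allclose(B_plus - B_minus, B)    # the splitting reconstructs B
assert np.all(B_plus >= 0) and np.all(B_minus >= 0)
```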

Then the first equation of (2.4) can be rewritten as

x(k)=(A+-A-)x(k-1)+(B+-B-)f(x(k-τ))+(C+-C-).

Now take the symmetric transformation y=-x. From (2.7), it follows that

x(k)=A+x(k-1)+A-y(k-1)+B+f(x(k-τ))+B-g(y(k-τ))+(C+-C-),y(k)=A+y(k-1)+A-x(k-1)+B+g(y(k-τ))+B-f(x(k-τ))+(C--C+),
where g is defined by g(u)=-f(-u).

Set

z(k)=(x(k)T,y(k)T)T, h(z(k))=(f(x(k))T,g(y(k))T)T, 𝒜=(A+ A-; A- A+), ℬ=(B+ B-; B- B+), 𝒞=((C+-C-)T,(C--C+)T)T,
where the block matrices 𝒜 and ℬ are written row-wise.
By virtue of (2.8) and (2.7), we have

z(k)=𝒜z(k-1)+ℬh(z(k-τ))+𝒞.

Set Jl(v)=-Il(-v). In view of the impulsive part of (2.4), we have x(kl)=Il(x(kl-)) and y(kl)=Jl(y(kl-)), and hence

z(kl)=ωl(z(kl-)),l=1,2,…,
where ωl(z)=(Il(x)T,Jl(y)T)T.

Lemma 2.3 (see [34]).

Suppose that M∈R+n×n and ρ(M)<1, then there exists a positive vector z such that
(E-M)z>0.

For M∈R+n×n and ρ(M)<1, one denotes

Ωρ(M)={z∈Rn∣(E-M)z>0,z>0}.
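Lemma 2.3 and the set Ωρ(M) can be explored numerically: for a nonnegative M with ρ(M)<1, the vector z=(E-M)^(-1)𝟙 is positive and satisfies (E-M)z=𝟙>0, since (E-M)^(-1)=∑k≥0 M^k ≥ E. A sketch with an assumed M:

```python
import numpy as np

M = np.array([[0.3, 0.2], [0.1, 0.4]])   # assumed nonnegative matrix
rho = max(abs(np.linalg.eigvals(M)))
assert rho < 1                            # hypothesis of Lemma 2.3

# z = (E - M)^{-1} @ ones is positive because (E - M)^{-1} = sum_k M^k >= E for M >= 0
E = np.eye(2)
z = np.linalg.solve(E - M, np.ones(2))
assert np.all(z > 0)
assert np.all((E - M) @ z > 0)            # hence z lies in Omega_rho(M)
```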

By Lemma 2.3, we have the following result.

Lemma 2.4.

Ωρ(M) is nonempty, and for any scalars k1>0,k2>0 and vectors z1,z2∈Ωρ(M), one has
k1z1+k2z2∈Ωρ(M).

Lemma 2.5.

Assume that u(k)=(u1(k),u2(k),…,un(k))T∈C[[k0,∞),Rn] satisfies
u(k)≤Mu(k-1)+Nu(k-τ)+J,k≥k0,u(θ)∈PC,θ∈[k0-τ,k0],
where M=(mij)∈R+n×n,N=(nij)∈R+n×n, J∈Rn.

If ρ(M+N)<1, then there exists a positive vector v=(v1,v2,…,vn)T such that

u(k)≤ve-λ(k-k0)+(E-M-N)-1J,k≥k0,
where λ>0 is a constant chosen so that
(E-Meλ-Neλτ)v≥0
for the given v.

Proof.

Since M,N∈R+n×n and ρ(M+N)<1, by Lemma 2.3, there exists a positive vector p∈Ωρ(M+N) such that (E-M-N)p>0.

Set Hi(λ)=pi-∑j=1n(mijeλ+nijeλτ)pj(i=1,2,…,n), then we have

Ḣi(λ)=-∑j=1n(mijeλ+nijτeλτ)pj<0.
Due to
Hi(0)=pi-∑j=1n(mij+nij)pj>0,
there must exist a λ>0, such that
∑j=1n(mijeλ+nijeλτ)pj≤pi,i=1,2,…,n.
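The decay rate λ in the proof can be computed numerically, for example by bisection on the largest λ for which Hi(λ)≥0 for all i (the matrices M, N and the vector p below are assumed for illustration):

```python
import numpy as np

def find_lambda(M, N, tau, p, lam_max=5.0, iters=60):
    """Bisect for lambda > 0 with (E - M e^lam - N e^(lam tau)) p >= 0.

    H(0) > 0 whenever (E - M - N) p > 0, and H is strictly decreasing in lam,
    so the feasible lambdas form an interval [0, lam*]."""
    def feasible(lam):
        return np.all(p - (M * np.exp(lam) + N * np.exp(lam * tau)) @ p >= 0)
    assert feasible(0.0), "need (E - M - N) p > 0 first"
    lo, hi = 0.0, lam_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

M = np.array([[0.2, 0.1], [0.1, 0.2]])   # assumed; rho(M + N) = 0.5 < 1
N = np.array([[0.1, 0.1], [0.1, 0.1]])   # assumed
p = np.ones(2)                            # (E - M - N) p = 0.5 * ones > 0
lam = find_lambda(M, N, tau=2, p=p)
assert lam > 0
```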

For u(θ)∈PC, θ∈[k0-τ,k0], there exists a positive constant l>1 such that

u(θ)≤lpe-λ(θ-k0)+W,θ∈[k0-τ,k0],
where W=(E-M-N)-1J.

By Lemma 2.4, lp∈Ωρ(M+N). Without loss of generality, we can find a v∈Ωρ(M+N) such that u(θ)≤ve-λ(θ-k0)+W for θ∈[k0-τ,k0].

Define v(k) for k≥k0-τ by u(k)=v(k)e-λ(k-k0)+W. Substituting this into (2.15), we have

v(k)≤Meλv(k-1)+Neλτv(k-τ).
By (2.23), we get that
v(θ)≤v,θ∈[k0-τ,k0].

Next, we will prove for any k≥k0,

v(k)≤v.
To this end, we consider an arbitrary number ε>0, we claim that
v(k)<(1+ε)v,k≥k0.
Otherwise, by the continuity of u(k), there must exist a k*>k0 and index r such that
v(k)<(1+ε)v,fork∈[k0,k*),vr(k*)=(1+ε)vr.
Then, by using (2.24) and (2.28), from (2.22), we obtain
(1+ε)vr=vr(k*)≤∑j=1n(mrjeλvj(k*-1)+nrjeλτvj(k*-τ))<∑j=1n(mrjeλ+nrjeλτ)(1+ε)vj≤(1+ε)vr,
which is a contradiction. Hence, (2.27) holds for all numbers ε>0. It follows immediately that (2.26) is always satisfied, which in turn yields (2.16). This completes the proof.

3. Main Results

For convenience, we introduce the following assumptions.

(H1) For any x,y∈Rn, there exist a nonnegative matrix P=(pij)n×n≥0 and a nonnegative vector μ=(μ1,μ2,…,μn)T≥0 such that
f(x)-f(y)≤P(x-y)+μ.

(H2) For any x,y∈Rn, there exist nonnegative matrices Ql=(qijl)n×n≥0 and a nonnegative vector ν=(ν1,ν2,…,νn)T≥0 such that
Il(x)-Il(y)≤Ql(x-y)+ν, l=1,2,….

(H3) ρ(𝒜+ℬ𝒫)<1 and ρ(𝒬l)<1, l=1,2,…, where 𝒫=diag{P,P} and 𝒬l=diag{Ql,Ql}.

(H4) The set Ω=⋂l=1∞[Ωρ(𝒬l)]∩Ωρ(𝒜+ℬ𝒫) is nonempty.

Theorem 3.1.

Assume that (H1)–(H4) hold. Then there exists a positive vector η=(αT,βT)T∈Ω such that S={ϕ∈PC∣-β≤ϕ≤α} is a positive invariant set of (2.4), where α≥0,β≥0,α,β∈Rn.

Proof.

From (H1) and (H2), we can claim that for any z∈R2n,
h(z)≤𝒫z+Λ, ωl(z)≤𝒬lz+Γ, l=1,2,…,
where Λ=((μ+|f(0)|)T, (μ+|f(0)|)T)T satisfies ℬΛ+𝒞>0, and Γ=((ν+|Il(0)|)T, (ν+|Il(0)|)T)T.

So, by using (2.10) and (2.11) and taking into account (3.3), we get

η=(αT,βT)T≜max{k1η1,k2η2};
by Lemma 2.4, clearly, η∈Ω and
(E-𝒜-ℬ𝒫)η≥ℬΛ+𝒞, (E-𝒬l)η≥Γ, l=1,2,….

Next, we will prove that, for any ϕ with -β≤ϕ≤α, that is, z(k)≤η for k∈[k0-τ,k0], we have

z(k)≤η,k∈[k0,k1).

In order to prove (3.11), we first prove, for any ε>0,

z(k)<(1+ε)η,k∈[k0,k1).

If (3.12) is false, by the piecewise continuous nature of z(k), there must exist a k*∈[k0,k1) and an index q such that

z(k)<(1+ε)η,fork∈[k0,k*),zq(k*)=(1+ε)ηq.

Denoting 𝒜=(cij)2n×2n, ℬ𝒫=(dij)2n×2n, ℬΛ+𝒞=(λ1,λ2,…,λ2n)T, we get

(1+ε)ηq=zq(k*)≤∑j=12n(cqjzj(k*-1)+dqjzj(k*-τ))+λq<∑j=12n(cqj+dqj)(1+ε)ηj+λq≤(1+ε)(ηq-λq)+λq<(1+ε)ηq.
This is a contradiction and hence (3.12) holds. From the fact that (3.12) is fulfilled for any ε>0, it follows immediately that (3.11) is always satisfied.

On the other hand, by using (3.5), (3.10), and (3.11), we obtain that

z(k1)≤𝒬1z(k1-)+Γ≤𝒬1η+Γ≤η.
Therefore, we can claim that
z(k)≤η,k∈[k1-τ,k1).
In a similar way to the proof of (3.11), we can prove that (3.16) implies
z(k)≤η,k∈[k1,k2).
Hence, by the induction principle, we conclude that
z(k)≤η,k∈[kl-1,kl),l=1,2,…,
which implies that z(k)≤η holds for any k≥k0, that is, -β≤x(k)≤α for any k≥k0. This completes the proof of the theorem.

Remark 3.2.

In fact, under the assumptions of Theorem 3.1, such an η must exist: since ρ(𝒜+ℬ𝒫)<1 and ρ(𝒬l)<1 imply (E-𝒜-ℬ𝒫)-1>0 and (E-𝒬l)-1>0, respectively, we may take η as follows:
η=max{(E-𝒜-ℬ𝒫)-1(ℬΛ+𝒞),(E-𝒬l)-1Γ}.

Theorem 3.3.

If assumptions (H1)–(H4) hold, then S={ϕ∈PC∣-β≤ϕ≤α} is a global attracting set of (2.4), where α≥0, β≥0, α,β∈Rn, and the vector η=(αT,βT)T is chosen as in (3.19).

Proof.

From (3.4), assumption (H3) and Lemma 2.5, and taking into account the definition of η, we obtain that
z(k)≤ze-λ(k-k0)+(E-𝒜-ℬ𝒫)-1(ℬΛ+𝒞)≤ze-λ(k-k0)+η,k≠kl,l=1,2,…,
where the positive vector z∈Ω and the constant λ>0 satisfy
(E-𝒜eλ-ℬ𝒫eλτ)z≥0.
From (3.15) and taking into account the definition of z,η, we get that
z(k1)≤𝒬1z(k1-)+Γ≤𝒬1ze-λ(k1-k0)+𝒬1η+Γ≤ze-λ(k1-k0)+η.
Therefore, we have that
z(k)≤ze-λ(k-k0)+η,k∈[k1-τ,k1].
By using (3.20), (3.23) and Lemma 2.5 again, we obtain that
z(k)≤ze-λ(k-k0)+η, k∈[k1,k2).
Hence, by the induction principle, we conclude that
z(k)≤ze-λ(k-k0)+η,k∈[k0,kl),l=1,2,…,
which implies that the conclusion holds. The proof is complete.

4. An Illustrative Example

Consider the system (2.1) with the following parameters (n=2, i,j=1,2): ai=1/4, bij=1/4, ci=1/4, pij=3/8, qij=1/4, fj(xj)=sin(xj), Iil(xi)=cos(xi), h=1, l=1, and

Λ=(104637845),Γ=(84396257).
From the given parameters, we have

𝒜=diag{1/4,1/4,1/4,1/4},
ℬ=diag{B̃,B̃}, where B̃ is the 2×2 matrix with all entries equal to 1-e-1/4,
𝒞=(1-e-1/4, 1-e-1/4, -(1-e-1/4), -(1-e-1/4))T,
𝒫=diag{P,P}, where P is the 2×2 matrix with all entries equal to 3/8,
𝒬1=diag{Q1,Q1}, where Q1 is the 2×2 matrix with all entries equal to 1/4.
According to Theorems 3.1 and 3.3, S={ϕ∈PC∣-β≤ϕ≤α} is then a positive invariant set and a global attracting set of (2.4).
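These conditions can be checked numerically. The sketch below verifies both spectral-radius conditions of (H3) for this example and computes an η as in (3.19); the vectors Λ and Γ are taken as assumed nonnegative data, since only their nonnegativity matters for the check:

```python
import numpy as np

q = 1 - np.exp(-0.25)                     # 1 - e^{-1/4}
ones2 = np.ones((2, 2))
blk = lambda X: np.block([[X, np.zeros((2, 2))], [np.zeros((2, 2)), X]])

A_cal = 0.25 * np.eye(4)                  # script A = diag{1/4, ..., 1/4}
B_cal = blk(q * ones2)                    # script B, block-diagonal
P_cal = blk(0.375 * ones2)                # script P = diag{P, P}, p_ij = 3/8
Q_cal = blk(0.25 * ones2)                 # script Q_1 = diag{Q_1, Q_1}, q_ij = 1/4

rho = lambda X: max(abs(np.linalg.eigvals(X)))
assert rho(A_cal + B_cal @ P_cal) < 1     # first condition of (H3)
assert rho(Q_cal) < 1                     # second condition of (H3)

# eta as in (3.19); Lambda and Gamma are assumed nonnegative data
Lam, Gam = np.ones(4), np.ones(4)
C_cal = q * np.array([1.0, 1.0, -1.0, -1.0])
E = np.eye(4)
eta = np.maximum(np.linalg.solve(E - A_cal - B_cal @ P_cal, B_cal @ Lam + C_cal),
                 np.linalg.solve(E - Q_cal, Gam))
assert np.all(eta > 0)                    # eta = (alpha^T, beta^T)^T is positive
```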

5. Conclusion

In this paper, by using M-matrix theory and a decomposition approach, some new criteria for the invariance and attractivity of discrete-time delayed neural networks with impulses have been obtained. Moreover, these conditions can be easily checked in practice.

Acknowledgments

This work was supported by the Foundation of Education of Fujian Province, China (JA07142), the Scientific Research Foundation of Fujian Province, China (JA09152), the Foundation for Young Professors of Jimei University, the Scientific Research Foundation of Jimei University, and the Foundation for Talented Youth with Innovation in Science and Technology of Fujian Province (2009J05009).

References

[1] Cohen M. A., Grossberg S., Absolute stability of global pattern formation and parallel memory storage by competitive neural networks.
[2] Carpenter G. A., Cohen M. A., Grossberg S., Computing with neural networks.
[3] Guez A., Protopopsecu V., Barhen J., On the stability, storage capacity and design of nonlinear continuous neural networks.
[4] Samoilenko A. M., Perestyuk N. A.
[5] Lakshmikantham V., Bainov D. D., Simeonov P. S.
[6] Michel A. N., Farrell J. A., Porod W., Qualitative analysis of neural networks.
[7] Li J. H., Michel A. N., Porod W., Qualitative analysis and synthesis of a class of neural networks.
[8] Liao X. X., Stability of Hopfield-type neural networks. I.
[9] Gopalsamy K., He X. Z., Stability in asymmetric Hopfield nets with transmission delays.
[10] Gopalsamy K., He X., Delay-independent stability in bidirectional associative memory networks.
[11] Liao X., Yu J., Qualitative analysis of bidirectional associative memory with time delays.
[12] Mohamad S., Gopalsamy K., Dynamics of a class of discrete-time neural networks and their continuous-time counterparts.
[13] Cao J., Periodic solutions and exponential stability in delayed cellular neural networks.
[14] Cao J., New results concerning exponential stability and periodic solutions of delayed cellular neural networks.
[15] Cao J., Liang J., Boundedness and stability for Cohen-Grossberg neural network with time-varying delays.
[16] Cao J., Wang J., Absolute exponential stability of recurrent neural networks with Lipschitz-continuous activation functions and time delays.
[17] Cao J., Song Q., Stability in Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays.
[18] Hu S., Wang J., Global stability of a class of continuous-time recurrent neural networks.
[19] Mohamad S., Gopalsamy K., Neuronal dynamics in time varying environments: continuous and discrete time models.
[20] Gopalsamy K., Stability of artificial neural networks with impulses.
[21] Wang Z., Liu Y., Liu X., On global asymptotic stability of neural networks with discrete and distributed delays.
[22] Wang Y., Xiong W., Zhou Q., Xiao B., Yu Y., Global exponential stability of cellular neural networks with continuously distributed delays and impulses.
[23] Lou X. Y., Cui B. T., Global asymptotic stability of delay BAM neural networks with impulses.
[24] Yang Y., Cao J., Stability and periodicity in delayed cellular neural networks with impulsive effects.
[25] Zhao H., Global asymptotic stability of Hopfield neural network involving distributed delays.
[26] Akça H., Alassar R., Covachev V., Covacheva Z., Al-Zahrani E., Continuous-time additive Hopfield-type neural networks with impulses.
[27] Li Y., Lu L., Global exponential stability and existence of periodic solution of Hopfield-type neural networks with impulses.
[28] Chen T., Amari S. I., Stability of asymmetric Hopfield networks.
[29] Bainov D. D., Simeonov P. S.
[30] Xu D., Asymptotic behavior of nonlinear difference equations with delays.
[31] Xu D., Invariant and attracting sets of Volterra difference equations with delays.
[32] Xu D., Yang Z., Attracting and invariant sets for a class of impulsive functional differential equations.
[33] Huang Z., Xia Y., Exponential p-stability of second order Cohen-Grossberg neural networks with transmission delays and learning behavior.
[34] Huang Z., Mohamad S., Wang X., Feng C., Convergence analysis of general neural networks under almost periodic stimuli.
[35] Huang Z., Mohamad S., Bin H., Multistability of HNNs with almost periodic stimuli and continuously distributed delays.
[36] Chu T., Zhang Z., Wang Z., A decomposition approach to analysis of competitive-cooperative neural networks with delay.
[37] LaSalle J. P.
T.Global asymptotic stability of delay BAM neural networks with impulsesYangY.CaoJ.Stability and periodicity in delayed cellular neural networks with impulsive effectsZhaoH.Global asymptotic stability of Hopfield neural network involving distributed delaysAkçaH.AlassarR.CovachevV.CovachevaZ.Al-ZahraniE.Continuous-time additive Hopfield-type neural networks with impulsesLiY.LuL.Global exponential stability and existence of periodic solution of Hopfield-type neural networks with impulsesChenT.AmariS. I.Stability of asymmetric Hopfield networksBainovD. D.SimeonovP. S.XuD.Asymptotic behavior of nonlinear difference equations with delaysXuD.Invariant and attracting sets of Volterra difference equations with delaysXuD.YangZ.Attracting and invariant sets for a class of impulsive functional differential equationsHuangZ.XiaY.Exponential p-stability of second order Cohen-Grossberg neural networks with transmission delays and learning behaviorHuangZ.MohamadS.WangX.FengC.Convergence analysis of general neural net-works under almost periodic stimuliHuangZ.MohamadS.BinH.Multistability of HNNs with almost periodic stimuli and continuously distributed delaysChuT.ZhangZ.WangZ.A decomposition approach to analysis of competitive-cooperative neural networks with delayLaSalleJ. P.