Dynamical Analysis of Discrete-Time Delayed Neural Networks with Impulsive Effect

We present a dynamical analysis of discrete-time delayed neural networks with impulsive effect. By using a decomposition approach and delay difference inequalities, we derive new criteria for the invariance and attractivity of discrete-time neural networks under impulsive effect. Our results improve or extend existing ones.


Introduction
As is well known, the mathematical model of a neural network consists of four basic components: an input vector, a set of synaptic weights, a summing function with an activation (or transfer) function, and an output. From the viewpoint of mathematics, an artificial neural network corresponds to a nonlinear transformation of some inputs into certain outputs. Due to their promising potential for tasks of classification, associative memory, parallel computation, and optimization, neural network architectures have been extensively researched and developed [1-25]. Most neural models can be classified as either continuous-time or discrete-time ones. For related work, we refer to [20, 24, 26-28].
However, besides the delay effect, an impulsive effect likewise exists in a wide variety of evolutionary processes, in which states change abruptly at certain moments of time [5, 29]. Stability is one of the major problems encountered in applications and has attracted considerable attention due to its important role. Under impulsive perturbation, however, an equilibrium point may not exist in many physical systems, especially nonlinear dynamical systems. It is therefore of interest to study the invariant sets and attracting sets of impulsive systems. Significant progress has been made in the techniques and methods for determining invariant sets and attracting sets of delay difference equations with discrete variables, delay differential equations, and impulsive functional differential equations [30-37]. Unfortunately, the corresponding problems for discrete-time neural networks with impulses and delays have not been considered. Motivated by the above-mentioned papers and discussion, we here make a first attempt to obtain results on the invariant sets and attracting sets of discrete-time neural networks with impulses and delays.

Preliminaries
In this paper, we consider a class of discrete-time neural networks under impulsive effect, referred to as system (2.1). By a solution of (2.1), we mean a piecewise continuous real-valued function x_i(k) defined on the interval [k_0 - τ, ∞) which satisfies (2.1) for all k ≥ k_0.
In the sequel, by Φ_i we denote the set of all continuous real-valued functions x_i(k) defined on the interval [k_0 - τ, ∞) which satisfy the compatibility condition (2.2). By the method of steps, one easily sees that, for any given initial function φ_i ∈ Φ_i, there exists a unique solution x_i(k), i = 1, 2, ..., n, of (2.1) satisfying the initial condition (2.3); this function will be called the solution of the initial problem (2.1)-(2.3).
For convenience, we rewrite (2.1) and (2.3) in the vector form (2.4). In what follows, we introduce some notations and basic definitions.
Let R^n be the space of n-dimensional real column vectors, and let R^{m×n} denote the set of m × n real matrices. E denotes an identity matrix with appropriate dimensions. For A, B ∈ R^{m×n} (or A, B ∈ R^n), A ≥ B (A > B) means that each pair of corresponding elements of A and B satisfies the inequality "≥" (">"). In particular, A is called a nonnegative matrix, denoted A ∈ R^{m×n}_+, if A ≥ 0, and z is called a positive vector if z > 0. ρ(A) denotes the spectral radius of A.
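As a quick numerical illustration of this notation (not part of the paper's development; the matrix below is an arbitrary example), the spectral radius and the elementwise partial order can be checked in a few lines of NumPy:

```python
import numpy as np

def spectral_radius(A: np.ndarray) -> float:
    """rho(A): the largest modulus among the eigenvalues of A."""
    return float(np.max(np.abs(np.linalg.eigvals(A))))

# An arbitrary nonnegative example matrix (not from the paper).
A = np.array([[0.2, 0.1],
              [0.3, 0.2]])

assert np.all(A >= 0)            # elementwise A >= 0, i.e., A is a nonnegative matrix
assert spectral_radius(A) < 1    # rho(A) < 1, the condition used repeatedly below
```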
C(X, Y) denotes the space of continuous mappings from the topological space X to the topological space Y.
PC(I, R^n) denotes the class of piecewise continuous functions ϕ : I → R^n, where I ⊂ R is an interval and ϕ(k+) and ϕ(k−) denote the right limit and left limit of the function ϕ at k, respectively. In particular, S = {0} is called asymptotically stable.
Following [33], we split the matrices A and B into two parts, respectively. Then the first equation of (2.4) can be rewritten as (2.7). Now take the symmetric transformation y = −x. From (2.7), it follows that (2.8). By virtue of (2.7) and (2.8), we have (2.10).
Set I_l(−v) = −J_l(v). In view of the impulsive part of (2.4), we also have an analogous relation, and so we obtain (2.11), where ω_l(z) = (I_l(x)^T, J_l(y)^T)^T.
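The splitting of A and B above can be sketched numerically. Since the explicit formulas from [33] are not reproduced in this copy, the elementwise decomposition below into two nonnegative parts is an assumed standard choice, not necessarily the paper's exact one:

```python
import numpy as np

def split(A: np.ndarray):
    """Split A = A_plus - A_minus with A_plus, A_minus >= 0 (elementwise)."""
    A_plus = np.maximum(A, 0.0)    # keep the positive entries
    A_minus = np.maximum(-A, 0.0)  # keep the magnitudes of the negative entries
    return A_plus, A_minus

# Hypothetical interconnection matrix with mixed signs.
A = np.array([[0.5, -0.2],
              [-0.1, 0.3]])
A_plus, A_minus = split(A)

assert np.all(A_plus >= 0) and np.all(A_minus >= 0)
assert np.allclose(A, A_plus - A_minus)
```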
Lemma 2.3 (see [34]). Suppose that M ∈ R^{n×n}_+ and ρ(M) < 1; then there exists a positive vector z such that (E − M)z > 0. For M ∈ R^{n×n}_+ with ρ(M) < 1, one denotes by Ω_ρ(M) the set of such positive vectors z. By Lemma 2.3, we have the following result.
Lemma 2.4. Ω_ρ(M) is nonempty, and for any admissible scalars the estimate (2.16) holds, where λ > 0 is a constant defined for the given v.
Proof. Since M, N ∈ R^{n×n}_+ and ρ(M + N) < 1, by Lemma 2.3 there exists a positive vector p = (p_1, p_2, ..., p_n)^T such that

Σ_{j=1}^{n} (m_ij + n_ij) p_j − p_i < 0, i = 1, 2, ..., n. (2.18)

Due to (2.18) and by continuity, there must exist a λ > 0 such that

Σ_{j=1}^{n} (m_ij e^λ + n_ij e^{λτ}) p_j ≤ p_i, i = 1, 2, ..., n. (2.20)
For u(θ) ∈ PC, θ ∈ [k_0 − τ, k_0], there exists a positive constant l > 1 such that (2.22) holds, with the quantities given by (2.23). By (2.23), we get (2.24). Next, we will prove that v(k) ≤ v for any k ≥ k_0, that is, (2.26).

To this end, we consider an arbitrary number ε > 0 and claim that (2.27) holds. Otherwise, by the continuity of u(k), there must exist a k* > k_0 and an index r such that (2.28) holds. Then, by using (2.24) and (2.28), from (2.22) we obtain

v_r(k*) ≤ Σ_{j=1}^{n} (m_rj e^λ + n_rj e^{λτ})(1 + ε) v_j < (1 + ε) v_r,

which is a contradiction. Hence, (2.27) holds for all ε > 0. It follows immediately that (2.26) is always satisfied, which easily leads to (2.16). This completes the proof.
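A small numerical check of Lemma 2.3 (illustrative only; the matrix and the construction z = (E − M)^{-1} e are assumptions, not taken from the paper): for nonnegative M with ρ(M) < 1, the inverse (E − M)^{-1} is nonnegative with positive diagonal, so this z is positive and satisfies Mz < z, confirming that Ω_ρ(M) is nonempty.

```python
import numpy as np

# Hypothetical nonnegative matrix with spectral radius < 1.
M = np.array([[0.3, 0.2],
              [0.1, 0.4]])
assert np.all(M >= 0)
assert np.max(np.abs(np.linalg.eigvals(M))) < 1

# Candidate positive vector: z = (E - M)^{-1} e with e = (1, ..., 1)^T.
# Since (E - M)^{-1} = sum_k M^k >= E, we get z >= e > 0 and z - M z = e > 0.
E = np.eye(2)
z = np.linalg.solve(E - M, np.ones(2))

assert np.all(z > 0)      # z is a positive vector
assert np.all(M @ z < z)  # (E - M) z > 0, so z belongs to Omega_rho(M)
```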

Main Results
For convenience, we introduce the following assumptions.
(H1) For any x, y ∈ R^n, there exist a nonnegative matrix P = (p_ij)_{n×n} ≥ 0 and a nonnegative vector μ = (μ_1, μ_2, ..., μ_n)^T ≥ 0 such that (3.1) holds.

(H2) For any x, y ∈ R^n, there exist nonnegative matrices Q_l = (q_ij^l)_{n×n} ≥ 0 and a nonnegative vector ν = (ν_1, ν_2, ..., ν_n)^T ≥ 0 such that (3.2) holds.

Proof. From (H1) and (H2), we can claim that, for any z ∈ R^{2n}, (3.3) holds. So, by using (2.10) and (2.11) and taking into account (3.3), we get (3.4). By assumptions (H3), (H4), and Lemma 2.3, there exists a positive vector η_1 ∈ Ω such that (3.6) holds.
Since BΛ + C and Γ are positive constant vectors, by (3.6) there must exist two scalars for which (3.10) holds. Next, we will prove that, for any k ∈ [k_0, k_1), (3.11) holds.
In order to prove (3.11), we first prove that, for any ε > 0, (3.12) holds. If (3.12) is false, then by the piecewise continuous nature of z(k) there must exist a k* ∈ [k_0, k_1) and an index q such that (3.14) holds. This is a contradiction, and hence (3.12) holds. Since (3.12) is fulfilled for any ε > 0, it follows immediately that (3.11) is always satisfied.
On the other hand, by using (3.5), (3.10), and (3.11), we obtain (3.15). Therefore, we can claim that (3.16) holds.
In a similar way to the proof of (3.11), we can prove that (3.16) implies (3.17). Hence, by the induction principle, we conclude that z(k) ≤ η holds for any k ≥ k_0, that is, −β ≤ x(k) ≤ α for any k ≥ k_0. This completes the proof of the theorem.
Remark 3.2. In fact, under the assumptions of Theorem 3.1, such an η must exist: since ρ(A + BP) < 1 and ρ(Q_l) < 1 imply (E − A − BP)^{-1} > 0 and (E − Q_l)^{-1} > 0, respectively, we may take η as follows, where the positive vector z ∈ Ω and λ > 0 satisfy (3.20). From (3.15), taking into account the definitions of z and η, we get (3.22).
Therefore, we have (3.23).
By using (3.20), (3.23), and Lemma 2.5 again, we obtain (3.24). Hence, by the induction principle, we conclude the desired estimate, which implies that the conclusion holds. The proof is complete.
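The claim in Remark 3.2 that ρ(A + BP) < 1 implies (E − A − BP)^{-1} > 0 can be illustrated via the Neumann series (E − M)^{-1} = Σ_{k≥0} M^k, whose terms are all nonnegative for nonnegative M. The matrix M below is a hypothetical stand-in for A + BP (entrywise positivity of the inverse also needs M to be irreducible, as it is here):

```python
import numpy as np

M = np.array([[0.2, 0.3],
              [0.4, 0.1]])          # stands in for A + B P; nonnegative, irreducible
assert np.max(np.abs(np.linalg.eigvals(M))) < 1

inv = np.linalg.inv(np.eye(2) - M)  # (E - M)^{-1}

# Truncated Neumann series: E + M + M^2 + ... (M^k vanishes geometrically).
series = sum(np.linalg.matrix_power(M, k) for k in range(200))

assert np.all(inv > 0)              # the inverse is entrywise positive
assert np.allclose(inv, series)     # and agrees with the series
```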

An Illustrative Example
Consider the system (2.1) with the following parameters, where n = 2 and i, j = 1, 2. From the given parameters, we have the following.
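The concrete parameter values of this example did not survive in this copy, so the sketch below simulates a generic system of the same form: a two-neuron discrete-time network with a transmission delay and multiplicative impulsive jumps at fixed moments. All parameters (A, B, τ, the activation f, and the jump map) are hypothetical placeholders, chosen so that ρ(|A| + |B|) < 1; the run only illustrates the kind of bounded behavior the theorems describe.

```python
import numpy as np

A = np.diag([0.3, 0.2])              # self-feedback (hypothetical)
B = np.array([[0.1, -0.2],
              [0.2, 0.1]])           # delayed interconnection (hypothetical)
tau = 2                              # transmission delay
f = np.tanh                          # 1-Lipschitz activation

def step(history, k):
    """One step of x(k+1) = A x(k) + B f(x(k - tau))."""
    return A @ history[k] + B @ f(history[k - tau])

steps = 60
x = np.zeros((steps + 1 + tau, 2))
x[:tau + 1] = np.array([1.0, -1.0])  # constant initial function on [-tau, 0]

for k in range(tau, steps + tau):
    x[k + 1] = step(x, k)
    if (k - tau) % 10 == 9:          # impulsive moments k_l = 10, 20, ...
        x[k + 1] *= 1.5              # jump map J_l(x) = 1.5 x (hypothetical)

print(np.max(np.abs(x[-1])))         # the state remains bounded despite the jumps
```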

Conclusion
In this paper, by using M-matrix theory and a decomposition approach, some new criteria for the invariance and attractivity of discrete-time neural networks with impulses and delays have been obtained. Moreover, these conditions can be easily checked in practice.
Acknowledgments
This work was supported in part by Jimei University and by the Foundation for Talented Youth with Innovation in Science and Technology of Fujian Province (2009J05009).