Global Dissipativity on Uncertain Discrete-Time Neural Networks with Time-Varying Delays

The problems of global dissipativity and global exponential dissipativity are investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Illustrative examples are given to show the effectiveness of the proposed criteria. It is noteworthy that, because neither model transformation nor free-weighting matrices are employed to deal with cross terms in the derivation of the dissipativity criteria, the obtained results are less conservative and more computationally efficient.


Introduction
In the past few decades, delayed neural networks have found successful applications in many areas such as signal processing, pattern recognition, associative memories, parallel computation, and optimization solvers [1]. In such applications, the qualitative analysis of the dynamical behaviors is a necessary step for the practical design of neural networks [2]. Many important results on the dynamical behaviors have been reported for delayed neural networks; see [1-16] and the references therein for some recent publications.
It should be pointed out that all of the abovementioned literature on the dynamical behaviors of delayed neural networks is concerned with the continuous-time case. However, when implementing a continuous-time delayed neural network for computer simulation, it becomes essential to formulate a discrete-time system that is an analogue of the continuous-time delayed neural network. To some extent, the discrete-time analogue inherits the dynamical characteristics of the continuous-time delayed neural network under mild or no restriction on the discretization step-size, and also retains some functional similarity [17]. Unfortunately, as pointed out in [18], the discretization cannot preserve the dynamics of the continuous-time counterpart even for a small sampling period, and therefore there is a crucial need to study the dynamics of discrete-time neural networks. Recently, the dynamics analysis problem for discrete-time delayed neural networks and discrete-time systems with time-varying state delay has been extensively studied; see [17-21] and the references therein.
It is well known that the stability problem is central to the analysis of a dynamic system, where various types of stability of an equilibrium point have captured the attention of researchers. Nevertheless, from a practical point of view, it is not always the case that every neural network has its orbits approach a single equilibrium point. It is possible that there is no equilibrium point in some situations. Therefore, the concept of dissipativity has been introduced [22]. As pointed out in [23], dissipativity is also an important concept in dynamical neural networks. The concept of dissipativity in dynamical systems is a more general one, and it has found applications in areas such as stability theory, chaos and synchronization theory, system norm estimation, and robust control [23]. Some sufficient conditions for checking the dissipativity of delayed neural networks and nonlinear delay systems have been derived; see, for example, [23-33] and the references therein. In [23, 24], the authors analyzed the dissipativity of neural networks with constant delays and derived some sufficient conditions for their global dissipativity. In [25, 26], the authors considered the global dissipativity and global robust dissipativity of neural networks with both time-varying delays and unbounded distributed delays; several sufficient conditions for checking the global dissipativity and global robust dissipativity were obtained. In [27, 28], by using the linear matrix inequality technique, the authors investigated the global dissipativity of neural networks with both discrete time-varying delays and distributed time-varying delays. In [29], the authors developed dissipativity notions for nonnegative dynamical systems with respect to linear and nonlinear storage functions and linear supply rates, and obtained a key result on the linearization of nonnegative dissipative dynamical systems. In [30], the uniform dissipativity of a class of nonautonomous neural networks with time-varying delays was investigated by employing the M-matrix and inequality techniques. In [31-33], the dissipativity of a class of nonlinear delay systems was considered, and some sufficient conditions for checking the dissipativity were given. However, all of the abovementioned literature on the dissipativity of delayed neural networks and nonlinear delay systems is concerned with the continuous-time case. To the best of our knowledge, few authors have considered the problem of the dissipativity of uncertain discrete-time neural networks with time-varying delays. Therefore, the study of the dissipativity of uncertain discrete-time neural networks is not only important but also necessary.
Motivated by the above discussions, the objective of this paper is to study the problems of global dissipativity and global exponential dissipativity for uncertain discrete-time neural networks. By employing appropriate Lyapunov-Krasovskii functionals and the LMI technique, we obtain several new sufficient conditions for checking the global dissipativity and global exponential dissipativity of the addressed neural networks.
Notations. The notations are quite standard. Throughout this paper, I represents the identity matrix with appropriate dimensions; N stands for the set of nonnegative integers; R^n and R^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of all n × m real matrices. The superscript "T" denotes matrix transposition, and the asterisk "∗" denotes the elements below the main diagonal of a symmetric block matrix. |A| denotes the absolute-value matrix given by |A| = (|a_ij|)_{n×n}; the notation X ≥ Y (resp., X > Y) means that X and Y are symmetric matrices and that X − Y is positive semidefinite (resp., positive definite). ‖·‖ is the Euclidean norm in R^n. For a positive constant a, ⌊a⌋ denotes the integer part of a. For integers a, b with a < b, N[a, b] denotes the discrete interval given by N[a, b] = {a, a + 1, . . . , b − 1, b}. C(N[−τ̄, 0], R^n) denotes the set of all functions φ: N[−τ̄, 0] → R^n. Matrices, if not explicitly specified, are assumed to have compatible dimensions.

Model Description and Preliminaries
In this paper, we consider the following discrete-time neural network model:

x(k + 1) = Cx(k) + Ag(x(k)) + Bg(x(k − τ(k))) + u,   (2.1)

where x(k) = (x_1(k), x_2(k), . . . , x_n(k))^T ∈ R^n is the state vector; g(x(k)) = (g_1(x_1(k)), g_2(x_2(k)), . . . , g_n(x_n(k)))^T ∈ R^n, and g_j(x_j(k)) denotes the activation function of the jth neuron at time k; u = (u_1, u_2, . . . , u_n)^T is the input vector; the positive integer τ(k) corresponds to the transmission delay and satisfies τ ≤ τ(k) ≤ τ̄ (τ̄ ≥ 0 and τ ≥ 0 are known integers); C = diag{c_1, c_2, . . . , c_n}, where c_i (0 ≤ c_i < 1) describes the rate with which the ith neuron resets its potential to the resting state in isolation when disconnected from the networks and external inputs; A = (a_ij)_{n×n} is the connection weight matrix; B = (b_ij)_{n×n} is the delayed connection weight matrix.
The initial condition associated with model (2.1) is given by

x(s) = φ(s),   s ∈ N[−τ̄, 0],   φ ∈ C(N[−τ̄, 0], R^n).   (2.2)

Throughout this paper, we make the following assumption [6].
(H) For any j ∈ {1, 2, . . . , n}, g_j(0) = 0 and there exist constants G_j^− and G_j^+ such that

G_j^− ≤ (g_j(s_1) − g_j(s_2))/(s_1 − s_2) ≤ G_j^+,   for all s_1, s_2 ∈ R, s_1 ≠ s_2.   (2.3)
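As a concrete illustration of this class of systems, the sketch below simulates a small network of the assumed form x(k + 1) = Cx(k) + Ag(x(k)) + Bg(x(k − τ(k))) + u with a tanh activation, which satisfies assumption (H) with G_j^− = 0 and G_j^+ = 1. All numerical values are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper).
n = 2
C = np.diag([0.3, 0.4])                    # 0 <= c_i < 1
A = np.array([[0.1, -0.2], [0.05, 0.1]])   # connection weight matrix
B = np.array([[-0.1, 0.05], [0.2, -0.1]])  # delayed connection weight matrix
u = np.array([0.5, -0.3])                  # external input vector
g = np.tanh                                # activation satisfying (H) with G^- = 0, G^+ = 1
tau_min, tau_max = 2, 5                    # bounds on the time-varying delay

steps = 200
hist = np.zeros((steps + tau_max + 1, n))
hist[:tau_max + 1] = np.array([0.5, 0.45])  # constant initial condition on N[-tau_max, 0]

for k in range(steps):
    tau_k = tau_min + (k % (tau_max - tau_min + 1))  # a bounded time-varying delay
    i = k + tau_max                                  # row index of x(k) in hist
    hist[i + 1] = C @ hist[i] + A @ g(hist[i]) + B @ g(hist[i - tau_k]) + u

print(hist[-1])  # the trajectory stays in a bounded (dissipative) region
```

For these contractive parameters (‖C‖ < 1 and small weight matrices) every trajectory remains bounded, which is the qualitative behavior the dissipativity criteria certify.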
Similar to [23], we also give the following definitions for discrete-time neural network (2.1).

Definition 2.1. Discrete-time neural network (2.1) is said to be globally dissipative if there exists a compact set S ⊆ R^n such that, for all x_0 ∈ R^n, there exists a positive integer K(x_0) > 0 such that x(k, k_0, x_0) ∈ S for all k ≥ k_0 + K(x_0), where x(k, k_0, x_0) denotes the solution of (2.1) from initial state x_0 and initial time k_0. In this case, S is called a globally attractive set.
Definition 2.2. Let S be a globally attractive set of discrete-time neural network (2.1). Discrete-time neural network (2.1) is said to be globally exponentially dissipative if there exists a compact set S^∗ ⊃ S in R^n such that, for every x_0 ∈ R^n \ S^∗, there exist constants M(x_0) > 0 and 0 < θ < 1 such that

inf_{x̃ ∈ S^∗} ‖x(k, k_0, x_0) − x̃‖ ≤ M(x_0) θ^{k−k_0},   k ≥ k_0.

The set S^∗ is called a globally exponentially attractive set.

To prove our results, the following lemmas are necessary.
Lemma 2.3 (see [34]). Given constant matrices P, Q, and R, where P = P^T and Q = Q^T > 0, the condition

P + R^T Q^{-1} R < 0

is equivalent to the following condition:

[ P    R^T ]
[ R    −Q  ] < 0.

Lemma 2.4 (see [35, 36]). Given matrices P, Q, and R with P = P^T, the inequality

P + QF(k)R + R^T F^T(k) Q^T < 0

holds for all F(k) satisfying F^T(k)F(k) ≤ I if and only if there exists a scalar ε > 0 such that

P + εQQ^T + ε^{-1} R^T R < 0.
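The Schur complement equivalence of Lemma 2.3 can be sanity-checked numerically. The sketch below assumes the standard statement (P + R^T Q^{-1} R < 0 if and only if the block matrix [[P, R^T], [R, −Q]] < 0, with P = P^T and Q = Q^T > 0) and uses random, purely illustrative matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_neg_def(M):
    # Negative definiteness via eigenvalues of a symmetric matrix.
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

n = 4
X = rng.standard_normal((n, n))
Q = X @ X.T + n * np.eye(n)                       # Q = Q^T > 0
R = rng.standard_normal((n, n))
P = -(R.T @ np.linalg.solve(Q, R)) - np.eye(n)    # forces P + R^T Q^{-1} R = -I < 0
P = (P + P.T) / 2                                 # symmetrize (P = P^T)

lhs = is_neg_def(P + R.T @ np.linalg.solve(Q, R))           # scalar condition
rhs = is_neg_def(np.block([[P, R.T], [R, -Q]]))             # block LMI condition
print(lhs, rhs)  # True True: the two conditions agree
```

Both directions of the equivalence can be exercised this way; here P is constructed so the scalar condition holds, and the block LMI test agrees.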

Main Results
In this section, we shall establish our main criteria based on the LMI approach. For presentation convenience, in the following we denote

Theorem 3.1. Suppose that assumption (H) holds and that LMI (3.2) is feasible. Then neural network (2.1) is globally dissipative, and the set S is a positive invariant and globally attractive set.
Proof. For positive diagonal matrices D_1 > 0 and D_2 > 0, we know from assumption (H) that the following holds. Defining η(k) = x(k + 1) − x(k), we consider the following Lyapunov-Krasovskii functional candidate for model (2.1), where

3.6
Calculating the difference of V_i(k) (i = 1, 2, 3) along the positive half trajectory of (2.1), we obtain the following estimates. Similarly, one has the corresponding estimate involving x(k − δ).

3.12
For positive diagonal matrices H_1 > 0 and H_2 > 0, we can get from assumption (H) that

3.15
it follows from (3.7)-(3.14) that

3.16
From condition (3.2) and inequality (3.16), we get the desired estimate. Therefore, discrete-time neural network (2.1) is a globally dissipative system, and the set S is a positive invariant and globally attractive set when LMI (3.2) holds. The proof is completed.
Next, we discuss the global exponential dissipativity of discrete-time neural network (2.1).

Theorem 3.2. Under the conditions of Theorem 3.1, neural network (2.1) is globally exponentially dissipative, and
is a positive invariant and globally attractive set.
Proof. When x ∈ R^n \ S, that is, x ∉ S, we know from (3.16) that

3.19
From the definition of V(k) in (3.5), it is easy to verify the following, where

3.21
For any scalar μ > 1, it follows from (3.19) and (3.20) that

3.22
Summing up both sides of (3.22) from 0 to k − 1 with respect to j, we have

3.23
It is easy to compute that

3.25
It follows from (3.23)-(3.25) that the following holds, where

3.28
From the definition of V(k) in (3.5), we have the following estimate. It then follows from (3.28) and (3.29) that the required bound holds for all k ∈ N, which means that discrete-time neural network (2.1) is globally exponentially dissipative, and the set S is a positive invariant and globally attractive set when LMI (3.2) holds. The proof is completed.
Remark 3.3. In the study of the dissipativity of neural networks, the assumption (H) of this paper is the same as that in [28]; the constants G_j^− and G_j^+ (j = 1, 2, . . . , n) in assumption (H) are allowed to be positive, negative, or zero. Hence, assumption (H), first proposed by Liu et al. in [6], is weaker than the assumptions in [23-27, 30].
Remark 3.4. The idea behind constructing Lyapunov-Krasovskii functional (3.5) is to divide the delay interval [τ, τ̄] into two subintervals [τ, δ] and [δ, τ̄], so that the proposed Lyapunov-Krasovskii functional differs according to the subinterval to which the time delay τ(k) belongs. The main advantage of such a Lyapunov-Krasovskii functional is that it makes full use of the information on the considered time delay τ(k).

Now, let us consider the case when parameter uncertainties appear in the discrete-time neural networks with time-varying delays. In this case, model (2.1) can be further generalized to the following one:

x(k + 1) = (C + ΔC(k))x(k) + (A + ΔA(k))g(x(k)) + (B + ΔB(k))g(x(k − τ(k))) + u,   (3.31)

where C, A, B are known real constant matrices, and the time-varying matrices ΔC(k), ΔA(k), and ΔB(k) represent the time-varying parameter uncertainties that are assumed to satisfy the following admissible condition:

[ΔC(k) ΔA(k) ΔB(k)] = MF(k)[N_1 N_2 N_3],   (3.32)

where M and N_i (i = 1, 2, 3) are known real constant matrices, and F(k) is the unknown time-varying matrix-valued function subject to the following condition:

F^T(k)F(k) ≤ I,   k ∈ N.   (3.33)

For model (3.31), we have the following result readily.

3.35
and (3.37). From Lemma 2.4, we know that (3.37) is equivalent to the following inequality. As an application of Lemma 2.3, we know that (3.39) is equivalent to the following inequality. By simple computation, and noting that [ΔC(k) ΔA(k) ΔB(k)] = MF(k)[N_1 N_2 N_3], we have

3.41
Therefore, inequality (3.40) is just the same as inequality (3.2) when C + ΔC, A + ΔA, B + ΔB replace C, A, B in inequality (3.2), respectively. From Theorems 3.1 and 3.2, we know that uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative, and the corresponding set is a positive invariant and globally attractive set. The proof is then completed.

4.3
Therefore, by Theorem 3.1, we know that model (2.1) with the above given parameters is globally dissipative and globally exponentially dissipative. It is easy to compute that the positive invariant and globally attractive set is S = {x : ‖x‖ ≤ 0.9553}.
The following example illustrates that, when the sufficient conditions ensuring global dissipativity are not satisfied, complex dynamics may appear.

4.4
It is easy to check that linear matrix inequality (3.2), with G_i (i = 1, 2, 3, 4) and δ as in Example 4.1, has no feasible solution. Figure 1 depicts the states of the considered neural network (2.1) with initial conditions x_1(s) = 0.5, x_2(s) = 0.45, s ∈ N[−9, 0]. One can see from Figure 1 that chaotic behaviors appear for neural network (2.1) with the above given parameters.

Conclusions
In this paper, the global dissipativity and global exponential dissipativity have been investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks have been established in terms of LMIs.

Figure 1: State responses of x_1(k) and x_2(k).
then uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative. By the MATLAB LMI Control Toolbox, we can find a solution to the LMI in (3.2).