Delay-Dependent Stability Criteria of Uncertain Periodic Switched Recurrent Neural Networks with Time-Varying Delays

This paper deals with the problem of delay-dependent stability criteria for uncertain periodic switched recurrent neural networks with time-varying delays. When an uncertain discrete-time recurrent neural network is a periodic system, it can be expressed as a switched neural network with a finite number of switching states. Based on the switched quadratic Lyapunov functional (SQLF) approach and the free-weighting matrix (FWM) approach, linear matrix inequality (LMI) criteria are derived that guarantee the delay-dependent asymptotic stability of these systems. Two examples illustrate the effectiveness of the proposed criteria.


Introduction
Recurrent neural networks (RNNs) are an important tool in many application areas such as associative memory, pattern recognition, signal processing, model identification, and combinatorial optimization. With the development of research on RNNs in theory and application, the models have become more and more complex. When continuous-time RNNs are simulated on a computer, they should be discretized into discrete-time RNNs [1-3]. Simultaneously, in implementations of artificial neural networks, time-varying delays may occur due to the finite switching speeds of the amplifiers and communication time [4, 5]. Therefore, researchers have considered discrete-time RNNs in which time-varying delays are incorporated in the processing and/or transmission parts of the network architectures [6-9]. Parameter uncertainties and nonautonomous phenomena often exist in real systems due to modeling inaccuracies [4]. In particular, when we consider the long-term dynamical behavior of a system together with the seasonality of a changing environment, the parameters of the system usually change with time [10-14]. In order to model such systems with neural networks, uncertain, switched, or jumping neural networks with time-varying delays appear in many papers [6, 15-24]. So in this paper we consider the stability of the following discrete-time recurrent neural network with time-varying delay:

u(k + 1) = (A + ΔA(k)) u(k) + (W_1 + ΔW_1(k)) g(u(k)) + (W_2 + ΔW_2(k)) f(u(k − d(k))) + I + ΔI(k),  (1.1)

where u(k) = (u_1(k), u_2(k), . . ., u_n(k))^T ∈ R^n is the state vector associated with the n neurons, A = diag{a_1, a_2, . . ., a_n} is a diagonal matrix with positive entries, W_1 and W_2 are, respectively, the connection weight matrix and the delayed connection weight matrix, I is the input vector, g(u) = (g_1(u), g_2(u), . . ., g_n(u))^T and f(u) = (f_1(u), f_2(u), . . ., f_n(u))^T are the neuron activation function vectors, and d(k) is a nonnegative time-varying function denoting the time delay, which satisfies

d_1 ≤ d(k) ≤ d_2.  (1.2)

In most of the literature it is required that the parameter uncertainty matrices ΔA(k), ΔW_1(k), ΔW_2(k), and ΔI(k) take the form

[ΔA(k), ΔW_1(k), ΔW_2(k), ΔI(k)] = D F(k) [E_a, E_w1, E_w2, E_i],  (1.3)

where E_a, E_w1, E_w2, and E_i are given constant matrices of appropriate dimensions and F(k) is an uncertain matrix such that

F(k)^T F(k) ≤ I.  (1.4)

In practice, however, it is generally difficult to obtain the decomposition of ΔA(k), ΔW_1(k), ΔW_2(k), and ΔI(k) into the matrices D, E_a, E_w1, E_w2, and E_i. In addition, periodic oscillation in recurrent neural networks is an interesting dynamic behavior, as many biological and cognitive activities require repetition [7, 10, 11, 25]. Periodic oscillations in recurrent neural networks have also been found in many applications such as associative memories, pattern recognition, machine learning, and robot motion control [25]. So, if (1.1) is an uncertain periodic neural network whose period is less than a constant b, then (1.1) can be expressed as a switched neural network with a finite number of switching states, that is, if r(k) = j, then

u(k + 1) = A_j u(k) + W_1j g(u(k)) + W_2j f(u(k − d(k))) + I_j,  (1.5)

where r(k) is a switching rule defined by r(k): N → Ω with Ω = {S_1, S_2, . . ., S_N}. Moreover, r(k) = j means that the sub-recurrent neural network (sub-RNN) S_j, corresponding to A_j, W_1j, W_2j, I_j, is active.
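For intuition, the dynamics described above can be sketched in a short simulation. The matrices, tanh activations, switching rule, and delay function below are illustrative placeholders, not the paper's data; the sketch only shows the shape of the switched model with a time-varying delay.

```python
import numpy as np

def simulate_switched_rnn(modes, r, d, u0, steps=50):
    # modes[j] = (A_j, W1_j, W2_j, I_j); r(k) gives the active sub-RNN
    # index at step k; d(k) is the time-varying delay with d1 <= d(k) <= d2.
    g = f = np.tanh  # illustrative activations (an assumption, not the paper's)
    traj = [np.asarray(u0, dtype=float)]
    for k in range(steps):
        A, W1, W2, I = modes[r(k)]
        u_del = traj[max(0, k - d(k))]          # delayed state u(k - d(k))
        traj.append(A @ traj[k] + W1 @ g(traj[k]) + W2 @ f(u_del) + I)
    return np.array(traj)

# Hypothetical two-mode periodic network with small, stable random weights
rng = np.random.default_rng(0)
mode = lambda: (np.diag([0.3, 0.3]),
                0.05 * rng.standard_normal((2, 2)),
                0.05 * rng.standard_normal((2, 2)),
                np.zeros(2))
modes = [mode(), mode()]
traj = simulate_switched_rnn(modes, r=lambda k: k % 2,
                             d=lambda k: 1 + (k % 3), u0=[1.0, -1.0])
print(traj.shape)  # (51, 2)
```

With these contractive placeholder weights the state decays toward the zero equilibrium, matching the qualitative behavior the paper studies.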
The dynamic behaviors of these models are the foundation for applications. Under (1.5), most papers discuss the stability of uncertain neural networks with the common Lyapunov function approach [5, 6, 15-23, 25]. To the best of the authors' knowledge, up to now there is scarcely any paper that studies uncertain periodic neural networks using the SQLF. This situation motivates the present research.
Motivated by the above discussion, the authors study the delay-dependent stability of uncertain discrete-time recurrent neural networks with time-varying delays in which the uncertain recurrent neural network has a finite number of sub-RNNs, and the sub-RNNs may change from one to another according to arbitrary switching or restricted switching. The contributions of this paper are the following. (1) Using a switching graph, uncertain periodic recurrent neural networks with time-varying delays are transformed into switched recurrent neural networks; (2) the derivative of the SQLF (3.7) of the literature [8] is improved in (3.11); please see Remark 4.3 and Table 3; (3) based on the switching graph, the delay-dependent stability criteria of switched recurrent neural networks are studied by the FWM and SQLF approaches. An effective LMI approach is then developed to solve the problem.
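Contribution (1) rests on a simple observation: a network whose parameters repeat with period b visits only b distinct parameter sets, so the switching rule realizing the periodic network as a switched system can be written down directly. A minimal sketch (0-based mode indices, hypothetical period):

```python
# A period-b uncertain network cycles through b parameter sets; the rule
# r(k) below realizes this as a switched system with sub-RNNs S_1, ..., S_b
# (0-based indices here, purely for illustration).
def periodic_switching_rule(b):
    return lambda k: k % b

r = periodic_switching_rule(3)
print([r(k) for k in range(7)])  # [0, 1, 2, 0, 1, 2, 0]
```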
This paper is organized as follows. In Section 2, we give some basic definitions. We analyze the stability of the system (2.2) with the SQLF and FWM approaches in Section 3. Some examples are given in Section 4. Section 5 offers the conclusions of this paper.

Preliminaries
In many electronic circuits, nonmonotonic functions can be more appropriate to describe the neuron activation when designing and implementing an artificial neural network [7]; hence, we make the following assumption.

Under this assumption, the equilibrium points of the uncertain network (1.1) exist by the fixed point theorem [1]. In the following, let u* denote an equilibrium point and let x(k) = u(k) − u*. The systems (1.1) and (1.5) are, respectively, shifted to the form (2.2). For convenience, the switching graph is defined as follows.
Definition 2.1. Let Γ = (Ω, W) be a switching graph, where Ω is the set of sub-RNNs S_i and W is the set of weighted arcs with w_ij ∈ {0, 1}. Here w_ij = 1 (respectively, w_ij = 0) represents that the sub-RNN S_i switches (does not switch) to the sub-RNN S_j.
Remark 2.2. When w_ij = 1, if ∑_{l=1}^{N} w_jl = 0, that is, the sub-RNN S_j cannot switch to any other sub-RNN, we suppose that the uncertain neural network always stays in the sub-RNN S_i; that is, we set w_ii = 1 and w_ij = 0.
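Definition 2.1 and Remark 2.2 can be made concrete with an adjacency matrix. The sketch below applies the dead-end convention of Remark 2.2; the three-node example graph is hypothetical.

```python
import numpy as np

def normalize_switching_graph(W):
    # W[i][j] = 1 iff sub-RNN S_i may switch to S_j (Definition 2.1).
    # Remark 2.2's convention: if S_j is a dead end (its row is all zeros),
    # forbid switching into it and let each predecessor S_i stay put instead.
    W = np.array(W, dtype=int)
    N = W.shape[0]
    for j in range(N):
        if W[j].sum() == 0:              # S_j cannot switch anywhere
            for i in range(N):
                if W[i, j] == 1 and i != j:
                    W[i, j] = 0          # disallow entering the dead end
                    W[i, i] = 1          # ... and stay in S_i (w_ii = 1)
    return W

W = normalize_switching_graph([[0, 1, 0],
                               [0, 0, 1],
                               [0, 0, 0]])   # S_3 is a dead end
print(W)
```

After normalization the arc S_2 → S_3 is replaced by the self-loop S_2 → S_2, so every reachable sub-RNN has a successor.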
Throughout this paper, the superscript T stands for the transpose of a matrix, P > 0 means that the matrix P is positive definite, and the symmetric terms in a symmetric matrix are denoted by *, as in (2.3).

Asymptotic Stability of Uncertain Periodic Switched Recurrent Neural Networks
Theorem 3.1. Let d_1 and d_2 be positive integers such that 0 ≤ d_1 ≤ d_2. The system (2.2) is asymptotically stable if there exist symmetric matrices P_i = P_i^T > 0, semipositive definite matrices X_ij = X_ij^T and U_ij = U_ij^T, and any appropriately dimensioned matrices N_ij, M_ij, and T_ij such that the LMIs (3.1)-(3.4) hold, where the quantities involved are defined in (3.5).
Proof. Let y(l) = x(l + 1) − x(l); then (3.6) follows. We consider the SQLF (3.7). It is clear that the equations (3.8)-(3.9) are true. First, we prove that under w_ij = 1 the difference of the SQLF is less than 0. Suppose that r(k) = i and r(k + 1) = j, which means that the sub-RNN S_i switches to the sub-RNN S_j; we obtain (3.10)-(3.12).
In order to strictly guarantee that ΔV(k) is less than 0, consider the switching graph Γ. If there exists a sub-RNN S_i (S_i ∈ Ω) satisfying ∑_{j=1}^{N} w_ij = 0, which means that S_i cannot switch to any other sub-RNN, then the equality Q_1i = Q_1j can be imposed; otherwise the switching sequence must be β, and there exists l such that S_{α_{L+1}} = S_{α_l} (1 ≤ l < L). Because the effect of the sub-RNNs S_{α_1}, S_{α_2}, . . ., S_{α_{l−1}} on the whole system ends before time α_l, after α_l the sequence β changes to the periodic sequence β = {S_{α_l} → · · · → S_{α_L} → S_{α_{L+1}}}. Suppose that i = α_l = α_{L+1}; then along this switching sequence the LMIs (3.15)-(3.17) all hold. On the other hand, for any appropriately dimensioned matrices N_ij, M_ij, and T_ij the equations (3.18)-(3.19) are true. In addition, for any semipositive definite matrices X_ij = X_ij^T and U_ij = U_ij^T, the equations (3.20) hold.
From the assumption, we have (3.21).
Similar to the conclusion in [8], for any diagonal matrix O = diag{o_1, . . ., o_n} ≥ 0 the inequalities (3.22)-(3.23) are also true. Then we add the terms on the right-hand sides of (3.16)-(3.23) to yield (3.24), where ξ(k) is defined in (3.18).
Second, based on the switching graph Γ, when w_ij = 1 (S_i, S_j ∈ Ω), all the corresponding ΔV_i(k) are less than 0, which means that the system (2.2) is asymptotically stable. This completes the proof of Theorem 3.1.
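As a numerical sanity check of the switching-graph condition used in the proof, one can test the delay-free analogue A_i^T P_j A_i − P_i < 0 over every arc w_ij = 1. This is a strong simplification of the paper's LMIs (3.1)-(3.4), dropping the delay terms and free-weighting matrices entirely, and the matrices below are assumed purely for illustration.

```python
import numpy as np

def sqlf_feasible(As, P, W, tol=1e-9):
    # Check A_i^T P_j A_i - P_i < 0 for every arc w_ij = 1: the delay-free
    # analogue of the switched quadratic Lyapunov decrease condition.
    for i, Ai in enumerate(As):
        for j, Pj in enumerate(P):
            if W[i][j]:
                M = Ai.T @ Pj @ Ai - P[i]          # symmetric by construction
                if np.linalg.eigvalsh(M).max() >= -tol:
                    return False
    return True

As = [np.array([[0.5, 0.1], [0.0, 0.4]]),
      np.array([[0.3, 0.0], [0.2, 0.6]])]
P = [np.eye(2), np.eye(2)]          # candidate SQLF matrices (here a common P)
W = [[0, 1], [1, 0]]                # periodic switching graph S_1 <-> S_2
print(sqlf_feasible(As, P, W))
```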
Remark 3.2. Using the method in [8], it is easy to show that the system is globally exponentially stable.
Please see Table 3.
Combined with Theorem 3.1, if we consider the common Lyapunov function approach, then we have the following.

Corollary 3.4. Let d_1 and d_2 be positive integers such that 0 ≤ d_1 ≤ d_2. The system (2.2) is asymptotically stable if there exist symmetric matrices P = P^T > 0 and any appropriately dimensioned matrices N, M, and T such that the following LMIs hold:
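A rough way to experiment with the common-Lyapunov idea behind Corollary 3.4, again in the delay-free linear case: solve a discrete Lyapunov equation for one mode and test whether the resulting P also certifies the other modes. The mode matrices are assumptions, and this omits the delay and free-weighting-matrix structure of the actual LMIs.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Delay-free common-Lyapunov sketch: solve A_1^T P A_1 - P = -I for one
# mode, then test whether the same P also certifies the other mode
# (assumed matrices, for illustration only).
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.4, 0.0], [0.1, 0.5]])
P = solve_discrete_lyapunov(A1.T, np.eye(2))   # A_1^T P A_1 - P = -I
common = all(np.linalg.eigvalsh(A.T @ P @ A - P).max() < 0 for A in (A1, A2))
print(common)
```

When `common` is true, P serves as a common Lyapunov matrix for both modes, so the switched system is stable under arbitrary switching between them.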

Illustrative Examples

Example 4.1.
Employing the LMIs in Theorem 3.1 yields upper bounds on d_2 that guarantee the stability of system (1.1) for various lower bounds d_1; these are listed in Table 1. When d_1 = 1 and d_2 = 3, the LMIs (3.1)-(3.4) are solvable in Matlab 7.0.1 according to Theorem 3.1, and it can be seen from Figure 1 that all the state solutions corresponding to 10 random initial points converge asymptotically to the unique equilibrium x* = (0, 0, 0)^T.

Example 4.2. Consider the discrete-time recurrent neural network (2.2) with
Employing the LMIs in [8] and those in Corollary 3.4 yields upper bounds on d_2 that guarantee the stability of the system for various lower bounds d_1; these are listed in Table 2. It is clear that the upper bounds obtained in this paper are better than those of [8]. It can be seen from Figure 2 that, when d_1 = 1 and d_2 = 3, all the state solutions corresponding to 10 random initial points converge asymptotically to the unique equilibrium x* = (0, 0, 0)^T.
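The experiment behind Figure 2 can be approximated as follows. Since the example's actual weight matrices are not reproduced in this excerpt, the sketch uses placeholder stable weights; it only mirrors the setup of 10 random initial points, tanh activations, and a delay varying between d_1 = 1 and d_2 = 3.

```python
import numpy as np

# Placeholder weights standing in for Example 4.2's matrices (assumptions):
# a diagonal A with small positive entries and small random W1, W2.
rng = np.random.default_rng(1)
A = np.diag([0.2, 0.3, 0.25])
W1 = 0.05 * rng.standard_normal((3, 3))
W2 = 0.05 * rng.standard_normal((3, 3))
d = lambda k: 1 + (k % 3)                     # time-varying delay in [1, 3]

def run(x0, steps=80):
    # Iterate x(k+1) = A x(k) + W1 tanh(x(k)) + W2 tanh(x(k - d(k))).
    traj = [np.asarray(x0, dtype=float)]
    for k in range(steps):
        x_del = traj[max(0, k - d(k))]
        traj.append(A @ traj[k] + W1 @ np.tanh(traj[k]) + W2 @ np.tanh(x_del))
    return traj[-1]

finals = [run(rng.uniform(-1, 1, 3)) for _ in range(10)]
print(max(np.abs(f).max() for f in finals))   # all runs end near x* = 0
```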

Conclusions
This paper was dedicated to the delay-dependent stability of uncertain periodic switched recurrent neural networks with time-varying delays. A less conservative LMI-based global stability criterion was obtained using the switched quadratic Lyapunov functional approach and the free-weighting matrix approach for periodic uncertain discrete-time recurrent neural networks with a time-varying delay. One example illustrates the effectiveness of the proposed criterion, and another example demonstrates that the proposed method improves on an existing one.

and Λ_ij^3 are defined in (3.1)-(3.4). Therefore, when the corresponding LMIs are satisfied, the stability conclusion follows.

Remark 4.3. Employing the LMIs in [8] and those in Corollary 3.4 yields upper bounds on d_2 that guarantee the stability of system (1.1) of Example 1 of [8] for various lower bounds d_1; these are listed in Table 3.

Table 1 :
Allowable upper bound of d 2 with given d 1 .

Table 2 :
Allowable upper bound of d 2 with given d 1 .

Table 3 :
Allowable upper bound of d 2 with given d 1 .