On Stability Analysis for Generalized Neural Networks with Time-Varying Delays

This paper deals with the problem of stability analysis for generalized neural networks with time-varying delays. With a suitable Lyapunov-Krasovskii functional (LKF) and the Wirtinger-based integral inequality, sufficient conditions guaranteeing the asymptotic stability of the concerned networks are derived in terms of linear matrix inequalities (LMIs). By applying the proposed methods to two numerical examples which have been utilized in many works for checking the conservatism of stability criteria, it is shown that the obtained results are significantly improved compared with previous ones in the literature.


Introduction
Neural networks have been successfully applied to various science and engineering problems such as pattern recognition, optimization, medical diagnosis, and image and signal processing [1][2][3][4]. In addition, the model of neural networks can take a variety of forms such as bidirectional associative memory (BAM) [5] and Cohen-Grossberg [6] neural networks. Therefore, the stability analysis of neural networks has been extensively studied because the application of neural networks heavily depends on the dynamic behavior of their equilibrium points, and checking the stability of the concerned networks is a prerequisite. Recently, models of neural networks with time-varying delays have been considered and their asymptotic stability has been extensively investigated, since time delays, which are caused by the inherent communication time among the neurons and the finite switching speed of amplifiers in hardware implementations of the networks, can harm the system performance and stability. In this regard, much weight has been placed on the stability analysis of local field and static neural networks with time-varying delays. For local field neural networks, a new activation condition which had not been considered before was proposed to reduce the conservatism of stability criteria in [7]. Two delay-partitioning approaches were utilized to obtain further enhanced delay-dependent stability criteria for neural networks with time-varying delays in [8][9][10][11]. Very recently, a new augmented Lyapunov-Krasovskii functional was proposed to enhance the feasible region of the stability condition of delayed neural networks [12]. For static neural networks, improved delay-dependent stability criteria were presented in [13,14]. By constructing an augmented Lyapunov-Krasovskii functional, both delay-independent and delay-dependent stability criteria were proposed in [15] with some new techniques. The robust global asymptotic stability of generalized static neural networks with linear fractional uncertainties and time-varying delays was addressed in [16] within a novel input-output framework. Very recently, interval time-varying delays were considered in investigating the delay-dependent stability of static neural networks [17].
The main issue in delay-dependent stability analysis for dynamic systems with time delays is to enlarge the feasible region compared with existing results. In delay-dependent stability analysis, the maximum delay bound guaranteeing asymptotic stability has been regarded as one of the indices for checking the conservatism of a stability condition. The reduction of conservatism in delay-dependent stability criteria mainly depends on the construction of the LKF and on the techniques used to estimate the time-derivative of the LKF. From the LKF point of view, the discretized form [18], triple integral forms [19][20][21], and augmented LKFs in [12,22] and [23,24] have been proposed to reduce the conservatism of stability conditions. Among techniques for estimating the time-derivative of LKFs, Park's inequality [25], Jensen's inequality [18], model transformation [26,27], free-weighting matrix techniques [28,29], the convex combination technique [30], reciprocally convex optimization [31], and the Wirtinger-based integral inequality [32] are remarkable results which have promoted the development of delay-dependent stability analysis. Among these, the Wirtinger-based integral inequality, which provides a tighter lower bound of integral terms than Jensen's inequality, has very recently been applied to the stability analysis of delayed neural networks [33,34]. However, in stability analysis for neural networks with time-varying delays, the application of the Wirtinger-based integral inequality with an augmented LKF has not been fully investigated yet, and thus there is still room for further improvement in the reduction of conservatism.
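To make the comparison between Jensen's inequality and the Wirtinger-based integral inequality [32] concrete, the following sketch numerically checks both lower bounds of the integral ∫ ẋ(s)ᵀRẋ(s) ds for a scalar test trajectory x(s) = sin s with R = 1 (the trajectory is an illustrative choice, not one from the cited works). Jensen's bound uses only Ω₀ = x(b) − x(a); the Wirtinger-based bound adds the term 3Ω₁ᵀRΩ₁ with Ω₁ = x(b) + x(a) − (2/(b−a))∫ x(s) ds.

```python
import math

def trapezoid(f, a, b, n=20000):
    # Simple composite trapezoidal quadrature.
    step = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * step)
    return s * step

# Scalar test trajectory x(s) = sin(s) on [a, b], with R = 1.
a, b = 0.0, 1.0
x, xdot = math.sin, math.cos

lhs = trapezoid(lambda s: xdot(s) ** 2, a, b)            # integral of x'(s)^2
omega0 = x(b) - x(a)
omega1 = x(b) + x(a) - (2.0 / (b - a)) * trapezoid(x, a, b)

jensen = omega0 ** 2 / (b - a)                           # Jensen lower bound
wirtinger = (omega0 ** 2 + 3.0 * omega1 ** 2) / (b - a)  # Wirtinger-based bound

# The Wirtinger-based bound is never looser than Jensen's, and both
# lower-bound the true integral.
assert jensen <= wirtinger <= lhs + 1e-9
```

The extra term 3Ω₁ᵀRΩ₁ is nonnegative whenever R > 0, which is exactly why the Wirtinger-based estimate is at least as tight as Jensen's.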
With the above motivation, the problem of stability analysis for generalized neural networks with time-varying delays is investigated in this paper. Inspired by the works [35,36], a generalized neural network which contains static neural networks and local field neural networks as special cases is considered. Thus, the general model of neural networks used in this work is a superordinate concept to the models utilized in most literature, including the works [37][38][39][40][41]. A modified Wirtinger-based integral inequality which changes the intervals of the integral terms will be introduced as Lemma 3. Then, by constructing a suitable augmented Lyapunov-Krasovskii functional and utilizing Lemma 3, an improved stability condition under which the considered neural networks are asymptotically stable is derived in terms of linear matrix inequalities (LMIs). When the information about the time-derivative of the time-varying delay is unknown, a stability condition will be presented as Corollary 10. The advantage and superiority of the main results will be shown via two numerical examples which have been utilized in many previous works to check the conservatism of stability criteria.
Notation. R^n and R^{m×n} denote the n-dimensional Euclidean space with vector norm ‖·‖ and the set of all m×n real matrices, respectively. S^n and S^n_+ are the sets of symmetric and symmetric positive definite n×n matrices, respectively. I_n, 0_n, and 0_{m·n} denote the n×n identity matrix and the n×n and m×n zero matrices, respectively. X > 0 (X < 0) means that X is a symmetric positive (negative) definite matrix. X_⊥ stands for a basis for the null space of X. diag{···} represents the block diagonal matrix. For any square matrix X and any vectors x_i, we define Sym{X} = X + X^T and col{x_1, …, x_n} = [x_1^T, …, x_n^T]^T.
h(t) is the time-varying delay satisfying 0 ≤ h(t) ≤ h_M and ḣ(t) ≤ h_D, where h_M is a known positive scalar and h_D is any constant.
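As a purely illustrative instance of an admissible delay (the function and bounds below are hypothetical, not taken from the examples of this paper), h(t) = 0.5 + 0.3 sin t satisfies 0 ≤ h(t) ≤ h_M with h_M = 0.8 and ḣ(t) = 0.3 cos t ≤ h_D with h_D = 0.3, which a quick numerical sweep confirms:

```python
import math

# Hypothetical admissible delay: h(t) = 0.5 + 0.3*sin(t).
h  = lambda t: 0.5 + 0.3 * math.sin(t)
hd = lambda t: 0.3 * math.cos(t)           # its time-derivative

hM, hD = 0.8, 0.3                          # candidate bounds h_M, h_D
ts = [k * 0.001 for k in range(20000)]     # sample t over [0, 20)

assert all(0.0 <= h(t) <= hM for t in ts)  # 0 <= h(t) <= h_M
assert all(hd(t) <= hD for t in ts)        # h'(t) <= h_D
```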
The activation functions of neuron satisfy the following assumption.
Remark 2. In Assumption 1, k_i^+ and k_i^- are allowed to be positive, negative, or zero. As mentioned in [40], Assumption 1 describes the class of globally Lipschitz continuous and monotone nondecreasing activation functions when k_i^- = 0 and k_i^+ > 0, and the class of globally Lipschitz continuous and monotone increasing activation functions when k_i^+ > k_i^- > 0.
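For instance, the common activation f(x) = tanh(x) belongs to the first class described in Remark 2, with sector bounds k_i^- = 0 and k_i^+ = 1. A minimal numerical check of the difference-quotient (sector) bound, for illustration only:

```python
import math, random

# tanh is globally Lipschitz and monotone nondecreasing: every
# difference quotient (f(u)-f(v))/(u-v) lies in the sector [0, 1].
k_minus, k_plus = 0.0, 1.0
rng = random.Random(0)

for _ in range(10000):
    u = rng.uniform(-5.0, 5.0)
    v = rng.uniform(-5.0, 5.0)
    if u == v:
        continue
    slope = (math.tanh(u) - math.tanh(v)) / (u - v)
    # Small tolerance guards against floating-point rounding.
    assert k_minus - 1e-9 <= slope <= k_plus + 1e-9
```

By the mean value theorem the quotient equals sech²(ξ) for some ξ between u and v, which lies in (0, 1], consistent with the sector [0, 1].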
It should be noted that the activation functions f_i(·) (i = 1, …, n) satisfy the following condition [42]: The aim of this paper is to investigate the delay-dependent stability of system (4), which will be introduced in the next section. The following lemmas will be utilized in deriving the main results.

Main Results
In this section, two delay-dependent stability criteria for system (4) will be proposed. For simplicity of matrix and vector representation, e_i (i = 1, 2, …, 12) ∈ R^{12n×n}, which will be used in Theorem 6 and Corollary 10, are defined as block entry matrices. The other notations for some vectors and matrices are defined in the Appendix. Now, the following theorem is given as a main result.

Theorem 6. For given scalars h_M > 0 and h_D and the diagonal matrices diag{k_1^-, …, k_n^-} and diag{k_1^+, …, k_n^+} from Assumption 1, system (4) is asymptotically stable for 0 ≤ h(t) ≤ h_M and ḣ(t) ≤ h_D if there exist matrices

Proof. Let us consider the following candidate for an appropriate Lyapunov-Krasovskii functional: where

Mathematical Problems in Engineering
The time-derivatives of V_i(t) (i = 1, 2, 3) are as follows: By applying the Leibniz integral rule to V_4(t), it follows that Applying the Leibniz integral rule and the two zero equalities inspired by the work [45] to V_5(t) leads to where the two matrices introduced in the zero equalities are arbitrary symmetric matrices.
Remark 7. Theorem 6 utilizes V_5(t), which was inspired by the authors' work [22]. In [22], the problem of delay-dependent stability for neural networks with interval time-varying delays was addressed. However, when estimating the time-derivative of V_5(t), the Wirtinger-based integral inequality was not used in [22], which means that there is still room for further improvement in enhancing the feasible region of the stability condition. In (18) and (19), the integral terms were estimated by the use of Lemma 3 for the first time; thus, more cross terms were utilized in the stability condition of Theorem 6.
Remark 8. It should be noted that, when h_D is larger than one, the condition presented in Theorem 6 is infeasible since the term −(1 − h_D)G cannot be negative definite. When h_D is unknown or larger than one, then, by not considering V_4(t) of the functional in (12), the following corollary can be obtained. Motivated by [32], the integral terms of Theorem 6 can be utilized as elements of the augmented vector.
Proof. The proof of Corollary 10 is very similar to that of Theorem 6 and is thus omitted.
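In practice, the LMI conditions of Theorem 6 and Corollary 10 are certified numerically with a semidefinite programming solver. As a generic, self-contained illustration of how such a matrix-inequality certificate is verified (the matrices M and Q below are hypothetical and unrelated to the LMIs above), the following sketch checks the basic Lyapunov inequality MᵀP + PM < 0 by solving the corresponding Lyapunov equation through Kronecker-product vectorization and testing eigenvalues:

```python
import numpy as np

# Hypothetical stable system matrix and right-hand side.
M = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.eye(2)

n = M.shape[0]
# Row-major vectorization: vec(M^T P + P M) = (M^T (x) I + I (x) M^T) vec(P).
K = np.kron(M.T, np.eye(n)) + np.kron(np.eye(n), M.T)
P = np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)
P = 0.5 * (P + P.T)  # symmetrize against rounding

# Certificate: P > 0 and M^T P + P M < 0, checked via eigenvalues.
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.all(np.linalg.eigvalsh(M.T @ P + P @ M) < 0)
```

Since M is Hurwitz and Q > 0, the Lyapunov equation MᵀP + PM = −Q has a unique symmetric positive definite solution P, which serves as the feasibility certificate; the augmented LMIs of Theorem 6 are certified in the same spirit, only with larger block matrices.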

Numerical Examples
In this section, two numerical examples are introduced to show the improvements of the proposed methods.
Example 1. Consider the neural network (4) with the parameters This example has been utilized in many previous works to check the conservatism of delay-dependent stability criteria. In Table 1, a comparison of our results with the previous works [7][8][9] and [10][11][12] is conducted. When h_D is 0.8 or 0.9, Theorem 6 provides significantly larger delay bounds than those listed in Table 1. Also, when h_D is larger than one or unknown, the result of Corollary 10 is the largest value. The maximum delay bounds obtained by Theorem 6 and Corollary 10 are listed in Table 2.
From this table, it can be confirmed that the proposed stability conditions also give larger delay bounds, which supports the reduced conservatism of Theorem 6 and Corollary 10.
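Independently of the tabulated LMI results, the asymptotic stability asserted by such criteria can be illustrated by direct simulation. The following sketch integrates a scalar instance of a delayed network ẋ(t) = −a x(t) + w₁ f(x(t)) + w₂ f(x(t − h)) with f = tanh by forward Euler with a history buffer; the parameters a, w₁, w₂ and the delay are made up for illustration and are not the parameters of Example 1:

```python
import math

# Hypothetical scalar delayed network, chosen so the origin is stable
# (a dominates |w1| + |w2| and tanh has Lipschitz constant 1).
a, w1, w2 = 2.0, 0.5, 0.5
h = 0.5                      # constant delay
dt, T = 0.001, 20.0
steps = int(T / dt)
lag = int(h / dt)

# Forward-Euler integration with a buffer holding the delayed state.
buf = [1.0] * (lag + 1)      # constant initial history x(s) = 1, s <= 0
for _ in range(steps):
    x, x_del = buf[-1], buf[0]
    x_new = x + dt * (-a * x + w1 * math.tanh(x) + w2 * math.tanh(x_del))
    buf.append(x_new)
    buf.pop(0)

# The state converges toward the zero equilibrium.
assert abs(buf[-1]) < 1e-3
```

The buffer `buf` stores the last h/dt states so that `buf[0]` approximates x(t − h); this is the standard fixed-step treatment of a constant delay and extends to time-varying h(t) by interpolating within the buffer.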

Conclusion
In this paper, two improved delay-dependent stability criteria for generalized neural networks with time-varying delays have been proposed by the use of the Lyapunov stability theorem and Lemma 3. In Theorem 6, by constructing a suitable augmented Lyapunov-Krasovskii functional and utilizing Lemma 3, a delay-dependent sufficient condition for asymptotic stability of the concerned networks was derived. When h_D is larger than one or unknown, a stability condition was also presented as Corollary 10 based on the result of Theorem 6. Via two numerical examples which have been dealt with in many previous works to check the conservatism of stability criteria, it was shown that Theorem 6 and Corollary 10 provide larger delay bounds than those of recent works.

Appendix
Consider