Stability Analysis for Delayed Neural Networks: Reciprocally Convex Approach

This paper is concerned with global stability analysis for a class of continuous neural networks with time-varying delay. The lower and upper bounds of the delay and the upper bound of its first derivative are assumed to be known. By introducing a novel Lyapunov-Krasovskii functional (LKF), delay-dependent stability criteria are derived in terms of linear matrix inequalities (LMIs), which guarantee that the considered neural networks are globally stable. When estimating the derivative of the LKF, instead of applying Jensen's inequality directly, a substep is taken and a slack variable is introduced via the reciprocally convex combination approach; as a result, the criteria are shown to be less conservative than those in the available literature. Numerical examples are given to demonstrate the effectiveness and merits of the proposed method.


Introduction
In recent years, important applications in fields such as pattern recognition, signal processing, optimization, and associative memories have made neural networks (NNs) a focus of attention. Stability analysis of NNs with time-varying delay has consequently received much attention, because time delay is frequently encountered in NNs, owing to the finite switching speed of amplifiers in the communication and response of neurons, and can cause instability and oscillations in the system. Both delay-independent and delay-dependent stability criteria have been proposed. Since delay-independent criteria tend to be more conservative, more attention has been given to delay-dependent criteria, which can exploit the length of the delay.
Various methods have been proposed to reduce conservatism when deriving stability criteria. For example, the free-weighting matrix approach of [6, 7, 19-22] has proved very effective, since bounding techniques on some cross-product terms are avoided. Stability analyses of NNs with multiple and single time-varying delays are carried out in [17, 23], respectively. Moreover, a new free-weighting matrix approach is proposed in [24] to estimate the derivative of the Lyapunov functional without discarding any negative quadratic terms, and thus improved delay-dependent stability criteria are established. Alongside the free-weighting matrix approach, [14, 25, 26] adopted the delay-partitioning idea to solve the delay-dependent stability problem, and the proposed methods significantly reduce conservatism.
When deriving stability criteria for delayed systems, two kinds of approaches are usually used, namely, Lyapunov function approaches [27-29] and Lyapunov-Krasovskii functional (LKF) approaches [17, 23, 24, 30-33]. The former places no restriction on the derivative of the time delay and usually yields a simpler stability criterion or a delay-independent criterion, while the latter, expressed in the form of LMIs, takes the derivative of the time delay into account and gives a delay-dependent criterion; it can therefore be less conservative, since the LKF uses more information about the system. The discretized LKF method developed in [34] is another method for stability analysis, and [13] made the adjustments necessary to apply it to the robust stability problem for NNs with uncertain delays.
In addition, the range of the time-varying delay for NNs is usually assumed to have a lower bound of zero, as in [7, 11, 23, 24], while in practice it need not be; setting h₁ to zero then introduces conservatism. The same holds for the derivative of the time delay: in many papers, such as [14, 25, 26], the delay is either constant or its derivative is left unknown, while an upper or lower bound could be assumed.
In this paper, the stability problem for continuous NNs with time-varying delay is considered. A novel LKF is proposed, and it is modified to deal with different cases concerning the time delay and its derivative. When estimating the derivative of the LKF, instead of applying Jensen's inequality directly, a substep is taken and a slack variable is introduced; the resulting criteria are shown to be less conservative than existing results. Numerical examples and analysis are given to demonstrate the effectiveness and merits of the proposed method.
In Section 1, a brief introduction is presented, and some notations are defined. In Section 2, the stability problem is formulated, and some preliminaries are given. In Section 3, new criteria in the form of one theorem and three corollaries for NNs with time-varying delay are presented. In Section 4, numerical examples are presented, along with results from the other literature. The paper is concluded in Section 5.
Notations. The notations used throughout the paper are standard. The superscript "T" stands for matrix transposition; ℝⁿ denotes the n-dimensional Euclidean space; the notation P > 0 means that P is a real symmetric positive definite matrix; I and 0 represent the identity matrix and a zero matrix, respectively; diag{⋅⋅⋅} stands for a block-diagonal matrix; λmin(A) (λmax(A)) denotes the minimum (maximum) eigenvalue of a symmetric matrix A; ‖⋅‖ denotes the Euclidean norm of a vector and the induced norm of a matrix. In symmetric block matrices, an asterisk (∗) represents a term induced by symmetry. Matrices whose dimensions are not explicitly stated are assumed to be compatible for algebraic operations.

Model Descriptions and Preliminaries
The dynamic behavior of a continuous-time neural network with time delay can be described by the following state equation, where x(t) is the state vector of the neural network; A is a positive definite matrix; W₀ and W₁ are the connection weight and the delayed connection weight matrices, respectively; f(x(t)) represents the activation function vector of the neurons; and b is a constant external bias vector. h(t) denotes the axonal signal transmission delay, which is nonnegative and bounded, with 0 < h₁ < h(t) < h₂ and 0 ≤ ḣ(t) ≤ μ, and is written as h for short throughout the paper. The initial conditions associated with system (2) are given by a continuous function on [−h₂, 0]. The following assumptions are made on system (2) throughout this paper.
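The state equation referred to above can be sketched in the standard form for this model class; the symbols match the descriptions in this section, but the exact letters A, W₀, W₁, b, and φ are notational assumptions of this sketch rather than necessarily the paper's original symbols:

```latex
\dot{x}(t) = -A\,x(t) + W_0\, f\bigl(x(t)\bigr) + W_1\, f\bigl(x(t-h(t))\bigr) + b,
\qquad
x(s) = \phi(s), \quad s \in [-h_2,\, 0],
```

with 0 < h₁ < h(t) < h₂ and 0 ≤ ḣ(t) ≤ μ, as stated above.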

Main Results
In this section, some delay-dependent sufficient conditions for the global stability of the neural network with time-varying delay in (5) are derived. First, we consider the case where the upper bound μ of ḣ is known; the corresponding global stability condition is given as follows.
Mathematical Problems in Engineering

Moreover, combining the bounds above, the derivative of the LKF is negative definite, which shows that system (5) is globally stable. This completes the proof.
Remark 3. Theorem 2 presents a stability criterion for the delayed neural network. When coping with the integral term in ẋᵀ(s)Z₂ẋ(s), instead of using Jensen's inequality directly, we take a substep that makes the method less conservative; this is the reciprocally convex combination approach of [36]. It then follows that (26) holds, whereas if no substep is taken, (27) follows instead. Compared with (27), (26) is less restrictive, since the slack variable adds extra freedom, and consequently its LMI can tolerate a larger delay. For the same reason, if an intermediate bound between these two estimates could be found, conservatism would be further reduced.
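The substep rests on the reciprocally convex combination lemma of [36], which can be sketched as follows (R stands for the weighting matrix of the Jensen bound and S for the slack variable; these letter choices are assumptions of this sketch): for any α ∈ (0, 1),

```latex
\begin{bmatrix} \tfrac{1}{\alpha} R & 0 \\ * & \tfrac{1}{1-\alpha} R \end{bmatrix}
\;\ge\;
\begin{bmatrix} R & S \\ * & R \end{bmatrix}
\qquad \text{whenever} \qquad
\begin{bmatrix} R & S \\ * & R \end{bmatrix} \ge 0 .
```

The left-hand side is exactly the reciprocally convex combination produced by splitting the Jensen bound at the time-varying delay; bounding it by a single constant matrix with a slack off-diagonal block S avoids the 1/α blow-up without discarding cross terms.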
Remark 4. Based on Theorem 2, we can determine the maximum admissible delays h₁ and h₂ for a known upper bound μ of ḣ. Moreover, the relationship between the activation vector f(x(t)) and the state x(t) can be specified through the sector-bound matrices in the assumptions. For the case when ḣ is unknown, we refer to Corollary 5, where some changes are made to the LKF.
Corollary 5. Suppose that the time delay h in system (5) satisfies 0 < h₁ < h < h₂. Under the conditions given by (6) and (7), if there exist a matrix P > 0, the remaining decision matrices of Theorem 2, and a slack matrix S such that the following matrix inequalities hold, where Υ₁, Υ₂, Υ₃, and Υ₄ are defined below and the block vectors corresponding to ẋ, x, h, h₁, h₂, and the activation terms are defined in (11), then system (5) is globally stable. Moreover, the corresponding estimate holds, where Δ is defined analogously to Theorem 2.

Proof. We choose an LKF of the form used in Theorem 2. Since the result can be obtained directly from Theorem 2, the rest of the proof for Corollary 5 is omitted.

Remark 6. Corollary 5 presents the stability criterion when the upper bound of ḣ is unknown. Since μ is unknown and, under the current conditions, cannot be estimated or substituted, the first term of Ṽ₂(t), the integral of xᵀ(s)Q₃x(s) + fᵀ(x(s))Q₄f(x(s)) over [t − h, t], should be changed or eliminated from the LKF. In Corollary 5 the term is eliminated, because the other two terms in Ṽ₂(t) serve the same function, and it is unnecessary to keep any extra x(t − h) or f(x(t − h)) terms. The remaining terms are retained because they do not generate any ḣ-related terms when the derivative of the LKF is estimated.
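The reason the eliminated term would generate ḣ-related quantities can be seen from the Leibniz rule for differentiating an integral with a time-varying limit (a sketch, with g(s) standing for the integrand of the eliminated term):

```latex
\frac{d}{dt}\int_{t-h(t)}^{t} g(s)\,ds
\;=\; g(t) \;-\; \bigl(1-\dot{h}(t)\bigr)\, g\bigl(t-h(t)\bigr),
```

so bounding the derivative requires an upper bound μ on ḣ(t), whereas terms integrated over intervals with the constant endpoints t − h₁ and t − h₂ produce no such factor.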

Numerical Examples
In this section, examples are provided to demonstrate the advantages of the proposed stability criteria.
Example 11. Consider the delayed neural network in (5) with the following parameters, which have also been investigated in [13, 39]. Our objective is to find the allowable maximum time delay h₂ such that the system is stable under different h₁ and ḣ. The results from the available literature are shown in Table 1, along with results from Theorem 2 and Corollaries 5, 7, and 9. It is clear that the proposed criteria are less conservative than those in [23, 24, 37, 38].
In addition, when the other parameters take the same values as in (48) and the activation-related matrix takes the different value given in (49), the results for the system with (49) are shown in Table 2. Since this matrix is specified in (49), the allowable maximum time delay h₂ is expected to differ from that of (48). As shown in Table 2, as h₁ grows, the difference between h₂ and h₁ becomes smaller in order to ensure the stability of (49). Moreover, the allowable maximum h₂ of (49) is noticeably smaller than its counterpart for (48), because the matrix in (49) is positive definite while its counterpart in (48) is zero, which means that f(x(t)) in (49) is more closely related to x(t) than in (48). This example is used to demonstrate the effectiveness of Theorem 2 in Table 3 and of Corollaries 5, 7, and 9 in Table 4. Both sector-bound matrices take positive definite values, and the allowable maximum time delay h₂ under different h₁ and μ is presented in Tables 3 and 4.
As seen in Table 3, the values of h₂ change with μ under the same h₁, but the difference between h₂ and h₁ decreases steadily with increasing h₁ under the same μ. It can be expected that, whatever value μ takes, when h₁ is large enough, h₁ and h₂ converge to a single point.
Moreover, it can be seen in Table 3 that this point coincides with the convergence point under μ = 0, which is also the allowable maximum constant time delay.
Table 4 demonstrates the effectiveness of Corollaries 5, 7, and 9: the allowable maximum time delay is presented under unknown μ for Corollary 5, under h₁ = 0 for Corollary 7, and under both unknown μ and h₁ = 0 for Corollary 9. It can be seen that when μ is unknown, h₂ is noticeably smaller than otherwise. Moreover, the sums of h₁ and h₂ are about the same as those under h₁ = 0.1 but smaller than those under h₁ ≥ 0.2, which indicates that h₁ and h₂ have a near-linear relationship for 0 ≤ h₁ ≤ 0.1.
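Independently of the LMI machinery, the kind of stability reported in these tables can be sanity-checked by direct simulation. The sketch below integrates a delayed network of the form ẋ(t) = −Ax(t) + W₀f(x(t)) + W₁f(x(t−h)) with a fixed-step Euler scheme; the matrices are hypothetical stand-ins (the paper's matrices in (48) and (49) are not reproduced here), chosen so that the network is clearly stable, and the bias is taken as zero so the equilibrium sits at the origin.

```python
import numpy as np

# Hypothetical stand-in parameters: a 2-neuron network with strong
# self-feedback and small connection weights, so stability is easy to see.
A = np.diag([2.0, 2.0])
W0 = np.array([[0.1, 0.2],
               [0.1, 0.1]])
W1 = np.array([[0.2, 0.1],
               [0.1, 0.2]])
h, dt, T = 1.0, 0.01, 30.0                  # constant delay, Euler step, horizon

steps, lag = int(T / dt), int(h / dt)
x = np.zeros((steps + 1, 2))
x[0] = [1.0, -1.0]                          # initial state, held constant on [-h, 0]
f = np.tanh                                 # bounded, sector-type activation

for i in range(steps):
    xd = x[max(i - lag, 0)]                 # delayed state x(t - h)
    x[i + 1] = x[i] + dt * (-A @ x[i] + W0 @ f(x[i]) + W1 @ f(xd))

print("initial / final norms:", np.linalg.norm(x[0]), np.linalg.norm(x[-1]))
```

Because λmin(A) = 2 comfortably dominates the weight norms, the trajectory decays toward the origin regardless of the delay, mirroring the delay-independent regime the tables approach for large h₁.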

Conclusion
This paper has investigated the global stability of neural networks with time-varying delay. By introducing a novel LKF, delay-dependent global stability criteria have been obtained. Through the reciprocally convex combination approach, a substep is taken and a slack variable is introduced to estimate the derivative of the LKF; as a result, the proposed method is shown to be less conservative than those in the available literature. The proposed criteria have been formulated in terms of linear matrix inequalities and thus can be readily solved by standard computing software. Numerical examples are given, and analysis is made under different ranges and derivatives of the time delay. The resulting conservatism reduction improves on existing results.
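As a concrete illustration of the last point, the sketch below verifies the delay-free analogue of such conditions, the Lyapunov equation AᵀP + PA = −Q, with plain numpy via the Kronecker vec-trick; A is a hypothetical stable matrix, not one of the paper's examples. The full delay-dependent LMI criteria derived here would instead be passed to an SDP/LMI solver.

```python
import numpy as np

# Sketch: certify global stability of x' = A x by solving the Lyapunov
# equation A^T P + P A = -Q and checking P > 0. A is a hypothetical
# stable matrix (eigenvalues in the open left half-plane).
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])
n = A.shape[0]
Q = np.eye(n)                                   # any Q > 0 works

# Column-stacked vec identity: vec(A^T P + P A) = (I (x) A^T + A^T (x) I) vec(P)
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten("F")).reshape((n, n), order="F")
P = (P + P.T) / 2                               # clean up numerical asymmetry

eigs = np.linalg.eigvalsh(P)                    # P > 0 <=> all eigenvalues > 0
residual = A.T @ P + P @ A + Q                  # should be ~0
print("min eigenvalue of P:", eigs.min())
```

The same workflow, a feasibility check of matrix inequalities in the decision variables, is what standard LMI software performs on the criteria of Theorem 2 and its corollaries.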

Remark 8.
Corollary 7 presents the stability criterion when the lower bound h₁ is zero. If h₁ is zero, the second term in Ṽ₂(t), the integral of xᵀ(s)Q₁x(s) over [t − h₁, t], should be changed or eliminated from the LKF. In Corollary 7 the term is eliminated, because there is no need to introduce an extra variable Q₁ when Q and the other matrices can serve the same function. Moreover, the terms involving ẋᵀ(s)Z₁ẋ(s) and (h₂ − h₁)∫ ẋᵀ(s)Z₂ẋ(s) ds in Ṽ₃(t) can be merged, because when h₁ is zero they have the same form. However, Z₁ and Z₂ can be retained, because when estimating the upper bound of the LKF it is still useful to introduce a slack variable as in Theorem 2.
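The merging argument can be sketched as follows (a sketch only, ignoring the delay-length scaling factors and any double-integral structure that Ṽ₃(t) may carry; the letters Z₁ and Z₂ are the weighting-matrix names assumed above): when h₁ = 0 the two integration intervals join, so a single weighting matrix suffices,

```latex
\int_{t-h_1}^{t} \dot{x}^{T}(s) Z_1 \dot{x}(s)\,ds
\;+\;
\int_{t-h_2}^{t-h_1} \dot{x}^{T}(s) Z_2 \dot{x}(s)\,ds
\;\xrightarrow{\;h_1 = 0,\ Z_1 \to Z_2\;}\;
\int_{t-h_2}^{t} \dot{x}^{T}(s) Z_2 \dot{x}(s)\,ds .
```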

Example 12. Consider the delayed system in (

Table 1: Allowable maximum time delay h₂ under different h₁ and μ in this paper, along with results from other papers for comparison.