On Less Conservative Stability Criteria for Neural Networks with Time-Varying Delays Utilizing Wirtinger-Based Integral Inequality

1 School of Electrical Engineering, Chungbuk National University, 52 Naesudong-ro, Heungduk-gu, Cheongju 361-763, Republic of Korea
2 Nonlinear Dynamics Group, Department of Electrical Engineering, Yeungnam University, 280 Daehak-ro, Kyongsan 712-749, Republic of Korea
3 School of Electronics Engineering, Daegu University, Gyungsan 712-714, Republic of Korea
4 Department of Biomedical Engineering, School of Medicine, Chungbuk National University, 52 Naesudong-ro, Heungduk-gu, Cheongju 361-763, Republic of Korea


Introduction
Since neural networks are generally recognized as simplified models of neural processing in the human brain and offer good performance and a strong capability for information processing, they have been successfully applied in many fields such as image and signal processing, pattern recognition, fixed-point computations, optimization, feedback control, medical diagnosis, financial applications, and other scientific areas [1][2][3]. Due to the finite switching speed of amplifiers and the inherent communication time between neurons, it is well known that time delays exist and may cause oscillation or deteriorate system performance. Therefore, during the last few decades, many researchers [4][5][6][7][8][9] have devoted their time and effort to the stability analysis of neural networks with time delays, because checking whether the equilibrium point of the concerned networks is stable is a prerequisite and an important task: the application of these networks depends heavily on the dynamic behavior of their equilibrium points.
Methodologically, delay-derivative-dependent [10], weighting-delay-based [11], delay-slope-dependent [12], and delay-partitioning [13] analyses have exploited the available information on the time delay. Also, Faydasicok and Arik [14,15] addressed the stability of neural networks with multiple time delays. The asymptotic stability analysis was dealt with in the aforementioned works, while in [16][17][18][19] the exponential stability analysis was studied. In addition, the stochastic perturbation condition was considered in [20]. Furthermore, delay-dependent stability analysis has received growing attention, since the time delays encountered in neural networks are usually not very large [21].
One of the hot issues in delay-dependent stability analysis for dynamic systems such as linear systems, neutral systems, and neural networks is to reduce the conservatism of stability criteria, that is, to enlarge the feasible region of the concerned criteria. To check the enhancement of the feasible region of stability criteria, the maximum delay bounds guaranteeing the asymptotic stability of the concerned systems are compared with those of other methods. The Jensen inequality [22], Park's inequality [23], model transformation [24,25], free-weighting matrix techniques [26,27], the convex combination technique [28], and reciprocally convex optimization [29] are well recognized and have been utilized in many fields as tools for reducing the conservatism of stability and stabilization criteria. Very recently, Seuret and Gouaisbaut [30] proposed the Wirtinger-based integral inequality, and the advantages of this integral inequality were shown via comparison of maximum delay bounds for various systems, such as systems with a constant and known delay, systems with a time-varying delay, and sampled-data systems.
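For reference, the two bounding inequalities discussed here can be stated in their standard form (generic symbols x, R, [a, b], not tied to the paper's own equation numbering). For a differentiable signal x on [a, b] and any matrix R > 0:

```latex
% Jensen inequality [22]:
\int_{a}^{b} \dot{x}^{T}(s)\,R\,\dot{x}(s)\,ds \;\ge\; \frac{1}{b-a}\,\Omega_{0}^{T} R\,\Omega_{0},
\qquad \Omega_{0} = x(b) - x(a).

% Wirtinger-based integral inequality [30]:
\int_{a}^{b} \dot{x}^{T}(s)\,R\,\dot{x}(s)\,ds \;\ge\; \frac{1}{b-a}\,\Omega_{0}^{T} R\,\Omega_{0}
\;+\; \frac{3}{b-a}\,\Omega_{1}^{T} R\,\Omega_{1},
\qquad \Omega_{1} = x(b) + x(a) - \frac{2}{b-a}\int_{a}^{b} x(s)\,ds.
```

Since the extra term is nonnegative, the Wirtinger-based lower bound is never weaker than Jensen's, which is the source of the reduced conservatism exploited in this paper.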
Another remarkable approach to reducing the conservatism of stability criteria is the delay-partitioning idea, first proposed by Gu [22]. The advantage of this method is that it provides larger delay bounds as the delay-partitioning number increases. The idea of [22] has been utilized in the stability analysis of neural networks with time delays by many researchers [4, 6, 7, 9-11, 13, 21]. In [9], with recent techniques such as free-weighting matrices and reciprocally convex optimization, a delay-partitioning approach was presented by considering the time-varying delay on each subinterval. Zhang et al. [11] proposed a new delay-partitioning stability analysis by introducing a tuning parameter that adjusts the delay interval. Zhang et al. [21] proposed new Lyapunov-Krasovskii functionals and a delay-partitioning method to investigate delay-dependent stability analysis for neural networks with two additive time-varying delays, together with some discussion of recent works. However, as the delay-partitioning number increases, the computational burden and time consumption grow while the gain in the maximum delay bounds diminishes.
In addition to the techniques mentioned above, the choice of Lyapunov-Krasovskii functional and of augmented state vectors also plays an important role in enlarging the feasible region of stability criteria. Since the introduction of the triple integral Lyapunov-Krasovskii functional [32,33], some new results in the stability analysis of neural networks with time-varying delays have been presented in [4,19]. Furthermore, in the authors' previous works [5,8], with the addition of the zero equalities of [34], it was shown that larger delay bounds can be obtained by constructing a newly augmented Lyapunov-Krasovskii functional and introducing some new methods in the activation function condition. However, the application of the Wirtinger-based integral inequality to the terms obtained by differentiating the augmented Lyapunov-Krasovskii functional has not been fully investigated yet, so there is room for further reduction of conservatism.
With the motivation mentioned above, this paper proposes two improved delay-dependent stability criteria for neural networks with time-varying delays. First, in Theorem 6, by constructing a suitable augmented Lyapunov-Krasovskii functional, an improved stability condition under which the considered neural networks are asymptotically stable is derived in terms of linear matrix inequalities (LMIs), by applying the Wirtinger-based integral inequality to the augmented quadratic integral term and utilizing the zero equalities of [34]. Second, based on the results of Theorem 6 and motivated by the works [35][36][37], a further improved stability criterion is proposed in Theorem 9 by ensuring the positiveness of the Lyapunov-Krasovskii functional and again utilizing the Wirtinger-based integral inequality. Through three numerical examples used in many previous works to check the conservatism of stability criteria, it is shown that the proposed criteria provide larger delay bounds than recent existing results. By extension, the developed methods can be applied to networked control [38,39], the filtering problem [40,41], uncertain systems of neutral type [42], and so on.
Notation. R^n is the n-dimensional Euclidean space, and R^{n×m} denotes the set of all n × m real matrices. For symmetric matrices X and Y, X > Y (resp., X ≥ Y) means that the matrix X − Y is positive definite (resp., nonnegative). X⊥ denotes a basis for the null space of X. I_n, 0_n, and 0_{n×m} denote the n × n identity matrix and the n × n and n × m zero matrices, respectively. ‖⋅‖ refers to the Euclidean vector norm or the induced matrix norm. diag{⋅⋅⋅} denotes the block diagonal matrix. For a square matrix X, Sym{X} means the sum of X and its transpose X^T; that is, Sym{X} = X + X^T. ⋆ represents the elements below the main diagonal of a symmetric matrix. X_{[h(t)]} ∈ R^{m×n} means that the elements of the matrix X_{[h(t)]} include the scalar value of h(t).
The delay, h(t), is a time-varying continuous function satisfying 0 ≤ h(t) ≤ h_M and ḣ(t) ≤ h_D, where h_M is a known positive scalar and h_D is any constant.
The neuron activation functions satisfy the following assumption.
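The assumption in question is the sector-type condition standard in this line of work; the following is a sketch of its usual statement (symbols g_i, k_i^± follow common usage and are not reproduced from the paper's own display):

```latex
% Assumption 1 (standard sector-bounded activation condition):
k_{i}^{-} \;\le\; \frac{g_{i}(u) - g_{i}(v)}{u - v} \;\le\; k_{i}^{+},
\qquad \forall\, u, v \in \mathbb{R},\; u \neq v,\; i = 1, \ldots, n,
```

where k_i^- and k_i^+ are known constant scalars.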
Remark 2. In Assumption 1, k_i^+ and k_i^− are allowed to be positive, negative, or zero. As mentioned in [12], Assumption 1 describes the class of globally Lipschitz continuous and monotone nondecreasing activation functions when k_i^− = 0 and k_i^+ > 0, and the class of globally Lipschitz continuous and monotone increasing activation functions when k_i^+ > k_i^− > 0.
For simplicity, in the stability analysis of the neural networks (1), the equilibrium point y* = [y_1*, ..., y_n*]^T, whose uniqueness has been reported in [17], is shifted to the origin via the transformation x(⋅) = y(⋅) − y*, which transforms system (1) into the form (4). It should be noted that the resulting activation functions f_i(⋅) (i = 1, ..., n) satisfy the condition of [16]. The objective of this paper is the delay-dependent stability analysis of system (4), which is carried out in Section 3. Before the main results are derived, the lemmas utilized in their derivation are introduced.
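For concreteness, the model and its shifted form follow the standard pattern in this literature; the sketch below uses assumed generic names (A for the self-feedback matrix, W_0 and W_1 for the connection weight matrices, g for the activation vector), not symbols taken from the paper:

```latex
% Delayed neural network of the form (1):
\dot{y}(t) = -A\,y(t) + W_{0}\,g(y(t)) + W_{1}\,g(y(t - h(t))) + b.

% Shifted system of the form (4), with x(t) = y(t) - y^{*} and
% f(x) = g(x + y^{*}) - g(y^{*}):
\dot{x}(t) = -A\,x(t) + W_{0}\,f(x(t)) + W_{1}\,f(x(t - h(t))),

% so that f_i(0) = 0 and each f_i inherits the sector condition
k_{i}^{-} \;\le\; \frac{f_{i}(u)}{u} \;\le\; k_{i}^{+}, \qquad u \neq 0.
```

Under this shift, asymptotic stability of the origin of the second system is equivalent to asymptotic stability of the equilibrium y* of the first.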
Proof. Consider the following candidate Lyapunov-Krasovskii functional, defined in (13)-(15). From (15), V̇_1(t) can be represented in augmented quadratic form. The time-derivative of V_2(t) follows directly. By calculating V̇_3(t), the corresponding cross-terms are obtained. Inspired by the work of [34], two zero equalities with arbitrary symmetric matrices N_i (i = 1, 2) are considered in (19); summing them leads to (20). The result of V̇_4(t) is then given, and the integral term produced by V̇_4(t), with the addition of the two integral terms in (20), can be rewritten in augmented form. Finally, by applying Lemma 3 to this augmented integral term, we have the desired bound.
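The zero equalities referred to above are of the type introduced in [34]; with generic symmetric matrices N_1, N_2 they follow from integrating d/ds [x^T(s) N_i x(s)] = 2 x^T(s) N_i ẋ(s) over the two subintervals of the delay range (a sketch of the standard construction, not the paper's exact display):

```latex
0 = x^{T}(t)\,N_{1}\,x(t) - x^{T}(t - h(t))\,N_{1}\,x(t - h(t))
    - 2\!\int_{t - h(t)}^{t}\! x^{T}(s)\,N_{1}\,\dot{x}(s)\,ds,

0 = x^{T}(t - h(t))\,N_{2}\,x(t - h(t)) - x^{T}(t - h_{M})\,N_{2}\,x(t - h_{M})
    - 2\!\int_{t - h_{M}}^{t - h(t)}\! x^{T}(s)\,N_{2}\,\dot{x}(s)\,ds.
```

Because both right-hand sides are identically zero, they can be added to V̇(t) without changing its value while contributing extra cross-terms and free matrices to the resulting LMI.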
Remark 7. As mentioned in the Introduction, Lemma 3 was first introduced in [30] to reduce the conservatism of delay-dependent stability criteria. In [30], an upper bound of an integral term of the form −∫_{t−h_M}^{t} ẋ^T(s)Rẋ(s) ds was estimated. However, unlike the method presented in [30], the augmented integral term here was estimated in (23)-(26) with consideration of the two integral terms obtained from the zero equalities (20). Thus, by utilizing the newly introduced augmented state vectors, more relaxed conditions can be expected, since more information about the past history of the states, together with some new cross-terms that may help to reduce the conservatism of the stability condition, was considered in Theorem 6. In the authors' future works, this method will be extended to various problems such as state estimation, H∞ performance analysis, filtering, synchronization between two chaotic systems, and the stability and stabilization of other dynamic systems, which are receiving much attention in the control community.
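The tightness gain of the Wirtinger-based bound over Jensen's can be checked numerically. The sketch below treats the scalar case with R = 1 and a hypothetical test signal x(s) = sin s on [0, 2]; it is illustrative only and not a computation from the paper:

```python
import numpy as np

def trapezoid(y, s):
    # Simple trapezoidal rule (avoids depending on a specific NumPy version).
    return float(np.sum((y[1:] + y[:-1]) * np.diff(s)) / 2.0)

def quadratic_bounds(x, dx, a, b, n=100001):
    """Compare the integral of xdot(s)^2 over [a, b] with its Jensen and
    Wirtinger-based lower bounds for a scalar signal x (R = 1)."""
    s = np.linspace(a, b, n)
    integral = trapezoid(dx(s) ** 2, s)               # the exact integral
    omega0 = x(b) - x(a)
    jensen = omega0 ** 2 / (b - a)                    # Jensen lower bound
    omega1 = x(b) + x(a) - 2.0 / (b - a) * trapezoid(x(s), s)
    wirtinger = jensen + 3.0 * omega1 ** 2 / (b - a)  # Wirtinger-based bound
    return integral, jensen, wirtinger

I, J, W = quadratic_bounds(np.sin, np.cos, 0.0, 2.0)
assert J <= W <= I  # both bounds hold; the Wirtinger-based one is tighter
```

For this signal the Jensen bound is roughly 0.41 while the Wirtinger-based bound is roughly 0.80, much closer to the true value of about 0.81, which mirrors the reduced conservatism observed in the LMI setting.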
Remark 8. Another novelty in Theorem 6 is the V_3(t) introduced in (13). In many works dealing with the stability of neural networks with time-varying delays, the proposed Lyapunov-Krasovskii functionals carrying information on the time-varying delay have taken a simpler form; the V_3(t) proposed here differs from those of previous works. Thus, the time-derivative of the proposed V_3(t) contains some new cross-terms, presented in (18), which have not been used in existing works.
Remark 10. Unlike Theorem 6, Theorem 9 does not require the positiveness condition R > 0; instead, condition (41) was added. It should be noted that the positive definiteness of a chosen Lyapunov-Krasovskii functional does not necessarily require all of the symmetric matrices involved in it to be positive definite [36]. As presented in (39), the lower bound of the Lyapunov-Krasovskii functional is secured by augmenting integral terms of the states.

When h_D is 0.8, 0.9, and unknown, the maximum delay bounds guaranteeing asymptotic stability are listed in Table 1, which compares them with the previous results [4,5,9,10,13,19]. Despite not employing a delay-partitioning approach, the delay bounds obtained by applying Theorem 6 are larger than those of [4,9,10,13,19], which divided the delay interval into two subintervals. Furthermore, compared with the results of Theorem 6, it can be confirmed that Theorem 9 enlarges the feasible region of the stability criteria, which shows the effectiveness of the lower bound of the Lyapunov-Krasovskii functional mentioned in Remark 10.
In Table 2, the maximum delay bounds guaranteeing the asymptotic stability of system (43) are listed for various conditions on ḣ(t). As presented in Table 2, one can see that Theorem 6 effectively reduces the conservatism of the stability criteria compared with the previous results [5-7, 9, 11, 13]. Also, it can be confirmed that Theorem 9 slightly enlarges the feasible region of Theorem 6.
For this example, one can see that the maximum delay bounds are much larger than those of [4,7,8] in Table 3, which lists the maximum delay bounds when h_D is 0.4, 0.45, 0.5, and 0.55. It can also be confirmed that the feasible region of Theorem 9 is effectively enlarged compared with that of Theorem 6, which again supports the effectiveness of the statements in Remark 10. To explain the contribution of the proposed V_3(t), Corollary 1 is considered. From the results of Corollary 1 listed in Table 3, one can confirm that the delay bounds of Corollary 1 are smaller than those of Theorem 6. This means that the proposed V_3(t) in Theorem 6 is effective in reducing the conservatism of the stability condition.

Conclusion
In this paper, two improved delay-dependent stability criteria for neural networks with time-varying delays have been proposed by the use of the Lyapunov stability theorem within the LMI framework. In Theorem 6, by constructing a suitable augmented Lyapunov-Krasovskii functional and utilizing the techniques mentioned in Remarks 7 and 8, a delay-dependent sufficient condition for the asymptotic stability of the concerned network was derived. Based on the results of Theorem 6, by taking a lower bound of the Lyapunov-Krasovskii functional and utilizing its positiveness with the newly constructed augmented vectors, a further improved stability condition was derived in Theorem 9. Via three numerical examples used in many previous works to check the conservatism of stability criteria, the improvements in the feasible region of the proposed stability criteria have been verified. Moreover, in [46], triple integral forms of the Lyapunov-Krasovskii functional were shown to be effective in reducing the conservatism of sufficient stability conditions. Thus, by grafting such an approach onto the idea proposed in this paper, further improved results will be investigated in the near future.
Example 1. Consider the neural network (4) with the following parameters:

Table 2: Delay bounds h_M with different h_D (Example 2). "*" means that the corresponding result is not presented.
* The accompanying number is the delay-partitioning number.