Passivity Analysis of Markovian Jumping Neural Networks with Leakage Time-Varying Delays

This paper is concerned with the passivity analysis of Markovian jumping neural networks with leakage time-varying delays. Based on a Lyapunov functional that accounts for the mixed time delays, leakage delay-dependent passivity conditions are derived in terms of linear matrix inequalities (LMIs). The mixed delays include leakage time-varying delays, discrete time-varying delays, and distributed time-varying delays. By employing a novel Lyapunov-Krasovskii functional having triple-integral terms, new leakage delay-dependent passivity criteria are established to guarantee the passivity performance. This performance depends not only on the upper bound of the time-varying leakage delay σ(t) but also on the upper bound σ_μ of its derivative. While estimating the upper bound of the derivative of the Lyapunov-Krasovskii functional, the terms involving the discrete and distributed delays are treated appropriately so as to develop less conservative results. Two numerical examples are given to show the validity and potential of the developed criteria.


Introduction
In the past few decades, neural networks (NNs) have been a hot research topic because of their wide applications in static image processing, pattern recognition, fixed-point computation, associative memory, and combinatorial optimization [1][2][3][4][5]. Because the interactions between neurons are generally asynchronous in biological and artificial neural networks, time delays are usually encountered. Since the existence of time delays is frequently one of the main sources of instability for neural networks, the stability analysis of delayed neural networks has been extensively studied, and many papers have been published on various types of neural networks with time delays based on the LMI approach [6][7][8][9][10][11][12][13][14].
On the other hand, the main idea of passivity theory is that a passive system remains internally stable. In addition, passivity theory is frequently used in control systems to prove the stability of systems. The problem of passivity performance analysis has also been extensively applied in many areas such as signal processing, fuzzy control, sliding mode control [15], and networked control [16]. The passivity idea is a promising approach to the stability analysis of NNs, because it can lead to more general stability results. It is therefore important to investigate the passivity analysis of neural networks with time delays. More recently, dissipativity and passivity performances of NNs have received increasing attention, and many research results have been reported in the literature, for example, [17][18][19][20][21].
In practice, recurrent neural networks (RNNs) often exhibit the behavior of finite state representations (also called clusters, patterns, or modes), which is referred to as the information latching problem [22]. In this case, the network states may switch (or jump) between different RNN modes according to a Markov chain, and this gives rise to the so-called Markovian jumping recurrent neural networks. It has been shown that the information latching phenomenon exists universally in neural networks [23,24] and can be dealt with by extracting a finite state representation from a trained network; that is, a neural network sometimes has finite modes that switch from one to another at different times. Results on various kinds of Markovian jump neural networks with time delay can also be found in [25][26][27] and the references therein. It should be pointed out that all the above-mentioned references assume that the considered transition probabilities in the Markov process or Markov chain are time invariant, that is, the considered Markov process or Markov chain is assumed to be homogeneous. It is noted that such an assumption is required in most existing results on Markovian jump systems [28,29]. A detailed discussion of piecewise homogeneous and nonhomogeneous Markovian jumping parameters is given in [30] and the references therein.
On the other hand, a typical time delay, called leakage (or "forgetting") delay, may exist in the negative feedback terms of a neural network, and it has a great impact on the dynamic behaviors of delayed neural networks; more details are given in [31][32][33][34][35][36]. In [34] the authors introduced leakage time-varying delay for dynamical systems with nonlinear perturbations and derived leakage delay-dependent stability conditions by constructing a new type of Lyapunov-Krasovskii functional and using the LMI approach. Recently, the passivity analysis for neural networks of neutral type with Markovian jumping parameters and time delay in the leakage term has been addressed in [37]. In contrast to the results above, few results have been reported on the passivity analysis of Markovian jumping neural networks with leakage time-varying delays. Thus, the main purpose of this paper is to shorten such a gap by making the first attempt to deal with the passivity analysis problem for a class of continuous-time neural networks with time-varying transition probabilities and mixed time delays.
In this paper, the problem of passivity analysis of Markovian jump neural networks with leakage time-varying delay and discrete and distributed time-varying delays is considered. The Markov process in the underlying neural networks is assumed to be finite piecewise homogeneous, which is a special nonhomogeneous (time-varying) Markov chain. Motivated by [30], a novel Lyapunov-Krasovskii functional is constructed in which the positive definite matrices are dependent on the system mode, and a triple-integral term is introduced for deriving the delay-dependent stability conditions. By employing this Lyapunov-Krasovskii functional with triple-integral terms, new leakage delay-dependent passivity criteria are established to guarantee the passivity performance of the given systems. This performance depends not only on the upper bound of the time-varying leakage delay σ(t) but also on the upper bound σ_μ of its derivative. When estimating an upper bound of the derivative of the Lyapunov-Krasovskii functional, we handle the terms related to the discrete and distributed delays appropriately so as to develop less conservative results. Two numerical examples are given to show the validity and potential of the proposed passivity criteria.
Lemma 2 (Jensen inequality). For any matrix $M \ge 0$, any scalars $a$ and $b$ with $a \le b$, and a vector function $\omega(s) : [a, b] \to \mathbb{R}^n$ such that the integrals concerned are well defined, the following inequality holds:

$$(b-a)\int_a^b \omega^T(s) M \omega(s)\,ds \ge \Big(\int_a^b \omega(s)\,ds\Big)^T M \Big(\int_a^b \omega(s)\,ds\Big).$$

Lemma 3. For any constant matrix $M = M^T > 0$ and scalars $\tau > 0$, $\tau_1 > 0$, $\tau_2 > 0$ with $\tau_1 < \tau_2$ such that the integrals concerned are well defined, the following inequalities hold:

$$\tau \int_{t-\tau}^{t} x^T(s) M x(s)\,ds \ge \Big(\int_{t-\tau}^{t} x(s)\,ds\Big)^T M \Big(\int_{t-\tau}^{t} x(s)\,ds\Big),$$

$$\frac{\tau_2^2-\tau_1^2}{2} \int_{-\tau_2}^{-\tau_1} \int_{t+\theta}^{t} x^T(s) M x(s)\,ds\,d\theta \ge \Big(\int_{-\tau_2}^{-\tau_1} \int_{t+\theta}^{t} x(s)\,ds\,d\theta\Big)^T M \Big(\int_{-\tau_2}^{-\tau_1} \int_{t+\theta}^{t} x(s)\,ds\,d\theta\Big).$$

The main purpose of this paper is to establish a delay-dependent sufficient condition to ensure that neural networks (1) are passive.
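Jensen's integral inequality in Lemma 2 can be sanity-checked numerically by discretizing the integrals. The vector function ω(s) and the positive semidefinite matrix M below are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

# Numerically check Jensen's integral inequality:
# (b - a) * ∫ w(s)^T M w(s) ds  >=  (∫ w(s) ds)^T M (∫ w(s) ds)
a, b, n = 0.0, 2.0, 400
s = np.linspace(a, b, n)
ds = s[1] - s[0]

# arbitrary vector function w(s) in R^2 and a positive semidefinite M = A^T A
w = np.stack([np.sin(3 * s), np.cos(s) + s], axis=1)   # shape (n, 2)
A = np.array([[2.0, 0.5], [0.1, 1.0]])
M = A.T @ A

# Riemann-sum approximations of both sides
lhs = (b - a) * np.sum(np.einsum('ij,jk,ik->i', w, M, w)) * ds
integral_w = np.sum(w, axis=0) * ds
rhs = integral_w @ M @ integral_w

print(lhs >= rhs)   # True: the inequality holds with ample slack here
```

The discrete analogue of the inequality is exactly the Cauchy-Schwarz bound for the semi-inner product induced by M, which is why the check passes despite the discretization.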
Definition 4. The system (1) is said to be passive if there exists a scalar $\gamma \ge 0$ such that, for all $t_p \ge 0$ and for all solutions of (1), the following inequality holds under zero initial conditions:

$$2\int_0^{t_p} y^T(s) u(s)\,ds \ge -\gamma \int_0^{t_p} u^T(s) u(s)\,ds.$$
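The passivity inequality of Definition 4 can be illustrated on a simple toy system (not the network (1) of the paper): the scalar system x' = -x + u, y = x with zero initial condition, for which the inequality holds even with γ = 0:

```python
import numpy as np

# Illustrate Definition 4 on a toy passive system (not system (1)):
#   x' = -x + u,  y = x,  x(0) = 0,
# checking  2 * ∫ y(s) u(s) ds  >=  -gamma * ∫ u(s)^2 ds  with gamma = 0.
dt, T = 1e-3, 10.0
t = np.arange(0.0, T, dt)
u = np.sin(2 * t)            # arbitrary input signal
x = 0.0
supply = 0.0                 # accumulates 2 * ∫ y(s) u(s) ds
for uk in u:
    y = x
    supply += 2 * y * uk * dt
    x += dt * (-x + uk)      # forward Euler step
print(supply >= 0.0)         # True: supplied energy is nonnegative
```

For this system the storage function V(x) = x²/2 satisfies dV/dt = -x² + yu ≤ yu, so the accumulated supply is bounded below by V(T) ≥ 0, which is what the check confirms.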

Main Results
In this section, we derive a new delay-dependent criterion for the passivity of the delayed Markovian jumping neural networks (1) using the Lyapunov-Krasovskii functional method combined with the LMI approach. For presentation convenience, in the following we denote the augmented vectors and block matrices used in the criterion; the remaining coefficients are all zero. Now, we establish the following passivity condition for the system (1).
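Conditions of this type are LMIs and are checked in practice with a semidefinite programming solver. As a minimal illustration of what an LMI-based passivity certificate looks like, the sketch below verifies the classical KYP passivity LMI for a toy linear system; the system matrices and the candidate certificate P are illustrative choices, not the Ξ blocks of Theorem 5:

```python
import numpy as np

# The passivity criteria are LMIs; their feasibility is normally checked with
# an SDP solver. For a toy linear system (illustrative only, not system (1))
#   x' = A x + B u,  y = C x + D u,
# passivity holds iff some P > 0 makes the KYP block matrix negative
# semidefinite:
#   [[A^T P + P A,  P B - C^T],
#    [B^T P - C,   -D - D^T ]] <= 0
A = np.diag([-2.0, -3.0])
B, C = np.eye(2), np.eye(2)
D = 0.5 * np.eye(2)

P = np.eye(2)   # candidate certificate; an SDP solver would search for it
lmi = np.block([[A.T @ P + P @ A, P @ B - C.T],
                [B.T @ P - C,    -(D + D.T)]])
feasible = np.all(np.linalg.eigvalsh(lmi) <= 1e-9)
print(feasible)  # True: P = I certifies passivity of this toy system
```

For the mode-dependent LMIs of Theorem 5 one would declare one matrix variable per mode and impose all mode-coupled blocks simultaneously in the solver.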
Now, we can deduce that Thus, if (33) holds, then since E[V(x_t, t, r_t)] ≥ 0 and V(x_0, 0, r_0) = 0 under the zero initial condition, it follows from (31) that J(t_p) ≤ 0 for any t_p ≥ 0, which implies that (13) is satisfied and therefore the delayed neural networks (1) are locally passive.

Next we shall prove that E[‖x(t)‖²] → 0 as t → ∞. Taking the expectation on both sides of (28) and integrating from 0 to t, we have By using Dynkin's formula, we have Using Jensen's inequality and (36), we have Similarly, it follows from the definition of V_1(x_t, t, r_t) that Hence, it can be obtained that where From (39) and (40), it can be deduced that the trivial solution of system (1) is locally passive. Then the solutions x(t) = x(t, 0, φ) of system (1) are bounded on [0, ∞). Considering (1), we know that ẋ(t) is bounded on [0, ∞), which leads to the uniform continuity of the solution x(t) on [0, ∞). From (36), we note that the following inequality holds: By Barbalat's lemma [38], it holds that E[‖x(t)‖²] → 0 as t → ∞, and this completes the proof of the global passivity of the system (1).

The time-varying delay τ(t) satisfies 0 ≤ τ_1 ≤ τ(t) ≤ τ_2 and τ̇(t) ≤ μ, where τ_1, τ_2, μ are some constants and the leakage delay σ ≥ 0 is a constant. Now, the passivity condition for the neural networks (43) is given in the following corollary, and the result follows from Theorem 5.
Proof. We can define the Lyapunov functional for the above neural networks as in Theorem 5 by replacing σ(t) with σ. The proof is the same as that of Theorem 5 and hence is omitted.
Remark 9. In this paper, Theorem 5 provides passivity criteria for Markovian jumping neural networks with leakage time-varying delays. The criterion is derived under the assumption that the leakage time-varying delay is differentiable and the upper bound σ_μ of its derivative is known. A new type of Lyapunov-Krasovskii functional is constructed in which the positive definite matrices are dependent on the system mode, and a set of triple-integral terms is introduced to derive the leakage delay-dependent passivity conditions via the LMI approach.

Numerical Examples
In this section, we provide two simple numerical examples to illustrate the usefulness of our main results. Our aim is to examine the passivity of the given delayed neural networks.
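Alongside the LMI tests, the behavior of such networks can be visualized by direct simulation. The sketch below Euler-integrates a two-mode Markovian jump neural network with leakage and discrete delays; all parameters (mode matrices, delays, and the per-step transition probability matrix approximating the Markov process) are arbitrary placeholders, not those of the paper's examples (1) or (49):

```python
import numpy as np

# Euler simulation of a two-mode Markovian jump delayed NN (placeholders):
#   x'(t) = -A_r x(t - sigma) + W_r tanh(x(t)) + V_r tanh(x(t - tau)) + u(t)
rng = np.random.default_rng(0)
dt, T = 1e-3, 5.0
sigma, tau = 0.1, 0.2                    # leakage and discrete delays
A = [np.diag([3.0, 4.0]), np.diag([4.0, 3.0])]
W = [np.array([[0.2, -0.1], [0.1, 0.3]]), np.array([[0.3, 0.1], [-0.2, 0.2]])]
V = [np.array([[0.1, 0.0], [0.0, 0.1]]), np.array([[0.0, 0.1], [0.1, 0.0]])]
Pi = np.array([[0.9, 0.1], [0.2, 0.8]])  # per-step transition probabilities
                                         # (discrete-time stand-in for the
                                         # continuous Markov process)

steps = int(T / dt)
d_sig, d_tau = int(sigma / dt), int(tau / dt)
hist = np.zeros((steps + d_tau + d_sig, 2))  # zero initial history
mode = 0
for k in range(steps):
    i = k + d_tau + d_sig                    # current index in the buffer
    x = hist[i - 1]
    u = np.array([np.sin(2 * np.pi * k * dt), 0.0])
    dx = (-A[mode] @ hist[i - 1 - d_sig]     # leakage-delayed feedback
          + W[mode] @ np.tanh(x)
          + V[mode] @ np.tanh(hist[i - 1 - d_tau])
          + u)
    hist[i] = x + dt * dx
    mode = rng.choice(2, p=Pi[mode])         # Markov chain step

print(np.isfinite(hist).all())               # trajectory stays bounded
```

For a passive (hence stable under the stated conditions) parameter set, the simulated state remains bounded for bounded inputs, which is the qualitative behavior the examples below confirm via the LMI criteria.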
This shows that the given Markovian jumping neural networks (1) or (3) are globally passive with respect to the passive control.
This shows that the given Markovian jumping neural networks (49) are globally passive with respect to the passive control.

Conclusion
In this paper, the passivity analysis of Markovian jump neural networks with leakage time-varying delay and discrete and distributed time-varying delays has been considered. The Markov process in the underlying neural networks is finite piecewise homogeneous. Leakage delay-dependent passivity conditions have been derived in terms of LMIs by constructing a novel Lyapunov-Krasovskii functional having triple-integral terms. This performance depends not only on the upper bound of the time-varying leakage delay σ(t) but also on the upper bound σ_μ of its derivative. Two numerical examples have been provided to demonstrate the effectiveness of the proposed methods both with and without Markovian jumping parameters.