Stochastic Passivity of Uncertain Neural Networks with Time-varying Delays

Recommended by Elena Litsyn

The passivity problem is investigated for a class of stochastic uncertain neural networks with time-varying delay as well as generalized activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the Newton-Leibniz formulation, the free-weighting matrix method, and stochastic analysis techniques, a delay-dependent criterion for checking the passivity of the addressed neural networks is established in terms of linear matrix inequalities (LMIs), which can be checked numerically using the LMI Toolbox in MATLAB. An example with simulation is given to show the effectiveness and reduced conservatism of the proposed criterion. It is noteworthy that the traditional assumptions on the differentiability of the time-varying delay and the boundedness of its derivative are removed.


Introduction
During the last two decades, many artificial neural networks have been extensively investigated and successfully applied to various areas such as signal processing, pattern recognition, associative memory, and optimization problems [1]. In such applications, it is of prime importance to ensure that the designed neural networks are stable [2].
In hardware implementation, time delays are likely to be present due to the finite switching speed of amplifiers and communication time. It has also been shown that the processing of moving images requires the introduction of delay in the signal transmitted through the networks [3]. Time delays are usually variable with time, which affects the stability of the designed neural networks and may lead to complex dynamic behavior such as oscillation, bifurcation, or chaos [4]. Therefore, the study of stability with consideration of time delays is extremely important for manufacturing high-quality neural networks [5]. Many important results on the stability of delayed neural networks have been reported; see [1-10] and the references therein for some recent publications.
(H3) [15] There exist two scalars ρ_1 > 0, ρ_2 > 0 such that the following inequality: is equivalent to the following conditions:

Main Results
For presentation convenience, in the following, we denote

3.6
By Itô's differential rule, the stochastic derivative of V(t) along the trajectory of model (3.5) can be obtained as

3.7
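For reference, the step above applies Itô's formula in its standard form. For a functional V(t, x) along a diffusion dx = b\,dt + \sigma\,d\omega (generic b and σ here, not the paper's specific display):

```latex
dV(t,x(t)) = \Big[ V_t + V_x\,b + \tfrac{1}{2}\,\operatorname{tr}\!\big(\sigma^{T} V_{xx}\,\sigma\big) \Big]\,dt
           + V_x\,\sigma\,d\omega(t),
\qquad
\mathcal{L}V := V_t + V_x\,b + \tfrac{1}{2}\,\operatorname{tr}\!\big(\sigma^{T} V_{xx}\,\sigma\big).
```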
From the definition of y(t), we have

3.8
By assumption (H1) and Lemma 2.2, we get (3.9). It follows from (3.8) and (3.9) that α(s) dω(s).

3.13
Integrating both sides of (3.5) from t − τ to t − τ(t), we have (3.14). Similarly, in the same way, and noting that τ − τ(t) ≤ τ, we get

3.15
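The Newton-Leibniz manipulation used in this step can be sketched as follows, where g(s) abbreviates the drift term of (3.5); this is a generic identity, not the paper's exact display:

```latex
x(t-\tau(t)) - x(t-\tau)
  = \int_{t-\tau}^{t-\tau(t)} dx(s)
  = \int_{t-\tau}^{t-\tau(t)} g(s)\,ds
  + \int_{t-\tau}^{t-\tau(t)} \sigma\big(s, x(s), x(s-\tau(s))\big)\,d\omega(s).
```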
From assumption (H2), we have the following inequalities, which are equivalent to the conditions below, where e_r denotes the unit column vector having a 1 in its rth row and zeros elsewhere. Let L = diag{l_1, l_2, . . ., l_n} and S = diag{s_1, s_2, . . ., s_n}. (3.18) Similarly, one has the analogous bound. It follows from (3.7), (3.10), (3.13), (3.15), (3.20), and (3.21) that the combined estimate holds. It is easy to verify the equivalence of Π < 0 and Ω < 0 by using Lemma 2.4. Thus, one can derive from (3.3) and (3.25) that u^T(s)u(s) ds.

3.28
From Definition 2.1, we know that the stochastic neural networks (2.1) are globally passive in the sense of expectation, and the proof of Theorem 3.1 is then completed.
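For readability, the passivity notion of Definition 2.1 is presumably of the following standard form (a reconstruction consistent with the derivation above; the paper's exact constants may differ): under the zero initial condition, there exists a scalar γ > 0 such that, for all t_p ≥ 0,

```latex
2\,\mathbb{E}\!\left\{ \int_0^{t_p} y^{T}(s)\,u(s)\,ds \right\}
\;\ge\;
-\,\gamma\,\mathbb{E}\!\left\{ \int_0^{t_p} u^{T}(s)\,u(s)\,ds \right\}.
```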
Remark 3.2. Assumption (H2) was first proposed in [10]. The constants F_j^− and F_j^+ (j = 1, 2, . . ., n) in assumption (H2) are allowed to be positive, negative, or zero. Hence, assumption (H2) is weaker than the assumptions in [27-37]. In addition, the conditions in [32-37] that the time-varying delay is differentiable and that its derivative is bounded or smaller than one have been removed in this paper.
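For concreteness, the generalized activation assumption (H2) following [10] is typically stated as: there exist constants F_j^− and F_j^+ such that, for all a, b ∈ R with a ≠ b (a standard form; the paper's exact wording may differ),

```latex
F_j^{-} \;\le\; \frac{f_j(a) - f_j(b)}{a - b} \;\le\; F_j^{+},
\qquad j = 1, 2, \ldots, n.
```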
Remark 3.3. In [36], the authors considered the passivity of uncertain neural networks with both discrete and distributed time-varying delays. In [37], the authors considered the passivity of stochastic neural networks with time-varying delays and random abrupt changes. It is worth pointing out that the method in this paper can also be used to analyze the passivity of the models in [36, 37].
Remark 3.4. It is known that the criteria obtained for checking the passivity of neural networks depend, in varying degrees, on the constructed Lyapunov functionals or Lyapunov-Krasovskii functionals. Constructing proper Lyapunov functionals or Lyapunov-Krasovskii functionals can reduce conservatism. Recently, the delay-fractioning approach has been used to investigate the global synchronization of delayed complex networks with stochastic disturbances, and it has shown potential for reducing conservatism [22]. Using the delay-fractioning approach, we can also investigate the passivity of delayed neural networks. The corresponding results will appear in the near future.

Remark 3.5. When we do not consider the stochastic effect, model (2.1) turns into the following model: where

Corollary 3.7. Under assumption (H2), model (3.30) is passive if there exist a scalar γ > 0, three symmetric positive definite matrices P_i (i = 1, 2, 3), two positive diagonal matrices L and S, and matrices Q_i (i = 1, 2, 3, 4) such that the following LMI holds: where

4.2
Therefore, by Corollary 3.7, we know that model (3.30) is passive. It should be pointed out that the conditions in [33-36] cannot be applied to this example since they require the differentiability of the time-varying delay.

Conclusions
In this paper, the passivity has been investigated for a class of stochastic uncertain neural networks with time-varying delay as well as generalized activation functions. By employing a combination of Lyapunov-Krasovskii functionals, the free-weighting matrix method, the Newton-Leibniz formulation, and stochastic analysis techniques, a delay-dependent criterion for checking the passivity of the addressed neural networks has been established in terms of linear matrix inequalities (LMIs), which can be checked numerically using the LMI Toolbox in MATLAB. The obtained results generalize and improve earlier publications and remove the traditional assumptions on the differentiability of the time-varying delay and the boundedness of its derivative. An example has been provided to demonstrate the effectiveness and reduced conservatism of the proposed criterion. We would like to point out that it is possible to generalize our main results to more complex neural networks, such as neural networks with discrete and distributed delays [10, 26], neural networks of neutral type [7, 20], and neural networks with Markovian jumping parameters [24, 25]. The corresponding results will appear in the near future.

Figure 1: State responses of x_1(t) and x_2(t).
where x(t) = (x_1(t), x_2(t), . . ., x_n(t))^T ∈ R^n is the state vector of the network at time t, and n corresponds to the number of neurons; C = diag{c_1, c_2, . . ., c_n} is a positive diagonal matrix, and A = (a_ij)_{n×n} and B = (b_ij)_{n×n} are known constant matrices; ΔC(t), ΔA(t), and ΔB(t) are time-varying parametric uncertainties; σ(t, x(t), x(t − τ(t))) ∈ R^{n×n} is the diffusion coefficient matrix, and ω(t) = (ω_1(t), ω_2(t), . . ., ω_n(t))^T is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, {F_t}_{t≥0}, P) with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., it is right-continuous and F_0 contains all P-null sets); f(x(t)) = (f_1(x_1(t)), f_2(x_2(t)), . . ., f_n(x_n(t)))^T denotes the neuron activation at time t; u(t) = (u_1(t), u_2(t), . . ., u_n(t))^T ∈ R^n is a time-varying external input vector; and τ(t) > 0 is the time-varying delay, assumed to satisfy 0 ≤ τ(t) ≤ τ, where τ is a constant.
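Based on the symbol definitions above, the uncertain stochastic delayed model (2.1) presumably takes the standard Itô form below (a reconstruction from the stated notation, not copied verbatim from the source):

```latex
dx(t) = \Big[ -\big(C + \Delta C(t)\big)x(t)
            + \big(A + \Delta A(t)\big)f\big(x(t)\big)
            + \big(B + \Delta B(t)\big)f\big(x(t-\tau(t))\big)
            + u(t) \Big]\,dt
      + \sigma\big(t, x(t), x(t-\tau(t))\big)\,d\omega(t).
```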
The initial condition associated with model (2.1) is given by x(s) = φ(s), s ∈ [−τ, 0]. (2.2) Let x(t; φ) denote the state trajectory of model (2.1) from the above initial condition, and x(t; 0) the corresponding trajectory with the zero initial condition. Throughout this paper, we make the following assumptions.

(H1) [33] The time-varying uncertainties ΔC(t), ΔA(t), and ΔB(t) are of the form ΔC(t) = H_1 G_1(t) E_1, ΔA(t) = H_2 G_2(t) E_2, ΔB(t) = H_3 G_3(t) E_3, (2.3) where H_1, H_2, H_3, E_1, E_2, and E_3 are known constant matrices of appropriate dimensions, and G_1(t), G_2(t), and G_3(t) are unknown time-varying matrices with Lebesgue measurable elements bounded by G_i^T(t) G_i(t) ≤ I (i = 1, 2, 3).

By the MATLAB LMI Control Toolbox, we find a solution to the LMI in (3.33) as follows: