In this paper, we consider the input-to-state stability for a class of stochastic neutral-type memristive neural networks. Neutral terms and S-type distributed delays are taken into account in our system. Using stochastic analysis theory and the Itô formula, we obtain conditions for mean-square exponential input-to-state stability of the system. A numerical example is given to illustrate the correctness of our conclusions.
1. Introduction
Complex networks are considered one of the leading research subjects of science and technology in the twenty-first century; they include neural networks, communication networks, power networks, and social networks. In particular, research on neural networks is very broad, covering control theory, stability theory, and bifurcation theory (see [1–4]). As a special case of complex networks, memristive neural networks can better simulate the human brain, so they have also become a focus for many scholars. The memristor is a kind of nonlinear resistor with a memory function that simulates the mechanism of human neurons and synapses. Recently, memristive neural network systems have been successfully applied in associative memory, chaos synchronization, image processing, and so on. Although many achievements have been made in the field of application, most current research efforts on memristive neural networks focus on deterministic models (see [5–7]). However, in reality, noise disturbances always exist and may cause instability and other poor performance. It is therefore necessary to study memristive neural networks with random disturbances, which is of both theoretical and practical significance.
In addition, in order to process dynamic images, we need to introduce delays into the signal transmission between neurons, which leads to memristive neural networks with delays. In practical applications, a dynamic system with time-varying delay is more common, because a constant delay is only an idealized approximation of a time-varying one. Many scholars have made great achievements in this respect (see [8–11]). Here, we point out that neural networks are composed of a large number of neurons, many of which are clustered into spherical or layered structures, interact with each other, and are connected through axons into a variety of complex neural pathways. Thus, distributed delays arise in the transmission of signals. Usually, discrete delays and distributed delays cannot contain each other in the same system; however, as shown in [12], discrete delays and distributed delays can be written in a unified form under the Lebesgue-Stieltjes integral, that is, as S-type distributed delays (see [13, 14]). Furthermore, the differential expression of a system may depend not only on the derivative of the current state but also on the derivatives of past states; such a system is called a neutral delay neural network. Therefore, it is very significant to study stochastic neutral-type memristive neural networks with time-varying delays and S-type distributed delays.
The control input has a great influence on the dynamic behavior of a neural network. Input-to-state stability (ISS) was first proposed by Sontag to check robust stability, and it is more general than traditional stability notions, which include asymptotic stability, exponential stability, and almost sure stability. In [15, 16], global asymptotic stability for a kind of discrete-time recurrent neural network was studied. In [17–22], exponential stability and almost sure stability of neural networks were investigated. As far as we know, traditional stability means that the state of the neural network approaches the equilibrium point as time tends to infinity. But this does not always happen in reality. ISS analysis opens up new applications of dynamic neural networks to nonlinear systems. In [23], input-to-state stability for a class of stochastic memristive neural networks with time-varying delay was studied.
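To make the ISS notion concrete, the following minimal sketch (not from the paper; the scalar system and all constants are illustrative) checks the ISS estimate for dx/dt = -a·x + u(t), whose solution satisfies |x(t)| ≤ e^{-at}|x(0)| + ‖u‖∞/a for any bounded input u:

```python
import numpy as np

def simulate(a, x0, u, t_end, dt=1e-4):
    """Forward-Euler integration of dx/dt = -a*x + u(t)."""
    steps = int(t_end / dt)
    x = x0
    for k in range(steps):
        x += dt * (-a * x + u(k * dt))
    return x

a, x0 = 2.0, 3.0
u = lambda t: 0.5 * np.cos(5.0 * t)   # bounded input, ||u||_inf = 0.5
u_inf = 0.5

# The state is bounded by a decaying term in the initial condition plus a
# gain times the sup-norm of the input -- the hallmark of ISS.
for t_end in (0.5, 1.0, 2.0, 4.0):
    x = simulate(a, x0, u, t_end)
    bound = np.exp(-a * t_end) * abs(x0) + u_inf / a
    assert abs(x) <= bound + 1e-6, (t_end, x, bound)
```

The mean-square exponential ISS studied in this paper replaces |x(t)|² by E‖x(t)‖² and the gain term by γ₀‖u‖∞², as in Definition 1 below.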
Motivated by the above discussion, and noting that, although the stability problem of stochastic neural networks has been studied, there are few results on the stability of stochastic neutral-type memristive neural networks, in this paper we consider stochastic neutral-type memristive neural networks to fill this gap. Using stochastic analysis theory and the Itô formula, we obtain sufficient conditions for mean-square exponential input-to-state stability, together with some corollaries, for system (1).
The rest of the paper is organized as follows. In Section 2, we present the model and give some hypotheses. The main conclusions are proved in Section 3. In Section 4, a numerical example is given to illustrate the correctness of our conclusions. Finally, further discussion is given in Section 5.
Throughout this paper, solutions of all the systems considered are intended in the Filippov sense. Let Rn denote n-dimensional Euclidean space. The superscript "T" denotes the transpose of a matrix or vector. l∞ denotes the class of essentially bounded functions u from [0,+∞) to Rn with ‖u‖∞ = ess sup t≥0 ‖u(t)‖ < ∞. Let τ>0 and let C([-τ,0],Rn) denote the family of continuous functions φ from [-τ,0] to Rn with the norm ‖φ‖ = sup -τ≤s≤0 ‖φ(s)‖, where ‖·‖ is the Euclidean norm in Rn. Let LF02([-τ,0];Rn) denote the family of all F0-measurable, C([-τ,0];Rn)-valued stochastic variables ψ={ψ(s): -τ≤s≤0} such that ∫-τ0 E[‖ψ(s)‖2] ds < ∞, where E[·] stands for the expectation operator with respect to the given probability measure P.
2. Preliminaries
Considering the following stochastic neutral-type memristive neural network, for i=1,2,…,n, (1)dxit-∑j=1ndijxjt-τt=-cixit+∑j=1naijxitfjxjt+∑j=1nbijxitgjxjt-τt+∑j=1neijxit∫-∞0hjxjt+θdηjθ+uitdt+∑j=1nσijt,xjt,xjt-τtdωjt,with initial conditions xi(t)=ϕi(t), t∈[-τ,0], where xi(t) is the voltage of the capacitor Ci, fj(xj(t)), gj(xj(t)), and hj(xj(t)) represent the neuron activation functions of the jth neuron at time t, ui(t)∈l∞ is the external constant input of the ith neuron at time t, ωj is a standard Brownian motion defined on the complete probability space (Ω,F,P) with a natural filtration {Ft}t≥0, and σij is a Borel measurable function. C=diag(c1,c2,…,cn) and D=diag(d1,d2,…,dn) are self-feedback connection matrices, and aij(xi(t)), bij(xi(t)), and eij(xi(t)) represent memristor-based weights,(2)aijxit=W1ijCi×signij,bijxit=W2ijCi×signij,eijxit=W3ijCi×signij,signij=1i≠j,-1i=j,where W(k)ij denote the memductances of memristors R(k)ij, k=1,2,3. And ∫-∞0hj(xj(t+θ))dηj(θ) is the Lebesgue-Stieltjes integral and ηj(θ) is nonnegative function of bounded variation on (-∞,0], which satisfies ∫-∞0dηj(θ)=lj>0. According to the pinched hysteretic loops of ideal memristors, let(3)aijxit=a′xit≤1,a′′xit>1,bijxit=b′xit≤1,b′′xit>1,eijxit=e′xit≤1,e′′xit>1for i=1,2,…,n, and a′, a′′, b′, b′′, e′, and e′′ are constants. Let a_ij=mina′,a′′, a¯ij=maxa′,a′′, b_ij=minb′,b′′, b¯ij=maxb′,b′′, e_ij=mine′,e′′, e¯ij=maxe′,e′′, for i,j=1,2,…,n. By applying theory of differential inclusions and set-valued maps in system (1), it follows that (4)dxit-∑j=1ndijxjt-τt∈-cixit+∑j=1ncoa_ij,a¯ijfjxjt+∑j=1ncob_ij,b¯ijgjxjt-τt+∑j=1ncoe_ij,e¯ij∫-∞0hjxjt+θdηjθ+uitdt+∑j=1nσijt,xjt,xjt-τtdωjt,with initial conditions xi(t)=ϕi(t), t∈[-τ,0], for i,j=1,2,…,n. 
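The state-dependent switching of the connection weights in (2)-(3) can be sketched in a few lines: each memristive weight takes one constant value when the corresponding neuron state satisfies |xi(t)| ≤ 1 and another when |xi(t)| > 1, mimicking the pinched hysteresis loop of an ideal memristor. The helper below and its numeric values are illustrative placeholders, not the paper's notation:

```python
def memristive_weight(x_i, w_inside, w_outside, threshold=1.0):
    """Return w_inside when |x_i| <= threshold, else w_outside,
    as in the piecewise-constant weights a_ij, b_ij, e_ij of (3)."""
    return w_inside if abs(x_i) <= threshold else w_outside

# Example: a_ij(x_i) with a' = -0.1 for |x_i| <= 1 and a'' = 0.1 for |x_i| > 1
assert memristive_weight(0.4, -0.1, 0.1) == -0.1
assert memristive_weight(1.0, -0.1, 0.1) == -0.1   # the boundary belongs to a'
assert memristive_weight(-2.3, -0.1, 0.1) == 0.1
```

Because the right-hand side is discontinuous in the state, solutions are understood in the Filippov sense via the differential inclusion (4).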
Using Filippovs Theorem in [24], there exist a^ij∈[a_ij,a¯ij], b^ij∈[b_ij,b¯ij], and e^ij∈[e_ij,e¯ij] such that(5)dxit-∑j=1ndijxjt-τt=-cixit+∑j=1na^ijfjxjt+∑j=1nb^ijgjxjt-τt+∑j=1ne^ij∫-∞0hjxjt+θdηjθ+uitdt+∑j=1nσijt,xjt,xjt-τtdωjt,with initial conditions xi(t)=ϕi(t), t∈[-τ,0], for i,j=1,2,…,n.
To obtain the main results, we need the following hypotheses.
(H1) fj, gj, and hj satisfy the following conditions:(6)|fj(s1)-fj(s2)|≤αj|s1-s2|, |gj(s1)-gj(s2)|≤βj|s1-s2|, |hj(s1)-hj(s2)|≤γj|s1-s2|,
where αj, βj, and γj are positive constants, ∀s1,s2∈R, j=1,2,…,n.
(H2) For all i,j=1,2,…,n, there exist μij, νij≥0 such that (7)|σij(t,u1,v1)-σij(t,u2,v2)|2 ≤ μij|u1-u2|2 + νij|v1-v2|2.
(H3) τ(t) satisfies the following conditions: (8)0≤τ(t)≤τ, 0≤τ˙(t)≤χ<1,
where τ and χ are positive constants.
(H4) For all θ∈(-∞,0], there exists a positive constant β0 such that (9)∫-∞0 e-βθ dηj(θ) = K < +∞
for every β∈[0,β0).
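Hypothesis (H4) can be checked numerically for an absolutely continuous choice of ηj. The sketch below (an assumption for illustration, matching the choice ηj(θ)=e^θ used later in Section 4) notes that dηj(θ)=e^θ dθ, so the integral in (9) equals 1/(1-β) in closed form for β∈[0,1); the truncation at θ=-40 is a discretization choice, not part of the paper:

```python
import numpy as np

beta = 0.1
theta = np.linspace(-40.0, 0.0, 400001)
# integrand e^{-beta*theta} * d(eta)/d(theta) with eta(theta) = e^theta
integrand = np.exp(-beta * theta) * np.exp(theta)
# trapezoidal rule over the truncated domain [-40, 0]
K = float(np.sum((integrand[:-1] + integrand[1:]) * np.diff(theta)) / 2.0)

assert np.isclose(K, 1.0 / (1.0 - beta), rtol=1e-4)   # closed form 1/(1-beta)
```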
(H5) D=(dij)n×n is a matrix whose norm satisfies η0=‖D‖∈(0,1), where the matrix norm is defined as ‖D‖=√(λmax(DDT)) and λmax(·) denotes the maximum eigenvalue (spectral radius) of a matrix.
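Hypothesis (H5) is easy to verify in code; the check below uses the neutral-term matrix D of the numerical example in Section 4:

```python
import numpy as np

D = np.array([[0.2, 0.0],
              [0.0, 0.3]])
# spectral norm ||D|| = sqrt(lambda_max(D D^T))
eta0 = float(np.sqrt(np.max(np.linalg.eigvalsh(D @ D.T))))

assert np.isclose(eta0, 0.3)   # for diagonal D this is max |d_ii|
assert 0.0 < eta0 < 1.0        # so (H5) holds
```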
Definition 1 (see [25]).
The trivial solution of system (1) is said to be mean-square exponentially input-to-state stable if for every ϕ∈LF02([-τ,0];Rn) and u(t)∈l∞ there exist scalars α0>0, β0>0, and γ0>0 such that the following inequality holds: (10)E‖x(t;ϕ)‖2 ≤ α0e-β0t E‖ϕ‖2 + γ0‖u‖∞2.
3. Main Results
In this section, the mean-square exponential input-to-state stability of the trivial solution for system (1) is addressed.
Theorem 2.
Under (H1)–(H5), the trivial solution of system (1) is mean-square exponentially input-to-state stable, if there exist positive constants pi, δi, (i=1,…,n) such that the following conditions hold:(11)A1i=-2pici+pi+δi+∑j=1npicidij+αja^ij+βjb^ij+lie^ij+pjαia^ji+μij+γie^ij∫-∞0e-βθdηjθ+∑j=1n∑k=1npjdjkαia^jk+γie^ji∫-∞0e-βθdηjθ<0,(12)A2i=-1-χδi+∑j=1npjcjdji+βib^ij+dji+νji+∑j=1n∑k=1npidjiαkb^jk+βka^jk+lke^jk+βib^jidjk<0.
Proof.
In order to obtain the mean-square exponential input-to-state stability, we consider the following Lyapunov-Krasovskii functional:(13)Vt,xt=V1t,xt+V2t,xt,where(14)V1t,xt=eβt∑i=1npixit-∑j=1ndijxjt-τt2,V2t,xt=∫t-τtteβs∑i=1nδixis2ds+∑i=1n∑j=1npie^ijγj1+∑k=1ndik∫-∞0∫t+θteβs-θxjs2dsdηjθ.By Itô formula, it follows that(15)dVt,xt=LVt,xtdt+Vxt,xtσtdt,where Vx(t,x(t))=(∂V(t,x(t))/∂x1,…,∂V(t,x(t))/∂xn) and L is the weak infinitesimal operator which satisfies(16)LV1t,xt=βeβt∑i=1npixit-∑j=1ndijxjt-τt2+2eβt∑i=1npixit-∑j=1ndijxjt-τt×-cixit+∑j=1na^ijfjxjt+∑j=1nb^ijgjxjt-τt+∑j=1ne^ij∫-∞0hjxjt+θdηjθ+uit+eβt∑i=1n∑j=1npiσij2t,xjt,xjt-τt≤2βeβtmaxpi∑i=1nxi2t+η0xj2t-τt-2eβt∑i=1npicixi2t+2eβt∑i=1n∑j=1npicidijxjt-τtxit+2eβt∑i=1n∑j=1npia^ijfjxjtxit-2eβt∑i=1n∑j=1npidija^ijxjt-τt∑k=1nfkxkt+2eβt∑i=1n∑j=1npib^ijgjxjt-τtxit-2eβt∑i=1n∑j=1npidijxjt-τt∑k=1nb^ikgkxkt-τt+2eβt∑i=1n∑j=1npie^ij∫-∞0hjxjt+θdηjθxit+2eβt∑i=1n∑j=1npidijxjt-τt∑k=1ne^ik∫-∞0hkxkt+θdηkθ+2eβt∑i=1npixituit+2eβt∑i=1npi∑j=1ndijxjt-τtuit+eβt∑i=1n∑j=1npiσij2t,xjt,xjt-τt,where D=(dij)n×n is a matrix which satisfies Dx≤Dx=η0x, and under (H1) and (H2) we have(17)LV1t,xt≤2βeβtmaxpi∑i=1nxi2t+η0xj2t-τt-2eβt∑i=1npicixi2t+2eβt∑i=1n∑j=1npicidijxjt-τtxit+2eβt∑i=1n∑j=1npia^ijαjxjtxit+2eβt∑i=1n∑j=1npidija^ijxjt-τt∑k=1nαkxkt+2eβt∑i=1n∑j=1npib^ijβjxjt-τtxit+2eβt∑i=1n∑j=1npidijxjt-τt∑k=1nb^ikβkxkt-τt+2eβt∑i=1n∑j=1npie^ij∫-∞0hjxjt+θxitdηjθ+2eβt∑i=1n∑j=1n∑k=1npidije^ik∫-∞0hkxkt+θxjt-τtdηkθ+2eβt∑i=1npixituit+2eβt∑i=1npi∑j=1ndijxjt-τtuit+eβt∑i=1n∑j=1npiμijxj2t+νijxj2t-τt.By the inequality a2+b2≥2ab, we can 
obtain(18)LV1t,xt≤2βeβtmaxpi∑i=1nxi2t+η0xj2t-τt-2eβt∑i=1npicixi2t+eβt∑i=1n∑j=1npicidijxj2t-τt+xi2t+eβt∑i=1n∑j=1npia^ijαjxj2t+xi2t+eβt∑i=1n∑j=1n∑k=1npia^ikαkdijxjt-τt2+xkt2+eβt∑i=1n∑j=1npib^ijβjxjt-τt2+xit2+eβt∑i=1n∑j=1n∑k=1npib^ikβkdijxjt-τt2+xkt-τt2+eβt∑i=1n∑j=1npie^ij∫-∞0hjxjt+θ2+xit2dηjθ+eβt∑i=1n∑j=1n∑k=1npidije^ik∫-∞0hkxkt+θ2+xjt-τt2dηkθ+eβt∑i=1npi∑j=1ndijxjt-τt2+uit2+eβt∑i=1npixi2+uit2+eβt∑i=1n∑j=1npiμijxj2t+νijxj2t-τt≤2βeβtmaxpi∑i=1nxi2t+η0xj2t-τt-2eβt∑i=1npicixi2t+eβt∑i=1n∑j=1npicidijxj2t-τt+xi2t+eβt∑i=1n∑j=1npia^ijαjxj2t+xi2t+eβt∑i=1n∑j=1n∑k=1npia^ikαkdijxjt-τt2+xkt2+eβt∑i=1n∑j=1npib^ijβjxjt-τt2+xit2+eβt∑i=1n∑j=1n∑k=1npib^ikβkdijxjt-τt2+xkt-τt2+eβt∑i=1n∑j=1npie^ij∫-∞0γjxjt+θ2dηjθ+ljxit2+eβt∑i=1n∑j=1n∑k=1npidije^ik∫-∞0γkxkt+θ2dηkθ+lkxjt-τt2+eβt∑i=1npi∑j=1ndijxjt-τt2+uit2+eβt∑i=1npixi2+uit2+eβt∑i=1n∑j=1npiμijxj2t+νijxj2t-τt,LV2t,xt=eβt∑i=1nδixit2-1-τ˙teβt-τt∑i=1nδixit-τt2+∑i=1n∑j=1npie^ijγj1+∑k=1ndik∫-∞0eβt-θxjt2dηjθ-∑i=1n∑j=1npie^ijγj1+∑k=1ndik∫-∞0eβtxjt+θ2dηjθ≤eβt∑i=1nδixit2-1-χeβt-τ∑i=1nδixit-τt2+eβt∑i=1n∑j=1npie^ijγj1+∑k=1ndikxjt2∫-∞0e-βθdηjθ-∑i=1n∑j=1npie^ijγj1+∑k=1ndik∫-∞0eβtxjt+θ2dηjθ.By condition (12), there exists a sufficiently small constant β>0 such that(19)LVt,xt≤eβt∑i=1n2βmaxpi-2pici+pi+δi+∑j=1npicidij+αja^ij+βjb^ij+lie^ij+pjαia^ji+μij+γie^ij∫-∞0e-βθdηiθ+∑j=1n∑k=1npjdjkαia^jk+γie^ji∫-∞0e-βθdηiθxit2+eβt∑i=1n2βmaxpiη0-1-χe-βτδi+∑j=1npjcjdji+βib^ij+dji+νji+∑j=1n∑k=1npidjiαkb^jk+βka^jk+lke^jk+βib^jidjkxit-τt2+eβt∑i=1npi1+∑j=1ndijuit2≤eβt∑i=1npi1+∑j=1ndijuit2.Define τk≔inf{s≥0:xs≥k} as a stopping time; we have(20)EVt∧τk,xt∧τk=EV0,x0+E∫0t∧τkVs,xsds.Note that (21)EVt∧τk,xt∧τk≤EV0,x0+eβt-1∑i=1npi1+∑j=1ndijuit2,letting k→∞(22)EV0,x0≤2max1≤i≤npi1+η02+max1≤i≤nδiτ+max1≤i≤n∑j=1npje^jiγi∫-∞0e-βθ-1dηjθβ1+∑k=1ndjkEϕ2=K0Eϕ2.Then we have(23)EVt,xt≤K0Eϕ2+K1eβtu∞2,where K1=∑i=1npi1+∑i=1n|dij|. 
By the inequalities (m+n)2≤(1+ε)m2+(1+1/ε)n2, ε>0, it follows that(24)min1≤j≤npjEeβtxt2=min1≤j≤npjEeβtt-Dxt-τt+Dxt-τt2≤min1≤j≤npjeβtE1+εxt-Dxt-τt2+1+1εDxt-τt2≤1+εEt,xt+min1≤j≤npjη021+1εEeβtxt-τt2≤1+εK0Eϕ2+K1eβtu∞2+min1≤j≤npjη021+1εEeβtxt-τt2.We can find a large enough constant ε and a sufficiently small constant β such that η02eβτ(1+1/ε)<1, so(25)sup-τ≤s≤tmin1≤j≤npjEeβsxs2≤min1≤j≤npjEϕ2+sup0≤s≤tmin1≤j≤npjEeβsxs2≤min1≤j≤npj+1+εK0Eϕ2+1+εK1eβtu∞2+sup-τ≤s≤tmin1≤j≤npjη021+1εeβτEeβsxs2;that is,(26)sup-τ≤s≤tEeβsxs2≤αEϕ2+γeβtu∞2,where(27)α=min1≤j≤npj+1+εK0min1≤j≤npj1-η021+1/εeβτ,γ=1+εK1min1≤j≤npj1-η021+1/εeβτ.Furthermore, (28)Ext2≤αe-βtEϕ2+γu∞2.Hence, the desired assertion is derived.
Theorem 3.
Under (H1)–(H5), if conditions (11) and (12) are satisfied, the trivial solution of system (1) with ui(t)=0 is mean-square exponentially stable.
Moreover, when we remove the S-type distributed delays, system (1) becomes the following system: (29)dxit-∑j=1ndijxjt-τt=-cixit+∑j=1naijxitfjxjt+∑j=1nbijxitgjxjt-τt+uitdt+∑j=1nσijt,xjt,xjt-τtdωjt.
Corollary 4.
Under (H1)–(H3) and (H5), the trivial solution of system (29) is mean-square exponentially input-to-state stable, if there exist positive constants pi, δi, (i=1,…,n) such that the following conditions hold: (30)A1i=-2pici+pi+δi+∑j=1npicidij+αja^ij+βjb^ij+pjαia^ji+μij+∑j=1n∑k=1npjdjkαia^jk<0,A2i=-1-χδi+∑j=1npjcjdji+βib^ij+dji+νji+∑j=1n∑k=1npidjiαkb^jk+βka^jk+βib^jidjk<0.
Corollary 5.
Under (H1)–(H3) and (H5), if the conditions in (30) are satisfied, the trivial solution of system (29) with ui(t)=0 is mean-square exponentially stable.
Remark 6.
In particular, when we remove the neutral terms, system (29) becomes the system in [23]; from [23] we can see that the trivial solution of that system is mean-square exponentially input-to-state stable and, under certain conditions, the trivial solution with ui(t)=0 is mean-square exponentially stable. So our model is an extension of the model in [23].
Remark 7.
In fact, let(31)ηj(θ) = -1, θ≤-τj; 0, -τj<θ≤0, j=1,2,…,n,where ηj(θ) is of bounded variation on (-∞,0]; then the S-type distributed delay terms become ∑j=1neij(xi(t))hj(xj(t-τj)), so system (1) contains the system in [25]. In addition, if dηj(θ)=k(θ)dθ, the S-type distributed delay terms become the general distributed delays ∑j=1neij(xi(t))∫-∞0k(θ)hj(xj(t+θ))dθ, so our system contains the recent work of [26]. This shows that our setting is more general than those of the existing articles.
Remark 8.
Building on the achievements of [23, 25], this paper discusses a more general class of neural network systems by introducing several factors, such as neutral terms, S-type distributed delays, and stochastic perturbations, and analyzes the mean-square exponential input-to-state stability of the given neutral stochastic system by utilizing the Lyapunov-Krasovskii functional method, stochastic analysis techniques, and the Itô formula. The Lyapunov-Krasovskii functional considered in our paper is more complex than those in [23, 25], since it covers neutral terms and double integrals. Therefore, our theoretical results can be seen as an extension of [23, 25]. In addition, our results are computationally efficient, as the sufficient conditions can be easily checked without using a linear matrix inequality toolbox.
4. Numerical Simulation
In this section, a numerical example is given to illustrate the correctness of our conclusions. Consider a two-dimensional system with fi(x)=gi(x)=0.1cos x, τ(t)=0.4+0.1cos t, hi(x)=0.3x, ηi(θ)=eθ, θ∈(-∞,0], u1(t)=0.5cos x(t), and u2(t)=0.2cos x(t):(32)(σij)2×2=(0.8x1(t), 0.2x2(t-τ(t)); 0.2x1(t-τ(t)), 0.7x2(t)), (dij)2×2=(0.2, 0; 0, 0.3), (aij)2×2=(a11(x1), a12(x1); a21(x2), a22(x2)), (bij)2×2=(b11(x1), b12(x1); b21(x2), b22(x2)), (eij)2×2=(e11(x1), e12(x1); e21(x2), e22(x2)), where a1j(x1(t))=-0.1 if |x1(t)|≤1 and 0.1 if |x1(t)|>1, j=1,2; a2j(x2(t))=-0.1 if |x2(t)|≤1 and 0.1 if |x2(t)|>1, j=1,2; b1j(x1(t))=-0.1 if |x1(t)|≤1 and 0.1 if |x1(t)|>1, j=1,2; b2j(x2(t))=-0.01 if |x2(t)|≤1 and 0.01 if |x2(t)|>1, j=1,2; e11(x1(t))=-2 if |x1(t)|≤1 and 0.5 if |x1(t)|>1; e12(x1(t))=-0.1 if |x1(t)|≤1 and 0.1 if |x1(t)|>1; e21(x2(t))=-0.1 if |x2(t)|≤1 and 0.1 if |x2(t)|>1; e22(x2(t))=-2 if |x2(t)|≤1 and 1 if |x2(t)|>1. Take p1=0.125, p2=0.095, c1=2.7, c2=3.18, δ1=δ2=0.2, τ=0.5, χ=0.001, l1=l2=1, K=0.1, α1=α2=0.1, β1=β2=0.1, and γ1=γ2=0.3, which satisfy (H1)–(H5). Then it is easy to check the following conditions:(33)A11=-2p1c1+p1+δ1+∑j=12p1c1d1j+αja^1j+βjb^1j+l1e^1j+pjα1a^j1+μ1j+γ1e^1jK+∑j=12∑k=12pjdjkα1a^jk+γ1e^j1K=-0.0479<0,A12=-2p2c2+p2+δ2+∑j=12p2c2d2j+αja^2j+βjb^2j+l2e^2j+pjα2a^j2+μ2j+γ2e^2jK+∑j=12∑k=12pjdjkα2a^jk+γ2e^j2K=-0.0548<0,A21=-1-χδ1+∑j=12pjcjdj1+β1b^1j+dj1+νj1+∑j=12∑k=12p1dj1αkb^jk+βka^jk+lke^jk+β1b^j1djk=-0.0478<0,A22=-1-χδ2+∑j=12pjcjdj2+β2b^2j+dj2+νj2+∑j=12∑k=12p2dj2αkb^jk+βka^jk+lke^jk+β2b^j2djk=-0.0150<0.So the conditions of Theorem 2 are satisfied, and we conclude that the trivial solution of system (1) is mean-square exponentially input-to-state stable with initial values x(s)=ϕ(s), s∈[-τ,0]; see Figure 1. When u1(t)=u2(t)=0, the trivial solution of system (1) is mean-square exponentially stable; see Figure 2.
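A transient response like the one in Figure 1 can be reproduced qualitatively with an Euler-Maruyama scheme. The sketch below is a deliberately simplified stand-in for system (32), not a faithful reproduction: it drops the neutral term and the distributed-delay term, freezes the memristive weights at their |x|≤1 values, uses a constant lag τ=0.5 instead of τ(t), and takes the inputs as 0.5cos t and 0.2cos t; the constant initial history is also an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 10.0
tau = 0.5                       # constant-lag approximation of tau(t)
lag = int(tau / dt)
n_steps = int(T / dt)

c = np.array([2.7, 3.18])
A = np.full((2, 2), -0.1)       # a_ij frozen at their |x_i| <= 1 values
B = np.array([[-0.1, -0.1], [-0.01, -0.01]])
f = lambda x: 0.1 * np.cos(x)   # f_i = g_i = 0.1 cos(x)
u = lambda t: np.array([0.5 * np.cos(t), 0.2 * np.cos(t)])

x = np.zeros((n_steps + lag, 2))
x[:lag + 1] = 0.5               # constant initial history on [-tau, 0]
for k in range(lag, n_steps + lag - 1):
    xd = x[k - lag]             # delayed state
    drift = -c * x[k] + A @ f(x[k]) + B @ f(xd) + u((k - lag) * dt)
    # diffusion matrix sigma from (32), rows act on independent dW_1, dW_2
    S = np.array([[0.8 * x[k, 0], 0.2 * xd[1]],
                  [0.2 * xd[0], 0.7 * x[k, 1]]])
    dW = np.sqrt(dt) * rng.standard_normal(2)
    x[k + 1] = x[k] + drift * dt + S @ dW

# Consistent with mean-square ISS, the trajectory stays bounded.
assert np.all(np.isfinite(x))
assert np.max(np.abs(x)) < 5.0
```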
Figure 1: Transient response of variables x(t).
Figure 2: Transient response of variables x(t) with u(t)=0.
5. Conclusions and Discussion
Using stochastic analysis theory and the Itô formula, the mean-square exponential input-to-state stability of a class of stochastic neutral-type memristive neural networks has been studied. The correctness of our conclusions has been illustrated by a numerical example. In the current literature, there are few studies on stochastic neutral-type memristive neural networks with time-varying delays and S-type distributed delays. Furthermore, in this paper we have discussed the mean-square exponential input-to-state stability of system (1); one may continue to discuss synchronization and passivity, as well as some other complex dynamical behaviors, of (1).
Competing Interests
The authors declare that they have no competing interests.
Acknowledgments
The authors thank the National Natural Science Foundation of China (nos. 61563033, 11563005, and 11501281), the Natural Science Foundation of Jiangxi Province (nos. 20122BAB201002 and 20151BAB212011), and the Innovation Fund Designated for Graduate Students of Nanchang University (no. cx2015089) for their financial support.
References
1. Ding S., Wang Z. Stochastic exponential synchronization control of memristive neural networks with multiple time-varying delays, 2015. doi:10.1016/j.neucom.2015.03.069
2. Wang W., Li L., Peng H., Xiao J., Yang Y. Synchronization control of memristor-based recurrent neural networks with perturbations, 2014. doi:10.1016/j.neunet.2014.01.010
3. Wang H., Yu Y., Wen G. Stability analysis of fractional-order Hopfield neural networks with time delays, 2014. doi:10.1016/j.neunet.2014.03.012
4. Zeng X., Xiong Z., Wang C. Hopf bifurcation for neutral-type neural network model with two delays, 2016. doi:10.1016/j.amc.2016.01.050
5. Wu A., Zeng Z. Improved conditions for global exponential stability of a general class of memristive neural networks, 2015. doi:10.1016/j.cnsns.2014.06.029
6. Wan Y., Cao J. Periodicity and synchronization of coupled memristive neural networks with supremums, 2015. doi:10.1016/j.neucom.2015.02.007
7. Zhang G., Hu J., Shen Y. Exponential lag synchronization for delayed memristive recurrent neural networks, 2015. doi:10.1016/j.neucom.2014.12.016
8. Xiao J., Zhong S., Li Y. New passivity criteria for memristive uncertain neural networks with leakage and time-varying delays, 2015. doi:10.1016/j.isatra.2015.09.008
9. Xin Y., Li Y., Cheng Z., Xia H. Global exponential stability for switched memristive neural networks with time-varying delays, 2016. doi:10.1016/j.neunet.2016.04.002
10. Liu Y., Li C., Huang T., Wang X. Robust adaptive lag synchronization of uncertain fuzzy memristive neural networks with time-varying delays, 2016. doi:10.1016/j.neucom.2016.01.018
11. Han X., Wu H., Fang B. Adaptive exponential synchronization of memristive neural networks with mixed time-varying delays, 2016. doi:10.1016/j.neucom.2015.11.103
12. Wang L., Xu D. Global asymptotic stability of bidirectional associative memory neural networks with S-type distributed delays, 2002. doi:10.1080/00207720210161777
13. Han W., Kao Y., Wang L. Global exponential robust stability of static interval neural networks with S-type distributed delays, 2011. doi:10.1016/j.jfranklin.2011.05.023
14. Huang Z., Li X., Mohamad S., Lu Z. Robust stability analysis of static neural network with S-type distributed delays, 2009. doi:10.1016/j.apm.2007.12.006
15. Bao G., Zeng Z. Global asymptotical stability analysis for a kind of discrete-time recurrent neural network with discontinuous activation functions, 2016. doi:10.1016/j.neucom.2016.02.017
16. Zhang Z., Cao J., Zhou D. Novel LMI-based condition on global asymptotic stability for a class of Cohen-Grossberg BAM networks with extended activation functions, 2014. doi:10.1109/tnnls.2013.2289855
17. Shi L., Zhu H., Zhong S., Hou L. Globally exponential stability for neural networks with time-varying delays, 2013. doi:10.1016/j.amc.2013.04.035
18. Huang C., Chen P., He Y., Huang L., Tan W. Almost sure exponential stability of delayed Hopfield neural networks, 2008. doi:10.1016/j.aml.2007.07.030
19. Liu L., Zhu Q. Almost sure exponential stability of numerical solutions to stochastic delay Hopfield neural networks, 2015. doi:10.1016/j.amc.2015.05.134
20. Pan J., Liu X., Xie W. Exponential stability of a class of complex-valued neural networks with time-varying delays, 2015. doi:10.1016/j.neucom.2015.02.024
21. Yang D., Qiu G., Li C. Global exponential stability of memristive neural networks with impulse time window and time-varying delays, 2016. doi:10.1016/j.neucom.2015.07.040
22. Zhu Q., Li X. Exponential and almost sure exponential stability of stochastic fuzzy delayed Cohen-Grossberg neural networks, 2012. doi:10.1016/j.fss.2012.01.005
23. Lou X. Y., Ye Q. Input-to-state stability of stochastic memristive neural networks with time-varying delay, 2015. doi:10.1155/2015/140857
24. Filippov A. F. Classical solutions of differential equations with multi-valued right-hand side, 1967. doi:10.1137/0305040
25. Zhu Q., Cao J. Mean-square exponential input-to-state stability of stochastic delayed neural networks, 2014. doi:10.1016/j.neucom.2013.10.029
26. Song Y., Sun W., Jiang F. Mean-square exponential input-to-state stability for neutral stochastic neural networks with mixed delays, 2016. doi:10.1016/j.neucom.2016.03.048