This paper studies the asymptotic behavior of a class of delayed reaction-diffusion Hopfield neural networks driven by finite-dimensional Wiener processes. New sufficient conditions guaranteeing the mean square exponential stability of this system are established by means of Poincaré's inequality and stochastic analysis techniques. The almost sure exponential stability of the system is proved using the Burkholder-Davis-Gundy inequality, the Chebyshev inequality, and the Borel-Cantelli lemma. Finally, an example is given to illustrate the effectiveness of the proposed approach, together with simulations carried out in Matlab.

1. Introduction

Recently, the dynamics of Hopfield neural networks with reaction-diffusion terms have been investigated in depth, because their various generalizations have been widely applied to practical engineering problems such as pattern recognition, associative memory, and combinatorial optimization (see [1–3]). Under closer scrutiny, however, a more realistic model should include some of the past states of the system; the theory of functional differential equations has been extensively developed for this purpose [4, 5], and many authors have studied the asymptotic behavior of neural networks with delays [6–9]. Moreover, random perturbations are unavoidable in practice [3, 10]; including environmental noise in these systems yields a more faithful model [3, 11–16]. This paper is therefore devoted to the exponential stability of the following delayed reaction-diffusion Hopfield neural networks driven by finite-dimensional Wiener processes:
$$
\begin{aligned}
du_i(t,x) &= \Bigg(\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Big(D_{ij}(x)\frac{\partial u_i}{\partial x_j}\Big) - a_i u_i + \sum_{j=1}^{n} c_{ij} f_j\big(u_j(t-r,x)\big)\Bigg)dt + \sum_{j=1}^{m} g_{ij}\big(u_i(t-r,x)\big)\,dW_j,\\
\frac{\partial u_i}{\partial \nu}\bigg|_{\partial\mathcal{O}} &= 0,\qquad t\ge 0,\\
u_i(\theta,x) &= \phi_i(\theta,x),\qquad x\in\mathcal{O}\subset\mathbb{R}^l,\ \theta\in[-r,0],\ i=1,2,\dots,n.
\end{aligned}
\tag{1.1}
$$

There are n neural network units in this system, and u_i(t,x) denotes the potential of cell i at time t and position x. The a_i are positive constants denoting the rate with which the ith unit resets its potential to the resting state in isolation when disconnected from the network and external inputs, and the c_ij are the output connection weights from the jth neuron to the ith neuron. The f_j are the activation functions of the network, and r is the transmission delay. 𝒪 denotes an open, bounded, and connected subset of ℝ^l with a sufficiently regular boundary ∂𝒪, ν is the unit outward normal on ∂𝒪, ∂u_i/∂ν=(∇u_i,ν)_{ℝ^l}, and the g_ij are noise intensities. The initial data ϕ_i are ℱ₀-measurable and bounded functions, almost surely.

Let (Ω,ℱ,ℙ) be a complete probability space with a filtration {ℱ_t}_{t≥0} satisfying the usual conditions (see [10]), and let W_i(t), i=1,2,…,m, be scalar standard Brownian motions defined on (Ω,ℱ,ℙ).

For convenience, we rewrite system (1.1) in the vector form:
$$
\begin{aligned}
du &= \big(\nabla\cdot(D(x)\circ\nabla u) - Au + Cf(u(t-r))\big)\,dt + G(u(t-r))\,dW,\\
\frac{\partial u(t,x)}{\partial \nu}\bigg|_{\partial\mathcal{O}} &= 0,\qquad t\ge 0,\\
u(\theta,x) &= \phi(\theta,x),\qquad \theta\in[-r,0],
\end{aligned}
$$
where C=(c_ij)_{n×n}, u=(u₁,u₂,…,u_n)^T, ∇u=(∇u₁,…,∇u_n)^T, W=(W₁,W₂,…,W_m)^T, f(u)=(f₁(u₁),f₂(u₂),…,f_n(u_n))^T, A=Diag(a₁,a₂,…,a_n), ϕ=(ϕ₁,ϕ₂,…,ϕ_n)^T, G(u)=(g_ij(u_i))_{n×m}, D=(D_ij)_{n×l}, and D∘∇u=(D_ij ∂u_i/∂x_j)_{n×l} is the Hadamard product of the matrices D and ∇u; for the definition of the divergence operator ∇·u, we refer to [2, 3].

2. Preliminaries and Notations

In this paper, we work with the Hilbert spaces H≜L²(𝒪) and V≜H¹(𝒪). According to [17–19], V⊂H=H′⊂V′, where H′ and V′ denote the duals of H and V, respectively; the injections are continuous and the embedding V⊂H is compact. ∥·∥ and |∥·∥| denote the norms of H and V, respectively.

U≜(L²(𝒪))ⁿ is the space of square-integrable vector-valued Lebesgue measurable functions on 𝒪; it is a Banach space under the norm ∥u∥_U=(∑_{i=1}^n ∥u_i∥²)^{1/2}.

C≜C([-r,0],U) is the Banach space of all continuous functions from [-r,0] into U, equipped with the sup-norm ∥ϕ∥_C = sup_{-r≤s≤0} ∥ϕ(s)∥_U.

With any continuous ℱ_t-adapted U-valued stochastic process u(t):Ω→U, t≥-r, we associate a continuous ℱ_t-adapted C-valued stochastic process u_t:Ω→C, t≥0, by setting u_t(s,x)(ω)=u(t+s,x)(ω), s∈[-r,0], x∈𝒪.

C^b_{ℱ₀} denotes the space of all bounded continuous processes ϕ:[-r,0]×Ω→U such that ϕ(θ,·) is ℱ₀-measurable for each θ∈[-r,0] and E∥ϕ∥_C<∞.

ℒ(K) is the set of all bounded linear operators from K into K; equipped with the operator norm, it is a Banach space.

In this paper, we make the following assumptions.

(H1) f_i and g_ij are Lipschitz continuous with positive Lipschitz constants k₁, k₂ such that |f_i(u)-f_i(v)| ≤ k₁|u-v| and |g_ij(u)-g_ij(v)| ≤ k₂|u-v| for all u,v∈ℝ, and f_i(0)=0, g_ij(0)=0.

(H2) There exists α>0 such that D_ij(x) ≥ α/l for all x∈𝒪.

(H3) η = 2αβ² + 2k₃ − nk₁²σ²e^r − mk₂²e^r − 2 > 0, where k₃ = min_i{a_i} and σ = max_{i,j}{|c_ij|}.

Remark 2.1.

We can infer from (H1) that system (1.1) admits the trivial equilibrium u(t,x,ω)≡0.

Let us define the linear operator 𝔄 as follows:
$$
\mathfrak{A}:\Pi(\mathfrak{A})\subset U\to U,\qquad \mathfrak{A}u=\nabla\cdot(D(x)\circ\nabla u),
$$
where Π(𝔄) = {u∈(H²(𝒪))ⁿ : ∂u/∂ν|_{∂𝒪}=0}.

Lemma 2.2 (Poincaré’s inequality).

Let 𝒪 be a bounded domain in ℝ^l and let ϕ be a twice continuously differentiable function from 𝒪 into ℝ; then
$$
\|\phi\|\le\beta^{-1}\,|\|\phi\||,
$$
where the constant β>0 depends on the size of 𝒪.

Lemma 2.3.

Let us consider the equation
$$
\frac{du}{dt}=\mathfrak{A}u,\quad t\ge 0,\qquad u(0)=\phi.
\tag{2.3}
$$
For every ϕ∈U, let u(t)=S(t)ϕ denote the solution of (2.3); then S(t) is a contraction on U.

Proof.

Taking the inner product of (2.3) with u(t) in U and employing the Gauss theorem together with condition (H2), we get (𝔄u,u) ≤ -α|‖u‖|²_{H¹(𝒪)ⁿ}, where (·,·) is the inner product in U and |‖·‖|_{H¹(𝒪)ⁿ} denotes the norm of H¹(𝒪)ⁿ (see [3]); this means
$$
\frac12\frac{d}{dt}\|u(t)\|_U^2+\alpha|\|u(t)\||_{H^1(\mathcal{O})^n}^2\le 0.
$$
Thanks to the Poincaré inequality, one obtains
$$
\frac{d}{dt}\|u(t)\|_U^2+2\alpha\beta^2\|u(t)\|_U^2\le 0.
$$
Multiplying both sides of this inequality by e^{2αβ²t}, we have
$$
\frac{d}{dt}\Big(e^{2\alpha\beta^2 t}\|u(t)\|_U^2\Big)\le 0.
$$
Integrating from 0 to t, we obtain
$$
\|u(t)\|_U^2\le e^{-2\alpha\beta^2 t}\|\phi\|_U^2.
$$
By the definition of ∥S(t)∥_{ℒ(U)}, we have ∥S(t)∥_{ℒ(U)} ≤ 1.
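The contraction property proved above can also be observed numerically. The following sketch is illustrative only and not part of the original paper: it discretizes a scalar Neumann heat equation on an interval with an explicit finite-difference scheme (all parameter values are assumptions) and tracks the discrete L² norm, which should never increase — the discrete analogue of ∥S(t)∥_{ℒ(U)} ≤ 1.

```python
import math

# Explicit finite-difference check that the Neumann heat semigroup S(t)
# is a contraction in L^2: the norm ||u(t)|| should be nonincreasing.
# All parameters below are illustrative choices, not from the paper.
D, L, nx = 10.0, 20.0, 200            # diffusivity, domain length, grid cells
dx = L / nx
dt = 0.4 * dx * dx / D                # explicit stability: D*dt/dx^2 <= 1/2
x = [(i + 0.5) * dx for i in range(nx)]          # cell-centered grid
u = [math.cos(0.2 * math.pi * xi) for xi in x]   # initial datum phi

def l2_norm(v):
    return math.sqrt(sum(vi * vi for vi in v) * dx)

norms = [l2_norm(u)]
for _ in range(500):
    # zero-flux (Neumann) boundary: ghost cell equals the boundary cell
    unew = []
    for i in range(nx):
        left = u[i - 1] if i > 0 else u[0]
        right = u[i + 1] if i < nx - 1 else u[nx - 1]
        unew.append(u[i] + D * dt / dx**2 * (left - 2 * u[i] + right))
    u = unew
    norms.append(l2_norm(u))
# norms is now a nonincreasing sequence: the semigroup contracts in L^2
```

Because the update matrix is symmetric with spectral radius at most one under the stated step-size restriction, the discrete L² norm is monotonically nonincreasing along the flow.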

Definition 2.4 (see [20–22]).

A stochastic process u(t):[-r,+∞)×Ω→U is called a global mild solution of (1.1) if

u(t) is adapted to ℱt

u(t) is measurable with ∫₀^∞ ∥u(t)∥²_U dt < ∞ almost surely, and

$$
\begin{aligned}
u(t) &= S(t)\phi(0) - \int_0^t S(t-s)Au(s)\,ds + \int_0^t S(t-s)Cf(u(s-r))\,ds + \int_0^t S(t-s)G(u(s-r))\,dW(s),\quad t\ge 0,\\
u(t) &= \phi(t),\quad t\in[-r,0],\ \phi\in C^b_{\mathcal{F}_0},
\end{aligned}
$$
for all t∈[-r,+∞) with probability one.

Definition 2.5.

Equation (1.1) is said to be almost surely exponentially stable if, for any solution u(t,x,ω) with initial data ϕ∈Cℱ0b, there exists a positive constant λ such that
$$
\limsup_{t\to\infty}\frac{\ln\|u_t\|_C}{t}\le -\lambda,\qquad u_t\in C,\ \text{almost surely}.
$$

Definition 2.6.

System (1.1) is said to be exponentially stable in the mean square sense if there exist positive constants κ and α such that, for any solution u(t,x,ω) with the initial condition ϕ∈Cℱ0b, one has
$$
\mathbb{E}\|u_t\|_C^2\le \kappa e^{-\alpha(t-t_0)},\qquad t\ge t_0.
$$

3. Main Result

Theorem 3.1.

Suppose conditions H1–H3 hold; then (1.1) is exponentially stable in the mean square sense.

Proof.

Let u be the mild solution of (1.1). By the Itô formula, we observe that
$$
d\big(e^{\lambda t}u_i^2\big) = \lambda e^{\lambda t}u_i^2\,dt + e^{\lambda t}\Bigg(2u_i\Big(\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Big(D_{ij}\frac{\partial u_i}{\partial x_j}\Big) - a_i u_i + \sum_{j=1}^{n}c_{ij}f_j(u_j(t-r))\Big)\Bigg)dt + e^{\lambda t}G_iG_i^T\,dt + 2e^{\lambda t}u_iG_i\,dW,
$$
where G_i=(G_{i1},G_{i2},…,G_{im}) and λ is a positive constant to be chosen below. Integrating between 0 and t, we find that
$$
e^{\lambda t}u_i^2(t) = \phi_i(0)^2 + \int_0^t\lambda e^{\lambda s}u_i^2\,ds + 2\int_0^t e^{\lambda s}u_i\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Big(D_{ij}\frac{\partial u_i}{\partial x_j}\Big)ds - 2\int_0^t e^{\lambda s}a_iu_i^2\,ds + 2\int_0^t e^{\lambda s}u_i\sum_{j=1}^{n}c_{ij}f_j(u_j(s-r))\,ds + \int_0^t e^{\lambda s}G_iG_i^T\,ds + 2\int_0^t e^{\lambda s}u_iG_i\,dW.
$$
Integrating this equation over 𝒪 and using Fubini's theorem, we obtain
$$
e^{\lambda t}\|u_i(t)\|^2 = \|\phi_i(0)\|^2 + \lambda\int_0^t e^{\lambda s}\int_{\mathcal{O}}u_i^2\,dx\,ds + 2\int_0^t e^{\lambda s}\int_{\mathcal{O}}u_i\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Big(D_{ij}\frac{\partial u_i}{\partial x_j}\Big)dx\,ds - 2\int_0^t e^{\lambda s}\int_{\mathcal{O}}a_iu_i^2\,dx\,ds + 2\int_0^t e^{\lambda s}\int_{\mathcal{O}}u_i\sum_{j=1}^{n}c_{ij}f_j(u_j(s-r))\,dx\,ds + \int_0^t e^{\lambda s}\int_{\mathcal{O}}G_iG_i^T\,dx\,ds + 2\int_0^t e^{\lambda s}\int_{\mathcal{O}}u_iG_i\,dx\,dW.
$$
Taking expectations on both sides and noting that the stochastic integral has zero mean (see [3, 10, 16]),
$$
2\,\mathbb{E}\int_0^t\int_{\mathcal{O}}e^{\lambda s}u_iG_i\,dx\,dW = 0.
$$
Then, by Fubini's theorem, we have
$$
e^{\lambda t}\mathbb{E}\|u_i(t)\|^2 = \mathbb{E}\|\phi_i(0)\|^2 + \lambda\int_0^t e^{\lambda s}\int_{\mathcal{O}}\mathbb{E}u_i^2\,dx\,ds + 2\,\mathbb{E}\int_0^t e^{\lambda s}\int_{\mathcal{O}}u_i\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Big(D_{ij}\frac{\partial u_i}{\partial x_j}\Big)dx\,ds - 2\int_0^t e^{\lambda s}\int_{\mathcal{O}}a_i\mathbb{E}u_i^2\,dx\,ds + 2\,\mathbb{E}\int_0^t e^{\lambda s}\int_{\mathcal{O}}u_i\sum_{j=1}^{n}c_{ij}f_j(u_j(s-r))\,dx\,ds + \mathbb{E}\int_0^t e^{\lambda s}\int_{\mathcal{O}}G_iG_i^T\,dx\,ds \triangleq I_1+I_2+I_3+I_4+I_5+I_6.
$$
We observe that
$$
I_1 \triangleq \mathbb{E}\|\phi_i(0)\|^2 \le \sup_{\theta\in[-r,0]}\mathbb{E}\|\phi_i(\theta)\|^2,\qquad
I_2 \triangleq \lambda\int_0^t\int_{\mathcal{O}}e^{\lambda s}\mathbb{E}u_i^2\,dx\,ds = \lambda\int_0^t e^{\lambda s}\mathbb{E}\|u_i\|^2\,ds.
$$
From the Neumann boundary condition, by means of Green's formula, (H2), and Lemma 2.2 (see [3, 6, 7]), we know
$$
I_3 \triangleq 2\,\mathbb{E}\int_0^t\int_{\mathcal{O}}e^{\lambda s}u_i\sum_{j=1}^{l}\frac{\partial}{\partial x_j}\Big(D_{ij}\frac{\partial u_i}{\partial x_j}\Big)dx\,ds = -2\,\mathbb{E}\int_0^t\int_{\mathcal{O}}e^{\lambda s}\sum_{j=1}^{l}D_{ij}\Big(\frac{\partial u_i}{\partial x_j}\Big)^2dx\,ds \le -2\alpha\int_0^t e^{\lambda s}\mathbb{E}|\|u_i\||^2\,ds \le -2\alpha\beta^2\int_0^t e^{\lambda s}\mathbb{E}\|u_i\|^2\,ds.
$$
Then, by the positiveness of the a_i, one gets the relation
$$
I_4 \triangleq -2\int_0^t\int_{\mathcal{O}}e^{\lambda s}a_i\mathbb{E}u_i^2\,dx\,ds \le -2k_3\int_0^t e^{\lambda s}\mathbb{E}\|u_i\|^2\,ds,
$$
where k₃ = min{a₁,a₂,…,a_n} > 0. By the Young inequality and condition (H1), we have
$$
I_5 \triangleq 2\,\mathbb{E}\int_0^t\int_{\mathcal{O}}e^{\lambda s}u_i\sum_{j=1}^{n}c_{ij}f_j\,dx\,ds \le \int_0^t\int_{\mathcal{O}}e^{\lambda s}\Big(\mathbb{E}|u_i|^2+\mathbb{E}\Big|\sum_{j=1}^{n}c_{ij}f_j\Big|^2\Big)dx\,ds \le \int_0^t\int_{\mathcal{O}}e^{\lambda s}\Big(\mathbb{E}|u_i|^2+\sigma^2\sum_{j=1}^{n}\mathbb{E}|f_j(u_j(s-r))|^2\Big)dx\,ds \le \int_0^t e^{\lambda s}\big(\mathbb{E}\|u_i\|^2+\sigma^2k_1^2\,\mathbb{E}\|u(s-r)\|_U^2\big)ds,
$$
where σ = max|c_ij|, and
$$
I_6 \triangleq \int_0^t\int_{\mathcal{O}}e^{\lambda s}\mathbb{E}\,G_iG_i^T\,dx\,ds \le mk_2^2\int_0^t e^{\lambda s}\mathbb{E}\|u_i(s-r)\|^2\,ds.
$$
We infer from the estimates of I₁–I₆ that
$$
e^{\lambda t}\mathbb{E}\|u_i(t)\|^2 \le \sup_{\theta\in[-r,0]}\mathbb{E}\|\phi_i(\theta)\|^2 - (2\alpha\beta^2+2k_3-1-\lambda)\int_0^t e^{\lambda s}\mathbb{E}\|u_i\|^2\,ds + \sigma^2k_1^2\int_0^t e^{\lambda s}\mathbb{E}\|u(s-r)\|_U^2\,ds + mk_2^2\int_0^t e^{\lambda s}\mathbb{E}\|u_i(s-r)\|^2\,ds.
$$
Summing this inequality over i = 1,…,n, we obtain
$$
e^{\lambda t}\mathbb{E}\|u\|_U^2 \le \mathbb{E}\|\phi\|_C^2 - (2\alpha\beta^2+2k_3-1-\lambda)\int_0^t e^{\lambda s}\mathbb{E}\|u\|_U^2\,ds + (nk_1^2\sigma^2+mk_2^2)\int_0^t e^{\lambda s}\mathbb{E}\|u(s-r)\|_U^2\,ds.
$$
Since
$$
\int_0^t e^{\lambda s}\mathbb{E}\|u(s-r)\|_U^2\,ds \le e^{\lambda r}\int_{-r}^t e^{\lambda s}\mathbb{E}\|u(s)\|_U^2\,ds \le e^{2\lambda r}\int_{-r}^0\mathbb{E}\|\phi(s)\|_U^2\,ds + e^{\lambda r}\int_0^t e^{\lambda s}\mathbb{E}\|u(s)\|_U^2\,ds \le re^{2\lambda r}\mathbb{E}\|\phi\|_C^2 + e^{\lambda r}\int_0^t e^{\lambda s}\mathbb{E}\|u(s)\|_U^2\,ds,
$$
we deduce from the previous inequalities that
$$
e^{\lambda t}\mathbb{E}\|u\|_U^2 \le -c_1\int_0^t e^{\lambda s}\mathbb{E}\|u\|_U^2\,ds + c_2,
$$
where c₁ = 2αβ² + 2k₃ − 1 − nk₁²σ²e^{λr} − mk₂²e^{λr} − λ and c₂ = (1 + mk₂²re^{2λr} + nk₁²σ²re^{2λr})E∥ϕ∥²_C; we choose λ = 1 so that c₁ = η > 0. By the classical Gronwall inequality we see that
$$
e^{\lambda t}\mathbb{E}\|u\|_U^2 \le c_2e^{-\eta t};
$$
in other words,
$$
\mathbb{E}\|u\|_U^2 \le c_2e^{-(\eta+1)t}.
$$
Consequently, for t+θ ≥ t/2 ≥ 0 we also have
$$
\mathbb{E}\|u(t+\theta)\|_U^2 \le c_2e^{-(\eta+1)(t+\theta)} \le c_2e^{-\kappa t},\qquad \theta\in[-r,0],\ \kappa=\frac{\eta+1}{2},
$$
and we can conclude that
$$
\mathbb{E}\|u_t\|_C^2 \le c_2e^{-\kappa t}.
$$

Theorem 3.2.

If the system (1.1) satisfies hypotheses H1–H3, then it is almost surely exponentially stable.

Proof.

Let u(t) be the mild solution of (1.1). By Definition 2.4 and the inequality (∑_{i=1}^n a_i)² ≤ n∑_{i=1}^n a_i², a_i∈ℝ, we have
$$
\mathbb{E}\sup_{N\le t\le N+1}\|u(t)\|_U^2 \le 4\,\mathbb{E}\sup_{N\le t\le N+1}\|S(t-N+1)u(N-1)\|_U^2 + 4\,\mathbb{E}\sup_{N\le t\le N+1}\Big\|\int_{N-1}^t AS(t-s)u\,ds\Big\|_U^2 + 4\,\mathbb{E}\sup_{N\le t\le N+1}\Big\|\int_{N-1}^t S(t-s)Cf(u(s-r))\,ds\Big\|_U^2 + 4\,\mathbb{E}\sup_{N\le t\le N+1}\Big\|\int_{N-1}^t S(t-s)G(u(s-r))\,dW\Big\|_U^2 \triangleq I_1+I_2+I_3+I_4.
$$
Using the contraction property of S(t) and the result of Theorem 3.1, we find
$$
I_1 \triangleq 4\,\mathbb{E}\sup_{N\le t\le N+1}\|S(t-N+1)u(N-1)\|_U^2 \le 4\,\mathbb{E}\|u_{N-1}\|_C^2 \le 4c_2e^{-\kappa(N-1)}.
$$
By the Hölder inequality, we obtain
$$
I_2 \triangleq 4\,\mathbb{E}\sup_{N\le t\le N+1}\Big\|\int_{N-1}^t AS(t-s)u\,ds\Big\|_U^2 \le 4\sup_{N\le t\le N+1}(t-N+1)\int_{N-1}^t\mathbb{E}\|AS(t-s)u\|_U^2\,ds \le 8\int_{N-1}^{N+1}\mathbb{E}\|Au\|_U^2\,ds \le 8k_4^2\int_{N-1}^{N+1}\mathbb{E}\|u_s\|_C^2\,ds \le 8k_4^2c_2\int_{N-1}^{N+1}e^{-\kappa s}\,ds \le 8k_4^2\rho_1e^{-\kappa(N-1)},
$$
where ρ₁ = c₂/κ and k₄ = max{a₁,a₂,…,a_n}.

By virtue of Theorem 3.1, the Hölder inequality, and (H1), we have
$$
I_3 \triangleq 4\,\mathbb{E}\sup_{N\le t\le N+1}\Big\|\int_{N-1}^t S(t-s)Cf(u(s-r))\,ds\Big\|_U^2 \le 4\sup_{N\le t\le N+1}(t-N+1)\,\mathbb{E}\int_{N-1}^t\|Cf(u(s-r))\|_U^2\,ds \le 8\sigma^2\,\mathbb{E}\int_{N-1}^{N+1}\|f(u(s-r))\|_U^2\,ds \le 8k_1^2\sigma^2\int_{N-1}^{N+1}\mathbb{E}\|u_s\|_C^2\,ds \le 8k_1^2\sigma^2c_2\int_{N-1}^{N+1}e^{-\kappa s}\,ds \le 8k_1^2\sigma^2\rho_1e^{-\kappa(N-1)}.
$$
Then, by the Burkholder-Davis-Gundy inequality (see [18, 22]), there exists c₃ such that
$$
I_4 \triangleq 4\,\mathbb{E}\sup_{N\le t\le N+1}\Big\|\int_{N-1}^t S(t-s)G(u(s-r))\,dW\Big\|_U^2 \le 4c_3\sup_{N\le t\le N+1}\mathbb{E}\int_{N-1}^t\|S(t-s)G(u(s-r))I\|_U^2\,ds \le 4c_3k_2^2\int_{N-1}^{N+1}\mathbb{E}\|u_s\|_C^2\,ds \le 4c_3k_2^2c_2\int_{N-1}^{N+1}e^{-\kappa s}\,ds \le 4c_3k_2^2\rho_1e^{-\kappa(N-1)},
$$
where I = (1,1,…,1)^T is an m-dimensional vector.

We can deduce from the estimates of I₁–I₄ that
$$
\mathbb{E}\sup_{N\le t\le N+1}\|u(t)\|_U^2 \le \rho_2e^{-\kappa(N-1)},
$$
where ρ₂ = 4c₂ + (8k₄² + 8k₁²σ² + 4c₃k₂²)ρ₁.

Thus, for any positive constants ε_N, the Chebyshev inequality yields
$$
\mathbb{P}\Big(\sup_{N\le t\le N+1}\|u(t)\|_U>\varepsilon_N\Big) \le \frac{1}{\varepsilon_N^2}\,\mathbb{E}\sup_{N\le t\le N+1}\|u(t)\|_U^2 \le \frac{\rho_2}{\varepsilon_N^2}\,e^{-\kappa(N-1)}.
$$
Choosing ε_N² = N²e^{-κ(N-1)}, the right-hand side is summable in N, so by the Borel-Cantelli lemma ‖u(t)‖²_U ≤ N²e^{-κ(N-1)} for N ≤ t ≤ N+1 and all sufficiently large N, almost surely; hence
$$
\limsup_{t\to\infty}\frac{\ln\|u(t)\|_U^2}{t}\le -\kappa,\qquad\text{almost surely}.
$$
This completes the proof of the theorem.

4. Simulation

Consider the following two-dimensional stochastic reaction-diffusion recurrent neural network with delay:
$$
\begin{aligned}
du_1(t,x) &= \big(10\Delta u_1 - 7u_1 + 1.3\tanh(u_1(t-1,x))\big)\,dt + u_1(t-1,x)\,dW,\\
du_2(t,x) &= \big(10\Delta u_2 - 7u_2 + \tanh(u_1(t-1,x)) - \tanh(u_2(t-1,x))\big)\,dt + u_2(t-1,x)\,dW,\\
\frac{\partial u_i(t,0)}{\partial x} &= \frac{\partial u_i(t,20)}{\partial x} = 0,\qquad t\ge 0,\\
u_1(\theta,x) &= \cos(0.2\pi x),\quad u_2(\theta,x)=\cos(0.1\pi x),\qquad x\in[0,20],\ \theta\in[-1,0].
\end{aligned}
$$

Here Δ is the Laplace operator. We have β ≥ 1/20, α ≥ 10, k₁ = 1, k₂ = 1, k₃ = 7, σ = 1.3, n = 2, m = 1, and r = 1, so that η > 0; by Theorems 3.1 and 3.2, this system is exponentially stable both in mean square and almost surely. The results are shown in Figures 1, 2, and 3.
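As a quick sanity check (not part of the original paper), condition (H3) can be evaluated numerically for the constants stated above; reading off the example, the delay is r = 1 and there is a single Wiener process, so m = 1 is assumed:

```python
import math

# Evaluate the stability condition (H3) for the example, using the
# constants stated in the text: alpha = 10, beta = 1/20, k1 = k2 = 1,
# k3 = 7, sigma = 1.3, n = 2; m = 1 and r = 1 are read off the example.
alpha, beta = 10.0, 1.0 / 20.0
k1, k2, k3 = 1.0, 1.0, 7.0
sigma, n, m, r = 1.3, 2, 1, 1.0

eta = (2 * alpha * beta**2 + 2 * k3
       - n * k1**2 * sigma**2 * math.exp(r)
       - m * k2**2 * math.exp(r) - 2)
print(f"eta = {eta:.4f}")   # -> eta = 0.1439, positive as required
```

Since η ≈ 0.14 > 0, Theorems 3.1 and 3.2 indeed apply.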

We use the forward Euler method to simulate this example [23–25], choosing the time step Δt = 0.01 and the space step Δx = 1, so that δ = Δt/Δx² = 0.01.
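A minimal sketch of such a simulation is given below. It is an illustration under the stated step sizes, not the authors' Matlab code; Python with a fixed random seed is used instead, and the delayed argument u(t−1,x) is handled by keeping the last r/Δt time levels in a history buffer:

```python
import math
import random

# Euler-Maruyama sketch of the example system (illustrative, not the
# authors' Matlab code): space step dx = 1 on [0, 20], time step
# dt = 0.01, delay r = 1, zero-flux (Neumann) boundary conditions.
random.seed(0)
D, a, dt, dx, r = 10.0, 7.0, 0.01, 1.0, 1.0
nx, ndel = 21, int(r / dt)               # grid points, delay steps
x = [i * dx for i in range(nx)]

# history buffer: hist[k][i][j] = u_i at delay level k, grid point j;
# the initial history is constant in theta, as in the example
hist = [[[math.cos(0.2 * math.pi * xj) for xj in x],
         [math.cos(0.1 * math.pi * xj) for xj in x]] for _ in range(ndel + 1)]

def lap(v, j):                           # discrete Laplacian, reflecting ends
    left = v[j - 1] if j > 0 else v[1]
    right = v[j + 1] if j < nx - 1 else v[nx - 2]
    return (left - 2 * v[j] + right) / dx**2

for step in range(2000):                 # integrate up to t = 20
    u = hist[-1]                         # current level u(t)
    ud = hist[0]                         # delayed level u(t - r)
    dW = random.gauss(0.0, math.sqrt(dt))
    new = [[0.0] * nx for _ in range(2)]
    for j in range(nx):
        f1 = 1.3 * math.tanh(ud[0][j])
        f2 = math.tanh(ud[0][j]) - math.tanh(ud[1][j])
        new[0][j] = u[0][j] + (D * lap(u[0], j) - a * u[0][j] + f1) * dt + ud[0][j] * dW
        new[1][j] = u[1][j] + (D * lap(u[1], j) - a * u[1][j] + f2) * dt + ud[1][j] * dW
    hist.pop(0)
    hist.append(new)

energy = sum(v * v for comp in hist[-1] for v in comp) * dx
print(f"||u(20)||^2 ~ {energy:.3e}")     # decays toward zero, as predicted
```

The explicit scheme is stable here since Dδ = 0.1 ≤ 1/2, and the computed energy decays toward zero, in agreement with the exponential stability established in Section 3.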

Acknowledgment

The authors wish to thank the referees for their suggestions and comments. We are also indebted to the editors for their help. This work was supported by the National Natural Science Foundation of China (no. 11171374), Natural Science Foundation of Shandong Province (no. ZR2011AZ001).

References

[1] X. X. Liao and Y. L. Gao, "Stability of Hopfield neural networks with reaction-diffusion terms."
[2] L. S. Wang and D. Xu, "Global exponential stability of Hopfield reaction-diffusion neural networks with time-varying delays."
[3] L. S. Wang and Y. F. Wang, "Stochastic exponential stability of the delayed reaction-diffusion interval neural networks with Markovian jumping parameters."
[4] J. K. Hale and V. S. M. Lunel.
[5] S.-E. A. Mohammed.
[6] L. S. Wang and Y. Y. Gao, "Global exponential robust stability of reaction-diffusion interval neural networks with time-varying delays."
[7] L. S. Wang, R. Zhang, and Y. Wang, "Global exponential stability of reaction-diffusion cellular neural networks with S-type distributed time delays."
[8] H. Y. Zhao and G. L. Wang, "Existence of periodic oscillatory solution of reaction-diffusion neural networks with delays."
[9] J. G. Lu and L. J. Lu, "Global exponential stability and periodicity of reaction-diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions."
[10] X. Mao.
[11] J. Sun and L. Wan, "Convergence dynamics of stochastic reaction-diffusion recurrent neural networks with delays."
[12] M. Itoh and L. O. Chua, "Complexity of reaction-diffusion CNN."
[13] X. Lou and B. Cui, "New criteria on global exponential stability of BAM neural networks with distributed delays and reaction-diffusion terms."
[14] Q. Song and Z. Wang, "Dynamical behaviors of fuzzy reaction-diffusion periodic cellular neural networks with variable coefficients and delays."
[15] Q. Song, J. Cao, and Z. Zhao, "Periodic solutions and its exponential stability of reaction-diffusion recurrent neural networks with distributed time delays."
[16] K. Liu, "Lyapunov functionals and asymptotic stability of stochastic delay evolution equations."
[17] R. Temam.
[18] G. Da Prato and J. Zabczyk.
[19] I. Chueshov.
[20] R. Jahanipur, "Stochastic functional evolution equations with monotone nonlinearity: existence and stability of the mild solutions."
[21] T. Taniguchi, "Almost sure exponential stability for stochastic partial functional-differential equations."
[22] T. Caraballo, K. Liu, and A. Truman, "Stochastic functional partial differential equations: existence, uniqueness and asymptotic decay property."
[23] M. Kamrani and S. M. Hosseini, "The role of coefficients of a general SPDE on the stability and convergence of a finite difference method."
[24] D. J. Higham, "Mean-square and asymptotic stability of the stochastic theta method."
[25] P. E. Kloeden and E. Platen.