Random Attractor of Reaction-Diffusion Hopfield Neural Networks Driven by Wiener Processes



Introduction
It is well known that the dynamics of Hopfield neural networks have been deeply investigated, because such networks have been successfully employed in many areas such as pattern recognition, associative memory, and combinatorial optimization. The diffusion effect cannot be avoided in neural networks when electrons move in an asymmetric electromagnetic field. Hence, the dynamical behavior of reaction-diffusion Hopfield neural networks (RDHNNs, for short) has recently received much attention [1–11].
In a more realistic model, however, it is sensible to include some noise in the system in order to describe the propagation of an electric potential in a neuron. In fact, a neural network can be stabilized or destabilized by certain stochastic inputs. Much effort has been devoted to stochastic RDHNNs; see, e.g., [2, 12, 13]. A closer look at this literature shows that most of these works consider RDHNNs driven by finite-dimensional Wiener processes; there are few results on RDHNNs driven by infinite-dimensional Wiener processes. However, since neurons can be regarded as long thin cylinders that act like electrical cables, infinite-dimensional Wiener processes are more suitable than standard Brownian motion.
On the other hand, attractors play an important role in the long-time behavior of dynamical systems. The random attractor extends the concept of a strange attractor from deterministic to stochastic systems, and there has been great interest in random attractors for stochastic partial differential equations in recent decades. Random attractors are compact invariant random sets attracting all the orbits in their basin of attraction. They provide crucial geometric information about the asymptotic regime as $t \to \infty$; they can help us understand the chaotic behavior of stochastic RDHNNs and reduce the complexity, as well as provide the statistical properties of the system. When the global existence and uniqueness of the solution is assured, many scholars study the global stability, boundedness, and even synchronization of RDHNNs [1–3, 11]. However, to the best of our knowledge, there is no result on the attractor for RDHNNs, let alone the random attractor for stochastic RDHNNs. We hope this work can lay a solid foundation for future research.
So, this paper is devoted to the random attractor for the following RDHNNs driven by Wiener processes:
$$du_i(t, \mathbf{x}) = \Big[\nabla \cdot \big(D_i \nabla u_i(t, \mathbf{x})\big) - a_i u_i(t, \mathbf{x}) + \sum_{j=1}^{n} c_{ij} f_j\big(u_j(t, \mathbf{x})\big) + I_i\Big]\,dt + \sigma_i\,dW_i(t), \quad i = 1, \dots, n, \tag{1}$$
where $u_i(t, \mathbf{x})$ denotes the potential of the cell $i$ at time $t$ and position $\mathbf{x} \in O \subset \mathbb{R}^n$; the $a_i$ are positive constants and denote the rate with which the $i$th unit resets its potential to the resting state in isolation when it is disconnected from the network and external inputs; the $D_i$ are the diffusion coefficient matrices; the $c_{ij}$ are the connection weights of the neural network; the $f_j$ are the activation functions of the neural network, which are continuous; the $\sigma_i$ are the intensities of the noise; the $I_i$ denote the $i$th component of an external input source introduced from outside the network to the $i$th neuron, which are constant numbers; $O$ denotes an open, bounded, and connected subset of $\mathbb{R}^n$ with a sufficiently regular boundary $\partial O$; $\nabla$ is the gradient. The initial data $u_i^0$ are $\mathcal{F}_0$-measurable and belong to $L^2(O)$, a.e. $\omega \in \Omega$. For convenience, we rewrite system (1) in the vector form
$$d\mathbf{u} = \big[\nabla \cdot (D \nabla \mathbf{u}) - A\mathbf{u} + C\mathbf{f}(\mathbf{u}) + \mathbf{I}\big]\,dt + \Sigma\,d\mathbf{W}, \tag{2}$$
where $\mathbf{u} = (u_1, \dots, u_n)^{T}$, $A = \operatorname{diag}(a_1, \dots, a_n)$, $C = (c_{ij})_{n \times n}$, $\mathbf{f}(\mathbf{u}) = (f_1(u_1), \dots, f_n(u_n))^{T}$, $\mathbf{I} = (I_1, \dots, I_n)^{T}$, and $\Sigma = \operatorname{diag}(\sigma_1, \dots, \sigma_n)$. We will also use the following notations in the paper.

Preliminaries and Notations
In this paper, we introduce the following Hilbert spaces: $U = \{L^2(O)\}^n$ and $V = \{H^1(O)\}^n$; according to [14, 19–21], $V \subset U = U' \subset V'$, where $U'$, $V'$ denote the duals of the spaces $U$, $V$, respectively, the injections are continuous, and the embedding is compact; $\|\cdot\|$ and $\|\cdot\|_V$ represent the usual norms in $U$ and $V$, respectively. Let us define the operator $\mathcal{A}$ with its domain $\Pi(\mathcal{A})$; then $\mathcal{A}$ is the infinitesimal generator of an analytic semigroup $S(t)$.
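For concreteness, a standard realization of such an operator, consistent with the vector form (2), is the following; the precise form and the boundary condition are illustrative assumptions on our part, since the original display is not reproduced here.

```latex
% A standard realization of \mathcal{A} and its domain (illustrative
% assumption: homogeneous Neumann boundary conditions on \partial O).
\[
  \mathcal{A}\mathbf{u} \;=\; \nabla\cdot\big(D\,\nabla\mathbf{u}\big) \;-\; A\mathbf{u},
  \qquad
  \Pi(\mathcal{A}) \;=\; \Big\{\,\mathbf{u}\in \{H^{2}(O)\}^{n} \;:\;
      \frac{\partial \mathbf{u}}{\partial \nu}\Big|_{\partial O} = 0 \,\Big\},
\]
% With this choice, \mathcal{A} generates an analytic semigroup S(t) on U.
```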
Define the Nemytskii operator $\mathbf{f}: U \to U$ by $\mathbf{f}(\mathbf{u})(\mathbf{x}) = \big(f_1(u_1(\mathbf{x})), \dots, f_n(u_n(\mathbf{x}))\big)^{T}$. With these notations, we rewrite system (2) in the more abstract form
$$d\mathbf{u} = \big(\mathcal{A}\mathbf{u} + C\mathbf{f}(\mathbf{u}) + \mathbf{I}\big)\,dt + \Sigma\,d\mathbf{W}. \tag{5}$$
We recall that $W_{\mathcal{A}}(t)$ is the solution of the Ornstein-Uhlenbeck equation
$$d\mathbf{u} = \mathcal{A}\mathbf{u}\,dt + \Sigma\,d\mathbf{W}. \tag{6}$$
The regularity of (6) has been proved in [14, 22]. We also need the following propositions in the subsequent sections.
Proof. We recall that the solution of this linear equation is $u(t) = S(t)u_0$, so $S(t) = e^{\mathcal{A}t}$. Taking the inner product of (10) with $u(t)$ in $U$ and employing the Gauss divergence theorem together with condition (H3), we obtain the corresponding energy estimate, where $(\cdot, \cdot)$ is the inner product in $U$ (see [1–3]). By (H2), one obtains a differential inequality for $\|u(t)\|^2$; by the Gronwall-Bellman inequality we get a decay estimate, and by the definition of $\|S(t)\|$ and the uniform boundedness principle we conclude that $S(t)$ is a contraction map.
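The omitted displays follow a standard energy argument; a minimal sketch, assuming that (H2)-(H3) yield a coercivity constant $\lambda > 0$ (the constant's name is ours), runs as follows.

```latex
% Sketch of the contraction estimate (our reconstruction, not the
% original displays): test the linear equation with u(t) itself.
\[
  \tfrac{1}{2}\,\frac{d}{dt}\,\|u(t)\|^{2}
    \;=\; \big(\mathcal{A}u(t),\, u(t)\big)
    \;\le\; -\lambda\,\|u(t)\|^{2},
\]
\[
  \Longrightarrow\quad
  \|u(t)\| \;\le\; e^{-\lambda t}\,\|u_{0}\|
  \quad\Longrightarrow\quad
  \|S(t)\| \;\le\; e^{-\lambda t} \;\le\; 1 \qquad (t \ge 0),
\]
% so S(t) is indeed a contraction on U.
```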
Let $(X; d)$ be a complete separable metric space. We now recall the notions of a random dynamical system (RDS) and a random attractor.

and $\theta_t \mathbb{P} = \mathbb{P}$ for all $t \in \mathbb{R}$, where $\mathcal{B}(\mathbb{R})$ is the Borel $\sigma$-field of $\mathbb{R}$.
Lemma 11 (see [39, 40]). Suppose $\varphi$ is an RDS on a Polish space $X$, and there exists a compact set $K(\omega)$ absorbing every bounded deterministic set $B \subset X$. Then the set
$$\mathcal{A}(\omega) = \overline{\bigcup_{B \subset X} \Lambda_B(\omega)}, \qquad \Lambda_B(\omega) = \bigcap_{s \ge 0} \overline{\bigcup_{t \ge s} \varphi(t, \theta_{-t}\omega) B},$$
is a random attractor for $\varphi$.

Existence and Uniqueness of the Solutions
Let $\mathbf{k}(t) = \mathbf{u}(t) - W_{\mathcal{A}}(t)$, where $W_{\mathcal{A}}(t)$ has been defined in the previous section. Then, from (5) and (6), $\mathbf{k}(t)$ satisfies the equation
$$\frac{d\mathbf{k}}{dt} = \mathcal{A}\mathbf{k} + C\mathbf{f}\big(\mathbf{k} + W_{\mathcal{A}}(t)\big) + \mathbf{I}, \qquad \mathbf{k}(t_0) = \xi, \tag{22}$$
where $\xi = \mathbf{u}_0 - W_{\mathcal{A}}(t_0)$. Let us rewrite (22) in the integral form
$$\mathbf{k}(t) = S(t - t_0)\,\xi + \int_{t_0}^{t} S(t - s)\Big[C\mathbf{f}\big(\mathbf{k}(s) + W_{\mathcal{A}}(s)\big) + \mathbf{I}\Big]\,ds. \tag{23}$$
Definition 12. If $\mathbf{k}$ satisfies (23), we say that $\mathbf{u}(t) = \mathbf{k}(t) + W_{\mathcal{A}}(t)$ is a mild solution of (1).
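The pathwise equation (22) is obtained by formally subtracting (6) from (5); a one-line sketch of this Ornstein-Uhlenbeck transformation is the following.

```latex
% Why the transformation removes the noise term: subtract the
% Ornstein-Uhlenbeck equation (6) from the abstract system (5).
\[
  d\mathbf{k}
   = d\mathbf{u} - dW_{\mathcal{A}}
   = \big(\mathcal{A}\mathbf{u} + C\mathbf{f}(\mathbf{u}) + \mathbf{I}\big)dt + \Sigma\,d\mathbf{W}
     - \big(\mathcal{A}W_{\mathcal{A}}\,dt + \Sigma\,d\mathbf{W}\big)
   = \big(\mathcal{A}\mathbf{k} + C\mathbf{f}(\mathbf{k} + W_{\mathcal{A}}) + \mathbf{I}\big)dt,
\]
% so k solves the random (pathwise, noise-free) evolution equation (22).
```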
Let $\Sigma_{T^*} \triangleq C([t_0, t_0 + T^*]; U)$ be equipped with the norm $\|\mathbf{u}\|_{\Sigma_{T^*}} = \sup_{t \in [t_0, t_0 + T^*]} \|\mathbf{u}(t)\|$, and consider initial data $\xi$ which is $\mathcal{F}_0$-measurable and belongs to $U$, a.e. $\omega \in \Omega$. From now on, we discuss the mild solution of equation (22) by the Schauder fixed point method in the space $\Sigma_{T^*}$ for some $T^* > 0$.
Proof. We split the proof into the following steps.
By the Arzelà-Ascoli theorem, for any bounded set $B \subset \Sigma(\xi, T^*)$ the closure of $\mathcal{T}B$ is compact, so $\mathcal{T}$ is a completely continuous map. Then, by Steps 1-3 and Lemma 10, $\mathcal{T}$ has a fixed point in $\Sigma(\xi, T^*)$, which is a mild solution of (1).
Proof. Let $\mathbf{k}_1, \mathbf{k}_2 \in \Sigma(\xi, T^*)$ be two solutions of system (23) on $[0, T]$ with $\mathbf{k}_1(0) = \mathbf{k}_2(0) = \xi$, and set $\mathbf{z}_i = \mathcal{T}\mathbf{k}_i$, $i = 1, 2$, and $\mathbf{z} = \mathbf{z}_1 - \mathbf{z}_2$. Following the method in Step 3 of Theorem 13, we obtain the corresponding contraction estimate, and we take a stopping time $\tau^*$ such that the estimate holds on $[0, \tau^*]$.

Proof. Employing the method in [41, 42], let $\{\xi_m\}_{m=1}^{\infty}$ be a sequence in $\{C^{\infty}(O)\}^n$ approximating $\xi$, and let $\{W_{\mathcal{A}}^{m}\}$ be a sequence of regular processes converging to $W_{\mathcal{A}}$ in $\{C([t_0, t_0 + T] \times \overline{O})\}^n$, a.e. $\omega \in \Omega$. Let $\mathbf{k}_m$ be the solution of the correspondingly regularized equation. By the method of Theorem 13, it is easy to see that $\mathbf{k}_m$ exists on an interval of time $[t_0, t_0 + T_m]$ such that $T_m \to T^*$ a.s. and that $\mathbf{k}_m$ converges to $\mathbf{k}$ in $C([t_0, t_0 + T]; U)$ [39]. Moreover, $\mathbf{k}_m$ is regular. Taking the inner product of (40) with $\mathbf{k}_m$ in $U$ and employing the result of (11), we find a differential identity for $\|\mathbf{k}_m\|^2$. By the Cauchy-Schwarz inequality and the Young inequality, and then by the Young inequality together with Proposition 3 and condition (H1), one obtains a bound on the right-hand side; by (H2) and (H3), we deduce a closed differential inequality. By the classical Gronwall inequality, we then obtain a uniform bound with $\rho_{m,\infty} = \sup_{t \in [t_0, t_0 + T]} \|W_{\mathcal{A}}^{m}(t)\|$, for a.e. $\omega \in \Omega$. Taking the limit as $m \to \infty$, we see that the same bound holds for $\mathbf{k}$ a.s.
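The Gronwall step used above takes the standard form, sketched here with generic constants $\alpha, \beta > 0$ of our own naming, since the original displays are not reproduced.

```latex
% Classical Gronwall step (our generic reconstruction): from the
% differential inequality obtained via (H1)-(H3),
\[
  \frac{d}{dt}\,\|\mathbf{k}_{m}(t)\|^{2}
    \;\le\; \alpha\,\|\mathbf{k}_{m}(t)\|^{2} \;+\; \beta\big(1 + \rho_{m,\infty}^{2}\big),
\]
% one concludes the uniform-in-m bound
\[
  \|\mathbf{k}_{m}(t)\|^{2}
    \;\le\; e^{\alpha (t - t_{0})}\,\|\xi_{m}\|^{2}
      \;+\; \frac{\beta}{\alpha}\big(1 + \rho_{m,\infty}^{2}\big)\big(e^{\alpha (t - t_{0})} - 1\big),
  \qquad t \in [t_{0},\, t_{0} + T].
\]
```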
It follows that the desired bound holds for $\mathbf{k}$; thus we complete the proof.
It is easy to derive the following from Theorems 13, 14, and 15.

Existence of the Random Attractor
We set $\omega(t) = \mathbf{W}(t)$ and $\Omega = \{\omega \in C(\mathbb{R}, \mathbb{R}^n) \mid \omega(0) = 0\}$, with $\mathbb{P}$ being the product measure of two Wiener measures on the negative and positive parts of $\Omega$. In this case, there exists a Wiener shift $\theta_t$ defined as $\theta_t \omega(s) = \omega(s + t) - \omega(t)$, $s, t \in \mathbb{R}$, which is an ergodic transformation.

Absorbing Sets in U
Let us define the operator $\mathcal{A}$ as above; we can infer from (10) and (49) the corresponding semigroup estimates. The bilinear operator associated with $\mathcal{A}$ is defined as $a(\mathbf{u}, \mathbf{k}) = (\mathcal{A}\mathbf{u}, \mathbf{k})$. By utilizing the result of [2], we obtain the corresponding coercivity estimate, where $A : B$ is the Frobenius inner product of two $n \times n$ matrices, defined as $A : B \triangleq \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} b_{ij}$. Let $t_0 < -1$ and $\xi \in U$ be given, and let $\mathbf{k}$ be the solution of equation (23). Taking the inner product of (22) with $\mathbf{k}$ in $U$, we find a differential identity for $\|\mathbf{k}\|^2$. By the Cauchy-Schwarz inequality and the Young inequality, then by the Young inequality and (H1), and combining (51)-(54), we deduce a differential inequality with a constant $\lambda$; by (H4), we know that $\lambda > 0$. Using the classical Gronwall inequality, for $t_0 \leq -1$, we obtain the absorbing estimate.

Theorem 17. Under conditions (H1)-(H4), the RDS $\varphi$ defined by (48) admits a random absorbing set in $U$.
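A minimal sketch of the absorbing-ball argument follows; this is our generic reconstruction, with the constant $\lambda$ and the random bound $r(\omega)$ named here only for illustration.

```latex
% Sketch of the absorbing-set estimate obtained via Gronwall: from
\[
  \frac{d}{dt}\,\|\mathbf{k}(t)\|^{2}
    \;\le\; -\lambda\,\|\mathbf{k}(t)\|^{2} \;+\; c\big(\theta_{t}\omega\big),
\]
% multiplying by e^{\lambda t} and integrating from t_0 to 0 gives
\[
  \|\mathbf{k}(0)\|^{2}
    \;\le\; e^{\lambda t_{0}}\,\|\xi\|^{2}
      \;+\; \int_{t_{0}}^{0} e^{\lambda s}\, c\big(\theta_{s}\omega\big)\,ds
    \;\le\; e^{\lambda t_{0}}\,\|\xi\|^{2} \;+\; r(\omega)^{2},
  \quad
  r(\omega)^{2} \;\triangleq\; \int_{-\infty}^{0} e^{\lambda s}\, c\big(\theta_{s}\omega\big)\,ds,
\]
% so, as t_0 -> -infinity, the ball B(0, r(omega)) in U absorbs every
% bounded deterministic set.
```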
By Definition 9, we get the conclusion.
We need the following auxiliary proposition in the subsequent sections.
Proof. Introduce the corresponding random variable; by Theorem 18, it is a finite number. Then, by employing (71), for all $\rho > 0$ there exists $T(\rho) \leq -1$ such that, for all $t_0 \leq T(\rho)$ and all $\xi \in U$ with $\|\xi\| \leq \rho$, the solution $\mathbf{k}(0, \omega; t_0, \xi)$ of system (23) satisfies the absorbing inequality, which also means that the stochastic flow of (1) enters the corresponding random ball.

If the activation function $\mathbf{f}$ is a bounded Lipschitz function, then we can choose a sufficiently large $\lambda_1$ such that $\lambda_2 = 0$, so (H4) is satisfied automatically; we then obtain the following corollary.
Corollary 1. Under conditions (H1)-(H3) and the assumption that there exists a constant $k_1$ such that $\|\mathbf{f}(\mathbf{u})\| \leq k_1$, the RDS $\varphi$ defined by (48) admits a random attractor in $U$.

Example and Simulation

We consider a concrete instance of system (1); here $E$ denotes the expectation and $s \wedge t = \min\{s, t\}$. Under these assumptions, by Theorem 15, this system has a global mild solution; meanwhile, the coefficient condition of Theorem 20 (an inequality of the form $c_4 \geq 2\sqrt{2}\, c_2 c_5$) is satisfied, so according to Theorem 20 this system possesses a random attractor. We simulate this example using Matlab; for detailed information, see Figures 1-3. A Crank-Nicolson method in time and second-order centered differences in space are used to discretize the model, and a Newton iteration is used to solve the discretized nonlinear equations. For more on the numerical theory of SPDEs, we refer to [42, 43].
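The scheme just described can be illustrated with a short, self-contained sketch. This is not the authors' Matlab code: it is a minimal Python reconstruction under illustrative assumptions (a 1-D domain, two neurons, tanh activations, homogeneous Neumann boundary conditions, one scalar Wiener process per neuron instead of the infinite-dimensional noise, and made-up coefficient values), showing Crank-Nicolson time stepping with a Newton solve of the implicit nonlinear step.

```python
# Minimal sketch (not the authors' Matlab code): Crank-Nicolson in time,
# second-order central differences in space, Newton iteration for the
# implicit nonlinear step. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, N, Lx = 2, 50, 1.0                      # neurons, grid points, domain length
dx, dt, T = Lx / (N - 1), 1e-3, 1.0
a = np.array([1.0, 1.2])                   # decay rates a_i (toy values)
C = np.array([[0.5, -0.3], [0.4, 0.6]])    # connection weights c_ij (toy values)
Iext = np.array([0.1, -0.1])               # constant external inputs I_i
sigma = np.array([0.05, 0.05])             # noise intensities sigma_i

# second-order Laplacian with Neumann (zero-flux) ghost-point closure
Lap = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
Lap[0, 1] = Lap[-1, -2] = 2.0
Lap /= dx ** 2

def F(u):
    """Drift of (1): diffusion - decay + synaptic coupling + input; u is (n, N)."""
    return u @ Lap.T - a[:, None] * u + C @ np.tanh(u) + Iext[:, None]

def jac(u):
    """Jacobian of the flattened drift (row-major stacking of neurons)."""
    J = np.kron(np.eye(n), Lap) - np.kron(np.diag(a), np.eye(N))
    return J + np.kron(C, np.eye(N)) @ np.diag(1.0 / np.cosh(u.ravel()) ** 2)

x = np.linspace(0.0, Lx, N)
u = 0.5 * np.sin(np.pi * x)[None, :] * np.ones((n, 1))   # initial potentials
Id = np.eye(n * N)
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), size=(n, 1))       # Wiener increments
    rhs = u + 0.5 * dt * F(u) + sigma[:, None] * dW      # explicit CN half + noise
    v = u.copy()
    for _ in range(20):                                  # Newton iteration
        G = v - 0.5 * dt * F(v) - rhs                    # nonlinear residual
        if np.abs(G).max() < 1e-10:
            break
        step = np.linalg.solve(Id - 0.5 * dt * jac(v), G.ravel())
        v -= step.reshape(n, N)
    u = v
print("final max |u|:", float(np.abs(u).max()))
```

Crank-Nicolson is unconditionally stable for the stiff diffusion part, which is why only the reaction-diffusion drift is treated implicitly while the noise increment is added explicitly per step.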

Conclusion
The method used in this article can be extended to other systems, such as biological systems and fluid mechanical systems [44-48]. We can also apply this method to systems driven by other types of noise, such as G-Brownian motion [49]. Moreover, control techniques may be used to stabilize such systems possessing a random attractor [50-54]. We will study these problems in the future.