Adaptive Exponential Synchronization for Stochastic Competitive Neural Networks with Time-Varying Leakage Delays and Reaction-Diffusion Terms

We study the exponential synchronization problem for a class of stochastic competitive neural networks with different timescales, spatial diffusion, time-varying leakage delays, and discrete and distributed time-varying delays. By introducing several important inequalities and using the Lyapunov functional technique, an adaptive feedback controller is designed to realize the exponential synchronization of the proposed competitive neural networks in terms of the p-norm. Based on the theoretical results obtained in this paper, the influences of the timescale, external stimulus constants, disposable scaling constants, and controller parameters on synchronization are analyzed. Numerical simulations are presented to show the feasibility of the theoretical results.


Introduction
Neural networks are mathematical models inspired by the structure and functional aspects of biological neural networks. Meyer-Baese et al. [1] proposed competitive neural networks with different timescales, which describe the dynamics of cortical cognitive maps with unsupervised synaptic modifications. In the competitive neural network model, there are two types of state variables: the short-term memory (STM) variables describing the fast neural activity and the long-term memory (LTM) variables describing the slow unsupervised synaptic modifications. Hence, there are two timescales in the competitive neural networks, one of which corresponds to the fast change of the state and the other to the slow change of the synapse by external stimuli. The above competitive neural networks are described by

STM: $\varepsilon \dot{x}_i(t) = -a_i x_i(t) + \sum_{j=1}^{N} b_{ij} f(x_j(t)) + B_i \sum_{l=1}^{P} m_{il}(t) y_l$,

LTM: $\dot{m}_{il}(t) = -m_{il}(t) + y_l f(x_i(t))$,

where $i = 1, 2, \ldots, N$, $x_i(t)$ is the neuron current activity level, $m_{il}(t)$ is the synaptic efficiency, $f(x_i(t))$ is the output of neurons, $a_i > 0$ is the time constant of the neuron, $b_{ij}$ denotes the connection strength of the $j$th neuron on the $i$th neuron, $B_i$ is the strength of the external stimulus, $y_l$ is the constant external stimulus, $P$ is the number of the constant external stimuli, and $\varepsilon > 0$ is the timescale of the STM state.
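As a quick illustration of the two-timescale structure, the following sketch integrates a small STM/LTM system of the form above with forward Euler. All parameter values, the network size, and the tanh activation are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def simulate_stm_ltm(T=20.0, dt=1e-3, eps=0.5):
    """Forward-Euler integration of a 2-neuron competitive network.

    STM (fast):  eps * x' = -a*x + b @ f(x) + B * (m @ y)
    LTM (slow):  m' = -m + outer(f(x), y)
    """
    a = np.array([1.0, 1.0])            # neuron time constants a_i > 0
    b = np.array([[1.0, -1.5],
                  [0.8,  1.0]])         # connection strengths b_ij
    B = np.array([1.0, 1.0])            # external-stimulus strengths B_i
    y = np.array([0.5, 0.5])            # constant external stimuli y_l
    f = np.tanh                         # neuron output (Lipschitz, L_i = 1)
    x = np.array([0.1, -0.2])           # STM: neuron activity levels x_i
    m = np.zeros((2, 2))                # LTM: synaptic efficiencies m_il
    for _ in range(int(T / dt)):
        S = m @ y                       # aggregate external-stimulus term
        x = x + dt * (-a * x + b @ f(x) + B * S) / eps   # fast STM update
        m = m + dt * (-m + np.outer(f(x), y))            # slow LTM update
    return x, m
```

Shrinking eps speeds up the STM dynamics relative to the LTM dynamics, which is exactly the two-timescale separation the model is built around.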
Mathematical Problems in Engineering

Synchronization problems of neural networks have been widely researched because of their extensive applications in secure communication, information processing, and the design of chaos generators. Synchronization of competitive neural networks with different timescales has attracted great interest [2-7]. In [7], Gan et al. studied competitive neural networks with discrete and distributed time-varying delays, where $\tau(t)$ and $\tau^*(t)$ are the discrete and the distributed time-varying delay, respectively; $c_{ij}$ and $d_{ij}$ are, respectively, the discrete and the distributed delayed connection strengths of the $j$th neuron on the $i$th neuron; and $\alpha_i$ is the disposable scaling constant. The first term $-a_i x_i(t)$ on the right-hand side of (2) is called the leakage term, corresponding to a stabilizing negative feedback of the system [8, 9]. In the real world, transmission delays often appear in leakage terms; such delays are called leakage delays [10]. Leakage delays have been incorporated into neural networks by many researchers [11-14]. However, in most of the works listed above, the leakage delays are constants. As pointed out in [15-18], the delays in neural networks are usually time-varying. Hence, the results on neural networks with constant delays in the leakage term are imperfect.
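Numerically, a discrete time-varying delay $\tau(t)$ is usually handled by keeping a buffer of past states and reading the delayed state $x(t - \tau(t))$ from it. The scalar sketch below shows this bookkeeping; the coefficients and the delay function are illustrative assumptions, not the system studied in [7].

```python
import numpy as np

def delayed_euler(T=10.0, dt=1e-3, tau_max=0.5):
    """Euler scheme for x' = -x + 0.5*f(x) + 0.4*f(x(t - tau(t)))."""
    f = np.tanh
    tau = lambda t: 0.3 + 0.1 * np.sin(t)   # time-varying delay, <= tau_max
    n_hist = int(tau_max / dt) + 1
    hist = [0.1] * n_hist                   # constant history on [-tau_max, 0]
    x = 0.1
    t = 0.0
    for _ in range(int(T / dt)):
        k = min(int(round(tau(t) / dt)), n_hist - 1)
        x_delayed = hist[-1 - k]            # approximates x(t - tau(t))
        x = x + dt * (-x + 0.5 * f(x) + 0.4 * f(x_delayed))
        hist.append(x)
        hist.pop(0)                         # slide the tau_max-length window
        t += dt
    return x
```

Because the delayed gain (0.4) is smaller than the effective leakage rate (0.5 after the tanh slope at the origin), the trajectory decays toward zero despite the delay.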
In addition, the dynamic behaviors of neural networks derive from the interactions of neurons, which depend not only on the time evolution of each neuron but also on its spatial position [19, 20]. From this point of view, diffusion phenomena should not be ignored in neural networks. Many good results on reaction-diffusion neural networks have been obtained [21-25]. In most of the literature listed, the boundary conditions are assumed to be Dirichlet boundary conditions. In engineering applications, such as thermodynamics, Neumann boundary conditions need to be considered. As far as we know, there are few results concerning the synchronization of competitive neural networks with reaction-diffusion terms under Neumann boundary conditions.
Based on the above discussion, we are concerned with the combined effects of time-varying leakage delays, stochastic perturbation, and spatial diffusion on the synchronization of competitive neural networks with Neumann boundary conditions in terms of the p-norm via an adaptive feedback controller, improving the previous results. To this end, we discuss the following stochastic reaction-diffusion competitive neural networks, where $\Delta = \sum_k \partial^2/\partial x_k^2$ is the Laplace operator; $0 < \tau(t) \le \tau$ and $0 < \tau^*(t) \le \tau^*$ are the discrete and the distributed time-varying delay, respectively; $0 < \delta(t) \le \delta$ is the time-varying leakage delay; and $D_i > 0$ is the transmission diffusion coefficient along the $i$th neuron.
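To see how diffusion, the reaction term, and the stochastic perturbation interact numerically, the sketch below evolves one spatial state on an interval with zero-flux (Neumann) boundaries: a finite-difference Laplacian for diffusion and Euler-Maruyama for the noise. All coefficients, the domain, and the tanh reaction term are illustrative assumptions, not the paper's system.

```python
import numpy as np

def simulate_rd(T=1.0, dt=1e-4, n=100, D=0.1, a=1.0, sigma=0.05, seed=0):
    """Euler-Maruyama for du = (D*u_xx - a*u + tanh(u)) dt + sigma dW
    on [0, 1] with zero-flux (Neumann) boundary conditions."""
    rng = np.random.default_rng(seed)
    dx = 1.0 / n
    xs = np.linspace(0.0, 1.0, n + 1)
    u = 0.5 * np.cos(np.pi * xs)            # initial spatial profile
    for _ in range(int(T / dt)):
        # Neumann BC: edge-replicated ghost cells make the boundary flux vanish
        ue = np.pad(u, 1, mode="edge")
        lap = (ue[2:] - 2.0 * u + ue[:-2]) / dx**2
        drift = D * lap - a * u + np.tanh(u)
        u = u + dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(u.shape)
    return u
```

The explicit scheme is stable here because $D\,dt/dx^2 = 0.1 \le 1/2$; a finer grid would require a smaller time step.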
The boundary condition and the initial condition for the response system (9) are of Neumann type, where $f^*(e_i(\cdot, t)) = f(y_i(\cdot, t)) - f(x_i(\cdot, t))$. In this paper, we make the following hypotheses.

(H1) There exists a positive constant $L_i$ such that the neuron activation function $f_i$ satisfies $|f_i(v_1) - f_i(v_2)| \le L_i |v_1 - v_2|$ for all $v_1, v_2 \in \mathbb{R}$, $i = 1, 2, \ldots, N$.

(H2) There exists a positive constant $M_i$ such that the corresponding boundedness condition holds.

The paper is organized as follows. In the next section, we introduce some definitions and state several lemmas which are essential to our proofs. In Section 3, by constructing a suitable Lyapunov functional, some new criteria are obtained to ensure the exponential synchronization of systems (5) and (9) under the adaptive feedback controllers (11) and (12). Numerical simulations are carried out in Section 4 to illustrate the feasibility of the main theoretical results. A brief conclusion is given in Section 5.

Preliminary
In this section, we introduce some notations and lemmas which will be useful in the next section.
Lemma 6. Let $p \ge 2$ be a positive integer and let $\Omega$ be a bounded domain of $\mathbb{R}^m$ with a smooth boundary $\partial\Omega$, where $\lambda_1$ is the smallest positive eigenvalue of the Neumann boundary problem. The proof of Lemma 6 is given in the Appendix.
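The constant $\lambda_1$ in Lemma 6 can be checked numerically. For the one-dimensional domain $\Omega = (0, \ell)$, the smallest positive Neumann eigenvalue of $-u''$ is $(\pi/\ell)^2$; the sketch below recovers it from a finite-difference Neumann Laplacian. The domain length and grid size are illustrative choices.

```python
import numpy as np

def neumann_lambda1(length=2.0, n=200):
    """Smallest positive eigenvalue of -u'' with u'(0) = u'(length) = 0."""
    h = length / n
    A = np.zeros((n + 1, n + 1))
    for i in range(1, n):
        A[i, i - 1:i + 2] = [-1.0, 2.0, -1.0]   # interior -u'' stencil
    # ghost-point (zero-flux) closure at both boundaries
    A[0, 0], A[0, 1] = 2.0, -2.0
    A[n, n], A[n, n - 1] = 2.0, -2.0
    A /= h**2
    eig = np.sort(np.linalg.eigvals(A).real)
    return eig[1]                               # eig[0] ~ 0 (constant mode)
```

For length = 2 the returned value is close to $(\pi/2)^2 \approx 2.467$, matching the analytic Neumann spectrum $\lambda_k = (k\pi/\ell)^2$.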
Remark 9. This is the first time that the combined effects of time-varying leakage delays, the discrete time-varying delay, the distributed time-varying delay, stochastic perturbation, and spatial diffusion on the exponential synchronization of competitive neural networks under an adaptive feedback controller have been considered. The neural networks discussed in [6, 7, 31] are special cases of the model in this paper. In this sense, our results are more general.
Remark 10. In Theorem 8, sufficient conditions are derived to achieve adaptive synchronization for the proposed competitive neural networks. Compared with the adaptive synchronization criteria given in [7], the conditions obtained in Theorem 8 depend not only on the timescale $\varepsilon$ but also on the controller parameter. This is beneficial for designing an adaptive controller to realize the adaptive synchronization of the neural networks. Therefore, the criteria derived in this paper have a wider range of application.

Numerical Simulations
In this section, numerical simulation examples are presented to demonstrate the main results of Theorem 8.
The noise-perturbed response system is described by (52), with the adaptive controller and the initial conditions for the response system (51) chosen accordingly, where $(s, x) \in [-0.9, 0] \times \Omega$. Evidently, $L_1 = L_2 = 1$, and the derivative bounds of the discrete delay, the leakage delay, and the distributed delay satisfy $0.3 < 1$, $0.4 < 1$, and $0.2 < 1$, respectively. Let $p = 2$ and $\varepsilon = 0.5$, and take the remaining pairs of constants to be $0.1$ and $1$, respectively. By simple computation, it is easy to verify that assumptions (H1)-(H6) are satisfied. According to Theorem 8, the drive system (49) and the response system (51) are exponentially synchronized in terms of the p-norm. Numerical simulation illustrates our results (see Figure 2).

Remark 11. The conclusions given in Theorem 8 show that the adaptive synchronization criteria for competitive neural networks depend on the timescale $\varepsilon$, the disposable scaling constants, and the external stimulus constants. When $\varepsilon$ increases, the disposable scaling constants increase, or the external stimulus constants decrease, assumption (H6) is satisfied more easily, and the adaptive synchronization of the competitive neural networks is realized more easily. The dynamical behaviors of the synchronization errors between systems (49) and (51) under different timescales, disposable scaling constants, and external stimulus constants, respectively, are shown in Figures 3-5.
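The mechanism behind the adaptive controller can already be seen in a scalar toy model: the feedback gain $\rho(t)$ is not fixed in advance but grows according to the standard adaptation law $\dot\rho = \nu e^2$ until it dominates the Lipschitz constants, after which the error decays. The drive dynamics and the value of $\nu$ below are illustrative assumptions, not the paper's systems (49) and (51).

```python
import numpy as np

def adaptive_sync(T=30.0, dt=1e-3, nu=5.0):
    """Drive x, response z with adaptive feedback u = -rho(t) * (z - x)."""
    f = lambda s: -s + 2.0 * np.tanh(s)      # shared node dynamics
    x, z, rho = 1.0, -1.0, 0.0
    e0 = abs(z - x)                          # initial synchronization error
    for _ in range(int(T / dt)):
        e = z - x
        x = x + dt * f(x)
        z = z + dt * (f(z) - rho * e)        # controlled response system
        rho = rho + dt * nu * e * e          # adaptation law: rho' = nu * e^2
    return abs(z - x), e0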
Remark 12. From (48), it is clear that the controller parameter determines the rate of synchronization; that is, the larger the controller parameter is, the faster systems (49) and (51) achieve synchronization. Hence, our results are consistent with the practical situation. The dynamical behaviors of the synchronization errors between systems (49) and (51) under different values of this controller parameter are shown in Figure 6. The parameter $\nu_i$ is another controller parameter in the feedback controller (11). Numerical simulations suggest that increasing $\nu_i$ is also beneficial for the competitive neural networks to realize synchronization; the dynamical behaviors of the synchronization errors between systems (49) and (51) under different values of $\nu_i$ are shown in Figure 7. However, we have not been able to prove this, and it remains an interesting open problem.

Remark 13. In many cases, two-neuron networks show the same behavior as large-size networks, and many research methods developed for two-neuron networks can be applied to large-size networks. Therefore, a two-neuron network can be used as an example to improve the understanding of our theoretical results. In addition, the parameter values are selected randomly, subject to the requirement that the neural networks (49) exhibit chaotic behavior.

Conclusion
In this paper, an adaptive feedback controller was designed to achieve exponential synchronization for stochastic competitive neural networks with spatial diffusion, time-varying leakage delays, and discrete and distributed time-varying delays in terms of the p-norm. Evidently, the model discussed in this paper is more general than the corresponding models with constant delays. By constructing a Lyapunov functional and using stochastic analysis theory, novel exponential synchronization criteria depending on the timescale $\varepsilon$, the external stimulus constants, the disposable scaling constants, and the controller parameter were obtained. The theoretical analysis shows that competitive neural networks can achieve exponential synchronization more easily by increasing the timescale and the disposable scaling constants or by reducing the external stimulus constants. Numerical examples and their simulations are given to show the effectiveness of the obtained results. Figures 8 and 9 show that it is beneficial for competitive neural networks with reaction-diffusion terms to realize synchronization by increasing the diffusion coefficients or by decreasing the diffusion space, respectively. However, the exponential synchronization criteria obtained in this paper are independent of the diffusion coefficients and the diffusion space; they cannot reflect the influence of the diffusion coefficients and the diffusion space on synchronization, which limits the application scope of the results. We will therefore investigate this issue in our future work.

Appendix
Proof of Lemma 6. According to the eigenvalue theory of elliptic operators, the Laplacian $-\Delta$ on $\Omega$ with the Neumann boundary conditions is a self-adjoint operator with compact inverse, so there exists a sequence of nonnegative eigenvalues $0 = \lambda_0 < \lambda_1 < \lambda_2 < \cdots$, with $\lim_{k \to \infty} \lambda_k = +\infty$, as well as a sequence of corresponding eigenfunctions $\phi_0(x), \phi_1(x), \ldots$ The proof of Lemma 6 is complete.

Figure 3: Asymptotic behavior of the synchronization errors under different values of the timescale $\varepsilon$.

Figure 4: Asymptotic behavior of the synchronization errors under different values of the disposable scaling constants.

Figure 5: Asymptotic behavior of the synchronization errors under different values of the external stimulus constants.

Figure 6: Asymptotic behavior of the synchronization errors under different values of the controller parameter.

Figure 7: Asymptotic behavior of the synchronization errors under different values of the controller parameter $\nu_i$.

Figure 8: Asymptotic behavior of the synchronization errors under different values of the diffusion coefficients.