The Generalized Dahlquist Constant with Applications in Synchronization Analysis of Typical Neural Networks via General Intermittent Control

A novel and effective approach to the synchronization analysis of neural networks is investigated by using the nonlinear operator called the generalized Dahlquist constant together with general intermittent control. The proposed approach yields a design procedure for the synchronization of a large class of neural networks. Numerical simulations, in which the theoretical results are applied to typical neural networks with and without a delayed term, demonstrate the effectiveness and feasibility of the proposed technique.


Introduction
Since its introduction by Pecora and Carroll in 1990, synchronization of chaotic systems [1-10] has been of great practical significance and has received considerable interest in recent years. In the literature above, the approach applied to stability analysis is essentially Lyapunov's method. As is well known, the construction of a proper Lyapunov function is usually a delicate matter, and Lyapunov's method does not specifically describe the convergence rate near the equilibrium of the system. Hence, there is little compatibility among the stability criteria obtained so far.
The concept of the generalized Dahlquist constant [11] has previously been applied to the analysis of impulsive synchronization [12, 13].
Intermittent control [14-18] has been used for a variety of purposes in engineering fields such as manufacturing, transportation, air-quality control, and communication. Synchronization using an intermittent control method has been discussed in [15-18]. Compared with continuous control methods [2-10], intermittent control is more efficient when the system output is measured intermittently rather than continuously. Our interest focuses on the class of intermittent control with time duration, wherein the control is activated in certain nonzero time intervals and is switched off in the remaining time intervals. A special case of such a control law takes the form

U(t) = k e(t),  nT ≤ t < nT + δ,
U(t) = 0,       nT + δ ≤ t < (n + 1)T,

where k denotes the control strength, δ > 0 denotes the switching width, and T denotes the control period. In this paper, based on the generalized Dahlquist constant and the Gronwall inequality, a general intermittent controller is designed whose control windows are generated by a function h(n), where h(n) is a strictly monotone increasing function of n with h(0) = 0; sufficient yet generic criteria for the synchronization of typical neural networks with and without a delayed term are then obtained. This paper is organized as follows. In Section 2, some necessary background material is presented, and a simple configuration of coupled neural networks is formulated. Section 3 deals with synchronization analysis and examples, where the theoretical results are applied to typical neural networks.
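As a concrete illustration, the periodic special case of the control law above can be written as a simple gating function. The following sketch is ours (the function name is illustrative); it uses k = 10 and T = 5 as in the later simulations, and δ = 2 as an assumed switching width:

```python
def intermittent_gain(t, k=10.0, T=5.0, delta=2.0):
    """Periodic intermittent control strength: the control U(t) = k*e(t)
    is active while t (mod T) lies in the window [0, delta), and switched
    off for the remainder of each period."""
    return k if (t % T) < delta else 0.0
```

For instance, `intermittent_gain(0.5)` returns 10.0 (inside the control window), while `intermittent_gain(3.0)` returns 0.0 (rest interval of the same period).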

Formulations
Let X be a Banach space endowed with the Euclidean norm ‖x‖ = (⟨x, x⟩)^{1/2}, where ⟨·, ·⟩ is the inner product, and let Ω be an open subset of X. We consider the following system:

dx(t)/dt = F(x(t)) + G(x(t − τ)),     (3)

where F, G are nonlinear operators defined on Ω, x(t), x(t − τ) ∈ Ω, and τ is a positive constant time delay.

Definition 1. System (3) is said to be exponentially stable on a neighborhood Ω of the equilibrium point if there exist constants μ > 0 and M > 0 such that

‖x(t)‖ ≤ M exp{−μ(t − t₀)} ‖x₀‖,  t ≥ t₀,

where x(t) is any solution of (3) initiated from x(t₀) = x₀.
Definition 2 (see [11]). Suppose Ω is an open subset of the Banach space X, and F : Ω → X is an operator. The constant

α(F) = lim_{r→+∞} sup_{x,y∈Ω, x≠y} (‖(F + rI)x − (F + rI)y‖ − r‖x − y‖) / ‖x − y‖

is called the generalized Dahlquist constant of F on Ω, where f(r) = ‖(F + rI)x − (F + rI)y‖ − r‖x − y‖, and F + rI denotes the operator mapping every point x ∈ Ω onto F(x) + rx. The limit exists: for 0 ≤ r₁ < r₂, the triangle inequality for the inner-product norm (a consequence of the Cauchy-Bunyakovsky-Schwarz inequality) gives

‖(F + r₂I)x − (F + r₂I)y‖ ≤ ‖(F + r₁I)x − (F + r₁I)y‖ + (r₂ − r₁)‖x − y‖,

so that f(r₂) ≤ f(r₁). Hence the function f(r), r ≥ 0, is monotone decreasing; since f(r) ≥ −‖F(x) − F(y)‖, it is bounded below, and thus the limit as r → +∞ exists.
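For a finite-dimensional linear operator F(x) = Ax with the Euclidean norm, the quantity inside the limit depends only on the unit direction d = (x − y)/‖x − y‖, and the generalized Dahlquist constant coincides with the logarithmic norm μ₂(A), the largest eigenvalue of (A + Aᵀ)/2. A rough numerical sketch, with a sampling scheme and function name of our own choosing:

```python
import numpy as np

def dahlquist_estimate(A, r=1e6, samples=2000, seed=0):
    """Estimate alpha(F) for the linear operator F(x) = A x by sampling
    unit directions d and evaluating ||(A + r*I) d|| - r for a large r.
    For linear F, f(r)/||x - y|| depends only on d = (x - y)/||x - y||."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    I = np.eye(n)
    best = -np.inf
    for _ in range(samples):
        d = rng.standard_normal(n)
        d /= np.linalg.norm(d)          # project onto the unit sphere
        best = max(best, np.linalg.norm((A + r * I) @ d) - r)
    return best

A = np.array([[-1.0, 2.0], [0.0, -3.0]])
mu2 = np.linalg.eigvalsh((A + A.T) / 2).max()   # logarithmic norm of A
est = dahlquist_estimate(A)
```

The sampled estimate approaches μ₂(A) from below as the number of sampled directions grows, which provides a convenient cross-check in the linear case.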

Synchronization Analysis and Examples
Theorem 3. If the operator G in system (3) satisfies

‖G(x) − G(y)‖ ≤ l ‖x − y‖  for any x, y ∈ Ω,

where l is a positive constant, then any two solutions x(t) and y(t) of (3), initiated from x₀ and y₀ respectively, satisfy

‖x(t) − y(t)‖ ≤ ‖x₀ − y₀‖ exp{λ(t − t₀)},

where λ = α(F) + exp{−α(F)τ} l.
Proof. For all x₀, y₀ ∈ Ω and t > s ≥ 0, consider two solutions x(t) and y(t) of (3). For every r ≥ 0 and all t ≥ 0, we infer that

D⁺‖x(t) − y(t)‖ ≤ (‖(F + rI)x(t) − (F + rI)y(t)‖ − r‖x(t) − y(t)‖) + ‖G(x(t − τ)) − G(y(t − τ))‖.

Letting r → +∞, Definition 2 and the Lipschitz condition on G yield

D⁺‖x(t) − y(t)‖ ≤ α(F)‖x(t) − y(t)‖ + l‖x(t − τ) − y(t − τ)‖.     (19)

Integrating inequality (19) over [t₀, t] and using the Gronwall inequality [19, 20], we have

‖x(t) − y(t)‖ ≤ ‖x₀ − y₀‖ exp{λ(t − t₀)},

where λ = α(F) + exp{−α(F)τ} l.

Let system (3) be the drive system, and consider the response system

dy(t)/dt = F(y(t)) + G(y(t − τ)) + U(t),     (24)

where x, y ∈ Rⁿ are the state variables, F(·), G(·) are nonlinear operators, and U(t) is an intermittent feedback control term of the general form described in Section 1, in which k denotes the control strength, T is the control period, δ is called the control width, and h(n) is a strictly monotone increasing function of n with h(0) = 0. In this paper, our goal is to design a suitable function h(n) and suitable parameters δ, T, and k such that system (24) synchronizes with system (3).
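The exponential estimate of Theorem 3 can be checked numerically on a scalar delayed system x'(t) = a x(t) + b x(t − τ), for which F(x) = ax gives α(F) = a and G(x) = bx gives l = |b|. The Euler scheme and parameter values below are our own illustrative choices:

```python
import math

def simulate_delay_scalar(x0, a=-1.0, b=0.5, tau=1.0, dt=0.01, t_end=10.0):
    """Euler integration of x'(t) = a*x(t) + b*x(t - tau) with the
    constant initial history x(t) = x0 for t <= 0."""
    lag = int(round(tau / dt))
    xs = [x0] * (lag + 1)                    # history buffer; xs[-1] = x(t)
    for _ in range(int(round(t_end / dt))):
        xs.append(xs[-1] + dt * (a * xs[-1] + b * xs[-1 - lag]))
    return xs[lag:]                          # samples of x on [0, t_end]

a, b, tau, dt = -1.0, 0.5, 1.0, 0.01
lam = a + math.exp(-a * tau) * abs(b)        # lambda = alpha(F) + exp{-alpha(F) tau} l
x = simulate_delay_scalar(3.0)
y = simulate_delay_scalar(4.0)
# Largest ratio of the actual divergence to the theoretical envelope
# |x0 - y0| * exp{lam * t}; Theorem 3 predicts this never exceeds 1.
worst = max(abs(yi - xi) / (abs(4.0 - 3.0) * math.exp(lam * i * dt))
            for i, (xi, yi) in enumerate(zip(x, y)))
```

With a = −1 and b = 0.5 the ratio attains its maximum of 1 at t = 0 and stays below the envelope afterwards, as the theorem guarantees.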
Subtracting (3) from (24), the error system is obtained:

de(t)/dt = F(y(t)) − F(x(t)) + G(y(t − τ)) − G(x(t − τ)) + U(t),

where e = y − x. Then we have the following result.

where h⁻¹(·) denotes the inverse function of h(·).
Proof. From Theorem 3, we obtain the conclusion as follows:

Advances in Artificial Neural Systems
Combining conditions (28) and (29), we conclude that e(t) → 0 as t → ∞ under condition (27), and hence the error system (26) is asymptotically stable.
In the simulations of the following examples, we always choose T = 5 and k = 10 and use the norm ‖x‖ = (xᵀx)^{1/2}, where x ∈ Rⁿ.
Example 7. Consider a typical delayed Hopfield neural network [21-23] with two neurons:

dx(t)/dt = −Cx(t) + A f(x(t)) + B f(x(t − τ)),     (32)

where C, A, B are the connection matrices, f(·) is the activation function, and τ is the transmission delay. It should be noted that this network is actually a chaotic delayed Hopfield neural network.
Equation (32) is considered as the drive system, and the response system is defined as

dy(t)/dt = −Cy(t) + A f(y(t)) + B f(y(t − τ)) + U(t).     (33)

We calculate and obtain the values l < 9.15 and α(F) ≤ 0.7993, where F(x(t)) = −Cx(t) + A f(x(t)), and it is easy to verify that condition (31) is satisfied. Let the initial condition be (x₁ x₂ y₁ y₂)ᵀ = (3 4 7 12.5)ᵀ. It can then be clearly seen in Figure 1 that the drive system (32) synchronizes with the response system (33).
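Example 7 can be reproduced numerically. Since the system matrices are not reproduced in the text above, the sketch below assumes the parameter values commonly used for this chaotic delayed Hopfield example in the cited literature (C = I, f = tanh, τ = 1, and the A, B below); the control width δ = 2 is likewise an assumption, while T = 5 and k = 10 follow the simulation settings stated earlier:

```python
import numpy as np

# Assumed network parameters (not reproduced in the text above):
# x' = -C x + A f(x(t)) + B f(x(t - tau)) with f = tanh, tau = 1.
C = np.eye(2)
A = np.array([[ 2.0, -0.1],
              [-5.0,  3.0]])
B = np.array([[-1.5, -0.1],
              [-0.2, -2.5]])
f = np.tanh
tau, dt, t_end = 1.0, 0.005, 50.0
k, T, delta = 10.0, 5.0, 2.0          # control strength / period / width

lag = int(round(tau / dt))
steps = int(round(t_end / dt))

# Constant initial histories built from the example's initial condition.
x_hist = [np.array([3.0, 4.0])] * (lag + 1)
y_hist = [np.array([7.0, 12.5])] * (lag + 1)

for i in range(steps):
    t = i * dt
    x, xd = x_hist[-1], x_hist[-1 - lag]
    y, yd = y_hist[-1], y_hist[-1 - lag]
    # Intermittent controller: active on [nT, nT + delta), off otherwise.
    u = k * (x - y) if (t % T) < delta else np.zeros(2)
    x_hist.append(x + dt * (-C @ x + A @ f(x) + B @ f(xd)))      # drive
    y_hist.append(y + dt * (-C @ y + A @ f(y) + B @ f(yd) + u))  # response

err = np.linalg.norm(y_hist[-1] - x_hist[-1])
```

With these settings the synchronization error decays to a small value well before t = 50, mirroring the behavior reported in Figure 1.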
Example 8. Consider a typical delayed chaotic neural network with two neurons [24, 25] as the drive system, with the response system obtained by adding the intermittent control term U(t). It is easily seen that the operator f(x(t)) is differentiable in x in Example 7, but the operator f(x(t)) is not differentiable in this example.
We calculate and obtain the value l < 1.3, and it is easy to verify that condition (31) is satisfied. Let the initial condition be (x₁ x₂ y₁ y₂)ᵀ = (3 4 17 12.8)ᵀ. The synchronization property of this example can then be clearly seen in Figure 2.
Equation (34) is considered as the drive system, and the corresponding response system is defined as (36). We calculate and obtain the value α(F) ≤ 1.4369, where F(x(t)) = −Cx(t) + A f(x(t)), and choose h(n) = n/2 and δ = 2. It is easy to verify that condition (31) is satisfied. Let the initial condition be (x₁ x₂ x₃ x₄ y₁ y₂ y₃ y₄)ᵀ = (2 −1 −2 1 7 6 5.4 9)ᵀ. It can then be clearly seen in Figure 3 that the drive system (34) synchronizes with the response system (36).

Conclusion
Approaches for the synchronization of two coupled neural networks, based on the nonlinear operator called the generalized Dahlquist constant and on general intermittent control, have been presented in this paper. Strong properties of global and asymptotic synchronization have been achieved in a finite number of steps. The techniques have been successfully applied to typical neural networks, and numerical simulations have verified the effectiveness of the method.