This paper investigates the problem of master-slave synchronization for stochastic neural networks with both time-varying and distributed time-varying delays. By combining the drive-response concept, the LMI approach, and a generalized convex combination, a novel synchronization criterion is obtained in terms of LMIs; the condition depends heavily on the upper and lower bounds of both the state delay and the distributed delay. Moreover, the addressed systems include several well-known network models as special cases, so our methods extend existing ones. Finally, two numerical examples are given to demonstrate the effectiveness of the presented scheme.
1. Introduction
In the past decade, synchronization of chaotic systems has attracted considerable attention since the pioneering work of Pecora and Carroll [1], which showed that, when certain conditions are satisfied, a chaotic system (the slave/response system) may become synchronized with another identical chaotic system (the master/drive system) if the master sends driving signals to the slave. It is now widely known that synchronization, or chaos synchronization, brings many benefits in various engineering fields, such as secure communication [2], image processing [3], and harmonic oscillation generation. Synchronization also arises in language development, where it produces a common vocabulary, and in organization management, where synchronizing agents improves their work efficiency. Recently, chaos synchronization has been widely investigated because of its great potential for applications. In particular, since artificial neural network models can exhibit chaotic behaviors [4, 5], their synchronization has become an important area of study; see [6–23] and the references therein. As special complex networks, delayed neural networks have also been found to exhibit complex and unpredictable behaviors, including stable equilibria, periodic oscillations, bifurcation, and chaotic attractors [24–27]. Many papers dealing with chaos synchronization phenomena in delayed neural networks have now appeared. By combining various techniques such as the LMI tool, M-matrices, and Jensen's inequality, some elegant results have been derived for the global synchronization of various delayed neural networks, including discrete-time ones, in [6–14]. Moreover, some authors have considered adaptive synchronization and H∞ synchronization in [15, 16].
Meanwhile, it is worth noting that, like time delays and parameter uncertainties, noises are ubiquitous in both natural and man-made systems, and the stochastic effects on neural networks have attracted particular attention. A large number of elegant results concerning the dynamics of stochastic neural networks have already been presented in [17–23, 28, 29]. Since noise can induce both stabilizing and destabilizing effects on a system, there has been, by virtue of the stability theory for stochastic differential equations, increasing interest in the synchronization of delayed neural networks with stochastic perturbations [17–23]. Based on the LMI technique, some novel results on global synchronization have been derived in [17–19] for networks involving distributed delays or of neutral type. The works [20–23] have considered adaptive synchronization and lag synchronization for stochastic delayed neural networks. However, the control schemes in [17–19] cannot handle the case where the upper bound of the delay's derivative is not less than 1, and the results in [20–23] are not formulated in terms of LMIs, which makes them inconvenient to check with recently developed algorithms. Meanwhile, from a practical point of view, distributed delays should be taken into consideration, and some researchers have begun to give preliminary discussions in [9–11, 19]. It is also worth pointing out that the range of time delays considered in [17–23] runs from 0 to an upper bound. In practice, the delay may vary in a range whose lower bound is not restricted to be 0. The criteria in the above literature may therefore be conservative, because they do not use the information on the lower bound of the delay.
Meanwhile, it has been verified that the convex combination idea is more efficient than some earlier techniques for handling time-varying delays; however, the idea still needs improvement, since it does not take distributed delays into consideration [30]. To date, few authors have employed an improved convex combination to study stochastic neural networks with both time-varying and distributed time-varying delays, or have proposed a less conservative and easy-to-test control scheme for exponential synchronization; this constitutes the main focus of the present work.
Motivated by the discussion above, this paper focuses on exponential synchronization for a broad class of stochastic neural networks with mixed time-varying delays, in which the two delays lie in given intervals. The form of the addressed networks includes several well-known neural network models as special cases. Using the drive-response concept and the Lyapunov stability theorem, a memory control law is proposed that guarantees exponential synchronization of the drive and response systems. Finally, two illustrative examples show that the obtained results improve upon some earlier reported works.
Notation 1.
For a symmetric matrix X, X>0 (resp., X≥0) means that X is positive definite (resp., positive semidefinite); AT and A-T denote the transposes of A and A-1, respectively. For τ>0, 𝒞([-τ,0];Rn) denotes the family of continuous functions φ from [-τ,0] to Rn with the norm ∥φ∥=sup-τ≤θ≤0|φ(θ)|. Let (Ω,ℱ,{ℱt}t≥0,P) be a complete probability space with a filtration {ℱt}t≥0 satisfying the usual conditions; Lℱ0p([-τ,0];Rn) is the family of all ℱ0-measurable 𝒞([-τ,0];Rn)-valued random variables ξ={ξ(θ):-τ≤θ≤0} such that sup-τ≤θ≤0E|ξ(θ)|p<∞, where E{·} stands for the mathematical expectation operator with respect to the given probability measure P; I denotes the identity matrix of appropriate dimension, and the symmetric block matrix [X Y; YT Z] is abbreviated as [X Y; * Z], with * denoting the symmetric term.
2. Problem Formulations
Consider the following stochastic neural networks with time-varying delays, described by
dz(t)=[-b(z(t))+Ag(z(t))+Bg(z(t-τ(t)))+D∫t-ϱ(t)tg(z(s))ds+I]dt,
where z(t)=[z1(t),…,zn(t)]T∈Rn is the neuron state vector, g(z(·))=[g1(z1(·)),…,gn(zn(·))]T∈Rn represents the neuron activation function, I∈Rn is a constant external input vector, and A, B, D are the connection weight matrix, the delayed connection weight matrix, and the distributively delayed connection weight matrix, respectively.
In this paper, we take system (2.1) as the master system and consider the following slave system:
dy(t)=[-b(y(t))+Ag(y(t))+Bg(y(t-τ(t)))+D∫t-ϱ(t)tg(y(s))ds+I+u(t)]dt+σ(t,ε(t),ε(t-τ(t)))dw(t)
with ε(t)=[ε1(t),…,εn(t)]T=y(t)-z(t), where A, B, D are the same constant matrices as in (2.1) and u(t) is the control input to be designed so as to achieve a certain control objective. In practical situations, the output signals of the drive system (2.1) can be received by the response system (2.2).
The following assumptions are imposed on systems (2.1) and (2.2) throughout the paper.
Here τ(t) and ϱ(t) denote the time-varying delay and the distributed one satisfying
0≤τ0≤τ(t)≤τm,τ̇(t)≤μ,0≤ϱ0≤ϱ(t)≤ϱm,
and we introduce τ¯m=τm-τ0,ϱ¯m=ϱm-ϱ0, and τmax=max{τm,ϱm}.
Each function bi(·):R→R is locally Lipschitz, and there exist positive scalars πi and γi(i=1,2,…,n) such that πi≥ḃi(z)≥γi>0 for all z∈R. Here, we denote Π=diag{π1,…,πn} and Γ=diag{γ1,…,γn}.
For the constants σi+,σi-, the neuron activation functions in (2.1) are bounded and satisfy
σi- ≤ (gi(x)-gi(y))/(x-y) ≤ σi+, ∀x,y∈R, x≠y, i=1,2,…,n.
In system (2.2), the function σ(t,·,·):R+×Rn×Rn→Rn×m(σ(t,0,0)=0) is locally Lipschitz continuous and satisfies the linear growth condition as well. Moreover, σ(t,·,·) satisfies the following condition:
trace[σT(t,x,y)σ(t,x,y)]≤xTΠ1TΠ1x+yTΠ2TΠ2y,∀x,y∈Rn,
where Πi(i=1,2) are the known constant matrices of appropriate dimensions.
Let ε(t) be the error state; subtracting (2.1) from (2.2) yields the following synchronization error dynamics:
dε(t)=[-β(ε(t))+Af(ε(t))+Bf(ε(t-τ(t)))+D∫t-ϱ(t)tf(ε(s))ds+u(t)]dt+σ(t,ε(t),ε(t-τ(t)))dw(t),
where f(ε(·))=g(y(·))-g(z(·)). One can check that fi(·) satisfies fi(0)=0 and σi- ≤ (fi(x)-fi(y))/(x-y) ≤ σi+, ∀x,y∈R, x≠y, i=1,2,…,n.
Moreover, we denote Σ¯=diag{σ1+,…,σn+}, Σ=diag{σ1-,…,σn-}, Σ1=diag{σ1+σ1-,…,σn+σn-}, and Σ2=diag{(σ1++σ1-)/2,…,(σn++σn-)/2}.
In this paper, we adopt the following definition.
Definition 2.1 (see [18]).
For the system (2.6) and every initial condition φ=ϕ-ψ∈Lℱ2([-2τmax,0];Rn), the trivial solution is globally exponentially stable in the mean square if there exist two positive scalars α, k such that
E‖ε(t;φ)‖2≤α sup-τmax≤s≤0E‖ϕ(s)-ψ(s)‖2e-kt, ∀t≥0,
where E stands for the mathematical expectation and ϕ,ψ are the initial conditions of systems (2.1) and (2.2), respectively.
In many real applications, one is interested in designing a memoryless state-feedback controller u(t)=Kε(t), where K∈Rn×n is a constant gain matrix. In this paper, for the case where information on the size of τ(t) is available, we consider a delayed feedback controller of the following form:
u(t)=Kε(t)+K1ε(t-τ(t)),
Substituting this u(t) into system (2.6) yields
dε(t)=[-β(ε(t))+Kε(t)+K1ε(t-τ(t))+Af(ε(t))+Bf(ε(t-τ(t)))+D∫t-ϱ(t)tf(ε(s))ds]dt+σ(t,ε(t),ε(t-τ(t)))dw(t).
The purpose of this paper is then to design a controller u(t) of the form (2.10) so that the slave system (2.2) synchronizes with the master system (2.1).
3. Main Results
In this section, we first introduce some lemmas.
Lemma 3.1 (see [18]).
For any symmetric matrix W∈Rn×n,W=WT≥0, scalar h>0, vector function ω:[0,h]→Rn such that the integrations concerned are well defined, then (∫0hω(s)ds)TW(∫0hω(s)ds)≤h∫0hωT(s)Wω(s)ds.
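Lemma 3.1 (Jensen's inequality) can be spot-checked numerically on a discretization; the sketch below uses arbitrary sizes and random samples (none of these quantities come from the paper), and the discrete analogue of the inequality holds for any such choice:

```python
import numpy as np

# Numerical spot-check of Jensen's inequality (Lemma 3.1) on a
# discretized vector function omega; sizes and samples are arbitrary.
rng = np.random.default_rng(0)
h, n, m = 2.0, 3, 400
ds = h / m
W = rng.standard_normal((n, n))
W = W @ W.T                               # W = W^T >= 0
omega = rng.standard_normal((m, n))       # samples of omega(s) on [0, h]

integral = ds * omega.sum(axis=0)         # Riemann sum for ∫_0^h omega(s) ds
lhs = integral @ W @ integral
rhs = h * ds * np.einsum('ij,jk,ik->', omega, W, omega)  # h ∫ omega^T W omega ds
print(lhs <= rhs)  # True
```

The discrete inequality is exactly the Cauchy-Schwarz inequality in the seminorm induced by W, with the quadrature weights summing to h.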
Lemma 3.2 (see [19]).
Given constant matrices P, Q, R with PT=P and QT=Q, the linear matrix inequality (LMI) [P R; RT -Q]<0 is equivalent to the conditions Q>0 and P+RQ-1RT<0.
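Lemma 3.2 (the Schur complement) can be illustrated numerically; the 2×2 blocks below are hypothetical values chosen for the demonstration, not quantities from the paper:

```python
import numpy as np

# Illustrative Schur-complement check (Lemma 3.2) on hypothetical blocks.
P = np.array([[-4.0, 1.0], [1.0, -3.0]])
Q = np.diag([2.0, 1.0])                      # Q > 0
R = np.array([[0.5, 0.2], [0.1, 0.3]])

M = np.block([[P, R], [R.T, -Q]])            # full LMI block [P R; R^T -Q]
schur = P + R @ np.linalg.inv(Q) @ R.T       # reduced condition

lmi_neg = bool(np.all(np.linalg.eigvalsh(M) < 0))
schur_neg = bool(np.all(np.linalg.eigvalsh(schur) < 0))
print(lmi_neg, schur_neg)                    # the two conditions agree
```

The lemma is what lets the nonlinear terms such as MS1-1MT in the proof of Theorem 3.4 be absorbed into a single linear block condition.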
Lemma 3.3 (see [31]).
Suppose that Ω, Ξ1i, Ξ2i (i=1,2) are constant matrices of appropriate dimensions and that α∈[0,1] and β∈[0,1]. Then the inequality Ω+[αΞ11+(1-α)Ξ12]+[βΞ21+(1-β)Ξ22]<0 holds if the four inequalities Ω+Ξ11+Ξ21<0, Ω+Ξ11+Ξ22<0, Ω+Ξ12+Ξ21<0, and Ω+Ξ12+Ξ22<0 hold simultaneously.
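A sampled illustration of Lemma 3.3, using hypothetical symmetric 2×2 matrices chosen so that the four vertex inequalities hold; checking the vertices then certifies the whole two-parameter convex combination:

```python
import numpy as np

# Sampled illustration of Lemma 3.3 with hypothetical 2x2 symmetric
# matrices: the four "vertex" inequalities imply the combination for
# every alpha, beta in [0, 1].
rng = np.random.default_rng(1)

def rand_sym(scale=0.5):
    X = rng.standard_normal((2, 2))
    return scale * (X + X.T) / 2

Omega = -10.0 * np.eye(2)                     # dominant negative term
Xi11, Xi12, Xi21, Xi22 = (rand_sym() for _ in range(4))

def is_neg(M):
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

vertices = all(is_neg(Omega + a + b)
               for a in (Xi11, Xi12) for b in (Xi21, Xi22))
combos = all(is_neg(Omega + al * Xi11 + (1 - al) * Xi12
                    + be * Xi21 + (1 - be) * Xi22)
             for al in np.linspace(0, 1, 11)
             for be in np.linspace(0, 1, 11))
print(vertices, combos)
```

The implication follows from convexity: the combination is affine in α and in β, so negativity at the four corners of [0,1]² propagates to the whole square.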
Next, a novel criterion is presented for the exponential stability of system (2.11), which guarantees that the master system (2.1) synchronizes with the slave system (2.2).
Theorem 3.4.
Suppose that assumptions (A1)–(A4) hold. Then system (2.11) has one equilibrium point and is globally exponentially stable in the mean square if there exist n×n matrices P>0, Qj>0, Rj>0 (j=1,2,3), Zi>0, Si>0, Ti>0, Pi (i=1,2), n×n diagonal matrices L>0, Q>0, H>0, U>0, V>0, W>0, R>0, E>0, 13n×n matrices M, N, G, and a scalar λ≥0 such that the matrix inequalities (3.1)-(3.2) hold:
-λI+P+(L+H)(Σ¯-Σ)+Q(Π-Γ)+τ¯mZ2+τ0S2≤0,[Ω+$+$T-IiT2IiTΞ1*Φ]<0,[Ω+$+$T-IiT2IiTΞ2*Φ]<0,i=1,2,
where I1=[0n·10nIn0n·2n],I2=[0n·11nIn0n·n] and
Ω=[Ω1100P1TA+UΣ200Ω17P1TK1P1TBP1TDP1TD0Ω1,13*Ω2200WΣ200000000**Ω3300RΣ20000000***Ω4400Ω4700000ATQ****Ω5500000000*****Ω660000000******Ω77P2TK1P2TBP2TDP2TD0-P2T*******Ω88VΣ2000K1TQ********Ω99000BTQ*********-T100DTQ**********-T20DTQ***********-T20************-Q-QT],$=[M-M+N-G013n⋅4n-N+G013n⋅5n],Ξ1=[τ0Mτ¯mNMNG],Ξ2=[τ0Mτ¯mGMNG],Φ=-diag{S1,Z1,S2,Z2,Z2},
with
Ω11=P1TK+KTP1+Q2-UΣ1-2ΓE+λΠ1TΠ1,Ω17=KTP2+P-P1T+Σ¯H-ΣL-ΓQ,Ω1,13=-P1T+KTQ+E,Ω22=-WΣ1+Q1+Q3-Q2,Ω33=-Q3-RΣ1,Ω44=-U+R2+ϱ02T1+ϱ¯m2T2,Ω47=L-H+ATP2,Ω55=-W+R1+R3-R2,Ω66=-R-R3,Ω77=-P2T-P2+τ¯mZ1+τ0S1,Ω88=-(1-μ)Q1-VΣ1+λΠ2TΠ2,Ω99=-(1-μ)R1-V.
Proof.
Denoting σ(t)=σ(t,ε(t),ε(t-τ(t))), we represent system (2.11) in the following equivalent form:
dε(t)=ν(t)dt+σ(t)dw(t),ν(t)=-β(ε(t))+Kε(t)+K1ε(t-τ(t))+Af(ε(t))+Bf(ε(t-τ(t)))+D∫t-ϱ(t)tf(ε(s))ds.
Now, together with assumptions (A1) and (A2), we construct the following Lyapunov-Krasovskii functional:
V(εt)=V1(εt)+V2(εt)+V3(εt)+V4(εt),
where
V1(εt)=εT(t)Pε(t)+2∑j=1nlj∫0εj[fj(s)-σj-s]ds+2∑j=1nhj∫0εj[σj+s-fj(s)]ds+2∑j=1nqj∫0εj[βj(s)-γjs]ds,V2(εt)=∫t-τ(t)t-τ0[εT(s)Q1ε(s)+fT(ε(s))R1f(ε(s))]ds+∫t-τ0t[εT(s)Q2ε(s)+fT(ε(s))R2f(ε(s))]ds+∫t-τmt-τ0[εT(s)Q3ε(s)+fT(ε(s))R3f(ε(s))]ds,V3(εt)=∫-τm-τ0∫t+θtνT(s)Z1ν(s)dsdθ+∫-τm-τ0∫t+θttrace(σT(s)Z2σ(s))dsdθ+∫-τ00∫t+θtνT(s)S1ν(s)dsdθ+∫-τ00∫t+θttrace(σT(s)S2σ(s))dsdθ,V4(εt)=ϱ0∫-ϱ00∫t+θtfT(ε(s))T1f(ε(s))dsdθ+ϱ¯m∫-ϱm-ϱ0∫t+θtfT(ε(s))T2f(ε(s))dsdθ
with L=diag{l1,…,ln}>0, H=diag{h1,…,hn}>0, and Q=diag{q1,…,qn}>0. In the following, ℒ denotes the weak infinitesimal operator of the stochastic process {εt, t≥0}, as defined in [32].
By employing (A1) and (A2) and directly computing ℒVi(εt) (i=1,2,3,4), it follows that, for any n×n matrices P1, P2,
LV1(εt)≤2εT(t)Pν(t)+2[fT(ε(t))-εT(t)Σ]Lν(t)+2[εT(t)Σ¯-fT(ε(t))]Hν(t)+2βT(ε(t))Q×[-β(ε(t))+Kε(t)+K1ε(t-τ(t))+Af(ε(t))+Bf(ε(t-τ(t)))+D∫t-ϱ(t)tf(ε(s))ds]-2εT(t)ΓTQν(t)+trace[σT(t)[P+(L+H)(Σ¯-Σ)+Q(Π-Γ)]σ(t)]+2[εT(t)P1T+ν(t)P2T]×[[-β(ε(t))+Kε(t)+K1ε(t-τ(t))+Af(ε(t))+Bf(ε(t-τ(t)))+D∫t-ϱ(t)tf(ε(s))ds]-ν(t)-β(ε(t))+Kε(t)+K1ε(t-τ(t))+Af(ε(t))+Bf(ε(t-τ(t)))+D∫t-ϱ(t)tf(ε(s))ds],LV2(εt)≤[εT(t-τ0)Q1ε(t-τ0)+fT(ε(t-τ0))R1f(ε(t-τ0))]-(1-μ)[εT(t-τ(t))Q1ε(t-τ(t))+fT(ε(t-τ(t)))R1f(ε(t-τ(t)))]+[εT(t)Q2ε(t)+fT(ε(t))R2f(ε(t))]+[εT(t-τ0)×(Q3-Q2)ε(t-τ0)+fT(ε(t-τ0))(R3-R2)f(ε(t-τ0))]-[εT(t-τm)Q3ε(t-τm)+fT(ε(t-τm))R3f(ε(t-τm))],LV3(εt)=τ¯mνT(t)Z1ν(t)-∫t-τmt-τ0νT(s)Z1ν(s)ds+τ¯mtrace[σT(t)Z2σ(t)]-∫t-τmt-τ0trace[σT(s)Z2σ(s)]ds+τ0νT(t)S1ν(t)-∫t-τ0tνT(s)S1ν(s)ds+τ0trace[σT(t)S2σ(t)]-∫t-τ0ttrace[σT(s)S2σ(s)]ds,LV4(εt)=fT(ε(t))(ϱ02T1+ϱ¯m2T2)f(ε(t))-∫t-ϱ0tϱ0fT(ε(s))T1f(ε(s))ds-∫t-ϱmt-ϱ0ϱ¯mfT(ε(s))T2f(ε(s))ds≤fT(ε(t))(ϱ02T1+ϱ¯m2T2)f(ε(t))-[∫t-ϱ0tf(ε(s))ds]TT1[∫t-ϱ0tf(ε(s))ds]-(1+μ1)×[∫t-ϱ(t)t-ϱ0f(ε(s))ds]TT2[∫t-ϱ(t)t-ϱ0f(ε(s))ds]-(1+μ2)[∫t-ϱmt-ϱ(t)fT(ε(s))ds]T2[∫t-ϱmt-ϱ(t)f(ε(s))ds],
where μ1=(ϱm-ϱ(t))/ϱ¯m and μ2=(ϱ(t)-ϱ0)/ϱ¯m.
Now, adding the terms on the right-hand sides of (3.8)–(3.11) to ℒV(εt) and employing (2.5) and (3.1), it is easy to obtain
LV(εt)≤2εT(t)Pν(t)+2[fT(ε(t))-εT(t)Σ]Lν(t)+2[εT(t)Σ¯-fT(ε(t))]Hν(t)+2[εT(t)P1T+ν(t)P2T+βT(ε(t))Q]×[[∫t-ϱ0tf(ε(s))ds+∫t-ϱ(t)t-ϱ0f(ε(s))ds]-β(ε(t))+Kε(t)+K1ε(t-τ(t))+Af(ε(t))+Bf(ε(t-τ(t)))+D[∫t-ϱ0tf(ε(s))ds+∫t-ϱ(t)t-ϱ0f(ε(s))ds]]-2[εT(t)P1T+ν(t)P2T]ν(t)-2εT(t)ΓTQν(t)+[εT(t-τ0)Q1ε(t-τ0)+fT(ε(t-τ0))R1f(ε(t-τ0))]-(1-μ)[εT(t-τ(t))Q1ε(t-τ(t))+fT(ε(t-τ(t)))R1f(ε(t-τ(t)))]+[εT(t)Q2ε(t)+fT(ε(t))R2f(ε(t))]+[εT(t-τ0)(Q3-Q2)ε(t-τ0)+fT(ε(t-τ0))(R3-R2)f(ε(t-τ0))]-[εT(t-τm)Q3ε(t-τm)+fT(ε(t-τm))R3f(ε(t-τm))]+νT(t)(τ¯mZ1+τ0S1)ν(t)-∫t-τmt-τ0νT(s)Z1ν(s)ds-∫t-τ0tνT(s)S1ν(s)ds-∫t-τmt-τ0trace[σT(s)Z2σ(s)]ds-∫t-τ0ttrace[σT(s)S2σ(s)]ds+λ[εT(t)Π1TΠ1ε(t)+εT(t-τ(t))Π2TΠ2ε(t-τ(t))]+fT(ε(t))(ϱ02T1+ϱ¯m2T2)f(ε(t))-[∫t-ϱ0tf(ε(s))ds]TT1[∫t-ϱ0tf(ε(s))ds]-(1+μ1)×[∫t-ϱ(t)t-ϱ0f(ε(s))ds]TT2[∫t-ϱ(t)t-ϱ0f(ε(s))ds]-(1+μ2)[∫t-ϱmt-ϱ(t)fT(ε(s))ds]T2[∫t-ϱmt-ϱ(t)f(ε(s))ds].
Based on the methods in [33] and (2.7), for any n×n diagonal matrices U>0, V>0, W>0, R>0, the following inequality holds:
0≤-[xT(t)UΣ1x(t)-2xT(t)UΣ2f(x(t))+fT(x(t))Uf(x(t))]-[xT(t-τ(t))VΣ1x(t-τ(t))-2xT(t-τ(t))VΣ2f(x(t-τ(t)))+fT(x(t-τ(t)))Vf(x(t-τ(t)))]-[xT(t-τ0)WΣ1x(t-τ0)-2xT(t-τ0)WΣ2f(x(t-τ0))+fT(x(t-τ0))Wf(x(t-τ0))]-[xT(t-τm)RΣ1x(t-τm)-2xT(t-τm)RΣ2f(x(t-τm))+fT(x(t-τm))Rf(x(t-τm))].
From (A1), for any n×n diagonal matrix E, one obtains
0≤2[β(ε(t))-Γε(t)]TEε(t).
Furthermore, for any 13n×n constant matrices M,N,G, we can obtain
0=2ζT(t)M[ε(t)-ε(t-τ0)-∫t-τ0tν(s)ds-∫t-τ0tσ(s)dω(s)]+2ζT(t)N[ε(t-τ0)-ε(t-τ(t))-∫t-τ(t)t-τ0ν(s)ds-∫t-τ(t)t-τ0σ(s)dω(s)]+2ζT(t)G[ε(t-τ(t))-ε(t-τm)-∫t-τmt-τ(t)ν(s)ds-∫t-τmt-τ(t)σ(s)dω(s)],
where
ζT(t)=[εT(t) εT(t-τ0) εT(t-τm) fT(ε(t)) fT(ε(t-τ0)) fT(ε(t-τm)) νT(t) εT(t-τ(t)) fT(ε(t-τ(t))) [∫t-ϱ0tf(ε(s))ds]T [∫t-ϱ(t)t-ϱ0f(ε(s))ds]T [∫t-ϱmt-ϱ(t)f(ε(s))ds]T βT(ε(t))].
Then, following the methods in [28, 29] and combining (3.12)–(3.15), we obtain
LV(εt)≤ζT(t)[Ω+$+$T+τ0MS1-1MT+[τ(t)-τ0]NZ1-1NT+[τm-τ(t)]GZ1-1GT-μ1I1T2I1T-μ2I2T2I2T+MS2-1MT+NZ2-1NT+GZ2-1GT]×ζ(t)+h(t):=ζT(t)Δ(t)ζ(t)+h(t),
where Ω,$ are presented in (3.2) and
h(t)=[∫t-τ0tσ(s)dω(s)]TS2[∫t-τ0tσ(s)dω(s)]+[∫t-τ(t)t-τ0σ(s)dω(s)]TZ2[∫t-τ(t)t-τ0σ(s)dω(s)]+[∫t-τmt-τ(t)σ(s)dω(s)]TZ2[∫t-τmt-τ(t)σ(s)dω(s)]-∫t-τ0ttrace[σT(s)S2σ(s)]ds-∫t-τ(t)t-τ0trace[σT(s)Z2σ(s)]ds-∫t-τmt-τ(t)trace[σT(s)Z2σ(s)]ds.
Together with Lemmas 3.2 and 3.3, the nonlinear matrix inequalities in (3.2) guarantee that Δ(t)<0. Therefore, there exists a scalar χ<0 such that
LV(εt)≤ζT(t)Δ(t)ζ(t)+h(t)≤χ[‖ε(t)‖2+‖ε(t-τ(t))‖2]+h(t).
Taking the mathematical expectation of (3.19), we deduce that Eh(t)=0 and EℒV(εt)≤χE[∥ε(t)∥2+∥ε(t-τ(t))∥2], which indicates that system (2.11) is globally asymptotically stable in the mean square. Based on V(εt) in (3.6), direct computation shows that there exist three positive scalars Θi>0 (i=1,2,3) such that
V(εt)≤Θ1‖ε(t)‖2+Θ2∫t-τmaxt‖ε(v)‖2dv+Θ3∫t-τmt‖ε(v-τ(v))‖2dv.
Letting V¯(εt)=ektV(εt), we can deduce
EV¯(εt)-EV¯(ε0)=E∫0tL(eksV(εs))ds≤E∫0teks{k[Θ1‖ε(s)‖2+Θ2∫s-τmaxs‖ε(v)‖2dv+Θ3∫s-τms‖ε(v-τ(v))‖2dv]+χ[‖ε(s)‖2+‖ε(s-τ(s))‖2][Θ1‖ε(s)‖2+Θ2∫s-τmaxs‖ε(v)‖2dv+Θ3∫s-τms‖ε(v-τ(v))‖2dv]}ds.
By exchanging the order of integration, it can be deduced that
∫0teks∫s-τmaxs‖ε(v)‖2dvds≤∫-τmaxt∫vv+τmaxeks‖ε(v)‖2dsdv≤τmaxekτmax∫-τmaxt‖ε(v)‖2ekvdv,∫0teks∫s-τms‖ε(v-τ(v))‖2dvds≤τmekτm∫-τmt‖ε(v-τ(v))‖2ekvdv.
Substituting (3.22) into the corresponding terms of (3.21), it is easy to obtain
EV¯(εt)≤EV¯(ε0)+E{[kΘ1+kΘ2τmaxekτmax+χ]∫0t‖ε(v)‖2ekvdv+[kΘ3τmekτm+χ]∫0t‖ε(v-τ(v))‖2ekvdv+h0(k)},
where h0(k)=kΘ2τmaxekτmax∫-τmax0∥ε(v)∥2ekvdv+kΘ3τmekτm∫-τm0∥ε(v-τ(v))∥2ekvdv. Choose a sufficiently small scalar k0>0 such that k0Θ1+k0Θ2τmaxek0τmax+χ≤0 and k0Θ3τmek0τm+χ≤0. Then EV¯(εt)≤Eh0(k0)+EV¯(ε0). By direct computation, there exists a positive scalar ϒ>0 such that
EV¯(ε0)+Eh0(k0)≤ϒsup-2τmax≤s≤0E‖φ(s)‖2.
Meanwhile, EV¯(εt)≥λmin(P)ek0tE∥ε(t)∥2. Thus, with (3.24), one obtains
E‖ε(t)‖2≤λmin-1(P)ϒsup-2τmax≤s≤0E‖φ(s)‖2e-k0t,∀t≥0,
which indicates that system (2.11) is globally exponentially stable in the mean square. The proof is completed.
Remark 3.5.
Concerning systems (2.1) and (2.2), many existing works have paid much attention to the case β(z(t))=Cz(t) with C a positive-definite diagonal matrix, which is a special case of assumption (A3). Also, in Theorem 3.4, note that Δ(t)<0 in (3.17) was not simply enlarged to Ω+$+$T+τ0MS1-1MT+τ¯mNZ1-1NT+τ¯mGZ1-1GT+MS2-1MT+NZ2-1NT+GZ2-1GT<0, but was equivalently guaranteed by the two matrix inequalities in (3.2) and Lemma 3.3, which can be more effective than the techniques employed in [18, 28, 29]. Moreover, we compute and estimate ℒV4(εt) in (3.11) more efficiently than existing results, because some previously ignored terms are taken into consideration.
To show how the estimator gain matrices K and K1 are designed, a simple transformation yields the following theorem.
Theorem 3.6.
Suppose that assumptions (A1)–(A4) hold and let ϵ1,ϵ2>0 be given. Then systems (2.1) and (2.2) achieve exponential master-slave synchronization in the mean square if there exist n×n matrices P>0, Qj>0, Rj>0 (j=1,2,3), Zi>0, Si>0, Ti>0 (i=1,2), F, F1, n×n diagonal matrices L>0, Q>0, H>0, U>0, V>0, W>0, R>0, E>0, 13n×n matrices M, N, G, and a scalar λ≥0 such that the LMIs (3.26)-(3.27) hold:
-λI+P+(L+H)(Σ¯-Σ)+Q(Π-Γ)+τ¯mZ2+τ0S2≤0,[Ξ+$+$T-IiT2IiTΞ1*Φ]<0,[Ξ+$+$T-IiT2IiTΞ2*Φ]<0,i=1,2,
where Ii, Ξi (i=1,2), $, and Φ are as in (3.2), and
Ξ=[Ξ1100Ξ1400Ξ17ϵ1F1ϵ1QBϵ1QDϵ1QD0Ξ1,13*Ξ2200WΣ200000000**Ξ3300RΣ20000000***Ξ4400Ξ4700000ATQ****Ξ5500000000*****-R-R30000000******Ξ77ϵ2F1ϵ2QBϵ2QDϵ2QD0-ϵ2Q*******Ξ88VΣ2000F1T********Ξ99000BTQ*********-T100DTQ**********-T20DTQ***********-T20************-Q-QT],
with
Ξ11=ϵ1F+ϵ1FT+Q2-UΣ1-2ΓE+λΠ1TΠ1,Ξ14=ϵ1QA+UΣ2,Ξ17=ϵ2FT+P-ϵ1Q+Σ¯H-ΣL-ΓQ,Ξ1,13=-ϵ1Q+FT+E,Ξ22=-WΣ1+Q1+Q3-Q2,Ξ33=-Q3-RΣ1,Ξ44=-U+R2+ϱ02T1+ϱ¯m2T2,Ξ47=L-H+ϵ2ATQ,Ξ55=-W+R1+R3-R2,Ξ77=-ϵ2Q-ϵ2Q+τ¯mZ1+τ0S1,Ξ88=-(1-μ)Q1-VΣ1+λΠ2TΠ2,Ξ99=-(1-μ)R1-V.
Moreover, the estimator gains are given by K=Q-TF and K1=Q-TF1.
Proof.
Letting P1=ϵ1Q, P2=ϵ2Q and setting F=QTK, F1=QTK1 in (3.2) of Theorem 3.4, the result follows readily; the detailed proof is omitted.
Remark 3.7.
Theorem 3.6 presents a novel delay-dependent criterion guaranteeing that systems (2.1) and (2.2) achieve master-slave synchronization in an exponential way. The criterion is expressed in terms of LMIs; therefore, using the MATLAB LMI Toolbox, it is straightforward and convenient to check the feasibility of the proposed results without tuning any parameters. Moreover, the systems addressed in this paper include some well-known networks in [17, 19–21, 23] as special cases, even when τ(t) is not differentiable.
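Such parameter-free feasibility checks are not limited to MATLAB. As a minimal, hypothetical illustration (a delay-free Lyapunov inequality, far simpler than LMIs (3.26)-(3.27) and not the theorem's own conditions), one can certify stability of a sample matrix A by solving ATP+PA=-I and checking P>0:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy analogue of an LMI-style stability certificate: for a hypothetical
# stable matrix A, solve A^T P + P A = -I and verify P > 0. This is only
# an illustration of the feasibility-test idea, not Theorem 3.6's LMIs.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
P = solve_continuous_lyapunov(A.T, -np.eye(2))  # solves A^T P + P A = -I
eigs = np.linalg.eigvalsh(P)
print(eigs.min() > 0)                           # P > 0 certifies stability
```

For the full delay-dependent LMIs of Theorem 3.6, a semidefinite-programming solver would be used instead, but the "solve, then check positivity" workflow is the same.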
Remark 3.8.
By setting Q1=R1=0 in (3.6) and employing similar methods, Theorems 3.4 and 3.6 remain applicable without any bound on the derivative of τ(t); that is, they hold even when μ is unknown.
Remark 3.9.
It is well known that most of the n×n free-weighting blocks of M, N, G in Theorems 3.4 and 3.6 do not help reduce the conservatism but only increase the computational complexity. Thus we can choose the simplified slack matrices M, N, G as follows:
M=[M1M20n⋅11n]T,N=[0n⋅nN10n⋅5nN20n⋅5n]T,G=[0n⋅2nG10n⋅4nG20n⋅5n]T,
with n×n matrices Mi, Ni, Gi (i=1,2). Although the number of n×n matrix variables in (3.30) is much smaller than in (3.2) and (3.27), the numerical examples given in this paper demonstrate that the simplified criteria reduce the conservatism as effectively as Theorems 3.4 and 3.6 do.
4. Numerical Examples
In this section, two numerical examples will be given to illustrate the effectiveness of the proposed results.
Example 4.1.
Consider the drive system (2.1) and response one (2.2) of delayed neural networks as follows:
b(z)=[0.7z1+0.5tanh(z1); 0.7z2+0.5tanh(z2)], A=[0.2 -0.4; -0.4 0.2], B=[0.2 0.2; 0.2 0.2], D=[0.2 0.3; 0.1 0.21], g(z)=[0.25(|z1+1|-|z1-1|); 0.25(|z2+1|-|z2-1|)], σ(t,ε(t),ε(t-τ(t)))=0.1×[‖ε(t)‖ 0; 0 ‖ε(t-τ(t))‖], τ(t)=sin²(10t)+0.5, ϱ(t)=2cos²t+0.5.
Then it is easy to check that τ0=0.5,τm=1.5,ϱ0=0.5,ϱm=2.5,μ=10, and
Γ=[0.7 0; 0 0.7], Π=[1.2 0; 0 1.2], Σ=[-0.5 0; 0 -0.5], Σ¯=[0.5 0; 0 0.5], Π1=Π2=[0.1 0; 0 0.1].
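The stated delay bounds are easy to confirm numerically; the sketch below merely samples τ(t), ϱ(t), and the derivative τ̇(t)=10sin(20t) on a fine grid:

```python
import numpy as np

# Numerical check of the delay bounds claimed in Example 4.1:
# tau(t) = sin^2(10t) + 0.5 in [0.5, 1.5] with tau'(t) = 10 sin(20t) <= 10,
# and rho(t) = 2 cos^2(t) + 0.5 in [0.5, 2.5].
t = np.linspace(0.0, 10.0, 200_001)
tau = np.sin(10 * t) ** 2 + 0.5
rho = 2 * np.cos(t) ** 2 + 0.5
dtau = 10 * np.sin(20 * t)           # d/dt sin^2(10t) = 10 sin(20t)
print(tau.min(), tau.max())          # ~0.5, ~1.5
print(rho.min(), rho.max())          # ~0.5, ~2.5
print(dtau.max())                    # ~10
```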
By setting ϵ1=0.05, ϵ2=0.01 and utilizing Theorem 3.6, the estimator gain matrices K and K1 in (2.10) can be computed as
K=Q-TF=[-2.3981 -0.0575; -0.0575 -2.3994], K1=Q-TF1=[-0.2175 0.3347; 0.3347 -0.2175].
Furthermore, for τ(t)=|sin(20t)|+0.5 and ϱ(t)=2|cos(6t)|+0.5, with ϵ1=0.05, ϵ2=0.01, we obtain the following estimator gain matrices by using Theorem 3.6 and Remark 3.8:
K=Q-TF=[-2.5374 -0.0528; -0.0528 -2.5385], K1=Q-TF1=[-0.3021 0.3812; 0.3812 -0.3021],
which shows that the obtained results still hold when the time delay is not differentiable. In contrast, the methods proposed in [17–19] fail to solve this synchronization problem even without the distributed delay.
Example 4.2.
As a special case, we consider the master system (2.1) of delayed stochastic neural networks as follows:
dz(t)=[-Cz(t)+Ag(z(t))+Bg(z(t-τ(t)))+D∫t-ϱ(t)tg(z(s))ds+I]dt,
where C=[1 0; 0 1], A=[1.8 -0.3; -5.1 2.6], B=[-1.6 -0.1; -0.3 -2.5], D=[2 1; 1 2], I=[0; 0], τ(t)=0.95+0.05sin²(40t), and ϱ(t)=0.1. It can be verified that τ0=0.95, τm=1.0, μ=2, and ϱ0=ϱm=0.1. The activation functions are taken as gi(s)=tanh(s), s∈R (i=1,2). The corresponding slave system is
dy(t)=[-Cy(t)+Ag(y(t))+Bg(y(t-τ(t)))+D∫t-ϱ(t)tg(y(s))ds+I+u(t)]dt+σ(t,ε(t),ε(t-τ(t)))dω(t),
where σ(t,ε(t),ε(t-τ(t)))=[∥ε(t)∥ 0; 0 ∥ε(t-τ(t))∥]. Then, applying Theorem 3.6 with ϵ1=0.05 and ϵ2=0.1, we obtain part of a feasible solution to the LMIs (3.26) and (3.27) by resorting to the MATLAB LMI Toolbox:
Q=[0.2526 0; 0 0.2526], F=[-4.0139 -0.0421; 0.0486 -4.0182], F1=[0.1133 0.0043; 0.0168 0.1707].
Then the estimator gain matrices K,K1 can be deduced as follows:
K=Q-TF=[-15.8892 -0.1667; 0.1923 -15.9062], K1=Q-TF1=[0.4486 0.0170; 0.0666 0.6758].
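The relation K=Q-TF, K1=Q-TF1 from Theorem 3.6 can be replayed numerically from the quoted feasible solution; since the entries are rounded to four decimals, the products match only to roughly that accuracy:

```python
import numpy as np

# Recovering the controller gains of this example from the LMI solution
# via K = Q^{-T} F and K1 = Q^{-T} F1 (Theorem 3.6). The inputs are the
# rounded values quoted in the example.
Q = np.diag([0.2526, 0.2526])
F = np.array([[-4.0139, -0.0421], [0.0486, -4.0182]])
F1 = np.array([[0.1133, 0.0043], [0.0168, 0.1707]])

K = np.linalg.solve(Q.T, F)      # K  = Q^{-T} F
K1 = np.linalg.solve(Q.T, F1)    # K1 = Q^{-T} F1
print(np.round(K, 3))
print(np.round(K1, 3))
```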
It follows from Theorem 3.6 that the drive system with initial condition [0.5,0.4]T for -1≤t≤0 synchronizes with the response system with initial condition [0.7,0.6]T for -1≤t≤0. The phase and state trajectories of the drive and response systems, together with the state trajectories of the error system, are shown in Figure 1, from which we can see that the master system synchronizes with the slave system.
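The synchronization seen in Figure 1 can also be reproduced with a short simulation. The sketch below integrates the drive and response systems of this example with the designed delayed feedback controller using a simple Euler scheme; as simplifying assumptions, the diffusion term σ is dropped (a deterministic sketch), and the step size, horizon, and quadrature for the distributed delay are arbitrary choices:

```python
import numpy as np

# Deterministic Euler sketch of Example 4.2: drive system z, response
# system y with the delayed feedback u = K e + K1 e(t - tau(t)).
# The noise term sigma is omitted for simplicity; dt and T are arbitrary.
C = np.eye(2)
A = np.array([[1.8, -0.3], [-5.1, 2.6]])
B = np.array([[-1.6, -0.1], [-0.3, -2.5]])
D = np.array([[2.0, 1.0], [1.0, 2.0]])
K = np.array([[-15.8892, -0.1667], [0.1923, -15.9062]])
K1 = np.array([[0.4486, 0.0170], [0.0666, 0.6758]])
g = np.tanh

dt, T = 1e-3, 5.0
n_hist = int(1.0 / dt)                 # history covers tau_m = 1.0
n_rho = int(0.1 / dt)                  # distributed delay rho = 0.1

# Constant initial histories on [-1, 0], as in the example.
z_hist = np.tile([0.5, 0.4], (n_hist + 1, 1))
y_hist = np.tile([0.7, 0.6], (n_hist + 1, 1))

for k in range(int(T / dt)):
    t = k * dt
    tau = 0.95 + 0.05 * np.sin(40 * t) ** 2
    d = int(round(tau / dt))
    z, y = z_hist[-1], y_hist[-1]
    z_tau, y_tau = z_hist[-1 - d], y_hist[-1 - d]
    int_gz = dt * g(z_hist[-n_rho:]).sum(axis=0)   # ∫_{t-0.1}^t g(z) ds
    int_gy = dt * g(y_hist[-n_rho:]).sum(axis=0)
    u = K @ (y - z) + K1 @ (y_tau - z_tau)         # delayed feedback (2.10)
    dz = -C @ z + A @ g(z) + B @ g(z_tau) + D @ int_gz
    dy = -C @ y + A @ g(y) + B @ g(y_tau) + D @ int_gy + u
    z_hist = np.vstack([z_hist[1:], z + dt * dz])
    y_hist = np.vstack([y_hist[1:], y + dt * dy])

err0 = np.linalg.norm([0.7 - 0.5, 0.6 - 0.4])
err = np.linalg.norm(y_hist[-1] - z_hist[-1])
print(err0, err)   # the synchronization error shrinks
```

With the strongly negative diagonal gain K, the error contracts quickly; a stochastic Euler-Maruyama variant would additionally add σ(t)ΔW increments to the y-update.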
Figure 1: Phase trajectories and state trajectories of the drive system, response system, and error system.
5. Conclusions
In this paper, we have considered the synchronization control of stochastic neural networks with both time-varying and distributed time-varying delays. By using a Lyapunov functional and the LMI technique, a sufficient condition has been derived to ensure the global exponential stability of the error system, so that the slave system synchronizes with the master one, and the estimator gains can be obtained. The obtained results are novel, since the addressed networks take a more general form and some powerful mathematical techniques are employed. Finally, two numerical examples verify the theoretical results.
Acknowledgment
This work was supported by the National Natural Science Foundation of China (nos. 60835001, 60875035, 60904020, 61004032, and 61004046) and by the China Postdoctoral Science Foundation Special Funded Project (no. 201003546).
References
[1] L. M. Pecora and T. L. Carroll, "Synchronization in chaotic systems," 1990, 64(8), 821–824. doi:10.1103/PhysRevLett.64.821.
[2] T. L. Liao and S. H. Tsai, "Adaptive synchronization of chaotic systems and its application to secure communications," 2000, 11(9), 1387–1396. doi:10.1016/S0960-0779(99)00051-X.
[3] V. Perez-Munuzuri, V. Perez-Villar, and L. O. Chua, "Autowaves for image processing on a two-dimensional CNN array of excitable nonlinear circuits: flat and wrinkled labyrinths," 1993, 40(3), 174–181. doi:10.1109/81.222798.
[4] F. Zou and J. A. Nossek, "Bifurcation and chaos in cellular neural networks," 1993, 40(3), 166–173. doi:10.1109/81.222797.
[5] M. Gilli, "Strange attractors in delayed cellular neural networks," 1993, 40(11), 849–853. doi:10.1109/81.251826.
[6] C.-J. Cheng, T.-L. Liao, and C.-C. Hwang, "Exponential synchronization of a class of chaotic neural networks," 2005, 24(1), 197–206. doi:10.1016/j.chaos.2004.09.022.
[7] J.-J. Yan, J.-S. Lin, M.-L. Hung, and T.-L. Liao, "On the synchronization of neural networks containing time-varying delays and sector nonlinearity," 2007, 361(1-2), 70–77. doi:10.1016/j.physleta.2006.08.083.
[8] H. Huang and G. Feng, "Synchronization of nonidentical chaotic neural networks with time delays," 2009, 22(7), 869–874. doi:10.1016/j.neunet.2009.06.009.
[9] Q. Song, "Design of controller on synchronization of chaotic neural networks with mixed time-varying delays," 2009, 72(13–15), 3288–3295. doi:10.1016/j.neucom.2009.02.011.
[10] T. Li, S. M. Fei, Q. Zhu, and S. Cong, "Exponential synchronization of chaotic neural networks with mixed delays," 2008, 71(13–15), 3005–3019. doi:10.1016/j.neucom.2007.12.029.
[11] T. Li, A. G. Song, S. M. Fei, and Y. Q. Guo, "Synchronization control of chaotic neural networks with time-varying and distributed delays," 2009, 71(5-6), 2372–2384. doi:10.1016/j.na.2009.01.079.
[12] J. H. Park, "Synchronization of cellular neural networks of neutral type via dynamic feedback controller," 2009, 42(3), 1299–1304. doi:10.1016/j.chaos.2009.03.024.
[13] H. Li and D. Yue, "Synchronization stability of general complex dynamical networks with time-varying delays: a piecewise analysis method," 2009, 232(2), 149–158. doi:10.1016/j.cam.2009.02.104.
[14] Y. Liu, Z. Wang, J. Liang, and X. Liu, "Synchronization and state estimation for discrete-time complex networks with distributed delays," 2008, 38(5), 1314–1325. doi:10.1109/TSMCB.2008.925745.
[15] H. Zhang, Y. Xie, Z. Wang, and C. Zheng, "Adaptive synchronization between two different chaotic neural networks with time delay," 2007, 18(6), 1841–1845. doi:10.1109/TNN.2007.902958.
[16] H. R. Karimi and P. Maass, "Delay-range-dependent exponential H∞ synchronization of a class of delayed neural networks," 2009, 41(3), 1125–1135. doi:10.1016/j.chaos.2008.04.051.
[17] W. Yu and J. Cao, "Synchronization control of stochastic delayed neural networks," 2007, 373, 252–260. doi:10.1016/j.physa.2006.04.105.
[18] Y. Tang, J. A. Fang, and Q. Miao, "On the exponential synchronization of stochastic jumping chaotic neural networks with mixed delays and sector-bounded non-linearities," 2009, 72(7-9), 1694–1701. doi:10.1016/j.neucom.2008.08.007.
[19] J. H. Park and O. M. Kwon, "Synchronization of neural networks of neutral type with stochastic perturbation," 2009, 23(14), 1743–1751. doi:10.1142/S0217984909019909.
[20] X. Li and J. Cao, "Adaptive synchronization for delayed neural networks with stochastic perturbation," 2008, 345(7), 779–791. doi:10.1016/j.jfranklin.2008.04.012.
[21] Y. Tang, R. Qiu, J. A. Fang, Q. Miao, and M. Xia, "Adaptive lag synchronization in unknown stochastic chaotic neural networks with discrete and distributed time-varying delays," 2008, 372(24), 4425–4433. doi:10.1016/j.physleta.2008.04.032.
[22] Y. Xia, Z. Yang, and M. Han, "Lag synchronization of unknown chaotic delayed Yang-Yang-type fuzzy neural networks with noise perturbation based on adaptive control and parameter identification," 2009, 20(7), 1165–1180. doi:10.1109/TNN.2009.2016842.
[23] Z. X. Liu, S. L. Liu, S. M. Zhong, and M. Ye, "pth moment exponential synchronization analysis for a class of stochastic neural networks with mixed delays," 2010, 15, 1899–1909.
[24] O. M. Kwon and J. H. Park, "Delay-dependent stability for uncertain cellular neural networks with discrete and distributed time-varying delays," 2008, 345(7), 766–778. doi:10.1016/j.jfranklin.2008.04.011.
[25] R. Samidurai, S. Marshal Anthoni, and K. Balachandran, "Global exponential stability of neutral-type impulsive neural networks with discrete and distributed delays," 2010, 4(1), 103–112. doi:10.1016/j.nahs.2009.08.004.
[26] Y. Horikawa and H. Kitajima, "Bifurcation and stabilization of oscillations in ring neural networks with inertia," 2009, 238(23-24), 2409–2418. doi:10.1016/j.physd.2009.09.021.
[27] H. Lu, "Chaotic attractors in delayed neural networks," 2002, 298(2-3), 109–116. doi:10.1016/S0375-9601(02)00538-8.
[28] W. H. Chen and X. Lu, "Mean square exponential stability of uncertain stochastic delayed neural networks," 2008, 372(7), 1061–1069. doi:10.1016/j.physleta.2007.09.009.
[29] H. Huang and G. Feng, "Delay-dependent stability for uncertain stochastic neural networks with time-varying delay," 2007, 381(1-2), 93–103. doi:10.1016/j.physa.2007.04.020.
[30] T. Li, A. Song, S. Fei, and T. Wang, "Global synchronization in arrays of coupled Lurie systems with both time-delay and hybrid coupling," 2011, 16(1), 10–20. doi:10.1016/j.cnsns.2010.04.008.
[31] D. Yue, E. Tian, Y. Zhang, and C. Peng, "Delay-distribution-dependent stability and stabilization of T-S fuzzy systems with probabilistic interval delay," 2009, 39(2), 503–516. doi:10.1109/TSMCB.2008.2007496.
[32] X. Mao, Stochastic Differential Equations and Their Applications, Horwood, Chichester, UK, 1997.
[33] Z. Wang, H. Shu, Y. Liu, D. W. C. Ho, and X. Liu, "Robust stability analysis of generalized neural networks with discrete and distributed time delays," 2006, 30(4), 886–896. doi:10.1016/j.chaos.2005.08.166.