This paper investigates the asymptotic behavior of a class of switched stochastic cellular neural networks (CNNs) with mixed delays (discrete time-varying delays and distributed time-varying delays). Employing the average dwell time (ADT) approach, stochastic analysis techniques, and the linear matrix inequality (LMI) technique, some novel sufficient conditions for the asymptotic behavior (mean-square ultimate boundedness, the existence of an attractor, and mean-square exponential stability) are established. A numerical example is provided to illustrate the effectiveness of the proposed results.
1. Introduction
Since Chua and Yang's seminal work on cellular neural networks (CNNs) in 1988 [1, 2], CNNs have been successfully applied in various areas such as signal processing, pattern recognition, associative memory, and optimization (see, e.g., [3–5]). From a practical point of view, in both biological and man-made neural networks, the processing of moving images and pattern recognition problems require the introduction of delays in the signals transmitted among the cells [6, 7]. Beyond the widely used discrete delays, distributed delays arise because neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. The mathematical model can be described by the following differential equations:
(1)dxi(t)=-dixi(t)+∑j=1naijfj(xj(t))+∑j=1nbijfj(xj(t-τi(t)))+∑j=1ncij∫t-hi(t)tfj(xj(s))ds+Ji,i=1,…,n,
where t≥0 and n≥2 is the number of units in the neural network; xi(t) denotes the potential (or voltage) of cell i at time t; fj(·) denotes a nonlinear output function; di>0 denotes the rate with which cell i resets its potential to the resting state when isolated from other cells and external inputs; aij, bij, cij denote the strengths of connectivity between cells i and j; Ji is the external input; τi(t) and hi(t) are the discrete time-varying delays and distributed time-varying delays, respectively.
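For intuition, the dynamics (1) can be integrated numerically. The following is a minimal forward-Euler sketch with n = 2; all parameter values and the choice of output function are illustrative assumptions, not taken from the analysis below, and the distributed-delay integral is approximated by a Riemann sum.

```python
import numpy as np

# Hedged sketch: forward-Euler simulation of the deterministic model (1)
# with n = 2 cells. All numerical values below are illustrative only.
n, dt, T = 2, 0.01, 10.0
D = np.array([1.0, 1.0])                      # self-feedback rates d_i > 0
A = np.array([[0.3, 0.1], [0.2, 0.2]])        # instantaneous weights a_ij
B = np.array([[0.2, 0.0], [0.3, 0.5]])        # discrete-delay weights b_ij
C = np.array([[0.2, -0.1], [0.3, 0.1]])       # distributed-delay weights c_ij
J = np.array([0.1, -0.1])                     # external inputs J_i
f = np.tanh                                   # assumed output function f_j

tau = lambda t: 0.25 * np.sin(t) ** 2         # discrete delay tau_i(t)
h = lambda t: 0.3 * np.sin(t) ** 2            # distributed delay h_i(t)

steps = int(T / dt)
hist = int(0.3 / dt) + 1                      # history long enough for max delay
x = np.zeros((steps + hist, n))
x[:hist] = 0.5                                # constant initial function

for k in range(hist, steps + hist):
    t = (k - hist) * dt
    d_tau = int(round(tau(t) / dt))
    d_h = max(int(round(h(t) / dt)), 1)
    x_tau = x[k - 1 - d_tau]                  # x(t - tau(t))
    # Riemann sum for the distributed-delay integral over [t - h(t), t]
    integral = f(x[k - 1 - d_h:k]).sum(axis=0) * dt
    dx = -D * x[k - 1] + A @ f(x[k - 1]) + B @ f(x_tau) + C @ integral + J
    x[k] = x[k - 1] + dt * dx

print(x[-1])  # state after T = 10 time units
```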
Neural networks are inherently nonlinear; in the real world, nonlinear problems are not exceptional but regular phenomena, and nonlinearity is intrinsic to matter and its development [8, 9]. Although discrete delays combined with distributed delays can usually provide a good approximation of the original model, most real models are also affected by external perturbations of great uncertainty. For instance, in electronic implementations, stochastic disturbances are mostly inevitable owing to thermal noise. As Haykin [10] points out, in real nervous systems synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes. Consequently, noise is unavoidable and should be taken into account in modeling. Moreover, it is well recognized that a CNN can be stabilized or destabilized by certain stochastic inputs. It is therefore of significant importance to consider stochastic effects in delayed neural networks; one approach to incorporating such effects mathematically is to use probabilistic threshold models. However, the previous literature focuses mainly on the stability of stochastic neural networks with delays [11–14]. Actually, the study of dynamical systems involves not only stability but also other dynamic behaviors such as ultimate boundedness and the existence of an attractor, on which there are very few results for stochastic neural networks [15–17]. Hence, discussing the asymptotic behavior of neural networks with mixed delays is valuable and meaningful.
On the other hand, neural networks often exhibit a special characteristic of network mode switching; that is, a neural network sometimes has finitely many modes that switch from one to another at different times according to a switching law generated by a switching logic. As an important class of hybrid systems, switched systems arise in many practical processes, and their analysis has drawn considerable attention owing to numerous applications in the control of mechanical systems, computer communication, the automotive industry, electric power systems, and many other fields [18–22]. Most recently, the stability analysis of switched neural systems has been further investigated, mainly based on Lyapunov functions [23, 24]. It is worth noting that the average dwell time (ADT) approach is an effective method for switched systems: it avoids the need for a common Lyapunov function and can be adopted to obtain less conservative stability conditions. For instance, based on the average dwell time method, stability has been discussed for uncertain switched Cohen-Grossberg neural networks with interval time-varying delay and distributed time-varying delay in [25]. In [26], the average dwell time method was utilized to obtain sufficient conditions for the exponential stability and the weighted L2 gain of a class of switched systems.
However, it is worth emphasizing that when the activation functions are unbounded, as in some special applications, the existence of an equilibrium point cannot be guaranteed [27]. In these circumstances, discussing the stability of an equilibrium point of switched neural networks becomes infeasible, which motivates us to consider the ultimate boundedness and attractor of switched neural networks. Unfortunately, as far as we know, the asymptotic behavior of switched systems with mixed time delays has not been investigated yet, let alone that of switched stochastic systems. Such research is challenging and interesting since it integrates switched hybrid systems with stochastic systems and is thus theoretically and practically significant. This motivates an intensive study of the asymptotic behavior of switched stochastic neural networks with mixed delays.
Motivated by the above analysis, the main purpose of this paper is to obtain sufficient conditions for the asymptotic behavior (mean-square ultimate boundedness, the existence of an attractor, and mean-square exponential stability) of the switched stochastic system. This paper is organized as follows. In Section 2, the considered model of a switched stochastic CNN with mixed delays is presented, together with the necessary assumptions, definitions, and lemmas. In Section 3, mean-square ultimate boundedness and the attractor of the proposed model are studied. A numerical example demonstrates the effectiveness of the theoretical results in Section 4, and we conclude the paper in Section 5.
2. Problem Formulation
In general, a stochastic cellular neural network with mixed delays can be described as follows:
(2)dx(t)=[-Dx(t)+AF(x(t))+BF(x(t-τ(t)))+C∫t-h(t)tF(x(s))ds+J]dt+G(x(t),x(t-τ(t)))dw(t),
where x(t)=(x1(t),…,xn(t))T∈Rn, F(x(t))=(f1(x1(t)),…,fn(xn(t)))T, D=diag(d1,…,dn), A=(aij)n×n, B=(bij)n×n, C=(cij)n×n, J=(J1,…,Jn)T, τ(t)=(τ1(t),…,τn(t))T, h(t)=(h1(t),…,hn(t))T, G(·,·) is an n×n matrix-valued function, and w(t)=(w1(t),…,wn(t))T is an n-dimensional Brownian motion defined on a complete probability space (Ω,ℱ,P) with a natural filtration {ℱt}t≥0 (i.e., ℱt=σ{w(s):0≤s≤t}).
By introducing switching signal into the system (2) and taking a set of neural networks as the individual subsystems, the switched system can be obtained, which is described as
(3)dx(t)=[-Dσ(t)x(t)+Aσ(t)F(x(t))+Bσ(t)F(x(t-τ(t)))+Cσ(t)∫t-h(t)tF(x(s))ds+J]dt+Gσ(t)(x(t),x(t-τ(t)))dw(t),
where σ(t):[0,+∞)→Σ={1,2,…,m} is the switching signal. At each time instant t, the index σ(t)∈Σ (i.e., σ(t)=i) indicates that the ith subsystem is active.
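A minimal sketch of how (3) can be simulated with an Euler-Maruyama scheme: at each step the switching signal selects one set of subsystem matrices. The two modes, the diffusion G, and the constant delay are illustrative assumptions, and the distributed-delay term is omitted for brevity.

```python
import numpy as np

# Hedged sketch: Euler-Maruyama simulation of a switched stochastic system
# of the form (3) with m = 2 modes. All matrices and the diffusion G are
# illustrative choices, not taken from the paper.
rng = np.random.default_rng(0)
n, dt, T = 2, 0.005, 6.0
modes = {
    1: dict(D=np.diag([1.0, 1.0]), A=np.array([[0.3, 0.1], [0.2, 0.2]]),
            B=np.array([[0.2, 0.0], [0.3, 0.5]])),
    2: dict(D=np.diag([1.2, 0.9]), A=np.array([[0.2, 0.4], [0.1, 0.3]]),
            B=np.array([[0.1, 0.0], [-0.1, 0.2]])),
}
f = np.tanh
sigma = lambda t: 1 if int(t) % 2 == 0 else 2    # switch every 1 time unit
G = lambda x, x_tau: 0.1 * np.diag(x)            # assumed diffusion matrix

d_tau = int(0.25 / dt)                           # constant delay for simplicity
steps = int(T / dt)
x = np.zeros((steps + d_tau + 1, n))
x[: d_tau + 1] = 0.5                             # constant initial function

for k in range(d_tau + 1, steps + d_tau + 1):
    t = (k - d_tau - 1) * dt
    p = modes[sigma(t)]                          # active subsystem matrices
    drift = -p["D"] @ x[k - 1] + p["A"] @ f(x[k - 1]) + p["B"] @ f(x[k - 1 - d_tau])
    dw = rng.normal(0.0, np.sqrt(dt), n)         # Brownian increment
    x[k] = x[k - 1] + drift * dt + G(x[k - 1], x[k - 1 - d_tau]) @ dw

print(np.mean(x[-100:] ** 2))   # crude estimate of the mean-square state
```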
For convenience, we introduce some notation. Rn denotes the n-dimensional Euclidean space. X≤Y (X<Y) means that each pair of corresponding elements of X and Y satisfies the inequality "≤" ("<"); in particular, X is called a positive (negative) definite matrix if X>0 (X<0). XT denotes the transpose of a square matrix X, and the symbol "*" within a matrix represents the symmetric term. λmin(X) and λmax(X) denote the minimum and maximum eigenvalues of a matrix X, respectively. I denotes the identity matrix.
Let 𝒞([-τ*,0],Rn) denote the Banach space of continuous functions mapping [-τ*,0] into Rn with the topology of uniform convergence. For any φ∈𝒞([-τ*,0],Rn), we define ∥φ∥=max1≤i≤nsup-τ*≤s≤0|φi(s)|.
The initial conditions for system (3) are given in the form:
(4)x(t)=φ,φ∈𝒞ℱ0([-τ*,0],Rn),
where 𝒞ℱ0([-τ*,0],Rn) is the family of all ℱ0-measurable bounded 𝒞([-τ*,0],Rn)-valued random variables.
Throughout this paper, we assume the following assumptions are always satisfied.
(H1) The discrete time-varying delay τ(t) and the distributed time-varying delay h(t) satisfy
(5)0≤τ(t)≤τ,0≤h(t)≤h,τ*=max1≤i≤n{τ,h},
where τ, h, τ* are scalars.
(H2) There exist constants lj and Lj, j=1,2,…,n, such that
(6)lj≤fj(x)-fj(y)x-y≤Lj,∀x,y∈R,x≠y.
Moreover, we define
(7)Σ1=diag{l1L1,l2L2,…,lnLn},Σ2=diag{l1+L1,l2+L2,…,ln+Ln}.
(H3) We assume that G(t,x,y):R+×Rn×Rn→Rn×m is locally Lipschitz continuous and satisfies the following condition:
(8)trace[GT(t,x,y)G(t,x,y)]≤xTU1TU1x+yTU2TU2y+2xTU1TU2y,
where U1>0, U2>0 are constant matrices with appropriate dimensions.
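As a concrete instance of the sector condition (6), the activation used in Section 4, f(u) = 0.5 tanh(u), satisfies (6) with lj = 0 and Lj = 0.5; the snippet below checks the difference quotients numerically on a sample grid.

```python
import numpy as np

# Numerical check of the sector condition (6) for f(u) = 0.5*tanh(u),
# which the example in Section 4 uses with l_j = 0 and L_j = 0.5.
f = lambda u: 0.5 * np.tanh(u)
l, L = 0.0, 0.5

xs = np.linspace(-5, 5, 201)
ys = xs + 0.1                                # shifted copies so that x != y
quotients = (f(xs) - f(ys)) / (xs - ys)      # difference quotients in (6)
assert np.all(quotients >= l) and np.all(quotients <= L)
print(quotients.min(), quotients.max())
```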
Some definitions and lemmas are introduced as follows.
Definition 1 (see [15]).
System (2) is said to be mean-square ultimately bounded if there exists a constant B~>0 such that, for any initial value φ∈𝒞ℱ0, there is a t′=t′(φ)>0 such that, for all t≥t′, the solution x(t,φ) of system (2) satisfies
(9)E∥x(t,φ)∥2≤B~.
In this case, the set 𝔸={φ∈𝒞ℱ0∣E∥φ(s)∥2≤B~} is said to be an attractor of system (2) in the mean-square sense.
Clearly, the statement above is equivalent to limsupt→∞E∥x(t)∥2≤B~.
Definition 2 (see [28]).
For any switching signal σ(t), there corresponds a switching sequence {(σ(t0),t0),…,(σ(tk),tk),…∣k=0,1,…}, where (σ(tk),tk) means that the σ(tk)th subsystem is activated during t∈[tk,tk+1) and k denotes the switching ordinal number. For any finite constants T1, T2 with T2>T1≥0, denote by Nσ(T1,T2) the number of discontinuities of the switching signal σ(t) over the interval (T1,T2). If Nσ(T1,T2)≤N0+(T2-T1)/Tα holds for some Tα>0 and N0>0, then Tα is called the average dwell time and N0 the chatter bound.
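Definition 2 can be restated operationally: count the discontinuities of σ over an interval and compare the count with the affine bound. A small sketch, where the switch times are an illustrative example:

```python
# Sketch of Definition 2: count the switches of sigma over (T1, T2) and test
# the average-dwell-time bound N_sigma(T1, T2) <= N0 + (T2 - T1)/T_alpha.
def num_switches(switch_times, T1, T2):
    """Number of discontinuities of sigma(t) in the open interval (T1, T2)."""
    return sum(1 for t in switch_times if T1 < t < T2)

def satisfies_adt(switch_times, T1, T2, N0, T_alpha):
    return num_switches(switch_times, T1, T2) <= N0 + (T2 - T1) / T_alpha

# Example: switches at t = 2, 4, 6, ... have an average dwell time of 2,
# so the bound holds for T_alpha = 1.5 < 2 (with chatter bound N0 = 1).
times = [2.0 * k for k in range(1, 50)]
print(satisfies_adt(times, 0.0, 50.0, N0=1, T_alpha=1.5))  # True
```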
Lemma 3.
Let X and Y be any n-dimensional real vectors, let P be a positive semidefinite matrix, and let ε>0 be a scalar. Then the following inequality holds:
(10)2XTPY≤εXTPX+ε-1YTPY.
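Lemma 3 follows from expanding (ε^{1/2}X - ε^{-1/2}Y)TP(ε^{1/2}X - ε^{-1/2}Y) ≥ 0; the snippet below checks the inequality on random samples.

```python
import numpy as np

# Numerical sanity check of Lemma 3:
# 2 X^T P Y <= eps * X^T P X + eps^{-1} * Y^T P Y
# for random vectors and a random positive semidefinite P.
rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n))
P = M @ M.T                                  # positive semidefinite by construction
for eps in (0.5, 1.0, 2.0):
    for _ in range(100):
        X, Y = rng.normal(size=n), rng.normal(size=n)
        lhs = 2 * X @ P @ Y
        rhs = eps * X @ P @ X + (1 / eps) * Y @ P @ Y
        assert lhs <= rhs + 1e-9
print("Lemma 3 verified on random samples")
```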
Lemma 4 (see [29]).
For any positive definite constant matrix M∈ℛn×n and scalar r>0, if there exists a vector function η:[0,r]→ℛn such that the integrals ∫0rηT(s)Mη(s)ds and ∫0rη(s)ds are well defined, then
(11)∫0rηT(s)Mη(s)ds≥1r(∫0rη(s)ds)TM(∫0rη(s)ds).
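A discretized check of Lemma 4: replacing the integrals with Riemann sums, the inequality reduces to the Cauchy-Schwarz inequality in the inner product induced by M, so it holds exactly for the sampled sums as well.

```python
import numpy as np

# Discretized check of Lemma 4 (Jensen's integral inequality): for M > 0,
# int_0^r eta^T M eta ds >= (1/r) * (int_0^r eta ds)^T M (int_0^r eta ds).
rng = np.random.default_rng(2)
n, r, N = 3, 2.0, 2000
ds = r / N
M = rng.normal(size=(n, n))
M = M @ M.T + n * np.eye(n)                  # positive definite
eta = rng.normal(size=(N, n))                # samples of eta(s) on a grid

lhs = sum(e @ M @ e for e in eta) * ds       # Riemann sum of the left side
v = eta.sum(axis=0) * ds                     # Riemann sum of int_0^r eta(s) ds
rhs = (v @ M @ v) / r
assert lhs >= rhs - 1e-9
print(lhs, rhs)
```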
3. Main Results
Let 𝒞2,1(Rn×R+;R+) denote the family of all nonnegative functions V(t,x) on Rn×R+ which are continuously twice differentiable in x and once differentiable in t. For V∈𝒞2,1(Rn×R+;R+), define the operator ℒV associated with the general stochastic system dx(t)=f(x(t),t)dt+G(x(t),x(t-τ(t)))dw(t) by
(12)ℒV(t,x)=Vt(t,x)+Vx(t,x)f(x(t),t)+12trace{GT(x(t),x(t-τ(t)))Vxx(t,x)G(x(t),x(t-τ(t)))},
where
(13)Vt(t,x)=∂V(t,x)∂t,Vx(t,x)=(∂V(t,x)∂x1,…,∂V(t,x)∂xn)T,Vxx(t,x)=(∂2V(t,x)∂xi∂xj)n×n.
Theorem 5.
If there are constants μ, ν such that τ˙(t)≤μ, h˙(t)≤ν, we denote g(μ), k(ν) as:
(14)g(μ)={(1-μ)e-ατ,μ≤1;1-μ,μ≥1,k(ν)={(1-ν)e-αh,ν≤1;1-ν,ν≥1.
For a given constant α>0, if there exist positive definite matrices P=diag(p1,p2,…,pn), Q, R, S, Z, U1, U2, Yi=diag(Yi1,Yi2,…,Yin), i=1,2, such that the following condition holds:
(15)Δ1=[Φ11 Φ12 Φ13 Φ14 0 Φ16; * Φ22 0 Φ24 0 0; * * Φ33 0 0 0; * * * Φ44 0 0; * * * * Φ55 0; * * * * * Φ66]<0, where Φ11=2αP-2DP+Q+τ2S-2Σ1Y1+αI+U1TPU1, Φ12=U1TPU2, Φ13=PA+Σ2Y1, Φ14=PB, Φ16=PC, Φ22=-g(μ)Q-2Σ1Y2+αI+U2TPU2, Φ24=Σ2Y2, Φ33=R+h2Z-2Y1+αI, Φ44=-k(ν)R-2Y2+αI, Φ55=-g(μ)S, Φ66=-k(ν)Z,
then system (2) is mean-square ultimately bounded.
Proof.
Consider the positive definite Lyapunov functional as follows:
(16)V(t)=V1(t)+V2(t)+V3(t)+V4(t)+V5(t),
where
(17)V1(t)=eαtxT(t)Px(t),V2(t)=∫t-τ(t)txT(s)Qeαsx(s)ds,V3(t)=∫t-h(t)tFT(x(s))ReαsF(x(s))ds,V4(t)=τ∫-τ(t)0∫t+θtxT(s)Seαsx(s)dsdθ,V5(t)=h∫-h(t)0∫t+θtFT(x(s))ZeαsF(x(s))dsdθ.
Then, by Itô's formula, the stochastic differential of V(x,t) is
(18)dV(x,t)=ℒV(x,t)dt+Vx(x,t)G(x(t),x(t-τ(t)))dw(t),
and the operator ℒV1 along the trajectory of system (2) is
(19)ℒV1(t)=∂V1(x(t),t)∂t+∂V1(x(t),t)∂x[-Dx(t)+AF(x(t))+BF(x(t-τ(t)))+C∫t-h(t)tF(x(s))ds+J]+12trace[GT(x(t),x(t-τ(t)))∂2V1(x(t),t)∂x2G(x(t),x(t-τ(t)))]=αeαtxT(t)Px(t)+2eαtxT(t)P[-Dx(t)+AF(x(t))+BF(x(t-τ(t)))+C∫t-h(t)tF(x(s))ds+J]+eαttrace[GT(x(t),x(t-τ(t)))PG(x(t),x(t-τ(t)))].
From Assumption (H3), Lemma 3, and (19), we can get
(20)ℒV1(t)≤2αeαtxT(t)Px(t)+2eαtxT(t)P[-Dx(t)+AF(x(t))+BF(x(t-τ(t)))+C∫t-h(t)tF(x(s))ds]+eαtα-1JTPJ+eαt[xT(t)U1TPU1x(t)+xT(t-τ(t))U2TPU2x(t-τ(t))+2xT(t)U1TPU2x(t-τ(t))].
Similarly, calculating the operator ℒVi (i=2,3,4,5), along the trajectory of system (2), one can get
(21)ℒV2=eαtxT(t)Qx(t)-(1-τ˙(t))eα(t-τ(t))xT(t-τ(t))Qx(t-τ(t))≤eαtxT(t)Qx(t)-(1-μ)eα(t-τ)xT(t-τ(t))Qx(t-τ(t))≤eαtxT(t)Qx(t)-g(μ)eαtxT(t-τ(t))Qx(t-τ(t)),ℒV3≤eαtFT(x(t))RF(x(t))-k(ν)eαtFT(x(t-τ(t)))RF(x(t-τ(t))),ℒV4=τ[τ(t)eαtxT(t)Sx(t)-(1-τ˙(t))∫t-τ(t)teαsxT(s)Sx(s)ds]≤τ2eαtxT(t)Sx(t)-τg(μ)eαt∫t-τ(t)txT(s)Sx(s)ds,ℒV5≤h2eαtFT(x(t))ZF(x(t))-hk(ν)eαt∫t-h(t)tFT(x(s))ZF(x(s))ds.
According to Lemma 4, the following inequalities can be obtained:
(22)∫t-τ(t)txT(s)Sx(s)ds≥1τ(∫t-τ(t)tx(s)ds)TS(∫t-τ(t)tx(s)ds),∫t-h(t)tFT(x(s))ZF(x(s))ds≥1h(∫t-h(t)tF(x(s))ds)TZ(∫t-h(t)tF(x(s))ds).
Then, we can get
(23)ℒV4≤τ2eαtxT(t)Sx(t)-g(μ)eαt(∫t-τ(t)tx(s)ds)TS(∫t-τ(t)tx(s)ds),ℒV5≤h2eαtFT(x(t))ZF(x(t))-k(ν)eαt(∫t-h(t)tF(x(s))ds)TZ(∫t-h(t)tF(x(s))ds).
On the other hand, it follows from Assumption (H2) that we can easily obtain
(24)[fi(xi(t))-fi(0)-Lixi(t)]×[fi(xi(t))-fi(0)-lixi(t)]≤0,[fi(xi(t-τ(t)))-fi(0)-Lixi(t-τ(t))]×[fi(xi(t-τ(t)))-fi(0)-lixi(t-τ(t))]≤0,i=1,2,…,n.
Then we obtain
(25)0≤δ1=-2∑i=1ny1i[fi(xi(t))-fi(0)-Lixi(t)]×[fi(xi(t))-fi(0)-lixi(t)],0≤δ2=-2∑i=1ny2i[fi(xi(t-τ(t)))-fi(0)-Lixi(t-τ(t))]×[fi(xi(t-τ(t)))-fi(0)-lixi(t-τ(t))],δ1=-2∑i=1ny1i[fi(xi(t))-Lixi(t)][fi(xi(t))-lixi(t)]-2∑i=1ny1ifi2(0)+2∑i=1ny1ifi(0)[2fi(xi(t))-(Li+li)xi(t)]≤-2∑i=1ny1i[fi(xi(t))-Lixi(t)][fi(xi(t))-lixi(t)]+∑i=1n[αfi2(xi(t))+4α-1fi2(0)y1i2+αxi2(t)+α-1fi2(0)y1i2(Li+li)2].
Similarly, one can get
(26)δ2≤-2∑i=1ny2i[fi(xi(t-τ(t)))-Lixi(t-τ(t))]×[fi(xi(t-τ(t)))-lixi(t-τ(t))]+∑i=1n[αfi2(xi(t-τ(t)))+4α-1fi2(0)y2i2+αxi2(t-τ(t))+α-1fi2(0)y2i2(Li+li)2].
Denote
(27)ζ(t)=[xT(t),xT(t-τ(t)),FT(x(t)),FT(x(t-τ(t))),(∫t-τ(t)tx(s)ds)T,(∫t-h(t)tF(x(s))ds)T]T,
and combining (16)–(26), we can get
(28)dV=ℒV1dt+ℒV2dt+ℒV3dt+ℒV4dt+ℒV5dt+2eαtxT(t)PG(x(t),x(t-τ(t)))dw(t)≤eαtζT(t)Δ1ζ(t)dt+eαt𝒩1dt+2eαtxT(t)PG(x(t),x(t-τ(t)))dw(t),
where
(29)𝒩1=α-1JTPJ+∑i=1n[4α-1fi2(0)y1i2+α-1fi2(0)y1i2(Li+li)2+4α-1fi2(0)y2i2+α-1fi2(0)y2i2(Li+li)2].
Integrating both sides of (28) over the interval [t0,t] yields
(30)Keαt∥x(t)∥2≤V(x(t))≤V(x(t0))+α-1eαt𝒩1+∫t0t2eαsxT(s)PG(x(s),x(s-τ(s)))dw(s),
where K=λmin(P).
Taking expectations on both sides (the Itô integral has zero mean), one obtains
(31)E{V(x(t))}≤E{V(x(t0))}+E{α-1eαt𝒩1},
which implies
(32)E∥x(t)∥2≤(e-αtE{V(x(t0))}+α-1𝒩1)/K.
If one chooses B~=(1+α-1𝒩1)/K>0, then, for any initial value φ∈𝒞ℱ0, there is t′=t′(φ)>0 such that e-αtE{V(x(t0))}≤1 for all t≥t′. According to Definition 1, we have E∥x(t,φ)∥2≤B~ for all t≥t′; that is, system (2) is mean-square ultimately bounded. This completes the proof.
Theorem 6.
If all of the conditions of Theorem 5 hold, then there exists an attractor 𝔸B~={φ∈𝒞ℱ0∣E∥φ(s)∥2≤B~} for the solutions of system (2).
Proof.
If one chooses B~=(1+α-1𝒩1)/K>0, Theorem 5 shows that, for any φ, there is t′>0 such that E∥x(t,φ)∥2≤B~ for all t≥t′. Let 𝔸B~={φ∈𝒞ℱ0∣E∥φ(s)∥2≤B~}. Clearly, 𝔸B~ is closed, bounded, and invariant. Furthermore, limsupt→∞infy∈𝔸B~∥x(t,φ)-y∥=0. Therefore, 𝔸B~ is an attractor for the solutions of system (2). This completes the proof.
Corollary 7.
If, in addition to the conditions of Theorem 5, J=0, G(t,0,0)=0, and fi(0)=0 for all i=1,2,…,n, then system (2) has a trivial solution x(t)≡0, and this trivial solution is mean-square exponentially stable.
Proof.
If J=0 and fi(0)=0(i=1,2,…,n), then 𝒩1=0, and it is obvious that system (2) has a trivial solution x(t)≡0. From Theorem 5, one has
(33)E∥x(t,φ)∥2≤K*e-αt,∀φ,
where K*=E{V(x(t0))}/K. Therefore, the trivial solution of system (2) is mean-square exponentially stable. This completes the proof.
Building on Theorem 5–Corollary 7, we now present conditions for the mean-square ultimate boundedness of the switched system (3) by applying the average dwell time method.
Theorem 8.
If there are constants μ, ν such that τ˙(t)≤μ, h˙(t)≤ν, we denote g(μ), k(ν) as
(34)g(μ)={(1-μ)e-ατ,μ≤1;1-μ,μ≥1,k(ν)={(1-ν)e-αh,ν≤1;1-ν,ν≥1.
For a given constant α>0, if there exist positive definite matrices Qi, Ri, Si, Zi, U1i, U2i, Pi=diag(pi1,pi2,…,pin), Yi=diag(Yi1,Yi2,…,Yin), i=1,2, such that the following condition holds:
(35)Δi1=[Φi11 Φi12 Φi13 Φi14 0 Φi16; * Φi22 0 Φi24 0 0; * * Φi33 0 0 0; * * * Φi44 0 0; * * * * Φi55 0; * * * * * Φi66]<0,
where
(36)Φi11=2αPi-2DiPi+Qi+τ2Si-2Σ1Y1+αI+U1iTPiU1i,Φi12=U1iTPiU2i,Φi13=PiAi+Σ2Y1,Φi14=PiBi,Φi16=PiCi,Φi22=-g(μ)Qi-2Σ1Y2+αI+U2iTPiU2i,Φi24=Σ2Y2,Φi33=Ri+h2Zi-2Y1+αI,Φi44=-k(ν)Ri-2Y2+αI,Φi55=-g(μ)Si,Φi66=-k(ν)Zi.
Then system (3) is mean-square ultimately bounded for any switching signal with average dwell time satisfying
(37)Tα>Tα*=lnℛmaxα,
where ℛmax=maxk∈Σ,1≤i≤n{ℛik}.
Proof.
Define the Lyapunov functional candidate
(38)Vσ(t)=eαtxT(t)Pσ(t)x(t)+∫t-τ(t)txT(s)Qσ(t)eαsx(s)ds+∫t-h(t)tFT(x(s))Rσ(t)eαsF(x(s))ds+τ∫-τ(t)0∫t+θtxT(s)Sσ(t)eαsx(s)dsdθ+h∫-h(t)0∫t+θtFT(x(s))Zσ(t)eαsF(x(s))dsdθ.
From (16) and (32), we have the following result:
(39)E∥x(t)∥2≤ℛ0E∥x(t0)∥2e-α(t-t0)/K+Λ/K,
where Λ=α-1𝒩1, ℛ0 is a positive constant.
When t∈[tk,tk+1), the ikth subsystem is active; from (39) and Theorem 5, we can get
(40)E∥x(t)∥2≤ℛikE∥x(tk)∥2e-α(t-tk)/Kik+Λ/Kik=H¯ikE∥x(tk)∥2e-α(t-tk)+J¯ik,
where ℛik is a positive constant, Kik=λmin(Pi), H¯ik=ℛik/Kik, J¯ik=Λ/Kik.
Since the system state is continuous, it follows from (40) that
(41)E∥x(t)∥2≤ℛikE∥x(tk)∥2e-α(t-tk)/Kik+Λ/Kik=H¯ikE∥x(tk)∥2e-α(t-tk)+J¯ik≤⋯≤e∑v=0klnH¯iv-α(t-t0)E∥x(t0)∥2+[H¯ike-α(t-tk)J¯ik+H¯ikH¯ik-1e-α(t-tk-1)J¯ik-1+H¯ikH¯ik-1H¯ik-2e-α(t-tk-2)J¯ik-2+⋯+H¯ikH¯ik-1H¯ik-2⋯H¯i1e-α(t-t1)J¯i1+J¯ik]≤e(k+1)lnH¯max-α(t-t0)E∥x(t0)∥2+[H¯maxkJ¯max+H¯maxk-1J¯max+H¯maxk-2J¯max+⋯+H¯max2J¯max+H¯maxJ¯max+J¯max]≤H¯maxeklnH¯max-α(t-t0)E∥x(t0)∥2+J¯max[H¯maxk+1-1]/[H¯max-1]≤H¯maxelnH¯maxNσ(t0,t)-α(t-t0)E∥x(t0)∥2+J¯max[H¯maxk+1-1]/[H¯max-1]≤ℛmaxeN0lnℛmax-(α-(lnℛmax/Tα))(t-t0)E∥x(t0)∥2/Kmink+1+Λ[(ℛmaxn+1/Kminn+1)-1]/(ℛmax-Kmin),
where Kmin=minik{Kik}, H¯max=maxik{H¯ik}.
If one chooses B~=1/Kmin+Λ[(ℛmaxn+1/Kminn+1)-1]/(ℛmax-Kmin)>0, then, for any initial value φ∈𝒞ℱ0, there is t′=t′(φ)>0 such that ℛmaxeN0lnℛmax-(α-(lnℛmax/Tα))(t-t0)E∥x(t0)∥2≤1 for all t≥t′. According to Definition 1, we have E∥x(t,φ)∥2≤B~ for all t≥t′; that is, system (3) is mean-square ultimately bounded, and the proof is completed.
Remark 9.
In this paper, we construct two piecewise functions g(μ) and k(ν) to remove the restrictive conditions μ<1 and ν<1 from the results, which reduces the conservatism of the obtained results and also avoids extra computational complexity.
Remark 10.
Condition (35) is given in the form of linear matrix inequalities, which is less conservative than an algebraic formulation. Furthermore, by using the MATLAB LMI toolbox, the feasibility of (35) can be checked straightforwardly without tuning any parameters.
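As an illustrative stand-in for this workflow (not the paper's full block LMI (35), which the authors check with the MATLAB LMI toolbox), the sketch below certifies the simplest Lyapunov-type LMI ATP+PA<0 for a hypothetical stable matrix A, by solving the Lyapunov equation with SciPy and confirming that P is positive definite.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stand-in illustration for Remark 10: certify A^T P + P A < 0 for a
# hypothetical Hurwitz-stable A by solving A^T P + P A = -I and checking
# that the solution P is positive definite.
A = np.array([[-1.0, 0.3],
              [0.2, -0.8]])
P = solve_continuous_lyapunov(A.T, -np.eye(2))   # solves A^T P + P A = -I
eigs = np.linalg.eigvalsh((P + P.T) / 2)         # eigenvalues of symmetrized P
feasible = eigs.min() > 0
print(feasible)   # True: A is Hurwitz, so this LMI is feasible
```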
Theorem 11.
If all of the conditions of Theorem 8 hold, then there exists an attractor 𝔸B~′ for the solutions of system (3), where 𝔸B~′={φ∈𝒞ℱ0∣E∥φ(s)∥2≤B~}.
Proof.
If one chooses B~=1/Kmin+Λ[(ℛmaxn+1/Kminn+1)-1]/(ℛmax-Kmin)>0, Theorem 8 shows that, for any φ, there is t′>0 such that E∥x(t,φ)∥2≤B~ for all t≥t′. Let 𝔸B~′={φ∈𝒞ℱ0∣E∥φ(s)∥2≤B~}. Clearly, 𝔸B~′ is closed, bounded, and invariant. Furthermore, limsupt→∞infy∈𝔸B~′∥x(t,φ)-y∥=0. Therefore, 𝔸B~′ is an attractor for the solutions of system (3). This completes the proof.
Corollary 12.
If, in addition to the conditions of Theorem 8, J=0, G(t,0,0)=0, and fi(0)=0 for all i=1,2,…,n, then system (3) has a trivial solution x(t)≡0, and this trivial solution is mean-square exponentially stable.
Proof.
If J=0 and fi(0)=0 for all i=1,2,…,n, then it is obvious that system (3) has a trivial solution x(t)≡0. From Theorem 8, one has
(42)E∥x(t,φ)∥2≤K~*e-(α-(lnℛmax/Tα))(t-t0),∀φ,
where K~*=ℛmaxeN0lnℛmaxE∥x(t0)∥2/Kmink+1. Therefore, the trivial solution of system (3) is mean-square exponentially stable. This completes the proof.
Remark 13.
Assumption (H2) is less conservative than that in [17] since the constants lj and Lj are allowed to be positive, negative, or zero. Hence, the resulting activation functions f(·) may be nonmonotonic and are more general than the usual forms |fj(u)|≤Kj|u|, Kj>0, j=1,2,…,n. Moreover, unlike the bounded case, the existence of an equilibrium point of the switched system (3) is not guaranteed under assumption (H2). For this reason, investigating the asymptotic behavior (ultimate boundedness and the existence of an attractor) of a switched system with mixed delays is more complex and challenging.
Remark 14.
In this paper, the chatter bound N0 is a positive integer, which is more practical in significance and can include the model N0=0 in [16, 25, 26] as a special case.
Remark 15.
If there is only one subsystem, that is, no switching occurs, the switched delay system (3) reduces to the usual stochastic CNN with delays; in this case, the attractor and ultimate boundedness are discussed in [17]. When U1=U2=0, the model in our paper becomes a switched CNN with mixed delays; to the best of our knowledge, there are no published results in this respect yet. Thus, the main results of this paper are novel. Moreover, when uncertainties appear in the switched stochastic CNN system (3), the corresponding results can be obtained by applying a method similar to that in [25].
4. Illustrative Examples
In this section, we give a numerical example to demonstrate the validity and effectiveness of our results. Consider the switched stochastic cellular neural network (3) with two subsystems, fi(xi(t))=0.5tanh(xi(t)), fi(0)=0 (i=1,2), τ(t)=0.25sin2(t), h(t)=0.3sin2(t), and the connection weight matrices as follows:
(43)A1=(0.3 0.1; 0.2 0.2),B1=(0.2 0; 0.3 0.5),C1=(0.2 -0.1; 0.3 0.1),U11=(0.1 0; -0.1 0.2),U21=(0.2 0.1; 0 0.1),A2=(0.2 0.4; 0.1 0.3),B2=(0.1 0; -0.1 0.2),C2=(0.3 0.2; 0.1 0.2),U12=(0.2 0.1; 0 0.3),U22=(0.1 0; 0.2 0.1).
From assumptions (H1)–(H3), we obtain di=1, li=0, Li=0.5 (i=1,2), τ=0.25, h=0.3, μ=0.5, and ν=0.6.
Therefore, for α=0.5, by solving LMIs (35), we get
(44)P1=(1.4968 0; 0 1.4851),Q1=(1.6073 -0.0528; -0.0528 1.4567),R1=(1.8642 0.4698; 0.4698 1.5241),S1=(2.7467 0.0225; 0.0225 1.9941),Z1=(5.4373 0.0644; 0.0644 4.5969),P2=(1.4316 0; 0 1.4528),Q2=(1.6541 0.0229; 0.0229 1.8391),R2=(1.0837 0.4540; 0.4540 1.2710),S2=(1.6888 0.4356; 0.4356 1.6165),Z2=(4.5736 0.5698; 0.5698 4.4524).
Using (37), we obtain the average dwell time Tα*=1.3445.
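The reported solution (44) can be sanity-checked numerically: each matrix should be positive definite, and by (37) the value Tα* = 1.3445 with α = 0.5 corresponds to ℛmax = exp(αTα*). The script below performs both checks; the matrix data are copied from (44).

```python
import numpy as np

# Check that the LMI solver output (44) is positive definite, and recover
# R_max from the reported minimum average dwell time via (37):
# T_alpha^* = ln(R_max)/alpha, so R_max = exp(alpha * T_alpha^*).
mats = {
    "P1": [[1.4968, 0], [0, 1.4851]],
    "Q1": [[1.6073, -0.0528], [-0.0528, 1.4567]],
    "R1": [[1.8642, 0.4698], [0.4698, 1.5241]],
    "S1": [[2.7467, 0.0225], [0.0225, 1.9941]],
    "Z1": [[5.4373, 0.0644], [0.0644, 4.5969]],
    "P2": [[1.4316, 0], [0, 1.4528]],
    "Q2": [[1.6541, 0.0229], [0.0229, 1.8391]],
    "R2": [[1.0837, 0.4540], [0.4540, 1.2710]],
    "S2": [[1.6888, 0.4356], [0.4356, 1.6165]],
    "Z2": [[4.5736, 0.5698], [0.5698, 4.4524]],
}
for name, M in mats.items():
    eigs = np.linalg.eigvalsh(np.array(M))
    assert eigs.min() > 0, name                  # positive definiteness

alpha, Ta_star = 0.5, 1.3445
R_max = np.exp(alpha * Ta_star)
print(f"R_max ≈ {R_max:.4f}")
```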
5. Conclusions
In this paper, we studied switched stochastic cellular neural networks with discrete time-varying delays and distributed time-varying delays. With the help of the average dwell time approach, novel multiple Lyapunov-Krasovskii functionals, and some inequality techniques, we obtained new sufficient conditions guaranteeing mean-square ultimate boundedness, the existence of an attractor, and mean-square exponential stability. A numerical example was given to demonstrate the results. Furthermore, the derived conditions are presented in the form of LMIs, which are less conservative than algebraic formulations and can easily be checked in practice using the LMI toolbox in MATLAB.
Acknowledgments
The authors are extremely grateful to Prof. Zhichun Yang and the anonymous reviewers for their constructive and valuable comments, which have contributed much to the improvement of this paper. This work was jointly supported by the National Natural Science Foundation of China under Grant no. 11101053, the Key Project of Chinese Ministry of Education under Grant no. 211118, the Excellent Youth Foundation of Educational Committee of Hunan Provincial no. 10B002, the Hunan Provincial NSF no. 11JJ1001, National Science and technology Major Projects of China no. 2012ZX10001001-006, the Scientific Research Funds of Hunan Provincial Science and Technology Department of China no. 2012SK3096.
References
[1] L. O. Chua and L. Yang, "Cellular neural networks: theory," 1988, vol. 35, no. 10, pp. 1257–1272.
[2] L. O. Chua and L. Yang, "Cellular neural networks: applications," 1988, vol. 35, no. 10, pp. 1273–1290.
[3] D. Liu and A. Michel, "Cellular neural networks for associative memories," 1993, vol. 40, pp. 119–121.
[4] H.-C. Chen, Y.-C. Hung, C.-K. Chen, and T.-L. Liao, "Image-processing algorithms realized by discrete-time cellular neural networks and their circuit implementations," 2006, vol. 29, no. 5, pp. 1100–1108.
[5] P. Venetianer and T. Roska, "Image compression by delayed CNNs," 1998, vol. 45, pp. 205–215.
[6] T. Roska, T. Boros, P. Thiran, and L. Chua, "Detecting simple motion using cellular neural networks," in Proceedings of the International Workshop on Cellular Neural Networks Applications, 1990, pp. 127–138.
[7] T. Roska and L. Chua, "Cellular neural networks with nonlinear and delay-type template," 1992, vol. 20, pp. 469–481.
[8] F. Wen and X. Yang, "Skewness of return distribution and coefficient of risk premium," 2009, vol. 22, no. 3, pp. 360–371.
[9] F. Wen, Z. Li, C. Xie, and S. David, "Study on the fractal and chaotic features of the Shanghai composite index," 2012, vol. 20, no. 2, pp. 133–140.
[10] S. Haykin, Neural Networks, Prentice-Hall, Englewood Cliffs, NJ, USA, 1994.
[11] C. Huang and J. Cao, "Almost sure exponential stability of stochastic cellular neural networks with unbounded distributed delays," 2009, vol. 72, pp. 3352–3356.
[12] C. Huang and J. Cao, "On pth moment exponential stability of stochastic Cohen-Grossberg neural networks with time-varying delays," 2010, vol. 73, pp. 986–990.
[13] R. Rakkiyappan and P. Balasubramaniam, "Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays," 2008, vol. 198, no. 2, pp. 526–533.
[14] H. Zhao, N. Ding, and L. Chen, "Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays," 2009, vol. 40, no. 4, pp. 1653–1659.
[15] B. Li and D. Xu, "Mean square asymptotic behavior of stochastic neural networks with infinitely distributed delay," 2009, vol. 72, pp. 3311–3317.
[16] L. Wan, Q. Zhou, P. Wang, and J. Li, "Ultimate boundedness and an attractor for stochastic Hopfield neural networks with time-varying delays," 2012, vol. 13, no. 2, pp. 953–958.
[17] L. Wan and Q. Zhou, "Attractor and ultimate boundedness for stochastic cellular neural networks with delays," 2011, vol. 12, no. 5, pp. 2561–2566.
[18] W. Yu, J. Cao, and W. Lu, "Synchronization control of switched linearly coupled neural networks with delay," 2010, vol. 73, pp. 858–866.
[19] L. Wu, Z. Feng, and W. Zheng, "Exponential stability analysis for delayed neural networks with switching parameters: average dwell time approach," 2010, vol. 21, pp. 1396–1407.
[20] C. W. Wu and L. O. Chua, "A simple way to synchronize chaotic systems with applications to secure communication systems," 1993, vol. 3, pp. 1619–1627.
[21] W. Yu, J. Cao, and K. Yuan, "Synchronization of switched system and application in communication," 2008, vol. 372, pp. 4438–4445.
[22] C. Maia and M. Goncalves, "Application of switched adaptive system to load forecasting," 2008, vol. 78, pp. 721–727.
[23] H. Wu, X. Liao, W. Feng, S. Guo, and W. Zhang, "Robust stability analysis of uncertain systems with two additive time-varying delay components," 2009, vol. 33, no. 12, pp. 4345–4353.
[24] X. Yang, C. Huang, and Q. Zhu, "Synchronization of switched neural networks with mixed delays via impulsive control," 2011, vol. 44, no. 10, pp. 817–826.
[25] J. Lian and K. Zhang, "Exponential stability for switched Cohen-Grossberg neural networks with average dwell time," 2011, vol. 63, no. 3, pp. 331–343.
[26] T.-F. Li, J. Zhao, and G. M. Dimirovski, "Stability and L2-gain analysis for switched neutral systems with mixed time-varying delays," 2011, vol. 348, no. 9, pp. 2237–2256.
[27] C. Huang and J. Cao, "Convergence dynamics of stochastic Cohen-Grossberg neural networks with unbounded distributed delays," 2011, vol. 22, pp. 561–572.
[28] J. Hespanha and A. Morse, "Stability of switched systems with average dwell time," in Proceedings of the 38th IEEE Conference on Decision and Control, 1999, pp. 2655–2660.
[29] K. Gu, "An integral inequality in the stability problem of time-delay systems," in Proceedings of the 39th IEEE Conference on Decision and Control, 2000, pp. 2805–2810.