An innovative stability analysis approach for a class of discrete-time stochastic neural networks (DSNNs) with time-varying delays is developed. By constructing a novel piecewise Lyapunov-Krasovskii functional candidate, a new sum inequality is presented to deal with the sum terms without ignoring any useful terms. No model transformation is needed, and free-weighting matrices are introduced to reduce the conservatism of the derived results, so improved computational efficiency can be expected. Numerical examples and simulations are also given to show the effectiveness and reduced conservatism of the proposed criteria.
1. Introduction
In the past decades, neural networks (NNs) have attracted considerable attention due to their potential applications in associative memory, pattern recognition, optimization, signal processing, and so forth [1–3]. It is well known that stability is one of the preconditions in the design of neural networks. For example, if a neural network is employed to solve an optimization problem, it is highly desirable for the NN to have a unique globally stable equilibrium. Therefore, stability analysis of NNs is a very important issue and has been studied extensively [4–11]. It is worth noting that most NNs have been analyzed using continuous-time models. However, when a continuous-time network is implemented for computer-based simulation, experimentation, or computation, it is necessary to discretize it to formulate a discrete-time system. Under mild or no restrictions on the discretization step size, the discrete-time analogue inherits the dynamic characteristics of its continuous-time counterpart to a certain extent and also preserves other properties of the continuous-time system.
On the other hand, as a result of the finite switching speed of amplifiers and the inherent communication time of neurons, time delays are frequently encountered in electronic implementations of neural networks. Time delays can evidently change the dynamic behaviors of neural networks and are very often a source of instability, oscillation, and poor performance. Therefore, stability analysis of neural networks with time delays has been studied extensively during the past years; see [12–14] and the references therein. In practice, when modeling real neural systems, stochastic disturbances are probably among the main sources of undesirable behaviors of neural networks. It has been proved that certain stochastic inputs can make a neural network unstable. Therefore, it is necessary to take into account both time delays and stochastic external fluctuations when modeling neural networks.
Recently, stochastic discrete-time systems have been studied extensively. In [15], a necessary and sufficient condition was presented for the existence of H2/H∞ control, which transforms the H2/H∞ controller design into the solution of coupled matrix-valued equations for discrete-time systems with state- and disturbance-dependent noise. In [16], the robust filtering analysis and synthesis of nonlinear stochastic systems with state- and exogenous-disturbance-dependent noise are presented. Instead of solving Hamilton-Jacobi inequalities, a more convenient algorithm for practical applications is given by solving several linear matrix inequalities, and a few examples show the effectiveness of the proposed methods. In [17], based on a nonlinear stochastic bounded real lemma and an exponential estimate formula, an exponential mean square H∞ filter design for nonlinear stochastic time-delay systems is presented via the solution of a Hamilton-Jacobi inequality.
However, as pointed out in some previous articles, discretization cannot always preserve the dynamics of the continuous-time system, even for a small sampling period. Therefore, the study of the dynamics of discrete-time neural networks is crucially needed; see [18–22] and the references therein.
Based on the discussion above, the problem of stability analysis for discrete stochastic neural networks (DSNNs) with time-varying delays has been investigated recently. In [23], the problem of exponential stability analysis for uncertain discrete-time stochastic neural networks with time-varying delays is investigated by utilizing the Lyapunov-Krasovskii method and converting the addressed stability analysis problem into a convex optimization problem. In [24], by combining the free-weighting-matrix method with a new Lyapunov-Krasovskii functional candidate, a delay-dependent stability condition has been obtained, which proves to be less conservative than that of [23]. In [25], a new Lyapunov-Krasovskii functional candidate and the delay-partitioning idea are used to solve the problem of asymptotic stability analysis in the mean square for a class of DSNNs with time-varying delay; the stability analysis problem is converted into a feasibility problem of LMIs, and a numerical example is provided to show the usefulness of the proposed condition.
In [26], the global exponential stability problem for a class of discrete-time uncertain stochastic neural networks with time-varying delays is studied, and an improved result is obtained.
In [27], the midpoint of the delay variation interval is introduced, and the interval is divided into two subintervals; a new Lyapunov-Krasovskii functional candidate is constructed, the variation over the two subintervals is checked by LMIs, and some novel delay-dependent stability criteria for the addressed neural networks are derived with less conservatism.
In [28], a new Lyapunov functional candidate with the idea of delay partitioning is introduced; the effects of both the variation range and the distribution probability of the time delay are taken into account at the same time; the time-varying delay is characterized by introducing a Bernoulli stochastic variable; and the distribution probability of the time delay is translated into parameter matrices of the transferred DSNN model, so the conservatism has been reduced further. However, one of the main issues in such stability criteria is how to reduce the possible conservatism induced by the introduction of the Lyapunov-Krasovskii functional candidate when dealing with the time delay, which leaves much room for further research using the latest analysis techniques.
In this paper, we develop an innovative stability analysis approach for a class of discrete-time stochastic neural networks with time-varying delays. By constructing a novel piecewise Lyapunov-Krasovskii functional, a new sum inequality is presented to deal with the sum terms without ignoring any useful terms, and no model transformation is needed in the derivation of our results. All results are expressed in the form of LMIs, whose feasibility can be easily checked by using the numerically efficient Matlab LMI toolbox, and no tuning of parameters is required, so improved computational efficiency can be expected. Numerical examples are also given to show the effectiveness and reduced conservatism of the proposed criteria.
Notation. Throughout this paper, if not explicitly stated, matrices are assumed to have compatible dimensions. The notation M>(≥,<,≤)0 means that the symmetric matrix M is positive definite (positive semidefinite, negative definite, negative semidefinite). The superscript T stands for the transpose of a matrix; the shorthand diag{⋯} denotes a block diagonal matrix; ∥·∥ represents the Euclidean norm for vectors or the spectral norm for matrices; and λM(A), λm(A) denote the maximal and minimal eigenvalues of a matrix A, respectively. I refers to an identity matrix of appropriate dimensions, 𝔼{·} stands for the mathematical expectation, and * denotes the symmetric terms in a matrix. Sometimes, the arguments of a function will be omitted in the analysis when no confusion can arise.
2. System Description
Consider the following n-neuron discrete stochastic neural network with time-varying delay:(1)x(k+1)=Ax(k)+Bf(x(k))+Wg(x(k-τ(k)))+σ(k,x(k),x(k-τ(k)))ω(k),
where x(k)=[x1(k),x2(k),…,xn(k)]T∈ℝn is the neuron state vector; f(x(k))=[f1(x1(k)),f2(x2(k)),…,fn(xn(k))]T∈ℝn and g(x(k))=[g1(x1(k)),g2(x2(k)),…,gn(xn(k))]T∈ℝn denote the neuron activation functions; A=diag{a1,…,an} with |ai|<1 (i=1,2,…,n); B and W are the connection weight matrix and the delayed connection weight matrix, respectively; τ(k) represents the transmission time-varying delay; σ:ℤ×ℝn×ℝn→ℝn is a continuous function; and ω(k) is a scalar Brownian motion defined on the complete probability space (Ω,ℱ,{ℱt}t≥0) with 𝔼{ω(k)}=0, 𝔼{ω2(k)}=1, and 𝔼{ω(i)ω(j)}=0 (i≠j).
For further discussion, we introduce the following assumptions and lemmas.
Assumption 1.
There exist two positive constants β1 and β2 such that
(2) σT(k,x(k),x(k-τ(k)))σ(k,x(k),x(k-τ(k))) ≤ β1xT(k)x(k) + β2xT(k-τ(k))x(k-τ(k)).
Assumption 2.
For i∈{1,2,…,n}, the neuron activation functions in the DSNNs in (1) satisfy
(3) li− ≤ (fi(x)−fi(y))/(x−y) ≤ li+, ∀x,y∈ℝ, x≠y, fi(0)=0, ϱi− ≤ (gi(x)−gi(y))/(x−y) ≤ ϱi+, ∀x,y∈ℝ, x≠y, gi(0)=0,
where li−, li+, ϱi−, ϱi+ are known constants.
Remark 3.
Assumption 2 on the activation functions has been widely used in many papers; see, for example, [16–21, 23–26].
Lemma 4 (see [27]).
For any symmetric constant matrix Q∈ℝn×n, Q>0, integers τm<τM, and the vector-valued function y(k)=x(k+1)−x(k), one has
(4) −(τM−τm)∑_{i=k−τM}^{k−τm−1} yT(i)Qy(i) ≤ [x(k−τm); x(k−τM)]T [−Q Q; Q −Q] [x(k−τm); x(k−τM)].
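As an illustrative numerical check of Lemma 4 (not part of the original derivation), the following Python sketch evaluates both sides of (4) for a randomly generated trajectory and a random Q > 0; all concrete dimensions and data are assumptions made for the test:

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau_m, tau_M, k = 3, 2, 6, 10

# Random symmetric positive definite Q and a random state trajectory x(.)
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)
x = {i: rng.standard_normal(n) for i in range(k - tau_M, k + 1)}
y = {i: x[i + 1] - x[i] for i in range(k - tau_M, k)}

# Left-hand side of (4): -(tau_M - tau_m) * sum_{i=k-tau_M}^{k-tau_m-1} y^T(i) Q y(i)
lhs = -(tau_M - tau_m) * sum(y[i] @ Q @ y[i] for i in range(k - tau_M, k - tau_m))

# Right-hand side of (4): the block quadratic form with [[-Q, Q], [Q, -Q]]
# collapses to -(x(k-tau_m) - x(k-tau_M))^T Q (x(k-tau_m) - x(k-tau_M))
d = x[k - tau_m] - x[k - tau_M]
rhs = -d @ Q @ d

assert lhs <= rhs + 1e-9  # discrete Jensen inequality holds
```

The collapse of the 2×2 block form to a single quadratic form follows because the telescoping sum of y(i) equals x(k−τm) − x(k−τM).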
Remark 5.
Lemma 4 is called the discrete Jensen inequality, which is a very important tool for obtaining the main results of this paper. The lemma has been used in the literature; see, for example, [28, 29].
Lemma 6 (see [30]).
For any constant matrix Ri∈ℝn×n, Ri=RiT≥0, a scalar λ, a positive integer time-varying delay τ(k)∈[τ1,τ3], and a vector function η:[−τ3,−τ1]→ℝn such that the following sums are well defined, the following inequality holds:
(5) Ω = −(τ3−τ1)∑_{θ=k−τ3}^{k−τ1−1} ηT(θ)Riη(θ) ≤ ξ1T(k)[1+π1]ℛiξ1(k) + ξ2T(k)[1+π2]ℛiξ2(k).
Furthermore, if
(6)λ+ξ1T(k)3ℛiξ1(k)+ξ2T(k)ℛiξ2(k)≤0,λ+ξ1T(k)ℛiξ1(k)+ξ2T(k)3ℛiξ2(k)≤0,
then
(7)λ+ξ1T(k)[1+π1]ℛiξ1(k)+ξ2T(k)[1+π2]ℛiξ2(k)≤0,
where
(8) π1 = −1, π2 = 0, when τ(k) = τ1; π1 = (τ3−τ(k))/(τ(k)−τ1), π2 = 1/π1, when τ1 < τ(k) < τ3; π1 = 0, π2 = −1, when τ(k) = τ3,
and ξ1(k)=[xT(k-τ1)xT(k-τ(k))]T, ξ2(k)=[xT(k-τ(k))xT(k-τ3)]T, η(k)=x(k+1)-x(k),
(9) ℛi = [−Ri Ri; Ri −Ri].
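To make the role of π1 and π2 concrete, the following Python sketch (an illustrative check of the bound in (5), not part of the original proof) evaluates Ω and the right-hand side of (5) for one randomly generated trajectory with τ1 < τ(k) < τ3; all concrete numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau1, tau3, k = 2, 1, 7, 20
tau_k = 4                                  # a delay value with tau1 < tau_k < tau3

M = rng.standard_normal((n, n))
R = M @ M.T + np.eye(n)                    # R = R^T > 0
x = {i: rng.standard_normal(n) for i in range(k - tau3, k + 1)}
eta = {i: x[i + 1] - x[i] for i in range(k - tau3, k)}

# Omega = -(tau3 - tau1) * sum_{theta=k-tau3}^{k-tau1-1} eta^T(theta) R eta(theta)
omega = -(tau3 - tau1) * sum(eta[t] @ R @ eta[t] for t in range(k - tau3, k - tau1))

# xi^T R_i xi with R_i = [[-R, R], [R, -R]] collapses to -(u - v)^T R (u - v)
q1 = -(x[k - tau1] - x[k - tau_k]) @ R @ (x[k - tau1] - x[k - tau_k])
q2 = -(x[k - tau_k] - x[k - tau3]) @ R @ (x[k - tau_k] - x[k - tau3])

pi1 = (tau3 - tau_k) / (tau_k - tau1)      # middle case of (8)
pi2 = 1.0 / pi1
bound = (1 + pi1) * q1 + (1 + pi2) * q2    # right-hand side of (5)

assert omega <= bound + 1e-9
```

Splitting the sum at k−τ(k) and applying Lemma 4 to each part gives exactly the (1+π1) and (1+π2) weights, which is why the check succeeds for any trajectory.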
Remark 7.
Lemma 6 is given and proved in [29]; it provides an effective method for reducing conservatism when studying the delay-dependent stability problem for discrete systems; see also the literature mentioned previously [30]. Our main study is based on Lemma 6. It is worth mentioning that if the terms ξ1T(k)π1ℛiξ1(k) and ξ2T(k)π2ℛiξ2(k) in the proof of (5) are directly ignored, then the following inequality can be derived:
(10)Ω≤ξ1T(k)ℛiξ1(k)+ξ2T(k)ℛiξ2(k).
Compared with the literature [12–18], no useful terms are ignored in (5); what is more, (5) provides a tighter bound for the sum terms than bounds based on (10). However, since (5) involves the time-varying items π1 and π2, it cannot be checked directly with the MATLAB LMI toolbox. To tackle this problem, the implication from (6) to (7) is used to transfer the time-varying matrix inequality into a set of solvable LMIs. That is, the additional information ξ1T(k)π1ℛiξ1(k) and ξ2T(k)π2ℛiξ2(k) discarded in (10) is effectively retained through (7), and less conservative results can be expected.
3. Main Results
For convenience of presentation, we use the following notations:
(11) Σ1 = diag{l1−l1+, l2−l2+, …, ln−ln+}, Σ2 = diag{(l1−+l1+)/2, (l2−+l2+)/2, …, (ln−+ln+)/2}, Σ3 = diag{ϱ1−ϱ1+, ϱ2−ϱ2+, …, ϱn−ϱn+}, Σ4 = diag{(ϱ1−+ϱ1+)/2, (ϱ2−+ϱ2+)/2, …, (ϱn−+ϱn+)/2}, ξ(k) = [xT(k), xT(k−τm), xT(k−τ0), xT(k−τ(k)), xT(k−τM), yT(k), fT(x(k)), gT(x(k−τ(k)))]T.
In this section, a new delay-dependent stability criterion is proposed for system (1) with a time-varying delay τ(k) satisfying τm ≤ τ(k) ≤ τM; sufficient conditions for stability are given by Theorem 8.
Theorem 8.
For given τm, τ0, and τM and diagonal matrices Σ1, Σ2, Σ3, and Σ4, the system (1) is exponentially stable if there exist a scalar λ*>0, diagonal matrices Λ1, Λ2, and symmetric matrices P>0, Qi>0, Ri>0 (i=1,2,3,4) such that the following LMIs (12), (13), and (14) hold for α=0,1:
(12) P < λ*I, (13) Ψ111(α) < 0, (14) Ψ112(α) < 0,
where
(15) Ψ111(α) = 2(1−α)Θ1 + 2αΘ2 + Θ1 + Θ2 + Θ3, Ψ112(α) = 2(1−α)Θ̄1 + 2αΘ̄2 + Θ̄1 + Θ̄2 + Θ̄3, in which Θ1, Θ2, Θ̄1, and Θ̄2 are 8×8 symmetric block matrices whose only nonzero blocks are, respectively, Θ1(2,2) = Θ1(4,4) = −R2, Θ1(2,4) = Θ1(4,2) = R2; Θ2(3,3) = Θ2(4,4) = −R2, Θ2(3,4) = Θ2(4,3) = R2; Θ̄1(3,3) = Θ̄1(4,4) = −R3, Θ̄1(3,4) = Θ̄1(4,3) = R3; Θ̄2(4,4) = Θ̄2(5,5) = −R3, Θ̄2(4,5) = Θ̄2(5,4) = R3, and
Θ3 = [Θ11 Θ12 Θ13 Θ14 Θ15 Θ16 Θ17 Θ18; * Θ22 0 0 0 Θ26 0 Θ28; * * Θ33 Θ34 Θ35 Θ36 0 Θ38; * * * Θ44 0 Θ46 Θ47 Θ48; * * * * Θ55 Θ56 0 Θ58; * * * * * Θ66 Θ67 Θ68; * * * * * * Θ77 Θ78; * * * * * * * Θ88],
with Θ11 = (τM−τm+1)R4 − R1 + ∑_{i=1}^{4}Qi − (1/τm)R1 − Λ1Σ1 + λ*β1I + U1(I−A) + (I−A)TU1T, Θ12 = R1 + U2(I−A), Θ13 = U3(I−A), Θ14 = U4(I−A), Θ15 = U5(I−A), Θ16 = U6(I−A) + U1, Θ17 = U7(I−A) − U1B + Λ1Σ2, Θ18 = U8(I−A) − U1W, Θ22 = −Q1 − (1/τm)R1, Θ26 = U2, Θ27 = −U2B, Θ28 = −U2W, Θ33 = Q2 − (1/(τM−τ0))R3, Θ34 = R2, Θ35 = (1/(τM−τ0))R3, Θ36 = U3, Θ37 = −U3B, Θ38 = −U3W, Θ44 = −R4 − Q3 − Λ2Σ3 + λ*β2I, Θ46 = U4, Θ47 = −U4B, Θ48 = −U4W + Λ2Σ4, Θ55 = −Q4 − (1/(τM−τ0))R3, Θ56 = U5, Θ57 = −U5B, Θ58 = −U5W, Θ66 = U6 + U6T + P + (1/τm)R1 + (1/(τ0−τm))R2 + (1/(τM−τ0))R3, Θ67 = −U6B, Θ68 = −U6W, Θ77 = −Λ1 − U7B − BTU7T, Θ78 = −U7W − BTU8T, Θ88 = −Λ2 − U8W − WTU8T, and
Θ̄3 = [Θ11 Θ12 Θ13 Θ14 Θ15 Θ16 Θ17 Θ18; * Θ̄22 Θ̄23 0 0 0 0 Θ28; * * Θ̄33 Θ34 Θ̄35 Θ36 0 Θ38; * * * Θ44 0 Θ46 Θ47 Θ48; * * * * Θ̄55 Θ56 0 Θ58; * * * * * Θ66 Θ67 Θ68; * * * * * * Θ77 Θ78; * * * * * * * Θ88],
where
(16) Θ̄22 = −Q1 − (1/τm)R1 − (1/(τ0−τm))R2, Θ̄23 = (1/(τ0−τm))R2, Θ̄33 = Q2 − (1/(τ0−τm))R2, Θ̄35 = 0, Θ̄55 = −Q4.
The other terms in Θ̄3 have the same expressions as those in Θ3.
Proof.
Define y(k)=x(k+1)-x(k),η(k)=[xT(k)yT(k)]T.
Construct the following Lyapunov-Krasovskii functional candidates:
(17)V(k)=∑i=14Vi(k),
where
(18) V1(k) = xT(k)Px(k), V2(k) = ∑_{i=k−τ(k)}^{k−1} xT(i)R4x(i) + ∑_{j=k+1−τM}^{k−τm} ∑_{i=j}^{k−1} xT(i)R4x(i), V3(k) = ∑_{i=k−τm}^{k−1} xT(i)Q1x(i) + ∑_{i=k−τ0}^{k−1} xT(i)Q2x(i) + ∑_{i=k−τ(k)}^{k−1} xT(i)Q3x(i) + ∑_{i=k−τM}^{k−1} xT(i)Q4x(i), V4(k) = ∑_{i=−τm}^{−1} ∑_{θ=k+i}^{k−1} yT(θ)R1y(θ) + ∑_{i=−τ0}^{−τm−1} ∑_{θ=k+i}^{k−1} yT(θ)R2y(θ) + ∑_{i=−τM}^{−τ0−1} ∑_{θ=k+i}^{k−1} yT(θ)R3y(θ),
where if (τM-τm)/2 is an integer, then τ0=(τM+τm)/2, else τ0=(τM+τm+1)/2.
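The rule for τ0 stated above amounts to rounding the midpoint of [τm, τM] up to the nearest integer; a minimal sketch:

```python
def tau0(tau_m, tau_M):
    """Midpoint of [tau_m, tau_M], rounded up when (tau_M - tau_m)/2 is not an integer."""
    if (tau_M - tau_m) % 2 == 0:
        return (tau_M + tau_m) // 2
    return (tau_M + tau_m + 1) // 2
```

For instance, tau0(2, 6) gives 4 (the exact midpoint), while tau0(2, 5) also gives 4 (the midpoint 3.5 rounded up).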
By calculating the difference of V(k) along the solutions of system (1) and taking the mathematical expectation, we have
(19)𝔼{ΔV(k)}=𝔼{∑i=14ΔVi(k)},
where
(20) 𝔼{ΔV1(k)} = 𝔼{xT(k+1)Px(k+1) − xT(k)Px(k)} = 𝔼{2xT(k)Py(k) + yT(k)Py(k) + σT(k,x(k),x(k−τ(k)))Pσ(k,x(k),x(k−τ(k)))}.
From Assumption 1 and (12), we can obtain that
(21) σT(k,x(k),x(k−τ(k)))Pσ(k,x(k),x(k−τ(k))) ≤ λ*(β1xT(k)x(k) + β2xT(k−τ(k))x(k−τ(k))).
So we can get that
(22) 𝔼{ΔV1(k)} ≤ 𝔼{2xT(k)Py(k) + yT(k)Py(k) + λ*(β1xT(k)x(k) + β2xT(k−τ(k))x(k−τ(k)))},
(23) 𝔼{ΔV2(k)} ≤ 𝔼{xT(k)R4x(k) − xT(k−τ(k))R4x(k−τ(k)) + ∑_{i=k+1−τM}^{k−τm} xT(i)R4x(i) + ∑_{j=k+1−τM}^{k−τm} (xT(k)R4x(k) − xT(j)R4x(j))} = 𝔼{(τM−τm+1)xT(k)R4x(k) − xT(k−τ(k))R4x(k−τ(k))},
(24) 𝔼{ΔV3(k)} = 𝔼{xT(k)(∑_{i=1}^{4}Qi)x(k) − xT(k−τm)Q1x(k−τm) − xT(k−τ0)Q2x(k−τ0) − xT(k−τ(k))Q3x(k−τ(k)) − xT(k−τM)Q4x(k−τM)},
(25) 𝔼{ΔV4(k)} = 𝔼{τm yT(k)R1y(k) − ∑_{j=k−τm}^{k−1} yT(j)R1y(j) + (τ0−τm)yT(k)R2y(k) − ∑_{j=k−τ0}^{k−τm−1} yT(j)R2y(j) + (τM−τ0)yT(k)R3y(k) − ∑_{j=k−τM}^{k−τ0−1} yT(j)R3y(j)}.
Now, we are in a position to prove that 𝔼{ΔV(k)}<0 holds both for τ0 ≤ τ(k) ≤ τM and for τm ≤ τ(k) ≤ τ0.
Case I (when τm ≤ τ(k) ≤ τ0). Using Lemma 4 and Lemma 6 to deal with the sum terms on the right-hand side of (25), we have
(26) −∑_{j=k−τm}^{k−1} yT(j)R1y(j) ≤ (1/τm)[x(k); x(k−τm)]T [−R1 R1; R1 −R1] [x(k); x(k−τm)],
(27) −∑_{j=k−τM}^{k−τ0−1} yT(j)R3y(j) ≤ (1/(τM−τ0))[x(k−τ0); x(k−τM)]T [−R3 R3; R3 −R3] [x(k−τ0); x(k−τM)],
(28) −∑_{j=k−τ0}^{k−τm−1} yT(j)R2y(j) ≤ (1/(τ0−τm))(ξ1T(k)[1+π̄1]ℛ2ξ1(k) + ξ3T(k)[1+π̄2]ℛ2ξ3(k)),
where
(29)ξ1(k)=[xT(k-τm)xT(k-τ(k))]T,ξ3(k)=[xT(k-τ0)xT(k-τ(k))]T.
At the same time, for any matrix U of appropriate dimensions, we have
(30) 𝔼{2ξT(k)U[(A−I)x(k) + Bf(x(k)) + Wg(x(k−τ(k))) + σ(k,x(k),x(k−τ(k)))ω(k) − y(k)]} = 0,
where
(31)U=[U1T,U2T,U3T,U4T,U5T,U6T,U7T,U8T]T.
In addition, it can be deduced from Assumption 2 that there exist two positive diagonal matrices Λi=diag{εi,1,εi,2,…,εi,n},(i=1,2) such that
(32) 0 ≤ −∑_{i=1}^{n} ε1,i[fi(xi(k)) − li−xi(k)][fi(xi(k)) − li+xi(k)] − ∑_{i=1}^{n} ε2,i[gi(xi(k−τ(k))) − ϱi−xi(k−τ(k))][gi(xi(k−τ(k))) − ϱi+xi(k−τ(k))] = −xT(k)Λ1Σ1x(k) + 2fT(x(k))Λ1Σ2x(k) − fT(x(k))Λ1f(x(k)) − xT(k−τ(k))Λ2Σ3x(k−τ(k)) + 2gT(x(k−τ(k)))Λ2Σ4x(k−τ(k)) − gT(x(k−τ(k)))Λ2g(x(k−τ(k))).
By substituting (22)–(25) into (19), adding (30) and (32) to the right-hand side of (19), and using (26)–(28), we can get that
(33) 𝔼{ΔV(k)} ≤ 𝔼{λ1 + (1/(τ0−τm))(ξ1T(k)[1+π̄1]ℛ2ξ1(k) + ξ3T(k)[1+π̄2]ℛ2ξ3(k))},
where λ1 = ξT(k)Θ3ξ(k), π̄1 = (τ0−τ(k))/(τ(k)−τm), and π̄2 = 1/π̄1,
(34) ξ(k) = [xT(k), xT(k−τm), xT(k−τ0), xT(k−τ(k)), xT(k−τM), yT(k), fT(x(k)), gT(x(k−τ(k)))]T.
So (12) and (13) (the latter taken at α = 0 and α = 1) imply that
(35) λ1 + ξ1T(k)3ℛ2ξ1(k) + ξ3T(k)ℛ2ξ3(k) ≤ 0, λ1 + ξ1T(k)ℛ2ξ1(k) + ξ3T(k)3ℛ2ξ3(k) ≤ 0.
So, from Lemma 6, (35) guarantees that 𝔼{ΔV(k)} < 0; hence there exists a positive scalar c1 such that
(36) 𝔼{ΔV(k)} ≤ −c1𝔼{∥x(k)∥2}.
Case II (when τ0 ≤ τ(k) ≤ τM). Keeping (26), and utilizing Lemma 4 and Lemma 6 to deal with the R3 and R2 sum terms in (25), we can get
(37) −∑_{j=k−τm}^{k−1} yT(j)R1y(j) ≤ (1/τm)[x(k); x(k−τm)]T [−R1 R1; R1 −R1] [x(k); x(k−τm)], −∑_{j=k−τM}^{k−τ0−1} yT(j)R3y(j) ≤ (1/(τM−τ0))(ξ̄2T(k)[1+π̂1]ℛ3ξ̄2(k) + ξ̄3T(k)[1+π̂2]ℛ3ξ̄3(k)), −∑_{j=k−τ0}^{k−τm−1} yT(j)R2y(j) ≤ (1/(τ0−τm))[x(k−τm); x(k−τ0)]T [−R2 R2; R2 −R2] [x(k−τm); x(k−τ0)],
where
(38)ξ¯2(k)=[xT(k-τ0)xT(k-τ(k))]T,ξ¯3(k)=[xT(k-τ(k))xT(k-τM)]T.
So the whole difference of Lyapunov-Krasovskii functional candidate is given as follows:
(39) 𝔼{ΔV(k)} ≤ 𝔼{λ2 + (1/(τM−τ0))(ξ̄2T(k)[1+π̂1]ℛ3ξ̄2(k) + ξ̄3T(k)[1+π̂2]ℛ3ξ̄3(k))},
where λ2 = ξT(k)Θ̄3ξ(k), π̂1 = (τM−τ(k))/(τ(k)−τ0), and π̂2 = 1/π̂1. Then, for any τ(k) ∈ [τ0, τM], (14) and Lemma 6 guarantee that there exists a positive scalar c2 such that
(40) 𝔼{ΔV(k)} ≤ −c2𝔼{∥x(k)∥2}.
Combining Cases I and II, we can conclude that (12), (13) and (14) guarantee that
(41) 𝔼{ΔV(k)} ≤ −min{c1,c2}𝔼{∥x(k)∥2}.
Defining a new function 𝕍(k,x(k)) = μkV(k,x(k)) and then using an analysis similar to that of Theorem 1 in [18], we can easily conclude that the system (1) is globally exponentially stable in the mean square sense. This completes the proof.
Remark 9.
It can be seen from the proof above that no model transformation has been employed to deal with the sum terms and that none of the useful terms has been ignored.
If we neglect the effect of the stochastic term ω(k) in (1), then β1=β2=0 and (1) will reduce to
(42)x(k+1)=Ax(k)+Bf(x(k))+Wf(x(k-τ(k))).
For system (42), we can obtain the following corollary based on Theorem 8.
Corollary 10.
For given τm, τ0, and τM and diagonal matrices Σ1, Σ2, Σ3, and Σ4, the system (42) is exponentially stable if there exist diagonal matrices Λ1, Λ2 and symmetric positive definite matrices P>0, Qi>0, and Ri>0 (i=1,2,3,4) such that (12) and the following LMIs hold for α=0,1:
(43) Θ4 = [Θ11 Θ12 Θ13 Θ14 Θ15 Θ16 Θ17 Θ18; * Θ22 0 0 0 Θ26 0 Θ28; * * Θ33 Θ34 Θ35 Θ36 0 Θ38; * * * Θ44 0 Θ46 Θ47 Θ48; * * * * Θ55 Θ56 0 Θ58; * * * * * Θ66 Θ67 Θ68; * * * * * * Θ77 Θ78; * * * * * * * Θ88], Θ̄4 = [Θ̄11 Θ12 Θ13 Θ14 Θ15 Θ16 Θ17 Θ18; * Θ̄22 Θ̄23 0 0 Θ26 0 Θ28; * * Θ̄33 Θ34 Θ̄35 Θ36 0 Θ38; * * * Θ44 0 Θ46 Θ47 Θ48; * * * * Θ̄55 Θ56 0 Θ58; * * * * * Θ66 Θ67 Θ68; * * * * * * Θ77 Θ78; * * * * * * * Θ88], with Θ̄11 = (τM−τm+1)R4 − R1 + ∑_{i=1}^{4}Qi − (1/τm)R1 − Λ1Σ1 + U1(I−A) + (I−A)TU1T and Θ44 = −R4 − Q3 − Λ2Σ3.
Remark 11.
The system (42) has been studied by many researchers, many stability criteria have been proposed, and many improved analysis results have been obtained; see [20–22].
4. Numerical Example
In this section, three examples are given to demonstrate the benefits of the proposed method.
Example 1.
Consider the following stochastic discrete neural networks [25, 26]:
(44)x(k+1)=Ax(k)+Bf(x(k))+Wg(x(k-τ(k)))+σ(x(k),x(k-τ(k)))ω(k)
with the following parameters:
(45) A = [−0.1 0; 0 −0.2], B = [−0.1 0.1; −0.1 0.5], W = [0.05 0.1; 0.5 0.5], β1 = 0.2, β2 = 0.2, f1(s) = sin(0.2s) − 0.6cos(s), f2(s) = tanh(−0.4s), g1(s) = tanh(0.83s) + 0.6cos(s), g2(s) = tanh(0.2s).
So it can be verified that
(46)Σ1=diag{-0.64,0},Σ2=diag{0,-0.2},Σ3=diag{-0.6,0},Σ4=diag{0.2,0.1}.
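The diagonal entries above encode the sector bounds l1 ∈ [−0.8, 0.8], l2 ∈ [−0.4, 0], ϱ1 ∈ [−0.6, 1.0], and ϱ2 ∈ [0, 0.2], recovered from Σ1–Σ4 via the product li−li+ and midpoint (li−+li+)/2. The following Python sketch, an illustrative check rather than part of the example, samples difference quotients of the four activation functions and verifies that they stay inside these sectors:

```python
import numpy as np

f1 = lambda s: np.sin(0.2 * s) - 0.6 * np.cos(s)
f2 = lambda s: np.tanh(-0.4 * s)
g1 = lambda s: np.tanh(0.83 * s) + 0.6 * np.cos(s)
g2 = lambda s: np.tanh(0.2 * s)

def in_sector(fn, lo, hi, s=np.linspace(-20.0, 20.0, 401)):
    """Check that all sampled difference quotients (fn(a)-fn(b))/(a-b) lie in [lo, hi]."""
    a, b = np.meshgrid(s, s)
    m = a != b                       # exclude a == b (quotient undefined there)
    q = (fn(a[m]) - fn(b[m])) / (a[m] - b[m])
    return q.min() >= lo - 1e-6 and q.max() <= hi + 1e-6

# Sector bounds implied by Sigma_1..Sigma_4 in (46)
assert in_sector(f1, -0.8, 0.8)
assert in_sector(f2, -0.4, 0.0)
assert in_sector(g1, -0.6, 1.0)
assert in_sector(g2, 0.0, 0.2)
```

Sampling cannot prove the bounds for all real pairs, but by the mean value theorem each quotient is a value of the derivative, whose range can be bounded analytically for these functions.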
For τm = 2, by using the Matlab LMI toolbox, the maximum allowable value is τM = 11. Setting τm = 2 and τM = 5 in Theorem 8, we can obtain a set of feasible solutions of the LMIs (12), (13), and (14), listed as follows:
(47) P = [2.8554 −0.1283; −0.1283 1.3300], R1 = [0.0044 0.0021; 0.0021 0.0022], R2 = [0.0421 0.0037; 0.0037 0.0128], R3 = [0.0489 0.0024; 0.0024 0.0136], Q1 = [0.1793 −0.0356; −0.0356 0.1487], Q2 = [0.4279 0.0051; 0.0051 0.0779], Q3 = [0.3817 −0.0218; −0.0218 0.1063], Λ1 = [0.0651 0; 0 0.0904], Λ2 = [0.0803 0; 0 0.3411], λ* = 4.7129.
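As a quick consistency check (ours, not from the source), condition (12), P < λ*I, can be verified for the reported P and λ* by checking that λ*I − P is positive definite:

```python
import numpy as np

P = np.array([[2.8554, -0.1283],
              [-0.1283, 1.3300]])
lam_star = 4.7129

# (12): P < lambda* I  <=>  every eigenvalue of lambda* I - P is positive
assert np.all(np.linalg.eigvalsh(lam_star * np.eye(2) - P) > 0)
# P itself is also positive definite, as required by Theorem 8
assert np.all(np.linalg.eigvalsh(P) > 0)
```

The same eigenvalue test applies to each reported Qi, Ri, Λ1, and Λ2; only (12) is shown here for brevity.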
At the same time, setting τM = 5, the state trajectory shown in Figure 1 can be obtained by using Matlab simulation software. From Figure 1, we can see that the system (1) is asymptotically stable.
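Figure 1 can be reproduced along the following lines. The sketch below simulates (1) with the parameters of Example 1; the specific delay pattern τ(k) ∈ [2, 5], the noise intensity σ(x, xd) = √0.1 (x + xd) (which satisfies Assumption 1 with β1 = β2 = 0.2, since ∥x + xd∥² ≤ 2∥x∥² + 2∥xd∥²), the constant initial sequence, and the random seed are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)                      # fixed seed (assumption)
A = np.array([[-0.1, 0.0], [0.0, -0.2]])
B = np.array([[-0.1, 0.1], [-0.1, 0.5]])
W = np.array([[0.05, 0.1], [0.5, 0.5]])

f = lambda v: np.array([np.sin(0.2 * v[0]) - 0.6 * np.cos(v[0]), np.tanh(-0.4 * v[1])])
g = lambda v: np.array([np.tanh(0.83 * v[0]) + 0.6 * np.cos(v[0]), np.tanh(0.2 * v[1])])
sigma = lambda v, vd: np.sqrt(0.1) * (v + vd)       # one choice satisfying Assumption 1

tau_m, tau_M, T = 2, 5, 200
tau = lambda k: tau_m + k % (tau_M - tau_m + 1)     # assumed delay pattern in [2, 5]

x = [np.array([0.5, -0.5]) for _ in range(tau_M + 1)]   # constant initial sequence
for k in range(tau_M, tau_M + T):
    xd = x[k - tau(k)]                              # delayed state x(k - tau(k))
    x.append(A @ x[k] + B @ f(x[k]) + W @ g(xd)
             + sigma(x[k], xd) * rng.standard_normal())
x = np.array(x)

# The trajectory stays bounded, in line with the stability claim of Theorem 8
assert np.max(np.abs(x)) < 100.0
```

Plotting the two columns of x against the step index reproduces the qualitative behavior of Figure 1.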
Our purpose is to compare the maximum upper bound τM for different values of τm. Assuming different lower bounds τm and solving the LMIs (12), (13), and (14), we can get the maximum upper bounds of the delay, which are listed in Table 1. From Table 1, we can see that Theorem 8 is less conservative than the criteria proposed in [25, 26].
Table 1: Allowable upper bound of τM for various τm.

Methods      τm=2   τm=4   τm=6   τm=8   τm=10   τm=15   τm=20
[26]           6      8     10     12      14      19      24
[25]           9     11     13     15      17      22      27
Theorem 8    106    106    106    106     106     106     106
Figure 1: Trajectories of x(k) of the system in Example 1.
Remark 12.
According to Theorem 8, we can obtain that system (1) with the previous parameters is mean square exponentially stable, and meanwhile it is very easy to check via the Matlab LMI Toolbox that our results improve the conclusions in [26]. If we neglect the uncertainty effect and take ρ1 = 0.2 and ρ2 = 0.2, then, applying the criteria in [26], the maximum value of τM for mean square exponential stability of system (1) is 55. By using Theorem 8 of this paper with τM = 62, the LMIs (12), (13), and (14) are still feasible, which shows that our result is less conservative than [26] under identical conditions.
Example 2.
Consider the neural networks (1) with the following parameters [23]:
(48) A = [0.4 0 0; 0 0.5 0; 0 0 0.4], B = [0.3 −0.1 0.2; 0 −0.3 0.2; −0.1 −0.1 −0.2], W = [0.2 0.1 0.1; −0.2 0.3 0.1; 0.3 −0.3 0.1].
Take the activation function as follows:
(49)f1(s)=tanh(0.6s)-0.2sin(s),f2(s)=tanh(-0.4s),f3(s)=tanh(-0.2s),g1(s)=tanh(-0.4s)+0.2sin(s),g2(s)=tanh(0.2s),g3(s)=tanh(0.4s).
From the previous parameters, it can be verified that
(50) Σ1 = diag{−0.16, 0, 0}, Σ2 = diag{−0.3, 0.2, 0.1}, Σ3 = diag{−0.12, 0, 0}, Σ4 = diag{0.2, −0.1, −0.2}.
With the previous parameters, Corollary 1 in [23] turns out to be unsolvable, whereas our criterion in Theorem 8 is solvable. By virtue of the Matlab LMI Toolbox, we can obtain the feasible solutions as follows:
(51) P = [41.9042 −0.0335 −0.2435; −0.0335 43.3711 0.3170; −0.2435 0.3170 43.0218], Λ1 = diag{5.7657, 11.3488, 11.7506}, Λ2 = diag{19.8688, 25.4560, 22.0029}, R1 = [2.2528 0.0223 −0.0231; 0.0223 1.8145 0.0270; −0.0231 0.0270 1.8773], R2 = [0.5095 0.0818 −0.0519; 0.0818 25.4560 −0.0313; −0.0519 −0.0313 0.6545], R3 = [0.5494 0.0367 −0.0231; 0.0367 0.5644 −0.0065; −0.0231 −0.0065 0.4522], λ* = 46.4939.
Example 3.
Consider the neural networks (1) with the following parameters [23, 27]:
(52) A = [0.4 0 0; 0 0.5 0; 0 0 0.4], B = [0.3 −0.1 0.2; 0 −0.3 0.2; −0.1 −0.1 −0.2], (53) W = [0.2 0.1 0.1; −0.2 0.3 0.1; 0.3 −0.3 0.1].
Take the activation function as follows:
(54)f1(s)=tanh(0.2s),f2(s)=tanh(-0.4s),f3(s)=tanh(-0.2s),g1(s)=tanh(-0.12s),g2(s)=tanh(0.2s),g3(s)=tanh(0.4s).
From the previous parameters, it can be verified that
(55) Σ1 = diag{0, 0, 0}, Σ2 = diag{0.2, −0.4, 0.2}, Σ3 = diag{0, 0, 0}, Σ4 = diag{−0.12, 0.2, 0.4}.
For this example, the conditions in [23] cannot be satisfied. For τm = 2, it has been verified in [27] that the maximum allowable time delay for system (42) is τM = 18. By letting τm = 2 and τM = 26 in Example 3, we find that the LMIs for system (42) are feasible, which implies that the exponential stability result proposed in Corollary 10 of this paper provides less conservatism than [23, 27].
5. Conclusion
An effective sum inequality has been introduced to derive delay-dependent stability criteria for a class of discrete stochastic neural networks with an interval time-varying delay. By choosing a piecewise Lyapunov-Krasovskii functional candidate and employing the proposed sum inequality, a significant performance improvement has been achieved while noticeably reducing the number of scalar decision variables in the LMIs. All results are given in the form of LMIs. Numerical examples show that the achieved results are less conservative than some existing ones.
References
[1] S. Arik, "An analysis of exponential stability of delayed neural networks with time varying delays."
[2] J. Cao and M. Xiao, "Stability and Hopf bifurcation in a simplified BAM neural network with two time delays."
[3] J. Cao, K. Yuan, and H. X. Li, "Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays."
[4] X. Lou, Q. Ye, and B. Cui, "Exponential stability of genetic regulatory networks with random delays."
[5] H. Yang, T. Chu, and C. Zhang, "Exponential stability of neural networks with variable delays via LMI approach."
[6] H. Huang, Y. Qu, and H. X. Li, "Robust stability analysis of switched Hopfield neural networks with time-varying delay under uncertainty."
[7] C. Song, H. Gao, and W. X. Zheng, "A new approach to stability analysis of discrete-time recurrent neural networks with time-varying delay."
[8] J. Cao and J. Wang, "Global asymptotic and robust stability of recurrent neural networks with time delays."
[9] S. Xu, J. Lam, and D. W. C. Ho, "A new LMI condition for delay-dependent asymptotic stability of delayed Hopfield neural networks."
[10] Y. Zhang, D. Yue, and E. Tian, "New stability criteria of neural networks with interval time-varying delay: a piecewise delay method."
[11] Z. Wang, Y. Liu, and X. Liu, "On global asymptotic stability of neural networks with discrete and distributed delays."
[12] H. Shao and Q. L. Han, "New stability criteria for linear discrete-time systems with interval-like time-varying delays."
[13] Z. Wu, H. Su, J. Chu, and W. Zhou, "Improved delay-dependent stability condition of discrete recurrent neural networks with time-varying delays."
[14] Q. Song, J. Liang, and Z. Wang, "Passivity analysis of discrete-time stochastic neural networks with time-varying delays."
[15] W. Zhang, Y. Huang, and H. Zhang, "Stochastic H2/H∞ control for discrete-time systems with state and disturbance dependent noise."
[16] W. Zhang, B. S. Chen, and C. S. Tseng, "Robust H∞ filtering for nonlinear stochastic systems."
[17] W. H. Zhang, G. Feng, and Q. H. Li, "Robust H∞ filtering for general nonlinear stochastic state-delayed systems."
[18] X. Wei, D. Zhou, and Q. Zhang, "On asymptotic stability of discrete-time non-autonomous delayed Hopfield neural networks."
[19] S. Hu and J. Wang, "Global robust stability of a class of discrete-time interval neural networks."
[20] X. L. Zhu, Y. Wang, and G. H. Yang, "New delay-dependent stability results for discrete-time recurrent neural networks with time-varying delay."
[21] W. Xiong and J. Cao, "Global exponential stability of discrete-time Cohen-Grossberg neural networks."
[22] L. Wang and Z. Xu, "Sufficient and necessary conditions for global exponential stability of discrete-time recurrent neural networks."
[23] Y. Liu, Z. Wang, and X. Liu, "Robust stability of discrete-time stochastic neural networks with time-varying delays."
[24] Q. Song and Z. Wang, "A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays."
[25] Y. Ou, H. Liu, Y. Si, and Z. Feng, "Stability analysis of discrete-time stochastic neural networks with time-varying delays."
[26] M. Luo, S. Zhong, R. Wang, and W. Kang, "Robust stability analysis for discrete-time stochastic neural networks systems with time-varying delays."
[27] Y. Zhang, S. Xu, and Z. Zeng, "Novel robust stability criteria of discrete-time stochastic recurrent neural networks with time delay."
[28] Y. Zhang, D. Yue, and E. Tian, "Robust delay-distribution-dependent stability of discrete-time stochastic neural networks with time-varying delay."
[29] X. M. Zhang and Q. L. Han, "A new finite sum inequality approach to delay-dependent H∞ control of discrete-time systems with time-varying delay."
[30] C. Peng, "Improved delay-dependent stabilisation criteria for discrete systems with a new finite sum inequality."