Mathematical Problems in Engineering
Volume 2009, Article ID 826908
Research Article

Robust Stability Analysis of Fuzzy Neural Network with Delays

Kaihong Zhao¹,² and Yongkun Li¹
¹Department of Mathematics, Yunnan University, Kunming, Yunnan 650091, China
²Department of Mathematics, Yuxi Normal University, Yuxi, Yunnan 653100, China

Academic Editor: Tamas Kalmar-Nagy

Copyright © 2009 Kaihong Zhao and Yongkun Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We investigate the local robust stability of fuzzy neural networks (FNNs) with time-varying and S-type distributed delays. We derive sufficient conditions for the local robust stability of equilibrium points and estimate the attracting domains of all equilibrium points except the unstable ones. Our results not only establish local robust stability of equilibrium points but also apply broadly to fuzzy neural networks with or without delays. An example is given to illustrate the effectiveness of our results.

1. Introduction

In the study of neural networks, two basic mathematical models are commonly adopted: local field neural network models and static neural network models. The basic model of a local field neural network is described as

ẋ_i(t) = −x_i(t) + Σ_{j=1}^{n} ω_ij g_j(x_j(t)) + I_i,  i = 1, 2, …, n,

where g_j denotes the activation function of the jth neuron; x_i is the state of the ith neuron; I_i is the external input imposed on the ith neuron; ω_ij denotes the synaptic connection weight between the ith neuron and the jth neuron; and n is the number of neurons in the network. With the same notation, the static neural network model can be written as

ẋ_i(t) = −x_i(t) + g_i(Σ_{j=1}^{n} ω_ij x_j(t) + I_i),  i = 1, 2, …, n.
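To make the distinction between the two models concrete, here is a minimal numerical sketch that integrates both by forward Euler; the two-neuron weight matrix W, input I, and initial state are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def simulate(rhs, x0, t_end=20.0, dt=1e-3):
    """Integrate x'(t) = rhs(x) by forward Euler and return x(t_end)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(int(t_end / dt)):
        x = x + dt * rhs(x)
    return x

# Illustrative two-neuron instance; W and I are arbitrary.
W = np.array([[0.0, 0.5], [0.5, 0.0]])
I = np.array([0.1, -0.1])
g = np.tanh  # a bounded, 1-Lipschitz activation

local_field = lambda x: -x + W @ g(x) + I   # model (1.1)
static = lambda x: -x + g(W @ x + I)        # model (1.2)

x_lf = simulate(local_field, [1.0, -1.0])
x_st = simulate(static, [1.0, -1.0])
```

Since the weight matrix here is a contraction, both trajectories settle at a fixed point of the corresponding right-hand side; the two models generally have different equilibria for the same W and I.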

It is well known that local field neural networks model not only Hopfield-type networks but also bidirectional associative memory networks and cellular neural networks. Many deep theoretical results have been obtained for local field neural networks; we refer the reader to the references cited therein. Meanwhile, static neural networks have great application potential: they include not only the recurrent back-propagation network but also other extensively studied networks such as optimization-type networks and the brain-state-in-a-box (BSB) type network [19, 20]. In the past few years, there has been increasing interest in studying dynamical characteristics of local field neural networks, such as stability, persistence, periodicity, local robust stability of equilibrium points, and domains of attraction.

However, in the mathematical modeling of real-world problems, we encounter further difficulties such as complexity and uncertainty or vagueness. Fuzzy theory provides a suitable setting for taking vagueness into consideration. Based on traditional cellular neural networks (CNNs), Yang and Yang proposed fuzzy CNNs (FCNNs), which integrate fuzzy logic into the structure of traditional CNNs while maintaining local connectedness among cells. Unlike previous CNN structures, FCNNs have fuzzy logic between their template input and/or output in addition to the sum-of-product operation. FCNNs are a very useful paradigm for image-processing problems, a cornerstone of image processing and pattern recognition. Therefore, it is necessary to consider both fuzzy logic and delay effects on the dynamical behavior of neural networks. Nevertheless, to the best of our knowledge, few published papers consider the local robust stability of equilibrium points and domains of attraction for fuzzy neural networks (FNNs).

Therefore, in this paper we study the local robust stability of fuzzy neural networks with time-varying and S-type distributed delays:

u̇_i(t) = −c_i(λ)u_i(t) + g_i(Σ_{j=1}^{n} ∫_{−τ(λ)}^{0} u_j(t+θ) dω_ij(θ,λ) + I_i) + Σ_{j=1}^{n} a_ij(λ) f_j(u_j(t)) + ⋀_{j=1}^{n} α_ij(λ) f_j(u_j(t−τ_j(t))) + ⋁_{j=1}^{n} β_ij(λ) f_j(u_j(t−τ_j(t))),  i = 1, 2, …, n,

where α_ij(λ) and β_ij(λ) are elements of the fuzzy feedback MIN template and the fuzzy feedback MAX template, respectively; a_ij(λ) are elements of the feedback template; u_i(t) stands for the state of the ith neuron; τ_j(t) is the transmission delay and f_j is the activation function; ⋀ and ⋁ denote the fuzzy AND and fuzzy OR operations, respectively; and λ ∈ Ξ ⊂ ℝ is a parameter. The main purpose of this paper is to investigate the local robust stability of equilibrium points of FNNs (1.3). Sufficient conditions are obtained for the local robust stability of equilibrium points. Meanwhile, the attracting domains of the equilibrium points are also estimated.

Throughout this paper, we always assume the following hypotheses:

(H1) a_ij(λ), α_ij(λ), and β_ij(λ) are bounded on Ξ (i, j = 1, 2, …, n).

(H2) inf_{λ∈Ξ} c_i(λ) > 0, 0 ≤ τ(λ) ≤ τ, and the ω_ij(θ,λ) (i, j = 1, 2, …, n) are nondecreasing functions of bounded variation on [−τ(λ), 0] with ω_ij(θ,λ) > 0; the integral ∫_{−τ(λ)}^{0} u_j(t+θ) dω_ij(θ,λ) is a Lebesgue–Stieltjes integral. I = (I_1, I_2, …, I_n)^T is a constant vector denoting the external input.

(H3) g_i(·), i = 1, 2, …, n, are twice differentiable, bounded, and Lipschitz continuous: there exist positive constants L_i and B_i such that |g_i(x) − g_i(y)| ≤ L_i|x − y| and |g_i(x)| ≤ B_i for any x, y ∈ ℝ.

(H4) The activation functions f_i, with f_i(0) = 0, are bounded and Lipschitz continuous; that is, there are numbers μ_i > 0 and l_i > 0 such that |f_i(u)| ≤ μ_i and |f_i(u) − f_i(v)| ≤ l_i|u − v| for any u, v ∈ ℝ, i = 1, 2, …, n.

(H5) The functions τ_j(t), j = 1, 2, …, n, are nonnegative, bounded, and continuously differentiable on ℝ⁺, with 0 ≤ τ_j(t) ≤ τ(λ).
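The boundedness and Lipschitz conditions above can be spot-checked numerically for a concrete activation. The sketch below verifies that g(x) = tanh(x), the prototypical activation used later in Section 4, satisfies (H3) with L = B = 1 on a large random sample; the sample size and range are arbitrary choices.

```python
import numpy as np

# Spot-check (H3) for g(x) = tanh(x): |g(x)| <= B = 1 and
# |g(x) - g(y)| <= L|x - y| with L = 1 (since |g'| = sech^2 <= 1).
rng = np.random.default_rng(1)
x = rng.uniform(-10.0, 10.0, 10_000)
y = rng.uniform(-10.0, 10.0, 10_000)

assert np.all(np.abs(np.tanh(x)) <= 1.0)                                 # bounded
assert np.all(np.abs(np.tanh(x) - np.tanh(y)) <= np.abs(x - y) + 1e-12)  # Lipschitz
```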

The rest of this paper is organized as follows. In Section 2, we will give some basic definitions and basic results about the attracting domains of FNNs (1.3). In Section 3, we discuss the local robust stability of equilibrium points of FNNs (1.3). In Section 4, an example is given to illustrate the effectiveness of our results. Finally, we make a conclusion in Section 5.

2. Preliminaries

As usual, we denote by C([−τ(λ), 0], ℝⁿ) the set of all real-valued continuous mappings from [−τ(λ), 0] to ℝⁿ, equipped with the supremum norm ‖·‖ defined by

‖ϕ‖ = max_{1≤i≤n} sup_{−τ(λ)≤t≤0} |ϕ_i(t)|,

where ϕ = (ϕ_1, ϕ_2, …, ϕ_n)^T ∈ C([−τ(λ), 0], ℝⁿ). Denote by u(t,ϕ,λ) the solution of FNNs (1.3) with initial condition ϕ ∈ C([−τ(λ), 0], ℝⁿ).

Definition 2.1.

A vector u*(λ) = (u_1*(λ), u_2*(λ), …, u_n*(λ))^T is said to be an equilibrium point of FNNs (1.3) if for each i = 1, 2, …, n one has

c_i(λ)u_i*(λ) = g_i(Σ_{j=1}^{n} ω̃_ij(λ)u_j*(λ) + I_i) + Σ_{j=1}^{n} a_ij(λ) f_j(u_j*(λ)) + ⋀_{j=1}^{n} α_ij(λ) f_j(u_j*(λ)) + ⋁_{j=1}^{n} β_ij(λ) f_j(u_j*(λ)),

where ω̃_ij(λ) := ∫_{−τ(λ)}^{0} dω_ij(θ,λ). Denote by Ω the set of all equilibrium points of FNNs (1.3).

Definition 2.2.

Let u*(λ) ∈ Ω. Then u*(λ) is said to be a locally robustly attractive equilibrium point if for any given λ ∈ Ξ there is a neighborhood Y_λ(u*(λ)) ⊂ C([−τ(λ), 0], ℝⁿ) such that ϕ ∈ Y_λ(u*(λ)) implies lim_{t→∞} ‖u(t,ϕ,λ) − u*(λ)‖ = 0. Otherwise, u*(λ) is said not to be a locally robustly attractive equilibrium point. Denote by Ω₀ the set of all equilibrium points of FNNs (1.3) that are not locally robustly attractive.

Definition 2.3.

Let D, D̃ be subsets of ℝⁿ and let u(t,ϕ,λ) be a solution of FNNs (1.3) with ϕ ∈ C([−τ(λ), 0], ℝⁿ).

For any given λ ∈ Ξ, if u(σ,ϕ,λ) ∈ D for some σ ≥ 0 implies that u(t,ϕ,λ) ∈ D for all t ≥ σ, then D is said to be an attracting domain of FNNs (1.3).

For any given λ ∈ Ξ, if ϕ(θ) ∈ D̃ for all θ ∈ [−τ(λ), 0] implies that u(t,ϕ,λ) converges to u*(λ), then D̃ is said to be an attracting domain of u*(λ) ∈ Ω.

Correspondingly, the union of the attracting domains of all equilibrium points in Ω is said to be an attracting domain of Ω.

For this class of differential equations with fuzzy AND and fuzzy OR terms, the following inequality is useful.

Lemma 2.4 ([<xref ref-type="bibr" rid="B26">26</xref>]).

Let u = (u_1, u_2, …, u_n)^T and v = (v_1, v_2, …, v_n)^T be two states of (1.3); then one has

|⋀_{j=1}^{n} α_ij f_j(u_j) − ⋀_{j=1}^{n} α_ij f_j(v_j)| ≤ Σ_{j=1}^{n} |α_ij||f_j(u_j) − f_j(v_j)|,

|⋁_{j=1}^{n} β_ij f_j(u_j) − ⋁_{j=1}^{n} β_ij f_j(v_j)| ≤ Σ_{j=1}^{n} |β_ij||f_j(u_j) − f_j(v_j)|.
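Lemma 2.4 can be spot-checked numerically. The sketch below samples random states and verifies both inequalities, with the fuzzy AND ⋀ realized as a coordinatewise minimum and the fuzzy OR ⋁ as a maximum; the template coefficients and the activation sin are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
alpha = rng.uniform(-1.0, 1.0, n)  # arbitrary template coefficients
f = np.sin                         # a bounded, Lipschitz activation

for _ in range(1000):
    u = rng.uniform(-3.0, 3.0, n)
    v = rng.uniform(-3.0, 3.0, n)
    rhs = np.sum(np.abs(alpha) * np.abs(f(u) - f(v)))
    # fuzzy AND (min over j) and fuzzy OR (max over j)
    assert abs(np.min(alpha * f(u)) - np.min(alpha * f(v))) <= rhs + 1e-12
    assert abs(np.max(alpha * f(u)) - np.max(alpha * f(v))) <= rhs + 1e-12
```

Both bounds follow from |min_j a_j − min_j b_j| ≤ max_j |a_j − b_j| ≤ Σ_j |a_j − b_j| (and likewise for max), which is the essence of the lemma.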

Lemma 2.5.

Let u(t) be any solution of FNNs (1.3). Then u(t) is uniformly bounded. Moreover, H is an attracting domain of FNNs (1.3), where H := H_1 × H_2 × ⋯ × H_n and

H_i = [−(B_i + M_i)/inf_{λ∈Ξ} c_i(λ), (B_i + M_i)/inf_{λ∈Ξ} c_i(λ)],  i = 1, 2, …, n.

Proof.

By (1.3) and Lemma 2.4, we have

D⁺|u_i(t)| ≤ −inf_{λ∈Ξ} c_i(λ)|u_i(t)| + B_i + M_i,

where M_i = n max_{1≤j≤n}{μ_j sup_{λ∈Ξ}(|a_ij(λ)| + |α_ij(λ)| + |β_ij(λ)|)}, i = 1, 2, …, n. By a standard differential inequality, for t ≥ σ,

|u_i(t)| ≤ exp((σ − t) inf_{λ∈Ξ} c_i(λ))[|u_i(σ)| − (B_i + M_i)/inf_{λ∈Ξ} c_i(λ)] + (B_i + M_i)/inf_{λ∈Ξ} c_i(λ),  i = 1, 2, …, n,

which gives the uniform boundedness of u(t). Furthermore, if |u_i(σ)| ≤ (B_i + M_i)/inf_{λ∈Ξ} c_i(λ), i = 1, 2, …, n, then for all t ≥ σ,

|u_i(t)| ≤ (B_i + M_i)/inf_{λ∈Ξ} c_i(λ).

Hence H is an attracting domain of FNNs (1.3). The proof is complete.

By Lemma 2.5, we have the following theorem.

Theorem 2.6.

All equilibrium points of FNNs (1.3) lie in the attracting domain H; that is, Ω ⊂ H.

3. Local Robust Stability of Equilibrium Points

In this section, we investigate the local robust stability of equilibrium points of FNNs (1.3). We derive sufficient conditions guaranteeing the local robust stability of the equilibrium points in Ω∖Ω₀ and estimate the attracting domains of these equilibrium points.

Theorem 3.1.

Let u*(λ) = (u_1*(λ), u_2*(λ), …, u_n*(λ))^T ∈ Ω. If there exist positive constants β_i (i = 1, 2, …, n) such that for each i = 1, 2, …, n,

Σ_{j=1}^{n} β_j (sup_{λ∈Ξ}{ω̃_ji(λ)|ġ_j(κ_j(λ))| + l_j(|a_ij(λ)| + |α_ij(λ)| + |β_ij(λ)|)}) < β_i inf_{λ∈Ξ} c_i(λ),  (3.1)

where κ_i(λ) = Σ_{j=1}^{n} ω̃_ij(λ)u_j*(λ) + I_i, then one has the following.

u*(λ) ∈ Ω∖Ω₀; that is, u*(λ) is locally robustly stable.

Let

R̄ := 2 min_{1≤i≤n} { [β_i inf_{λ∈Ξ} c_i(λ) − Σ_{j=1}^{n} β_j (sup_{λ∈Ξ}{ω̃_ji(λ)|ġ_j(κ_j(λ))| + l_j(|a_ij(λ)| + |α_ij(λ)| + |β_ij(λ)|)})] / [Σ_{k=1}^{n} Σ_{j=1}^{n} β_j max_{ζ∈ℝ}|g̈_j(ζ)| sup_{λ∈Ξ}(ω̃_ji(λ)ω̃_jk(λ))] }.

Then every solution u(t,ϕ,λ) of FNNs (1.3) with ϕ ∈ O(u*(λ)) satisfies lim_{t→+∞} ‖u(t,ϕ,λ) − u*(λ)‖ = 0, where O(u*(λ)) = {ϕ ∈ C([−τ(λ), 0], ℝⁿ) : ‖ϕ − u*(λ)‖ < R̄ / Σ_{i=1}^{n}(β_i/min_{1≤i≤n} β_i)}.

The open set

⋃_{u*(λ)∈Ω} B(u*(λ)), where B(u*(λ)) := {u ∈ ℝⁿ : ‖u − u*(λ)‖ < R̄ / Σ_{i=1}^{n}(β_i/min_{1≤i≤n} β_i)},

is an attracting domain of Ω, and B(u*(λ)) is an attracting domain of u*(λ).

The proof of Theorem 3.1 relies on the following lemma.

Lemma 3.2.

Let u*(λ) = (u_1*(λ), u_2*(λ), …, u_n*(λ))^T ∈ Ω satisfy (3.1), and let u(t,ϕ,λ) be an arbitrary solution of FNNs (1.3) other than u*(λ), where ϕ ∈ C([−τ(λ), 0], ℝⁿ). Let V(t) = Σ_{i=1}^{n} β_i|u_i(t,ϕ,λ) − u_i*(λ)|, where the β_i are given by (3.1). Then one has the following.

(A1) If ‖u_σ(·,ϕ,λ) − u*(λ)‖ < R̄ for some σ ≥ 0, then D⁺V(σ) < 0.

(A2) If ‖ϕ − u*(λ)‖ < R̄ / Σ_{i=1}^{n}(β_i/min_{1≤i≤n} β_i) and sup_{σ−τ≤s≤σ} V(s) ≤ sup_{−τ≤s≤0} V(s) for some σ ≥ 0, then ‖u_σ(·,ϕ,λ) − u*(λ)‖ < R̄.

(A3) If ‖ϕ − u*(λ)‖ < R̄ / Σ_{i=1}^{n}(β_i/min_{1≤i≤n} β_i), then D⁺V(t) < 0 for all t ≥ 0.

Proof.

Under the transformation y(t) = u(t,ϕ,λ) − u*(λ), we get

D⁺|y_i(t)| ≤ −c_i(λ)|y_i(t)| + Σ_{j=1}^{n} l_j|a_ij(λ)||y_j(t)| + Σ_{j=1}^{n} l_j(|α_ij(λ)| + |β_ij(λ)|)|y_j(t − τ_j(t))| + |ġ_i(κ_i(λ))| Σ_{j=1}^{n} ∫_{−τ(λ)}^{0} |y_j(t+θ)| dω_ij(θ,λ) + (|g̈_i(ζ_i)|/2)(Σ_{j=1}^{n} ∫_{−τ(λ)}^{0} |y_j(t+θ)| dω_ij(θ,λ))²,  (3.7)

owing to the Taylor expansion

g_i(Σ_{j=1}^{n} ∫_{−τ(λ)}^{0} u_j(t+θ) dω_ij(θ,λ) + I_i) − g_i(Σ_{j=1}^{n} ω̃_ij(λ)u_j*(λ) + I_i) = ġ_i(κ_i(λ)) Σ_{j=1}^{n} ∫_{−τ(λ)}^{0} y_j(t+θ) dω_ij(θ,λ) + (g̈_i(ζ_i)/2)(Σ_{j=1}^{n} ∫_{−τ(λ)}^{0} y_j(t+θ) dω_ij(θ,λ))²,

where ζ_i lies between Σ_{j=1}^{n} ∫_{−τ(λ)}^{0} u_j(t+θ) dω_ij(θ,λ) + I_i and Σ_{j=1}^{n} ω̃_ij(λ)u_j*(λ) + I_i. From (3.7), we can derive that

D⁺V(t) ≤ Σ_{i=1}^{n} β_i {−inf_{λ∈Ξ} c_i(λ)|y_i(t)| + Σ_{j=1}^{n} l_j sup_{λ∈Ξ}(|a_ij(λ)| + |α_ij(λ)| + |β_ij(λ)|) sup_{t−τ≤s≤t}|y_j(s)| + [|ġ_i(κ_i(λ))| + (|g̈_i(ζ_i)|/2) Σ_{j=1}^{n} ω̃_ij(λ) sup_{t−τ≤s≤t}|y_j(s)|] Σ_{j=1}^{n} ω̃_ij(λ) sup_{t−τ≤s≤t}|y_j(s)|}

≤ Σ_{i=1}^{n} {−β_i inf_{λ∈Ξ} c_i(λ) + Σ_{j=1}^{n} β_j l_j sup_{λ∈Ξ}(|a_ij(λ)| + |α_ij(λ)| + |β_ij(λ)|) + Σ_{j=1}^{n} β_j ω̃_ji(λ)[|ġ_j(κ_j(λ))| + (|g̈_j(ζ_j)|/2) Σ_{k=1}^{n} ω̃_jk(λ) sup_{t−τ≤s≤t}|y_k(s)|]} sup_{t−τ≤s≤t}|y_i(s)|.  (3.9)

Since ‖u_σ(·,ϕ,λ) − u*(λ)‖ < R̄, we have sup_{σ−τ≤s≤σ}|y_i(s)| < R̄ for each i = 1, 2, …, n, which together with (3.1) implies that D⁺V(σ) < 0.

Since min_{1≤i≤n}{β_i} ‖u_σ(·,ϕ,λ) − u*(λ)‖ ≤ sup_{σ−τ≤s≤σ} V(s) and

sup_{−τ≤s≤0} V(s) = Σ_{i=1}^{n} β_i sup_{−τ≤s≤0}|u_i(s,ϕ,λ) − u_i*(λ)| ≤ Σ_{i=1}^{n} β_i ‖ϕ − u*(λ)‖,

we have ‖u_σ(·,ϕ,λ) − u*(λ)‖ ≤ Σ_{i=1}^{n}(β_i/min_{1≤i≤n} β_i) ‖ϕ − u*(λ)‖ < R̄.

Since ‖ϕ − u*(λ)‖ < R̄ / Σ_{i=1}^{n}(β_i/min_{1≤i≤n} β_i) ≤ R̄, it follows from (A1) that D⁺V(0) < 0. We assert that (A3) holds. Otherwise, there exists t₀ > 0 such that D⁺V(t₀) ≥ 0 and D⁺V(t) < 0 for all t ∈ [0, t₀). This implies that V(t) is strictly monotonically decreasing on the interval [0, t₀], so that sup_{t₀−τ≤s≤t₀} V(s) ≤ sup_{−τ≤s≤0} V(s). By (A2), we get ‖u_{t₀}(·,ϕ,λ) − u*(λ)‖ < R̄, and then (A1) gives D⁺V(t₀) < 0. This is a contradiction. Hence D⁺V(t) < 0 for all t ≥ 0.

Now we are in a position to complete the proof of Theorem 3.1.

Proof.

Let u(t,ϕ,λ) be an arbitrary solution of FNNs (1.3) other than u*(λ) satisfying ‖ϕ − u*(λ)‖ < R̄ / Σ_{i=1}^{n}(β_i/min_{1≤i≤n} β_i). It follows from (A3) that D⁺V(t) < 0 for all t ≥ 0; in particular, sup_{t−τ≤s≤t} V(s) ≤ sup_{−τ≤s≤0} V(s) for all t ≥ 0. Together with (A2), this gives ‖u_t(·,ϕ,λ) − u*(λ)‖ < R̄ for all t ≥ 0. Take

χ_i = β_i inf_{λ∈Ξ} c_i(λ) − Σ_{j=1}^{n} β_j (sup_{λ∈Ξ}{ω̃_ji(λ)|ġ_j(κ_j(λ))|} + l_j sup_{λ∈Ξ}(|a_ij(λ)| + |α_ij(λ)| + |β_ij(λ)|)),

η_i = Σ_{k=1}^{n} Σ_{j=1}^{n} β_j max_{ζ∈ℝ}|g̈_j(ζ)| sup_{λ∈Ξ}(ω̃_ji(λ)ω̃_jk(λ)) R̄.

It is obvious that χ_i − η_i > 0 for each i = 1, 2, …, n. From (3.9) we have

D⁺V(t) < −min_{1≤i≤n}{χ_i − η_i} Σ_{i=1}^{n} sup_{t−τ≤s≤t}|y_i(s)| ≤ −min_{1≤i≤n}{χ_i − η_i} Σ_{i=1}^{n} |y_i(t)|.

Integrating both sides of the above inequality from 0 to t, we obtain

V(t) + min_{1≤i≤n}{χ_i − η_i} ∫_{0}^{t} Σ_{i=1}^{n} |y_i(s)| ds ≤ V(0),

so that lim sup_{t→∞} min_{1≤i≤n}{χ_i − η_i} ∫_{0}^{t} Σ_{i=1}^{n} |y_i(s)| ds ≤ V(0) < ∞. Note that u(t,ϕ,λ) is bounded on ℝ⁺ by Lemma 2.5; it then follows from FNNs (1.3) that u̇ is bounded on ℝ⁺. Hence Σ_{i=1}^{n}|u_i(t,ϕ,λ) − u_i*(λ)| is uniformly continuous on ℝ⁺, and since its integral over ℝ⁺ is finite, lim_{t→∞} Σ_{i=1}^{n} |u_i(t,ϕ,λ) − u_i*(λ)| = 0. So assertions (1) and (2) hold. Now consider an arbitrary solution u(t,ϕ,λ) of FNNs (1.3) satisfying ϕ(s) ∈ B(u*(λ)) for all s ∈ [−τ(λ), 0] and some u*(λ) ∈ Ω. Then it is obvious that ‖ϕ − u*(λ)‖ < R̄ / Σ_{i=1}^{n}(β_i/min_{1≤i≤n} β_i). From (2), we get lim_{t→∞} ‖u(t,ϕ,λ) − u*(λ)‖ = 0. Hence B(u*(λ)) is an attracting domain of u*(λ). Consequently, the open set ⋃_{u*(λ)∈Ω} B(u*(λ)) is an attracting domain of Ω. The proof is complete.

Corollary 3.3.

Let u*(λ) = (u_1*(λ), u_2*(λ), …, u_n*(λ))^T ∈ Ω. If there exist positive constants β_i (i = 1, 2, …, n) such that for each i = 1, 2, …, n,

Σ_{j=1}^{n} β_j (ω̃_ji|ġ_j(κ_j)| + l_j sup_{λ∈Ξ}(|a_ij(λ)| + |α_ij(λ)| + |β_ij(λ)|)) < β_i c_i,

where κ_i = Σ_{j=1}^{n} ω̃_ij u_j*(λ) + I_i, then one has the following.

u*(λ) ∈ Ω∖Ω₀; that is, u*(λ) is locally asymptotically stable.

Let

R̄ := 2 min_{1≤i≤n} { [β_i c_i − Σ_{j=1}^{n} β_j (ω̃_ji|ġ_j(κ_j)| + l_j sup_{λ∈Ξ}(|a_ij(λ)| + |α_ij(λ)| + |β_ij(λ)|))] / [Σ_{k=1}^{n} Σ_{j=1}^{n} β_j max_{ζ∈ℝ}|g̈_j(ζ)| ω̃_ji ω̃_jk] }.

Then every solution u(t,ϕ,λ) of FNNs (1.3) with ϕ ∈ O(u*(λ)) satisfies lim_{t→+∞} ‖u(t,ϕ,λ) − u*(λ)‖ = 0, where O(u*(λ)) = {ϕ ∈ C([−τ, 0], ℝⁿ) : ‖ϕ − u*(λ)‖ < R̄ / Σ_{i=1}^{n}(β_i/min_{1≤i≤n} β_i)}.

The open set

⋃_{u*(λ)∈Ω} B(u*(λ)), where B(u*(λ)) := {u ∈ ℝⁿ : ‖u − u*(λ)‖ < R̄ / Σ_{i=1}^{n}(β_i/min_{1≤i≤n} β_i)},

is an attracting domain of Ω, and B(u*(λ)) is an attracting domain of u*(λ).

4. Illustrative Example

For illustrative purposes, we consider a simple fuzzy neural network with time-varying and S-type distributed delays in which

ω_ij(θ,λ) = { ω_ij(λ) if θ = 0; 0 if −τ ≤ θ < 0 },  0 ≤ τ(λ) ≤ τ.

Then a fuzzy neural network with two neurons can be modeled by

u̇_i(t) = −c_i(λ)u_i(t) + g_i(Σ_{j=1}^{2} ω_ij(λ)u_j(t) + I_i) + Σ_{j=1}^{2} a_ij(λ) f_j(u_j(t)) + ⋀_{j=1}^{2} α_ij(λ) f_j(u_j(t−τ_j(t))) + ⋁_{j=1}^{2} β_ij(λ) f_j(u_j(t−τ_j(t))),  i = 1, 2.

Take

c_1(λ) = tanh(4 − 2 sin λ),  ω_11(λ) = 4.02 − 2 sin λ,  ω_12(λ) = 0.02,
c_2(λ) = tanh(2.3 − cos λ),  ω_21(λ) = 0.01,  ω_22(λ) = 2.31 − cos λ,
g_1(ξ) = g_2(ξ) = tanh ξ,  Ξ = [0, π/2],  I_1 = −0.02,  I_2 = −0.01,
τ_j(t) = (2τ/π) arctan t,  j = 1, 2,
f_1(ξ) = f_2(ξ) = sin πξ,  a_ij(λ) = α_ij(λ) = β_ij(λ) = −(sin λ)/100,  i, j = 1, 2.

It is easy to check that (H1)–(H5) hold and L_i = B_i = μ_i = l_i = 1 for i = 1, 2. We can check that

Σ_{i=1}^{2} (sup_{λ∈[0,π/2]} ω_i1(λ)L_1 + l_1 sup_{λ∈[0,π/2]}(|a_i1(λ)| + |α_i1(λ)| + |β_i1(λ)|)) = 4.09 > inf_{λ∈[0,π/2]} c_1(λ) = tanh 2.
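The constants entering the attracting-domain estimate of Lemma 2.5 for this example can be reproduced in a few lines; this is a sketch in which B_i = 1, μ_j = 1, and sup(|a_ij| + |α_ij| + |β_ij|) = 0.03 follow directly from the parameter choices above.

```python
import math

n = 2
B = 1.0                  # |g_i| = |tanh| <= 1
mu = 1.0                 # |f_j| = |sin(pi*xi)| <= 1
sup_abc = 3 * 0.01       # sup over lam of |a_ij| + |alpha_ij| + |beta_ij| = 3 sin(lam)/100
M = n * mu * sup_abc     # M_i = n * max_j mu_j * sup(...) = 0.06

inf_c1 = math.tanh(2.0)  # minimum of tanh(4 - 2 sin(lam)) on [0, pi/2]
inf_c2 = math.tanh(1.3)  # minimum of tanh(2.3 - cos(lam)) on [0, pi/2]

H1 = (B + M) / inf_c1    # half-width of the interval H_1
H2 = (B + M) / inf_c2    # half-width of the interval H_2
```

This reproduces the half-widths 1.06/tanh 2 and 1.06/tanh 1.3 of the attracting box computed below.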

Simple calculations show that [−1.06/tanh 2, 1.06/tanh 2] × [−1.06/tanh 1.3, 1.06/tanh 1.3] is an attracting domain of FNNs (4.2), and all equilibrium points of FNNs (4.2) lie in this set. Further calculation yields two equilibrium points, O₁ = (1, 0) and O₂ = (0, 1). For these equilibria we have κ_1(λ) = 4 − 2 sin λ (at O₁) and κ_2(λ) = 2.3 − cos λ (at O₂), with sup_{λ∈[0,π/2]}|ġ_1(κ_1(λ))| = 0.0680 and sup_{λ∈[0,π/2]}|ġ_2(κ_2(λ))| = 0.0386. Taking β_1 = β_2 = 1, we get

Σ_{j=1}^{2} (sup_{λ∈[0,π/2]} ω_1j(λ)|ġ_j(κ_j(λ))| + l_j sup_{λ∈[0,π/2]}(|a_1j(λ)| + |α_1j(λ)| + |β_1j(λ)|)) < 0.04 < tanh 2 = inf_{λ∈[0,π/2]} c_1(λ),

Σ_{j=1}^{2} (sup_{λ∈[0,π/2]} ω_2j(λ)|ġ_j(κ_j(λ))| + l_j sup_{λ∈[0,π/2]}(|a_2j(λ)| + |α_2j(λ)| + |β_2j(λ)|)) < 0.04 < tanh 1.3 = inf_{λ∈[0,π/2]} c_2(λ).

Similarly, one can check that (3.1) holds for O_k (k = 1, 2). Therefore, by Theorem 3.1, the two equilibrium points O_k (k = 1, 2) are locally robustly stable, and their convergence radius is 0.04.
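One can also confirm numerically that O₁ = (1, 0) and O₂ = (0, 1) satisfy the equilibrium equations of FNNs (4.2) for every λ. The sketch below evaluates the equilibrium residual on a small grid of λ values, with the fuzzy AND and OR realized as min and max over j.

```python
import math

def residual(u, lam):
    """Equilibrium residual of FNNs (4.2) at state u = (u1, u2)."""
    c = (math.tanh(4 - 2 * math.sin(lam)), math.tanh(2.3 - math.cos(lam)))
    w = ((4.02 - 2 * math.sin(lam), 0.02), (0.01, 2.31 - math.cos(lam)))
    I = (-0.02, -0.01)
    a = -math.sin(lam) / 100              # a_ij = alpha_ij = beta_ij
    f = lambda x: math.sin(math.pi * x)   # activation f_j
    res = []
    for i in range(2):
        s = math.tanh(sum(w[i][j] * u[j] for j in range(2)) + I[i])
        terms = [a * f(u[j]) for j in range(2)]
        fuzzy = sum(terms) + min(terms) + max(terms)  # sum, fuzzy AND, fuzzy OR
        res.append(-c[i] * u[i] + s + fuzzy)
    return res

# Both O1 = (1, 0) and O2 = (0, 1) satisfy the equilibrium equations
# for every lambda; spot-checked here on a grid.
for lam in (0.0, 0.7, math.pi / 2):
    for u in ((1.0, 0.0), (0.0, 1.0)):
        assert all(abs(r) < 1e-9 for r in residual(u, lam))
```

The residuals vanish because sin(π · 0) = sin(π · 1) = 0 kills the fuzzy terms, and the remaining tanh terms reduce exactly to c_i(λ)u_i at these points.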

Remark 4.1.

The above example shows that, even under the relevant assumption of monotone nondecreasing activation functions, the system can possess multiple equilibrium points; trajectories therefore do not globally converge to a unique equilibrium point.

5. Conclusions

In this paper, we have derived sufficient conditions for the local robust stability of fuzzy neural networks with time-varying and S-type distributed delays and given an estimate of the attracting domains of the stable equilibrium points. Our results not only establish local robust stability of equilibrium points but also apply broadly to fuzzy neural networks with or without delays. An example is given to show the effectiveness of our results.

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grant 10971183.

References

1. J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.
2. B. Kosko, "Bidirectional associative memories," IEEE Transactions on Systems, Man, and Cybernetics, vol. 18, no. 1, pp. 49–60, 1988.
3. L. O. Chua and L. Yang, "Cellular neural networks: theory," IEEE Transactions on Circuits and Systems, vol. 35, no. 10, pp. 1257–1272, 1988.
4. C.-Y. Cheng, K.-H. Lin, and C.-W. Shih, "Multistability in recurrent neural networks," SIAM Journal on Applied Mathematics, vol. 66, no. 4, pp. 1301–1320, 2006.
5. X. F. Yang, X. F. Liao, Y. Tang, and D. J. Evans, "Guaranteed attractivity of equilibrium points in a class of delayed neural networks," International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 16, no. 9, pp. 2737–2743, 2006.
6. Y. M. Chen, "Global stability of neural networks with distributed delays," Neural Networks, vol. 15, no. 7, pp. 867–871, 2002.
7. H. Zhao, "Global asymptotic stability of Hopfield neural network involving distributed delays," Neural Networks, vol. 17, no. 1, pp. 47–53, 2004.
8. J. Cao and J. Wang, "Global asymptotic and robust stability of recurrent neural networks with time delays," IEEE Transactions on Circuits and Systems I, vol. 52, no. 2, pp. 417–426, 2005.
9. S. Mohamad, "Global exponential stability in DCNNs with distributed delays and unbounded activations," Journal of Computational and Applied Mathematics, vol. 205, no. 1, pp. 161–173, 2007.
10. S. Mohamad, K. Gopalsamy, and H. Akça, "Exponential stability of artificial neural networks with distributed delays and large impulses," Nonlinear Analysis: Real World Applications, vol. 9, no. 3, pp. 872–888, 2008.
11. Z. K. Huang, Y. H. Xia, and X. H. Wang, "The existence and exponential attractivity of κ-almost periodic sequence solution of discrete time neural networks," Nonlinear Dynamics, vol. 50, no. 1-2, pp. 13–26, 2007.
12. Z. K. Huang, X. H. Wang, and F. Gao, "The existence and global attractivity of almost periodic sequence solution of discrete-time neural networks," Physics Letters A, vol. 350, no. 3-4, pp. 182–191, 2006.
13. L. B. Almeida, "Backpropagation in perceptrons with feedback," in Neural Computers, pp. 199–208, Springer, New York, NY, USA, 1988.
14. F. J. Pineda, "Generalization of back-propagation to recurrent neural networks," Physical Review Letters, vol. 59, no. 19, pp. 2229–2232, 1987.
15. R. Rohwer and B. Forrest, "Training time-dependence in neural networks," in Proceedings of the 1st IEEE International Conference on Neural Networks, pp. 701–708, San Diego, Calif, USA, 1987.
16. M. Forti and A. Tesi, "New conditions for global stability of neural networks with application to linear and quadratic programming problems," IEEE Transactions on Circuits and Systems I, vol. 42, no. 7, pp. 354–366, 1995.
17. Y. S. Xia and J. Wang, "A general methodology for designing globally convergent optimization neural networks," IEEE Transactions on Neural Networks, vol. 9, no. 6, pp. 1331–1343, 1998.
18. Y. S. Xia and J. Wang, "On the stability of globally projected dynamical systems," Journal of Optimization Theory and Applications, vol. 106, no. 1, pp. 129–150, 2000.
19. J.-H. Li, A. N. Michel, and W. Porod, "Analysis and synthesis of a class of neural networks: linear systems operating on a closed hypercube," IEEE Transactions on Circuits and Systems, vol. 36, no. 11, pp. 1405–1422, 1989.
20. I. Varga, G. Elek, and S. H. Zak, "On the brain-state-in-a-convex-domain neural models," Neural Networks, vol. 9, no. 7, pp. 1173–1184, 1996.
21. H. Qiao, J. Peng, Z.-B. Xu, and B. Zhang, "A reference model approach to stability analysis of neural networks," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 33, no. 6, pp. 925–936, 2003.
22. Z.-B. Xu, H. Qiao, J. Peng, and B. Zhang, "A comparative study of two modeling approaches in neural networks," Neural Networks, vol. 17, no. 1, pp. 73–85, 2004.
23. M. Wang and L. Wang, "Global asymptotic robust stability of static neural network models with S-type distributed delays," Mathematical and Computer Modelling, vol. 44, no. 1-2, pp. 218–222, 2006.
24. P. Li and J. D. Cao, "Stability in static delayed neural networks: a nonlinear measure approach," Neurocomputing, vol. 69, no. 13–15, pp. 1776–1781, 2006.
25. Z. K. Huang and Y. H. Xia, "Exponential p-stability of second order Cohen-Grossberg neural networks with transmission delays and learning behavior," Simulation Modelling Practice and Theory, vol. 15, no. 6, pp. 622–634, 2007.
26. T. Yang and L.-B. Yang, "The global stability of fuzzy cellular neural network," IEEE Transactions on Circuits and Systems I, vol. 43, no. 10, pp. 880–883, 1996.