Advances in Artificial Neural Systems, vol. 2014, Article ID 252674. Hindawi Publishing Corporation. doi:10.1155/2014/252674

Research Article

Long Time Behavior for a System of Differential Equations with Non-Lipschitzian Nonlinearities

Nasser-Eddine Tatar

Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia

Received 10 May 2014; Revised 7 September 2014; Accepted 8 September 2014

Academic Editor: Ozgur Kisi

Copyright © 2014 Nasser-Eddine Tatar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We consider a general system of nonlinear ordinary differential equations of first order. The nonlinearities involve distributed delays in addition to the states. In turn, the distributed delays involve nonlinear functions of the different variables and states. An explicit bound for solutions is obtained under some rather reasonable conditions. Several special cases of this system may be found in neural network theory. As a direct application of our result it is shown how to obtain global existence and, more importantly, convergence to zero at an exponential rate in a certain norm. All these nonlinearities (including the activation functions) may be non-Lipschitz and unbounded.

1. Introduction

Of concern is the following system:
$$x_i'(t) = -a_i(t)\,x_i(t) + \sum_{j=1}^{m} f_{ij}\Big(t,\; x_j(t),\; \int_{-\infty}^{t} K_{ij}(t,s,x_j(s))\,ds\Big) + c_i(t), \quad i=1,\dots,m, \tag{1}$$
with continuous data $x_j(t) = x_{0j}(t)$, $t \in (-\infty,0]$, coefficients $a_i(t) \ge 0$, and inputs $c_i(t)$, $i=1,\dots,m$. The functions $f_{ij}$ and $K_{ij}$ are nonlinear continuous functions. This is a general nonlinear version of several systems that arise in many applications (see the references below and Section 4).

The literature is rich in works on the asymptotic behavior of solutions for special cases of system (1). Here the integral terms represent distributed delays, but discrete delays may be recovered as well by taking Dirac delta distributions as kernels. Various sufficient conditions on the coefficients, the functions, and the kernels have been established that ensure convergence to equilibrium or (uniform, global, asymptotic) stability. In applications it is important to have global asymptotic stability at a very rapid rate, such as the exponential rate. Roughly speaking, it has been assumed that the coefficients $a_i(t)$ must dominate the coefficients of some "bad" similar terms that appear in the estimations. For the nonlinearities (activation functions), the early assumptions of boundedness, monotonicity, and differentiability have all been weakened to a Lipschitz condition. According to [8, 20] and other references, even this condition needs to be weakened further. Unfortunately, only a few papers treat continuous but not Lipschitz continuous activation functions. Assumptions such as partial Lipschitz continuity with linear growth, $\alpha$-inverse Hölder continuity (or inverse Lipschitz continuity), and non-Lipschitz but bounded activations have been used (see [16, 21, 22]).

For Hölder continuous activation functions we refer the reader to [23], where exponential stability was proved under some boundedness and monotonicity conditions on the activation functions, with the coefficients forming a Lyapunov diagonally stable matrix (see also [24, 25] for other results without these conditions).

There are, however, a good number of papers dealing with discontinuous activation functions under certain stronger conditions, such as the $M$-matrix condition, the LMI (linear matrix inequality) condition, and extra conditions on the matrices together with growth conditions on the activation functions (see [20, 26–37]). Global asymptotic stability of periodic solutions has been investigated, for instance, in [38, 39].

Here we assume that the functions $f_{ij}$ and $K_{ij}$ are (or are bounded by) continuous monotone nondecreasing functions that are not necessarily Lipschitz continuous and may be unbounded (such as power functions with powers greater than one). We prove that, for sufficiently small initial data, solutions decay to zero exponentially.

Local existence and global existence are standard; see the Gronwall-type Lemma 1 below and the estimation in our theorem. The uniqueness of the equilibrium is not an issue here (even in the case of constant coefficients), as we are concerned with convergence to zero rather than stability of an equilibrium.

After the Preliminaries section, where we present our main hypotheses and the main lemma used in our proof, we state and prove the convergence result in Section 3. That section ends with some corollaries and important remarks. In the last section we give an application, showing where this type of system (or special cases of it) appears in real-world problems.

2. Preliminaries

Our first hypothesis (H1) is
$$\Big|f_{ij}\Big(t,\,x_j(t),\,\int_{-\infty}^{t}K_{ij}(t,s,x_j(s))\,ds\Big)\Big| \le b_{ij}(t)\,|x_j(t)|^{\alpha_{ij}}\Big(\int_{-\infty}^{t}l_{ij}(t-s)\,\psi_{ij}(|x_j(s)|)\,ds\Big)^{\beta_{ij}},\quad i,j=1,\dots,m, \tag{2}$$
where $b_{ij}$ are nonnegative continuous functions, $l_{ij}$ are nonnegative continuously differentiable functions, $\psi_{ij}$ are nonnegative nondecreasing continuous functions, and $\alpha_{ij},\beta_{ij}\ge 0$, $i,j=1,\dots,m$. The interesting cases are those where the $\alpha_{ij}$ and $\beta_{ij}$ are all nonzero.

Let $I \subseteq \mathbb{R}$, and let $g_1, g_2 \colon I \to \mathbb{R}\setminus\{0\}$. We write $g_1 \propto g_2$ if $g_2/g_1$ is nondecreasing on $I$. This ordering, as well as the monotonicity condition, may be dropped, as is mentioned in Remark 8 below.
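For a simple illustration of this ordering (an example added here, not part of the original): power functions with increasing exponents are ordered in this sense, since

```latex
g_k(u) = u^{p_k},\quad 0 \le p_1 \le p_2 \le \cdots \le p_n
\;\Longrightarrow\;
\frac{g_{k+1}(u)}{g_k(u)} = u^{\,p_{k+1}-p_k}
\ \text{is nondecreasing on } (0,\infty),
```

so that $g_1 \propto g_2 \propto \cdots \propto g_n$.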

Lemma 1 (see [40]).

Let $a(t)$ be a positive continuous function on $J=[\alpha,\beta)$; $k_j(t)$, $j=1,\dots,n$, nonnegative continuous functions on $J$; $g_j(u)$, $j=1,\dots,n$, nondecreasing continuous functions on $\mathbb{R}_+$, with $g_j(u)>0$ for $u>0$; and $u(t)$ a nonnegative continuous function on $J$. If $g_1 \propto g_2 \propto \cdots \propto g_n$ on $(0,\infty)$, then the inequality
$$u(t) \le a(t) + \sum_{j=1}^{n}\int_{\alpha}^{t}k_j(s)\,g_j(u(s))\,ds,\quad t\in J, \tag{3}$$
implies that
$$u(t) \le \omega_n(t),\quad \alpha \le t < \beta_0, \tag{4}$$
where $\omega_0(t)=\sup_{0\le s\le t}a(s)$,
$$\omega_j(t)=G_j^{-1}\Big[G_j(\omega_{j-1}(t))+\int_{0}^{t}k_j(s)\,ds\Big],\quad j=1,\dots,n,\qquad G_j(u)=\int_{u_j}^{u}\frac{dx}{g_j(x)},\quad u>0\ (u_j>0,\ j=1,\dots,n), \tag{5}$$
and $\beta_0$ is chosen so that the functions $\omega_j(t)$, $j=1,\dots,n$, are defined for $\alpha \le t < \beta_0$.
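To fix ideas, here is a sketch (added here, not in the original) of how the classical Gronwall inequality is recovered from Lemma 1 with $n=1$ and $g_1(u)=u$:

```latex
G_1(u)=\int_{u_1}^{u}\frac{dx}{x}=\ln\frac{u}{u_1},\qquad
\omega_1(t)=G_1^{-1}\Big[G_1(\omega_0(t))+\int_0^{t}k_1(s)\,ds\Big]
=\omega_0(t)\exp\Big[\int_0^{t}k_1(s)\,ds\Big].
```

In this case $\beta_0=\beta$, since $\int^{\infty} dx/x$ diverges, so no extra smallness of the data is needed.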

In our case we will need the following notation and hypotheses.

(H2) Assume that $\psi_{ij}(u)>0$ for $u>0$ and that the set of functions $u^{\alpha_{ij}+\beta_{ij}}$, $\psi_{ij}(u)$ may be ordered as $h_1 \propto h_2 \propto \cdots \propto h_n$ (after relabelling). Their corresponding coefficients $\tilde b_{ij}(t)=\exp[\int_0^t a(\sigma)\,d\sigma]\,b_{ij}(t)$ (where $a(t)=\min_{1\le i\le m}a_i(t)$) and $l_{ij}(0)$ will be renamed $\lambda_k$, $k=1,\dots,n$.
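For a concrete instance of the relabelling in (H2) (an illustration added here, not from the original), take $m=1$, drop the subscripts, and suppose $\alpha+\beta=p>1$ and $\psi(u)=u^{q}$ with $0<q\le p$. Then

```latex
h_1(u)=u^{q},\qquad h_2(u)=u^{p},\qquad
\frac{h_2(u)}{h_1(u)}=u^{\,p-q}\ \text{nondecreasing},
```

so $h_1 \propto h_2$, with the corresponding coefficients $\lambda_1(t)=l(0)$ (a constant) and $\lambda_2(t)=\tilde b(t)=\exp[\int_0^t a(\sigma)\,d\sigma]\,b(t)$.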

We define $\|x(t)\| = \sum_{i=1}^{m}|x_i(t)|$, $t>0$, and $\|x_0(t)\| = \sum_{i=1}^{m}|x_{0i}(t)|$, $t\le 0$, and set
$$c(t)=\int_0^{t}\exp\Big[\int_0^{s}a(\sigma)\,d\sigma\Big]\sum_{i=1}^{m}|c_i(s)|\,ds,\quad t>0, \tag{6}$$
$$\omega_0(t)=\|x_0(0)\|+\sum_{i,j=1}^{m}\int_{-\infty}^{0}l_{ij}(-\sigma)\,\psi_{ij}(\|x_0(\sigma)\|)\,d\sigma + c(t),$$
$$\omega_j(t)=H_j^{-1}\Big[H_j(\omega_{j-1}(t))+\int_0^{t}\lambda_j(s)\,ds\Big],\quad j=1,\dots,n,\qquad H_j(u)=\int_{u_j}^{u}\frac{dx}{h_j(x)},\quad u>0\ (u_j>0,\ j=1,\dots,n),$$
$$\tilde\omega_0(t)=\omega_0(0)+\sum_{i,j=1}^{m}\int_0^{\infty}|l_{ij}'(s)|\int_{-s}^{0}\psi_{ij}(u_0(\sigma))\,d\sigma\,ds + c(t),$$
$$u_0(\sigma)=\sum_{i,j=1}^{m}\int_{-\infty}^{\sigma}|l_{ij}(\sigma-\tau)|\,\psi_{ij}(\|x_0(\tau)\|)\,d\tau,\quad \sigma<0,$$
$$\tilde\omega_j(t)=H_j^{-1}\Big[H_j(\tilde\omega_{j-1}(t))+\int_0^{t}\tilde\lambda_j(s)\,ds\Big],\quad j=1,\dots,n,$$
where the $\tilde\lambda_j$ are the relabelled coefficients corresponding to $\tilde b_{ij}(t)$ and $l_{ij}(0)+\int_0^{\infty}|l_{ij}'(\sigma)|\,d\sigma$.
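As an added worked example of these recursions (under the assumption $n=1$ and $h_1(u)=u^{p}$ with $p>1$; not part of the original), the function $\omega_1$ can be computed in closed form:

```latex
H_1(u)=\int_{u_1}^{u}\frac{dx}{x^{p}}=\frac{u_1^{1-p}-u^{1-p}}{p-1},\qquad
\omega_1(t)=\Big[\omega_0(t)^{1-p}-(p-1)\int_0^{t}\lambda_1(s)\,ds\Big]^{-1/(p-1)},
```

which is defined as long as $(p-1)\int_0^{t}\lambda_1(s)\,ds<\omega_0(t)^{1-p}$. The smaller the initial data (and hence $\omega_0$), the larger the interval of definition, which is the mechanism behind the smallness assumption in the theorem.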

3. Exponential Convergence

In this section it is proved that solutions converge to zero in an exponential manner provided that the initial data are small enough.

Theorem 2.

Assume that hypotheses (H1) and (H2) hold and that $\int_{-\infty}^{0}l_{ij}(-\sigma)\,\psi_{ij}(\|x_0(\sigma)\|)\,d\sigma<\infty$, $i,j=1,\dots,m$. Then, (a) if $l_{ij}'(t)\le 0$, $i,j=1,\dots,m$, there exists $\beta_0>0$ such that
$$\|x(t)\| \le \omega_n(t)\exp\Big[-\int_0^{t}a(s)\,ds\Big],\quad 0\le t<\beta_0. \tag{7}$$

(b) If the $l_{ij}'(t)$, $i,j=1,\dots,m$, are of arbitrary signs and summable, and the integral term in $\tilde\omega_0(t)$ is convergent, then there exists $\beta_1>0$ such that the conclusion in (a) is valid on $0\le t<\beta_1$ with $\tilde\omega_n$ instead of $\omega_n$.

Proof.

It is easy to see from (1) and hypothesis (H1) that for $t>0$ and $i=1,\dots,m$ we have
$$D^{+}|x_i(t)| \le -a_i(t)\,|x_i(t)|+\sum_{j=1}^{m}\Big|f_{ij}\Big(t,\,x_j(t),\,\int_{-\infty}^{t}K_{ij}(t,s,x_j(s))\,ds\Big)\Big|+|c_i(t)|, \tag{8}$$
or, for $t>0$,
$$D^{+}\|x(t)\| \le -\min_{1\le i\le m}\{a_i(t)\}\,\|x(t)\|+\sum_{i,j=1}^{m}b_{ij}(t)\,|x_j(t)|^{\alpha_{ij}}\Big(\int_{-\infty}^{t}l_{ij}(t-s)\,\psi_{ij}(|x_j(s)|)\,ds\Big)^{\beta_{ij}}+\sum_{i=1}^{m}|c_i(t)|, \tag{9}$$
where $D^{+}$ denotes the right Dini derivative. Hence
$$D^{+}\|x(t)\| \le -a(t)\,\|x(t)\|+\sum_{i,j=1}^{m}b_{ij}(t)\,\|x(t)\|^{\alpha_{ij}}\Big(\int_{-\infty}^{t}l_{ij}(t-s)\,\psi_{ij}(\|x(s)\|)\,ds\Big)^{\beta_{ij}}+\sum_{i=1}^{m}|c_i(t)|,\quad t>0, \tag{10}$$
and consequently
$$D^{+}\Big\{\|x(t)\|\exp\Big[\int_0^{t}a(s)\,ds\Big]\Big\} \le \exp\Big[\int_0^{t}a(s)\,ds\Big]\sum_{i,j=1}^{m}b_{ij}(t)\,\|x(t)\|^{\alpha_{ij}}\Big(\int_{-\infty}^{t}l_{ij}(t-s)\,\psi_{ij}(\|x(s)\|)\,ds\Big)^{\beta_{ij}}+\exp\Big[\int_0^{t}a(s)\,ds\Big]\sum_{i=1}^{m}|c_i(t)|,\quad t>0. \tag{11}$$
Thus (by a comparison theorem in [41])
$$\tilde x(t) \le \|x(0)\|+c(t)+\sum_{i,j=1}^{m}\int_0^{t}\tilde b_{ij}(s)\,\|x(s)\|^{\alpha_{ij}}\Big(\int_{-\infty}^{s}l_{ij}(s-\sigma)\,\psi_{ij}(\|x(\sigma)\|)\,d\sigma\Big)^{\beta_{ij}}\,ds,\quad t>0, \tag{12}$$
where
$$\tilde x(t)=\|x(t)\|\exp\Big[\int_0^{t}a(s)\,ds\Big]. \tag{13}$$
Let $y(t)$ denote the right-hand side of (12). Clearly $\tilde x(t)\le y(t)$, $t>0$, and for $t>0$
$$D^{+}y(t)=D^{+}c(t)+\sum_{i,j=1}^{m}\tilde b_{ij}(t)\,\|x(t)\|^{\alpha_{ij}}\Big(\int_{-\infty}^{t}l_{ij}(t-\sigma)\,\psi_{ij}(\|x(\sigma)\|)\,d\sigma\Big)^{\beta_{ij}}. \tag{14}$$
We designate by $z_{ij}(t)$ the integral term in (14); that is,
$$z_{ij}(t)=\int_{-\infty}^{t}l_{ij}(t-\sigma)\,\psi_{ij}(\|x(\sigma)\|)\,d\sigma, \tag{15}$$
and $z(t)=\sum_{i,j=1}^{m}z_{ij}(t)$. A differentiation of $z(t)$ gives
$$z'(t)=\sum_{i,j=1}^{m}l_{ij}(0)\,\psi_{ij}(\|x(t)\|)+\sum_{i,j=1}^{m}\int_{-\infty}^{t}l_{ij}'(t-\sigma)\,\psi_{ij}(\|x(\sigma)\|)\,d\sigma. \tag{16}$$
(a) Consider $l_{ij}'(t)\le 0$, $i,j=1,\dots,m$.

In this situation (of fading memory) we see from (14) and (16) that, with $u(t)=y(t)+z(t)$,
$$D^{+}u(t) \le D^{+}c(t)+\sum_{i,j=1}^{m}\big[\tilde b_{ij}(t)\,(u(t))^{\alpha_{ij}+\beta_{ij}}+l_{ij}(0)\,\psi_{ij}(u(t))\big],\quad t>0. \tag{17}$$
Therefore
$$u(t) \le u(0)+c(t)+\sum_{i,j=1}^{m}\int_0^{t}\big[\tilde b_{ij}(s)\,(u(s))^{\alpha_{ij}+\beta_{ij}}+l_{ij}(0)\,\psi_{ij}(u(s))\big]\,ds,\quad t>0, \tag{18}$$
where $u(0)=\|x(0)\|+\sum_{i,j=1}^{m}\int_{-\infty}^{0}l_{ij}(-\sigma)\,\psi_{ij}(\|x_0(\sigma)\|)\,d\sigma$. Now we can apply Lemma 1 to obtain
$$\tilde x(t) \le u(t) \le \omega_n(t),\quad 0\le t<\beta_0, \tag{19}$$
with $\omega_0(t)=u(0)+c(t)$ and $\omega_n(t)$ as in the Preliminaries section.

(b) Consider $l_{ij}'(t)$, $i,j=1,\dots,m$, of arbitrary signs.

From expressions (14) and (16) we derive that
$$D^{+}u(t) \le D^{+}c(t)+\sum_{i,j=1}^{m}\big[\tilde b_{ij}(t)\,(u(t))^{\alpha_{ij}+\beta_{ij}}+l_{ij}(0)\,\psi_{ij}(u(t))\big]+\sum_{i,j=1}^{m}\int_0^{\infty}|l_{ij}'(\sigma)|\,\psi_{ij}(u(t-\sigma))\,d\sigma,\quad t>0. \tag{20}$$
The derivative of the auxiliary function
$$\tilde u(t)=u(t)+\sum_{i,j=1}^{m}\int_0^{\infty}|l_{ij}'(s)|\int_{t-s}^{t}\psi_{ij}(u(\sigma))\,d\sigma\,ds,\quad t\ge 0, \tag{21}$$
satisfies (with the help of (20) and (21))
$$\begin{aligned} D^{+}\tilde u(t) &= D^{+}u(t)+\sum_{i,j=1}^{m}\int_0^{\infty}|l_{ij}'(s)|\big[\psi_{ij}(u(t))-\psi_{ij}(u(t-s))\big]\,ds \\ &\le D^{+}c(t)+\sum_{i,j=1}^{m}\big[\tilde b_{ij}(t)\,(u(t))^{\alpha_{ij}+\beta_{ij}}+l_{ij}(0)\,\psi_{ij}(u(t))\big]+\sum_{i,j=1}^{m}\int_0^{\infty}|l_{ij}'(\sigma)|\,\psi_{ij}(u(t-\sigma))\,d\sigma \\ &\quad+\sum_{i,j=1}^{m}\int_0^{\infty}|l_{ij}'(s)|\big[\psi_{ij}(u(t))-\psi_{ij}(u(t-s))\big]\,ds \\ &\le D^{+}c(t)+\sum_{i,j=1}^{m}\Big\{\tilde b_{ij}(t)\,(\tilde u(t))^{\alpha_{ij}+\beta_{ij}}+\Big[l_{ij}(0)+\int_0^{\infty}|l_{ij}'(s)|\,ds\Big]\psi_{ij}(\tilde u(t))\Big\},\quad t>0. \end{aligned} \tag{22}$$
Therefore
$$\tilde u(t) \le \tilde u(0)+c(t)+\sum_{i,j=1}^{m}\int_0^{t}\Big\{\tilde b_{ij}(s)\,(\tilde u(s))^{\alpha_{ij}+\beta_{ij}}+\Big[l_{ij}(0)+\int_0^{\infty}|l_{ij}'(\sigma)|\,d\sigma\Big]\psi_{ij}(\tilde u(s))\Big\}\,ds \tag{23}$$
with
$$\tilde u(0)=\|x(0)\|+\sum_{i,j=1}^{m}\int_{-\infty}^{0}l_{ij}(-\sigma)\,\psi_{ij}(\|x_0(\sigma)\|)\,d\sigma+\sum_{i,j=1}^{m}\int_0^{\infty}|l_{ij}'(s)|\int_{-s}^{0}\psi_{ij}(u_0(\sigma))\,d\sigma\,ds, \tag{24}$$
$$u_0(\sigma)=z(\sigma)=\sum_{i,j=1}^{m}z_{ij}(\sigma)=\sum_{i,j=1}^{m}\int_{-\infty}^{\sigma}l_{ij}(\sigma-\tau)\,\psi_{ij}(\|x_0(\tau)\|)\,d\tau,\quad \sigma<0.$$

Applying Lemma 1 to (23) we obtain
$$\tilde x(t) \le \tilde u(t) \le \tilde\omega_n(t),\quad 0\le t<\beta_1, \tag{25}$$
and hence
$$\tilde x(t) \le \tilde\omega_n(t),\quad 0\le t<\beta_1, \tag{26}$$
where $\tilde\omega_0(t)=\tilde u(0)+c(t)$,
$$\tilde\omega_j(t)=H_j^{-1}\Big[H_j(\tilde\omega_{j-1}(t))+\int_0^{t}\tilde\lambda_j(s)\,ds\Big],\quad j=1,\dots,n, \tag{27}$$
and $\beta_1$ is chosen so that the functions $\tilde\omega_j(t)$, $j=1,\dots,n$, are defined for $0\le t<\beta_1$.

Corollary 3.

If, in addition to the hypotheses of the theorem, we assume that
$$\int_0^{\infty}\chi_k(s)\,ds \le \int_{\omega_{k-1}(\infty)}^{\infty}\frac{dz}{h_k(z)},\quad k=1,\dots,n, \tag{28}$$
where $\chi_k(s)=\lambda_k(s)$ (resp. $\tilde\lambda_k(s)$ in case (b)), then we have global existence of solutions.
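For instance (a check added here, under the assumption $h_k(u)=u^{p}$ with $p>1$; not part of the original), the right-hand side of (28) can be evaluated explicitly:

```latex
\int_{\omega_{k-1}(\infty)}^{\infty}\frac{dz}{z^{p}}
=\frac{\omega_{k-1}(\infty)^{\,1-p}}{p-1},
```

so (28) holds, and solutions are global, whenever the coefficients $\chi_k$ are summable with a sufficiently small integral, or the initial data (and hence $\omega_{k-1}$) are sufficiently small.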

Corollary 4.

If, in addition to the hypotheses of the theorem, we assume that $\omega_n(t)$ (resp. $\tilde\omega_n(t)$) grows at most polynomially (or merely more slowly than $\exp[\int_0^{t}a(s)\,ds]$), then solutions decay at an exponential rate provided that $\int_0^{t}a(s)\,ds \to \infty$ as $t\to\infty$.

Corollary 5.

In addition to the hypotheses of the theorem, assume that $|l_{ij}'(t)| \le L_{ij}\,l_{ij}(t)$, $i,j=1,\dots,m$, for some positive constants $L_{ij}$, and that the $\psi_{ij}$ are in the class $\mathcal H$ (that is, $\psi_{ij}(\alpha u) \le \xi_{ij}(\alpha)\,\psi_{ij}(u)$, $\alpha>0$, $u>0$, $i,j=1,\dots,m$). Then solutions are bounded by a function of the form $\exp[-(\int_0^{t}a(s)\,ds - Lt)]$, where $L=\max\{L_{ij},\ i,j=1,\dots,m\}$.

Remark 6.

We have assumed that the $\alpha_{ij}$ and $\beta_{ij}$ are greater than one, but the case when they are smaller than one may be treated similarly. When their sum is smaller than one, we have global existence without any extra condition.

Remark 7.

The decay rate obtained in Corollary 5 should be compared with the one in case (b) of the theorem. The estimation in Corollary 5 holds for more general initial data (not necessarily as small as those in case (b)). However, the decay rate is smaller than the one in (b), and it additionally requires that $\int_0^{t}a(s)\,ds - Lt \to \infty$ as $t\to\infty$.

Remark 8.

If we consider the following new functions, then the monotonicity condition and the order imposed in the theorem may be dropped:
$$\phi_1(t)=\max_{0\le s\le t}g_1(s),\qquad \phi_k(t)=\max_{0\le s\le t}\Big\{\frac{g_k(s)}{\phi_{k-1}(s)}\Big\}\,\phi_{k-1}(t),\quad k=2,\dots,n, \tag{29}$$
and $\psi(t)=\phi_k(t)/\phi_{k-1}(t)$.
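For illustration (an example added here, not from the original): if $g_1(u)=2+\sin u$ on $[0,\infty)$, which is not monotone, then

```latex
\phi_1(u)=\max_{0\le s\le u}\,(2+\sin s)=
\begin{cases}
2+\sin u, & 0\le u\le \pi/2,\\
3, & u>\pi/2,
\end{cases}
```

is nondecreasing, continuous, and satisfies $g_1\le\phi_1$, so Lemma 1 can be applied with $\phi_1$ in place of $g_1$ at the cost of a slightly larger bound.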

4. Application

(Artificial) neural networks are built in an attempt to perform different tasks just as the nervous system does. Typically, a neural network consists of several layers (an input layer, hidden layers, and an output layer). Each layer contains one or more cells (neurons), with many connections between them. The cells in one layer receive inputs from the previous layer, perform some transformations, and send the results to the cells of the subsequent layer.

One may encounter neural networks in many fields such as control, pattern matching, settlement of structures, classification of soil, supply chain management, engineering design, market segmentation, product analysis, market development forecasting, signature verification, bond rating, recognition of diseases, robust pattern detection, text mining, price forecast, botanical classification, and scheduling optimization.

Neural networks not only can perform many of the tasks a traditional computer can do, but also excel in, for instance, classifying incomplete or noisy data, predicting future events, and generalizing.

The system (1) is a general version of simpler systems that appear in neural network theory, such as
$$x_i'(t)=-a_i x_i(t)+\sum_{j=1}^{m}f_{ij}(x_j(t))+c_i(t) \tag{30}$$
or
$$x_i'(t)=-a_i x_i(t)+\sum_{j=1}^{m}\int_{-\infty}^{t}l_{ij}(t-s)\,f_{ij}(x_j(s))\,ds+c_i(t). \tag{31}$$

It is by now well established that (for constant coefficients and constant $c_i(t)$) solutions converge in an exponential manner to the equilibrium. Notice that zero is not an equilibrium in our case. The equilibrium exists and is unique in the case of Lipschitz continuous activation functions. Our system is much more general, and neither the activation functions nor the other nonlinearities need be Lipschitz continuous. However, in the case of Lipschitz continuity and existence of a unique equilibrium, we expect exponential stability by the standard techniques, at least when starting away from zero.

For the system
$$x_i'(t)=-a_i x_i(t)+\sum_{j=1}^{m}b_{ij}\,|x_j(t)|^{\alpha_{ij}}\Big(\int_{-\infty}^{t}l_{ij}(t-s)\,\psi_{ij}(|x_j(s)|)\,ds\Big)^{\beta_{ij}}+c_i(t) \tag{32}$$
(where the $\psi_{ij}$ may be taken as power functions; see also Corollary 5), our theorem gives sufficient conditions guaranteeing the estimate
$$\|x(t)\| \le \omega_n(t)\exp\Big[-\int_0^{t}a(s)\,ds\Big],\quad 0\le t<\beta_0. \tag{33}$$
Then, Corollaries 3 and 4 provide practical situations in which we have global existence and decay to zero at an exponential rate.
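The decay predicted by (33) can also be observed numerically. The sketch below (an illustration added here; all parameter values are assumptions, not from the original) integrates a scalar instance of (32) with $a=1$, $b=1/2$, $\alpha=2$, $\beta=1$, $\psi(u)=u$, exponential kernel $l(t)=e^{-t}$, zero history, and $c\equiv 0$. The memory term $z(t)=\int_{-\infty}^{t}e^{-(t-s)}x(s)\,ds$ is handled through the auxiliary ODE $z'=x-z$:

```python
import math

def simulate(x0, a=1.0, b=0.5, T=5.0, dt=1e-3):
    """Forward-Euler integration of the scalar system
        x'(t) = -a*x(t) + b*x(t)**2 * z(t),   z'(t) = x(t) - z(t),
    where z is the exponentially weighted memory of x (zero history).
    Returns the trajectory of x sampled at every step."""
    x, z = x0, 0.0
    xs = [x]
    for _ in range(int(T / dt)):
        x_new = x + dt * (-a * x + b * x**2 * z)
        z_new = z + dt * (x - z)
        x, z = x_new, z_new
        xs.append(x)
    return xs

xs = simulate(0.1)
# For small data the nonlinear term stays negligible, so x decays
# essentially like x0*exp(-t), well below the bound x0*exp(-t/2).
print(xs[-1] < 0.1 * math.exp(-2.5))  # True
```

Repeating the experiment with large initial data (say `x0 = 10.0`) illustrates the other side of the theorem: the nonlinear term then dominates and the solution may blow up in finite time, which is why $\beta_0$ appears in (33).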

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author is grateful for the financial support and the facilities provided by King Fahd University of Petroleum and Minerals through Grant no. IN111052.

References

[1] J. Cao, K. Yuan, and H.-X. Li, "Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1646–1651, 2006.
[2] B. Crespi, "Storage capacity of non-monotonic neurons," Neural Networks, vol. 12, no. 10, pp. 1377–1389, 1999.
[3] G. de Sandre, M. Forti, P. Nistri, and A. Premoli, "Dynamical analysis of full-range cellular neural networks by exploiting differential variational inequalities," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 54, no. 8, pp. 1736–1749, 2007.
[4] C. Feng and R. Plamondon, "On the stability analysis of delayed neural networks systems," Neural Networks, vol. 14, no. 9, pp. 1181–1188, 2001.
[5] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences of the United States of America, vol. 79, no. 8, pp. 2554–2558, 1982.
[6] J. J. Hopfield and D. W. Tank, "Computing with neural circuits: a model," Science, vol. 233, no. 4764, pp. 625–633, 1986.
[7] J. I. Inoue, "Retrieval phase diagrams of non-monotonic Hopfield networks," Journal of Physics A: Mathematical and General, vol. 29, no. 16, pp. 4815–4826, 1996.
[8] B. Kosko, Neural Network and Fuzzy System—A Dynamical System Approach to Machine Intelligence, Prentice-Hall of India, New Delhi, India, 1991.
[9] H.-F. Yanai and S.-I. Amari, "Auto-associative memory with two-stage dynamics of nonmonotonic neurons," IEEE Transactions on Neural Networks, vol. 7, no. 4, pp. 803–815, 1996.
[10] X. Liu and N. Jiang, "Robust stability analysis of generalized neural networks with multiple discrete delays and multiple distributed delays," Neurocomputing, vol. 72, no. 7–9, pp. 1789–1796, 2009.
[11] S. Mohamad, K. Gopalsamy, and H. Akça, "Exponential stability of artificial neural networks with distributed delays and large impulses," Nonlinear Analysis: Real World Applications, vol. 9, no. 3, pp. 872–888, 2008.
[12] J. Park, "On global stability criterion for neural networks with discrete and distributed delays," Chaos, Solitons and Fractals, vol. 30, no. 4, pp. 897–902, 2006.
[13] J. Park, "On global stability criterion of neural networks with continuously distributed delays," Chaos, Solitons and Fractals, vol. 37, no. 2, pp. 444–449, 2008.
[14] Z. Qiang, M. A. Run-Nian, and X. Jin, "Global exponential convergence analysis of Hopfield neural networks with continuously distributed delays," Communications in Theoretical Physics, vol. 39, no. 3, pp. 381–384, 2003.
[15] Y. Wang, W. Xiong, Q. Zhou, B. Xiao, and Y. Yu, "Global exponential stability of cellular neural networks with continuously distributed delays and impulses," Physics Letters A, vol. 350, no. 1-2, pp. 89–95, 2006.
[16] H. Wu, "Global exponential stability of Hopfield neural networks with delays and inverse Lipschitz neuron activations," Nonlinear Analysis: Real World Applications, vol. 10, no. 4, pp. 2297–2306, 2009.
[17] Q. Zhang, X. P. Wei, and J. Xu, "Global exponential stability of Hopfield neural networks with continuously distributed delays," Physics Letters A, vol. 315, no. 6, pp. 431–436, 2003.
[18] H. Zhao, "Global asymptotic stability of Hopfield neural network involving distributed delays," Neural Networks, vol. 17, no. 1, pp. 47–53, 2004.
[19] J. Zhou, S. Li, and Z. Yang, "Global exponential stability of Hopfield neural networks with distributed delays," Applied Mathematical Modelling, vol. 33, no. 3, pp. 1513–1520, 2009.
[20] R. Gavaldà and H. T. Siegelmann, "Discontinuities in recurrent neural networks," Neural Computation, vol. 11, no. 3, pp. 715–745, 1999.
[21] H. Wu, F. Tao, L. Qin, R. Shi, and L. He, "Robust exponential stability for interval neural networks with delays and non-Lipschitz activation functions," Nonlinear Dynamics, vol. 66, no. 4, pp. 479–487, 2011.
[22] H. Wu and X. Xue, "Stability analysis for neural networks with inverse Lipschitzian neuron activations and impulses," Applied Mathematical Modelling, vol. 32, no. 11, pp. 2347–2359, 2008.
[23] M. Forti, M. Grazzini, P. Nistri, and L. Pancioni, "Generalized Lyapunov approach for convergence of neural networks with discontinuous or non-Lipschitz activations," Physica D: Nonlinear Phenomena, vol. 214, no. 1, pp. 88–99, 2006.
[24] N.-E. Tatar, "Hopfield neural networks with unbounded monotone activation functions," Advances in Artificial Neural Systems, vol. 2012, Article ID 571358, 5 pages, 2012.
[25] N.-E. Tatar, "Control of systems with Hölder continuous functions in the distributed delays," Carpathian Journal of Mathematics, vol. 30, no. 1, pp. 123–128, 2014.
[26] G. Bao and Z. Zeng, "Analysis and design of associative memories based on recurrent neural network with discontinuous activation functions," Neurocomputing, vol. 77, no. 1, pp. 101–107, 2012.
[27] M. Forti and P. Nistri, "Global convergence of neural networks with discontinuous neuron activations," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 50, no. 11, pp. 1421–1435, 2003.
[28] Y. Huang, H. Zhang, and Z. Wang, "Dynamical stability analysis of multiple equilibrium points in time-varying delayed recurrent neural networks with discontinuous activation functions," Neurocomputing, vol. 91, pp. 21–28, 2012.
[29] L. Li and L. Huang, "Dynamical behaviors of a class of recurrent neural networks with discontinuous neuron activations," Applied Mathematical Modelling, vol. 33, no. 12, pp. 4326–4336, 2009.
[30] L. Li and L. Huang, "Global asymptotic stability of delayed neural networks with discontinuous neuron activations," Neurocomputing, vol. 72, no. 16–18, pp. 3726–3733, 2009.
[31] Y. Li and H. Wu, "Global stability analysis for periodic solution in discontinuous neural networks with nonlinear growth activations," Advances in Difference Equations, vol. 2009, Article ID 798685, 14 pages, 2009.
[32] X. Liu and J. Cao, "Robust state estimation for neural networks with discontinuous activations," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 6, pp. 1425–1437, 2010.
[33] J. Liu, X. Liu, and W.-C. Xie, "Global convergence of neural networks with mixed time-varying delays and discontinuous neuron activations," Information Sciences, vol. 183, pp. 92–105, 2012.
[34] S. Qin and X. Xue, "Global exponential stability and global convergence in finite time of neural networks with discontinuous activations," Neural Processing Letters, vol. 29, no. 3, pp. 189–204, 2009.
[35] J. Wang, L. Huang, and Z. Guo, "Global asymptotic stability of neural networks with discontinuous activations," Neural Networks, vol. 22, no. 7, pp. 931–937, 2009.
[36] Z. Wang, L. Huang, Y. Zuo, and L. Zhang, "Global robust stability of time-delay systems with discontinuous activation functions under polytopic parameter uncertainties," Bulletin of the Korean Mathematical Society, vol. 47, no. 1, pp. 89–102, 2010.
[37] H. Wu, "Global stability analysis of a general class of discontinuous neural networks with linear growth activation functions," Information Sciences, vol. 179, no. 19, pp. 3432–3441, 2009.
[38] Z. Cai and L. Huang, "Existence and global asymptotic stability of periodic solution for discrete and distributed time-varying delayed neural networks with discontinuous activations," Neurocomputing, vol. 74, no. 17, pp. 3170–3179, 2011.
[39] D. Papini and V. Taddei, "Global exponential stability of the periodic solution of a delayed neural network with discontinuous activations," Physics Letters A, vol. 343, no. 1–3, pp. 117–128, 2005.
[40] M. Pinto, "Integral inequalities of Bihari-type and applications," Funkcialaj Ekvacioj, vol. 33, no. 3, pp. 387–403, 1990.
[41] V. Lakshmikantham and S. Leela, Differential and Integral Inequalities: Theory and Applications, vol. 55-I of Mathematics in Science and Engineering, Academic Press, New York, NY, USA, 1969.