Multistability and Instability of Competitive Neural Networks with Mexican-Hat-Type Activation Functions



Introduction
In the past decades, some famous neural network models, including Hopfield neural networks, cellular neural networks, Cohen-Grossberg neural networks, and bidirectional associative memory neural networks, have been proposed in order to solve practical problems. It should be mentioned that in the above network models only the neuron activity is taken into consideration; that is, there exists only one type of variable, the state variables of the neurons. However, in a dynamical network, the synaptic weights also vary with respect to time due to the learning process, and the variation of connection weights may influence the dynamics of the neural network. Competitive neural networks (CNNs) constitute an important class of neural networks, which model the dynamics of cortical cognitive maps with unsupervised synaptic modifications. In this model, there are two types of state variables: that of the short-term memory (STM) describing the fast neural activity and that of long-term memory (LTM) describing the slow unsupervised synaptic modifications:

$$\text{STM: } \varepsilon \dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + B_i \sum_{j=1}^{n} m_{ij}(t) y_j + I_i,$$

$$\text{LTM: } \dot{m}_{ij}(t) = -m_{ij}(t) + y_j f_i(x_i(t)),$$

where i = 1, 2, ..., n, j = 1, 2, ..., n, x_i(t) is the neuron current activity level, f_j(x_j(t)) is the output of neurons, m_{ij}(t) is the synaptic efficiency, y_j is the constant external stimulus, a_{ij} represents the connection weight between the jth neuron and the ith neuron, B_i is the strength of the external stimulus, I_i is the constant input, and ε > 0 denotes a disposable scaling constant.
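The STM/LTM dynamics described above can be sketched numerically. The following is a minimal forward-Euler integration of a competitive network of this general shape; the weight matrix A, stimulus y, inputs I, and the tanh activation are illustrative assumptions, not the paper's data.

```python
import numpy as np

def simulate_cnn(x0, m0, A, B, y, I, eps=0.1, f=np.tanh, dt=1e-3, steps=20000):
    """Forward-Euler integration of a competitive neural network.

    STM: eps * dx_i/dt = -x_i + sum_j A[i,j]*f(x_j) + B[i]*sum_j m[i,j]*y[j] + I[i]
    LTM: dm_ij/dt      = -m_ij + y[j]*f(x_i)
    """
    x = np.asarray(x0, dtype=float).copy()
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(steps):
        dx = (-x + A @ f(x) + B * (m @ y) + I) / eps   # fast neural activity
        dm = -m + np.outer(f(x), y)                    # slow synaptic drift
        x, m = x + dt * dx, m + dt * dm
    return x, m
```

With contractive weights, as below, every trajectory settles to a unique equilibrium; with Mexican-hat-type activations and suitable weights, different initial states settle to different equilibria, which is the multistability phenomenon studied in this paper.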
The qualitative analysis of neural dynamics plays an important role in the design of practical neural networks. To solve problems of optimization and signal processing, neural networks have to be designed in such a way that, for a given external input, they exhibit only one globally stable state (i.e., monostability). This matter has been treated in [1-7]. On the other hand, if neural networks are used to analyze associative memories, the coexistence of multiple locally stable equilibria or periodic orbits is required (i.e., multistability or multiperiodicity), since the addressable memories or patterns are stored as stable equilibria or stable periodic orbits. In monostability analysis, the objective is to derive conditions that guarantee that each network contains only one steady state, and all the trajectories of the network converge to it, whereas in multistability analysis, the networks are allowed to have multiple equilibria or periodic orbits (stable or unstable). In general, the usual global stability conditions are not adequately applicable to multistable networks.
Recently, the multistability or multiperiodicity of neural networks has attracted the attention of many researchers. In [8, 9], based on decomposition of state space, the authors investigated the multistability of delayed Hopfield neural networks and showed that the n-neuron neural networks can have 2^n stable orbits located in 2^n subsets of R^n. Cao et al. [10] extended the above method to the Cohen-Grossberg neural networks with nondecreasing saturated activation functions with two corner points. In [11, 12], the multistability of almost-periodic solutions in delayed neural networks was studied. Kaslik and Sivasundaram [13, 14] first revealed the effect of impulses on the multistability of neural networks. In [15-17], high-order synaptic connectivity was introduced into neural networks, and the multistability and multiperiodicity were considered, respectively, for high-order neural networks based on decomposition of state space, the Cauchy convergence principle, and inequality techniques. In [18-22], the authors indicated that, under some conditions, there exist 3^n equilibria for the n-neuron neural networks, 2^n of which are locally exponentially stable. In [23], Hopfield neural networks with nondecreasing piecewise linear activation functions with 2r corner points were considered. It was proved that, under some conditions, the n-neuron neural networks can have and only have (2r + 1)^n equilibria, (r + 1)^n of which are locally exponentially stable and the others unstable. In [24], the multistability of neural networks with r + s step stair activation functions was discussed based on an appropriate partition of the n-dimensional state space. It was shown that the n-neuron neural networks can have (2r + 2s - 1)^n equilibria, (r + s)^n of which are locally exponentially stable. In particular, the case of r = s was previously discussed in [25]. For more references, see [26-32] and the references therein.
It is well known that the type of activation function plays a very important role in the multistability analysis of neural networks. In the abovementioned and most existing works, the activation functions employed in multistability analysis were mainly sigmoidal activation functions and nondecreasing saturated activation functions, which are all monotonically increasing. In this paper, we will consider a class of continuous Mexican-hat-type activation functions, which are defined as follows (see Figure 1):

$$f_i(x) = \begin{cases} s_i, & -\infty < x < p_i, \\ l_{i,1} x + c_{i,1}, & p_i \le x \le q_i, \\ l_{i,2} x + c_{i,2}, & q_i < x \le r_i, \\ s_i, & r_i < x < +\infty, \end{cases} \tag{3}$$

where p_i, q_i, r_i, s_i, l_{i,1}, l_{i,2}, c_{i,1}, c_{i,2} are constants with -∞ < p_i < q_i < r_i < +∞, l_{i,1} > 0, and l_{i,2} < 0, i = 1, 2, ..., n. In particular, when p_i = -1, q_i = 1, r_i = 3, s_i = -1, l_{i,1} = 1, l_{i,2} = -1, c_{i,1} = 0, and c_{i,2} = 2 (i = 1, 2, ..., n), the above activation functions reduce to the following special activation functions employed in [33]:

$$f_i(x) = \begin{cases} -1, & -\infty < x < -1, \\ x, & -1 \le x \le 1, \\ -x + 2, & 1 < x \le 3, \\ -1, & 3 < x < +\infty. \end{cases} \tag{4}$$

It is necessary to point out that the Mexican-hat-type activation functions are not monotonically increasing, which makes them totally different from sigmoidal activation functions and nondecreasing saturated activation functions. Hence, the results and methods mentioned above cannot be applied to neural networks with activation functions (3). Very recently, the multistability and instability of Hopfield neural networks with activation functions (4) were studied in [33]. Inspired by [33], in this paper we will investigate the multistability and instability of CNNs with activation functions (3). It should be noted that the structure of CNNs differs from and is more complex than that in [33]. Moreover, the activation functions (3) employed in this paper are more general than the activation functions (4). More precisely, the contributions of this paper are threefold as follows.
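As a concrete illustration, the activation class can be written in code, assuming the four-segment shape implied by the special case (constant tails joined continuously to one rising and one falling linear segment); the defaults reproduce the special case (4) of [33].

```python
def mexican_hat(x, p=-1.0, q=1.0, r=3.0, l1=1.0, l2=-1.0, c1=0.0, c2=2.0):
    """Continuous Mexican-hat-type activation function.

    Constant at l1*p + c1 left of p, rises with slope l1 > 0 on [p, q],
    falls with slope l2 < 0 on (q, r], constant at l2*r + c2 right of r.
    Defaults give f(x) = -1 (x < -1), x ([-1, 1]), -x + 2 ((1, 3]), -1 (x > 3).
    """
    if x < p:
        return l1 * p + c1   # left saturation
    if x <= q:
        return l1 * x + c1   # increasing segment
    if x <= r:
        return l2 * x + c2   # decreasing segment
    return l2 * r + c2       # right saturation
```

Note the non-monotonicity: the maximum f(q) is attained at the interior point q, which is exactly what rules out the decomposition methods developed for nondecreasing activations.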
Firstly, we define four index subsets and present sufficient conditions under which the CNNs with activation functions (3) have multiple equilibria, by tracking the dynamics of each state component and applying the fixed point theorem. The index subsets are defined in terms of maximum and minimum values, which are different from and less restrictive than those given in [33]. Furthermore, we discuss the exact existence of equilibria for CNNs.
Secondly, by means of rigorous analysis, we analyze the dynamical behaviors of each equilibrium point of the CNNs, including local stability and instability. The dynamical behaviors of such systems are much more complex than those of the Hopfield neural networks considered in [33], due to the complexity of the network structure and the generality of the activation functions.
Thirdly, specializing the model and activation functions to those in [33], we show that the obtained results extend and improve the very recent works in [33].
Finally, two examples with their simulations are given to verify and illustrate the validity of the obtained results.

Main Results
Firstly, we define the four index subsets N_1, N_2, N_3, N_4 as follows: where u_i = f_i(p_i) and v_i = f_i(q_i). It is easy to see that u_i ≤ f_i(x) ≤ v_i for x ∈ R.
Remark 1. In this paper, the index subsets are defined in terms of maximum and minimum values, which are different from those given in [33], where they are defined in terms of absolute values. In general, our conditions are less restrictive, as has been shown in [16].

Proof. By the definition of the index subset N_2, u_i = f_i(p_i), and v_i = f_i(q_i), we obtain (6) and (7). It follows from (6) and (7) that (8) holds. Noting that p_i < q_i and substituting f_i(p_i) = l_{i,1} p_i + c_{i,1} and f_i(q_i) = l_{i,1} q_i + c_{i,1} into (8), we can derive the desired inequality for i ∈ N_2.
It follows from the second equation of system (2) that the LTM variables m_{ij}(t) can be expressed in terms of x_i(t), which leads to a reduced description of the dynamics in the state components alone. Let (x(t), m(t)) be a solution of system (2) with initial state (x(0), m(0)). In the following, we will discuss the dynamics of the state components x_i(t) for i ∈ N_k (k = 1, 2, 3, 4), respectively.

Lemma 4.
All the state components x_i(t), i ∈ N_1, will flow to the interval (-∞, p_i] as t tends to +∞.

Proof. According to the different locations of x_i(0), there are two cases for us to discuss.
Proof. We prove it in the following three cases, according to the different locations of x_i(0).
In summary, wherever the initial state x_i(0) (i ∈ N_3) is located, x_i(t) would flow to and enter the interval [p_i, q_i] and stay in it finally.

Lemma 6. All the state components x_i(t), i ∈ N_4, will flow to the interval [r_i, +∞) as t tends to +∞.
Proof. Similar to the proof of Lemmas 4 and 5, we will prove it in the following two cases.
Therefore, x_i(t) would never get out of [r_i, +∞). By the same method, we also get that once x_i(t_0) ∈ [r_i, +∞) for some t_0 > 0, then x_i(t) would stay in this interval for all t ≥ t_0.
Then, for i ∈ N_{2,1}, if there exists some t* ≥ 0 such that x_i(t*) = p_i, then the definition of the index subset N_2 fixes the sign of ẋ_i(t*); and if there exists some t** ≥ 0 such that x_i(t**) = q_i, then the sign of ẋ_i(t**) is fixed likewise, so the trajectory cannot cross these points. That is, the trajectory x_i(t) with x_i(0) ∈ [r_i, +∞), i ∈ N_{2,3}, would enter and stay in the interval [q_i, r_i], which implies that there does not exist any equilibrium with the corresponding ith state component located in [r_i, +∞).
Combining Lemmas 4-6, it can be concluded that x_i(t), i ∉ N_{2,2}, would never escape from the corresponding interval of Ω. Furthermore, it follows from (26) that any equilibrium point of system (2) satisfies an equivalent system of fixed-point equations, and on the subset region Ω we define a map Γ componentwise by these equations for i ∈ N_{2,2}.
That is, x* is located in the interior of Ω.
Furthermore, if the following conditions hold, then system (2) with activation functions (3) can have and only have 3^{♯N_2} equilibria.
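The count 3^{♯N_2} can be read combinatorially: each component indexed in N_2 may host its equilibrium coordinate in one of three candidate subintervals, while every other component is pinned to a single interval. A minimal sketch of this bookkeeping (the interval labels are illustrative, not the paper's notation):

```python
from itertools import product

def candidate_regions(n2, n):
    """Enumerate candidate equilibrium regions: three subinterval choices per
    index in N_2, a single forced choice for every other index."""
    choices = [("left", "middle", "right") if i in n2 else ("pinned",)
               for i in range(n)]
    return list(product(*choices))

regions = candidate_regions(n2={0, 1}, n=3)  # 3**2 = 9 candidate regions
```

The length of the returned list is 3 raised to the size of N_2, matching the equilibrium count in the theorem.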
(ii) Furthermore, if the following conditions hold:

Proof. In the following, we will discuss the dynamical behaviors of the 3^{♯N_2} equilibria in two cases, respectively.
Similarly, it follows from the definition of N_3 that condition (89) holds for i ∈ N_3.
From Theorem 10 and Remark 11, we can obtain Corollary 12 as follows.

Remark 14.
In this paper, we study the multistability and instability of CNNs with activation functions (3). The models are different from and more general than those in [33], and the considered activation functions (3) are also more general than those employed in [33]. Moreover, the index subsets defined in this paper are less restrictive than those defined in [33].
Remark 15. Compared with the results reported in [33], it can be seen that Corollary 13 above is consistent with Theorem 1 in [33]. That is, if we specialize the system and activation functions in Theorem 10 to those considered in [33], we can obtain the main result in [33]. Therefore, Theorem 10 extends and improves the main result in [33].

Two Illustrative Examples
For convenience, we consider a two-dimensional CNN of the form (2), denoted by system (100). It is easy to see that N_2 = {1, 2}. In addition, by simple computations, we can verify that the conditions in Corollary 12 hold. According to Corollary 12, system (100) has exactly 3^2 = 9 equilibria, of which 2^2 = 4 are locally stable while the others are unstable. In fact, by direct computations, we can obtain the nine equilibria explicitly.
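Since the coefficients of system (100) were lost in extraction, the sketch below verifies the same kind of count on an assumed stand-in: a decoupled planar system ẋ_i = -x_i + 2 f(x_i) with the special activation (4), which also exhibits 3^2 = 9 equilibria (per axis the roots are x = -2, 0, 4/3).

```python
import numpy as np
from itertools import product

def f(x):
    """Special Mexican-hat activation (4): -1, x, -x + 2, -1 on its four pieces."""
    if x < -1.0:
        return -1.0
    if x <= 1.0:
        return x
    if x <= 3.0:
        return -x + 2.0
    return -1.0

def scalar_equilibria(weight=2.0, lo=-5.0, hi=5.0, n=100001):
    """Roots of g(x) = -x + weight * f(x), located by sign changes on a fine grid."""
    xs = np.linspace(lo, hi, n)
    g = np.array([-x + weight * f(x) for x in xs])
    roots = []
    for i in range(n - 1):
        if g[i] == 0.0:
            roots.append(xs[i])
        elif g[i] * g[i + 1] < 0.0:
            roots.append((xs[i] + xs[i + 1]) / 2.0)  # midpoint of bracketing cell
    return roots

per_axis = scalar_equilibria()             # three roots near -2, 0, 4/3
plane = list(product(per_axis, repeat=2))  # 3**2 = 9 planar equilibria
```

Stability can be read off the slope of g at each root: -1 in the saturation region (stable at x = -2), -1 + 2·1 = 1 at x = 0 (unstable), and -1 + 2·(-1) = -3 at x = 4/3 (stable), so each axis contributes 2 stable roots and the plane has 2^2 = 4 stable equilibria, mirroring the count stated above.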