Stability Analysis of a Class of Neural Networks with State-Dependent State Delay

Differential equations with state-dependent delay are important because they describe certain real-world problems more accurately; at the same time, the complexity of state-dependent delay poses challenges for analysis. What distinguishes state-dependent delay from time-dependent delay is that the value of the delay varies with the state: it is impossible to know exactly in advance how far back the historical state information is needed, so problems with state-dependent delay are more complicated than those with time-dependent delay. The main work of this paper is to solve the stability problem of neural networks with state-dependent state delay. We use a purely analytical method to derive sufficient conditions for local exponential stability of the zero solution. Finally, several numerical examples are presented to demonstrate the validity of our results.

State-dependent delay (SDD) is all around us [19][20][21][22]. With limited natural resources, Antarctic whales and seals tend to take longer to mature when their populations are large [19]. In the car-following problem, it is inevitable to encounter delays that change with the state, including physiological delay, mechanical delay, and motion delay [20,21]. In addition, in the blood circulation system, the concentration of nutrients regulates the mitotic cycle of hematopoietic stem cells; thus, the mitotic cycle of stem cells is affected by the concentration of cells in the region [22]. In these cases, in order to describe the change and evolution of such phenomena more accurately and to make the research results more realistic and nuanced, we must adopt differential equations with SDD.
Neurodynamics helps to identify the highly complex and precise multilevel nonlinear brain system. The processing of neural information involves the coupling and cooperation of multiple levels and regions. In this way, the neural activity of the structural neural network is worth studying from the perspective of models and evolution, and then, to some extent, the cognitive function of the specific functional neural network is realized by evolutive neurodynamics [13]. The work on evolutive neurodynamics will be helpful for understanding the information processing mechanism and the neural energy coding rule in the nervous system and also provides a basis for research on the underlying mechanism of cognitive function.
In this paper, the local evolution characteristics of neural networks with state-dependent state delay (SDSD) will be discussed. The topic introduced here may interest researchers engaged in the theory and application of new neural network models with kinematic and dynamic features. Prior to this, there have been many studies on the stability of nonlinear systems with SDSD. Hartung [23] described a type of nonlinear functional differential equation with SDSD and analyzed the stability conditions of periodic solutions based on the linearization method. Exponential stability conditions of nonlinear systems with SDSD were reported in [24] via comparison with time-dependent delay systems. Fiter and Fridman [25] developed the Lyapunov-Krasovskii functional method to discuss the asymptotic stability of some particular linear systems with SDSD. Li and Wu [26] considered a class of nonlinear differential systems with SDD impulses; by impulsive control theory, uniformly stable, uniformly asymptotically stable, and exponentially stable results were presented. To derive stability criteria for nonlinear systems with SDSD, Li and Yang [27] first established a purely analytical framework. To the best of the authors' knowledge, although neural networks are widely used and the theoretical results are abundant, research on the stability of neural networks with SDSD is still blank. Moreover, since time delay is an inevitable factor in neural networks, it is of great significance to study neural networks with SDSD. The primary contributions of this paper are summarized as follows: (1) A general neural network model with SDSD is established. This model may interest professionals and academics in processing operations who wish to exploit the capability of control systems to capture rich historical information so that cost-effective and yet robust events can be portrayed.
Such research also contributes to revealing the influence of neurodynamics evoked by SDSD characteristics. (2) Sufficient conditions for local exponential stability of neural networks with SDSD are obtained. The purely analytical method we employ demonstrates that computational neurodynamics can be analyzed without imposing additional restrictions. The purely analytical method itself is intended to address the universality of the analysis framework; indeed, it may be extended to a more general class of nonlinear systems with SDSD. The rest of this paper is arranged as follows. In Section 2, we present the specific neural network model. The results of our work are described in Section 3. In Section 4, two numerical examples and simulation results are given to verify the validity of our results. Finally, in Section 5, the paper is summarized and future work is discussed.

Preliminaries and Model Description
2.1. Notations. Let R and R+ represent the sets of real numbers and nonnegative real numbers, respectively. R^n denotes the n-dimensional Euclidean space. For a matrix E, λmax(E) denotes its maximum eigenvalue, and P(E) stands for the minimum value of all elements of E. The vector 1-norm and 2-norm are denoted by ‖·‖1 and ‖·‖2, respectively.

2.2. Some Preliminaries and Problem Formulation.
Based on the work of Li and Yang [27], we put forward the following neural network model with SDSD:

ẋi(t) = −ai xi(t) + Σ_{j=1}^{n} bij gj(xj(t)) + Σ_{j=1}^{n} dij fj(xj(t − τ(t, X))), i = 1, 2, . . . , n. (1)

For the sake of presentation, we also give the compact form of system (1) as follows:

Ẋ(t) = −AX(t) + Bg(X(t)) + Df(X(t − τ(t, X))), (2)

where n stands for the number of neurons in the network, Ẋ(t) denotes the upper right derivative of X(t), X = X(t) = (x1(t), x2(t), . . . , xn(t))^T, and xi(t) represents the state of the ith neuron. A = diag(a1, a2, . . . , an) is a diagonal matrix with ai > 0 for i = 1, 2, . . . , n, and B = (bij)_{n×n} and D = (dij)_{n×n} are constant matrices with corresponding dimensions. g(X(t)) = (g1(x1(t)), g2(x2(t)), . . . , gn(xn(t)))^T and f(X(t − τ(t, X))) = (f1(x1(t − τ(t, X))), f2(x2(t − τ(t, X))), . . . , fn(xn(t − τ(t, X))))^T are the excitation functions of the neurons at times t and t − τ(t, X), respectively.
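To make the model concrete, the sketch below integrates a hypothetical one-dimensional instance of system (2) by forward Euler, looking up the delayed state on the stored time grid. The parameter values, the choice g = f = tanh, and the delay form τ(t, x) = δ + λ|sin x| are assumptions for illustration only, not taken from the paper.

```python
import numpy as np

# Hypothetical one-dimensional instance of system (2):
#   x'(t) = -a*x(t) + b*g(x(t)) + d*f(x(t - tau(t, x(t))))
# with an assumed bounded state-dependent delay tau(t, x) = delta + lam*|sin(x)|.
a, b, d = 2.0, 0.5, 0.5
delta, lam = 0.1, 0.05           # keeps tau within [0, eta] with eta = 0.15
g = f = np.tanh                  # typical bounded excitation functions

def tau(t, x):
    return delta + lam * abs(np.sin(x))

def simulate(x0, h=1e-3, T=5.0):
    """Forward-Euler integration; the history before t0 is taken constant = x0."""
    n = int(T / h)
    xs = np.empty(n + 1)
    xs[0] = x0
    for k in range(n):
        t = k * h
        lag = tau(t, xs[k])
        j = max(0, k - int(round(lag / h)))   # grid index closest to t - tau
        xs[k + 1] = xs[k] + h * (-a * xs[k] + b * g(xs[k]) + d * f(xs[j]))
    return xs

traj = simulate(x0=0.5)
print(traj[-1])   # the trajectory decays toward the zero equilibrium
```

With a = 2 dominating the Lipschitz bounds of the b and d terms, the simulated state contracts toward zero, which is the qualitative behavior the stability results below aim to certify.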

Discrete Dynamics in Nature and Society

where ‖ · ‖ is the vector norm matching the context of the paper.

Remark 1. X(t) is right-upper derivable, which implies that the solution of system (2) can be continuous but not smooth. The state delay τ(t, X) is related to the state of each neuron.

For the subsequent analysis, we need the following assumptions on systems (1) and (2).
Through Assumption 1, X = 0 is guaranteed to be a constant solution of systems (1) and (2).

Assumption 3. The state delay τ(t, X) ∈ C(R+ × R^n, [0, η]) is locally Lipschitz continuous in its second argument; namely, for any Γ1, Γ2 ∈ R^n, there always exists a constant ℓτ > 0 such that

|τ(t, Γ1) − τ(t, Γ2)| ≤ ℓτ ‖Γ1 − Γ2‖.

For ease of expression, let

Definition 1 (see [28]). The zero solution of system (2) is said to be locally exponentially stable (LES) in region M if there exist constants c > 0 and a Lyapunov exponent ζ > 0 such that, for any t ≥ t0,

‖X(t; t0, Ψ)‖ ≤ c ‖Ψ‖ e^{−ζ(t−t0)},

where Ψ ∈ M is the initial state, and M is called a local exponential attraction set of the zero solution.
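The Lipschitz condition in Assumption 3 can be verified numerically for a concrete delay. The sketch below uses the same assumed delay τ(t, x) = δ + λ|sin x| as before (hypothetical, for illustration); since | |sin u| − |sin v| | ≤ |u − v|, the condition holds with ℓτ = λ.

```python
import numpy as np

# Assumed delay tau(t, x) = delta + lam*|sin(x)|; Assumption 3 then holds with
# l_tau = lam.  A quick Monte Carlo check of the difference-quotient bound:
delta, lam = 0.1, 0.05

def tau(x):
    return delta + lam * np.abs(np.sin(x))

rng = np.random.default_rng(0)
u = rng.uniform(-10, 10, 10_000)
v = rng.uniform(-10, 10, 10_000)
ratios = np.abs(tau(u) - tau(v)) / np.abs(u - v)
print(ratios.max())   # never exceeds l_tau = lam = 0.05
```

A sampled check like this is of course no proof, but it is a cheap sanity test before plugging a candidate delay into the stability conditions.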
Considering the arbitrariness of ϵ and letting ϵ ⟶ 0, we obtain the desired exponential estimate. The proof of Theorem 1 is completed. □

Theorem 2. Suppose B and D are symmetric matrices. Then, under Assumptions 1-4, the zero equilibrium of system (2) is LES if

and μ = 2ζ > 0 satisfies

where ζ is the Lyapunov exponent.
Proof. Suppose X(t; t0, Ψ) is a solution of system (2) with initial state Ψ. Then, for any ϵ ∈ (0, μ), we claim that (18) holds. Firstly, when t = t0, (18) holds. Next, we prove that (18) holds on (t0, +∞). Suppose, to the contrary, that there are some instants on (t0, +∞) at which (18) is untenable; then we can find an instant tq ≥ t0 satisfying the following three conditions: (3) there exists a right neighborhood U0+(tq, ξ) of tq such that for all tξ ∈ U0+(tq, ξ). Combining the definitions of V0, V(t), and tq with condition (1), and then using (17) and (19), we obtain a bound on the derivative d/dt. This is in contradiction with (9), so (18) holds. Considering the arbitrariness of ϵ and letting ϵ ⟶ 0, we obtain the exponential estimate. The proof of Theorem 2 is completed. □

Remark 2. In Theorem 2, by taking the value of ϖ in Lemma 1 to be 1, we can get the following inequality:

We give a particular case of system (2) by considering the following one-dimensional system:

where a, b, d, δ, λ ∈ R.

Remark 5.
The proofs of Theorems 1 and 2 also provide an estimate of the local exponential convergence rate ζ, which can be obtained by solving transcendental equation (9) or (17).
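As an illustration of how such a rate can be computed, the sketch below solves a transcendental rate equation by bisection. Since equations (9) and (17) are not reproduced here, it uses the classic delay-system rate equation ζ = a − |d| e^{ζη} as a hypothetical stand-in with the same structure (a monotone transcendental equation in ζ); the parameter values are assumptions.

```python
import math

# Hypothetical stand-in for the transcendental rate equation of Remark 5:
#   zeta = a - |d| * exp(zeta * eta)
# The right-hand side is strictly decreasing in zeta, so bisection on
# residual(z) = a - |d|*exp(z*eta) - z finds the unique positive root.
a, d, eta = 2.0, 0.5, 0.15

def residual(z):
    return a - abs(d) * math.exp(z * eta) - z

lo, hi = 0.0, a          # residual(0) = a - |d| > 0 and residual(a) < 0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0:
        lo = mid
    else:
        hi = mid
zeta = 0.5 * (lo + hi)
print(zeta)              # the certified exponential decay rate for this instance
```

Bisection is a deliberately simple choice: it needs only a sign change, which the monotone structure of such rate equations guarantees, and it avoids derivative computations that a Newton iteration would require.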

Remark 6.
Remarkably, the Lyapunov exponent in Theorems 1 and 2 is state-dependent, so only when the initial value is bounded can we find a common ζ such that e^{(ζ−ϵ)t} V(t) ≤ V(0). Furthermore, due to the effect of SDSD, the results in our paper are local rather than global.

Concluding Remarks
This paper is devoted to resolving the problem of local dynamic properties of neural networks with SDSD. Through a purely analytical method and proof by contradiction, we obtain a number of sufficient conditions for local exponential stability of neural networks with SDSD. Based on our results, we know that the Lyapunov exponent depends on the state on account of the effect of SDSD. This also indicates that the derived exponential stability results are local rather than global. Consequently, global dynamics can be taken as a topic of prospective research. In addition, we can also develop methods for SDSD systems, for instance, event-triggered control.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.