Global Stability Analysis of Neural Networks with Constant Time Delay via Frobenius Norm

This paper deals with the global asymptotic robust stability (GARS) of neural networks (NNs) with constant time delay via the Frobenius norm. The Frobenius norm is utilized to derive a new sufficient condition for the existence, uniqueness, and GARS of the equilibrium point of the NNs. A suitable Lyapunov functional and slope-bounded activation functions are employed to establish the new sufficient condition for GARS of NNs. Finally, we give a comparative study with numerical examples to illustrate the advantages of the proposed result over existing GARS results in terms of network parameters.


Introduction
Neural networks (NNs) operate on principles similar to the human nervous system. They consist of a huge number of processors that operate in parallel and are organized in layers. The initial layer takes raw input, similar to the raw information received by human senses. Each subsequent layer receives the output of the layer before it and passes its own output to the next layer. Finally, the last layer produces the final output. Most nodes are interconnected across layers.
NNs have been studied by many researchers because of their applications in different fields. Modern technology relies heavily on computational models known as artificial neural networks (ANNs). Nowadays, artificial intelligence plays a major role in the electrical and electronics world, and ANNs are its backbone. In recent years, the role of ANNs has grown due to their applications in various disciplines. Machine learning uses distinct varieties of NNs such as feedforward NNs (artificial neurons), radial basis function NNs, multilayer perceptrons, convolutional NNs, recurrent neural networks (RNNs), modular NNs, and sequence-to-sequence models. Moreover, NNs have wide applications in engineering areas [1][2][3][4][5][6] such as radar systems, signal classification, 3D reconstruction, face identification, object recognition, medical diagnosis, visualization, machine translation, combinatorial optimization, and signal processing. They also have notable applications in nonengineering areas such as sales forecasting, risk management, and target marketing.
In NNs, time delay plays an important role in applications such as video lip reading and speech recognition. The delay parameter can affect the convergence of solutions of the given system, and convergence of solutions is what makes the NNs stable. Hence, global stability analysis plays an important role in the convergence of solutions of NNs. Different kinds of stability, such as global asymptotic robust stability (GARS), exponential stability, and complete stability of NNs, have been studied by many researchers in [7][8][9][10][11][12][13][14]. These stability results for delayed NNs have been obtained using Lyapunov stability theory, linear matrix inequalities, nonsmooth analysis, and M-matrix theory. Therefore, the GARS analysis of NNs under parameter uncertainties is an important problem; recently, it has been extensively studied by many authors in [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34].
Motivated by the above concepts, the GARS of NNs is investigated in this paper. The objective is to obtain a new sufficient condition for the GARS of the equilibrium point of the delayed neural system using Lyapunov stability theory and the Frobenius norm under parameter uncertainties.
The Frobenius norm is always an upper bound for the spectral norm (‖·‖_2). Moreover, the Frobenius norm is much easier to compute than the spectral norm: it requires no eigenvalue computation, only the trace (the sum of the diagonal entries) of S^T S. Finding eigenvalues is difficult for higher-dimensional matrices, and some matrices with real entries have complex eigenvalues; by utilizing the Frobenius norm, we avoid such situations. Therefore, the Frobenius norm is convenient for calculating upper bounds on the connection weight matrices. By utilizing the concept of homeomorphism, we find a new sufficient condition for the existence and uniqueness of the equilibrium point of NNs. Finally, we give comparative numerical examples to illustrate the effectiveness of our results for the NNs.
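The trace-based computation described above can be sketched numerically; the matrix below is a hypothetical example, not one of the paper's weight matrices:

```python
import numpy as np

# Hypothetical 3x3 connection weight matrix (illustration only).
S = np.array([[1.0, -2.0, 0.5],
              [0.0,  3.0, -1.0],
              [2.0,  1.0,  0.5]])

# Frobenius norm via the trace identity ||S||_F = sqrt(tr(S^T S)):
# no eigenvalue computation is required.
fro = np.sqrt(np.trace(S.T @ S))

# Spectral norm ||S||_2 = largest singular value of S.
spec = np.linalg.norm(S, 2)

print(fro, spec)  # the Frobenius norm never falls below the spectral norm
```

The inequality ‖S‖_2 ≤ ‖S‖_F holds because the squared Frobenius norm is the sum of all squared singular values, while the squared spectral norm is only the largest of them.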
Notation. For any vector w = (w_1, w_2, . . . , w_n)^T, |w| is defined as |w| = (|w_1|, |w_2|, . . . , |w_n|)^T. For any matrix S = (s_ij)_{n×n} with real entries, |S| is defined as |S| = (|s_ij|)_{n×n}. The minimum and maximum eigenvalues of S are denoted by λ_min(S) and λ_max(S), respectively. tr(S) denotes the trace of the matrix S, that is, the sum of its diagonal entries. If S = (s_ij)_{n×n} is a symmetric matrix and w^T S w > 0 (≥ 0) for every nonzero real vector w = (w_1, w_2, . . . , w_n)^T, then S is said to be positive definite (positive semidefinite). For two positive definite matrices H = (h_ij)_{n×n} and S = (s_ij)_{n×n}, H < S means that w^T H w < w^T S w for any real vector w.
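As a quick numerical companion to these definitions, positive definiteness of a symmetric matrix can be tested through λ_min; both matrices below are illustrative:

```python
import numpy as np

def is_positive_definite(S):
    # A symmetric S satisfies w^T S w > 0 for all nonzero w
    # exactly when its smallest eigenvalue lambda_min(S) is positive.
    return np.min(np.linalg.eigvalsh(S)) > 0

H = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # both eigenvalues positive: positive definite
S = np.array([[1.0, 2.0],
              [2.0, 1.0]])   # eigenvalues 3 and -1: not positive definite

print(is_positive_definite(H), is_positive_definite(S))
```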

Problem Statement and Fundamentals
In this paper, we consider the following delayed neural networks:

dw_i(t)/dt = −c_i w_i(t) + Σ_{j=1}^{n} d_ij f_j(w_j(t)) + Σ_{j=1}^{n} r_ij f_j(w_j(t − τ)) + J_i, i = 1, 2, . . . , n, (1)

where n denotes the total number of neurons, w_i(t) denotes the state of the i-th neuron at time t, c_i represents the charging rate of the i-th neuron, d_ij and r_ij are the entries of the connection weight matrices without and with time delay, respectively, f_j(·) denotes the activation function at times t and t − τ, τ denotes the constant time delay, and J_i represents the constant input to the i-th neuron. The matrix-vector form of equation (1) is as follows:

dw(t)/dt = −Cw(t) + Df(w(t)) + Rf(w(t − τ)) + J. (2)

The most common approach for handling the delayed neural system under parameter uncertainty is to let the connection weight matrices D = (d_ij)_{n×n} and R = (r_ij)_{n×n} and the matrix C = diag(c_i > 0) vary in intervals:

C ∈ C_I := {C = diag(c_i): 0 < c̲_i ≤ c_i ≤ c̄_i}, D ∈ D_I := {D = (d_ij): d̲_ij ≤ d_ij ≤ d̄_ij}, R ∈ R_I := {R = (r_ij): r̲_ij ≤ r_ij ≤ r̄_ij}. (3)

Assumption 1 (see [24]). The activation functions f_i are assumed to be slope bounded; that is, there exist positive constants k_i such that

0 ≤ (f_i(x) − f_i(y))/(x − y) ≤ k_i, for all x, y ∈ R, x ≠ y.

This class of functions will be denoted by f ∈ k. The functions of this class are not required to be bounded, differentiable, or monotonically increasing.
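A minimal forward-Euler simulation may make the setup of model (1) concrete; every parameter value below (C, D, R, J, τ, the tanh activation) is an illustrative assumption, not taken from the paper:

```python
import numpy as np

# Sketch: forward-Euler simulation of the delayed model
# w'(t) = -C w(t) + D f(w(t)) + R f(w(t - tau)) + J.
# All parameter values here are illustrative assumptions.
n, tau, dt, T = 2, 0.5, 0.001, 10.0
C = np.diag([2.0, 2.0])
D = np.array([[0.1, -0.2],
              [0.3,  0.1]])
R = np.array([[0.2,  0.1],
              [-0.1, 0.2]])
J = np.array([0.5, -0.5])
f = np.tanh  # slope-bounded activation (Assumption 1 with k_i = 1)

delay_steps = int(tau / dt)           # steps spanning the delay interval
steps = int(T / dt)
hist = np.zeros((steps + delay_steps + 1, n))  # zero constant initial history
for t in range(delay_steps, steps + delay_steps):
    w, w_del = hist[t], hist[t - delay_steps]
    hist[t + 1] = w + dt * (-C @ w + D @ f(w) + R @ f(w_del) + J)

print(hist[-1])  # the state settles near the unique equilibrium
```

With C dominating the coupling terms, the trajectory converges regardless of the delay, which is the qualitative behavior that a GARS condition certifies.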
Lemma 2 (see [24]). If G(x) ∈ C^0 (the set of continuous functions on R^n) satisfies the following conditions, then G(x) is a homeomorphism on R^n: (i) G(x) ≠ G(y) for all x ≠ y, and (ii) ‖G(x)‖ → ∞ as ‖x‖ → ∞.

Existence and Uniqueness of Equilibrium Point
This section focuses on a new sufficient condition for the existence and uniqueness of the equilibrium point of our model (2). By using the Frobenius norm, we prove this new sufficient condition for the NN model (2).

Theorem 1. Suppose that f ∈ k and the condition Ω_1 > 0 holds, where R* = (r*_ij) with r*_ij = max(|r̲_ij|, |r̄_ij|). Then, for each constant vector J, the neural network model (2) satisfying (3) has a unique equilibrium point.
Proof. Define the following map: G(w) = −Cw + Df(w) + Rf(w) + J. Every solution of G(w) = 0 is an equilibrium point of system (2). To prove this theorem, it is enough to prove that G(w) is a homeomorphism on R^n.
Let w, v ∈ R^n be two vectors such that w ≠ v. Since w ≠ v and C = diag(c_i > 0), from equation (9) we obtain (10). Then, we get (11), and from Lemma 1 we get (12) and (13). Applying the results (11)-(13) in (10) and using the fact that Ω_1 > 0, we obtain a lower bound involving λ_min(Ω_1), the smallest eigenvalue of the positive definite matrix Ω_1. From this inequality, G(w) ≠ G(v) whenever w ≠ v, so G is injective. By applying the properties of norms to the above inequalities and using them in (20), we obtain ‖G(w)‖ → ∞ as ‖w‖ → ∞, since ‖f(0)‖_2, ‖G(0)‖_1, and ‖M‖_∞ are finite. Hence, by Lemma 2, G(w) is a homeomorphism on R^n, and the neural network model (2) has a unique equilibrium point.
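The equilibrium equation G(w) = 0 can be sketched numerically: under the illustrative parameters below (assumptions, not the paper's), the fixed-point iteration w ← C⁻¹((D + R)f(w) + J) is a contraction and converges to the unique equilibrium:

```python
import numpy as np

# Sketch: solve G(w) = -C w + D f(w) + R f(w) + J = 0 by fixed-point
# iteration. All parameter values are illustrative assumptions.
C = np.diag([2.0, 2.0])
D = np.array([[0.1, -0.2],
              [0.3,  0.1]])
R = np.array([[0.2,  0.1],
              [-0.1, 0.2]])
J = np.array([0.5, -0.5])
f = np.tanh

w = np.zeros(2)
for _ in range(200):                       # contraction: geometric convergence
    w = np.linalg.solve(C, (D + R) @ f(w) + J)

residual = -C @ w + (D + R) @ f(w) + J
print(w, np.linalg.norm(residual))         # residual is essentially zero
```

The iteration contracts here because ‖C⁻¹(D + R)‖ < 1 for these values together with the unit slope bound on tanh; this mirrors the role of the dominance condition Ω_1 > 0 in the proof.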

Global Stability Analysis
In this section, we prove that the sufficient condition obtained for the existence and uniqueness of the equilibrium point in Theorem 1 also guarantees the GARS of the neural system (2). We denote the equilibrium point of (1) by w* and apply the transformation y_i(·) = w_i(·) − w*_i, i = 1, 2, . . . , n. Under this transformation, the network model (1) can be put in the following form:

dy_i(t)/dt = −c_i y_i(t) + Σ_{j=1}^{n} d_ij g_j(y_j(t)) + Σ_{j=1}^{n} r_ij g_j(y_j(t − τ)), i = 1, 2, . . . , n, (20)

where g_j(y_j(·)) = f_j(y_j(·) + w*_j) − f_j(w*_j). Moreover, Assumption 1 holds for the functions g_j; that is, f ∈ k implies g ∈ k with g_i(0) = 0, i = 1, 2, . . . , n. This transformation shifts the equilibrium point w* of (2) to the origin of (20), so our focus is to show GARS for the origin of the transformed model (20) instead of GARS for w*. The matrix form of (20) is as follows:

dy(t)/dt = −Cy(t) + Dg(y(t)) + Rg(y(t − τ)), (21)

where y(t) = (y_1(t), y_2(t), . . . , y_n(t))^T ∈ R^n is the new state vector, g(y(t)) = (g_1(y_1(t)), g_2(y_2(t)), . . . , g_n(y_n(t)))^T ∈ R^n, and g(y(t − τ)) = (g_1(y_1(t − τ)), g_2(y_2(t − τ)), . . . , g_n(y_n(t − τ)))^T ∈ R^n.

Theorem 2. Suppose that g ∈ k and there exist positive diagonal matrices K = diag(k_i > 0) and M = diag(m_i > 0) such that Ω_1 > 0, where M_max = max(m_i), D* = (1/2)(D̄ + D̲), D_* = (1/2)(D̄ − D̲), and R* = (r*_ij) with r*_ij = max(|r̲_ij|, |r̄_ij|). Then, the origin of the neural network model (21) satisfying the network parameters (3) is globally asymptotically robust stable.
Proof. Consider the following positive definite Lyapunov functional:

Mathematical Problems in Engineering
where m_i, δ, η, and μ are positive constants that will be determined later. The time derivative of (23) along the trajectories of model (21) is as follows:

Comparisons with Numerical Examples
In this section, we compare our sufficient condition for GARS with previously existing GARS results. For the comparison, the previous results are restated as follows.

Theorem 3 (see [24]). Suppose that g ∈ k and there exist matrices K = diag(k_i > 0) and M = diag(m_i > 0) such that the condition Ω_2 > 0 holds. Then, the origin of system (21) satisfying the network parameters (3) is globally asymptotically robust stable.
Theorem 4 (see [15]). Suppose that g ∈ k and there exist matrices M = diag(m_i > 0) such that the condition Ω_3 > 0 holds, where Z = (z_ij)_{n×n} with z_ii = −2 m_i d̄_ii and z_ij = −max(|m_i d̲_ij + m_j d̲_ji|, |m_i d̄_ij + m_j d̄_ji|) for i ≠ j. Then, the origin of system (21) satisfying the network parameters (3) is globally asymptotically robust stable.

Now we demonstrate the advantages of our result with some examples.
Example 1. Consider the following network parameters of the neural network model (2). From these matrices, we get ‖R*‖_F = 4.
In this example, we compare our sufficient condition Ω_1 with the result Ω_3, taking M as the identity matrix. Ω_1 and Ω_3 are calculated as follows. Ω_1 > 0 provided c > 9.12; hence, under the sufficient condition Ω_1 > 0, system (2) is GARS whenever c > 9.12. For calculating Ω_3, we need the matrix Z, which is obtained from the matrices M, D̲, and D̄. Ω_3 > 0 provided c > 9.5; hence, under the sufficient condition Ω_3 > 0, system (2) is GARS whenever c > 9.5. Consider the fixed network parameters C = 20I, D = D̄, and R = R̄, with constant delay τ and activation function g(y(t)) = tanh(y(t)). The state trajectories are depicted in Figures 1 and 2.

Remark 1. From Example 1, Ω_3 holds only for c > 9.5, whereas our result Ω_1 also holds in the range 9.12 < c < 9.5, where Ω_3 fails. Therefore, we conclude that Ω_1 is less conservative than Ω_3 for the network parameters of this example.
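The threshold computations in these examples (c > 9.12, c > 9.5) amount to scanning c for the smallest value that makes the condition matrix positive definite. A hedged sketch with a toy 2×2 matrix (not the paper's actual Ω_1) illustrates the procedure:

```python
import numpy as np

# Hedged sketch: scan c for the smallest value making Omega(c) positive
# definite. Omega below is a toy placeholder, not the paper's Omega_1.
def omega(c):
    A = np.array([[9.0, 0.5],
                  [0.5, 8.0]])    # illustrative fixed "coupling" part
    return c * np.eye(2) - A

def critical_c(lo=0.0, hi=50.0, steps=5001):
    # Return the first grid value of c where lambda_min(Omega(c)) > 0.
    for c in np.linspace(lo, hi, steps):
        if np.min(np.linalg.eigvalsh(omega(c))) > 0:
            return c
    return None

print(critical_c())   # roughly lambda_max(A) = (17 + sqrt(2)) / 2 ≈ 9.21
```

For this toy Omega the threshold is simply λ_max(A), since c·I − A > 0 exactly when c exceeds the largest eigenvalue of A; the grid scan generalizes to condition matrices with a more complicated dependence on c.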

Example 2.
Consider the neural network model (1) with the following network parameters. Using these parameters, ‖R*‖_F = 8. In this example, we compare our sufficient condition Ω_1 with the result Ω_2, taking M as the identity matrix. Ω_1 and Ω_2 are calculated as follows. Ω_1 > 0 provided c > 11; hence, under the sufficient condition Ω_1 > 0, system (2) is GARS whenever c > 11. Ω_2 > 0 provided c > 11.3; hence, under the sufficient condition Ω_2 > 0, system (2) is GARS whenever c > 11.3. Consider the fixed network parameters C = 10I, D = D̄, and R = R̄, with constant delay τ and activation function g(y(t)) = tanh(y(t)). The state trajectories are depicted in the corresponding figures.

Remark 2. From Example 2, Ω_2 holds only for c > 11.3, whereas our result Ω_1 also holds in the range 11 < c < 11.3, where Ω_2 fails. Therefore, we conclude that Ω_1 is less conservative than Ω_2 for the network parameters of this example. From the above examples, our sufficient condition Ω_1 is less conservative than the conditions Ω_2 and Ω_3; hence, our sufficient condition is more advantageous than the previous results for the above network parameters. For different sets of network parameters, our condition may be less advantageous than the existing stability conditions; however, all such results provide only sufficient conditions.

Conclusion
In this paper, the global stability of NNs with constant time delay has been studied by using the Frobenius norm under parameter uncertainties. The Frobenius norm is easier to compute than the spectral norm. By using the Frobenius norm, we have obtained a new sufficient condition for the GARS of the NN model (2). Finally, we discussed some numerical examples to illustrate the effectiveness of our result in comparison with previous results.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.