State Estimation for Discrete-Time Stochastic Neural Networks with Mixed Delays

This paper investigates the stability analysis problem for discrete-time neural networks (NNs) with discrete and distributed time delays. Stability theory and a linear matrix inequality (LMI) approach are developed to establish sufficient conditions for the NNs to be globally asymptotically stable and to design a state estimator for the discrete-time neural networks. Both the discrete and distributed delays are treated by decomposing the delay intervals, and Lyapunov-Krasovskii functionals (LKFs) are constructed on these intervals, so that a new stability criterion is obtained in terms of linear matrix inequalities (LMIs). Numerical examples are given to demonstrate the effectiveness and applicability of the proposed method.


Introduction
In the past decades, recurrent neural networks (RNNs) have been widely studied owing to their broad applications in areas such as pattern recognition, associative memory, combinatorial optimization, and signal processing. Dynamical behaviors (e.g., stability, instability, periodic oscillation, and chaos) of neural networks are known to be crucial in applications. In particular, the stability of a neural network is a prerequisite for solving certain optimization problems. It is well known that many biological and artificial neural networks contain inherent time delays in signal transmission due to the finite speed of information processing, which may cause oscillation, divergence, and instability. In recent years, a great number of papers have been published on various networks with time delays [1][2][3][4][5][6][7][8][9][10].
On one hand, a delay-dependent stability condition for continuous-time RNNs with time-varying delays was derived by defining a new Lyapunov functional, and the obtained condition covers some existing delay-independent ones; see [11,12]. However, when continuous-time RNNs are simulated, tested, or computed on a digital computer, it is necessary to discretize the continuous-time networks to obtain a discrete-time system, so the study of the dynamics of discrete-time neural networks is crucially needed. In particular, the stability of discrete-time neural networks (DNNs) has been studied in [13][14][15][16][17][18], since DNNs play an even more important role than their continuous-time counterparts in today's digital world.
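The discretization step mentioned above can be illustrated with a small sketch. The matrices C, W, the activation tanh, and the step size h below are illustrative choices (not taken from this paper); the Euler scheme turns a continuous-time Hopfield-type RNN ẋ(t) = −C x(t) + W g(x(t)) into the discrete-time recursion x(k+1) = (I − hC) x(k) + h W g(x(k)).

```python
import numpy as np

# Continuous-time RNN:  x'(t) = -C x(t) + W g(x(t)),  with g = tanh.
# C, W, and the step size h are illustrative, not from the paper.
C = np.diag([1.0, 1.2])
W = np.array([[0.2, -0.1],
              [0.1,  0.3]])
h = 0.1  # Euler step size

def euler_step(x):
    """One Euler step: x[k+1] = (I - h C) x[k] + h W tanh(x[k])."""
    return (np.eye(2) - h * C) @ x + h * W @ np.tanh(x)

x = np.array([0.8, -0.5])
for _ in range(500):
    x = euler_step(x)

print(np.linalg.norm(x))  # the trajectory settles near the origin
```

For this (stable) choice of C and W the discretized trajectory decays to the origin, which is the kind of behavior the stability criteria below are designed to certify.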
On the other hand, in many applications the neuron states are seldom fully available in the network outputs, so the neuron state estimation problem, recovering the neuron states from the available measurements, becomes important. Recently, the state estimation problem for neural networks has attracted considerable attention, and the delay-dependent state estimation problem has been studied widely for NNs; see [19][20][21][22][23][24][25][26].
Stochastic disturbances are largely inevitable owing to thermal noise in electronic implementations. It has also been shown that certain stochastic inputs can destabilize a neural network.
Remark 2. The condition on the activation function in Assumption 1 was originally employed in [27] and has subsequently been used in recent papers on the stability of neural networks; see, for example, [5,6,11,28,29].

Remark 5. It is noted that the binary stochastic variable was first introduced in [6].
The system (1) can be rewritten in the compact form given in (8).

As mentioned before, it is very difficult, or even impossible, to acquire complete information on the neuron states in relatively large-scale neural networks. The objective of this study is therefore to develop an efficient algorithm for estimating the neuron states from the available network outputs. It is assumed that the measured network outputs are of the form given in (9), where y(k) ∈ Rᵐ is the measured output and C is a known constant matrix with appropriate dimensions.

As a matter of fact, the activation functions g(⋅) are known. In order to fully utilize this information, the state estimator for the neural network is constructed with the same structure as (1), driven by an output injection term with gain K, where x̂(k + 1) is the estimate of the neuron state and K ∈ Rⁿˣᵐ is the estimator gain matrix to be determined.

Define the error signal e(k + 1) = x(k + 1) − x̂(k + 1); subtracting the estimator from (1) then yields the error state system (12). Denote

g̃(k) = g(x(k)) − g(x̂(k)),
g̃(k − τ(k)) = g(x(k − τ(k))) − g(x̂(k − τ(k))),
h̃(k + i) = h(x(k + i)) − h(x̂(k + i)),
f̃(k) = f(k, x(k)) − f(k, x̂(k));

then (12) can be rewritten as (13). The initial condition associated with the error system (13) is given in (14). By defining ê(k) = [xᵀ(k), eᵀ(k)]ᵀ and combining (8) and (13) with the external input set to zero, we obtain the augmented system (15).

Lemma 7. For any constant matrix M ∈ Rⁿˣⁿ with M = Mᵀ > 0, any integers n₂ ≥ n₁, and any vector function x: {n₁, n₁ + 1, …, n₂} → Rⁿ such that the sums below are well defined,

−∑_{i=n₁}^{n₂} xᵀ(i) M x(i) ≤ (n₂ − n₁ + 1) ζᵀ(k) N M⁻¹ Nᵀ ζ(k) + 2 ζᵀ(k) N ∑_{i=n₁}^{n₂} x(i),   (18)

where the matrix N and the vector ζ(k), which are independent of n₁ and n₂, are arbitrary and of appropriate dimensions.
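An observer of the kind constructed above can be sketched in a minimal delay-free simulation. All matrices below, the plant (A, B), the output map Cm, and the gain K, are illustrative assumptions rather than the paper's data (the paper obtains K from an LMI); the sketch builds x̂(k+1) = A x̂(k) + B g(x̂(k)) + K(y(k) − Cm x̂(k)) and checks that the estimation error decays.

```python
import numpy as np

# Plant:      x[k+1]  = A x[k]  + B tanh(x[k]),   y[k] = Cm x[k]
# Estimator:  xh[k+1] = A xh[k] + B tanh(xh[k]) + K (y[k] - Cm xh[k])
# A, B, Cm, K are illustrative; the paper determines K via LMIs.
A  = np.diag([0.3, 0.4])
B  = 0.1 * np.array([[1.0, 0.2],
                     [0.1, 0.3]])
Cm = np.array([[1.0, 0.0]])
K  = np.array([[0.2],
               [0.1]])

x, xh = np.array([0.5, -0.3]), np.zeros(2)
for _ in range(60):
    y  = Cm @ x                                   # available measurement
    x  = A @ x + B @ np.tanh(x)                   # plant update
    xh = A @ xh + B @ np.tanh(xh) + K @ (y - Cm @ xh)  # estimator update

err = np.linalg.norm(x - xh)
print(err)  # estimation error after 60 steps
```

Because tanh is globally Lipschitz with constant 1, the error obeys e(k+1) = (A − K Cm) e(k) + B (tanh x − tanh x̂), and for this choice of gain the contraction factor is below one, so the error tends to zero.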
Proof. Since M > 0, for every i ∈ {n₁, …, n₂} we have

(x(i) + M⁻¹Nᵀζ(k))ᵀ M (x(i) + M⁻¹Nᵀζ(k)) ≥ 0,

which expands to

−xᵀ(i) M x(i) ≤ 2 ζᵀ(k) N x(i) + ζᵀ(k) N M⁻¹ Nᵀ ζ(k).

Summing both sides from i = n₁ to n₂ yields (18).
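The summation inequality of Lemma 7 can be sanity-checked numerically. The sketch below (random data, with dimensions and the number of trials chosen arbitrarily) compares the two sides of (18) over many random draws of M ≻ 0, N, ζ, and x(⋅).

```python
import numpy as np

rng = np.random.default_rng(0)

def check_lemma7(n=3, n1=2, n2=7):
    """Check -sum x'Mx <= (n2-n1+1) z'N M^{-1} N'z + 2 z'N sum x."""
    R = rng.standard_normal((n, n))
    M = R @ R.T + n * np.eye(n)          # M = M' > 0
    N = rng.standard_normal((n, n))      # arbitrary matrix N
    z = rng.standard_normal(n)           # arbitrary vector zeta(k)
    xs = rng.standard_normal((n2 - n1 + 1, n))
    lhs = -sum(x @ M @ x for x in xs)
    rhs = ((n2 - n1 + 1) * z @ N @ np.linalg.inv(M) @ N.T @ z
           + 2 * z @ N @ xs.sum(axis=0))
    return lhs <= rhs + 1e-9             # small floating-point tolerance

print(all(check_lemma7() for _ in range(200)))  # True
```

The check always passes, as it must: the inequality follows from the pointwise nonnegativity of the quadratic form used in the proof.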

New Stability Criteria
In this section, we establish new stability criteria for system (1). Since the system in (8) involves a stochastic parameter, we need the following definition to investigate its stability.

Proof. We construct a new Lyapunov-Krasovskii functional V(k), built on the decomposed delay intervals, and take its forward difference along the solutions of the system. From Remark 6, the expectation of the stochastic quadratic term equals Ĝᵀ(k) Bᵀ P B Ĝ(k), and the analogous identities hold for the remaining stochastic terms; moreover, it is easy to verify that the cross terms involving B, D, and the weights ρ₁, ρ₂, ξ₂ vanish. Then, by using Lemma 7 together with τ(k) ∈ (τ_{j−1}, τ_j], the summation terms in the difference are bounded, and the difference of the functional can be written as a quadratic form in an augmented state vector, where e_i denotes the unit column vector having a one in its i-th row and zeros elsewhere.
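The mechanics of the proof, differencing a Lyapunov-Krasovskii functional along solutions, can be illustrated on a toy delayed system. In the sketch below the system x(k+1) = A x(k) + A_d x(k−d) and the weights P, Q are illustrative choices, far simpler than the functional used in this proof; V(k) = xᵀ(k)P x(k) + ∑_{i=k−d}^{k−1} xᵀ(i)Q x(i) is evaluated along a trajectory and its forward difference is checked to be negative.

```python
import numpy as np

# Toy delayed system x[k+1] = A x[k] + Ad x[k-d] with a simple LKF
#   V(k) = x(k)' P x(k) + sum_{i=k-d}^{k-1} x(i)' Q x(i).
# A, Ad, P, Q, and d are illustrative choices, not from the paper.
A, Ad = 0.5 * np.eye(2), 0.1 * np.eye(2)
P, Q = np.eye(2), 0.2 * np.eye(2)
d = 2

def V(hist, k):
    """Lyapunov-Krasovskii functional at time k; hist[k] stores x(k)."""
    head = hist[k] @ P @ hist[k]
    tail = sum(hist[i] @ Q @ hist[i] for i in range(k - d, k))
    return head + tail

hist = [np.array([0.7, -0.4])] * (d + 1)   # constant initial condition
for k in range(d, d + 40):
    hist.append(A @ hist[k] + Ad @ hist[k - d])

diffs = [V(hist, k + 1) - V(hist, k) for k in range(d, d + 30)]
print(max(diffs))  # all forward differences are negative
```

A strictly negative forward difference along all nonzero trajectories is exactly the property the LMI conditions of the theorem certify for the full functional.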
Summing both sides of (41) from 1 to N (where N is any positive integer) and letting N → ∞, we can conclude that ∑_{k=1}^{+∞} E{‖ê(k)‖²} is convergent and hence lim_{k→∞} E{‖ê(k)‖²} = 0. This completes the proof.
Based on Theorem 10, a further improved delay-dependent stability criterion for system (15) is given in the following corollary by using Lemma 8.

Examples
In this section, a numerical example is given to illustrate the effectiveness and benefits of the developed methods.
For the parameters listed above, taking the remaining scalar parameters as 1, 5, 3, and 0.89, respectively, we obtain a feasible solution, so it is clear that our method is effective. Owing to length limitations, only part of the feasible solution is provided here. The resulting estimator (46) is indeed a state estimator of the delayed neural network (1), and Figure 3 further confirms that the estimation error e(k) tends to zero as k → ∞.
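The feasibility check behind this example can be mimicked in a simplified, delay-free setting. For a hypothetical error matrix A_e = A − KC (the numbers below are illustrative, not the paper's data), a Lyapunov matrix P ≻ 0 with A_eᵀ P A_e − P ≺ 0 is computed by truncating the series P = ∑_k (A_eᵀ)ᵏ Q A_eᵏ, the delay-free analogue of the LMI solved in the example.

```python
import numpy as np

# Hypothetical closed-loop error matrix Ae = A - K C (illustrative data).
Ae = np.array([[0.1, 0.0],
               [-0.1, 0.4]])
Q = np.eye(2)

# Truncated series solution of the discrete Lyapunov equation
#   Ae' P Ae - P = -Q,
# which converges because rho(Ae) = 0.4 < 1.
P = sum(np.linalg.matrix_power(Ae.T, k) @ Q @ np.linalg.matrix_power(Ae, k)
        for k in range(200))

residual = Ae.T @ P @ Ae - P + Q
print(np.linalg.eigvalsh(P).min())   # smallest eigenvalue of P (positive)
print(np.abs(residual).max())        # residual of the Lyapunov equation
```

Existence of such a P certifies mean-square-style decay of the simplified error dynamics; the LMIs in the paper play the same role for the full delayed stochastic system.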

Conclusions
The robust stability of stochastic discrete-time NNs with mixed delays has been investigated in this work via the Lyapunov functional method. By employing delay partitioning and introducing a new Lyapunov functional, more general LMI conditions for the stability of stochastic discrete-time NNs are established. Finally, numerical simulation examples have shown the feasibility and effectiveness of the developed methods and their reduced conservatism compared with most existing results. The foregoing results have the potential to be useful for the study of stochastic discrete-time NNs, and they can also be extended to complex networks with mixed time-varying delays.