Stabilization of Discrete-Time Delayed Systems via Partially Delay-Dependent Controllers

This paper is concerned with the stabilization problem for a class of discrete-time delayed systems, whose stabilizing controller is designed to be partially delay-dependent. The distribution property of such a controller is described by a discrete-time Markov chain with two modes. It is seen that the two traditional special cases of state feedback control, without or with time delay, respectively, are included. Based on the proposed controller, new stabilization conditions depending on the mode probabilities are developed. Because the established results are in LMI form, they are further extended to more general cases in which the transition probabilities are uncertain or totally unknown, and further applications are given in detail. Finally, numerical examples are used to demonstrate the effectiveness and superiority of the proposed methods.


Introduction
As is well known, time delay is usually encountered in various practical dynamical systems, such as chemical systems, heating systems, biological systems, networked control systems (NCSs), telecommunication systems, and economic systems. The presence of time delay in such practical systems often leads to oscillation, instability, and poor performance. Motivated by these facts, various research topics on time-delayed systems have been studied in [1][2][3][4][5], in which both delay-independent and delay-dependent results are included. Because they make use of information on the length of delays, delay-dependent results are less conservative, especially when the time delay is small. During the past decades, much effort has been devoted to the derivation of delay-dependent conditions. For example, by exploiting the slack variable technique, less conservative results on delayed systems with time-varying delays were established in [6][7][8], while other results based on the Jensen inequality were obtained in [9][10][11][12]. By considering the distribution of the delay, some improved delay-dependent results were presented in [13][14][15][16][17]. Other research topics on delayed systems are often related to networked control systems [18][19][20][21], complex networks [22][23][24][25], and Markovian jump systems [26][27][28][29]. In particular, there is a special kind of time delay for Markovian jump systems, known as mode-dependent time delay, which is not only time-varying but also depends on the system mode. For this kind of time delay, some results for the continuous-time case were obtained in [30][31][32][33], while the results in [34][35][36] referred to the discrete-time case.
By investigating the existing references on various kinds of time-delay systems, it can be seen that stabilization by state feedback control is mainly realized through a controller either without or with delay. When a state feedback controller without delay is designed for a delayed system, it is assumed that the current system state is available online, which can be regarded as an ideal assumption. On the contrary, the state of a delayed state feedback controller always has time delay, which can be regarded as an absolute assumption. More importantly, it is found in this paper that neither of the above stabilization approaches can be said to be superior in terms of having less conservatism. Since the underlying systems have time delays, can a controller both with and without time delay be designed? What is the form of such a controller? Compared with the traditional stabilizing controllers for delayed systems, are there some advantages? What is the correlation among such controllers? Based on these considerations, it is necessary to study the above problems in detail. To the best of our knowledge, very few results are available on designing a feedback controller for delayed systems that depends on the state and the delayed state simultaneously. All these facts motivate the current research.
In this paper, the stabilization of discrete-time delayed systems is achieved by a kind of partially delay-dependent controller. The main contributions of this paper are summarized as follows: (1) a kind of partially delay-dependent controller is developed, whose probability distribution is modeled as a discrete-time Markov chain with two modes; (2) new conditions for such a stabilizing controller are presented in terms of LMIs, where the probabilities are taken into account in the controller design; (3) the given results are further extended to more general cases in which such probabilities are uncertain or totally unknown, which are also developed in LMI form; (4) based on the proposed methods, further applications to state feedback controllers with partially observable states are considered.
Notation. ℝⁿ denotes the n-dimensional Euclidean space, and ℝⁿˣᵐ is the set of all n × m real matrices. Z₊ is the set of positive integers. E{⋅} means the mathematical expectation of {⋅}. ‖⋅‖ refers to the Euclidean vector norm or the spectral matrix norm. λmin(⋅) and λmax(⋅) denote the minimum and the maximum eigenvalues of a matrix, respectively. In symmetric block matrices, we use "∗" as an ellipsis for the terms induced by symmetry and diag{⋅⋅⋅} for a block-diagonal matrix, and X⋆ ≜ X + Xᵀ.

Problem Formulation
Consider a class of discrete-time delayed systems described as

x(k + 1) = A x(k) + A_d x(k − d) + B u(k),  x(k) = φ(k), k ∈ {−d, …, 0}, (1)

where x(k) ∈ ℝⁿ is the system state, u(k) ∈ ℝᵐ is the control input, d ≥ 0 is the time delay, and φ(k) ∈ ℝⁿ is the initial value. A, A_d, and B are known matrices with appropriate dimensions.
In this paper, a kind of partially delay-dependent controller is proposed as

u(k) = (1 − ρ(k)) K₁ x(k) + ρ(k) K₂ x(k − d), (2)

where both K₁ and K₂ are control gains to be determined. The parameter ρ(k) indicates whether the delayed controller is active or not. In this paper, it is assumed to be a discrete-time homogeneous Markov chain taking values in a finite set B = {0, 1} with the following transition probability matrix:

[ 1 − α    α
    β    1 − β ]. (3)

Here, the parameters α and β are probabilities defined as

α = Pr{ρ(k + 1) = 1 | ρ(k) = 0},  β = Pr{ρ(k + 1) = 0 | ρ(k) = 1},

which are named the recovery rate and the failure rate of the delayed state-feedback controller u(k) = K₂ x(k − d), respectively. If α + β = 1, ρ(k) reduces to a Bernoulli-type process, whose probabilities are

Pr{ρ(k) = 1} = α,  Pr{ρ(k) = 0} = 1 − α = β.

After applying controller (2) to system (1), one gets

x(k + 1) = (A + (1 − ρ(k)) B K₁) x(k) + (A_d + ρ(k) B K₂) x(k − d), (6)

which is equivalent to

x(k + 1) = Ā_{σ(k)} x(k) + Ā_{d,σ(k)} x(k − d), (7)

with Ā₁ = A + B K₁, Ā_{d,1} = A_d, Ā₂ = A, and Ā_{d,2} = A_d + B K₂. Here, σ(k) is also a discrete-time homogeneous Markov chain taking values in a finite set S = {1, 2}, whose transition probability matrix is similar to (3).
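As a quick illustration of the switching signal (not part of the paper's derivation; the numbers are illustrative), the following sketch simulates ρ(k) with recovery rate α and failure rate β. When α + β = 1, as in the Bernoulli special case above, the chain is an i.i.d. Bernoulli process, and the long-run fraction of time the delayed controller is active approaches α/(α + β):

```python
import random

def simulate_rho(alpha, beta, steps, seed=0):
    """Simulate the two-mode Markov chain rho(k) in {0, 1}.

    alpha: recovery rate  Pr{rho(k+1)=1 | rho(k)=0}
    beta:  failure  rate  Pr{rho(k+1)=0 | rho(k)=1}
    """
    rng = random.Random(seed)
    rho, path = 0, []
    for _ in range(steps):
        path.append(rho)
        if rho == 0:
            rho = 1 if rng.random() < alpha else 0
        else:
            rho = 0 if rng.random() < beta else 1
    return path

path = simulate_rho(alpha=0.3, beta=0.7, steps=100_000)
freq_delayed = sum(path) / len(path)
# Stationary probability of the delayed mode is alpha/(alpha+beta) = 0.3,
# so the empirical frequency should be close to 0.3.
print(freq_delayed)
```

With α = 0.3 and β = 0.7 (the values used later in Example 1), α + β = 1, so the simulated chain is exactly the Bernoulli special case.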
Remark 1. In the references on Markovian jump systems, the system parameters switch simultaneously according to a Markov process, while all the system parameters considered here are deterministic and only the probability distribution of the delayed controller is modeled as a Markov process. More importantly, by investigating existing references such as [34][35][36], it is seen that stabilization there is also realized by adding either of the above special controllers separately, and such probabilities are not considered.
Remark 2. In this paper, a kind of partially delay-dependent controller is proposed, whose distribution probability is described by a Markov process. The key idea of controller (2) can also be applied to other systems or problems, such as singular systems, pinning control of complex networks, and networked control systems with the disordering phenomenon. For the stabilization problem of singular systems, controller (2) can be designed directly. However, the corresponding analysis and synthesis problems should be reconsidered carefully, since the singular derivative matrix is included. Similarly, based on traditional pinning control methods for networks and controller (2), a kind of pinning controller for complex networks with time delay could be designed. Thirdly, last but not least, the key idea of the proposed controller may be applied to deal with the disorder problem of networked control systems. However, how to propose an efficient controller based on controller (2) to resist disorder should be considered carefully. In a word, the proposed controller (2) has further applications to other problems. These problems will be our future topics.
In order to present our main results, a definition for system (6) or (7) is introduced here.

Definition 3. System (6) or (7) is said to be stochastically stable if

E{ Σ_{k=0}^{∞} ‖x(k)‖² | x₀, σ₀ } < ∞

holds for all initial conditions x₀ ∈ ℝⁿ and σ₀ ∈ S.
Remark 5. In this theorem, the relationship between such probabilities and the existence conditions for stabilizing controller (2) is established. Because the corresponding probability distribution is considered, a stochastic Lyapunov functional depending on the Markov process is exploited.
Since the probabilities are included, they are important in the controller design, and their effects will be illustrated by numerical examples. Moreover, in this paper, it is found by a numerical example that neither of the two special cases is always less conservative than the other. This phenomenon further demonstrates the utility and superiority of the partially delay-dependent controller (2).
Remark 6. It is worth mentioning that, without loss of generality, the stochastic Lyapunov functional is selected to be (14), which is comparatively simple. However, based on the methods proposed in this paper, there are still some possible ways to further reduce the conservatism. First of all, some additional terms related to the time delay could be added to (14). In addition, other techniques such as the slack variable algorithm and the Jensen inequality method can be used in the proof of Theorem 4. On the other hand, since the results are in LMI form, they could be extended to other systems such as singular systems, as well as to output control or filtering problems of discrete-time delayed systems, which will be our future work.
From the form of the partially delay-dependent controller (2), it is seen that two special cases are included, described by

u(k) = K₁ x(k), (27)
u(k) = K₂ x(k − d), (28)

where K₁ and K₂ are the control gains to be determined. They are the traditional state-feedback controllers without or with delay and are obtained by letting ρ(k) ≡ 0 and ρ(k) ≡ 1, respectively. After applying such traditional controllers, by a method similar to that of this paper, we have the following corollaries.
Corollary 7. Consider system (1) with a given scalar d > 0; there is a controller (27) such that the resulting closed-loop system is stochastically stable if there exist X > 0, Q̄ > 0, and Z̄ > 0 and Y₁ satisfying (30), where Φ₁ = AX + BY₁. Then, the gain of controller (27) is computed by K₁ = Y₁X⁻¹.

Proof. When only controller (27) is applied, similar to the proof of Theorem 4, it is obtained that the resulting closed-loop system is stochastically stable if condition (31) holds, where Ā = A + BK₁, X = P⁻¹, and Q̄ = Q⁻¹. By pre- and post-multiplying both sides of (31) with diag{X, X, X, X} and its transpose, respectively, condition (31) is equivalent to (32), where Y₁ = K₁X. Based on the representation in (30), it is concluded that conditions (30) and (32) are the same. This completes the proof.
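The congruence transformation and Schur complement steps used in this proof can be illustrated numerically. The following sketch uses an assumed scalar delay-free example (the gain K₁ and Lyapunov variable X are chosen by hand, not computed from the paper's LMIs) to check that the block LMI form and its Schur complement agree:

```python
import numpy as np

# Schur-complement illustration: for X > 0, the block LMI
#   [[X, M], [M.T, X]] > 0   is equivalent to   X - M @ inv(X) @ M.T > 0.
# All numbers below are illustrative, not taken from the paper.
A = np.array([[1.1]])          # unstable open-loop dynamics
B = np.array([[1.0]])
K1 = np.array([[-0.5]])        # assumed stabilizing gain, A + B K1 = 0.6
X = np.eye(1)                  # candidate Lyapunov variable, X = P^{-1}
M = (A + B @ K1) @ X

block = np.block([[X, M], [M.T, X]])
schur = X - M @ np.linalg.inv(X) @ M.T

block_pd = bool(np.all(np.linalg.eigvalsh(block) > 0))
schur_pd = bool(np.all(np.linalg.eigvalsh(schur) > 0))
spectral_radius = max(abs(np.linalg.eigvals(A + B @ K1)))
print(block_pd, schur_pd, spectral_radius)  # both PD; radius 0.6 < 1
```

Positive definiteness of the block matrix and of its Schur complement hold together, and the corresponding closed-loop matrix is Schur stable.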
Corollary 8. Consider system (1) with a given scalar d > 0; there is a controller (28) such that the resulting closed-loop system is stochastically stable if there exist X > 0, Q̄ > 0, and Z̄ > 0 and Y₂ satisfying the corresponding LMI condition, where Φ₂ = A_d X + BY₂. Then, the gain of controller (28) is computed by K₂ = Y₂X⁻¹.

Proof. Based on the proofs of Theorem 4 and Corollary 7, one can obtain this corollary easily. Thus, the proof is omitted here.
From Theorem 4, it is seen that the probabilities α and β play an important role in the controller design and should be known exactly. But in some applications, it is very hard or costly to obtain them exactly, and they may even be totally unknown. So, it is natural and important to study such general cases. If there exist uncertainties in α and β, we will use their estimates, described as

α = ᾱ + Δα,  β = β̄ + Δβ, (34)

where ᾱ and β̄ are the estimates and the admissible uncertainties are Δα ∈ [−ε₁, ε₁] with ε₁ ∈ [0, 1] and Δβ ∈ [−ε₂, ε₂] with ε₂ ∈ [0, 1], respectively. Then, based on Theorem 4, we have the following theorem.
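To make the admissibility requirement on the uncertain probabilities concrete, the following sketch (using the illustrative estimates ᾱ = 0.8, β̄ = 0.9, ε₁ = ε₂ = 0.1, the values that appear in Example 2) checks that every admissible α and β remains a valid probability:

```python
# Admissibility check for the uncertain description: alpha = alpha_hat + d_alpha
# with |d_alpha| <= eps1, and beta = beta_hat + d_beta with |d_beta| <= eps2.
# Values are illustrative (matching the estimates quoted in Example 2).
alpha_hat, eps1 = 0.8, 0.1
beta_hat, eps2 = 0.9, 0.1

alpha_range = (alpha_hat - eps1, alpha_hat + eps1)
beta_range = (beta_hat - eps2, beta_hat + eps2)

# A small tolerance guards against floating-point rounding at interval ends.
tol = 1e-12
admissible = all(-tol <= v <= 1.0 + tol for v in (*alpha_range, *beta_range))
print(alpha_range, beta_range, admissible)
```

Any admissible pair (α, β) therefore stays inside [0, 1], so the transition probability matrix (3) remains stochastic for every realization of the uncertainty.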
Proof. By the proof of Theorem 4, it is known that the stochastic stability of the resulting closed-loop system under the general conditions (35) and (36) can be guaranteed by Ψᵢ < 0, i ∈ S, which is equivalently described by (42) and (43). First, we deal with condition (42). By condition (35), it is concluded that (42) is equivalent to (44) with a scalar ε₁ > 0, which is implied by (45), whose former inequality is further guaranteed by (46). Let X₁ = P₁⁻¹; then, by the Schur complement lemma, (47) and (46) are equivalent to (48), where Q̄ = Q⁻¹. From (37), it is known that X₁ is nonsingular. Then, by pre- and post-multiplying both sides of (48) with diag{X₁, X₁, X, X, X, X} and its transpose, (49) and (50) are obtained. The nonlinear term −X₁QX₁ can be handled by (23), and −P₁⁻¹ can be handled similarly. Based on these facts and taking (13) into account, it is obtained that condition (37) implies (50). As for (49), by pre- and post-multiplying both its sides with diag{X₁, X} and its transpose, it is equal to (39). As for (43), similar to (42), it is concluded that it is equivalent to (52) with a scalar ε₂ > 0. Similar to the above proof, it is implied by (53) and (54). By pre- and post-multiplying both sides of (53) with diag{X₂, X₂} and its transpose, we obtain (55), which is equivalent to (56). Letting Q̄₂ = X₂QX₂ and considering representation (13), similar to (37) implying (50), it is obtained that (38) implies (56). As for (54), it can be guaranteed by (40), whose process is similar to the proof of (46). This completes the proof.
When the probabilities α and β fall into another general case in which both of them are unknown, how to design controller (2) is also an interesting question. Similar to Theorem 9, we have the following result.
Then, the gains of the designed partially delay-dependent controller (2) are constructed by (13).
As for delayed system (1), the implementation of state feedback controller (27) or (28) requires the corresponding state or delayed state to be totally observable. When the states of such controllers are observable only with some probabilities instead of being totally observable, the corresponding state feedback controllers without or with delay can be described by

u(k) = ξ(k) K₁ x(k), (65)
u(k) = ξ(k) K₂ x(k − d), (66)

respectively, where K₁ and K₂ are control gains to be determined, and ξ(k) is also a Markov process. The corresponding state observation sets (67) and (68) are referred to as stochastic sets. Based on the proposed method for the partially delay-dependent controller, when such two sets are complementary, an improved controller based on stochastic observation sets of states can be constructed as

u(k) = ξ(k) K₁ x(k) + (1 − ξ(k)) K₂ x(k − d). (69)

After applying controllers (65), (66), and (69) to system (1), respectively, based on the methods exploited in this paper, one can obtain the corresponding results easily.
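A minimal simulation sketch of the combined-controller idea follows, on an assumed scalar system with a Bernoulli observation indicator standing in for ξ(k); all gains and parameters are illustrative and are not computed from the corollaries:

```python
import random

# Sketch of controller (69) on an assumed scalar system
# x(k+1) = a*x(k) + ad*x(k-d) + b*u(k): when the current state is observable
# (xi = 1) the non-delayed feedback is used; otherwise the controller falls
# back on the delayed state.  All numbers are illustrative.
a, ad, b, d = 0.9, 0.05, 1.0, 3
K1, K2 = -0.7, -0.05
rng = random.Random(1)

x = [1.0] * (d + 1)                   # constant initial condition on [-d, 0]
for k in range(400):
    xi = 1 if rng.random() < 0.5 else 0          # observation indicator
    u = K1 * x[-1] if xi else K2 * x[-1 - d]     # controller (69)
    x.append(a * x[-1] + ad * x[-1 - d] + b * u)

print(abs(x[-1]))   # the state decays towards zero under random switching
```

Here the unobserved mode cancels the delayed term (ad + b·K2 = 0) while the observed mode strongly contracts the current state, so the trajectory decays regardless of the realized observation pattern.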
Corollary 11. Consider system (1) with a given scalar d > 0; there is a controller (65) with condition (67) such that the resulting closed-loop system is stochastically stable if there exist Pᵢ > 0, Q > 0, Z > 0, and Y₁, i ∈ S, satisfying the corresponding LMI conditions, where the symbols are given in Theorem 4. Then, the gain of controller (65) is computed as K₁ = Y₁X₁⁻¹.

Corollary 12. Consider system (1) with a given scalar d > 0; there is a controller (66) with condition (68) such that the resulting closed-loop system (6) is stochastically stable if there exist Pᵢ > 0, Q > 0, Z > 0, and Y₁, i ∈ S, satisfying the corresponding LMI conditions, where Φ̄₁ = A_d X₁ + BY₁, and the others are given in Theorem 4. Then, the gain of controller (66) is computed as K₂ = Y₁X₁⁻¹.

Since controllers (2) and (69) are similar, the result based on controller (69) is similar to Theorem 4; only the meaning of the transition probability matrix is different. Thus, it is omitted here.

Numerical Examples
Example 1. Consider a discrete-time delayed system described by (1), whose parameters are given as follows, with the time delay assumed to be d = 3. When the desired controller is of form (27), its gain is obtained from Corollary 7, while there is no solution for controller (28) by Corollary 8. On the other hand, without loss of generality, when the probabilities are assumed to be α = 0.3 and β = 0.7, the gains of the partially delay-dependent controller (2) are computed by Theorem 4. For this example, it is seen that Corollary 7 is less conservative than Corollary 8. However, when the system parameters of (1) are selected differently, with the time delay again d = 3, there is no solution for controller (27) by Corollary 7, while the gain of controller (28) can be obtained by Corollary 8. For this case, Corollary 8 is less conservative than Corollary 7. Thus, we cannot conclude that one of them is always less conservative. Based on these facts, the methods proposed in this paper have some utility and advantages. In order to further demonstrate the correlation between the stabilization region and such probabilities, the system parameters of discrete-time delayed system (1) are given with the time delay d = 3 and scalars a₁ and a₂. The probabilities α and β are allowed to take values in [0, 1]. Under a₂ = −0.1, the upper bound of the allowable range of a₁ can be obtained by Theorem 4, as listed in Table 1. On the other hand, when a₂ = 0.2, one can similarly get the corresponding upper bound of a₁, given in Table 2. Moreover, the correlations between the allowable range of a₁ with given a₂ = −0.1 and a₂ = 0.2 for different pairs (α, β) are also demonstrated in Figures 1 and 2, respectively. Based on these simulations, the correlation between the stabilization region and such probabilities is illustrated vividly.
Example 2. Consider the cart and inverted pendulum system illustrated in Figure 3, where M and m are the cart mass and the pendulum mass, l is the length of the pendulum, z is the cart position, θ is the pendulum angular position, and F is the input force. The state variables are selected as x₁ = z, x₂ = ż, x₃ = θ, and x₄ = θ̇. Without loss of generality, it is assumed that M = 0.5 kg, m = 1 kg, and l = 1 m, and the surface is frictionless. Under the sampling period T_s = 0.1 s, the discretized model (80) linearized at the upright position θ = 0 is obtained. Because A has eigenvalues at 1, 1, 1.5569, and 0.6423, the above discrete-time system is unstable. Without loss of generality, it is assumed that the above system has a constant delay d = 2, with a corresponding matrix A_d. Based on the proposed method, a kind of partially delay-dependent controller is designed, whose transition probabilities are assumed to be α = 0.4 and β = 0.7, respectively. By Theorem 4, the control gains are computed accordingly, where ᾱ = 0.8 with ε₁ = 0.1 and β̄ = 0.9 with ε₂ = 0.1. However, when such probabilities are totally unknown, it is found from Theorem 10 that there is no solution for controller (2). The main reasons are that the effects of such probabilities are removed in the controller design and that the original open-loop system (80) is unstable. This simulation also demonstrates the utility of the proposed methods.
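The discretization step can be sketched as follows. The model below uses a standard small-angle cart-pendulum linearization with M = 0.5 kg, m = 1 kg, l = 1 m, g = 9.8 m/s², and a forward-Euler approximation of the zero-order hold; these modeling choices are our assumptions, so the resulting matrices and eigenvalues differ from the paper's (which reports 1, 1, 1.5569, 0.6423), but the qualitative conclusion that the discretized open-loop model is unstable is the same:

```python
import numpy as np

# Assumed parameters: cart mass M, pendulum mass m, length l, gravity g,
# sampling period Ts.  Frictionless surface, point-mass pendulum.
M, m, l, g, Ts = 0.5, 1.0, 1.0, 9.8, 0.1

# Small-angle linearization at the upright position, states
# [cart position, cart velocity, pendulum angle, angular velocity]:
#   z''     = -(m*g/M) * theta + F/M
#   theta'' = ((M+m)*g/(M*l)) * theta - F/(M*l)
Ac = np.array([
    [0.0, 1.0, 0.0,                   0.0],
    [0.0, 0.0, -m * g / M,            0.0],
    [0.0, 0.0, 0.0,                   1.0],
    [0.0, 0.0, (M + m) * g / (M * l), 0.0],
])
Bc = np.array([[0.0], [1.0 / M], [0.0], [-1.0 / (M * l)]])

# Forward-Euler approximation of the zero-order hold:
A = np.eye(4) + Ac * Ts
B = Bc * Ts

rho = max(abs(np.linalg.eigvals(A)))
print(rho)   # spectral radius > 1: the discretized open-loop model is unstable
```

An exact zero-order-hold discretization (matrix exponential) would give slightly different numbers, but the open-loop instability, and hence the need for feedback, is unchanged.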

Conclusions
In this paper, we have studied the stabilization problem for a kind of discrete-time delayed system by exploiting partially delay-dependent controllers. The designed controller is composed of state feedback controllers without and with delay together, whose probability distribution is described by a Markov process with two modes. Several sufficient conditions for the existence of the designed controller are given in LMI form, where the corresponding probabilities are contained and play important roles. Since the given results are LMIs, some general cases in which such probabilities are uncertain or unknown have been considered, respectively, and further applications to the controller with partially observable states have also been demonstrated. The effectiveness and superiority of the proposed methods have been shown by numerical examples.

Figure 4: The simulation of the closed-loop system.