New Results on Stability and Stabilization of Markovian Jump Systems with Partly Known Transition Probabilities

This paper investigates the stability and stabilization of Markovian jump linear systems with partial information on the transition probabilities. A new stability criterion is obtained for these systems. Compared with existing results, the proposed criterion involves fewer decision variables yet, as proved theoretically, introduces no additional conservatism. Furthermore, a sufficient condition for state-feedback controller design is derived in terms of linear matrix inequalities (LMIs). Finally, numerical examples are given to illustrate the effectiveness of the proposed method.


Introduction
Over the past few decades, Markov jump systems (MJSs) have drawn much attention from researchers throughout the world. This is due to their important roles in many practical systems: MJSs are well suited to model plants whose structures are subject to random abrupt changes, which may result from random component failures, abrupt environment changes, disturbances, changes in the interconnections of subsystems, and so forth [1].
Since the transition probabilities of the jumping process determine the behavior of an MJS, most investigations of MJSs assume that the information on the transition probabilities is completely known (see, e.g., [2-5]). However, in many cases the transition probabilities of MJSs are not exactly known. Whether in theory or in practice, it is therefore necessary to consider more general jump systems with only partial information on the transition probabilities. Recently, [6-9] considered general MJSs with partly unknown transition probabilities. In these papers, however, when the terms containing unknown transition probabilities were separated from the others, fixed connection weighting matrices were introduced, which may lead to conservatism. More recently, [10] achieved an excellent reduction of this conservatism. The basic idea is to introduce free-connection weighting matrices in place of the fixed connection weighting matrices. However, this means that the method of [10] has to increase the number of decision variables. As shown in [11], more decision variables imply a larger numerical burden. Therefore, developing new methods that introduce no additional variables while not increasing conservatism is a valuable task, which motivates the present study.
In this paper, we are concerned with the stability and stabilization of MJSs with partly unknown transition probabilities. By fully utilizing the relationships among the transition rates of the various subsystems, we obtain a new stability criterion. The proposed criterion avoids introducing any connection weighting matrix, yet, as proved theoretically, it is no more conservative than that of [10]. More importantly, because the proposed stability criterion needs no slack matrices, the relationships among the Lyapunov matrices are highlighted. This helps us understand the effect of the unknown transition probabilities on stability. Then, based on the proposed stability criterion, a condition for controller design is derived in terms of LMIs. Finally, numerical examples are given to illustrate the effectiveness of the proposed method.

Notation
In this paper, R^n and R^{n×m} denote the n-dimensional Euclidean space and the set of all n × m real matrices, respectively. Z_+ represents the set of positive integers. The notation P > 0 (P ≥ 0) means that P is a real symmetric positive definite (positive semidefinite) matrix. For the triple (Ω, F, P), Ω represents the sample space, F is the σ-algebra of subsets of the sample space, and P is the probability measure on F. E{·} stands for the mathematical expectation. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.

Problem Formulation
Consider the following stochastic system with Markovian jump parameters:

ẋ(t) = A(r_t)x(t) + B(r_t)u(t),    (2.1)

where x(t) ∈ R^n is the state vector, u(t) is the control input, and {r_t, t ≥ 0} is a right-continuous Markov process taking values in a finite set S = {1, 2, . . ., N} with transition probabilities

Pr{r_{t+Δ} = j | r_t = i} = π_ij Δ + o(Δ) if j ≠ i, and 1 + π_ii Δ + o(Δ) if j = i,    (2.2)

where Δ > 0, lim_{Δ→0} o(Δ)/Δ = 0, π_ij ≥ 0, for i ≠ j, is the transition rate from mode i at time t to mode j at time t + Δ, and π_ii = −Σ_{j=1, j≠i}^N π_ij. A(r_t) and B(r_t) are known matrix functions of the Markov process.
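As a minimal numerical illustration of the dynamics (2.1) together with the first-order approximation (2.2), the following Python sketch simulates a two-mode jump linear system with u(t) ≡ 0. The matrices A_1, A_2 and the rate matrix are hypothetical placeholders, not data from this paper.

```python
import numpy as np

# Hypothetical two-mode example (A1, A2, Lam are illustrative, not from the paper).
A = [np.array([[-1.0, 0.5], [0.0, -2.0]]),   # mode 1
     np.array([[-0.5, 1.0], [0.3, -1.5]])]   # mode 2
Lam = np.array([[-0.8, 0.8],
                [0.6, -0.6]])                # transition rate matrix (rows sum to 0)

def simulate(x0, mode0, T=10.0, dt=1e-3, rng=np.random.default_rng(0)):
    """Euler simulation of dx/dt = A(r_t) x, with r_t switched per (2.2)."""
    x, mode, t = np.array(x0, float), mode0, 0.0
    traj = [(t, mode, x.copy())]
    while t < T:
        # Switch to mode j with probability pi_ij * dt (first-order approximation).
        rates = Lam[mode].copy()
        rates[mode] = 0.0
        if rng.random() < rates.sum() * dt:
            mode = rng.choice(len(rates), p=rates / rates.sum())
        x = x + A[mode] @ x * dt
        t += dt
        traj.append((t, mode, x.copy()))
    return traj

traj = simulate([1.0, -1.0], mode0=0)
print("final state:", traj[-1][2])
```

Since both illustrative modes are Hurwitz here, sample trajectories decay; for unstable modes the same sketch reproduces open-loop divergence.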
Since, for continuous-time MJSs, the transition probabilities depend on the transition rates, the transition rates of the jumping process are considered to be only partly accessible in this paper. For instance, the transition rate matrix Λ for system (2.1) with N operation modes may be expressed as

Λ = | π_11  ?     π_13  ⋯  ?    |
    | ?     ?     π_23  ⋯  π_2N |
    | ⋮     ⋮     ⋮     ⋱  ⋮    |
    | π_N1  ?     ?     ⋯  π_NN |,    (2.3)

where "?" represents an unknown transition rate.
For notational clarity, for all i ∈ S, we denote S = S_k^i ∪ S_uk^i with

S_k^i := {j : π_ij is known},  S_uk^i := {j : π_ij is unknown}.

Moreover, if S_k^i ≠ ∅, it is further described as S_k^i = {k_1^i, k_2^i, . . ., k_m^i}, where m is a nonnegative integer with 1 ≤ m ≤ N and k_j^i ∈ Z_+, 1 ≤ k_j^i ≤ N, j = 1, 2, . . ., m, represents the jth known element in the ith row of the transition rate matrix Λ. For the underlying systems, the following definitions will be adopted in the rest of this paper; for more details, refer to [2].
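The partition S = S_k^i ∪ S_uk^i can be computed mechanically once the unknown rates "?" are encoded, for example, as NaN. The following sketch uses an illustrative 4-mode matrix whose entries are assumptions, not the paper's example.

```python
import numpy as np

# Illustrative partly known rate matrix; np.nan plays the role of "?" (assumed encoding).
Lam = np.array([[-1.3, 0.2, np.nan, 1.1],
                [np.nan, np.nan, 0.3, np.nan],
                [0.8, np.nan, -2.0, np.nan],
                [0.5, np.nan, np.nan, np.nan]])

def index_sets(Lam):
    """Return, for each row i, the known-index set S_k^i and unknown-index set S_uk^i."""
    S_k, S_uk = [], []
    for i in range(Lam.shape[0]):
        S_k.append([j for j in range(Lam.shape[1]) if not np.isnan(Lam[i, j])])
        S_uk.append([j for j in range(Lam.shape[1]) if np.isnan(Lam[i, j])])
    return S_k, S_uk

S_k, S_uk = index_sets(Lam)
print(S_k[0], S_uk[0])   # known and unknown column indices of the first row
```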
Definition 2.1. The system (2.1) with u(t) ≡ 0 is said to be stochastically stable if

E{ ∫_0^∞ ||x(t)||^2 dt | x_0, r_0 } < ∞    (2.5)

holds for every initial condition x_0 ∈ R^n and r_0 ∈ S.
To this end, we introduce the following result on the stability analysis of system (2.1).

Lemma 2.2 (see [2]). The system (2.1) with u(t) ≡ 0 is stochastically stable if and only if there exists a set of symmetric positive-definite matrices P_i, i ∈ S, satisfying

A_i^T P_i + P_i A_i + Σ_{j=1}^N π_ij P_j < 0,  i ∈ S.    (2.6)

Remark 2.3. Since the unknown transition rates may take infinitely many admissible values, the inequalities of Lemma 2.2 cannot be used directly to test the stability of the system.
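For a fully known rate matrix, the coupled inequalities (2.6) can be checked numerically for candidate Lyapunov matrices by an eigenvalue test. The sketch below uses hypothetical two-mode data (not from the paper); with identical P_i, the coupling term vanishes because the rows of the rate matrix sum to zero.

```python
import numpy as np

# Hypothetical two-mode data (illustrative, not from the paper).
A = [np.array([[-1.0, 0.5], [0.0, -2.0]]),
     np.array([[-0.5, 1.0], [0.3, -1.5]])]
Lam = np.array([[-0.8, 0.8],
                [0.6, -0.6]])

def coupled_lyapunov_holds(P, A, Lam, tol=1e-9):
    """Check the coupled inequalities (2.6): A_i^T P_i + P_i A_i + sum_j pi_ij P_j < 0."""
    for i in range(len(A)):
        M = A[i].T @ P[i] + P[i] @ A[i] + sum(Lam[i, j] * P[j] for j in range(len(A)))
        if np.max(np.linalg.eigvalsh((M + M.T) / 2)) >= -tol:
            return False
    return True

# Candidate P_i = I for both modes: the coupling term cancels, so only
# A_i + A_i^T < 0 must hold, which is true for these illustrative matrices.
P = [np.eye(2), np.eye(2)]
print(coupled_lyapunov_holds(P, A, Lam))
```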

Stochastic Stability Analysis
In this section, a stochastic stability criterion for MJSs is given without any additional weighting matrix.
Theorem 3.1. The system (2.1) with a partly unknown transition rate matrix (2.3) and u(t) ≡ 0 is stochastically stable if there exist matrices P_i > 0 such that LMIs (3.1)-(3.3) are feasible for i = 1, 2, . . ., N.

Proof. By Lemma 2.2, the system (2.1) with u(t) ≡ 0 is stochastically stable if (2.6) holds. We now prove that (3.1)-(3.3) guarantee (2.6) by considering the following two cases.

Case I (i ∉ S_k^i). In this case, (2.6) can be rewritten as (3.4). Note that in this case Σ_{j ∈ S_uk^i, j ≠ i} π_ij = −π_ii − π_k^i and π_ij ≥ 0 for j ∈ S_uk^i, j ≠ i; then from (3.2), we have (3.5). Therefore, if i ∉ S_k^i, inequalities (3.1) and (3.2) imply that (2.6) holds.

The condition of [10] states that there exist matrices P_i > 0 and symmetric matrices W_i such that

A_i^T P_i + P_i A_i + Σ_{j ∈ S_k^i} π_ij (P_j − W_i) < 0,    (3.8)
P_j − W_i ≤ 0,  j ∈ S_uk^i, j ≠ i,    (3.9)
P_j − W_i ≥ 0,  j ∈ S_uk^i, j = i.    (3.10)
Now we have the following conclusion.

Proof. If i ∉ S_k^i, then (3.9) and (3.10) imply that (3.2) and the following inequality hold:

If i ∈ S_k^i, then (3.9) and (3.10) guarantee that P_j − W_i ≤ 0 for all j ∈ S_uk^i.

Mathematical Problems in Engineering
In addition, under this circumstance, we have (3.12), and then (3.13) follows. From (3.12), (3.13), and (3.8), we obtain that (3.3) holds. The proof is completed.
Remark 3.4. The stability condition in [10] and that in Theorem 3.1 are derived via different techniques. Theorem 3.3 proves that the former can be simplified to the latter without increasing any conservatism. More importantly, because Theorem 3.1 of this paper does not involve any slack matrix, the relationships among the Lyapunov matrices are highlighted. Therefore, it is clearer how the unknown transition probabilities affect the stability.

State-Feedback Stabilization
In this section, the stabilization problem of system (2.1) with control input u(t) is considered.
The mode-dependent controller of the following form is designed:

u(t) = K(r_t)x(t),    (4.1)

where K(r_t), for all r_t ∈ S, are the controller gains to be determined. In the following, for r_t = i ∈ S, we write K(r_t) = K_i. Using (4.1), the system (2.1) is represented as

ẋ(t) = (A(r_t) + B(r_t)K(r_t))x(t).    (4.2)

The following theorem is proposed to design a mode-dependent stabilizing controller of the form (4.1).

Proof. It is clear that the system (4.2) is stable if the following conditions are satisfied: if i ∉ S_k^i, inequalities (4.8)-(4.10) hold.
Pre- and postmultiply both sides of (4.8)-(4.10) by P_i^{-1}, respectively, and introduce the new variables Q_i := P_i^{-1} and Y_i := K_i Q_i. Then inequalities (4.8)-(4.10) are equivalent to matrix inequalities (4.12)-(4.14), respectively.
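Under the change of variables Q_i = P_i^{-1}, Y_i = K_i Q_i, the controller gains are recovered from a solver's output by one matrix inversion per mode. A sketch with placeholder solver output (the numerical values are illustrative assumptions, not results of this paper):

```python
import numpy as np

# Illustrative LMI-solver output: Q_i symmetric positive definite, Y_i of gain shape.
Q = [np.array([[2.0, 0.3], [0.3, 1.5]]),
     np.array([[1.8, -0.2], [-0.2, 2.2]])]
Y = [np.array([[0.4, -1.1]]),
     np.array([[-0.7, 0.5]])]

# Change of variables Q_i = P_i^{-1}, Y_i = K_i Q_i, hence K_i = Y_i Q_i^{-1}.
K = [Y[i] @ np.linalg.inv(Q[i]) for i in range(2)]
for i, Ki in enumerate(K):
    print(f"K_{i+1} =", Ki)
```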

By applying the Schur complement, inequalities (4.12)-(4.14) are equivalent to LMIs (4.3)-(4.5), respectively. Therefore, if LMIs (4.3)-(4.5) hold, the closed-loop system (4.2) is stochastically stable according to Theorem 3.1. Then, system (2.1) can be stabilized with the state feedback controller (4.1), and the desired controller gains are given by (4.7). The proof is completed.
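The Schur complement step used here can be illustrated numerically: for a symmetric block matrix M = [A B; B^T C] with C < 0, M < 0 holds if and only if A − B C^{-1} B^T < 0. A small self-contained check on random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)

def is_neg_def(M):
    """Negative definiteness via the symmetric eigenvalue routine."""
    return np.max(np.linalg.eigvalsh((M + M.T) / 2)) < 0

# Random symmetric blocks with C < 0 (illustrative data only).
A = -3.0 * np.eye(2) + 0.1 * rng.standard_normal((2, 2)); A = (A + A.T) / 2
B = rng.standard_normal((2, 2))
C = -4.0 * np.eye(2)

M = np.block([[A, B], [B.T, C]])
schur = A - B @ np.linalg.inv(C) @ B.T

# Schur complement lemma: since C < 0, M < 0 iff A - B C^{-1} B^T < 0.
print(is_neg_def(M), is_neg_def(schur))
```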
Remark 4.2. The number of variables involved in Theorem 4.1 of this paper is also Nn(n + 1)/2 less than that in the corresponding result of [10]. Furthermore, it can be seen that no conservatism is introduced when deriving Theorem 4.1 from Theorem 3.1. Therefore, the stabilization method presented in Theorem 4.1 is likewise no more conservative than that of [10].

Numerical Example
In this section, an example is provided to illustrate the effectiveness of our results.
Consider the following MJS, which is borrowed from [10] with small modifications:

5.1
The partly known transition rate matrix Λ is considered as in (5.2), where the parameter a in Λ can take different values for extensive comparison purposes.
We consider the stabilization of this system for different values of a using the different approaches. For precision of comparison, we let a increase from 0 in small increments of 0.01. Using the LMI toolbox in MATLAB, both the LMIs in Theorem 5 of [10] and those in Theorem 4.1 of this paper are feasible for all a = 0, 0.01, . . ., 1.64 and become infeasible when a increases to 1.65. It can be seen that, for this example, the stabilization method of this paper is no more conservative than that of [10]. We now further show the effectiveness of the stabilization method of this paper by simulation. For example, when a = 1.64, the controller gains obtained by our method are

5.3
Figure 1 shows the state response curves over 1000 random samplings with initial condition x_0 = [1, −1]^T when u(t) ≡ 0. In each random sampling, the transition rate matrix is randomly generated but consistent with the partly known transition rate matrix Λ in (5.2). Figure 1 shows that the open-loop system is unstable.
Applying the controllers in (5.3), the state responses of the closed-loop system over 1000 random samplings under the same initial condition x_0 = [1, −1]^T are shown in Figure 2. In this case, the transition rate matrix is again randomly generated but consistent with Λ in (5.2). Figure 2 shows that the stabilizing controller effectively guarantees the reliable operation of the system.
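The random generation of admissible rate matrices consistent with a partly known Λ can be sketched as follows. This simple version assumes each row has either all entries known or an unknown diagonal (NaN encodes "?"), and the 2-mode matrix is a hypothetical placeholder, not (5.2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical partly known 2-mode rate matrix: second row fully unknown.
Lam_partial = np.array([[-0.9, 0.9],
                        [np.nan, np.nan]])

def sample_rates(Lam_partial, max_rate=2.0):
    """Fill unknown entries admissibly: off-diagonals >= 0, each row sums to 0."""
    Lam = Lam_partial.copy()
    N = Lam.shape[0]
    for i in range(N):
        # Draw nonnegative values for unknown off-diagonal rates.
        for j in range(N):
            if j != i and np.isnan(Lam[i, j]):
                Lam[i, j] = rng.uniform(0.0, max_rate)
        # Unknown diagonal: fixed by the zero-row-sum constraint.
        if np.isnan(Lam[i, i]):
            Lam[i, i] = -np.sum(Lam[i, [j for j in range(N) if j != i]])
    return Lam

Lam = sample_rates(Lam_partial)
print(Lam)
```

Each Monte Carlo run draws one such admissible matrix and then simulates the closed-loop trajectory, mirroring the sampling procedure described above.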

Conclusions
This paper has considered the stability and stabilization of a class of continuous-time MJSs with partly unknown transition rates. A new stability criterion has been proposed. The merit of the proposed criterion is that it has fewer decision variables than those in the literature to date without increasing conservatism. Then, a mode-dependent state feedback controller design method has been proposed, which possesses the same merit as the stability criterion. Numerical examples have been given to illustrate the effectiveness of the results in this paper.

and k_l^i = i. Moreover, if (4.3)-(4.5) are true, the stabilizing controller gains from (4.1) are given by

K_i = Y_i Q_i^{-1}.    (4.7)

Figure 1: State response of the open-loop system with 1000 random samplings.

Figure 2: State response of the closed-loop system with 1000 random samplings.
Therefore, if LMIs (3.1)-(3.3) hold, we conclude that system (2.1) is stochastically stable according to Lemma 2.2. The proof is completed.

Remark 3.2. Theorem 3.1 proposed in this paper does not introduce any free variable. It involves Nn(n + 1)/2 decision variables, while Theorem 3.3 in [10] involves Nn(n + 1) variables; namely, the number of variables in this paper is only half that of [10]. Generally, reducing the number of decision variables easily results in increased conservatism of stability criteria. However, Theorem 3.1 of this paper achieves fewer variables without increasing conservatism. To show this, we rewrite the condition of [10] as follows.
The closed-loop system (4.2) with a partly unknown transition rate matrix (2.3) is stochastically stable if there exist matrices Q_i > 0 and Y_i, i = 1, 2, . . ., N, such that the following LMIs (4.3)-(4.5) are feasible for i = 1, 2, . . ., N.