OPTIMAL GUARANTEED COST FILTERING FOR MARKOVIAN JUMP DISCRETE-TIME SYSTEMS

This paper develops a result on the design of a robust steady-state estimator for a class of uncertain discrete-time systems with Markovian jump parameters. The result extends the steady-state Kalman filter to systems with norm-bounded time-varying uncertainties in the state and measurement equations as well as jumping parameters. We derive a linear state estimator such that the estimation-error covariance is guaranteed to lie within a certain bound for all admissible uncertainties. The solution is given in terms of a family of linear matrix inequalities (LMIs). A numerical example is included to illustrate the theory.


Introduction
The problem of optimal (state and/or parameter) estimation is perhaps the oldest problem in systems theory, particularly for dynamical systems subject to stationary Gaussian input and measurement noise processes [1]. For classes of continuous-time and discrete-time systems with uncertain parameters, the robust state estimation problem arises naturally, and several techniques have been developed for it (see [3, 17, 20, 22, 26, 30, 31] and the references cited therein).
Recently, dynamical systems with Markovian jumping parameters have received increasing interest from both the control and filtering points of view. For some representative prior work on this general topic, we refer the reader to [7, 8, 9, 10, 11, 23, 24, 25] and the references therein. The filtering problem for systems with jumping parameters has been solved in [11], and a discrete-time filtering problem for hybrid systems has been studied in [12]. In these two papers, the state process is observed in white noise and the random jump process is observed by a point process. The so-called H∞ filter for jump systems has been designed in [9] via a linear matrix inequality (LMI) approach, which provides mean-square stable error dynamics and a prescribed bound on the ℓ2-induced gain from the noise signals to the estimation error. The problem of robust Kalman filtering for uncertain linear continuous-time systems with Markovian jump parameters has been studied in [24], in which a state estimator is designed such that the covariance of the estimation error is guaranteed to be within a certain bound for all admissible uncertainties. However, to the best of the authors' knowledge, the corresponding problem for uncertain discrete-time linear systems with Markovian jump parameters has not yet been fully investigated.
In this paper, the problem of robust state estimation for linear discrete-time systems with both Markovian jump parameters and norm-bounded parametric uncertainties is investigated. The state estimator is designed such that the estimation-error covariance is guaranteed to be upper bounded for all admissible uncertainties. Our study shows that this problem is solvable if a set of LMIs admits a solution. Furthermore, it is shown that the results obtained in this paper encompass the available results in the literature. A numerical example is included to demonstrate the potential of the proposed techniques.
Notations and facts. In the sequel, we denote by W^t, W^{-1}, and λ(W) the transpose, the inverse, and the eigenvalues of any square matrix W, respectively. Let λ_m(W) and λ_M(W) be the minimum and maximum eigenvalues of the matrix W. We use W > 0 (≥ 0, < 0, ≤ 0) to denote a symmetric positive-definite (positive-semidefinite, negative-definite, negative-semidefinite) matrix and I to denote the n × n identity matrix. E[·] stands for mathematical expectation, tr(·) denotes the matrix trace, and ℓ2[0,∞) is the space of square-summable vector sequences, ℓ2[0,∞) = {v = (v_k) : Σ_{k=0}^∞ v_k^t v_k < ∞}. The symbol • is used in some matrix expressions to induce a symmetric structure; that is, given matrices L = L^t and R = R^t of appropriate dimensions, [L N; • R] stands for [L N; N^t R]. Also, the notation (Ω, Ᏺ, P) stands for a given probability space, where Ω is the sample space, Ᏺ is the σ-algebra of events, and P is the probability measure defined on Ᏺ.
Sometimes the arguments of a function will be omitted in the analysis when no confusion can arise.
Fact 1. For any real matrices Σ1, Σ2, and Σ3 with appropriate dimensions such that Σ3^t Σ3 ≤ I, and for any scalar α > 0, it follows that

Σ1 Σ3 Σ2 + Σ2^t Σ3^t Σ1^t ≤ α Σ1 Σ1^t + α^{-1} Σ2^t Σ2.

Fact 2. Let Σ1, Σ2, Σ3, and 0 < R = R^t be real constant matrices of compatible dimensions and let H(t) be a real matrix function satisfying H^t(t) H(t) ≤ I. Then, for any ρ > 0 satisfying ρ Σ2^t Σ2 < R, the following matrix inequality holds:

(Σ1 + Σ3 H(t) Σ2) R^{-1} (Σ1 + Σ3 H(t) Σ2)^t ≤ Σ1 (R − ρ Σ2^t Σ2)^{-1} Σ1^t + ρ^{-1} Σ3 Σ3^t.

2. Discrete-time jumping system

2.1. Model description. We consider the following class of discrete-time systems with Markovian jump parameters, defined on a given probability space (Ω, Ᏺ, P):

x_{k+1} = [A(η_k) + ΔA(k, η_k)] x_k + w_k,    (2.1a)
z_k = [C(η_k) + ΔC(k, η_k)] x_k + v_k,    (2.1b)

where x_k ∈ R^n is the system state, z_k ∈ R^p is the system measurement, and w_k ∈ R^n and v_k ∈ R^p are zero-mean Gaussian white-noise processes with joint covariance

E{ [w_k^t  v_k^t]^t [w_j^t  v_j^t] } = diag(W, V) δ_{kj},  W ≥ 0,  V > 0.    (2.2)

The initial condition x_o is assumed to be a zero-mean Gaussian random variable independent of the white-noise processes w_k and v_k. The matrices A(η_k) ∈ R^{n×n} and C(η_k) ∈ R^{p×n} are known real-valued matrices. They are functions of the random process {η_k}, a discrete-time, discrete-state Markov chain taking values in a finite set 𝒮 = {1, 2, ..., s} with transition probability from mode i at time k to mode j at time k + 1 given by

p_ij = Pr(η_{k+1} = j | η_k = i),

with p_ij ≥ 0 for i, j ∈ 𝒮 and Σ_{j=1}^s p_ij = 1. The set 𝒮 comprises the different operation modes of system (2.1), and for each value η_k = i, i ∈ 𝒮, we denote the matrices associated with mode i by

A(η_k) = A_i,  C(η_k) = C_i,

where A_i and C_i are known constant matrices describing the nominal system. For η_k = i ∈ 𝒮, ΔA(k, η_k) = ΔA_i(k) and ΔC(k, η_k) = ΔC_i(k) are unknown matrices which represent time-varying uncertainties and are assumed to belong to bounded compact sets of the form

[ΔA_i^t(k)  ΔC_i^t(k)]^t = [H_{1i}^t  H_{2i}^t]^t Δ_i(k) E_i,

where H_{1i}, H_{2i}, and E_i are known real constant matrices characterizing the way the uncertain parameters Δ(k, η_k) = Δ_i(k) affect the nominal matrices A_i and C_i, and Δ_i(k), i ∈ 𝒮, is an unknown time-varying matrix function satisfying

Δ_i^t(k) Δ_i(k) ≤ I.    (2.1c)
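As a companion to the model description, the following sketch simulates a system of type (2.1) with hypothetical two-mode data and the uncertainty set to zero; it is meant only to make the mode-switching mechanism concrete, and all numerical values are illustrative assumptions, not from the paper.

```python
import numpy as np

# Illustrative two-mode jump linear system of type (2.1):
# x_{k+1} = A(eta_k) x_k + w_k, with eta_k a Markov chain on {0, 1}.
rng = np.random.default_rng(0)

A = [np.array([[0.5, 0.1], [0.0, 0.4]]),   # nominal A_1 (mode 1, assumed)
     np.array([[0.3, 0.0], [0.2, 0.6]])]   # nominal A_2 (mode 2, assumed)
P = np.array([[0.9, 0.1],                  # transition matrix (p_ij);
              [0.2, 0.8]])                 # each row sums to one

def simulate(T=200, x0=np.zeros(2)):
    """Run the jump system for T steps; return states and mode path."""
    x, eta = x0.copy(), 0
    xs, modes = [x.copy()], [eta]
    for _ in range(T):
        w = 0.01 * rng.standard_normal(2)   # process noise w_k
        x = A[eta] @ x + w                  # mode-dependent state update
        eta = rng.choice(2, p=P[eta])       # Markov jump to eta_{k+1}
        xs.append(x.copy())
        modes.append(eta)
    return np.array(xs), np.array(modes)

xs, modes = simulate()
```

The mode path `modes` plays the role of {η_k}; in the paper's setting the filter is allowed to observe this path (see Remark 3.2).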
Remark 2.1. Note that system (2.1) can be used to represent many important physical systems subject to random failures and structural changes, such as electric power systems [28], control systems of a solar thermal central receiver [27], communication systems [2], aircraft flight control [18], control of nuclear power plants [21], and manufacturing systems [4, 5].

Stochastic quadratic stability.
First, we recall the following definition.
Definition 2.2. System (2.1) with v_k ≡ 0, w_k ≡ 0, and Δ_i(k) ≡ 0 is said to be stochastically stable (SS) if, for every finite initial state φ ∈ R^n and initial mode η_o ∈ 𝒮, there exists a finite number 𝒩(φ, η_o) > 0 such that

lim_{N→∞} E{ Σ_{k=0}^N ‖x_k‖² | x_o = φ, η_o } ≤ 𝒩(φ, η_o).    (2.5)

Remark 2.3. In light of [6, 13], condition (2.5) is equivalent to mean square stability (MSS) in the sense that

lim_{k→∞} E{ ‖x_k‖² | x_o = φ, η_o } = 0,

and it in turn implies almost sure stability (ASS) in the sense that, for every finite initial state φ ∈ R^n and initial mode η_o ∈ 𝒮, we have lim_{k→∞} x_k = 0 with probability 1.
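The MSS property admits a simple computational check. The sketch below (with hypothetical system data) tests MSS of the input-free nominal system via the classical second-moment spectral-radius criterion for discrete-time Markovian jump linear systems, ρ((P^t ⊗ I) · blkdiag(A_i ⊗ A_i)) < 1; this is the standard criterion from the jump-systems literature, not the paper's LMI test.

```python
import numpy as np

def mss_spectral_radius(A_list, P):
    """Second-moment spectral radius for the input-free jump system
    x_{k+1} = A(eta_k) x_k: stacking q = [vec(Q_1); ...; vec(Q_s)] of the
    mode-conditioned second moments gives q(k+1) = M q(k) with block
    M[j, i] = p_ij (A_i kron A_i); the system is MSS iff rho(M) < 1."""
    s = len(A_list)
    n2 = A_list[0].shape[0] ** 2
    blk = np.zeros((s * n2, s * n2))
    for i, Ai in enumerate(A_list):
        blk[i * n2:(i + 1) * n2, i * n2:(i + 1) * n2] = np.kron(Ai, Ai)
    M = np.kron(P.T, np.eye(n2)) @ blk
    return max(abs(np.linalg.eigvals(M)))

# Hypothetical two-mode data (both modes stable here).
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.3, 0.0], [0.2, 0.6]])]
P = np.array([[0.9, 0.1], [0.2, 0.8]])
rho = mss_spectral_radius(A, P)   # < 1 for this data, so the system is MSS
```

Note that MSS is a property of the pair ({A_i}, P) jointly: a chain that dwells long enough in an unstable mode can destroy MSS even when other modes are stable.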
Lemma 2.4. System (2.1) with v_k ≡ 0, w_k ≡ 0, and Δ_i(k) ≡ 0 is SS if and only if there exists a set of matrices {W_i = W_i^t > 0}, i ∈ 𝒮, satisfying the following set of coupled LMIs:

A_i^t W̄_i A_i − W_i < 0,  W̄_i := Σ_{j=1}^s p_ij W_j,  i ∈ 𝒮.    (2.8)

Proof. Let the modes at times k and k + 1 be η_k = i and η_{k+1} = j ∈ 𝒮. Take the stochastic Lyapunov function candidate V(·) to be (see [14])

V(x_k, η_k = i) = x_k^t W_i x_k.    (2.9)

Thus, we have from (2.9), together with (2.8),

E{V(x_{k+1}, η_{k+1}) | x_k, η_k = i} − V(x_k, i) = x_k^t (A_i^t W̄_i A_i − W_i) x_k =: −x_k^t Q_i x_k,    (2.10)

which, by (2.8), defines matrices Q_i > 0.

M. S. Mahmoud and P. Shi 37

With Q_i > 0, we have from (2.10), for x_k ≠ 0,

E{V(x_{k+1}, η_{k+1}) | x_k, η_k = i} ≤ β V(x_k, i),    (2.11)

where β := 1 − min_{i∈𝒮} [λ_m(Q_i)/λ_M(W_i)]. In view of (2.11), it is readily evident that 0 < β < 1, and hence, iterating the bound and summing over k,

E{ Σ_{k=0}^N V(x_k, η_k) } ≤ (1 − β^{N+1})/(1 − β) · V(x_o, η_o),

and, using the Rayleigh quotient, min_{i∈𝒮} λ_m(W_i) ‖x_k‖² ≤ V(x_k, η_k), so that E{Σ_{k=0}^N ‖x_k‖²} remains bounded as N → ∞, which means that system (2.1) is SS; thus the sufficiency part is proved. The proof of necessity can be found in [13].
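As a numerical companion to Lemma 2.4, solvability of the strict coupled LMIs (2.8) can be checked on a stable example by iterating the associated coupled Lyapunov equations W_i = A_i^t (Σ_j p_ij W_j) A_i + Q_i to a fixed point. The sketch below uses hypothetical data; in practice an LMI solver would be used directly.

```python
import numpy as np

def coupled_lyapunov(A_list, P, Q_list, iters=500):
    """Fixed-point iteration for the coupled Lyapunov equations
        W_i = A_i^t (sum_j p_ij W_j) A_i + Q_i,  i = 1..s,
    whose solvability with W_i > 0 corresponds to the strict coupled
    LMIs (2.8) of Lemma 2.4. Converges when the jump system is SS."""
    s, n = len(A_list), A_list[0].shape[0]
    W = [np.zeros((n, n)) for _ in range(s)]
    for _ in range(iters):
        W = [A_list[i].T @ sum(P[i, j] * W[j] for j in range(s)) @ A_list[i]
             + Q_list[i] for i in range(s)]
    return W

# Hypothetical two-mode data; the resulting W_i satisfy
# A_i^t (sum_j p_ij W_j) A_i - W_i = -Q_i < 0, i.e., (2.8) holds.
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.3, 0.0], [0.2, 0.6]])]
P = np.array([[0.9, 0.1], [0.2, 0.8]])
Q = [np.eye(2), np.eye(2)]
W = coupled_lyapunov(A, P, Q)
```

If the jump system is not SS, the iteration diverges, which mirrors the "only if" direction of the lemma.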
Remark 2.5. Lemma 2.4 establishes an LMI stability test for the input-free nominal jump system. It is easy to show that (2.8) is equivalent to the existence of a set of matrices {Z_i > 0, i ∈ 𝒮}, with Z_i = W_i^{-1}, satisfying the dual set of coupled inequalities obtained via Schur complements.

In line with the results of [15] for linear systems, we introduce the following definition of stochastic quadratic stability.

Definition 2.6. System (2.1) with w_k ≡ 0 is said to be stochastically quadratically stable if there exists a set of matrices {W_i = W_i^t > 0, i ∈ 𝒮} such that

[A_i + ΔA_i(k)]^t W̄_i [A_i + ΔA_i(k)] − W_i < 0,  W̄_i = Σ_{j=1}^s p_ij W_j,

for all admissible parameter uncertainties ΔA_i(k), i ∈ 𝒮, satisfying (2.1c). We now show that, for system (2.1), stochastic quadratic stability implies stochastic stability.
Theorem 2.7. System (2.1) with w_k ≡ 0 is SS for all admissible parameter uncertainties ΔA_i(k), i ∈ 𝒮, if it is stochastically quadratically stable.

Proof. Since system (2.1) with w_k ≡ 0 is stochastically quadratically stable, by Definition 2.6 there exists a set of matrices {0 < W_i = W_i^t, i ∈ 𝒮} for which the coupled inequalities of Lemma 2.4 hold with A_i replaced by A_i + ΔA_i(k), for every admissible uncertainty ΔA_i(k), i ∈ 𝒮; hence, by Lemma 2.4, system (2.1a) is SS.
By direct application of Fact 2 and rearranging terms, we have the following corollary.
Corollary 2.8. System (2.1) with w_k ≡ 0 is SS for all admissible parameter uncertainties ΔA_i(k), i ∈ 𝒮, if there exist a set of matrices {W_i = W_i^t > 0}, i ∈ 𝒮, and a set of scalars ρ_i > 0, i ∈ 𝒮, satisfying the corresponding set of coupled LMIs obtained from Definition 2.6 via Fact 2.

We now consider a mode-dependent linear state estimator of the form

x̂_{k+1} = G(η_k) x̂_k + K(η_k) z_k,  η_k = i ∈ 𝒮,    (2.23)

where x̂_k ∈ R^n is the state estimate and x̂_o is the estimator initial condition, assumed to be a zero-mean Gaussian random vector. The matrices G_i and K_i, i ∈ 𝒮, are the estimator gains to be determined in order that the estimation-error dynamics be stochastically asymptotically stable. When this estimator is applied to the uncertain system (2.1), the corresponding estimation-error vector is defined by e_k = x_k − x̂_k. From (2.1) and estimator (2.23), for η_k = i, one has

e_{k+1} = [A_i + ΔA_i(k)] x_k + w_k − G_i x̂_k − K_i ([C_i + ΔC_i(k)] x_k + v_k).    (2.24)

In terms of the state variables e_k and x̂_k, the state equations describing the augmented system obtained from (2.1) and (2.24) are as follows:

ξ_{k+1} = Ã(k, η_k) ξ_k + B̃(η_k) σ_k,  ξ_k = [e_k^t  x̂_k^t]^t,    (2.25)

where σ_k is a zero-mean white-noise process (a normalized stacking of w_k and v_k) and the matrices Ã(k, η_k) and B̃(η_k) collect the corresponding blocks (2.26). We introduce the following definition.
Definition 2.9. The state estimator (2.23) is said to be a stochastically stable quadratic guaranteed cost (SSQGC) state estimator with associated set of cost matrices {0 < X_i = X_i^t, i ∈ 𝒮} for system (2.1) if the augmented matrix F = Ã(k, i) satisfies |λ(F)| < 1 and there exist matrices 𝒳_i, i ∈ 𝒮, such that the guaranteed-cost inequality (2.27) holds for all uncertainties ‖Δ(k, i)‖ ≤ 1, where 𝒳̄_i = Σ_{j=1}^s p_ij X_j, i ∈ 𝒮.

In the discussion to follow, we restrict attention to the class of quadratic guaranteed cost state estimators. The next result shows that if estimator (2.23) is an SSQGC estimator for system (2.1) with cost matrices X_i, i ∈ 𝒮, then X_i provides an upper bound on the error covariance matrix at time k,

X_i^c(k) := E{ e_k e_k^t | η_k = i }.    (2.29)

Theorem 2.10. Suppose that (2.23) is an SSQGC state estimator with cost matrices X_i, i ∈ 𝒮, for system (2.1). Then the augmented system (2.25) is stochastically quadratically stable and the steady-state error covariance matrix satisfies the bound

lim_{k→∞} X_i^c(k) ≤ X_i

for all admissible uncertainties Δ(k, i).
Proof. Suppose that (2.23) is an SSQGC state estimator for system (2.1) with cost matrices X_i > 0. Since |λ(F)| < 1 for the augmented matrix F and (2.1) is stochastically quadratically stable, the augmented system (2.25) is also stochastically quadratically stable. This can be easily verified by selecting a Lyapunov matrix of the form

Ω_i = diag(Ω_i^s, ω_i Ω_i^f),

with ω_i > 0 a sufficiently large constant, where Ω_i^s and Ω_i^f are quadratic Lyapunov matrices for system (2.1) and estimator (2.23), respectively. Let E{ξ_o ξ_o^t} = Ξ_i ≥ 0. Since σ_k is a Gaussian white-noise process with identity covariance, it follows for any admissible uncertainty ‖Δ(k, i)‖ ≤ 1 that the state covariance matrix of system (2.25) can be expressed in terms of the state transition matrix Φ(k, j) associated with (2.25). Moreover, using the fact that system (2.25) is stochastically quadratically stable, the covariance X_i^c(k) converges: it satisfies the Lyapunov difference equation associated with (2.25), and, in view of the stochastic quadratic stability of (2.25), its limit is bounded by the cost matrix. Hence there exist constants ε_i > 0 such that this estimator is an SSQGC state estimator with cost matrices ε_i^{-1} X̂_i.

Remark 2.11. Since our main purpose is to construct a state estimator which minimizes the upper bound on the error covariance X_i, i ∈ 𝒮, we solve an alternative minimization problem based on the corresponding bound on the steady-state mean square error,

lim_{k→∞} E{ ‖e_k‖² | η_k = i } = Tr[X_i^c] ≤ Tr[X_i].    (2.42)

We will therefore be concerned with constructing an SSQGC state estimator which minimizes Tr[X_i]. In the case of limited state measurements, it may be required to estimate an output variable y_k = L x_k. The solution would then be an output estimate of the form ŷ_k = L x̂_k, with the corresponding steady-state mean square error bound Tr[L^t X_i L]. This means that an SSQGC state estimator would be constructed to minimize the quantity Tr[L^t X_i L].
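The trace bound underlying Remark 2.11 can be illustrated numerically: whenever the error covariance S satisfies S ≤ X in the matrix (Loewner) sense, the mean square error Tr[S] is bounded by Tr[X]. The matrices below are hypothetical and serve only to check the arithmetic.

```python
import numpy as np

# If S = E{e_k e_k^t} satisfies X - S >= 0, then
# E{||e_k||^2} = tr(S) <= tr(X): minimizing tr(X) tightens the MSE bound.
S = np.array([[0.4, 0.1], [0.1, 0.3]])   # assumed error covariance
X = np.array([[0.6, 0.1], [0.1, 0.5]])   # assumed guaranteed-cost matrix

assert np.linalg.eigvalsh(X - S).min() >= 0   # S <= X holds for this data
mse = np.trace(S)                             # mean square error, tr(S)
mse_bound = np.trace(X)                       # guaranteed bound, tr(X)
```

This is why the design problem in Section 3 is posed as minimization of Tr[X_i] rather than of the (uncertainty-dependent) covariance itself.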

Construction of the optimal filter
In this section, we provide an LMI approach to constructing the SSQGC state estimator for system (2.1) which minimizes the bound in (2.42). We show that the filtering problem can be solved if a family of coupled LMIs has a feasible solution. For simplicity of exposition, we assume that the matrix A_i is invertible for all i ∈ 𝒮. The following theorem establishes the main result.
Theorem 3.1. Consider system (2.1) and suppose that it is stochastically quadratically stable. If there exist two sets of matrices {Φ_i > 0} and {Ψ_i > 0}, i ∈ 𝒮, such that the coupled LMIs (3.1) and (3.2) have a feasible solution, then estimator (2.23) is an SSQGC state estimator, with gains G_i and K_i given explicitly in terms of Φ_i and Ψ_i.

Proof. The proof essentially follows a line similar to the proof of a result in the work of Xie et al. [32]. First, in view of the stochastic quadratic stability of system (2.1a), it follows from the results of [15, 32] that the bound (3.13) holds for each fixed i ∈ 𝒮. For an arbitrarily small ν > 0 and a sufficiently small ε_i > 0, inequality (3.13) implies (3.14). By the discrete bounded real lemma [16, 19], there then exists a matrix Ξ_i* such that, with Ξ̄_i* = Σ_{j=1}^s p_ij Ξ_j* and µ_i = ε_i ν, using (3.4) and applying the matrix-inversion lemma [16], inequality (3.16) follows from (3.15). Inequality (3.16) is feasible provided that Â_i is a stable matrix and 𝒲_i^{1/2} Φ̄_i 𝒲_i^{1/2} < I, where Φ̄_i = Σ_{j=1}^s p_ij Φ_j. Using parallel arguments, it follows from [1] that (3.2) is the LMI of the stationary standard linear filtering problem, where M_i and R̄_i are the covariance matrices of the process and measurement noise signals, respectively, and L_i is the cross-covariance matrix between the process and measurement noises.
To establish the stochastic quadratic stability of the estimator (2.23), we define 𝒴_i in terms of the feasible solutions Φ_i and Ψ_i of the LMIs (3.1) and (3.2), respectively, with 𝒴̄_i = Σ_{j=1}^s p_ij 𝒴_j. It then follows from Theorem 2.10 that (2.23) is a stochastically stable quadratic estimator with the guaranteed cost given by (3.12).

Remark 3.2. Theorem 3.1 provides an LMI-based feasibility test which can be conveniently solved with the MATLAB LMI solver. The matrix Υ_i, i ∈ 𝒮, is an intermediate variable introduced to ensure the well-posedness of the problem. In this study, it is assumed that the jumping-parameter information {η_k, k = 1, 2, ...} is available for the design. If this is not the case, the Wonham filtering technique [29] would be required to first estimate the Markov chain observed in Gaussian noise, after which the approach presented in this paper can be employed. Also note that the designed state estimator/filter (2.23) depends on the system mode i. Because of the jumping parameters {η_k} in the system, this dependence is unavoidable; a mode-independent filter (using one operating form for the whole system) would be very conservative. Indeed, the dependence is beneficial in that it gives more options for designing and choosing a better, if not the best, filter to estimate the system state: if the system is likely to stay in mode i, the filter (2.23) would be chosen in its ith form, and similarly the jth form can be chosen if the system is likely to jump from mode i to mode j with high probability.

Remark 3.3. In effect, the minimum-covariance estimation error for system (2.1) can be determined by the following minimization problem: minimize σ_i subject to the corresponding LMI constraints, where Φ_i and Ψ_i, i ∈ 𝒮, are the feasible solutions of (3.1) and (3.2).
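To make the mode-dependent structure of estimator (2.23) concrete, the sketch below runs x̂_{k+1} = G_i x̂_k + K_i z_k on a hypothetical noise-free nominal system, with observer-style gains G_i = A_i − K_i C_i chosen by hand. These are NOT the SSQGC gains of Theorem 3.1, which require solving the coupled LMIs (3.1) and (3.2); the point is only the switching of (G_i, K_i) with the mode.

```python
import numpy as np

# Hypothetical two-mode nominal system and hand-picked estimator gains.
rng = np.random.default_rng(1)
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.3, 0.0], [0.2, 0.6]])]
C = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
K = [np.array([[0.4], [0.1]]), np.array([[0.1], [0.5]])]   # assumed gains
P = np.array([[0.9, 0.1], [0.2, 0.8]])

x, xhat, eta = np.array([1.0, -1.0]), np.zeros(2), 0
for _ in range(300):
    z = C[eta] @ x                       # noise-free measurement z_k
    G = A[eta] - K[eta] @ C[eta]         # implied estimator matrix G_i
    x, xhat = A[eta] @ x, G @ xhat + K[eta] @ z   # joint update (2.23)
    eta = rng.choice(2, p=P[eta])        # observed mode switch
err = np.linalg.norm(x - xhat)           # error e_k = x_k - xhat_k decays
```

With this gain structure the error obeys e_{k+1} = (A_i − K_i C_i) e_k, so the error norm shrinks whenever every G_i is contractive along the realized mode path.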

Example
In order to illustrate Theorem 3.1, we consider a pilot-scale multireach water-quality system which falls into the class (2.1a)-(2.1b). Let the Markov process governing the mode switching have the infinitesimal generator (see [29]). For the three operating conditions (modes), the associated data are as follows:

Mode 1.

For the purpose of comparison, Table 4.2 gives the associated costs of both the guaranteed cost filter designed for the nominal system and the optimal filter developed in this paper. It is clear that the latter outperforms the standard one in the presence of parametric uncertainty.
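Since the example specifies the mode switching through an infinitesimal generator, a discrete-time transition matrix for a sampling period h can be obtained as P = exp(Λh). The generator and sampling period below are hypothetical (the paper's numerical values are not reproduced here); the sketch only illustrates the conversion.

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential via a plain Taylor series (adequate for small ||M||)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Hypothetical three-mode generator: off-diagonal rates >= 0, rows sum to zero.
Lam = np.array([[-0.5,  0.3,  0.2],
                [ 0.1, -0.4,  0.3],
                [ 0.2,  0.2, -0.4]])
h = 0.1                                  # assumed sampling period
Pd = expm_taylor(Lam * h)                # discrete transition matrix (p_ij)
```

For small h, Pd ≈ I + Λh, which is the usual first-order discretization of a continuous-time chain; the rows of Pd sum to one, as required of the p_ij in (2.1).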

Conclusions
In this paper, the problem of designing a robust steady-state estimator for a class of uncertain discrete-time systems with Markovian jump parameters has been addressed. The results obtained here extend the standard steady-state Kalman filter to systems with norm-bounded time-varying uncertainties in the state and measurement equations as well as jumping parameters. A linear state estimator has been constructed such that the estimation-error covariance is guaranteed to lie within a certain bound for all admissible uncertainties. The solution has been given in terms of a family of linear matrix inequalities.