1. Introduction
Descriptor systems, also referred to as singular systems, implicit systems, generalized state-space systems, or semistate systems, provide convenient and natural representations of economic systems, power systems, robotics, network theory, and circuit systems [1]. Stability analysis for singular systems is more involved than for nonsingular systems because, in addition to asymptotic stability, the regularity of the system and the elimination of impulses must also be addressed [2–5].
In practice, in many physical systems, such as aircraft control, solar receiver control, power systems, manufacturing systems, networked control systems, and air intake systems, abrupt variations may occur in the system structure due to random failures, repair of components, sudden environmental disturbances, changing subsystem interconnections, or abrupt variations in the operating point of a nonlinear plant [6–19]. Therefore, increasing attention has been paid to the problems of stochastic stability and stochastic admissibility for singular Markovian jump systems (SMJSs) [20–30]. Long et al. [23] derived stochastic admissibility conditions for a class of singular Markovian jump systems with mode-dependent time delays. Wang and Zhang [27] focused on asynchronous $l_2$-$l_\infty$ filtering for discrete-time stochastic Markov jump systems with randomly occurring sensor nonlinearities. However, the TRs in the literature mentioned above are assumed to be completely known.
In practice, the TRs of some jumping processes are difficult to estimate precisely because of cost and other factors. Therefore, analysis and synthesis problems for normal MJSs with incomplete information on the transition probabilities have attracted increasing attention [31–49]. Xiong and Lam [32] investigated robust $H_2$ control of Markovian jump systems with uncertain switching probabilities. Karan et al. [33] considered the stochastic stability robustness of continuous-time and discrete-time Markovian jump linear systems (MJLSs) with upper-bounded TRs. Zhang and Boukas [34] discussed stability and stabilization of continuous-time MJSs with partly unknown TRs. Lin et al. [38] considered delay-dependent $H_\infty$ filtering for discrete-time singular Markovian jump systems with time-varying delay and partially unknown transition probabilities. Guo and Wang [49] proposed another description of the uncertain TRs, called generally uncertain TRs (GUTRs).
On the other hand, state estimation plays an important role in systems and control theory, signal processing, and information fusion [50, 51]. The most widely used estimation method is the well-known Kalman filter [52, 53]. A common assumption in Kalman filtering is that an accurate model is available. In some applications, however, when the system is subject to parameter uncertainties, an accurate system model is hard to obtain. To overcome this difficulty, the guaranteed cost filtering approach has been proposed to ensure an upper bound on the guaranteed cost function [54]. Robust $H_\infty$ filtering for uncertain Markovian jump systems with mode-dependent time delays was proposed in [55]. In [56], guaranteed cost and $H_\infty$ filtering for time-delay systems were presented in terms of LMIs. However, to the best of our knowledge, few results consider the robust guaranteed cost observer for a class of linear singular Markovian jump time-delay systems with generally incomplete transition probability, which remains an open problem.
In this paper, based on the LMI method, we address the design of a robust guaranteed cost observer for a class of uncertain descriptor time-delay systems with Markovian jumping parameters and generally uncertain transition rates. The goal is to design a memoryless observer such that, for all admissible uncertainties, including generally uncertain transition rates, the resulting augmented system is regular, impulse-free, and robustly stochastically stable and satisfies the proposed guaranteed cost performance.
2. Problem Formulation
Consider the following descriptor time-delay system with Markovian jumping parameters:
(1) $E\dot{x}(t) = A(r_t,t)x(t) + A_d(r_t,t)x(t-d)$, $\quad y(t) = C(r_t,t)x(t) + C_d(r_t,t)x(t-d)$, $\quad x(t) = \varphi(t)$, $\forall t \in [-d,0]$,
where $x(t) \in \mathbb{R}^n$ and $y(t) \in \mathbb{R}^r$ are the state vector and the controlled output, respectively, and $d$ represents the state time delay. For convenience, the input terms in system (1) have been omitted. $\varphi(t) \in L_2[-d,0]$ is a continuous vector-valued initial function. The random parameter $r_t$ (also written $\gamma(t)$) is a continuous-time discrete-state Markov process taking values in a finite set $\mathbb{S} = \{1,2,\dots,s\}$ with transition rate matrix $\Pi = [\pi_{ij}]$, $i,j \in \mathbb{S}$. The transition probability from mode $i$ to mode $j$ is defined by
(2) $\Pr\{r_{t+\Delta} = j \mid r_t = i\} = \begin{cases} \pi_{ij}\Delta + o(\Delta), & i \neq j, \\ 1 + \pi_{ii}\Delta + o(\Delta), & i = j, \end{cases}$
where $\Delta > 0$ satisfies $\lim_{\Delta \to 0} (o(\Delta)/\Delta) = 0$, and $\pi_{ij} \geq 0$ ($i \neq j$) is the transition rate from mode $i$ to mode $j$ and satisfies
(3) $\pi_{ii} = -\sum_{j=1,\, j \neq i}^{s} \pi_{ij} \leq 0.$
In this paper, the transition rates of the jumping process are assumed to be only partly available; that is, some elements of the matrix $\Lambda$ are exactly known, some are known only up to lower and upper bounds, and the others are completely unknown. For instance, for system (1) with $s$ operation modes, the transition rate matrix might be described by
(4) $\Lambda = \begin{bmatrix} \hat{\pi}_{11}+\Delta_{11} & ? & ? & \cdots & ? \\ ? & ? & \hat{\pi}_{23}+\Delta_{23} & \cdots & \hat{\pi}_{2s}+\Delta_{2s} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ ? & \hat{\pi}_{s2}+\Delta_{s2} & ? & \cdots & ? \end{bmatrix},$
where $\hat{\pi}_{ij}$ and $\Delta_{ij} \in [-\sigma_{ij}, \sigma_{ij}]$ ($\sigma_{ij} \geq 0$) represent the estimate and the estimation error of the uncertain TR $\pi_{ij}$, respectively, where $\hat{\pi}_{ij}$ and $\sigma_{ij}$ are known. The symbol "?" represents a completely unknown TR, that is, one whose estimate $\hat{\pi}_{ij}$ and error bound are both unknown.
For notational clarity, for all $i \in \mathbb{S}$, the set $U^i$ denotes $U^i = U_k^i \cup U_{uk}^i$ with $U_k^i = \{j : \text{the estimate of } \pi_{ij} \text{ is known for } j \in \mathbb{S}\}$ and $U_{uk}^i = \{j : \text{the estimate of } \pi_{ij} \text{ is unknown for } j \in \mathbb{S}\}$. Moreover, if $U_k^i \neq \emptyset$, it is further described as $U_k^i = \{k_1^i, k_2^i, \dots, k_m^i\}$, where $k_m^i \in \mathbb{N}^+$ represents the index of the $m$th bound-known element in the $i$th row of the matrix $\Pi$. We assume that the known estimates of the TRs are well defined, in the following sense.
Assumption 1.
If $U_k^i = \mathbb{S}$, then $\hat{\pi}_{ij} - \sigma_{ij} \geq 0$ ($\forall j \in \mathbb{S}$, $j \neq i$), $\hat{\pi}_{ii} = -\sum_{j=1,\, j\neq i}^{s} \hat{\pi}_{ij}$, and $\sigma_{ii} = \sum_{j=1,\, j\neq i}^{s} \sigma_{ij}$.
Assumption 2.
If $U_k^i \neq \mathbb{S}$ and $i \in U_k^i$, then $\hat{\pi}_{ij} - \sigma_{ij} \geq 0$ ($\forall j \in U_k^i$, $j \neq i$), $\hat{\pi}_{ii} + \sigma_{ii} \leq 0$, and $\sum_{j \in U_k^i} \hat{\pi}_{ij} \leq 0$.
Assumption 3.
If $U_k^i \neq \mathbb{S}$ and $i \notin U_k^i$, then $\hat{\pi}_{ij} - \sigma_{ij} \geq 0$ ($\forall j \in U_k^i$).
Remark 4.
The above assumptions are reasonable, since they follow directly from the properties of the TRs (e.g., $\pi_{ij} \geq 0$ ($\forall i,j \in \mathbb{S}$, $j \neq i$) and $\pi_{ii} = -\sum_{j=1,\, j\neq i}^{s} \pi_{ij}$). This description of uncertain TRs is more general than either the MJS model with bounded uncertain TRs or the MJS model with partly unknown TRs. If $U_{uk}^i = \emptyset$ for all $i \in \mathbb{S}$, then the generally uncertain TR matrix (4) reduces to the bounded uncertain TR matrix (5):
(5) $\begin{bmatrix} \hat{\pi}_{11}+\Delta_{11} & \hat{\pi}_{12}+\Delta_{12} & \cdots & \hat{\pi}_{1s}+\Delta_{1s} \\ \hat{\pi}_{21}+\Delta_{21} & \hat{\pi}_{22}+\Delta_{22} & \cdots & \hat{\pi}_{2s}+\Delta_{2s} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{\pi}_{s1}+\Delta_{s1} & \hat{\pi}_{s2}+\Delta_{s2} & \cdots & \hat{\pi}_{ss}+\Delta_{ss} \end{bmatrix},$
where $\hat{\pi}_{ij} - \Delta_{ij} \geq 0$ ($\forall j \in \mathbb{S}$, $j \neq i$), $\hat{\pi}_{ii} = -\sum_{j=1,\, j\neq i}^{s} \hat{\pi}_{ij} \leq 0$, and $\Delta_{ii} = \sum_{j=1,\, j\neq i}^{s} \Delta_{ij}$. If $\sigma_{ij} = 0$ for all $i \in \mathbb{S}$ and all $j \in U_k^i$, then the generally uncertain TR matrix (4) reduces to the partly unknown TR matrix (6):
(6) $\begin{bmatrix} \pi_{11} & ? & ? & \cdots & ? \\ ? & ? & \pi_{23} & \cdots & \pi_{2s} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ ? & \pi_{s2} & ? & \cdots & ? \end{bmatrix}.$
Hence, the results of this paper are also applicable to general Markovian jump systems with bounded uncertain or partly unknown TR matrices.
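A GUTR matrix of the form (4) can be stored as an estimate matrix (with unknown entries marked) plus a bound matrix. The following Python sketch (illustrative only; the numbers merely mimic the structure of the numerical example in Section 4) extracts the index sets $U_k^i$, $U_{uk}^i$ and checks the sign conditions of Assumptions 1-3 rowwise:

```python
import numpy as np

# Hypothetical 3-mode GUTR data: `est` holds estimates pi_hat_ij (NaN marks a
# completely unknown "?"), `sig` holds the error bounds sigma_ij.
est = np.array([[-3.2, np.nan, np.nan],
                [np.nan, np.nan, 2.0],
                [1.5, 2.1, -3.6]])
sig = np.array([[0.15, 0.0, 0.0],
                [0.0, 0.0, 0.12],
                [0.15, 0.1, 0.12]])

def index_sets(est_row):
    """Split one row into the bound-known set U_k^i and the unknown set U_uk^i."""
    known = [j for j, v in enumerate(est_row) if not np.isnan(v)]
    unknown = [j for j, v in enumerate(est_row) if np.isnan(v)]
    return known, unknown

def check_row(i, est, sig):
    """Check one row against the assumptions: known off-diagonal estimates need
    pi_hat_ij - sigma_ij >= 0; a known diagonal needs pi_hat_ii + sigma_ii <= 0."""
    known, _ = index_sets(est[i])
    ok = all(est[i, j] - sig[i, j] >= 0 for j in known if j != i)
    if i in known:
        ok = ok and est[i, i] + sig[i, i] <= 0
    return ok

print([index_sets(est[i]) for i in range(3)])
print(all(check_row(i, est, sig) for i in range(3)))  # True
```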
$A(\gamma(t),t)$, $A_d(\gamma(t),t)$, $C(\gamma(t),t)$, and $C_d(\gamma(t),t)$ are matrix functions of the random jumping process $\gamma(t)$. To simplify the notation, $A_i(t)$ denotes $A(\gamma(t),t)$ when $\gamma(t) = i$; for example, $A_d(\gamma(t),t)$ is denoted by $A_{di}(t)$, and so on. Further, for each $\gamma(t) = i \in \mathbb{S}$, it is assumed that the matrices $A_i(t)$, $A_{di}(t)$, $C_i(t)$, and $C_{di}(t)$ can be described in the following form:
(7) $A_i(t) = A_i + \Delta A_i(t)$, $\quad A_{di}(t) = A_{di} + \Delta A_{di}(t)$, $\quad C_i(t) = C_i + \Delta C_i(t)$, $\quad C_{di}(t) = C_{di} + \Delta C_{di}(t)$,
where $A_i$, $A_{di}$, $C_i$, and $C_{di}$ are known real coefficient matrices with appropriate dimensions. The time-varying matrices $\Delta A_i(t)$, $\Delta A_{di}(t)$, $\Delta C_i(t)$, and $\Delta C_{di}(t)$ represent norm-bounded uncertainties and satisfy
(8) $\begin{bmatrix} \Delta A_i(t) & \Delta A_{di}(t) \\ \Delta C_i(t) & \Delta C_{di}(t) \end{bmatrix} = \begin{bmatrix} M_{1i} \\ M_{2i} \end{bmatrix} F_i(t) \begin{bmatrix} N_{1i} & N_{2i} \end{bmatrix},$
where $M_{1i}$, $M_{2i}$, $N_{1i}$, and $N_{2i}$ are known constant real matrices of appropriate dimensions, which represent the structure of the uncertainties, and $F_i(t)$ is an unknown matrix function with Lebesgue measurable elements satisfying $F_i(t) F_i^T(t) \leq I$.
Further, for convenience, we assume that the system has the same dimension at each mode and the Markov process is irreducible. Consider the following nominal unforced descriptor time-delay system:
(9) $E\dot{x}(t) = A_i x(t) + A_{di} x(t-d)$, $\quad x(t) = \varphi(t)$, $\forall t \in [-d,0].$
Let $x_0$, $r_0$, and $x(t, \varphi, r_0)$ be the initial state, the initial mode, and the corresponding solution of system (9) at time $t$, respectively.
Definition 5.
System (9) is said to be stochastically stable if, for every $\varphi(t) \in L_2[-d,0]$ and every initial mode $r_0 \in \mathbb{S}$, there exists a matrix $M > 0$ such that
(10) $\mathbb{E}\left\{ \int_0^\infty \|x(t,\varphi,r_0)\|^2 \, dt \;\Big|\; r_0,\; x(t) = \varphi(t),\; t \in [-d,0] \right\} \leq x_0^T M x_0.$
The following definition can be regarded as an extension of the definition in [2].
Definition 6.
(1) System (9) is said to be regular if $\det(sE - A_i)$, $i = 1,2,\dots,s$, is not identically zero.
(2) System (9) is said to be impulse free if $\deg(\det(sE - A_i)) = \operatorname{rank} E$, $i = 1,2,\dots,s$.
(3) System (9) is said to be admissible if it is regular, impulse free, and stochastically stable.
The linear memoryless observer under consideration is as follows:
(11) $E\dot{\hat{x}}(t) = K_{1i}\hat{x}(t) + K_{2i} y(t)$, $\quad \hat{x}_0 = 0$, $\quad r(0) = r_0$,
where $\hat{x}(t) \in \mathbb{R}^n$ is the observer state, and the constant matrices $K_{1i}$ and $K_{2i}$ are the observer parameters to be designed.
Denote the error state $e(t) = x(t) - \hat{x}(t)$ and the augmented state vector $x_f = [x^T(t) \;\; e^T(t)]^T$. Let $\tilde{x}(t) = L e(t)$ represent the output of the error states, where $L$ is a known constant matrix. Define
(12)
$A_{fi} = \begin{bmatrix} A_i & 0 \\ A_i - K_{1i} - K_{2i}C_i & K_{1i} \end{bmatrix}$, $\quad A_{fdi} = \begin{bmatrix} A_{di} & 0 \\ A_{di} - K_{2i}C_{di} & 0 \end{bmatrix}$, $\quad E_f = \begin{bmatrix} E & 0 \\ 0 & E \end{bmatrix}$,
$M_{fi} = M_{f1i} = \begin{bmatrix} M_{1i} \\ M_{1i} - K_{2i}M_{2i} \end{bmatrix}$, $\quad N_{fi} = [N_{1i} \;\; 0]$, $\quad \Delta A_{fi} = M_{fi} F_i(t) N_{fi}$,
$N_{f1i} = [N_{2i} \;\; 0]$, $\quad \Delta A_{fdi} = M_{f1i} F_i(t) N_{f1i}$, $\quad C_f = [0 \;\; L]$,
and combine (1) and (11); then we derive the augmented system as follows:
(13) $E_f \dot{x}_f(t) = (A_{fi} + \Delta A_{fi}) x_f(t) + (A_{fdi} + \Delta A_{fdi}) x_f(t-d),$
$z(t) = C_f x_f(t),$
$x_{f0}(t) = [\varphi^T(t), \varphi^T(t)]^T, \quad \forall t \in [-d,0].$
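The block structure in (12) can be assembled mechanically. A minimal sketch follows, with dimensions matching the numerical example of Section 4 (n = 2, r = 3); the gains $K_{1i}$, $K_{2i}$ here are placeholders, not the designed ones:

```python
import numpy as np

def augment(E, A, Ad, C, Cd, K1, K2, L):
    """Assemble the augmented matrices of (12) from the mode-i data and the
    observer gains; returns E_f, A_fi, A_fdi, C_f."""
    n = A.shape[0]
    Ef = np.block([[E, np.zeros_like(E)], [np.zeros_like(E), E]])
    Afi = np.block([[A, np.zeros((n, n))],
                    [A - K1 - K2 @ C, K1]])
    Afdi = np.block([[Ad, np.zeros((n, n))],
                     [Ad - K2 @ Cd, np.zeros((n, n))]])
    Cf = np.hstack([np.zeros_like(L), L])
    return Ef, Afi, Afdi, Cf

E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[-3.2, 0.65], [1.0, 0.2]])
Ad = np.array([[0.2, 0.5], [1.0, -0.68]])
C = np.array([[1.2, 0.65], [-6.5, 1.9], [-0.21, -1.8]])
Cd = np.array([[-3.6, -1.05], [2.1, 0.96], [0.21, -0.86]])
K1, K2 = np.eye(2), np.zeros((2, 3))       # placeholder gains
L = np.array([[-45.0, 0.6], [2.0, -6.0]])

Ef, Afi, Afdi, Cf = augment(E, A, Ad, C, Cd, K1, K2, L)
print(Ef.shape, Afi.shape, Afdi.shape, Cf.shape)  # (4, 4) (4, 4) (4, 4) (2, 4)
```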
Similar to [5], it is also assumed in this paper that, for all $\varsigma \in [-d, 0]$, there exists a scalar $h > 0$ such that $\|x_f(t+\varsigma)\| \leq h \|x_f(t)\|$.
Associated with system (13) is the cost function
(14) $\mathcal{J} = \mathbb{E}\left\{ \int_0^\infty z^T(t) z(t) \, dt \right\}.$
Definition 7.
Consider the augmented system (13). If there exist observer parameters $K_{1i}$, $K_{2i}$ and a positive scalar $\mathcal{J}^*$ such that, for all admissible uncertainties, the augmented system (13) is robustly stochastically stable and the cost function (14) satisfies $\mathcal{J} \leq \mathcal{J}^*$, then $\mathcal{J}^*$ is said to be a robust guaranteed cost and observer (11) is said to be a robust guaranteed cost observer for system (1) with (4).
Problem 8 (robust guaranteed cost observer problem for a class of linear singular Markovian jump time-delay systems with generally incomplete transition probability).
Given system (1) with GUTR matrix (4), can we determine an observer (11) with parameters $K_{1i}$ and $K_{2i}$ such that it is a robust guaranteed cost observer for system (1) with GUTR matrix (4)?
Lemma 9.
Given any real number $\varepsilon$ and any matrix $Q$, the matrix inequality $\varepsilon(Q + Q^T) \leq \varepsilon^2 T + Q T^{-1} Q^T$ holds for any matrix $T > 0$.
3. Main Results
Theorem 10.
Consider the augmented system (13) with GUTR matrix (4) and the cost function (14). The robust guaranteed cost observer (11) with parameters $K_{1i}$ and $K_{2i}$ can be designed if there exist matrices $P_i$, $K_{1i}$, and $K_{2i}$, $i = 1,2,\dots,s$, and a symmetric positive definite matrix $Q$ satisfying the following LMIs, respectively.
Case 1. If $i \notin U_k^i$ and $U_k^i = \{k_1^i, \dots, k_m^i\}$, there exists a set of symmetric positive definite matrices $T_{ij} \in \mathbb{R}^{n \times n}$ ($i \notin U_k^i$, $j \in U_k^i$) such that
(15) $E_f^T P_i = P_i^T E_f \geq 0,$
(16) $\begin{bmatrix} \Pi_i + C_f^T C_f & P_i(A_{fdi} + \Delta A_{fdi}) & \hat{N}_1 \\ * & -Q & 0 \\ * & * & \hat{N}_2 \end{bmatrix} < 0,$
(17) $P_i - P_j \geq 0, \quad \forall j \in U_{uk}^i, \; j \neq i.$
Case 2. If $i \in U_k^i$, $U_k^i = \{k_1^i, \dots, k_m^i\}$, and $U_{uk}^i \neq \emptyset$, there exists a set of symmetric positive definite matrices $V_{ij}^l \in \mathbb{R}^{n \times n}$ ($i, j \in U_k^i$, $l \in U_{uk}^i$) such that
(18) $E_f^T P_i = P_i^T E_f \geq 0,$
(19) $\begin{bmatrix} \Omega_i + C_f^T C_f & P_i(A_{fdi} + \Delta A_{fdi}) & \hat{M}_1 \\ * & -Q & 0 \\ * & * & \hat{M}_2 \end{bmatrix} < 0.$
Case 3. If $i \in U_k^i$ and $U_{uk}^i = \emptyset$, there exists a set of symmetric positive definite matrices $W_{ij} \in \mathbb{R}^{n \times n}$ ($i, j \in U_k^i$) such that
(20) $E_f^T P_i = P_i^T E_f \geq 0,$
(21) $\begin{bmatrix} \Delta_i + C_f^T C_f & P_i(A_{fdi} + \Delta A_{fdi}) & \hat{L}_1 \\ * & -Q & 0 \\ * & * & \hat{L}_2 \end{bmatrix} < 0,$
where
(22)
$\Pi_i = (A_{fi} + \Delta A_{fi})^T P_i + P_i (A_{fi} + \Delta A_{fi}) + Q + \sum_{j \in U_k^i} \hat{\pi}_{ij} E_f^T (P_j - P_i) + \sum_{j \in U_k^i} \tfrac{1}{4} \sigma_{ij}^2 T_{ij},$
$\Omega_i = (A_{fi} + \Delta A_{fi})^T P_i + P_i (A_{fi} + \Delta A_{fi}) + Q + \sum_{j \in U_k^i} \hat{\pi}_{ij} E_f^T (P_j - P_l) + \sum_{j \in U_k^i} \tfrac{1}{4} \sigma_{ij}^2 V_{ij}^l,$
$\Delta_i = (A_{fi} + \Delta A_{fi})^T P_i + P_i (A_{fi} + \Delta A_{fi}) + Q + \sum_{j \in \mathbb{S}, j \neq i} \hat{\pi}_{ij} E_f^T (P_j - P_i) + \sum_{j \in \mathbb{S}, j \neq i} \tfrac{1}{4} \sigma_{ij}^2 W_{ij},$
$\hat{N}_1 = \left[ E_f^T(P_{k_1^i} - P_i),\; E_f^T(P_{k_2^i} - P_i),\; \dots,\; E_f^T(P_{k_m^i} - P_i) \right],$
$\hat{N}_2 = \operatorname{diag}\{-T_{i k_1^i}, \dots, -T_{i k_m^i}\},$
$\hat{M}_1 = \left[ E_f^T(P_{k_1^i} - P_l),\; E_f^T(P_{k_2^i} - P_l),\; \dots,\; E_f^T(P_{k_m^i} - P_l) \right],$
$\hat{M}_2 = \operatorname{diag}\{-V_{i k_1^i}^l, \dots, -V_{i k_m^i}^l\},$
$\hat{L}_1 = \left[ E_f^T(P_1 - P_i),\; \dots,\; E_f^T(P_{i-1} - P_i),\; E_f^T(P_{i+1} - P_i),\; \dots,\; E_f^T(P_s - P_i) \right],$
$\hat{L}_2 = \operatorname{diag}\{-W_{i1}, \dots, -W_{is}\}.$
Proof.
According to Definition 2 and Theorem 1 in [2], we can derive from (15)–(21) that system (13) is regular and impulse free. Let the mode at time $t$ be $i$, and consider the following Lyapunov function for the augmented system (13):
(23) $V(x_f(t), \gamma(t) = i) = x_f^T(t) E_f^T P_i x_f(t) + \int_{t-d}^{t} x_f^T(s) Q x_f(s) \, ds,$
where $Q$ is a symmetric positive definite matrix to be chosen and $P_i$ is a matrix satisfying (15)–(21). The weak infinitesimal operator $\mathcal{L}$ of the stochastic process $\{\gamma(t), x_f(t)\}$, $t \geq 0$, is given by
(24) $\mathcal{L}V(x_f(t), \gamma(t) = i) = \lim_{\Delta \to 0} \frac{1}{\Delta} \left[ \mathbb{E}\left\{ V(x_f(t+\Delta), \gamma(t+\Delta)) \mid x_f(t), \gamma(t) = i \right\} - V(x_f(t), \gamma(t) = i) \right]$
$= x_f^T(t) \left[ (A_{fi} + \Delta A_{fi})^T P_i + P_i (A_{fi} + \Delta A_{fi}) + \sum_{j=1}^{s} \pi_{ij} E_f^T P_j + Q \right] x_f(t) + 2 x_f^T(t) P_i (A_{fdi} + \Delta A_{fdi}) x_f(t-d) - x_f^T(t-d) Q x_f(t-d).$
Case 1 ($i \notin U_k^i$). Note that in this case $\sum_{j \in U_{uk}^i, j \neq i} \pi_{ij} = -\sum_{j \in U_k^i} \pi_{ij} - \pi_{ii}$ and $\pi_{ij} \geq 0$ for $j \in U_{uk}^i$, $j \neq i$; then from (24), we have
(25) $x_f^T(t) \left[ \sum_{j=1}^{s} \pi_{ij} E_f^T P_j \right] x_f(t) = x_f^T(t) \left[ \sum_{j \in U_k^i} \pi_{ij} E_f^T P_j + \sum_{j \in U_{uk}^i, j \neq i} \pi_{ij} E_f^T P_j + \pi_{ii} E_f^T P_i \right] x_f(t)$
$\leq x_f^T(t) \left[ \sum_{j \in U_k^i} \pi_{ij} E_f^T P_j + \left( -\pi_{ii} - \sum_{j \in U_k^i} \pi_{ij} \right) E_f^T P_i + \pi_{ii} E_f^T P_i \right] x_f(t)$
$= x_f^T(t) E_f^T \left[ \sum_{j \in U_k^i} \pi_{ij} (P_j - P_i) \right] x_f(t) = x_f^T(t) E_f^T \left[ \sum_{j \in U_k^i} \hat{\pi}_{ij} (P_j - P_i) + \sum_{j \in U_k^i} \Delta_{ij} (P_j - P_i) \right] x_f(t),$
where the inequality uses (17) together with $\pi_{ij} \geq 0$ for $j \in U_{uk}^i$, $j \neq i$.
On the other hand, in view of Lemma 9, we have
(26) $\sum_{j \in U_k^i} \Delta_{ij} E_f^T (P_j - P_i) = \sum_{j \in U_k^i} \left[ \tfrac{1}{2} \Delta_{ij} E_f^T (P_j - P_i) + \tfrac{1}{2} \Delta_{ij} E_f^T (P_j - P_i) \right]$
$\leq \sum_{j \in U_k^i} \left[ \left( \tfrac{1}{2} \Delta_{ij} \right)^2 T_{ij} + E_f^T (P_j - P_i) T_{ij}^{-1} (P_j - P_i) E_f \right] \leq \sum_{j \in U_k^i} \left[ \tfrac{1}{4} \sigma_{ij}^2 T_{ij} + E_f^T (P_j - P_i) T_{ij}^{-1} (P_j - P_i) E_f \right].$
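The bound in (26) is Lemma 9 applied with $\varepsilon = \Delta_{ij}/2$ and $Q = E_f^T(P_j - P_i)$, which is symmetric when (15) holds. A randomized Python spot-check (illustrative only; diagonal $P$'s and a singular diagonal $E_f$ are chosen so that the symmetry condition of (15) holds) confirms the majorization:

```python
import numpy as np

rng = np.random.default_rng(2)

def bound_gap(delta, sigma, Ef, Pi, Pj, T):
    """min eigenvalue of the step-(26) majorant minus the left-hand side:
    (1/4) sigma^2 T + Ef^T(Pj-Pi) T^{-1} (Pj-Pi) Ef - delta*Ef^T(Pj-Pi)."""
    S = Ef.T @ (Pj - Pi)          # symmetric here, matching condition (15)
    gap = 0.25 * sigma**2 * T + S @ np.linalg.inv(T) @ S.T - delta * S
    return np.linalg.eigvalsh((gap + gap.T) / 2).min()

Ef = np.diag([1.0, 1.0, 0.0, 0.0])    # singular, like E_f in the paper
worst = min(
    bound_gap(rng.uniform(-0.5, 0.5), 0.5, Ef,
              np.diag(rng.uniform(0.1, 2.0, 4)),
              np.diag(rng.uniform(0.1, 2.0, 4)),
              (lambda B: B @ B.T + np.eye(4))(rng.normal(size=(4, 4))))
    for _ in range(100)
)
print(worst >= -1e-8)  # True: the majorant dominates for every |delta| <= sigma
```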
Case 2 ($i \in U_k^i$ and $U_{uk}^i \neq \emptyset$). Because $U_k^i = \{k_1^i, \dots, k_m^i\}$ and $U_{uk}^i = \{u_1^i, \dots, u_{s-m}^i\}$, there must exist $l \in U_{uk}^i$ such that $E_f^T P_l \geq E_f^T P_j$ ($\forall j \in U_{uk}^i$):
(27) $x_f^T(t) \left[ \sum_{j=1}^{s} \pi_{ij} E_f^T P_j \right] x_f(t) \leq x_f^T(t) \left[ \sum_{j \in U_k^i} \pi_{ij} E_f^T P_j - \left( \sum_{j \in U_k^i} \pi_{ij} \right) E_f^T P_l \right] x_f(t)$
$= x_f^T(t) E_f^T \left[ \sum_{j \in U_k^i} \pi_{ij} (P_j - P_l) \right] x_f(t) = x_f^T(t) E_f^T \left[ \sum_{j \in U_k^i} \hat{\pi}_{ij} (P_j - P_l) + \sum_{j \in U_k^i} \Delta_{ij} (P_j - P_l) \right] x_f(t).$
By using Lemma 9, we have
(28) $\sum_{j \in U_k^i} \Delta_{ij} E_f^T (P_j - P_l) = \sum_{j \in U_k^i} \left[ \tfrac{1}{2} \Delta_{ij} E_f^T (P_j - P_l) + \tfrac{1}{2} \Delta_{ij} E_f^T (P_j - P_l) \right]$
$\leq \sum_{j \in U_k^i} \left[ \left( \tfrac{1}{2} \Delta_{ij} \right)^2 V_{ij}^l + E_f^T (P_j - P_l) (V_{ij}^l)^{-1} (P_j - P_l) E_f \right] \leq \sum_{j \in U_k^i} \left[ \tfrac{1}{4} \sigma_{ij}^2 V_{ij}^l + E_f^T (P_j - P_l) (V_{ij}^l)^{-1} (P_j - P_l) E_f \right].$
Case 3 ($i \in U_k^i$ and $U_{uk}^i = \emptyset$). Consider
(29) $x_f^T(t) \left[ \sum_{j=1}^{s} \pi_{ij} E_f^T P_j \right] x_f(t) = x_f^T(t) E_f^T \left[ \sum_{j=1, j\neq i}^{s} \pi_{ij} (P_j - P_i) \right] x_f(t)$
$= x_f^T(t) E_f^T \left[ \sum_{j=1, j\neq i}^{s} \hat{\pi}_{ij} (P_j - P_i) + \sum_{j=1, j\neq i}^{s} \Delta_{ij} (P_j - P_i) \right] x_f(t).$
Case 1. Substituting (25) and (26) into (24) yields
(30) $\mathcal{L}V \leq \Lambda^T(t) \Phi_i \Lambda(t),$
where $\Lambda^T(t) = [x_f^T(t), x_f^T(t-d)]$ and
(31) $\Phi_i = \begin{bmatrix} \Pi_i + C_f^T C_f & P_i(A_{fdi} + \Delta A_{fdi}) & \hat{N}_1 \\ * & -Q & 0 \\ * & * & \hat{N}_2 \end{bmatrix}.$
Case 2. Substituting (27) and (28) into (24) yields
(32) $\mathcal{L}V \leq \Lambda^T(t) \Psi_i \Lambda(t),$
where $\Lambda^T(t) = [x_f^T(t), x_f^T(t-d)]$ and
(33) $\Psi_i = \begin{bmatrix} \Omega_i + C_f^T C_f & P_i(A_{fdi} + \Delta A_{fdi}) & \hat{M}_1 \\ * & -Q & 0 \\ * & * & \hat{M}_2 \end{bmatrix}.$
Case 3. Substituting (29) into (24), we get
(34) $\mathcal{L}V \leq \Lambda^T(t) \Gamma_i \Lambda(t),$
where $\Lambda^T(t) = [x_f^T(t), x_f^T(t-d)]$ and
(35) $\Gamma_i = \begin{bmatrix} \Delta_i + C_f^T C_f & P_i(A_{fdi} + \Delta A_{fdi}) & \hat{L}_1 \\ * & -Q & 0 \\ * & * & \hat{L}_2 \end{bmatrix}.$
Similar to [5], using Dynkin's formula, we derive for each $i \in \mathbb{S}$:
(36) $\lim_{T \to \infty} \mathbb{E}\left\{ \int_0^T x_f^T(t) x_f(t) \, dt \;\Big|\; \varphi_f, \gamma_0 = i \right\} \leq x_{f0}^T M x_{f0}.$
By Definition 5, it follows that the augmented system (13) is stochastically stable. Furthermore, from (16), (19), and (21), we have
(37) $\mathcal{L}V \leq -x_f^T(t) C_f^T C_f x_f(t) < 0.$
On the other hand, we have
(38) $\mathcal{J} = \mathbb{E}\left\{ \int_0^\infty x_f^T(t) C_f^T C_f x_f(t) \, dt \right\} < -\mathbb{E}\left\{ \int_0^\infty \mathcal{L}V \, dt \right\} = -\mathbb{E}\left\{ \lim_{t \to \infty} V(x(t), \gamma(t)) \right\} + V(x_0, \gamma_0).$
As the augmented system (13) is stochastically stable, it follows from (38) that $\mathcal{J} < V(x_{f0}, r_0)$. From Definition 7, it is concluded that a robust guaranteed cost for the augmented system (13) is given by $\mathcal{J}^* = x_{f0}^T E_f^T P(r_0) x_{f0} + \int_{-d}^{0} x_f^T(t) Q x_f(t) \, dt.$
In the following, based on the above sufficient condition, the design of robust guaranteed cost observers can be turned into the solvability of a system of LMIs.
Theorem 11.
Consider system (13) with GUTR matrix (4) and the cost function (14). Suppose there exist matrices $Y_{1i}$ and $Y_{2i}$, positive scalars $\varepsilon_i$, $i = 1,2,\dots,s$, a symmetric positive definite matrix $Q$, full-rank matrices $P_{2i}$, and matrices $P_i = \operatorname{diag}(P_{1i}, P_{2i})$, $i = 1,2,\dots,s$, satisfying the following LMIs, respectively.
Case 1. If $i \notin U_k^i$ and $U_k^i = \{k_1^i, \dots, k_m^i\}$, a set of positive definite matrices $T_{ij} \in \mathbb{R}^{n \times n}$ ($i \notin U_k^i$, $j \in U_k^i$) exists such that
(39) $E_f^T P_i = P_i^T E_f \geq 0,$
(40) $\begin{bmatrix} \phi_{1i} & \phi_{2i} & \bar{N}_1 & \phi_{3i} \\ \phi_{2i}^T & -Q & 0 & 0 \\ \bar{N}_1^T & 0 & \bar{N}_2 & 0 \\ \phi_{3i}^T & 0 & 0 & -\varepsilon_i I \end{bmatrix} < 0,$
(41) $P_i - P_j \geq 0, \quad \forall j \in U_{uk}^i, \; j \neq i.$
Case 2. If $i \in U_k^i$ ($U_k^i = \{k_1^i, \dots, k_m^i\}$) and $U_{uk}^i \neq \emptyset$, a set of positive definite matrices $V_{ij}^l \in \mathbb{R}^{n \times n}$ ($i, j \in U_k^i$, $l \in U_{uk}^i$) exists such that
(42) $E_f^T P_i = P_i^T E_f \geq 0,$
(43) $\begin{bmatrix} \varphi_{1i} & \varphi_{2i} & \bar{M}_1 & \varphi_{3i} \\ \varphi_{2i}^T & -Q & 0 & 0 \\ \bar{M}_1^T & 0 & \bar{M}_2 & 0 \\ \varphi_{3i}^T & 0 & 0 & -\varepsilon_i I \end{bmatrix} < 0.$
Case 3. If $i \in U_k^i$ and $U_{uk}^i = \emptyset$, a set of positive definite matrices $W_{ij} \in \mathbb{R}^{n \times n}$ ($i, j \in U_k^i$) exists such that
(44) $E_f^T P_i = P_i^T E_f \geq 0,$
(45) $\begin{bmatrix} \psi_{1i} & \psi_{2i} & \bar{L}_1 & \psi_{3i} \\ \psi_{2i}^T & -Q & 0 & 0 \\ \bar{L}_1^T & 0 & \bar{L}_2 & 0 \\ \psi_{3i}^T & 0 & 0 & -\varepsilon_i I \end{bmatrix} < 0,$
where
(46)
$\phi_{1i} = \varphi_{1i} = \psi_{1i} = \begin{bmatrix} P_{1i}A_i + A_i^T P_{1i} & A_i^T P_{2i} - Y_{1i}^T - C_i^T Y_{2i}^T \\ P_{2i}A_i - Y_{1i} - Y_{2i}C_i & Y_{1i}^T + Y_{1i} \end{bmatrix} + Q + C_f^T C_f + \sum_{j \in U_k^i} \hat{\pi}_{ij} E_f^T (P_j - P_i) + \sum_{j \in U_k^i} \tfrac{1}{4} \sigma_{ij}^2 T_{ij},$
$\phi_{2i} = \varphi_{2i} = \psi_{2i} = \begin{bmatrix} P_{1i}A_{di} & 0 \\ P_{2i}A_{di} - Y_{2i}C_{di} & 0 \end{bmatrix},$
$\phi_{3i} = \varphi_{3i} = \psi_{3i} = \begin{bmatrix} P_{1i}M_{1i} \\ P_{2i}M_{1i} - Y_{2i}M_{2i} \end{bmatrix},$
$\bar{N}_1 = \left[ E_f^T(P_{k_1^i} - P_i),\; E_f^T(P_{k_2^i} - P_i),\; \dots,\; E_f^T(P_{k_m^i} - P_i) \right],$
$\bar{N}_2 = \operatorname{diag}\{-T_{i k_1^i}, \dots, -T_{i k_m^i}\},$
$\bar{M}_1 = \left[ E_f^T(P_{k_1^i} - P_i),\; E_f^T(P_{k_2^i} - P_i),\; \dots,\; E_f^T(P_{k_m^i} - P_i) \right],$
$\bar{M}_2 = \operatorname{diag}\{-V_{i k_1^i}^l, \dots, -V_{i k_m^i}^l\},$
$\bar{L}_1 = \left[ E_f^T(P_1 - P_i),\; \dots,\; E_f^T(P_{i-1} - P_i),\; E_f^T(P_{i+1} - P_i),\; \dots,\; E_f^T(P_s - P_i) \right],$
$\bar{L}_2 = \operatorname{diag}\{-W_{i1}, \dots, -W_{is}\}.$
Then a suitable robust guaranteed cost observer of the form (11) has parameters
(47) $K_{1i} = P_{1i}^{-1} Y_{1i}, \quad K_{2i} = P_{2i}^{-1} Y_{2i},$
and $\mathcal{J}^*$ is a robust guaranteed cost for system (13) with GUTR matrix (4).
Proof.
Define
(48) $A_{i1} = \begin{bmatrix} A_{fi}^T P_i + P_i A_{fi} + Q + \sum_{j \in U_k^i} \hat{\pi}_{ij} E_f^T (P_j - P_i) + \sum_{j \in U_k^i} \tfrac{1}{4} \sigma_{ij}^2 T_{ij} + C_f^T C_f & P_i A_{fdi} & \bar{N}_1 \\ * & -Q & 0 \\ * & * & \bar{N}_2 \end{bmatrix},$
(49) $A_{i2} = \begin{bmatrix} A_{fi}^T P_i + P_i A_{fi} + Q + \sum_{j \in U_k^i} \hat{\pi}_{ij} E_f^T (P_j - P_i) + \sum_{j \in U_k^i} \tfrac{1}{4} \sigma_{ij}^2 V_{ij}^l + C_f^T C_f & P_i A_{fdi} & \bar{M}_1 \\ * & -Q & 0 \\ * & * & \bar{M}_2 \end{bmatrix},$
(50) $A_{i3} = \begin{bmatrix} A_{fi}^T P_i + P_i A_{fi} + Q + \sum_{j \in U_k^i} \hat{\pi}_{ij} E_f^T (P_j - P_i) + \sum_{j \in U_k^i} \tfrac{1}{4} \sigma_{ij}^2 W_{ij} + C_f^T C_f & P_i A_{fdi} & \bar{L}_1 \\ * & -Q & 0 \\ * & * & \bar{L}_2 \end{bmatrix}.$
Then (16), (19), and (21) are equivalent, respectively, to
(51) $A_{i1} + \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix} F_i(t) \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix} + \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix}^T F_i^T(t) \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix}^T < 0,$
(52) $A_{i2} + \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix} F_i(t) \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix} + \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix}^T F_i^T(t) \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix}^T < 0,$
(53) $A_{i3} + \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix} F_i(t) \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix} + \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix}^T F_i^T(t) \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix}^T < 0.$
By applying Lemma 2.4 in [57], (51), (52), and (53) hold for all uncertainties $F_i$ satisfying $F_i^T F_i \leq I$ if and only if there exist positive scalars $\varepsilon_i$, $i = 1, 2, \dots, s$, such that
(54) $A_{i1} + \varepsilon_i^{-1} \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix}^T + \varepsilon_i \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix}^T \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix} < 0,$
$A_{i2} + \varepsilon_i^{-1} \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix}^T + \varepsilon_i \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix}^T \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix} < 0,$
$A_{i3} + \varepsilon_i^{-1} \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} P_i M_{fi} \\ 0 \\ 0 \end{bmatrix}^T + \varepsilon_i \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix}^T \begin{bmatrix} N_{fi} & N_{f1i} & 0 \end{bmatrix} < 0.$
Let $P_i = \operatorname{diag}(P_{1i}, P_{2i})$; using (47), we conclude from the Schur complement that the above matrix inequalities are equivalent to the coupled LMIs (40), (43), and (45). It then follows from Theorem 10 that $\mathcal{J}^*$ is a robust guaranteed cost for system (13) with (4).
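The key step above (Lemma 2.4 in [57]) majorizes the uncertain term $MFN + N^T F^T M^T$ by $\varepsilon^{-1} M M^T + \varepsilon N^T N$ over all contractions $F$. A randomized Python spot-check (illustrative only; the matrices are arbitrary assumptions) verifies that once the majorized inequality holds, the uncertain one holds for every sampled $F$ with $\|F\| \leq 1$:

```python
import numpy as np

rng = np.random.default_rng(3)

def max_uncertain_eig(A, M, N, trials=200):
    """Largest eigenvalue of A + M F N + (M F N)^T over random contractions
    F (||F|| <= 1), probing the uncertain inequalities (51)-(53)."""
    worst = -np.inf
    for _ in range(trials):
        F = rng.normal(size=(M.shape[1], N.shape[0]))
        F /= max(1.0, np.linalg.norm(F, 2))   # enforce the contraction bound
        X = A + M @ F @ N + (M @ F @ N).T
        worst = max(worst, np.linalg.eigvalsh((X + X.T) / 2).max())
    return worst

n = 4
M = rng.normal(size=(n, 2))
N = rng.normal(size=(2, n))
eps = 1.0
# Construct A so that A + eps^{-1} M M^T + eps N^T N < 0 (shifted to -I).
A = -(eps**-1 * M @ M.T + eps * N.T @ N) - np.eye(n)
print(max_uncertain_eig(A, M, N) < 0)  # True: the majorant covers all F
```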
Remark 12.
The solutions of LMIs (39)–(45) parameterize the set of the proposed robust guaranteed cost observers. This parameterized representation can be used to design a guaranteed cost observer with additional performance constraints. By applying the methods in [14], a suboptimal guaranteed cost observer can be determined by solving a certain optimization problem, as stated in the following theorem.
Theorem 13.
Consider system (13) with GUTR matrix (4) and the cost function (14), and suppose that the initial conditions $r_0$ and $x_{f0}$ are known. If the optimization problem
(55) $\min_{Q,\, P_{1i},\, P_{2i},\, \varepsilon_i,\, Y_{1i},\, Y_{2i}} \mathcal{J}^* \quad \text{s.t. LMIs (39)–(45)}$
has a solution $Q$, $P_{1i}$, $P_{2i}$, $\varepsilon_i$, $Y_{1i}$, and $Y_{2i}$, $i = 1,2,\dots,s$, then the observer (11) is a suboptimal guaranteed cost observer for system (1), where $\mathcal{J}^* = x_{f0}^T E_f^T P(r_0) x_{f0} + \operatorname{tr}\left( \int_{-d}^{0} x_{f0}(t) x_{f0}^T(t) \, dt \, Q \right)$.
Proof.
It follows from Theorem 11 that the observer (11) constructed from the solution $Q$, $P_{1i}$, $P_{2i}$, $\varepsilon_i$, $Y_{1i}$, and $Y_{2i}$, $i = 1,2,\dots,s$, is a robust guaranteed cost observer. By noting that
(56) $\int_{-d}^{0} x_{f0}^T(t) Q x_{f0}(t) \, dt = \int_{-d}^{0} \operatorname{tr}\left( x_{f0}^T(t) Q x_{f0}(t) \right) dt = \operatorname{tr}\left( \int_{-d}^{0} x_{f0}(t) x_{f0}^T(t) \, dt \, Q \right),$
it follows that the suboptimal guaranteed cost observer problem reduces to the minimization problem (55).
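The trace identity (56) rests on $x^T Q x = \operatorname{tr}(x x^T Q)$ applied pointwise under the integral. A discretized Python check (illustrative only; the trajectory and $Q$ are arbitrary assumptions) confirms both sides agree:

```python
import numpy as np

rng = np.random.default_rng(4)

# Discretized check of (56): the integral of x^T Q x equals tr((∫ x x^T dt) Q).
ts = np.linspace(-1.0, 0.0, 1001)                 # d = 1, grid on [-d, 0]
X = np.stack([np.array([np.sin(3 * t), np.cos(2 * t), t, 1.0]) for t in ts])
Q = (lambda B: B @ B.T)(rng.normal(size=(4, 4)))   # any symmetric Q

dt = ts[1] - ts[0]
lhs = sum(x @ Q @ x for x in X) * dt               # Riemann sum of x^T Q x
Gram = X.T @ X * dt                                # Riemann sum of x x^T
rhs = np.trace(Gram @ Q)
print(np.isclose(lhs, rhs))  # True
```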
Remark 14.
Theorem 13 gives suboptimal guaranteed cost observer conditions for a class of linear Markovian jump time-delay systems with generally incomplete transition probability in terms of LMI constraints, which can be solved readily with the LMI toolbox in MATLAB.
4. Numerical Example
In this section, a numerical example is presented to demonstrate the effectiveness of the method in Theorem 11. Consider a two-dimensional system (1) with three Markovian switching modes. In this example, the singular system matrix is $E = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, and the three-mode transition rate matrix is
$\Lambda = \begin{bmatrix} -3.2 & ? & ? \\ ? & ? & 2 \\ 1.5 & 2.1 & -3.6 \end{bmatrix},$
where $\Delta_{11}, \Delta_{31} \in [-0.15, 0.15]$, $\Delta_{23}, \Delta_{33} \in [-0.12, 0.12]$, and $\Delta_{32} \in [-0.1, 0.1]$. The other system matrices are as follows.
For mode $i = 1$:
(57) $A_1 = \begin{bmatrix} -3.2 & 0.65 \\ 1 & 0.2 \end{bmatrix}$, $\quad A_{d1} = \begin{bmatrix} 0.2 & 0.5 \\ 1 & -0.68 \end{bmatrix}$, $\quad C_1 = \begin{bmatrix} 1.2 & 0.65 \\ -6.5 & 1.9 \\ -0.21 & -1.8 \end{bmatrix}$, $\quad C_{d1} = \begin{bmatrix} -3.6 & -1.05 \\ 2.1 & 0.96 \\ 0.21 & -0.86 \end{bmatrix}$,
$M_{11} = \begin{bmatrix} -0.2 \\ 0.8 \end{bmatrix}$, $\quad M_{21} = \begin{bmatrix} 0.25 \\ 0.875 \\ -2 \end{bmatrix}$, $\quad N_{11} = \begin{bmatrix} -1.2 & 3.1 \end{bmatrix}$, $\quad N_{21} = \begin{bmatrix} -0.69 & -4.2 \end{bmatrix}.$
For mode $i = 2$:
(58) $A_2 = \begin{bmatrix} -1 & 6 \\ 2 & -3.6 \end{bmatrix}$, $\quad A_{d2} = \begin{bmatrix} -3.1 & -1.6 \\ 3 & 0.75 \end{bmatrix}$, $\quad C_2 = \begin{bmatrix} 9 & -2.5 \\ 0.35 & -2 \\ 3.6 & -1.8 \end{bmatrix}$, $\quad C_{d2} = \begin{bmatrix} 0.89 & -6 \\ -1.2 & 0.9 \\ -2.4 & 6 \end{bmatrix}$,
$M_{12} = \begin{bmatrix} 2.3 \\ -4 \end{bmatrix}$, $\quad M_{22} = \begin{bmatrix} 0.75 \\ -3.6 \\ 2.5 \end{bmatrix}$, $\quad N_{12} = \begin{bmatrix} -7.2 & -6 \end{bmatrix}$, $\quad N_{22} = \begin{bmatrix} 1 & 2 \end{bmatrix}.$
For mode $i = 3$:
(59) $A_3 = \begin{bmatrix} -10.6 & 2.9 \\ -0.3 & 3.6 \end{bmatrix}$, $\quad A_{d3} = \begin{bmatrix} -5.6 & -1.2 \\ -3 & 4.5 \end{bmatrix}$, $\quad C_3 = \begin{bmatrix} -3 & -0.36 \\ 0.15 & -1.8 \\ 0.9 & -5 \end{bmatrix}$, $\quad C_{d3} = \begin{bmatrix} -1.65 & 5 \\ -1.2 & 2.65 \\ -0.98 & -5.6 \end{bmatrix}$,
$M_{13} = \begin{bmatrix} -8.2 \\ -0.3 \end{bmatrix}$, $\quad M_{23} = \begin{bmatrix} -0.52 \\ 2.5 \\ -3.6 \end{bmatrix}$, $\quad N_{13} = \begin{bmatrix} 1.05 & -5 \end{bmatrix}$, $\quad N_{23} = \begin{bmatrix} -7.2 & -1.26 \end{bmatrix}.$
Then, we set the error output matrix $L = \begin{bmatrix} -45 & 0.6 \\ 2 & -6 \end{bmatrix}$, and the positive scalars in Theorem 11 are $\varepsilon_1 = 0.2$, $\varepsilon_2 = 0.15$, $\varepsilon_3 = 0.32$. According to the definitions of the augmented state matrices in (12), we obtain the following parameter matrices of Theorem 11 by MATLAB:
(60)
$Y_{11} = \begin{bmatrix} -8452.1006 & 0.0127 \\ 0.0127 & 8450.9001 \end{bmatrix}$, $\quad Y_{21} = \begin{bmatrix} 0.02 & 0.1291 & 0.1435 \\ -2.3520 & -0.4080 & -0.8106 \end{bmatrix}$,
$Y_{12} = \begin{bmatrix} -17.0991 & 26.9626 \\ 26.9626 & -24.6750 \end{bmatrix}$, $\quad Y_{22} = \begin{bmatrix} -20.0744 & -13.2941 & 52.9388 \\ 21.6893 & 18.0120 & -50.6693 \end{bmatrix}$,
$Y_{13} = \begin{bmatrix} -675.1329 & 22.4456 \\ 22.4456 & -897.6976 \end{bmatrix}$, $\quad Y_{23} = \begin{bmatrix} -13.6021 & -146.1726 & 54.0500 \\ -3.4324 & -19.5125 & -10.3068 \end{bmatrix}$,
$P_1 = \operatorname{diag}(6.3029,\, 4.8620,\, 2.2914,\, 0.3169)$, $\quad P_2 = \operatorname{diag}(0.8914,\, 1.2505,\, 7.3629,\, 3.0056)$,
$P_3 = \operatorname{diag}(3.0265,\, 0.2156,\, 0.8965,\, 1.0002)$, $\quad Q = \operatorname{diag}(0.5000,\, 0.5001,\, 0.5001,\, 0.5001)$,
$T_{11} = \begin{bmatrix} 3417.3214 & -870.7765 & 0 & 0 \\ -870.7765 & 416.7216 & 0 & 0 \\ 0 & 0 & 2226.3598 & -320.7456 \\ 0 & 0 & -320.7456 & 1226.3101 \end{bmatrix}$,
$T_{23} = \begin{bmatrix} 3775.3231 & -2799.9330 & 0 & 0 \\ -2799.9330 & 2810.7685 & 0 & 0 \\ 0 & 0 & 10690.7366 & -10743.2750 \\ 0 & 0 & -10743.2750 & 10855.5053 \end{bmatrix}$,
$T_{31} = \begin{bmatrix} 951.8504 & -539.9245 & 0 & 0 \\ -539.9245 & 896.2029 & 0 & 0 \\ 0 & 0 & 1477.3012 & -207.7540 \\ 0 & 0 & -207.7540 & 1479.1256 \end{bmatrix}$,
$T_{32} = \begin{bmatrix} 2161.7695 & -1209.4164 & 0 & 0 \\ -1209.4164 & 2037.1205 & 0 & 0 \\ 0 & 0 & 1.4786 & -0.9283 \\ 0 & 0 & -0.9283 & 1.4794 \end{bmatrix}$,
$T_{33} = \begin{bmatrix} 1493.9313 & -839.8780 & 0 & 0 \\ -839.8780 & 1407.3689 & 0 & 0 \\ 0 & 0 & 147.8123 & -133.6452 \\ 0 & 0 & -133.6452 & 245.9347 \end{bmatrix}$,
$V_{11} = \operatorname{diag}(1.6650,\, 1.6650,\, 1.6650,\, 1.6650)$, $\quad W_{31} = \operatorname{diag}(1.5426,\, 1.6650,\, 1.6662,\, 1.6650)$, $\quad W_{32} = \operatorname{diag}(1.5428,\, 1.6650,\, 1.6622,\, 1.6650)$.
Therefore, we can design a linear memoryless observer (11) with the constant matrices
(61)
$K_{11} = P_{11}^{-1} Y_{11} = \begin{bmatrix} -1340.9860 & 0.0020 \\ 0.0026 & 1738.1530 \end{bmatrix}$, $\quad K_{21} = P_{21}^{-1} Y_{21} = \begin{bmatrix} 0.0087 & 0.0563 & 0.0626 \\ -7.4219 & -1.2875 & -2.5579 \end{bmatrix}$,
$K_{12} = P_{12}^{-1} Y_{12} = \begin{bmatrix} -19.1823 & 30.2475 \\ 21.5615 & -19.7321 \end{bmatrix}$, $\quad K_{22} = P_{22}^{-1} Y_{22} = \begin{bmatrix} -2.7264 & -1.8056 & 7.1899 \\ 7.2163 & 5.9928 & -16.8583 \end{bmatrix}$,
$K_{13} = P_{13}^{-1} Y_{13} = \begin{bmatrix} -223.1402 & 7.4186 \\ 104.1076 & -4163.7180 \end{bmatrix}$, $\quad K_{23} = P_{23}^{-1} Y_{23} = \begin{bmatrix} -15.1724 & -163.0481 & 60.2900 \\ -3.4317 & -19.5086 & -10.3047 \end{bmatrix}.$
Finally, the observer (11) with the above parameter matrices for this numerical example is a suboptimal guaranteed cost observer by Theorems 11 and 13.
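The gain computation of (61) can be cross-checked numerically: taking $P_{11}$ and $P_{21}$ as the upper-left and lower-right $2 \times 2$ blocks of $P_1$ in (60), the reported $K_{11}$ and $K_{21}$ should equal $P_{11}^{-1} Y_{11}$ and $P_{21}^{-1} Y_{21}$ up to the printed rounding:

```python
import numpy as np

P1 = np.diag([6.3029, 4.8620, 2.2914, 0.3169])
Y11 = np.array([[-8452.1006, 0.0127], [0.0127, 8450.9001]])
Y21 = np.array([[0.02, 0.1291, 0.1435], [-2.3520, -0.4080, -0.8106]])

K11 = np.linalg.solve(P1[:2, :2], Y11)     # K_11 = P_11^{-1} Y_11
K21 = np.linalg.solve(P1[2:, 2:], Y21)     # K_21 = P_21^{-1} Y_21

K11_reported = np.array([[-1340.9860, 0.0020], [0.0026, 1738.1530]])
K21_reported = np.array([[0.0087, 0.0563, 0.0626],
                         [-7.4219, -1.2875, -2.5579]])
print(np.allclose(K11, K11_reported, atol=0.05),
      np.allclose(K21, K21_reported, atol=0.005))  # True True
```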