The solvability theory of an important self-adjoint polynomial matrix equation is presented, including bounds on its Hermitian positive definite (HPD) solutions and sufficient conditions under which the unique or maximal HPD solution exists. An algebraic perturbation analysis with respect to perturbations of the coefficient matrices is also given. An efficient general iterative algorithm for the maximal or unique HPD solution is designed and tested in numerical experiments.
1. Introduction
In this paper, we consider the following self-adjoint polynomial matrix equation:
(1) X^s − A^*X^tA = Q,
where s, t are positive integers, A, Q ∈ ℂ^{n×n}, and Q > 0. As far as we know, the solvability of (1) has not been completely settled until now.
In many fields of applied mathematics, engineering, and the economic sciences, (1) plays an important role. The famous discrete-time algebraic Lyapunov equation (DALE) is exactly (1) with s = t = 1. Undoubtedly, DALE is one of the most important mathematical problems in signal processing, systems and control theory, and many other areas (see, e.g., the monographs [1, 2]). If A is stable (with respect to the unit circle), DALE has a unique Hermitian positive definite (HPD) solution. Fortunately, (1), which can be considered a nonlinear DALE if s ≠ 1 or t ≠ 1, inherits this strong relation between the spectral properties of A and solvability. What about the following algebraic Riccati equation:
(2) Y^2 + B^*Y + YB − A^*YA − R = 0,
where A, B, R ∈ ℂ^{n×n}, B^* = B ≥ 0, and R^* = R > 0? Defining X := Y + B and Q := R + B^2 − A^*BA, we immediately obtain (1) with s = 2 and t = 1 as an equivalent form of (2). Solving algebraic Riccati equations is an important task in the linear-quadratic regulator problem, Kalman filtering, H∞-control, model reduction problems, and so forth; see [1, 3–5] and the references therein. Many numerical methods have been proposed, such as invariant subspace methods [6], the Schur method [7], the doubling algorithm [8], and the structure-preserving doubling algorithm [9, 10]. At the same time, the perturbation theory was developed in [11–15], as were unified methods for the discrete-time and continuous-time algebraic Riccati equations [16, 17]. The general iteration method for (1) given in this paper, with s = 2 and t = 1, can thus be seen as a new algorithm for the algebraic Riccati equation (2).
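The equivalence of (2) and (1) under this substitution is easy to verify numerically. The following sketch (hypothetical random real data; numpy assumed) checks that the residual of (2) at any Hermitian Y equals the residual of (1) with s = 2, t = 1 at X = Y + B:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Hypothetical real test data: B = B^* >= 0, R = R^* > 0, Y Hermitian.
A = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
B = C @ C.T                       # Hermitian positive semidefinite
D = rng.standard_normal((n, n))
R = D @ D.T + np.eye(n)           # Hermitian positive definite
Y = rng.standard_normal((n, n))
Y = (Y + Y.T) / 2                 # arbitrary Hermitian trial point

# Substitution from the text: X := Y + B, Q := R + B^2 - A^* B A.
X = Y + B
Q = R + B @ B - A.T @ B @ A

res_riccati = Y @ Y + B @ Y + Y @ B - A.T @ Y @ A - R   # residual of (2)
res_poly = X @ X - A.T @ X @ A - Q                      # residual of (1), s=2, t=1

# The two residuals coincide for every Hermitian Y,
# so Y solves (2) if and only if X solves (1).
assert np.allclose(res_riccati, res_poly)
```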
Apart from the above applications, (1) is appealing from the mathematical viewpoint since it unifies a large class of polynomial matrix equations. Many nonlinear matrix equations are special cases of (1). For example, the nonlinear matrix equation X − A^*X^qA = Q (see, e.g., [18, 19]) with q = t/s is equivalent, via Y = X^{1/s}, to Y^s − A^*Y^tA = Q, where s, t are positive integers. In a rather general form, Ran and Reurings [18] investigated X + A^*ℱ(X)A = Q (Q > 0) for its positive semidefinite solutions under the assumption that the function ℱ(·) is monotone and Q − A^*ℱ(Q)A is positive definite. Besides, Lee and Lim [20] proved that (1) has a unique HPD solution when |s| ≥ 1 ≥ |t| and |t/s| < 1. See [21–25] for more recent results on nonlinear matrix equations. To the best of our knowledge, (1) with s < t (where no monotonicity is available) has not been discussed. These facts motivate us to study the polynomial matrix equation (1).
This paper is organized as follows. In Section 2 we deduce the existence and uniqueness conditions of HPD solutions of (1); in Section 3 we derive the algebraic perturbation theory for the unique or maximal solution of (1); finally in Section 4, we provide an iterative algorithm and two numerical experiments.
We begin with some notation used throughout this paper. 𝔽^{m×n} stands for the set of m×n matrices with entries in the field 𝔽 (𝔽 is ℝ or ℂ). If H ∈ 𝔽^{n×n} is Hermitian, λmin(H) and λmax(H) stand for its minimal and maximal eigenvalues, respectively. Denote the singular values of a matrix A ∈ 𝔽^{m×n} by σ1(A) ≥ ⋯ ≥ σl(A) ≥ 0, where l = min{m, n}. For Hermitian matrices X and Y, we write X ≥ Y (X > Y) if X − Y is positive semidefinite (definite), and we denote the set of matrices {X ∣ X − αI ≥ 0 and βI − X ≥ 0} by [αI, βI].
2. Solvability of Self-Adjoint Polynomial Matrix Equation
In this section, we study the solvability theory of (1) assuming that A is nonsingular, that is, λmin(A^*A) > 0. To this end, we need two simple but useful functions defined on the positive real axis:
(3) g1(x) = x^s − λmax(A^*A)x^t − λmax(Q), g2(x) = x^s − λmin(A^*A)x^t − λmin(Q).
The following two famous inequalities will be used frequently in the remainder of this paper.
Let A and B be positive operators on a Hilbert space H such that M1I ≥ A ≥ m1I > 0, M2I ≥ B ≥ m2I > 0, and 0 < A ≤ B. Then
(4) A^t ≤ (M1/m1)^{t−1}B^t, A^t ≤ (M2/m2)^{t−1}B^t
hold for any t ≥ 1.
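As a numerical sanity check of the first inequality in (4), the sketch below (hypothetical random operators; numpy assumed) takes matrix powers via an eigendecomposition and verifies that the gap matrix is positive semidefinite:

```python
import numpy as np

def mat_pow(H, p):
    """Power of a Hermitian positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * w**p) @ V.T

rng = np.random.default_rng(1)
n = 6
C = rng.standard_normal((n, n))
A = C @ C.T + np.eye(n)           # positive operator A (hypothetical data)
v = rng.standard_normal((n, 1))
B = A + v @ v.T                   # rank-one bump, so 0 < A <= B

w = np.linalg.eigvalsh(A)
m1, M1 = w[0], w[-1]              # m1*I <= A <= M1*I
t = 2.5                           # any t >= 1

# First inequality of (4): A^t <= (M1/m1)^(t-1) * B^t.
gap = (M1 / m1) ** (t - 1) * mat_pow(B, t) - mat_pow(A, t)
assert np.linalg.eigvalsh(gap)[0] >= -1e-6   # gap is positive semidefinite
```

Note that for t > 1 the plain Löwner-order implication A ≤ B ⇒ A^t ≤ B^t fails in general, which is why the constant (M1/m1)^{t−1} is needed.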
2.1. Maximal Solution of (1) with s<t
Now we derive a necessary condition and a sufficient condition for existence of HPD solutions of (1) with s<t. With g1(x) and g2(x) in hand, we can easily get the distribution of eigenvalues of the HPD solution X of (1).
Theorem 3.
Suppose that λmax(A^*A) ≤ (s/t)((t−s)/(λmax(Q)t))^{(t−s)/s} and X ∈ ℂ^{n×n} is an HPD solution of (1); then for any eigenvalue λ(X) of X,
(5) β1 ≤ λ(X) ≤ α1 or α2 ≤ λ(X) ≤ β2,
where α1, α2 are the two positive roots of g1(x) and β1, β2 are the two positive roots of g2(x).
Proof.
From Theorem 3.3.16(d) in Horn and Johnson [28], one can see that
(6) σi(A^*X^tA) ≤ σi(X^t)σ1^2(A), that is, λi(A^*X^tA) ≤ λi(X^t)λmax(A^*A), i = 1, …, n.
If A is nonsingular,
(7) σi(X^t) = σi((A^{−1})^*A^*X^tAA^{−1}) ≤ σn^{−2}(A)σi(A^*X^tA).
That means
(8) σi(A^*X^tA) ≥ σi(X^t)σn^2(A), that is, λi(A^*X^tA) ≥ λi(X^t)λmin(A^*A), i = 1, …, n.
The above inequalities still hold if A is singular, since in that case σn(A) = 0, that is, λmin(A^*A) = 0. Applying the Weyl theorem in Horn and Johnson [29], X^s = Q + A^*X^tA implies
(9) λ(X)^s − λmax(A^*A)λ(X)^t − λmax(Q) ≤ 0, λ(X)^s − λmin(A^*A)λ(X)^t − λmin(Q) ≥ 0.
Define the function f(x) = x^s − a^2x^t − q, a > 0, q > 0. Its only positive stationary point is x0 = ((t/s)a^2)^{1/(s−t)}. If a^2 ≤ (s/t)((t−s)/(qt))^{(t−s)/s}, then f(x) has two positive roots x1 and x2 with q^{1/s} < x1 ≤ x0 ≤ x2 < a^{2/(s−t)}. So λmax(A^*A) ≤ (s/t)((t−s)/(λmax(Q)t))^{(t−s)/s} implies that g1(x) has two positive roots α1, α2 and that g2(x) has two positive roots β1, β2. Since g2(x) ≥ g1(x), (λmin(Q))^{1/s} ≤ β1 ≤ α1 ≤ α2 ≤ β2 ≤ (λmin(A^*A))^{1/(s−t)}. Then from (9) we obtain (5).
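The four roots used above are easy to compute numerically: each g is negative at q^{1/s} and at the right end of its bracket and nonnegative at its stationary point x0, so bisection on the two brackets suffices. A sketch with hypothetical s, t, A, Q (numpy assumed; here a denotes λmax(A^*A) or λmin(A^*A) directly, playing the role of a^2 in the text):

```python
import numpy as np

def root(g, lo, hi, iters=200):
    """Bisection for the sign change of g on [lo, hi]."""
    sign_lo = np.sign(g(lo))
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if np.sign(g(mid)) == sign_lo else (lo, mid)
    return (lo + hi) / 2

s, t = 2, 3                                  # s < t
rng = np.random.default_rng(2)
n = 8
A = 0.05 * rng.standard_normal((n, n))       # hypothetical small coefficient matrix
Q = np.eye(n)                                # lambda_max(Q) = lambda_min(Q) = 1

a_max = np.linalg.eigvalsh(A.T @ A)[-1]
a_min = np.linalg.eigvalsh(A.T @ A)[0]
q_max = q_min = 1.0

# Condition of Theorem 3 guaranteeing two positive roots:
assert a_max <= (s / t) * ((t - s) / (q_max * t)) ** ((t - s) / s)

def two_roots(a, q):
    """Two positive roots of g(x) = x^s - a*x^t - q (a > 0, s < t)."""
    g = lambda x: x**s - a * x**t - q
    x0 = (s / (a * t)) ** (1 / (t - s))      # unique positive stationary point
    return root(g, q ** (1 / s), x0), root(g, x0, a ** (1 / (s - t)))

alpha1, alpha2 = two_roots(a_max, q_max)     # roots of g1
beta1, beta2 = two_roots(a_min, q_min)       # roots of g2

# Ordering from the proof: q_min^(1/s) <= beta1 <= alpha1 <= alpha2 <= beta2.
tol = 1e-9
assert q_min ** (1 / s) <= beta1 + tol
assert beta1 <= alpha1 + tol and alpha1 <= alpha2 + tol and alpha2 <= beta2 + tol
```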
If (1) has an HPD solution, its eigenvalues may lie in either of the intervals [β1, α1] and [α2, β2]. In what follows, we pay particular attention to HPD solutions whose eigenvalues lie in only one of these intervals.
Theorem 4.
Suppose that λmax(A^*A) ≤ (s/t)((t−s)/(λmax(Q)t))^{(t−s)/s}.
(1) Equation (1) has an HPD solution X ∈ [β1I, α1I], and if λmin(A^*A) > sα1^{s−1}(tβ1^{t−1})^{−1}, such an X is unique.
(2) Equation (1) has an HPD solution Z ∈ [α2I, β2I], and if λmin(A^*A) > sβ2^{s−1}(tα2^{t−1})^{−1}, such a Z is unique.
Proof.
(1) Let h1(X) = (Q + A^*X^tA)^{1/s}, where X ∈ [(λmin(Q))^{1/s}I, (s/(λmax(A^*A)t))^{1/(t−s)}I]. Lemmas 1 and 2 and t − s > 0 imply
(10) (λmin(Q))^{1/s}I ≤ h1(X) ≤ {λmax(Q) + λmax(A^*A)[s/(λmax(A^*A)t)]^{t/(t−s)}}^{1/s}I ≤ {[s/(λmax(A^*A)t)]^{s/(t−s)}}^{1/s}I = [s/(λmax(A^*A)t)]^{1/(t−s)}I.
Applying Brouwer's fixed-point theorem, h1(X) has a fixed point X ∈ [(λmin(Q))^{1/s}I, (s/(λmax(A^*A)t))^{1/(t−s)}I]. Then from Theorem 3, X ∈ [β1I, α1I].
We now prove the uniqueness of X under the additional condition that λmin(A^*A) > sα1^{s−1}(tβ1^{t−1})^{−1}. Suppose Y ∈ [(λmin(Q))^{1/s}I, (s/(λmax(A^*A)t))^{1/(t−s)}I] is another HPD solution of (1) and Y ≠ X. It is known that
(11) ‖X^t − Y^t‖F = ‖(A^{−1})^*(X^s − Y^s)A^{−1}‖F ≤ (λmin(A^*A))^{−1}‖X^s − Y^s‖F.
Then from ‖X^s − Y^s‖F ≤ sα1^{s−1}‖X − Y‖F and ‖X^t − Y^t‖F ≥ tβ1^{t−1}‖X − Y‖F,
(12) ‖X − Y‖F ≤ sα1^{s−1}[tβ1^{t−1}λmin(A^*A)]^{−1}‖X − Y‖F < ‖X − Y‖F,
which is impossible. Hence, X = Y.
(2) Let h2(Z) = [(A^{−1})^*(Z^s − Q)A^{−1}]^{1/t}, where Z ∈ [α2I, β2I]. h2(Z) is continuous, and
(13) h2(α2I) ≤ h2(Z) ≤ h2(β2I)
because (A^{−1})^*(α2^sI − Q)A^{−1} ≤ (A^{−1})^*(Z^s − Q)A^{−1} ≤ (A^{−1})^*(β2^sI − Q)A^{−1}. By Lemmas 1 and 2 and Brouwer's fixed-point theorem, it suffices to prove h2(α2I) ≥ α2I and h2(β2I) ≤ β2I for an HPD solution Z ∈ [α2I, β2I] to exist. The existence of such a Z follows from the inequalities
(14) h2(α2I) = [(A^{−1})^*(α2^sI − Q)A^{−1}]^{1/t} ≥ [(A^{−1})^*(α2^sI − λmax(Q)I)A^{−1}]^{1/t} ≥ [(λmax(A^*A))^{−1}(α2^s − λmax(Q))]^{1/t}I = α2I, h2(β2I) = [(A^{−1})^*(β2^sI − Q)A^{−1}]^{1/t} ≤ [(A^{−1})^*(β2^sI − λmin(Q)I)A^{−1}]^{1/t} ≤ [(λmin(A^*A))^{−1}(β2^s − λmin(Q))]^{1/t}I = β2I.
Next we prove the uniqueness of Z under the additional condition that λmin(A^*A) > sβ2^{s−1}(tα2^{t−1})^{−1}. Suppose (1) has two different HPD solutions Z and Y on [α2I, β2I]. Then
(15) ‖Z^t − Y^t‖F = ‖(A^{−1})^*(Z^s − Y^s)A^{−1}‖F ≤ (λmin(A^*A))^{−1}‖Z^s − Y^s‖F ≤ (λmin(A^*A))^{−1}sβ2^{s−1}‖Z − Y‖F.
Moreover, applying the inequality ‖Z^t − Y^t‖F ≥ tα2^{t−1}‖Z − Y‖F, we have
(16) ‖Z − Y‖F ≤ (tα2^{t−1}λmin(A^*A))^{−1}sβ2^{s−1}‖Z − Y‖F < ‖Z − Y‖F,
which is impossible. Hence, Y = Z.
The maximal solution (see, e.g., [30, 31]) of (1) is defined as follows.
Definition 5.
An HPD solution XM∈ℂn×n of (1) is the maximal solution if, for any HPD solution Y∈ℂn×n of (1), there is XM≥Y.
So part (2) of Theorem 4 implies that the maximal solution of (1) lies in [α2I, β2I].
Theorem 6.
Suppose that λmax(A^*A) ≤ (s/t)((t−s)/(λmax(Q)t))^{(t−s)/s} and λmin(A^*A) > sβ2^{s−1}(tα2^{t−1})^{−1}; then (1) has a maximal solution Xmax ∈ [α2I, β2I], which can be computed by
(17) X_i = [(A^{−1})^*(X_{i−1}^s − Q)A^{−1}]^{1/t}, i = 1, 2, …,
with the initial value X0 = β2I.
Proof.
Let ξ = sβ2^{s−1}(tα2^{t−1}λmin(A^*A))^{−1}; then ξ < 1. From the proof of Theorem 4 (2),
(18) tα2^{t−1}‖X_{i+1} − X_i‖F ≤ ‖X_{i+1}^t − X_i^t‖F = ‖(A^{−1})^*(X_i^s − X_{i−1}^s)A^{−1}‖F ≤ (λmin(A^*A))^{−1}‖X_i^s − X_{i−1}^s‖F ≤ (λmin(A^*A))^{−1}sβ2^{s−1}‖X_i − X_{i−1}‖F.
Then
(19) ‖X_{i+1} − X_i‖F ≤ ξ‖X_i − X_{i−1}‖F ≤ ξ^i‖X1 − X0‖F,
which implies that the matrix sequence {X0, X1, X2, …} generated by (17) converges.
Set X0 = β2I. Assuming X_i ∈ [α2I, β2I], from inequalities (14) we have
(20) α2I ≤ h2(α2I) ≤ X_{i+1} = [(A^{−1})^*(X_i^s − Q)A^{−1}]^{1/t} ≤ h2(β2I) ≤ β2I.
That means X_i ∈ [α2I, β2I] for every i = 0, 1, 2, …. By Theorem 4 (2), we see that Xmax = lim_{i→+∞} X_i is the unique HPD solution of (1) on [α2I, β2I].
Now we prove the maximality of Xmax. Suppose that X is an arbitrary HPD solution of (1); then X0 ≥ X, and Theorem 3 implies X0^t ≥ X^t (since X0 = β2I). Assuming that X_i^t ≥ X^t, Lemma 1 with s/t < 1 implies
(21) X_{i+1}^t = (A^{−1})^*[(X_i^t)^{s/t} − Q]A^{−1} ≥ (A^{−1})^*[(X^t)^{s/t} − Q]A^{−1} = X^t.
Then Xmax^t = lim_{i→+∞} X_i^t ≥ X^t, which implies Xmax ≥ X by the Löwner-Heinz inequality.
Note that similar iteration formulas have appeared in papers such as [20, 21] for other nonlinear matrix equations. Here we have proved for the first time that the iteration (17) preserves the maximality of X_i over all HPD solutions of (1).
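Iteration (17) of Theorem 6 can be sketched as follows (hypothetical data: A is taken close to a scaled orthogonal matrix so that λmin(A^*A) is large enough for the condition of Theorem 6 to hold; numpy assumed; matrix powers via eigendecomposition, roots β2 and α2 by bisection as in the proof of Theorem 3):

```python
import numpy as np

def mat_pow(H, p):
    """p-th power of a Hermitian positive (semi)definite matrix."""
    w, V = np.linalg.eigh((H + H.T) / 2)
    return (V * w**p) @ V.T

def root(g, lo, hi, iters=200):
    sign_lo = np.sign(g(lo))
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if np.sign(g(mid)) == sign_lo else (lo, mid)
    return (lo + hi) / 2

s, t = 2, 3                                   # s < t
rng = np.random.default_rng(5)
n = 6
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = 0.15 * (U + 0.005 * rng.standard_normal((n, n)))  # nearly scaled-orthogonal
Q = np.eye(n)                                 # lambda_max(Q) = lambda_min(Q) = 1

a_max = np.linalg.eigvalsh(A.T @ A)[-1]
a_min = np.linalg.eigvalsh(A.T @ A)[0]
g1 = lambda x: x**s - a_max * x**t - 1.0
g2 = lambda x: x**s - a_min * x**t - 1.0
alpha2 = root(g1, (s / (a_max * t)) ** (1 / (t - s)), a_max ** (1 / (s - t)))
beta2 = root(g2, (s / (a_min * t)) ** (1 / (t - s)), a_min ** (1 / (s - t)))

# Convergence condition of Theorem 6:
assert a_min > s * beta2 ** (s - 1) / (t * alpha2 ** (t - 1))

# Iteration (17): X_i = [(A^{-1})^* (X_{i-1}^s - Q) A^{-1}]^{1/t}, X_0 = beta2*I.
Ainv = np.linalg.inv(A)
X = beta2 * np.eye(n)
for _ in range(200):
    Xn = mat_pow(Ainv.T @ (mat_pow(X, s) - Q) @ Ainv, 1 / t)
    done = np.linalg.norm(Xn - X, 'fro') < 1e-12
    X = Xn
    if done:
        break

res = np.linalg.norm(mat_pow(X, s) - A.T @ mat_pow(X, t) @ A - Q, 'fro')
assert res < 1e-8
```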
2.2. Unique Solution of (1) with s≥t
If s > t, Lee and Lim [20, Theorem 9.4] showed that (1) always has a unique HPD solution, denoted by Xu. We now give an upper bound and a lower bound for Xu and suggest an iteration method for computing Xu.
As defined in (3), for s > t the functions g1(x) and g2(x) have unique positive roots, denoted by γ1 and γ2, respectively.
Since g1(λ(Xu)) ≤ 0 and g2(λ(Xu)) ≥ 0, we have γ2 ≤ λ(Xu) ≤ γ1.
Theorem 7.
If s > t, (1) has a unique HPD solution Xu ∈ [γ2I, γ1I]. Let X0 = γ1I or X0 = γ2I; then the matrix sequence {X0, X1, X2, …} generated by
(22) X_i = (Q + A^*X_{i−1}^tA)^{1/s}, i = 1, 2, …,
converges to Xu.
Proof.
We only need to prove the convergence of the matrix sequence {X0, X1, X2, …}. Set X0 = γ1I. From (22) we have
(23) X1 = (Q + γ1^tA^*A)^{1/s} ≤ (λmax(Q) + γ1^tλmax(A^*A))^{1/s}I = γ1I,
and then X1^s ≤ X0^s. Assuming that X_i^s ≤ X_{i−1}^s,
(24) X_{i+1}^s = Q + A^*X_i^tA = Q + A^*(X_i^s)^{t/s}A ≤ Q + A^*(X_{i−1}^s)^{t/s}A = X_i^s.
Then X_{i+1}^s ≤ X_i^s for any i = 0, 1, 2, …, and hence X_{i+1} ≤ X_i by the Löwner-Heinz inequality. On the other hand, X0 ≥ γ2I implies X_i ≥ γ2I for any i = 0, 1, 2, …, because if X_{i−1} ≥ γ2I, then
(25) X_i = (Q + A^*X_{i−1}^tA)^{1/s} ≥ (Q + γ2^tA^*A)^{1/s} ≥ (λmin(Q) + γ2^tλmin(A^*A))^{1/s}I = γ2I.
Thus {X0, X1, X2, …} with X0 = γ1I is a monotonically decreasing matrix sequence bounded below by γ2I. Similarly, {X0, X1, X2, …} generated by (22) with X0 = γ2I is a monotonically increasing matrix sequence bounded above by γ1I. Therefore, the convergence of {X0, X1, X2, …} is proved.
The above proof shows that iteration (22) keeps every iterate X_i an upper bound of Xu when X0 = γ1I and a lower bound of Xu when X0 = γ2I.
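For s > t, iteration (22) with X0 = γ1I can be sketched as follows (hypothetical data; numpy assumed; γ1 is bracketed by doubling an endpoint and then found by bisection):

```python
import numpy as np

def mat_pow(H, p):
    w, V = np.linalg.eigh((H + H.T) / 2)
    return (V * w**p) @ V.T

def pos_root(a, q, s, t):
    """Unique positive root of g(x) = x^s - a*x^t - q for s > t."""
    g = lambda x: x**s - a * x**t - q
    hi = 1.0
    while g(hi) <= 0:
        hi *= 2                               # g is negative before the root
    lo = 0.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

s, t = 8, 5                                   # s > t
rng = np.random.default_rng(3)
n = 20
A = rng.standard_normal((n, n)) / n           # hypothetical coefficient matrix
Q = np.eye(n)

gamma1 = pos_root(np.linalg.eigvalsh(A.T @ A)[-1], 1.0, s, t)  # root of g1

# Iteration (22): X_i = (Q + A^* X_{i-1}^t A)^{1/s}, X_0 = gamma1 * I.
X = gamma1 * np.eye(n)
for _ in range(200):
    Xn = mat_pow(Q + A.T @ mat_pow(X, t) @ A, 1 / s)
    done = np.linalg.norm(Xn - X, 'fro') < 1e-13
    X = Xn
    if done:
        break

res = np.linalg.norm(mat_pow(X, s) - A.T @ mat_pow(X, t) @ A - Q, 'fro')
assert res < 1e-10
```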
If s = t, setting Y = X^s reduces (1) to the linear matrix equation Y − A^*YA = Q, which is the discrete-time algebraic Lyapunov equation (DALE), or Hermitian Stein equation [1, page 5]. It is well known that if A is d-stable (see [1]), Y − A^*YA = Q has a unique solution, and the matrix sequence {Y0, Y1, Y2, …} generated by Y_{i+1} = Q + A^*Y_iA with any initial value Y0 converges to that solution. Besides, applying [32, Theorem 1, Section 13.2], [1, Theorem 1.1.18], and the results of Section 6.4 in [28], it is not difficult to obtain the expression Xu = (∑_{j=0}^{∞}(A^*)^jQA^j)^{1/s} for the unique solution.
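A sketch of the s = t case (hypothetical d-stable A; numpy assumed): the fixed-point iteration Y_{i+1} = Q + A^*Y_iA started from Y0 = Q generates exactly the partial sums of ∑_j (A^*)^jQA^j, and then Xu = Y^{1/s}:

```python
import numpy as np

def mat_pow(H, p):
    w, V = np.linalg.eigh((H + H.T) / 2)
    return (V * w**p) @ V.T

rng = np.random.default_rng(4)
n = 10
B = rng.standard_normal((n, n))
A = 0.5 * B / np.abs(B).sum(axis=1).max()   # ||A||_inf = 0.5, so rho(A) < 1 (d-stable)
Q = np.eye(n)
s = 3                                        # equation X^s - A^* X^s A = Q

# Y_{i+1} = Q + A^* Y_i A; starting from Y_0 = Q, the i-th iterate is the
# i-th partial sum of sum_j (A^*)^j Q A^j.
Y = Q.copy()
for _ in range(500):
    Yn = Q + A.T @ Y @ A
    done = np.linalg.norm(Yn - Y, 'fro') < 1e-14
    Y = Yn
    if done:
        break

Xu = mat_pow(Y, 1 / s)                       # X_u = Y^{1/s}
res = np.linalg.norm(mat_pow(Xu, s) - A.T @ mat_pow(Xu, s) @ A - Q, 'fro')
assert res < 1e-10
```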
Now we have presented the solvability theory of the self-adjoint polynomial matrix equation (1) in three cases. A general iterative algorithm for its maximal solution (s<t) or unique solution (s≥t) will be given in Section 4. Before it, we study the algebraic perturbation of the maximal or unique solution of (1).
3. Algebraic Perturbation Analysis
In this section, we present the algebraic perturbation analysis of the HPD solution of (1) with respect to the perturbation of its coefficient matrices. Similar to [30], we define the perturbed matrix equation of (1) as
(26) X̂^s − Â^*X̂^tÂ = Q̂,
where Â = A + ΔA ∈ ℂ^{n×n} and Q̂ = Q + ΔQ ∈ ℂ^{n×n}. We always suppose that (1) has a maximal (or unique) solution, denoted by XM ∈ [α2I, β2I], and that (26) has a maximal (or unique) solution, denoted by X̂M ∈ [α̂2I, β̂2I].
Now we present the perturbation bound for XM when s ≠ t. Define the function τ:
(27) τ(α, β) = sα^{s−1} − tβ^{t−1}‖A‖2^2, (α, β) ∈ ℝ^2.
Theorem 8.
Let ε > 0 be an arbitrary real number and suppose τ(α̂2, β̂2) > 0. If
(28) ‖ΔA‖F < (‖A‖2^2 + (2ε/3)τ(α̂2, β̂2)‖X̂M‖2^{−t})^{1/2} − ‖A‖2, ‖ΔQ‖F < (1/3)τ(α̂2, β̂2)ε,
then
(29) ‖X̂M − XM‖F < ε.
Proof.
It is easy to deduce that
(30) ‖X̂M^s − XM^s‖F ≥ (∑_{k=0}^{s−1} α̂2^{s−1−k}α2^k)‖X̂M − XM‖F ≥ sα̂2^{s−1}‖X̂M − XM‖F, ‖X̂M^t − XM^t‖F ≤ (∑_{k=0}^{t−1} β̂2^{t−1−k}β2^k)‖X̂M − XM‖F ≤ tβ̂2^{t−1}‖X̂M − XM‖F.
Then from (1) and (26), we have
(31) τ(α̂2, β̂2)‖X̂M − XM‖F ≤ 2‖A‖2‖X̂M‖2^t‖ΔA‖F + ‖X̂M‖2^t‖ΔA‖F^2 + ‖ΔQ‖F.
Since τ(α̂2, β̂2) > 0,
(32) ‖X̂M − XM‖F ≤ (τ(α̂2, β̂2))^{−1}(2‖A‖2‖X̂M‖2^t‖ΔA‖F + ‖X̂M‖2^t‖ΔA‖F^2 + ‖ΔQ‖F).
Then for an arbitrary ε > 0, if ‖ΔA‖F < (‖A‖2^2 + (2ε/3)τ(α̂2, β̂2)‖X̂M‖2^{−t})^{1/2} − ‖A‖2 and ‖ΔQ‖F < (1/3)τ(α̂2, β̂2)ε, we obtain (29).
If s = t, for an arbitrary ε > 0, define
(33) ϱ(ε) = ‖A‖2 + (‖A‖2^2 + 2ε/(3ρ))^{1/2},
where
(34) ρ = ‖X̂M‖2^s[sα̂2^{s−1}(1 − ‖A‖2^2)]^{−1}.
Theorem 9.
Let ε > 0 be an arbitrary real number and ‖A‖2 < 1. If
(35) ‖ΔA‖F < (2ε/3)(ρϱ(ε))^{−1}, ‖ΔQ‖F < (ε/(3ρ))‖X̂M‖2^s,
then
(36)∥X^M-XM∥F<ɛ.
Proof.
Similar to the proof of Theorem 8, we can deduce that
(37) (1 − ‖A‖2^2)‖X̂M^s − XM^s‖F ≤ 2‖A‖2‖X̂M‖2^s‖ΔA‖F + ‖X̂M‖2^s‖ΔA‖F^2 + ‖ΔQ‖F.
Then
(38) ‖X̂M^s − XM^s‖F ≤ (1 − ‖A‖2^2)^{−1}(2‖A‖2‖X̂M‖2^s‖ΔA‖F + ‖X̂M‖2^s‖ΔA‖F^2 + ‖ΔQ‖F).
With the help of (30) and (34), (38) implies
(39) ‖X̂M − XM‖F ≤ ρ(‖ΔA‖F^2 + 2‖A‖2‖ΔA‖F + ‖X̂M‖2^{−s}‖ΔQ‖F).
Then if ‖ΔA‖F < (2ε/3)(ρϱ(ε))^{−1} and ‖ΔQ‖F < (ε/(3ρ))‖X̂M‖2^s, we obtain (36).
Theorems 8 and 9 show that the perturbation of XM can be controlled whenever ΔA and ΔQ are suitably bounded.
4. Algorithm and Numerical Experiments
In this section we give a general iterative algorithm for the maximal or unique solutions of (1) and two numerical experiments. All reported results were obtained using MATLAB-R2012b on a personal computer with 2.4 GHz Intel Core i7 and 8 GB 1600 MHz DDR3.
Example 10.
Let A = rand(100) × 10^{−2} and Q = eye(100). With tol = 10^{−12} and at most 200 iterations, we apply Algorithm 1 to compute the maximal or unique HPD solutions of (1) with s ≠ t and compare the results with those of the iteration method from [33] (denoted by MONO in Table 1).
Table 1 shows the iterations, CPU times before convergence, and the residues of the computed HPD solution X, defined by
(40) e(s,t) = ‖X^s − A^*X^tA − Q‖F / ‖[A, Q]‖F.
Table 1: Iteration, CPU time (seconds), and residue for solving (1) with s ≠ t.

            |        Algorithm 1          |            MONO
(s,t)       | Ite   CPU      Res          | Ite   CPU      Res
(2,1)       | 9     0.0541   4.5275e-13   | 200   2.1031   0.0016
(1,2)       | 200   1.0275   2.0297e-07   | —     —        —
(8,5)       | 10    0.0716   5.9909e-13   | 200   2.2284   0.0034
(5,8)       | 200   1.1048   3.1059e-05   | —     —        —
(30,15)     | 9     0.0743   5.2317e-13   | 200   2.3051   0.0029
(15,30)     | 200   1.2865   2.0838e-08   | —     —        —
(300,150)   | 10    0.0886   7.9960e-13   | 200   2.2683   0.0031
(150,300)   | 200   1.4187   2.8384e-07   | —     —        —
Algorithm 1: Given matrices A,Q∈ℂn×n and positive integers s,t.
Step 3. If s<t, run Steps 4-5; if t<s, run Steps 6-7; otherwise, run Steps 8-9.
Step 4. Compute the roots α1, α2 of g1(x), and β1, β2 of g2(x), respectively.
Step 5. Let X0=β2I, run (17).
Step 6. Compute the root γ1 of g1(x) and the root γ2 of g2(x), respectively.
Step 7. Let Z0=γ1I, run (22).
Step 8. Compute the root δ1 of g1(x) and the root δ2 of g2(x), respectively.
Step 9. If λmax(A*A)<1 and δ1≥δ2, then let X0=δ1I and run (22).
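A minimal Python sketch of Algorithm 1 (numpy assumed; the roots in Steps 4, 6, and 8 are computed by bisection, the solvability conditions of Theorems 6 and 7 are assumed rather than re-checked, and the residue is e(s,t) from (40)):

```python
import numpy as np

def mat_pow(H, p):
    """p-th power of a Hermitian positive (semi)definite matrix."""
    w, V = np.linalg.eigh((H + H.T) / 2)
    return (V * w**p) @ V.T

def root(g, lo, hi, iters=200):
    """Bisection for the sign change of g on [lo, hi]."""
    sign_lo = np.sign(g(lo))
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if np.sign(g(mid)) == sign_lo else (lo, mid)
    return (lo + hi) / 2

def algorithm1(A, Q, s, t, tol=1e-12, maxit=200):
    n = A.shape[0]
    a_max, a_min = np.linalg.eigvalsh(A.T @ A)[[-1, 0]]
    q_max, q_min = np.linalg.eigvalsh(Q)[[-1, 0]]
    g1 = lambda x: x**s - a_max * x**t - q_max
    g2 = lambda x: x**s - a_min * x**t - q_min
    if s < t:
        # Steps 4-5: X0 = beta2 * I, iterate (17).
        x0 = (s / (a_min * t)) ** (1 / (t - s))
        beta2 = root(g2, x0, a_min ** (1 / (s - t)))
        Ainv = np.linalg.inv(A)
        X = beta2 * np.eye(n)
        step = lambda X: mat_pow(Ainv.T @ (mat_pow(X, s) - Q) @ Ainv, 1 / t)
    elif s > t:
        # Steps 6-7: X0 = gamma1 * I, iterate (22).
        hi = 1.0
        while g1(hi) <= 0:
            hi *= 2
        gamma1 = root(g1, 0.0, hi)
        X = gamma1 * np.eye(n)
        step = lambda X: mat_pow(Q + A.T @ mat_pow(X, t) @ A, 1 / s)
    else:
        # Steps 8-9: s = t, so g1 is linear in x^s; requires lambda_max(A*A) < 1.
        delta1 = (q_max / (1.0 - a_max)) ** (1 / s)
        X = delta1 * np.eye(n)
        step = lambda X: mat_pow(Q + A.T @ mat_pow(X, t) @ A, 1 / s)
    for _ in range(maxit):
        Xn = step(X)
        if np.linalg.norm(Xn - X, 'fro') < tol:
            return Xn
        X = Xn
    return X

def residue(A, Q, s, t, X):
    """Residue e(s,t) as in (40)."""
    r = np.linalg.norm(mat_pow(X, s) - A.T @ mat_pow(X, t) @ A - Q, 'fro')
    return r / np.linalg.norm(np.hstack([A, Q]), 'fro')

# Hypothetical small-scale analogue of Example 10 (s > t branch):
rng = np.random.default_rng(6)
n = 10
A = 0.01 * rng.standard_normal((n, n))
Q = np.eye(n)
X = algorithm1(A, Q, 2, 1)
assert residue(A, Q, 2, 1, X) < 1e-10
```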
From Table 1, we can see that computing the maximal solution of (1) with s < t takes more iterations and CPU time than computing the unique solution of (1) with s > t, and the accuracy of the latter is better than that of the former. MONO cannot be used to solve (1) with s < t, and it costs more iterations and CPU time than Algorithm 1 when solving (1) with s > t.
Now we use Example 4.1 of [33] to test our method.
Example 11.
Let A = 0.5B/‖B‖∞ with B = [bij]n×n, bij = i + j + 1, and let Q = eye(n) with n = 100. We solve (1) with s = t for two different initial solutions. The iterations, CPU times, and residues of the computation are reported in Table 2.
Table 2: Iteration, CPU time (seconds), and residue for solving (1) with s = t and different initial solutions.

             |        Algorithm 1          |            MONO
(s,t,X0)     | Ite   CPU      Res          | Ite   CPU      Res
(1,1,δ1In)   | 20    0.0223   3.4947e-13   | 20    0.0202   3.4947e-13
(1,1,δ2In)   | 31    0.0258   7.3027e-13   | 31    0.0305   7.3027e-13
(2,2,δ1In)   | 20    0.1170   8.9978e-13   | 200   2.6421   0.0037
(2,2,δ2In)   | 43    0.5475   9.6421e-13   | 200   2.7224   0.0037
(10,10,δ1In) | 29    0.1890   2.9296e-13   | 200   2.9717   0.0059
(10,10,δ2In) | 157   3.0859   7.1154e-13   | 200   2.9788   0.0059
Table 2 shows that, for Algorithm 1, the choice X0 = δ1In is better than X0 = δ2In. As s and t grow, MONO may lose its efficiency. It therefore seems inappropriate to apply the iteration method designed for Y − A^*Y^{t/s}A = Q (with Y = X^s) to solve X^s − A^*X^tA = Q, although the two equations are theoretically equivalent.
5. Conclusion
In this paper, we considered the solvability of the self-adjoint polynomial matrix equation (1). Sufficient conditions were given to guarantee the existence of the maximal or unique HPD solution of (1). An algebraic perturbation analysis, including perturbation bounds, was developed for (1) under perturbations of the given coefficient matrices. Finally, a general iterative algorithm that preserves maximality during the iteration was presented for the maximal or unique solution, and two numerical experiments were reported.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
Zhigang Jia’s research was supported in part by National Natural Science Foundation of China under Grants 11201193 and 11171289 and a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions. Minghui Wang’s research was supported in part by the National Natural Science Foundation of China (Grant no. 11001144), the Science and Technology Program of Shandong Universities of China (J11LA04), and the Research Award Fund for Outstanding Young Scientists of Shandong Province in China (BS2012DX009). Sitao Ling’s research was supported in part by National Natural Science Foundations of China under Grant 11301529, Postdoctoral Science Foundation of China under Grant 2013M540472, and Jiangsu Planned Projects for Postdoctoral Research Funds 1302036C. The authors would like to thank three anonymous referees for giving valuable comments and suggestions.
References
[1] H. Abou-Kandil, G. Freiling, V. Ionescu, and G. Jank, Matrix Riccati Equations in Control and Systems Theory, Birkhäuser, Basel, Switzerland, 2003.
[2] I. Gohberg, P. Lancaster, and L. Rodman, Matrix Polynomials, Academic Press, New York, NY, USA, 1982.
[3] P. Benner, A. J. Laub, and V. Mehrmann, "Benchmarks for the numerical solution of algebraic Riccati equations," IEEE Control Systems Magazine, vol. 17, no. 5, pp. 18–28, 1997.
[4] S. Bittanti, A. Laub, and J. C. Willems, Eds., The Riccati Equation, Communications and Control Engineering Series, Springer, Berlin, Germany, 1991.
[5] P. Lancaster and L. Rodman, Algebraic Riccati Equations, Oxford Science Publications, Oxford University Press, New York, NY, USA, 1995.
[6] A. J. Laub, "Invariant subspace methods for the numerical solution of Riccati equations," in The Riccati Equation, S. Bittanti, A. J. Laub, and J. C. Willems, Eds., pp. 163–196, Springer, Berlin, Germany, 1991.
[7] A. J. Laub, "A Schur method for solving algebraic Riccati equations," IEEE Transactions on Automatic Control, vol. 24, no. 6, pp. 913–921, 1979.
[8] M. Kimura, "Doubling algorithm for continuous-time algebraic Riccati equation," International Journal of Systems Science, vol. 20, no. 2, pp. 191–202, 1989.
[9] E. K.-W. Chu, H.-Y. Fan, W.-W. Lin, and C.-S. Wang, "Structure-preserving algorithms for periodic discrete-time algebraic Riccati equations," International Journal of Control, vol. 77, no. 8, pp. 767–788, 2004.
[10] E. K.-W. Chu, H.-Y. Fan, and W.-W. Lin, "A structure-preserving doubling algorithm for continuous-time algebraic Riccati equations," Linear Algebra and Its Applications, vol. 396, pp. 55–80, 2005.
[11] N. J. Higham, "Perturbation theory and backward error for AX−XB=C," BIT, vol. 33, no. 1, pp. 124–136, 1993.
[12] M. Konstantinov and P. Petkov, "Note on perturbation theory for algebraic Riccati equations," SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 1, pp. 327–354, 2000.
[13] J.-G. Sun, "Residual bounds of approximate solutions of the algebraic Riccati equation," Numerische Mathematik, vol. 76, no. 2, pp. 249–263, 1997.
[14] J.-G. Sun, "Backward error for the discrete-time algebraic Riccati equation," Linear Algebra and Its Applications, vol. 259, pp. 183–208, 1997.
[15] J.-G. Sun, "Backward perturbation analysis of the periodic discrete-time algebraic Riccati equation," SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 1, pp. 1–19, 2004.
[16] V. Mehrmann, "A step towards a unified treatment of continuous and discrete time control problems," Linear Algebra and Its Applications, vol. 241–243, pp. 749–779, 1996.
[17] H.-G. Xu, "Transformations between discrete-time and continuous-time algebraic Riccati equations," Linear Algebra and Its Applications, vol. 425, no. 1, pp. 77–101, 2007.
[18] A. C. M. Ran and M. C. B. Reurings, "On the nonlinear matrix equation X+A^*ℱ(X)A=Q: solutions and perturbation theory," Linear Algebra and Its Applications, vol. 346, pp. 15–26, 2002.
[19] A. C. M. Ran, M. C. B. Reurings, and L. Rodman, "A perturbation analysis for nonlinear selfadjoint operator equations," SIAM Journal on Matrix Analysis and Applications, vol. 28, no. 1, pp. 89–104, 2006.
[20] H. Lee and Y. Lim, "Invariant metrics, contractions and nonlinear matrix equations," Nonlinearity, vol. 21, no. 4, pp. 857–878, 2008.
[21] J. Cai and G.-L. Chen, "On the Hermitian positive definite solutions of nonlinear matrix equation X^s+A^*X^{−t}A=Q," Applied Mathematics and Computation, vol. 217, no. 1, pp. 117–123, 2010.
[22] X.-F. Duan, Q.-W. Wang, and A.-P. Liao, "On the matrix equation arising in an interpolation problem," Linear and Multilinear Algebra, vol. 61, no. 9, pp. 1192–1205, 2013.
[23] Z.-G. Jia and M.-S. Wei, "Solvability and sensitivity analysis of polynomial matrix equation X^s+A^TX^tA=Q," Applied Mathematics and Computation, vol. 209, no. 2, pp. 230–237, 2009.
[24] M.-H. Wang, M.-S. Wei, and S. Hu, "The extremal solution of the matrix equation X^s+A^*X^{−q}A=I," Applied Mathematics and Computation, vol. 220, pp. 193–199, 2013.
[25] B. Zhou, G.-B. Cai, and J. Lam, "Positive definite solutions of the nonlinear matrix equation X+A^H X̄^{−1}A=I," Applied Mathematics and Computation, vol. 219, no. 14, pp. 7377–7391, 2013.
[26] X.-Z. Zhan, Matrix Inequalities, vol. 1790 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 2002.
[27] T. Furuta, "Operator inequalities associated with Hölder-McCarthy and Kantorovich inequalities," Journal of Inequalities and Applications, 1998.
[28] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991.
[29] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1990.
[30] X.-G. Liu and H. Gao, "On the positive definite solutions of the matrix equations X^s±A^TX^{−t}A=I_n," Linear Algebra and Its Applications, vol. 368, pp. 83–97, 2003.
[31] S.-F. Xu, "Perturbation analysis of the maximal solution of the matrix equation A^*X^{−1}A=P," Linear Algebra and Its Applications, vol. 336, pp. 61–70, 2001.
[32] P. Lancaster and M. Tismenetsky, The Theory of Matrices, Computer Science and Applied Mathematics, Academic Press, Orlando, Fla, USA, 2nd edition, 1985.
[33] S. M. El-Sayed and A. C. M. Ran, "On an iteration method for solving a class of nonlinear matrix equations," SIAM Journal on Matrix Analysis and Applications, vol. 23, no. 3, pp. 632–645, 2001.