This paper studies the sensitivity analysis of a nonlinear matrix equation connected to interpolation problems. We derive backward error estimates and a computable residual bound for an approximate solution to the equation, and we establish a perturbation bound for the unique solution that is independent of the exact solution of the equation. The theoretical results are illustrated by numerical examples.
1. Introduction
In this paper we consider the Hermitian positive definite solution of the nonlinear matrix equation:
(1)X-∑i=1mAi*X-1Ai=Q,
where A1,A2,…,Am are n×n complex matrices, m is a positive integer, and Q is a positive definite matrix. Here, Ai* denotes the conjugate transpose of the matrix Ai.
This type of nonlinear matrix equation arises in many practical problems. The equation X-A*X-1A=Q, the special case of (1) with m=1, comes from ladder networks, dynamic programming, control theory, stochastic filtering, statistics, and so forth [1–6]. When m>1, (1) arises from the nonlinear matrix equation:
(2)X=Q+A*(X^-C)-1A,
where Q is an n×n positive definite matrix, C is an mn×mn positive semidefinite matrix, A is an arbitrary mn×n matrix, and X^ is the m×m block diagonal matrix with, on each diagonal entry, the n×n matrix X. In [7], (2) is recognized as playing an important role in modelling certain optimal interpolation problems. Let C=0 and A=(A1T,A2T,…,AmT)T, where Ai, i=1,2,…,m, are n×n matrices. Then X=Q+A*(X^-C)-1A can be rewritten as X-∑i=1mAi*X-1Ai=Q. In practice we often do not know Ai and Q exactly but only have approximations A~i and Q~ available. Solving the perturbed equation X~-∑i=1mA~i*X~-1A~i=Q~ exactly then gives a different solution X~, and we would like to know how the errors in A~i and Q~ influence the error in X~. Motivated by this, we consider in this paper the sensitivity analysis of (1).
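The reduction from (2) to (1) described above is easy to verify numerically. The following is a small numpy check (our own illustration, not from the paper): with C = 0 and A the stacked matrix (A1T,…,AmT)T, the term A*(X^)-1A equals ∑i=1mAi*X-1Ai.

```python
import numpy as np

# Check that (2) with C = 0 and stacked A reduces to (1):
# A*(X_hat)^{-1} A  equals  sum_i Ai* X^{-1} Ai.
rng = np.random.default_rng(1)
n, m = 3, 2
X = np.diag([2.0, 3.0, 4.0])                 # any positive definite X
As = [rng.standard_normal((n, n)) for _ in range(m)]

A = np.vstack(As)                            # A = (A1^T, ..., Am^T)^T, an mn x n matrix
X_hat = np.kron(np.eye(m), X)                # block diagonal, with X on each diagonal block
lhs = A.conj().T @ np.linalg.inv(X_hat) @ A  # A*(X_hat - C)^{-1} A with C = 0
rhs = sum(Ai.conj().T @ np.linalg.inv(X) @ Ai for Ai in As)
assert np.allclose(lhs, rhs)
```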
For the equation X-A*X-1A=Q and the related equations X-∑i=1mAi*XδiAi=Q (0<|δi|<1) and X-∑i=1mAi*XrAi=Q (-1≤r<0 or 0<r<1), there have been many contributions in the literature to the solvability and numerical solution [8–14]. However, these papers did not examine the sensitivity of the above equations. Hasanov et al. [12, 15] obtained two perturbation estimates of the solutions to the equations X±A*X-1A=Q. Li and Zhang [13] derived two perturbation bounds of the unique solution to the equation X-A*X-1A=Q. They also obtained an explicit expression of the condition number for the unique positive definite solution. Perturbation analyses of the related equations X+A*X-1A=P, X-A*X-pA=Q, and Xs±ATX-tA=In were given in [16–19]. Yin and Fang [20] obtained an explicit expression of the condition number for the unique positive definite solution of (1). They also gave two perturbation bounds for the unique positive definite solution. However, to the best of our knowledge, there have been no backward error estimates or computable residual bounds for (1) in the literature. In this paper, we obtain backward error estimates and a residual bound of the approximate solution to (1), and we also derive a new relative perturbation bound for (1). This bound does not need any knowledge of the exact solution of (1), which is important in many practical calculations.
The paper is organized as follows. Section 2 collects the preliminary results needed in this work. In Section 3, the backward error estimates of an approximate solution for the unique solution to (1) are discussed. In Section 4, we derive a residual bound of an approximate solution for the unique solution to (1). In Section 5, we give a new perturbation bound for the unique solution to (1), which is independent of the exact solution of (1). Finally, several numerical examples are presented in Section 6.
We denote by 𝒞n×n the set of n×n complex matrices, by ℋn×n the set of n×n Hermitian matrices, by I the identity matrix, by i the imaginary unit, by ∥·∥ the spectral norm, by ∥·∥F the Frobenius norm, and by λmax(M) and λmin(M) the maximal and minimal eigenvalues of M, respectively. For A=(a1,…,an)=(aij)∈𝒞n×n and a matrix B, A⊗B=(aijB) is a Kronecker product, and vecA is a vector defined by vecA=(a1T,…,anT)T. For X,Y∈ℋn×n, we write X≥Y (resp., X>Y) if X-Y is Hermitian positive semidefinite (resp., definite).
2. Preliminaries
Lemma 1 (see [8, Theorem 2.1]).
The matrix equation
(3)X-∑i=1mAi*XrAi=Q,-1≤r<0
always has a unique positive definite solution.
Lemma 2 (see [8, Theorem 2.3]).
Let X be the unique Hermitian positive definite solution of (3); then X∈[βI,αI], where the pair (β,α) is a solution of the system
(4)β=λmin(Q)+∑i=1mλmin(Ai*Ai)αr,α=λmax(Q)+∑i=1mλmax(Ai*Ai)βr.
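For (1), which is the case r = -1 of (3), the pair (β, α) of Lemma 2 can be computed by a fixed-point sweep on the system (4). The sketch below is our own illustration; the function name lambda_bounds and the starting pair are our choices, not from [8].

```python
import numpy as np

def lambda_bounds(As, Q, iters=200, tol=1e-12):
    """Solve the scalar system (4) with r = -1 by fixed-point iteration:
         beta  = lmin(Q) + sum_i lmin(Ai* Ai) / alpha,
         alpha = lmax(Q) + sum_i lmax(Ai* Ai) / beta,
       so that the unique solution X of (1) satisfies beta*I <= X <= alpha*I."""
    lmin_Q = np.linalg.eigvalsh(Q)[0]
    lmax_Q = np.linalg.eigvalsh(Q)[-1]
    smin = sum(np.linalg.eigvalsh(A.conj().T @ A)[0] for A in As)
    smax = sum(np.linalg.eigvalsh(A.conj().T @ A)[-1] for A in As)
    beta, alpha = lmin_Q, lmax_Q + smax / lmin_Q   # crude starting pair
    for _ in range(iters):
        beta_new = lmin_Q + smin / alpha
        alpha_new = lmax_Q + smax / beta_new
        if abs(beta_new - beta) + abs(alpha_new - alpha) < tol:
            beta, alpha = beta_new, alpha_new
            break
        beta, alpha = beta_new, alpha_new
    return beta, alpha
```

For instance, with A1 = 0.1·I and Q = I the sweep converges to β = α = (1+√1.04)/2, which is exactly the scalar solution of x = 1 + 0.01/x.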
3. Backward Error
In this section, applying the technique developed in [18], we obtain some estimates for the backward error of the approximate solution of (1).
Let X~∈ℋn×n be an approximation to the unique solution X to (1), and let ΔAi∈𝒞n×n(i=1,2,…,m) and ΔQ∈ℋn×n be the corresponding perturbations of the coefficient matrices Ai(i=1,2,…,m) and Q in (1). A backward error of the approximate solution X~ can be defined by
(5) η(X~) = min{∥(ΔA1/α1, ΔA2/α2, …, ΔAm/αm, ΔQ/ρ)∥F : X~-∑i=1m(Ai+ΔAi)*X~-1(Ai+ΔAi)=Q+ΔQ},
where α1,α2,…,αm and ρ are positive parameters. Taking αi=∥Ai∥F,i=1,2,…,m, and ρ=∥Q∥F in (5) gives the relative backward error ηrel(X~), and taking αi=1,i=1,2,…,m, and ρ=1 in (5) gives the absolute backward error ηabs(X~).
Let
(6)R=Q-X~+∑i=1mAi*X~-1Ai.
Note that
(7)Q=X~-∑i=1m(Ai+ΔAi)*X~-1(Ai+ΔAi)-ΔQ.
It follows from (6) and (7) that
(8)-∑i=1m(ΔAi*X~-1Ai+Ai*X~-1ΔAi)-ΔQ=R+∑i=1mΔAi*X~-1ΔAi.
Let
(9) I⊗(X~-1Ai)* = Ui1+iΩi1, ((X~-1Ai)T⊗I)Π = Ui2+iΩi2,
vec ΔAi = xi+iyi, vec ΔQ = q1+iq2, vec R = r1+ir2,
vec(ΔAi*X~-1ΔAi) = ai+ibi, i=1,2,…,m,
g = (x1T/α1, y1T/α1, …, xmT/αm, ymT/αm, q1T/ρ, q2T/ρ)T,
Ui = [Ui1+Ui2, Ωi2-Ωi1; Ωi1+Ωi2, Ui1-Ui2] (a 2n2×2n2 block matrix),
T = [-α1U1, -α2U2, …, -αmUm, -ρI2n2],
where Π is the vec-permutation. Then (8) can be written as
(10)Tg=(r1r2)+∑i=1m(aibi).
It follows from ρ>0 that the 2n2×2(m+1)n2 matrix T has full row rank. Hence, TT†=I2n2, which implies that every solution to the equation
(11)g=T†(r1r2)+T†(∑i=1m(aibi))
must be a solution to (10). Consequently, for any solution g to (11) we have
(12)η(X~)≤∥g∥.
Then we can state the estimates of the backward error as follows.
Theorem 3.
Let A1,A2,…,Am,Q,X~∈𝒞n×n be given matrices and let η(X~) be the backward error defined by (5). If
(13) r < s/(4t(∑i=1mαi2)),
then one has that
(14)U(r)≤η(X~)≤B(r),
where
(15) r = ∥T†(r1r2)∥, s = ∥T†∥-1, t = ∥X~-1∥,
(16) B(r) = 2rs/(s+√(s2-4rst(∑i=1mαi2))), U(r) = 2r√(s2-4rst(∑i=1mαi2))/(s+√(s2-4rst(∑i=1mαi2))).
Proof.
Let
(17)L(g)=T†(r1r2)+T†(∑i=1m(aibi)).
Obviously, L:𝒞2(m+1)n2×1→𝒞2(m+1)n2×1 is continuous. Condition (13) ensures that the quadratic equation
(18) x = r + (t/s)(∑i=1mαi2)x2
in x has two positive real roots. The smaller one is
(19) B(r) = 2rs/(s+√(s2-4rst(∑i=1mαi2))).
Define Ω={g∈𝒞2(m+1)n2×1:∥g∥≤B(r)}. Then for any g∈Ω, we have
(20) ∥L(g)∥ ≤ r + (1/s)∑i=1m∥(aibi)∥ = r + (1/s)∑i=1m∥ΔAi*X~-1ΔAi∥F ≤ r + (t/s)∑i=1m∥ΔAi∥F2 ≤ r + (t/s)(∑i=1mαi2)∥(ΔA1/α1, ΔA2/α2, …, ΔAm/αm)∥F2 ≤ r + (t/s)(∑i=1mαi2)∥g∥2 ≤ r + (t/s)(∑i=1mαi2)B2(r) = B(r).
The last equality is due to the fact that B(r) is a solution to the quadratic equation (18). Thus we have proved that L(Ω)⊂Ω. By the Schauder fixed-point theorem, there exists a g*∈Ω such that L(g*)=g*, which means that g* is a solution to (11), and hence it follows from (12) that
(21)η(X~)≤∥g*∥≤B(r).
Next we derive a lower bound for η(X~). Suppose that (ΔA1min/α1,…,ΔAmmin/αm,ΔQmin/ρ) satisfies
(22) η(X~) = ∥(ΔA1min/α1, …, ΔAmmin/αm, ΔQmin/ρ)∥F.
Then we have
(23)Tgmin=(r1r2)+∑i=1m(ai⋆bi⋆),
where
(24) vec(ΔAimin*X~-1ΔAimin) = ai⋆+ibi⋆, vec(ΔAimin) = xi⋆+iyi⋆, vec(ΔQmin) = q1⋆+iq2⋆, gmin = (x1⋆T/α1, y1⋆T/α1, …, xm⋆T/αm, ym⋆T/αm, q1⋆T/ρ, q2⋆T/ρ)T.
Let a singular value decomposition of T be T=W(E,0)Z*, where W and Z are unitary matrices and E=diag(e1,e2,…,e2n2) with e1≥⋯≥e2n2>0. Substituting this decomposition into (23), and letting
(25)Z*gmin=(v⋆),v∈𝒞2n2×1,
we get
(26)v=E-1W*(r1r2)+E-1W*∑i=1m(ai⋆bi⋆).
It follows from (22) that
(27) η(X~) = ∥gmin∥ = ∥(v⋆)∥ ≥ ∥v∥ ≥ ∥E-1W*(r1r2)∥ - ∥E-1W*∑i=1m(ai⋆bi⋆)∥ ≥ ∥T†(r1r2)∥ - ∥T†∥·∑i=1m∥(ai⋆bi⋆)∥ ≥ r - (1/s)∑i=1m∥ΔAimin*X~-1ΔAimin∥F ≥ r - (t/s)∑i=1m∥ΔAimin∥F2 ≥ r - (t/s)(∑i=1mαi2)∥(ΔA1min/α1, …, ΔAmmin/αm)∥F2 ≥ r - (t/s)(∑i=1mαi2)B2(r).
Here we have used the fact that
(28) ∥(ΔA1min/α1, …, ΔAmmin/αm)∥F ≤ ∥(ΔA1min/α1, …, ΔAmmin/αm, ΔQmin/ρ)∥F = η(X~) ≤ B(r).
Let
(29) U(r) = r - (t/s)(∑i=1mαi2)B2(r).
Since B(r) is a solution to (18), we have that
(30) B(r) = r + (t/s)(∑i=1mαi2)B2(r),
which implies that
(31) U(r) = r - (t/s)(∑i=1mαi2)B2(r) = 2r - B(r) = 2r√(s2-4rst(∑i=1mαi2))/(s+√(s2-4rst(∑i=1mαi2))) > 0.
Then η(X~)≥U(r).
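Once the scalars r, s, and t of (15) are available, the two bounds of Theorem 3 are cheap to evaluate. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def backward_error_bounds(r, s, t, alphas):
    """Evaluate the bounds of Theorem 3:
         B(r) = 2 r s / (s + sqrt(s^2 - 4 r s t c)),
         U(r) = 2 r sqrt(s^2 - 4 r s t c) / (s + sqrt(s^2 - 4 r s t c)),
       with c = sum_i alpha_i^2, valid when condition (13) holds: r < s/(4 t c)."""
    c = sum(a ** 2 for a in alphas)
    if not r < s / (4 * t * c):
        raise ValueError("condition (13) violated: r >= s/(4 t sum alpha_i^2)")
    d = np.sqrt(s ** 2 - 4 * r * s * t * c)
    B = 2 * r * s / (s + d)
    U = 2 * r * d / (s + d)
    return U, B
```

Since U(r) = 2r - B(r), the two bounds always bracket r, i.e., U(r) ≤ r ≤ B(r), which matches the behaviour seen in Table 1.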
4. Residual Bound
A residual bound reveals the stability of a numerical method. In this section, in order to derive the residual bound of an approximate solution for the unique solution to (1), we first introduce the following lemma.
Lemma 4.
For every positive definite matrix X∈ℋn×n, if X+ΔX≥(1/ν)I>0, then
(32)∥∑i=1mAi*((X+ΔX)-1-X-1)Ai∥≤(∥ΔX∥+ν∥ΔX∥2)∑i=1m∥X-1Ai∥2.
Proof.
According to
(33)(X+ΔX)-1-X-1=-X-1ΔX(X+ΔX)-1=-X-1ΔXX-1+X-1ΔXX-1ΔX(X+ΔX)-1,
it follows that
(34)∥∑i=1mAi*((X+ΔX)-1-X-1)Ai∥≤∑i=1m(∥Ai*X-1ΔXX-1Ai∥+∥Ai*X-1ΔXX-1ΔX(X+ΔX)-1Ai∥)≤(∥ΔX∥+ν∥ΔX∥2)∑i=1m∥X-1Ai∥2.
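Lemma 4 can be sanity-checked numerically. The sketch below is our own illustration; it uses the tightest admissible ν = 1/λmin(X+ΔX) for an arbitrary symmetric perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
X = 2.0 * np.eye(n)                           # a positive definite X
S = rng.standard_normal((n, n))
dX = 0.05 * (S + S.T)                         # a small Hermitian perturbation
As = [rng.standard_normal((n, n)) for _ in range(m)]

Xp = X + dX
nu = 1.0 / np.linalg.eigvalsh(Xp)[0]          # tightest nu with X + dX >= (1/nu) I
lhs = np.linalg.norm(
    sum(A.conj().T @ (np.linalg.inv(Xp) - np.linalg.inv(X)) @ A for A in As), 2)
rhs = (np.linalg.norm(dX, 2) + nu * np.linalg.norm(dX, 2) ** 2) \
      * sum(np.linalg.norm(np.linalg.inv(X) @ A, 2) ** 2 for A in As)
assert lhs <= rhs                             # inequality (32)
```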
Theorem 5.
Let X~>0 be an approximation to the solution X of (1). If the residual R(X~)≡Q+∑i=1mAi*X~-1Ai-X~ satisfies
(35) ∥R(X~)∥ < ((1-Σ)2/(1+Σ+2√Σ))λmin(X~), where Σ ≡ ∑i=1m∥X~-1Ai∥2 < 1,
then
(36)∥X~-X∥≤θ∥R(X~)∥,
where
(37) θ = 2λmin(X~)/((1-Σ)λmin(X~)+∥R(X~)∥+√(((1-Σ)λmin(X~)+∥R(X~)∥)2-4λmin(X~)∥R(X~)∥)).
Proof.
Let
(38)Ψ={ΔX∈ℋn×n:∥ΔX∥≤θ∥R(X~)∥}.
Obviously, Ψ is a nonempty bounded convex closed set. Let
(39)g(ΔX)=∑i=1mAi*[(X~+ΔX)-1-X~-1]Ai+R(X~).
Evidently g:Ψ↦ℋn×n is continuous.
Note that condition (35) ensures that the quadratic equation
(40)x2-(λmin(X~)(1-Σ)+∥R(X~)∥)x+λmin(X~)∥R(X~)∥=0
has two positive real roots, and the smaller one is given by
(41) μ* = 2λmin(X~)∥R(X~)∥/((1-Σ)λmin(X~)+∥R(X~)∥+√(((1-Σ)λmin(X~)+∥R(X~)∥)2-4λmin(X~)∥R(X~)∥)).
Next, we will prove that g(Ψ)⊆Ψ.
For every ΔX∈Ψ, we have
(42)ΔX≥-θ∥R(X~)∥I.
Hence
(43)X~+ΔX≥X~-θ∥R(X~)∥I≥(λmin(X~)-θ∥R(X~)∥)I.
By (37), one sees that
(44) θ∥R(X~)∥ ≤ 2λmin(X~)∥R(X~)∥/((1-Σ)λmin(X~)+∥R(X~)∥) = λmin(X~)(1 + (∥R(X~)∥-(1-Σ)λmin(X~))/((1-Σ)λmin(X~)+∥R(X~)∥)).
According to (35), we obtain
(45) ∥R(X~)∥-(1-Σ)λmin(X~) < ((1-Σ)2/(1+Σ+2√Σ)-(1-Σ))λmin(X~) = -(2√Σ(1-Σ)/(1+√Σ))λmin(X~) ≤ 0,
which implies that
(46) θ∥R(X~)∥ < λmin(X~), (λmin(X~)-θ∥R(X~)∥)I > 0.
According to Lemma 4, we obtain
(47) ∥g(ΔX)∥ ≤ (∥ΔX∥ + ∥ΔX∥2/(λmin(X~)-θ∥R(X~)∥))∑i=1m∥X~-1Ai∥2 + ∥R(X~)∥ ≤ (θ∥R(X~)∥ + (θ∥R(X~)∥)2/(λmin(X~)-θ∥R(X~)∥))Σ + ∥R(X~)∥ = θ∥R(X~)∥,
for ΔX∈Ψ. That is, g(Ψ)⊆Ψ. By the Brouwer fixed-point theorem, there exists a ΔX∈Ψ such that g(ΔX)=ΔX. Hence X~+ΔX is a solution of (1). Moreover, by Lemma 1, the solution X of (1) is unique. Then
(48)∥X~-X∥=∥ΔX∥≤θ∥R(X~)∥.
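The residual bound of Theorem 5 is directly computable from X~, the Ai, and Q. A minimal numpy sketch (the function name and interface are ours):

```python
import numpy as np

def residual_bound(X_tilde, As, Q):
    """Evaluate the residual bound theta*||R(X~)|| of Theorem 5 (spectral norm).
       Returns (||R(X~)||, bound); raises if condition (35) fails."""
    Xinv = np.linalg.inv(X_tilde)
    R = Q + sum(A.conj().T @ Xinv @ A for A in As) - X_tilde
    normR = np.linalg.norm(R, 2)
    Sigma = sum(np.linalg.norm(Xinv @ A, 2) ** 2 for A in As)
    lam = np.linalg.eigvalsh(X_tilde)[0]
    ok = Sigma < 1 and normR < (1 - Sigma) ** 2 / (1 + Sigma + 2 * np.sqrt(Sigma)) * lam
    if not ok:
        raise ValueError("condition (35) of Theorem 5 violated")
    b = (1 - Sigma) * lam + normR               # coefficient of the quadratic (40)
    theta = 2 * lam / (b + np.sqrt(b ** 2 - 4 * lam * normR))
    return normR, theta * normR
```

For example, with A1 = 0.1·I, Q = I, and X~ = 1.01·I, the returned bound indeed dominates ∥X~-X∥, where X = ((1+√1.04)/2)·I is the exact solution of this scalar case.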
5. Perturbation Bounds
In this section we develop a relative perturbation bound for the unique solution of (1), which does not need any knowledge of the actual solution X of (1) and is easy to calculate.
Here we consider the perturbed equation:
(49)X~-∑i=1mAi~*X~-1Ai~=Q~,
where Ai~ and Q~ are small perturbations of Ai and Q in (1), respectively. It follows from Lemma 1 that the solutions of (1) and (49) exist. Then we assume that X and X~ are the solutions of (1) and (49), respectively. Let ΔX=X~-X, ΔQ=Q~-Q, and ΔAi=Ai~-Ai.
Theorem 6.
If
(50)∑i=1m∥Ai∥2<β2,
then
(51) ∥ΔX∥/∥X∥ ≤ (∑i=1m(2∥Ai∥+∥ΔAi∥)∥ΔAi∥+∥ΔQ∥)/(β2-∑i=1m∥Ai∥2) ≜ ξ1.
Proof.
By Lemma 1, we know that X and X~ are the unique solutions to (1) and (49), respectively. Subtracting (1) from (49) we have
(52)ΔX+∑i=1mAi*X~-1ΔXX-1Ai=∑i=1m(Ai*X~-1ΔAi+ΔAi*X~-1Ai+ΔAi*X~-1ΔAi)+ΔQ.
Then
(53)∥ΔX+∑i=1mAi*X~-1ΔXX-1Ai∥≥∥ΔX∥-∥∑i=1mAi*X~-1ΔXX-1Ai∥≥(1-∑i=1m∥Ai∥2∥X~-1∥∥X-1∥)∥ΔX∥.
By Lemma 2, it follows that ∥X-1∥≤1/β and ∥X~-1∥≤1/β. Then
(54) ∥ΔX+∑i=1mAi*X~-1ΔXX-1Ai∥ ≥ (1/β2)(β2-∑i=1m∥Ai∥2)∥ΔX∥.
Condition (50) ensures that β2-∑i=1m∥Ai∥2>0.
Combining (52) and (54), we obtain
(55) ∥ΔX∥/∥X∥ ≤ (∑i=1m(2∥Ai∥+∥ΔAi∥)∥ΔAi∥+∥ΔQ∥)/(β2-∑i=1m∥Ai∥2).
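The bound ξ1 of Theorem 6 only needs the norms of the data and the scalar β from Lemma 2, so it is easy to evaluate. A minimal numpy sketch (the function name is ours; β is passed in as computed from the system (4)):

```python
import numpy as np

def xi1_bound(As, dAs, dQ, beta):
    """Evaluate the relative perturbation bound xi_1 of Theorem 6 (spectral norm).
       beta is the lower bound from Lemma 2; requires sum ||Ai||^2 < beta^2."""
    s2 = sum(np.linalg.norm(A, 2) ** 2 for A in As)
    if not s2 < beta ** 2:
        raise ValueError("condition (50) violated: sum ||Ai||^2 >= beta^2")
    num = sum((2 * np.linalg.norm(A, 2) + np.linalg.norm(dA, 2)) * np.linalg.norm(dA, 2)
              for A, dA in zip(As, dAs)) + np.linalg.norm(dQ, 2)
    return num / (beta ** 2 - s2)
```

Note that no knowledge of the solution X enters the computation, which is exactly the point of Theorem 6.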
Remark 7.
Yin and Fang [20] obtained two perturbation bounds, which were dependent on the exact solution of (1), whereas, in this paper, the relative perturbation bound in Theorem 6 does not need any knowledge of the actual solution X of (1), which is important in many practical calculations.
Remark 8.
With
(56) ξ1 = (∑i=1m(2∥Ai∥+∥ΔAi∥)∥ΔAi∥+∥ΔQ∥)/(β2-∑i=1m∥Ai∥2),
we get ξ1→0 as ∥ΔQ∥→0 and ∥ΔAi∥→0(i=1,2,…,m). Therefore (1) is well-posed.
6. Numerical Examples
To illustrate the theoretical results of the previous sections, several simple examples are given in this section; the computations were carried out using MATLAB 7.1. For the stopping criterion we take ε(Xk) = ∥Xk-∑i=1mAi*Xk-1Ai-Q∥ < 1.0e-10.
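The iteration and stopping criterion used in the examples can be sketched as follows (a numpy rendering of the MATLAB computation; the function name is ours):

```python
import numpy as np

def solve_fixed_point(As, Q, tol=1e-10, max_iter=1000):
    """Basic fixed-point iteration X_{k+1} = Q + sum_i Ai* X_k^{-1} Ai for (1),
       stopped when ||X_k - sum_i Ai* X_k^{-1} Ai - Q|| < tol (spectral norm)."""
    X = Q.copy()
    for k in range(max_iter):
        Xinv = np.linalg.inv(X)
        X = Q + sum(A.conj().T @ Xinv @ A for A in As)
        res = np.linalg.norm(
            X - sum(A.conj().T @ np.linalg.inv(X) @ A for A in As) - Q, 2)
        if res < tol:
            return X, k + 1
    return X, max_iter
```

With A1 = 0.1·I and Q = I the iteration converges in a handful of steps to ((1+√1.04)/2)·I, the exact solution of the scalar equation x = 1 + 0.01/x.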
Example 1.
In this example, we consider the backward error of an approximate solution for the unique solution X to (1) in Theorem 3. We consider
(57)X-A1*X-1A1-A2*X-1A2=Q,
with the coefficient matrices
(58) A1 = (1/5)[1 0 1; -1 1 1; -1 -1 1], A2 = (23/45)A1, Q = X - A1*X-1A1 - A2*X-1A2,
where X=diag(1,2,3).
Let
(59) X~ = X + [0.5 -0.1 0.2; -0.1 0.3 0.6; 0.2 0.6 -0.4]×10-j
be an approximate solution to (1). Take α1=∥A1∥F, α2=∥A2∥F and ρ=∥Q∥F in Theorem 3. Some results on lower and upper bounds for the backward error η(X~) are displayed in Table 1.
The results listed in Table 1 show that the backward error of X~ decreases as the error ∥X~-X∥F decreases.
Table 1: Backward error for Example 1 with different values of j.

j    ∥X~-X∥F       r             U(r)          B(r)
1    0.2298        0.0633        0.0630        0.0637
3    2.3×10-3      6.3391×10-4   6.3387×10-4   6.3394×10-4
5    2.2978×10-5   6.3391×10-6   6.3391×10-6   6.3391×10-6
7    2.2978×10-7   6.3391×10-8   6.3391×10-8   6.3391×10-8
9    2.2978×10-9   6.3391×10-10  6.3391×10-10  6.3391×10-10
Example 2.
This example considers the residual bound of an approximate solution for the unique solution X to (1) in Theorem 5. We consider
(60)X-A1*X-1A1-A2*X-1A2=Q,
with
(61) A1 = ((1/3)+2×10-2)A/∥A∥, A2 = ((1/6)+3×10-2)A/∥A∥, Q = A = [2 1 0 0 0; 1 2 1 0 0; 0 1 2 1 0; 0 0 1 2 1; 0 0 0 1 2].
Choose X~0=A, and let the approximate solutions X~k of X be generated by the iterative method Xk=Q+∑i=1mAi*Xk-1-1Ai, X0>0, k=1,2,…, where k is the iteration number.
The residual R(X~k) ≡ Q+A1*X~k-1A1+A2*X~k-1A2-X~k satisfies the conditions in Theorem 5. By Theorem 5, we can compute the residual bounds for X~k:
(62) ∥X~k-X∥ ≤ θ∥R(X~k)∥,
where
(63) θ = 2λmin(X~k)/((1-Σ)λmin(X~k)+∥R(X~k)∥+√(((1-Σ)λmin(X~k)+∥R(X~k)∥)2-4λmin(X~k)∥R(X~k)∥)).
Some results are listed in Table 2.
The results listed in Table 2 show that the residual bound given by Theorem 5 is fairly sharp.
Table 2: Residual bounds for Example 2 with different values of k.

k            1            2            3            4
∥X~k-X∥      5.0268×10-4  5.7662×10-6  6.6162×10-8  7.5024×10-10
θ∥R(X~k)∥    5.1435×10-4  5.9000×10-6  6.7689×10-8  7.7656×10-10
Example 3.
In this example, we consider the corresponding perturbation bound for the solution X in Theorem 6.
We consider the matrix equation
(64)X-A1*X-1A1-A2*X-1A2=I,
with
(65) A1 = ((1/3)+2×10-2)A/∥A∥, A2 = ((1/6)+3×10-2)A/∥A∥,
(66) A = [2 1 0 0 0; 1 2 1 0 0; 0 1 2 1 0; 0 0 1 2 1; 0 0 0 1 2].
Suppose that the coefficient matrices A1 and A2 are perturbed to Ai~=Ai+ΔAi,i=1,2, where
(67) ΔA1 = (10-j/∥CT+C∥)(CT+C), ΔA2 = (3×10-j-1/∥CT+C∥)(CT+C),
and C is a random matrix generated by MATLAB function randn.
By Theorem 6, we can compute the relative perturbation bound ξ1. The results are averaged as the geometric mean over 20 randomly perturbed runs. Some results are listed in Table 3.
The results listed in Table 3 show that the perturbation bound ξ1 given by Theorem 6 is fairly sharp.
Table 3: Perturbation bounds for Example 3 with different values of j.

j             4            5            6            7
∥X~-X∥/∥X∥    2.7482×10-5  2.4983×10-6  2.5705×10-7  2.9406×10-8
ξ1            1.0845×10-4  9.2695×10-6  9.6710×10-7  1.0595×10-7
7. Concluding Remarks
In this paper, we consider the sensitivity analysis of the nonlinear matrix equation X-∑i=1mAi*X-1Ai=Q. Compared with existing literature, the contributions of this paper are as follows.
A backward error estimate and a computable residual bound of an approximate solution for the unique solution to (1) are derived; to the best of our knowledge, these have not appeared in the literature.
Some results in this paper can cover the work of Li and Zhang [13] for the matrix equation X-A*X-1A=Q as a special case.
This paper develops a new relative perturbation bound for the solution to (1), which does not need any knowledge of the actual solution X of (1) and can be computed easily.
Acknowledgments
The authors would like to express their gratitude to the referees for their fruitful comments. The work was supported in part by the National Natural Science Foundation of China (11201263), the Natural Science Foundation of Shandong Province (ZR2012AQ004), and the Independent Innovation Foundation of Shandong University (IIFSDU), China. The authors declare that there is no conflict of interest regarding the publication of this paper.
References
1. W. N. Anderson, G. B. Kleindorfer, M. B. Kleindorfer, and M. B. Woodroofe, "Consistent estimates of the parameters of a linear system."
2. W. N. Anderson, T. D. Morley, and G. E. Trapp, "The cascade limit, the shorted operator and quadratic optimal control," in C. I. Byrnes, C. F. Martin, and R. E. Saeks (Eds.).
3. R. S. Bucy, "A priori bounds for the Riccati equation," in Proceedings of the 6th Berkeley Symposium on Mathematical Statistics and Probability, Volume 3: Probability Theory, University of California Press, Berkeley, Calif, USA, 1972, pp. 645–656.
4. D. V. Ouellette, "Schur complements and statistics."
5. W. Pusz and S. L. Woronowicz, "Functional calculus for sesquilinear forms and the purification map."
6. J. Zabczyk, "Remarks on the control of discrete time distributed parameter systems."
7. A. C. M. Ran and M. C. B. Reurings, "A nonlinear matrix equation connected to interpolation theory."
8. X. F. Duan and A. P. Liao, "On Hermitian positive definite solution of the matrix equation X-∑i=1mAi*XrAi=Q."
9. A. Ferrante and B. C. Levy, "Hermitian solutions of the equation X=Q+NX-1N*."
10. C. Guo and P. Lancaster, "Iterative solution of two matrix equations."
11. V. I. Hasanov, "Positive definite solutions of the matrix equations X±A*X-qA=Q."
12. V. I. Hasanov, I. G. Ivanov, and F. Uhlig, "Improved perturbation estimates for the matrix equations X±A*X-1A=Q."
13. J. Li and Y. H. Zhang, "The Hermitian positive definite solutions and perturbation analysis of the matrix equation X-A*X-1A=Q."
14. Y. Lim, "Solving the nonlinear matrix equation X=Q+∑i=1mMiXδiMi* via a contraction principle."
15. V. I. Hasanov and I. G. Ivanov, "On two perturbation estimates of the extreme solutions to the equations X±A*X-1A=Q."
16. J. Li and Y. Zhang, "Perturbation analysis of the matrix equation X-A*X-qA=Q."
17. X. G. Liu and H. Gao, "On the positive definite solutions of the matrix equations Xs±ATX-tA=In."
18. J. G. Sun and S. F. Xu, "Perturbation analysis of the maximal solution of the matrix equation X+A*X-1A=P. II."
19. S. F. Xu, "Perturbation analysis of the maximal solution of the matrix equation X+A*X-1A=P."
20. X. Y. Yin and L. Fang, "Perturbation analysis for the positive definite solution of the nonlinear matrix equation X-∑i=1mAi*X-1Ai=Q."