Journal of Applied Mathematics (Hindawi Publishing Corporation), vol. 2013, Article ID 869705, doi:10.1155/2013/869705

Research Article

Eigenvector-Free Solutions to the Matrix Equation AXB^H = E with Two Special Constraints

Yuyang Qiu (College of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310018, China)

Academic Editor: Qing-Wen Wang

Received 11 March 2013; Accepted 18 September 2013; Published 31 October 2013

Copyright © 2013 Yuyang Qiu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The matrix equation AXB^H = E with the constraint SX = XR or PX = sXQ is considered, where S, R are Hermitian idempotent, P, Q are Hermitian involutory, and s = ±1. By the eigenvalue decompositions of S and R, the equation AXB^H = E with the constraint SX = XR is equivalently transformed into an unconstrained problem whose coefficient matrices contain the corresponding eigenvectors, from which the constrained solutions are constructed. The involved eigenvectors are then released by means of Moore-Penrose generalized inverses, and eigenvector-free formulas of the general solutions are presented. By choosing suitable matrices S and R, we also present eigenvector-free formulas of the general solutions to the matrix equation AXB^H = E with the constraint PX = sXQ.

1. Introduction

In [1], Chen called a square matrix X reflexive or antireflexive with respect to P according as (1) PX = XP or PX = −XP, where the matrix P ∈ 𝒞^{n×n} is Hermitian involutory. He also pointed out that these matrices possess special properties and have wide applications in engineering and scientific computations [1, 2]. Solving a matrix equation, or a system of matrix equations, with such constraints is therefore of interest. In this paper, we consider the matrix equation (2) AXB^H = E with the constraint (3) PX = sXQ or SX = XR, where A ∈ 𝒞^{m×n}, B ∈ 𝒞^{p×n}, E ∈ 𝒞^{m×p}, the matrices P, Q ∈ 𝒞^{n×n} are Hermitian involutory, the matrices S, R ∈ 𝒞^{n×n} are Hermitian idempotent, and s = ±1.

Equation (2) with different constraints, such as symmetry, skew-symmetry, and PX = ±XP, was discussed in [9–11, 15–21], where existence conditions and the general solutions of the constrained equation were presented. By the generalized singular value decomposition (GSVD) [22, 23], the matrix equation was simplified by diagonalizing the coefficient matrices; the new variable matrix was partitioned into several blocks, the constraint was imposed on the subblocks, and the unknown subblocks were then determined separately for (2) with the symmetry constraint. A similar strategy produced symmetric, skew-symmetric, and positive semidefinite solutions of (2) by the quotient singular value decomposition (QSVD) [24, 25]. Moreover, the canonical correlation decomposition (CCD) [26] was used to establish a formula of the general solutions to (2) with a diagonal constraint.

In [19], we presented an eigenvector-free solution to the matrix equation (2) with the constraint PX = ±XP, where we represented its general solution and existence condition by g-inverses of the matrices A, B, and P. Note that g-inverses are not always unique; they can be replaced by Moore-Penrose generalized inverses. Moreover, the constraint that guarantees eigenvector-free expressions can perhaps be weakened further. So, in this paper, we focus on (2) with the generalized constraint PX = sXQ or with the constraint SX = XR; our ideas are based on the following observations.

If we set (4) S = (1/2)(I + P), R = (1/2)(I + sQ), then S and R are both Hermitian idempotent. This fact implies that PX = sXQ is a special case of SX = XR. So, we only discuss (2) with the constraint SX = XR and construct the PX = sXQ constrained solution by selecting the matrices S, R as in (4).
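As a quick numerical sanity check of this observation, the sketch below (an illustrative NumPy script with real matrices standing in for complex ones; the Householder construction of P and Q is our own choice, not taken from the paper) builds S and R as in (4) and verifies that they are Hermitian idempotent and that SX − XR = (PX − sXQ)/2 for an arbitrary X, so the two constraints coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 6, -1

def reflector(v):
    """Householder reflector: Hermitian (here real symmetric) and involutory."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

P = reflector(rng.standard_normal(n))    # P^2 = I, P = P^H
Q = reflector(rng.standard_normal(n))

S = 0.5 * (np.eye(n) + P)                # construction (4)
R = 0.5 * (np.eye(n) + s * Q)

assert np.allclose(S @ S, S) and np.allclose(R @ R, R)   # idempotent
assert np.allclose(S, S.T) and np.allclose(R, R.T)       # Hermitian

# for ANY X: S X - X R = (P X - s X Q) / 2, so SX = XR iff PX = sXQ
X = rng.standard_normal((n, n))
assert np.allclose(S @ X - X @ R, 0.5 * (P @ X - s * X @ Q))
```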

With the eigenvalue decompositions (EVDs) of the Hermitian matrices S and R, a matrix X satisfying SX = XR can be rewritten in terms of two (lower-dimensional) free variables X̂ and Ŷ, and the corresponding constrained problem can be equivalently transformed into the unconstrained equation (5) Â1X̂B̂1^H + Â2ŶB̂2^H = E with given coefficient matrices Âi, B̂i, i = 1, 2 (the details of this reduction are given in Section 2).

The general solutions and existence conditions of (5) can be represented by the Moore-Penrose generalized inverses of Âi, B̂i, i = 1, 2 [15, 20, 27–29]. However, these formulas are not yet satisfactory, because the coefficient matrices contain the eigenvectors of S and R. In fact, since S and R are Hermitian idempotent, each has only two distinct eigenvalues, 0 and 1, and the eigenvectors appearing in the expression of the general solutions and in the existence conditions can easily be represented by S and R themselves. So we present a simple, eigenvector-free formulation of the constrained general solution.

The rest of this paper is organized as follows. In Section 2, we give the general solutions and the existence condition of (2) with the constraint SX = XR by the EVDs of S and R. In Section 3, we present the corresponding eigenvector-free representations. Equation (2) with the constraint PX = sXQ is treated as a special case of (2) with SX = XR, and its eigenvector-free representation is given in Section 4. Numerical examples are given in Section 5 to illustrate the effectiveness of our theorems.

We will use the following notation in the rest of this paper. Let 𝒞^{m×n} denote the space of complex m×n matrices. For a matrix A, A^H and A^† denote its conjugate transpose and its Moore-Penrose generalized inverse, respectively. I_n is the identity matrix of order n; O_{m×n} refers to the m×n zero matrix, and O_n is the zero matrix of order n. For any matrix A ∈ 𝒞^{m×n}, we also denote (6) 𝒫_A = AA^†, K_A = I_m − 𝒫_A. Consequently, (7) 𝒫_{A^H} = A^†A, K_{A^H} = I_n − 𝒫_{A^H}.
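The projectors (6)-(7) can be formed directly from a numerical Moore-Penrose inverse. A minimal sketch, assuming NumPy and a real rank-deficient test matrix (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))   # rank 3, size 5x7

Adag = np.linalg.pinv(A)         # Moore-Penrose generalized inverse A^dagger
P_A  = A @ Adag                  # (6): P_A = A A^dagger, projector onto range(A)
K_A  = np.eye(5) - P_A           #      K_A = I_m - P_A
P_AH = Adag @ A                  # (7): P_{A^H} = A^dagger A, projector onto range(A^H)
K_AH = np.eye(7) - P_AH

assert np.allclose(P_A @ P_A, P_A) and np.allclose(P_AH @ P_AH, P_AH)  # idempotent
assert np.allclose(K_A @ A, 0) and np.allclose(A @ K_AH, 0)            # annihilators
```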

2. Solution to (<xref ref-type="disp-formula" rid="EEq1.1">2</xref>) with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M78"><mml:mi>S</mml:mi><mml:mi>X</mml:mi><mml:mo mathvariant="bold">=</mml:mo><mml:mi>X</mml:mi><mml:mi>R</mml:mi></mml:math></inline-formula> Constraint by the EVDs

For the Hermitian idempotent matrices S, R, let (8) S = U diag(I_k, O_{n−k}) U^H, R = V diag(I_l, O_{n−l}) V^H be their eigenvalue decompositions with unitary matrices U, V, respectively. Then SX = XR holds if and only if (9) diag(I_k, O_{n−k}) X̃ = X̃ diag(I_l, O_{n−l}), where X̃ = U^H X V, and the constrained solution X can be expressed as (10) X = U diag(X̂, Ŷ) V^H, X̂ ∈ 𝒞^{k×l}, Ŷ ∈ 𝒞^{(n−k)×(n−l)}. Partitioning U = [U1, U2], V = [V1, V2] and using the transformation (10), (2) with the constraint SX = XR is equivalent to the following unconstrained problem: (11) Â1X̂B̂1^H + Â2ŶB̂2^H = E, where (12) Â1 = AU1, B̂1 = BV1, Â2 = AU2, B̂2 = BV2.
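The reduction (9)-(12) can be illustrated numerically. In the sketch below (NumPy, with real orthogonal U, V in place of unitary ones; all sizes are arbitrary choices of ours), any X of the form (10) satisfies SX = XR, and AXB^H then collapses to the unconstrained form (11):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p, k, l = 4, 6, 5, 2, 3

# unitary (here: real orthogonal) U, V and the projectors S, R of (8)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
U1, U2 = U[:, :k], U[:, k:]
V1, V2 = V[:, :l], V[:, l:]
S = U1 @ U1.T            # eigenvalues: 1 (k-fold) and 0 (n-k)-fold
R = V1 @ V1.T

A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))
Xhat = rng.standard_normal((k, l))
Yhat = rng.standard_normal((n - k, n - l))

# (10): any X of this block-diagonal form satisfies the constraint S X = X R
X = U1 @ Xhat @ V1.T + U2 @ Yhat @ V2.T
assert np.allclose(S @ X, X @ R)

# (11)-(12): the constrained equation becomes an unconstrained one
A1h, A2h = A @ U1, A @ U2
B1h, B2h = B @ V1, B @ V2
assert np.allclose(A @ X @ B.T, A1h @ Xhat @ B1h.T + A2h @ Yhat @ B2h.T)
```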

For the unconstrained problem (11), we recall known results on its existence conditions and the expression of its solutions.

Lemma 1.

Given A ∈ 𝒞^{m×n}, B ∈ 𝒞^{p×q}, C ∈ 𝒞^{m×r}, D ∈ 𝒞^{s×q}, and E ∈ 𝒞^{m×q}, the linear matrix equation AXB + CYD = E is consistent if and only if (13) 𝒫_G K_A E 𝒫_{D^H} = K_A E, 𝒫_C E K_{B^H} 𝒫_{J^H} = E K_{B^H}, or, equivalently, if and only if (14) K_G K_A E = 0, K_A E K_{D^H} = 0, K_C E K_{B^H} = 0, E K_{B^H} K_{J^H} = 0, where G = K_A C and J = D K_{B^H}. A representation of the general solution is (15) Y = G^† K_A E D^† + T − 𝒫_{G^H} T 𝒫_D, X = A^†(E − CYD)B^† + Z − 𝒫_{A^H} Z 𝒫_B, with (16) T = (C K_{G^H})^†(I_m − C G^† K_A) E K_{B^H} J^† + W − 𝒫_{(C K_{G^H})^H} W 𝒫_J, where the matrices W ∈ 𝒞^{r×s} and Z ∈ 𝒞^{n×p} are arbitrary.

The lemma is easy to verify; we refer to [27] for details. The difference is that we replace the g-inverses in the theorem of [27] by the corresponding Moore-Penrose generalized inverses, which makes the expression of the solutions somewhat more complicated. However, in contrast to the multiplicity of g-inverses, the representation involving Moore-Penrose generalized inverses is unique and fixed.

Applying Lemma 1 to the unconstrained problem (11), we obtain the following theorem.

Theorem 2.

The matrix equation AXB^H = E with the constraint SX = XR is consistent if and only if (17) 𝒫_Ĝ K_{Â1} E 𝒫_{B̂2} = K_{Â1} E, 𝒫_{Â2} E K_{B̂1} 𝒫_{Ĵ^H} = E K_{B̂1}, where (18) Ĝ = K_{Â1} Â2, Ĵ = B̂2^H K_{B̂1}. In the meantime, a general solution is given by (19) Ŷ = Ĝ^† K_{Â1} E B̂2^{H†} + (Â2 K_{Ĝ^H})^†(I_m − Â2 Ĝ^† K_{Â1}) E K_{B̂1} Ĵ^† − 𝒫_{Ĝ^H}(Â2 K_{Ĝ^H})^†(I_m − Â2 Ĝ^† K_{Â1}) E K_{B̂1} Ĵ^† 𝒫_{B̂2^H} + W − 𝒫_{Ĝ^H} W 𝒫_{B̂2^H} − 𝒫_{(Â2 K_{Ĝ^H})^H} W 𝒫_Ĵ + 𝒫_{Ĝ^H} 𝒫_{(Â2 K_{Ĝ^H})^H} W 𝒫_Ĵ 𝒫_{B̂2^H}, X̂ = Â1^†(E − Â2 Ŷ B̂2^H) B̂1^{H†} + Z − 𝒫_{Â1^H} Z 𝒫_{B̂1^H}, where the matrices W and Z are arbitrary.

In order to separate Ŷ from X̂ in the second equality of (19), we substitute Ŷ into X̂. Let (20) Y^* = Ĝ^† K_{Â1} E B̂2^{H†} + (Â2 K_{Ĝ^H})^†(I_m − Â2 Ĝ^† K_{Â1}) E K_{B̂1} Ĵ^† − 𝒫_{Ĝ^H}(Â2 K_{Ĝ^H})^†(I_m − Â2 Ĝ^† K_{Â1}) E K_{B̂1} Ĵ^† 𝒫_{B̂2^H}, X^* = Â1^† E B̂1^{H†} − Â1^† Â2 Y^* B̂2^H B̂1^{H†}, together with (21) B̂2^† B̂2 B̂2^H = (B̂2^† B̂2)^H B̂2^H = B̂2^H, Â2 K_{Ĝ^H}(Â2 K_{Ĝ^H})^† Â2 K_{Ĝ^H} = Â2 K_{Ĝ^H}. Then (19) can be rewritten as (22) Ŷ = Y^* + W − 𝒫_{Ĝ^H} W 𝒫_{B̂2^H} − 𝒫_{(Â2 K_{Ĝ^H})^H} W 𝒫_Ĵ + 𝒫_{Ĝ^H} 𝒫_{(Â2 K_{Ĝ^H})^H} W 𝒫_Ĵ 𝒫_{B̂2^H}, X̂ = X^* + Z − 𝒫_{Â1^H} Z 𝒫_{B̂1^H} − Â1^† Â2 K_{Ĝ^H} W K_Ĵ B̂2^H B̂1^{H†}.

3. Eigenvector-Free Formulas of the General Solutions to (<xref ref-type="disp-formula" rid="EEq1.1">2</xref>) with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M124"><mml:mi>S</mml:mi><mml:mi>X</mml:mi><mml:mo mathvariant="bold">=</mml:mo><mml:mi>X</mml:mi><mml:mi>R</mml:mi></mml:math></inline-formula> Constraint

The existence conditions and the expression of the general solution given in Theorem 2 contain the eigenvector matrices of S and R, respectively, which means that the eigenvalue decompositions are involved. In this section, we remove the eigenvectors from the detailed expressions. From the equalities in (8), we have (23) U1U1^H = S, U2U2^H = I_n − S, V1V1^H = R, V2V2^H = I_n − R. Note that U_i(AU_i)^† is the Moore-Penrose generalized inverse of AU_iU_i^H, which gives (24) 𝒫_{Âi} = ÂiÂi^† = (AU_iU_i^H)(AU_iU_i^H)^† = A_iA_i^† = 𝒫_{A_i}, where (25) A1 = AU1U1^H = AS, A2 = AU2U2^H = A(I_n − S). Then (26) K_{Âi} = I_m − 𝒫_{Âi} = I_m − 𝒫_{A_i} = K_{A_i}, ĜU2^H = K_{A1}A2. Set (27) B1 = BV1V1^H = BR, B2 = BV2V2^H = B(I_n − R), and denote (28) G = K_{A1}A2, J = B2^H K_{B1}. It is not difficult to verify that (29) V2Ĵ = J, ĜU2^H = G, together with (30) 𝒫_Ĝ = (ĜU2^H)(ĜU2^H)^† = 𝒫_G, 𝒫_{Ĵ^H} = (V2Ĵ)^†(V2Ĵ) = 𝒫_{J^H}. Then the first equality of (17) can be rewritten as (31) 𝒫_G K_{A1} E 𝒫_{B2} = K_{A1} E, and the other can be rewritten as (32) 𝒫_{A2} E K_{B1} 𝒫_{J^H} = E K_{B1}. Now, we consider the simplification of the general solution X given by (10), which can be rewritten as (33) X = U1X̂V1^H + U2ŶV2^H. Note that (34) U2Ĝ^† = (ĜU2^H)^† = G^†, K_{Ĝ^H}U2^H = U2^H K_{G^H}, Â2U2^H = A2. Together with (26), (35) U2Y^*V2^H = G^† K_{A1} E B2^{H†} + (A2K_{G^H})^†(I_m − A2G^†K_{A1}) E K_{B1} J^† − 𝒫_{G^H}(A2K_{G^H})^†(I_m − A2G^†K_{A1}) E K_{B1} J^† 𝒫_{B2^H}, so we can represent U2Y^*V2^H by a given expression in A_i, B_i, E. Let (36) f(A1, A2, B1, B2, E) = G^† K_{A1} E B2^{H†} + (A2K_{G^H})^†(I_m − A2G^†K_{A1}) E K_{B1} J^† − 𝒫_{G^H}(A2K_{G^H})^†(I_m − A2G^†K_{A1}) E K_{B1} J^† 𝒫_{B2^H}. Hence, we have (37) U2Y^*V2^H = f(A1, A2, B1, B2, E), U1X^*V1^H = A1^† E B1^{H†} − A1^† A2 U2Y^*V2^H B2^H B1^{H†} = A1^† E B1^{H†} − A1^† A2 f(A1, A2, B1, B2, E) B2^H B1^{H†}. Since (38) V2K_Ĵ = V2(I_{n−l} − 𝒫_Ĵ) = (I_n − (V2Ĵ)(V2Ĵ)^†)V2 = K_J V2, it follows that (39) U1(Z − 𝒫_{Â1^H} Z 𝒫_{B̂1^H} − Â1^† Â2 K_{Ĝ^H} W K_Ĵ B̂2^H B̂1^{H†})V1^H = U1ZV1^H − 𝒫_{A1^H} U1ZV1^H 𝒫_{B1^H} − A1^† A2 K_{G^H} U2WV2^H K_J B2^H B1^{H†}, U2(W − 𝒫_{Ĝ^H} W 𝒫_{B̂2^H} − 𝒫_{(Â2K_{Ĝ^H})^H} W 𝒫_Ĵ + 𝒫_{Ĝ^H} 𝒫_{(Â2K_{Ĝ^H})^H} W 𝒫_Ĵ 𝒫_{B̂2^H})V2^H = U2WV2^H − 𝒫_{G^H} U2WV2^H 𝒫_{B2^H} − 𝒫_{(A2K_{G^H})^H} U2WV2^H 𝒫_J + 𝒫_{G^H} 𝒫_{(A2K_{G^H})^H} U2WV2^H 𝒫_J 𝒫_{B2^H}. Letting (40) U1ZV1^H + U2WV2^H = F, it is not difficult to verify that SF = FR.
Together with (41) A2U1 = 0, A1U2 = 0, B1V2 = 0, B2V1 = 0, the following equality holds: (42) 𝒫_{A1^H} U1ZV1^H 𝒫_{B1^H} + 𝒫_{G^H} U2WV2^H 𝒫_{B2^H} = (𝒫_{A1^H} + 𝒫_{G^H})(U1ZV1^H + U2WV2^H)(𝒫_{B1^H} + 𝒫_{B2^H}) = (𝒫_{A1^H} + 𝒫_{G^H})F(𝒫_{B1^H} + 𝒫_{B2^H}). Note that (43) GU1 = 0, A2K_{G^H}U1 = 0. Then (44) A2K_{G^H}U2WV2^H = A2K_{G^H}(U2WV2^H + U1ZV1^H) = A2K_{G^H}F. Hence, (45) A1^†A2K_{G^H}U2WV2^HK_JB2^HB1^{H†} = A1^†A2K_{G^H}FK_JB2^HB1^{H†}, 𝒫_{(A2K_{G^H})^H}U2WV2^H𝒫_J − 𝒫_{G^H}𝒫_{(A2K_{G^H})^H}U2WV2^H𝒫_J𝒫_{B2^H} = 𝒫_{(A2K_{G^H})^H}F𝒫_J − 𝒫_{G^H}𝒫_{(A2K_{G^H})^H}F𝒫_J𝒫_{B2^H}. Substituting the expressions above into (33) yields (46) X = A1^†EB1^{H†} + f(A1, A2, B1, B2, E) − A1^†A2f(A1, A2, B1, B2, E)B2^HB1^{H†} + F − (𝒫_{A1^H} + 𝒫_{G^H})F(𝒫_{B1^H} + 𝒫_{B2^H}) − A1^†A2K_{G^H}FK_JB2^HB1^{H†} − 𝒫_{(A2K_{G^H})^H}F𝒫_J + 𝒫_{G^H}𝒫_{(A2K_{G^H})^H}F𝒫_J𝒫_{B2^H}.
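The key identity behind the eigenvector release, namely that U_i(AU_i)^† is the Moore-Penrose inverse of A_i = AU_iU_i^H as used in (24), can be checked numerically. A small NumPy sketch (real matrices and sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 5, 6, 2

U, _ = np.linalg.qr(rng.standard_normal((n, n)))
U1, U2 = U[:, :k], U[:, k:]
S = U1 @ U1.T                            # U1 U1^H = S as in (23)

A = rng.standard_normal((m, n))
A1 = A @ S                               # A1 = A S, cf. (25)
A2 = A @ (np.eye(n) - S)                 # A2 = A (I_n - S)

# U_i (A U_i)^dagger is the Moore-Penrose inverse of A_i = A U_i U_i^H, cf. (24)
assert np.allclose(np.linalg.pinv(A1), U1 @ np.linalg.pinv(A @ U1))
assert np.allclose(np.linalg.pinv(A2), U2 @ np.linalg.pinv(A @ U2))

# hence the projectors of the hatted and unhatted matrices coincide
P_A1hat = (A @ U1) @ np.linalg.pinv(A @ U1)
assert np.allclose(A1 @ np.linalg.pinv(A1), P_A1hat)
```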

We have the following theorem.

Theorem 3.

Let (47) A1 = AS, A2 = A(I_n − S), B1 = BR, B2 = B(I_n − R). The matrix equation (2) with the constraint SX = XR is consistent if and only if (48) 𝒫_G K_{A1} E 𝒫_{B2} = K_{A1} E, 𝒫_{A2} E K_{B1} 𝒫_{J^H} = E K_{B1}, with (49) G = K_{A1}A2, J = B2^H K_{B1}. In the meantime, a general solution is given by (50) X = A1^†EB1^{H†} + f(A1, A2, B1, B2, E) − A1^†A2f(A1, A2, B1, B2, E)B2^HB1^{H†} + F − (𝒫_{A1^H} + 𝒫_{G^H})F(𝒫_{B1^H} + 𝒫_{B2^H}) − A1^†A2K_{G^H}FK_JB2^HB1^{H†} − 𝒫_{(A2K_{G^H})^H}F𝒫_J + 𝒫_{G^H}𝒫_{(A2K_{G^H})^H}F𝒫_J𝒫_{B2^H}, where the arbitrary matrix F satisfies SF = FR and f(A1, A2, B1, B2, E) is determined by (36).

4. Eigenvector-Free Formulas of the General Solutions to (<xref ref-type="disp-formula" rid="EEq1.1">2</xref>) with <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M167"><mml:mi>P</mml:mi><mml:mi>X</mml:mi><mml:mo mathvariant="bold">=</mml:mo><mml:mi>s</mml:mi><mml:mi>X</mml:mi><mml:mi>Q</mml:mi></mml:math></inline-formula> Constraint

For this constraint, if we set S and R as in (4), it is not difficult to verify that S and R are Hermitian idempotent and that the constraint PX = sXQ is equivalent to (51) SX = XR. By Theorem 3, we have the following theorem.

Theorem 4.

Let (52) A1 = (1/2)A(I_n + P), A2 = (1/2)A(I_n − P), B1 = (1/2)B(I_n + sQ), B2 = (1/2)B(I_n − sQ). The matrix equation (2) with the constraint PX = sXQ is consistent if and only if (53) 𝒫_G K_{A1} E 𝒫_{B2} = K_{A1} E, 𝒫_{A2} E K_{B1} 𝒫_{J^H} = E K_{B1}, with (54) G = K_{A1}A2, J = B2^H K_{B1}. In the meantime, a general solution is given by (55) X = A1^†EB1^{H†} + f(A1, A2, B1, B2, E) − A1^†A2f(A1, A2, B1, B2, E)B2^HB1^{H†} + F − (𝒫_{A1^H} + 𝒫_{G^H})F(𝒫_{B1^H} + 𝒫_{B2^H}) − A1^†A2K_{G^H}FK_JB2^HB1^{H†} − 𝒫_{(A2K_{G^H})^H}F𝒫_J + 𝒫_{G^H}𝒫_{(A2K_{G^H})^H}F𝒫_J𝒫_{B2^H}, where the arbitrary matrix F satisfies PF = sFQ and f(A1, A2, B1, B2, E) is determined by (36).

5. Numerical Examples

In this section, we present some numerical examples to illustrate the effectiveness of Theorems 3 and 4. For simplicity, we set m = n = p, so that the coefficient matrices A, B and the right-hand side matrix E are all n×n. The coefficient matrices A, B are constructed randomly by (56) A = U diag(σ1, …, σn)V^T, where the orthogonal matrices U and V are generated in MATLAB by (57) [U, temp] = qr(1 - 2*rand(n)), [V, temp] = qr(1 - 2*rand(n)), and the singular values {σi} are chosen from the interval (0, 1). For the computed solution X of (2) with the constraint PX = sXQ or SX = XR, the residual error εX, the PQ-commuting error εPQ, the SR-commuting error εSR, and the consistency error Conderr are defined by (58) εX = ‖E − AXB^H‖_F, εPQ = ‖PX − sXQ‖_F, εSR = ‖SX − XR‖_F, Conderr = max{‖𝒫_G K_{A1} E 𝒫_{B2} − K_{A1} E‖_F, ‖𝒫_{A2} E K_{B1} 𝒫_{J^H} − E K_{B1}‖_F}.
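A Python analogue of the MATLAB construction (56)-(57), assuming NumPy (the seed, size, and singular-value range are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8

def rand_orthogonal(n):
    # analogue of MATLAB's [U, temp] = qr(1 - 2*rand(n))
    Q, _ = np.linalg.qr(1.0 - 2.0 * rng.random((n, n)))
    return Q

U, V = rand_orthogonal(n), rand_orthogonal(n)
sigma = rng.uniform(0.1, 1.0, n)                 # singular values in (0, 1)
A = U @ np.diag(sigma) @ V.T                     # A = U diag(sigma) V^T as in (56)

assert np.allclose(U.T @ U, np.eye(n))           # U is orthogonal
assert np.allclose(np.sort(np.linalg.svd(A, compute_uv=False)), np.sort(sigma))
```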

Example 1.

In this example, we test the solutions of (2) with the SX = XR constraint by Theorem 3. The coefficient matrices A, B are constructed as in (56), and the right-hand side matrix E is constructed as (59) E = AX^*B^H, where X^* satisfies (60) SX^* = X^*R, and S, R are symmetric idempotent. This implies that the constrained equation (2) is consistent, so the residual error εX and the consistency error Conderr should vanish at the computed solution X.

For different n, the residual error εX, the SR-commuting error εSR, and the consistency error Conderr all reach a precision of about 10^−9, and none of them seems to depend on the matrix size n very much; the CPU time, however, grows quickly as n increases. In Table 1, we list the CPU time, εX, εSR, and Conderr, respectively.

Different matrix sizes n for the solutions of (2) with the SX = XR constraint.

n      CPU (s)   εX           εSR          Conderr
100    0.38      1.14×10^−12  6.53×10^−13  7.12×10^−12
300    1.34      3.23×10^−12  4.43×10^−13  5.63×10^−12
500    5.62      4.12×10^−10  4.76×10^−13  2.24×10^−11
700    14.55     3.91×10^−10  7.54×10^−13  5.43×10^−11
900    29.63     2.31×10^−09  3.13×10^−12  1.37×10^−11
1100   55.34     9.36×10^−09  6.64×10^−12  2.19×10^−11
Example 2.

We test the solutions of (2) with the PX = XQ constraint by Theorem 4. The test matrices A, B are constructed as in (56), and the right-hand side matrix is (61) E = AX^*B^H, where X^* satisfies (62) PX^* = X^*Q, and P, Q are symmetric involutory.

For different n, the numerical results are similar to those of Example 1; that is, the residual error εX, the PQ-commuting error εPQ, and the consistency error Conderr can all reach a precision of about 10^−9 and do not seem to depend on the matrix size n very much. However, the CPU time grows quickly as n increases. In Table 2, we list the CPU time, εX, εPQ, and Conderr, respectively.

Different matrix sizes n for the solutions of (2) with the PX = XQ constraint.

n      CPU (s)   εX           εPQ          Conderr
100    0.42      6.11×10^−13  5.61×10^−13  2.31×10^−11
300    2.83      2.07×10^−10  9.73×10^−13  4.34×10^−10
500    8.21      5.85×10^−10  1.55×10^−12  3.61×10^−10
700    14.53     1.17×10^−10  2.24×10^−12  5.37×10^−09
900    28.54     2.60×10^−09  4.61×10^−11  8.18×10^−09
1100   52.81     5.35×10^−09  4.92×10^−11  6.53×10^−09
6. Conclusion

In this paper, we considered (2) with the two special constraints PX = sXQ and SX = XR, where P, Q ∈ 𝒞^{n×n} are Hermitian involutory, S, R ∈ 𝒞^{n×n} are Hermitian idempotent, and s = ±1. We represented the general solutions of the constrained equation by the eigenvalue decompositions of S and R, released the involved eigenvectors by Moore-Penrose generalized inverses, and obtained eigenvector-free formulas of the general solutions.

Acknowledgments

The author is grateful to the referees for their enlightening suggestions. Moreover, the research was supported in part by the Natural Science Foundation of Zhejiang Province and National Natural Science Foundation of China (Grant nos. Y6110639, LQ12A01017, and 11201422).

References

[1] H.-C. Chen, "Generalized reflexive matrices: special properties and applications," SIAM Journal on Matrix Analysis and Applications, vol. 19, no. 1, pp. 140–153, 1998.
[2] H.-C. Chen and A. H. Sameh, "A matrix decomposition method for orthotropic elasticity problems," SIAM Journal on Matrix Analysis and Applications, vol. 10, no. 1, pp. 39–64, 1989.
[3] F. Li, X. Hu, and L. Zhang, "The generalized reflexive solution for a class of matrix equations (AX = B, XC = D)," Acta Mathematica Scientia B, vol. 28, no. 1, pp. 185–193, 2008.
[4] C. Meng, X. Hu, and L. Zhang, "The skew-symmetric orthogonal solutions of the matrix equation AX = B," Linear Algebra and its Applications, vol. 402, pp. 303–318, 2005.
[5] C. J. Meng and X. Y. Hu, "An inverse problem for symmetric orthogonal matrices and its optimal approximation," Mathematica Numerica Sinica, vol. 28, no. 3, pp. 269–280, 2006.
[6] Z.-Y. Peng, "The inverse eigenvalue problem for Hermitian anti-reflexive matrices and its approximation," Applied Mathematics and Computation, vol. 162, no. 3, pp. 1377–1389, 2005.
[7] Z.-Y. Peng and X.-Y. Hu, "The reflexive and anti-reflexive solutions of the matrix equation AX = B," Linear Algebra and its Applications, vol. 375, pp. 147–155, 2003.
[8] Y. Qiu, Z. Zhang, and J. Lu, "The matrix equations AX = B, XC = D with PX = sXP constraint," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1428–1434, 2007.
[9] Q.-W. Wang, S.-W. Yu, and C.-Y. Lin, "Extreme ranks of a linear quaternion matrix expression subject to triple quaternion matrix equations with applications," Applied Mathematics and Computation, vol. 195, no. 2, pp. 733–744, 2008.
[10] Q.-W. Wang, H.-X. Chang, and C.-Y. Lin, "P-(skew)symmetric common solutions to a pair of quaternion matrix equations," Applied Mathematics and Computation, vol. 195, no. 2, pp. 721–732, 2008.
[11] Q.-W. Wang, J. W. van der Woude, and H.-X. Chang, "A system of real quaternion matrix equations with applications," Linear Algebra and its Applications, vol. 431, no. 12, pp. 2291–2303, 2009.
[12] Q.-W. Wang and Z.-H. He, "Some matrix equations with applications," Linear and Multilinear Algebra, vol. 60, no. 11-12, pp. 1327–1353, 2012.
[13] Q. Wang and Z. He, "A system of matrix equations and its applications," Science China Mathematics, vol. 56, no. 9, pp. 1795–1820, 2013.
[14] Z.-H. He and Q.-W. Wang, "A real quaternion matrix equation with applications," Linear and Multilinear Algebra, vol. 61, no. 6, pp. 725–740, 2013.
[15] K. E. Chu, "Singular value and generalized singular value decompositions and the solution of linear matrix equations," Linear Algebra and its Applications, vol. 88/89, pp. 83–98, 1987.
[16] K. E. Chu, "Symmetric solutions of linear matrix equations by matrix decompositions," Linear Algebra and its Applications, vol. 119, pp. 35–50, 1989.
[17] H. Dai, "On the symmetric solutions of linear matrix equations," Linear Algebra and its Applications, vol. 131, pp. 1–7, 1990.
[18] Y.-B. Deng, X.-Y. Hu, and L. Zhang, "Least squares solution of BXA^T = T over symmetric, skew-symmetric, and positive semidefinite X," SIAM Journal on Matrix Analysis and Applications, vol. 25, no. 2, pp. 486–494, 2003.
[19] Y. Qiu and C. Qiu, "Matrix equation AXB = E with PX = sXP constraint," Applied Mathematics: A Journal of Chinese Universities, Series B, vol. 22, no. 4, pp. 441–448, 2007.
[20] G. Xu, M. Wei, and D. Zheng, "On solutions of matrix equation AXB + CYD = F," Linear Algebra and its Applications, vol. 279, no. 1–3, pp. 93–109, 1998.
[21] M. Wang, X. Cheng, and M. Wei, "Iterative algorithms for solving the matrix equation AXB + CX^TD = E," Applied Mathematics and Computation, vol. 187, no. 2, pp. 622–629, 2007.
[22] C. C. Paige and M. A. Saunders, "Towards a generalized singular value decomposition," SIAM Journal on Numerical Analysis, vol. 18, no. 3, pp. 398–405, 1981.
[23] C. C. Paige, "Computing the generalized singular value decomposition," SIAM Journal on Scientific and Statistical Computing, vol. 7, no. 4, pp. 1126–1146, 1986.
[24] D. Chu and B. De Moor, "On a variational formulation of the QSVD and the RSVD," Linear Algebra and its Applications, vol. 311, no. 1–3, pp. 61–78, 2000.
[25] B. D. Moor and G. H. Golub, "Generalized singular value decompositions: a proposal for a standardized nomenclature," Report 89-10, ESAT-SISTA, Leuven, Belgium, 1989.
[26] G. H. Golub and H. Y. Zha, "Perturbation analysis of the canonical correlations of matrix pairs," Linear Algebra and its Applications, vol. 210, pp. 3–28, 1994.
[27] J. K. Baksalary and R. Kala, "The matrix equation AXB + CYD = E," Linear Algebra and its Applications, vol. 30, pp. 141–147, 1980.
[28] A.-P. Liao, Z.-Z. Bai, and Y. Lei, "Best approximate solution of matrix equation AXB + CYD = E," SIAM Journal on Matrix Analysis and Applications, vol. 27, no. 3, pp. 675–688, 2005.
[29] A. B. Özgüler, "The equation AXB + CYD = E over a principal ideal domain," SIAM Journal on Matrix Analysis and Applications, vol. 12, no. 3, pp. 581–591, 1991.