Abstract and Applied Analysis, Hindawi Publishing Corporation, ISSN 1687-0409 (online), 1085-3375 (print). doi:10.1155/2014/705830. Research Article: The Iteration Solution of Matrix Equation AXB=C Subject to a Linear Matrix Inequality Constraint. Na Huang and Changfeng Ma (http://orcid.org/0000-0002-5936-789X), School of Mathematics and Computer Science, Fujian Normal University, Fuzhou 350007, China. Academic Editor: Ivan Ivanov. Received 1 May 2014; Accepted 22 July 2014; Published 26 August 2014. Copyright © 2014 Na Huang and Changfeng Ma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose a feasible and effective iteration method to find solutions to the matrix equation AXB=C subject to a matrix inequality constraint DXE ≥ F, where DXE ≥ F means that the matrix DXE - F is nonnegative. Global convergence results are obtained. Some numerical results are reported to illustrate the applicability of the method.

1. Introduction

In this paper, we consider the following problem: (1) AXB = C, DXE ≥ F, where A ∈ Rp×m, B ∈ Rn×q, C ∈ Rp×q, D ∈ Rs×m, E ∈ Rn×t, and F ∈ Rs×t are known constant matrices and X ∈ Rm×n is the unknown matrix.

Solutions X of linear matrix equations with special structure have been widely studied, for example, symmetric solutions (see ), R-symmetric solutions (see ), (R,S)-symmetric solutions (see [7, 8]), bisymmetric solutions (see ), centrosymmetric solutions (see ), and other general solutions (see ). Some iterative methods for solving pairs of linear matrix equations have been studied as well (see ).

However, very little research has been done on solutions to a matrix equation subject to a matrix inequality constraint. In 2012, Peng et al. (see ) proposed a feasible and effective algorithm, based on the polar decomposition in Hilbert space, to find solutions to the matrix equation AX=B subject to a matrix inequality constraint CXD ≥ E. The following year, Li et al. (see ) used a similar approach to study the bisymmetric solutions to the same problem. Motivated and inspired by the work mentioned above, in this paper we consider the solutions of the matrix equation AXB=C subject to the linear inequality constraint DXE ≥ F. We use the theory of the analytical solution of the matrix equation AXB=C to transform the problem into a smallest nonnegative deviation problem for a matrix inequality. Then, combined with the polar decomposition theory, an iterative method for solving this transformed problem is proposed, and global convergence results are obtained. Some numerical results are reported and indicate that the proposed method is quite effective.

Throughout this paper, we use the following notation: for A ∈ Rm×n, we write AT, A+, and ‖A‖ to denote the transpose, the Moore-Penrose generalized inverse, and the Frobenius norm of the matrix A, respectively. For any A = (aij), B = (bij), we write A ≥ B if aij ≥ bij. A ⊗ B denotes the Kronecker product defined as A ⊗ B = (aijB). For the matrix X = (x1, x2, …, xn) ∈ Rm×n, vec(X) denotes the vec operator defined as vec(X) = (x1T, x2T, …, xnT)T. For A = (aij) ∈ Rm×n, [A]+ is the matrix whose ijth entry equals max{0, aij}. Obviously, A = [A]+ - [-A]+. The inner product on the space Rm×n is defined as (2) ⟨A, B⟩ = tr(BTA), A, B ∈ Rm×n. Hence Rm×n is a Hilbert inner product space, and the norm generated by this inner product is the Frobenius norm.
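These conventions are easy to mirror in code. The following NumPy sketch (illustrative matrices only; the paper itself works in MATLAB) checks the splitting A = [A]+ - [-A]+ and the Frobenius inner product:

```python
import numpy as np

def pos(A):
    """[A]+ : entrywise max{0, a_ij}."""
    return np.maximum(A, 0.0)

A = np.array([[1.5, -2.0], [0.0, 3.0]])
B = np.array([[-1.0, 4.0], [2.0, -0.5]])

# Every matrix splits into its positive and negative parts: A = [A]+ - [-A]+
assert np.allclose(pos(A) - pos(-A), A)

# <A, B> = tr(B^T A) is the Frobenius inner product (sum of entrywise
# products), and its induced norm is the Frobenius norm.
assert np.isclose(np.trace(B.T @ A), np.sum(A * B))
assert np.isclose(np.sqrt(np.trace(A.T @ A)), np.linalg.norm(A, 'fro'))
```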

This paper is organized as follows. In Section 2, we transform problem (1) into a matrix inequality smallest nonnegative deviation problem. Then we study the existence of the solutions for problem (1) in Section 3. The iterative method to the transformed problem and convergence analysis are presented in Section 4. Section 5 shows some numerical experiments. Finally, we conclude this paper in Section 6.

2. Transforming the Original Problem

In this section, we use the theory on the analytical solution of matrix equation AXB=C to transform problem (1) into a matrix inequality smallest nonnegative deviation problem. Firstly, we present the following lemma about the analytical solution of matrix equation AXB=C.

Lemma 1 (see Theorem 1.21 in [<xref ref-type="bibr" rid="B37">34</xref>]).

Given A ∈ Rp×m, B ∈ Rn×q, and C ∈ Rp×q, the matrix equation AXB=C is solvable for X in Rm×n if and only if AA+CB+B = C. Moreover, if the matrix equation AXB=C is solvable, then its general solution can be expressed as (3) X = A+CB+ + G - A+AGBB+, where G is an arbitrary m×n matrix.
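Lemma 1 can be exercised numerically. The NumPy sketch below (an illustration with made-up matrices, not the authors' code) checks the solvability condition AA+CB+B = C and verifies that (3) yields a solution for an arbitrary G:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])   # rank-deficient, so A+ is not A^-1
B = np.eye(2)
C = np.array([[1.0, 2.0], [0.0, 0.0]])   # chosen so that AXB = C is solvable

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

# Solvability test of Lemma 1: A A+ C B+ B == C
assert np.allclose(A @ Ap @ C @ Bp @ B, C)

# General solution (3): X = A+ C B+ + G - A+ A G B B+ for arbitrary G
G = np.array([[5.0, -1.0], [2.0, 7.0]])
X = Ap @ C @ Bp + G - Ap @ A @ G @ B @ Bp
assert np.allclose(A @ X @ B, C)
```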

Assume that AA+CB+B = C; that is, assume that the matrix equation AXB=C is solvable. Substituting (3) into the second inequality of (1), we get (4) D(A+CB+ + G - A+AGBB+)E ≥ F. By a simple calculation, we have (5) D(G - A+AGBB+)E ≥ F - DA+CB+E. Hence (3) is a solution of (1) if and only if the matrix G in (3) satisfies (5). However, inequality (5) may be unsolvable; namely, (6) {G ∈ Rm×n | D(G - A+AGBB+)E ≥ F~} = ∅, where F~ = F - DA+CB+E. In this case, we can find Y ∈ R+s×t such that (7) D(G - A+AGBB+)E + Y ≥ F~ is solvable (if (5) is solvable, then Y = 0). Obviously, there exist many Y ∈ R+s×t satisfying (7). Here we need to find a Y such that ‖Y‖ ≤ ‖Y¯‖ for every Y¯ satisfying (7). Thus we consider the following smallest nonnegative deviation problem for the matrix inequality, which is also a quadratic programming problem: (8) minimize P(G, Y) = ‖Y‖2 subject to D(G - A+AGBB+)E + Y ≥ F~, Y ≥ 0, G ∈ Rm×n. If G and Y solve (8) with Y = 0, then G satisfies (5), and (3) is a solution of (1). If G and Y solve (8) with Y ≠ 0, then G solves (7) in the smallest nonnegative deviation sense, and (3) is a solution of the matrix equation AXB=C under the smallest nonnegative deviation constraint of the inequality DXE ≥ F. Conversely, if G satisfies (5), then G and Y = 0 solve (8). So, to find X ∈ Rm×n satisfying (1), we only need to solve the smallest nonnegative deviation problem (8).

Suppose that G and Y solve (8); then (9) Y = [F~ - D(G - A+AGBB+)E]+. On the other hand, if a pair of matrices G and Y solves the smallest nonnegative deviation problem (8), then there exists a nonnegative matrix Z satisfying (10) D(G - A+AGBB+)E + Y - Z = F~. Consequently, the smallest nonnegative deviation problem (8) is equivalent to the following optimization problem: (11) minimize P(G, Y, Z) = ‖Y‖2 subject to D(G - A+AGBB+)E + Y - Z = F~, Y ≥ 0, Z ≥ 0, G ∈ Rm×n. Eliminating Y from (11) yields the following optimization problem: (12) minimize F(G, Z) = ‖D(G - A+AGBB+)E - Z - F~‖2 subject to Z ≥ 0, G ∈ Rm×n. Suppose that a pair (G, Z) solves (12); then (13) Z = [D(G - A+AGBB+)E - F~]+, which together with the matrix Y = [F~ - D(G - A+AGBB+)E]+ and G solves (11); hence G and Y solve (8). This allows one to determine whether or not (3) is a solution of (1). Therefore, to solve (1), we first solve optimization problem (12). Our iteration method proposed below will take advantage of these equivalent forms of (1).
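The equivalence of (8), (10), and (11) rests on the entrywise identity [X]+ - [-X]+ = X: with Y and Z defined by (9) and (13), constraint (10) holds automatically. A NumPy check (matrix values are arbitrary illustrations):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])   # rank-deficient on purpose
B = np.eye(2)
D = np.array([[2.0, 1.0], [0.0, 1.0]])
E = np.array([[1.0, -1.0], [3.0, 0.5]])
F_tilde = np.array([[1.0, -2.0], [0.5, 4.0]])  # plays the role of F~
G = np.array([[0.3, -1.2], [2.0, 0.7]])

P, S = np.linalg.pinv(A) @ A, B @ np.linalg.pinv(B)  # projectors A+A, BB+
T = D @ (G - P @ G @ S) @ E         # T = D(G - A+ A G B B+)E
Y = np.maximum(F_tilde - T, 0.0)    # smallest nonnegative deviation, as in (9)
Z = np.maximum(T - F_tilde, 0.0)    # slack matrix, as in (13)

# Constraint (10): T + Y - Z = F~, since Y - Z = [F~-T]+ - [-(F~-T)]+ = F~-T
assert np.allclose(T + Y - Z, F_tilde)
```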

3. The Solution of the Problem

To establish the existence of the solutions G*, Y*, and Z* of (8), (11), and (12), we first give the following theorem.

Theorem 2 (see [<xref ref-type="bibr" rid="B20">35</xref>, <xref ref-type="bibr" rid="B34">36</xref>]).

Let M ⊆ Rs×t be a closed convex cone (i.e., M is a closed convex set and αu ∈ M for all u ∈ M and α ≥ 0). Let M* be the polar cone of M; that is, (14) M* = {y ∈ Rs×t | ⟨u, y⟩ ≤ 0, ∀u ∈ M}. Then every f ∈ Rs×t has a unique polar decomposition of the form (15) f = u^ + y^, u^ ∈ M, y^ ∈ M*, ⟨u^, y^⟩ = 0.

Theorem 2 implies that u^ is the projection of f onto M and y^ is the projection of f onto M*.

For problem (12), we define the following two matrix sets: (16) M = {Q ∈ Rs×t | Q = D(G - A+AGBB+)E - Z, Z ≥ 0, G ∈ Rm×n}, N = {Y ∈ Rs×t | DTYET - A+ADTYETBB+ = 0, Y ≥ 0}. Now we will prove that M is a closed convex cone and that N = M*.

Lemma 3.

The matrix set M is a closed convex cone in the Hilbert space Rs×t.

Proof.

For all Q ∈ M, there exist Z ≥ 0 and G ∈ Rm×n such that Q = D(G - A+AGBB+)E - Z. By the definition of the Kronecker product, we have (17) vec(Q) = [ET ⊗ D - (BB+E)T ⊗ (DA+A)]vec(G) - (I ⊗ I)vec(Z) = [ET ⊗ D - (ETBB+) ⊗ (DA+A)]([vec(G)]+ - [vec(-G)]+) - (I ⊗ I)vec(Z) = (ET ⊗ D - (ETBB+) ⊗ (DA+A), -ET ⊗ D + (ETBB+) ⊗ (DA+A), -I ⊗ I) · ([vec(G)]+T, [vec(-G)]+T, vec(Z)T)T = Hβ, where the second equality follows from the definition of the Moore-Penrose generalized inverse, and (18) H = (ET ⊗ D - (ETBB+) ⊗ (DA+A), -ET ⊗ D + (ETBB+) ⊗ (DA+A), -I ⊗ I), β = ([vec(G)]+T, [vec(-G)]+T, vec(Z)T)T. It is easy to see that β ≥ 0. As G and Z ≥ 0 are arbitrary, β is arbitrary as well. Let H = (h1, h2, …, hl) and β = (β1, β2, …, βl)T, where l = 2mn + st; then vec(Q) = Σi=1l βihi. Thus M is equivalent to the following set: (19) K = {q ∈ Rst | q = Σi=1l βihi, βi ≥ 0}. By the result in , we know that the set K is a closed convex cone in the Hilbert space Rst. Hence M is a closed convex cone in the Hilbert space Rs×t.

Lemma 4.

The matrix set N is the polar cone of the matrix set M.

Proof.

By the definition of the polar cone, we get (20) M* = {Y ∈ Rs×t | ⟨Q, Y⟩ ≤ 0, ∀Q ∈ M}. So we just need to prove N = M*. Firstly, we prove N ⊆ M*.

For all Y ∈ N and Q = D(G - A+AGBB+)E - Z ∈ M, we have (21) ⟨Q, Y⟩ = ⟨D(G - A+AGBB+)E - Z, Y⟩ = ⟨DGE, Y⟩ - ⟨DA+AGBB+E, Y⟩ - ⟨Z, Y⟩ = ⟨G, DTYET⟩ - ⟨G, A+ADTYETBB+⟩ - ⟨Z, Y⟩ = ⟨G, DTYET - A+ADTYETBB+⟩ - ⟨Z, Y⟩ = -⟨Z, Y⟩ ≤ 0. Thus Y ∈ M*, and hence N ⊆ M*. Conversely, let Y ∈ M*. Taking G = 0 in (21) shows that ⟨Z, Y⟩ ≥ 0 for every Z ≥ 0, so Y ≥ 0. Taking Z = 0 and G = DTYET - A+ADTYETBB+ in (21) yields ‖DTYET - A+ADTYETBB+‖2 ≤ 0, so DTYET - A+ADTYETBB+ = 0; that is, Y ∈ N. Thus M* ⊆ N, and therefore N = M*.

Theorem 5.

Assume that the matrices G* and Z* solve (12). Define matrices Q* ∈ Rs×t and Y* ∈ Rs×t as (24) Q* = D(G* - A+AG*BB+)E - Z*, Y* = [F~ - D(G* - A+AG*BB+)E]+. Then F~ = Q* + Y*, Q* ∈ M, Y* ∈ N, and ⟨Q*, Y*⟩ = 0; namely, Q* and Y* form the polar decomposition of F~.

Proof.

As G* and Z* solve (12), we have Z* = [D(G* - A+AG*BB+)E - F~]+. Then (25) Q* = D(G* - A+AG*BB+)E - [D(G* - A+AG*BB+)E - F~]+ = D(G* - A+AG*BB+)E - F~ - [D(G* - A+AG*BB+)E - F~]+ + F~ = -[F~ - D(G* - A+AG*BB+)E]+ + F~ = F~ - Y*. Thus F~ = Q* + Y*.

By Lemmas 3 and 4 and Theorem 2, we get that F~ has a unique polar decomposition with respect to M and N; that is, there exist unique Q^ ∈ M and Y^ ∈ N such that F~ = Q^ + Y^ and ⟨Q^, Y^⟩ = 0.

Consider optimization problem (12). The objective function in (12) is (26) F(G, Z) = ‖D(G - A+AGBB+)E - Z - F~‖2 = ‖D(G - A+AGBB+)E - Z - Q^ - Y^‖2 = ‖D(G - A+AGBB+)E - Z - Q^‖2 + ‖Y^‖2 - 2⟨D(G - A+AGBB+)E - Z - Q^, Y^⟩ = ‖D(G - A+AGBB+)E - Z - Q^‖2 + ‖Y^‖2 - 2⟨D(G - A+AGBB+)E, Y^⟩ + 2⟨Z, Y^⟩ (using ⟨Q^, Y^⟩ = 0) = ‖D(G - A+AGBB+)E - Z - Q^‖2 + ‖Y^‖2 - 2⟨G, DTY^ET - A+ADTY^ETBB+⟩ + 2⟨Z, Y^⟩ = ‖D(G - A+AGBB+)E - Z - Q^‖2 + ‖Y^‖2 + 2⟨Z, Y^⟩ ≥ ‖Y^‖2. Since G* and Z* solve (12), D(G* - A+AG*BB+)E - Z* ∈ M, and Q^ ∈ M, we have Q^ = D(G* - A+AG*BB+)E - Z* = Q* ∈ M. Thus Y* = F~ - Q* = F~ - Q^ = Y^ ∈ N. Then ⟨Q*, Y*⟩ = ⟨Q^, Y^⟩ = 0. This completes the proof.

Remark 6.

Theorem 5 implies that if a pair of matrices G* and Z* solves optimization problem (12), then D(G* - A+AG*BB+)E - Z* and [F~ - D(G* - A+AG*BB+)E]+ are the projections of F~ onto M and N, respectively. Conversely, by Theorem 2 we get that F~ has a unique polar decomposition of the form F~ = Q* + Y*, Q* ∈ M, Y* ∈ N. By the definition of M, there exist G* and Z* ≥ 0 such that Q* = D(G* - A+AG*BB+)E - Z* and Y* = [F~ - D(G* - A+AG*BB+)E]+. Moreover, G*, Z*, and Y* solve optimization problem (11). Thus problem (11) is solvable.

By the above analysis, we get the following theorem immediately.

Theorem 7.

Problem (1) is solvable if and only if AA+CB+B=C and Y*=[F~-D(G*-A+AG*BB+)E]+=0, where G* and Z* are the solutions of optimization problem (12).

4. Iterative Method and Convergence Analysis

In this section, we present an iteration method to solve (1) and give the convergence analysis. We are now in a position to give our algorithm to compute the solutions G*, Y*, and Z* of (8), (11), and (12).

Algorithm 8 (an iteration method for (<xref ref-type="disp-formula" rid="EEq1.1">1</xref>)).

Step 0. Input matrices A, B, C, D, E, and F. Choose the initial matrix G0. Compute F~=F-DA+CB+E, Y0=[F~-D(G0-A+AG0BB+)E]+, and Z0=[D(G0-A+AG0BB+)E-F~]+. Take the stopping criterion ɛ>0. Set k:=0.

Step 1. Find a solution Wk of the least squares problem (27) minimize ‖D(W - A+AWBB+)E - Yk‖.

Step 2. Update the sequences (28)Gk+1=Gk+Wk,Yk+1=[F~-D(Gk+1-A+AGk+1BB+)E]+,Zk+1=[D(Gk+1-A+AGk+1BB+)E-F~]+.

Step 3. If ‖Yk+1 - Yk‖ ≤ ɛ or ‖Zk+1 - Zk‖ ≤ ɛ, then stop; otherwise, set k := k + 1 and go to Step 1.
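A compact transcription of Algorithm 8 might look as follows in NumPy (a sketch under assumptions: the function name is ours, the inner least squares problem (27) is solved by a vectorized `numpy.linalg.lstsq` call on its Kronecker form rather than by the modified conjugate gradient method used later in the paper, and only the ‖Yk+1 - Yk‖ part of the Step 3 test is implemented):

```python
import numpy as np

def iterate_axb_inequality(A, B, C, D, E, F, tol=1e-12, max_iter=500):
    """Sketch of Algorithm 8 (hypothetical helper name).

    Returns (X, Y): X solves AXB = C with DXE >= F whenever Y == 0
    (cf. Theorem 7); otherwise Y is a smallest nonnegative deviation."""
    Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
    m, n = A.shape[1], B.shape[0]
    P, S = Ap @ A, B @ Bp                    # projectors A+A and BB+
    Ft = F - D @ Ap @ C @ Bp @ E             # F~ = F - D A+ C B+ E
    op = lambda G: D @ (G - P @ G @ S) @ E   # G -> D(G - A+ A G B B+)E
    # Kronecker form: vec(op(W)) = (E^T (x) D - (BB+E)^T (x) (DA+A)) vec(W)
    M = np.kron(E.T, D) - np.kron((S @ E).T, D @ P)
    G = np.zeros((m, n))                     # Step 0: null initial matrix
    Y = np.maximum(Ft - op(G), 0.0)
    for _ in range(max_iter):
        # Step 1: least squares problem (27), here via lstsq
        w, *_ = np.linalg.lstsq(M, Y.flatten(order='F'), rcond=None)
        G = G + w.reshape(m, n, order='F')   # Step 2: G_{k+1} = G_k + W_k
        Y_new = np.maximum(Ft - op(G), 0.0)
        if np.linalg.norm(Y_new - Y) <= tol: # Step 3 (Y-criterion only)
            Y = Y_new
            break
        Y = Y_new
    X = Ap @ C @ Bp + G - P @ G @ S          # general solution (3)
    return X, Y
```

On a solvable instance the returned Y is zero and X satisfies both AXB = C and DXE ≥ F, mirroring Theorem 7.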

Next we give the following lemma.

Lemma 9.

Rs×t is the direct sum of M¯ and N¯; that is, (29) Rs×t = M¯ ⊕ N¯, where (30) M¯ = {D(X - A+AXBB+)E | X ∈ Rm×n}, N¯ = {Y ∈ Rs×t | DTYET - A+ADTYETBB+ = 0}.

Proof.

Obviously, M¯ and N¯ are linear subspaces of Rs×t. By the orthogonal decomposition theorem in Hilbert space, we obtain Rs×t = M¯ ⊕ M¯⊥, where M¯⊥ is the orthogonal complement of M¯. So we just need to prove N¯ = M¯⊥.

We prove N¯ ⊆ M¯⊥ firstly. For all Y ∈ N¯ and D(X - A+AXBB+)E ∈ M¯, we have (31) ⟨D(X - A+AXBB+)E, Y⟩ = ⟨DXE, Y⟩ - ⟨DA+AXBB+E, Y⟩ = ⟨X, DTYET⟩ - ⟨X, (DA+A)TY(BB+E)T⟩ = ⟨X, DTYET⟩ - ⟨X, A+ADTYETBB+⟩ = ⟨X, DTYET - A+ADTYETBB+⟩ = 0, where the third equality follows from the definition of the Moore-Penrose generalized inverse. Hence Y ∈ M¯⊥; namely, N¯ ⊆ M¯⊥.

Then we prove M¯⊥ ⊆ N¯. For all Y ∈ M¯⊥ and D(X - A+AXBB+)E ∈ M¯, we have ⟨D(X - A+AXBB+)E, Y⟩ = 0. In the same way, we get ⟨X, DTYET - A+ADTYETBB+⟩ = 0. As X ∈ Rm×n is arbitrary, we take X = DTYET - A+ADTYETBB+. Then (32) ⟨DTYET - A+ADTYETBB+, DTYET - A+ADTYETBB+⟩ = 0. So DTYET - A+ADTYETBB+ = 0; that is, Y ∈ N¯. Thus M¯⊥ ⊆ N¯.

From the above, we get N¯ = M¯⊥. Therefore, Rs×t = M¯ ⊕ N¯.

Now we present the convergence theorem.

Theorem 10.

Let F~ = Q* + Y* be the unique polar decomposition of F~. Then (33) limk→∞ Yk = Y*, limk→∞ (D(Gk - A+AGkBB+)E - Zk) = Q*.

Proof.

Since the matrix Wk solves (27), we have ‖D(Wk - A+AWkBB+)E - Yk‖ ≤ ‖Yk‖. This together with Algorithm 8 yields (34) ‖Yk+1‖ = ‖[F~ - D(Gk+1 - A+AGk+1BB+)E]+‖ = ‖[F~ - D(Gk - A+AGkBB+)E - D(Wk - A+AWkBB+)E]+‖ = ‖[Yk - Zk - D(Wk - A+AWkBB+)E]+‖ ≤ ‖[Yk - D(Wk - A+AWkBB+)E]+‖ ≤ ‖Yk - D(Wk - A+AWkBB+)E‖ ≤ ‖Yk‖, where the first inequality follows from Zk ≥ 0 and the second inequality follows from the nonexpansive property of the projection. This implies that the sequence {‖Yk‖} is monotonically decreasing and bounded from below. So there exists a constant α ≥ 0 such that limk→∞ ‖Yk‖ = α. Furthermore, {Yk} is bounded, so {Yk} has at least one cluster point. Next we show that any cluster point of the sequence {Yk} is equal to Y*; consequently, {Yk} converges to Y*.

Let Y~ be any cluster point of the sequence {Yk}. Without loss of generality, we suppose limk→∞ Yk = Y~. Obviously, Y~ ≥ 0. It follows from Lemma 9 that Yk has a unique orthogonal decomposition of the form (35) Yk = Y^k + Y~k, Y^k ∈ M¯, Y~k ∈ N¯. Moreover, by (27), Wk satisfies D(Wk - A+AWkBB+)E = Y^k. Thus ‖Yk - D(Wk - A+AWkBB+)E‖ = ‖Y~k‖, and (36) ‖Yk+1‖ ≤ ‖Yk - D(Wk - A+AWkBB+)E‖ = ‖Y~k‖ ≤ ‖Yk‖. Letting k → ∞, we get limk→∞ ‖Y~k‖ = α. This together with ‖Yk‖2 = ‖Y^k‖2 + ‖Y~k‖2 yields (37) limk→∞ ‖Y^k‖ = 0. So limk→∞ Y^k = 0. Therefore (38) limk→∞ Y~k = limk→∞ (Yk - Y^k) = Y~. Since Y~k ∈ N¯ and N¯ is closed, Y~ ∈ N¯ as well. This together with Y~ ≥ 0 implies that Y~ ∈ N.

Since (39) D(Gk - A+AGkBB+)E - Zk = D(Gk - A+AGkBB+)E - [D(Gk - A+AGkBB+)E - F~]+ = F~ - Yk, we get limk→∞ (D(Gk - A+AGkBB+)E - Zk) = limk→∞ (F~ - Yk) = F~ - Y~. By Lemma 3 and D(Gk - A+AGkBB+)E - Zk ∈ M, we obtain F~ - Y~ ∈ M. By the definition of M, there exist G~ ∈ Rm×n and Z~ ≥ 0 such that F~ - Y~ = D(G~ - A+AG~BB+)E - Z~. Let Q~ = D(G~ - A+AG~BB+)E - Z~. Then F~ = Q~ + Y~ is the unique polar decomposition of F~. Hence Q~ = Q* and Y~ = Y*. Furthermore, (40) limk→∞ (D(Gk - A+AGkBB+)E - Zk) = F~ - Y~ = F~ - Y* = Q*. This completes the proof of the theorem.

5. Numerical Experiments

In this section, we present two numerical examples to illustrate the efficiency and the performance of Algorithm 8. Firstly, we consider least squares problem (27) in Algorithm 8.

By the definition of the Kronecker product, least squares problem (27) in Algorithm 8 is equivalent to (41) minimize ‖(ET ⊗ D - E1T ⊗ D1)vec(W) - vec(Yk)‖, where D1 = DA+A and E1 = BB+E. It is well known that the normal equation of problem (41) is (42) (ET ⊗ D - E1T ⊗ D1)T[(ET ⊗ D - E1T ⊗ D1)vec(W) - vec(Yk)] = 0. Notice that (43) (ET ⊗ D - E1T ⊗ D1)T(ET ⊗ D - E1T ⊗ D1) = (E ⊗ DT - E1 ⊗ D1T)(ET ⊗ D - E1T ⊗ D1) = (E ⊗ DT)(ET ⊗ D) - (E ⊗ DT)(E1T ⊗ D1) - (E1 ⊗ D1T)(ET ⊗ D) + (E1 ⊗ D1T)(E1T ⊗ D1) = (EET) ⊗ (DTD) - (EE1T) ⊗ (DTD1) - (E1ET) ⊗ (D1TD) + (E1E1T) ⊗ (D1TD1), and therefore (42) is equivalent to (44) DTDWEET - DTD1WE1ET - D1TDWEE1T + D1TD1WE1E1T = DTYkET - D1TYkE1T. Substituting the definitions of D1 and E1 into the above equation, we get (45) DTDWEET - DTDA+AWBB+EET - A+ADTDWEETBB+ + A+ADTDA+AWBB+EETBB+ = DTYkET - A+ADTYkETBB+, which is the normal equation of problem (27). Let (46) L(W) = DTDWEET - DTDA+AWBB+EET - A+ADTDWEETBB+ + A+ADTDA+AWBB+EETBB+ and Q = DTYkET - A+ADTYkETBB+. Then problem (27) is equivalent to L(W) = Q, which can be solved by the modified conjugate gradient method (see ).

Algorithm 11 (modified conjugate gradient method).

Step 0. Input matrices A, B, C, D, E, F, and Yk. Choose the initial matrix W(0). Compute L(W(0)), R(0)=Q-L(W(0)), R~(0)=L(R(0)), and P(0)=R~(0). Take the stopping criterion ϵ2>0. Set j:=0.

Step 1. If ‖R(j)‖ ≤ ϵ2, then stop; otherwise, set j := j + 1 and go to Step 2.

Step 2. Update the sequences (47) W(j) = W(j-1) + (‖R(j-1)‖2/‖P(j-1)‖2)P(j-1), R(j) = Q - L(W(j)), R~(j) = L(R(j)), P(j) = R~(j) + (‖R(j)‖2/‖R(j-1)‖2)P(j-1).
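Equation (46) says that L is the normal operator of the linear map W ↦ D(W - A+AWBB+)E, that is, that map followed by its adjoint, with Q the adjoint applied to Yk. The NumPy sketch below (function names and test matrices are ours, not from the paper) implements L through this adjoint form and transcribes Algorithm 11 as stated:

```python
import numpy as np

def normal_op(W, A, B, D, E):
    """L(W) of (46) via the adjoint form: with T = D(W - A+ A W BB+)E,
    L(W) = D^T T E^T - A+A (D^T T E^T) BB+."""
    P = np.linalg.pinv(A) @ A        # projector A+A
    S = B @ np.linalg.pinv(B)        # projector BB+
    T = D @ (W - P @ W @ S) @ E
    U = D.T @ T @ E.T
    return U - P @ U @ S

def mcg(A, B, D, E, Yk, W0, eps=1e-10, max_iter=200):
    """Algorithm 11 (modified conjugate gradient) for L(W) = Q,
    Q = D^T Yk E^T - A+A D^T Yk E^T BB+; the system is consistent
    because it is a normal equation."""
    P = np.linalg.pinv(A) @ A
    S = B @ np.linalg.pinv(B)
    U = D.T @ Yk @ E.T
    Q = U - P @ U @ S
    W = W0.copy()
    R = Q - normal_op(W, A, B, D, E)          # R(0) = Q - L(W(0))
    Pdir = normal_op(R, A, B, D, E)           # P(0) = R~(0) = L(R(0))
    for _ in range(max_iter):
        if np.linalg.norm(R) <= eps:          # Step 1 stopping test
            break
        alpha = np.linalg.norm(R) ** 2 / np.linalg.norm(Pdir) ** 2
        W = W + alpha * Pdir                  # Step 2 updates (47)
        R_new = Q - normal_op(W, A, B, D, E)
        beta = np.linalg.norm(R_new) ** 2 / np.linalg.norm(R) ** 2
        Pdir = normal_op(R_new, A, B, D, E) + beta * Pdir
        R = R_new
    return W
```

Since L is self-adjoint (it is a map composed with its own adjoint), ⟨L(W), V⟩ = ⟨W, L(V)⟩ for all W and V, which is a useful sanity check on any implementation.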

In our experiments, all computations were performed on a PC with a Pentium Dual-Core CPU E5800 @ 2.40 GHz, and all programs were implemented in MATLAB R2011b. The initial matrix G0 in Algorithm 8 is taken as the null matrix, and the termination criterion is ‖Yk+1 - Yk‖F ≤ ɛ = 2.22 × 10^-16 or ‖Zk+1 - Zk‖F ≤ 2.22 × 10^-16.

Example 12.

Matrices A, B, C, D, E, and F are given as follows (rows separated by semicolons): (48) A = (1.7, 1.2, -1.7, 1.9; 3.4, 2.4, -3.4, 3.8; -2.1, -1.2, 2.3, -2.7; 2.1, 1.2, -2.3, 2.7), B = (1.2, 0.3, -0.3, 0.1, -0.1; 0.1, 1.3, 0.2, -0.1, 0.2; 0.2, 0.2, 1.2, 0.2, -0.2; -0.1, -0.2, 0.2, 1.1, 0.2; -0.2, -0.3, 0.3, -0.2, 1.2), C = (7.22, 6.47, 5.35, 5.44, 5.84; 14.44, 12.94, 10.70, 10.88, 11.68; -8.92, -7.93, -6.51, -6.84, -6.70; 8.92, 7.93, 6.51, 6.84, 6.70), D = (-2.9, 1.3, -2.4, 2.1; 2.9, -1.3, 2.4, -2.1; 2.2, 2.2, 2.3, 2.9; 2.2, 2.2, 1.4, 1.7), E = (-1.9, 2.3, -3.4, 2.1, -2.6; 1.9, -2.3, 3.4, -2.1, 2.6; 2.2, 2.4, -4.6, 3.8, -3.4; -1.1, -1.2, 2.3, -1.9, 1.7; -3.2, -1.3, 5.3, -3.7, 1.8), F = (-5.8, 3.9, -8.2, 2.7, -8.1; 4.8, -6.9, 6.2, -5.7, 6.1; -20.0, 2.2, 9.0, -12.0, -3.9; -21.0, -0.8, 2.6, -15.0, 0.12).

Computing by Algorithm 8, we have Y* = 0, (49) Z* = (0.9936, 1.3564, 2.0000, 1.3722, 1.9997; 0.0064, 1.6436, 0.0000, 1.6278, 0.0003; 1.7826, 0.0000, 9.8265, 0.9252, 1.2106; 2.6781, 0.3420, 19.0882, 1.2350, 0.8386), and a solution G~ to inequality (5) as follows: (50) G~ = (-0.1584, 0.1584, -0.1071, 0.0536, -0.1952; 0.0960, -0.0960, 0.1873, -0.0936, 0.2442; -0.0132, 0.0132, 0.5250, -0.2625, 0.5331; 0.0693, -0.0693, 0.4474, -0.2237, 0.4974). It follows from AA+CB+B = C and Y* = [F~ - D(G~ - A+AG~BB+)E]+ = 0 that problem (1) is solvable. Substituting G~ into (3), we obtain a solution X~ to problem (1): (51) X~ = A+CB+ + G~ - A+AG~BB+ = (1.0456, 1.1341, 0.5051, 0.9876, 0.8328; 1.5637, 1.1880, 0.8719, 0.9651, 1.9153; -0.7795, -0.5409, 0.0916, -0.9136, 0.1747; 0.6426, 0.2772, 0.8161, 0.3210, 0.4646). Furthermore, denote (52) δk = ‖Yk+1 - Yk‖F, ρk = ‖Zk+1 - Zk‖F. The iterative error curves are shown in Figure 1.

Figure 1: ‖Yk+1 - Yk‖F and ‖Zk+1 - Zk‖F for Example 12.

Example 13.

Given matrices A, B, and C the same as in Example 12, D ∈ R4×4 and E ∈ R5×5 are identity matrices, and F is given as follows: (53) F = (0.1, 0.1, 0.1, 0.1, 0.1; 0.1, 0.1, 0.1, 0.1, 0.1; 0.1, 0.1, 0.1, 0.1, 0.1; 0.1, 0.1, 0.1, 0.1, 0.1).

Following Algorithm 8, we get Y* = 0, (54) Z* = (1.3776, 1.0824, 0.6808, 1.0713, 1.0324; 1.3952, 1.2048, 0.6016, 0.9825, 1.6110; 0.0000, 0.0000, 0.0000, 0.0000, 0.0000; 0.9862, 0.6338, 0.5846, 0.8894, 0.1588), and a solution G~ to inequality (5) as follows: (55) G~ = (0.2736, 0.2066, 0.1685, 0.2373, 0.1043; 0.0275, 0.0208, 0.0169, 0.0238, 0.0398; 0.8663, 0.6541, 0.5335, 0.7511, 0.4584; 0.5129, 0.3873, 0.3158, 0.4447, 0.2916). It follows from AA+CB+B = C and Y* = [F~ - D(G~ - A+AG~BB+)E]+ = 0 that problem (1) is solvable. Substituting G~ into (3), we obtain a solution X~ to problem (1): (56) X~ = A+CB+ + G~ - A+AG~BB+ = (1.4776, 1.1824, 0.7808, 1.1713, 1.1324; 1.4952, 1.3048, 0.7016, 1.0825, 1.7110; 0.1000, 0.1000, 0.1000, 0.1000, 0.1000; 1.0862, 0.7338, 0.6846, 0.9894, 0.2588). Furthermore, the iterative error curves are shown in Figure 2.

Figure 2: ‖Yk+1 - Yk‖F and ‖Zk+1 - Zk‖F for Example 13.

6. Conclusion

In this paper, we proposed Algorithm 8 to find solutions to the matrix equation AXB=C subject to a matrix inequality constraint DXE ≥ F, and global convergence results were obtained. The least squares problem (27) arising in Algorithm 8 is solved by the modified conjugate gradient method. Numerical results confirm the good theoretical properties of our approach.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Authors’ Contribution

All authors contributed equally and significantly to the writing of this paper. All authors read and approved the final paper.

Acknowledgments

The project is supported by the National Natural Science Foundation of China (Grant nos. 11071041, 11201074), Fujian Natural Science Foundation (Grant no. 2013J01006), the University Special Fund Project of Fujian (Grant no. JK2013060), and R&D of Key Instruments and Technologies for Deep Resources Prospecting (the National R&D Projects for Key Scientific Instruments) under Grant no. ZDYZ2012-1-02-04.

References

[1] H. Dai, "On the symmetric solutions of linear matrix equations," Linear Algebra and Its Applications, vol. 131, pp. 1–7, 1990. doi:10.1016/0024-3795(90)90370-R
[2] K. W. E. Chu, "Symmetric solutions of linear matrix equations by matrix decompositions," Linear Algebra and Its Applications, vol. 119, pp. 35–50, 1989. doi:10.1016/0024-3795(89)90067-0
[3] F. J. H. Don, "On the symmetric solutions of a linear matrix equation," Linear Algebra and Its Applications, vol. 93, pp. 1–7, 1987. doi:10.1016/S0024-3795(87)90308-9
[4] Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, "Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations," Numerical Linear Algebra with Applications, vol. 13, no. 10, pp. 801–823, 2006. doi:10.1002/nla.496
[5] W. F. Trench, "Characterization and properties of matrices with generalized symmetry or skew symmetry," Linear Algebra and Its Applications, vol. 377, pp. 207–218, 2004. doi:10.1016/j.laa.2003.07.013
[6] W. F. Trench, "Hermitian, Hermitian R-symmetric, and Hermitian R-skew symmetric Procrustes problems," Linear Algebra and Its Applications, vol. 387, pp. 83–98, 2004. doi:10.1016/j.laa.2004.01.018
[7] W. F. Trench, "Inverse eigenproblems and associated approximation problems for matrices with generalized symmetry or skew symmetry," Linear Algebra and Its Applications, vol. 380, pp. 199–211, 2004. doi:10.1016/j.laa.2003.10.007
[8] W. F. Trench, "Minimization problems for (R,S)-symmetric and (R,S)-skew symmetric matrices," Linear Algebra and Its Applications, vol. 389, pp. 23–31, 2004. doi:10.1016/j.laa.2004.03.035
[9] D. X. Xie, Y. P. Sheng, and X. Hu, "The least-squares solutions of inconsistent matrix equation over symmetric and antipersymmetric matrices," Applied Mathematics Letters, vol. 16, no. 4, pp. 589–598, 2003. doi:10.1016/S0893-9659(03)00041-7
[10] L. Zhao, X. Hu, and L. Zhang, "Least squares solutions to AX=B for bisymmetric matrices under a central principal submatrix constraint and the optimal approximation," Linear Algebra and Its Applications, vol. 428, no. 4, pp. 871–880, 2008. doi:10.1016/j.laa.2007.08.019
[11] Q. Wang, J. Sun, and S. Li, "Consistency for bi(skew)symmetric solutions to systems of generalized Sylvester equations over a finite central algebra," Linear Algebra and Its Applications, vol. 353, no. 1–3, pp. 169–182, 2002. doi:10.1016/S0024-3795(02)00303-8
[12] M. Dehghan and M. Hajarian, "An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices," Applied Mathematical Modelling, vol. 34, no. 3, pp. 639–654, 2010. doi:10.1016/j.apm.2009.06.018
[13] M. Dehghan and M. Hajarian, "Analysis of an iterative algorithm to solve the generalized coupled Sylvester matrix equations," Applied Mathematical Modelling, vol. 35, no. 7, pp. 3285–3300, 2011. doi:10.1016/j.apm.2011.01.022
[14] C. Song, G. Chen, and L. Zhao, "Iterative solutions to coupled Sylvester-transpose matrix equations," Applied Mathematical Modelling, vol. 35, no. 10, pp. 4675–4683, 2011. doi:10.1016/j.apm.2011.03.038
[15] C. Q. Gu and H. J. Qian, "Skew-symmetric methods for nonsymmetric linear systems with multiple right-hand sides," Journal of Computational and Applied Mathematics, vol. 223, no. 2, pp. 567–577, 2009. doi:10.1016/j.cam.2008.01.001
[16] G. Konghua, X. Hu, and L. Zhang, "A new iteration method for the matrix equation AX=B," Applied Mathematics and Computation, vol. 187, no. 2, pp. 1434–1441, 2007. doi:10.1016/j.amc.2006.09.059
[17] K. Jbilou, "Smoothing iterative block methods for linear systems with multiple right-hand sides," Journal of Computational and Applied Mathematics, vol. 107, no. 1, pp. 97–109, 1999. doi:10.1016/S0377-0427(99)00083-7
[18] S. Karimi and F. Toutounian, "The block least squares method for solving nonsymmetric linear systems with multiple right-hand sides," Applied Mathematics and Computation, vol. 177, no. 2, pp. 852–862, 2006. doi:10.1016/j.amc.2005.11.038
[19] F. Li, L. Gong, X. Hu, and L. Zhang, "Successive projection iterative method for solving matrix equation AX=B," Journal of Computational and Applied Mathematics, vol. 234, no. 8, pp. 2405–2410, 2010. doi:10.1016/j.cam.2010.03.008
[20] F. Toutounian and S. Karimi, "Global least squares method (Gl-LSQR) for solving general linear systems with several right-hand sides," Applied Mathematics and Computation, vol. 178, no. 2, pp. 452–460, 2006. doi:10.1016/j.amc.2005.07.025
[21] Q. Wang, "A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity," Linear Algebra and Its Applications, vol. 384, pp. 43–54, 2004. doi:10.1016/j.laa.2003.12.039
[22] G. X. Huang, F. Yin, and K. Guo, "An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB=C," Journal of Computational and Applied Mathematics, vol. 212, no. 2, pp. 231–244, 2008. doi:10.1016/j.cam.2006.12.005
[23] J. K. Baksalary and R. Kala, "The matrix equation AXB+CYD=E," Linear Algebra and Its Applications, vol. 30, pp. 141–147, 1980. doi:10.1016/0024-3795(80)90189-5
[24] S. K. Mitra, "Common solutions to a pair of linear matrix equations A1XB1=C1 and A2XB2=C2," Proceedings of the Cambridge Philosophical Society, vol. 74, pp. 213–216, 1973.
[25] S. K. Mitra, "A pair of simultaneous linear matrix equations A1XB1=C1, A2XB2=C2 and a matrix programming problem," Linear Algebra and Its Applications, vol. 131, pp. 107–123, 1990. doi:10.1016/0024-3795(90)90377-O
[26] N. Shinozaki and M. Sibuya, "Consistency of a pair of matrix equations with an application," Keio Science and Technology Reports, vol. 27, no. 10, pp. 141–146, 1975.
[27] A. Navarra, P. L. Odell, and D. M. Young, "A representation of the general common solution to the matrix equations A1XB1=C1 and A2XB2=C2 with applications," Computers & Mathematics with Applications, vol. 41, no. 7-8, pp. 929–935, 2001. doi:10.1016/S0898-1221(00)00330-8
[28] M. Hajarian, "Matrix form of the CGS method for solving general coupled matrix equations," Applied Mathematics Letters, vol. 34, pp. 37–42, 2014. doi:10.1016/j.aml.2014.03.013
[29] M. Hajarian, "Matrix iterative methods for solving the Sylvester-transpose and periodic Sylvester matrix equations," Journal of the Franklin Institute, vol. 350, no. 10, pp. 3328–3341, 2013. doi:10.1016/j.jfranklin.2013.07.008
[30] M. Hajarian, "Matrix form of the Bi-CGSTAB method for solving the coupled Sylvester matrix equations," IET Control Theory & Applications, vol. 7, no. 14, pp. 1828–1833, 2013. doi:10.1049/iet-cta.2013.0101
[31] M. Hajarian, "Solving the general coupled and the periodic coupled matrix equations via the extended QMRCGSTAB algorithms," Computational & Applied Mathematics, vol. 33, no. 2, pp. 349–362, 2014. doi:10.1007/s40314-013-0065-z
[32] Z.-Y. Peng, L. Wang, and J.-J. Peng, "The solutions of matrix equation AX=B over a matrix inequality constraint," SIAM Journal on Matrix Analysis and Applications, vol. 33, no. 2, pp. 554–568, 2012. doi:10.1137/100808678
[33] J. F. Li, Z. Y. Peng, and J. J. Peng, "Bisymmetric solution of the matrix equation AX=B under a matrix inequality constraint," Mathematica Numerica Sinica, vol. 35, no. 2, pp. 137–150, 2013.
[34] K. Y. Zhang and Z. Xu, Numerical Algebra, Science Press, Beijing, China, 2006 (in Chinese).
[35] J. J. Moreau, "Décomposition orthogonale d'un espace hilbertien selon deux cônes mutuellement polaires," Comptes Rendus de l'Académie des Sciences, vol. 225, pp. 238–240, 1962.
[36] A. P. Wierzbicki and S. Kurcyusz, "Projection on a cone, penalty functionals and duality theory for problems with inequality constraints in Hilbert space," SIAM Journal on Control and Optimization, vol. 15, no. 1, pp. 25–56, 1977. doi:10.1137/0315003
[37] P. G. Ciarlet, Introduction to Numerical Linear Algebra and Optimisation, Cambridge University Press, Cambridge, UK, 1989.