An efficient iterative algorithm is presented to solve the system of linear matrix equations A1X1B1+A2X2B2=E, C1X1D1+C2X2D2=F with real unknown matrices X1 and X2. The iterative algorithm automatically determines the solvability of the system. When the system is consistent, for any initial matrices X10 and X20, a solution can be obtained in the absence of roundoff errors, and the least norm solution can be obtained by choosing a special kind of initial matrix. In addition, the unique optimal approximation solutions X^1 and X^2 to the given matrices X~1 and X~2 in the Frobenius norm can be obtained by finding the least norm solution of a new pair of matrix equations A1X¯1B1+A2X¯2B2=E¯, C1X¯1D1+C2X¯2D2=F¯, where E¯=E-A1X~1B1-A2X~2B2, F¯=F-C1X~1D1-C2X~2D2. The given numerical example demonstrates that the iterative algorithm is efficient; in particular, it remains efficient even when the dimensions of the parameter matrices A1,A2,B1,B2,C1,C2,D1,D2 are large.

1. Introduction

Throughout the paper, we denote the set of all m×n real matrices by Rm×n, the transpose of a matrix A by AT, the identity matrix of order n by In, the Kronecker product of A and B by A⊗B, the mn×1 vector formed by the vertical concatenation of the respective columns of a matrix A∈Rm×n by vec(A), the trace of a matrix A by tr(A), and the Frobenius norm of a matrix A by ∥A∥, where ∥A∥=(tr(ATA))1/2.
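These definitions can be checked numerically. The following small NumPy sketch (our own illustration, with arbitrary values; not part of the paper) verifies the trace formula for the Frobenius norm and the column-stacking convention of vec(·):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Frobenius norm via the trace identity ||A|| = (tr(A^T A))^(1/2)
fro_via_trace = np.sqrt(np.trace(A.T @ A))

# vec(A): the columns of A stacked vertically -> column-major (Fortran) order
vec_A = A.flatten(order="F")  # [1., 3., 2., 4.]
```

Here `fro_via_trace` agrees with `np.linalg.norm(A, 'fro')`, and `order="F"` is what makes `flatten` match the vec(·) operator used later in the Kronecker formulation.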

In this paper, we consider the following two problems.

Problem 1.

For the given matrices A1∈Rp×k, A2∈Rp×m, B1∈Rr×q, B2∈Rn×q, C1∈Rs×k, C2∈Rs×m, D1∈Rr×t, D2∈Rn×t, E∈Rp×q, and F∈Rs×t, find X1∈Rk×r and X2∈Rm×n such that
(1)A1X1B1+A2X2B2=E,C1X1D1+C2X2D2=F.

Problem 2.

When Problem 1 is consistent, let S denote the solution set of the pair of matrix equations (1). For the given matrices X~1∈Rk×r, X~2∈Rm×n, find {X^1,X^2}∈S such that
(2)∥X^1-X~1∥2+∥X^2-X~2∥2=min{X1,X2}∈S(∥X1-X~1∥2+∥X2-X~2∥2).

Problem 2 is to find the optimal approximation solutions to the given matrices X~1,X~2 in the solution set of Problem 1. It occurs frequently in experiment design (see, for instance, [1]). In recent years, the matrix optimal approximation problem has been studied extensively (e.g., [2–13]).

Research on solving matrix equation pairs has been actively ongoing for more than 30 years. For instance, Mitra [14] gave conditions for the existence of a solution and a representation of the general common solution to AXB=E, CXD=F. Shinozaki and Sibuya [15] and van der Woude [16] discussed conditions for the existence of a common solution to AXB=E, CXD=F. Navarra et al. [5] derived necessary and sufficient conditions for the existence of a common solution to AXB=E, CXD=F. Yuan [13] obtained an analytical expression for the least-squares solutions of AXB=E, CXD=F by using the generalized singular value decomposition (GSVD) of matrices. Dehghan and Hajarian [17] presented some examples to motivate the study of the general coupled matrix equations ∑j=1lAijXjBij=Ci, i=1,2,…,l, and [18] constructed an iterative algorithm to solve the general coupled matrix equations ∑j=1pAijXjBij=Mi, i=1,2,…,p. Wang [19, 20] gave the centrosymmetric solution to the system of quaternion matrix equations A1X=C1, A3XB3=C3. Wang [21] also solved a system of matrix equations over arbitrary regular rings with identity.

Recently, some finite iterative algorithms have also been developed to solve matrix equations. Ding et al. [22, 23] and Xie et al. [24, 25] studied the iterative solutions of the matrix equations AXB=F and AiXBi=Fi and of the generalized Sylvester matrix equations AXB+CXD=F and AXB+CXTD=F; they presented gradient-based and least-squares-based iterative algorithms for the solutions. Li et al. [26, 27] and Zhou et al. [28, 29] considered iterative methods for some coupled linear matrix equations. Deng et al. [30] studied the consistency conditions and the general expressions for the Hermitian solutions of the matrix equations (AX,XB)=(C,D) and designed an iterative method for their Hermitian minimum norm solutions. Li and Wu [31] gave symmetric and skew-antisymmetric solutions to certain matrix equations A1X=C1, XB3=C3 over the real quaternion algebra H. For more studies on iterative algorithms for coupled matrix equations, we refer to [3, 10–12, 17, 32–37]. Peng et al. [6] presented iterative methods to obtain the symmetric solutions of AXB=E, CXD=F. Sheng and Chen [8] presented a finite iterative method for the case when AXB=E, CXD=F is consistent. Liao and Lei [38] presented an analytical expression of the least-squares solution with the minimum norm and an algorithm for AXB=E, CXD=F. Peng et al. [7] presented an algorithm for the least-squares reflexive solution. Dehghan and Hajarian [2] presented an iterative algorithm for solving a pair of matrix equations AXB=E, CXD=F over generalized centrosymmetric matrices. Cai and Chen [39] presented an iterative algorithm for the least-squares bisymmetric solutions of the matrix equations AXB=E, CXD=F. Yin and Huang [40] presented an iterative algorithm to solve the least-squares generalized reflexive solutions of the matrix equations AXB=E, CXD=F.

However, to our knowledge, there has been little work on finding the solutions to the system (1) by an iterative algorithm. In this paper, an efficient iterative algorithm is presented to solve the system (1) for real matrices X1,X2. The suggested iterative algorithm automatically determines the solvability of the equation pair (1). When the pair of equations is consistent, then, for any initial matrices X10 and X20, a solution can be obtained in the absence of roundoff errors, and the least norm solution can be obtained by choosing a special kind of initial matrix. In addition, the unique optimal approximation solutions X^1 and X^2 to the given matrices X~1 and X~2 in the Frobenius norm can be obtained by finding the least norm solution of a new pair of matrix equations A1X¯1B1+A2X¯2B2=E¯, C1X¯1D1+C2X¯2D2=F¯, where E¯=E-A1X~1B1-A2X~2B2, F¯=F-C1X~1D1-C2X~2D2. The given numerical examples demonstrate that our iterative algorithm is efficient. In particular, when the dimensions of the parameter matrices A1,A2,B1,B2,C1,C2,D1,D2 are large, our algorithm remains efficient, whereas the algorithm of [32] does not converge. That is, our algorithm has the merits of good numerical stability and ease of programming.

The rest of this paper is outlined as follows. In Section 2, we first propose an efficient iterative algorithm for solving Problem 1; then we give some properties of this iterative algorithm. We show that the algorithm can obtain a solution group (the least Frobenius norm solution group) for any (special) initial matrix group in the absence of roundoff errors. In Section 3, a numerical example is given to illustrate that our algorithm is quite efficient.

2. Iterative Algorithm for Solving Problems 1 and 2

In this section, we present the iterative algorithm for solving the system (1) and determining its consistency.

Algorithm 3.

(1) Input matrices A1∈Rp×k, A2∈Rp×m, B1∈Rr×q, B2∈Rn×q, C1∈Rs×k, C2∈Rs×m, D1∈Rr×t, D2∈Rn×t, E∈Rp×q, F∈Rs×t, X11∈Rk×r, and X21∈Rm×n (where X11, X21 are any initial matrices).

(2) Calculate
(3)E1=E-(A1X11B1+A2X21B2),F1=F-(C1X11D1+C2X21D2),P11=A1TE1B1T+C1TF1D1T,P21=A2TE1B2T+C2TF1D2T,β1=(tr[(E1)T(A1P11B1+A2P21B2)]+tr[(F1)T(C1P11D1+C2P21D2)])×(∥A1P11B1+A2P21B2∥2+∥C1P11D1+C2P21D2∥2)-1,ΔX11=β1P11,ΔX21=β1P21,k=1.

(3) If ΔX=diag(ΔX1k,ΔX2k)=0 (k=1,2,…), then stop. Otherwise,
(4)X1k+1=X1k+ΔX1k,X2k+1=X2k+ΔX2k.

(4) Calculate
(5)Ek+1=Ek-(A1ΔX1kB1+A2ΔX2kB2),Fk+1=Fk-(C1ΔX1kD1+C2ΔX2kD2),P1k+1=A1TEk+1B1T+C1TFk+1D1T,P2k+1=A2TEk+1B2T+C2TFk+1D2T,βk+1=(tr[(Ek+1)T(A1P1k+1B1+A2P2k+1B2)]+tr[(Fk+1)T(C1P1k+1D1+C2P2k+1D2)])×(∥A1P1k+1B1+A2P2k+1B2∥2+∥C1P1k+1D1+C2P2k+1D2∥2)-1,ΔX1k+1=βk+1P1k+1,ΔX2k+1=βk+1P2k+1,k=k+1.
Go to (3).
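The whole procedure can be transcribed into NumPy as follows. This is our own sketch of Algorithm 3 (the paper's computations are in MATLAB); the function and variable names, the tolerance, and the iteration cap are ours, not the paper's:

```python
import numpy as np

def solve_pair(A1, B1, A2, B2, C1, D1, C2, D2, E, F,
               X1=None, X2=None, tol=1e-10, max_iter=10000):
    """Iteratively solve A1 X1 B1 + A2 X2 B2 = E, C1 X1 D1 + C2 X2 D2 = F.

    A transcription of Algorithm 3. Starting from X1 = X2 = 0 yields
    the least Frobenius norm solution when the system is consistent.
    """
    if X1 is None:
        X1 = np.zeros((A1.shape[1], B1.shape[0]))  # X1 in R^{k x r}
    if X2 is None:
        X2 = np.zeros((A2.shape[1], B2.shape[0]))  # X2 in R^{m x n}

    # Step (2): initial residuals E1, F1
    Ek = E - (A1 @ X1 @ B1 + A2 @ X2 @ B2)
    Fk = F - (C1 @ X1 @ D1 + C2 @ X2 @ D2)
    for _ in range(max_iter):
        # Search directions P1k, P2k and their images under the two equations
        P1 = A1.T @ Ek @ B1.T + C1.T @ Fk @ D1.T
        P2 = A2.T @ Ek @ B2.T + C2.T @ Fk @ D2.T
        S = A1 @ P1 @ B1 + A2 @ P2 @ B2
        T = C1 @ P1 @ D1 + C2 @ P2 @ D2
        denom = np.linalg.norm(S) ** 2 + np.linalg.norm(T) ** 2
        if denom == 0:
            break  # gradient vanished: no further progress possible
        # Optimal step size beta_k from (7)
        beta = (np.trace(Ek.T @ S) + np.trace(Fk.T @ T)) / denom
        dX1, dX2 = beta * P1, beta * P2
        # Step (3): stop when the update Delta X is (numerically) zero
        if np.linalg.norm(dX1) < tol and np.linalg.norm(dX2) < tol:
            break
        X1, X2 = X1 + dX1, X2 + dX2
        # Step (4): update the residuals
        Ek = Ek - (A1 @ dX1 @ B1 + A2 @ dX2 @ B2)
        Fk = Fk - (C1 @ dX1 @ D1 + C2 @ dX2 @ D2)
    return X1, X2
```

Calling `solve_pair` with default (zero) initial matrices on a consistent system drives both residuals toward zero; the number of iterations needed depends on the conditioning of the underlying linear operator.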

Lemma 4.

In Algorithm 3, the choice of βk makes ∥diag(Ek+1,Fk+1)∥ reach its minimum and makes diag(Ek+1,Fk+1) and diag(A1ΔX1kB1+A2ΔX2kB2,C1ΔX1kD1+C2ΔX2kD2) orthogonal to each other.

Proof.

From Algorithm 3, we have
(6)∥diag(Ek+1,Fk+1)∥2=∥diag(Ek-(A1ΔX1kB1+A2ΔX2kB2),Fk-(C1ΔX1kD1+C2ΔX2kD2))∥2=∥diag(Ek-βk(A1P1kB1+A2P2kB2),Fk-βk(C1P1kD1+C2P2kD2))∥2=∥Ek-βk(A1P1kB1+A2P2kB2)∥2+∥Fk-βk(C1P1kD1+C2P2kD2)∥2=∥Ek∥2+∥Fk∥2-2[tr[(Ek)T(A1P1kB1+A2P2kB2)]+tr[(Fk)T(C1P1kD1+C2P2kD2)]]βk+[∥A1P1kB1+A2P2kB2∥2+∥C1P1kD1+C2P2kD2∥2](βk)2.
Since this is a quadratic in βk, ∥diag(Ek+1,Fk+1)∥ reaches its minimum at
(7)βk=(tr[(Ek)T(A1P1kB1+A2P2kB2)]+tr[(Fk)T(C1P1kD1+C2P2kD2)])×(∥A1P1kB1+A2P2kB2∥2+∥C1P1kD1+C2P2kD2∥2)-1.

On the other hand, if the choice of βk makes diag(Ek+1,Fk+1) and diag(A1ΔX1kB1+A2ΔX2kB2,C1ΔX1kD1+C2ΔX2kD2) orthogonal to each other, that is, tr[diag(Ek+1,Fk+1)Tdiag(A1ΔX1kB1+A2ΔX2kB2,C1ΔX1kD1+C2ΔX2kD2)]=0, we can have the same βk as (7).
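Lemma 4 is easy to verify numerically. The following NumPy sketch (our own illustration; the shapes and random data are arbitrary) builds one step of the iteration from given residuals Ek, Fk, computes the optimal βk from (7), and checks both the orthogonality and the minimizing property:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative problem data (p=q=s=t=3, k=r=m=n=2) and current residuals Ek, Fk
A1, C1 = rng.standard_normal((3, 2)), rng.standard_normal((3, 2))
B1, D1 = rng.standard_normal((2, 3)), rng.standard_normal((2, 3))
A2, C2 = rng.standard_normal((3, 2)), rng.standard_normal((3, 2))
B2, D2 = rng.standard_normal((2, 3)), rng.standard_normal((2, 3))
Ek, Fk = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# Search directions P1k, P2k and their images S, T under the two equations
P1 = A1.T @ Ek @ B1.T + C1.T @ Fk @ D1.T
P2 = A2.T @ Ek @ B2.T + C2.T @ Fk @ D2.T
S = A1 @ P1 @ B1 + A2 @ P2 @ B2
T = C1 @ P1 @ D1 + C2 @ P2 @ D2

# Optimal step size (7) and the updated residuals Ek+1, Fk+1
beta = (np.trace(Ek.T @ S) + np.trace(Fk.T @ T)) / (
    np.linalg.norm(S) ** 2 + np.linalg.norm(T) ** 2)
Ek1, Fk1 = Ek - beta * S, Fk - beta * T

# Inner product <diag(Ek+1,Fk+1), diag(beta*S, beta*T)>; zero by Lemma 4
inner = beta * (np.trace(Ek1.T @ S) + np.trace(Fk1.T @ T))

def sq_norm(b):
    # ||diag(Ek - b*S, Fk - b*T)||^2 as a function of the step size b
    return np.linalg.norm(Ek - b * S) ** 2 + np.linalg.norm(Fk - b * T) ** 2
```

Here `inner` vanishes up to roundoff, and `sq_norm(beta)` is no larger than `sq_norm` at any perturbed step size, matching the two characterizations of βk in the lemma.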

Theorem 5.

Algorithm 3 is convergent.

Proof.

From Algorithm 3 and Lemma 4 we have
(8)diag(Ek,Fk)=diag(Ek+1,Fk+1)+diag(A1ΔX1kB1+A2ΔX2kB2,C1ΔX1kD1+C2ΔX2kD2), and, since the two terms on the right-hand side are orthogonal by Lemma 4, ∥diag(Ek,Fk)∥2=∥diag(Ek+1,Fk+1)+diag(A1ΔX1kB1+A2ΔX2kB2,C1ΔX1kD1+C2ΔX2kD2)∥2=∥diag(Ek+1,Fk+1)∥2+∥diag(A1ΔX1kB1+A2ΔX2kB2,C1ΔX1kD1+C2ΔX2kD2)∥2,
such that
(9)∥diag(Ek+1,Fk+1)∥2=∥diag(Ek,Fk)∥2-∥diag(A1ΔX1kB1+A2ΔX2kB2,C1ΔX1kD1+C2ΔX2kD2)∥2≤∥diag(Ek,Fk)∥2.
From (9), the sequence ∥diag(Ek,Fk)∥2 is monotonically nonincreasing and bounded below by zero, so it converges; hence Algorithm 3 is convergent.

Lemma 6 (see [41]).

Suppose that the consistent system of linear equations My=b has a solution y0∈R(MT); then y0 is the least Frobenius norm solution of the system of linear equations.
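Lemma 6 can be illustrated numerically. In the following NumPy sketch (our own, with made-up data), a solution of the form y0 = MTz lies in R(MT) and coincides with the minimum norm solution given by the pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 5))    # underdetermined: infinitely many solutions
b = M @ rng.standard_normal(5)     # consistent right-hand side by construction

# A solution lying in R(M^T): y0 = M^T z, with z solving M M^T z = b
# (M has full row rank almost surely, so M M^T is invertible)
z = np.linalg.solve(M @ M.T, b)
y0 = M.T @ z

# The minimum norm solution via the Moore-Penrose pseudoinverse
y_pinv = np.linalg.pinv(M) @ b
```

Both `y0` and `y_pinv` satisfy My=b, and they agree entry by entry, as the lemma predicts.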

Theorem 7.

Assume that the system (1) is consistent. Let X11=A1TYB1T+C1TZD1T, X21=A2TYB2T+C2TZD2T be initial matrices where Y∈Rp×q, Z∈Rs×t are any initial matrices, or, especially, X11=0, X21=0; then the solution generated by Algorithm 3 is the least Frobenius norm solution to (1).

Proof.

If (1) is consistent, then, starting from X11=A1TYB1T+C1TZD1T, X21=A2TYB2T+C2TZD2T, Algorithm 3 generates the iterative solution pair X1k,X2k of (1) as follows:
(10)X1k=X1k-1+ΔX1k-1=X11+ΔX11+⋯+ΔX1k-1=A1TYB1T+C1TZD1T+A1T[β1E1+⋯+βk-1Ek-1]B1T+C1T[β1F1+⋯+βk-1Fk-1]D1T=A1TMB1T+C1TND1T,X2k=X2k-1+ΔX2k-1=X21+ΔX21+⋯+ΔX2k-1=A2TYB2T+C2TZD2T+A2T[β1E1+⋯+βk-1Ek-1]B2T+C2T[β1F1+⋯+βk-1Fk-1]D2T=A2TMB2T+C2TND2T, where M=Y+β1E1+⋯+βk-1Ek-1 and N=Z+β1F1+⋯+βk-1Fk-1.

We know that (1) is equivalent to the system (11)[B1T⊗A1 B2T⊗A2; D1T⊗C1 D2T⊗C2][vec(X1); vec(X2)]=[vec(E); vec(F)], where the block rows are separated by semicolons.
From (10) and (11) we have
(12)[vec(A1TMB1T+C1TND1T); vec(A2TMB2T+C2TND2T)]=[B1⊗A1T D1⊗C1T; B2⊗A2T D2⊗C2T][vec(M); vec(N)]=[B1T⊗A1 B2T⊗A2; D1T⊗C1 D2T⊗C2]T[vec(M); vec(N)]∈R([B1T⊗A1 B2T⊗A2; D1T⊗C1 D2T⊗C2]T),
where R(·) denotes the column space of a matrix.
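The equivalence (11) rests on the standard identity vec(AXB)=(BT⊗A)vec(X). A minimal NumPy check (our own; shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A1, X1, B1 = (rng.standard_normal((3, 2)), rng.standard_normal((2, 2)),
              rng.standard_normal((2, 4)))
A2, X2, B2 = (rng.standard_normal((3, 2)), rng.standard_normal((2, 3)),
              rng.standard_normal((3, 4)))

vec = lambda M: M.flatten(order="F")   # column-stacking vec(.)

# Left side of one equation of (1), vectorized
lhs = vec(A1 @ X1 @ B1 + A2 @ X2 @ B2)
# Kronecker form from (11): vec(A X B) = (B^T kron A) vec(X)
rhs = np.kron(B1.T, A1) @ vec(X1) + np.kron(B2.T, A2) @ vec(X2)
```

The vectors `lhs` and `rhs` agree, which is exactly why the pair (1) can be rewritten as the single linear system (11).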

By Lemma 6, with the initial matrices X11=A1TYB1T+C1TZD1T, X21=A2TYB2T+C2TZD2T, where Y∈Rp×q, Z∈Rs×t are arbitrary, or, especially, X11=0 and X21=0, the solution pair X1k,X2k generated by Algorithm 3 is the least Frobenius norm solution of the matrix equations (1).

Suppose that Problem 1 is consistent. Obviously the solution set S of (1) is nonempty. For a given matrix pair X~1∈Rk×r, X~2∈Rm×n, we can write
(13)A1X1B1+A2X2B2=E,C1X1D1+C2X2D2=F,⟺{A1(X1-X~1)B1+A2(X2-X~2)B2=E-A1X~1B1-A2X~2B2,C1(X1-X~1)D1+C2(X2-X~2)D2=F-C1X~1D1-C2X~2D2.
Let X¯1=X1-X~1, X¯2=X2-X~2, E¯=E-A1X~1B1-A2X~2B2, and F¯=F-C1X~1D1-C2X~2D2. Then Problem 2 is equivalent to finding the least Frobenius norm solution pair of the system
(14)A1X¯1B1+A2X¯2B2=E¯,C1X¯1D1+C2X¯2D2=F¯,
which can be obtained using Algorithm 3 with the initial matrix pair X¯11=A1TYB1T+C1TZD1T, X¯21=A2TYB2T+C2TZD2T where Y∈Rp×q and Z∈Rs×t are arbitrary, or especially, X¯11=0, X¯21=0, and the solution of the matrix optimal approximation Problem 2 can be represented as X^1=X¯1k+X~1, X^2=X¯2k+X~2.
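For small problems, the reduction above can also be carried out directly via the Kronecker formulation (11) and a pseudoinverse in place of Algorithm 3. The following NumPy sketch (our own; all shapes and data are illustrative, and `pinv` is used only as a stand-in for the iterative least norm computation) finds the least norm solution of the translated system (14) and then recovers X^1=X¯1+X~1, X^2=X¯2+X~2:

```python
import numpy as np

rng = np.random.default_rng(3)
# A small consistent system (1), built from a known solution pair
A1, B1 = rng.standard_normal((3, 2)), rng.standard_normal((2, 3))
A2, B2 = rng.standard_normal((3, 2)), rng.standard_normal((2, 3))
C1, D1 = rng.standard_normal((3, 2)), rng.standard_normal((2, 3))
C2, D2 = rng.standard_normal((3, 2)), rng.standard_normal((2, 3))
X1s, X2s = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
E = A1 @ X1s @ B1 + A2 @ X2s @ B2
F = C1 @ X1s @ D1 + C2 @ X2s @ D2

# The given target matrices X~1, X~2 of Problem 2
Xt1, Xt2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

# Translated right-hand sides E-bar, F-bar of (14)
Eb = E - (A1 @ Xt1 @ B1 + A2 @ Xt2 @ B2)
Fb = F - (C1 @ Xt1 @ D1 + C2 @ Xt2 @ D2)

vec = lambda M: M.flatten(order="F")
# Coefficient matrix of the Kronecker system (11)
M = np.block([[np.kron(B1.T, A1), np.kron(B2.T, A2)],
              [np.kron(D1.T, C1), np.kron(D2.T, C2)]])
# Least norm solution of the translated system
y = np.linalg.pinv(M) @ np.concatenate([vec(Eb), vec(Fb)])
Xb1 = y[:4].reshape(2, 2, order="F")   # X-bar-1
Xb2 = y[4:].reshape(2, 2, order="F")   # X-bar-2

# Optimal approximation pair of Problem 2
Xh1, Xh2 = Xb1 + Xt1, Xb2 + Xt2
```

By construction, the pair (Xh1, Xh2) solves the original system (1) and, among all solutions, is closest to (Xt1, Xt2) in the Frobenius norm.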

3. An Example

In this section, we give a numerical example to illustrate the efficiency of Algorithm 3. All computations are performed in MATLAB 7. To account for roundoff errors, we regard a matrix R as the zero matrix if ∥R∥<10-10.

Example 1.

Consider the solution of the linear matrix equations
(15)A1X1B1+A2X2B2=E,C1X1D1+C2X2D2=F,
where
(16)A1=(139105541241765015935175191196147),B1=(1311710387116198856745152),A2=(27602132179574094),B2=(10676921283128157114121614213615910175),C1=(38819215410014519443821981291491586454),D1=(883114071135146),C2=(463115390779137519),D2=(9690551111435124179173),E=(843307775981667137224360096099422471126705896220689126164437410412443246148712941612351115980470653827120550138162343221431798113168541721480518956522),F=(810469110054438139792238672308125463671638483276665841138225813330665106135171383124017573398411150350328477285952).

In this example, the dimensions of the parameter matrices A1,A2,B1,B2,C1,C2,D1,D2 are larger than those in the example of [32]. It can be verified that these matrix equations are consistent and have the solution
(17)X1=(53 48 32; 129 175 193),X2=(133 2 164; 174 27 86).

Let
(18)X10=(0 0 0; 0 0 0),X20=(0 0 0; 0 0 0).

(1) Using Algorithm 3 with 10309 iteration steps, we obtain the least Frobenius norm solution pair of the matrix equations in Example 1 as follows:
(19)X1=(53.0000 48.0000 32.0000; 129.0000 175.0000 193.0000),X2=(133.0000 2.0000 164.0000; 174.0000 27.0000 86.0000).
The obtained sequence ∥ΔX∥ is presented in Figure 1.

The obtained sequence ∥ΔX∥ by Algorithm 3 for Example 1.

(2) Applying the algorithm of [32] to this example, the iteration does not converge. The obtained result is presented in Figure 2.

The relative error of the solution and the residual by the algorithm of [32] for Example 1.

This numerical example demonstrates that our algorithm has the merits of good numerical stability and ease of programming.

Acknowledgments

This research was supported by grants from the Key Project of Scientific Research Innovation Foundation of Shanghai Municipal Education Commission (13ZZ080), the National Natural Science Foundation of China (11171205), the Natural Science Foundation of Shanghai (11ZR1412500), and the Nature Science Foundation of Anhui Provincial Education (ky2008b253, KJ2013A248).

References

[1] T. Meng and C. Leondes, "Experimental design and decision support, in expert systems."
[2] M. Dehghan and M. Hajarian, "An iterative algorithm for solving a pair of matrix equations AYB=E, CYD=F over generalized centro-symmetric matrices."
[3] M. Dehghan and M. Hajarian, "An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation."
[4] A. L. Andrew, "Solution of equations involving centrosymmetric matrices."
[5] A. Navarra, P. L. Odell, and D. M. Young, "A representation of the general common solution to the matrix equations A1XB1=C1 and A2XB2=C2 with applications."
[6] Y.-X. Peng, X.-Y. Hu, and L. Zhang, "An iterative method for symmetric solutions and optimal approximation solution of the system of matrix equations A1XB1=C1, A2XB2=C2."
[7] Z.-H. Peng, X.-Y. Hu, and L. Zhang, "An efficient algorithm for the least-squares reflexive solution of the matrix equation A1XB1=C1, A2XB2=C2."
[8] X. Sheng and G. Chen, "A finite iterative method for solving a pair of linear matrix equations (AXB,CXD)=(E,F)."
[9] N. Li and Q. W. Wang, "Iterative algorithm for solving a class of quaternion matrix equation over the generalized (P,Q)-reflexive matrices."
[10] A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu, "Finite iterative solutions to a class of complex matrix equations with conjugate and transpose of the unknowns."
[11] A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu, "Iterative solutions to coupled Sylvester-conjugate matrix equations."
[12] A.-G. Wu, B. Li, Y. Zhang, and G.-R. Duan, "Finite iterative solutions to coupled Sylvester-conjugate matrix equations."
[13] Y.-X. Yuan, "On the minimum norm solution of matrix equation AXB=E, CXD=F."
[14] S. K. Mitra, "Common solutions to a pair of linear matrix equations A1XB1=C1 and A2XB2=C2," 1973, vol. 74, pp. 213–216, MR0320028, ZBL0262.15010.
[15] N. Shinozaki and M. Sibuya, "Consistency of a pair of matrix equations with an application."
[16] J. W. van der Woude.
[17] M. Dehghan and M. Hajarian, "The general coupled matrix equations over generalized bisymmetric matrices."
[18] M. Dehghan and M. Hajarian, "The reflexive and anti-reflexive solutions of a linear matrix equation and systems of matrix equations."
[19] Q.-W. Wang, J.-H. Sun, and S.-Z. Li, "Consistency for bi(skew)symmetric solutions to systems of generalized Sylvester equations over a finite central algebra."
[20] Q.-W. Wang, "Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations."
[21] Q.-W. Wang, "A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity."
[22] F. Ding, P. X. Liu, and J. Ding, "Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle."
[23] J. Ding, Y. Liu, and F. Ding, "Iterative solutions to matrix equations of the form AiXBi=Fi."
[24] L. Xie, J. Ding, and F. Ding, "Gradient based iterative solutions for general linear matrix equations."
[25] L. Xie, Y. Liu, and H. Yang, "Gradient based and least squares based iterative algorithms for matrix equations AXB+CXTD=F."
[26] Z.-Y. Li, Y. Wang, B. Zhou, and G.-R. Duan, "Least squares solution with the minimum-norm to general matrix equations via iteration."
[27] Z.-Y. Li, B. Zhou, Y. Wang, and G.-R. Duan, "Numerical solution to linear matrix equation by finite steps iteration."
[28] B. Zhou, J. Lam, and G.-R. Duan, "On Smith-type iterative algorithms for the Stein matrix equation."
[29] B. Zhou, J. Lam, and G.-R. Duan, "Gradient-based maximal convergence rate iterative method for solving linear matrix equations."
[30] Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, "Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations."
[31] Y.-T. Li and W.-J. Wu, "Symmetric and skew-antisymmetric solutions to systems of real quaternion matrix equations."
[32] M. Dehghan and M. Hajarian, "An efficient algorithm for solving general coupled matrix equations and its application."
[33] M. Dehghan and M. Hajarian, "On the reflexive and anti-reflexive solutions of the generalised coupled Sylvester matrix equations."
[34] B. Zhou, Z.-Y. Li, G.-R. Duan, and Y. Wang, "Weighted least squares solutions to general coupled Sylvester matrix equations."
[35] I. Jonsson and B. Kågström, "Recursive blocked algorithm for solving triangular systems. I. One-sided and coupled Sylvester-type matrix equations."
[36] B. Zhou, G.-R. Duan, and Z.-Y. Li, "Gradient based iterative algorithm for solving coupled matrix equations."
[37] I. Jonsson and B. Kågström, "Recursive blocked algorithm for solving triangular systems. II. Two-sided and generalized Sylvester and Lyapunov matrix equations."
[38] A.-P. Liao and Y. Lei, "Least-squares solution with the minimum-norm for the matrix equation (AXB,GXH)=(C,D)."
[39] J. Cai and G. Chen, "An iterative algorithm for the least squares bisymmetric solutions of the matrix equations A1XB1=C1, A2XB2=C2."
[40] F. Yin and G.-X. Huang, "An iterative algorithm for the least squares generalized reflexive solutions of the matrix equations AXB=E, CXD=F."
[41] Y.-X. Peng, X.-Y. Hu, and L. Zhang, "An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB=C."