We propose a new iterative method for finding the bisymmetric minimum norm solution of a pair of consistent matrix equations A1XB1=C1, A2XB2=C2. In the absence of round-off errors, the algorithm obtains the bisymmetric solution with minimum Frobenius norm in finitely many iteration steps. Our algorithm is faster and more stable than Algorithm 2.1 of Cai et al. (2010).

1. Introduction

Let Rm×n denote the set of m×n real matrices. A matrix X=(xij)∈Rn×n is said to be bisymmetric if xij=xji=xn-i+1,n-j+1 for all 1≤i, j≤n. Let BSRn×n denote the set of n×n real bisymmetric matrices. For any X∈Rm×n, XT, tr(X), ∥X∥, and ∥X∥2 denote the transpose, the trace, the Frobenius norm, and the Euclidean norm of X, respectively. The symbol vec(·) stands for the vec operator; that is, for X=(x1,x2,…,xn)∈Rm×n, where xi (i=1,2,…,n) denotes the ith column of X, vec(X)=(x1T,x2T,…,xnT)T. Let mat(·) denote the inverse of the vec operator. In the vector space Rm×n, we define the inner product 〈X,Y〉=tr(YTX) for all X,Y∈Rm×n. Two matrices X and Y are said to be orthogonal if 〈X,Y〉=0. Let Sn=(en,en-1,…,e1) denote the n×n reverse unit matrix, where ei (i=1,2,…,n) is the ith column of the n×n identity matrix In; then SnT=Sn and Sn2=In.
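The notation above is easy to exercise numerically. The following is a small illustrative NumPy sketch (not part of the paper); the helper `is_bisymmetric` is our own name:

```python
import numpy as np

n = 4
S = np.fliplr(np.eye(n))            # reverse unit matrix S_n = (e_n, e_{n-1}, ..., e_1)
assert np.allclose(S.T, S) and np.allclose(S @ S, np.eye(n))

def is_bisymmetric(X):
    """x_ij = x_ji = x_{n-i+1, n-j+1}: symmetric about both diagonals."""
    return np.allclose(X, X.T) and np.allclose(X, np.rot90(X, 2))

assert is_bisymmetric(np.eye(n) + S)

# vec stacks columns; mat (column-major reshape) inverts it.
X = np.arange(12.0).reshape(3, 4)
v = X.flatten(order="F")            # vec(X)
assert np.allclose(v.reshape(3, 4, order="F"), X)

# Frobenius inner product <X, Y> = tr(Y^T X).
Y = np.ones((3, 4))
assert np.isclose(np.trace(Y.T @ X), (X * Y).sum())
```

Note that, for square X, SnXSn simply reverses both the row and column order of X, which is why `np.rot90(X, 2)` can stand in for S @ X @ S above.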

In this paper, we discuss the following consistent matrix equations:
(1)A1XB1=C1,A2XB2=C2,X∈BSRn×n,
where A1∈Rp1×n, B1∈Rn×q1, C1∈Rp1×q1, A2∈Rp2×n, B2∈Rn×q2, and C2∈Rp2×q2 are given matrices, and X∈BSRn×n is an unknown bisymmetric matrix to be determined.

Research on solving the pair of matrix equations A1XB1=C1, A2XB2=C2 has been active for more than thirty years (see [1–6] for details). Besides the works on finding common solutions to the matrix equations A1XB1=C1, A2XB2=C2, there have been valuable efforts on solving such pairs of matrix equations under linear constraints on the solution. For instance, Khatri and Mitra [7] derived the Hermitian solution of the consistent matrix equations AX=C, XB=D. Deng et al. [8] studied consistency conditions and general expressions for the Hermitian solutions of the matrix equations (AX,XB)=(C,D) and designed an iterative method for their Hermitian minimum norm solutions. Peng et al. [9] presented an iterative method to obtain the least squares reflexive solutions of the matrix equations A1XB1=C1, A2XB2=C2. Cai et al. [10, 11] proposed iterative methods for the bisymmetric solutions of the matrix equations A1XB1=C1, A2XB2=C2.

In this paper, we propose a new iterative algorithm to solve the bisymmetric solution with the minimum Frobenius norm of the consistent matrix equations A1XB1=C1, A2XB2=C2, which is faster and more stable than Cai’s algorithm (Algorithm 2.1) in [10].

The rest of the paper is organized as follows. In Section 2, we propose an iterative algorithm to obtain the bisymmetric minimum Frobenius norm solution of (1) and present some basic properties of the algorithm. Some numerical examples are given in Section 3 to show the efficiency of the proposed iterative method.

2. A New Iterative Algorithm

Firstly, we give the following lemmas.

Lemma 1 (see [12]).

There is a unique matrix P(m,n)∈Rmn×mn such that vec(XT)=P(m,n)vec(X) for all X∈Rm×n. This matrix P(m,n) depends only on the dimensions m and n. Moreover, P(m,n) is a permutation matrix and P(n,m)=P(m,n)T=P(m,n)-1.
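The permutation matrix P(m,n) of Lemma 1 (often called the commutation matrix) can be built explicitly. The following is an illustrative NumPy sketch; the function name `commutation` is our own:

```python
import numpy as np

def commutation(m, n):
    """Build P(m,n) with vec(X^T) = P(m,n) vec(X) for all X in R^{m x n}."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # entry (i, j) of X sits at column-major index j*m + i of vec(X);
            # entry (j, i) of X^T sits at column-major index i*n + j of vec(X^T).
            P[i * n + j, j * m + i] = 1.0
    return P

m, n = 3, 5
P = commutation(m, n)
X = np.random.default_rng(1).standard_normal((m, n))
assert np.allclose(P @ X.flatten(order="F"), X.T.flatten(order="F"))
# P(n,m) = P(m,n)^T = P(m,n)^{-1}
assert np.allclose(commutation(n, m), P.T)
assert np.allclose(P @ P.T, np.eye(m * n))
```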

Lemma 2.

If y0,y1,y2,…∈Rm are orthogonal to each other, then there exists a positive integer l^≤m such that yl^=0.

Proof.

If there exists a positive integer l^≤m-1 such that yl^=0, then Lemma 2 is proved.

Otherwise, yi≠0 for i=0,1,2,…,m-1, and y0,y1,…,ym-1 are mutually orthogonal in the m-dimensional vector space Rm. Hence y0,y1,…,ym-1 form an orthogonal basis of Rm.

Hence ym can be expressed as a linear combination of y0,y1,…,ym-1. Denote
(2)ym=a0y0+a1y1+⋯+am-1ym-1
in which ai∈R, i=0,1,2…,m-1. Then
(3)〈yi,ym〉=a0〈yi,y0〉+a1〈yi,y1〉+⋯+am-1〈yi,ym-1〉=ai〈yi,yi〉+∑j=1j≠im-1aj〈yi,yj〉=ai〈yi,yi〉,i=0,1,2,…,m-1.
From 〈yi,ym〉=0 and 〈yi,yi〉≠0, i=0,1,2,…,m-1, we have ai=0,i=0,1,2,…,m-1; that is,
(4)ym=0.
This completes the proof.

Lemma 3.

A matrix X belongs to BSRn×n if and only if X=XT=SnXSn.

Lemma 4.

If Y∈Rn×n, then Y+YT+Sn(Y+YT)Sn∈BSRn×n.
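Lemmas 3 and 4 are easy to verify numerically; a minimal illustrative NumPy sketch:

```python
import numpy as np

n = 5
S = np.fliplr(np.eye(n))                 # reverse unit matrix S_n
Y = np.random.default_rng(2).standard_normal((n, n))

# Lemma 4: Y + Y^T + S_n (Y + Y^T) S_n is bisymmetric.
X = Y + Y.T + S @ (Y + Y.T) @ S

# Lemma 3: X is bisymmetric iff X = X^T = S_n X S_n.
assert np.allclose(X, X.T)
assert np.allclose(X, S @ X @ S)
```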

Next, we review the algorithm proposed by Paige [13] for solving the following consistent problem:
(5)Mx=f,
with given M∈Rs×t,f∈Rs.

Algorithm 5 (see [13]).

(i) Initialization. Compute β1u1=f and α1v1=MTu1; the remaining starting values x0, z0, w0, ξ0, θ0, and τ0 are chosen as in [13].

(ii) Iteration. For i=1,2,…, until {xi} converges, do

ξi=-ξi-1βi/αi; zi=zi-1+ξivi;

θi=(τi-1-βiθi-1)/αi; wi=wi-1+θivi;

βi+1ui+1=Mvi-αiui;

τi=-τi-1αi/βi+1;

αi+1vi+1=MTui+1-βi+1vi;

γi=βi+1ξi/(βi+1θi-τi);

xi=zi-γiwi.

It is well known that if the consistent system of linear equations Mx=f has a solution x*∈R(MT), then x* is the unique minimum Euclidean norm solution of Mx=f. The iterate xi generated by Algorithm 5 clearly belongs to R(MT), which leads to the following result.

Theorem 6.

The solution generated by Algorithm 5 is the minimum Euclidean norm solution of (5).
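The minimum-norm property can be checked directly on a generic consistent system. The following illustrative sketch (random data, independent of the paper's examples) uses the fact that `numpy.linalg.lstsq` returns the minimum Euclidean norm solution:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 6))          # wide matrix: infinitely many solutions
x_range = M.T @ rng.standard_normal(3)   # a solution lying in R(M^T)
f = M @ x_range                          # consistent right-hand side

# lstsq returns the minimum Euclidean norm solution (computed via SVD).
x_star, *_ = np.linalg.lstsq(M, f, rcond=None)
assert np.allclose(M @ x_star, f)

# x_star lies in R(M^T): the orthogonal projector pinv(M) M leaves it fixed,
# and it coincides with the unique R(M^T) solution x_range.
assert np.allclose(np.linalg.pinv(M) @ M @ x_star, x_star)
assert np.allclose(x_star, x_range)
```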

If u1,u2,… and v1,v2,… are generated by Algorithm 5, then uiTuj=δij, viTvj=δij (see details in [13]), in which
(7)δij={1,i=j,0,i≠j.
If we denote
(8)ri=f-Mxi,
where xi is the approximation solution obtained by Algorithm 5 after the ith iteration, it follows that ri=-βi+1ξiui+1 (see details in [13]). So we have
(9)riTrj=hijδij
in which hij=βi+1βj+1ξiξj.

Now we derive our new algorithm, which is based on Paige's algorithm.

Note that X is a bisymmetric solution of (1) if and only if X is a bisymmetric solution of the following linear equations:
(10)A1XB1=C1,B1TXA1T=C1T,A1SnXSnB1=C1,B1TSnXSnA1T=C1T,A2XB2=C2,B2TXA2T=C2T,A2SnXSnB2=C2,B2TSnXSnA2T=C2T.
Furthermore, suppose that (10) is consistent and let Y be a solution of (10). If Y is a bisymmetric matrix, then Y is a bisymmetric solution of (1); otherwise X=(Y+YT+Sn(Y+YT)Sn)/4 is a bisymmetric solution of (1).

The system of (10) can be transformed into (5) with coefficient matrix M and vector f as
(11)M=(B1T⊗A1; A1⊗B1T; B1TSn⊗A1Sn; A1Sn⊗B1TSn; B2T⊗A2; A2⊗B2T; B2TSn⊗A2Sn; A2Sn⊗B2TSn), f=(vec(C1); vec(C1T); vec(C1); vec(C1T); vec(C2); vec(C2T); vec(C2); vec(C2T)), where the semicolons separate vertically stacked blocks.
Therefore, β1u1=f, α1v1=MTu1, βi+1ui+1=Mvi-αiui, and αi+1vi+1=MTui+1-βi+1vi can be written as
(12)β1u1=(vec(C1); vec(C1T); vec(C1); vec(C1T); vec(C2); vec(C2T); vec(C2); vec(C2T)), α1v1=MTu1, βi+1ui+1=Mvi-αiui, i=1,2,…, αi+1vi+1=MTui+1-βi+1vi, i=1,2,…, with M as in (11).
From (12), we have
(13)ui=(vec(Ui1); vec(Ui1T); vec(Ui1); vec(Ui1T); vec(Ui2); vec(Ui2T); vec(Ui2); vec(Ui2T)), vi=vec(Vi),
where Ui1∈Rp1×q1, Ui2∈Rp2×q2, Vi∈Rn×n, and Vi is a bisymmetric matrix.
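For small problems, M and f of (11) can be assembled literally with Kronecker products. The following illustrative NumPy sketch (the function name `build_Mf` and the random test data are ours) checks that M vec(X)=f holds for a bisymmetric solution X:

```python
import numpy as np

def build_Mf(A1, B1, C1, A2, B2, C2):
    """Assemble M and f of (11) with Kronecker products (illustrative sketch)."""
    n = A1.shape[1]
    S = np.fliplr(np.eye(n))                      # reverse unit matrix S_n
    vec = lambda X: X.flatten(order="F")
    M = np.vstack([
        np.kron(B1.T, A1),         np.kron(A1, B1.T),
        np.kron(B1.T @ S, A1 @ S), np.kron(A1 @ S, B1.T @ S),
        np.kron(B2.T, A2),         np.kron(A2, B2.T),
        np.kron(B2.T @ S, A2 @ S), np.kron(A2 @ S, B2.T @ S),
    ])
    f = np.concatenate([vec(C1), vec(C1.T), vec(C1), vec(C1.T),
                        vec(C2), vec(C2.T), vec(C2), vec(C2.T)])
    return M, f

# Check M vec(X) = f on a randomly generated consistent instance.
rng = np.random.default_rng(4)
n = 4
S = np.fliplr(np.eye(n))
Y = rng.standard_normal((n, n))
Xhat = Y + Y.T + S @ (Y + Y.T) @ S                # bisymmetric by Lemma 4
A1, B1 = rng.standard_normal((3, n)), rng.standard_normal((n, 2))
A2, B2 = rng.standard_normal((2, n)), rng.standard_normal((n, 3))
M, f = build_Mf(A1, B1, A1 @ Xhat @ B1, A2, B2, A2 @ Xhat @ B2)
assert np.allclose(M @ Xhat.flatten(order="F"), f)
```

The block order in `np.vstack` follows (11) row by row, using the identity (A⊗B)vec(X)=vec(BXAT) for column-major vec.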

Thus β1u1=f, α1v1=MTu1, βi+1ui+1=Mvi-αiui, and αi+1vi+1=MTui+1-βi+1vi in Algorithm 5 can be rewritten in matrix form, which yields the following algorithm.

Algorithm 7.

(i) Initialization. Compute β1, U1j (j=1,2), α1, and V1 from β1u1=f and α1v1=MTu1 via (13), with the remaining starting values chosen as in Algorithm 5.

(ii) Iteration. For i=1,2,…, until {Xi} converges, do

ξi=-ξi-1βi/αi; Zi=Zi-1+ξiVi;

θi=(τi-1-βiθi-1)/αi; Wi=Wi-1+θiVi;

Ūi+1,j=AjViBj-αiUij, j=1,2;

βi+1=2√(∥Ūi+1,1∥²+∥Ūi+1,2∥²);

Ui+1,j=Ūi+1,j/βi+1, j=1,2;

τi=-τi-1αi/βi+1;

Ti+1=A1TUi+1,1B1T+A2TUi+1,2B2T;

V̄i+1=Ti+1+Ti+1T+Sn(Ti+1+Ti+1T)Sn-βi+1Vi;

αi+1=∥V̄i+1∥; Vi+1=V̄i+1/αi+1;

γi=βi+1ξi/(βi+1θi-τi);

Xi=Zi-γiWi.
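As a concrete illustration, here is a Python/NumPy sketch of the matrix-form iteration above. It is not a full implementation of Algorithm 7: the auxiliary sequences θi, τi, Wi, and γi (whose starting values are given in [13] and not reproduced here) are omitted, and only the basic Craig-type update Zi=Zi-1+ξiVi is kept, with the conventional initialization ξ0=-1, Z0=0 assumed. For a consistent pair of equations this simplified iteration already converges to the bisymmetric minimum Frobenius norm solution, and the residual satisfies ∥ri∥=βi+1|ξi|, which serves as the stopping test. The function name `bisym_min_norm` is ours:

```python
import numpy as np

def bisym_min_norm(A1, B1, C1, A2, B2, C2, tol=1e-10, maxit=500):
    """Craig-type matrix-form iteration (simplified sketch of Algorithm 7) for
    the bisymmetric minimum norm solution of A1 X B1 = C1, A2 X B2 = C2."""
    n = A1.shape[1]
    S = np.fliplr(np.eye(n))                          # reverse unit matrix S_n
    nrm = lambda X: np.linalg.norm(X, "fro")

    # beta_1 u_1 = f  (f stacks vec(C1), vec(C1^T), ... as in (12))
    beta = 2.0 * np.sqrt(nrm(C1) ** 2 + nrm(C2) ** 2)
    U1, U2 = C1 / beta, C2 / beta
    # alpha_1 v_1 = M^T u_1 in matrix form, via T_1 = A1^T U1 B1^T + A2^T U2 B2^T
    T = A1.T @ U1 @ B1.T + A2.T @ U2 @ B2.T
    Vbar = T + T.T + S @ (T + T.T) @ S
    alpha = nrm(Vbar)
    V = Vbar / alpha
    xi = beta / alpha                                 # xi_1, with xi_0 = -1 assumed
    Z = xi * V                                        # Z_1 = Z_0 + xi_1 V_1, Z_0 = 0
    for _ in range(maxit):
        # beta_{i+1} u_{i+1} = M v_i - alpha_i u_i
        Ub1 = A1 @ V @ B1 - alpha * U1
        Ub2 = A2 @ V @ B2 - alpha * U2
        beta = 2.0 * np.sqrt(nrm(Ub1) ** 2 + nrm(Ub2) ** 2)
        if beta * abs(xi) < tol:                      # ||r_i|| = beta_{i+1} |xi_i|
            break
        U1, U2 = Ub1 / beta, Ub2 / beta
        # alpha_{i+1} v_{i+1} = M^T u_{i+1} - beta_{i+1} v_i
        T = A1.T @ U1 @ B1.T + A2.T @ U2 @ B2.T
        Vbar = T + T.T + S @ (T + T.T) @ S - beta * V
        alpha = nrm(Vbar)
        if alpha < tol:                               # breakdown: search space exhausted
            break
        V = Vbar / alpha
        xi = -xi * beta / alpha
        Z = Z + xi * V
    return Z

# Usage on a randomly generated consistent instance.
rng = np.random.default_rng(0)
n = 5
S = np.fliplr(np.eye(n))
Y = rng.standard_normal((n, n))
Xhat = Y + Y.T + S @ (Y + Y.T) @ S        # a bisymmetric matrix (Lemma 4)
A1, B1 = rng.standard_normal((3, n)), rng.standard_normal((n, 2))
A2, B2 = rng.standard_normal((2, n)), rng.standard_normal((n, 3))
Z = bisym_min_norm(A1, B1, A1 @ Xhat @ B1, A2, B2, A2 @ Xhat @ B2)
```

Since every Vi is bisymmetric, every iterate Z stays bisymmetric, mirroring Remark 9 below.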

Remark 8.

The stopping criterion for Algorithm 7 can be chosen as
(14)∥C1-A1XiB1∥+∥C2-A2XiB2∥≤ϵ, |ξi|≤ϵ, or ∥Xi-Xi-1∥≤ϵ,
where ϵ is a small tolerance.

Remark 9.

Since Vi, Zi, and Wi in Algorithm 7 are bisymmetric matrices, the iterates Xi produced by Algorithm 7 are also bisymmetric.

Some basic properties of Algorithm 7 are listed in the following theorems.

Theorem 10.

The solution generated by Algorithm 7 is the bisymmetric minimum Frobenius norm solution of (1).

Theorem 11.

The iteration of Algorithm 7 terminates in at most p1q1+p2q2 steps in the absence of round-off errors.

Proof.

By (8) and (11), a simple calculation gives
(15)ri=(vec(Ri1); vec(Ri1T); vec(Ri1); vec(Ri1T); vec(Ri2); vec(Ri2T); vec(Ri2); vec(Ri2T)),
in which Ri1=C1-A1XiB1, and Ri2=C2-A2XiB2, where Xi is the approximation solution obtained by Algorithm 7 after the ith iteration.

By Lemma 1, we have that
(16)vec(Ri1T)=P(p1,q1)vec(Ri1),vec(Ri2T)=P(p2,q2)vec(Ri2),
where P(p1,q1)∈Rp1q1×p1q1 and P(p2,q2)∈Rp2q2×p2q2 are permutation matrices. For simplicity, we denote P1=P(p1,q1), P2=P(p2,q2). Then P1TP1=Ip1q1,P2TP2=Ip2q2.

If we let ti=(vec(Ri1); vec(Ri2))∈Rp1q1+p2q2, then (15), (16), and P1TP1=Ip1q1, P2TP2=Ip2q2 give
(17)riTrj=4tiTtj,
so by (9) the vectors t0,t1,t2,… are orthogonal to each other in Rp1q1+p2q2. By Lemma 2, there exists a positive integer l^≤(p1q1+p2q2) such that tl^=0. Hence
(18)Rl^1=Rl^2=0,
that is, the iteration of Algorithm 7 terminates in at most p1q1+p2q2 steps in the absence of round-off errors.

3. Numerical Examples

In this section, we use some numerical examples to illustrate the efficiency of our algorithm. All computations are carried out on a PC in MATLAB 7.0; the machine precision is around 10-16.

We stop the iteration when ∥Ri1∥+∥Ri2∥≤10-12.

Example 12.

Given matrices A1, B1, C1, A2, B2, and C2 as follows:
(19)A1=(1-4-2-101-331-13-1-214-3-32-1-1-22514-1-34-14210-13-3-11-312-1),B1=(-32-13-212-3-1-23-4-1101-110110-12123-1-253-30-33-30-1-101-2),C1=(-19301119-3041-5547-855-4739-7477374-7780-3617-1936-17-219-30-11-1930-4155-478-5547-39),A2=(3-2-11-40-10-31-3231-2-41-303103-13-2-3-11-60-2-430),B2=(213-2-3-1-43123-10440-20-221-5-4-1-1-2-31),C2=(33107140-3317-34-17-1727-29-2-27-173417176078138-60)
Then (1) is consistent, since one can easily verify that it has a bisymmetric solution:
(20)X^=(1-1121-11-131111-1110-2-11121-21-21211-1-2011-111113-11-1121-11).
We choose the initial matrix X0=0, then using Algorithm 7 and iterating 13 steps, we have the unique bisymmetric minimum Frobenius norm solution of (1) as follows:(21)X13=(0.4755-0.68220.62741.45860.2774-1.2112-0.1053-0.68222.66280.40460.07161.01330.4001-1.21120.62740.4046-1.0215-2.2128-1.61761.01330.27741.45860.0716-2.2128-1.1548-2.21280.07161.45860.27741.0133-1.6176-2.2128-1.02150.40460.6274-1.21120.40011.01330.07160.40462.6628-0.6822-0.1053-1.21120.27741.45860.6274-0.68220.4755),with ∥R13,1∥+∥R13,2∥=6.2303e-013.

Figure 1 illustrates the performance of our algorithm and Cai’s algorithm [10]. From Figure 1, we see that our algorithm is faster than Cai’s algorithm.

Convergence curves of log10(∥Ri1∥+∥Ri2∥).

Example 13.

Let
(22)A1=hilb(7),B1=pascal(7),A2=rand(7,7),B2=rand(7,7),
with hilb, pascal, and rand being built-in MATLAB functions. We let C1=A1X^B1 and C2=A2X^B2, in which X^ is defined in Example 12. Hence (1) is consistent.
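For readers reproducing this setup outside MATLAB, the hilb and pascal test matrices are easy to build from their definitions (rand has the obvious analogue `rng.random`). A NumPy sketch with our own helper names:

```python
import numpy as np
from math import comb

def hilb(n):
    """n x n Hilbert matrix, H[i, j] = 1/(i + j + 1) (0-based), as in MATLAB's hilb."""
    i = np.arange(n)
    return 1.0 / (i[:, None] + i[None, :] + 1.0)

def pascal(n):
    """Symmetric Pascal matrix, P[i, j] = C(i + j, i), as in MATLAB's pascal."""
    return np.array([[float(comb(i + j, i)) for j in range(n)] for i in range(n)])

A1, B1 = hilb(7), pascal(7)
assert np.allclose(A1, A1.T) and np.isclose(A1[0, 0], 1.0)
assert pascal(3).tolist() == [[1.0, 1.0, 1.0], [1.0, 2.0, 3.0], [1.0, 3.0, 6.0]]
```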

We choose the initial matrix X0=0; Figure 2 illustrates the performance of our algorithm and Cai’s algorithm [10]. From Figure 2, we see that our algorithm is faster and more stable than Cai’s algorithm.

Convergence curves of log10(∥Ri1∥+∥Ri2∥).

Acknowledgments

This research was supported by the Natural Science Foundation of China (nos. 10901056, 11071079, and 11001167), the Shanghai Science and Technology Commission “Venus” Project (no. 11QA1402200), and the Natural Science Foundation of Zhejiang Province (no. Y6110043).

References

[1] X. P. Sheng and G. L. Chen, A finite iterative method for solving a pair of linear matrix equations (AXB,CXD)=(E,F).
[2] Y. X. Yuan, Least squares solutions of matrix equation AXB=E; CXD=F.
[3] A. Navarra, P. L. Odell, and D. M. Young, A representation of the general common solution to the matrix equations A1XB1=C1 and A2XB2=C2 with applications.
[4] S. K. Mitra, A pair of simultaneous linear matrix equations and a matrix programming problem.
[5] S. K. Mitra, The matrix equations AX=C, XB=D.
[6] P. Bhimasankaram, Common solutions to the linear matrix equations AX=C, XB=D and FXG=H.
[7] C. G. Khatri and S. K. Mitra, Hermitian and nonnegative definite solutions of linear matrix equations.
[8] Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations.
[9] Z.-H. Peng, X.-Y. Hu, and L. Zhang, An efficient algorithm for the least-squares reflexive solution of the matrix equation A1XB1=C1.
[10] J. Cai, G. L. Chen, and Q. B. Liu, An iterative method for the bisymmetric solutions of the consistent matrix equations A1XB1=C1, A2XB2=C2.
[11] J. Cai and G. L. Chen, An iterative algorithm for the least squares bisymmetric solutions of the matrix equations A1XB1=C1.
[12] R. A. Horn and C. R. Johnson.
[13] C. C. Paige, Bidiagonalization of matrices and solutions of the linear equations.