By extending the idea of the LSMR method, we present an iterative method to solve the general coupled matrix equations ∑k=1q AikXkBik = Ci, i=1,2,…,p (including the generalized (coupled) Lyapunov and Sylvester matrix equations as special cases) over some constrained matrix groups (X1,X2,…,Xq), such as symmetric, generalized bisymmetric, and (R,S)-symmetric matrix groups. By this iterative method, for any initial matrix group (X1(0),X2(0),…,Xq(0)), a solution group (X1*,X2*,…,Xq*) can be obtained in a finite number of iteration steps in the absence of round-off errors, and the minimum Frobenius norm solution group or the minimum Frobenius norm least-squares solution group can be derived when an appropriate initial iterative matrix group is chosen. In addition, the optimal approximation solution group to a given matrix group (X¯1,X¯2,…,X¯q) in the Frobenius norm can be obtained by finding the minimum Frobenius norm solution group of new general coupled matrix equations. Finally, numerical examples are given to illustrate the effectiveness of the presented method.
1. Introduction
In control and system theory [1–7], we often encounter Lyapunov and Sylvester matrix equations, which play a fundamental role. Owing to their importance, these matrix equations have been studied in a large body of papers [8–27]. By using the hierarchical identification principle [9–11, 28–32], a gradient-based iterative (GI) method was presented to compute the solutions and the least-squares solutions of the general coupled matrix equations. In [19, 33], Zhou et al. deduced the optimal parameter of the GI method for computing the solutions and the weighted least-squares solutions of the general coupled matrix equations. Dehghan and Hajarian [34–36] introduced several iterative methods for solving various linear matrix equations.
In [12, 17], Huang et al. presented finite iterative algorithms for solving generalized coupled Sylvester systems. Li and Huang [37] proposed a matrix LSQR iterative method to solve the constrained solutions of the generalized coupled Sylvester matrix equations. Hajarian [38] presented the generalized QMRCGSTAB algorithm for solving Sylvester-transpose matrix equations. Recently, Lin and Simoncini [39] established minimal residual methods for large scale Lyapunov equations. They explored the numerical solution of this class of linear matrix equations when a minimal residual (MR) condition is used during the projection step.
In this paper, we construct a matrix iterative method based on the LSMR algorithm [40] to compute the constrained solutions of the following problems.
Compatible matrix equations are as follows:
$$\sum_{k=1}^{q}A_{ik}X_kB_{ik}=C_i,\qquad i=1,2,\ldots,p.\tag{1}$$
Least-squares problem is as follows:
$$\min\ \Biggl(\sum_{i=1}^{p}\Bigl\|\sum_{k=1}^{q}A_{ik}X_kB_{ik}-C_i\Bigr\|^2\Biggr)^{1/2}.\tag{2}$$
Matrix nearness problem is as follows:
$$\min_{(X_1,X_2,\ldots,X_q)\in S_E}\ \sum_{k=1}^{q}\bigl\|X_k-\bar X_k\bigr\|^2,\tag{3}$$
where $A_{ik}$, $B_{ik}$, $C_i$, $i=1,2,\ldots,p$, $k=1,2,\ldots,q$, are constant matrices with suitable dimensions, $X_k$, $k=1,2,\ldots,q$, are the unknown matrices to be solved, $\bar X_k$, $k=1,2,\ldots,q$, are given matrices, and $S_E$ is the solution set of (1) or of problem (2).
This paper is organized as follows. In Section 2, we will briefly review the LSMR algorithm for solving linear systems of equations. In Section 3, we propose the matrix LSMR iterative algorithms for solving the problems (1)-(2). In Section 4, we solve the problem (3) by finding the minimum Frobenius norm solution group of the corresponding new general coupled matrix equations. In Section 5, numerical examples are given to illustrate the efficiency of the proposed iterative method. Finally, we make some concluding remarks in Section 6.
The notation used in this paper can be summarized as follows. $\operatorname{tr}(A)$ denotes the trace of the matrix $A$. For $A,B\in\mathbb{R}^{m\times n}$, $A\otimes B$ denotes the Kronecker product and $\langle A,B\rangle=\operatorname{tr}(B^TA)$ the inner product, with the associated Frobenius norm $\|A\|=\sqrt{\langle A,A\rangle}=\sqrt{\operatorname{tr}(A^TA)}$. The vector operator $\operatorname{vec}(A)$ is defined as
$$\operatorname{vec}(A)=\begin{pmatrix}a_1^T & a_2^T & \cdots & a_n^T\end{pmatrix}^T,\tag{4}$$
where $a_k$ is the $k$th column of $A$. The generalized bisymmetric matrices, the (R,S)-symmetric matrices, and the symmetric orthogonal matrices can be defined as follows.
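As a quick numerical illustration of the notation, the following Python/NumPy sketch (our own; the paper's experiments use MATLAB) checks the classical identity $\operatorname{vec}(AXB)=(B^T\otimes A)\operatorname{vec}(X)$, which underlies the Kronecker-product reformulations used later. The matrix names are illustrative, not from the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

# vec(.) stacks columns; NumPy's ravel is row-major, so use order="F".
vec = lambda M: M.ravel(order="F")

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```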
Definition 1 (see [41]).
A matrix P∈Rn×n is said to be a symmetric orthogonal matrix (P∈SORn×n), if PT=P and P2=In.
Definition 2 (see [42]).
For given symmetric orthogonal matrices R∈Rm×m, S∈Rn×n, we say a matrix X∈Rm×n is (R,S)-symmetric (X∈RSSRm×n) if X=RXS.
Definition 3 (see [43]).
For a given symmetric orthogonal matrix P∈Rn×n, a matrix X∈Rn×n is said to be a generalized bisymmetric matrix (X∈GBRSn×n), if XT=X and X=PXP.
2. LSMR Algorithm
In this section, we briefly review some fundamental properties of the LSMR algorithm [40], which is an iterative method for computing a solution x to either of the following problems.
Compatible linear systems are as follows:
$$Mx=f.\tag{5}$$
Least-squares problem is as follows:
$$\min\|Mx-f\|_2,\tag{6}$$
where $M\in\mathbb{R}^{m\times n}$ and $f\in\mathbb{R}^m$. The LSMR algorithm uses an algorithm of Golub and Kahan [44], which is stated as procedure Bidiag 4, to reduce $M$ to lower bidiagonal form. The procedure Bidiag 4 can be described as follows.
Bidiag 4 (starting vector f; reduction to lower bidiagonal form).
$$\beta_1u_1=f,\qquad \alpha_1v_1=M^Tu_1,$$
$$\beta_{i+1}u_{i+1}=Mv_i-\alpha_iu_i,\qquad \alpha_{i+1}v_{i+1}=M^Tu_{i+1}-\beta_{i+1}v_i,\qquad i=1,2,\ldots.\tag{7}$$
The scalars $\alpha_i\ge 0$ and $\beta_i\ge 0$ are chosen such that $\|u_i\|_2=\|v_i\|_2=1$.
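The recurrences (7) can be sketched directly in Python/NumPy (a sketch of ours, not the paper's code); the check below verifies the orthonormality of the generated vectors, in line with Property 1.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 5))
f = rng.standard_normal(8)

def bidiag(M, f, steps):
    """Golub-Kahan lower bidiagonalization: beta_1 u_1 = f, alpha_1 v_1 = M^T u_1, ..."""
    beta = np.linalg.norm(f)
    u = f / beta
    v = M.T @ u
    alpha = np.linalg.norm(v)
    v = v / alpha
    U, V, alphas, betas = [u], [v], [alpha], [beta]
    for _ in range(steps):
        w = M @ v - alpha * u              # beta_{i+1} u_{i+1} = M v_i - alpha_i u_i
        beta = np.linalg.norm(w)
        u = w / beta
        w = M.T @ u - beta * v             # alpha_{i+1} v_{i+1} = M^T u_{i+1} - beta_{i+1} v_i
        alpha = np.linalg.norm(w)
        v = w / alpha
        U.append(u); V.append(v); alphas.append(alpha); betas.append(beta)
    return np.column_stack(U), np.column_stack(V), alphas, betas

U, V, alphas, betas = bidiag(M, f, 3)
# The u_i and v_i are orthonormal (Property 1).
assert np.allclose(U.T @ U, np.eye(U.shape[1]), atol=1e-10)
assert np.allclose(V.T @ V, np.eye(V.shape[1]), atol=1e-10)
```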
The following properties, presented in [7], show that the procedure Bidiag 4 has the finite termination property.
Property 1.
Suppose that $k$ steps of the procedure Bidiag 4 have been taken; then the vectors $v_1,v_2,\ldots,v_k$ and $u_1,u_2,\ldots,u_k,u_{k+1}$ form orthonormal bases of the Krylov subspaces $\mathcal{K}_k(M^TM,v_1)$ and $\mathcal{K}_{k+1}(MM^T,u_1)$, respectively.
Property 2.
The procedure Bidiag 4 terminates at step $m$ if and only if $m=\min\{\mu,\lambda\}$, where $\mu$ is the grade of $v_1$ with respect to $M^TM$ and $\lambda$ is the grade of $u_1$ with respect to $MM^T$.
By using the procedure Bidiag 4, the LSMR method constructs an approximate solution of the form $x_k=V_ky_k$, where $V_k=(v_1,v_2,\ldots,v_k)$, which solves the least-squares problem $\min_{y_k}\|M^Tr_k\|$, where $r_k=f-Mx_k$ is the residual of the approximate solution $x_k$. The main steps of the LSMR algorithm are summarized in Algorithm 1.
More details about the LSMR algorithm can be found in [40].
The stopping criterion $\|M^Tr_k\|=\|M^T(f-Mx_k)\|_2=\bar\zeta_{k+1}$ may be used both for the compatible linear systems (5) and for the least-squares problem (6). Other stopping criteria can also be used and are not listed here; see [40] for details. Clearly, the sequence $x_k\in\operatorname{Range}(M^T)$ generated by the LSMR algorithm converges to the unique minimum norm solution of (5) or to the unique minimum norm least-squares solution of problem (6).
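SciPy ships an implementation of the LSMR algorithm of [40] as `scipy.sparse.linalg.lsmr`; the short sketch below (our own illustration, with random data) solves an overdetermined least-squares problem and checks the result against a dense reference solver.

```python
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(2)
M = rng.standard_normal((50, 10))
f = rng.standard_normal(50)

# lsmr returns (x, istop, itn, normr, normar, norma, conda, normx); take x.
x_lsmr = lsmr(M, f, atol=1e-12, btol=1e-12)[0]
x_ref = np.linalg.lstsq(M, f, rcond=None)[0]
assert np.allclose(x_lsmr, x_ref, atol=1e-8)
```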
3. A Matrix LSMR Iterative Method
In this section, we present our matrix iterative method, based on the LSMR algorithm, for solving (1) and problem (2). For the unknown matrices $X_k\in\mathbb{R}^{n\times n}$, $k=1,2,\ldots,q$, by using the Kronecker product, (1) and problem (2) are equivalent to (5) and problem (6), respectively, with
$$M=\begin{pmatrix}
B_{11}^T\otimes A_{11} & B_{12}^T\otimes A_{12} & \cdots & B_{1q}^T\otimes A_{1q}\\
B_{21}^T\otimes A_{21} & B_{22}^T\otimes A_{22} & \cdots & B_{2q}^T\otimes A_{2q}\\
\vdots & \vdots & & \vdots\\
B_{p1}^T\otimes A_{p1} & B_{p2}^T\otimes A_{p2} & \cdots & B_{pq}^T\otimes A_{pq}
\end{pmatrix},\qquad
x=\begin{pmatrix}\operatorname{vec}(X_1)\\ \operatorname{vec}(X_2)\\ \vdots\\ \operatorname{vec}(X_q)\end{pmatrix},\qquad
f=\begin{pmatrix}\operatorname{vec}(C_1)\\ \operatorname{vec}(C_2)\\ \vdots\\ \operatorname{vec}(C_p)\end{pmatrix}.\tag{8}$$
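The block Kronecker reformulation (8) can be checked numerically; the sketch below (our own, with illustrative sizes $p=q=2$ and random $3\times 3$ blocks) verifies that $Mx$ equals the stacked vectors $\operatorname{vec}(\sum_k A_{ik}X_kB_{ik})$.

```python
import numpy as np

rng = np.random.default_rng(3)
p, q, n = 2, 2, 3
A = [[rng.standard_normal((n, n)) for _ in range(q)] for _ in range(p)]
B = [[rng.standard_normal((n, n)) for _ in range(q)] for _ in range(p)]
X = [rng.standard_normal((n, n)) for _ in range(q)]

vec = lambda Mtx: Mtx.ravel(order="F")

# Block matrix M of (8) and the stacked vectors x and f.
M = np.block([[np.kron(B[i][k].T, A[i][k]) for k in range(q)] for i in range(p)])
x = np.concatenate([vec(Xk) for Xk in X])
f = np.concatenate([vec(sum(A[i][k] @ X[k] @ B[i][k] for k in range(q)))
                    for i in range(p)])
assert np.allclose(M @ x, f)
```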
Hence, by using the invariance of the Frobenius norm under unitary transformations, it is easy to prove that the vector relations $\beta_1u_1=f$, $\alpha_1v_1=M^Tu_1$, $\beta_{i+1}u_{i+1}=Mv_i-\alpha_iu_i$, and $\alpha_{i+1}v_{i+1}=M^Tu_{i+1}-\beta_{i+1}v_i$ ($i=1,2,\ldots$) of the LSMR algorithm can be rewritten in matrix form, respectively, as
$$\beta_1U_1^{(j)}=C_j,\ \ j=1,\ldots,p,\qquad \beta_1=\Bigl(\sum_{j=1}^{p}\|C_j\|^2\Bigr)^{1/2},$$
$$\alpha_1V_1^{(j)}=\sum_{i=1}^{p}A_{ij}^TU_1^{(i)}B_{ij}^T,\ \ j=1,\ldots,q,\qquad \alpha_1=\Bigl(\sum_{k=1}^{q}\Bigl\|\sum_{i=1}^{p}A_{ik}^TU_1^{(i)}B_{ik}^T\Bigr\|^2\Bigr)^{1/2},$$
$$\beta_{i+1}U_{i+1}^{(j)}=\sum_{k=1}^{q}A_{jk}V_i^{(k)}B_{jk}-\alpha_iU_i^{(j)},\ \ j=1,\ldots,p,\qquad \beta_{i+1}=\Bigl(\sum_{j=1}^{p}\Bigl\|\sum_{k=1}^{q}A_{jk}V_i^{(k)}B_{jk}-\alpha_iU_i^{(j)}\Bigr\|^2\Bigr)^{1/2},$$
$$\alpha_{i+1}V_{i+1}^{(k)}=\sum_{j=1}^{p}A_{jk}^TU_{i+1}^{(j)}B_{jk}^T-\beta_{i+1}V_i^{(k)},\ \ k=1,\ldots,q,\qquad \alpha_{i+1}=\Bigl(\sum_{k=1}^{q}\Bigl\|\sum_{j=1}^{p}A_{jk}^TU_{i+1}^{(j)}B_{jk}^T-\beta_{i+1}V_i^{(k)}\Bigr\|^2\Bigr)^{1/2}.\tag{9}$$
If the unknown matrices $X_1,X_2,\ldots,X_q\in GBRS^{n\times n}$, then (1) and problem (2) are equivalent to (5) and problem (6), respectively, with
$$M=\begin{pmatrix}
B_{11}^T\otimes A_{11} & \cdots & B_{1q}^T\otimes A_{1q}\\
\vdots & & \vdots\\
B_{p1}^T\otimes A_{p1} & \cdots & B_{pq}^T\otimes A_{pq}\\
A_{11}\otimes B_{11}^T & \cdots & A_{1q}\otimes B_{1q}^T\\
\vdots & & \vdots\\
A_{p1}\otimes B_{p1}^T & \cdots & A_{pq}\otimes B_{pq}^T\\
(B_{11}^TP_1)\otimes(A_{11}P_1) & \cdots & (B_{1q}^TP_q)\otimes(A_{1q}P_q)\\
\vdots & & \vdots\\
(B_{p1}^TP_1)\otimes(A_{p1}P_1) & \cdots & (B_{pq}^TP_q)\otimes(A_{pq}P_q)\\
(A_{11}P_1)\otimes(B_{11}^TP_1) & \cdots & (A_{1q}P_q)\otimes(B_{1q}^TP_q)\\
\vdots & & \vdots\\
(A_{p1}P_1)\otimes(B_{p1}^TP_1) & \cdots & (A_{pq}P_q)\otimes(B_{pq}^TP_q)
\end{pmatrix},\qquad
x=\begin{pmatrix}\operatorname{vec}(X_1)\\ \vdots\\ \operatorname{vec}(X_q)\end{pmatrix},\qquad
f=\begin{pmatrix}\operatorname{vec}(C_1)\\ \vdots\\ \operatorname{vec}(C_p)\\ \operatorname{vec}(C_1^T)\\ \vdots\\ \operatorname{vec}(C_p^T)\\ \operatorname{vec}(C_1)\\ \vdots\\ \operatorname{vec}(C_p)\\ \operatorname{vec}(C_1^T)\\ \vdots\\ \operatorname{vec}(C_p^T)\end{pmatrix},\tag{10}$$
where $P_1,P_2,\ldots,P_q\in SOR^{n\times n}$.
Hence, the vector relations $\beta_1u_1=f$, $\alpha_1v_1=M^Tu_1$, $\beta_{i+1}u_{i+1}=Mv_i-\alpha_iu_i$, and $\alpha_{i+1}v_{i+1}=M^Tu_{i+1}-\beta_{i+1}v_i$ ($i=1,2,\ldots$) of the LSMR algorithm can be rewritten in matrix form, respectively, as
$$\beta_1U_1^{(j)}=C_j,\ \ j=1,\ldots,p,\qquad \beta_1=2\Bigl(\sum_{j=1}^{p}\|C_j\|^2\Bigr)^{1/2},$$
$$\alpha_1V_1^{(j)}=\sum_{i=1}^{p}\Bigl[A_{ij}^TU_1^{(i)}B_{ij}^T+B_{ij}U_1^{(i)T}A_{ij}+(P_jA_{ij}^T)U_1^{(i)}(B_{ij}^TP_j)+(P_jB_{ij})U_1^{(i)T}(A_{ij}P_j)\Bigr],\ \ j=1,\ldots,q,$$
$$\alpha_1=\Bigl(\sum_{k=1}^{q}\Bigl\|\sum_{i=1}^{p}\Bigl[A_{ik}^TU_1^{(i)}B_{ik}^T+B_{ik}U_1^{(i)T}A_{ik}+(P_kA_{ik}^T)U_1^{(i)}(B_{ik}^TP_k)+(P_kB_{ik})U_1^{(i)T}(A_{ik}P_k)\Bigr]\Bigr\|^2\Bigr)^{1/2},$$
$$\beta_{i+1}U_{i+1}^{(j)}=\sum_{k=1}^{q}A_{jk}V_i^{(k)}B_{jk}-\alpha_iU_i^{(j)},\ \ j=1,\ldots,p,\qquad \beta_{i+1}=2\Bigl(\sum_{j=1}^{p}\Bigl\|\sum_{k=1}^{q}A_{jk}V_i^{(k)}B_{jk}-\alpha_iU_i^{(j)}\Bigr\|^2\Bigr)^{1/2},$$
$$\alpha_{i+1}V_{i+1}^{(k)}=\sum_{j=1}^{p}\Bigl[A_{jk}^TU_{i+1}^{(j)}B_{jk}^T+B_{jk}U_{i+1}^{(j)T}A_{jk}+(P_kA_{jk}^T)U_{i+1}^{(j)}(B_{jk}^TP_k)+(P_kB_{jk})U_{i+1}^{(j)T}(A_{jk}P_k)\Bigr]-\beta_{i+1}V_i^{(k)},\ \ k=1,\ldots,q,$$
$$\alpha_{i+1}=\Bigl(\sum_{k=1}^{q}\Bigl\|\sum_{j=1}^{p}\Bigl[A_{jk}^TU_{i+1}^{(j)}B_{jk}^T+B_{jk}U_{i+1}^{(j)T}A_{jk}+(P_kA_{jk}^T)U_{i+1}^{(j)}(B_{jk}^TP_k)+(P_kB_{jk})U_{i+1}^{(j)T}(A_{jk}P_k)\Bigr]-\beta_{i+1}V_i^{(k)}\Bigr\|^2\Bigr)^{1/2}.\tag{11}$$
If the unknown matrices $X_1,X_2,\ldots,X_q\in RSSR^{n\times n}$, then (1) and problem (2) are equivalent to (5) and problem (6), respectively, with
$$M=\begin{pmatrix}
B_{11}^T\otimes A_{11} & \cdots & B_{1q}^T\otimes A_{1q}\\
\vdots & & \vdots\\
B_{p1}^T\otimes A_{p1} & \cdots & B_{pq}^T\otimes A_{pq}\\
(B_{11}^TS_1)\otimes(A_{11}R_1) & \cdots & (B_{1q}^TS_q)\otimes(A_{1q}R_q)\\
\vdots & & \vdots\\
(B_{p1}^TS_1)\otimes(A_{p1}R_1) & \cdots & (B_{pq}^TS_q)\otimes(A_{pq}R_q)
\end{pmatrix},\qquad
x=\begin{pmatrix}\operatorname{vec}(X_1)\\ \vdots\\ \operatorname{vec}(X_q)\end{pmatrix},\qquad
f=\begin{pmatrix}\operatorname{vec}(C_1)\\ \vdots\\ \operatorname{vec}(C_p)\\ \operatorname{vec}(C_1)\\ \vdots\\ \operatorname{vec}(C_p)\end{pmatrix},\tag{12}$$
where $R_k,S_k\in SOR^{n\times n}$, $k=1,2,\ldots,q$. Hence, the vector relations $\beta_1u_1=f$, $\alpha_1v_1=M^Tu_1$, $\beta_{i+1}u_{i+1}=Mv_i-\alpha_iu_i$, and $\alpha_{i+1}v_{i+1}=M^Tu_{i+1}-\beta_{i+1}v_i$ ($i=1,2,\ldots$) of the LSMR algorithm can be rewritten in matrix form, respectively, as
$$\beta_1U_1^{(j)}=C_j,\ \ j=1,\ldots,p,\qquad \beta_1=\Bigl(2\sum_{j=1}^{p}\|C_j\|^2\Bigr)^{1/2},$$
$$\alpha_1V_1^{(j)}=\sum_{i=1}^{p}\Bigl[A_{ij}^TU_1^{(i)}B_{ij}^T+R_j\bigl(A_{ij}^TU_1^{(i)}B_{ij}^T\bigr)S_j\Bigr],\ \ j=1,\ldots,q,\qquad \alpha_1=\Bigl(\sum_{k=1}^{q}\Bigl\|\sum_{i=1}^{p}\Bigl[A_{ik}^TU_1^{(i)}B_{ik}^T+R_k\bigl(A_{ik}^TU_1^{(i)}B_{ik}^T\bigr)S_k\Bigr]\Bigr\|^2\Bigr)^{1/2},$$
$$\beta_{i+1}U_{i+1}^{(j)}=\sum_{k=1}^{q}A_{jk}V_i^{(k)}B_{jk}-\alpha_iU_i^{(j)},\ \ j=1,\ldots,p,\qquad \beta_{i+1}=\Bigl(2\sum_{j=1}^{p}\Bigl\|\sum_{k=1}^{q}A_{jk}V_i^{(k)}B_{jk}-\alpha_iU_i^{(j)}\Bigr\|^2\Bigr)^{1/2},$$
$$\alpha_{i+1}V_{i+1}^{(k)}=\sum_{j=1}^{p}\Bigl[A_{jk}^TU_{i+1}^{(j)}B_{jk}^T+R_k\bigl(A_{jk}^TU_{i+1}^{(j)}B_{jk}^T\bigr)S_k\Bigr]-\beta_{i+1}V_i^{(k)},\ \ k=1,\ldots,q,$$
$$\alpha_{i+1}=\Bigl(\sum_{k=1}^{q}\Bigl\|\sum_{j=1}^{p}\Bigl[A_{jk}^TU_{i+1}^{(j)}B_{jk}^T+R_k\bigl(A_{jk}^TU_{i+1}^{(j)}B_{jk}^T\bigr)S_k\Bigr]-\beta_{i+1}V_i^{(k)}\Bigr\|^2\Bigr)^{1/2}.\tag{13}$$
If the unknown matrices $X_1,X_2,\ldots,X_q\in SR^{n\times n}$ (the set of symmetric matrices), then (1) and problem (2) are equivalent to (5) and problem (6), respectively, with
$$M=\begin{pmatrix}
B_{11}^T\otimes A_{11} & \cdots & B_{1q}^T\otimes A_{1q}\\
\vdots & & \vdots\\
B_{p1}^T\otimes A_{p1} & \cdots & B_{pq}^T\otimes A_{pq}\\
A_{11}\otimes B_{11}^T & \cdots & A_{1q}\otimes B_{1q}^T\\
\vdots & & \vdots\\
A_{p1}\otimes B_{p1}^T & \cdots & A_{pq}\otimes B_{pq}^T
\end{pmatrix},\tag{14}$$
$$x=\begin{pmatrix}\operatorname{vec}(X_1)\\ \operatorname{vec}(X_2)\\ \vdots\\ \operatorname{vec}(X_q)\end{pmatrix},\qquad
f=\begin{pmatrix}\operatorname{vec}(C_1)\\ \vdots\\ \operatorname{vec}(C_p)\\ \operatorname{vec}(C_1^T)\\ \vdots\\ \operatorname{vec}(C_p^T)\end{pmatrix}.\tag{15}$$
Hence, the vector relations $\beta_1u_1=f$, $\alpha_1v_1=M^Tu_1$, $\beta_{i+1}u_{i+1}=Mv_i-\alpha_iu_i$, and $\alpha_{i+1}v_{i+1}=M^Tu_{i+1}-\beta_{i+1}v_i$ ($i=1,2,\ldots$) of the LSMR algorithm can be rewritten in matrix form, respectively, as
$$\beta_1U_1^{(j)}=C_j,\ \ j=1,\ldots,p,\qquad \beta_1=\Bigl(2\sum_{j=1}^{p}\|C_j\|^2\Bigr)^{1/2},$$
$$\alpha_1V_1^{(j)}=\sum_{i=1}^{p}\Bigl[A_{ij}^TU_1^{(i)}B_{ij}^T+B_{ij}U_1^{(i)T}A_{ij}\Bigr],\ \ j=1,\ldots,q,\qquad \alpha_1=\Bigl(\sum_{k=1}^{q}\Bigl\|\sum_{i=1}^{p}\Bigl[A_{ik}^TU_1^{(i)}B_{ik}^T+B_{ik}U_1^{(i)T}A_{ik}\Bigr]\Bigr\|^2\Bigr)^{1/2},$$
$$\beta_{i+1}U_{i+1}^{(j)}=\sum_{k=1}^{q}A_{jk}V_i^{(k)}B_{jk}-\alpha_iU_i^{(j)},\ \ j=1,\ldots,p,\qquad \beta_{i+1}=\Bigl(2\sum_{j=1}^{p}\Bigl\|\sum_{k=1}^{q}A_{jk}V_i^{(k)}B_{jk}-\alpha_iU_i^{(j)}\Bigr\|^2\Bigr)^{1/2},$$
$$\alpha_{i+1}V_{i+1}^{(k)}=\sum_{j=1}^{p}\Bigl[A_{jk}^TU_{i+1}^{(j)}B_{jk}^T+B_{jk}U_{i+1}^{(j)T}A_{jk}\Bigr]-\beta_{i+1}V_i^{(k)},\ \ k=1,\ldots,q,$$
$$\alpha_{i+1}=\Bigl(\sum_{k=1}^{q}\Bigl\|\sum_{j=1}^{p}\Bigl[A_{jk}^TU_{i+1}^{(j)}B_{jk}^T+B_{jk}U_{i+1}^{(j)T}A_{jk}\Bigr]-\beta_{i+1}V_i^{(k)}\Bigr\|^2\Bigr)^{1/2}.\tag{16}$$
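The symmetric-constraint idea behind (14)-(15) can be illustrated for a single equation $AXB=C$ (that is, $p=q=1$): stacking the system with its transposed counterpart and taking the minimum norm solution yields a symmetric $X$. The sketch below is ours; `scipy.sparse.linalg.lsmr` stands in for the paper's matrix-form algorithm, and the well-conditioned random data is illustrative.

```python
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(4)
n = 4
# Near-identity matrices keep the Kronecker system well conditioned.
A = np.eye(n) + 0.3 * rng.standard_normal((n, n))
B = np.eye(n) + 0.3 * rng.standard_normal((n, n))
X_true = rng.standard_normal((n, n))
X_true = X_true + X_true.T                    # a symmetric solution exists
C = A @ X_true @ B

vec = lambda Mtx: Mtx.ravel(order="F")
M = np.vstack([np.kron(B.T, A), np.kron(A, B.T)])   # block rows as in (14)
f = np.concatenate([vec(C), vec(C.T)])               # right-hand side as in (15)

x = lsmr(M, f, atol=1e-13, btol=1e-13, maxiter=2000)[0]
X = x.reshape((n, n), order="F")
assert np.allclose(A @ X @ B, C, atol=1e-6)
assert np.allclose(X, X.T, atol=1e-6)                # minimum norm solution is symmetric
```

The symmetry of the computed solution follows because the feasible set of the stacked system is invariant under transposition and the minimum Frobenius norm solution is unique.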
From the above results, we obtain the matrix-form iteration of the LSMR algorithm for computing the constrained solution group of (1) and problem (2). When the unknown matrices $X_1,X_2,\ldots,X_q\in SR^{n\times n}$, the matrix-form iterative method is given as shown in Algorithm 2.
4. The Solution of the Matrix Nearness Problem
Now, we consider the solution group of the matrix nearness problem (3) for a given matrix group $(\bar X_1,\bar X_2,\ldots,\bar X_q)$, where $\bar X_k\in\mathbb{R}^{n\times n}$, $k=1,2,\ldots,q$. If $X_k\in SR^{n\times n}$, it is easy to prove that
$$\min_{X_k\in SR^{n\times n},\,k=1,\ldots,q}\ \sum_{k=1}^{q}\|X_k-\bar X_k\|^2=\min_{X_k\in SR^{n\times n},\,k=1,\ldots,q}\ \sum_{k=1}^{q}\Bigl\|X_k-\frac{\bar X_k+\bar X_k^T}{2}\Bigr\|^2+\sum_{k=1}^{q}\Bigl\|\frac{\bar X_k-\bar X_k^T}{2}\Bigr\|^2.\tag{17}$$
Let
$$\hat X_k=X_k-\frac{\bar X_k+\bar X_k^T}{2},\quad k=1,\ldots,q,\qquad \hat C_j=C_j-\sum_{k=1}^{q}A_{jk}\frac{\bar X_k+\bar X_k^T}{2}B_{jk},\quad j=1,\ldots,p;\tag{18}$$
then problem (3) is equivalent to finding the minimum Frobenius norm symmetric solution group or the minimum Frobenius norm least-squares symmetric solution group of the following problems, respectively.
Compatible matrix equations are as follows:
$$\sum_{k=1}^{q}A_{ik}\hat X_kB_{ik}=\hat C_i,\quad i=1,2,\ldots,p.\tag{19}$$
Least-squares problem is as follows:
$$\min\ \Biggl(\sum_{i=1}^{p}\Bigl\|\sum_{k=1}^{q}A_{ik}\hat X_kB_{ik}-\hat C_i\Bigr\|^2\Biggr)^{1/2}.\tag{20}$$
By the LSMR_SR_M method, we can obtain the minimum Frobenius norm symmetric solution group $(\hat X_1^*,\hat X_2^*,\ldots,\hat X_q^*)$ of (19) (or the minimum Frobenius norm least-squares symmetric solution group of problem (20)). Then, the optimal approximate solution group $(\tilde X_1,\tilde X_2,\ldots,\tilde X_q)$ of problem (3) can be obtained; that is, $\tilde X_k=\hat X_k^*+(\bar X_k+\bar X_k^T)/2$.
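The orthogonal decomposition behind (17) is easy to verify numerically: for a symmetric $X$, the error to an arbitrary target splits into the distance to the symmetric part of the target plus a constant skew-symmetric term. The check below is our own illustration with random data.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
X = rng.standard_normal((n, n))
X = X + X.T                            # symmetric candidate solution
Xbar = rng.standard_normal((n, n))     # arbitrary given target matrix

sym = (Xbar + Xbar.T) / 2
skew = (Xbar - Xbar.T) / 2
lhs = np.linalg.norm(X - Xbar, "fro") ** 2
rhs = np.linalg.norm(X - sym, "fro") ** 2 + np.linalg.norm(skew, "fro") ** 2
assert np.isclose(lhs, rhs)            # symmetric and skew parts are orthogonal
```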
5. Numerical Examples
To compare the behavior of the proposed matrix method discussed in the previous section with the CGNE method [43] and the matrix LSQR iterative method (LSQR_M) [37], we present in this section numerical results for three examples. All the numerical computations are performed in MATLAB 7.
Example 1.
Suppose that the matrices $A_{ij}$, $B_{ij}$, $C_j$, $i,j=1,2$, are given by
$$A_{11}=\begin{pmatrix}8&-1&2&-3&4\\5&0&2&2&1\\0&3&-7&2&4\\-2&-1&-1&2&4\\4&1&1&2&1\end{pmatrix},\qquad
B_{11}=\begin{pmatrix}9&-2&-3&1&1\\0&5&4&-2&-2\\2&5&-4&-8&1\\9&0&2&5&1\\-3&-3&1&1&9\end{pmatrix},$$
$$A_{12}=\begin{pmatrix}4&-3&4&-4&1\\-2&-2&6&-4&4\\9&6&0&5&1\\5&4&5&3&3\\9&1&7&-5&9\end{pmatrix},\qquad
B_{12}=\begin{pmatrix}-2&-1&-2&-3&2\\1&5&4&9&6\\9&-3&1&1&1\\-5&2&5&4&1\\4&-8&2&2&1\end{pmatrix},$$
$$A_{21}=\begin{pmatrix}1&2&3&-5&2\\4&6&1&-3&9\\4&4&-2&2&3\\2&2&8&1&-9\\3&1&4&-3&8\end{pmatrix},\qquad
B_{21}=\begin{pmatrix}9&2&3&-1&-1\\3&-1&8&8&5\\-1&-2&-8&-5&-2\\-1&-1&2&7&3\\1&-1&-8&8&2\end{pmatrix},$$
$$A_{22}=\begin{pmatrix}-9&2&3&2&1\\-3&3&9&4&3\\8&2&6&-8&8\\1&2&-9&4&-2\\2&-1&-2&-2&-3\end{pmatrix},\qquad
B_{22}=\begin{pmatrix}-1&-2&9&8&9\\1&-3&-3&-3&9\\-1&-3&-9&-5&9\\1&-9&-6&5&-1\\2&-7&-4&5&8\end{pmatrix}.\tag{21}$$
The matrices $C_1$ and $C_2$ are chosen such that $X_1=I_n$ and $X_2=E_n$, where $I_n$ and $E_n$ are the $n\times n$ identity matrix and the $n\times n$ matrix whose entries are all one, respectively.
In Figure 1, we display the convergence curves of the function $\log_{10}\delta_k$, with
$$\delta_k=\max\Biggl\{\frac{\|R_1^{(k)}\|_F}{\|R_1^{(0)}\|_F},\frac{\|R_2^{(k)}\|_F}{\|R_2^{(0)}\|_F}\Biggr\},\tag{22}$$
where $R_i^{(k)}$, $i=1,2$, is the residual matrix of the $i$th equation at the $k$th iteration. The initial iterative matrices in all the iterative methods are chosen as zero matrices of suitable size. Figure 1 confirms that the proposed algorithm has a faster convergence rate and higher accuracy than the CGNE method, and behaves similarly to the matrix LSQR iterative method.
Convergence history of the LSMR, CGNE, and LSQR iterative methods for Example 1.
Example 2.
Suppose that the matrices $A_{ij}$, $B_{ij}$, $C_j$, $i,j=1,2$, are given by
$$A_{11}=\operatorname{tridiag}(-1,6,-1),\quad B_{11}=\operatorname{tridiag}(1,8,-1),\quad A_{12}=0.1I_n,\quad B_{12}=\operatorname{tridiag}(1,0,1),$$
$$A_{21}=0.1I_n,\quad B_{21}=\operatorname{tridiag}(-2,1,-2),\quad A_{22}=\operatorname{tridiag}(-1,-3,-1),\quad B_{22}=\operatorname{tridiag}(1,6,2).\tag{23}$$
As in Example 1, the matrices $C_1$ and $C_2$ are chosen such that $X_1=I_n$ and $X_2=E_n$ with $n=400$, and the initial iterative matrices in all the iterative methods are chosen as zero matrices of suitable size. In Figure 2, as in Figure 1, we display the convergence curves of the function $\log_{10}\delta_k$. This figure shows that the LSMR method outperforms the CGNE and LSQR methods.
Convergence history of the LSMR, CGNE, and LSQR iterative methods for Example 2.
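The tridiagonal test matrices of Example 2 are straightforward to construct; the small sketch below is our own, with a helper `tridiag(a, b, c, n)` that assumes the usual convention of `a` on the subdiagonal, `b` on the diagonal, and `c` on the superdiagonal, and uses a small $n$ for illustration.

```python
import numpy as np

def tridiag(a, b, c, n):
    """n x n matrix with b on the diagonal, a below, and c above."""
    return (np.diag(np.full(n - 1, a), -1)
            + np.diag(np.full(n, b))
            + np.diag(np.full(n - 1, c), 1))

A11 = tridiag(-1, 6, -1, 5)
B21 = tridiag(-2, 1, -2, 5)
assert A11[0, 0] == 6 and A11[1, 0] == -1 and A11[0, 1] == -1
assert B21[2, 2] == 1 and B21[2, 3] == -2
```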
Example 3 (see [45]).
Consider the convection-diffusion equation with Dirichlet boundary conditions
$$Lu:=-\Delta u+2\nu\frac{\partial u}{\partial x}+2\nu\frac{\partial u}{\partial y}=f\ \text{ on }\Omega,\qquad u=g\ \text{ on }\partial\Omega.\tag{24}$$
Here $\Omega$ is the unit square $[0,1]\times[0,1]$. The operator $L$ was discretized using central finite differences on $\Omega$, with mesh size $h=1/(n+1)$ in the $x$ direction and $k=1/(p+1)$ in the $y$ direction. This yields a linear system of algebraic equations that can be written as a Sylvester matrix equation
$$AX-XB=C\tag{25}$$
(a particular case of (1) with $A_{11}=A$, $A_{12}=-I_n$, $B_{11}=I_p$, $B_{12}=B$, $C_1=C$, and $X_1=X_2=X$), where the tridiagonal matrices $A$ and $B$ are given by
$$A=-\frac{1}{h^2}\operatorname{tridiag}(1+\nu h,\,-2,\,1-\nu h),\qquad B=\frac{1}{k^2}\operatorname{tridiag}(1+\nu k,\,-2,\,1-\nu k).\tag{26}$$
The right-hand side matrix $C$ is obtained as follows:
$$C_{i,j}=f(x_{i+1},y_{j+1}),\quad i=2,\ldots,n-1,\ \ j=2,\ldots,p-1,$$
$$C_{1,1}=f(x_2,y_2)+\frac{1+\nu h}{h^2}g(0,y_2)+\frac{1+\nu k}{k^2}g(x_2,0),$$
$$C_{1,p}=f(x_2,y_{p+1})+\frac{1+\nu h}{h^2}g(0,y_{p+1})+\frac{1-\nu k}{k^2}g(x_2,1),$$
$$C_{n,1}=f(x_{n+1},y_2)+\frac{1-\nu h}{h^2}g(1,y_2)+\frac{1+\nu k}{k^2}g(x_{n+1},0),$$
$$C_{n,p}=f(x_{n+1},y_{p+1})+\frac{1-\nu h}{h^2}g(1,y_{p+1})+\frac{1-\nu k}{k^2}g(x_{n+1},1),$$
$$C_{1,j}=f(x_2,y_{j+1})+\frac{1+\nu h}{h^2}g(0,y_{j+1}),\quad j=2,\ldots,p-1,$$
$$C_{n,j}=f(x_{n+1},y_{j+1})+\frac{1-\nu h}{h^2}g(1,y_{j+1}),\quad j=2,\ldots,p-1,$$
$$C_{i,1}=f(x_{i+1},y_2)+\frac{1+\nu k}{k^2}g(x_{i+1},0),\quad i=2,\ldots,n-1,$$
$$C_{i,p}=f(x_{i+1},y_{p+1})+\frac{1-\nu k}{k^2}g(x_{i+1},1),\quad i=2,\ldots,n-1.\tag{27}$$
In this example, the functions $f$ and $g$ were chosen such that the exact solution is
$$u(x,y)=xe^{-x^2-y^2}\tag{28}$$
on the domain $\Omega$. In addition, we used the symmetric successive overrelaxation (SSOR) preconditioner for the matrix equation (25) to increase the convergence rate. It is easy to prove that the matrix equation (25) is equivalent to the $np\times np$ linear system
$$\tilde A\tilde x=\tilde c,\tag{29}$$
where $\tilde A=I_p\otimes A-B^T\otimes I_n$, $\tilde c=\operatorname{vec}(C)$, and $\tilde x=\operatorname{vec}(X)$.
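The equivalence (29) is again a direct consequence of the vec identity; the sketch below (ours, with small random matrices standing in for the discretized convection-diffusion matrices) checks that $\tilde A\operatorname{vec}(X)=\operatorname{vec}(AX-XB)$.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((p, p))
X = rng.standard_normal((n, p))
C = A @ X - X @ B

vec = lambda Mtx: Mtx.ravel(order="F")
# A~ = I_p (x) A - B^T (x) I_n, as in (29).
A_tilde = np.kron(np.eye(p), A) - np.kron(B.T, np.eye(n))
assert np.allclose(A_tilde @ vec(X), vec(C))
```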
The matrices $A$ and $B$ can be written as
$$A=D_A-E_A-F_A,\qquad B=D_B-E_B-F_B,\tag{30}$$
where $D_A$ is the diagonal of $A$, and $-E_A$ and $-F_A$ are the strict lower and upper parts of $A$, respectively. Then the splitting of the matrix $\tilde A$ is given as
$$\tilde A=\tilde D_{\tilde A}-\tilde E_{\tilde A}-\tilde F_{\tilde A},\tag{31}$$
with
$$\tilde D_{\tilde A}=I_p\otimes D_A-D_B\otimes I_n,\qquad \tilde E_{\tilde A}=I_p\otimes E_A-F_B^T\otimes I_n,\qquad \tilde F_{\tilde A}=I_p\otimes F_A-E_B^T\otimes I_n.\tag{32}$$
Now, instead of solving the matrix equation (25), we apply the LSMR-M algorithm to the preconditioned system
$$\tilde A\tilde\mu^{-1}\tilde y=\tilde c\quad\text{with}\quad \tilde y=\tilde\mu\tilde x,\tag{33}$$
where $\tilde\mu$ is a preconditioner. As noted, we use the SSOR preconditioner defined by
$$\tilde\mu_{SSOR}=\frac{1}{\omega(2-\omega)}\bigl(\tilde D_{\tilde A}-\omega\tilde E_{\tilde A}\bigr)\tilde D_{\tilde A}^{-1}\bigl(\tilde D_{\tilde A}-\omega\tilde F_{\tilde A}\bigr).\tag{34}$$
We note that the $np\times np$ matrix $\tilde A$ is not used explicitly. We only use the action of the linear operator $\mathcal A$ on a matrix $V\in\mathbb{R}^{n\times p}$, defined by $\mathcal A(V)=AV-VB$. In addition, since we use only matrix-by-vector products, when using the SSOR preconditioner we have to compute, for a given $V\in\mathbb{R}^{n\times p}$, the matrix $W\in\mathbb{R}^{n\times p}$ such that
$$\tilde w=\tilde A\tilde\mu_{SSOR}^{-1}\tilde v\quad\text{with}\quad \tilde w=\operatorname{vec}(W),\ \tilde v=\operatorname{vec}(V),\tag{35}$$
or
$$\tilde w=\bigl(\tilde A\tilde\mu_{SSOR}^{-1}\bigr)^T\tilde v=\bigl(\tilde\mu_{SSOR}^T\bigr)^{-1}\tilde A^T\tilde v\quad\text{with}\quad \tilde w=\operatorname{vec}(W),\ \tilde v=\operatorname{vec}(V).\tag{36}$$
Setting
$$\tilde r=\tilde\mu^{-1}\tilde v\iff \tilde v=\tilde\mu\tilde r\quad\text{with}\quad \tilde r=\operatorname{vec}(R),\tag{37}$$
the linear system (35) is equivalent to
$$\tilde w=\tilde A\tilde\mu_{SSOR}^{-1}\tilde v\iff \tilde w=\tilde A\tilde r.\tag{38}$$
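The splitting (31)-(32) can be verified numerically; the sketch below (ours, with small random $A$ and $B$) builds the pieces and checks that they reassemble $\tilde A=I_p\otimes A-B^T\otimes I_n$.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((p, p))

# Splitting (30): A = D_A - E_A - F_A, B = D_B - E_B - F_B.
DA, DB = np.diag(np.diag(A)), np.diag(np.diag(B))
EA, FA = -np.tril(A, -1), -np.triu(A, 1)
EB, FB = -np.tril(B, -1), -np.triu(B, 1)

A_tilde = np.kron(np.eye(p), A) - np.kron(B.T, np.eye(n))
D_tilde = np.kron(np.eye(p), DA) - np.kron(DB, np.eye(n))
E_tilde = np.kron(np.eye(p), EA) - np.kron(FB.T, np.eye(n))
F_tilde = np.kron(np.eye(p), FA) - np.kron(EB.T, np.eye(n))
assert np.allclose(A_tilde, D_tilde - E_tilde - F_tilde)   # splitting (31)
```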
For computing $R$ such that $\tilde w=\tilde A\tilde\mu_{SSOR}^{-1}\tilde v$, we have to solve the following matrix equations:
$$(D_A-\omega E_A)Y-Y(D_B-\omega F_B)=\omega(2-\omega)V,\tag{39}$$
$$D_AY-YD_B=Z,\tag{40}$$
$$(D_A-\omega F_A)R-R(D_B-\omega E_B)=Z.\tag{41}$$
The matrix equations (39) and (41) are also Sylvester matrix equations. But, as was stated in [45], since the matrices involved in these equations are triangular, they are solved easily. In (39), the matrix $Y$ can be computed from left to right and from top to bottom in each column; this corresponds to backward substitution. Equation (41) is solved in the opposite sense, which corresponds to forward substitution. Now, to compute $W$ in (35), it suffices to apply the operator $\mathcal A$ to the matrix $R$, that is, $\mathcal A(R)=AR-RB$.
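The substitution idea used for (39) can be sketched as follows: to solve $LY-YU=C$ with $L$ lower triangular and $U$ upper triangular, each column of $Y$ requires only one triangular solve. The implementation below is our own sketch with illustrative, well-conditioned random data.

```python
import numpy as np
from scipy.linalg import solve_triangular

def tri_sylvester(L, U, C):
    """Solve L Y - Y U = C with L lower and U upper triangular, column by column."""
    n, p = C.shape
    Y = np.zeros((n, p))
    for j in range(p):
        # (Y U)[:, j] = Y[:, :j] @ U[:j, j] + U[j, j] * Y[:, j], so
        # (L - U[j, j] I) y_j = c_j + Y[:, :j] @ U[:j, j].
        rhs = C[:, j] + Y[:, :j] @ U[:j, j]
        Y[:, j] = solve_triangular(L - U[j, j] * np.eye(n), rhs, lower=True)
    return Y

rng = np.random.default_rng(8)
n, p = 5, 4
# Shifted diagonals keep L - U[j, j] I nonsingular.
L = np.tril(rng.standard_normal((n, n))) + 5 * np.eye(n)
U = np.triu(rng.standard_normal((p, p))) - 5 * np.eye(p)
C = rng.standard_normal((n, p))
Y = tri_sylvester(L, U, C)
assert np.allclose(L @ Y - Y @ U, C)
```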
To compute $W$ in (36), we first apply the operator $\mathcal A^T$ to the matrix $V$, defined by $\mathcal A^T(V)=A^TV-VB^T$. Then, by setting
$$\tilde{\tilde v}=\operatorname{vec}\bigl(A^TV-VB^T\bigr),\tag{42}$$
the linear system (36) is equivalent to
$$\tilde w=\bigl(\tilde\mu_{SSOR}^T\bigr)^{-1}\tilde{\tilde v}\iff \tilde\mu_{SSOR}^T\tilde w=\tilde{\tilde v}.\tag{43}$$
Therefore, $W$ can be obtained by solving the following matrix equations:
$$(D_A-\omega E_A^T)Y-Y(D_B-\omega F_B^T)=\omega(2-\omega)\tilde V,\tag{44}$$
$$D_AY-YD_B=Z,\tag{45}$$
$$(D_A-\omega F_A^T)W-W(D_B-\omega E_B^T)=Z,\tag{46}$$
where $\tilde V$ is the matrix with $\operatorname{vec}(\tilde V)=\tilde{\tilde v}$.
Similarly, the matrix equations (44) and (46) are also Sylvester matrix equations with triangular coefficient matrices. In (44), the matrix $Y$ can be computed from right to left and from bottom to top in each row; this corresponds to forward substitution. Equation (46) is solved in the opposite sense, which corresponds to backward substitution.
In Figure 3, we exhibit the function $\log_{10}\delta_k$ with
$$\delta_k=\frac{\|C-(AX_k-X_kB)\|_F}{\|C-(AX_0-X_0B)\|_F}\tag{47}$$
versus the number of iterations for LSMR-M and SSOR-LSMR-M. Furthermore, we note that for computing the quantity $\|R_k\|_F$ ($R_k$ is the residual matrix at the $k$th iteration) we used the pseudocode stated in [40]. These results were obtained for $\nu=100$, $n=300$, $p=300$, and $\omega=0.9$. The initial iterative matrix was chosen as the zero matrix of suitable size. As we observe, using the SSOR preconditioner effectively increases the convergence rate of the LSMR-M algorithm.
Convergence history of the LSMR-M and SSOR-LSMR-M.
6. Conclusion
Solving linear matrix equations is an active area of research. By extending the idea of the LSMR method, we have proposed Algorithm 2 to solve the coupled matrix equations (1) or the least-squares problem (2) over generalized symmetric matrices. With a suitable choice of the initial matrix group, this iterative method yields the minimum Frobenius norm solutions or the minimum Frobenius norm least-squares solutions over generalized symmetric matrices. All the presented results show that the matrix LSMR iterative method is efficient for computing the solution group of the general coupled matrix equations.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors would like to thank the referees for their valuable remarks and helpful suggestions.
References
[1] T. Chen and B. A. Francis, Optimal Sampled-Data Control Systems, Springer, London, UK, 1995.
[2] T. Chen and L. Qiu, "H∞ design of general multirate sampled-data control systems," Automatica, vol. 30, no. 7, pp. 1139–1152, 1994.
[3] J. P. LaSalle and S. Lefschetz, Stability by Liapunov's Direct Method with Applications, Academic Press, New York, NY, USA, 1961.
[4] Z. Y. Li, Y. Wang, B. Zhou, and G. R. Duan, "Least squares solution with the minimum-norm to general matrix equations via iteration," Applied Mathematics and Computation, vol. 215, no. 10, pp. 3547–3562, 2010.
[5] B. C. Moore, "Principal component analysis in linear systems: controllability, observability, and model reduction," IEEE Transactions on Automatic Control, vol. 26, no. 1, pp. 17–32, 1981.
[6] L. Qiu and T. Chen, "Multirate sampled-data systems: all H∞ suboptimal controllers and the minimum entropy controller," IEEE Transactions on Automatic Control, vol. 44, no. 3, pp. 537–550, 1999.
[7] F. Toutounian and S. Karimi, "Global least squares method (Gl-LSQR) for solving general linear systems with several right-hand sides," Applied Mathematics and Computation, vol. 178, no. 2, pp. 452–460, 2006.
[8] M. Dehghan and M. Hajarian, "An efficient algorithm for solving general coupled matrix equations and its application," Mathematical and Computer Modelling, vol. 51, no. 9-10, pp. 1118–1134, 2010.
[9] F. Ding and T. Chen, "Gradient based iterative algorithms for solving a class of matrix equations," IEEE Transactions on Automatic Control, vol. 50, no. 8, pp. 1216–1221, 2005.
[10] F. Ding and T. Chen, "Iterative least-squares solutions of coupled Sylvester matrix equations," Systems & Control Letters, vol. 54, no. 2, pp. 95–107, 2005.
[11] F. Ding and T. Chen, "On iterative solutions of general coupled matrix equations," SIAM Journal on Control and Optimization, vol. 44, no. 6, pp. 2269–2284, 2006.
[12] G. X. Huang, N. Wu, F. Yin, Z. L. Zhou, and K. Guo, "Finite iterative algorithms for solving generalized coupled Sylvester systems. Part I: one-sided and generalized coupled Sylvester matrix equations over generalized reflexive solutions," Applied Mathematical Modelling, vol. 36, no. 4, pp. 1589–1603, 2012.
[13] I. Jonsson and B. Kågström, "Recursive blocked algorithms for solving triangular systems. Part I: one-sided and coupled Sylvester-type matrix equations," ACM Transactions on Mathematical Software, vol. 28, no. 4, pp. 392–415, 2002.
[14] I. Jonsson and B. Kågström, "Recursive blocked algorithms for solving triangular systems. Part II: two-sided and generalized Sylvester and Lyapunov matrix equations," ACM Transactions on Mathematical Software, vol. 28, no. 4, pp. 416–435, 2002.
[15] Z. H. Peng, X. Y. Hu, and L. Zhang, "The bisymmetric solutions of the matrix equation A1X1B1 + A2X2B2 + ⋯ + AlXlBl = C and its optimal approximation," Linear Algebra and Its Applications, vol. 426, no. 2-3, pp. 583–595, 2007.
[16] G. Starke and W. Niethammer, "SOR for AX − XB = C," Linear Algebra and Its Applications, vol. 154–156, pp. 355–375, 1991.
[17] C. Tsui, "New approach to robust observer design," International Journal of Control, vol. 47, no. 3, pp. 745–751, 1988.
[18] F. Yin, G. Huang, and D. Chen, "Finite iterative algorithms for solving generalized coupled Sylvester systems. Part II: two-sided and generalized coupled Sylvester matrix equations over reflexive solutions," Applied Mathematical Modelling, vol. 36, no. 4, pp. 1604–1614, 2012.
[19] B. Zhou, G. Duan, and Z. Li, "Gradient based iterative algorithm for solving coupled matrix equations," Systems & Control Letters, vol. 58, no. 5, pp. 327–333, 2009.
[20] Z. Chen and L. Lu, "A gradient based iterative solutions for Sylvester tensor equations," Mathematical Problems in Engineering, vol. 2013, Article ID 819479, 2013.
[21] D. Chen, F. Yin, and G. X. Huang, "An iterative algorithm for the generalized reflexive solution of the matrix equations AXB = E, CXD = F," Journal of Applied Mathematics, vol. 2012, Article ID 492951, 2012.
[22] F. Ding, "Combined state and least squares parameter estimation algorithms for dynamic systems," Applied Mathematical Modelling, vol. 38, no. 1, pp. 403–412, 2014.
[23] F. Ding, Y. Liu, and B. Bao, "Gradient-based and least-squares-based iterative estimation algorithms for multi-input multi-output systems," Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, vol. 226, no. 1, pp. 43–55, 2012.
[24] K. Wang, Z. Liu, and C. Xu, "A modified gradient based algorithm for solving matrix equations AXB + CXᵀD = F," Journal of Applied Mathematics, vol. 2014, Article ID 954523, 2014.
[25] H. Yin and H. Zhang, "Least squares based iterative algorithm for the coupled Sylvester matrix equations," Mathematical Problems in Engineering, vol. 2014, Article ID 831321, 2014.
[26] H. Zhang and F. Ding, "A property of the eigenvalues of the symmetric positive definite matrix and the iterative algorithm for coupled Sylvester matrix equations," Journal of the Franklin Institute, vol. 351, no. 1, pp. 340–357, 2014.
[27] J. Zhou, R. Wang, and Q. Niu, "A preconditioned iteration method for solving Sylvester equations," Journal of Applied Mathematics, vol. 2012, Article ID 401059, 2012.
[28] F. Ding and T. Chen, "Hierarchical gradient-based identification of multivariable discrete-time systems," Automatica, vol. 41, no. 2, pp. 315–325, 2005.
[29] F. Ding and T. Chen, "Hierarchical least squares identification methods for multivariable systems," IEEE Transactions on Automatic Control, vol. 50, no. 3, pp. 397–402, 2005.
[30] F. Ding, X. Liu, H. Chen, and G. Yao, "Hierarchical gradient based and hierarchical least squares based iterative parameter identification for CARARMA systems," Signal Processing, vol. 97, pp. 31–39, 2014.
[31] F. Ding, P. X. Liu, and J. Ding, "Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle," Applied Mathematics and Computation, vol. 197, no. 1, pp. 41–50, 2008.
[32] Y. Liu, F. Ding, and Y. Shi, "An efficient hierarchical identification method for general dual-rate sampled-data systems," Automatica, vol. 50, no. 3, pp. 962–973, 2014.
[33] B. Zhou and G. R. Duan, "On the generalized Sylvester mapping and matrix equations," Systems & Control Letters, vol. 57, no. 3, pp. 200–208, 2008.
[34] M. Dehghan and M. Hajarian, "Efficient iterative method for solving the second-order Sylvester matrix equation EVF² − AVF − CV = BW," IET Control Theory & Applications, vol. 3, no. 10, pp. 1401–1408, 2009.
[35] M. Dehghan and M. Hajarian, "The general coupled matrix equations over generalized bisymmetric matrices," Linear Algebra and Its Applications, vol. 432, no. 6, pp. 1531–1552, 2010.
[36] M. Hajarian and M. Dehghan, "The generalized centro-symmetric and least squares generalized centro-symmetric solutions of the matrix equation AYB + CYᵀD = E," Mathematical Methods in the Applied Sciences, vol. 34, no. 13, pp. 1562–1579, 2011.
[37] S. Li and T. Huang, "LSQR iterative method for generalized coupled Sylvester matrix equations," Applied Mathematical Modelling, vol. 36, no. 8, pp. 3545–3554, 2012.
[38] M. Hajarian, "The generalized QMRCGSTAB algorithm for solving Sylvester-transpose matrix equations," Applied Mathematics Letters, vol. 26, no. 10, pp. 1013–1017, 2013.
[39] Y. Lin and V. Simoncini, "Minimal residual methods for large scale Lyapunov equations," Applied Numerical Mathematics, vol. 72, pp. 52–71, 2013.
[40] D. C. Fong and M. Saunders, "LSMR: an iterative algorithm for sparse least-squares problems," SIAM Journal on Scientific Computing, vol. 33, no. 5, pp. 2950–2971, 2011.
[41] M. Dehghan and M. Hajarian, "An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation," Applied Mathematics and Computation, vol. 202, no. 2, pp. 571–588, 2008.
[42] M. L. Liang, L. F. Dai, and S. F. Wang, "An iterative method for the (R,S)-symmetric solution of the matrix equation AXB = C," 2008.
[43] M. Dehghan and M. Hajarian, "An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices," Applied Mathematical Modelling, vol. 34, no. 3, pp. 639–654, 2010.
[44] G. Golub and W. Kahan, "Calculating the singular values and pseudo-inverse of a matrix," Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis, vol. 2, no. 2, pp. 205–224, 1965.
[45] A. Bouhamidi and K. Jbilou, "A note on the numerical approximate solutions for generalized Sylvester matrix equations with applications," Applied Mathematics and Computation, vol. 206, no. 2, pp. 687–694, 2008.