LSMR Iterative Method for General Coupled Matrix Equations

This paper presents an iterative method, based on the LSMR algorithm, for solving the general coupled matrix equations $\sum_{k=1}^{q} A_{ik} X_k B_{ik} = C_i$, $i = 1, 2, \ldots, p$ (including the generalized (coupled) Lyapunov and Sylvester matrix equations as special cases) over some constrained matrix groups $(X_1, X_2, \ldots, X_q)$, such as symmetric, generalized bisymmetric, and $(R, S)$-symmetric matrix groups. By this iterative method, for any initial matrix group, a solution group $(X_1, X_2, \ldots, X_q)$ can be obtained within finitely many iteration steps in the absence of round-off errors, and the minimum Frobenius norm solution group or the minimum Frobenius norm least-squares solution group can be derived when an appropriate initial iterative matrix group is chosen. In addition, the optimal approximation solution group to a given matrix group $(\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_q)$ in the Frobenius norm can be obtained by finding the least Frobenius norm solution group of new general coupled matrix equations. Finally, numerical examples are given to illustrate the effectiveness of the presented method.

In [12,17], Huang et al. presented finite iterative algorithms for solving generalized coupled Sylvester systems.
Li and Huang [37] proposed a matrix LSQR iterative method to solve for the constrained solutions of the generalized coupled Sylvester matrix equations. Hajarian [38] presented the generalized QMRCGSTAB algorithm for solving Sylvester-transpose matrix equations. Recently, Lin and Simoncini [39] established minimal residual methods for large-scale Lyapunov equations; they explored the numerical solution of this class of linear matrix equations when a minimal residual (MR) condition is imposed during the projection step.
In this paper, we construct a matrix iterative method based on the LSMR algorithm [40] to solve the constrained solutions of the following problems.
Compatible matrix equations:
$$\sum_{k=1}^{q} A_{ik} X_k B_{ik} = C_i, \quad i = 1, 2, \ldots, p, \qquad (1)$$
and the least-squares problem:
$$\min_{(X_1, \ldots, X_q)} \sum_{i=1}^{p} \Big\| \sum_{k=1}^{q} A_{ik} X_k B_{ik} - C_i \Big\|^2, \qquad (2)$$
where $A_{ik}$, $B_{ik}$, $i = 1, 2, \ldots, p$, $k = 1, 2, \ldots, q$, are constant matrices with suitable dimensions, $X_k$, $k = 1, 2, \ldots, q$, are the unknown matrices to be solved for, and $C_i$, $i = 1, 2, \ldots, p$, are given matrices. In the matrix nearness problem (3), $\mathcal{S}$ denotes the solution set of (1) or of problem (2). This paper is organized as follows. In Section 2, we briefly review the LSMR algorithm for solving linear systems of equations. In Section 3, we propose the matrix LSMR iterative algorithms for solving problems (1)-(2). In Section 4, we solve problem (3) by finding the minimum Frobenius norm solution group of corresponding new general coupled matrix equations. In Section 5, numerical examples are given to illustrate the efficiency of the proposed iterative method. Finally, we make some concluding remarks in Section 6.
The notation used in this paper can be summarized as follows. $\operatorname{tr}(A)$ represents the trace of the matrix $A$. For $A, B \in \mathbb{R}^{m \times n}$, $A \otimes B$ denotes the Kronecker product and $\langle A, B \rangle = \operatorname{tr}(B^{T} A)$ is the inner product, with the associated Frobenius norm $\|A\| = \sqrt{\langle A, A \rangle} = \sqrt{\operatorname{tr}(A^{T} A)}$. The vector operator is defined as $\operatorname{vec}(A) = (a_1^{T}, a_2^{T}, \ldots, a_n^{T})^{T}$, where $a_j$ is the $j$th column of $A$. The generalized bisymmetric matrices, the $(R, S)$-symmetric matrices, and the symmetric orthogonal matrices can be defined as follows.
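These conventions can be checked concretely. The sketch below is our own illustration (not part of the paper): it verifies the column-stacking identity $\operatorname{vec}(AXB) = (B^{T} \otimes A)\operatorname{vec}(X)$ that underlies the Kronecker reformulation used later, together with the Frobenius inner product. Note that NumPy reshapes row-major by default, so the column-stacking `vec` must use `order="F"`.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

def vec(M):
    # Column-stacking operator: stack the columns of M into one long vector.
    return M.reshape(-1, order="F")

# Identity behind the Kronecker reformulation: vec(A X B) = (B^T kron A) vec(X).
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)

# Frobenius inner product <A, C> = tr(C^T A) and the induced norm.
C = rng.standard_normal(A.shape)
inner = np.trace(C.T @ A)
assert np.isclose(inner, np.sum(C * A))
assert np.isclose(np.sqrt(np.trace(A.T @ A)), np.linalg.norm(A, "fro"))
```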

LSMR Algorithm
In this section, we briefly review some fundamental properties of the LSMR algorithm [40], which is an iterative method for computing a solution $x$ to either of the following problems.
Compatible linear systems:
$$Ax = b. \qquad (5)$$
Least-squares problem:
$$\min_{x} \|Ax - b\|_2, \qquad (6)$$
where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^{m}$. The LSMR algorithm uses an algorithm of Golub and Kahan [44], stated here as procedure Bidiag 4, to reduce $A$ to lower bidiagonal form.
The procedure Bidiag 4 can be described as follows:
$$\beta_1 u_1 = b, \qquad \alpha_1 v_1 = A^{T} u_1,$$
$$\beta_{k+1} u_{k+1} = A v_k - \alpha_k u_k, \qquad \alpha_{k+1} v_{k+1} = A^{T} u_{k+1} - \beta_{k+1} v_k, \quad k = 1, 2, \ldots.$$
The scalars $\alpha_k \geq 0$ and $\beta_k \geq 0$ are chosen such that $\|u_k\|_2 = \|v_k\|_2 = 1$. The following properties, presented in [7], illustrate that the procedure Bidiag 4 has the finite termination property.
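For concreteness, the standard Golub-Kahan bidiagonalization recurrence that procedure Bidiag 4 follows can be sketched in NumPy as below; the function and variable names are ours, not the paper's, and the check at the end is the orthonormality asserted by Property 1.

```python
import numpy as np

def golub_kahan(A, b, steps):
    """Lower-bidiagonalize A against starting vector b.

    Recurrence: beta_1 u_1 = b, alpha_1 v_1 = A^T u_1,
    beta_{k+1} u_{k+1} = A v_k - alpha_k u_k,
    alpha_{k+1} v_{k+1} = A^T u_{k+1} - beta_{k+1} v_k,
    with alpha_k, beta_k >= 0 normalizing u_k and v_k.
    """
    m, n = A.shape
    U = np.zeros((m, steps + 1))
    V = np.zeros((n, steps))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    v = A.T @ U[:, 0]
    for k in range(steps):
        alpha = np.linalg.norm(v)
        V[:, k] = v / alpha
        u = A @ V[:, k] - alpha * U[:, k]
        beta = np.linalg.norm(u)
        U[:, k + 1] = u / beta
        v = A.T @ U[:, k + 1] - beta * V[:, k]
    return U, V

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)
U, V = golub_kahan(A, b, 4)
# Property 1: the u's and v's are orthonormal (in exact arithmetic).
assert np.allclose(U.T @ U, np.eye(5), atol=1e-10)
assert np.allclose(V.T @ V, np.eye(4), atol=1e-10)
```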
Property 1. Suppose that $k$ steps of the procedure Bidiag 4 have been taken; then the vectors $v_1, v_2, \ldots, v_k$ and $u_1, u_2, \ldots, u_k, u_{k+1}$ are orthonormal bases of the Krylov subspaces $\mathcal{K}_k(A^{T}A, v_1)$ and $\mathcal{K}_{k+1}(AA^{T}, u_1)$, respectively.

Property 2. The procedure Bidiag 4 stops at step $\ell$ if and only if $\min\{\mu, \nu\} = \ell$, where $\mu$ is the grade of $v_1$ with respect to $A^{T}A$ and $\nu$ is the grade of $u_1$ with respect to $AA^{T}$.

By using the procedure Bidiag 4, the LSMR method constructs an approximate solution of the form $x_k = V_k y_k$, where $V_k = (v_1, v_2, \ldots, v_k)$, which solves the least-squares problem $\min_{y_k} \|A^{T} r_k\|$, where $r_k = b - A x_k$ is the residual for the approximate solution $x_k$. The main steps of the LSMR algorithm are summarized in Algorithm 1.
More details about the LSMR algorithm can be found in [40].
The stopping criterion $\|A^{T} r_k\| = \|A^{T}(b - A x_k)\|_2 = |\bar{\zeta}_{k+1}|$ may be used both for the compatible linear systems (5) and for the least-squares problem (6). Other stopping criteria can also be used and are not listed here; the reader is referred to [40] for details. Clearly, the sequence $x_k \in \operatorname{Range}(A^{T})$ generated by the LSMR algorithm converges to the unique minimum norm solution of (5) or the unique minimum norm least-squares solution of problem (6).
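These minimum norm properties can be observed with a stock implementation, `scipy.sparse.linalg.lsmr`. The sketch below is our own check, not the paper's code: for a deliberately rank-deficient least-squares problem, which has infinitely many minimizers, the LSMR iterate agrees with the minimum norm solution given by the pseudoinverse.

```python
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(2)
# Rank-deficient A (rank 3 in R^{6x5}): the least-squares problem (6)
# has infinitely many minimizers; LSMR should pick the minimum norm one.
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))
b = rng.standard_normal(6)

x = lsmr(A, b, atol=1e-12, btol=1e-12, conlim=1e12, maxiter=100)[0]
x_min = np.linalg.pinv(A) @ b   # minimum norm least-squares solution

assert np.allclose(x, x_min, atol=1e-6)
```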

A Matrix LSMR Iterative Method
In this section, we present our matrix iterative method, based on the LSMR algorithm, for solving (1) and problem (2). For the unknown matrices $X_k \in \mathbb{R}^{n_k \times m_k}$, $k = 1, 2, \ldots, q$, by using the Kronecker product, (1) and problem (2) are equivalent to (5) and problem (6), respectively, with
$$A = \begin{pmatrix} B_{11}^{T} \otimes A_{11} & \cdots & B_{1q}^{T} \otimes A_{1q} \\ \vdots & & \vdots \\ B_{p1}^{T} \otimes A_{p1} & \cdots & B_{pq}^{T} \otimes A_{pq} \end{pmatrix}, \qquad x = \begin{pmatrix} \operatorname{vec}(X_1) \\ \vdots \\ \operatorname{vec}(X_q) \end{pmatrix}, \qquad b = \begin{pmatrix} \operatorname{vec}(C_1) \\ \vdots \\ \operatorname{vec}(C_p) \end{pmatrix}.$$
Hence, by using the invariance of the Frobenius norm under unitary transformations, it is easy to prove that the vector-form algorithm can be rewritten in matrix form. If the unknown matrices $X_1, X_2, \ldots, X_q$ are constrained to one of the structured classes above (symmetric, generalized bisymmetric, or $(R, S)$-symmetric), then (1) and problem (2) are again equivalent to (5) and problem (6), respectively, with a correspondingly modified coefficient matrix.
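The Kronecker reformulation can be sketched for $p = q = 2$ with randomly generated coefficients. This is an illustrative setup of ours, using SciPy's vector-form `lsmr` on the assembled block matrix rather than the matrix algorithm of this section.

```python
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(3)
n = 4  # all blocks n x n for simplicity
A_ = {(i, k): rng.standard_normal((n, n)) for i in (1, 2) for k in (1, 2)}
B_ = {(i, k): rng.standard_normal((n, n)) for i in (1, 2) for k in (1, 2)}
X_true = {k: rng.standard_normal((n, n)) for k in (1, 2)}
# Consistent right-hand sides: C_i = sum_k A_ik X_k B_ik, i = 1, 2.
C = {i: sum(A_[(i, k)] @ X_true[k] @ B_[(i, k)] for k in (1, 2)) for i in (1, 2)}

vec = lambda M: M.reshape(-1, order="F")
# Coefficient matrix of the vectorized system: block (i, k) is B_ik^T kron A_ik.
M = np.block([[np.kron(B_[(i, k)].T, A_[(i, k)]) for k in (1, 2)] for i in (1, 2)])
b = np.concatenate([vec(C[1]), vec(C[2])])

x = lsmr(M, b, atol=1e-14, btol=1e-14, maxiter=500)[0]
X1 = x[: n * n].reshape(n, n, order="F")
X2 = x[n * n :].reshape(n, n, order="F")

# The recovered group satisfies both coupled equations.
for i in (1, 2):
    R = C[i] - (A_[(i, 1)] @ X1 @ B_[(i, 1)] + A_[(i, 2)] @ X2 @ B_[(i, 2)])
    assert np.linalg.norm(R, "fro") < 1e-5
```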

The Solution Group of Problem (3)
Now, we consider the solution group of the matrix nearness problem (3): for a given matrix group $(\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_q)$, find $(\hat{X}_1, \hat{X}_2, \ldots, \hat{X}_q) \in \mathcal{S}$ such that
$$\sum_{k=1}^{q} \|\hat{X}_k - \bar{X}_k\|^2 = \min_{(X_1, \ldots, X_q) \in \mathcal{S}} \sum_{k=1}^{q} \|X_k - \bar{X}_k\|^2,$$
where $\mathcal{S}$ is the solution set of (1) or of problem (2). Setting $\tilde{X}_k = X_k - \bar{X}_k$ reduces problem (3) to finding the minimum Frobenius norm solution group of the new general coupled matrix equations $\sum_{k=1}^{q} A_{ik} \tilde{X}_k B_{ik} = C_i - \sum_{k=1}^{q} A_{ik} \bar{X}_k B_{ik}$, $i = 1, 2, \ldots, p$.
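The shift underlying this reduction can be illustrated on a single underdetermined equation $AXB = C$ (our own sketch, using SciPy's `lsmr` for the minimum norm step): the solution nearest to $\bar{X}$ is $\bar{X}$ plus the minimum Frobenius norm solution of the shifted equation.

```python
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 4))
B = rng.standard_normal((4, 3))
X_sol = rng.standard_normal((4, 4))
C = A @ X_sol @ B                     # consistent and underdetermined: many solutions
X_bar = rng.standard_normal((4, 4))   # given target matrix

vec = lambda M: M.reshape(-1, order="F")
K = np.kron(B.T, A)

# Minimum norm solution of the shifted equation A Xt B = C - A X_bar B ...
rhs = vec(C - A @ X_bar @ B)
Xt = lsmr(K, rhs, atol=1e-14, btol=1e-14, maxiter=100)[0].reshape(4, 4, order="F")
X_hat = X_bar + Xt                    # ... shifted back: the solution nearest X_bar

assert np.linalg.norm(A @ X_hat @ B - C, "fro") < 1e-8
# X_hat is at least as close to X_bar as the particular solution X_sol.
assert np.linalg.norm(X_hat - X_bar, "fro") <= np.linalg.norm(X_sol - X_bar, "fro") + 1e-8
```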

Numerical Examples
To compare the behavior of the proposed matrix method discussed in the previous section with the CGNE method [43] and the matrix LSQR iterative method (LSQR M) [37], we present in this section numerical results for three examples. All the numerical computations are performed in MATLAB 7.
Example 1. Suppose that the matrices $A_{ik}$, $B_{ik}$, $C_i$, $i, k = 1, 2$, are given by the following matrices: where $R_i^{(k)}$, $i = 1, 2$, is the residual matrix of the $i$th equation at the $k$th iteration. The initial iterative matrices in all the iterative methods are chosen as zero matrices of suitable size. Figure 1 confirms that the proposed algorithm has a faster convergence rate and higher accuracy than the CGNE method and behaves similarly to the matrix LSQR iterative method.
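The flavor of this comparison can be reproduced on a small random consistent system. This is our own setup, not the paper's Example 1 data; here CGNE is implemented as Craig's method (CG applied to $AA^{T}y = b$ with $x = A^{T}y$), which may differ in details from the method of [43].

```python
import numpy as np
from scipy.sparse.linalg import lsmr

def cgne(A, b, iters):
    """CG applied to A A^T y = b with x = A^T y (Craig's method / CGNE).

    For a consistent system this converges to the minimum norm solution.
    """
    x = np.zeros(A.shape[1])
    r = b.copy()
    p = A.T @ r
    rho = r @ r
    for _ in range(iters):
        alpha = rho / (p @ p)
        x = x + alpha * p
        r = r - alpha * (A @ p)
        rho_new = r @ r
        p = A.T @ r + (rho_new / rho) * p
        rho = rho_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((20, 30))   # underdetermined, hence consistent below
b = A @ rng.standard_normal(30)

x_cg = cgne(A, b, 40)
x_ls = lsmr(A, b, atol=1e-12, btol=1e-12, maxiter=200)[0]

# Both iterations reach the same minimum norm solution of A x = b.
assert np.linalg.norm(A @ x_cg - b) < 1e-6
assert np.allclose(x_cg, x_ls, atol=1e-5)
```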
Example 2. Suppose that the matrices $A_{ik}$, $B_{ik}$, $C_i$, $i, k = 1, 2$, are given by the following matrices: As in Example 1, the matrices $C_1$ and $C_2$ are chosen so that the equations are consistent (with dimension parameter 400), and the initial iterative matrices in all the iterative methods are chosen as zero matrices of suitable size. In Figure 2, as in Figure 1, we display the convergence curves of $\log_{10}$ of the residual norms. This figure shows that the LSMR method outperforms the CGNE and LSQR methods.

Here $\Omega$ is the unit square $[0, 1] \times [0, 1]$. The operator was discretized using central finite differences on $\Omega$, with mesh size $h = 1/(n + 1)$ in the $x$ direction and $1/(m + 1)$ in the $y$ direction. This yields a linear system of algebraic equations that can be written as a Sylvester matrix equation