Least Squares Based Iterative Algorithm for the Coupled Sylvester Matrix Equations

By analyzing the eigenvalues of the related matrices, this paper gives a convergence analysis of the least squares based iterative algorithm for solving the coupled Sylvester matrix equations AX + YB = C and DX + YE = F. The analysis shows that the optimal convergence factor of this iterative algorithm is 1. In addition, the proposed algorithm can solve the generalized Sylvester equation AXB + CXD = F. The analysis demonstrates that if the matrix equation has a unique solution, then the least squares based iterative solution converges to the exact solution for any initial values. A numerical example illustrates the effectiveness of the proposed algorithm.


Introduction
Matrix equations arise in systems and control, for example the Lyapunov matrix equation and the Riccati equation. Solving these matrix equations has become a major field of matrix computations [1][2][3][4]. The main topics of this field are the decompositions and transformations of matrices, eigenvalues and eigenvectors, and Krylov subspace methods [5][6][7][8]. Other topics include algorithms and their convergence analysis. An algorithm provides a set of operational steps by which a solution of a matrix equation can be found, within a given error bound, in finitely many steps [9][10][11]. The convergence analysis offers more details about an algorithm, and in general these details point to new research areas [12][13][14][15].
The direct method and the indirect method are the two main approaches to solving matrix equations [16][17][18]. With the growing demands of computation and the development of matrix theory, the indirect, or iterative, method has become the main approach to solving matrix equations [19,20].
The Jacobi and Gauss-Seidel iterative methods were discussed in the literature. By extending the Jacobi and Gauss-Seidel iterative methods, Ding and his coworkers recently presented a large family of least squares based iterative methods for solving the matrix equations Ax = b and AXB = F [47][48][49]. It has been proved that these least squares based iterative solutions always converge fast to the exact ones as long as the unique solutions exist, but the range of the convergence factor remains open. Motivated by the importance of this algorithm, we develop a new way to prove the convergence of the least squares based iterative algorithm for the coupled Sylvester matrix equations AX + YB = C and DX + YE = F. Based on the new proof, we obtain the optimal convergence factor of this iterative algorithm and extend the algorithm to solve the generalized Sylvester matrix equation AXB + CXD = F. The algorithm in this paper can be extended to other, more general and complex matrix equations [50][51][52].

Mathematical Problems in Engineering
The paper is organized as follows. Section 2 gives some preliminaries. Section 3 presents a new proof of the convergence of the least squares based iterative algorithm for solving the coupled Sylvester matrix equations AX + YB = C and DX + YE = F. Section 4 adapts this algorithm to solve the equation AXB + CXD = F. Section 5 gives an example to illustrate the effectiveness of the proposed results. Finally, we offer some concluding remarks in Section 6.

Basic Preliminaries
Some symbols and lemmas are introduced first. I_n is the identity matrix of size n × n; I is an identity matrix with an appropriate size. λ[M] denotes the eigenvalue set of the matrix M. |X| = det[X] denotes the matrix determinant. ‖X‖ denotes the Frobenius norm of X and is defined as

‖X‖^2 := tr[X^T X].  (1)

col[X] denotes the vector formed by stacking the columns of X, and A ⊗ B represents the Kronecker product of the matrices A and B.
A formula involving the vec-operator col and the Kronecker product is col[AXB] = (B^T ⊗ A) col[X] [53].
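This identity is easy to verify numerically; the following sketch (with arbitrary random test matrices, not taken from the paper) checks it, noting that col[·] corresponds to column-major (Fortran-order) flattening:

```python
import numpy as np

# Numerical check of the identity col[AXB] = (B^T kron A) col[X].
# col[.] stacks the columns of a matrix, i.e., column-major flattening.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

col = lambda M: M.flatten(order="F")  # the vec-operator

lhs = col(A @ X @ B)
rhs = np.kron(B.T, A) @ col(X)
print(np.allclose(lhs, rhs))  # True
```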
If the matrix A^T A is invertible, then A(A^T A)^{-1} A^T is symmetric and idempotent. To be more specific, we have the following lemma.

Lemma 1. For the symmetric matrix A(A^T A)^{-1} A^T, there exists an orthogonal matrix Q such that Q^T [A(A^T A)^{-1} A^T] Q = diag[I_n, 0], where n is the number of columns of A.
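A minimal numerical sketch of Lemma 1 (random test matrix, chosen for illustration): the projector P = A(A^T A)^{-1} A^T is symmetric, idempotent, and has eigenvalues 1 (with multiplicity n) and 0, consistent with the diagonalization above.

```python
import numpy as np

# Check that P = A (A^T A)^{-1} A^T is symmetric and idempotent,
# with eigenvalues 1 (multiplicity n = 3) and 0.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))  # full column rank with probability 1

P = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(P, P.T))     # symmetric: True
print(np.allclose(P @ P, P))   # idempotent: True
eigs = np.sort(np.linalg.eigvalsh(P))
print(np.allclose(eigs, [0, 0, 0, 1, 1, 1]))  # True
```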

The Coupled Sylvester Matrix Equations
In this section, we will prove the convergence of the least squares based iterative algorithm for solving the following coupled Sylvester matrix equations:

AX + YB = C,  DX + YE = F,  (4)

where A, D ∈ R^{m×m}, B, E ∈ R^{n×n}, and C, F ∈ R^{m×n} are known and X, Y ∈ R^{m×n} are to be determined. By using the hierarchical identification principle, the least squares based iterative algorithm for solving (4) is given by [47]

X(k) = X(k-1) + μ (A^T A + D^T D)^{-1} { A^T [C - A X(k-1) - Y(k-1) B] + D^T [F - D X(k-1) - Y(k-1) E] },
Y(k) = Y(k-1) + μ { [C - A X(k-1) - Y(k-1) B] B^T + [F - D X(k-1) - Y(k-1) E] E^T } (B B^T + E E^T)^{-1}.  (5)

Here, μ is the convergence factor. To initialize the algorithm, we take X(0) and Y(0) as some small real matrices, for example, X(0) = Y(0) = 10^{-6} 1_{m×n}, with 1_{m×n} being the m × n matrix whose elements are all 1.
Theorem 2. If (4) has a unique solution pair (X, Y), then for any initial values X(0) and Y(0) the iterative solutions X(k) and Y(k) given by iteration (5) converge to X and Y as k → ∞; that is, the error matrices X(k) - X and Y(k) - Y converge to zero. In this case, the optimal convergence factor is μ = 1.

The Generalized Sylvester Matrix Equation AXB + CXD = F
In this section, we use iteration (5) to solve the generalized Sylvester matrix equation. Consider the following equation:

AXB + CXD = F,  (36)

where A, C ∈ R^{m×m}, B, D ∈ R^{n×n}, and F ∈ R^{m×n} are given constant matrices and X ∈ R^{m×n} is the unknown matrix to be solved. The following conclusion is obvious.
Equation (36) has a unique solution if and only if the matrix B^T ⊗ A + D^T ⊗ C is nonsingular. (37)

In this case, the unique solution is given by col[X] = (B^T ⊗ A + D^T ⊗ C)^{-1} col[F]. Setting V := XB ∈ R^{m×n} and W := CX ∈ R^{m×n}, (36) can be equivalently expressed as the coupled Sylvester matrix equations

AV + WD = F,  CV - WB = 0.  (38)

If the coefficient matrix of (38) is nonsingular, (39) then (38) has a unique solution. It is easy to show that if |C| ≠ 0 or |B| ≠ 0, then (38) is equivalent to (36). Next, we show that if |C| ≠ 0 or |B| ≠ 0, then (39) is equivalent to (37); that is, the coefficient matrix of (38) is nonsingular exactly when B^T ⊗ A + D^T ⊗ C is. According to Theorem 2, (38) can be solved by iteration (5), and then (36) is solved by recovering X from XB = V or CX = W.
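The Kronecker-product solution (37) and the substitution leading to (38) can be checked directly; the following sketch uses random test matrices (not from the paper), for which the coefficient matrix is nonsingular with probability 1:

```python
import numpy as np

# Solve AXB + CXD = F via col[X] = (B^T kron A + D^T kron C)^{-1} col[F]
# and check the substitution V = XB, W = CX against system (38).
rng = np.random.default_rng(2)
m, n = 3, 4
A, C = rng.standard_normal((m, m)), rng.standard_normal((m, m))
B, D = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X_true = rng.standard_normal((m, n))
F = A @ X_true @ B + C @ X_true @ D

col = lambda M: M.flatten(order="F")       # column-stacking vec-operator
S = np.kron(B.T, A) + np.kron(D.T, C)      # coefficient matrix in (37)
X = np.linalg.solve(S, col(F)).reshape((m, n), order="F")
print(np.allclose(X, X_true))              # True

# The pair V := XB, W := CX satisfies the coupled system (38):
V, W = X @ B, C @ X
print(np.allclose(A @ V + W @ D, F))       # True
print(np.allclose(C @ V - W @ B, 0))       # True
```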

Example
In this section, an example is offered to illustrate the convergence of the proposed iterative algorithm.
The unique solution is found to be

Taking X(0) = Y(0) = 10^{-6} 1_{2×2} as the initial iterative values and using iteration (5) to compute X(k) and Y(k), the iterative values of X(k) and Y(k) are shown in Table 1, together with the relative error

δ := sqrt(‖X(k) - X‖^2 + ‖Y(k) - Y‖^2) / sqrt(‖X‖^2 + ‖Y‖^2).

The effect of changing the convergence factor μ is illustrated in Figure 1.
From Table 1 and Figure 1, we find that the relative error goes to zero as the number of iterations increases. This shows that the proposed iterative algorithm is effective. In addition, Figure 1 shows that the optimal convergence factor is μ = 1.0, which confirms the optimal convergence result of this paper.
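The qualitative behavior in Figure 1 can be reproduced in a small sketch. The instance below is the same illustrative one used earlier (A = 2I, D = I, B = I, E = 2I; not the paper's example): after a fixed number of iterations, μ = 1 leaves a smaller relative error than an undersized convergence factor.

```python
import numpy as np

# Effect of the convergence factor mu in iteration (5) on a simple
# illustrative instance; mu = 1 should give the fastest error decay.
def run(mu, iters=50):
    m = n = 2
    A, D = 2 * np.eye(m), np.eye(m)
    B, E = np.eye(n), 2 * np.eye(n)
    X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
    Y_true = np.array([[0.0, 1.0], [-1.0, 2.0]])
    C, F = A @ X_true + Y_true @ B, D @ X_true + Y_true @ E
    X = Y = 1e-6 * np.ones((m, n))
    P = np.linalg.inv(A.T @ A + D.T @ D)
    Q = np.linalg.inv(B @ B.T + E @ E.T)
    for _ in range(iters):
        R1, R2 = C - A @ X - Y @ B, F - D @ X - Y @ E
        X = X + mu * P @ (A.T @ R1 + D.T @ R2)
        Y = Y + mu * (R1 @ B.T + R2 @ E.T) @ Q
    num = np.sqrt(np.linalg.norm(X - X_true)**2 + np.linalg.norm(Y - Y_true)**2)
    den = np.sqrt(np.linalg.norm(X_true)**2 + np.linalg.norm(Y_true)**2)
    return num / den  # relative error delta

print(run(1.0) < run(0.6))  # True: mu = 1 converges faster
```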

Conclusions
This paper proved the convergence of the least squares based iterative algorithm for the coupled Sylvester matrix equations AX + YB = C and DX + YE = F, and the proof determined the range of the convergence factor and the optimal convergence factor. The suggested algorithm can also be used to solve the generalized Sylvester equation AXB + CXD = F. An example indicated that the iterative solution given by the least squares based iterative algorithm converges fast to the exact solution under proper conditions.

Example 1. Consider the coupled Sylvester matrix equations in the form of (4) with