An Iterative Algorithm for the Least Squares Generalized Reflexive Solutions of the Matrix Equations AXB = E, CXD = F

… to a class of complex matrix equations with conjugate and transpose of the unknowns. Jonsson and Kågström [24, 25] proposed recursive block algorithms for solving the coupled Sylvester matrix equations and the generalized Sylvester and Lyapunov matrix equations. Very recently, Huang et al. [26] presented a finite iterative algorithm for the one-sided and generalized coupled Sylvester matrix equations over generalized reflexive solutions. Yin et al. [27] presented a finite iterative algorithm for the two-sided and generalized coupled Sylvester matrix equations over reflexive solutions. For more studies on the matrix equations, we refer to [1–4, 16, 17, 28–40]. However, the problem of finding the least squares generalized reflexive solution of the matrix equation pair has not been solved.

The generalized coupled Sylvester systems play a fundamental role in wide applications in several areas, such as stability theory, control theory, perturbation analysis, and some other fields of pure and applied mathematics. The iterative method is an important way to solve generalized coupled Sylvester systems. In this paper, an iterative algorithm is constructed to solve the minimum Frobenius norm residual problem ‖(AXB − E, CXD − F)‖ = min over generalized reflexive matrices X. For any initial generalized reflexive matrix X_1, the generalized reflexive solution X* can be obtained by the iterative algorithm within finitely many iteration steps in the absence of round-off errors, and the unique least-norm generalized reflexive solution X* can also be derived when an appropriate initial iterative matrix is chosen. Furthermore, the unique optimal approximation X̂ to a given matrix X_0 in the Frobenius norm can be derived by finding the least-norm generalized reflexive solution X̃* of a new corresponding minimum Frobenius norm residual problem ‖(A X̃ B − Ẽ, C X̃ D − F̃)‖ = min, where Ẽ = E − A X_0 B and F̃ = F − C X_0 D. Several numerical examples are given to illustrate that our iterative algorithm is effective.

1. Introduction
A matrix P ∈ R^{n×n} is said to be a generalized reflection matrix if P satisfies P^T = P and P^2 = I. Let P ∈ R^{m×m} and Q ∈ R^{n×n} be two generalized reflection matrices. A matrix A ∈ R^{m×n} is called generalized reflexive (or generalized anti-reflexive) with respect to the matrix pair (P, Q) if PAQ = A (or PAQ = −A). The set of all m-by-n generalized reflexive matrices with respect to the matrix pair (P, Q) is denoted by R^{m×n}_r(P, Q). The generalized reflexive and anti-reflexive matrices have many special properties and are useful in engineering and scientific computations [1–3].
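As a concrete illustration (not from the paper, which works in MATLAB), the following Python/NumPy sketch builds simple generalized reflection matrices and checks the defining properties. The particular diagonal ±1 choices of P and Q are illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# A generalized reflection matrix is symmetric and involutory: P^T = P, P^2 = I.
# A diagonal +/-1 "signature" matrix is the simplest example of such a matrix.
P = np.diag([1.0, -1.0, 1.0])          # P in R^{3x3}
Q = np.diag([1.0, 1.0, -1.0, -1.0])    # Q in R^{4x4}
assert np.allclose(P.T, P) and np.allclose(P @ P, np.eye(3))
assert np.allclose(Q.T, Q) and np.allclose(Q @ Q, np.eye(4))

# Any Z splits into a generalized reflexive part (P A Q = A)
# and a generalized anti-reflexive part (P A Q = -A):
Z = rng.standard_normal((3, 4))
A_refl = 0.5 * (Z + P @ Z @ Q)   # reflexive component
A_anti = 0.5 * (Z - P @ Z @ Q)   # anti-reflexive component
assert np.allclose(P @ A_refl @ Q, A_refl)
assert np.allclose(P @ A_anti @ Q, -A_anti)
assert np.allclose(A_refl + A_anti, Z)
```

Because P and Q are orthogonal, the two components are orthogonal in the trace inner product, so R^{m×n} is the orthogonal direct sum of the reflexive and anti-reflexive subspaces; the orthogonal projection onto R^{m×n}_r(P, Q) is simply X ↦ (X + PXQ)/2.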
In this paper, we will consider the minimum Frobenius norm residual problem and its optimal approximation problem as follows.
Problem 1. For given matrices A ∈ R^{p×m}, B ∈ R^{n×q}, C ∈ R^{s×m}, D ∈ R^{n×t}, E ∈ R^{p×q}, F ∈ R^{s×t}, find a matrix X ∈ R^{m×n}_r(P, Q) such that ‖(AXB − E, CXD − F)‖ = min.

Problem 2. Let S_E denote the set of the generalized reflexive solutions of Problem 1. For a given matrix X_0 ∈ R^{m×n}_r(P, Q), find X̂ ∈ S_E such that ‖X̂ − X_0‖ = min_{X ∈ S_E} ‖X − X_0‖.

Problem 1 plays a fundamental role in wide applications in several areas, such as pole assignment, measurement feedback, and matrix programming problems. Liao and Lei [4] presented some examples to show a motivation for studying Problem 1. Problem 2 arises frequently in experimental design. Here the matrix X_0 may be a matrix obtained from experiments, but it may not satisfy the structural requirement (generalized reflexivity) and/or the spectral requirement (being a solution of Problem 1). The best estimate X̂ is the matrix that satisfies both requirements and is the best approximation of X_0 in the Frobenius norm.
Least-squares-based iterative algorithms are very important in system identification, parameter estimation, and signal processing, including the recursive least squares (RLS) and iterative least squares (ILS) methods for solving some matrix equations, for example, the Lyapunov matrix equation, Sylvester matrix equations, and coupled matrix equations as well. Some related contributions to solving matrix equations and parameter identification/estimation should be mentioned here. For example, the novel gradient-based iterative (GI) methods [5–9] and least-squares-based iterative methods [5, 9, 10], with high computational efficiency for solving coupled matrix equations, have been presented and have good stability performance; they are based on the hierarchical identification principle [11–13], which regards the unknown matrix as the system parameter matrix to be identified.
The following notations are also used in this paper. Let R^{m×n} denote the set of all m × n real matrices. We denote by the superscript T the transpose of a matrix. In the matrix space R^{m×n}, define the inner product as 〈A, B〉 = trace(B^T A) for all A, B ∈ R^{m×n}, and let ‖A‖ represent the Frobenius norm of A. R(A) represents the column space of A. vec(·) represents the vec operator; that is, vec(A) = (a_1^T, a_2^T, . . . , a_n^T)^T ∈ R^{mn} for the matrix A = (a_1, a_2, . . . , a_n) ∈ R^{m×n}, a_i ∈ R^m, i = 1, 2, . . . , n. A ⊗ B stands for the Kronecker product of the matrices A and B.
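The vec operator and the Kronecker product interact through the identity vec(AXB) = (B^T ⊗ A) vec(X), which is what turns the matrix least squares problem into an ordinary vector one. A quick NumPy check (not from the paper; note that NumPy reshapes row-major by default, so the column-stacking vec needs order="F"):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
X = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

def vec(M):
    # Column-stacking vec operator: stacks the columns a_1, ..., a_n.
    return M.reshape(-1, order="F")

# vec(A X B) = (B^T kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)

# The inner product <A, B> = trace(B^T A) is the ordinary dot product of vecs.
Y = rng.standard_normal((5, 2))
assert np.isclose(np.trace(Y.T @ (A @ X @ B)), vec(Y) @ lhs)
```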
This paper is organized as follows. In Section 2, we solve Problem 1 by constructing an iterative algorithm; that is, for an arbitrary initial matrix X_1 ∈ R^{m×n}_r(P, Q), we can obtain a solution X* ∈ R^{m×n}_r(P, Q) of Problem 1 within finitely many iteration steps in the absence of round-off errors. The convergence of the algorithm is also proved. Let

X_1 = A^T H B^T + C^T Ĥ D^T + P A^T H B^T Q + P C^T Ĥ D^T Q,

where H ∈ R^{p×q}, Ĥ ∈ R^{s×t} are arbitrary matrices, or, more especially, let X_1 = 0 ∈ R^{m×n}_r(P, Q); then we can obtain the unique least-norm solution X* of Problem 1. In Section 3, we give the optimal approximate solution of Problem 2 by finding the least-norm generalized reflexive solution of a corresponding new minimum Frobenius norm residual problem. In Section 4, several numerical examples are given to illustrate the application of our iterative algorithm.

2. Solution of Problem 1
In this section, we first introduce some definitions, lemmas, and theorems that are required for solving Problem 1. Then we present an iterative algorithm to obtain the solution of Problem 1 and prove that it is convergent. The following definitions and lemmas come from [41] and are needed for our derivation.
Definition 2.1. A set of matrices S ⊆ R^{m×n} is said to be convex if, for X_1, X_2 ∈ S and α ∈ [0, 1], αX_1 + (1 − α)X_2 ∈ S. Let R_c denote a convex subset of R^{m×n}.

Definition 2.2. A matrix function f : R_c → R is said to be convex if

f(αX_1 + (1 − α)X_2) ≤ αf(X_1) + (1 − α)f(X_2)   (2.1)

for X_1, X_2 ∈ R_c and α ∈ [0, 1].

Definition 2.3. Let f : R_c → R be a continuous and differentiable function. The gradient of f is defined as ∇f(X) = (∂f(X)/∂x_{ij}).

Lemma 2.4. Let f : R_c → R be a continuous and differentiable function. Then f is convex on R_c if and only if

f(Y) ≥ f(X) + 〈∇f(X), Y − X〉   (2.2)

for all X, Y ∈ R_c.
Lemma 2.5. Let f : R_c → R be a continuous and differentiable function. If there exists X* in the interior of R_c such that f(X*) = min_{X ∈ R_c} f(X), then ∇f(X*) = 0.
Note that the set R^{m×n}_r(P, Q) is unbounded, open, and convex. Denote F(X) = ‖AXB − E‖² + ‖CXD − F‖²; then F(X) is a continuous, differentiable, and convex function on R^{m×n}_r(P, Q). Hence, by applying Lemmas 2.4 and 2.5, we obtain the following lemma.
From the Taylor series expansion, we have

F(X + ΔX) = F(X) + 〈∇F(X), ΔX〉 + O(‖ΔX‖²).   (2.4)

On the other hand, by the basic properties of the Frobenius norm and the matrix inner product, we get the expression for F(X + ΔX) directly. Note that

F(X + ΔX) = ‖A(X + ΔX)B − E‖² + ‖C(X + ΔX)D − F‖²
          = F(X) + 2〈A^T(AXB − E)B^T + C^T(CXD − F)D^T, ΔX〉 + ‖A ΔX B‖² + ‖C ΔX D‖².   (2.6)

Thus, we have

F(X + ΔX) = F(X) + 2〈A^T(AXB − E)B^T + C^T(CXD − F)D^T, ΔX〉 + O(‖ΔX‖²).   (2.7)

By comparing (2.4) with (2.7), we have

∇F(X) = 2A^T(AXB − E)B^T + 2C^T(CXD − F)D^T.   (2.8)

According to Lemma 2.6 and (2.8), we obtain the following theorem. For the convenience of discussion, we adopt the following notations. The following algorithm is constructed to solve Problems 1 and 2.
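The gradient of the objective F(X) = ‖AXB − E‖² + ‖CXD − F‖² works out, by standard matrix calculus, to ∇F(X) = 2A^T(AXB − E)B^T + 2C^T(CXD − F)D^T. The following sketch (Python/NumPy, not from the paper, which uses MATLAB; all data here are random placeholders) sanity-checks that formula against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(2)
p, m, n, q, s, t = 4, 3, 3, 4, 2, 5
A = rng.standard_normal((p, m)); B = rng.standard_normal((n, q))
C = rng.standard_normal((s, m)); D = rng.standard_normal((n, t))
E = rng.standard_normal((p, q)); F = rng.standard_normal((s, t))

def obj(X):
    # F(X) = ||AXB - E||_F^2 + ||CXD - F||_F^2
    return np.sum((A @ X @ B - E) ** 2) + np.sum((C @ X @ D - F) ** 2)

def grad(X):
    # grad F(X) = 2 A^T (AXB - E) B^T + 2 C^T (CXD - F) D^T
    return 2 * A.T @ (A @ X @ B - E) @ B.T + 2 * C.T @ (C @ X @ D - F) @ D.T

X = rng.standard_normal((m, n))
G = grad(X)

# Entry-wise central finite differences of the objective.
h = 1e-6
G_fd = np.zeros_like(X)
for i in range(m):
    for j in range(n):
        Xp = X.copy(); Xp[i, j] += h
        Xm = X.copy(); Xm[i, j] -= h
        G_fd[i, j] = (obj(Xp) - obj(Xm)) / (2 * h)

assert np.allclose(G, G_fd, atol=1e-4)
```

Because F is quadratic, the central difference is exact up to rounding, so the agreement is tight.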
Algorithm 2.8.
Step 1. Input matrices A ∈ R^{p×m}, B ∈ R^{n×q}, C ∈ R^{s×m}, D ∈ R^{n×t}, E ∈ R^{p×q}, F ∈ R^{s×t}, and two generalized reflection matrices P ∈ R^{m×m} and Q ∈ R^{n×n}.
Step 2. Choose an arbitrary matrix X_1 ∈ R^{m×n}_r(P, Q). Compute
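Since the full statement of Algorithm 2.8 is not reproduced here, the following Python/NumPy sketch shows one way to realize the same idea: a conjugate gradient iteration on the projected normal equations of Problem 1, restricted to R^{m×n}_r(P, Q) by the projector X ↦ (X + PXQ)/2. This is a plain CG reconstruction under that interpretation, not a verbatim transcription of the paper's algorithm.

```python
import numpy as np

def proj(M, P, Q):
    # Orthogonal projection onto the generalized reflexive matrices R^{mxn}_r(P,Q).
    return 0.5 * (M + P @ M @ Q)

def lsqr_reflexive(A, B, C, D, E, F, P, Q, tol=1e-10, maxit=1000):
    """CG on the projected normal equations of
       min ||AXB - E||_F^2 + ||CXD - F||_F^2 over X with PXQ = X."""
    m, n = A.shape[1], B.shape[0]

    def T(X):  # projected normal-equations operator (self-adjoint, PSD on the subspace)
        return proj(A.T @ (A @ X @ B) @ B.T + C.T @ (C @ X @ D) @ D.T, P, Q)

    rhs = proj(A.T @ E @ B.T + C.T @ F @ D.T, P, Q)
    X = np.zeros((m, n))          # starting from X_1 = 0 targets the least-norm solution
    R = rhs - T(X)                # residual = -1/2 projected gradient of F
    Pdir = R.copy()
    rho = np.sum(R * R)
    for _ in range(maxit):
        if np.sqrt(rho) < tol:    # stop once the projected gradient is negligible
            break
        TP = T(Pdir)
        alpha = rho / np.sum(Pdir * TP)
        X = X + alpha * Pdir
        R = R - alpha * TP
        rho_new = np.sum(R * R)
        Pdir = R + (rho_new / rho) * Pdir
        rho = rho_new
    return X

# Consistent test data: E and F are generated from a known reflexive Xtrue.
P = np.diag([1.0, -1.0, 1.0]); Q = np.diag([1.0, 1.0, -1.0, -1.0])
rng = np.random.default_rng(3)
Xtrue = proj(rng.standard_normal((3, 4)), P, Q)
A = rng.standard_normal((5, 3)); B = rng.standard_normal((4, 6))
C = rng.standard_normal((2, 3)); D = rng.standard_normal((4, 3))
E = A @ Xtrue @ B; F = C @ Xtrue @ D
X = lsqr_reflexive(A, B, C, D, E, F, P, Q)
```

On consistent data as above, the iterate stays generalized reflexive and drives both residuals to zero; in exact arithmetic CG terminates in at most dim R^{m×n}_r(P, Q) steps, mirroring the finite-termination property claimed for Algorithm 2.8.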
This completes the proof.
Proof. Since 〈A, B〉 = 〈B, A〉 holds for all matrices A and B in R^{m×n}, we only need to prove that 〈P_i, P_j〉 = 0 and 〈Q_i, M(Q_j)〉 = 0 for all 1 ≤ i < j ≤ k. We prove the conclusion by induction, and two steps are required.
Step 1. We will show that … To prove this conclusion, we also use induction. For i = 1, by Algorithm 2.8 and Lemma 2.10, we have … Thus

(2.22)
This completes the proof. It can be seen that the set {P_1, P_2, . . . , P_{mn}} is an orthogonal basis of the matrix space R^{m×n}_r(P, Q), which implies that P_{mn+1} = 0, that is, X_{mn+1} is a solution of Problem 1. This completes the proof.
To show the least-norm generalized reflexive solution of Problem 1, we first introduce the following result. By Lemma 2.15, the following result can be obtained.

Theorem 2.16. If one chooses the initial iterative matrix

X_1 = A^T H B^T + C^T Ĥ D^T + P A^T H B^T Q + P C^T Ĥ D^T Q,

where H ∈ R^{p×q}, Ĥ ∈ R^{s×t} are arbitrary matrices, and especially if one lets X_1 = 0 ∈ R^{m×n}_r(P, Q), then one can obtain the unique least-norm generalized reflexive solution of Problem 1 within finitely many iteration steps in the absence of round-off errors by using Algorithm 2.8.

Proof. By Algorithm 2.8 and Theorem 2.14, if we let

X_1 = A^T H B^T + C^T Ĥ D^T + P A^T H B^T Q + P C^T Ĥ D^T Q,

where H ∈ R^{p×q}, Ĥ ∈ R^{s×t} are arbitrary matrices, then we can obtain the solution X* of Problem 1 within finitely many iteration steps in the absence of round-off errors, and the solution X* can be represented as

X* = A^T G B^T + C^T Ĝ D^T + P A^T G B^T Q + P C^T Ĝ D^T Q.   (2.24)
In the sequel, we will prove that X* is just the least-norm solution of Problem 1. Consider the following minimum residual problem:

min_{X ∈ R^{m×n}} …   (2.25)

Obviously, the solvability of Problem 1 is equivalent to that of the minimum residual problem (2.25), and the least-norm solution of Problem 1 must be the least-norm solution of the minimum residual problem (2.25).
In order to prove that X* is the least-norm solution of Problem 1, it is enough to prove that X* is the least-norm solution of the minimum residual problem (2.25). Denote vec(X) = x, vec(X*) = x*, vec(G) = g_1, vec(Ĝ) = g_2, vec(E) = e, vec(F) = f; then the minimum residual problem (2.25) is equivalent to the following minimum residual problem:

(2.26)
Noting that

(2.27)
by Lemma 2.15 we can see that x* is the least-norm solution of the minimum residual problem (2.26). Since the vec operator is an isomorphism, X* is the unique least-norm solution of the minimum residual problem (2.25); furthermore, X* is the unique least-norm solution of Problem 1.

3. Solution of Problem 2
Since the solution set of Problem 1 is nonempty, when X̂ ∈ S_E, then min_{X ∈ R^{m×n}} … By using Algorithm 2.8 with the initial iterative matrix X_1 = A^T H B^T + C^T Ĥ D^T + P A^T H B^T Q + P C^T Ĥ D^T Q, or, more especially, with X_1 = 0 ∈ R^{m×n}_r(P, Q), we can obtain the unique least-norm generalized reflexive solution X* of the minimum residual problem (3.2); then we can obtain the generalized reflexive solution X̂ of Problem 2, and X̂ can be represented as X̂ = X* + X_0.
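A way to realize this step in code, under the assumption (standard for this substitution X = X_0 + X̃, and matching the shifted problem in the abstract) that the new data are Ẽ = E − A X_0 B and F̃ = F − C X_0 D: the sketch below (Python/NumPy, not the paper's Algorithm 2.8) obtains the least-norm reflexive minimizer of the shifted problem directly through a dense pseudoinverse of the Kronecker-lifted operator, so it is only practical for small dimensions.

```python
import numpy as np

def vec(M):
    return M.reshape(-1, order="F")

def problem2(A, B, C, D, E, F, P, Q, X0):
    """Nearest (in Frobenius norm) least squares generalized reflexive
    solution to X0: solve the shifted problem for Xtilde, then add X0 back."""
    m, n = A.shape[1], B.shape[0]
    Et = E - A @ X0 @ B                      # shifted right-hand sides
    Ft = F - C @ X0 @ D
    # Kronecker lift: vec(AXB) = (B^T kron A) vec(X)
    M = np.vstack([np.kron(B.T, A), np.kron(D.T, C)])
    b = np.concatenate([vec(Et), vec(Ft)])
    # Orthogonal projector onto the reflexive subspace in vec coordinates
    Pi = 0.5 * (np.eye(m * n) + np.kron(Q.T, P))
    # Least-norm least squares solution restricted to the subspace
    y = np.linalg.pinv(M @ Pi) @ b
    Xtilde = (Pi @ y).reshape((m, n), order="F")
    return X0 + Xtilde

# Small underdetermined demo so that the solution set is nontrivial.
P = np.diag([1.0, -1.0]); Q = np.diag([1.0, -1.0, 1.0])
rng = np.random.default_rng(4)
reflect = lambda Z: 0.5 * (Z + P @ Z @ Q)
Xtrue = reflect(rng.standard_normal((2, 3)))
X0 = reflect(rng.standard_normal((2, 3)))
A = rng.standard_normal((1, 2)); B = rng.standard_normal((3, 1))
C = rng.standard_normal((1, 2)); D = rng.standard_normal((3, 1))
E = A @ Xtrue @ B; F = C @ Xtrue @ D
Xhat = problem2(A, B, C, D, E, F, P, Q, X0)
```

Xhat stays generalized reflexive, attains the minimum residual (zero here, since the data are consistent), and is at least as close to X0 as any other solution, for example Xtrue.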

4. Numerical Examples
In this section, we will show several numerical examples to illustrate our results. All the tests are performed by MATLAB 7.8.

(4.2)
We will find the least squares generalized reflexive solution of the matrix equation pair AXB = E, CXD = F by using Algorithm 2.8. Because of the influence of round-off errors, P_k = −∇F(X_k) is usually unequal to zero in the process of the iteration, where k = 1, 2, . . .. For any chosen positive number ε, however small, for example, ε = 1.0000e−8, whenever ‖P_k‖ < ε, we stop the iteration, and X_k is regarded as the least squares generalized reflexive solution of the matrix equation pair AXB = E, CXD = F. Let

(4.4)
The convergence curve for the Frobenius norm of the residual is shown in Figure 1.

(4.9)
The convergence curve for the Frobenius norm of the residual is shown in Figure 3.

5. Conclusion
This paper solves the minimum Frobenius norm residual problem and its optimal approximation problem over generalized reflexive matrices by constructing an iterative algorithm; that is, for an arbitrary initial matrix X_1 ∈ R^{m×n}_r(P, Q), we obtain a solution X* ∈ R^{m×n}_r(P, Q) of Problem 1 within finitely many iteration steps in the absence of round-off errors. The convergence of the algorithm is also proved. Let X_1 = A^T H B^T + C^T Ĥ D^T + P A^T H B^T Q + P C^T Ĥ D^T Q, where H ∈ R^{p×q}, Ĥ ∈ R^{s×t} are arbitrary matrices, or, more especially, let X_1 = 0 ∈ R^{m×n}_r(P, Q); then we obtain the unique least-norm solution X* of the minimum Frobenius norm residual problem. Finally, we give the generalized reflexive solution of the optimal approximation problem by finding the least-norm generalized reflexive solution of a corresponding new minimum Frobenius norm residual problem.
Several numerical examples are given to confirm our theoretical results, and they show that our iterative algorithm is effective. We also note that, for the minimum Frobenius norm residual problem with large but not sparse matrices A, B, C, D, E, and F, Algorithm 2.8 may require more than mn steps to terminate because of round-off errors.