An Iterative Method for the Least-Squares Problems of a General Matrix Equation Subject to Submatrix Constraints

(i = 1, 2, . . . , t) are to-be-determined centro-symmetric matrices with given central principal submatrices. For any initial iteration matrices, we show that the least-squares solution can be derived by this method within finitely many iteration steps in the absence of roundoff errors. Meanwhile, the unique optimal approximation solution pair for given matrices Z̄_i can also be obtained from the least-norm least-squares solution of the matrix equation ∑_{i=1}^t M_i Ẑ_i N_i = F̂, in which Ẑ_i = Z_i − Z̄_i and F̂ = F − ∑_{i=1}^t M_i Z̄_i N_i.


Introduction
Throughout this paper, we denote the set of all m × n real matrices by R^{m×n}. The symbol A^T represents the transpose of the matrix A. S_n and I_n stand for the n × n reverse unit matrix and the identity matrix, respectively. For A, B ∈ R^{m×n}, the inner product of matrices A and B is defined by ⟨A, B⟩ = trace(B^T A), which induces the Frobenius norm, that is, ‖A‖ = √⟨A, A⟩.
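These definitions can be checked numerically. The following minimal sketch (the function name `frob_inner` is ours) verifies that the trace inner product does induce the Frobenius norm:

```python
import numpy as np

def frob_inner(A, B):
    """Inner product <A, B> = trace(B^T A)."""
    return np.trace(B.T @ A)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

# The induced norm sqrt(<A, A>) agrees with NumPy's built-in Frobenius norm.
assert np.isclose(np.sqrt(frob_inner(A, A)), np.linalg.norm(A, "fro"))
print(frob_inner(A, B))  # trace(B^T A) = 3 + 2 = 5.0
```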
We first introduce the concept of the central principal submatrix, which was originally put forward by Yin [17]. Evidently, a matrix of odd (even) order only has central principal submatrices of odd (even) order. Now, the first problem to be studied here can be stated as follows.
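The two structural notions used throughout can be sketched as follows. A matrix A is centro-symmetric when S A S = A with S the reverse unit matrix, and the order-k central principal submatrix is the centered k × k block, which exists only when k has the same parity as n (the helper names below are ours):

```python
import numpy as np

def is_centrosymmetric(A, tol=1e-12):
    """A is centro-symmetric iff S A S = A, where S is the reverse unit matrix."""
    S = np.fliplr(np.eye(A.shape[0]))
    return np.allclose(S @ A @ S, A, atol=tol)

def central_principal_submatrix(A, k):
    """Order-k central principal submatrix of an n x n matrix; k and n must have equal parity."""
    n = A.shape[0]
    if (n - k) % 2 != 0:
        raise ValueError("k must have the same parity as n")
    p = (n - k) // 2
    return A[p:p + k, p:p + k]

# A 5 x 5 centro-symmetric example: entries satisfy a[i, j] = a[n-1-i, n-1-j].
A = np.arange(25.0).reshape(5, 5)
A = (A + np.rot90(A, 2)) / 2          # rot90(A, 2) equals S A S; averaging enforces the symmetry
assert is_centrosymmetric(A)
assert central_principal_submatrix(A, 3).shape == (3, 3)
```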
Although these iterative algorithms are efficient, there still exist some handicaps when meeting the constrained matrix equation problem (i.e., finding the solution of a matrix equation in a set of matrices with specific structure, for instance, the sets of symmetric, centro-symmetric, or bisymmetric matrices) and the submatrix-constrained problem, since these methods cannot preserve the special structure of the unknown matrix during the iteration. Based on the classical conjugate gradient (CG) method, Peng et al. [27] gave an iterative method to find the bisymmetric solution of matrix equation (2). A similar method was constructed to solve matrix equation (4) with a generalized bisymmetric unknown in [28]. In particular, Li et al. [29] proposed an elegant algorithm for solving a generalized Sylvester (Lyapunov) matrix equation with a bisymmetric unknown and a symmetric unknown, where the two unknown matrices contain a given central principal submatrix and a given leading principal submatrix, respectively. This method avoids the difficulties of numerical instability and computational complexity and solves the problem completely. Borrowing the ideas of this iterative algorithm, we will solve Problem 2 by an iterative method.
The second problem to be considered is the optimal approximation problem. This problem occurs frequently in experimental design (see, for instance, [30]). Here, a preliminary estimate Z̄ of the unknown matrix Z can be obtained from experiments, but it may not satisfy the structural requirement and/or spectral requirement. The best estimate of Z is the matrix Ẑ that satisfies both requirements, which is the optimal approximation of Z (see, e.g., [31, 32]). For this problem, we also refer the reader to [9-11, 13, 15, 16, 20-23, 27-29, 33-36] and the references therein.
The rest of this paper is outlined as follows. In Section 2, an iterative algorithm will be proposed to solve Problem 2, and its properties will be investigated. In Section 3, we will consider the optimal approximation Problem 3 by using the iterative algorithm. In Section 4, some numerical examples will be given to verify the efficiency of this algorithm.
It is clear that the group in CS_⋆ is the least-squares solution of the matrix equation.

Proof. Noting the definition of CS_⋆, the conclusion follows at once. The proof is completed.
Remark 6. It follows from Theorem 5 that Problem 2 can be solved completely by finding the least-squares solution of matrix equation (11) in the subspaces CS_⋆. In the next part of this section, we will establish an iterative algorithm for (11) and analyze its properties. For convenience of expression, we define a matrix function so that matrix equation (11) can be written in a simplified form. Moreover, we can easily verify that this identity holds for arbitrary Y ∈ R^{n×n}.
The iterative algorithm for the least-squares problem of matrix equation (11) can be expressed as follows.
Algorithm 7. Consider the following.
From Algorithm 7, we can see that (Y_1^{(k)}, Y_2^{(k)}, . . ., Y_t^{(k)}) is a least-squares solution group of matrix equation (11) if R_i^{(k)} = 0 for all i ∈ Γ. The following lemma explains the reason.
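Since the detailed updates of Algorithm 7 are not reproduced here, the following is only a CGLS-type sketch of such an iteration for the single-unknown case t = 1, minimizing ‖M Z N − F‖_F over centro-symmetric Z. The function names are ours, and the update formulas follow the standard conjugate gradient least-squares scheme (gradient M^T R N^T projected onto the centro-symmetric subspace), not necessarily the paper's exact Algorithm 7:

```python
import numpy as np

def proj_cs(A):
    """Orthogonal projection onto the centro-symmetric subspace: (A + S A S) / 2."""
    return (A + np.rot90(A, 2)) / 2

def cg_centrosymmetric_lsq(M, N, F, iters=200, eps=1e-12):
    """CGLS-type iteration for min ||M Z N - F||_F over centro-symmetric Z."""
    n = M.shape[1]
    Z = np.zeros((n, n))              # zero start: iterates stay in range of the adjoint
    R = F - M @ Z @ N                 # residual of the matrix equation
    G = proj_cs(M.T @ R @ N.T)        # projected gradient direction
    P = G.copy()
    gnorm2 = np.sum(G * G)
    for _ in range(iters):
        if gnorm2 < eps:              # analogue of the stopping test R^{(k)} = 0
            break
        Q = M @ P @ N
        alpha = gnorm2 / np.sum(Q * Q)
        Z += alpha * P
        R -= alpha * Q
        G = proj_cs(M.T @ R @ N.T)
        gnorm2_new = np.sum(G * G)
        P = G + (gnorm2_new / gnorm2) * P
        gnorm2 = gnorm2_new
    return Z
```

Because the projection is orthogonal, the iterates remain centro-symmetric, and at a stationary point the projected gradient vanishes, which is the least-squares condition within the subspace.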
In addition, we should point out that if α_k = 0 or α_k = ∞, the conclusions may not be true, and the iteration will break down before (R_1^{(k)}, R_2^{(k)}, . . ., R_t^{(k)}) = 0. Taking the inner product on both sides, it follows from Algorithm 7 that the corresponding term vanishes, which implies the same situation as α_k = 0. Hence, if there exists a positive integer k such that the coefficient α_k = 0 or α_k = ∞, then the corresponding matrix group (Y_1^{(k)}, Y_2^{(k)}, . . ., Y_t^{(k)}) is just a solution of matrix equation (11).
Together with the above analysis and Lemma 9, we can conclude the following theorem.

Theorem 11. For any initial iteration matrices Y_i^{(0)} (i = 1, 2, . . ., t), a least-squares solution group of matrix equation (11) can be derived by Algorithm 7 within finitely many iteration steps in the absence of roundoff errors, with Z_i^⬦ as in Theorem 5.
In order to show the validity of Theorem 11, it suffices to prove the following conclusion (33).
Proof. According to the assumptions, we obtain the stated minimum. On the other hand, noting that F(Y_1, Y_2, . . ., Y_t) = 0, the conclusion follows. The proof is completed.
Next, we will show that the unique least-norm solution of matrix equation (11) can be derived by choosing a special kind of initial iteration matrices.

Theorem 13. Let the initial iteration matrices be Y_i^{(0)} = L_i(M_i^T H_i N_i^T) with arbitrary H_i, i = 1, 2, . . ., t; then (Y_1^*, Y_2^*, . . ., Y_t^*) generated by Algorithm 7 is the least-norm least-squares solution group of matrix equation (11). Furthermore, the least-norm solution group to Problem 2 can be expressed by (36).

Proof. From Algorithm 7 and Theorem 11, for the initial iteration matrices Y_i^{(0)} = L_i(M_i^T H_i N_i^T), we can obtain a least-squares solution group Y_i^* of matrix equation (11), and there exists a matrix H^* such that Y_i^* = L_i(M_i^T H^* N_i^T). Hence, it is enough to prove that Y_i^* is the least-norm solution. In fact, noting (33) and Proposition 12, we obtain the required minimality, as required.
Theorems 11 and 13 display the efficiency of Algorithm 7. In fact, the iteration sequence {Y_i^{(k)}} converges smoothly to the solution Y_i; this is the minimization property of Algorithm 7.

The Solution of Problem 3
In this section, we discuss the optimal approximation Problem 3. Since the least-squares problem is always consistent, it is easy to verify that the solution set of Problem 2 is a nonempty convex cone, so the optimal approximation solution is unique. Without loss of generality, we can assume that the given matrices satisfy Z̄ ∈ CS_⋆^{n×n}. In fact, from Lemma 4, an arbitrary Z̄ ∈ R^{n×n} can be decomposed into
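The decomposition invoked here can be sketched numerically: any Z̄ splits into its centro-symmetric part (Z̄ + S Z̄ S)/2 plus an orthogonal remainder, so the two pieces obey a Pythagorean norm identity. A minimal sketch (the helper name `centro_split` is ours):

```python
import numpy as np

def centro_split(Z):
    """Split Z into its centro-symmetric part and the orthogonal remainder."""
    Zc = (Z + np.rot90(Z, 2)) / 2     # rot90(Z, 2) equals S Z S
    return Zc, Z - Zc

rng = np.random.default_rng(1)
Z = rng.standard_normal((4, 4))
Zc, Zr = centro_split(Z)

# The two parts are orthogonal under the trace inner product, so
# ||Z||^2 = ||Zc||^2 + ||Zr||^2 (Pythagoras in the Frobenius norm).
assert np.isclose(np.trace(Zr.T @ Zc), 0.0, atol=1e-12)
assert np.isclose(np.linalg.norm(Z) ** 2,
                  np.linalg.norm(Zc) ** 2 + np.linalg.norm(Zr) ** 2)
```

This orthogonality is what justifies assuming Z̄ centro-symmetric without loss of generality: the remainder contributes a constant to the approximation error.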

Numerical Example
In this section, we illustrate the efficiency and reliability of Algorithm 7 by some numerical experiments. All the numerical experiments are performed using Matlab 6.5. In addition, because of the influence of roundoff errors, R_i^{(k)} may not equal zero within finitely many iteration steps, so the iteration is terminated once ∑_{i=1}^t ‖R_i^{(k)}‖² < ε, for example, ε = 1.0e−008. At this point, (Y_1^{(k)}, Y_2^{(k)}, . . ., Y_t^{(k)}) can be regarded as a solution of matrix equation (11), and Y_i^{(k)} + Z_i^⬦ (i = 1, 2, . . ., t) constitute the solution group to Problem 2. In particular, if the initial iteration matrices are Y_i^{(0)} = 0, then we obtain the least-norm solution by (36).
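The termination test described above is straightforward to implement. A minimal sketch (the function name `should_stop` is ours) for the criterion ∑_{i=1}^t ‖R_i^{(k)}‖² < ε with the paper's choice ε = 1.0e−8:

```python
import numpy as np

def should_stop(residuals, eps=1e-8):
    """Terminate when the sum of squared Frobenius norms of all residuals is below eps."""
    return sum(np.linalg.norm(R, "fro") ** 2 for R in residuals) < eps

# Residuals near zero trigger termination; an O(1) residual does not.
assert should_stop([np.zeros((3, 3)), np.full((3, 3), 1e-6)])
assert not should_stop([np.ones((3, 3))])
```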
From Figure 2, we can see that the residual norm of Algorithm 7 is monotonically decreasing, which is in accordance with the theory established in Theorem 14; namely, this algorithm is numerically stable. Meanwhile, Figure 3 shows that the termination quantity ‖R_1^{(k)}‖ + ‖R_2^{(k)}‖ + ‖R_3^{(k)}‖ oscillates back and forth and approaches zero as the iteration proceeds. Hence, iterative Algorithm 7 is efficient, but it lacks smooth convergence. Of course, for a problem with large and sparse matrices, Algorithm 7 may not terminate in a finite number of steps because of roundoff errors. How to establish an efficient and smooth algorithm is an important problem which we should study in future work.

Figure 3: The convergence curve of the Frobenius norm of the termination criterion.