Iterative Solutions of a Set of Matrix Equations by Using the Hierarchical Identification Principle



Introduction
For systems with known parameters, controllability and stability are important topics worth studying [1][2][3]. If the parameters of a system are uncertain, then identifying those parameters becomes an important problem. To identify the parameters of large-scale systems, the hierarchical identification principle was proposed in [4][5][6]. Hierarchical gradient-based and least-squares-based identification methods were presented for multivariable systems [7]. Applications of these effective strategies include identification and adaptive control for dual-rate systems [8] and Hammerstein nonlinear systems [9].
Many publications have studied the solutions of matrix equations from different points of view [10][11][12][13][14]. Zhou et al. proposed the positive definite solutions of the nonlinear matrix equation X + AᵀX⁻¹A = I [15]; Li et al. discussed a class of iterative methods for the generalized Sylvester equation [16]; the Riccati equation and a class of coupled transpose matrix equations are investigated in [17, 18]. Unlike the above methods, and in the spirit of the Jacobi and Gauss-Seidel iterations, Ding and Chen proposed the gradient-based and least-squares-based iterations for solving Ax = b and AXB = F [19, 20], together with a large family of related iterations. These iterations include the gradient iteration, the least squares iteration, and some classical iterations as special cases [21, 22]. By using the hierarchical identification principle, gradient-based iterative algorithms were derived for solving various real matrix equations, such as the generalized Sylvester matrix equations and general linear matrix equations [23][24][25].
Ding's strategy has received much attention. Using the real representations of complex matrices as tools, Wu et al. applied Ding's strategy to solve the extended Sylvester-conjugate matrix equations AXB + CX̄D = F [26], the complex conjugate and transpose matrix equations [27], and the extended coupled Sylvester-conjugate matrix equations [28]; Song and Chen presented a gradient-based iterative algorithm for solving the extended Sylvester-conjugate-transpose matrix equations [29].
Different from the various single real matrix equations in [21][22][23][24][25] and the complex matrix equations in [26][27][28][29], this paper discusses an iterative algorithm for real coupled matrix equations by using the hierarchical identification principle and proposes a gradient-based iterative algorithm. Moreover, the gradient-based iteration is extended to the general real coupled matrix equations. We prove that, if these real coupled matrix equations have a unique solution, the iterative solution converges to the exact one for any initial value.
This paper is organized as follows. Section 2 offers some notation and basic lemmas. Section 3 derives the gradient-based iterative algorithm for solving the coupled matrix equations A₁XB₁ + A₂XB₂ = F₁, C₁XD₁ + C₂XD₂ = F₂. The iteration for the general real coupled matrix equations is discussed in Section 4. Section 5 presents a numerical example to illustrate the effectiveness of the proposed algorithm. Finally, we offer some concluding remarks in Section 6.

Notations and Basic Lemmas
Some notation and lemmas are introduced first. The symbol Aᵀ represents the transpose of A. For a square matrix A, λ_max[A] denotes its maximum eigenvalue. ‖A‖ denotes the norm of A, defined by ‖A‖² = tr[AᵀA]. If A = [a_ij] ∈ R^(m×n) and B ∈ R^(p×q), then their Kronecker product is defined as A ⊗ B = [a_ij B] ∈ R^(mp×nq). For a matrix X, col[X] denotes the vector obtained by stacking the columns of X. The relationship between the vec-operator col and the Kronecker product can be expressed as the following lemma [59].

Lemma 1. For matrices A, X, and B of compatible dimensions, col[AXB] = (Bᵀ ⊗ A) col[X].
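Lemma 1 can be checked numerically. The following sketch uses arbitrary illustrative matrices (not from this paper) and the column-stacking convention for col:

```python
import numpy as np

# col[X]: stack the columns of X into one vector (column-major order)
def col(X):
    return X.flatten(order="F")

A = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, 1.0]])   # 3x2
X = np.array([[1.0, 4.0], [2.0, 5.0]])               # 2x2
B = np.array([[1.0, 0.0, 2.0], [1.0, 1.0, 0.0]])     # 2x3

# Lemma 1: col[A X B] = (B^T kron A) col[X]
lhs = col(A @ X @ B)
rhs = np.kron(B.T, A) @ col(X)
print(np.allclose(lhs, rhs))  # True
```

This identity is what lets the coupled matrix equations below be flattened into a single linear system.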
The gradient-based iterative algorithm for solving the matrix equation AXB = F is listed as follows [22].

Lemma 2. For AXB = F, if A is a full column-rank matrix and B is a full row-rank matrix, then the iterative solution X(k) given by the following gradient-based iterative algorithm converges to the exact solution X (i.e., lim_(k→∞) X(k) = X) for any initial value X(0) [22]:

X(k) = X(k−1) + μ Aᵀ [F − A X(k−1) B] Bᵀ,  0 < μ ⩽ 2/(λ_max[AAᵀ] λ_max[BᵀB]).    (5)
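A minimal sketch of the iteration in Lemma 2. The step size below is a conservative illustrative choice (half of the stated bound), and the test matrices are assumptions, not the paper's data:

```python
import numpy as np

def gradient_solve_axb(A, B, F, iters=2000):
    """Gradient iteration X(k) = X(k-1) + mu*A^T (F - A X B) B^T for AXB = F."""
    # Conservative convergence factor: 1 / (lambda_max[A^T A] * lambda_max[B B^T]),
    # well inside the bound in Lemma 2.
    mu = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    X = np.zeros((A.shape[1], B.shape[0]))  # start from X(0) = 0
    for _ in range(iters):
        X = X + mu * A.T @ (F - A @ X @ B) @ B.T
    return X

# A has full column rank, B has full row rank, so the solution is unique.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])
X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
X_hat = gradient_solve_axb(A, B, A @ X_true @ B)
print(np.max(np.abs(X_hat - X_true)))  # tiny: the iteration converges to X_true
```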

The Coupled Matrix Equations
In this section, we consider the following real coupled matrix equations:

A₁XB₁ + A₂XB₂ = F₁,  C₁XD₁ + C₂XD₂ = F₂,    (6)

where A₁, A₂, B₁, B₂, C₁, C₂, D₁, D₂, F₁, and F₂ are given matrices of compatible dimensions, and X is the unknown matrix to be determined.

The Exact Solution.
According to Lemma 1, we rewrite (6) as

S₁ col[X] = col[F₁, F₂],    (7)

where col[F₁, F₂] denotes the block vector obtained by stacking col[F₁] on top of col[F₂]. The exact solution of (6) can be given by the following theorem.
Theorem 3. Equation (6) has a unique solution if and only if S₁ is a full column-rank matrix; in this case, the unique solution is

col[X] = (S₁ᵀ S₁)⁻¹ S₁ᵀ col[F₁, F₂],    (8)

where

S₁ = [B₁ᵀ ⊗ A₁ + B₂ᵀ ⊗ A₂ ; D₁ᵀ ⊗ C₁ + D₂ᵀ ⊗ C₂]    (9)

(the semicolon separates the two block rows). In particular, if F₁ = F₂ = 0, then (6) has the unique solution X = 0.
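Theorem 3's construction can be verified numerically. The sketch below uses illustrative matrices (assumptions, not the paper's example) and the column-stacking col operator; least squares is used as a numerically stable stand-in for (S₁ᵀS₁)⁻¹S₁ᵀ:

```python
import numpy as np

def solve_coupled_exact(A1, B1, A2, B2, C1, D1, C2, D2, F1, F2):
    """Exact solution of A1 X B1 + A2 X B2 = F1, C1 X D1 + C2 X D2 = F2."""
    m, n = A1.shape[1], B1.shape[0]
    # S1 has two block rows, one per coupled equation (Lemma 1 applied term by term)
    S1 = np.vstack([np.kron(B1.T, A1) + np.kron(B2.T, A2),
                    np.kron(D1.T, C1) + np.kron(D2.T, C2)])
    rhs = np.concatenate([F1.flatten(order="F"), F2.flatten(order="F")])
    x, *_ = np.linalg.lstsq(S1, rhs, rcond=None)
    return x.reshape((m, n), order="F")

X = np.array([[1.0, 2.0], [3.0, 4.0]])
A1 = np.array([[1.0, 0.0], [1.0, 1.0]]); B1 = np.array([[2.0, 0.0], [0.0, 1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]]); B2 = np.array([[1.0, 1.0], [0.0, 1.0]])
C1 = np.eye(2); D1 = np.eye(2)
C2 = np.array([[1.0, 2.0], [0.0, 1.0]]); D2 = np.array([[1.0, 0.0], [3.0, 1.0]])
F1 = A1 @ X @ B1 + A2 @ X @ B2
F2 = C1 @ X @ D1 + C2 @ X @ D2
X_hat = solve_coupled_exact(A1, B1, A2, B2, C1, D1, C2, D2, F1, F2)
# The recovered X reproduces both right-hand sides (up to round-off)
print(np.allclose(A1 @ X_hat @ B1 + A2 @ X_hat @ B2, F1),
      np.allclose(C1 @ X_hat @ D1 + C2 @ X_hat @ D2, F2))  # True True
```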

The Gradient-Based Iterative Algorithm.
The hierarchical identification principle implies that the related system can be decomposed into several subsystems, so we introduce four intermediate matrices:

M₁ := F₁ − A₂XB₂,  M₂ := F₁ − A₁XB₁,  N₁ := F₂ − C₂XD₂,  N₂ := F₂ − C₁XD₁.    (10)

Using these four intermediate matrices, we decompose (6) into four subsystems:

S₁: A₁X₁B₁ = M₁,  S₂: A₂X₂B₂ = M₂,  S₃: C₁X₃D₁ = N₁,  S₄: C₂X₄D₂ = N₂.    (11)

According to Lemma 2, it is not hard to get the iterative solutions X₁(k), X₂(k), X₃(k), and X₄(k) of these four subsystems S₁, S₂, S₃, and S₄ as follows:

X₁(k) = X₁(k−1) + μ A₁ᵀ [M₁ − A₁ X₁(k−1) B₁] B₁ᵀ,
X₂(k) = X₂(k−1) + μ A₂ᵀ [M₂ − A₂ X₂(k−1) B₂] B₂ᵀ,
X₃(k) = X₃(k−1) + μ C₁ᵀ [N₁ − C₁ X₃(k−1) D₁] D₁ᵀ,
X₄(k) = X₄(k−1) + μ C₂ᵀ [N₂ − C₂ X₄(k−1) D₂] D₂ᵀ.    (12)

We will determine the convergence factor μ later. Using (10), we have

X₁(k) = X₁(k−1) + μ A₁ᵀ [F₁ − A₂ X B₂ − A₁ X₁(k−1) B₁] B₁ᵀ,
X₂(k) = X₂(k−1) + μ A₂ᵀ [F₁ − A₁ X B₁ − A₂ X₂(k−1) B₂] B₂ᵀ,
X₃(k) = X₃(k−1) + μ C₁ᵀ [F₂ − C₂ X D₂ − C₁ X₃(k−1) D₁] D₁ᵀ,
X₄(k) = X₄(k−1) + μ C₂ᵀ [F₂ − C₁ X D₁ − C₂ X₄(k−1) D₂] D₂ᵀ.    (13)

Because the unknown matrix X appears on the right-hand side, this algorithm cannot be realized directly. By using the hierarchical identification principle, we replace the unknown matrix X with its estimate at iteration k − 1. Hence, we obtain

X₁(k) = X₁(k−1) + μ A₁ᵀ [F₁ − A₂ X₁(k−1) B₂ − A₁ X₁(k−1) B₁] B₁ᵀ,
X₂(k) = X₂(k−1) + μ A₂ᵀ [F₁ − A₁ X₂(k−1) B₁ − A₂ X₂(k−1) B₂] B₂ᵀ,
X₃(k) = X₃(k−1) + μ C₁ᵀ [F₂ − C₂ X₃(k−1) D₂ − C₁ X₃(k−1) D₁] D₁ᵀ,
X₄(k) = X₄(k−1) + μ C₂ᵀ [F₂ − C₁ X₄(k−1) D₁ − C₂ X₄(k−1) D₂] D₂ᵀ.    (14)

In fact, only one iterative solution X(k) is needed in this algorithm; we therefore take the average of X₁(k), X₂(k), X₃(k), and X₄(k) as X(k) and obtain the gradient-based iterative algorithm:

X(k) = [X₁(k) + X₂(k) + X₃(k) + X₄(k)]/4,    (15)
X₁(k) = X(k−1) + μ A₁ᵀ [F₁ − A₂ X(k−1) B₂ − A₁ X(k−1) B₁] B₁ᵀ,    (16)
X₂(k) = X(k−1) + μ A₂ᵀ [F₁ − A₁ X(k−1) B₁ − A₂ X(k−1) B₂] B₂ᵀ,    (17)
X₃(k) = X(k−1) + μ C₁ᵀ [F₂ − C₂ X(k−1) D₂ − C₁ X(k−1) D₁] D₁ᵀ,    (18)
X₄(k) = X(k−1) + μ C₂ᵀ [F₂ − C₁ X(k−1) D₁ − C₂ X(k−1) D₂] D₂ᵀ,    (19)

where the convergence factor μ can be chosen, for example, as

0 < μ ⩽ 2/(‖A₁‖²‖B₁‖² + ‖A₂‖²‖B₂‖² + ‖C₁‖²‖D₁‖² + ‖C₂‖²‖D₂‖²).    (20)

Theorem 4. If (6) has a unique solution X, then the iterative solution X(k) given by (15)-(20) converges to X; that is, lim_(k→∞) X(k) = X, or equivalently the error matrix X(k) − X converges to zero for any initial value X(0).
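The averaged iteration (15)-(19) can be sketched as follows. The data and the convergence factor below are illustrative assumptions (a conservative guess, not an optimal choice):

```python
import numpy as np

def coupled_gradient_iteration(A1, B1, A2, B2, C1, D1, C2, D2, F1, F2,
                               mu, iters=500):
    """Gradient-based iteration (15)-(19) for A1 X B1 + A2 X B2 = F1,
    C1 X D1 + C2 X D2 = F2, starting from X(0) = 0."""
    X = np.zeros((A1.shape[1], B1.shape[0]))
    for _ in range(iters):
        R1 = F1 - A1 @ X @ B1 - A2 @ X @ B2   # residual of the first equation
        R2 = F2 - C1 @ X @ D1 - C2 @ X @ D2   # residual of the second equation
        X1 = X + mu * A1.T @ R1 @ B1.T
        X2 = X + mu * A2.T @ R1 @ B2.T
        X3 = X + mu * C1.T @ R2 @ D1.T
        X4 = X + mu * C2.T @ R2 @ D2.T
        X = (X1 + X2 + X3 + X4) / 4.0         # (15): average the four estimates
    return X

X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
A1 = B1 = A2 = B2 = np.eye(2)
C1 = np.diag([1.0, 2.0]); D1 = np.eye(2); C2 = np.eye(2); D2 = np.eye(2)
F1 = A1 @ X_true @ B1 + A2 @ X_true @ B2
F2 = C1 @ X_true @ D1 + C2 @ X_true @ D2
X_hat = coupled_gradient_iteration(A1, B1, A2, B2, C1, D1, C2, D2, F1, F2,
                                   mu=1.0 / 7.0)
print(np.max(np.abs(X_hat - X_true)))  # tiny: the iteration converges to X_true
```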

General Real Coupled Matrix Equations
In this section, we study the general real coupled matrix equations of the form

Σ_(i=1)^p Aᵢ X Bᵢ = F₁,  Σ_(j=1)^q Cⱼ X Dⱼ = F₂,    (39)

where Aᵢ, Bᵢ (i = 1, 2, ..., p), Cⱼ, Dⱼ (j = 1, 2, ..., q), F₁, and F₂ are given matrices of compatible dimensions, and X is the unknown matrix to be determined.

The Exact Solution.
According to Lemma 1, (39) can be rewritten as

S₂ col[X] = col[F₁, F₂],    (40)

where

S₂ = [Σ_(i=1)^p Bᵢᵀ ⊗ Aᵢ ; Σ_(j=1)^q Dⱼᵀ ⊗ Cⱼ]    (41)

(the semicolon separates the two block rows). The exact solution of (39) can be given by the following theorem.
Theorem 5. Equation (39) has a unique solution if and only if S₂ is a full column-rank matrix; in this case, the unique solution is

col[X] = (S₂ᵀ S₂)⁻¹ S₂ᵀ col[F₁, F₂].    (42)

In particular, if F₁ = F₂ = 0, then (39) has the unique solution X = 0.
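A sketch of Theorem 5's formula for arbitrary term lists (the data below are illustrative assumptions; least squares stands in for (S₂ᵀS₂)⁻¹S₂ᵀ):

```python
import numpy as np

def solve_general_exact(AB, CD, F1, F2):
    """Exact solution of sum_i Ai X Bi = F1, sum_j Cj X Dj = F2.

    AB: list of (Ai, Bi) pairs; CD: list of (Cj, Dj) pairs."""
    m, n = AB[0][0].shape[1], AB[0][1].shape[0]
    # Build S2 from (41): one block row per coupled equation
    S2 = np.vstack([sum(np.kron(B.T, A) for A, B in AB),
                    sum(np.kron(D.T, C) for C, D in CD)])
    rhs = np.concatenate([F1.flatten(order="F"), F2.flatten(order="F")])
    x, *_ = np.linalg.lstsq(S2, rhs, rcond=None)
    return x.reshape((m, n), order="F")

X = np.array([[2.0, -1.0], [0.0, 1.0]])
AB = [(np.eye(2), np.eye(2)), (np.diag([1.0, 3.0]), np.eye(2))]
CD = [(np.eye(2), np.diag([2.0, 1.0]))]
F1 = sum(A @ X @ B for A, B in AB)
F2 = sum(C @ X @ D for C, D in CD)
X_hat = solve_general_exact(AB, CD, F1, F2)
print(np.allclose(X_hat, X))  # True (S2 has full column rank here)
```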

The Gradient-Based Iterative Algorithm.
We define the intermediate matrices

Gᵢ := F₁ − Σ_(l=1, l≠i)^p Aₗ X Bₗ,  i = 1, 2, ..., p,    (43)
Hⱼ := F₂ − Σ_(l=1, l≠j)^q Cₗ X Dₗ,  j = 1, 2, ..., q.    (44)

By using the hierarchical identification principle, we decompose (39) into the p + q subsystems

Aᵢ X Bᵢ = Gᵢ,  i = 1, 2, ..., p,  Cⱼ X Dⱼ = Hⱼ,  j = 1, 2, ..., q.

According to Lemma 2, we have

Yᵢ(k) = Yᵢ(k−1) + μ Aᵢᵀ [Gᵢ − Aᵢ Yᵢ(k−1) Bᵢ] Bᵢᵀ,    (45)
Zⱼ(k) = Zⱼ(k−1) + μ Cⱼᵀ [Hⱼ − Cⱼ Zⱼ(k−1) Dⱼ] Dⱼᵀ,    (46)

where Yᵢ(k) and Zⱼ(k) denote the iterative solutions of the equations Gᵢ = Aᵢ X Bᵢ and Hⱼ = Cⱼ X Dⱼ at iteration k, respectively. Substituting Gᵢ into (45) and Hⱼ into (46) gives

Yᵢ(k) = Yᵢ(k−1) + μ Aᵢᵀ [F₁ − Σ_(l≠i) Aₗ X Bₗ − Aᵢ Yᵢ(k−1) Bᵢ] Bᵢᵀ,    (47)
Zⱼ(k) = Zⱼ(k−1) + μ Cⱼᵀ [F₂ − Σ_(l≠j) Cₗ X Dₗ − Cⱼ Zⱼ(k−1) Dⱼ] Dⱼᵀ.    (48)

To realize the above algorithm, we replace the unknown matrix X on the right-hand sides of these two equations with Yᵢ(k−1) and Zⱼ(k−1), respectively. In fact, only one iterative solution X(k) is needed; taking the average of the Yᵢ(k) and Zⱼ(k) as X(k), we obtain the gradient-based iterative algorithm for solving (39) as follows:

X(k) = [Σ_(i=1)^p Yᵢ(k) + Σ_(j=1)^q Zⱼ(k)] / (p + q),    (49)
Yᵢ(k) = X(k−1) + μ Aᵢᵀ [F₁ − Σ_(l=1)^p Aₗ X(k−1) Bₗ] Bᵢᵀ,  i = 1, 2, ..., p,    (50)
Zⱼ(k) = X(k−1) + μ Cⱼᵀ [F₂ − Σ_(l=1)^q Cₗ X(k−1) Dₗ] Dⱼᵀ,  j = 1, 2, ..., q,    (51)

where the convergence factor μ can be chosen, for example, as

0 < μ ⩽ 2 / (Σ_(i=1)^p ‖Aᵢ‖²‖Bᵢ‖² + Σ_(j=1)^q ‖Cⱼ‖²‖Dⱼ‖²).    (52)

Theorem 6. If (39) has a unique solution X, then the iterative solution X(k) given by (49)-(52) converges to X; that is, lim_(k→∞) X(k) = X, or equivalently the error matrix X(k) − X converges to zero for any initial value X(0).
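The general averaged iteration (49)-(51) can be sketched for arbitrary term lists (illustrative data and convergence factor, chosen as assumptions):

```python
import numpy as np

def general_gradient_iteration(AB, CD, F1, F2, mu, iters=1000):
    """Iteration (49)-(51) for sum_i Ai X Bi = F1, sum_j Cj X Dj = F2.

    AB: list of (Ai, Bi) pairs; CD: list of (Cj, Dj) pairs."""
    X = np.zeros((AB[0][0].shape[1], AB[0][1].shape[0]))
    for _ in range(iters):
        R1 = F1 - sum(A @ X @ B for A, B in AB)   # residual of the first equation
        R2 = F2 - sum(C @ X @ D for C, D in CD)   # residual of the second equation
        # (50)-(51): one corrected estimate per subsystem, then (49): average
        Ys = [X + mu * A.T @ R1 @ B.T for A, B in AB]
        Zs = [X + mu * C.T @ R2 @ D.T for C, D in CD]
        X = sum(Ys + Zs) / (len(AB) + len(CD))
    return X

X_true = np.array([[1.0, -2.0], [0.5, 3.0]])
AB = [(np.eye(2), np.eye(2)), (np.diag([1.0, 2.0]), np.eye(2))]
CD = [(np.eye(2), np.diag([1.0, 3.0]))]
F1 = sum(A @ X_true @ B for A, B in AB)
F2 = sum(C @ X_true @ D for C, D in CD)
X_hat = general_gradient_iteration(AB, CD, F1, F2, mu=0.1)
print(np.max(np.abs(X_hat - X_true)))  # tiny: the iteration converges to X_true
```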
Proof. Define the estimation error matrices

Ỹᵢ(k) := Yᵢ(k) − X,  Z̃ⱼ(k) := Zⱼ(k) − X,  X̃(k) := X(k) − X.    (53)

By using (49)-(51), these errors satisfy the homogeneous counterparts of the iteration, that is, the recursions obtained by replacing F₁ and F₂ with Σₗ Aₗ X Bₗ and Σₗ Cₗ X Dₗ, respectively. Applying the trace formula ‖X‖² = tr[XᵀX] to these recursions shows that ‖X̃(k)‖² does not exceed ‖X̃(k−1)‖² minus a nonnegative multiple of the squared residuals ‖Σₗ Aₗ X̃(k−1) Bₗ‖² + ‖Σₗ Cₗ X̃(k−1) Dₗ‖², provided the convergence factor is chosen to satisfy the condition in (52). It follows that the sequence ‖X̃(k)‖² is nonincreasing and the residuals are square-summable, so Σₗ Aₗ X̃(k) Bₗ → 0 and Σₗ Cₗ X̃(k) Dₗ → 0 as k → ∞. Since (39) has a unique solution, S₂ is of full column rank by Theorem 5; vanishing residuals therefore imply X̃(k) → 0 as k → ∞. This completes the proof of Theorem 6.

A Numerical Example
This section offers a numerical example to illustrate the performance of the proposed algorithm. Consider (6) with a given set of coefficient matrices. Taking X(0) = 10⁻⁶ · 1_(2×2) as the initial iterative value, we apply the gradient-based iterative algorithm (15)-(19) to compute X(k). The iterative solutions X(k) are shown in Table 1, where δ := ‖X(k) − X‖/‖X‖ is the relative error and the convergence factor μ is 25/139. The relation of the relative error δ to different convergence factors is shown in Figure 1. From Table 1 and Figure 1, we find that δ becomes smaller and smaller and tends to zero as the number of iterations increases. This demonstrates that the algorithm proposed in this paper is effective.
A simple calculation indicates that the stated range of the convergence factor is conservative. A closer look at Figure 1 shows that the rate of convergence increases as μ is enlarged from 25/556 to 25/278. However, if we keep enlarging μ from 25/139 to 125/556, the rate of convergence drops. This suggests that a best convergence factor exists for this algorithm but has not yet been identified. How to find the best convergence factor is left for future work.
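The nonmonotonic effect of μ described above can be reproduced on a small synthetic instance of (6) (illustrative data, not the paper's example): a moderate μ converges much faster than a tiny one, while too large a μ diverges.

```python
import numpy as np

def run(mu, iters=50):
    """Relative error of the averaged iteration (15)-(19) after a fixed
    number of steps, on a small synthetic instance of (6)."""
    X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
    A1 = B1 = A2 = B2 = np.eye(2)
    C1 = np.diag([1.0, 2.0]); D1 = C2 = D2 = np.eye(2)
    F1 = A1 @ X_true @ B1 + A2 @ X_true @ B2
    F2 = C1 @ X_true @ D1 + C2 @ X_true @ D2
    X = np.zeros_like(X_true)
    for _ in range(iters):
        R1 = F1 - A1 @ X @ B1 - A2 @ X @ B2
        R2 = F2 - C1 @ X @ D1 - C2 @ X @ D2
        # averaging the four updates (15)-(19) collapses to one combined step
        X = X + (mu / 4.0) * (A1.T @ R1 @ B1.T + A2.T @ R1 @ B2.T
                              + C1.T @ R2 @ D1.T + C2.T @ R2 @ D2.T)
    return np.linalg.norm(X - X_true) / np.linalg.norm(X_true)

# Small mu: slow convergence. Moderate mu: fast. Too large mu: divergence.
print(run(0.05), run(0.4), run(0.7))
```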

Conclusions
This paper has proposed a gradient-based iterative algorithm for solving a class of real coupled matrix equations. By using the hierarchical identification principle, we proved that the iterative solution converges whenever the unique solution exists. An example demonstrates that the algorithm is effective and indicates that a best convergence factor exists. Finding the best convergence factor of this algorithm is left for future work.