Finite Iterative Algorithm for Solving a Complex of Conjugate and Transpose Matrix Equation

We consider an iterative algorithm for solving a complex matrix equation with conjugate and transpose of two unknowns of the form $A_1VB_1 + C_1WD_1 + A_2\overline{V}B_2 + C_2\overline{W}D_2 + A_3V^HB_3 + C_3W^HD_3 + A_4V^TB_4 + C_4W^TD_4 = E$. With the iterative algorithm, the existence of a solution of this matrix equation can be determined automatically. When this matrix equation is consistent, the solutions can be obtained within a finite number of iteration steps for any initial matrices $V_1, W_1$ in the absence of round-off errors. Some lemmas and theorems from which the iterative solutions are obtained are stated and proved. A numerical example is given to illustrate the effectiveness of the proposed method and to support the theoretical results of this paper.


Introduction
Consider the complex matrix equation

$$A_1VB_1 + C_1WD_1 + A_2\overline{V}B_2 + C_2\overline{W}D_2 + A_3V^HB_3 + C_3W^HD_3 + A_4V^TB_4 + C_4W^TD_4 = E, \qquad (1)$$

where $A_1, A_2, C_1, C_2 \in \mathbb{C}^{p\times m}$, $B_1, B_2, D_1, D_2 \in \mathbb{C}^{n\times q}$, $A_3, A_4, C_3, C_4 \in \mathbb{C}^{p\times n}$, $E \in \mathbb{C}^{p\times q}$, and $B_3, B_4, D_3, D_4 \in \mathbb{C}^{m\times q}$ are given matrices, while $V, W \in \mathbb{C}^{m\times n}$ are the matrices to be determined. In the field of linear algebra, iterative algorithms for solving matrix equations have received much attention. Based on the iterative solutions of matrix equations, Ding and Chen presented hierarchical gradient iterative algorithms for general matrix equations [1,2] and hierarchical least squares iterative algorithms for generalized coupled Sylvester matrix equations and general coupled matrix equations [3,4]. The hierarchical gradient iterative algorithms [1,2] and hierarchical least squares iterative algorithms [1,4,5] for solving general (coupled) matrix equations are innovative and computationally efficient; they were proposed on the basis of the hierarchical identification principle [3,6], which regards the unknown matrix as the system parameter matrix to be identified. Iterative algorithms for continuous and discrete Lyapunov matrix equations were also obtained by applying the hierarchical identification principle [7]. Recently, the idea of hierarchical identification was utilized to solve the so-called extended Sylvester-conjugate matrix equations in [8]. From an optimization point of view, a gradient-based iteration was constructed in [9] to solve the general coupled matrix equation. A significant feature of the method in [9] is that a necessary and sufficient condition guaranteeing the convergence of the algorithm can be obtained explicitly.
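As a concrete reading of equation (1), the left-hand side is an operator acting on the pair $(V, W)$ that is linear over $\mathbb{R}$ but, because of the conjugations, not over $\mathbb{C}$. The following NumPy sketch (an illustrative aside; the function name `lhs` and the random square test matrices are our own, not from the paper) evaluates it:

```python
import numpy as np

def lhs(V, W, A, B, C, D):
    """Left-hand side of equation (1); A, B, C, D are length-4 sequences
    of coefficient matrices and V, W are the unknown matrices."""
    return (A[0] @ V @ B[0]          + C[0] @ W @ D[0]
          + A[1] @ np.conj(V) @ B[1] + C[1] @ np.conj(W) @ D[1]
          + A[2] @ V.conj().T @ B[2] + C[2] @ W.conj().T @ D[2]
          + A[3] @ V.T @ B[3]        + C[3] @ W.T @ D[3])

# sample data: random square matrices so that every product is conformable
rng = np.random.default_rng(1)
mk = lambda: rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A, B, C, D = [[mk() for _ in range(4)] for _ in range(4)]
V1, W1, V2, W2 = mk(), mk(), mk(), mk()

# the map is additive, and homogeneous for REAL scalars only
additive = np.allclose(lhs(V1 + V2, W1 + W2, A, B, C, D),
                       lhs(V1, W1, A, B, C, D) + lhs(V2, W2, A, B, C, D))
```

The failure of complex homogeneity is precisely why the convergence analysis below works over the real field with a real inner product rather than the usual complex one.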
Complex matrix equations have attracted the attention of many researchers since it was shown in [10] that the consistency of the matrix equation $A\overline{X} - XB = C$ can be characterized by the consimilarity [11-13] of two partitioned matrices formed from the coefficient matrices $A$, $B$, and $C$. By the consimilarity Jordan decomposition, explicit solutions were obtained in [10,14]. Some explicit expressions of the solution to the matrix equation $A\overline{X} - XB = C$ were established in [15], where it was shown that this matrix equation has a unique solution if and only if $A\overline{A}$ and $B\overline{B}$ have no common eigenvalues. Research on solving linear matrix equations has been actively pursued for many years. For example, Navarra et al. studied a representation of the general common solution of the matrix equations $A_1XB_1 = C_1$ and $A_2XB_2 = C_2$ [16]; Van der Woude obtained conditions for the existence of a common solution to a family of matrix equations of the form $A_iXB_i = C_i$ [17]; Bhimasankaram considered the linear matrix equations $AX = B$, $XC = D$, and $EXF = G$ [18]. Mitra provided conditions for the existence of a solution, and a representation of the general common solution, of the matrix equations $AX = B$ and $XC = D$ as well as the matrix equations $A_1XB_1 = C_1$ and $A_2XB_2 = C_2$ [19,20]. Ramadan et al. [21] introduced a complete, general, and explicit solution to the Yakubovich matrix equation $V - AVF = BW$. Important results have also been developed for the matrix equation $(AXB, CXD) = (E, F)$. In [22], necessary and sufficient conditions for its solvability and an expression for its solution were derived by means of generalized inverses; moreover, the least-squares solution was also obtained in [22] by using the generalized singular value decomposition. In [23], when this matrix equation is consistent, the minimum-norm solution was given by means of the canonical correlation decomposition.
In [24], based on the projection theorem in Hilbert space, an analytical expression of the least-squares solution of the matrix equation $(AXB, CXD) = (E, F)$ was given by making use of the generalized singular value decomposition and the canonical correlation decomposition. In [25], a necessary and sufficient condition for a pair of matrix equations to have a common least-squares solution was derived by the matrix rank method. In the aforementioned methods, the coefficient matrices of the considered equations must first be transformed into certain canonical forms. Recently, an iterative algorithm was presented in [26] to solve the matrix equation $(AXB, CXD) = (E, F)$. Unlike the methods mentioned above, this algorithm works directly with the original coefficient matrices and provides a solution within a finite number of iteration steps for any initial values.
Very recently, a new operator of conjugate product for complex polynomial matrices was proposed in [27]. It is shown there that an arbitrary complex polynomial matrix can be converted into the so-called Smith normal form by elementary transformations in the framework of the conjugate product. The conjugate product and the Sylvester-conjugate sum were also proposed in [28]; based on the important properties of these new operators, a unified approach for solving a general class of Sylvester-polynomial-conjugate matrix equations was given, and the complete solution of the Sylvester-polynomial-conjugate matrix equation was obtained. In [29], by using a real inner product in complex matrix spaces, a solution can be obtained within a finite number of iteration steps for any initial values in the absence of round-off errors. In [30], iterative solutions to a class of complex matrix equations were given by applying the hierarchical identification principle. This paper is organized as follows. In Section 2, we introduce some notations, a lemma, and a theorem that are needed to develop this work. In Section 3, we propose an iterative method for obtaining a numerical solution of the complex matrix equation with conjugate and transpose of two unknowns of the form $A_1VB_1 + C_1WD_1 + A_2\overline{V}B_2 + C_2\overline{W}D_2 + A_3V^HB_3 + C_3W^HD_3 + A_4V^TB_4 + C_4W^TD_4 = E$. In Section 4, a numerical example is given to demonstrate the simplicity and effectiveness of the presented method.

Preliminaries
The following notations, definitions, lemmas, and theorems will be used to develop the proposed work. We use $A^T$, $\overline{A}$, $A^H$, $\mathrm{tr}(A)$, and $\|A\|$ to denote the transpose, the conjugate, the conjugate transpose, the trace, and the Frobenius norm of a matrix $A$, respectively. We denote the set of all $m \times n$ complex matrices by $\mathbb{C}^{m\times n}$, and $\mathrm{Re}(a)$ denotes the real part of a number $a$.
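In NumPy terms (an illustrative aside, not part of the original development), these five operations correspond directly to built-in array functions:

```python
import numpy as np

A = np.array([[1 + 2j, 3j], [4, 5 - 1j]])

At   = A.T               # transpose A^T
Abar = np.conj(A)        # conjugate of A
Ah   = A.conj().T        # conjugate transpose A^H
trA  = np.trace(A)       # trace tr(A)
nrm  = np.linalg.norm(A) # Frobenius norm ||A||

# The Frobenius norm satisfies ||A||^2 = tr(A^H A), which is real.
```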
Definition 1 (inner product [31]). A real inner product space is a vector space $V$ over the real field $\mathbb{R}$ together with an inner product, that is, a map $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ satisfying the following three axioms for all vectors $u, v, w \in V$ and all scalars $a \in \mathbb{R}$: (1) symmetry: $\langle u, v\rangle = \langle v, u\rangle$; (2) linearity in the first argument: $\langle au + v, w\rangle = a\langle u, w\rangle + \langle v, w\rangle$; (3) positive definiteness: $\langle u, u\rangle > 0$ for all $u \neq 0$. Two vectors $u, v \in V$ are said to be orthogonal if $\langle u, v\rangle = 0$.
The following theorem defines a real inner product on space C × over the field R.
Theorem 2 (see [32]). In the space $\mathbb{C}^{m\times n}$ over the field $\mathbb{R}$, a real inner product can be defined as $\langle A, B \rangle = \mathrm{Re}[\mathrm{tr}(B^H A)]$.
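The three axioms of Definition 1 can be checked numerically for this product. The sketch below does so for random stand-in matrices (the helper names are ours); note that linearity holds only for real scalars, which is exactly the sense in which Theorem 2 makes $\mathbb{C}^{m\times n}$ a real inner product space.

```python
import numpy as np

def inner(A, B):
    # <A, B> = Re[tr(B^H A)] on C^{m x n}, viewed as a vector space over R
    return np.trace(B.conj().T @ A).real

rng = np.random.default_rng(0)
mk = lambda: rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
A, B, X = mk(), mk(), mk()
a = 1.7  # a REAL scalar; linearity is only required over R

sym = np.isclose(inner(A, B), inner(B, A))                          # symmetry
lin = np.isclose(inner(a * A + X, B), a * inner(A, B) + inner(X, B))  # linearity
pos = inner(A, A) > 0                                # positive definiteness
```

Positive definiteness holds because $\langle A, A\rangle = \mathrm{Re}[\mathrm{tr}(A^H A)] = \|A\|^2$.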

The Main Result
In this section, we propose an iterative solution to the complex matrix equation (1) with conjugate and transpose of the two unknowns $V, W \in \mathbb{C}^{m\times n}$, with the coefficient matrices given as in Section 1.
The following finite iterative algorithm is presented to solve it.
To prove the convergence property of Algorithm 3, we first establish the following basic properties.

Lemma 4. Suppose that matrix equation (1) is consistent and let $V^*$, $W^*$ be arbitrary solutions of (1). Then, for any initial matrices $V_1$ and $W_1$, relation (8) holds.

Proof. We apply mathematical induction. From the properties of the trace and the conjugate, and in view of the fact that $V^*$ and $W^*$ are solutions of matrix equation (1), it follows that (8) holds for $k = 1$.

Assume now that (8) holds for $k = s$; we prove that the conclusion holds for $k = s + 1$. It follows from Algorithm 3, the properties of the trace and the conjugate, the fact that $V^*$ and $W^*$ are solutions of matrix equation (1), and relation (14) that (8) holds for $k = s + 1$. Hence relation (8) holds by mathematical induction.
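The trace and conjugate manipulations in this proof rest on "adjoint" identities for the four kinds of terms in (1) with respect to the inner product of Theorem 2. The following sketch (our own summary, checked numerically on random square matrices) states them; each follows from $\mathrm{tr}(XY) = \mathrm{tr}(YX)$, $\mathrm{tr}(X^T) = \mathrm{tr}(X)$, and $\mathrm{Re}(\overline{z}) = \mathrm{Re}(z)$.

```python
import numpy as np

def inner(X, Y):
    return np.trace(Y.conj().T @ X).real  # <X, Y> = Re[tr(Y^H X)]

rng = np.random.default_rng(2)
mk = lambda: rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A, B, V, M = mk(), mk(), mk(), mk()

H = A.conj().T @ M @ B.conj().T  # shorthand for A^H M B^H

# <A V B, M>       = <V, A^H M B^H>
ok1 = np.isclose(inner(A @ V @ B, M), inner(V, H))
# <A conj(V) B, M> = <V, conj(A^H M B^H)>
ok2 = np.isclose(inner(A @ np.conj(V) @ B, M), inner(V, np.conj(H)))
# <A V^H B, M>     = <V, (A^H M B^H)^H>
ok3 = np.isclose(inner(A @ V.conj().T @ B, M), inner(V, H.conj().T))
# <A V^T B, M>     = <V, (A^H M B^H)^T>
ok4 = np.isclose(inner(A @ V.T @ B, M), inner(V, H.T))
```

These identities are what allow each term of the residual to be "moved" onto the unknowns $V$ and $W$ in the induction steps.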

Lemma 5. Suppose that matrix equation (1) is consistent. Then the sequences generated by Algorithm 3 satisfy relations (17) and (18).
Proof. We apply mathematical induction.
First, from Algorithm 3, we have that (17) holds and that (18) is satisfied for $k = 1$.
Theorem 6 (see [32]). If matrix equation (1) is consistent, then a solution can be obtained within a finite number of iteration steps by using Algorithm 3 for any initial matrices $V_1$, $W_1$.

Numerical Example
In this section, we present a numerical example to illustrate the application of the proposed method.
Example 7. In this example we illustrate the theoretical results of Algorithm 3 by solving an instance of matrix equation (1). Because of the influence of computational rounding errors, the residual $R(k)$ is usually not exactly zero during the iteration. We therefore regard the matrix $R(k)$ as a zero matrix once $\|R(k)\| < 10^{-10}$.
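The detailed steps of Algorithm 3 are not reproduced here. As an illustrative substitute, the following sketch applies a plain gradient iteration (in the spirit of the gradient-based methods of [9], not the paper's finite-step algorithm) to a small consistent instance of equation (1) with randomly generated data, and monitors the residual norm $\|R(k)\|$. The step size is chosen conservatively from a norm bound on the operator, which guarantees a monotonically decreasing residual; all names and data here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
mk = lambda: rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A, B, C, D = [[mk() for _ in range(4)] for _ in range(4)]

def lhs(V, W):
    """Left-hand side of equation (1) for this random instance."""
    return (A[0] @ V @ B[0]          + C[0] @ W @ D[0]
          + A[1] @ np.conj(V) @ B[1] + C[1] @ np.conj(W) @ D[1]
          + A[2] @ V.conj().T @ B[2] + C[2] @ W.conj().T @ D[2]
          + A[3] @ V.T @ B[3]        + C[3] @ W.T @ D[3])

def adj(R, X, Y):
    """Adjoint of V -> X1 V Y1 + X2 conj(V) Y2 + X3 V^H Y3 + X4 V^T Y4
    with respect to the real inner product <P, Q> = Re[tr(Q^H P)]."""
    H = [X[i].conj().T @ R @ Y[i].conj().T for i in range(4)]
    return H[0] + np.conj(H[1]) + H[2].conj().T + H[3].T

# build a consistent instance from a known solution pair
V_true, W_true = mk(), mk()
E = lhs(V_true, W_true)

# conservative step size: the operator norm of lhs is at most s,
# so mu = 1/s^2 guarantees monotone descent of ||R(k)||
s = sum(np.linalg.norm(X[i]) * np.linalg.norm(Y[i])
        for X, Y in ((A, B), (C, D)) for i in range(4))
mu = 1.0 / s**2

V = np.zeros((n, n), dtype=complex)
W = np.zeros((n, n), dtype=complex)
res = []
for k in range(3000):
    R = E - lhs(V, W)           # residual R(k)
    res.append(np.linalg.norm(R))
    V = V + mu * adj(R, A, B)   # gradient step for V
    W = W + mu * adj(R, C, D)   # gradient step for W
```

Unlike Algorithm 3, a gradient iteration of this kind converges only asymptotically, so in practice one stops once $\|R(k)\|$ falls below a tolerance such as $10^{-10}$, exactly as described above.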

Conclusions
Figure 1 above shows the convergence curve of the residual norm $\|R(k)\|$. In this paper, an iterative algorithm for solving a complex matrix equation with conjugate and transpose of two unknowns of the form $A_1VB_1 + C_1WD_1 + A_2\overline{V}B_2 + C_2\overline{W}D_2 + A_3V^HB_3 + C_3W^HD_3 + A_4V^TB_4 + C_4W^TD_4 = E$ was presented. We proved that the iterative algorithm converges to a solution for any initial matrices, and we stated and proved the lemmas and theorems from which the solutions are obtained. The proposed method was illustrated by a numerical example, and the obtained numerical results show that the technique is efficient.