Polynomial Solutions to the Matrix Equation X − AX^T B = C

Solutions are constructed for the Kalman-Yakubovich-transpose equation X − AX^T B = C. The solutions are stated as a polynomial of the parameters of the matrix equation. One of the polynomial solutions is expressed in terms of a symmetric operator matrix, a controllability matrix, and an observability matrix. Moreover, an explicit solution is proposed when the Kalman-Yakubovich-transpose matrix equation has a unique solution. The proposed approach does not require the coefficient matrices to be in canonical form. In addition, a numerical example is given to illustrate the effectiveness of the derived method. Some applications in control theory are discussed at the end of this paper.


Introduction
In the control area, the Kalman-Yakubovich-transpose matrix equation X − AX^T B = C occurs in fault detection [1], control of systems with constraints [2], eigenstructure assignment [3], and observer design [4]. In order to obtain explicit solutions, many researchers have made great efforts. Braden [5] studies the Lyapunov-transpose matrix equation A^T X ± X^T A = B via matrix decomposition. Liao et al. [6] propose an effective method to obtain the least squares solution of a related matrix equation using the GSVD, the CCD, and the projection theorem. Piao et al. [7] investigate the matrix equation AX + X^T C = B by means of the Moore-Penrose generalized inverse and give explicit solutions for the Sylvester-transpose matrix equation. Song et al. [8,9] establish explicit solutions of the quaternion matrix equations X − AXF = C and X − AX̃F = C, where X̃ denotes the j-conjugate of the quaternion matrix X. Moreover, other matrix equations, such as the coupled Sylvester matrix equations and the Riccati equations, have also found numerous applications in control theory; for more related work, see [10,11] and the references therein. Related transpose matrix equations are treated by iterative algorithms in [12,13]. In [14,15], a linear matrix equation of the form ∑_{i=1}^{s} A_i X B_i + ∑_{j=1}^{t} C_j X^T D_j = F is considered, where A_i, B_i, C_j, D_j (i = 1, ..., s; j = 1, ..., t) and F are known constant matrices of appropriate dimensions and X is the matrix to be determined; the least squares solutions and the minimal-norm least squares solutions have been obtained. In [16], using the hierarchical identification principle, the authors consider a more general coupled Sylvester-transpose matrix equation, in which the coefficient matrices indexed over i, j ∈ I[1, p] are given and the matrices X_j, j ∈ I[1, p], are to be determined. In addition, the generalized discrete Yakubovich-transpose matrix equation X − AX^T B = CY has important applications in dealing with complicated linear systems, such as large-scale systems with interconnections, linear systems with certain partitioned structures or extended models, and second-order or higher-order linear systems [17,18]. Song et al. [19] constructed the complete parametric solutions to the generalized discrete Yakubovich-transpose matrix equation, and one of the parametric solutions has a neat and elegant form in terms of a Krylov matrix, a block Hankel matrix, and an observability matrix. Other matrix equations, for instance those arising from descriptor linear systems and quaternion matrix equations, have also been investigated; see [20][21][22][23][24][25] and the references therein. Wu et al. [26] have discussed closed-form solutions to the generalized Sylvester-conjugate matrix equation, and the proposed solution provides all the degrees of freedom, represented by a free parameter matrix. Wu et al. [27] proposed a general, complete parametric solution to the nonhomogeneous generalized Sylvester matrix equation AX + BY = XF + R. One advantage of the proposed solution is that the matrices F and R are in arbitrary form and can be set undetermined, which may give great convenience to many problems in descriptor linear systems, such as observer design and model reference control. Zhou et al. [28] investigated the problem of parameterizing all solutions to the polynomial Diophantine matrix equation and the generalized Sylvester matrix equation by using the so-called generalized Sylvester mapping, right coprime factorization, and the Bezout identity associated with certain polynomial matrices. It is shown that the provided solutions can be parameterized as soon as two pairs of polynomial matrices satisfying the right coprime factorization and the Bezout identity are obtained.

The rest of this paper is outlined as follows. In Section 2, the polynomial solutions to the Kalman-Yakubovich-transpose matrix equation X − AX^T B = C are given. One of the polynomial solutions has a neat and elegant form in terms of a symmetric operator matrix, a controllability matrix, and an observability matrix. A polynomial solution to the Kalman-Yakubovich-transpose equation is also derived via the generalized Leverrier algorithm. A numerical example is given in Section 3 to show the efficiency of the proposed algorithm. Some applications of the Kalman-Yakubovich-transpose equation are mentioned in Section 4 to end this paper.
Throughout this paper, we use R and C to denote the real number field and the complex number field, respectively. A^T, Ā, A^H, and A* refer to the transpose, the conjugate, the conjugate transpose, and the adjoint matrix of A, respectively. λ(A) and λ(B) denote the sets of eigenvalues of the matrices A and B, respectively. I represents the identity matrix of appropriate dimensions. Moreover, for given matrices A, B, and C of appropriate dimensions, we introduce the notations Ctr(A, B, C), Obs(A, B, C), and S(A, B), which are named the controllability matrix, the observability matrix, and a symmetric operator matrix, respectively.
Proof. According to the argument above, (4) implies (23). Now we prove that (23) implies (4) when λμ ≠ 1 for any λ, μ ∈ λ(A^T B). Suppose that X is a solution of (23); then we obtain (24). In addition, a further identity holds, and combining it with (24) gives (26). Because λμ ≠ 1 for any λ, μ ∈ λ(A^T B), the matrix f(A^T B) is nonsingular. Thus, it follows from (26) that (23) implies (4). With the two aspects above, the conclusion is proved.
The following theorem presents a result on the unique solution of the Kalman-Yakubovich-transpose matrix equation.

Theorem 3. If λμ ≠ 1 for any λ, μ ∈ λ(A^T B), then the solution to matrix equation (4) can be written explicitly as a polynomial of the matrices A, B, and C.
Proof. Assume that the characteristic polynomial of f(A^T B) is given; by the Cayley-Hamilton theorem, f(A^T B) satisfies its own characteristic polynomial, which implies that [f(A^T B)]^{-1} can be written as a polynomial of f(A^T B). Because f(A^T B) is a polynomial of A^T B, it is easy to see that [f(A^T B)]^{-1} is also a polynomial of A^T B. So Π(A, B, C)[f(A^T B)]^{-1} is a polynomial of the matrices A, B, and C. Thus the conclusion is proved.
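The key step of this proof, namely that the inverse of a nonsingular matrix is itself a polynomial of that matrix, can be made concrete. The following sketch (Python/NumPy; the function name and sample matrix are our own) builds F^{-1} from the coefficients of the characteristic polynomial via the Cayley-Hamilton theorem:

```python
import numpy as np

def inverse_as_polynomial(F):
    """Express F^{-1} as a polynomial in F via the Cayley-Hamilton theorem:
    if det(sI - F) = s^n + c[n-1] s^(n-1) + ... + c[0] with c[0] != 0, then
    F^{-1} = -(F^(n-1) + c[n-1] F^(n-2) + ... + c[1] I) / c[0]."""
    n = F.shape[0]
    coeffs = np.poly(F)            # [1, c_{n-1}, ..., c_0], highest power first
    c0 = coeffs[-1]
    if abs(c0) < 1e-12:
        raise ValueError("F is (numerically) singular")
    # Horner-style evaluation of F^(n-1) + c_{n-1} F^(n-2) + ... + c_1 I
    P = np.eye(n)
    for c in coeffs[1:-1]:         # c_{n-1}, ..., c_1
        P = F @ P + c * np.eye(n)
    return -P / c0

F = np.array([[2.0, 1.0], [0.0, 3.0]])
print(np.allclose(inverse_as_polynomial(F), np.linalg.inv(F)))  # expect True
```

In the setting of the proof, F plays the role of f(A^T B); the computed inverse agrees with the direct one because Cayley-Hamilton forces F(F^{n-1} + c_{n-1}F^{n-2} + ... + c_1 I) = −c_0 I.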
Remark 4. From Theorem 2 and its proof, it is shown that the solution X of matrix equation (4) depends completely on the coefficient matrices of the transpose matrix equation (4).
If λμ ≠ 1 for any λ, μ ∈ λ(A^T B), then matrix equation (4) has a unique solution. Otherwise, matrix equation (4) does not have a unique solution.
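The spectral condition of Remark 4 is easy to test numerically. A minimal sketch in Python/NumPy (the function name and sample matrices are our own) checks whether λμ ≠ 1 for every pair of eigenvalues λ, μ of A^T B:

```python
import numpy as np

def has_unique_solution(A, B, tol=1e-12):
    """Check the condition of Remark 4: the Kalman-Yakubovich-transpose
    equation X - A X^T B = C has a unique solution iff lam * mu != 1
    for every pair of eigenvalues lam, mu of A^T B."""
    eigs = np.linalg.eigvals(A.T @ B)
    # form all pairwise products lam_i * lam_j and compare with 1
    products = np.multiply.outer(eigs, eigs)
    return bool(np.all(np.abs(products - 1.0) > tol))

A = np.array([[0.2, 0.1], [0.0, 0.3]])
B = np.array([[0.1, 0.0], [0.2, 0.1]])
print(has_unique_solution(A, B))                    # expect True

# a failing case: A^T B = I, so lam * mu = 1 for lam = mu = 1
print(has_unique_solution(np.eye(2), np.eye(2)))    # expect False
```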
Next, we provide two equivalent forms of the solution to matrix equation (4). In order to obtain the unique solution of matrix equation (4), only the coefficients of the characteristic polynomial of A^T B are required. Firstly, the so-called generalized Faddeev-Leverrier algorithm [29] is stated as an iterative relation in which a_k, k = 0, 1, 2, ..., n − 1, are the coefficients of the characteristic polynomial of the matrix A^T B and B_k, k = 0, 1, ..., n − 1, are the coefficient matrices of the adjoint matrix.

Journal of Applied Mathematics

So we have the following theorems.
Meanwhile, the above formula can be stated in an equivalent form, from which it is easy to obtain the conclusions.
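As a concrete sketch of the Faddeev-Leverrier iteration in its classical single-matrix form (applied here to a generic square matrix F, which may be taken as A^T B; the function name is our own), the coefficients of the characteristic polynomial and the coefficient matrices of adj(sI − F) are computed jointly:

```python
import numpy as np

def faddeev_leverrier(F):
    """Classical Faddeev-LeVerrier iteration for a square matrix F.
    Returns (c, M), where det(s I - F) = s^n + c[n-1] s^(n-1) + ... + c[0]
    and adj(s I - F) = sum_k M[k] s^(n-1-k)."""
    n = F.shape[0]
    c = np.zeros(n)
    M = [np.eye(n)]                     # M_1 = I
    c[n - 1] = -np.trace(F)
    for k in range(2, n + 1):
        Mk = F @ M[-1] + c[n - k + 1] * np.eye(n)
        M.append(Mk)
        c[n - k] = -np.trace(F @ Mk) / k
    return c, M

F = np.array([[1.0, 2.0], [3.0, 4.0]])
c, M = faddeev_leverrier(F)
# numpy.poly returns [1, c_{n-1}, ..., c_0]; the two must agree
print(np.allclose(np.poly(F), np.concatenate(([1.0], c[::-1]))))  # expect True
```

The termination identity F M_n = −c_0 I recovers Cayley-Hamilton, and (−1)^{n−1} M_n is the adjugate of F; both facts are what make the polynomial solution formulas of the preceding theorems computable from the iteration alone.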
On the basis of the results above, we have the following corollary on the solution of the Stein-conjugate matrix equation X − AX̄B = C.

Corollary 7. Given the matrices A ∈ C^{n×n} and B ∈ C^{n×n}, if the matrix ĀB is Schur stable, then the matrix equation X − AX̄B = C has a unique solution, which can be expressed explicitly.

Numerical Example
Example 1. We now present an example to compute the solution of matrix equation (4), with the parameter matrices A, B, and C stated as follows. It is easy to check that λμ ≠ 1 for any λ, μ ∈ λ(A^T B); therefore, the above matrix equation has a unique solution.
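As an illustration of how such an example can be checked numerically, the following Python/NumPy sketch (the matrices and function names are our own sample choices, not the paper's parameters) solves X − A X^T B = C by vectorization, using the commutation matrix K with K vec(X) = vec(X^T):

```python
import numpy as np

def commutation_matrix(m, n):
    """K such that K @ vec(X) == vec(X.T) for X of shape (m, n),
    with vec taken column-major (Fortran order)."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # vec(X)[i + j*m] = X[i, j]; vec(X.T)[j + i*n] = X[i, j]
            K[j + i * n, i + j * m] = 1.0
    return K

def solve_kyt(A, B, C):
    """Solve X - A @ X.T @ B = C via the linear system
    (I - kron(B.T, A) @ K) vec(X) = vec(C)."""
    m, n = C.shape
    K = commutation_matrix(m, n)
    M = np.eye(m * n) - np.kron(B.T, A) @ K
    x = np.linalg.solve(M, C.flatten(order="F"))
    return x.reshape(m, n, order="F")

# sample data (hypothetical, chosen so that lam * mu != 1 for all
# eigenvalue pairs lam, mu of A^T B, which guarantees uniqueness)
A = np.array([[0.2, 0.1], [0.0, 0.3]])
B = np.array([[0.1, 0.0], [0.2, 0.1]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])
X = solve_kyt(A, B, C)
print(np.allclose(X - A @ X.T @ B, C))  # expect True
```

This dense Kronecker approach is only a verification device (O(m³n³) work), not the paper's polynomial solution; it uses the identity vec(A X^T B) = (B^T ⊗ A) vec(X^T).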

Applications
A number of control problems are related to the SCA (state covariance assignment) problem [30], for example, controllability/observability Gramian assignment [31] and a certain class of H∞ control problems [32]. From a mathematical viewpoint, we point out that both the continuous-time and the discrete-time SCA problems can be reduced to solving a symmetric matrix equation.
We can also expand the discrete-time Lyapunov equation in an analogous way. It is then easy to see that the continuous-time and discrete-time SCA problems are essentially the same from the viewpoint of mathematics.
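For the discrete-time Lyapunov equation mentioned above, an off-the-shelf solver already exists: `scipy.linalg.solve_discrete_lyapunov(a, q)` solves A X A^H − X + Q = 0. A minimal usage sketch with hypothetical sample matrices:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# a Schur-stable A (spectral radius < 1) guarantees a unique solution
A = np.array([[0.5, 0.1], [0.0, 0.4]])
Q = np.eye(2)

X = solve_discrete_lyapunov(A, Q)
# verify the residual of A X A^T - X + Q = 0
print(np.allclose(A @ X @ A.T - X + Q, 0.0))  # expect True
```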
We also consider the relationship between the matrix equation and some important special cases arising as control problems. For example, let matrices A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, and X ∈ R^{n×n} be given, where X = X^T, C = C^T ≥ 0, and rank B = m. We focus on a symmetric matrix equation of quadratic type [31]. The solution of this symmetric matrix equation (53) is equivalent to the SCA problem with proper definitions of A, B, C, and X. In the special case in which the quadratic term vanishes, matrix equation (53) reduces to the Kalman-Yakubovich-transpose matrix equation.

Conclusions
The well-known Kalman-Yakubovich-transpose matrix equation has many important applications in control system theory, such as fault detection, control of systems with constraints, eigenstructure assignment, and observer design. In this paper we have proposed polynomial solutions to the Kalman-Yakubovich-transpose matrix equation. The solutions are stated as a polynomial of the parameters of the matrix equation.
The coefficient matrices are not restricted to be in any canonical form. Meanwhile, an equivalent form of the solutions to the Kalman-Yakubovich-transpose matrix equation has been expressed in terms of the controllability matrix associated with A, B, and C and the observability matrix associated with A and B. Such a feature may bring greater convenience and advantages to problems related to the Kalman-Yakubovich-transpose matrix equation. From the discussion in this paper, one can observe that the solutions to the Kalman-Yakubovich-transpose matrix equation serve as a theoretical basis for the study of many other kinds of matrix equations and deserve further investigation in the future. In addition, as a theoretical generalization of existing results, they may be helpful for future control applications.