Solvability Theory and Iteration Method for One Self-Adjoint Polynomial Matrix Equation

The solvability theory of an important self-adjoint polynomial matrix equation is presented, including the boundary of its Hermitian positive definite (HPD) solution and some sufficient conditions under which the (unique or maximal) HPD solution exists. The algebraic perturbation analysis is also given with respect to the perturbation of coefficient matrices. An efficient general iterative algorithm for the maximal or unique HPD solution is designed and tested by numerical experiments.


Introduction
In this paper, we consider the following self-adjoint polynomial matrix equation:

X^m − A* X^n A = Q,  (1)

where m, n are positive integers, A, Q ∈ C^{p×p}, and Q > 0. As far as we know, the solvability of (1) has not been completely settled until now. Equation (1) plays an important role in many fields of applied mathematics, engineering, and the economic sciences. The famous discrete-time algebraic Lyapunov equation (DALE) is exactly (1) with m = n = 1. Undoubtedly, DALE is one of the most important mathematical problems in signal processing, systems and control theory, and many other areas (e.g., see the monographs [1, 2]). If A is stable (with respect to the unit circle), DALE has a unique Hermitian positive definite (HPD) solution. Such a strong relation between the spectral properties of A and the solvability theory is fortunately inherited by (1), which can be considered a nonlinear DALE if m ≠ 1 or n ≠ 1.

What about the algebraic Riccati equation (2), whose Hermitian coefficient matrices lie in C^{p×p} and satisfy the usual positive (semi)definiteness assumptions? Defining suitable coefficient matrices Â and Q̂ in terms of the data of (2), we can immediately obtain (1) with m = 2 and n = 1 as an equivalent form of (2). As is well known, solving algebraic Riccati equations is an important task in the linear-quadratic regulator problem, Kalman filtering, H∞-control, model reduction problems, and so forth; see [1, 3-5] and the references therein. Many numerical methods have been proposed, such as invariant subspace methods [6], the Schur method [7], the doubling algorithm [8], and the structure-preserving doubling algorithm [9, 10]. At the same time, the perturbation theory was developed in [11-15], as well as unified methods for the discrete-time and continuous-time algebraic Riccati equations [16, 17]. The general iteration method for (1) given in this paper can therefore be seen as a new algorithm for the algebraic Riccati equation (2) by setting m = 2 and n = 1. Apart from the above applications, (1) is appealing from the mathematical viewpoint since it unifies a large class of systems of polynomial matrix equations. Many nonlinear matrix equations
are special cases of (1). For example, the nonlinear matrix equations X − A* X^δ A = Q (see, e.g., [18, 19]) are equivalence models of Y^m − A* Y^n A = Q with Y = X^{1/m}, where m, n are positive integers and δ = n/m. In a rather general form, Ran and Reurings [18] investigated X + A* F(X) A = Q (Q > 0) for its positive semidefinite solutions under the assumption that the map F(·) is monotone and Q − A* F(X) A is positive definite. Besides, Lee and Lim [20] proved that (1) has a unique HPD solution when m ≥ 1 ≥ n and |n/m| < 1. See [21-25] for more recent results on nonlinear matrix equations. To the best of our knowledge, (1) with m < n (without monotonicity at hand) has not been discussed. These facts motivate us to study the polynomial matrix equation (1).
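The equivalence between the fractional-exponent equation and (1) is easy to check numerically. The following minimal sketch is our own illustration, not the paper's Algorithm 1: it assumes δ = 1/2 (so m = 2, n = 1) and a small random A so that the natural fixed-point iteration contracts, and then verifies that Y = X^{1/2} solves Y^2 − A* Y A = Q.

```python
import numpy as np

def hpd_power(M, t):
    """Real power of a Hermitian positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.conj().T

rng = np.random.default_rng(0)
k = 5
A = 0.1 * rng.standard_normal((k, k))  # small A keeps the iteration contractive
Q = np.eye(k)

# Solve X - A* X^{1/2} A = Q by the natural fixed-point iteration.
X = Q.copy()
for _ in range(100):
    X = Q + A.T @ hpd_power(X, 0.5) @ A

# Y = X^{1/2} should then solve Y^2 - A* Y A = Q, i.e., (1) with m = 2, n = 1.
Y = hpd_power(X, 0.5)
residue = np.linalg.norm(Y @ Y - A.T @ Y @ A - Q)
print("residue:", residue)
```

The residue is at the level of machine precision once the iteration for X has converged.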
This paper is organized as follows. In Section 2 we deduce the existence and uniqueness conditions for HPD solutions of (1); in Section 3 we derive the algebraic perturbation theory for the unique or maximal solution of (1); finally, in Section 4, we provide an iterative algorithm and two numerical experiments.
We begin with some notations used throughout this paper. F^{p×q} stands for the set of p × q matrices with entries in the field F (F is R or C). If M is a Hermitian matrix, λ_min(M) and λ_max(M) stand for its minimal and maximal eigenvalues, respectively. Denote the singular values of a matrix M ∈ F^{p×q} by σ_1(M) ≥ ⋯ ≥ σ_r(M) ≥ 0, where r = min{p, q}. Suppose that A and B are Hermitian matrices; we write A ≥ B (A > B) if A − B is positive semidefinite (definite), and denote the matrix set {X | X − A ≥ 0 and B − X ≥ 0} by [A, B].

Solvability of Self-Adjoint Polynomial Matrix Equation
In this section, we study the solvability theory of (1) assuming that A is nonsingular, that is, λ_min(A*A) > 0. To this end, we need two simple but useful functions defined on the positive real axis. The following two famous inequalities will be used frequently in the remainder of this paper.

If (1) has an HPD solution, its eigenvalues may skip between disjoint intervals.
Next, we pay closer attention to the HPD solution whose eigenvalues are distributed on only one interval. We now prove the uniqueness of X under an additional lower-bound condition on λ_min(A*A). Suppose that X̃ is another HPD solution of (1) lying in the same interval and that X̃ ≠ X. Combining the established estimates then leads to an inequality that cannot hold; hence X̃ = X. By Lemmas 1 and 2 and Brouwer's fixed-point theorem, it is sufficient to prove h_2(α_2) ≥ α_2 and h_2(β_2) ≤ β_2 in order for an HPD solution X ∈ [α_2, β_2] to exist, and the existence of such X follows from these inequalities. The uniqueness of X under a further additional condition is proved by the same contradiction argument; hence X̃ = X.
Definition 5. An HPD solution X_M ∈ C^{p×p} of (1) is the maximal solution if X_M ≥ X for any HPD solution X ∈ C^{p×p} of (1).
Note that a similar iteration formula has appeared in papers such as [20, 21] for other nonlinear matrix equations.
Here we first prove that the iteration (17) preserves the maximality of X_M over all HPD solutions of (1).
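In our notation, an iteration of the form (17) is the fixed-point scheme X_{k+1} = (Q + A* X_k^n A)^{1/m}. The sketch below is our own illustration (the concrete m, n, A, Q, the starting matrix 2I, and the stopping rule are assumptions, not the paper's data):

```python
import numpy as np

def hpd_power(M, t):
    """Real power of a Hermitian positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.conj().T

def iterate(A, Q, m, n, X0, tol=1e-13, maxit=500):
    """Fixed-point iteration X_{k+1} = (Q + A* X_k^n A)^{1/m}."""
    X = X0.copy()
    for _ in range(maxit):
        X_new = hpd_power(Q + A.conj().T @ hpd_power(X, n) @ A, 1.0 / m)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

rng = np.random.default_rng(1)
k = 6
A = 0.15 * rng.standard_normal((k, k))
Q = np.eye(k)
m, n = 3, 1                                # m > n: the unique-solution case
X = iterate(A, Q, m, n, 2.0 * np.eye(k))   # start from a matrix above the solution

residue = np.linalg.norm(hpd_power(X, m) - A.conj().T @ hpd_power(X, n) @ A - Q)
print("residue:", residue)
```

Starting the same routine from a lower bound instead of 2I yields a monotonically increasing sequence; that choice is compared numerically in Section 4.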

Unique Solution
Starting from a suitable initial matrix, the sequence generated by (22) will converge to X_*.
From the above proof, we can see that the iteration (22) preserves the minimality (X_0 = α_1) or maximality (X_0 = β_2) of X_* throughout the process.
If n = m, (1) can be reduced to the linear matrix equation Y − A* Y A = Q, which is the discrete-time algebraic Lyapunov equation (DALE), or Hermitian Stein equation [1, Page 5], under the substitution Y = X^m. It is well known that if A is d-stable (see [1]), Y − A* Y A = Q has a unique solution, and the matrix sequence {Y_0, Y_1, Y_2, ...} generated by Y_{k+1} = Q + A* Y_k A with any initial value Y_0 converges to the unique solution. Besides, it is not difficult to obtain an expression for the unique solution, X_* = (∑_{k=0}^∞ (A*)^k Q A^k)^{1/m}, by applying [32, Theorem 1, Section 13.2], [1, Theorem 1.1.18], and the results in [28, Section 6.4]. We have now presented the solvability theory of the self-adjoint polynomial matrix equation (1) in three cases. A general iterative algorithm for its maximal solution (m < n) or unique solution (m ≥ n) will be given in Section 4. Before that, we study the algebraic perturbation of the maximal or unique solution of (1).
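For the case n = m this can be checked directly. The sketch below is our own illustration (m = 2 and the scaling of A to spectral norm 0.5, which makes A d-stable, are assumptions); it compares the fixed-point iterate, the truncated series, and the recovered X_*:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4
A = rng.standard_normal((p, p))
A = 0.5 * A / np.linalg.norm(A, 2)   # spectral norm 0.5, so A is d-stable
Q = np.eye(p)
m = 2                                # the case n = m; substitute Y = X^m

# Fixed-point iteration Y_{k+1} = Q + A* Y_k A for the DALE Y - A* Y A = Q.
Y = Q.copy()
for _ in range(200):
    Y = Q + A.T @ Y @ A

# Truncated series sum_{k>=0} (A*)^k Q A^k agrees with the fixed point.
S, Ak = np.zeros((p, p)), np.eye(p)
for _ in range(200):
    S += Ak.T @ Q @ Ak
    Ak = Ak @ A
series_gap = np.linalg.norm(S - Y)

# X_* = Y^{1/m} is then the unique HPD solution of X^m - A* X^m A = Q.
w, V = np.linalg.eigh(Y)
X = (V * w**(1.0 / m)) @ V.T
Xm = np.linalg.matrix_power(X, m)
residue = np.linalg.norm(Xm - A.T @ Xm @ A - Q)
print("series gap:", series_gap, " residue:", residue)
```

Both quantities are at the level of machine precision, since the iterate after k steps equals the series truncated at k terms.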

Algebraic Perturbation Analysis
In this section, we present the algebraic perturbation analysis of the HPD solution of (1) with respect to perturbations of its coefficient matrices. Similar to [30], we define the perturbed matrix equation of (1) as (26), where Â = A + ΔA ∈ C^{p×p} and Q̂ = Q + ΔQ ∈ C^{p×p}. We always suppose that (1) has a maximal (or unique) solution, denoted by X_* ∈ [α_2, β_2], and that (26) has a maximal (or unique) solution, denoted by X̂_* ∈ [α̂_2, β̂_2]. We now present the perturbation bound for X_* when m ≠ n, in terms of an auxiliary function g.

Theorem 8. Let ε > 0 be an arbitrary real number, and suppose g(α̂_2, β̂_2) ≥ 0. If ΔA and ΔQ satisfy the stated bound, then ‖X̂_* − X_*‖ < ε.
If m = n, then for an arbitrary ε > 0 one can define the analogous quantities and obtain a corresponding perturbation bound (Theorem 9).
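The qualitative content of these bounds can be illustrated numerically: perturbations ΔA, ΔQ of size 10^{-6} change the computed solution by a comparably small amount. A minimal sketch, as our own illustration (the sizes p, m, n and the perturbation level are assumptions):

```python
import numpy as np

def hpd_power(M, t):
    """Real power of a Hermitian positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.conj().T

def solve(A, Q, m, n, tol=1e-13, maxit=500):
    """Fixed-point iteration X_{k+1} = (Q + A* X_k^n A)^{1/m}."""
    X = Q.copy()
    for _ in range(maxit):
        X_new = hpd_power(Q + A.conj().T @ hpd_power(X, n) @ A, 1.0 / m)
        if np.linalg.norm(X_new - X) < tol:
            break
        X = X_new
    return X_new

rng = np.random.default_rng(3)
p, m, n = 4, 2, 1
A = 0.1 * rng.standard_normal((p, p))
Q = np.eye(p)
dA = 1e-6 * rng.standard_normal((p, p))
dQ = 1e-6 * rng.standard_normal((p, p))
dQ = (dQ + dQ.T) / 2                     # keep the perturbed Q Hermitian

X  = solve(A, Q, m, n)
Xh = solve(A + dA, Q + dQ, m, n)
change = np.linalg.norm(Xh - X)
print("solution change:", change)
```

The observed change stays on the order of the data perturbation, consistent with a first-order perturbation bound.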

Algorithm and Numerical Experiments
In this section we give a general iterative algorithm for the maximal or unique solutions of (1), together with two numerical experiments. All reported results were obtained using MATLAB R2012b on a personal computer with a 2.4 GHz Intel Core i7 and 8 GB of 1600 MHz DDR3 memory.
Example 10. Let matrices A = rand(100) × 10^{-2} and Q = eye(100). With tol = 10^{-12} and at most 200 iterations, we apply Algorithm 1 to compute the maximal or unique HPD solutions of (1) with m ≠ n and compare the results with those obtained by the iteration method from [33] (denoted by MONO in Table 1).
Table 1 shows the iteration counts, the CPU times before convergence, and the residues of the computed HPD solution X. Step 5 of Algorithm 1 sets X_0 = β_2 and runs iteration (17).
From Table 1, we can see that computing the maximal solution of (1) with m < n takes more iterations and more CPU time than computing the unique solution of (1) with m > n; at the same time, the accuracy of the latter is better than that of the former. MONO cannot be used to solve (1) with m < n, and it costs more iterations and CPU time than Algorithm 1 when solving (1) with m > n.
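The setup of Example 10 is easy to reproduce in outline. The sketch below is our own illustration, not Algorithm 1 itself (the exponents m = 3, n = 1 and the relative-residue formula are assumptions, and numpy's uniform generator mimics MATLAB's rand):

```python
import numpy as np

def hpd_power(M, t):
    """Real power of a Hermitian positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.T

rng = np.random.default_rng(4)
p = 100
A = 1e-2 * rng.random((p, p))    # mimics A = rand(100) * 1e-2
Q = np.eye(p)
m, n, tol, maxit = 3, 1, 1e-12, 200

X = Q.copy()
for k in range(maxit):
    X_new = hpd_power(Q + A.T @ hpd_power(X, n) @ A, 1.0 / m)
    if np.linalg.norm(X_new - X, "fro") < tol:
        X = X_new
        break
    X = X_new

res = (np.linalg.norm(hpd_power(X, m) - A.T @ hpd_power(X, n) @ A - Q, "fro")
       / np.linalg.norm(Q, "fro"))
print("iterations:", k + 1, " relative residue:", res)
```

With this data the iteration converges in well under the 200-iteration cap, and the relative residue is far below the stopping tolerance scale.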
Now we use Example 4.1 of [33] to test our method.
Table 2 shows that for Algorithm 1 the choice X_0 = α_1 is better than X_0 = β_2. When m and n grow, MONO may lose its efficiency. It seems improper to apply an iteration method designed for X − A* X^δ A = Q with δ = n/m to solve X^m − A* X^n A = Q, although the two equations are equivalent in theory.

Conclusion
In this paper, we considered the solvability of the self-adjoint polynomial matrix equation (1). Sufficient conditions were given to guarantee the existence of the maximal or unique HPD solutions of (1). The algebraic perturbation analysis, including perturbation bounds, was also developed for (1) under perturbations of the given coefficient matrices. Finally, a general iterative algorithm that preserves maximality throughout the process was presented for the maximal or unique solution, and two numerical experiments were reported.

Table 1: Iterations, CPU time (seconds), and residue for solving (1).

Theorems 8 and 9 ensure that the perturbation of X_* can be controlled provided that ΔA and ΔQ admit a proper upper bound.