A Preconditioned Iteration Method for Solving Sylvester Equations

A preconditioned gradient-based iterative method is derived by a judicious selection of two auxiliary matrices. The strategy is based on Newton's iteration and can be regarded as a generalization of the splitting iterative method for systems of linear equations. We analyze the convergence of the method and illustrate that the approach is able to considerably accelerate the convergence of the gradient-based iterative method.


Introduction
In this paper, we consider preconditioned iterative methods for solving Sylvester equations of the form

AX + XB = C, (1.1)

where A ∈ R^{m×m}, B ∈ R^{n×n}, and X = [x_1, …, x_n] ∈ R^{m×n}, with m ≥ n generally. A Lyapunov equation is a special case of (1.1) with m = n, B = A^T, and C = C^T. Such problems arise frequently in many areas of application, such as control and system theory [1], stability of linear systems [2], analysis of bilinear systems [3], power systems [4], signal and image processing [5], and so forth.
Throughout the paper, we assume that Sylvester equation (1.1) possesses a unique solution, that is, λ(A) ∩ (−λ(B)) = ∅, where λ(A) and λ(B) denote the spectra of A and B, respectively [6]. In theory, the exact solution of (1.1) can be computed by "linearization," that is, by solving an equivalent system of linear equations of the form

𝒜 vec(X) = vec(C),

where 𝒜 = I_n ⊗ A + B^T ⊗ I_m ∈ R^{mn×mn}, vec(X) = [x_1^T, …, x_n^T]^T with X = [x_1, …, x_n] ∈ R^{m×n}, and ⊗ is the Kronecker product. However, this direct method requires considerable computational effort, due to the high dimension of the problem.
For small-to-medium-scale problems of the form (1.1), direct approaches such as the Bartels-Stewart method [7, 8] and the Hessenberg-Schur method [6, 9] have been the methods of choice. The main idea of these approaches is to transform the original linear system into a structured system that can be solved efficiently by forward or backward substitution.
In the numerical linear algebra community, iterative methods are becoming more and more popular. Several iterative schemes for Sylvester equations have been proposed; see, for example, [10-15]. Recently, some gradient-based iterative methods [3-5, 16-26] have been investigated for solving general coupled matrix equations and general matrix equations. For Sylvester equations of the form (1.1), the gradient-based iterative methods use a hierarchical identification principle to compute the approximate solution. The convergence of these methods is investigated in [16-18], where it is proved that they converge under certain conditions. However, we observe that the convergence of the gradient-based iterative methods is generally very slow, similar to the behavior of classical iterative methods applied to systems of linear equations. In this paper, we consider preconditioning schemes for solving Sylvester equations of the form (1.1). We illustrate that preconditioned gradient-based iterative methods can be derived by selecting two auxiliary matrices. The selection of preconditioners is natural from the viewpoint of splitting iteration methods for systems of linear equations. The convergence of the preconditioned method is proved, and the optimal relaxation parameter is derived. The performance of the method is compared with the original method on several examples. Numerical results show that preconditioning is able to considerably speed up the convergence of the gradient-based iterative method.
The paper is organized as follows. In Section 2, the gradient-based iterative method is recalled. In Section 3, the preconditioned gradient-based method is introduced and analyzed. In Section 4, the performance of the preconditioned gradient-based method is compared with the unpreconditioned one, and the influence of the iterative parameter is studied experimentally. Finally, we conclude the paper in Section 5.

A Brief Review of the Gradient-Based Iterative Method
We first recall the iterative method proposed by Ding and Chen [18] for solving (1.1). The basic idea is to regard (1.1) as two fictitious linear matrix equations:

AX = C − XB, XB = C − AX. (2.1)

Then two recursive sequences are defined:

X_1(k) = X_1(k−1) + κ A^T [C − A X_1(k−1) − X_1(k−1) B],
X_2(k) = X_2(k−1) + κ [C − A X_2(k−1) − X_2(k−1) B] B^T, (2.2)

where κ is the iterative step size. The above procedures can be regarded as two separate iterations for the two matrix equations in (2.1). With X_1(k) and X_2(k) at hand, the kth approximate solution X(k) is defined by taking the average of the two approximate solutions, that is,

X(k) = (X_1(k) + X_2(k)) / 2. (2.3)

By selecting an appropriate initial approximate solution X(0), and using X(k−1) in place of X_1(k−1) and X_2(k−1) in (2.2), the formulas (2.2)-(2.3) constitute the gradient-based iterative method proposed in [18]. It is shown in [18] that the gradient-based iterative algorithm converges as long as

0 < κ < 2 / (λ_max(AA^T) + λ_max(BB^T)),

where λ_max(AA^T) is the largest eigenvalue of AA^T.
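For concreteness, the recursion (2.2)-(2.3) can be sketched in Python/NumPy as follows. This is our illustrative code, not the implementation of [18]; the function name, zero initial guess, and residual-based stopping rule are our own choices.

```python
import numpy as np

def gbi(A, B, C, kappa, tol=1e-6, max_iter=5000):
    """Gradient-based iteration (GBI) for AX + XB = C, after Ding and Chen.

    The two fictitious equations AX = C - XB and XB = C - AX are iterated
    separately with step size kappa, and the two iterates are averaged.
    """
    m, n = A.shape[0], B.shape[0]
    X = np.zeros((m, n))                 # initial guess X(0)
    normC = np.linalg.norm(C)
    for k in range(1, max_iter + 1):
        R = C - A @ X - X @ B            # common residual at X(k-1)
        X1 = X + kappa * (A.T @ R)       # step for AX = C - XB
        X2 = X + kappa * (R @ B.T)       # step for XB = C - AX
        X = 0.5 * (X1 + X2)              # X(k): average of both iterates
        if np.linalg.norm(C - A @ X - X @ B) < tol * normC:
            break
    return X, k
```

The step size must satisfy the convergence condition recalled above, 0 < κ < 2/(λ_max(AA^T) + λ_max(BB^T)); otherwise the iteration may diverge.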

Preconditioned Gradient-Based Iterative Method
We start with a general algebraic equation [27]

f(x) = 0. (3.1)

Suppose x_n is an approximate solution of (3.1), that is, f(x_n) ≈ 0 (f(x_n) ≠ 0). Then the accuracy of x_n can be improved by the following scheme:

x_{n+1} = x_n + λ f(x_n), (3.2)

with λ being a Lagrangian multiplier. It is well known that the optimal value of λ is determined by

λ = −1 / f'(x_n). (3.3)

From the above condition, Newton's iteration follows:

x_{n+1} = x_n − f(x_n) / f'(x_n). (3.4)

Now, let us consider a general system of linear equations of the form

Kx = b, (3.5)

where K ∈ R^{n×n} is a general square matrix. Let f(x) = b − Kx, and let x_0 be an initial approximate solution. Then, by one step of Newton's iteration (3.4), we can theoretically find the exact solution

x = x_0 + K^{−1}(b − K x_0). (3.6)

However, this procedure has the same difficulty as solving (3.5) by direct inversion. Utilizing the idea of preconditioning, the iteration formula (3.6) can be rebuilt by replacing K^{−1} with an approximate matrix M^{−1}. Taking x_0 as a starting vector and r_0 = b − K x_0, it follows that

x_{k+1} = x_k + M^{−1} r_k, r_k = b − K x_k, (3.7)

which is the preconditioned iterative method. Formula (3.7) can also be derived from the splitting iteration [28] with

K = M − N. (3.8)

More generally, a relaxation parameter can be introduced as follows:

x_{k+1} = x_k + κ M^{−1} r_k, k = 0, 1, 2, …. (3.9)
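Formula (3.9) is a relaxed, preconditioned Richardson-type iteration. The following is a minimal sketch, assuming (purely for illustration, since (3.9) leaves M open) the Jacobi-type choice M = diag(K):

```python
import numpy as np

def preconditioned_richardson(K, b, kappa=1.0, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = x_k + kappa * M^{-1} (b - K x_k) with M = diag(K)."""
    Minv = 1.0 / np.diag(K)              # inverse of the diagonal preconditioner
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - K @ x                    # residual r_k = b - K x_k
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        x = x + kappa * (Minv * r)       # preconditioned correction step
    return x

# A strictly diagonally dominant test system, for which this choice converges:
K = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = preconditioned_richardson(K, b)
```

With kappa = 1 and M = diag(K), this reduces to the classical Jacobi iteration; a better M (closer to K but still easy to invert) gives faster convergence.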
Recall that Sylvester equation (1.1) can be rewritten as the following two fictitious matrix equations:

AX = C − XB, XB = C − AX. (3.10)

By (3.9), we define the preconditioned iterations

X_1(k) = X_1(k−1) + κ M_1^{−1} [C − A X_1(k−1) − X_1(k−1) B], (3.11)
X_2(k) = X_2(k−1) + κ [C − A X_2(k−1) − X_2(k−1) B] M_2^{−1}, (3.12)

where M_1 and M_2 are well-chosen preconditioners for A and B, respectively. The kth approximate solution is defined by taking the average of the two approximate solutions [16, 18], that is,

X(k) = (X_1(k) + X_2(k)) / 2. (3.13)

By selecting two initial approximate solutions, the above procedures (3.11)-(3.13) constitute the framework of a Newton-type iteration method.
The above process can be accomplished by the following algorithm.
Algorithm 3.1. The preconditioned gradient-based iterative algorithm (PGBI) can be given as follows.

(1) Give two initial approximate solutions X_1(0) and X_2(0).
(2) For k = 1, 2, …, compute X_1(k) and X_2(k) by (3.11) and (3.12), with X_1(k−1) and X_2(k−1) replaced by X(k−1).
(3) Form X(k) by the averaging formula (3.13); stop when the residual C − A X(k) − X(k) B is small enough.
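One possible realization of Algorithm 3.1 can be sketched as follows. This is our illustrative code, not the authors' implementation; the preconditioner applications are passed in as solve callbacks (for ILU(0) one could, e.g., wrap scipy.sparse.linalg.spilu(A).solve), and the single starting matrix plays the role of both initial solutions.

```python
import numpy as np

def pgbi(A, B, C, M1_solve, M2_solve, kappa, tol=1e-6, max_iter=5000):
    """Preconditioned gradient-based iteration (PGBI) for AX + XB = C.

    M1_solve(R) applies M1^{-1} to R from the left (cf. (3.11));
    M2_solve(R) applies M2^{-1} to R from the right (cf. (3.12)).
    """
    m, n = A.shape[0], B.shape[0]
    X = np.zeros((m, n))                 # plays the role of X1(0) and X2(0)
    normC = np.linalg.norm(C)
    for k in range(1, max_iter + 1):
        R = C - A @ X - X @ B            # common residual at X(k-1)
        X1 = X + kappa * M1_solve(R)     # (3.11): left-preconditioned step
        X2 = X + kappa * M2_solve(R)     # (3.12): right-preconditioned step
        X = 0.5 * (X1 + X2)              # (3.13): average of the two iterates
        if np.linalg.norm(C - A @ X - X @ B) < tol * normC:
            break
    return X, k
```

With the (impractical but instructive) exact choices M_1 = A and M_2 = B, the iteration converges very quickly; practical preconditioners such as ILU(0) trade some of this speed for cheap application.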
Remark 3.2. (1) The above algorithm follows the framework of the gradient-based iterative method proposed in [16-18]. However, in [16, 18] the matrix M_1^{−1} is chosen as B^T, and in [17] another fixed matrix is used. Such selections are clearly not ideal. In this paper, we have illustrated that M_1 and M_2 should be reasonable preconditioners for A and B, respectively.
(2) By replacing the residual C − A X(k−1) − X(k−1) B with C − A X(k−1) B − X(k−1), the above algorithm can be used to solve the generalized Sylvester equation of the form

AXB + X = C. (3.14)

We will present an example of form (3.14) in the last section of the paper.

Lemma 3.3 (see [29]). Let A ∈ R^{n×n}. Then A is convergent (i.e., A^k → 0 as k → ∞) if and only if ρ(A) < 1.
Based on Theorem 3.4 in [18], we have the following theorem.
Theorem 3.4. Suppose Sylvester equation (1.1) has a unique solution X, and the relaxation parameter κ satisfies condition (3.15). Then the iterative solution X(k) given by Algorithm 3.1 converges to X. (3.16)

Proof. Letting E(k) = X − X(k) and substituting C = AX + XB into the iteration formulas (3.11)-(3.13), we obtain (3.17). Subtracting both sides of the above equation from X, we have (3.18). Let vec(X) be the vector formed by stacking the columns of X on top of one another. Rewriting (3.18) in vec form and applying Lemma 3.3 to the resulting iteration matrix shows that E(k) → 0. The proof is complete.
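The vec operation used in the proof interacts with matrix products through the well-known identity vec(AXB) = (B^T ⊗ A) vec(X), which also underlies the linearization of (1.1); it can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
X = rng.standard_normal((3, 4))

vec = lambda M: M.flatten(order="F")   # stack columns on top of one another

# vec(AXB) = (B^T kron A) vec(X); taking B = I and A = I in turn recovers
# the linearization (I kron A + B^T kron I) vec(X) = vec(C) of (1.1).
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```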
Remark 3.5. The choice of the parameter κ is an important issue. We will study its influence on the convergence experimentally. However, the parameter is problem-dependent, so finding a parameter that is suitable for a broad range of problems is a difficult task.

Numerical Examples
In the following tests, the parameter κ for the gradient-based iterative method (GBI) is fixed in advance, and κ for the preconditioned gradient-based iterative method (PGBI) is chosen experimentally. In all tests, the stopping criterion on the relative residual norm is 1e−6. The exact solution is set as X = rand(m, n) + speye(m, n) * 2, so that the right-hand side is C = AX + XB, and the initial approximate solution is X(0) = ones(m, n) * 1e−6.
The relative error norms are recorded and plotted with the MATLAB command semilogy [30]. We always choose ILU(0) [31] as the preconditioner; however, other suitable preconditioners can also be adopted.

Example 4.1. The coefficient matrices used in this example are taken from [18]. We outline the settings for completeness: m = 50, n = 30, rand('state', 0), with the matrices given by (4.2).

The matrices contain a parameter α, which can be used to change the weight of the diagonal entries; we consider the case α = 3. For κ = 0.1, the behavior of the error norms is recorded in Figure 1(a). From this figure, we can see that the convergence of the GBI method tends to slow down after some iteration steps, whereas the PGBI method converges linearly and much faster. In Figure 1(b), the convergence curves for different values of κ are recorded. For this problem, the optimal value of κ is very close to 0.5. Comparing with the GBI method, PGBI is able to converge within 300 iteration steps, whereas GBI needs more than 1000. Therefore, even without the optimal κ, the PGBI method still converges much faster than the GBI method.

For the two-dimensional Poisson problem of Example 4.2 below, discretizing by the standard second-order finite-difference scheme on a uniform grid with step size h = 1/(m + 1) in each direction, we obtain a sparse, symmetric positive definite coefficient matrix. Choosing m = 30, we compare the performance of the GBI method and the preconditioned GBI method. The convergence curves are recorded in Figure 2(a). In Figure 2(b), the influence of the parameter κ is investigated; the optimal κ is close to 0.4. Comparing with the GBI method, it is easy to see that for a wide range of κ the preconditioned GBI method performs much better.
Example 4.3. In this example, we consider a convection-diffusion equation with Dirichlet boundary conditions [32]. Discretizing the equation by central finite differences on (0, 1)^2, with mesh size h = 1/(m + 1) in the x-direction and p = 1/(n + 1) in the y-direction, produces two tridiagonal matrices A and B given by (4.8).
Taking ν = 3, m = 60, and n = 40, we test the performance of the GBI method and the preconditioned GBI method. The convergence curves are recorded in Figure 3(a). From this figure, we can see that the GBI method converges very slowly and nearly stagnates, while the preconditioned GBI method performs much better. We also investigate the influence of the parameter κ on this problem. The convergence behavior for different κ is recorded in Figure 3(b). From this figure, we can see that the parameter becomes very sensitive when it is larger than the optimal value; therefore, a relatively small parameter is more reliable.

Conclusions
In this paper, a preconditioned gradient-based iterative (PGBI) method is proposed, and its convergence is analyzed. The choice of the parameter κ is an important issue, and its influence is studied experimentally. The principal idea of this paper can be extended to more general settings, such as coupled Sylvester matrix equations.

Figure 1: Comparison of convergence curves using GBI and PGBI (a); the influence of the parameter κ on the PGBI method (b).

Example 4.2. In this example, we test a problem with B = A, where the coefficient matrix A is generated from the discretization of the following two-dimensional Poisson problem:

−Δu = f, (4.3)

posed on the unit square Ω = {0 ≤ x, y ≤ 1} with Dirichlet boundary conditions.
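For reference, the one-dimensional building block of this discretization, tridiag(−1, 2, −1)/h², can be assembled as follows (our sketch; with B = A this yields the Sylvester form of the discrete Poisson problem):

```python
import numpy as np

def poisson1d(m):
    """1D Laplacian tridiag(-1, 2, -1)/h^2 on a uniform grid, h = 1/(m+1)."""
    h = 1.0 / (m + 1)
    T = (2.0 * np.eye(m)
         - np.eye(m, k=1)       # superdiagonal of -1
         - np.eye(m, k=-1))     # subdiagonal of -1
    return T / h**2

# With A = poisson1d(m) and B = A, the equation AX + XB = C is the matrix
# (Sylvester) form of the five-point discrete 2D Poisson problem.
A = poisson1d(30)
```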

Figure 2: Comparison of convergence curves using GBI and PGBI (a); the influence of the parameter κ on the PGBI method (b).

Example 4.4. In this example, we test the algorithm for solving the generalized Sylvester equation (3.14). All the initializations are the same, except that the right-hand side is set as C = AXB + X. The coefficient matrix A has the following structure:

A = tridiag(1 − d, 4, 1 + d), (4.9)

and B = A. We set d = 3 in this example. The results are shown in Figure 4; the faster convergence of PGBI is observable in Figure 4(a).
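The matrix (4.9) and the corresponding right-hand side for (3.14) can be generated as follows (our sketch, with a small size n chosen only for illustration):

```python
import numpy as np

def tridiag(low, main, up, n):
    """n-by-n tridiagonal matrix with constant diagonals low, main, up."""
    return (main * np.eye(n)
            + low * np.eye(n, k=-1)
            + up * np.eye(n, k=1))

d = 3
n = 5                                   # small size just for illustration
A = tridiag(1 - d, 4, 1 + d, n)         # subdiagonal -2, diagonal 4, superdiagonal 4
B = A.copy()

# Right-hand side of the generalized Sylvester equation AXB + X = C,
# manufactured from a known exact solution:
X_exact = np.ones((n, n))
C = A @ X_exact @ B + X_exact
```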

Figure 3: Comparison of convergence curves using GBI and PGBI (a); the influence of the parameter κ on the PGBI method (b).

Figure 4: Comparison of convergence curves using GBI and PGBI (a); the influence of the parameter κ on the PGBI method (b).