Preconditioning complex symmetric linear systems

A new polynomial preconditioner for complex symmetric linear systems, based on the Hermitian and skew-Hermitian splitting (HSS), is herein presented. It applies to the Conjugate Orthogonal Conjugate Gradient (COCG) and Conjugate Orthogonal Conjugate Residual (COCR) iterative solvers and does not require any estimate of the spectrum of the coefficient matrix. An upper bound on the condition number of the preconditioned linear system is provided. Moreover, to reduce the computational cost, an inexact variant based on incomplete Cholesky decomposition or orthogonal polynomials is proposed. Numerical results show that the present preconditioner and its inexact variant are efficient and robust for this class of linear systems. A stability analysis of the method completes the description of the preconditioner.


Introduction
The focus of this paper is the solution of the complex linear system Ax = b, where the symmetric complex matrix A can be written as A = B + iC, with B, C two real symmetric positive semidefinite matrices (semi-SPD) and B + C a symmetric positive definite (SPD) matrix. Such linear systems arise, for example, in the discretization of problems in computational electrodynamics [52] or time-dependent Schrödinger equations, or in conductivity problems [16,36].
If A is Hermitian, a straightforward extension of the Conjugate Gradient (CG) algorithm can be used [51]. Unfortunately, the CG method cannot be directly employed when A is only complex symmetric, so specialized iterative methods must be adopted. An effective one is HSS with its variants (MHSS), which need, at each iteration, the solution of two real linear systems. Other standard procedures for this problem are Krylov-subspace iterative methods designed for complex symmetric linear systems: COCG [52], COCR [46], CSYM [15], CMRH [42]. Some iterative methods for non-SPD linear systems, like BiCGSTAB [50], BiCGSTAB(ℓ) [44,43], GMRES [41] and QMR [25], can be adapted to complex symmetric matrices [1,30,51].
The purpose of this paper is to develop a polynomial preconditioner that speeds up the MHSS process and to propose preconditioned versions of COCG and COCR.
Methods based on Hermitian and Skew-Hermitian Splitting (HSS) [8,9,10,11] can be used as standalone solvers or combined (as preconditioners) with CG-like algorithms. The speed of convergence of CG-like iterative schemes depends on the condition number of the matrix A; thus, preconditioning is a standard way to improve convergence [5,12]. Incomplete LU is a standard and accepted way to precondition linear systems. Despite its popularity, incomplete LU is potentially unstable, difficult to parallelize and lacks algorithmic scalability. Nevertheless, when incomplete LU is feasible and the preconditioned linear system is well conditioned, the resulting algorithm is generally the best performing.
In this work we focus on large problems, where incomplete LU preconditioning is too costly or not feasible. In this case, iterative methods like SSOR are used as preconditioners, but better performance is obtained using HSS iterative methods, which reduce the condition number effectively. However, HSS iterative methods need the solution of two SPD real systems at each step. A preconditioned CG method can be used at each iteration [11], in turn preconditioned with an incomplete Cholesky factorization, although for very large problems the incomplete Cholesky factorization may not be convenient or feasible. As an alternative, the present paper proposes a polynomial preconditioner that makes the solution of the linear system affordable for large matrices. Polynomial preconditioners do not have a good reputation [5,12], and research on this subject was largely abandoned in the late 1980s. In fact, polynomial preconditioners based on Chebyshev polynomials need accurate estimates of the minimum and maximum eigenvalues, while least squares polynomials were not computed using stable recurrences, limiting the degree of usable polynomials [3,4,31,38,22]. However, in recent years, polynomial preconditioners have regained attention after the work of Lu, Lai and Xu [34]. With these caveats in mind, here we propose as a preconditioner a polynomial approximation of a modified HSS step. A specialization for Chebyshev and Jacobi orthogonal polynomials is discussed, and the corresponding polynomial preconditioner is evaluated using a stable recurrence which permits the use of very high degree polynomials.
The paper has the following structure. Section 1.1 describes the problem and briefly summarizes existing solution methods, with the fundamental results and variants that lead to the present method; in particular, the HSS is considered. Section 2 shows how to use one step of the MHSS method as a preconditioner and gives a bound on the condition number of the MHSS iteration matrix. Section 3 explains the iterative solution method, with the strategy to adopt when Cholesky factorization is possible and when it is not. Section 4 presents a scaling transformation of the system that moves the eigenvalues to the range (0, 1]. Section 5 describes the polynomial preconditioner, first in terms of least squares and then in terms of orthogonal polynomials, and furnishes a stable recurrence for the computation of polynomial preconditioners of high degree. The specialization for Chebyshev and Jacobi orthogonal polynomials is presented. Section 6 studies the numerical stability of this process, Section 7 reports numerical tests, and Section 8 concludes the paper.

The MHSS iterative solver
The complex N × N linear system Ax = b, where A is complex symmetric, is solved via an iterative method based on a splitting (HSS). The resulting preconditioner requires, each time it is applied, the solution of two real symmetric positive definite (SPD) linear systems.
The HSS scheme can be summarized as follows. Decompose the vector x of the unknowns into real and imaginary parts, x = y + iz, and accordingly the right-hand side b = c + id; the system Ax = b is then rewritten as

    (B + iC)(y + iz) = c + id,                                            (1)

so that the two steps of the Modified HSS method proposed in reference [8] read

    (V + B) x^{k+1/2} = (V - iC) x^k + b,
    (W + C) x^{k+1}   = (W + iB) x^{k+1/2} - i b,                         (2)

for suitable SPD matrices V and W. The previous procedure can be rewritten as a single step of a splitting-based scheme P x^{k+1} = Q x^k + b by posing

    P = [(W + iB)(V + B)^{-1} - iI]^{-1} (W + C),   Q = P - A.            (3)

This iterative method converges if the iteration matrix P^{-1}Q has spectral radius strictly less than one. It is well known that the choice V = W = αI, for a given positive constant α, yields the standard HSS method [7,8,9], for which an estimate of the spectral radius is given by

    ρ(P^{-1}Q) ≤ max_j sqrt(α² + λ_j(B)²) / (α + λ_j(B)),

where λ_j(B) are the eigenvalues of the SPD matrix B. The optimal value for α can also be computed [7,8,9] and is

    α* = sqrt(λ_min(B) λ_max(B)).

From the previous formulas, it is clear that these computations rely on the knowledge (or an estimate) of the minimum and maximum eigenvalues of matrix B, which is, in general, a hard problem. Another possible choice for V and W is V = αB and W = βB, which yields, when α = β, a variant of the MHSS method of [7,8,9]. In the next lemma, an upper bound of the spectral radius of the iteration matrix is given.
Lemma 1.1. Let V = αB and W = βB with α, β > 0, B an SPD matrix and C a semi-SPD matrix; then the spectral radius of P^{-1}Q satisfies the upper bound ρ(P^{-1}Q) ≤ U(α, β), and the minimum value of the upper bound U(α, β) is attained when α = β = 1, where U(1, 1) = √2/2 ≈ 0.707.
Proof. Setting V = αB and W = βB in (3) yields the eigenvalue relation (4) for any eigenvalue λ of the matrix P^{-1}Q. Thus µ, defined as a generalized eigenvalue, i.e. a solution of det(µB - C) = 0, appears in (4), and it is well known that µ must be nonnegative. Computing λ from (4), the function λ(µ) is found, which allows one to evaluate an upper bound U(α, β) of the spectral radius of P^{-1}Q. There are optimal values of α and β that minimize U(α, β) for α ≥ 0 and β ≥ 0. To find them, set α = sin θ and β = cos θ with θ ∈ [0, π/2]. If θ is fixed, the minimum of the resulting expression is attained for µ = sin θ/cos²θ. The minimum of U(α, β) is then attained for θ = π/4, which corresponds to α = β = 1; the value U(1, 1) = √2/2 follows by direct computation.
Since α = β = 1 is optimal, from now on it is assumed α = β = 1, and therefore V = W = B. With these values, the two-step method (2) is recast as the one-step method

    P x^{k+1} = Q x^k + b,                                                (5)

where the simplified expressions for P and Q are

    P = (1 + i)(B + C),   Q = C + iB.                                     (6)

Notice that P is well defined and nonsingular provided that B + C is nonsingular. Thus the hypotheses of Lemma 1.1 can be weakened when α = β = 1, resulting in the next corollary.
Corollary 1.2. Let B and C be semi-SPD with B + C nonsingular, and P and Q as defined in (6); then the spectral radius of P^{-1}Q satisfies the upper bound ρ(P^{-1}Q) ≤ √2/2. Remark 1.3. The spectral radius of the iteration matrix is bounded independently of its size; thus, once the tolerance is fixed, the maximum number of iterations is independent of the size of the problem.
Iterative method (5)-(6) can be reorganized into the following algorithm. The MHSS iterative solver of Algorithm 1 needs, at each iteration, the solution of two real linear systems, one for the real and one for the imaginary part, whose common coefficient matrix B + C is SPD. For small matrices this can be done efficiently using a Cholesky decomposition. For large and sparse matrices a preconditioned Conjugate Gradient method is mandatory.
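As a concrete illustration of Algorithm 1, the following dense-matrix sketch in Python/NumPy (the function name, tolerances and stopping rule are ours, not the paper's) runs the one-step iteration with α = β = 1, for which P = (1 + i)(B + C); the Cholesky factor of B + C is computed once and reused for the two real solves of every iteration:

```python
import numpy as np

def mhss_solve(B, C, b, tol=1e-12, max_iter=400):
    """Sketch of Algorithm 1 with alpha = beta = 1 (dense matrices).
    One step solves P x_new = Q x + b with P = (1+i)(B+C), Q = P - A."""
    L = np.linalg.cholesky(B + C)           # factor B + C once, reuse it
    def solve_M(v):                         # two real SPD solves per call
        re = np.linalg.solve(L.T, np.linalg.solve(L, v.real))
        im = np.linalg.solve(L.T, np.linalg.solve(L, v.imag))
        return re + 1j * im
    x = np.zeros_like(b, dtype=complex)
    for k in range(max_iter):
        q = C @ x + 1j * (B @ x) + b        # Q x + b, with Q = C + iB
        x_new = solve_M(q / (1 + 1j))       # apply P^{-1}
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x_new):
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

On random test problems the iteration count is consistent with a contraction factor of at most √2/2, independently of the matrix size.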
Although Algorithm 1 can be used to solve linear system (1), better performances are obtained using one or more steps of Algorithm 1 not to solve the linear system (1), but as a preconditioner for a faster Conjugate Gradient like iterative solver such as COCG or COCR. The convergence rate estimation for these iterative schemes for a linear system M x = (B + C)x = b depends on the condition number is the classic spectral norm of a matrix. The energy norm · M induced by the (real) SPD matrix M is used to obtain the well known estimate where x is the solution of the linear system. In general, Conjugate Gradient like iterative schemes perform efficiently if a good preconditioner makes the system well conditioned.

Use of MHSS as preconditioner
In this section the effect of a fixed number of steps of Algorithm 1, used as a preconditioner for linear system (1), is analyzed in terms of the reduction of the condition number. Performing n steps of Algorithm 1 with x^{(0)} = 0 is equivalent to computing x^{(n)} = P_n^{-1} b, where

    P_n^{-1} = (I + P^{-1}Q + (P^{-1}Q)² + ··· + (P^{-1}Q)^{n-1}) P^{-1}.   (7)

Matrix P_n can be thought of as an approximation of matrix A = B + iC. Thus, it is interesting to obtain an estimate of the condition number of the preconditioned matrix P_n^{-1}A, in order to check the effect of MHSS when used as a preconditioner.
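The identity (7) can be checked numerically on a small dense example: n fixed-point steps started from x^{(0)} = 0 and the truncated Neumann series applied to b produce the same vector (a sketch in our notation, with P and Q as in (6)):

```python
import numpy as np

# Numerical check of (7): n steps of Algorithm 1 from x^(0) = 0
# coincide with P_n^{-1} b, the truncated Neumann series applied to b.
rng = np.random.default_rng(1)
n_dim, n_steps = 6, 4
R1 = rng.standard_normal((n_dim, n_dim))
R2 = rng.standard_normal((n_dim, n_dim))
B, C = R1 @ R1.T, R2 @ R2.T                  # semi-SPD with B + C SPD
b = rng.standard_normal(n_dim) + 1j * rng.standard_normal(n_dim)
P = (1 + 1j) * (B + C)                       # preconditioner matrix (6)
Q = P - (B + 1j * C)                         # Q = P - A
T = np.linalg.solve(P, Q)                    # iteration matrix P^{-1} Q
# left side: n steps of the fixed-point iteration x <- T x + P^{-1} b
x = np.zeros(n_dim, dtype=complex)
for _ in range(n_steps):
    x = T @ x + np.linalg.solve(P, b)
# right side: (I + T + ... + T^{n-1}) P^{-1} b
S = sum(np.linalg.matrix_power(T, k) for k in range(n_steps))
assert np.allclose(x, S @ np.linalg.solve(P, b))
```

The same script also lets one verify Corollary 1.2 empirically, since the eigenvalues of T can be computed directly.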
For the estimation, some classical results about spectral radii and norms are recalled. For any matrix M and any matrix norm, Gelfand's formula connects norm and spectral radius [26,33]:

    ρ(M) = lim_{k→∞} ||M^k||^{1/k}.

Notice that when ρ(M) < 1, for k large enough, ||M^k|| < 1.
Lemma 2.1. Let A = P - Q with ρ(P^{-1}Q) < 1; then for any ε > 0 satisfying ρ(P^{-1}Q) + ε < 1 there is an integer n_ε > 0 such that, for all n ≥ n_ε,

    κ(P_n^{-1}A) ≤ (1 + (ρ(P^{-1}Q) + ε)^n) / (1 - (ρ(P^{-1}Q) + ε)^n),

where κ(M) = ||M|| ||M^{-1}|| is the condition number with respect to the norm ||·|| and P_n is defined by (7).
Corollary 2.2. Let A = P - Q with ρ(P^{-1}Q) < 1 and let ε > 0 satisfy ρ(P^{-1}Q) + ε < 1; then there exists a matrix norm |||·||| such that the condition number of matrix P_n^{-1}A with respect to this norm satisfies, for every n ≥ 1,

    κ(P_n^{-1}A) ≤ (1 + (ρ(P^{-1}Q) + ε)^n) / (1 - (ρ(P^{-1}Q) + ε)^n),

where κ(M) = |||M||| |||M^{-1}|||.
From Corollary 1.2, using the Euclidean norm and choosing ε such that ρ(P^{-1}Q) + ε = √2/2 + ε = 0.8, for n ≥ n_ε the condition number of the preconditioned matrix satisfies

    κ₂(P_n^{-1}A) ≤ (1 + 0.8^n) / (1 - 0.8^n).                            (10)

This estimate shows that, using n steps of MHSS with n large enough, the condition number of the preconditioned system can be bounded independently of the size of the linear system. In practice, when n = 1 the reduction of the condition number is already enough: Corollary 2.2 shows that, using the appropriate norm, the condition number of the preconditioned linear system is less than 9, independently of its size. Remark 2.3. From reference [6], when the condition number κ is large, the estimated number m of conjugate gradient (CG) iterations satisfies m ∝ √κ. The cost of computation is proportional to the number of iterations, whereas the cost of each iteration is proportional to 1 + Cn, where C is the cost of an iteration of MHSS used as preconditioner, relative to the cost of a CG step. Thus, using κ₂ from (10), when n is large enough, a rough estimate of the computational cost is

    cost(n) ∝ sqrt((1 + a^n)/(1 - a^n)) (1 + Cn),   a = 0.8.              (11)

Fixing C, the cost (11), as a function of n alone, is convex and has a minimum for n = n_min(C) which satisfies

    C = -a^n ln(a) / (n a^n ln(a) - a^{2n} + 1),   a = 0.8.               (12)
The inverse function C(n), which satisfies n_min(C(n)) = n, is the r.h.s. of (12); moreover, C(n) is a monotone decreasing function, so that n_min(C) is monotone decreasing as well.
Evaluating (12) gives the critical values of C such that n steps of MHSS are better than n - 1 steps; in particular, C(2) ≈ 0.47. This means that it is convenient to use n > 1 steps in the MHSS preconditioner only if the cost of one step of MHSS is less than 0.47 times, i.e. about half, the cost of one step of the Conjugate Gradient method: a situation that never happens in practice.
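The trade-off of Remark 2.3 can be explored with a few lines of code; the functions below transcribe the cost model (11) and the critical value (12) as reconstructed here (function names are ours):

```python
import math

a = 0.8   # contraction factor used in the estimate (10)

def cost(n, C):
    """Rough cost model (11): sqrt(kappa_2) times the per-iteration cost
    1 + C*n of a CG-like step preconditioned with n MHSS steps."""
    kappa = (1 + a**n) / (1 - a**n)
    return math.sqrt(kappa) * (1 + C * n)

def C_crit(n):
    """Critical relative cost C(n) of one MHSS step, r.h.s. of (12)."""
    return -a**n * math.log(a) / (n * a**n * math.log(a) - a**(2 * n) + 1)
```

Here `C_crit(2)` ≈ 0.47 reproduces the threshold quoted above, `C_crit(n)` is monotone decreasing, and at C = C_crit(2) two MHSS steps indeed beat one.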
According to Remark 2.3, only P_n with n = 1 is considered in the following, i.e. P_1 = P is used as preconditioner for linear system (1).

Iterative solution of complex linear system
From the previous section, it is clear that one iteration step of MHSS is a good choice that lowers the condition number of the original complex linear system (1). The resulting preconditioner matrix is P = (1 + i)(B + C), as defined in (6). Preconditioner P will be used together with semi-iterative methods specialized in the solution of complex problems; examples of such methods are COCG [51,45] and COCR [28,40,46,45]. They are briefly described next.
Other methods are available for this task, for example CSYM [15] or QMR [24]. The application of the preconditioner P in these methods amounts to the solution of two real SPD systems with coefficient matrix B + C, as in Remark 1.4. Of course, one can use a direct method [12,20,27,17], a conjugate gradient method [11] with incomplete Cholesky factorization, or an approximate inverse as preconditioner [12,13,19,21,35,39]. Nevertheless, this philosophy is not convenient for very large linear systems, because, for example, B + C cannot be formed or the fill-in of the incomplete Cholesky factorization is unacceptable. In the next sections, a polynomial preconditioner based on orthogonal polynomials is presented; it allows the solution of very large complex linear systems.
The suggested strategy to obtain the solution of the complex linear system is summarized in Algorithm 4:
• if the complete Cholesky factorization LLᵀ = B + C is affordable, use P = (1 + i)LLᵀ as preconditioner;
• if the incomplete Cholesky factorization is a good approximation of B + C, use P = (1 + i)LLᵀ, with L the incomplete factor, as preconditioner;
• else, if the approximation error E is not "too large", compute P^{-1}v by solving the two real systems (B + C)z = v/(1 + i) with the PCG method, preconditioned by the incomplete Cholesky factorization;
• otherwise, when the incomplete Cholesky factorization is too inaccurate or the matrix is so large that the factorization cannot even be computed, use the proposed polynomial preconditioner.

Scaling the complex linear system
The polynomial preconditioner presented in the next section depends on the knowledge of an interval containing the eigenvalues. Scaling is a cheap procedure to recast the problem into one with eigenvalues in the interval (0, 1]. Consider a real diagonal matrix S with positive diagonal entries; the scaled version of system (1) is

    (SBS + iSCS)(S^{-1}x) = Sb.

The scaled system inherits the properties of the original one: the matrices SBS and SCS are still semi-SPD, with SBS + SCS SPD. The next lemma shows how to choose a good scaling matrix S. From now on, the following assumption holds (Assumption 4.2):
• matrices B and C are semi-SPD;
• matrix B + C is SPD with eigenvalues in (0, 1].
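A minimal sketch of such a scaling follows; the particular choice S = sI with s = ||B + C||_∞^{-1/2} is an assumption made here for illustration, not necessarily the S of the lemma, but it does place the eigenvalues of the scaled B + C in (0, 1], since ||M||_∞ ≥ λ_max(M) for symmetric M:

```python
import numpy as np

def scale_system(B, C, b):
    """Scale (1) as (SBS + i SCS)(S^{-1} x) = S b with S = s*I,
    s = ||B+C||_inf^{-1/2} (an illustrative choice): the scaled
    B + C has eigenvalues lambda/||B+C||_inf in (0, 1]."""
    s = 1.0 / np.sqrt(np.linalg.norm(B + C, np.inf))
    return s * s * B, s * s * C, s * b, s
```

After solving the scaled system for x̃ = S^{-1}x, the solution of the original system is recovered as x = s x̃.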

Preconditioning with polynomials
On the basis of the results of the previous sections, with Assumption 4.2, the linear system to be preconditioned has the form Ax = b with A = B + iC, where B and C are semi-SPD and M = B + C is SPD with eigenvalues in the interval (0, 1]. A good preconditioner for this linear system is one step of MHSS in Algorithm 1, which results in a multiplication by P^{-1}, where P = (1 + i)(B + C) = (1 + i)M. Here the following polynomial approximation of P^{-1} is proposed:

    P^{-1} = (1 + i)^{-1} M^{-1} ≈ (1 + i)^{-1} s_m(M).

The matrix polynomial s_m(M) must be an approximation of the inverse of M, i.e. s_m(M)M ≈ I, where s_m(x) is a polynomial of degree m. A measure of the quality of the preconditioned matrix for a generic polynomial s(x) is the distance from the identity matrix:

    Q_σ(s) = max_{λ ∈ σ(M)} |1 - λ s(λ)|,                                  (13)

where σ(M) is the spectrum of M. A preconditioner polynomial can be constructed by minimizing Q_σ(s) of equation (13) within the space Π_m of polynomials of degree at most m. This requires the knowledge of the spectrum of M, which is in general not available, making problem (13) unfeasible. The following approximation of quality measure (13) is feasible and needs only the knowledge of an interval [ε, 1], for some ε > 0, containing the spectrum of M:

    Q_[ε,1](s) = max_{λ ∈ [ε,1]} |1 - λ s(λ)|.                             (14)

The polynomial which minimizes Q_[ε,1](s) for s ∈ Π_m is well known and is connected to an appropriately scaled and shifted Chebyshev polynomial. The construction of this solution is described in Section 5.1 and was previously considered by Ashby et al. [3,4], Johnson et al. [31], Freund [23], Saad [38] and Axelsson [5]. The computation of Q_[ε,1](s) needs the estimation of a positive lower bound ε of the minimum eigenvalue of M, which is, in general, expensive or infeasible. The estimate ε = 0 cannot be used because Q_[0,1](s) ≥ 1 for any polynomial s. A different way to choose ε is analysed later in this section. Saad observed that the use of Chebyshev polynomials with the conjugate gradient method, i.e.
the polynomial which minimizes the condition number of the preconditioned system, is in general far from being the best polynomial preconditioner, i.e. the one that minimizes the number of CG iterations [38]. Practice shows that, although non optimal, Chebyshev preconditioners perform well in many situations. The following integral average quality measure, proposed in [47,38,31,4], is a feasible alternative to (14):

    Q_w(s) = ∫₀¹ (1 - λ s(λ))² w(λ) dλ,                                    (15)

for a nonnegative weight function w. The preconditioner polynomial s_m proposed here is the solution of the minimization of quality measure (14) or (15):

    s_m = argmin_{s ∈ Π_m} Q(s),   Q = Q_[ε,1] or Q = Q_w.                 (16)

The solution of problem (16) is detailed in the next sections: the first problem is solved by means of Chebyshev polynomials, the second one with the Jacobi weight.

Chebyshev polynomial preconditioner
The solution of minimization problem (16) with quality measure Q_[ε,1](s) is well known and can be written in terms of Chebyshev polynomials [3,4]:

    s_m(λ) = (1/λ) (1 - T_{m+1}(λ)/T_{m+1}(0)),                            (17)

where T_{m+1}(λ) is the (m+1)-th Chebyshev polynomial scaled to the interval [ε, 1]. The polynomials T_k(λ) satisfy the recurrence

    T_{k+1}(λ) = 2 t(λ) T_k(λ) - T_{k-1}(λ),   T_0(λ) = 1,   T_1(λ) = t(λ),   (18)

where t(λ) = (2λ - 1 - ε)/(1 - ε) maps [ε, 1] onto [-1, 1]. From (17) the preconditioner polynomial s_m(λ) is obtained, and from (18) it is possible to give a recursive definition of s_m(λ) too.
Using Lemma 5.1, the polynomial preconditioner (17) satisfies the recurrence (19) with coefficients a_n, b_n, c_n given in (21), where c_n = -1 is used and γ_n = T_n(0)/T_{n+1}(0) is computed by solving recurrence (18) for λ = 0, that is

    γ_n = (2 t(0) - γ_{n-1})^{-1},   γ_0 = 1/t(0),   t(0) = -(1 + ε)/(1 - ε).

Numerical stability of recurrence (21) is discussed in Section 6. The estimation of ε is the complex task, and some authors perform it dynamically. As an alternative, the present approach is to move the eigenvalues of the coefficient matrix from the interval [ε, 1] to a stripe [1 - δ, 1 + δ], so that the condition number remains bounded. The value of ε is not determined from an estimate of the eigenvalues, but from the degree of the preconditioner polynomial and from the amplitude of the stripe δ. Once δ is fixed, the higher the degree of the preconditioner, the lower the value of ε, which decreases to zero. Thus, if the degree of the preconditioner is high enough, the eigenvalues are moved into the interval [1 - δ, 1 + δ]. The important fact is that, even if the degree is not high enough to move the complete spectrum, the majority of the eigenvalues are moved into the desired stripe, improving the performance of the conjugate gradient method. An idea of this behaviour is shown in Figure 1 on the right. The end of this section is devoted to the explicit expression of the value of ε computed backwards from the values of δ and n: once the maximum condition number is fixed, it is possible to increase the degree of the polynomial preconditioner so that ε decreases until the whole spectrum of the matrix (or at least most of it) is contained in the specified range.
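In practice s_m(M)v need never be formed explicitly: running m + 1 steps of the classical Chebyshev iteration for Mx = v with zero initial guess yields exactly x = s_m(M)v, since its error polynomial is the scaled and shifted Chebyshev polynomial appearing in (17). A sketch follows (this is the standard stable formulation of Chebyshev acceleration, e.g. from Saad's book; variable names are ours):

```python
import numpy as np

def cheb_prec_apply(M, v, m, eps):
    """Return s_m(M) v via m+1 Chebyshev iteration steps for M x = v,
    x0 = 0, spectrum assumed in [eps, 1]; each step costs one product
    with M plus a few vector updates."""
    theta = (1.0 + eps) / 2.0              # center of the interval
    half = (1.0 - eps) / 2.0               # half-width of the interval
    sigma = theta / half
    rho = 1.0 / sigma
    x = np.zeros_like(v)
    r = v.copy()                           # residual v - M x for x = 0
    d = r / theta
    for _ in range(m + 1):
        x = x + d
        r = r - M @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / half) * r
        rho = rho_new
    return x
```

For a matrix with spectrum well inside [ε, 1] and moderate m, the result is already an excellent approximation of M^{-1}v.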

Jacobi polynomial preconditioner
The solution of minimization problem (16) with quality measure Q_w(s) is well known and can be written in terms of Jacobi orthogonal polynomials [38]. Definition 5.2. Given a nonnegative weight function w(λ) : [0, 1] → R+ (see [31,38,2]), the scalar product ⟨·,·⟩_w and the induced norm ||·||_w are defined as

    ⟨p, q⟩_w = ∫₀¹ p(λ) q(λ) w(λ) dλ,   ||p||_w = ⟨p, p⟩_w^{1/2},

where p and q are continuous functions. The orthogonal polynomials w.r.t. the scalar product ⟨·,·⟩_w are the polynomials p_k(λ) which satisfy ⟨p_k, p_j⟩_w = 0 if k ≠ j.
The class of polynomials of the form 1 - λ s(λ) with s ∈ Π_m can be thought of as the polynomials r ∈ Π_{m+1} with r(0) = 1; thus, the minimization problem (16) for Q_w(s) can be recast as the following constrained minimization, for w(λ) ≡ 1:

    r_{m+1} = argmin { ||r||_w² : r ∈ Π_{m+1}, r(0) = 1 }.                  (23)

The preconditioner polynomial is s_m(λ) = λ^{-1}(1 - r_{m+1}(λ)). Polynomial r_{m+1}(λ) is expanded by means of Jacobi orthogonal polynomials with α = β = 0. Problem (23) is solved by using a Lagrange multiplier with first-order conditions, resulting in

    r_{m+1}(λ) = ( Σ_{k=0}^{m+1} p_k(0) p_k(λ) ) / ( Σ_{k=0}^{m+1} p_k(0)² ).   (24)

Remark 5.3. The solution (24) is only formal and barely useful from a computational point of view, because it requires the explicit computation of the least squares polynomial. In fact, it is well known that the evaluation of a polynomial of high degree through its coefficients is a very unstable process. To make solution (24) practical, it is mandatory to obtain a stable recurrence formula that allows the evaluation of polynomials of very high degree, e.g. 1000 or more.
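The instability mentioned in Remark 5.3 is easy to reproduce. The snippet below evaluates the same Chebyshev polynomial T_100 at x = 0.9, once through its three-term recurrence and once through its monomial coefficients (an illustration of the general phenomenon, not of the paper's specific preconditioner polynomials):

```python
import numpy as np

def cheb_rec(n, x):
    """T_n(x) via the stable recurrence T_k = 2x T_{k-1} - T_{k-2}."""
    t0, t1 = 1.0, x
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1 if n >= 1 else 1.0

n, x = 100, 0.9
exact = np.cos(n * np.arccos(x))             # closed form on [-1, 1]
rec_val = cheb_rec(n, x)
# same polynomial through monomial coefficients: the coefficients reach
# magnitude 2^{n-1} and cancel catastrophically at evaluation time
e = np.zeros(n + 1); e[n] = 1.0              # T_n in the Chebyshev basis
mono = np.polynomial.chebyshev.cheb2poly(e)  # monomial coefficients
mono_val = np.polynomial.polynomial.polyval(x, mono)
```

The recurrence agrees with the closed form to near machine precision, while the monomial-coefficient evaluation is off by many orders of magnitude, because coefficients of size up to 2^99 cancel to a value of modulus at most one.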
To find a recurrence for (24), it must be rewritten as a ratio of orthogonal polynomials, as for the Chebyshev preconditioner (17). To this end, some classical theorems and definitions on orthogonal polynomials are recalled for convenience: the Christoffel-Darboux formulas and kernel polynomials, recalled without proofs (see [49,47]), are used to build the recurrence.
The preconditioner polynomial can be computed recursively using Lemma 5.1, where the coefficients a_n, b_n, c_n and γ_n are obtained from the Jacobi recurrence coefficients a_n^{(0,1)}, b_n^{(0,1)}, c_n^{(0,1)}. The only difficulty in the computation of these coefficients lies in the recursive solution of γ_n, which is omitted for conciseness but is a linear three-term recurrence with polynomial coefficients. Notice that ∆ → 0, so the limit values of the above coefficients are evident.
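For small degrees, problem (23) can also be attacked head-on by discretizing the integral and solving a linear least squares problem; this brute-force sketch (ours, for cross-checking only) degrades quickly as m grows, which is precisely why the stable recurrence above matters:

```python
import numpy as np

def ls_poly_coeffs(m, npts=2000):
    """Monomial coefficients of the polynomial s_m minimizing the
    discretized quality measure (15) with w = 1, i.e. the mean of
    (1 - lambda*s(lambda))^2 over midpoint nodes in (0, 1). Only
    reliable for low degrees (ill-conditioned Vandermonde basis)."""
    lam = (np.arange(npts) + 0.5) / npts            # quadrature nodes
    V = np.vander(lam, m + 1, increasing=True)      # s(lam) = V @ coeffs
    A = lam[:, None] * V                            # lambda * s(lambda)
    coeffs, *_ = np.linalg.lstsq(A, np.ones(npts), rcond=None)
    return coeffs
```

As expected, the discretized quality measure decreases monotonically with the degree m.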

Numerical stability
Algorithm 5, i.e. the application of the preconditioner s_m(M) given by equation (28) to a vector v, taking rounding errors into account, results in the perturbed recurrence

    w^{(0)} = v^{(0)} + ε^{(0)},   w^{(1)} = v^{(1)} + ε^{(1)},
    w^{(n)} = a_n M w^{(n-1)} + b_n w^{(n-1)} + c_n w^{(n-2)} + ε^{(n)},

where ||ε^{(k)}||_∞ ≤ δ are the errors due to floating point operations, with δ an upper bound on such errors. The cumulative error e^{(k)} = w^{(k)} - v^{(k)} satisfies the linear recurrence

    e^{(0)} = ε^{(0)},   e^{(1)} = ε^{(1)},   e^{(n)} = a_n M e^{(n-1)} + b_n e^{(n-1)} + c_n e^{(n-2)} + ε^{(n)}.
The next definitions introduce the concepts of generalized and joint spectral radius, needed for the proof of the theorem bounding the error; they can be found in [32].
The set Σ is irreducible if the only subspaces invariant under every matrix of Σ are {0} and R^n.
Definition 6.2. The generalized spectral radius ρ(Σ) and the joint spectral radius ρ̂(Σ, ||·||) of a set of matrices Σ are defined as

    ρ(Σ) = limsup_{k→∞} ( sup { ρ(A_1 A_2 ··· A_k) : A_i ∈ Σ } )^{1/k},
    ρ̂(Σ, ||·||) = limsup_{k→∞} ( sup { ||A_1 A_2 ··· A_k|| : A_i ∈ Σ } )^{1/k}.

Theorem 6.3. Let Σ be a bounded and irreducible set of matrices with ρ(Σ) > 0; then there is a constant C such that

    ||A_1 A_2 ··· A_k|| ≤ C ρ(Σ)^k   for every k and every A_i ∈ Σ.

Proof. It is Theorem 2.1 of [32] with a slight modification to match the present case.
Theorem 6.4. The cumulative error of Algorithm 5 satisfies

    ||e^{(n)}||_∞ ≤ C δ √N n,

where N is the dimension of the linear system, δ is the amplitude of the stripe for the eigenvalues and C is an unknown constant coming from the norm inequalities, which is found experimentally to be small.
Proof. Focusing on the j-th component of the transformed error, f^{(n)} = (T e^{(n)})_j and η^{(n)} = (T ε^{(n)})_j, a scalar recurrence (30) is obtained. Recurrence (30) is restated in matrix form (31), with initial data f_1^T = (η^{(1)}, η^{(0)}). Notice that η^{(n)} is bounded, and from (31) it follows (32). The set Σ = {A_i : i ≥ 1} is bounded and irreducible, and each matrix has spectral radius strictly less than 1 (see Lemma 6.5 for a proof); therefore the joint spectral radius is less than 1. From (32), applying Theorem 6.3 with the infinity norm, and because f^{(n)} = (T e^{(n)})_j, it follows that ||T e^{(n)}||_∞ ≤ C δ √N n. A bound for e^{(n)} follows at once. This shows that the error grows at most linearly.
The above relation shows that the recurrence is at worst linearly unstable, i.e. the error grows at most linearly. The existence of the constant is proved in the work of Rota and Strang [37], where the concept of joint spectral radius was introduced. The determination of C is not possible in general, but practice reveals that it is small. In conclusion, even a polynomial preconditioner of very high degree can be computed stably.
Lemma 6.5. Given a_n, b_n, c_n from (21) or (27), respectively for the Chebyshev and the Jacobi preconditioner, the roots z_1 and z_2 of the characteristic polynomial of the homogeneous recurrence (30) satisfy |z_1|, |z_2| < 1, so that each matrix A_i has spectral radius strictly less than one.

Numerical tests
In this section a group of tests is proposed for the solution of a complex linear system of the form (1), i.e. (B + iC)(y + iz) = c + id, where B and C are semi-SPD with B + C SPD. The solvers used are COCG (Algorithm 2) and COCR (Algorithm 3), preconditioned with:
• ILU0, the incomplete Cholesky ILU(0) of the matrix B + iC;
• MHSS-ILU0, the incomplete Cholesky ILU(0) of the preconditioner P defined in (6);
• MHSS-JACOBI, the approximation of the preconditioner P defined in (6) with the Jacobi polynomial preconditioner;
• MHSS-CHEB, the approximation of the preconditioner P defined in (6) with the Chebyshev polynomial preconditioner.
The degrees used for the polynomial preconditioners are 10, 50, 100, 500 and 1000. Due to the lack of complex symmetric matrices with semi-SPD real and imaginary parts, it was decided to combine pairs of real SPD matrices of similar size (padding with zeros, if needed, to match the size of the biggest one). The real SPD matrices used are summarized in the next table and can be found in the NIST "Matrix Market" Sparse Matrix Collection [14] or in the University of Florida Sparse Matrix Collection [18]. As usual, "NNZ" means the number of nonzero elements, and, the matrices being square, only the number of rows is reported. These matrices are used in pairs, where the first matrix of the pair corresponds to the real part and the second to the imaginary part. If the dimensions disagree, the smallest matrix is padded with zero rows and columns up to the size of the biggest one. The pairing of the matrices with the name of the corresponding test is summarized in the following table. The right-hand side used for all tests, unless explicitly written, is (1 + i)1, that is 1 + i for all components. A test from a real application is found in [16], from which the complex symmetric matrices are provided with a specific right-hand side. For the first group of tests (T tests) the r.h.s. was (1 + i)1; for the second group (S tests) the r.h.s. was the one prescribed in the paper [16].
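The pairing just described can be sketched as follows (the helper name is ours; in the actual tests the matrices are sparse, here dense arrays are used for brevity):

```python
import numpy as np

def pair_matrices(B, C):
    """Pad the smaller of two real SPD matrices with zero rows and
    columns to the size of the bigger one and form A = B + iC.
    The padded matrix is only semi-SPD, but B + C remains SPD as long
    as the larger summand is SPD."""
    n = max(B.shape[0], C.shape[0])
    Bp = np.zeros((n, n)); Bp[:B.shape[0], :B.shape[0]] = B
    Cp = np.zeros((n, n)); Cp[:C.shape[0], :C.shape[0]] = C
    return Bp + 1j * Cp
```

With b = (1 + 1j) * np.ones(n) this reproduces the right-hand side used in the T tests.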
The linear system corresponding to each test is solved with COCG and COCR. The preconditioners used are the ILU(0) for the complex matrix B + iC and the proposed preconditioner (1 + i)(B + C), approximated with ILU(0) or with Jacobi and Chebyshev polynomials of degree 10, 50, 100, 500, 1000. The results in terms of number of iterations are collected in Table 5 for both COCG and COCR.
From Table 5 it is clear that the strategy presented in Algorithm 4 is effective: when the ILU factorization is available and the iteration converges, the incomplete factorization preconditioner is faster than the polynomial preconditioner, while the polynomial preconditioner is an effective alternative when ILU is not available or not sufficient as a preconditioner.
Computational times are indicative and were obtained implementing the proposed preconditioner in MATLAB, with scripts available at MATLAB Central. The standard MATLAB incomplete LU is used in the tests.
Raising the degree of the polynomial corresponds to lowering the number of iterations needed by COCG and COCR. It is also apparent that it is not possible to go below a certain number of iterations even with a very high degree polynomial; this is evident, for example, in test T16k with both COCG and COCR and with both preconditioners. This is explained by the fact that the condition number of the preconditioned system, when the preconditioner (1 + i)(B + C) is applied exactly, is independent of the system size.
Another behaviour common to all tests is the generally better performance of COCR over COCG: this can be appreciated looking at Figures 2, 3 and 4, which show the history of the residual at each iteration of both methods with the Jacobi and the Chebyshev preconditioners. Table 1: Numerical results for COCG and COCR; the reported numbers represent the iterations of the corresponding solver with the specified preconditioner. The dash indicates that it was not feasible to compute a particular test, while the letters "NC" mean "not converged", i.e., the residual is still large after 5 000 iterations. The value of δ used in the Chebyshev preconditioner was 0.2, while the stopping tolerance was 10^{-8}. The time elapsed in the computation is expressed in seconds.

Conclusions
A polynomial preconditioner was presented for the solution of the linear system Ax = b, with A complex symmetric such that A = B + iC, where B, C are real symmetric positive semidefinite matrices (semi-SPD) and B + C is symmetric positive definite (SPD). Typical problems of this form come from the field of electrodynamics, where the involved matrices are complex but not Hermitian, so that standard methods cannot be used directly. The algorithm is suitable for large matrices, where the Cholesky decomposition, or its inexact form, is too costly or infeasible. It works as a polynomial approximation of a single step of the MHSS method and is successfully applied as a preconditioner of Conjugate Gradient-like methods; in particular, it is shown how to use it together with COCG or COCR. Following the trend of recent years, but aware of the criticism that arose in the 1980s, the proposed preconditioner is computed via a recurrence of orthogonal polynomials and is proved to be stable. This allows the use of polynomials of very high degree, and numerical tests confirm the good performance predicted by the theory.