The Preconditioned SOR Iterative Method for Positive Definite Matrices

We present several iterations based on the preconditioners introduced
by Tarazaga and Cuellar (2009) and
study the convergence of the resulting methods for solving linear systems whose coefficient
matrix is positive definite. Numerical examples show that these methods compete
very well with the SOR iteration.


Introduction
For solving the large sparse linear system

    Ax = b,  x, b ∈ R^n,   (1)

where A ∈ R^{n×n} is a square nonsingular positive definite matrix, an iteration method is often considered. For any splitting A = M − N with det(M) ≠ 0, the basic iterative method for system (1) is

    x^{k+1} = M^{-1} N x^k + M^{-1} b,  k = 0, 1, ....   (2)

To improve the convergence rate of the basic iterative method, one transforms the original system (1) into the preconditioned form

    PAx = Pb,   (3)

where P is called the preconditioner or preconditioning matrix. Several preconditioned iterative methods have been proposed [1][2][3][4][5][6]. Since P is nonsingular, (1) and (3) have the same solution; we consider here systems with a unique solution. It is well known that system (3) can be solved by the iteration

    x^{k+1} = (I − PA) x^k + Pb.   (4)

This iteration is called the Richardson iteration for the preconditioned system (3). In this paper, we consider iteration methods of the form

    x^{k+1} = T x^k + c,   (5)

where T represents the iteration matrix.
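As a concrete illustration, here is a minimal NumPy sketch (ours, not the paper's code) of the basic splitting iteration (2) and the Richardson iteration (4); the function names and the Jacobi-type example are our own choices.

```python
import numpy as np

# Basic splitting iteration for A = M - N:  x_{k+1} = M^{-1}(N x_k + b),
# and Richardson iteration for P A x = P b: x_{k+1} = x_k + P(b - A x_k).

def basic_iteration(M, N, b, x0, iters=100):
    x = x0.copy()
    for _ in range(iters):
        x = np.linalg.solve(M, N @ x + b)
    return x

def richardson(A, b, P, x0, iters=200):
    x = x0.copy()
    for _ in range(iters):
        x = x + P @ (b - A @ x)
    return x

# Example: Jacobi splitting M = diag(A) on a diagonally dominant SPD matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
M = np.diag(np.diag(A))
x_split = basic_iteration(M, M - A, b, np.zeros(3))
x_rich = richardson(A, b, np.eye(3) / 6.0, np.zeros(3))  # P = I/6, so ||I - PA||_inf < 1
```

Both runs converge because the relevant iteration matrices have norm below one for this diagonally dominant example.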
Lemma 1 (see [7, 8]). For the iteration formula (5) to produce a sequence converging to (I − T)^{-1} c, for any starting point x^0, it is necessary and sufficient that the spectral radius of T be less than one.
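Lemma 1 can be checked numerically; the 2×2 matrix below is a hypothetical example of ours with spectral radius 0.6 < 1, so the iterates approach (I − T)^{-1} c from any starting point.

```python
import numpy as np

# Hypothetical 2x2 example illustrating Lemma 1: since rho(T) < 1, the
# sequence x_{k+1} = T x_k + c converges to (I - T)^{-1} c for every x_0.
T = np.array([[0.5, 0.2],
              [0.1, 0.4]])
c = np.array([1.0, 1.0])
rho = max(abs(np.linalg.eigvals(T)))   # eigenvalues are 0.6 and 0.3

x = np.array([10.0, -10.0])            # arbitrary starting point
for _ in range(200):
    x = T @ x + c
fixed_point = np.linalg.solve(np.eye(2) - T, c)
```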
We will use the following notation. A matrix A = (a_ij) is called row diagonally dominant if

    Σ_{j ≠ i} |a_ij| < |a_ii|  for i = 1, ..., n,

and column diagonally dominant if

    Σ_{i ≠ j} |a_ij| < |a_jj|  for j = 1, ..., n.
The Frobenius inner product of A and B is defined by

    ⟨A, B⟩_F = trace(B^T A),

where trace(C) denotes the trace of a matrix C and B^T stands for the transpose of B; the spectral radius of a matrix A is denoted by ρ(A). Let A be decomposed as A = D − L − U, in which D is the diagonal of A, L is the strict lower part of A, and U is the strict upper part of A.
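This notation can be made concrete with a short NumPy sketch (the helper names are ours):

```python
import numpy as np

# Trace-based Frobenius inner product, spectral radius, and the splitting
# A = D - L - U (D diagonal, L strict lower part, U strict upper part).

def frobenius_inner(A, B):
    return np.trace(B.T @ A)

def spectral_radius(A):
    return max(abs(np.linalg.eigvals(A)))

def dlu_split(A):
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)   # sign convention from A = D - L - U
    U = -np.triu(A, 1)
    return D, L, U

A = np.array([[4.0, -1.0],
              [-2.0, 5.0]])
D, L, U = dlu_split(A)
```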
Using the Frobenius norm, the preconditioning matrix in [6] is the diagonal matrix D_F minimizing ‖I − DA‖_F over all diagonal matrices D; its entries are

    (D_F)_ii = a_ii / ‖a_i‖_2^2,

where a_i stands for the i-th row of the matrix A.
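The minimization of ‖I − DA‖_F decouples row by row, which gives the closed form above. A sketch (the function name is ours, and the row-by-row derivation is our reconstruction, not quoted from [6]):

```python
import numpy as np

def frobenius_preconditioner(A):
    """Diagonal D_F with (D_F)_ii = a_ii / ||a_i||_2^2, the row-by-row
    minimizer of ||I - D A||_F over diagonal matrices D."""
    row_norms_sq = np.sum(A * A, axis=1)
    return np.diag(np.diag(A) / row_norms_sq)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
DF = frobenius_preconditioner(A)
base = np.linalg.norm(np.eye(2) - DF @ A)   # Frobenius norm at the minimizer
```

Perturbing any diagonal entry of D_F can only increase ‖I − DA‖_F, since each entry minimizes a strictly convex quadratic in that entry.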
The second preconditioning matrix is P_1 = αI, where α is computed to minimize the infinity norm of the iteration matrix.
The small gap of a matrix A is defined by

    sg(A) = min_{1 ≤ i ≤ n} ( |a_ii| − Σ_{j ≠ i} |a_ij| ).

Obviously, sg(A) is positive for diagonally dominant matrices. We may also suppose that the diagonal entries are positive; otherwise, this can be achieved by multiplying the corresponding rows by −1.
Then, the preconditioner obtained by minimizing the infinity norm is given by

    P_∞ = ( 2 / ( ‖A‖_∞ + sg(A) ) ) I.

We can easily see that this diagonal preconditioner is a constant multiple of the identity. Using the idea in [9], we can obtain the iterations associated with the preconditioners D_F and P_∞ as follows.
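A sketch of sg(A) and the scalar preconditioner (the value of the minimizing scalar is our reconstruction, assuming a row diagonally dominant matrix with positive diagonal):

```python
import numpy as np

def small_gap(A):
    """sg(A) = min_i ( |a_ii| - sum_{j != i} |a_ij| )."""
    absA = np.abs(A)
    return np.min(2.0 * np.diag(absA) - absA.sum(axis=1))

def infinity_preconditioner(A):
    """P_inf = alpha I with alpha = 2 / (||A||_inf + sg(A))."""
    alpha = 2.0 / (np.linalg.norm(A, np.inf) + small_gap(A))
    return alpha * np.eye(A.shape[0])

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
P = infinity_preconditioner(A)   # alpha = 2 / (6 + 2) = 0.25
```

For this example ‖I − P A‖_∞ = 0.5, so the Richardson iteration with P_∞ converges.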
Starting from the iteration x^{k+1} = x^k + ω D_F (b − A x^k), the matrix A is decomposed as A = D + L + U and the term involving L x^{k+1} is moved to the left-hand side; we obtain

    (I + ω D_F L) x^{k+1} = (I − ω D_F (D + U)) x^k + ω D_F b.

Because I + ω D_F L is invertible, we can easily get

    x^{k+1} = (I + ω D_F L)^{-1} [ (I − ω D_F (D + U)) x^k + ω D_F b ].

This iteration is called the sequential Frobenius norm iteration.
Similarly, we build the infinity norm iteration associated with P_∞ as follows:

    x^{k+1} = (I + ω P_∞ L)^{-1} [ (I − ω P_∞ (D + U)) x^k + ω P_∞ b ].

Now, we have two preconditioned SOR iterative methods, which use D_F and P_∞ as preconditioners.
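Both methods can be sketched with one routine parameterized by the diagonal preconditioner. The update formula below is our reconstruction of the sequential iteration; note that with P = D^{-1} it reduces exactly to the classical SOR iteration (multiply both sides by D/ω to see this).

```python
import numpy as np

# Sequential preconditioned iteration (our reconstruction):
#   (I + w P L) x_{k+1} = (I - w P (D + U)) x_k + w P b,   A = D + L + U.
# P = D^{-1} gives classical SOR; P = D_F gives the Frobenius norm
# iteration; P = P_inf gives the infinity norm iteration.

def preconditioned_sor(A, b, P, w=1.0, iters=300, x0=None):
    n = A.shape[0]
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    M = np.eye(n) + w * P @ L
    N = np.eye(n) - w * P @ (D + U)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(iters):
        x = np.linalg.solve(M, N @ x + w * P @ b)
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
DF = np.diag(np.diag(A) / np.sum(A * A, axis=1))    # Frobenius preconditioner
x_gs = preconditioned_sor(A, b, np.diag(1.0 / np.diag(A)), w=1.0)  # Gauss-Seidel
x_df = preconditioned_sor(A, b, DF, w=1.0)
```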
In this paper, we first discuss, in Section 2, the convergence of the preconditioned SOR iterative methods that use D_F and P_∞ as preconditioners. In Section 3, we provide numerical experiments to illustrate the theoretical results obtained in Section 2; for a suitable choice of the parameters, our methods yield smaller spectral radii of the iteration matrices than the SOR method, as the numerical examples show.

The Sequential Frobenius Norm Iteration
Theorem 2. Suppose 1 ≤ ω < 2 and A is positive definite; then the sequential Frobenius norm iteration converges for any starting point x^0.
Proof. By a simple calculation, the iteration matrix can be written as

    T = (I + ω D_F L)^{-1} (I − ω D_F (D + U)).

Let λ be an eigenvalue of T; a short computation then bounds |λ|. The rest of the proof is similar to that in [9], so we omit it, and the theorem is obtained.

The Sequential Infinity Norm Iteration
Theorem 3. Suppose 1 ≤ ω < 2 and A is positive definite. If every diagonal entry of A satisfies

    a_ii < ‖A‖_∞ + sg(A),

then the sequential infinity norm iteration converges for any starting point x^0.

Proof. The proof of this theorem is similar to the previous one. We notice that the diagonal entries of the matrix 2 P_∞^{-1} − A are

    ‖A‖_∞ + sg(A) − a_ii;

by the assumption ‖A‖_∞ + sg(A) > a_ii, the diagonal entries of the matrix 2 P_∞^{-1} − A are positive, which completes the proof.

Now, we modify the infinity norm preconditioner so that diagonal dominance is no longer required. Since the eigenvalues of a positive definite matrix A lie in the interval (0, ‖A‖_∞ + ε) for an arbitrarily small number ε, the preconditioning matrix P_ε is defined by

    P_ε = ( 2 / ( ‖A‖_∞ + ε ) ) I

for any ε > 0. In particular, ε = 0 can be taken if ρ(A) < ‖A‖_∞.
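The modified preconditioner and the positivity used in its analysis can be checked directly. A sketch, where the closed form of P_ε and the example matrix are our reconstruction and choice:

```python
import numpy as np

# P_eps = (2 / (||A||_inf + eps)) I, so 2 P_eps^{-1} - A =
# (||A||_inf + eps) I - A, which is positive definite because every
# eigenvalue of the positive definite matrix A lies below ||A||_inf + eps.

def modified_infinity_preconditioner(A, eps=0.1):
    return (2.0 / (np.linalg.norm(A, np.inf) + eps)) * np.eye(A.shape[0])

A = np.array([[1.0, 0.8, 0.8],
              [0.8, 1.0, 0.8],
              [0.8, 0.8, 1.0]])   # SPD but NOT diagonally dominant: sg(A) < 0
P = modified_infinity_preconditioner(A, eps=0.1)
shifted = (np.linalg.norm(A, np.inf) + 0.1) * np.eye(3) - A   # = 2 P^{-1} - A
```

The example has sg(A) = −0.6, so the original P_∞ is not applicable, while the shifted matrix above is still positive definite.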
Theorem 4. For any ε > 0, if A is positive definite, then the iteration with preconditioner P_ε converges for any starting point x^0, where 1 ≤ ω < 2.
Proof. The proof is similar to that of Theorem 2. We notice that the condition 2 P_ε^{-1} − A ≻ 0 holds, since 2 P_ε^{-1} − A = (‖A‖_∞ + ε) I − A and the eigenvalues of A are smaller than ‖A‖_∞ + ε; in particular, its diagonal entries ‖A‖_∞ + ε − a_ii are positive. Hence, we obtain the result.
Next, we obtain a general result covering the previous positive definite preconditioners.

Theorem 5. Suppose that the positive definite matrices A and P satisfy the condition required in the proof of Theorem 2, where D is the diagonal of A and 1 ≤ ω < 2; then the iteration converges for any starting point x^0.

Remark 6. In Theorem 5, the matrix P is only required to satisfy the condition needed in the proof of Theorem 2; hence, this theorem is a general result for the previous preconditioners.

Numerical Experiments
In this section, we provide numerical experiments to illustrate the theoretical results obtained in Section 2. All numerical experiments are carried out using MATLAB 7.1. The spectral radii of the various iteration matrices are shown in Figures 1 and 2. For simplicity of comparison, we take ω = 1.9 for all methods. The figures show the spectral radii of the iteration matrices of the SOR method, of the Frobenius norm preconditioned iteration (marked ⋅), of the infinity norm preconditioned iteration (marked *), and of the modified infinity norm preconditioner with ε = 0.1 (marked ∇).
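The comparison can be reproduced in outline (with our own small test matrix, not the matrices behind Figures 1 and 2); for a symmetric positive definite A and 0 < ω < 2, the SOR spectral radius is guaranteed to be below 1 by the Ostrowski–Reich theorem.

```python
import numpy as np

# Spectral radius of the classical SOR iteration matrix at w = 1.9 for a
# small SPD test matrix (ours); by Ostrowski-Reich it is strictly below 1
# for any 0 < w < 2 when A is symmetric positive definite.

def sor_spectral_radius(A, w):
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    M = D / w + L            # SOR splitting A = M - N
    N = M - A
    T = np.linalg.solve(M, N)
    return max(abs(np.linalg.eigvals(T)))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
rho_sor = sor_spectral_radius(A, 1.9)
```

The same routine, with M and N replaced by the preconditioned splittings of Section 1, yields the other three curves in the figures.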
Remark 7. The numerical experiments indicate that the spectral radii of the iteration matrices with the three proposed preconditioners are significantly smaller than those of the SOR iteration matrices.
Remark 8. As the proportion of negative to positive entries increases, the spectral radius of random positive semidefinite matrices of the form R^T R + 10 · diag(rand(n, 1)), with half of the entries of R negative, under the infinity norm preconditioner (*) becomes larger than 1. But both infinity norm preconditioners still have faster convergence rates than the SOR method.