Generalized ASOR and Modified ASOR Methods for Saddle Point Problems

Recently, the accelerated successive overrelaxation (SOR)-like (ASOR) method was proposed for saddle point problems. In this paper, we establish a generalized accelerated SOR-like (GASOR) method and a modified accelerated SOR-like (MASOR) method, which are extensions of the ASOR method, for solving both nonsingular and singular saddle point problems. Sufficient conditions for convergence (semiconvergence) in the nonsingular (singular) case are derived. Finally, numerical examples are carried out, which show that the GASOR and MASOR methods have faster convergence rates than the SOR-like, generalized SOR (GSOR), modified SOR-like (MSOR-like), modified symmetric SOR (MSSOR), generalized symmetric SOR (GSSOR), generalized modified symmetric SOR (GMSSOR), and ASOR methods with optimal or experimentally found optimal parameters, provided the iteration parameters are suitably chosen.


Introduction
Consider the following large and sparse saddle point problem:

    [ A    B ] [x]   [b]
    [ B^T  0 ] [y] = [q],                                        (1)

where A ∈ R^{m×m} is a symmetric positive definite matrix, B ∈ R^{m×n}, b ∈ R^m, and q ∈ R^n (m ≥ n). It follows that the linear system (1) is nonsingular when B is of full column rank and singular when B is rank deficient [1].
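To make the block structure of (1) concrete, the following small sketch (hypothetical sizes and random data, not taken from the paper) assembles a saddle point matrix and checks numerically that it is nonsingular exactly when B has full column rank:

```python
import numpy as np

# Hypothetical small instance of the block structure of system (1):
#   [ A   B ] [x]   [b]
#   [ B^T 0 ] [y] = [q],
# with A symmetric positive definite (m x m) and B of size m x n, m >= n.
rng = np.random.default_rng(0)
m, n = 6, 3

M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)        # symmetric positive definite
B = rng.standard_normal((m, n))    # generically of full column rank

K = np.block([[A, B],
              [B.T, np.zeros((n, n))]])

# Full column rank B gives a nonsingular system ...
assert np.linalg.matrix_rank(B) == n
assert np.linalg.matrix_rank(K) == m + n

# ... while a rank-deficient B makes the system singular.
B_def = B.copy()
B_def[:, -1] = B_def[:, 0]         # duplicated column: rank deficiency
K_def = np.block([[A, B_def],
                  [B_def.T, np.zeros((n, n))]])
assert np.linalg.matrix_rank(K_def) < m + n
```

This mirrors the dichotomy stated above: with A positive definite, the singularity of the whole system is governed entirely by the rank of B.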
The saddle point problem (1) is important and arises in a variety of scientific and engineering applications, such as mixed finite element approximation of elliptic partial differential equations, optimal control, computational fluid dynamics, weighted least-squares problems, electronic networks, and computer graphics; see [1-4] and the references therein.
In the case of B in (1) being rank deficient, we call the linear system (1) a singular saddle point problem. For this case, various kinds of relaxation iteration methods have also been established. In [30-34], the authors applied inexact Uzawa methods to singular saddle point problems.
Recently, Njeru and Guo [35] developed an accelerated SOR-like (ASOR) method for the nonsingular saddle point problem (1), and numerical experiments show that the convergence rate of the ASOR method is faster than those of the SOR-like, GSSOR, and GSOR methods when the parameters are suitably chosen. It can be described as follows.
For the coefficient matrix of the saddle point problem (1), Njeru and Guo [35] made the splitting (2), with ω > 0, τ ≠ 0, and Q ∈ R^{n×n} a symmetric positive definite matrix. The iteration of the ASOR method [35] is described by the algorithm below.
The ASOR Method. Given initial vectors x^(0) ∈ R^m and y^(0) ∈ R^n and two real relaxation factors ω > 0 and τ ≠ 0, for k = 0, 1, 2, ..., until the iteration sequence {(x^(k), y^(k))} converges to the exact solution of the saddle point problem (1), compute the ASOR update. In addition, the authors in [35] obtained experimentally optimal parameters by trial and error in different cases; the experimentally found optimal values of ω and τ are very close to certain closed-form expressions in μ_max and μ_min (see [35] for the exact formulas), where μ_max and μ_min are the maximum and minimum eigenvalues of the matrix Q^{-1}B^T A^{-1}B, respectively. In this paper, based on the ASOR method, by introducing new parameters, the generalized ASOR (GASOR) method and the modified ASOR (MASOR) method are proposed to solve the nonsingular and the singular saddle point problem (1). We discuss the convergence properties of the GASOR and MASOR methods for solving nonsingular saddle point problems and their semiconvergence properties for solving singular saddle point problems, respectively. In addition, the choice of the relaxation parameters of the GASOR and MASOR methods is discussed in Section 4. Numerical examples are implemented to illustrate the effectiveness of the GASOR and MASOR methods with appropriate parameters for both nonsingular and singular saddle point problems.
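The exact ASOR update formulas are given in [35]. As an illustration of the general shape of this family of methods, the following sketch implements a simple relaxed Uzawa/SOR-like iteration; it is NOT the exact ASOR method, and the update rule and the choice of Q below are assumptions made only for demonstration:

```python
import numpy as np

def sor_like(A, B, b, q, Q, omega, tol=1e-10, maxit=5000):
    """Illustrative relaxed Uzawa/SOR-like iteration for the system
    A x + B y = b,  B^T x = q.  This sketches the general shape of the
    SOR-like/ASOR family, NOT the exact ASOR update of [35]; the
    update rule and the role of Q here are assumptions."""
    m, n = B.shape
    x, y = np.zeros(m), np.zeros(n)
    rhs_norm = np.linalg.norm(np.hstack([b, q]))
    for k in range(maxit):
        # Relaxed solve for the primal variable ...
        x = (1 - omega) * x + omega * np.linalg.solve(A, b - B @ y)
        # ... followed by a Q-preconditioned dual update.
        y = y + omega * np.linalg.solve(Q, B.T @ x - q)
        res = np.hstack([A @ x + B @ y - b, B.T @ x - q])
        if np.linalg.norm(res) <= tol * rhs_norm:
            return x, y, k + 1
    return x, y, maxit

# Small demonstration with Q equal to the exact Schur complement,
# for which omega = 1 converges in a couple of steps.
rng = np.random.default_rng(1)
m, n = 5, 2
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)
B = rng.standard_normal((m, n))
x_true, y_true = rng.standard_normal(m), rng.standard_normal(n)
b, q = A @ x_true + B @ y_true, B.T @ x_true
Q = B.T @ np.linalg.solve(A, B)       # Schur complement B^T A^{-1} B

x, y, its = sor_like(A, B, b, q, Q, omega=1.0)
assert np.linalg.norm(x - x_true) < 1e-6
assert np.linalg.norm(y - y_true) < 1e-6
```

The choice of Q as an easily invertible approximation of the Schur complement B^T A^{-1} B is what makes all methods of this family fast; here the exact Schur complement is used only so the toy run converges immediately.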
The rest of this paper is organized as follows. In Section 2, we propose the GASOR method for solving the nonsingular and singular saddle point problem (1) and discuss the convergence (semiconvergence) properties of the GASOR method for nonsingular (singular) saddle point problems. The MASOR method is proposed, and its convergence (semiconvergence) properties for nonsingular (singular) saddle point problems are derived, in Section 3. The analysis of the optimal convergence factors of the two new methods is presented in Section 4. In Section 5, numerical experiments are provided to examine the feasibility and effectiveness of the GASOR and MASOR methods for solving both nonsingular and singular saddle point problems. Finally, some conclusions are drawn.

The Generalized ASOR Method for Saddle Point Problems
2.1. The Generalized ASOR Method. In this section, we propose the generalized ASOR (GASOR) method for solving the saddle point problem (1). The GASOR method with appropriate parameters has a faster convergence rate than the SOR-like [8], GSOR [5], MSOR-like [9], and ASOR [35] methods with optimal or experimentally found optimal parameters for nonsingular saddle point problems. For the coefficient matrix of the augmented system (1), we make the splitting of A as in (2).
Let τ and α be two nonzero reals, and let I_m ∈ R^{m×m} and I_n ∈ R^{n×n} be the m-by-m and n-by-n identity matrices, respectively. Then, we consider the following generalized ASOR iteration scheme for solving the saddle point problem (1), where T_{ω,τ,α} is the iteration matrix of the GASOR method, whose form is given below. We then have the following algorithmic description of the GASOR method.
Note that the GASOR method with α = τ reduces to the ASOR method proposed by Njeru and Guo [35].
Proof. Let λ be an eigenvalue of T_{ω,τ,α} and let (u^T, v^T)^T be the corresponding eigenvector. Then we have (13). Equation (13) can be written as (14), which is equivalent to (15) and (16). Inasmuch as the corresponding scalar coefficient is nonzero, we obtain (17) from (15). Substituting (17) into (16), we obtain (18) by the definiteness of Q.
We can prove the second assertion by reversing the process.
One may easily show that, for the special case α = τ, the GASOR method reduces to the ASOR method derived in [35].
Lemma 2 (see [36]). Both roots of the real quadratic equation x² − bx + c = 0 are less than one in modulus if and only if |c| < 1 and |b| < 1 + c.

Theorem 3. Let A and Q be symmetric positive definite, and let B be of full column rank. Assume that all eigenvalues μ of Q^{-1}B^T A^{-1}B are real and positive. Then, the GASOR method is convergent for all ω, τ, and α satisfying conditions (20), where μ_max is the maximum eigenvalue of Q^{-1}B^T A^{-1}B.
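Lemma 2 can be checked numerically. The sketch below compares the stated condition against a direct computation of the root moduli on a grid of real coefficients (the grid and tolerance are arbitrary choices for the check):

```python
import numpy as np

def roots_inside_unit_disk(b, c):
    """Directly check whether both roots of x^2 - b*x + c = 0 have
    modulus strictly less than one."""
    return all(abs(r) < 1 for r in np.roots([1.0, -b, c]))

def lemma2_condition(b, c):
    """The equivalent condition of Lemma 2: |c| < 1 and |b| < 1 + c."""
    return abs(c) < 1 and abs(b) < 1 + c

# Numerical check on a grid of real coefficients; points on the
# boundary of the condition are skipped, since root moduli exactly
# equal to one are sensitive to floating-point rounding.
for b in np.linspace(-3, 3, 61):
    for c in np.linspace(-3, 3, 61):
        if min(abs(abs(c) - 1), abs(abs(b) - abs(1 + c))) < 1e-6:
            continue
        assert roots_inside_unit_disk(b, c) == lemma2_condition(b, c)
```

This is precisely the criterion applied to the quadratic eigenvalue equation of the GASOR iteration matrix in the proof of Theorem 3.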
Proof. After some manipulations based on Lemma 1, we arrive at a quadratic equation of the form λ² − bλ + c = 0, whose coefficients b and c are given in (24). By Lemma 2, both roots are less than one in modulus if and only if (25) and (26) hold. Inequality (26) clearly holds, since ω > 0 and τ > 0. From (25), and since ω > 0, τ > 0, and μ is a real and positive eigenvalue of the matrix Q^{-1}B^T A^{-1}B, we conclude that if conditions (20) are satisfied, the GASOR method is convergent.
From (21), we get the following corollary.
Corollary 4. Let A and Q be symmetric positive definite, and let B be of full column rank. If μ is an eigenvalue of the matrix Q^{-1}B^T A^{-1}B, then the eigenvalues of the matrix T_{ω,τ,α} are either a repeated eigenvalue determined solely by the relaxation parameters or roots of the quadratic equation (21).

2.3. Semiconvergence of the GASOR Method for Singular Saddle Point Problems. For singular saddle point problems, Wang and Zhang [37] studied the accelerated HSS (AHSS) method, first put forward by Bai and Golub in [20], for solving singular saddle point problems; Yang and Wu [38] proposed the Uzawa-HSS method and applied it to singular saddle point problems [39]. Recently, Miao and Cao [40] studied the semiconvergence of the generalized local HSS method established by Zhu [41] for singular saddle point problems; Zhou and Zhang [42] discussed the semiconvergence of the GMSSOR method, which was derived by Zhang et al. in [43], for singular saddle point problems. In the sequel, Chen and Ma [44] presented a generalized shift-splitting preconditioner and investigated it for singular saddle point problems [45]. For more literature on this theme, one can refer to [46, 47] and the references therein. In this section, following the idea of [42], we assume that B in (1) is rank deficient, that is, rank(B) < n, so that the coefficient matrix of (1) is singular. We will discuss the semiconvergence region of the parameters ω, τ, and α in the GASOR method for solving the singular saddle point problem (1). The GASOR method has a faster convergence rate than the GSSOR [48], MSSOR [49], and GMSSOR [42, 43] methods for singular saddle point problems even when optimal or experimentally found optimal parameters are chosen for the latter.
We first recall some known results about stationary iterative methods for singular linear systems. We denote the range and the null space of a matrix E by R(E) and N(E), respectively. The smallest nonnegative integer k such that rank(E^k) = rank(E^{k+1}) is called the index of E and is denoted by k = index(E). For a matrix E ∈ C^{n×n}, the splitting E = M − N is a nonsingular splitting if M is nonsingular. Let T = M^{-1}N and c = M^{-1}b; then solving the linear system Ex = b is equivalent to considering the iterative scheme x^{(k+1)} = T x^{(k)} + c, k = 0, 1, 2, .... (31) When E is singular, the semiconvergence of the iteration scheme (31) is precisely described in [50, 51].
Definition 6 (see [42]). The pseudospectral radius of a matrix T is defined as ν(T) = max{|λ| : λ ∈ σ(T), λ ≠ 1}, where σ(T) stands for the spectrum of the matrix T.
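Definition 6 translates directly into a few lines of code: compute the spectrum, discard the eigenvalue 1 (up to a numerical tolerance, an implementation detail assumed here), and take the largest remaining modulus:

```python
import numpy as np

def pseudospectral_radius(T, tol=1e-10):
    """Pseudospectral radius of Definition 6: the largest modulus among
    the eigenvalues of T that are different from 1."""
    moduli = [abs(lam) for lam in np.linalg.eigvals(T)
              if abs(lam - 1.0) > tol]
    return max(moduli, default=0.0)

# A toy iteration matrix of a semiconvergent scheme: the eigenvalue 1
# (from the singular part) is ignored, the contractive modes are not.
T = np.diag([1.0, 0.5, -0.25])
assert abs(pseudospectral_radius(T) - 0.5) < 1e-12
assert pseudospectral_radius(np.eye(2)) == 0.0   # only eigenvalue 1
```

The semiconvergence results below amount to showing that this quantity is strictly less than one for the GASOR iteration matrix under suitable parameter choices.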
Theorem 8. Let A and Q be symmetric positive definite, and let B be rank deficient. If μ is an eigenvalue of the matrix Q^{-1}B^T A^{-1}B, then the eigenvalues of the matrix T_{ω,τ,α} are 1, a repeated eigenvalue determined by the relaxation parameters, and the remaining eigenvalues, which are the roots of the quadratic equation (21).
By Theorem 8, the eigenvalues of T_{ω,τ,α} are 1, the repeated parameter-dependent eigenvalue, and the roots of the quadratic equation (21). By Definition 6 and Lemma 7, we only need to consider the eigenvalues of T_{ω,τ,α} other than 1.
Lemma 11. Let A and Q be symmetric positive definite, and let B be rank deficient. Then ν(T_{ω,τ,α}) < 1 for all ω, τ, and α satisfying conditions (44), where μ_max is the maximum eigenvalue of Q^{-1}B^T A^{-1}B.
Using the same technique as in the proof of Theorem 3, conditions (44) are obtained. This completes the proof of Lemma 11.
Based on Definition 6 and Lemmas 10 and 11, we can establish sufficient conditions for the semiconvergence of the GASOR method.
Theorem 12. Let A and Q be symmetric positive definite, and let B be rank deficient. Then the GASOR method is semiconvergent for all ω, τ, and α satisfying the conditions of Lemma 11, where μ_max is the maximum nonzero eigenvalue of Q^{-1}B^T A^{-1}B.

The Modified ASOR Method for Saddle Point Problems
3.1. The Modified ASOR Method. To obtain new iteration methods for saddle point problems, some authors have added new parameters to existing methods and obtained better methods [11, 20, 53]. Based on the preconditioned HSS (PHSS) method derived by Bai et al. [2], Bai and Golub [20] and Li et al. [53] obtained the AHSS and parameterized preconditioned HSS (PPHSS) methods, respectively. This idea motivates us to propose the modified ASOR (MASOR) method for solving the saddle point problem (1) by making another splitting of the coefficient matrix of (1), as shown below, with ω > 0 and Q ∈ R^{n×n} a symmetric positive definite matrix. Let τ be a nonzero real and let α be real. Then, we consider the following modified ASOR iteration scheme for solving the saddle point problem (1), where T_{ω,τ,α} is the iteration matrix of the MASOR method, whose form is given below. The algorithmic description of the MASOR method is as follows.
The MASOR Method. Given initial vectors x^(0) ∈ R^m and y^(0) ∈ R^n and three real relaxation factors ω > 0, τ ≠ 0, and α ∈ R, for k = 0, 1, 2, ..., until the iteration sequence {(x^(k), y^(k))} converges to the exact solution of the saddle point problem (1), compute the MASOR update. Note that the MASOR method involves the three parameters ω, τ, and α, and that the matrix appearing in the update is invertible, by the definiteness of A and Q, if and only if the condition below holds.
Note that the MASOR method with α = 1/2 reduces to the ASOR method proposed by Njeru and Guo [35].
, which is a contradiction. Thus, λ ≠ 0. In a similar manner, we can prove that λ is neither the repeated parameter-dependent eigenvalue nor equal to 1. We assume that u is the eigenvector corresponding to λ; thus, u ≠ 0. Then, by (54), (55) holds, which yields (56). Introducing the auxiliary scalar defined in (57) and substituting it into (56), we obtain (58). Combining (57) and (58) results in (59). We rewrite (57) and (59) as (60), which is equivalent to (61). This implies that T_{ω,τ,α} x = λx, where x = (u^T, v^T)^T ≠ 0, which means that λ is an eigenvalue of T_{ω,τ,α}. We can prove the second assertion by reversing the process.

Theorem 14. Let A and Q be symmetric positive definite, and let B be of full column rank. Assume that all eigenvalues μ of Q^{-1}B^T A^{-1}B are real and positive. Then, the MASOR method is convergent for all ω, τ, and α satisfying the conditions stated below, where μ_max is the maximum eigenvalue of Q^{-1}B^T A^{-1}B.
Proof. Equation (54) can be equivalently written as (65). In terms of (65), we derive (66). Evidently, the first and second inequalities in (66) hold for all ω > 0 and τ > 0. The third inequality in (66) follows from τ² > 0 and the fact that μ is a real and positive eigenvalue of the matrix Q^{-1}B^T A^{-1}B, and the remaining claims are not difficult to verify. Therefore, the proof of this theorem is completed.
Mathematical Problems in Engineering

From (63), we get the following corollary.
Corollary 15. Let A and Q be symmetric positive definite, and let B be of full column rank. If μ is an eigenvalue of the matrix Q^{-1}B^T A^{-1}B, then the eigenvalues of the matrix T_{ω,τ,α} are given either by the repeated parameter-dependent eigenvalue or by the roots of (69).

Semiconvergence of the MASOR Method for Singular Saddle Point Problems

In this section, we assume that B in (1) is rank deficient, and we discuss the semiconvergence region of the parameters ω, τ, and α in the MASOR method for solving the singular saddle point problem (1).

Theorem 16. Let A and Q be symmetric positive definite, and let B be rank deficient. If μ is an eigenvalue of the matrix Q^{-1}B^T A^{-1}B, then the eigenvalues of the matrix T_{ω,τ,α} are 1, a repeated eigenvalue determined by the relaxation parameters, and the remaining eigenvalues, which are the roots of the quadratic equation (63).

Proof. We assume that rank(B) = n − r (r > 0); with a strategy quite similar to that used in Theorem 8, we deduce that T_{ω,τ,α} has at least r eigenvalues equal to 1 and r copies of the repeated parameter-dependent eigenvalue, where μ_i (i = 1, ..., n − r) are the eigenvalues of Q^{-1}B^T A^{-1}B, and that the remaining eigenvalues of T_{ω,τ,α} are the roots of (63). Thus, the theorem is proved.
By Lemma 17, we have proved that index(I − T_{ω,τ,α}) = 1, which satisfies condition (1) in Lemma 7. In the sequel, we need to verify condition (2) in Lemma 7.
By Theorem 16, the eigenvalues of T_{ω,τ,α} are 1, the repeated parameter-dependent eigenvalue, and the roots of the quadratic equation (63). By Definition 6 and Lemma 7, we only need to consider the eigenvalues of T_{ω,τ,α} other than 1.

Lemma 18.
Let A and Q be symmetric positive definite, and let B be rank deficient. Then ν(T_{ω,τ,α}) < 1 for all ω, τ, and α satisfying the conditions stated below, where μ_max is the maximum eigenvalue of Q^{-1}B^T A^{-1}B.
Together with Definition 6 and Lemmas 17 and 18, we obtain the following sufficient conditions for the semiconvergence of the MASOR method.
Theorem 19. Let A and Q be symmetric positive definite, and let B be rank deficient. Then the MASOR method is semiconvergent for all ω, τ, and α satisfying the conditions stated below, where μ_max is the maximum nonzero eigenvalue of Q^{-1}B^T A^{-1}B.

The Optimal Convergence Factors of the GASOR and MASOR Methods
According to the theory of iterative methods, the optimal parameters of the GASOR and MASOR methods are given by (ω*, τ*, α*) = arg min_{(ω,τ,α)} ρ(T_{ω,τ,α}), where arg min_{(ω,τ,α)} ρ(T_{ω,τ,α}) denotes the values of ω, τ, and α for which the spectral radius of the iteration matrix attains its minimum. This problem has been discussed in several articles [2, 5, 8, 10, 12, 19]. Golub et al. [8] obtained the optimal parameter of the SOR-like method; Bai et al. and Li and Kong studied the optimal parameters of the GSOR-like and GSOR methods in [5, 10], respectively; Bai [19] discussed the optimal parameters of the HSS-like method; and Bai et al. [2] studied the optimal parameters of the PHSS method, among others. To find the optimal parameter values for the GASOR and MASOR methods, we need to analyze the moduli of the eigenvalues of the iteration matrices of the two methods. Based on the proofs of Theorems 8 and 16, each iteration matrix has a repeated parameter-dependent eigenvalue, and the remaining eigenvalues satisfy (21) and (63), respectively. Note that all these eigenvalues are available and depend on the relaxation parameters and on μ. The optimal parameters could, in principle, be obtained by minimizing these eigenvalue functions; however, for most iterative methods, especially those with multiple parameters, this analysis is very complicated, so obtaining the optimal parameters in closed form is very difficult. The parameters in this paper are therefore chosen based on prior experience and trial and error.
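The trial-and-error strategy above can be mechanized as a brute-force grid search over the spectral radius of the iteration matrix. The sketch below shows the idea on a deliberately simple single-parameter analogue (damped Jacobi, not one of the paper's methods, chosen only because its iteration matrix is easy to write down):

```python
import numpy as np
from itertools import product

def spectral_radius(T):
    """rho(T): largest eigenvalue modulus."""
    return max(abs(np.linalg.eigvals(T)))

def grid_search(build_T, grids):
    """Brute-force search for the relaxation parameters minimizing the
    spectral radius of the iteration matrix, mimicking the
    trial-and-error strategy described above.  build_T is a
    user-supplied model mapping a parameter tuple to the matrix."""
    best, best_rho = None, np.inf
    for params in product(*grids):
        rho = spectral_radius(build_T(*params))
        if rho < best_rho:
            best, best_rho = params, rho
    return best, best_rho

# Toy single-parameter analogue: damped Jacobi for a small SPD matrix,
# with iteration matrix T(omega) = I - omega * D^{-1} A.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
D_inv = np.diag(1.0 / np.diag(A))
build = lambda w: np.eye(2) - w * D_inv @ A

(w_opt,), rho_opt = grid_search(build, [np.linspace(0.1, 1.5, 141)])
assert rho_opt < 1.0                 # the best parameter is convergent
assert abs(w_opt - 1.0) < 0.02       # optimum near 2/(lam_min + lam_max)
```

For the GASOR and MASOR methods the same search would run over a three-dimensional parameter grid, which is exactly why closed-form optimal parameters are so much harder to obtain than in the single-parameter case.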

Numerical Experiments
In this section, numerical examples illustrate the superiority of the GASOR and MASOR methods over the ASOR, SOR-like, GSOR, and MSOR-like methods for solving the nonsingular saddle point problem (1), and show the advantages of the GASOR and MASOR methods over the GSOR, GSSOR, MSSOR, and GMSSOR methods for solving the singular saddle point problem (1). All numerical procedures are carried out using MATLAB 6.5 on a personal computer with an Intel(R) Pentium(R) CPU G3240T (2.70 GHz), 2.0 GB of memory, and the Windows 7 operating system.
Example 1. Consider the Stokes flow problem [2]: find u and w such that

    −νΔu + ∇w = f   in Ω,
    ∇·u = 0          in Ω,
    u = 0            on ∂Ω,                                      (82)

where Ω = (0, 1) × (0, 1) ⊂ R², ∂Ω is the boundary of Ω, ν stands for the viscosity scalar, u = (u₁, u₂)^T is a vector-valued function representing the velocity, Δ is the componentwise Laplace operator, and w is a scalar function representing the pressure. Discretizing (82) with the upwind scheme [54, 55], we obtain a system of the form (1), whose blocks are given below.

Table 1: Choice of the matrix Q.
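A standard discretization of (82) leads to Kronecker-product test matrices of the kind used in [2]. The sketch below is a hypothetical reconstruction: the exact scalings of the one-dimensional blocks T and F are assumptions, not taken from this paper, but the resulting A is symmetric positive definite and B has full column rank, as required by problem (1):

```python
import numpy as np

# Hypothetical reconstruction of standard Stokes test matrices for a
# problem of type (82) (cf. [2]); the exact scalings of T and F below
# are assumptions, not taken from this paper.
p = 8                       # grid parameter; m = 2*p**2, n = p**2
h = 1.0 / (p + 1)           # mesh size
nu = 1.0                    # viscosity

def tridiag(a, b, c, size):
    """size-by-size tridiagonal matrix with subdiagonal a,
    diagonal b, and superdiagonal c."""
    return (a * np.eye(size, k=-1) + b * np.eye(size)
            + c * np.eye(size, k=1))

T = (nu / h**2) * tridiag(-1.0, 2.0, -1.0, p)   # 1D Laplacian part
F = (1.0 / h) * tridiag(-1.0, 1.0, 0.0, p)      # 1D difference part
I = np.eye(p)

# A is 2p^2 x 2p^2 block diagonal; B is 2p^2 x p^2.
L2 = np.kron(I, T) + np.kron(T, I)
A = np.block([[L2, np.zeros_like(L2)], [np.zeros_like(L2), L2]])
B = np.vstack([np.kron(I, F), np.kron(F, I)])

m, n = B.shape
assert (m, n) == (2 * p**2, p**2)
assert np.allclose(A, A.T)                       # A symmetric
assert np.all(np.linalg.eigvalsh(A) > 0)         # and positive definite
assert np.linalg.matrix_rank(B) == n             # full column rank
```

The sizes m = 2p² and n = p² are consistent with the problem sizes m = 512, 1152, and 2048 reported in Figures 1-3 for p = 16, 24, and 32.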
All computations for the SOR-like, GSOR, MSOR-like, ASOR, GASOR, and MASOR methods are started from the initial vector x^(0) = (0, 0, ..., 0)^T, and the iteration is terminated once the relative residual of the current iterate x^(k) satisfies the stopping criterion or the maximum prescribed number of iterations k_max = 15 is exceeded. In Table 2, for various problem sizes m, we list the iteration parameters of the SOR-like, GSOR, MSOR-like, ASOR, GASOR, and MASOR methods used in our implementations. For the GSOR, SOR-like, and MSOR-like methods, we adopt their optimal parameters as in [5, 8, 9]. In addition, the parameters taken by the ASOR method are the same as in [35], and we choose the parameters of the GASOR and MASOR methods that result in the least numbers of iterations for this numerical example. However, note that explicit expressions for these parameters cannot be obtained, and we choose them only by trial and error. The corresponding convergence factors ρ*, ρ_exp, and ρ for various problem sizes m are also reported in Table 2.
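A relative-residual stopping test of this kind can be written as a small helper; the tolerance below is an assumption for illustration, since the exact value used in the experiments is not restated here:

```python
import numpy as np

def converged(A, B, x, y, b, q, tol=1e-6):
    """Relative-residual stopping test of the form
    ||r^(k)|| <= tol * ||r^(0)|| for system (1), with the zero vector
    as the initial guess (the tolerance here is an assumption)."""
    r = np.hstack([b - A @ x - B @ y, q - B.T @ x])
    r0 = np.hstack([b, q])       # residual at the zero initial guess
    return np.linalg.norm(r) <= tol * np.linalg.norm(r0)

# The exact solution passes the test; the zero initial guess does not.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0], [0.0]])
x_true, y_true = np.array([1.0, -1.0]), np.array([2.0])
b, q = A @ x_true + B @ y_true, B.T @ x_true
assert converged(A, B, x_true, y_true, b, q)
assert not converged(A, B, np.zeros(2), np.zeros(1), b, q)
```

Using the initial residual in the denominator makes the criterion invariant under a uniform scaling of the right-hand side, which is why it is the usual choice when comparing methods across problem sizes.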
In Table 3, we present the iteration numbers (IT), CPU times (CPU), and relative residuals (RES) of the tested iteration methods with different problem sizes m. In this table, we use bold numbers to indicate the smallest and second smallest CPU times and iteration numbers in each column.
As observed in Table 2, the convergence factors of the GASOR and MASOR methods are smaller than that of the SOR-like method. We can also see that the spectral radii of the GASOR and MASOR methods in Cases 1 and 2 for this example are almost the same as those of the GSOR, MSOR-like, and ASOR methods, while the GASOR and MASOR methods require much less time and fewer iteration steps to reach the stopping criterion than the other four methods, even though the relaxation parameters of the GASOR and MASOR methods are not optimal and only lie in the convergence regions of these methods. From the results in Table 3, we see that the GASOR and MASOR methods compare favorably with the SOR-like, GSOR, MSOR-like, and ASOR methods (see also the remarks accompanying Table 2). In order to better understand the numerical results, we present graphs of log10(RES) against the number of iterations in Table 3 for p = 16, p = 24, and p = 32, respectively, in Figures 1-3. From these three figures, we note that all six methods are convergent, while the GASOR and MASOR methods converge faster than the other methods. Moreover, it can be seen that the GASOR and MASOR methods exhibit "semilocal convergence" behavior at different iteration points, from which we can deduce that the parameters chosen for the GASOR and MASOR methods are not optimal and only lie in the convergence regions of these two new methods; this means that the GASOR and MASOR methods may perform better with a better choice of parameters. The numerical results in this example show the feasibility and effectiveness of the GASOR and MASOR methods for solving nonsingular saddle point problems.
Example 2. Consider the singular saddle point problem (1) with the coefficient matrix blocks of [34], in which A is built from Kronecker products of the form I ⊗ T + T ⊗ I and B is rank deficient, and the right-hand side vector is chosen so that the singular system (1) is consistent, with the exact solution prescribed below. We choose the preconditioning matrix Q as a diagonal matrix constructed from A and B.
In Table 4, for various problem sizes m, we list the iteration parameters of the tested methods. For the GSOR, MSSOR, and GSSOR methods, we take their optimal parameters as in [5, 6, 49]. For the GMSSOR method, the parameters are chosen as in [42]. Furthermore, we take the parameters of the GASOR and MASOR methods that result in the least numbers of iterations for this numerical example; as mentioned in Example 1, we choose them only by trial and error. The pseudospectral radii of the iteration matrices of these methods are also presented in this table.
Comparing the results in Table 4, we observe that the pseudospectral radii of the GASOR and MASOR methods are smaller than those of the MSSOR and GMSSOR methods. It can also be seen that the pseudospectral radii of the GASOR and MASOR methods for Example 2 are almost the same as those of the GSOR and GSSOR methods. However, the GASOR and MASOR methods outperform the GSOR and GSSOR methods in terms of CPU times and iteration steps. In Table 5, we present the iteration numbers (IT), CPU times (CPU), and relative residuals (RES) of the tested iteration methods with different problem sizes m; the bold numbers indicate the smallest and second smallest CPU times and iteration numbers in each column. The data presented in Table 5 reveal that the GSOR, MSSOR, GSSOR, GMSSOR, GASOR, and MASOR methods all succeed in producing high-quality approximate solutions. The iteration numbers and CPU times of all methods increase as the problem size grows. Moreover, the GASOR and MASOR methods use the least iteration numbers and CPU times compared with the GSOR, MSSOR, GSSOR, and GMSSOR methods, and their performance is essentially the same. The performance of the GASOR and MASOR methods is thus much better than that of the other methods, even though the parameters of these two methods are not optimal while the other methods take optimal or experimentally found optimal parameters. Hence, it is anticipated that the GASOR and MASOR methods with optimal parameters would be better still.
In Figure 4, graphs of log10(RES) against the number of iterations in Table 5 are shown for four different problem sizes. The figure clearly shows that the six methods are semiconvergent, while the GASOR and MASOR methods converge faster. Note that the parameters of the GASOR and MASOR methods are not optimal and only lie in the semiconvergence regions of these two methods.

Conclusions
In this paper, we proposed two new methods, the GASOR and MASOR methods, and studied the convergence and semiconvergence of these methods for solving nonsingular and singular saddle point problems, respectively. Numerical results given in Section 5 (Tables 1-5 and Figures 1-4) show that the convergence rates of the GASOR and MASOR methods are better than those of the SOR-like, GSOR, MSOR-like, GSSOR, MSSOR, GMSSOR, and ASOR methods, even though the latter are implemented with optimal or experimentally found optimal parameters. The numerical results demonstrate the feasibility and effectiveness of the GASOR and MASOR methods for solving both nonsingular and singular saddle point problems.
Since optimal parameters were not used for the GASOR and MASOR methods in the numerical experiments, it is anticipated that these methods with optimal parameters would perform even better. Thus, it would be desirable to find the parameters for which the convergence rates of the GASOR and MASOR methods are best. Future work will include numerical and theoretical studies for finding the optimal values of ω, τ, and α for the GASOR method and for the MASOR method.

Figure 1: The iteration curves of the algorithms when m = 512, for Case 1 and Case 2, respectively.

Figure 2: The iteration curves of the algorithms when m = 1152, for Case 1 and Case 2, respectively.

Figure 3: The iteration curves of the algorithms when m = 2048, for Case 1 and Case 2, respectively.

Table 2:
Parameters for the six methods and the corresponding spectral radii. The GASOR method uses the least iteration numbers and CPU times compared with the SOR-like, GSOR, MSOR-like, ASOR, and MASOR methods, and the MASOR method is also superior to the SOR-like, GSOR, MSOR-like, and ASOR methods in both iteration numbers and CPU times, especially for large problem size m.

Table 3:
Numerical results for the six methods for Case 1 and Case 2, respectively.

Table 4:
Parameters for the six methods and the corresponding pseudospectral radii.

Table 5:
Numerical results for the six methods.