Modified Preconditioned GAOR Methods for Systems of Linear Equations

Three kinds of preconditioners are proposed to accelerate the generalized AOR (GAOR) method for the linear system arising from the generalized least squares problem. Convergence and comparison results are established. The comparison results show that the preconditioned generalized AOR (PGAOR) methods converge faster than the original GAOR method. Finally, numerical results are reported to confirm the validity of the proposed methods.


Introduction
Consider the generalized least squares problem

min_{x∈R^n} (Ax − b)^T W^{−1} (Ax − b),

where x ∈ R^n, A ∈ R^{m×n}, b ∈ R^m, and the variance-covariance matrix W ∈ R^{m×m} is a known symmetric positive definite matrix. This problem has many scientific applications; one of them is parameter estimation in mathematical models [1, 2]. To solve the problem, one can instead solve a linear system of the equivalent form

Hy = f, (2)

where the coefficient matrix H and the right-hand side f are given by the block partition in (3). Without loss of generality, we assume that H = I − L − U, where I is the identity matrix, and L and U are the strictly lower and strictly upper triangular matrices obtained from H, respectively. To obtain approximate solutions of the linear system (2), many iterative methods such as Jacobi, Gauss-Seidel (GS), successive overrelaxation (SOR), and accelerated overrelaxation (AOR) have been studied [3-8]. These methods give good results but have a serious drawback: they require computing the inverses of two submatrices in (3). To avoid this drawback, Darvishi and Hessari [9] studied the convergence of the generalized AOR (GAOR) method when the coefficient matrix H is diagonally dominant. The GAOR method [10, 11] can be defined as follows:

y^{(k+1)} = T_{γ,ω} y^{(k)} + ω(I − γL)^{−1} f,  k = 0, 1, 2, ....
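As a numerical illustration of the equivalence above, the following sketch solves a small generalized least squares problem through an augmented block system and cross-checks it against the normal-equations solution. The augmented formulation [[W, A], [A^T, 0]] used here is the standard one and is assumed, not taken from the paper's display; the data are random and purely illustrative.

```python
import numpy as np

# GLS problem min_x (Ax - b)^T W^{-1} (Ax - b), solved via the (assumed
# standard) augmented block system  [[W, A], [A^T, 0]] [lam; x] = [b; 0],
# where lam = W^{-1}(b - Ax).
rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
M = rng.standard_normal((m, m))
W = M @ M.T + m * np.eye(m)          # symmetric positive definite covariance

K = np.block([[W, A], [A.T, np.zeros((n, n))]])
rhs = np.concatenate([b, np.zeros(n)])
sol = np.linalg.solve(K, rhs)
x_aug = sol[m:]                      # the GLS estimate

# Cross-check: normal equations x = (A^T W^{-1} A)^{-1} A^T W^{-1} b.
Wi = np.linalg.inv(W)
x_ne = np.linalg.solve(A.T @ Wi @ A, A.T @ Wi @ b)
assert np.allclose(x_aug, x_ne)
```

Both routes give the same estimate; the block system avoids forming W^{−1} explicitly, which is what motivates iterative methods for its coefficient matrix.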

Journal of Applied Mathematics
Here, γ and ω are real parameters with ω ≠ 0. The iteration matrix can be written briefly as

T_{γ,ω} = (I − γL)^{−1} [(1 − ω)I + (ω − γ)L + ωU]. (7)

To improve the convergence rate of the GAOR iterative method, a preconditioner can be applied. We transform the original linear system (2) into the preconditioned linear system

PHy = Pf, (8)

where P is the preconditioner. The preconditioned coefficient matrix PH can be expressed in the same split form as (3), and the PGAOR method for solving the preconditioned linear system (8) is then defined by (11). In this paper, we propose three new types of preconditioners and study the convergence rate of the corresponding preconditioned GAOR methods for solving the linear system (2). This paper is organized as follows. In Section 2, some notation, definitions, and preliminary results are presented. In Section 3, three new types of preconditioners are proposed, and the resulting PGAOR methods are compared with the original GAOR method. Finally, in Section 4, a numerical example is provided to confirm the theoretical results.
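The GAOR iteration described above can be sketched in code, assuming the reduced form H = I − L − U stated in the introduction; the matrix data below is illustrative (a strictly diagonally dominant H), not taken from the paper's examples.

```python
import numpy as np

# Minimal sketch of the (G)AOR iteration for H = I - L - U, with iteration
# matrix T = (I - g*L)^{-1} [(1-w)I + (w-g)L + wU]  (g = gamma, w = omega).
def gaor_iteration_matrix(H, gamma, omega):
    n = H.shape[0]
    I = np.eye(n)
    L = -np.tril(H, -1)              # strictly lower part: H = I - L - U
    U = -np.triu(H, 1)
    M = np.linalg.inv(I - gamma * L)
    return M @ ((1 - omega) * I + (omega - gamma) * L + omega * U), omega * M

# Hypothetical, strictly diagonally dominant test matrix.
H = np.array([[ 1.0, -0.2, -0.1],
              [-0.3,  1.0, -0.2],
              [-0.1, -0.2,  1.0]])
f = np.ones(3)

T, N = gaor_iteration_matrix(H, gamma=0.9, omega=1.0)
rho = max(abs(np.linalg.eigvals(T)))
assert rho < 1                       # convergent for this H

y = np.zeros(3)
for _ in range(200):                 # y <- T y + w (I - gL)^{-1} f
    y = T @ y + N @ f
assert np.allclose(H @ y, f)         # the fixed point solves Hy = f
```

The fixed point of the iteration satisfies (I − γL)y = [(1 − ω)I + (ω − γ)L + ωU]y + ωf, which simplifies exactly to ωHy = ωf, so the iteration is consistent with (2).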

Preliminaries
For a vector x ∈ R^n, x ≥ 0 (x > 0) denotes that all components of x are nonnegative (positive). For two vectors x, y ∈ R^n, x ≥ y (x > y) means that x − y ≥ 0 (x − y > 0). These definitions carry over immediately to matrices. A matrix A is said to be irreducible if the directed graph of A is strongly connected. ρ(A) denotes the spectral radius of A. Some useful results are as follows.
Lemma 1. Let A ≥ 0 be an irreducible matrix. Then:
(a) A has a positive real eigenvalue equal to its spectral radius ρ(A);
(b) to ρ(A) there corresponds a positive eigenvector;
(c) ρ(A) is a simple eigenvalue of A.

Lemma 2. Let A ≥ 0. If αx ≤ Ax for some nonnegative vector x ≠ 0, then α ≤ ρ(A). Moreover, if A is irreducible and if 0 ≠ αx ≤ Ax ≤ βx for some nonnegative vector x, then α ≤ ρ(A) ≤ β and x is a positive vector.
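The two lemmas can be checked numerically on a small example. The matrix below is a hypothetical nonnegative irreducible matrix (its directed graph is strongly connected); it is not from the paper.

```python
import numpy as np

# Illustration of Lemmas 1-2 (Perron-Frobenius theory): a nonnegative
# irreducible matrix attains rho(A) at a real eigenvalue with a positive
# eigenvector, and alpha*x <= Ax <= beta*x brackets rho(A).
A = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.2, 0.3, 0.0]])      # nonnegative, strongly connected graph

vals, vecs = np.linalg.eig(A)
k = np.argmax(abs(vals))
rho = vals[k].real                   # Perron root (real by Lemma 1(a))
x = vecs[:, k].real
x = x / x[np.argmax(abs(x))]         # scale so the largest entry is +1
assert x.min() > 0                   # Perron eigenvector is positive

# Lemma 2: for a positive test vector v, the componentwise ratios
# (Av)_i / v_i bracket the spectral radius.
v = np.ones(3)
ratios = (A @ v) / v
assert ratios.min() <= rho <= ratios.max()
```

This bracketing argument, with the Perron eigenvector of the GAOR iteration matrix as the test vector, is exactly the mechanism used in the comparison theorems of Section 3.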

Preconditioned GAOR Methods
To solve the linear system (2) with the coefficient matrix H in (3), we consider three preconditioners P_1, P_2, and P_3, each depending on a parameter required to be positive.
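The displayed definitions of the three preconditioners are specific to the paper; the following minimal sketch only illustrates the general mechanism with a hypothetical preconditioner of the form P = I + S, showing that any nonsingular left preconditioner changes the splitting (and hence the iteration) without changing the solution.

```python
import numpy as np

# Left preconditioning replaces Hy = f by (PH)y = Pf. Here S is a
# hypothetical choice that clears the first column of H below the diagonal;
# the paper's P_1, P_2, P_3 are different specific matrices of this kind.
H = np.array([[ 1.0, -0.2, -0.1],
              [-0.3,  1.0, -0.2],
              [-0.1, -0.2,  1.0]])
f = np.array([1.0, 2.0, 3.0])

S = np.zeros_like(H)
S[1:, 0] = -H[1:, 0]
P = np.eye(3) + S                    # unit lower triangular, so nonsingular

y = np.linalg.solve(H, f)
y_pre = np.linalg.solve(P @ H, P @ f)
assert np.allclose(y, y_pre)         # same solution, different splitting
```

Because P is unit triangular it is always nonsingular, so the preconditioned system is consistent with (2) by construction.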
The preconditioned coefficient matrix H_i = P_i H can be expressed as in (15). Based on the discussion above, H_i can be split in the same way as (3), and similarly for each i. The preconditioned GAOR methods for solving H_i y = f_i are then defined analogously, with iteration matrices T*_i (i = 1, 2, 3) given in (22).

Next, we study the convergence of the PGAOR methods. For simplicity and without loss of generality, we impose the assumptions stated in (23). Then we have the following theorem.

Theorem 3. Let T_{γ,ω} and T*_1 be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), which are defined by (7) and (22), respectively. If the matrix H in (3) is an irreducible matrix, then it holds that

ρ(T*_1) ≤ ρ(T_{γ,ω}) < 1 if ρ(T_{γ,ω}) < 1,  and  ρ(T*_1) ≥ ρ(T_{γ,ω}) > 1 if ρ(T_{γ,ω}) > 1.

Proof. By some simple calculations on (7), one obtains an explicit expression for T_{γ,ω}. Since H is irreducible, it follows from the above assumptions that T_{γ,ω} is nonnegative and irreducible; in the same way, T*_1 is nonnegative and irreducible. By Lemma 1, there exists a positive vector x > 0 such that T_{γ,ω} x = λx, where λ = ρ(T_{γ,ω}).

From this, one easily obtains a componentwise expression for T_{γ,ω}x, that is, for λx. With the same vector x > 0, an analogous expression holds for T*_1 x. Using (22), (26), and (28), we can compare T*_1 x with λx entrywise; meanwhile, the sign of the difference T*_1 x − λx is governed by the factor λ − 1. So far, in view of the above assumptions, we have the following. If λ = ρ(T_{γ,ω}) > 1, then T*_1 x ≥ λx, and from Lemma 2 we easily get ρ(T*_1) ≥ λ = ρ(T_{γ,ω}). Similarly, if λ = ρ(T_{γ,ω}) < 1, then T*_1 x ≤ λx, so ρ(T*_1) ≤ λ = ρ(T_{γ,ω}). If λ = ρ(T_{γ,ω}) = 1, then we would get T*_1 x = x with x ≠ 0, which contradicts the assumed nonsingularity; this completes the conclusion of the theorem.

Theorem 4. Let T_{γ,ω} and T*_2 be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), which are defined by (7) and (22), respectively. If the matrix H in (3) is an irreducible matrix satisfying the additional condition in (38), then the comparison result (39) holds.

In Example 1, the coefficient matrix is split as in (3). Table 1 reports the spectral radii of the GAOR methods and the PGAOR methods. It shows that the spectral radii of the PGAOR methods are smaller than those of the GAOR methods, so the three proposed preconditioners accelerate the convergence of the GAOR method for the linear system (2). For the matrix of Example 2, H is obviously irreducible; Table 2 shows the spectral radii of the corresponding iteration matrices (with the two problem parameters equal to 8 and 6).
Similarly, the results in Table 2 are in accordance with Theorems 3-5.
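Spectral-radius tables of this kind can be produced as in the following sketch, which compares ρ for the GAOR and PGAOR iteration matrices over several parameter pairs (γ, ω). The matrix H and the preconditioner P = I + S here are illustrative assumptions, not the paper's examples; the general AOR splitting A = D − L − U is used so that PH need not have unit diagonal.

```python
import numpy as np

# Compare rho(GAOR) with rho(PGAOR) for several (gamma, omega) pairs,
# using the AOR iteration matrix for a general splitting A = D - L - U.
def aor_matrix(A, gamma, omega):
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return np.linalg.inv(D - gamma * L) @ (
        (1 - omega) * D + (omega - gamma) * L + omega * U)

def rho(A):
    return max(abs(np.linalg.eigvals(A)))

H = np.array([[ 1.0, -0.2, -0.1],
              [-0.3,  1.0, -0.2],
              [-0.1, -0.2,  1.0]])   # hypothetical test matrix
P = np.eye(3)
P[1:, 0] = -H[1:, 0]                 # illustrative preconditioner P = I + S

rows = []
for gamma, omega in [(0.5, 0.5), (0.8, 0.9), (1.0, 1.0)]:
    rows.append((gamma, omega,
                 rho(aor_matrix(H, gamma, omega)),
                 rho(aor_matrix(P @ H, gamma, omega))))

for gamma, omega, r, rp in rows:
    print(f"gamma={gamma:.1f} omega={omega:.1f}  "
          f"rho(GAOR)={r:.4f}  rho(PGAOR)={rp:.4f}")
```

For this diagonally dominant H, all radii are below one, and for γ = ω = 1 (the Gauss-Seidel case) the preconditioned radius is strictly smaller, in line with the comparison theorems.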

Table 1:
Spectral radii of GAOR method and PGAOR method.

Table 2:
Spectral radii of GAOR method and PGAOR method.

The results in Table 1 are in accordance with Theorems 3-5.

Example 2. The coefficient matrix H in (3) is given by