Journal of Applied Mathematics, Hindawi Publishing Corporation, Volume 2013, Article ID 850986, doi:10.1155/2013/850986

Research Article

Modified Preconditioned GAOR Methods for Systems of Linear Equations

Xue-Feng Zhang, Qun-Fa Cui, and Shi-Liang Wu

School of Mathematics and Statistics, Anyang Normal University, Anyang 455000, China

Academic Editor: Giuseppe Marino

Received 30 January 2013; Accepted 13 May 2013

Copyright © 2013 Xue-Feng Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Three kinds of preconditioners are proposed to accelerate the generalized AOR (GAOR) method for the linear system arising from the generalized least squares problem. Convergence and comparison results are obtained; the comparison results show that the convergence rate of the preconditioned generalized AOR (PGAOR) methods is better than that of the original GAOR method. Finally, some numerical results are reported to confirm the validity of the proposed methods.

1. Introduction

Consider the generalized least squares problem
$$\min_{x\in\mathbb{R}^n}\;(Ax-b)^T W^{-1}(Ax-b), \tag{1}$$
where $x\in\mathbb{R}^n$, $A\in\mathbb{R}^{n\times n}$, $b\in\mathbb{R}^n$, and the variance-covariance matrix $W\in\mathbb{R}^{n\times n}$ is a known symmetric positive definite matrix. This problem has many scientific applications; one of them is parameter estimation in mathematical models [1, 2].

In order to solve the problem, one solves a linear system of the equivalent form
$$Fy=f, \tag{2}$$
where
$$F=\begin{pmatrix} I-B & H\\ K & I-C\end{pmatrix}, \tag{3}$$
with $B\in\mathbb{R}^{p\times p}$, $C\in\mathbb{R}^{q\times q}$, and $p+q=n$. Without loss of generality, we write $F=\mathcal{M}-\mathcal{L}-\mathcal{U}$, where $\mathcal{M}$ is the identity matrix and $\mathcal{L}$ and $\mathcal{U}$ are the block lower and block upper triangular parts obtained from $F$, respectively, so that
$$\mathcal{M}=\begin{pmatrix} I&0\\0&I\end{pmatrix},\qquad \mathcal{L}=\begin{pmatrix}0&0\\-K&0\end{pmatrix},\qquad \mathcal{U}=\begin{pmatrix}B&-H\\0&C\end{pmatrix}. \tag{4}$$

To approximate the solution of the linear system (2), iterative methods such as Jacobi, Gauss-Seidel (GS), successive overrelaxation (SOR), and accelerated overrelaxation (AOR) have been studied by many authors. These methods give good results, but they have a serious drawback: they require the inverses of $I-B$ and $I-C$ in (3). To avoid this drawback, Darvishi and Hessari [9] studied the convergence of the generalized AOR (GAOR) method when the coefficient matrix $F$ is diagonally dominant. The GAOR method [10, 11] is defined by
$$y_{k+1}=\mathcal{T}_{\gamma\omega}y_k+\omega g,\qquad k=0,1,2,\ldots, \tag{5}$$
where
$$\mathcal{T}_{\gamma\omega}=\begin{pmatrix} I&0\\ \gamma K& I\end{pmatrix}^{-1}\left[(1-\omega)I+(\omega-\gamma)\begin{pmatrix}0&0\\-K&0\end{pmatrix}+\omega\begin{pmatrix}B&-H\\0&C\end{pmatrix}\right],\qquad g=\begin{pmatrix} I&0\\ -\gamma K& I\end{pmatrix}f. \tag{6}$$
Here, $\omega$ and $\gamma$ are real parameters with $\omega\neq 0$. The iteration matrix can be written explicitly as
$$\mathcal{T}_{\gamma\omega}=\begin{pmatrix}(1-\omega)I+\omega B & -\omega H\\ \omega(\gamma-1)K-\gamma\omega KB & (1-\omega)I+\omega C+\omega\gamma KH\end{pmatrix}. \tag{7}$$
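The GAOR iteration (5)–(7) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's code; the function names (`gaor_matrix`, `gaor_solve`) are our own, and the blocks $B$, $C$, $H$, $K$ are assumed given.

```python
import numpy as np

def gaor_matrix(B, C, H, K, omega, gamma):
    """Iteration matrix of the GAOR method, built blockwise as in (7)."""
    p, q = B.shape[0], C.shape[0]
    top = np.hstack([(1 - omega) * np.eye(p) + omega * B, -omega * H])
    bot = np.hstack([omega * (gamma - 1) * K - gamma * omega * K @ B,
                     (1 - omega) * np.eye(q) + omega * C + omega * gamma * K @ H])
    return np.vstack([top, bot])

def gaor_solve(B, C, H, K, f, omega, gamma, tol=1e-10, maxit=1000):
    """GAOR iteration y_{k+1} = T y_k + omega*g for F y = f, cf. (5)-(6)."""
    p, q = B.shape[0], C.shape[0]
    T = gaor_matrix(B, C, H, K, omega, gamma)
    # g = [[I, 0], [-gamma*K, I]] f, cf. (6)
    M = np.block([[np.eye(p), np.zeros((p, q))], [-gamma * K, np.eye(q)]])
    g = M @ f
    y = np.zeros(p + q)
    for _ in range(maxit):
        y_new = T @ y + omega * g
        if np.linalg.norm(y_new - y, np.inf) < tol:
            return y_new
        y = y_new
    return y
```

A fixed point of the iteration satisfies $\omega F y = \omega\,(\mathcal{M}-\gamma\mathcal{L})g$, which reduces to $Fy=f$ with $g$ as in (6), so the sketch is consistent with the splitting above.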

To improve the convergence rate of the GAOR iterative method, a preconditioner can be applied: the original linear system (2) is transformed into the preconditioned linear system
$$PFy=Pf, \tag{8}$$
where $P$ is the preconditioner. $PF$ can be expressed as
$$PF=\begin{pmatrix} I-B^{*} & H^{*}\\ K^{*} & I-C^{*}\end{pmatrix}. \tag{9}$$
The PGAOR method for solving the preconditioned linear system (8) is then defined by
$$y_{k+1}=\mathcal{T}^{*}_{\gamma\omega}y_k+\omega g^{*},\qquad k=0,1,2,\ldots, \tag{10}$$
where
$$\mathcal{T}^{*}_{\gamma\omega}=\begin{pmatrix}(1-\omega)I+\omega B^{*} & -\omega H^{*}\\ \omega(\gamma-1)K^{*}-\gamma\omega K^{*}B^{*} & (1-\omega)I+\omega C^{*}+\gamma\omega K^{*}H^{*}\end{pmatrix},\qquad g^{*}=\begin{pmatrix} I&0\\ -\gamma K^{*}& I\end{pmatrix}Pf. \tag{11}$$
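The point of (8)–(9) is that any nonsingular $P$ leaves the solution unchanged while changing the iteration matrix. A minimal NumPy sketch (the blocks and the lower block-triangular $P$ below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical small blocks with the sign pattern assumed later in (23)
p = q = 2
B = np.full((p, p), 0.1); C = np.full((q, q), 0.1)
H = np.full((p, q), -0.1); K = np.full((q, p), -0.1)
F = np.block([[np.eye(p) - B, H], [K, np.eye(q) - C]])
f = np.ones(p + q)

# Any nonsingular P leaves the solution of (2) unchanged: PFy = Pf
P = np.block([[np.eye(p), np.zeros((p, q))],
              [np.full((q, p), 0.05), np.eye(q)]])  # illustrative choice
y1 = np.linalg.solve(F, f)
y2 = np.linalg.solve(P @ F, P @ f)

# The starred blocks of (9) are read off from PF
PF = P @ F
B_star = np.eye(p) - PF[:p, :p]
H_star = PF[:p, p:]
K_star = PF[p:, :p]
C_star = np.eye(q) - PF[p:, p:]
```

Feeding the starred blocks into the GAOR iteration matrix of (7) yields exactly the PGAOR matrix (11).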

In this paper, we propose three new types of preconditioners and study the convergence rate of the preconditioned GAOR methods for solving the linear system (2). This paper is organized as follows. In Section 2, some notations, definitions, and preliminary results are presented. In Section 3, three new types of preconditioners are proposed, and the convergence rates of the resulting PGAOR methods are compared with that of the original GAOR method. Lastly, in Section 4, numerical examples are provided to confirm the theoretical results.

2. Preliminaries

For a vector $x\in\mathbb{R}^n$, $x\geq 0$ ($x>0$) denotes that all components of $x$ are nonnegative (positive). For two vectors $x,y\in\mathbb{R}^n$, $x\geq y$ ($x>y$) means that $x-y\geq 0$ ($x-y>0$). These definitions carry over immediately to matrices. A matrix $A$ is said to be irreducible if the directed graph of $A$ is strongly connected. $\rho(A)$ denotes the spectral radius of $A$. Some useful results are as follows.
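Irreducibility can be checked computationally via the standard characterization that $A$ is irreducible iff $(I+|A|)^{n-1}$ has no zero entries. A small sketch (the helper name is ours):

```python
import numpy as np

def is_irreducible(A):
    """A is irreducible iff the directed graph of A is strongly connected,
    which holds iff (I + |A|)^(n-1) is entrywise positive."""
    n = A.shape[0]
    # Boolean adjacency pattern plus self-loops
    M = np.eye(n) + (np.abs(A) > 0)
    P = np.linalg.matrix_power(M, n - 1)
    return bool(np.all(P > 0))
```

For the small dimensions used in the examples below, this dense check is entirely adequate; for large sparse matrices one would instead run a strongly-connected-components algorithm.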

Lemma 1 (see [7]).

Let $A\geq 0$ be an irreducible matrix. Then:

(1) $A$ has a positive eigenvalue equal to its spectral radius;

(2) $A$ has an eigenvector $x>0$ corresponding to $\rho(A)$;

(3) $\rho(A)$ is a simple eigenvalue of $A$.

Lemma 2 (see [12]).

Let $A$ be a nonnegative matrix. Then:

(1) if $\alpha x\leq Ax$ for some nonnegative vector $x$, $x\neq 0$, then $\alpha\leq\rho(A)$;

(2) if $Ax\leq\beta x$ for some positive vector $x$, then $\rho(A)\leq\beta$. Moreover, if $A$ is irreducible and if $0\neq\alpha x\leq Ax\leq\beta x$ for some nonnegative vector $x$, then $\alpha\leq\rho(A)\leq\beta$ and $x$ is a positive vector.

3. Preconditioned GAOR Methods

To solve the linear system (2) with the coefficient matrix $F$ in (3), we consider the preconditioners
$$P_i=\mathcal{M}+\begin{pmatrix}0&0\\S_i&0\end{pmatrix}=\begin{pmatrix}I&0\\S_i&I\end{pmatrix},\qquad i=1,2,3, \tag{12}$$
where
$$S_1=\begin{pmatrix}0&\cdots&0&-k_{1p}\\0&\cdots&0&-k_{2p}\\ \vdots& &\vdots&\vdots\\ 0&\cdots&0&-k_{qp}\end{pmatrix},\quad
S_2=\begin{pmatrix}0&-k_{12}&&&\\ -k_{21}&0&-k_{23}&&\\ &-k_{32}&0&\ddots&\\ &&\ddots&\ddots&-k_{q-1,p}\\ &&&-k_{q,p-1}&0\end{pmatrix},\quad
S_3=\begin{pmatrix}0&-k_{12}&&\\ &\ddots&\ddots&\\ &&0&-k_{q-1,p}\\ -\dfrac{k_{q1}}{\alpha}&&&0\end{pmatrix},\qquad \alpha>0. \tag{13}$$
The preconditioned coefficient matrices can be expressed as
$$P_iF=\begin{pmatrix} I-B & H\\ K+S_i(I-B) & S_iH+I-C\end{pmatrix},\qquad i=1,2,3, \tag{14}$$
where, entrywise,
$$\bigl[K+S_1(I-B)\bigr]_{ij}=k_{ij}+k_{ip}b_{pj}\ \ (j\neq p),\qquad \bigl[K+S_1(I-B)\bigr]_{ip}=k_{ip}b_{pp}, \tag{15}$$
and $K+S_2(I-B)$ and $K+S_3(I-B)$ are obtained analogously; in particular, the $(q,1)$ entry of $K+S_3(I-B)$ becomes $k_{q1}\bigl(1-\frac{1-b_{11}}{\alpha}\bigr)$. Based on the discussion above, $P_iF$ can be split as
$$P_iF=\mathcal{M}-\mathcal{L}_i-\mathcal{U}_i,\qquad i=1,2,3. \tag{16}$$
Similarly,
$$\mathcal{M}=\begin{pmatrix}I&0\\0&I\end{pmatrix},\qquad \mathcal{L}_i=\begin{pmatrix}0&0\\-K-S_i(I-B)&0\end{pmatrix},\qquad \mathcal{U}_i=\begin{pmatrix}B&-H\\0&C-S_iH\end{pmatrix},\qquad i=1,2,3. \tag{17}$$
The preconditioned GAOR methods for solving $P_iFy=P_if$ are defined by
$$y_{k+1}=\mathcal{T}^{*}_{\gamma\omega i}y_k+\omega g^{*}_i,\qquad k=0,1,2,\ldots,\ i=1,2,3, \tag{18}$$
where
$$\omega g^{*}_i=(\mathcal{M}-\Gamma\mathcal{L}_i)^{-1}\Omega P_if,\qquad i=1,2,3, \tag{19}$$
with
$$\mathcal{T}^{*}_{\gamma\omega i}=(\mathcal{M}-\Gamma\mathcal{L}_i)^{-1}\bigl[\mathcal{M}-\Omega+(\Omega-\Gamma)\mathcal{L}_i+\Omega\mathcal{U}_i\bigr],\qquad i=1,2,3, \tag{20}$$
where
$$\Omega=\begin{pmatrix}\omega_1 I&0\\0&\omega_2 I\end{pmatrix},\qquad \Gamma=\begin{pmatrix}\gamma_1 I&0\\0&\gamma_2 I\end{pmatrix}. \tag{21}$$
For $i=1,2,3$, we have
$$\mathcal{T}^{*}_{\gamma\omega i}=\begin{pmatrix}(1-\omega_1)I+\omega_1 B & -\omega_1 H\\ (\omega_1\gamma_2-\omega_2)[K+S_i(I-B)]-\omega_1\gamma_2[K+S_i(I-B)]B & (1-\omega_2)I+\omega_2 C-\omega_2 S_iH+\omega_1\gamma_2[K+S_i(I-B)]H\end{pmatrix}. \tag{22}$$
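The three blocks $S_i$ in (13) might be assembled as follows. This NumPy sketch reflects our reading of (13): $S_1$ negates the last column of $K$; $S_2$ negates the entries $k_{ij}$ with $|i-j|=1$; $S_3$ negates the superdiagonal of $K$ and places $-k_{q1}/\alpha$ in the bottom-left corner. The function names are ours.

```python
import numpy as np

def build_S(K, kind, alpha=1.5):
    """Sketch of the preconditioner blocks S_1, S_2, S_3 of (13).
    K is q x p; `kind` selects the preconditioner."""
    q, p = K.shape
    S = np.zeros((q, p))
    if kind == 1:
        S[:, p - 1] = -K[:, p - 1]            # negate last column of K
    elif kind == 2:
        for i in range(q):
            for j in range(p):
                if abs(i - j) == 1:           # first sub-/superdiagonal
                    S[i, j] = -K[i, j]
    elif kind == 3:
        for i in range(min(q - 1, p - 1)):    # superdiagonal only
            S[i, i + 1] = -K[i, i + 1]
        S[q - 1, 0] = -K[q - 1, 0] / alpha    # scaled corner entry
    return S

def preconditioner(S, p):
    """P_i = [[I, 0], [S_i, I]] as in (12)."""
    q = S.shape[0]
    return np.block([[np.eye(p), np.zeros((p, q))], [S, np.eye(q)]])
```

A quick sanity check against (15): with $S_1$, the last column of $K+S_1(I-B)$ collapses to $k_{ip}b_{pp}$, and with $S_3$ the $(q,1)$ entry becomes $k_{q1}(1-(1-b_{11})/\alpha)$.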

Next, we study the convergence of the PGAOR methods. For simplicity and without loss of generality, we assume that
$$H\leq 0,\quad K\leq 0,\quad B\geq 0,\quad C\geq 0,\quad 0<\omega_1\leq 1,\quad 0<\omega_2\leq 1,\quad 0<\gamma_2\leq\frac{\omega_2}{\omega_1}. \tag{23}$$
Then we have the following theorem.

Theorem 3.

Let $\mathcal{T}_{\gamma\omega}$ and $\mathcal{T}^{*}_{\gamma\omega 1}$ be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), defined by (7) and (22), respectively. If the matrix $F$ in (3) is irreducible, then $\rho(\mathcal{T}_{\gamma\omega})\neq 1$ and
$$\rho(\mathcal{T}^{*}_{\gamma\omega 1})>\rho(\mathcal{T}_{\gamma\omega})>1\quad\text{if }\rho(\mathcal{T}_{\gamma\omega})>1,\qquad \rho(\mathcal{T}^{*}_{\gamma\omega 1})<\rho(\mathcal{T}_{\gamma\omega})<1\quad\text{if }\rho(\mathcal{T}_{\gamma\omega})<1. \tag{24}$$

Proof.

By a direct calculation on (7), with the block parameters $\Omega$ and $\Gamma$ of (21), one gets
$$\mathcal{T}_{\gamma\omega}=\begin{pmatrix}(1-\omega_1)I+\omega_1 B & -\omega_1 H\\ (\omega_1\gamma_2-\omega_2)K & (1-\omega_2)I+\omega_2 C\end{pmatrix}+\omega_1\gamma_2\begin{pmatrix}0&0\\-KB&KH\end{pmatrix}. \tag{25}$$
Since $F$ is irreducible, one can verify under the above assumptions that $\mathcal{T}_{\gamma\omega}$ is nonnegative and irreducible. Similarly, $\mathcal{T}^{*}_{\gamma\omega 1}$ is nonnegative and irreducible. By Lemma 1, there exists a positive vector $x>0$ such that
$$\mathcal{T}_{\gamma\omega}x=\lambda x, \tag{26}$$
where $\lambda=\rho(\mathcal{T}_{\gamma\omega})$.

From (26) one has
$$\bigl[\mathcal{M}-\Omega+(\Omega-\Gamma)\mathcal{L}+\Omega\mathcal{U}\bigr]x=\lambda(\mathcal{M}-\Gamma\mathcal{L})x. \tag{27}$$
That is,
$$(\mathcal{M}-\Omega)x=\lambda(\mathcal{M}-\Gamma\mathcal{L})x-(\Omega-\Gamma)\mathcal{L}x-\Omega\mathcal{U}x. \tag{28}$$
With the same vector $x>0$, it holds that
$$\mathcal{T}^{*}_{\gamma\omega 1}x-\lambda x=(\mathcal{M}-\Gamma\mathcal{L}_1)^{-1}\bigl[\mathcal{M}-\Omega+(\Omega-\Gamma)\mathcal{L}_1+\Omega\mathcal{U}_1-\lambda(\mathcal{M}-\Gamma\mathcal{L}_1)\bigr]x. \tag{29}$$
Using (22), (26), (27), and (28), we obtain
$$\begin{aligned}\mathcal{T}^{*}_{\gamma\omega 1}x-\lambda x&=(\mathcal{M}-\Gamma\mathcal{L}_1)^{-1}\bigl[(\Omega-\Gamma)(\mathcal{L}_1-\mathcal{L})+\Omega(\mathcal{U}_1-\mathcal{U})+\lambda\Gamma(\mathcal{L}_1-\mathcal{L})\bigr]x\\
&=(\mathcal{M}-\Gamma\mathcal{L}_1)^{-1}\bigl[\Omega(\mathcal{U}_1-\mathcal{U}+\mathcal{L}_1-\mathcal{L})+(\lambda-1)\Gamma(\mathcal{L}_1-\mathcal{L})\bigr]x\\
&=(\mathcal{M}-\Gamma\mathcal{L}_1)^{-1}\begin{pmatrix}0&0\\-\omega_2 S_1(I-B)&-\omega_2 S_1H\end{pmatrix}x+(\lambda-1)(\mathcal{M}-\Gamma\mathcal{L}_1)^{-1}\begin{pmatrix}0&0\\-\gamma_2 S_1(I-B)&0\end{pmatrix}x.\end{aligned} \tag{30}$$
Meanwhile, since the first block row of $\mathcal{T}_{\gamma\omega}-\mathcal{M}$ is $\bigl(-\omega_1(I-B),\ -\omega_1 H\bigr)$ and $(\mathcal{T}_{\gamma\omega}-\mathcal{M})x=(\lambda-1)x$, we have
$$\begin{aligned}(\mathcal{M}-\Gamma\mathcal{L}_1)^{-1}\begin{pmatrix}0&0\\-\omega_2 S_1(I-B)&-\omega_2 S_1H\end{pmatrix}x&=(\mathcal{M}-\Gamma\mathcal{L}_1)^{-1}\begin{pmatrix}0&0\\ \dfrac{\omega_2}{\omega_1}S_1&0\end{pmatrix}(\mathcal{T}_{\gamma\omega}-\mathcal{M})x\\
&=(\lambda-1)(\mathcal{M}-\Gamma\mathcal{L}_1)^{-1}\begin{pmatrix}0&0\\ \dfrac{\omega_2}{\omega_1}S_1&0\end{pmatrix}x.\end{aligned} \tag{31}$$
Therefore,
$$\begin{aligned}\mathcal{T}^{*}_{\gamma\omega 1}x-\lambda x&=(\lambda-1)(\mathcal{M}-\Gamma\mathcal{L}_1)^{-1}\left[\begin{pmatrix}0&0\\ \dfrac{\omega_2}{\omega_1}S_1&0\end{pmatrix}+\begin{pmatrix}0&0\\-\gamma_2 S_1(I-B)&0\end{pmatrix}\right]x\\
&=(\lambda-1)\begin{pmatrix}I&0\\-\gamma_2[K+S_1(I-B)]&I\end{pmatrix}\begin{pmatrix}0&0\\\Bigl(\dfrac{\omega_2}{\omega_1}-\gamma_2\Bigr)S_1+\gamma_2 S_1B&0\end{pmatrix}x\\
&=(\lambda-1)\begin{pmatrix}0&0\\\Bigl(\dfrac{\omega_2}{\omega_1}-\gamma_2\Bigr)S_1+\gamma_2 S_1B&0\end{pmatrix}x.\end{aligned} \tag{32}$$
In view of the above assumptions, we have
$$\begin{pmatrix}0&0\\\Bigl(\dfrac{\omega_2}{\omega_1}-\gamma_2\Bigr)S_1+\gamma_2 S_1B&0\end{pmatrix}x\geq 0,\quad \neq 0. \tag{33}$$
Then, if $\lambda=\rho(\mathcal{T}_{\gamma\omega})>1$,
$$\mathcal{T}^{*}_{\gamma\omega 1}x-\lambda x\geq 0,\qquad \mathcal{T}^{*}_{\gamma\omega 1}x-\lambda x\neq 0. \tag{34}$$
From Lemma 2 (recall that $\mathcal{T}^{*}_{\gamma\omega 1}$ is irreducible), we get
$$\rho(\mathcal{T}^{*}_{\gamma\omega 1})>\rho(\mathcal{T}_{\gamma\omega})>1. \tag{35}$$
Similarly, if $\lambda=\rho(\mathcal{T}_{\gamma\omega})<1$, then
$$\mathcal{T}^{*}_{\gamma\omega 1}x-\lambda x\leq 0,\qquad \mathcal{T}^{*}_{\gamma\omega 1}x-\lambda x\neq 0, \tag{36}$$
so that
$$\rho(\mathcal{T}^{*}_{\gamma\omega 1})<\rho(\mathcal{T}_{\gamma\omega})<1. \tag{37}$$
Finally, if $\lambda=\rho(\mathcal{T}_{\gamma\omega})=1$, then (27) gives $\Omega Fx=0$ and hence $Fx=0$ with $x\neq 0$, contradicting the nonsingularity of $F$. This completes the proof.
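The comparison in Theorem 3 can be checked numerically on a small irreducible example satisfying the sign assumptions (23). The following sketch uses our own helper names and an illustrative example matrix; it builds the GAOR matrix (7), the $S_1$-preconditioned system, and compares spectral radii.

```python
import numpy as np

def spectral_radius(A):
    return np.max(np.abs(np.linalg.eigvals(A)))

def gaor_T(B, C, H, K, omega, gamma):
    """Iteration matrix (7); fed the starred blocks it also yields (11)."""
    p, q = B.shape[0], C.shape[0]
    return np.block([
        [(1 - omega) * np.eye(p) + omega * B, -omega * H],
        [omega * (gamma - 1) * K - gamma * omega * K @ B,
         (1 - omega) * np.eye(q) + omega * C + omega * gamma * K @ H]])

# Small irreducible example with B, C >= 0 and H, K <= 0, as in (23)
p = q = 2
B = np.full((p, p), 0.1); C = np.full((q, q), 0.1)
H = np.full((p, q), -0.1); K = np.full((q, p), -0.1)
F = np.block([[np.eye(p) - B, H], [K, np.eye(q) - C]])

# Preconditioner P1 of (12)-(13): S1 negates the last column of K
S1 = np.zeros((q, p)); S1[:, -1] = -K[:, -1]
P1 = np.block([[np.eye(p), np.zeros((p, q))], [S1, np.eye(q)]])
PF = P1 @ F

# Starred blocks read off from (9)
Bs = np.eye(p) - PF[:p, :p]; Hs = PF[:p, p:]
Ks = PF[p:, :p];             Cs = np.eye(q) - PF[p:, p:]

omega, gamma = 0.9, 0.8
rho = spectral_radius(gaor_T(B, C, H, K, omega, gamma))
rho_star = spectral_radius(gaor_T(Bs, Cs, Hs, Ks, omega, gamma))
# Theorem 3 predicts rho_star < rho < 1 for this convergent example
```

Here $\omega_1=\omega_2=\omega$ and $\gamma_2=\gamma$, so (22) reduces to (11) and the scalar-parameter form (7) applies to both iterations.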

Theorem 4.

Let $\mathcal{T}_{\gamma\omega}$ and $\mathcal{T}^{*}_{\gamma\omega 2}$ be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), defined by (7) and (22), respectively. If the matrix $F$ in (3) is irreducible and satisfies
$$b_{11}>0,\qquad k_{ij}<0\ \text{ for }|i-j|=1, \tag{38}$$
then $\rho(\mathcal{T}_{\gamma\omega})\neq 1$ and
$$\rho(\mathcal{T}^{*}_{\gamma\omega 2})>\rho(\mathcal{T}_{\gamma\omega})>1\quad\text{if }\rho(\mathcal{T}_{\gamma\omega})>1,\qquad \rho(\mathcal{T}^{*}_{\gamma\omega 2})<\rho(\mathcal{T}_{\gamma\omega})<1\quad\text{if }\rho(\mathcal{T}_{\gamma\omega})<1. \tag{39}$$

Proof.

This theorem can be proved by arguments similar to those in the proof of Theorem 3.

Similarly, we have the following theorem.

Theorem 5.

Let $\mathcal{T}_{\gamma\omega}$ and $\mathcal{T}^{*}_{\gamma\omega 3}$ be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), defined by (7) and (22), respectively. If the matrix $F$ in (3) is irreducible and satisfies
$$\alpha>1,\qquad k_{q1}<0,\qquad b_{11}>0,\qquad k_{i,i+1}<0\ \text{ for }i=1,2,\ldots,q-1, \tag{40}$$
then $\rho(\mathcal{T}_{\gamma\omega})\neq 1$ and
$$\rho(\mathcal{T}^{*}_{\gamma\omega 3})>\rho(\mathcal{T}_{\gamma\omega})>1\quad\text{if }\rho(\mathcal{T}_{\gamma\omega})>1,\qquad \rho(\mathcal{T}^{*}_{\gamma\omega 3})<\rho(\mathcal{T}_{\gamma\omega})<1\quad\text{if }\rho(\mathcal{T}_{\gamma\omega})<1. \tag{41}$$

4. Numerical Examples

In this section, we give numerical examples to illustrate the conclusions drawn above. The numerical experiments were performed in MATLAB 7.0.

Example 1.

Consider the following Laplace equation:
$$\frac{\partial^2 u(x,y)}{\partial x^2}+\frac{\partial^2 u(x,y)}{\partial y^2}=0. \tag{42}$$
On a uniform square domain, applying the five-point finite difference method with uniform mesh size yields the linear system
$$\mathcal{H}x=f, \tag{43}$$
where $\mathcal{H}$ (44) is a $12\times 12$ matrix whose diagonal entries are $\frac{1}{2}$ and whose nonzero off-diagonal entries are $-\frac{1}{8}$. The coefficient matrix is split as
$$\mathcal{H}=\begin{pmatrix} I-B & H\\ K& I-C\end{pmatrix}, \tag{45}$$
where
$$B=\tfrac{1}{2}I_6,\qquad C=\tfrac{1}{2}I_6, \tag{46}$$
and $H,K\in\mathbb{R}^{6\times 6}$ collect the off-diagonal coupling entries $-\frac{1}{8}$ of $\mathcal{H}$.
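One plausible way a matrix of the form (44)–(46) arises is from a $3\times 4$ grid with red-black ordering of the unknowns, with the stencil scaled by $\frac{1}{8}$ so the diagonal is $\frac{1}{2}$ and each neighbour coupling is $-\frac{1}{8}$. The sketch below is an assumption about the construction, not the paper's listing; with red-black ordering the same-colour blocks are diagonal, which reproduces $B=C=\frac{1}{2}I_6$.

```python
import numpy as np

def redblack_laplacian(nx, ny):
    """Scaled five-point Laplacian (diagonal 1/2, neighbours -1/8) with
    red-black ordering, giving the block form (45) with B = C = (1/2)I."""
    nodes = [(i, j) for i in range(nx) for j in range(ny)]
    # Red nodes (even parity) first, then black nodes
    order = [v for v in nodes if (v[0] + v[1]) % 2 == 0] + \
            [v for v in nodes if (v[0] + v[1]) % 2 == 1]
    idx = {v: r for r, v in enumerate(order)}
    n = nx * ny
    A = np.zeros((n, n))
    for (i, j), r in idx.items():
        A[r, r] = 0.5
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (i + di, j + dj)
            if nb in idx:                 # interior neighbour coupling
                A[r, idx[nb]] = -0.125
    return A
```

Since grid neighbours always have opposite colour, the red-red and black-black blocks contain no $-\frac{1}{8}$ entries, and the matrix is symmetric.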

Table 1 lists the spectral radii of the GAOR and PGAOR methods. The spectral radii of the PGAOR methods are smaller than those of the GAOR method, so the three proposed preconditioners accelerate the convergence rate of the GAOR method for the linear system (2). The results in Table 1 are in accordance with Theorems 3–5.

Table 1: Spectral radii of the GAOR and PGAOR methods.

Preconditioner | α   | ω1     | ω2     | γ2     | ρ_GAOR | ρ_PGAOR
S1             | —   | 0.8912 | 0.9654 | 0.8865 | 0.8478 | 0.8457
S2             | —   | 0.8912 | 0.9654 | 0.8865 | 0.8478 | 0.8376
S3             | 1.5 | 0.8912 | 0.9654 | 0.8865 | 0.8478 | 0.8418
S3             | 2   | 0.8912 | 0.9654 | 0.8865 | 0.8478 | 0.8420
Example 2.

The coefficient matrix $F$ in (3) is given by
$$F=\begin{pmatrix} I-B & H\\ K& I-C\end{pmatrix}, \tag{47}$$
where $B=(b_{ij})\in\mathbb{R}^{p\times p}$, $C=(c_{ij})\in\mathbb{R}^{q\times q}$ with $p+q=n$, and $H=(h_{ij})\in\mathbb{R}^{p\times q}$, $K=(k_{ij})\in\mathbb{R}^{q\times p}$, with
$$\begin{aligned}
b_{ii}&=\frac{1}{i+1}, && i=1,\ldots,p,\\
b_{ij}&=\frac{1}{30}-\frac{1}{30j+i}, && j>i,\ i=1,\ldots,p-1,\ j=2,\ldots,p,\\
b_{ij}&=\frac{1}{30}-\frac{1}{30(i-j+1)+i}, && i>j,\ i=2,\ldots,p,\ j=1,\ldots,p-1,\\
c_{ii}&=\frac{1}{n+i+1}, && i=1,\ldots,q,\\
c_{ij}&=\frac{1}{30}-\frac{1}{30(n+j)+n+i}, && j>i,\ i=1,\ldots,q-1,\ j=2,\ldots,q,\\
c_{ij}&=\frac{1}{30}-\frac{1}{30(i-j+1)+n+i}, && i>j,\ i=2,\ldots,q,\ j=1,\ldots,q-1,\\
k_{ij}&=\frac{1}{30(n+i-j+1)+n+i}-\frac{1}{30}, && i=1,\ldots,q,\ j=1,\ldots,p,\\
h_{ij}&=\frac{1}{30(n+j)+i}-\frac{1}{30}, && i=1,\ldots,p,\ j=1,\ldots,q.
\end{aligned} \tag{48}$$
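The matrix $F$ of (47)–(48) can be assembled programmatically. Note that the entry formulas above are partially reconstructed from damaged text, so the exact denominators in this sketch are an assumption; the sign structure ($B,C\geq 0$, $H,K\leq 0$), however, follows for any $n\geq 1$ because every denominator exceeds 30.

```python
import numpy as np

def example2_matrix(n, p):
    """Assemble F of (47) from the (reconstructed) entry formulas (48).
    The exact denominators are an assumption recovered from the text."""
    q = n - p
    B = np.zeros((p, p)); C = np.zeros((q, q))
    H = np.zeros((p, q)); K = np.zeros((q, p))
    for i in range(1, p + 1):
        for j in range(1, p + 1):
            if i == j:
                B[i-1, j-1] = 1.0 / (i + 1)
            elif j > i:
                B[i-1, j-1] = 1.0/30 - 1.0/(30*j + i)
            else:
                B[i-1, j-1] = 1.0/30 - 1.0/(30*(i - j + 1) + i)
    for i in range(1, q + 1):
        for j in range(1, q + 1):
            if i == j:
                C[i-1, j-1] = 1.0/(n + i + 1)
            elif j > i:
                C[i-1, j-1] = 1.0/30 - 1.0/(30*(n + j) + n + i)
            else:
                C[i-1, j-1] = 1.0/30 - 1.0/(30*(i - j + 1) + n + i)
    for i in range(1, q + 1):
        for j in range(1, p + 1):
            K[i-1, j-1] = 1.0/(30*(n + i - j + 1) + n + i) - 1.0/30
    for i in range(1, p + 1):
        for j in range(1, q + 1):
            H[i-1, j-1] = 1.0/(30*(n + j) + i) - 1.0/30
    return np.block([[np.eye(p) - B, H], [K, np.eye(q) - C]])
```

With $n=8$ and a $6/2$ partition this produces an $8\times 8$ matrix with nonpositive off-diagonal blocks $H$ and $K$, matching the sign assumptions (23).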

Obviously, $F$ is irreducible. Table 2 shows the spectral radii of the corresponding iteration matrices with $n=8$ and $p=6$.

Table 2: Spectral radii of the GAOR and PGAOR methods.

Preconditioner | α   | ω1  | ω2 = ω | γ2 = γ | ρ_GAOR | ρ_PGAOR
S1             | —   | 0.8 | 0.6    | 0.7    | 0.7505 | 0.6618
S2             | —   | 0.8 | 0.6    | 0.7    | 0.7505 | 0.7340
S3             | 0.1 | 0.8 | 0.6    | 0.7    | 0.7505 | 0.7298
S3             | 0.5 | 0.8 | 0.6    | 0.7    | 0.7505 | 0.7328

Similarly, the results in Table 2 are consistent with Theorems 3–5.

Acknowledgments

The authors would like to thank the anonymous referees for their helpful suggestions, which greatly improved the paper. This research was supported by the NSFC Tianyuan Mathematics Youth Fund (no. 11026040), by the Science and Technology Development Plan of Henan Province (no. 122300410316), and by the Natural Science Foundations of Henan Province (no. 13A110022).

References

[1] J. Y. Yuan, "Numerical methods for generalized least squares problem," Journal of Computational and Applied Mathematics, vol. 66, pp. 571–584, 1996.

[2] J.-Y. Yuan and X.-Q. Jin, "Convergence of the generalized AOR method," Applied Mathematics and Computation, vol. 99, no. 1, pp. 35–46, 1999.

[3] A. D. Gunawardena, S. K. Jain, and L. Snyder, "Modified iterative methods for consistent linear systems," Linear Algebra and Its Applications, vol. 154–156, pp. 123–143, 1991.

[4] A. Hadjidimos, "Accelerated overrelaxation method," Mathematics of Computation, vol. 32, no. 141, pp. 149–157, 1978.

[5] Y. T. Li, C. X. Li, and S. L. Wu, "Improvement of preconditioned AOR iterative methods for L-matrices," Journal of Computational and Applied Mathematics, vol. 206, pp. 656–665, 2007.

[6] J. P. Milaszewicz, "Improving Jacobi and Gauss-Seidel iterations," Linear Algebra and Its Applications, vol. 93, pp. 161–170, 1987.

[7] R. S. Varga, Matrix Iterative Analysis, Springer, Berlin, Germany, 2000.

[8] D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, NY, USA, 1971.

[9] M. T. Darvishi and P. Hessari, "On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices," Applied Mathematics and Computation, vol. 176, no. 1, pp. 128–133, 2006.

[10] X. Zhou, Y. Song, L. Wang, and Q. Liu, "Preconditioned GAOR methods for solving weighted linear least squares problems," Journal of Computational and Applied Mathematics, vol. 224, no. 1, pp. 242–249, 2009.

[11] M. T. Darvishi, P. Hessari, and B.-C. Shin, "Preconditioned modified AOR method for systems of linear equations," International Journal for Numerical Methods in Biomedical Engineering, vol. 27, no. 5, pp. 758–769, 2011.

[12] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, Pa, USA, 1994.