Mathematical Problems in Engineering, Volume 2013, Article ID 956143. Research Article: Convergence of the GAOR Method for One Subclass of H-Matrix. Guangbin Wang and Ting Wang, Department of Mathematics, Qingdao University of Science and Technology, Qingdao 266061, China. Academic Editor: Gerhard-Wilhelm Weber. Received 5 January 2013; Revised 12 March 2013; Accepted 14 March 2013. Copyright © 2013 Guangbin Wang and Ting Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We discuss the convergence of the GAOR method for solving the linear system that arises from the weighted linear least squares problem. Moreover, we present a convergence theorem for the GAOR method when the coefficient matrix is a strictly doubly α diagonally dominant matrix, which is a nonsingular H-matrix. Finally, we use four numerical examples to show that our results improve on previous ones.

1. Introduction

Consider the weighted linear least squares problem (1) min_{x∈R^n} (Ax−b)ᵀW^{−1}(Ax−b), where W is the variance-covariance matrix [1]. The problem has many scientific applications. A typical source is parameter estimation in mathematical modeling.

In order to solve it, we have to solve the following linear system: (2) Hy = f, where (3) H = (I−B1 D; C I−B2) is invertible. A generalized SOR (GSOR) method to solve linear system (2) was proposed in [2]; afterwards, a generalized AOR (GAOR) method to solve linear system (2) was established in [3] as follows: (4) y^(k+1) = Lω,r y^(k) + ωk, where (5) Lω,r = (1−ω)I + ωJ + ωrK, k = (I 0; −rC I)f, J = (B1 −D; −C B2), K = (0 0; C(I−B1) CD) = (0; C)(I−B1 D).
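The iteration (4)-(5) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the function names, the stagnation-based stopping rule, and the small 3×3 test data below are our own assumptions.

```python
import numpy as np

def gaor_operator(B1, D, C, B2, omega, r):
    """Assemble L_{omega,r} = (1-omega)I + omega*J + omega*r*K as in (5)."""
    p, q = B1.shape[0], B2.shape[0]
    J = np.block([[B1, -D], [-C, B2]])
    # K = (0 0; C(I-B1)  C D)
    K = np.vstack([np.zeros((p, p + q)),
                   np.hstack([C @ (np.eye(p) - B1), C @ D])])
    return (1.0 - omega) * np.eye(p + q) + omega * J + omega * r * K

def gaor_solve(B1, D, C, B2, f, omega, r, tol=1e-12, maxit=10000):
    """Iterate y_{k+1} = L_{omega,r} y_k + omega*(I 0; -rC I) f until stagnation."""
    p, q = B1.shape[0], B2.shape[0]
    L = gaor_operator(B1, D, C, B2, omega, r)
    rhs = omega * (np.block([[np.eye(p), np.zeros((p, q))],
                             [-r * C, np.eye(q)]]) @ f)
    y = np.zeros(p + q)
    for k in range(maxit):
        y_next = L @ y + rhs
        if np.linalg.norm(y_next - y, np.inf) < tol:
            return y_next, k + 1
        y = y_next
    return y, maxit

# A toy 3x3 system (hypothetical data): all off-diagonal entries of H are 1/3,
# and f is chosen so that the exact solution is the all-ones vector.
B1 = np.array([[0.0, -1/3], [-1/3, 0.0]])
D = np.array([[1/3], [1/3]])
C = np.array([[1/3, 1/3]])
B2 = np.zeros((1, 1))
f = np.full(3, 5/3)
y, iters = gaor_solve(B1, D, C, B2, f, omega=0.9, r=0.1)
```

At a fixed point, (I − Lω,r)y = ω(I 0; −rC I)f, which is equivalent to Hy = f because (I 0; −rC I)H = H − rK, so the iteration is consistent with (2).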

In [3], the authors studied the convergence of the GAOR method for solving the linear system Hy=f. In [4, 5], the authors studied the convergence of the GAOR method for diagonally dominant coefficient matrices and gave regions of convergence. In [6], the authors studied the convergence of the GAOR method for strictly doubly diagonally dominant coefficient matrices and gave regions of convergence. In [7], the authors studied preconditioned generalized AOR methods for solving linear systems; they proposed two kinds of preconditioning, each containing three preconditioners, and showed that the convergence rate of the preconditioned generalized AOR methods is better than that of the original method whenever the original method is convergent. In [8], the authors presented three kinds of preconditioners for the preconditioned modified AOR method for solving systems of linear equations and showed that its convergence rate is better than that of the original method whenever the original method is convergent.

Sometimes, the coefficient matrix of the linear system Hy=f is neither strictly diagonally dominant nor strictly doubly diagonally dominant. In this paper, we discuss the convergence of the GAOR method when the coefficient matrix is strictly doubly α diagonally dominant.

Throughout this paper, we denote the i-th row sums of the moduli of the entries of J and K by Ji and Ki, and the corresponding i-th column sums by Ji′ and Ki′, respectively. We denote the spectral radius of the iterative matrix Lω,r by ρ(Lω,r), and set (6) N = {1,2,…,n}, N1 = {i ∣ Ki = 0, Ki′ = 0}, N2 = N∖N1, Ri(A) = Σ_{k≠i} |aik|, Ci(A) = Σ_{k≠i} |aki|, i,k ∈ N.

Definition 1 (see [<xref ref-type="bibr" rid="B9">9</xref>]).

We call a matrix A∈C^{n×n} a strictly diagonally dominant matrix if (7) |aii| > Ri(A), i∈N, and denote A∈SD.

We call a matrix A∈C^{n×n} a strictly doubly diagonally dominant matrix if (8) |aii||ajj| > Ri(A)Rj(A), i,j∈N, i≠j, and denote A∈SDD.

We call a matrix A∈C^{n×n} a strictly doubly α diagonally dominant matrix if (9) |aii||ajj| > [αRi(A)+(1−α)Ci(A)]×[αRj(A)+(1−α)Cj(A)], i,j∈N, i≠j, and denote A∈SDD(α), where α∈[0,1].

Obviously, a strictly doubly diagonally dominant matrix is a strictly doubly α diagonally dominant matrix, but not vice versa.

For example, (10) A = (1 1/4 3/4; 1/4 1 3/4; 1/4 1/4 1) ∈ SDD(1/2), but A∉SDD.
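The dominance classes above are straightforward to test numerically. The following NumPy sketch (the helper names are ours, not from the paper) checks the example matrix (10):

```python
import numpy as np

def offdiag_sums(A):
    """Return |a_ii| together with R_i(A) and C_i(A), the off-diagonal
    absolute row and column sums."""
    d = np.abs(np.diag(A))
    return d, np.abs(A).sum(axis=1) - d, np.abs(A).sum(axis=0) - d

def is_sdd(A):
    """Strictly doubly diagonally dominant: |a_ii||a_jj| > R_i R_j, i != j."""
    d, R, _ = offdiag_sums(A)
    n = len(d)
    return all(d[i] * d[j] > R[i] * R[j]
               for i in range(n) for j in range(n) if i != j)

def is_sdd_alpha(A, alpha):
    """Strictly doubly alpha diagonally dominant, definition (9)."""
    d, R, Col = offdiag_sums(A)
    m = alpha * R + (1.0 - alpha) * Col
    n = len(d)
    return all(d[i] * d[j] > m[i] * m[j]
               for i in range(n) for j in range(n) if i != j)

A = np.array([[1.0, 0.25, 0.75],
              [0.25, 1.0, 0.75],
              [0.25, 0.25, 1.0]])
```

For this A, is_sdd(A) is False (rows 1 and 2 give R1 R2 = 1, not less than |a11||a22| = 1), while is_sdd_alpha(A, 0.5) is True, in agreement with (10). Note that α = 1 recovers the SDD test.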

Lemma 2 (see [<xref ref-type="bibr" rid="B9">9</xref>]).

If A∈SDD(α), then A is a nonsingular H-matrix.

In this paper, we study the convergence of the GAOR method for solving the linear system Hy=f with a strictly doubly α diagonally dominant coefficient matrix. Firstly, we obtain an upper bound for the spectral radius of the iterative matrix Lω,r of the GAOR method. Moreover, we present one convergence theorem for the GAOR method. Finally, we present four numerical examples.

2. Upper Bound of the Spectral Radius of <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M45"><mml:mrow><mml:msub><mml:mrow><mml:mi>L</mml:mi></mml:mrow><mml:mrow><mml:mi>ω</mml:mi><mml:mo mathvariant="bold">,</mml:mo><mml:mi>r</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:math></inline-formula>

In this section, we obtain an upper bound for the spectral radius of the iterative matrix Lω,r.

Theorem 3.

Let H∈SDD(α); then ρ(Lω,r) satisfies the following inequality: (11) ρ(Lω,r) ≤ max_{i,j∈N, i≠j} {|1−ω| + ([α(|ω|Ji+|ωr|Ki)+(1−α)(|ω|Ji′+|ωr|Ki′)]×[α(|ω|Jj+|ωr|Kj)+(1−α)(|ω|Jj′+|ωr|Kj′)])^{1/2}}.

Proof.

Let λ be an arbitrary eigenvalue of the iterative matrix Lω,r; then (12) det(λI−Lω,r) = 0, that is, (13) det((λ+ω−1)I−ωJ−ωrK) = 0. If (14) |λ+ω−1|² > [α(|ω|Ji+|ωr|Ki)+(1−α)(|ω|Ji′+|ωr|Ki′)]×[α(|ω|Jj+|ωr|Kj)+(1−α)(|ω|Jj′+|ωr|Kj′)] for all i,j∈N with i≠j, then (λ+ω−1)I−ωJ−ωrK ∈ SDD(α), and from Lemma 2 we know that (15) (λ+ω−1)I−ωJ−ωrK is nonsingular; that is, (16) det((λ+ω−1)I−ωJ−ωrK) = det(λI−Lω,r) ≠ 0, so λ is not an eigenvalue of the iterative matrix Lω,r.

However, λ is an eigenvalue of the iterative matrix Lω,r, so there exists at least one pair i,j∈N (i≠j) such that (17) |λ+ω−1|² ≤ [α(|ω|Ji+|ωr|Ki)+(1−α)(|ω|Ji′+|ωr|Ki′)]×[α(|ω|Jj+|ωr|Kj)+(1−α)(|ω|Jj′+|ωr|Kj′)]. Since |λ+ω−1| ≥ |λ|−|1−ω|, this gives (18) |λ|² − 2|1−ω||λ| + (1−ω)² ≤ [α(|ω|Ji+|ωr|Ki)+(1−α)(|ω|Ji′+|ωr|Ki′)]×[α(|ω|Jj+|ωr|Kj)+(1−α)(|ω|Jj′+|ωr|Kj′)]. Viewing (18) as a quadratic inequality in |λ|, it is easy to find that its discriminant (19) Δ = 4[α(|ω|Ji+|ωr|Ki)+(1−α)(|ω|Ji′+|ωr|Ki′)]×[α(|ω|Jj+|ωr|Kj)+(1−α)(|ω|Jj′+|ωr|Kj′)] ≥ 0, so the solution of (18) satisfies (20) |λ| ≤ |1−ω| + ([α(|ω|Ji+|ωr|Ki)+(1−α)(|ω|Ji′+|ωr|Ki′)]×[α(|ω|Jj+|ωr|Kj)+(1−α)(|ω|Jj′+|ωr|Kj′)])^{1/2}, i≠j. So, ρ(Lω,r) satisfies the following inequality: (21) ρ(Lω,r) ≤ max_{i,j∈N, i≠j} {|1−ω| + ([α(|ω|Ji+|ωr|Ki)+(1−α)(|ω|Ji′+|ωr|Ki′)]×[α(|ω|Jj+|ωr|Kj)+(1−α)(|ω|Jj′+|ωr|Kj′)])^{1/2}}.
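The bound of Theorem 3 is easy to probe numerically. Below is a small NumPy sketch (our own code; the 3×3 matrices J and K are hypothetical data of the form produced by (5)) that evaluates the right-hand side of (11) and compares it with the true spectral radius:

```python
import numpy as np

def theorem3_bound(J, K, omega, r, alpha):
    """Right-hand side of (11); uses row sums J_i, K_i and column sums J_i', K_i'."""
    aJ, aK = np.abs(J), np.abs(K)
    w, wr = abs(omega), abs(omega * r)
    t = (alpha * (w * aJ.sum(axis=1) + wr * aK.sum(axis=1))
         + (1.0 - alpha) * (w * aJ.sum(axis=0) + wr * aK.sum(axis=0)))
    n = len(t)
    pair = max(np.sqrt(t[i] * t[j]) for i in range(n) for j in range(n) if i != j)
    return abs(1.0 - omega) + pair

# Hypothetical data: J, K as produced by (5) for a 3x3 matrix H with all
# off-diagonal entries equal to 1/3 (such an H lies in SDD(1/2)).
J = np.array([[0.0, -1/3, -1/3], [-1/3, 0.0, -1/3], [-1/3, -1/3, 0.0]])
K = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [4/9, 4/9, 2/9]])
omega, r, alpha = 0.9, 0.1, 0.5
L = (1.0 - omega) * np.eye(3) + omega * J + omega * r * K
rho = max(abs(np.linalg.eigvals(L)))
bound = theorem3_bound(J, K, omega, r, alpha)
```

Here bound ≈ 0.74 is below 1 and dominates the computed ρ(Lω,r), as the theorem asserts.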

3. Convergence of the GAOR Method

In this section, we investigate the convergence of the GAOR method to solve linear system (2). We assume that H is a strictly doubly α diagonally dominant coefficient matrix and obtain the regions of convergence of the GAOR method.

Theorem 4.

If H∈SDD(α) and (22) [αJi+(1−α)Ji′][αJj+(1−α)Jj′] < 1, i,j∈N, i≠j, then the GAOR method converges if ω, r satisfy either

0<ω≤1, (23) |r| < min_{i,j∈N, i≠j} {(1−[αJi+(1−α)Ji′][αJj+(1−α)Jj′]) × ([αJj+(1−α)Jj′][αKi+(1−α)Ki′]+[αJi+(1−α)Ji′][αKj+(1−α)Kj′])^{−1}, (−[αKi+(1−α)Ki′][αJj+(1−α)Jj′]−[αKj+(1−α)Kj′][αJi+(1−α)Ji′]+√Δ1) × (2[αKi+(1−α)Ki′][αKj+(1−α)Kj′])^{−1}}

or

(24) 1 < ω < min{2, min_{i,j∈N, i≠j} (2−2([αJi+(1−α)Ji′][αJj+(1−α)Jj′])^{1/2})/(1−[αJi+(1−α)Ji′][αJj+(1−α)Jj′])}, |r| < min_{i,j∈N, i≠j} {((2−ω)²−ω²[αJi+(1−α)Ji′][αJj+(1−α)Jj′]) × (ω²[αKi+(1−α)Ki′][αJj+(1−α)Jj′]+ω²[αKj+(1−α)Kj′][αJi+(1−α)Ji′])^{−1}, (−ω²[αKi+(1−α)Ki′][αJj+(1−α)Jj′]−ω²[αKj+(1−α)Kj′][αJi+(1−α)Ji′]+√Δ2) × (2ω²[αKi+(1−α)Ki′][αKj+(1−α)Kj′])^{−1}},

where (25) Δ1 = {[αKi+(1−α)Ki′][αJj+(1−α)Jj′]+[αKj+(1−α)Kj′][αJi+(1−α)Ji′]}² + 4[αKi+(1−α)Ki′][αKj+(1−α)Kj′]×{1−[αJi+(1−α)Ji′][αJj+(1−α)Jj′]}, Δ2 = {ω²[αKi+(1−α)Ki′][αJj+(1−α)Jj′]+ω²[αKj+(1−α)Kj′][αJi+(1−α)Ji′]}² + 4ω²[αKi+(1−α)Ki′][αKj+(1−α)Kj′]×{(2−ω)²−ω²[αJi+(1−α)Ji′][αJj+(1−α)Jj′]}.
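As a sanity check, the case 0 < ω ≤ 1 of Theorem 4 can be evaluated numerically by solving the quadratic (29) in |r| for each pair i ≠ j and taking the minimum of the positive roots. The sketch below is our own illustration (NumPy, with the same hypothetical 3×3 data as before), not part of the paper:

```python
import numpy as np

def r_bound_small_omega(J, K, alpha):
    """Evaluate the right-hand side of (23): for each pair i != j take the
    positive root of quadratic (29) in |r| (infinite if no restriction),
    then minimize over pairs."""
    a = alpha * np.abs(J).sum(axis=1) + (1 - alpha) * np.abs(J).sum(axis=0)
    b = alpha * np.abs(K).sum(axis=1) + (1 - alpha) * np.abs(K).sum(axis=0)
    n, best = len(a), np.inf
    for i in range(n):
        for j in range(i + 1, n):
            qa = b[i] * b[j]
            qb = b[i] * a[j] + b[j] * a[i]
            qc = a[i] * a[j] - 1.0
            if qa > 0:
                root = (-qb + np.sqrt(qb * qb - 4 * qa * qc)) / (2 * qa)
            elif qb > 0:
                root = -qc / qb      # linear case, as in (32)-(33)
            else:
                root = np.inf        # no restriction on r, as in (31)
            best = min(best, root)
    return best

# Hypothetical data: J, K from (5) for a 3x3 H with all off-diagonal entries 1/3.
J = np.array([[0.0, -1/3, -1/3], [-1/3, 0.0, -1/3], [-1/3, -1/3, 0.0]])
K = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [4/9, 4/9, 2/9]])
r_max = r_bound_small_omega(J, K, alpha=0.5)
```

For the data above, r_max evaluates to (√31 − 4)/2 ≈ 0.784.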

Proof.

Because H∈SDD(α), Theorem 3 applies, so the GAOR method converges if (26) max_{i,j∈N, i≠j} {|1−ω| + ([α(|ω|Ji+|ωr|Ki)+(1−α)(|ω|Ji′+|ωr|Ki′)]×[α(|ω|Jj+|ωr|Kj)+(1−α)(|ω|Jj′+|ωr|Kj′)])^{1/2}} < 1, that is, (27) |1−ω| + max_{i,j∈N, i≠j} ([α(|ω|Ji+|ωr|Ki)+(1−α)(|ω|Ji′+|ωr|Ki′)]×[α(|ω|Jj+|ωr|Kj)+(1−α)(|ω|Jj′+|ωr|Kj′)])^{1/2} < 1. Firstly, when 0<ω≤1, we have |1−ω| = 1−ω, so (27) holds if (28) ω² > [α(|ω|Ji+|ωr|Ki)+(1−α)(|ω|Ji′+|ωr|Ki′)]×[α(|ω|Jj+|ωr|Kj)+(1−α)(|ω|Jj′+|ωr|Kj′)]. Dividing by ω², this becomes (29) |r|²[αKi+(1−α)Ki′][αKj+(1−α)Kj′] + |r|{[αKi+(1−α)Ki′][αJj+(1−α)Jj′]+[αKj+(1−α)Kj′][αJi+(1−α)Ji′]} + [αJi+(1−α)Ji′][αJj+(1−α)Jj′] − 1 < 0. Then, we have the following cases.

When i,j∈N1, then Ki=Kj=0 and Ki′=Kj′=0. From (30) [αJi+(1−α)Ji′][αJj+(1−α)Jj′] < 1, we have (31) |r| < +∞.

When i∈N1, j∈N2 or i∈N2, j∈N1, the quadratic term of (29) vanishes and we have (32) |r|{[αJj+(1−α)Jj′][αKi+(1−α)Ki′]+[αJi+(1−α)Ji′][αKj+(1−α)Kj′]} + [αJi+(1−α)Ji′][αJj+(1−α)Jj′] − 1 < 0.

From [αJi+(1−α)Ji′][αJj+(1−α)Jj′] < 1, we have (33) |r| < (1−[αJi+(1−α)Ji′][αJj+(1−α)Jj′]) × ([αJj+(1−α)Jj′][αKi+(1−α)Ki′]+[αJi+(1−α)Ji′][αKj+(1−α)Kj′])^{−1}.

When i,j∈N2, it is easy to prove that the discriminant Δ1 of the quadratic (29) in |r| satisfies Δ1 > 0, and then (34) |r| < (−[αKi+(1−α)Ki′][αJj+(1−α)Jj′]−[αKj+(1−α)Kj′][αJi+(1−α)Ji′]+√Δ1) × (2[αKi+(1−α)Ki′][αKj+(1−α)Kj′])^{−1}.

So, when 0<ω≤1, we have (35) |r| < min_{i,j∈N, i≠j} {(1−[αJi+(1−α)Ji′][αJj+(1−α)Jj′]) × ([αJj+(1−α)Jj′][αKi+(1−α)Ki′]+[αJi+(1−α)Ji′][αKj+(1−α)Kj′])^{−1}, (−[αKi+(1−α)Ki′][αJj+(1−α)Jj′]−[αKj+(1−α)Kj′][αJi+(1−α)Ji′]+√Δ1) × (2[αKi+(1−α)Ki′][αKj+(1−α)Kj′])^{−1}}. Secondly, when 1<ω<2, we have |1−ω| = ω−1, so (27) holds if (36) 4−4ω+ω² > [α(|ω|Ji+|ωr|Ki)+(1−α)(|ω|Ji′+|ωr|Ki′)]×[α(|ω|Jj+|ωr|Kj)+(1−α)(|ω|Jj′+|ωr|Kj′)], that is, (37) |r|²ω²[αKi+(1−α)Ki′][αKj+(1−α)Kj′] + |r|ω²{[αKi+(1−α)Ki′][αJj+(1−α)Jj′]+[αKj+(1−α)Kj′][αJi+(1−α)Ji′]} + ω²[αJi+(1−α)Ji′][αJj+(1−α)Jj′] − (ω−2)² < 0. Then we have the following cases.

When i,j∈N1, then Ki=Kj=0 and Ki′=Kj′=0. From inequality (37), we have (38) ω²[αJi+(1−α)Ji′][αJj+(1−α)Jj′] − (ω−2)² < 0.

From [αJi+(1−α)Ji′][αJj+(1−α)Jj′] < 1, we have (39) 1 < ω < min_{i,j∈N, i≠j} (2−2([αJi+(1−α)Ji′][αJj+(1−α)Jj′])^{1/2})/(1−[αJi+(1−α)Ji′][αJj+(1−α)Jj′]), |r| < +∞.

When i∈N1, j∈N2 or i∈N2, j∈N1, then Ki=Ki′=0 or Kj=Kj′=0. From inequality (37), we have (40) |r|ω²{[αKi+(1−α)Ki′][αJj+(1−α)Jj′]+[αKj+(1−α)Kj′][αJi+(1−α)Ji′]} + ω²[αJi+(1−α)Ji′][αJj+(1−α)Jj′] − (ω−2)² < 0, so (41) |r| < ((2−ω)²−ω²[αJi+(1−α)Ji′][αJj+(1−α)Jj′]) × (ω²[αKi+(1−α)Ki′][αJj+(1−α)Jj′]+ω²[αKj+(1−α)Kj′][αJi+(1−α)Ji′])^{−1}.

When i,j∈N2, then Ki>0 or Ki′>0, and Kj>0 or Kj′>0, so the leading coefficient of the quadratic (37) is positive.

It is easy to find that the discriminant Δ2 of the quadratic (37) in |r| satisfies Δ2 > 0. Then we have (42) |r| < (−ω²[αKi+(1−α)Ki′][αJj+(1−α)Jj′]−ω²[αKj+(1−α)Kj′][αJi+(1−α)Ji′]+√Δ2) × (2ω²[αKi+(1−α)Ki′][αKj+(1−α)Kj′])^{−1}. So, when 1 < ω < min{2, min_{i,j∈N, i≠j} (2−2([αJi+(1−α)Ji′][αJj+(1−α)Jj′])^{1/2})/(1−[αJi+(1−α)Ji′][αJj+(1−α)Jj′])}, we have (43) |r| < min_{i,j∈N, i≠j} {((2−ω)²−ω²[αJi+(1−α)Ji′][αJj+(1−α)Jj′]) × (ω²[αKi+(1−α)Ki′][αJj+(1−α)Jj′]+ω²[αKj+(1−α)Kj′][αJi+(1−α)Ji′])^{−1}, (−ω²[αKi+(1−α)Ki′][αJj+(1−α)Jj′]−ω²[αKj+(1−α)Kj′][αJi+(1−α)Ji′]+√Δ2) × (2ω²[αKi+(1−α)Ki′][αKj+(1−α)Kj′])^{−1}}.

4. Examples

In this section, we give four numerical examples to show that our results are better than previous ones.

Example 1.

Let (44) H = (1 1/3 1/3; 1/3 1 1/3; 1/3 1/3 1) = (I−B1 D; C I−B2), where (45) I−B1 = (1 1/3; 1/3 1), I−B2 = (1).

It is easy to know that H∈SDD(1/2) and (46) J = (0 −1/3 −1/3; −1/3 0 −1/3; −1/3 −1/3 0), K = (0 0 0; 0 0 0; 4/9 4/9 2/9), J1=J2=J3=2/3, K1=K2=0, K3=10/9, J1′=J2′=J3′=2/3, K1′=K2′=4/9, K3′=2/9. So (47) N1={1,2}, N2={3}, (1/2)JiJj + (1/2)Ji′Jj′ = 4/9 < 1, i,j∈N, i≠j.

By Theorem 4, we obtain the following regions of convergence:

0<ω≤1 and |r| < (√31 − 4)/2, or

1<ω<6/5 and |r| < ((27/4)((2−ω)/ω)² + 1)^{1/2} − 2.

In addition, H∈SDD.

By Theorem 2 of paper [6], we obtain the following regions of convergence:

0<ω≤1 and |r| < 3/4, or

1<ω<6/5 and |r| < (27/20)((2−ω)/ω)² − 3/5.

From Figure 1, we know that the regions of convergence obtained by Theorem 4 in this paper (bounded by blue lines) are larger than those obtained by Theorem 2 of paper [6] (bounded by green lines). From Example 1 of paper [6], we know that the regions of convergence obtained by Theorem 2 of paper [6] are larger than those of paper [4] and paper [5]. So, the regions of convergence obtained by Theorem 4 in this paper are larger than those of papers [4–6].
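The claim can also be checked directly: sampling (ω, r) inside the regions given by Theorem 4 for this example and computing ρ(Lω,r) numerically always yields a spectral radius below 1. A short NumPy check (our own, not from the paper; the sample points are arbitrary choices inside the stated regions):

```python
import numpy as np

# J and K of Example 1
J = np.array([[0.0, -1/3, -1/3], [-1/3, 0.0, -1/3], [-1/3, -1/3, 0.0]])
K = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [4/9, 4/9, 2/9]])

def rho(omega, r):
    # spectral radius of L_{omega,r} = (1-omega)I + omega*J + omega*r*K
    L = (1.0 - omega) * np.eye(3) + omega * J + omega * r * K
    return max(abs(np.linalg.eigvals(L)))

r_edge = (np.sqrt(31) - 4) / 2      # ~0.784, the bound for 0 < omega <= 1
# sample points inside the two regions of Theorem 4 for this example
samples = [(1.0, 0.99 * r_edge), (0.7, 0.5), (1.1, 0.2)]
radii = [rho(w, r) for w, r in samples]
```

All three sampled spectral radii come out strictly below 1, so the GAOR method converges at these parameter choices.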

Example 2.

Let (48) H = (1 1/4 3/4; 1/4 1 3/4; 1/4 1/4 1) = (I−B1 D; C I−B2), where (49) I−B1 = (1 1/4; 1/4 1), I−B2 = (1).

It is easy to know that H∈SDD(1/2) and (50) J = (0 −1/4 −3/4; −1/4 0 −3/4; −1/4 −1/4 0), K = (0 0 0; 0 0 0; 5/16 5/16 6/16), J1=J2=1, J3=1/2, J1′=J2′=1/2, J3′=3/2, K1=K2=0, K3=1, K1′=K2′=5/16, K3′=3/8. So, (51) N1={1,2}, N2={3}, (1/2)JiJj + (1/2)Ji′Jj′ = 5/8 < 1, i,j∈N, i≠j.

By Theorem 4, we obtain the following regions of convergence:

0<ω≤1 and |r| < (4/55)(√2289 − 43), or

1<ω<4(2−√3) and |r| < (4/55)((7040(1/ω − 1/2)² + 529)^{1/2} − 43).

But H∉SDD, so we cannot use Theorem 2 of paper [6], and we cannot use the results of papers [4, 5].
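Although the SDD-based results do not apply here, the bound of Theorem 4 for 0 < ω ≤ 1 is easily reproduced numerically from J and K. The following NumPy fragment is our own check, not part of the paper:

```python
import numpy as np

# J and K of Example 2
J = np.array([[0.0, -0.25, -0.75], [-0.25, 0.0, -0.75], [-0.25, -0.25, 0.0]])
K = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [5/16, 5/16, 6/16]])
alpha = 0.5

# convex combinations of row and column sums used throughout Theorem 4
a = alpha * np.abs(J).sum(axis=1) + (1 - alpha) * np.abs(J).sum(axis=0)
b = alpha * np.abs(K).sum(axis=1) + (1 - alpha) * np.abs(K).sum(axis=0)

def pair_root(i, j):
    # positive root of the quadratic (29) in |r| for the pair (i, j)
    qa = b[i] * b[j]
    qb = b[i] * a[j] + b[j] * a[i]
    qc = a[i] * a[j] - 1.0
    return (-qb + np.sqrt(qb * qb - 4 * qa * qc)) / (2 * qa)

r_max = min(pair_root(i, j) for i in range(3) for j in range(3) if i != j)
```

Here r_max agrees with (4/55)(√2289 − 43) ≈ 0.352, attained by the pairs involving index 3.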

Example 3.

Let (52) H = (1 1/3 1/6 0; 0 1 7/6 1/3; 1/3 0 1 1/6; 0 1/6 1/3 1) = (I−B1 D; C I−B2), where (53) I−B1 = (1 1/3; 0 1), I−B2 = (1 1/6; 1/3 1).

Obviously, H∈SDD(1) and (54) J = (0 −1/3 −1/6 0; 0 0 −7/6 −1/3; −1/3 0 0 −1/6; 0 −1/6 −1/3 0), K = (0 0 0 0; 0 0 0 0; 1/3 1/9 1/18 0; 0 1/6 7/36 1/18).

By Theorem 4, we obtain the following regions of convergence:

0<ω≤1 and |r| < 1/3, or

1<ω<8−4√3 and |r| < (4/3)((2−ω)/ω)² − 1.

Consider the following linear system: (55) Hy=f, where (56) H = (1 1/3 1/6 0; 0 1 7/6 1/3; 1/3 0 1 1/6; 0 1/6 1/3 1), f = (3/2, 5/2, 3/2, 3/2)ᵀ, which has the exact solution (57) y = (1,1,1,1)ᵀ.

We choose the initial solution y(0) = (0,0,0,0)ᵀ. If ω=101/100 and r=−1/5, we need 28 iterations to achieve four decimal digits of accuracy in the solution components; if ω=101/100 and r=1/5, we need 18 iterations; if ω=1/2 and r=1/4, we need 22 iterations.
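The experiment above is easy to reproduce in outline. A compact NumPy run of the GAOR iteration for system (55) follows (our own code; since the exact stopping test of the original experiment is not specified, we simply iterate a fixed number of steps and check the limit):

```python
import numpy as np

H = np.array([[1.0, 1/3, 1/6, 0.0],
              [0.0, 1.0, 7/6, 1/3],
              [1/3, 0.0, 1.0, 1/6],
              [0.0, 1/6, 1/3, 1.0]])
f = np.array([1.5, 2.5, 1.5, 1.5])

# Split H = (I-B1 D; C I-B2) into 2x2 blocks and build L_{omega,r} as in (5).
B1, D = np.eye(2) - H[:2, :2], H[:2, 2:]
C, B2 = H[2:, :2], np.eye(2) - H[2:, 2:]
omega, r = 0.5, 0.25            # inside the region 0 < omega <= 1, |r| < 1/3
J = np.block([[B1, -D], [-C, B2]])
K = np.vstack([np.zeros((2, 4)), np.hstack([C @ (np.eye(2) - B1), C @ D])])
L = (1.0 - omega) * np.eye(4) + omega * J + omega * r * K
rhs = omega * (np.block([[np.eye(2), np.zeros((2, 2))],
                         [-r * C, np.eye(2)]]) @ f)

y = np.zeros(4)
for _ in range(500):
    y = L @ y + rhs
```

The iterate converges to the exact solution (1, 1, 1, 1)ᵀ, consistent with the convergence region of Theorem 4 for this example.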

Example 4.

Consider the 51×51 linear system with coefficient matrix (58) H = (I−B1 D; C I−B2),

where (59) I−B1 = (1 1/96; −1/2 1) and D = (0 ⋯ 0 1/96; 1/96 0 ⋯ 0 1/96) is 2×49. Obviously, H∈SDD(1/2), and (60) J = (B1 −D; −C B2) and K = (0 0; C(I−B1) CD) are the corresponding 51×51 matrices formed as in (5).

By Theorem 4, we obtain the following regions of convergence:

0<ω≤1 and |r| < 2208/3481, or

1<ω<192/169 and |r| < (18432 − 16224ω)/(3481ω).

Acknowledgments

The authors would like to thank the referees for their valuable comments and suggestions, which greatly improved the original version of this paper. This work was supported by the National Natural Science Foundation of China (No. 11001144) and the Natural Science Foundation of Shandong Province (No. ZR2012AL09).

References

[1] S. R. Searle, G. Casella, and C. E. McCulloch, Variance Components, John Wiley & Sons, New York, NY, USA, 1992.
[2] J. Y. Yuan, "Numerical methods for generalized least squares problems," Journal of Computational and Applied Mathematics, vol. 66, pp. 571–584, 1996.
[3] J.-Y. Yuan and X.-Q. Jin, "Convergence of the generalized AOR method," Applied Mathematics and Computation, vol. 99, no. 1, pp. 35–46, 1999.
[4] M. T. Darvishi and P. Hessari, "On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices," Applied Mathematics and Computation, vol. 176, no. 1, pp. 128–133, 2006.
[5] G.-X. Tian, T.-Z. Huang, and S.-Y. Cui, "Convergence of generalized AOR iterative method for linear systems with strictly diagonally dominant matrices," Journal of Computational and Applied Mathematics, vol. 213, no. 1, pp. 240–247, 2008.
[6] G. Wang, H. Wen, L. L. Li, and X. Li, "Convergence of GAOR method for doubly diagonally dominant matrices," Applied Mathematics and Computation, vol. 217, no. 18, pp. 7509–7514, 2011.
[7] X. X. Zhou, Y. Z. Song, L. Wang, and Q. S. Liu, "Preconditioned GAOR methods for solving weighted linear least squares problems," Journal of Computational and Applied Mathematics, vol. 224, no. 1, pp. 242–249, 2009.
[8] M. T. Darvishi, P. Hessari, and B.-C. Shin, "Preconditioned modified AOR method for systems of linear equations," International Journal for Numerical Methods in Biomedical Engineering, vol. 27, no. 5, pp. 758–769, 2011.
[9] M. Li and Y. X. Sun, "Discussion on α-diagonally dominant matrices and their applications," Chinese Journal of Engineering Mathematics, vol. 26, no. 5, pp. 941–945, 2009.