Three kinds of preconditioners are proposed to accelerate the generalized AOR (GAOR) method for the linear system arising from the generalized least squares problem. Convergence and comparison results are obtained, and the comparison results show that the preconditioned generalized AOR (PGAOR) methods converge faster than the original GAOR methods. Finally, some numerical results are reported to confirm the validity of the proposed methods.
1. Introduction
Consider the generalized least squares problem
(1)minx∈ℝn(Ax-b)TW-1(Ax-b),
where x∈ℝn, A∈ℝn×n, b∈ℝn, and the variance-covariance matrix W∈ℝn×n is a known symmetric positive-definite matrix. This problem arises in many scientific applications; one of them is parameter estimation in mathematical models [1, 2].
To solve this problem, one can instead solve an equivalent linear system of the form
(2)Fy=f,
where
(3) F = \begin{pmatrix} I-B & H \\ K & I-C \end{pmatrix},
with B∈ℝp×p, C∈ℝq×q, and p+q=n. Without loss of generality, we write F = ℐ - ℒ - 𝒰, where ℐ is the identity matrix, ℒ is the strictly lower (block) triangular part of the splitting, and 𝒰 is the remaining (block) upper triangular part, so that
(4) ℐ = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix}, ℒ = \begin{pmatrix} 0 & 0 \\ -K & 0 \end{pmatrix}, 𝒰 = \begin{pmatrix} B & -H \\ 0 & C \end{pmatrix}.
In order to obtain approximate solutions of the linear system (2), many iterative methods such as Jacobi, Gauss-Seidel (GS), successive over relaxation (SOR), and accelerated over relaxation (AOR) have been studied by many authors [3–8]. These iterative methods perform well, but they have a serious drawback: they require computing the inverses of I-B and I-C in (3). To avoid this drawback, Darvishi and Hessari [9] studied the convergence of the generalized AOR (GAOR) method when the coefficient matrix F is diagonally dominant. The GAOR method [10, 11] can be defined as follows:
(5)yk+1=𝒯γωyk+ωg,k=0,1,2,…,
where
(6) 𝒯γω = \begin{pmatrix} I & 0 \\ γK & I \end{pmatrix}^{-1} \left[ (1-ω)ℐ + (ω-γ)\begin{pmatrix} 0 & 0 \\ -K & 0 \end{pmatrix} + ω\begin{pmatrix} B & -H \\ 0 & C \end{pmatrix} \right], \qquad g = \begin{pmatrix} I & 0 \\ -γK & I \end{pmatrix} f.
Here, ω and γ are real parameters with ω≠0. The iteration matrix can be written out explicitly as
(7) 𝒯γω = \begin{pmatrix} (1-ω)I+ωB & -ωH \\ ω(γ-1)K-γωKB & (1-ω)I+ωC+ωγKH \end{pmatrix}.
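To make the scheme concrete, one GAOR sweep built from (5)–(7) can be sketched in Python with NumPy. This is an illustrative sketch, not code from the paper: the function name `gaor_step` and the tiny 2×2 test system at the bottom are our own choices.

```python
import numpy as np

def gaor_step(B, H, K, C, f, y, omega, gamma):
    """One GAOR sweep y_{k+1} = T y_k + omega*g for F y = f, where
    F = [[I-B, H], [K, I-C]] is split as F = I - L - U as in (4)."""
    p = B.shape[0]
    n = p + C.shape[0]
    L = np.zeros((n, n)); L[p:, :p] = -K               # strictly lower block of the splitting
    U = np.zeros((n, n))
    U[:p, :p] = B; U[:p, p:] = -H; U[p:, p:] = C       # B, -H, C blocks
    M = np.eye(n); M[p:, :p] = gamma * K               # M = I - gamma*L
    T = np.linalg.solve(M, (1 - omega) * np.eye(n)
                        + (omega - gamma) * L + omega * U)
    g = np.linalg.solve(M, f)                          # g = (I - gamma*L)^{-1} f
    return T @ y + omega * g

# Tiny illustrative system (ours): p = q = 1, F = [[1/2, -1/8], [-1/8, 1/2]].
B = np.array([[0.5]]); C = np.array([[0.5]])
H = np.array([[-0.125]]); K = np.array([[-0.125]])
F = np.block([[np.eye(1) - B, H], [K, np.eye(1) - C]])
f = np.array([1.0, 1.0])
y = np.zeros(2)
for _ in range(500):
    y = gaor_step(B, H, K, C, f, y, omega=0.9, gamma=0.8)
```

The fixed point of the sweep solves F y = f, since M minus the bracketed matrix in (6) equals ωF.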
To improve the convergence rate of the GAOR iterative method, a preconditioner should be applied. Now we can transform the original linear system (2) into the preconditioned linear system
(8)PFy=Pf,
where P is the preconditioner. PF can be expressed as
(9) PF = \begin{pmatrix} I-B^* & H^* \\ K^* & I-C^* \end{pmatrix}.
Meanwhile, the PGAOR method for solving the preconditioned linear system (8) is defined by
(10) y_{k+1} = 𝒯γω^* y_k + ωg^*, k = 0,1,2,…,
where
(11) 𝒯γω^* = \begin{pmatrix} (1-ω)I+ωB^* & -ωH^* \\ ω(γ-1)K^*-γωK^*B^* & (1-ω)I+ωC^*+γωK^*H^* \end{pmatrix}, \qquad g^* = \begin{pmatrix} I & 0 \\ -γK^* & I \end{pmatrix} Pf.
In this paper, we propose three new types of preconditioners and study the convergence rate of the preconditioned GAOR methods for solving the linear system (2). This paper is organized as follows. In Section 2, some notations, definitions, and preliminary results are presented. In Section 3, three new types of preconditioners are proposed, and the convergence of the resulting PGAOR methods is compared with that of the original GAOR method. Lastly, numerical examples are provided in Section 4 to confirm the theoretical results.
2. Preliminaries
For a vector x∈ℝn, x≥0 (x>0) denotes that all components of x are nonnegative (positive). For two vectors x,y∈ℝn, x≥y (x>y) means that x-y≥0 (x-y>0). These definitions carry over immediately to matrices. A matrix A is said to be irreducible if the directed graph of A is strongly connected. ρ(A) denotes the spectral radius of A. Some useful results are provided as follows.
Lemma 1 (see [7]).
Let A≥0 be an irreducible matrix. Then,
(1) A has a positive eigenvalue equal to its spectral radius;
(2) A has an eigenvector x>0 corresponding to ρ(A);
(3) ρ(A) is a simple eigenvalue of A.
Lemma 2 (see [12]).
Let A be a nonnegative matrix. Then,
(1) if αx≤Ax for some nonnegative vector x, x≠0, then α≤ρ(A);
(2) if Ax≤βx for some positive vector x, then ρ(A)≤β. Moreover, if A is irreducible and if 0≠αx≤Ax≤βx for some nonnegative vector x, then α≤ρ(A)≤β and x is a positive vector.
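Lemma 2 can be checked numerically on a small example. The toy matrix below is ours, not from the paper; it bounds the spectral radius of a nonnegative irreducible matrix by componentwise ratios.

```python
import numpy as np

# Lemma 2 in action: for nonnegative irreducible A and positive x,
# min_i (Ax)_i / x_i  <=  rho(A)  <=  max_i (Ax)_i / x_i.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])                 # nonnegative and irreducible
x = np.ones(2)
alpha = (A @ x / x).min()                  # alpha * x <= A x componentwise
beta = (A @ x / x).max()                   # A x <= beta * x componentwise
rho = max(abs(np.linalg.eigvals(A)))       # here rho(A) = 1 + sqrt(6)
```

With x = (1,1) this gives α = 3 and β = 4, bracketing ρ(A) = 1+√6 ≈ 3.449.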
3. Preconditioned GAOR Methods
To solve the linear system (2) with the coefficient matrix F in (3), we consider the preconditioners as follows:
(12) P_i = ℐ + \bar{S}_i = \begin{pmatrix} I & 0 \\ S_i & I \end{pmatrix}, i = 1,2,3,
where
(13) S_1 = \begin{pmatrix} 0 & 0 & \cdots & -k_{1p} \\ 0 & 0 & \cdots & -k_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & -k_{qp} \end{pmatrix}, \quad
S_2 = \begin{pmatrix} 0 & -k_{12} & & & \\ -k_{21} & 0 & -k_{23} & & \\ & -k_{32} & 0 & \ddots & \\ & & \ddots & \ddots & -k_{q-1,p} \\ & & & -k_{q,p-1} & 0 \end{pmatrix}, \quad
S_3 = \begin{pmatrix} 0 & -k_{12} & & \\ & 0 & \ddots & \\ & & \ddots & -k_{q-1,p} \\ -αk_{q1} & & & 0 \end{pmatrix} \ (α>0).
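The three blocks S_i can be generated mechanically from K. The helper below is a sketch following the patterns in (13); the function name `make_S`, its default α, and the sample K are our own illustrative choices.

```python
import numpy as np

def make_S(K, which, alpha=1.5):
    """Build S_1, S_2, or S_3 of (13) from the q x p block K (sketch)."""
    q, p = K.shape
    S = np.zeros((q, p))
    if which == 1:                    # S_1: negate the last column of K
        S[:, p - 1] = -K[:, p - 1]
    elif which == 2:                  # S_2: negate the entries next to the diagonal
        for i in range(q):
            if i - 1 >= 0:
                S[i, i - 1] = -K[i, i - 1]
            if i + 1 < p:
                S[i, i + 1] = -K[i, i + 1]
    else:                             # S_3: negated superdiagonal plus -alpha*k_{q1}
        for i in range(q - 1):
            S[i, i + 1] = -K[i, i + 1]
        S[q - 1, 0] = -alpha * K[q - 1, 0]
    return S

# Example (ours): q = p = 2.
K = np.array([[-1.0, -2.0], [-3.0, -4.0]])
S1, S2, S3 = make_S(K, 1), make_S(K, 2), make_S(K, 3, alpha=2.0)
```

The preconditioner itself is then P_i = [[I, 0], [S_i, I]] as in (12).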
The preconditioned coefficient matrix PiF can be expressed as
(14) P_iF = \begin{pmatrix} I-B & H \\ K+S_i(I-B) & S_iH+I-C \end{pmatrix}, i = 1,2,3,
where the entries of K+S_i(I-B), i = 1,2,3, are given by
(15)
[K+S_1(I-B)]_{jl} = k_{jl} + k_{jp}b_{pl} for l≠p, and [K+S_1(I-B)]_{jp} = k_{jp}b_{pp};
[K+S_2(I-B)]_{jl} = k_{jl} + k_{j,j-1}b_{j-1,l} + k_{j,j+1}b_{j+1,l} for l∉{j-1,j+1}, and [K+S_2(I-B)]_{jl} = k_{j,j-1}b_{j-1,l} + k_{j,j+1}b_{j+1,l} for l∈{j-1,j+1}, terms with out-of-range indices being omitted;
[K+S_3(I-B)]_{jl} for j<q is given by the S_2 formula with only the k_{j,j+1} term retained, while for the last row [K+S_3(I-B)]_{ql} = k_{ql} + αk_{q1}b_{1l} for l≠1 and [K+S_3(I-B)]_{q1} = k_{q1}(1-α(1-b_{11})).
Based on the above discussion, P_iF can be split as
(16)PiF=ℐ-ℒi-𝒰i,i=1,2,3.
Similarly,
(17) ℐ = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix}, ℒ_i = \begin{pmatrix} 0 & 0 \\ -K-S_i(I-B) & 0 \end{pmatrix}, 𝒰_i = \begin{pmatrix} B & -H \\ 0 & C-S_iH \end{pmatrix}, i = 1,2,3.
The preconditioned GAOR methods for solving PiFy=Pif are defined by
(18)yk+1=𝒯γωi*yk+ωgi*,k=0,1,2,…,i=1,2,3,
where
(19)ωgi*=(ℐ-Γℒi)-1ΩPif,i=1,2,3,
with
(20) 𝒯γωi^* = (ℐ-Γℒ_i)^{-1}(ℐ-Ω+(Ω-Γ)ℒ_i+Ω𝒰_i), i = 1,2,3,
where
(21) Ω = \begin{pmatrix} ω_1I & 0 \\ 0 & ω_2I \end{pmatrix}, \qquad Γ = \begin{pmatrix} γ_1I & 0 \\ 0 & γ_2I \end{pmatrix}.
For i = 1,2,3, we have
(22) 𝒯γωi^* = \begin{pmatrix} (1-ω_1)I+ω_1B & -ω_1H \\ (ω_1γ_2-ω_2)[K+S_i(I-B)] - ω_1γ_2[K+S_i(I-B)]B & (1-ω_2)I+ω_2C-ω_2S_iH+ω_1γ_2[K+S_i(I-B)]H \end{pmatrix}.
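Since the PGAOR iteration matrix (22) is exactly the GAOR matrix (20) built from the splitting of P_iF, one routine suffices to compute both spectral radii. The sketch below is ours (the function name is not from the paper); it takes γ_1 = 0, which is harmless because ℒ_i has no nonzero entries in its first block row.

```python
import numpy as np

def gaor_spectral_radius(F, p, w1, w2, g2):
    """Spectral radius of the iteration matrix (20) for a 2x2-block matrix F,
    split as F = I - L - U per (16)-(17); gamma_1 is set to 0 (irrelevant)."""
    n = F.shape[0]
    L = np.zeros((n, n)); L[p:, :p] = -F[p:, :p]       # minus the lower-left block
    U = np.eye(n) - F - L                              # remaining B, -H, C blocks
    Om = np.diag([w1] * p + [w2] * (n - p))
    Ga = np.diag([0.0] * p + [g2] * (n - p))
    T = np.linalg.solve(np.eye(n) - Ga @ L,
                        np.eye(n) - Om + (Om - Ga) @ L + Om @ U)
    return max(abs(np.linalg.eigvals(T)))

# Tiny illustration (ours): applying P_1 shrinks the spectral radius here,
# in line with Theorem 3 below.
F = np.array([[0.5, -0.125], [-0.125, 0.5]])
rho = gaor_spectral_radius(F, 1, 0.9, 0.9, 0.8)
P1 = np.array([[1.0, 0.0], [0.125, 1.0]])              # S_1 = (0.125), per (12)-(13)
rho_pre = gaor_spectral_radius(P1 @ F, 1, 0.9, 0.9, 0.8)
```

Passing P_iF to the same routine reproduces (22), because its splitting has lower-left block K+S_i(I-B) and upper block C-S_iH.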
Next, we study the convergence of the PGAOR methods. Without loss of generality, we assume that
(23) H≤0, K≤0, B≥0, C≥0, 0<ω_1≤1, 0<ω_2≤1, 0<γ_2≤ω_2/ω_1.
Then, we have the following theorem.
Theorem 3.
Let 𝒯γω and 𝒯γω1^* be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), defined by (7) and (22), respectively. If the matrix F in (3) is irreducible, then ρ(𝒯γω)≠1 and
(24) ρ(𝒯γω1^*)>ρ(𝒯γω)>1 if ρ(𝒯γω)>1; ρ(𝒯γω1^*)<ρ(𝒯γω)<1 if ρ(𝒯γω)<1.
Proof.
By a simple calculation on (7), one gets
(25) 𝒯γω = \begin{pmatrix} (1-ω_1)I+ω_1B & -ω_1H \\ (ω_1γ_2-ω_2)K & (1-ω_2)I+ω_2C \end{pmatrix} + ω_1γ_2\begin{pmatrix} 0 & 0 \\ -KB & KH \end{pmatrix}.
Since F is irreducible, 𝒯γω is nonnegative and irreducible under the above assumptions, and the same argument shows that 𝒯γω1^* is nonnegative and irreducible. By Lemma 1, there exists a positive vector x>0 such that
(26)𝒯γωx=λx,
where λ=ρ(𝒯γω).
One can easily have
(27)[ℐ-Ω+(Ω-Γ)ℒ+Ω𝒰]x=λ(ℐ-Γℒ)x.
That is,
(28) (ℐ-Ω)x = λ(ℐ-Γℒ)x - (Ω-Γ)ℒx - Ω𝒰x.
With the same vector x>0, it holds
(29) 𝒯γω1^*x - λx = (ℐ-Γℒ_1)^{-1}[ℐ-Ω+(Ω-Γ)ℒ_1+Ω𝒰_1-λ(ℐ-Γℒ_1)]x.
Using (22), (26), and (28), we can obtain
(30) 𝒯γω1^*x - λx = (ℐ-Γℒ_1)^{-1}[(Ω-Γ)(ℒ_1-ℒ)+Ω(𝒰_1-𝒰)+λΓ(ℒ_1-ℒ)]x
= (ℐ-Γℒ_1)^{-1}[Ω(𝒰_1-𝒰+ℒ_1-ℒ)+(λ-1)Γ(ℒ_1-ℒ)]x
= (ℐ-Γℒ_1)^{-1}\begin{pmatrix} 0 & 0 \\ -ω_2S_1(I-B) & -ω_2S_1H \end{pmatrix}x + (λ-1)(ℐ-Γℒ_1)^{-1}\begin{pmatrix} 0 & 0 \\ -γ_2S_1(I-B) & 0 \end{pmatrix}x.
Meanwhile, we have
(31) (ℐ-Γℒ_1)^{-1}\begin{pmatrix} 0 & 0 \\ -ω_2S_1(I-B) & -ω_2S_1H \end{pmatrix}x
= (ℐ-Γℒ_1)^{-1}\begin{pmatrix} 0 & 0 \\ (ω_2/ω_1)S_1 & 0 \end{pmatrix}\begin{pmatrix} -ω_1I+ω_1B & -ω_1H \\ 0 & 0 \end{pmatrix}x
= (ℐ-Γℒ_1)^{-1}\begin{pmatrix} 0 & 0 \\ (ω_2/ω_1)S_1 & 0 \end{pmatrix}(𝒯γω-ℐ)x
= (λ-1)(ℐ-Γℒ_1)^{-1}\begin{pmatrix} 0 & 0 \\ (ω_2/ω_1)S_1 & 0 \end{pmatrix}x.
Therefore,
(32) 𝒯γω1^*x - λx = (λ-1)(ℐ-Γℒ_1)^{-1}\left[\begin{pmatrix} 0 & 0 \\ (ω_2/ω_1)S_1 & 0 \end{pmatrix}+\begin{pmatrix} 0 & 0 \\ -γ_2S_1(I-B) & 0 \end{pmatrix}\right]x
= (λ-1)\begin{pmatrix} I & 0 \\ -γ_2[K+S_1(I-B)] & I \end{pmatrix}\begin{pmatrix} 0 & 0 \\ (ω_2/ω_1-γ_2)S_1+γ_2S_1B & 0 \end{pmatrix}x
= (λ-1)\begin{pmatrix} 0 & 0 \\ (ω_2/ω_1-γ_2)S_1+γ_2S_1B & 0 \end{pmatrix}x.
In view of the above assumptions, the vector
(33) \begin{pmatrix} 0 & 0 \\ (ω_2/ω_1-γ_2)S_1+γ_2S_1B & 0 \end{pmatrix}x
is nonnegative and nonzero.
Then, if λ=ρ(𝒯γω)>1, then
(34) 𝒯γω1^*x-λx≥0, 𝒯γω1^*x-λx≠0.
From Lemma 2, we get
(35) ρ(𝒯γω1^*)>ρ(𝒯γω)>1.
Similarly, if λ=ρ(𝒯γω)<1, then
(36) 𝒯γω1^*x-λx≤0, 𝒯γω1^*x-λx≠0.
So we have
(37) ρ(𝒯γω1^*)<ρ(𝒯γω)<1.
If λ=ρ(𝒯γω)=1, then 1 is an eigenvalue of 𝒯γω, which forces Fx=0 for some x≠0; this contradicts the nonsingularity of F. This completes the proof of the theorem.
Theorem 4.
Let 𝒯γω and 𝒯γω2* be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), which are defined by (7) and (22), respectively. If the matrix F in (3) is an irreducible matrix satisfying
(38) b_{11}>0, k_{ij}≠0 for |i-j|=1,
then ρ(𝒯γω)≠1 and
(39) ρ(𝒯γω2^*)>ρ(𝒯γω)>1 if ρ(𝒯γω)>1; ρ(𝒯γω2^*)<ρ(𝒯γω)<1 if ρ(𝒯γω)<1.
Proof.
One can easily prove this theorem by using similar arguments of Theorem 3.
Similarly, we have the following theorem.
Theorem 5.
Let 𝒯γω and 𝒯γω3* be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), which are defined by (7) and (22), respectively. If the matrix F in (3) is an irreducible matrix satisfying
(40) α>1, k_{q1}<0, b_{11}>0, k_{i,i+1}<0 for i=1,2,…,q-1,
then ρ(𝒯γω)≠1 and
(41) ρ(𝒯γω3^*)>ρ(𝒯γω)>1 if ρ(𝒯γω)>1; ρ(𝒯γω3^*)<ρ(𝒯γω)<1 if ρ(𝒯γω)<1.
4. Numerical Examples
In this section, we give numerical examples to demonstrate the conclusions drawn above. The numerical experiments were done by using MATLAB 7.0.
Example 1.
Consider the following Laplace equation:
(42)∂2u(x,y)∂x2+∂2u(x,y)∂y2=0.
On a uniform square domain, applying the five-point finite difference method with a uniform mesh size yields the linear system
(43)ℱx=f,
where
(44) ℱ = \begin{pmatrix}
1/2 & 0 & 0 & 0 & 0 & 0 & 0 & -1/8 & 0 & -1/8 & -1/8 \\
0 & 1/2 & 0 & 0 & 0 & 0 & 0 & -1/8 & 0 & 0 & 0 \\
0 & 0 & 1/2 & 0 & 0 & 0 & -1/8 & -1/8 & -1/8 & -1/8 & 0 \\
0 & 0 & 0 & 1/2 & 0 & 0 & -1/8 & -1/8 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1/2 & 0 & 0 & 0 & -1/8 & -1/8 & 0 \\
0 & 0 & 0 & 0 & 0 & 1/2 & 0 & 0 & 0 & -1/8 & -1/8 \\
0 & 0 & -1/8 & -1/8 & 0 & 0 & 1/2 & 0 & 0 & 0 & 0 \\
-1/8 & -1/8 & -1/8 & -1/8 & 0 & 0 & 0 & 1/2 & 0 & 0 & 0 \\
0 & 0 & -1/8 & 0 & -1/8 & 0 & 0 & 0 & 1/2 & 0 & 0 \\
-1/8 & 0 & -1/8 & 0 & -1/8 & -1/8 & 0 & 0 & 0 & 1/2 & 0 \\
-1/8 & 0 & 0 & 0 & 0 & -1/8 & 0 & 0 & 0 & 0 & 1/2
\end{pmatrix}.
The coefficient matrix ℱ is split as
(45) ℱ = \begin{pmatrix} I-B & H \\ K & I-C \end{pmatrix},
where
(46) B = \tfrac{1}{2}I_6, \quad C = \tfrac{1}{2}I_5,
H = \begin{pmatrix}
0 & -1/8 & 0 & -1/8 & -1/8 \\
0 & -1/8 & 0 & 0 & 0 \\
-1/8 & -1/8 & -1/8 & -1/8 & 0 \\
-1/8 & -1/8 & 0 & 0 & 0 \\
0 & 0 & -1/8 & -1/8 & 0 \\
0 & 0 & 0 & -1/8 & -1/8
\end{pmatrix}, \quad
K = \begin{pmatrix}
0 & 0 & -1/8 & -1/8 & 0 & 0 \\
-1/8 & -1/8 & -1/8 & -1/8 & 0 & 0 \\
0 & 0 & -1/8 & 0 & -1/8 & 0 \\
-1/8 & 0 & -1/8 & 0 & -1/8 & -1/8 \\
-1/8 & 0 & 0 & 0 & 0 & -1/8
\end{pmatrix}.
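As a sanity check on Example 1, the blocks in (46) can be assembled and the spectral radii of the GAOR and S1-preconditioned iteration matrices compared. The helper below mirrors (20) and is our own sketch; we assert only the qualitative conclusion of Theorem 3, not the exact values in Table 1.

```python
import numpy as np

e = -1 / 8  # off-diagonal entry of Example 1

B = 0.5 * np.eye(6)
C = 0.5 * np.eye(5)
H = np.array([[0, e, 0, e, e],
              [0, e, 0, 0, 0],
              [e, e, e, e, 0],
              [e, e, 0, 0, 0],
              [0, 0, e, e, 0],
              [0, 0, 0, e, e]])
K = np.array([[0, 0, e, e, 0, 0],
              [e, e, e, e, 0, 0],
              [0, 0, e, 0, e, 0],
              [e, 0, e, 0, e, e],
              [e, 0, 0, 0, 0, e]])
F = np.block([[np.eye(6) - B, H], [K, np.eye(5) - C]])

def spectral_radius_T(F, p, w1, w2, g2):
    """rho of the GAOR iteration matrix (20) for the splitting (16)-(17)."""
    n = F.shape[0]
    L = np.zeros((n, n)); L[p:, :p] = -F[p:, :p]
    U = np.eye(n) - F - L
    Om = np.diag([w1] * p + [w2] * (n - p))
    Ga = np.diag([0.0] * p + [g2] * (n - p))
    T = np.linalg.solve(np.eye(n) - Ga @ L,
                        np.eye(n) - Om + (Om - Ga) @ L + Om @ U)
    return max(abs(np.linalg.eigvals(T)))

rho = spectral_radius_T(F, 6, 0.8912, 0.9654, 0.8865)

S1 = np.zeros((5, 6)); S1[:, 5] = -K[:, 5]      # preconditioner S1 of (13)
P1 = np.eye(11); P1[6:, :6] = S1
rho1 = spectral_radius_T(P1 @ F, 6, 0.8912, 0.9654, 0.8865)
```

The parameters are those of the first row of Table 1; the test checks ρ_PGAOR < ρ_GAOR < 1.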
Table 1 reports the spectral radii of the GAOR and PGAOR methods. The spectral radii of the PGAOR methods are smaller than those of the GAOR method, so the proposed three preconditioners accelerate the convergence of the GAOR method for the linear system (2). The results in Table 1 are in accordance with Theorems 3–5.
Table 1: Spectral radii of GAOR method and PGAOR method.

Preconditioner    α      ω1       ω2       γ2       ρ_GAOR    ρ_PGAOR
S1                -      0.8912   0.9654   0.8865   0.8478    0.8457
S2                -      0.8912   0.9654   0.8865   0.8478    0.8376
S3                1.5    0.8912   0.9654   0.8865   0.8478    0.8418
S3                2      0.8912   0.9654   0.8865   0.8478    0.8420
Example 2.
The coefficient matrix F in (3) is given by
(47) F = \begin{pmatrix} I-B & H \\ K & I-C \end{pmatrix},
where B=(b_{ij})∈ℝ^{p×p}, C=(c_{ij})∈ℝ^{q×q}, H=(h_{ij})∈ℝ^{p×q}, K=(k_{ij})∈ℝ^{q×p}, and p+q=n, with
(48)
b_{ii} = 1/(i+1), i=1,…,p,
b_{ij} = 1/30 - 1/(30j+i), j>i, i=1,…,p-1, j=2,…,p,
b_{ij} = 1/30 - 1/(30(i-j+1)+i), i>j, i=2,…,p, j=1,…,p-1,
c_{ii} = 1/(n+i+1), i=1,…,q,
c_{ij} = 1/30 - 1/(30(n+j)+n+i), j>i, i=1,…,q-1, j=2,…,q,
c_{ij} = 1/30 - 1/(30(i-j+1)+n+i), i>j, i=2,…,q, j=1,…,q-1,
k_{ij} = 1/(30(n+i-j+1)+n+i) - 1/30, i=1,…,q, j=1,…,p,
h_{ij} = 1/(30(n+j)+i) - 1/30, i=1,…,p, j=1,…,q.
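The matrix of Example 2 can be assembled directly from the entry formulas in (48). The block sizes for n = 8 are not spelled out in the text, so the split p = 6, q = 2 below is our assumption; the test checks only the sign pattern (23) required by the theorems.

```python
import numpy as np

p, q = 6, 2          # assumed split with n = p + q = 8
n = p + q

B = np.zeros((p, p)); C = np.zeros((q, q))
H = np.zeros((p, q)); K = np.zeros((q, p))

# Entries follow (48); indices i, j are 1-based as in the text.
for i in range(1, p + 1):
    for j in range(1, p + 1):
        if i == j:
            B[i-1, j-1] = 1 / (i + 1)
        elif j > i:
            B[i-1, j-1] = 1/30 - 1/(30*j + i)
        else:
            B[i-1, j-1] = 1/30 - 1/(30*(i - j + 1) + i)
for i in range(1, q + 1):
    for j in range(1, q + 1):
        if i == j:
            C[i-1, j-1] = 1 / (n + i + 1)
        elif j > i:
            C[i-1, j-1] = 1/30 - 1/(30*(n + j) + n + i)
        else:
            C[i-1, j-1] = 1/30 - 1/(30*(i - j + 1) + n + i)
for i in range(1, q + 1):
    for j in range(1, p + 1):
        K[i-1, j-1] = 1/(30*(n + i - j + 1) + n + i) - 1/30
for i in range(1, p + 1):
    for j in range(1, q + 1):
        H[i-1, j-1] = 1/(30*(n + j) + i) - 1/30

F = np.block([[np.eye(p) - B, H], [K, np.eye(q) - C]])
```

By construction B and C are nonnegative while H and K are negative, so the assumptions (23) on the signs of the blocks hold.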
Obviously, F is irreducible. Table 2 shows the spectral radii of the corresponding iteration matrices with n=8 and m=6.
Table 2: Spectral radii of GAOR method and PGAOR method.

Preconditioner    α      ω1     ω2=ω    γ2=γ    ρ_GAOR    ρ_PGAOR
S1                -      0.8    0.6     0.7     0.7505    0.6618
S2                -      0.8    0.6     0.7     0.7505    0.7340
S3                0.1    0.8    0.6     0.7     0.7505    0.7298
S3                0.5    0.8    0.6     0.7     0.7505    0.7328
Similarly, the results in Table 2 are consistent with Theorems 3–5.
Acknowledgments
The authors would like to thank the anonymous referees for their helpful suggestions, which greatly improved the paper. This research was supported by the NSFC Tianyuan Mathematics Youth Fund (no. 11026040), by the Science and Technology Development Plan of Henan Province (no. 122300410316), and by the Natural Science Foundations of Henan Province (no. 13A110022).
References
[1] J. Y. Yuan, Numerical methods for generalized least squares problem.
[2] J.-Y. Yuan, X.-Q. Jin, Convergence of the generalized AOR method.
[3] A. D. Gunawardena, S. K. Jain, L. Snyder, Modified iterative methods for consistent linear systems.
[4] A. Hadjidimos, Accelerated overrelaxation method.
[5] Y. T. Li, C. X. Li, S. L. Wu, Improvement of preconditioned AOR iterative methods for L-matrices.
[6] J. P. Milaszewicz, Improving Jacobi and Gauss-Seidel iterations.
[7] R. S. Varga.
[8] D. M. Young.
[9] M. T. Darvishi, P. Hessari, On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices.
[10] X. Zhou, Y. Song, L. Wang, Q. Liu, Preconditioned GAOR methods for solving weighted linear least squares problems.
[11] M. T. Darvishi, P. Hessari, B.-C. Shin, Preconditioned modified AOR method for systems of linear equations.
[12] A. Berman, R. J. Plemmons.