Advances in Numerical Analysis, Hindawi Publishing Corporation, 2013, Article ID 732032, doi:10.1155/2013/732032. Research Article: Convergent Homotopy Analysis Method for Solving Linear Systems. H. Nasabzadeh and F. Toutounian, School of Mathematical Sciences, Ferdowsi University of Mashhad, P.O. Box 1159-91775, Mashhad, Iran. Academic Editor: Ting-Zhu Huang. Received 16 June 2013; Accepted 22 August 2013; Published 8 October 2013. Copyright © 2013 H. Nasabzadeh and F. Toutounian. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

By using the homotopy analysis method (HAM), we introduce an iterative method for solving linear systems. The HAM can be used to accelerate the convergence of the basic iterative methods. We also show that, by applying the HAM to a divergent iterative scheme, it is possible to construct a convergent homotopy-series solution when the iteration matrix G of the scheme has particular properties, such as being symmetric or having real eigenvalues. Numerical experiments are given to show the efficiency of the new method.

1. Introduction

Computational simulation of scientific and engineering problems often depends on solving linear systems of equations. Such systems frequently arise from discrete approximations to partial differential equations. Systems of linear equations can be solved either by direct or by iterative methods. Iterative methods are ideally suited for large, sparse systems. For the numerical solution of a large nonsingular linear system (1) Au = b, where A ∈ ℝ^{n×n} is given, b ∈ ℝ^n is known, and u ∈ ℝ^n is unknown, one class of iterative methods is based on a splitting (M, N) of the matrix A, that is, (2) A = M − N, where M is taken to be invertible and cheap to invert, which means that a linear system with coefficient matrix M is much more economical to solve than (1). Based on (2), (1) can be written in the fixed-point form (3) u = Gu + c, G = M^{-1}N, c = M^{-1}b, which yields the following iterative scheme for the solution of (1): (4) u^{(k+1)} = Gu^{(k)} + c, k = 0, 1, 2, …, with u^{(0)} ∈ ℝ^n arbitrary.
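For concreteness, the splitting scheme (4) can be sketched in a few lines of NumPy. The matrix, right-hand side, and function names below are illustrative, with M taken as the Jacobi splitting M = diag(A):

```python
import numpy as np

def splitting_iteration(A, b, M, u0, tol=1e-10, max_iter=500):
    """Iterate u_{k+1} = G u_k + c with G = M^{-1} N, c = M^{-1} b, N = M - A."""
    N = M - A
    u = u0.astype(float)
    for k in range(max_iter):
        u_new = np.linalg.solve(M, N @ u + b)  # one step of scheme (4)
        if np.linalg.norm(u_new - u, np.inf) < tol:
            return u_new, k + 1
        u = u_new
    return u, max_iter

# Jacobi splitting M = diag(A) on a small diagonally dominant system.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
u, iters = splitting_iteration(A, b, np.diag(np.diag(A)), np.zeros(3))
```

Because A here is strictly diagonally dominant, ρ(G) < 1 and the iteration converges in a few dozen steps.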

A necessary and sufficient condition for (4) to converge to the solution of (1) is ρ(G) < 1, where ρ(G) denotes the spectral radius of G. Several effective splitting iterative methods and preconditioning techniques have been presented for solving the linear system (1). Recently, Keramati, Yusufoğlu, and Liu applied the homotopy perturbation method to obtain the solution of linear systems and deduced conditions for the convergence of the homotopy series. In this work, we show how the homotopy analysis method may be regarded as an acceleration procedure based on the iterative method (4). We observe that the basic method (4) need not be convergent: when ρ(G) > 1, it is sufficient that the eigenvalues λi = Re(λi) + i Im(λi), i = 1, …, n, of the iteration matrix G satisfy Re(λi) < 1 for all i (or Re(λi) > 1 for all i). When ρ(G) < 1, applying the homotopy analysis method to the basic iterative method (4) can improve its rate of convergence. This paper is organized as follows. In Section 2, we introduce the basic concept of the HAM, derive conditions for the convergence of the homotopy series, and apply the HAM to the Jacobi, Richardson, SSOR, and SAOR methods. In Section 3, some numerical examples are presented to show the efficiency of the method. Finally, we make some concluding remarks in Section 4.

2. Basic Idea of HAM

The homotopy analysis method (HAM) [13, 14] was first proposed by S. J. Liao in 1992 and was further developed and improved by Liao for nonlinear problems.

Here, we apply the homotopy analysis method (HAM) to problem (3) for finding the solution of (1) when det(A) ≠ 0. Consider (3), where u is the unknown vector of (1) and G is the iteration matrix of an iterative method. Let w0 denote an initial guess of the exact solution u, and let ℏ ≠ 0 be a convergence-control parameter. Then we can apply the homotopy analysis method and define H(u, p) so that (5) H(u, 0) = F(u), H(u, 1) = −ℏL(u), by means of the homotopy (6) H(u, p) = (1 − p)F(u) − pℏL(u) = 0, where (7) L(u) = (I − G)u − c, F(u) = u − w0, and p is an embedding parameter. Hence, it is obvious that (8) H(u, 0) = u − w0, H(u, 1) = −ℏ[(I − G)u − c], and, since ℏ ≠ 0, H(u, 1) = 0 if and only if L(u) = 0. As the embedding parameter p ∈ [0, 1] increases from 0 to 1, the solution of H(u, p) = 0 varies continuously from the initial approximation w0 to the exact solution u of the original equation L(u) = 0. The homotopy analysis method uses the parameter p as an expanding parameter to obtain (9) u = u0 + pu1 + p²u2 + ⋯, and it gives an approximation to the solution of (3) as (10) v = lim_{p→1}(u0 + pu1 + p²u2 + ⋯) = lim_{p→1} Σ_{k=0}^{∞} p^k uk = Σ_{k=0}^{∞} uk.

By substituting (9) into (6) and equating the terms with identical powers of p, we obtain (11) p⁰: u0 − w0 = 0; p¹: u1 − ℏ[(I − G)u0 − c] = 0; pⁱ: ui − u_{i−1} − ℏ(I − G)u_{i−1} = 0, i = 2, 3, ….

This implies that (12) u0 = w0, u1 = ℏ[(I − G)u0 − c], ui = [(ℏ + 1)I − ℏG]u_{i−1}, i = 2, 3, ….

Taking (13) G_ℏ = (ℏ + 1)I − ℏG yields (14) u1 = ℏ[(I − G)u0 − c], ui = G_ℏ u_{i−1}, i = 2, 3, …, where u0 is an initial guess of the exact solution u. Therefore, (15) v = u0 + Σ_{i=1}^{∞} pⁱ G_ℏ^{i−1} u1.

By setting p = 1, we obtain (16) v = u0 + Σ_{i=1}^{∞} G_ℏ^{i−1} u1.

It is obvious that if ρ(G_ℏ) < 1, then the series Σ_{i=1}^{∞} G_ℏ^{i−1} u1 converges, and we have (17) v = u0 + (I − G_ℏ)^{-1} u1 = u0 − ℏ^{-1}(I − G)^{-1} u1 = (I − G)^{-1} c, which is the exact solution of (3). The series of vectors can be computed by (14), and our aim is to choose the convergence-control parameter ℏ ≠ 0 so that ρ(G_ℏ) < 1. For improving the rate of convergence of the iterative method, we present the following theorem.
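The recursion (14) and the partial sums of (16) translate directly into code. The following is a minimal NumPy sketch (illustrative matrix and names); note that ℏ = −1 gives G_ℏ = G, and the partial sums then reproduce the basic scheme (4):

```python
import numpy as np

def ham_series(G, c, u0, hbar, n_terms=500):
    """Partial sum of the homotopy series (16):
    v = u0 + u1 + G_hbar u1 + G_hbar^2 u1 + ...,
    with G_hbar and u1 given by (13) and (14)."""
    n = G.shape[0]
    I = np.eye(n)
    G_h = (hbar + 1.0) * I - hbar * G      # eq. (13)
    u_i = hbar * ((I - G) @ u0 - c)        # u1, eq. (14)
    v = u0.astype(float).copy()
    for _ in range(n_terms):
        v = v + u_i
        u_i = G_h @ u_i                    # u_i = G_hbar u_{i-1}
    return v

# Jacobi iteration matrix of a small diagonally dominant system (illustrative).
G = np.array([[0.0, 0.25, 0.0],
              [0.25, 0.0, 0.25],
              [0.0, 0.25, 0.0]])
c = np.array([0.25, 0.5, 0.75])
v = ham_series(G, c, np.zeros(3), hbar=-1.0)  # hbar = -1 reproduces scheme (4)
```

Since ρ(G_ℏ) < 1 here, the partial sums converge to (I − G)^{-1} c, in agreement with (17).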

Theorem 1.

Suppose that ρ(G) < 1, and let λi = Re(λi) + i Im(λi) and μi, i = 1, 2, …, n, be the eigenvalues of G and G_ℏ, respectively. Let αi = (1 − Re(λi))² + Im(λi)², βi = 2(1 − Re(λi)), and gi(ℏ) = |μi|² − ρ(G)² = αiℏ² + βiℏ + 1 − ρ(G)², i = 1, 2, …, n. If αi ≠ 0 and βi − 2αi ≠ 0, i = 1, 2, …, n, then

(i) the quadratic equation gi(ℏ) = 0, i = 1, 2, …, n, has simple real roots γ1(i) < γ2(i) < 0;

(ii) ℏ = −1 belongs to the interval [max_{1≤i≤n} γ1(i), min_{1≤i≤n} γ2(i)], and ρ(G_{−1}) = ρ(G) < 1;

(iii) for each ℏ ∈ [max_{1≤i≤n} γ1(i), min_{1≤i≤n} γ2(i)] with ℏ ≠ −1, the relation ρ(G_ℏ) < ρ(G) < 1 holds.

Proof.

(i) We begin by defining the two index sets N1 = {i : |λi| = ρ(G)} and N2 = {i : |λi| ≠ ρ(G)}. Since αi > 0 and ρ(G) < 1, for i ∈ N2 we have (18) gi(ℏ) → +∞ as ℏ → −∞, gi(−1) = |λi|² − ρ(G)² < 0, gi(0) = 1 − ρ(G)² > 0.

So gi(ℏ), i ∈ N2, has simple real roots (19) γ1(i) < −1 < γ2(i) < 0.

For i ∈ N1, we have αi − βi = |λi|² − 1 = ρ(G)² − 1 < 0 and (20) gi(−1) = 0, gi((αi − βi)/αi) = 0.

So, by using the assumption βi − 2αi ≠ 0 (which guarantees (αi − βi)/αi ≠ −1), gi(ℏ), i ∈ N1, also has simple real roots γ1(i) < γ2(i) < 0, defined as follows: (21) γ1(i) = −1, γ2(i) = (αi − βi)/αi, if (αi − βi)/αi > −1; γ1(i) = (αi − βi)/αi, γ2(i) = −1, if (αi − βi)/αi < −1.

(ii) From part (i), we observe that −1 ∈ [max_{1≤i≤n} γ1(i), min_{1≤i≤n} γ2(i)] and gi(−1) ≤ 0 for all i. Since G_{−1} = G, this implies that ρ(G_{−1}) = ρ(G) < 1.

(iii) From part (i), we also observe that, for each ℏ ∈ [max_{1≤i≤n} γ1(i), min_{1≤i≤n} γ2(i)] with ℏ ≠ −1, we have gi(ℏ) < 0 for all i, that is, |μi| < ρ(G), and hence the relation ρ(G_ℏ) < ρ(G) < 1 holds.
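The proof is constructive: the roots γ1(i), γ2(i) can be computed numerically from the eigenvalues of G, giving the interval of admissible ℏ. A hedged NumPy sketch (assuming the hypotheses of Theorem 1 hold; the 3×3 diagonal G and all names are illustrative):

```python
import numpy as np

def hbar_interval(G):
    """Roots gamma1(i) < gamma2(i) of g_i and the interval
    [max_i gamma1(i), min_i gamma2(i)] from Theorem 1.
    Assumes rho(G) < 1, alpha_i != 0, and beta_i - 2*alpha_i != 0."""
    lam = np.linalg.eigvals(G)
    rho = np.max(np.abs(lam))
    g1, g2 = [], []
    for l in lam:
        a = (1 - l.real) ** 2 + l.imag ** 2        # alpha_i
        b = 2 * (1 - l.real)                       # beta_i
        r = np.sort(np.roots([a, b, 1 - rho ** 2]).real)
        g1.append(r[0])
        g2.append(r[1])
    return max(g1), min(g2)

# Illustrative G with eigenvalues 0.5, 0.2, -0.1, so rho(G) = 0.5 < 1.
G = np.diag([0.5, 0.2, -0.1])
lo, hi = hbar_interval(G)
h = 0.5 * (lo + hi)                # any h in [lo, hi] with h != -1
G_h = (h + 1) * np.eye(3) - h * G  # eq. (13)
rho_h = np.max(np.abs(np.linalg.eigvals(G_h)))
```

For this example the interval is roughly [−1.36, −1], and the midpoint yields ρ(G_ℏ) ≈ 0.41 < ρ(G) = 0.5, illustrating part (iii).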

The following theorem shows that, by applying the HAM to a divergent iterative scheme, it is possible to construct a convergent homotopy series when the iteration matrix G has particular properties.

Theorem 2.

Let λi = Re(λi) + i Im(λi) and μi, i = 1, 2, …, n, be the eigenvalues of G and G_ℏ, respectively. Let αi = (1 − Re(λi))² + Im(λi)² and βi = 2(1 − Re(λi)). Suppose that αi ≠ 0, i = 1, 2, …, n.

(i) If Re(λi) > 1 for i = 1, 2, …, n, and ℏ ∈ (0, min_{1≤i≤n}(−βi/αi)), then ρ(G_ℏ) < 1.

(ii) If Re(λi) < 1 for i = 1, 2, …, n, and ℏ ∈ (max_{1≤i≤n}(−βi/αi), 0), then ρ(G_ℏ) < 1.

Proof.

From (13), we have μi = 1 + ℏ(1 − Re(λi)) − iℏ Im(λi). So it is sufficient to have (22) |μi|² − 1 = αiℏ² + βiℏ < 0, for i = 1, 2, …, n.

Under the hypothesis of part (i), we have βi < 0, i = 1, 2, …, n, and relation (22) holds if ℏ ∈ (0, min_{1≤i≤n}(−βi/αi)); this completes the proof of (i). A similar argument holds if Re(λi) < 1 for i = 1, 2, …, n, and part (ii) follows.
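Theorem 2(i) is easy to check numerically. In the sketch below, the 2×2 upper-triangular G (an illustrative choice, with eigenvalues 1.5 and 2, both with real part greater than 1) defines a divergent scheme, yet a suitable ℏ makes the HAM series convergent:

```python
import numpy as np

# Illustrative divergent iteration matrix: eigenvalues 1.5 and 2.0,
# so rho(G) = 2 > 1 but Re(lambda_i) > 1 for all i (Theorem 2(i)).
G = np.array([[1.5, 0.3],
              [0.0, 2.0]])
lam = np.linalg.eigvals(G)
alpha = (1 - lam.real) ** 2 + lam.imag ** 2
beta = 2 * (1 - lam.real)              # negative, since Re(lambda_i) > 1
h = 0.5 * np.min(-beta / alpha)        # any h in (0, min_i(-beta_i/alpha_i))
G_h = (h + 1) * np.eye(2) - h * G      # eq. (13)
rho_h = np.max(np.abs(np.linalg.eigvals(G_h)))  # < 1: the series converges
```

Here min_i(−βi/αi) = 2, so ℏ = 1 gives G_ℏ = 2I − G with ρ(G_ℏ) = 0.5.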

When the assumption of part (i) (or part (ii)) of Theorem 2 does not hold, the following theorem shows that, in certain cases, instead of (3) we can consider the equivalent equation (23) u = G¯u + c¯, with G¯ = (G − I)² + I, c¯ = (G − I)c, in which the iteration matrix G¯ has eigenvalues with the desired properties.

Theorem 3.

Let λi = Re(λi) + i Im(λi) and νi = Re(νi) + i Im(νi), i = 1, 2, …, n, be the eigenvalues of G and G¯, respectively. If (1 − Re(λi))² − (Im(λi))² > 0 (or (1 − Re(λi))² − (Im(λi))² < 0) for i = 1, 2, …, n, then Re(νi) > 1 (or Re(νi) < 1) for i = 1, 2, …, n.

Proof.

The proof immediately follows from the fact that νi = (λi − 1)² + 1, so that Re(νi) = (1 − Re(λi))² − (Im(λi))² + 1.

The following corollary shows that, by using the modified linear equation (23) and the homotopy analysis method with the corresponding G¯_ℏ = (ℏ + 1)I − ℏG¯, we can construct a convergent homotopy series for the linear system (1).

Corollary 4.

If G has only real eigenvalues, then there exists ℏ ≠ 0 such that the series of vectors generated by (24) u1 = ℏ[(I − G¯)u0 − c¯], ui = G¯_ℏ u_{i−1}, i = 2, 3, …, converges to the exact solution of (1).

Proof.

The proof immediately follows from Theorems 2 and 3.

This corollary establishes that the series of vectors generated by (24) always converges if the iteration matrix G is symmetric. When A is symmetric with positive diagonal elements, (1) can be written as (D^{-1/2}AD^{-1/2})(D^{1/2}x) = D^{-1/2}b, where D is the diagonal of A. Denoting again by A, x, and b the expressions D^{-1/2}AD^{-1/2}, D^{1/2}x, and D^{-1/2}b, respectively, the new coefficient matrix A is still symmetric and can therefore be written in the form A = I − L − Lᵀ. Immediate consequences of Corollary 4 and the above discussion are the following results.

The series of vectors generated by (24) converges when A is a symmetric matrix and the iterative method is the Richardson method (25) u^{(k+1)} = (I − A)u^{(k)} + b.

The series of vectors generated by (24) converges when A = I − L − U is a symmetric matrix and the iterative method is the Jacobi method (26) u^{(k+1)} = (L + U)u^{(k)} + b.

If A = I − L − U is a symmetric matrix and the iterative method is the SAOR method (27) u^{(k+1)} = J_{r,ω} u^{(k)} + c with (28) J_{r,ω} = (I − rU)^{-1}[(1 − ω)I + (ω − r)U + ωL] (I − rL)^{-1}[(1 − ω)I + (ω − r)L + ωU], c = ω(I − rU)^{-1}[(2 − ω)I − (ω − r)(L + U)](I − rL)^{-1}b, then the series of vectors generated by (24) converges if (29) ω ∈ (0, 2), ω + (2 − ω)/μmin < r < ω + (2 − ω)/μmax, where μmin and μmax stand for the minimum and maximum eigenvalues of B = L + U, respectively. This result follows from the fact that, in this case, the iteration matrix J_{r,ω} of the SAOR method has real eigenvalues (see Theorem 2 of Hadjidimos and Yeyios).

If A=I-L-U is a symmetric matrix and the iterative method is SSOR method, then the series of vectors generated by (24) converges if ω(0,2). This result immediately follows from the fact that the SAOR method reduces to the SSOR method for r=ω.
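The whole construction of (23)-(24) can be exercised on a small example whose real eigenvalues lie on both sides of 1, so that Theorem 2 fails for G itself but applies to G¯. The diagonal G and all names below are illustrative:

```python
import numpy as np

# Illustrative G with real eigenvalues -1.2, 0.4, 1.6 on both sides of 1:
# Theorem 2 does not apply to G, but G_bar from (23) has all Re(nu_i) > 1.
n = 3
I = np.eye(n)
G = np.diag([-1.2, 0.4, 1.6])              # rho(G) = 1.6 > 1
c = np.ones(n)
G_bar = (G - I) @ (G - I) + I              # eq. (23)
c_bar = (G - I) @ c
nu = np.linalg.eigvals(G_bar)              # nu_i = (lambda_i - 1)^2 + 1 > 1
h = 0.5 * np.min(-2 * (1 - nu.real) / (1 - nu.real) ** 2)  # Theorem 2(i)
G_bar_h = (h + 1) * I - h * G_bar
# Partial sum of the series (24) with u0 = 0:
u_i = h * (-c_bar)                         # u1 = h*((I - G_bar) @ u0 - c_bar)
v = np.zeros(n)
for _ in range(2000):
    v = v + u_i
    u_i = G_bar_h @ u_i
# v now approximates the solution of u = G u + c.
```

Although the basic scheme diverges (ρ(G) = 1.6), the transformed series converges, with ρ(G¯_ℏ) ≈ 0.93, consistent with the remark that convergence can be slow in this regime.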

3. Numerical Examples

For numerical comparison, we use some matrices from the University of Florida Sparse Matrix Collection. These matrices and their properties are shown in Table 1. We determined the spectral radii of the iteration matrices of the classical SOR, AOR, SSOR, SAOR, Jacobi, and Richardson methods (ρ(G)), as well as those of the corresponding G_ℏ after applying the HAM with the experimentally computed optimal value of ℏ (ρ(G_ℏopt)). In Tables 2-4, we list ρ(G), ρ(G_ℏopt), the convergence interval introduced in Theorems 1 and 2, the experimentally computed optimal value ℏopt, and the spectral radius ρ(G¯_ℏopt) of the iteration matrix introduced in Corollary 4.

Test problem information.

Matrix Order nnz Symmetric Positive definite Condition number
cage5 37 233 No No 15.4166
pivtol 102 306 No No 109.607
pde225 225 1065 No No 39.0638
Si2 769 17801 Yes No 170.848
bfwb782 782 5982 Yes No 18.0724

The basic method is convergent (Theorem 1).

Matrix Method ρ(G) Convergence interval ℏopt ρ(G_ℏopt)
pde225 SOR (w=r=1) 0.9776 (−1, −0.0743) −0.7423 0.7768
pde225 AOR (r=1, w=0.8) 0.8053 (−1, −0.8096) −0.9400 0.7739
pde225 SSOR (w=r=0.5) 0.8048 (−1.4441, −1) −1.2941 0.7474
pde225 SAOR (r=0.5, w=1) 0.7773 (−1, −0.6038) −0.8920 0.6673
cage5 SOR (w=r=1) 0.3388 (−1.2814, −1) −1.1591 0.2314
cage5 AOR (r=1.2, w=1.9) 0.9240 (−1, −0.0589) −0.68 0.3355
cage5 SSOR (w=r=0.5) 0.6427 (−1.7235, −1) −1.5235 0.4557
cage5 SAOR (r=0.1, w=0.7) 0.5590 (−1.5609, −1) −1.3756 0.3890
pivtol SOR (w=r=0.1) 0.9487 (−3.9226, −1) −2.7526 0.8588
pivtol AOR (r=0.3, w=0.6) 0.9958 (−1, −0.0128) −0.7300 0.7626
pivtol SSOR (w=r=0.2) 0.8005 (−1.4850, −1) −1.3450 0.7317
pivtol SAOR (r=0.1, w=0.4) 0.9190 (−1, −0.2198) −0.83 0.6943
bfwb782 SOR (r=w=1) 0.3732 (−1.2270, −1) −1.1470 0.3024
bfwb782 AOR (r=1.2, w=1.9) 0.9942 (−1, −0.0045) −0.65 0.3988
bfwb782 SSOR (w=r=0.5) 0.5984 (−1.6665, −1) −1.4704 0.4103
bfwb782 SAOR (r=0.1, w=0.7) 0.5123 (−1.5123, −1) −1.3423 0.3453
bfwb782 Richardson 0.999998 (−8.5201×10^4, −1) −8.0701×10^4 0.8952

The basic method is divergent (Theorem 2).

Matrix Method ρ(G) Convergence interval ℏopt ρ(G_ℏopt)
pde225 SOR (r=ω=1.5) 2.4937 (−0.4318, 0) −0.3 0.7980
pde225 AOR (r=1, ω=1.5) 1.5542 (−0.6816, 0) −0.5 0.7745
pde225 SSOR (r=2.01, ω=2.01) 1.1112 (0, 17.9804) 12.0540 0.9388
pde225 SAOR (r=2, ω=3) 4.9844 (0, 0.3408) 0.2510 0.7741
cage5 SOR (r=ω=2) 1.0150 (−0.9872, 0) −0.4 0.7425
cage5 AOR (r=1.2, ω=3) 2.0370 (−0.6584, 0) −0.43 0.3355
cage5 SSOR (r=2.2, ω=2.2) 1.8715 (0, 2.2948) 2.0280 0.7681
cage5 SAOR (r=2, ω=3) 4.0002 (0, 0.6666) 0.458 0.3748
pivtol SOR (r=ω=1) 4.3002 (−0.3773, 0) −0.33 0.7778
pivtol AOR (r=0.467, ω=0.981) 1.5280 (−0.7566, 0) −0.58 0.6766
pivtol SSOR (r=ω=0.5) 2.1132 (−0.6424, 0) −0.55 0.7306
pivtol SAOR (r=0.1, ω=0.7) 2.6670 (−0.5454, 0) −0.47 0.7235
bfwb782 Jacobi 1.0554 (−0.9731, 0) −0.81305 0.6794
bfwb782 AOR (r=1.2, ω=3) 2.1398 (−0.6352, 0) −0.4 0.3996
bfwb782 SSOR (ω=r=2.2) 2.1411 (0, 1.7528) 1.5550 0.7748
bfwb782 SAOR (r=2, ω=3) 4.0000 (0, 0.6667) 0.45 0.3570

The basic method is divergent (Theorem 3).

Matrix Method ρ(G) Convergence interval ℏopt ρ(G¯_ℏopt)
Si2 SOR (r=w=1.3) 1.0993 (0, 0.8415) 0.8 0.9974
Si2 AOR (r=1.5, w=0.75) 1.0799 (0, 2.7964) 2.7 0.9942
Si2 SSOR (r=w=1.8) 1.24945 (0, 2.0007) 1.9 0.9909
Si2 SAOR (r=1.5, w=0.75) 1.1380 (0, 2.0949) 2 0.9916
Si2 Jacobi 1.3233 (0, 0.3705) 0.7 0.9999
Si2 Richardson 40.3813 (0, 0.0012) 0.0011 0.9999

In Table 2, we consider the convergent classical methods (ρ(G) < 1). It is easy to verify that the numerical results are consistent with Theorem 1, and we observe that, by choosing a suitable convergence-control parameter ℏ, the HAM converges faster than the corresponding classical method.

In Table 3, we consider the divergent classical methods with Re(λi) > 1 (or Re(λi) < 1) for i = 1, 2, …, n. We can see that the numerical results are consistent with Theorem 2. These results show that, by applying the HAM to a divergent iterative scheme, it is possible to construct a convergent homotopy series when the iteration matrix G has the mentioned properties.

In Table 4, we report the results obtained for the symmetric matrix Si2, which has positive diagonal elements. For this example, the classical methods diverge, and there exist i, j such that Re(λi) < 1 and Re(λj) > 1. We observe that the results are consistent with Theorem 3 and Corollary 4. The results show that the HAM converges, although the rate of convergence is slow.

Finally, Tables 3 and 4 show that it is not necessary to choose the parameters r and ω in the convergence interval of the classical methods. In the case of divergence, under the assumptions of Theorems 2 and 3, the application of the HAM can generate convergent homotopy-series vectors for the linear system (1).

4. Conclusion

In this paper, we proposed applying the homotopy analysis method to the classical iterative methods for solving linear systems of equations. The theoretical results show that the HAM can be used to accelerate the convergence of the basic iterative methods. In addition, we showed that, by applying the HAM to a divergent iterative scheme, it is possible to construct a convergent homotopy-series solution when the iteration matrix G of the scheme has particular properties. The numerical experiments confirm the theoretical results and show the efficiency of the new method.

References

R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, USA, 1962.
D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, NY, USA, 1971.
L. A. Hageman and D. M. Young, Applied Iterative Methods, Academic Press, New York, NY, USA, 1981.
Y.-T. Li, C.-X. Li, and S.-L. Wu, "Improvements of preconditioned AOR iterative method for L-matrices," Journal of Computational and Applied Mathematics, vol. 206, no. 2, pp. 656-665, 2007.
H. Wang and Y.-T. Li, "A new preconditioned AOR iterative method for L-matrices," Journal of Computational and Applied Mathematics, vol. 229, no. 1, pp. 47-53, 2009.
L. Wang and Y. Song, "Preconditioned AOR iterative methods for M-matrices," Journal of Computational and Applied Mathematics, vol. 226, no. 1, pp. 114-124, 2009.
J. H. Yun, "A note on preconditioned AOR method for L-matrices," Journal of Computational and Applied Mathematics, vol. 220, no. 1-2, pp. 13-16, 2008.
Y. Zhang and T.-Z. Huang, "Modified iterative methods for nonnegative matrices and M-matrices linear systems," Computers & Mathematics with Applications, vol. 50, no. 10-12, pp. 1587-1602, 2005.
T.-Z. Huang, X.-Z. Wang, and Y.-D. Fu, "Improving Jacobi methods for nonnegative H-matrices linear systems," Applied Mathematics and Computation, vol. 186, no. 2, pp. 1542-1550, 2007.
B. Keramati, "An approach to the solution of linear system of equations by He's homotopy perturbation method," Chaos, Solitons & Fractals, vol. 41, no. 1, pp. 152-156, 2009.
E. Yusufoğlu, "An improvement to homotopy perturbation method for solving system of linear equations," Computers & Mathematics with Applications, vol. 58, no. 11-12, pp. 2231-2235, 2009.
H.-K. Liu, "Application of homotopy perturbation methods for solving systems of linear equations," Applied Mathematics and Computation, vol. 217, no. 12, pp. 5259-5264, 2011.
S. Liao, Beyond Perturbation: Introduction to the Homotopy Analysis Method, Chapman & Hall/CRC Press, Boca Raton, Fla, USA, 2003.
S. J. Liao, The proposed homotopy analysis technique for the solution of nonlinear problems, Ph.D. thesis, Shanghai Jiao Tong University, Shanghai, China, 1992.
S. Liao, "On the homotopy analysis method for nonlinear problems," Applied Mathematics and Computation, vol. 147, no. 2, pp. 499-513, 2004.
J.-H. He, "Homotopy perturbation technique," Computer Methods in Applied Mechanics and Engineering, vol. 178, no. 3-4, pp. 257-262, 1999.
J.-H. He, "A coupling method of a homotopy technique and a perturbation technique for non-linear problems," International Journal of Non-Linear Mechanics, vol. 35, no. 1, pp. 37-43, 2000.
J.-H. He, "Homotopy perturbation method: a new nonlinear analytical technique," Applied Mathematics and Computation, vol. 135, no. 1, pp. 73-79, 2003.
A. Hadjidimos and A. Yeyios, "Symmetric accelerated overrelaxation (SAOR) method," Mathematics and Computers in Simulation, vol. 24, no. 1, pp. 72-76, 1982.
T. A. Davis and Y. Hu, "The University of Florida sparse matrix collection," ACM Transactions on Mathematical Software, vol. 38, no. 1, article 1, 2011.