ON AN ACCELERATED PROCEDURE OF EXTRAPOLATION

A. YEYIOS

This paper presents some theoretical results concerning an extrapolation method, based on a completely consistent linear stationary iterative method of first degree, for the numerical solution of the linear system Au = b. The main purpose of the paper is to find ranges for the extrapolation parameter such that the extrapolation method converges independently of whether the original iterative method is convergent or not.


1. INTRODUCTION.
For the numerical solution of the linear system of equations

    Au = b,    (1.1)

where A is a given nonsingular real n×n matrix, b is a given real vector and u is the solution vector, which is to be determined, various iterative methods can be applied. Among them, we consider a completely consistent linear stationary iterative method of first degree (see e.g. [1])

    u^(m+1) = Gu^(m) + k,   m = 0, 1, 2, ...,    (1.2)

where G is some real matrix, which is called the iteration matrix of the method (1.2), k is some real vector and u^(0) is an arbitrary initial approximation to the solution u of (1.1). Moreover, we have

    k = (I - G)A^(-1)b  and  det(I - G) ≠ 0.    (1.3)

In order to accelerate the rates of convergence of methods like (1.2), various procedures and modifications are used. One of them is the extrapolation method based on (1.2). This is defined by

    u^(m+1) = ω(Gu^(m) + k) + (1 - ω)u^(m),   m = 0, 1, 2, ...,    (1.4)

where ω is a real parameter (ω ≠ 0) called the extrapolation parameter. We note here that the idea of using an extrapolation parameter ω ≠ 0 appeared long ago in the stationary Richardson method [2], based on (1.1), defined by

    u^(m+1) = u^(m) + ω(b - Au^(m)),

which follows from (1.4) as a special case if G = I - A.
For ω = 1, method (1.4) coincides with (1.2). The iteration matrix of method (1.4) is

    G_ω = ωG + (1 - ω)I,    (1.5)

where I is the identity matrix of order n. Thus, (1.4) can be written as

    u^(m+1) = G_ω u^(m) + k',    (1.6)

where k' = ωk. Since

    (I - G_ω)A^(-1)b = ω(I - G)A^(-1)b = ωk = k'

and

    det(I - G_ω) = det(ω(I - G)) = ω^n det(I - G) ≠ 0,

it follows that the extrapolation method is completely consistent with the system (1.1).
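As a concrete illustration (ours, not from the paper), the extrapolated iteration (1.4) can be sketched as follows. The 2×2 system is an assumed example; G = I - A and k = b give the Richardson special case mentioned above.

```python
import numpy as np

def extrapolated_iteration(G, k, u0, omega, m_steps):
    """Run m_steps of u^(m+1) = omega*(G u^(m) + k) + (1 - omega)*u^(m), i.e. method (1.4)."""
    u = u0.astype(float)
    for _ in range(m_steps):
        u = omega * (G @ u + k) + (1.0 - omega) * u
    return u

# Assumed example system Au = b (symmetric positive definite, so Richardson converges
# for suitable omega).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])

# Richardson's method is the special case G = I - A, for which k = (I - G)A^(-1)b = b.
G = np.eye(2) - A
k = b.copy()

u = extrapolated_iteration(G, k, np.zeros(2), omega=0.4, m_steps=200)
print(np.allclose(A @ u, b))  # the fixed point of (1.4) solves Au = b
```

Complete consistency shows up here as the fact that the iterate converges to the solution of (1.1) itself, not to some perturbed vector.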
The problem which now arises is how the parameter ω must be chosen in order to have ρ(G_ω) < ρ(G) with ρ(G_ω) < 1, where ρ(G), ρ(G_ω) are the spectral radii of the matrices G and G_ω respectively. As is known, the problem of finding a theoretical optimum value of ω, say ω_opt, has been solved in some special cases, but not in the general case. It is easy to show (see e.g. [3]) that, if the matrix G has real eigenvalues μ_j such that

    m ≤ μ_j ≤ M < 1,    (1.7)

then ρ(G_ω) < 1 iff 0 < ω < 2/(1 - m), and the spectral radius ρ(G_ω) is minimized if we take ω_opt = 2/(2 - (m + M)). Moreover, we have

    ρ(G_ω_opt) = (M - m)/(2 - (m + M)) < 1.

Therefore, in this case, the optimum extrapolation method is always convergent, although ρ(G) is not necessarily less than one. It must be noted that (1.7) holds if G is the matrix B of the Jacobi method, corresponding to a positive definite matrix A of the original system. Then the optimum JOR method [1] converges. We also note that, if G has real eigenvalues such that 1 < m ≤ μ_j ≤ M, then ρ(G_ω) < 1 iff 2/(1 - M) < ω < 0. Moreover, ω_opt = 2/(2 - (m + M)) and ρ(G_ω_opt) = (M - m)/(m + M - 2) < 1.
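These formulas can be checked numerically. The following sketch (our illustration, with an assumed diagonal test matrix) compares ρ(G) with ρ(G_ω) at ω_opt = 2/(2 - (m + M)) for a G having real eigenvalues m ≤ μ_j ≤ M < 1:

```python
import numpy as np

# Assumed G with real eigenvalues -3, 0.2, 0.5, so m = -3 and M = 0.5 < 1:
# the original method diverges, since rho(G) = 3 > 1.
mu = np.array([-3.0, 0.2, 0.5])
G = np.diag(mu)

m_, M_ = mu.min(), mu.max()
omega_opt = 2.0 / (2.0 - (m_ + M_))                 # omega_opt = 2/(2 - (m + M))

G_w = omega_opt * G + (1.0 - omega_opt) * np.eye(3)  # G_omega from (1.5)
rho_G = max(abs(np.linalg.eigvals(G)))
rho_Gw = max(abs(np.linalg.eigvals(G_w)))

# Predicted optimum spectral radius (M - m)/(2 - (m + M)) < 1.
rho_pred = (M_ - m_) / (2.0 - (m_ + M_))
print(rho_G, rho_Gw, rho_pred)
```

Even though ρ(G) = 3, the optimum extrapolated method has ρ(G_ω_opt) = 3.5/4.5 ≈ 0.778 < 1, illustrating that the extrapolation converges although the original method does not.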
In a recent work [4], a geometrical approach to the general case with ρ(G) < 1 is discussed, where the construction of a circle capturing the spectrum of G is required.
In the next section we study the extrapolation method (1.6) in order to find ranges for ω in which convergence is achieved in the general case.

2. CONVERGENCE THEORY.
First we observe that, if for some norm of G we have ||G|| < 1, then for 0 < ω ≤ 1 we obtain

    ||G_ω|| = ||ωG + (1 - ω)I|| ≤ ω||G|| + 1 - ω < 1,

that is, the extrapolation method converges. We assume now that

    μ_j = ρ_j e^(iθ_j),   j = 1(1)n,    (2.1)

are the eigenvalues of G, where 0 ≤ ρ_m ≤ ρ_j ≤ ρ_M = ρ(G), i = √(-1), |μ_j| = ρ_j, and ρ_m = min_j ρ_j, ρ_M = max_j ρ_j. In the sequel we omit the subscript j in ρ_j, θ_j, since no confusion can be made; that is, from now on ρ, θ are used in place of ρ_j, θ_j respectively. Evidently, the eigenvalues of G_ω are given by

    λ_j = ωμ_j + 1 - ω,   j = 1(1)n.    (2.2)

Therefore, for the spectral radius of G_ω we have

    ρ(G_ω) = max_j |ωμ_j + 1 - ω| ≤ max_j {|ω||μ_j| + |1 - ω|} = |1 - ω| + |ω| max_j |μ_j| = |1 - ω| + |ω|ρ(G).

We examine now the convergence of method (1.6) in relation to that of method (1.2). We discuss four basic cases, the first two of which are rather trivial.
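Relation (2.2) is simply the spectral mapping μ ↦ ωμ + 1 - ω. A quick numerical sanity check (our illustration; the 4×4 matrix is an arbitrary assumed example) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))     # assumed arbitrary real test matrix
omega = 0.7

G_w = omega * G + (1.0 - omega) * np.eye(4)   # G_omega from (1.5)

mu = np.linalg.eigvals(G)
lam = np.linalg.eigvals(G_w)

# Each mapped eigenvalue omega*mu_j + 1 - omega occurs among the eigenvalues of G_omega.
for z in omega * mu + 1.0 - omega:
    assert min(abs(lam - z)) < 1e-8

# The bound rho(G_omega) <= |1 - omega| + |omega| * rho(G) from the text.
assert max(abs(lam)) <= abs(1.0 - omega) + abs(omega) * max(abs(mu)) + 1e-12
```

The check matches eigenvalues as sets rather than in a fixed order, since the eigenvalue routine makes no ordering guarantee.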
Case I: ρ_M = 0. Here all the eigenvalues of G are zero, so that λ_j = 1 - ω and ρ(G_ω) = |1 - ω|. It is obvious that the optimum value of ω, which minimizes ρ(G_ω), is ω_opt = 1, since then ρ(G_ω) = ρ(G) = 0.
Case II: ρ_m = ρ_M = 1. Now we have |μ_j| = 1 for all j. Then |λ_j| < 1 iff 2ω(ω - 1)(1 - cos θ) < 0. If cos θ ≠ 1, that is, if μ_j ≠ 1 for all j, then for 0 < ω < 1 we have ρ(G_ω) < 1. If μ_j = 1 for some j, then method (1.6) does not converge.
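Case II can be illustrated numerically. In this sketch of ours, G is an assumed plane rotation, so both its eigenvalues e^(±iθ) have modulus 1 and the original iteration cannot converge, while any ω ∈ (0, 1) gives ρ(G_ω) < 1:

```python
import numpy as np

theta = 2.0                       # assumed angle with cos(theta) != 1
c, s = np.cos(theta), np.sin(theta)
G = np.array([[c, -s], [s, c]])   # eigenvalues e^{±i*theta}, both of modulus 1

for omega in (0.25, 0.5, 0.75):   # sample values of omega in (0, 1)
    G_w = omega * G + (1.0 - omega) * np.eye(2)
    rho = max(abs(np.linalg.eigvals(G_w)))
    # From Case II: |lambda|^2 = 1 + 2*omega*(omega - 1)*(1 - cos(theta)) < 1.
    predicted = np.sqrt(1.0 + 2.0 * omega * (omega - 1.0) * (1.0 - c))
    assert abs(rho - predicted) < 1e-10
    assert rho < 1.0
```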
In the case where μ_j ≠ 1 for all j, we seek the optimum value of ω ∈ (0, 1) which minimizes ρ(G_ω). Suppose that -1 ≤ x_m ≤ x ≤ x_M < 1, where x = cos θ = Re μ_j. It is apparent that if x_M = -1 (which is valid iff μ_j = -1 for all j), then we have ρ(G_ω) = 0 for ω = 1/2, and the following theorem has been proved.
Remark. It can be proved that g(ρ, ω) > 1 for ρ < 1.
The inequality (2.10) holds because of the relationships 0 < 1 - x ≤ 1 - x_m and 0 < 1 - ρ²(G) ≤ 1 - ρ².

Corollary 2.5. If the iteration matrix G of the method (1.2) has eigenvalues μ_j such that Re μ_j ≥ 0 for all j and ρ(G) < 1, then ρ(G_ω) < 1 for all ω with 0 < ω < 2/(1 + ρ²(G)).
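Corollary 2.5 can be checked directly. In the following sketch (our illustration), G is an assumed diagonal matrix whose eigenvalues have nonnegative real parts and ρ(G) < 1, and ρ(G_ω) is sampled over the interval (0, 2/(1 + ρ²(G))):

```python
import numpy as np

# Assumed eigenvalues with Re mu_j >= 0 and rho(G) < 1; the pair ±0.9i is the
# worst case, since a purely imaginary eigenvalue of maximal modulus makes the
# bound of Corollary 2.5 sharp.
mu = np.array([0.9j, -0.9j, 0.5 + 0.3j, 0.5 - 0.3j, 0.0])
rho_G = max(abs(mu))

upper = 2.0 / (1.0 + rho_G**2)          # Corollary 2.5 bound on omega
for omega in np.linspace(0.01, upper - 0.01, 50):
    lam = omega * mu + 1.0 - omega      # eigenvalues of G_omega, by (2.2)
    assert max(abs(lam)) < 1.0          # rho(G_omega) < 1 throughout the range
```

Note that here upper = 2/1.81 > 1, so the corollary yields a convergence interval strictly larger than (0, 1).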
Case IV: 0 ≤ ρ_m ≤ ρ ≤ ρ_M and ρ_m < 1 < ρ_M = ρ(G). It is also assumed, in this case, that μ_j ≠ 1, j = 1(1)n. Consider now only those eigenvalues μ_j of G with 1 < ρ ≤ ρ_M = ρ(G). Then |λ_j| < 1 iff

    ω²(ρ² + 1 - 2ρ cos θ) + 2ω(ρ cos θ - 1) < 0.    (2.11)

We distinguish two cases according to whether ω is less than or greater than zero.

1. Let ω < 0. Since ρ² + 1 - 2ρ cos θ ≥ (1 - ρ)² > 0, by virtue of (2.11) we have ω > 2(1 - ρ cos θ)/(ρ² + 1 - 2ρ cos θ). In order that negative values of ω exist, we must have 1 - ρ cos θ < 0; namely, the real parts of the eigenvalues which we consider must be > 1. Since, according to Cases I, II, and III, for the other eigenvalues of G with 0 ≤ ρ_m ≤ ρ < 1 there is no convergence for ω < 0, we conclude that ω cannot take negative values.
2. Now let ω > 0. Then (2.11) gives ω < 2(1 - ρ cos θ)/(ρ² + 1 - 2ρ cos θ). In order that positive values of ω exist, we must have ρ cos θ < 1, or Re μ_j < 1 for those eigenvalues μ_j with ρ > 1. Considering now all the eigenvalues of G, and taking into account the results of Cases I, II, and III, together with the observation that 2(1 - ρ cos θ)/(ρ² + 1 - 2ρ cos θ) ≤ 1 for ρ ≥ 1, the theorem below follows.
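The positive-ω bound of Case IV can be illustrated for a single assumed eigenvalue with modulus ρ > 1 but real part < 1 (our sketch; the values of ρ and θ are arbitrary):

```python
import numpy as np

# Assumed eigenvalue of G with rho = 1.5 > 1 but Re mu = rho*cos(theta) ≈ -0.62 < 1.
rho, theta = 1.5, 2.0
mu = rho * np.exp(1j * theta)

x = rho * np.cos(theta)
upper = 2.0 * (1.0 - x) / (rho**2 + 1.0 - 2.0 * x)   # positive-omega bound from (2.11)
assert upper <= 1.0           # the observation used in Case IV: bound <= 1 when rho >= 1

for omega in np.linspace(0.01, upper - 0.01, 50):
    lam = omega * mu + 1.0 - omega
    assert abs(lam) < 1.0     # inside the bound, |lambda_j| < 1

# Just beyond the bound, the modulus exceeds 1 again.
assert abs((upper + 0.01) * mu + 1.0 - (upper + 0.01)) > 1.0
```

This shows the bound is sharp for each individual eigenvalue; for the whole matrix one takes the most restrictive bound over all eigenvalues, as in the theorem below.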
The following Theorem is an immediate result of Case IV.
Theorem 2.8. If G has eigenvalues μ_j, j = 1(1)n, with μ_j ≠ 1, then the following statements are valid:
(i) If Re μ_j < 1 for all j, then ρ(G_ω) < 1 for all ω with 0 < ω < min_j 2(1 - Re μ_j)/(ρ_j² + 1 - 2 Re μ_j).
(ii) If Re μ_j > 1 for all j, then ρ(G_ω) < 1 for all ω with max_j 2(1 - Re μ_j)/(ρ_j² + 1 - 2 Re μ_j) < ω < 0.
Theorem 2.10. If all the eigenvalues of B are real with moduli > 1, then the JOR method does not converge.
As an application of Theorem 2.12, we consider the following examples.

CONCLUDING REMARKS
From the previous convergence results of Section 2, it is clear that, in order for it to be possible to find ranges for ω such that ρ(G_ω) < 1, either Re μ_j < 1 or Re μ_j > 1 must hold for all the eigenvalues μ_j, j = 1(1)n, of G.
For practical purposes, and in a case which is not a special one, the choice of ω is made computationally, since the range for ω for which ρ(G_ω) < 1 is known.
Finally, we note that the Accelerated Overrelaxation (AOR) method studied in [6], which in turn was an extension of the corresponding one introduced by Hadjidimos [7], is an extrapolation of an obvious extension of the well-known Successive Overrelaxation (SOR) method (see e.g. [8], [1]). Also, in a paper by Niethammer [9], an extrapolation of the SOR method is studied. Thus, all the theory developed in this paper could be applied to the extrapolation method based on SOR in order to obtain better rates of convergence.
