Two Bi-Accelerator Improved with Memory Schemes for Solving Nonlinear Equations



Introduction
Finding the root of a nonlinear equation occurs frequently in scientific computation. Newton's method is the best-known method for solving nonlinear equations and has quadratic convergence. However, the existence of the derivative in a neighborhood of the required root is compulsory for the convergence of Newton's method, which restricts its applications in practice. To overcome this difficulty, Steffensen replaced the first derivative of the function in Newton's iterate by a forward finite-difference approximation. This method also possesses quadratic convergence and the same efficiency as Newton's method. Kung and Traub were pioneers in constructing optimal general multistep methods without memory. Moreover, they conjectured that any multistep method without memory using n function evaluations may reach a convergence order of at most 2^{n-1} [1]. Thus both Newton's and Steffensen's methods are optimal in the sense of Kung and Traub. But the superiority of Steffensen's method over Newton's method is that it is derivative free, so it can also be applied to nondifferentiable equations. To compare iterative methods theoretically, Ostrowski [2] introduced the idea of the efficiency index, given by p^{1/n}, where p is the order of convergence and n the number of function evaluations per iteration; an iterative method with a higher efficiency index is more efficient. To improve the convergence order as well as the efficiency index without adding any new function evaluations, Traub introduced in his book the method with memory: he changed Steffensen's method slightly as follows (see [3, pp. 185-187]).
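Steffensen's derivative-free variant of Newton's method, mentioned above, can be sketched as follows (a minimal illustration, not code from the paper; the function name and stopping rule are ours):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's method: the forward difference
    (f(x + f(x)) - f(x)) / f(x) replaces f'(x) in Newton's iterate."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx           # f(x + f(x)) - f(x)
        if denom == 0:
            raise ZeroDivisionError("flat difference quotient")
        x = x - fx * fx / denom          # x - f(x)^2 / denom
    return x

root = steffensen(lambda x: x * x - 2.0, 1.5)   # approximates sqrt(2)
```

Like Newton's method it uses two evaluations per step and converges quadratically, but it never touches f'.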
x_0 and γ_0 are given suitably. The parameter γ_k is called a self-accelerator, and method (1) has R-order of convergence 2.414. The possibility of increasing the convergence order further by using more suitable parameters cannot be denied. Over the last few years many researchers have constructed iterative methods without memory which support the Kung-Traub conjecture, such as [4][5][6][7][8][9][10][11][12][13], to name a few. Although the construction of optimal methods without memory is still an active field, over the last year many authors have shifted their attention to developing more efficient methods with memory.
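Traub's with memory modification keeps the Steffensen structure but recycles the divided difference already computed in the current step to update the parameter for the next one; a minimal sketch following [3] (function names and the stopping rule are our own):

```python
def traub_with_memory(f, x0, gamma0=0.01, tol=1e-13, max_iter=50):
    """Steffensen-type iterate with a self-accelerating parameter:
    w = x + gamma*f(x), x_new = x - f(x)/f[x, w], where the
    divided difference f[x, w] also supplies gamma = -1/f[x, w]
    for the next step (no extra function evaluation needed)."""
    x, gamma = x0, gamma0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx
        dd = (f(w) - fx) / (w - x)       # divided difference f[x, w]
        x = x - fx / dd
        gamma = -1.0 / dd                # self-accelerator for next step
    return x

root = traub_with_memory(lambda x: x * x - 2.0, 1.5)
```

Because γ_k tends to −1/f'(α), the asymptotic error constant shrinks and the R-order rises from 2 to 1 + √2 ≈ 2.414 at no extra cost per iteration.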
In the convergence analysis of the new method, we employ the notation used in Traub's book [3]: if {u_k} and {v_k} are null sequences and u_k/v_k → C, where C is a nonzero constant, we write u_k = O(v_k) or u_k ∼ C v_k. We also use the concept of R-order of convergence introduced by Ortega and Rheinboldt [14]. Let {x_k} be a sequence of approximations generated by an iterative method (IM). If this sequence converges to a zero α of the function f with R-order O_R((IM), α) ≥ r, we will write e_{k+1} ∼ D_{k,r} e_k^r, where D_{k,r} tends to the asymptotic error constant D_r of the iterative method (IM) as k → ∞.
The rest of the paper is organized as follows: in Section 2 we describe the existing two- and three-point derivative-free schemes with memory, whose convergence orders are then accelerated from six to seven and from twelve to fourteen, respectively, without any extra evaluation. The proposed methods are obtained by imposing one more suitable iterative parameter, which is calculated using Newton's interpolatory polynomial.
The numerical study is also presented in the next section to confirm the theoretical results.Finally, we give the concluding remarks.

Brief Literature Review and Improving with Memory Schemes
Two-step (double) and three-step (triple) Newton's methods can be written, respectively, as schemes (3) and (4). Their orders of convergence have been increased to four and eight, respectively, but neither improves the efficiency compared to the original Newton's method. One major drawback of the same schemes is that they also involve derivatives. To obtain schemes that are efficient as well as free from derivatives, Lotfi et al. [15] approximated the derivatives f'(x_k), f'(y_k), and f'(z_k) by Lagrange interpolatory polynomials of degrees one, two, and three, respectively, where w_k = x_k + γ f(x_k) and γ ∈ R − {0}. The modified versions of schemes (3) and (4) thus become schemes (7) and (8), respectively. The authors have shown that the without memory methods (7) and (8) preserve the convergence order with a reduced number of function evaluations; their corresponding error expressions are given in terms of c_j = f^(j)(α)/(j! f'(α)). The above two without memory schemes are optimal in the sense of Kung and Traub.
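A representative two-step derivative-free scheme of this kind can be sketched as follows. Since the exact formulas of scheme (7) are not reproduced in this excerpt, the corrector below uses the standard derivative of the degree-two interpolant through x_k, w_k, y_k; treat it as an illustration of the construction, not the authors' exact scheme:

```python
def two_step_derivative_free(f, x0, gamma=0.01, tol=1e-13, max_iter=30):
    """Two-step Steffensen-type method: f'(x_k) is replaced by the
    divided difference f[x_k, w_k], and f'(y_k) by the derivative at
    y_k of the quadratic interpolating f at x_k, w_k, y_k."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx
        fw = f(w)
        f_xw = (fw - fx) / (w - x)
        y = x - fx / f_xw                  # Steffensen-type predictor
        fy = f(y)
        # derivative of the degree-2 interpolant at its node y:
        # P2'(y) = f[y,x] + f[y,w] - f[x,w]
        f_yx = (fy - fx) / (y - x)
        f_yw = (fy - fw) / (y - w)
        x = y - fy / (f_yx + f_yw - f_xw)  # fourth-order corrector
    return x

root = two_step_derivative_free(lambda x: x ** 3 + 4 * x ** 2 - 10, 1.3)
```

Only three evaluations (f(x_k), f(w_k), f(y_k)) are used per step, so fourth order is optimal in the Kung-Traub sense.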
Traub first showed that the convergence of methods without memory can be increased, without adding any evaluation, by using information from the current and previous iterations; such methods are known as methods with memory. To obtain an increased order of convergence, the authors of the same paper [15] replaced γ by γ_k and approximated the ideal value γ_k = −1/f'(α), k = 1, 2, ..., where α is the exact root, by replacing f'(α) with the derivative of Newton's interpolatory polynomial at the newest node, of degree three for method (7) and degree four for method (8). Here a single prime (') denotes the first derivative, and a double prime ('') will later denote the second derivative. The one-parametric versions of methods (7) and (8) can thus be written as (14) and (15). The authors showed that the convergence order of methods (14) and (15) increases from 4 to 6 and from 8 to 12, respectively. The aim of this paper is to find more efficient methods using the same number of evaluations. For this purpose we introduce one more iterative parameter in the above methods, giving the modified with memory methods (16) and (17) together with their error expressions. Since these error equations contain both iterative parameters γ_k and λ_k, we should approximate the parameters in such a way that they increase the convergence order.
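The accelerator γ_k = −1/N'(x_k) requires evaluating the derivative of a Newton interpolatory polynomial at the newest node; a self-contained sketch of that computation via divided differences (our own helper, not the paper's code):

```python
def newton_poly_derivative(xs, fs, t):
    """First derivative at t of the Newton interpolating polynomial
    through the nodes xs with values fs, via divided differences."""
    n = len(xs)
    dd = list(fs)
    coef = [dd[0]]                       # coef[j] = f[x_0, ..., x_j]
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            dd[i] = (dd[i] - dd[i - 1]) / (xs[i] - xs[i - j])
        coef.append(dd[j])
    # evaluate N(t) and N'(t) together by a Horner-like recurrence
    p, dp = coef[-1], 0.0
    for j in range(n - 2, -1, -1):
        dp = dp * (t - xs[j]) + p
        p = p * (t - xs[j]) + coef[j]
    return dp

# e.g. the accelerator would be gamma_k = -1 / newton_poly_derivative(...)
# with xs taken as the most recent iterates x_k, y_{k-1}, w_{k-1}, x_{k-1}
```

With exact data from f = t^3 at nodes 0, 1, 2, 3 the interpolant is t^3 itself, so the routine returns 3t^2 at any t.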
To this end, we approximate γ_k and λ_k for methods (16) and (17) by the forms (20) and (21), which are built from Newton's interpolatory polynomials of degrees three, four, four, and five, respectively. We also denote e_k = x_k − α, where α is the exact root. Before proving the main result, we state the following two lemmas, which can be obtained by using the error expression of Newton's interpolation, in the same manner as in [16].
The theoretical proof of the order of convergence of the proposed methods is given by the following theorem. Theorem 3. If an initial approximation x_0 is sufficiently close to a simple zero α of f(x) and the parameters γ_k and λ_k in the iterative schemes (16) and (17) are recursively calculated by the forms given in (20) and (21), respectively, then the R-orders of convergence of the with memory schemes (16) and (17) are at least seven and fourteen, respectively.
Proof. First, we assume that the R-orders of convergence of the sequences {x_k}, {z_k}, {y_k}, and {w_k} are at least r, r_1, r_2, and r_3, respectively, with the corresponding asymptotic error relations. We now prove the result in two parts: first for method (16) and then for (17).
The absolute errors for the first three iterations are given in Table 1; there, a ± b stands for a × 10^{±b}. Note that a large number of three-step derivative-free methods (with and without memory) are available in the literature, but methods that have been tested on nonsmooth functions are rare, which underlines the significance of this paper.
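The orders reported in a table of successive absolute errors can be cross-checked with the standard computational order of convergence; a minimal sketch:

```python
import math

def computational_order(errors):
    """Estimate the order of convergence from three consecutive
    absolute errors: r ~ ln(e2/e1) / ln(e1/e0)."""
    e0, e1, e2 = errors
    return math.log(e2 / e1) / math.log(e1 / e0)

# for an exactly quadratic error sequence e_k = 10^(-2^k):
order = computational_order([1e-2, 1e-4, 1e-8])   # -> 2.0
```

In practice the iterates must be computed in multiprecision arithmetic, since errors of an order-14 method fall below double precision after one or two steps.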
The effectiveness of the newly proposed derivative-free with memory methods is confirmed by comparing them with the existing with memory family. The numerical results shown in Table 1 are in concordance with the theory developed here. From the theoretical result, we conclude that the order of convergence of the existing with memory family can be raised further by imposing an additional self-accelerating parameter without any extra calculations, and that the computational efficiency of the presented with memory methods is high. The R-orders of convergence are increased from 6 to 7 and from 12 to 14, in accordance with the quality of the accelerating technique proposed in this paper. The self-accelerating parameters clearly play a key role in increasing the order of convergence of an iterative method.

Table 1 :
Comparison of the absolute errors in the first, second, and third iterations.