Efficient Iterative Methods with and without Memory Possessing High Efficiency Indices



Preliminaries
The main goal and motivation in constructing iterative methods for solving nonlinear equations is to obtain as high an order of convergence as possible with minimal computational cost (see, e.g., [1-3]). Hence, many researchers (see, e.g., [4, 5]) have paid much attention to constructing optimal multipoint methods without memory based on the unproved conjecture of Kung and Traub [6], which indicates that any multipoint iterative method without memory using n + 1 functional evaluations can reach at most the optimal order 2^n.
Let α be a simple real zero of a real function f : D ⊆ R → R and let x_0 be an initial approximation to α. In many practical situations, it is preferable to avoid calculations of the derivatives of f. This makes the construction of iterative derivative-free methods important [7].
This paper follows two main goals: first, developing some optimal three-step derivative-free families of methods without memory; and second, extending the proposed families to methods with memory in a way that reaches convergence R-orders 6 and 12 without any new functional evaluations.
The main idea in methods with memory is based on the use of suitable two-valued functions and the variation of a free parameter in each iterative step. This parameter is calculated using information from the current and previous iterations, so the developed methods may be regarded as methods with memory following Traub's classification [8]. A supplemental inspiration for studying methods with memory arises from the surprising fact that such classes of iterative methods have been considered in the literature very rarely despite their high efficiency indices.
The paper is organized as follows. First, two families of optimal methods with orders four and eight, consuming three and four function evaluations per iteration, respectively, are proposed in Section 2. Then, in Section 3, we state methods with memory of very high computational efficiency. The increase of convergence speed is achieved without additional function evaluations, which is the main advantage of the methods with memory. Section 4 is assigned to numerical results connected to the order of the methods with and without memory. Concluding remarks are given in Section 5.

Construction of New Families
This section deals with constructing new multipoint methods for solving nonlinear equations. The discussions are divided into two subsections. Let the scalar function f : D ⊂ R → R satisfy f(α) = 0 ≠ f'(α); that is, α is a simple zero of f(x) = 0.

New Two-Step Methods without Memory

In this subsection, we start with the two-step scheme: Note that we omit the iteration index k for the sake of simplicity only. The order of convergence of scheme (1) is four, but in this form it offers no new contribution, since it still requires derivative evaluations. We substitute the derivatives in the first and second steps by suitable approximations that use only the available data; thus, we introduce an approximation as follows.
Using the points x and w = x + βf(x) (β a nonzero real constant), we can apply Lagrange's interpolatory polynomial L_1(t) for approximating f'(x): Fixing f'(x) ≈ L'_1(x) and setting t = x, we have Also, Lagrange's interpolatory polynomial at the points x, w, and y for approximating f'(y) can be given as follows: We obtain Finally, we set the above approximations in the denominators of (1), and so our derivative-free two-step iterative method is derived in what follows: Now, to check the convergence order of (6), we avoid retyping the widely practiced approach in the literature and put forward the following self-explanatory Mathematica code: Considering Out[b] of the above Mathematica program, we observe that the order of convergence of the family (6) is four, and so we can state a convergence theorem in what follows.
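The displayed formulas of (6) are not reproduced above, so the following sketch uses a representative Steffensen-type optimal fourth-order derivative-free two-step scheme of the same flavor (derivatives replaced by divided differences at x, w = x + βf(x), and y); it is an illustration of the construction, not the paper's exact family (6), and the function names are chosen for this example only.

```python
def divdiff(f, a, b):
    """First-order divided difference f[a, b] = (f(a) - f(b)) / (a - b)."""
    return (f(a) - f(b)) / (a - b)

def two_step_df(f, x, beta=0.01):
    """One iteration of a representative derivative-free two-step
    fourth-order scheme: f'(x) is replaced by f[x, w] and f'(y) by
    the combination f[x, y] + f[y, w] - f[x, w]."""
    w = x + beta * f(x)                  # auxiliary point w = x + beta*f(x)
    y = x - f(x) / divdiff(f, x, w)     # first (Steffensen-like) step
    return y - f(y) / (divdiff(f, x, y) + divdiff(f, y, w) - divdiff(f, x, w))

f = lambda t: t * t - 2.0               # test equation with root sqrt(2)
x = 1.5
for _ in range(4):
    if abs(f(x)) < 1e-12:               # stop once converged (w would collapse onto x)
        break
    x = two_step_df(f, x)
print(abs(f(x)))                        # residual at machine-precision level
```

A small β keeps w close to x, which typically improves the quality of the divided-difference approximation to f'(x).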
Theorem 1. If an initial approximation x_0 is sufficiently close to a simple zero α of f, then the convergence order of the two-step approach (6) is equal to four, and its error equation is given by (7).

New Three-Step Family

Now, we construct a three-step uniparametric family of methods based on the two-step method (6). We start from a three-step scheme in which the first two steps are given by (6) and the third step is Newton's method; that is, The derivative f'(z) in the third step of (8) should be substituted by a suitable approximation in order to obtain as high a convergence order as possible and to make the scheme optimal. To provide this approximation, we apply Lagrange's interpolatory polynomial at the points w = x + βf(x), x, y, and z; that is, It is obvious that L_3(z) = f(z). By differentiating (9) and setting t = z, we obtain By substituting f'(z) ≈ L'_3(z) in (8), we have where L'_2(y) is defined by (5) and L'_3(z) is given by (10). In the following theorem, we state suitable conditions for deriving an optimal three-step scheme without memory.

Theorem 2. Let f : D ⊂ R → R be a scalar function which has a simple root α in the open interval D, and let the initial approximation x_0 be sufficiently close to α. Then the three-step iterative method (11) has eighth order and satisfies the following error equation:

Proof. We employ symbolic computation in the computational software package Mathematica. Introducing the abbreviations e = x − α, e_y = y − α, e_z = z − α, e_w = w − α, and c_k = f^(k)(α)/(k! f'(α)), we provide the following Mathematica program in order to obtain the convergence order of (11).
Program Written in Mathematica. Consider the following.
According to Out[c], the method possesses eighth order. As a result, the proof of the theorem is finished.
Error equations (7) and (12) indicate that the orders of methods (6) and (11) are four and eight, respectively.
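These theoretical orders can be verified numerically with the standard computational order of convergence, ρ ≈ ln(e_{k+1}/e_k) / ln(e_k/e_{k−1}), computed from three successive errors. As a minimal self-contained sketch (not tied to the paper's specific schemes), the estimator is checked below on Newton's method, whose order two is known.

```python
import math

def coc(errors):
    """Computational order of convergence from three successive absolute
    errors e_{k-1}, e_k, e_{k+1}:
        rho = ln(e_{k+1}/e_k) / ln(e_k/e_{k-1})."""
    e0, e1, e2 = errors
    return math.log(e2 / e1) / math.log(e1 / e0)

# Demonstration with Newton's method (known order two) on f(x) = x^2 - 2.
root = math.sqrt(2.0)
x, errs = 1.5, []
for _ in range(3):
    x = x - (x * x - 2.0) / (2.0 * x)    # Newton step
    errs.append(abs(x - root))
print(coc(errs[-3:]))                    # close to 2 for a second-order method
```

The same estimator applied to iterates of a fourth- or eighth-order method would yield values near 4 or 8, provided the errors stay above the working precision.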
In the next section, we will modify the proposed methods and introduce new methods with memory.With the use of accelerator parameters, the order of convergence will significantly increase.

Extension to Methods with Memory
This section is concerned with the extraction of efficient methods with memory from (6) and (11), through a careful inspection of their error equations containing the parameter β, which can be approximated in such a way that the local order of convergence increases.
Toward this goal, we set β = β_k as the iteration proceeds, computed by the formula β_k = −1/N'(x_k) for k = 1, 2, . . ., where N'(x_k) is an approximation of f'(α). We therefore use β_k = −1/N'_3(x_k) for (6) and β_k = −1/N'_4(x_k) for (11). Here, we consider Newton's interpolating polynomial of third degree for approximating f'(α) in the two-step method (6) and Newton's interpolating polynomial of fourth degree for approximating f'(α) in the three-step method (11), where N_3(t) is the Newton interpolation polynomial of third degree, set through the four available approximations x_k, x_{k−1}, y_{k−1}, w_{k−1}, and N_4(t) is the Newton interpolation polynomial of fourth degree, set through the five available approximations x_k, x_{k−1}, y_{k−1}, z_{k−1}, w_{k−1}. Note that a divided difference of order k, defined recursively as

f[t_0, t_1, . . . , t_k] = (f[t_1, . . . , t_k] − f[t_0, . . . , t_{k−1}]) / (t_k − t_0),

has been used throughout this paper. Hence, the with-memory developments of (6) and (11) can be presented as follows:

Remark 3. Accelerating methods obtained by a recursively calculated free parameter may also be called self-accelerating methods. The initial value β_0 should be chosen before starting the iterative process, for example, using one of the ways proposed in [8].
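The accelerator therefore requires the derivative of a Newton interpolating polynomial at the newest node. A minimal sketch of that computation is given below: a divided-difference table yields the Newton coefficients, and N'(t) follows from the product rule. The node values are illustrative stand-ins for the stored approximations x_k, w_{k−1}, y_{k−1}, x_{k−1}.

```python
def divided_differences(ts, fs):
    """Divided-difference table computed in place; returns the Newton-form
    coefficients f[t0], f[t0,t1], ..., f[t0,...,tn]."""
    n = len(ts)
    coef = list(fs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (ts[i] - ts[i - j])
    return coef

def newton_poly_derivative(ts, fs, t):
    """Derivative N'(t) of the Newton interpolating polynomial through
    the nodes ts, obtained by the product rule on each Newton term."""
    coef = divided_differences(ts, fs)
    deriv = 0.0
    for k in range(1, len(ts)):
        # derivative of coef[k] * prod_{i<k} (t - ts[i])
        for j in range(k):
            p = coef[k]
            for i in range(k):
                if i != j:
                    p *= t - ts[i]
            deriv += p
    return deriv

# Sanity check: for a cubic f, N_3 reproduces f exactly, so N_3'(t) = f'(t).
f = lambda t: t**3 - 2.0 * t + 1.0
fprime = lambda t: 3.0 * t**2 - 2.0
nodes = [1.8, 1.3, 1.1, 1.0]             # stand-ins for x_k, w_{k-1}, y_{k-1}, x_{k-1}
d = newton_poly_derivative(nodes, [f(t) for t in nodes], nodes[0])
print(d, fprime(nodes[0]))               # the two values agree up to rounding
beta_next = -1.0 / d                     # self-accelerating update beta_k = -1/N_3'(x_k)
```
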
Here, we attempt to prove that the methods with memory (19) and (20) have convergence orders six and twelve, respectively, provided that we use the accelerator β_k as in (14) and (15). For ease of the continuing analysis, we introduce the following convenient notation: if the sequence {x_k} converges to the zero α of f with the order r, then we write e_{k+1} ∼ e_k^r, where e_k = x_k − α.
The following lemma will play a crucial role in improving the convergence order of the methods with memory to be proposed in this paper.
In order to obtain the R-order of convergence [9] of the method with memory (19), we establish the following theorem.
Theorem 5. If an initial approximation x_0 is sufficiently close to the zero α of f(x) and the parameter β_k in the iterative scheme (19) is recursively calculated by the forms given in (14), then the R-order of convergence of (19) is at least six.
Proof. Let {x_k} be the sequence of approximations generated by the iterative method with memory (19). If this sequence converges to the zero α of f with the order r, then we write e_{k+1} ∼ e_k^r, and thus e_{k+1} ∼ (e_{k−1}^r)^r = e_{k−1}^{r^2}. Moreover, assume that the iterative sequences {y_k} and {w_k} have the orders r_1 and r_2, respectively; then (22) gives the corresponding error relations. Using Lemma 4 and (27), we deduce the analogous relations in terms of e_{k−1}. Matching the powers of e_{k−1} on the right-hand sides of (24)-(29), (25)-(30), and (23)-(31), one obtains a system of equations whose nontrivial solution is r_1 = 2, r_2 = 3, and r = 6. This completes the proof.
Proof. The proof of this lemma is similar to that of Lemma 4 and is hence omitted.
Similarly, for the three-step method with memory (20), we have the following theorem.

Theorem 7. If an initial approximation x_0 is sufficiently close to the zero α of f(x) and the parameter β_k in the iterative scheme (20) is recursively calculated by the forms given in (15), then the R-order of convergence of (20) is at least twelve.
Remark 8. The advantage of the proposed methods lies in their higher computational efficiency indices. We emphasize that the increase of the R-order of convergence has been obtained without any additional function evaluations, which points to very high computational efficiency. Indeed, the efficiency index 12^(1/4) ≈ 1.861 of the proposed three-step twelfth-order method with memory is higher than the efficiency indices 6^(1/3) ≈ 1.817 of (19), 8^(1/4) ≈ 1.682 of the optimal three-point method (11), and 4^(1/3) ≈ 1.587 of (6).
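The figures in Remark 8 follow from the classical efficiency index E = p^(1/θ), where p is the convergence order and θ the number of function evaluations per iteration; the short check below recomputes all four values.

```python
# Efficiency index E = p**(1/theta): p = convergence order,
# theta = function evaluations per iteration.
methods = {
    "(6)  order 4, 3 evals":            (4, 3),
    "(11) order 8, 4 evals":            (8, 4),
    "(19) order 6, 3 evals, memory":    (6, 3),
    "(20) order 12, 4 evals, memory":   (12, 4),
}
for name, (p, theta) in methods.items():
    print(f"{name}: {p ** (1 / theta):.3f}")
# Printed indices: 1.587, 1.682, 1.817, 1.861 in the order listed above.
```
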
Remark 9. We observe that the methods (19) and (20) with memory are considerably accelerated (up to 50%) in contrast to the corresponding methods (6) and (11) without memory.

Numerical Experiments
In this section, we test our proposed methods and compare their results with some other methods of the same order of convergence. The results are reported using the programming package Mathematica 8 in a multiple-precision arithmetic environment. We have considered 1000-digit floating-point arithmetic so as to minimize round-off errors as much as possible.
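The same multiple-precision setting can be reproduced outside Mathematica; as a minimal sketch (using Newton's method on x^2 − 2 as a stand-in for the paper's schemes), Python's standard decimal module delivers 1000-digit arithmetic:

```python
from decimal import Decimal, getcontext

getcontext().prec = 1000                 # 1000 significant digits, as in the experiments

def newton_sqrt2(iterations):
    """Newton's method for f(x) = x^2 - 2 in 1000-digit arithmetic."""
    x = Decimal("1.5")
    for _ in range(iterations):
        x = (x + Decimal(2) / x) / 2     # x - f(x)/f'(x), simplified for this f
    return x

x = newton_sqrt2(12)
print(abs(x * x - 2))                    # residual hundreds of digits below double precision
```

Since Newton's method doubles the number of correct digits per step, twelve iterations saturate the 1000-digit working precision; high-order methods such as (11) or (20) would reach it in far fewer steps.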
It is assumed that the initial estimate x_0 is chosen before starting the iterative process and that the initial parameter β_0 is given suitably.
Several iterative methods of optimal orders four and eight have been chosen for comparison with our proposed methods, as listed next.
In Tables 1, 2, and 3, our two-step proposed classes (6) and (19) have been compared with the optimal two-step methods KT4, ZLH4, and LT4. We observe that all these methods behave very well in practice and confirm their theoretical results.
Also, Tables 4, 5, and 6 present numerical results for our three-step classes (11) and (20) and the methods KT8, ZLH8, and LT8. It is likewise clear that all these methods behave very well in practice and confirm the relevant theory.
We remark on the importance of the choice of initial guesses. If they are chosen sufficiently close to the sought roots, then the expected (theoretical) convergence speed will be reached in practice; otherwise, all iterative root-finding methods show slower convergence, especially at the beginning of the iterative process. Hence, special attention should be paid to finding good initial approximations. We note that efficient ways for determining initial approximations of great accuracy were discussed thoroughly in the works [12-14].

Conclusions
In this paper, we have constructed two families of iterative methods without memory which are optimal in the sense of the Kung-Traub conjecture. Our proposed methods do not require any derivative evaluations.
In addition, they contain an accelerator parameter which raises the convergence order without any new functional evaluations. In other words, the efficiency index of the three-step method with memory reaches 12^(1/4) ≈ 1.861.
We finalize this work by suggesting some points for future research: first, developing the proposed methods for some matrix functions, such as the ones in [15, 16]; second