A Family of Fourteenth-Order Convergent Iterative Methods for Solving Nonlinear Equations

We present a family of fourteenth-order convergent iterative methods for solving nonlinear equations. The family is built around a specific step which, when combined with any two-step iterative method of convergence order n, raises the convergence order to n + 10. This new class requires four evaluations of the function and one evaluation of the first derivative per iteration. Therefore, the efficiency index of this family is 14^{1/5} = 1.695218203. Several numerical examples are given to show that the new methods of this family are comparable with the existing methods.


Introduction
Let us consider the equation f(x) = 0, where f is a real-valued univariate nonlinear function.
In the recent past, many higher-order convergent iterative methods have been developed. In recent years, locating zeros of such functions has been addressed mainly in the context of methods with higher-order convergence and low computational effort. Thus, methods are now usually described in terms of their computational order of convergence, efficiency index, and computational efficiency index. The conjecture of Kung and Traub [1] has also remained under consideration in developing computationally efficient classes of iterative methods for solving nonlinear equations.
Definition 1 (computational order of convergence). Let x* be a root of the function f(x) and suppose that x_{n+1}, x_n, and x_{n-1} are three consecutive iterates close to the root x*. Then the computational order of convergence ρ can be approximated using the following formula:

ρ ≈ ln |(x_{n+1} − x*) / (x_n − x*)| / ln |(x_n − x*) / (x_{n−1} − x*)|.

Definition 2 (efficiency index). The efficiency index of an iterative method is given by Efficiency Index = p^{1/d}, where p is the order of convergence and d is the computational cost, i.e., the number of function (and derivative) evaluations per iteration.
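Definitions 1 and 2 can be checked numerically. The sketch below (plain double-precision Python; Newton's method on f(x) = x² − 2 is our own illustrative example, not one from the paper) estimates the computational order of convergence from three consecutive iterates and evaluates the efficiency index p^{1/d}:

```python
import math

def computational_order(x_prev, x_curr, x_next, root):
    # Definition 1:
    # rho ~ ln|(x_{n+1}-x*)/(x_n-x*)| / ln|(x_n-x*)/(x_{n-1}-x*)|
    num = math.log(abs((x_next - root) / (x_curr - root)))
    den = math.log(abs((x_curr - root) / (x_prev - root)))
    return num / den

def efficiency_index(order, cost):
    # Definition 2: EI = p**(1/d)
    return order ** (1.0 / cost)

# Newton iterates for f(x) = x**2 - 2 (quadratic convergence expected)
f, df = (lambda x: x * x - 2), (lambda x: 2 * x)
xs = [1.5]
for _ in range(3):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
root = math.sqrt(2)

print(computational_order(xs[1], xs[2], xs[3], root))  # close to 2
print(efficiency_index(14, 5))                         # ~1.695218203
```

The estimate is reliable only while the iterates are close to the root but not yet at rounding level, which is why three early iterates are used.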
Conjecture 3 (optimal order of convergence). An optimal iterative method without memory based on n evaluations per iteration could achieve an optimal convergence order of 2^{n−1}.
In 1973, King [2] developed a one-parameter family of fourth-order convergent iterative methods for nonlinear equations, given by

y_n = x_n − f(x_n) / f'(x_n),
x_{n+1} = y_n − [f(y_n) / f'(x_n)] · [f(x_n) + β f(y_n)] / [f(x_n) + (β − 2) f(y_n)],

where β ∈ R is a constant. It is a two-step family which at the second step adds only one more function evaluation, and the order of convergence increases from two to four, with efficiency index 4^{1/3} = 1.587401052. Ostrowski's method [3, 4] is the member of this family for β = 0, given as

x_{n+1} = y_n − [f(y_n) / f'(x_n)] · f(x_n) / [f(x_n) − 2 f(y_n)].

In 2009, Bi et al. [5] presented a family of three-step eighth-order convergent iterative methods for solving nonlinear equations. This scheme is based on King's fourth-order convergent iterative methods and the family of sixth-order iterative methods developed by Chun and Ham [6]. The first two steps are King's fourth-order method, and the third step is built with a real-valued weight function of the ratio f(y_n)/f(x_n), similar to that introduced by Chun and Ham [6]. In the third step, only one additional function evaluation increases the order of convergence from four to eight. The efficiency index of this family is 8^{1/4} = 1.681792831, which corresponds to the optimal order of convergence with four function evaluations.
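A minimal double-precision sketch of King's two-step family as written above; the test function f(x) = x³ − 3x + 1 and the starting point are our own choices, not examples taken from the paper:

```python
def king_step(f, df, x, beta):
    # One step of King's fourth-order family [2]:
    #   y = x - f(x)/f'(x)
    #   x_new = y - f(y)/f'(x) * (f(x) + beta*f(y)) / (f(x) + (beta - 2)*f(y))
    fx = f(x)
    y = x - fx / df(x)
    fy = f(y)
    return y - (fy / df(x)) * (fx + beta * fy) / (fx + (beta - 2.0) * fy)

# beta = 0 recovers Ostrowski's method; illustrative test problem (ours):
f = lambda x: x**3 - 3 * x + 1
df = lambda x: 3 * x**2 - 3
x = 0.5
for _ in range(3):
    x = king_step(f, df, x, beta=0.0)
print(abs(f(x)))  # residual at rounding level
```

Note that each step costs two function evaluations and one derivative evaluation, giving the quoted efficiency index 4^{1/3}.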
In 2011, Sargolzaei and Soleymani [7] presented new fourteenth-order convergent iterative methods to approximate the simple roots of nonlinear equations. Their three-step eighth-order construction can be viewed as a variant of Newton's method that uses Hermite interpolation to reduce the number of function evaluations. The method requires five evaluations per iteration, so its efficiency index is 14^{1/5} = 1.695218203.

Also in 2011, Soleymani and Sharifi [8] presented an efficient class of iterative methods. This class consists of optimal eighth-order convergent iterative methods with an additional fourth step; the first three steps are the eighth-order convergent iterative method of [9]. The method requires five evaluations per iteration, so the efficiency index of this class of methods is 15^{1/5} = 1.718771928. This index is just a little lower than that of optimal sixteenth-order methods, 16^{1/5} = 1.741101127.
In this paper, by using the divided difference decomposition of the derivative involved in the third step, we have developed a specific step which, when combined with any two-step iterative method of convergence order n, raises the convergence order to n + 10. Specifically, we have taken King's fourth-order convergent iterative methods which, when combined with our specific third step, raise the convergence order of the new family of methods to fourteen, with efficiency index 14^{1/5} = 1.695218203.

Development of the New Class
In this section, we first construct a general class of three-step fourteenth-order convergent iterative methods by combining a new derivative-free step with a general fourth-order convergent iterative method. The suggested third step replaces the derivative f'(z_n) by a divided-difference approximation built from already-computed function values; this approximation of f'(z_n) is taken from [8]. The general three-step fourteenth-order convergent iterative method (16) then takes z_n = φ4(x_n), where φ4(x_n) is any two-step fourth-order convergent iterative method with first step y_n = x_n − f(x_n)/f'(x_n).
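The exact form of the suggested step is given in the paper's display equations. As a hedged illustration of the underlying idea only, the sketch below approximates a derivative by the derivative of the quadratic interpolant through three available points, f'(z) ≈ f[z, y] + f[z, x] − f[y, x]. This particular formula is a common divided-difference estimate and is our stand-in, not necessarily the exact approximation used in (16):

```python
import math

def divided_diff(f, a, b):
    # First-order divided difference f[a, b] = (f(a) - f(b)) / (a - b)
    return (f(a) - f(b)) / (a - b)

def approx_fprime(f, z, y, x):
    # Derivative at z of the quadratic interpolant through x, y, z:
    #   f'(z) ~ f[z, y] + f[z, x] - f[y, x]
    # (a common stand-in; the paper's exact formula is in its displays)
    return divided_diff(f, z, y) + divided_diff(f, z, x) - divided_diff(f, y, x)

# Three nearby "iterates" (values chosen purely for illustration)
x, y, z = 1.0, 1.01, 1.02
err = abs(approx_fprime(math.sin, z, y, x) - math.cos(z))
print(err)  # small: no derivative evaluation was needed at z
```

The estimate is exact for quadratics, which is why reusing already-computed function values can raise the order without any extra derivative evaluation.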
The order of convergence of the iterative method (16), established with the aid of Maple 7.0, is given by Theorem 4.

Proof. Let z_n = φ4(x_n) be produced by a fourth-order convergent method, let x* be the zero of f, and write x_n = x* + e_n, where e_n is the error. By Taylor's expansion, we expand f(x_n) and f'(x_n) about x*. For the calculation of f'(y_n), we use the divided-difference approximation of f'(y_n) given by (21), which is then simplified. Using (23), and hence (25), we obtain (26). Similarly, for the calculation of f'(z_n), we use a divided-difference approximation, from which (28) follows. Finally, combining (26) and (28), we obtain the error equation. The error equation shows that the order of convergence of the new family of iterative methods is fourteen.
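The paper's order analysis was carried out in Maple 7.0. The same Taylor-expansion technique can be sketched with SymPy (our choice of CAS); for brevity, the sketch verifies the classical error equation of Newton's method, e_{n+1} = c₂e² + 2(c₃ − c₂²)e³ + O(e⁴), rather than the fourteenth-order family itself:

```python
import sympy as sp

e, c2, c3 = sp.symbols('e c2 c3')
# Model f around its simple zero x*: f(x* + e) = e + c2*e**2 + c3*e**3 + ...
fe = e + c2 * e**2 + c3 * e**3
fpe = sp.diff(fe, e)

# Error after one Newton step x - f/f', expanded to O(e**4)
e_new = sp.series(e - fe / fpe, e, 0, 4).removeO()
print(sp.expand(e_new))  # leading term c2*e**2 -> quadratic convergence
```

The proof of Theorem 4 applies the same mechanics: substitute the Taylor models of f at x_n, y_n, and z_n into the scheme and read the convergence order off the leading power of e_n in the error equation.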

The Iterative Methods
In this section we give two special cases of formula (16) as follows.
The methods (32) and (33) achieve fourteenth-order convergence. Per iteration, the presented methods require four function evaluations and one evaluation of the first derivative. We note that the new class of methods (16) has an efficiency index of 14^{1/5} = 1.695218203, which is better than 2^{1/2} = 1.414213562 of Newton's method, 4^{1/3} = 1.587401052 of King's method [2], and 8^{1/4} = 1.681792831 of the three-step iterative method with eighth-order convergence [5]. However, the fourteenth-order convergent iterative method of Sargolzaei and Soleymani [7] has the same efficiency index.
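The efficiency indices quoted above follow directly from Definition 2, EI = p^{1/d}:

```python
# Definition 2: EI = p**(1/d), p = order, d = evaluations per iteration
indices = {
    "Newton (p=2, d=2)":        2 ** (1 / 2),
    "King [2] (p=4, d=3)":      4 ** (1 / 3),
    "Bi et al. [5] (p=8, d=4)": 8 ** (1 / 4),
    "new family (p=14, d=5)":   14 ** (1 / 5),
}
for name, ei in indices.items():
    print(f"{name}: {ei:.9f}")
```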

Numerical Examples
In this section, we consider some numerical examples to demonstrate the performance of the newly developed iterative methods. All computations for the above-mentioned methods are performed in Maple 7 with 500-digit precision, and ε = 10^{−17} is taken as the tolerance. The stopping criterion used for estimating the zeros is |f(x_n)| < ε; thus, for convergence, the functional value at the approximate root must be less than 10^{−17}. Here x_0 represents the initial guess and x* the exact zero of the nonlinear function f(x).
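A minimal sketch of the stopping rule |f(x_n)| < 10^{−17}: since double precision cannot resolve residuals that small, the sketch uses Python's standard-library decimal module at 60 digits (the paper uses Maple 7 at 500 digits), with Newton's method on f(x) = x² − 2 as a stand-in example of our own:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # work far beyond the 1e-17 tolerance

def newton_until(f, df, x0, tol=Decimal('1e-17'), max_iter=50):
    # Stop as soon as |f(x_n)| < tol, the criterion used in the text
    x = Decimal(x0)
    for n in range(1, max_iter + 1):
        x = x - f(x) / df(x)
        if abs(f(x)) < tol:
            return x, n
    return x, max_iter

f = lambda x: x * x - 2   # stand-in test function (ours)
df = lambda x: 2 * x
approx_root, n_iter = newton_until(f, df, '1.5')
print(n_iter, abs(f(approx_root)) < Decimal('1e-17'))  # 4 True
```

The same residual-based criterion applies unchanged to the higher-order methods; only the iteration step differs.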
The numerical examples used for comparison are given in Table 1, most of which are taken from [5,7,8].
In Table 2, we compare the classical Newton's method (NM), the eighth-order convergent iterative method (BI) of Bi et al. [5], the fourteenth-order convergent iterative method (SP) of Sargolzaei and Soleymani [7], the fifteenth-order convergent iterative method (SS) of Soleymani and Sharifi [8], and the newly developed fourteenth-order convergent iterative methods (M1) and (M2). The methods are compared in terms of the number of iterations (n), the total number of function evaluations (NFE), the computational order of convergence (COC), and the absolute value of the function |f(x_n)|.

Conclusion
The new methods are tested on almost all types of nonlinear functions, polynomials and transcendental functions alike. Table 2 shows that the newly developed fourteenth-order methods are comparable with the existing methods of this domain in terms of the number of iterations and the number of function evaluations per iteration. In many examples, the newly developed methods perform better than the existing methods. For example, the method (BI) diverges for the functions f_1 and f_14, and the method (SS) diverges for the function f_1, but the new methods (M1) and (M2) converge in these cases. Similarly, for the function f_2 the method (BI) approximates the root in three iterations, while the new methods need two iterations with ten evaluations. For the function f_3, the method (BI) gives the approximate root in nine iterations with thirty-six evaluations, while the methods (M1) and (M2) give the approximate root in three and two iterations, respectively. For the function f_14, the method (SS) approximates the root in six iterations, while the method (M1) needs four iterations and the method (M2) three. It may be noted that, for almost all functions, the new methods (M1) and (M2) are correct to more significant decimal places than the method (BI) and, in some cases, than the method (SP). The performance of the fourteenth-order method (SP) [7] is the same as that of the new methods of the family. It is also clear from the tables that, whether the initial guess is near to or far from the exact root, the performance of the new methods remains stable.

Theorem 4. Let x* ∈ I be a simple zero of a sufficiently differentiable function f : I ⊆ R → R on an open interval I. If x_0 is sufficiently close to x*, then the iterative method (16) is fourteenth-order convergent, with an error equation expressed in terms of the divided differences f[z_n, y_n], f[z_n, x_n], and f[y_n, x_n], through which f'(z_n) is approximated.