Computing Simple Roots by an Optimal Sixteenth-Order Class

The problem considered in this paper is to approximate the simple zeros of a function f(x) by iterative processes. An optimal sixteenth-order class is constructed. The class is built by considering any of the optimal three-step derivative-involved methods in the first three steps of a four-step cycle, in which the first derivative of the function at the fourth step is estimated by a combination of already known values. Per iteration, each method of the class reaches the efficiency index 16^{1/5} ≈ 1.741 by carrying out four evaluations of the function and one evaluation of the first derivative. The error equation for one technique of the class is furnished analytically. Some methods of the class are tested against existing high-order methods. The interval Newton method is given as a tool for extracting sufficiently accurate initial approximations to start such high-order methods. The obtained numerical results show that the derived methods are accurate and efficient.


Introduction
Consider that the function f : I ⊆ R → R is a sufficiently differentiable scalar function. We assume that r ∈ I is a simple zero of the nonlinear function, that is, f(r) = 0 and f'(r) ≠ 0. There is a vast literature on approximating simple zeros of such nonlinear functions by iterative processes; see, for example, [1-3]. In 1974, Kung and Traub [4] conjectured that a multipoint iterative method without memory for solving single-variable nonlinear equations, consisting of n + 1 evaluations per iteration, has the maximal convergence rate 2^n and, subsequently, the maximal efficiency index 2^{n/(n+1)}. Taking this concept into consideration, many authors, see, for example, [5-7], have tried to produce optimal multistep methods.
for j = 2, ..., n − 1, where S_j(y) is the inverse Hermite interpolatory polynomial of degree at most j such that S_j(f(x_n)) = x_n, S_j'(f(x_n)) = 1/f'(x_n), and S_j(f(q_λ(x_n))) = q_λ(x_n), for λ = 2, ..., j.

1.2
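Inverse interpolation, the device behind this family, can be sketched numerically: given samples (x_i, f(x_i)) near a simple root, interpolate x as a polynomial in y = f(x) and evaluate at y = 0. The sketch below is a generic illustration of that idea only; it does not reproduce the paper's S_j, which additionally matches a derivative condition, and the function name is ours.

```python
def inverse_interpolation_root(points):
    """Estimate a root from samples (x_i, f(x_i)) by interpolating x as a
    function of y = f(x) (Newton divided differences) and evaluating at y = 0."""
    ys = [fx for _, fx in points]
    coef = [x for x, _ in points]          # divided-difference table, built in place
    n = len(points)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (ys[i] - ys[i - j])
    # evaluate the Newton form of x(y) at y = 0 (Horner-like scheme)
    est = coef[-1]
    for i in range(n - 2, -1, -1):
        est = coef[i] + est * (0.0 - ys[i])
    return est

# Demo: estimate sqrt(2), the root of f(x) = x^2 - 2, from three nearby samples
est = inverse_interpolation_root([(1.3, 1.3**2 - 2), (1.4, 1.4**2 - 2), (1.5, 1.5**2 - 2)])
```

With three samples this is the classical inverse quadratic interpolation step, familiar from Brent's method.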
This technique is defined by x_{n+1} = q_n(f(x_n)), starting from an initial point x_0. Recently, Neta and Petković applied the concept of inverse interpolation again in [8] to approximate the first derivative of the function in the third and fourth steps of a three- and four-step cycle, respectively. They obtained the following optimal eighth-order technique (1.3), with t ∈ R and with c and d as given next:

1.4
They also presented an optimal sixteenth-order technique, given by (1.6), consisting of four evaluations of the function and one evaluation of the first derivative. For further reading, one may refer to [9, 10]. The contents of the paper are summarized in what follows. In the next section, our novel contribution is constructed by considering an optimal eighth-order method in the first three steps of a four-step cycle, in which the derivative in the quotient of the new Newton iteration is estimated such that the order remains 16, namely, optimal according to the Kung-Traub hypothesis. The new class of methods is supported with a detailed proof in that section to verify the construction theoretically.
Then, it will be observed that the computed results listed in Table 2 completely support the theory of convergence and the efficiency analysis discussed in the paper. Section 4 recalls the well-known interval Newton method, implemented in the programming package Mathematica, as a tool for extracting robust initial approximations.

Construction of the New Class

Consider the following four-step four-point iteration, in which the first three steps are any of the optimal three-step three-point derivative-involved methods without memory (an optimal eighth-order method):

2.1
It can be seen that per full cycle of the structure (2.1), there are four evaluations of the function and two evaluations of the first derivative to reach the convergence rate 16. At this point, we should approximate the first derivative of the function at the fourth step such that the convergence order does not decrease. To do this effectively, we have to use all of the known data from the past steps, that is, f(x_n), f'(x_n), f(y_n), f(z_n), and f(w_n). Herein, we take into account the nonlinear fraction (2.2), written without the index n. This nonlinear fraction is inspired, in essence, by the Padé approximant. The approximating function should satisfy the interpolation conditions p(x_n) = f(x_n), p'(x_n) = f'(x_n), p(y_n) = f(y_n), p(z_n) = f(z_n), and p(w_n) = f(w_n). Note that the first derivative of (2.2) takes the form (2.3). Substituting the known data into (2.2) and (2.3) yields the five unknown parameters. It is obvious that b_1 = f(x_n); hence we obtain the following system of four linear equations in four unknowns:

2.4
Journal of Applied Mathematics

Solving (2.4), simplifying (without the index n), and using divided differences yields (2.5), wherein Y = y − x, Z = z − x, and W = w − x. We now have a powerful approximation of the first derivative of the function at the fourth step of (2.1), which doubles the convergence rate of the optimal eighth-order methods. Therefore, we attain the following class, in which there are four evaluations of the function and one evaluation of the first-order derivative, with the first three steps given by any optimal eighth-order method for which f'(x_n) is available:

2.6
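The precise rational form of (2.2) is not reproduced above, so the mechanism can only be sketched. Assuming, for illustration, a cubic-over-linear Padé-type fraction p(t) = (a0 + a1·t + a2·t^2 + a3·t^3)/(1 + b1·t) with five parameters, the five interpolation conditions reduce to a linear system (substituting p(t_i) = f(t_i) into the derivative condition removes the nonlinearity), and p'(w_n) then replaces f'(w_n) in the fourth step. A minimal numerical sketch under this assumption; all names here are ours, not the paper's:

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    xs = [0.0] * n
    for r in range(n - 1, -1, -1):
        xs[r] = (M[r][n] - sum(M[r][c] * xs[c] for c in range(r + 1, n))) / M[r][r]
    return xs

def pade_derivative(f, fp, x, y, z, w):
    """Approximate f'(w) by the derivative of an assumed rational interpolant
    p(t) = (a0 + a1 t + a2 t^2 + a3 t^3) / (1 + b1 t)
    matching f at x, y, z, w and f' at x (five linear conditions)."""
    A, rhs = [], []
    for t in (x, y, z, w):              # p(t) = f(t)  ->  N(t) - f(t)*b1*t = f(t)
        A.append([1.0, t, t * t, t**3, -f(t) * t])
        rhs.append(f(t))
    # p'(x) = f'(x)  ->  N'(x) - b1*(f(x) + f'(x)*x) = f'(x)
    A.append([0.0, 1.0, 2 * x, 3 * x * x, -(f(x) + fp(x) * x)])
    rhs.append(fp(x))
    a0, a1, a2, a3, b1 = solve_linear(A, rhs)
    Np = a1 + 2 * a2 * w + 3 * a3 * w * w
    return (Np - f(w) * b1) / (1.0 + b1 * w)   # p'(w) = (N'(w) - p(w) D'(w)) / D(w)

# Demo: estimate f'(w) for f(t) = exp(t) - 2 from values clustered near its root ln 2
f = lambda t: math.exp(t) - 2.0
fp = lambda t: math.exp(t)
approx = pade_derivative(f, fp, 0.6, 0.68, 0.692, 0.6931)
```

The closer the four points are to the root, the closer p'(w) comes to f'(w), which is what preserves the convergence order in the fourth step.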
Now, using any of the optimal three-point three-step methods, we can obtain a novel sixteenth-order technique which satisfies the Kung-Traub hypothesis as well. Using the optimal eighth-order method given by Wang and Liu in [11] yields the following four-step method:

2.7
Theorem 2.1. Assume that the scalar function f is sufficiently smooth in the real open domain I ⊆ R. Furthermore, let e_n = x_n − r and c_k = f^{(k)}(r)/(k! f'(r)), for k ∈ N, where N is the set of natural numbers. Then, the iterative method (2.7) has the optimal order of convergence 16 and satisfies the error equation (2.8).

Proof. Expanding f(x_n) and f'(x_n) in Taylor series about the root r and applying these expansions in the first step of (2.7), we have (2.9). Note that, to keep the prerequisites at a minimum, we only mention the simplified error equations at the end of the first, second, and third steps, because we already know that the first three steps constitute an eighth-order technique. Taylor series expansion in the second step of (2.7), by applying (2.9), yields

2.10
Using (2.10) and the third step of (2.7) gives us

2.11
Now, expanding f(w_n), we obtain

2.12
Subsequently, in the last step we have

2.13
Finally, using (2.12) and (2.13) in the last step of (2.7) results in the error equation (2.14), with remainder term O(e_n^{17}), which shows that (2.7) is a sixteenth-order method consuming only five evaluations per iteration. This completes the proof.
Remark 2.2. If we choose any of the other optimal eighth-order derivative-involved methods in the first three steps of (2.6), then a novel optimal sixteenth-order technique will be obtained. For instance, using the optimal eighth-order method given by J. R. Sharma and R. Sharma in [13] results in the following four-step optimal method:

2.15
which satisfies the following error equation, with remainder term O(e_n^{17}):

2.16
Remark 2.3. Any method of the developed class carries out five evaluations per full cycle to reach the optimal order 16. Hence, the efficiency index of our class is 16^{1/5} ≈ 1.741, which is greater than that of optimal fourth-order techniques, 4^{1/3} ≈ 1.587, and of optimal eighth-order techniques, 8^{1/4} ≈ 1.682.
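The comparison of efficiency indices in this remark is simple arithmetic and can be reproduced directly; a quick sketch:

```python
def efficiency_index(order, evaluations):
    """Ostrowski's efficiency index: order^(1/evaluations)."""
    return order ** (1.0 / evaluations)

# Optimal methods under the Kung-Traub conjecture: order 2^(d-1) with d evaluations.
for order, evals in [(4, 3), (8, 4), (16, 5)]:
    print(f"order {order:2d}, {evals} evaluations: {efficiency_index(order, evals):.3f}")
```

Each extra step doubles the order at the cost of one more evaluation, so the index keeps increasing toward (but never reaches) 2.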

Computational Experiments
The analytical outcomes given in the last section are fully supported by numerical experiments here. Two methods of the class (2.6), namely (2.7), denoted S-S 16 I, and (2.15), denoted S-S 16 II, are compared with some of the existing high-order methods.
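A standard way to check such convergence claims numerically is the computational order of convergence (COC), ρ ≈ ln(|e_{n+1}|/|e_n|) / ln(|e_n|/|e_{n-1}|), estimated from three consecutive errors. A minimal sketch of the check, using the classical second-order Newton iteration for illustration rather than the sixteenth-order schemes (whose full formulas are not reproduced here):

```python
import math

def coc(e0, e1, e2):
    """Computational order of convergence from three consecutive errors."""
    return math.log(e2 / e1) / math.log(e1 / e0)

def newton(f, fp, x0, steps):
    """Plain Newton iteration, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - f(xs[-1]) / fp(xs[-1]))
    return xs

# Newton's method on f(x) = x^2 - 2 with root sqrt(2); the COC should approach 2.
f, fp, root = lambda x: x * x - 2.0, lambda x: 2.0 * x, math.sqrt(2.0)
xs = newton(f, fp, 1.5, 3)
errs = [abs(x - root) for x in xs]
rho = coc(errs[1], errs[2], errs[3])
```

For a sixteenth-order method the same estimator would approach 16, although multiple-precision arithmetic is needed because the errors shrink below machine precision within one or two iterations.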
Extracting Initial Approximations by the Interval Newton Method

The interval Newton method can be applied to enclose all the simple zeros of a nonlinear function in an interval. Mathematica 8 gives the user a friendly environment for working with lists (sequences); that is to say, the obtained list of enclosures can then be refined using the new sixteenth-order schemes of this paper. This procedure was efficiently coded in [22], and with some changes we provide it in what follows:

intervalnewton[f_, x_, int_Interval, eps_, opts___] :=
  Block[{df, n},
    n = MaxRecursion /. {opts} /. Options[intervalnewton];
    df = D[f, x];
    (intnewt[f, df, x, #, eps, n - 1]) & /@ (List @@ int)];

Options[intervalnewton] = {MaxRecursion -> 2000};

The above piece of code takes the nonlinear function f and the lower and upper bounds of the working interval, and applies the interval form of Newton's iteration with the maximum number of recursions set to 2000, together with the command SetAccuracy[exp, 16]. Note that, in this case, as for Newton's method, the function should have a first-order derivative on the interval. The tolerance can be chosen arbitrarily within machine precision. To illustrate the procedure further, we consider the oscillatory nonlinear function f(x) = e^{sin(log x)} cos(20x) − 2, whose plot is given in Figure 1, on the interval [a, b] = [2., 10.], with the tolerance set to 0.0001. The implementation of this code shows that the number of zeros is 51 and returns the corresponding list of initial approximations. We remark that, by applying the new optimal sixteenth-order methods from the class (2.6) to this list, one may enrich the accuracy of the initial guesses up to the desired tolerance when high-precision computing is needed.
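The operator underlying the Mathematica routine above is N(X) = m − f(m)/F'(X), with m the midpoint of X and F'(X) an interval enclosure of f' over X; intersecting N(X) with X either shrinks the enclosure or proves it contains no root. A minimal Python sketch under simplifying assumptions (the caller supplies the derivative enclosure dF; the helper names are ours):

```python
def interval_newton(f, dF, lo, hi, eps=1e-10, roots=None):
    """Recursively narrow [lo, hi] with the interval Newton operator
    N(X) = m - f(m)/F'(X), keeping only subintervals that may contain a zero.
    dF(lo, hi) must return an enclosure (dlo, dhi) of f' over [lo, hi].
    Note: a root sitting exactly at a bisection point may be reported twice."""
    if roots is None:
        roots = []
    if hi - lo < eps:
        roots.append(0.5 * (lo + hi))
        return roots
    m = 0.5 * (lo + hi)
    dlo, dhi = dF(lo, hi)
    if dlo <= 0.0 <= dhi:
        # derivative enclosure contains 0: interval division undefined, bisect
        interval_newton(f, dF, lo, m, eps, roots)
        interval_newton(f, dF, m, hi, eps, roots)
        return roots
    q = sorted([f(m) / dlo, f(m) / dhi])
    nlo, nhi = m - q[1], m - q[0]
    # intersect N(X) with X; an empty intersection proves there is no root
    nlo, nhi = max(nlo, lo), min(nhi, hi)
    if nlo <= nhi:
        interval_newton(f, dF, nlo, nhi, eps, roots)
    return roots

# Demo: the positive root of x^2 - 2 on [1, 2]; f' = 2x is increasing there,
# so (2*lo, 2*hi) is a valid derivative enclosure.
roots = interval_newton(lambda x: x * x - 2.0,
                        lambda lo, hi: (2.0 * lo, 2.0 * hi),
                        1.0, 2.0)
```

Unlike point iterations, every subinterval discarded here is rigorously proved root-free (up to the quality of the derivative enclosure), which is why the method is a reliable source of starting points for the high-order schemes.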

Conclusions
A class of four-point four-step iterative methods has been developed for solving nonlinear equations. The analytical proof for one method of the class was given to establish the sixteenth-order convergence. The class was attained by approximating the first derivative of the function in the fourth step of a cycle in which the first three steps are any of the optimal eighth-order derivative-involved methods.
Per full cycle, the methods of the class consist of four evaluations of the function and one evaluation of the first derivative, which results in the efficiency index 16^{1/5} ≈ 1.741. The presented class satisfies the still unproved conjecture of Kung and Traub on multipoint iterations without memory.
The accuracy and efficiency of two methods obtained from the class were illustrated by solving a number of numerical examples. The numerical results also attest to the theoretical findings given in the paper and show the fast rate of convergence.

Table 1 :
Test functions and their zeros.

Table 2 :
Results of comparisons for different methods.