A Class of Three-Step Derivative-Free Root Solvers with Optimal Convergence Order

A class of three-step eighth-order root solvers is constructed in this study. Our aim is fulfilled by using an interpolatory rational function in the third step of a three-step cycle. Each method of the class reaches the optimal efficiency index according to the Kung-Traub conjecture concerning multipoint iterative methods without memory. Moreover, the class is free from derivative calculation per full iteration, which is important in engineering problems. One method of the class is established analytically. To test the derived methods from the class, we apply them to a variety of nonlinear scalar equations. Numerical examples suggest that the novel class of derivative-free methods outperforms the existing methods of the same type in the literature.


Introduction
One of the common problems encountered in engineering is the following: given a single-variable function f(x), find the values of x for which f(x) = 0. The solution values of x are known as the roots of the equation f(x) = 0, or the zeros of the function f(x). The roots of such nonlinear equations may be real or complex. In general, an equation may have any number of real roots or no roots at all. There are two general types of methods available to find the roots of algebraic and transcendental equations. First, direct methods, which are not always applicable; and second, iterative methods based on the concept of successive approximations. In this case, the general procedure is to start with one or more initial approximations to the root and obtain a sequence of iterates, which in the limit converges to the true solution [1].
Here, we focus on the simple roots of nonlinear scalar equations by iterative methods. The prominent one-point (or one-step) Newton's method of order two, which is a basic tool in numerical analysis and numerous applications, has widely been applied and discussed in the literature; see, for example, [2-7]. Newton's iteration and any of its variants include derivative calculation per full cycle, which is not useful in engineering problems, since the calculation of derivatives often takes up a considerable amount of time.
To remedy this, Steffensen first coined the following quadratically convergent scheme: x_{n+1} = x_n − f(x_n)^2/(f(x_n + f(x_n)) − f(x_n)). Inspired by this method, many techniques with better orders of convergence have been provided through two- or three-step cycles; see [8] and the bibliographies therein. In between, the concept of optimality, which was mooted by Kung and Traub [9], also plays a crucial role; a multipoint method without memory for solving nonlinear scalar equations has the optimal order 2^{n−1}, where n is the total number of evaluations per full cycle. In what follows, we review some of the significant derivative-free iterations.
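As a concrete illustration (not taken from the paper), Steffensen's scheme above can be sketched in a few lines of Python; the test function and starting guess are hypothetical choices:

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's derivative-free scheme:
    x_{n+1} = x_n - f(x_n)**2 / (f(x_n + f(x_n)) - f(x_n))."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx * fx / (f(x + fx) - fx)
    return x

# Hypothetical example: the positive root of x**2 - 2 = 0.
root = steffensen(lambda x: x * x - 2.0, 1.5)
```

Each cycle costs two function evaluations and no derivatives, which is exactly the trade-off the derivative-free literature exploits.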
Peng et al. in [10] investigated an optimal two-step derivative-free technique (1.1), in which t_n is adaptively determined. Ren et al. in [11] furnished an optimal quartic scheme (1.2) using divided differences, where w_n = x_n + f(x_n) and a ∈ R.
Taking into account the divided differences, [12] contributed the two-step optimal method (1.3), where w_n = x_n + f(x_n). Note that the notation of divided differences, f[x, y] = (f(x) − f(y))/(x − y), will be used throughout this paper. In 2011, Khattri and Argyros [13] formulated a sixth-order method (1.4). Very recently, Thukral in [14] gave the optimal eighth-order derivative-free iterations without memory (1.5) and (1.6). Unfortunately, by error analysis, we have found that (1.6) (relation (2.32) in [14]) does not possess convergence order eight. Thus, it is not optimal in the sense of the Kung-Traub conjecture. Hence, (1.6) is excluded from our list of optimal eighth-order derivative-free methods. Note that (2.27) of [14] also has a conspicuous typo in its structure, which prevents it from producing the optimal order.
Derivative-free methods [15-17] have many applications in contrast to derivative-involved methods. Anyhow, there are some other factors, besides being free from derivatives or having a high order of convergence, in choosing a method for root finding; for example, we refer the readers to [18] for the importance of initial guesses in this subject matter. For further reading, one may refer to [19-26].

Journal of Applied Mathematics
This research contributes a general class of three-step methods without memory using four points, by which we mean that the function must be evaluated four times per full cycle, for solving single-variable nonlinear equations. The contributed class has the following important features. First, it reaches the optimal efficiency index 1.682. Second, it is free from derivative calculation per full cycle, which is highly beneficial for engineering and optimization problems. Third, using any optimal quartically convergent derivative-free two-step method in its first two steps will yield a new optimal eighth-order derivative-free iteration without memory in the sense of the Kung-Traub conjecture [9]. Finally, we will find that the new eighth-order derivative-free methods are fast and convergent.

Main Contribution
Consider the nonlinear scalar function f : D ⊆ R → R that is sufficiently smooth in a neighborhood D of a simple zero α. To construct derivative-free methods of optimal order eight, we consider a three-step cycle in which the first two steps are any of the optimal two-step derivative-free schemes without memory, as follows:

y_n = (first step of an optimal two-step derivative-free method; x_n and w_n are available),
z_n = (second step; y_n is available),
x_{n+1} = z_n − f(z_n)/f'(z_n).

2.1
Please note that w_n = x_n + βf(x_n) or w_n = x_n − βf(x_n), where β ∈ R − {0} is specified by the user. The value of w_n completely relies on the two-point method allocated in the first two steps of (2.1). The purpose of this paper is to establish new derivative-free methods with optimal order; hence, we reduce the number of evaluations from five to four by using a suitable approximation of the newly appeared derivative. Now, to attain a class of optimal eighth-order techniques free from derivatives, we should approximate f'(z_n) using the known values, that is, f(x_n), f(w_n), f(y_n), and f(z_n). We do this by applying a nonlinear rational function inspired by the Padé approximant as follows:

p(t) = (a_0 + a_1(t − x_n) + a_2(t − x_n)^2)/(1 + a_3(t − x_n)),   2.2

where a_1 − a_0 a_3 ≠ 0. The general setup in approximation theory is that a function f is given and a user wants to estimate it with a simpler function g, but in such a way that the difference between f and g is small. The advantage is that the simpler function g can be handled without too many difficulties. Polynomials are not always a good class of functions if one desires to estimate scalar functions with singularities, because polynomials are entire functions without singularities. They are only useful up to the first singularity of f. Rational functions (the concept behind the Padé approximant) are the simplest functions with singularities. The idea is that the poles of the rational function will move toward the singularities of the function f, and hence the domain of convergence can be enlarged. The (m, n) Padé approximant of f is the rational function Q_m/P_n, where Q_m is a polynomial of degree ≤ m and P_n is a polynomial of degree ≤ n, for which the interpolation conditions are satisfied. As a matter of fact, (2.2) is an interpolatory rational function inspired by the Padé approximant.
Hence, by substituting the known values in (2.2), that is, p(x_n) = f(x_n), p(w_n) = f(w_n), p(y_n) = f(y_n), and p(z_n) = f(z_n), we obtain a system of three linear equations in the three unknowns a_1, a_2, and a_3 (a_0 = f(x_n) is obvious), which has the follow-up solution after simplifying

2.4
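To make the construction concrete, the following Python sketch (our implementation assumption, with hypothetical sample nodes) assembles the linear system for a_1, a_2, a_3 from the conditions p(w_n) = f(w_n), p(y_n) = f(y_n), p(z_n) = f(z_n) with a_0 = f(x_n), and then evaluates p'(z_n) as the approximation to f'(z_n):

```python
import numpy as np

def rational_derivative(x, w, y, z, f):
    """Fit p(t) = (a0 + a1*s + a2*s**2) / (1 + a3*s), s = t - x,
    to the four data pairs and return p'(z), the stand-in for f'(z)."""
    a0 = f(x)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for i, t in enumerate((w, y, z)):
        s, ft = t - x, f(t)
        # p(t) = f(t)  <=>  a1*s + a2*s**2 - f(t)*a3*s = f(t) - a0
        A[i] = [s, s * s, -ft * s]
        b[i] = ft - a0
    a1, a2, a3 = np.linalg.solve(A, b)
    s = z - x
    num = a0 + a1 * s + a2 * s * s
    den = 1.0 + a3 * s
    return ((a1 + 2.0 * a2 * s) * den - num * a3) / (den * den)

# Hypothetical nodes near the root of f(x) = x**3 - 2:
fp = rational_derivative(1.26, 1.27, 1.262, 1.2599, lambda t: t ** 3 - 2.0)
```

For nearby nodes the value fp agrees closely with the true derivative f'(z) = 3z^2, which is all the third step of the class requires.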
At present, by using (2.1) and (2.4), we have the following efficient and accurate class:

y_n = (first step of an optimal two-step derivative-free method; x_n and w_n are available),
z_n = (second step; y_n is available),
x_{n+1} = z_n − f(z_n)/p'(z_n).

2.5
By considering any optimal two-step derivative-free method in the first two steps of (2.5), we attain a new optimal derivative-free eighth-order technique. Applying (1.3), we have

2.6
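As a runnable illustration of one full cycle, the sketch below combines a Steffensen-type fourth-order predictor built from divided differences (our reconstruction in the spirit of (1.2), since the displayed formulas are not reproduced here) with the rational third step of the class; the test problem, β, and a are hypothetical choices:

```python
import numpy as np

def dd(f, u, v):
    """First-order divided difference f[u, v]."""
    return (f(u) - f(v)) / (u - v)

def eighth_order_step(f, x, beta=0.01, a=0.0):
    w = x + beta * f(x)
    y = x - f(x) / dd(f, x, w)                      # first step
    z = y - f(y) / (dd(f, x, y) + dd(f, y, w)       # second step
                    - dd(f, x, w) + a * (y - x) * (y - w))
    # third step: replace f'(z) by p'(z) of the rational interpolant
    a0 = f(x)
    A, b = np.zeros((3, 3)), np.zeros(3)
    for i, t in enumerate((w, y, z)):
        s, ft = t - x, f(t)
        A[i], b[i] = [s, s * s, -ft * s], ft - a0
    a1, a2, a3 = np.linalg.solve(A, b)
    s = z - x
    num, den = a0 + a1 * s + a2 * s * s, 1.0 + a3 * s
    pz = ((a1 + 2.0 * a2 * s) * den - num * a3) / (den * den)
    return z - f(z) / pz

f = lambda t: t ** 3 - 2.0        # hypothetical test problem
x1 = eighth_order_step(f, 1.3)    # a single full cycle from x0 = 1.3
```

Each cycle uses exactly the four evaluations f(x_n), f(w_n), f(y_n), f(z_n); the code above calls f redundantly for readability, whereas a careful implementation would cache the four values.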
To obtain the solution of any nonlinear scalar equation by the new derivative-free methods, we must set a particular initial approximation x_0, ideally close to the simple root. In numerical analysis, it is very useful and essential to know the behavior of an approximate method. Therefore, we are about to prove the convergence order of (2.6).

Theorem 2.1. Let α be a simple root of the sufficiently differentiable function f in an open interval D. If x_0 is sufficiently close to α, then (2.6) is of eighth order and satisfies the error equation below:

2.8
By considering this relation and the first step of (2.6), we obtain

2.9
At this time, we should expand f(y_n) around the root by taking into consideration (2.9). Accordingly, we have

2.10
Using (2.9) and (2.10) in the second step of (2.6) results in

2.11
On the other hand, we have

2.12
We also have the following Taylor series expansion for the approximation of f(z_n) in the third step, using (2.11):

2.13
Now by applying (2.12) and (2.13), we attain

At this time, by applying (2.14) in (2.6), we obtain

2.15
This ends the proof and shows that (2.6) is an optimal eighth-order derivative-free scheme without memory.
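A standard numerical sanity check for such order proofs (not part of the paper) is the computational order of convergence, ρ ≈ ln(e_{n+1}/e_n)/ln(e_n/e_{n−1}); the sketch below estimates it for the quadratically convergent Steffensen scheme, with a hypothetical test function whose root is known:

```python
import math

def steffensen_iterates(f, x0, n):
    """Return the list [x0, x1, ..., xn] of Steffensen iterates."""
    xs = [x0]
    for _ in range(n):
        fx = f(xs[-1])
        xs.append(xs[-1] - fx * fx / (f(xs[-1] + fx) - fx))
    return xs

f = lambda x: x * x - 2.0        # hypothetical test problem
alpha = math.sqrt(2.0)           # exact root, used only to measure errors
e = [abs(x - alpha) for x in steffensen_iterates(f, 1.5, 3)]
# computational order of convergence from three consecutive errors
rho = math.log(e[3] / e[2]) / math.log(e[2] / e[1])
```

For the eighth-order members of the class the same recipe should give ρ ≈ 8, provided the errors are measured in multiprecision arithmetic.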
As a consequence, using (1.2), we attain the following biparametric eighth-order family

2.16
which satisfies the following error equation:

2.17

In what follows, we give other methods that can simply be constructed using the approach given above, just by varying the first two steps and using optimal derivative-free quartic iterations. Therefore, as other examples, we have

2.18
which satisfies the following error equation:

2.20
with the follow-up error relation

2.21
We can also present other methods by considering
In Table 1, a comparison of efficiency indices for different derivative-free methods of various orders is given. Equations (2.6) and (2.16), or any optimal eighth-order scheme resulting from our class, reach the efficiency index 8^{1/4} ≈ 1.682, which is optimal according to the Kung-Traub conjecture.

Remark 2.2. The introduced approximation for f'(z_n) in the third step of (2.5) always doubles the convergence rate of the two-step method without memory given in its first two steps. That is, using any optimal fourth-order derivative-free method in its first two steps yields a novel three-step optimal eighth-order method without memory, free from derivatives. This makes our class interesting and accurate. Note that using any cubically convergent derivative-free method without memory (using three evaluations) in the first two steps of (2.5) ends in a sixth-order derivative-free technique. This also shows that the introduced approximation for f'(z_n) doubles the convergence rate.
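The indices compared in Table 1 all follow the rule E = p^{1/n} for a method of order p using n evaluations per cycle; a quick check of the values quoted in the text (the list of competing methods is illustrative, and the evaluation count for (1.4) is our assumption):

```python
def efficiency_index(p, n):
    """Efficiency index E = p**(1/n): order p, n evaluations per cycle."""
    return p ** (1.0 / n)

steffensen_e = efficiency_index(2, 2)   # order 2, two evaluations
two_step_e   = efficiency_index(4, 3)   # optimal two-step, e.g. (1.2)-(1.3)
khattri_e    = efficiency_index(6, 4)   # sixth-order scheme (1.4), assuming four evaluations
ours_e       = efficiency_index(8, 4)   # the class (2.5)
```

With 8^{1/4} ≈ 1.6818, the class attains the largest of these indices.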
The free nonzero parameter β plays a crucial role in obtaining new numerical and theoretical results. In fact, if the user approximates β per cycle with another iteration using only the available data at the first step, the convergence behavior of the refined methods will be improved. That is to say, a more efficient refined version of the attained optimal eighth-order derivative-free methods can be constructed by embedding an iteration for β inside the real iterative scheme, at the price of more computational burden. This way of obtaining new methods is called "with memory" iteration. Such developments can be considered as future improvements in this field. We mention here that, according to the experimental results, choosing a very small value for β, such as 10^{−10}, will mostly decrease the error equation, and thus the numerical output results will be more accurate and reliable without much more computational load.
In what follows, the findings are generalized by illustrating the effectiveness of the eighth-order methods for determining the simple root of a nonlinear equation.

Numerical Testing
The results given in Section 2 are supported through numerical experiments. We also mention a well-known derivative-involved method for comparison, to show the reliability of our derivative-free methods.
Wang and Liu [27] suggested an optimal derivative-involved eighth-order method (3.1). Iteration (3.1) involves derivative evaluations per full cycle, which is not desirable in engineering problems. Three methods of our class, namely (2.6), (2.16) with β = 1, a = 0, and (2.16) with β = 0.01, a = 0, are compared with the quartic schemes of Liu et al. (1.3) and Ren et al. (1.2) with a = 0, the sixth-order scheme of Khattri and Argyros (1.4), and the optimal eighth-order method of Thukral (1.5). The comparison was also made with the derivative-involved method (3.1).

All the computations reported here were done in MATLAB 7.6 using the VPA command, where for convergence we have required the distance between two consecutive approximations to be less than ε = 10^{−6000}; that is, |x_{n+1} − x_n| ≤ 10^{−6000} or |f(x_n)| ≤ 10^{−6000}. Scientific computations in many branches of science and technology demand a very high degree of numerical precision. The test nonlinear scalar functions are listed in Table 2.
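Reproducing a tolerance of 10^{−6000} requires multiprecision arithmetic such as MATLAB's VPA; as a scaled-down, runnable stand-in (our assumptions: Python's decimal module, 120 working digits, a tolerance of 10^{−80}, and Steffensen's scheme instead of the eighth-order methods), the two-part stopping rule looks like this:

```python
from decimal import Decimal, getcontext

getcontext().prec = 120              # 120 significant digits
EPS = Decimal("1e-80")               # scaled-down stand-in for 10**-6000

def f(x):                            # hypothetical test problem x**2 - 2
    return x * x - Decimal(2)

x = Decimal("1.5")
for _ in range(100):
    fx = f(x)
    if abs(fx) <= EPS:               # |f(x_n)| <= eps
        break
    x_new = x - fx * fx / (f(x + fx) - fx)
    if abs(x_new - x) <= EPS:        # |x_{n+1} - x_n| <= eps
        x = x_new
        break
    x = x_new
```

The same two-part criterion, with ε = 10^{−6000} and enough working digits, is what the reported experiments use.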
The results of comparisons for the test functions are provided in Table 3. It can be seen that the methods resulting from our class are accurate and efficient in terms of the number of accurate decimal places obtained after some iterations. In terms of computational cost, our class is much better than the compared methods. The class requires four evaluations of the function per full iteration to reach the efficiency index 1.682.
An important problem that appears in the practical application of multipoint methods is that the quick convergence, one of the merits of multipoint methods, can be attained only if the initial approximations are sufficiently close to the sought roots; otherwise, it is not possible to realize the expected convergence speed in practice. For this reason, in applying multipoint root-finding methods, special attention should be paid to finding good initial approximations. Yun in [28] outlined a noniterative way for this purpose, in which x_0 is a simple root (or an approximation of it) of f on the interval (a, b) with f(a)f(b) < 0, and δ is a positive integer.
If the cost of derivatives is greater than that of function evaluations, then (3.1) or any optimal eighth-order derivative-involved method is not effective. To compare (2.5) with optimal eighth-order derivative-involved methods, we can use the original computational efficiency definition due to Traub [29], CE = p^{1/C}, where p is the order of the method and C = Σ_j Λ_j is the total cost per cycle, Λ_j being the cost of evaluating f^{(j)}, j ≥ 0. If we take Λ_0 = 1, we have CE(2.5) = 8^{1/4} and CE(3.1) = 8^{1/(3+Λ_1)}, so that CE(3.1) < CE(2.5) if Λ_1 > 1.
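This comparison can be checked numerically; in the sketch below, Λ_1 (the relative cost of one derivative evaluation, with Λ_0 = 1) is a free parameter:

```python
def traub_ce(order, total_cost):
    """Traub's computational efficiency CE = order**(1/total_cost)."""
    return order ** (1.0 / total_cost)

ce_class = traub_ce(8, 4)                 # (2.5): four function evaluations
def ce_derivative_involved(lam1):
    """(3.1): three function evaluations plus one derivative of cost lam1."""
    return traub_ce(8, 3 + lam1)
```

As soon as Λ_1 exceeds 1, the derivative-involved method (3.1) falls behind the derivative-free class.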

Concluding Notes
The design of iterative formulas for solving nonlinear scalar equations is an interesting task in mathematics. On the other hand, analytic methods for solving such equations are almost nonexistent, and therefore it is only possible to obtain approximate solutions by relying on numerical methods based upon iterative procedures. In this work, we have contributed a simple yet powerful class of iterative methods for the solution of nonlinear scalar equations. The class was obtained by using the concept of Padé approximation (an interpolatory rational function) in the third step of a three-step cycle, in which the first two steps are given by any of the existing optimal fourth-order derivative-free methods without memory. We have seen that the introduced approximation of the first derivative of the function in the third step doubles the convergence rate. The analytical proof for one method of our novel class was given. Per full cycle, our class consists of four evaluations of the function and reaches the optimal order 8. Hence, the efficiency index of the class is 1.682, that is, the optimal efficiency index. The effectiveness of the developed methods from the class was illustrated by solving some test functions and comparing with well-known derivative-free and derivative-involved methods. The results of comparisons were fully given in Table 3. Finally, it can be concluded that the novel class is accurate and efficient in contrast to the existing methods in the literature.

Proof. We provide the Taylor series expansion of each term involved in (2.6). By Taylor expanding around the simple root in the nth iterate, we have f(x_n) = c_1 e_n + c_2 e_n^2 + c_3 e_n^3 + ···, where e_n = x_n − α and c_k = f^{(k)}(α)/k!, ∀k = 1, 2, 3, .... Furthermore, it is easy to find

Table 1 :
Comparison of efficiency indices for different derivative-free methods.

Table 2 :
Test functions and their roots.

Table 3 :
Results of comparisons for test functions.