Two Optimal Eighth-Order Derivative-Free Classes of Iterative Methods



Introduction
This paper focuses on finding approximate solutions of nonlinear, sufficiently smooth scalar equations by derivative-free methods. Techniques such as the false position method for the root-finding of a nonlinear equation f(x) = 0 require bracketing of the zero by two guesses. Such schemes are called bracketing methods. They are almost always convergent, since they are based on shrinking the interval between the two guesses so as to zero in on the root of the equation. In Newton's method, the zero is not bracketed; in fact, only one initial guess of the solution is needed to start the iterative process of finding the zero. The method hence falls into the category of open methods.
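To make the bracketing idea above concrete, here is a minimal Python sketch of the false position (regula falsi) method; the test function in the usage note is an illustrative choice of ours, not one of the paper's test problems:

```python
def false_position(f, a, b, tol=1e-12, maxit=100):
    """Bracketing root-finder: [a, b] must satisfy f(a) * f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root is not bracketed by [a, b]")
    c = a
    for _ in range(maxit):
        # Intersection of the chord through (a, fa) and (b, fb) with the x-axis.
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            break
        # Keep the sub-interval that still brackets the zero.
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c
```

For example, `false_position(lambda x: x**2 - 2, 0.0, 2.0)` converges to sqrt(2), always keeping the root bracketed.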
Convergence of open methods is not guaranteed, but when such a method does converge, it does so much faster than the bracketing methods [1]. Although Newton's iteration has been widely discussed and improved in the literature (see, e.g., [2, 3]), one of its main drawbacks, namely the need to evaluate the first derivative, occasionally restricts the application of this method or its variants. On the other hand, when improving without-memory iterative methods for solving nonlinear equations in one variable, the conjecture of Kung and Traub [4] is automatically taken into consideration. It should be remarked that, according to this unproved conjecture, the efficiency index cannot pass the maximum level 2^((n-1)/n), where n is the total number of functional evaluations per cycle.
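The Kung-Traub bound just quoted is easy to evaluate numerically; the short snippet below (illustrative only) computes the optimal efficiency index 2^((n-1)/n) for small numbers of evaluations, reproducing the value 1.682 claimed later for methods with four evaluations per cycle:

```python
def optimal_efficiency_index(n):
    """Kung-Traub optimal efficiency index 2^((n-1)/n) for n evaluations per cycle."""
    return 2.0 ** ((n - 1) / n)

# n = 2, 3, 4 give roughly 1.414, 1.587, 1.682.
for n in (2, 3, 4):
    print(n, round(optimal_efficiency_index(n), 3))
```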
Apart from involving the first derivative, Newton's iteration also requires the second derivative of the function when it is applied to optimization problems to find local minima. Its use is therefore limited, especially when evaluating the first and second derivatives of the function is expensive. Consequently, derivative-free algorithms come to attention.
For the first time, Steffensen [5] gave the following derivative-free form of Newton's iteration, which possesses the same rate of convergence and efficiency index as Newton's method:

x_{n+1} = x_n - f(x_n)^2 / (f(x_n + f(x_n)) - f(x_n)),   n = 0, 1, 2, . . . .   (1.1)

In this work, we suggest novel classes of three-step, four-point iterative methods that are without memory, derivative-free, optimal, and therefore well suited to hard problems. To this end, the contents of the paper unfold as follows. Section 2 reveals our main contribution, namely some generalizations of the fourth-order uniparametric family of Kung and Traub [4]. Subsequently, Section 3 gives a very short discussion of the derivative-free methods available in the literature. Section 4 discusses the implementation of the new methods produced from our classes on a large number of numerical examples. Finally, a short conclusion is given in Section 5.
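A minimal Python sketch of Steffensen's scheme (1.1) may make its derivative-free character concrete; the test function and starting guess in the usage note are illustrative choices of ours:

```python
def steffensen(f, x0, tol=1e-12, maxit=50):
    """Steffensen's derivative-free iteration: the derivative f'(x_n) in Newton's
    method is replaced by the divided difference (f(x_n + f(x_n)) - f(x_n)) / f(x_n)."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx
        if denom == 0.0:          # degenerate divided difference; stop
            break
        x = x - fx * fx / denom
    return x
```

For instance, `steffensen(lambda x: x**2 - 2, 1.5)` converges quadratically to sqrt(2) using two function evaluations per cycle and no derivatives.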

Main Results

To build the last step, we first consider the same approximation as used in the second step of (2.1) to annihilate f'(z_n) and then take advantage of the weight function approach as follows:

y_n = x_n - f(x_n)/f[x_n, w_n],   w_n = x_n + beta f(x_n),
z_n = y_n - f(y_n) f(w_n) / ((f(w_n) - f(y_n)) f[x_n, y_n]),
x_{n+1} = z_n - ( f(z_n) f(w_n) / ((f(w_n) - f(y_n)) f[x_n, y_n]) ) { G(phi) x H(tau) x Q(sigma) x L(rho) },   (2.2)

where beta is in R\{0}; phi = f(z)/f(y), tau = f(z)/f(w), sigma = f(z)/f(x), and rho = f(y)/f(w), written without the index n; and G(phi), H(tau), Q(sigma), L(rho) are four real-valued weight functions, which force the order to reach the maximum level eight using a fixed number of evaluations per cycle. Theorem 2.1 shows that (2.2) attains local eighth-order convergence using only four function evaluations per full cycle. This reveals that any method from our proposed class possesses 1.682 as the efficiency index, which is optimal according to the Kung-Traub conjecture, while it is fully free from derivative evaluations.

Theorem 2.1. Let alpha in D be a simple zero of a sufficiently differentiable function f : D subset R -> R, and let c_j = f^(j)(alpha)/j!, j >= 1. If x_0 is sufficiently close to alpha, then (i) the local order of convergence of the class of without-memory methods defined in (2.2) is eight, when

G(0) = G'(0) = 1,  |G''(0)| < infinity,
H(0) = H'(0) = 1,
Q(0) = 1,  Q'(0) = 1,
L(0) = 1,  L'(0) = L'''(0) = 0,  L''(0) = 2 + 2 beta f[x_n, w_n],  |L^(4)(0)| < infinity,   (2.3)

and (ii) the error equation reads

e_{n+1} = x_{n+1} - alpha
        = -(1/(24 c_1^7)) c_2 (2c_2^2 - c_1 c_3)(1 + c_1 beta)^2
          x [ 12(1 + c_1 beta)^2 ( -2(7c_2^4 - 8c_1 c_2^2 c_3 + c_1^2 c_3^2 + c_1^2 c_2 c_4) + (-2c_2^2 + c_1 c_3)^2 G''(0) ) + c_2^4 L^(4)(0) ] e_n^8 + O(e_n^9).   (2.4)

Proof. We expand the terms of (2.2) around the simple root alpha at the nth iterate, taking into account e_n = x_n - alpha and f(alpha) = 0. Thus, we write

f(x_n) = c_1 e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + . . . + O(e_n^9).

Accordingly, we attain

y_n - alpha = (c_2 (1 + c_1 beta)/c_1) e_n^2 + . . . + O(e_n^9).   (2.5)
In the same vein, we obtain

z_n - alpha = ((2c_2^3 - c_1 c_2 c_3)(1 + c_1 beta)^2 / c_1^3) e_n^4
            - (1/c_1^4)(1 + c_1 beta) ( c_1^2 c_3^2 (1 + c_1 beta)(2 + c_1 beta) + c_1^2 c_2 c_4 (1 + c_1 beta)(2 + c_1 beta) + c_2^4 (10 + c_1 beta (11 + 5 c_1 beta)) - c_1 c_2^2 c_3 (14 + c_1 beta (19 + 7 c_1 beta)) ) e_n^5 + . . . + O(e_n^9).   (2.6)

We also have

f(z_n) = (1/c_1^2)(2c_2^3 - c_1 c_2 c_3)(1 + c_1 beta)^2 e_n^4 + . . . + O(e_n^9).

Now, using symbolic computation in the last step of (2.2), we attain

z_n - f(z_n) f(w_n)/((f(w_n) - f(y_n)) f[x_n, y_n]) - alpha = ((6c_2^5 - 5c_1 c_2^3 c_3 + c_1^2 c_2 c_3^2)(1 + c_1 beta)^3 / c_1^5) e_n^6 + . . . + O(e_n^9).   (2.7)
Furthermore, by considering (2.7) and the real-valued weight functions as in (2.3), we obtain the error equation (2.4). This shows that the proposed class of derivative-free methods (2.2)-(2.3) reaches the optimal eighth-order convergence by using only four function evaluations per full iteration. This ends the proof.

Now, any optimal three-step, four-point derivative-free without-memory method can be produced by using (2.2)-(2.3). As an instance, with beta = 0.01 we can have

y_n = x_n - f(x_n)/f[x_n, w_n],   w_n = x_n + 0.01 f(x_n),
z_n = y_n - f(y_n) f(w_n)/((f(w_n) - f(y_n)) f[x_n, y_n]),   (2.8)

completed by the last step of (2.2) with weight functions satisfying (2.3), and its error equation takes the form

e_{n+1} = C e_n^8 + O(e_n^9),   (2.9)

where the asymptotic error constant C follows from (2.4).
We should recall here that, per computing step of any method from the new class, the values of f(w_n)/((f(w_n) - f(y_n)) f[x_n, y_n]) and f[x_n, w_n] should be computed only once, and their values are then reused in the rest of the cycle wherever required.
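The class can be sketched in Python as follows. The weight functions G(p) = 1 + p, H(t) = 1 + t, Q(s) = 1 + s, and L(r) = 1 + (1 + beta f[x_n, w_n]) r^2 are illustrative choices of ours intended to satisfy the conditions (2.3), not the authors' recommended ones; note how the shared divided differences and the common factor are computed once per cycle and reused, as remarked above:

```python
def derivative_free_eighth(f, x0, beta=0.01, tol=1e-13, maxit=10):
    """One illustrative instance of the three-step derivative-free class (2.2),
    using four function evaluations f(x), f(w), f(y), f(z) per cycle."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + beta * fx
        fw = f(w)
        fxw = (fw - fx) / (w - x)          # divided difference f[x, w], reused below
        y = x - fx / fxw
        fy = f(y)
        if fy == 0.0:                       # y already hit the root exactly
            return y
        fxy = (fy - fx) / (y - x)           # divided difference f[x, y]
        corr = fw / ((fw - fy) * fxy)       # shared factor, computed once per cycle
        z = y - fy * corr
        fz = f(z)
        phi, tau, sig, rho = fz / fy, fz / fw, fz / fx, fy / fw
        # Illustrative weights meant to satisfy (2.3).
        weight = (1 + phi) * (1 + tau) * (1 + sig) * (1 + (1 + beta * fxw) * rho**2)
        x = z - fz * corr * weight
    return x
```

For example, `derivative_free_eighth(lambda t: t**3 - 2, 1.0)` converges to the cube root of 2 in a few cycles without any derivative evaluations.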
Many nonlinear functions arise from solving complex environmental engineering problems, where the objective depends on the output of a numerical simulation of a physical process. These simulators are expensive to evaluate because they involve numerically solving systems of partial differential equations governing the underlying physical phenomena. Function evaluation thus remains the dominant expense in such optimization problems, since savings in time are often offset by the increased accuracy demanded of the simulation. For these reasons, algorithms like (2.2)-(2.3), which need no derivative evaluations and whose order is optimal and efficiency index high, are important for hard problems.
Before proceeding to the next sections, a discussion is required of another similar class of derivative-free methods, attainable by choosing a different approximation in the first step of (2.1) together with somewhat different weight functions in the last step. Considering these in (2.1), we can have the following derivative-free without-memory class

2.10
whose order is shown in Theorem 2.2 to be eight for appropriately chosen weight functions. Here, as before, phi = f(z)/f(y), tau = f(z)/f(w), sigma = f(z)/f(x), and rho = f(y)/f(w), written without the index n, and G(phi), H(tau), Q(sigma), and L(rho) are four real-valued weight functions.

Theorem 2.2. Let alpha in D be a simple zero of a sufficiently differentiable function f : D subset R -> R, and let c_j = f^(j)(alpha)/j!, j >= 1. If x_0 is sufficiently close to alpha, then (i) the local order of convergence of the class of without-memory methods defined in (2.10) is eight, when

2.11
and (ii) the error equation reads

2.12
Proof. The proof of this theorem is similar to that of Theorem 2.1 and is hence omitted. Now, by using (2.10)-(2.11), we can obtain other efficient, optimal eighth-order derivative-free without-memory methods, such as

A Brief Look at the Literature
In this section, we briefly present some well-known high-order derivative-free techniques for finding the simple zeros of nonlinear equations, for the sake of comparison. Kung and Traub [4] introduced the following without-memory two-step iteration

y_n = x_n - f(x_n)/f[x_n, w_n],   w_n = x_n + beta f(x_n),
x_{n+1} = y_n - f(y_n) f(w_n)/((f(w_n) - f(y_n)) f[x_n, y_n]).   (3.1)
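A Python sketch of this two-step family (matching the first two steps of (2.2)) may be useful for comparison with the eighth-order class; the test function in the usage note is an illustrative choice:

```python
def kung_traub_fourth(f, x0, beta=0.01, tol=1e-13, maxit=25):
    """Two-step derivative-free Kung-Traub-type family: optimal fourth order
    with three function evaluations f(x_n), f(w_n), f(y_n) per cycle."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + beta * fx
        fw = f(w)
        fxw = (fw - fx) / (w - x)      # f[x, w] approximates f'(x_n)
        y = x - fx / fxw
        fy = f(y)
        if fy == 0.0:                   # y already hit the root exactly
            return y
        fxy = (fy - fx) / (y - x)      # f[x, y]
        x = y - fy * fw / ((fw - fy) * fxy)
    return x
```

For example, `kung_traub_fourth(lambda t: t * t - 2, 1.5)` reaches sqrt(2) to machine precision in two or three cycles.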

3.3
For further reading, one may consult the papers [7-10] and the references therein.

Numerical Experiments
We now check the effectiveness of the novel derivative-free classes of iterative methods. To do so, we choose (2.8) as the representative of our class (2.2)-(2.3). Note that the methods derived from the class (2.10)-(2.11) could be used as well.
We have compared (2.8) with Steffensen's method (1.1), the fourth-order family of Kung and Traub (3.1) with beta = 0.01, the seventh-order technique of Soleymani (3.3), and the optimal eighth-order family of Kung and Traub (3.2) with beta = 1, using the test functions and zeros listed in Table 1. The results of the comparisons are given in Table 2 in terms of the number of significant digits for each test function after the specified number of iterations; for example, 0.1e-207 shows that the absolute value of the given nonlinear function f_1 after nine iterations of iteration (1.1) is zero up to 207 decimal places. Before implementing our proposed classes of methods or the compared methods in Table 2, it is important to review their proofs of convergence, and specifically the assumptions made in them, whenever the iterations become divergent: where a method fails to converge, it is because those assumptions are not met. For this reason, at the end of this section we give a Mathematica 8 code to extract initial approximations to all the zeros of a nonlinear function in an interval.
It can be observed from Table 2 that in most cases our contributed method from the class (2.2)-(2.3) is superior in solving nonlinear equations. Generally speaking, iterative methods of the same order and efficiency, and with the same background (that is, Newton-type or Steffensen-type methods), have similar numerical outputs due to their similar characters. Note also that the experimental results show that the smaller the value of beta (not equal to 0), the more accurate the results of solving nonlinear equations.
We have completed Table 2 for two different initial guesses. In Table 2, IT and TNE stand for the number of iterations and the total number of functional evaluations, respectively. To have a fair comparison, we used a higher number of iterations for (1.1). The numerical results for the test functions support the theoretical discussion given in Section 2. We also point out that the procedure for applying the new iterative methods to nonsmooth functions is similar to that recently illustrated in [12], while multiple zeros require additional treatment before the methods can be applied.

Figure 1: The graph of the function f with finitely many zeros in an interval.
The commands below are applied to lists for high-precision computing. In fact, using [15], we can build a list of initial guesses, close enough to the zeros and of good accuracy, with which to start our optimal derivative-free eighth-order methods. The procedure for finding such a robust list is based on the powerful command NDSolve, applied to the nonlinear function on the interval D = [a, b]. This can be written as the piece of Mathematica code in Algorithm 1, taking an oscillatory function on the domain D = [-2., 12.] as the input test function. We thus have an efficient list of initial approximations to the zeros of a nonlinear, once-differentiable function with finitely many zeros in an interval. The number of zeros and the graph of the function, including the positions of the zeros, can be produced by the commands in Algorithm 2 (see Figure 1). For this test, there are 33 zeros in the considered interval, which can easily be used as the starting points for our proposed high-order derivative-free methods. Note that the output vector "initialPoints" contains the initial approximations; in this test problem, the list of zeros ends with {. . ., 9.09881, 9.32263, 9.43461, 9.65251, 9.76525, 9.9662, 10.0901, 10.2669, 10.4082, 10.557, 10.719, 10.8379, 11.023, 11.11, 11.3224, 11.3721}. We end this section by mentioning that, for very oscillatory functions, it is better to first divide the interval into some smaller subintervals and then obtain the solutions. The command NDSolve uses a maximum of 10000 steps; this can be changed if needed. In cases where NDSolve fails, this algorithm might fail too.
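Although the original procedure relies on Mathematica's NDSolve, the underlying idea (locate every sign change of f on a grid over [a, b], then refine each bracket) can be sketched in Python as follows; the grid density and the test function sin(x) in the usage note are illustrative choices of ours:

```python
import math

def initial_points(f, a, b, samples=2000, tol=1e-12):
    """Approximate all simple zeros of f in [a, b]: scan a uniform grid for
    sign changes, then refine each bracketing subinterval by bisection."""
    h = (b - a) / samples
    roots = []
    xl, fl = a, f(a)
    for i in range(1, samples + 1):
        xr = a + i * h
        fr = f(xr)
        if fl == 0.0:                       # grid point happens to be a zero
            roots.append(xl)
        elif fl * fr < 0:                   # sign change: a zero lies in [xl, xr]
            lo, hi, flo = xl, xr, fl
            while hi - lo > tol:            # plain bisection on the bracket
                mid = 0.5 * (lo + hi)
                fm = f(mid)
                if flo * fm <= 0:
                    hi = mid
                else:
                    lo, flo = mid, fm
            roots.append(0.5 * (lo + hi))
        xl, fl = xr, fr
    return roots
```

For instance, `initial_points(math.sin, -2.0, 12.0)` returns the four zeros 0, pi, 2*pi, and 3*pi of sin on [-2, 12], ready to be used as starting points for the high-order methods above.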