On a Generalization of Bi et al.'s Iterative Methods with Eighth-Order Convergence for Solving Nonlinear Equations

The primary goal of this work is to provide a general optimal three-step class of iterative methods based on the schemes designed by Bi et al. (2009). The resulting class requires four functional evaluations per iteration and has eighth-order convergence, so it satisfies Kung and Traub's conjecture on the construction of optimal methods without memory. Moreover, some concrete methods of this class are presented and implemented numerically, showing their applicability and efficiency.

In this paper we present a new optimal class of three-step methods without memory, which employs the idea of weight functions in the second and third steps. The order of this class is eight, requiring four functional evaluations per iteration, and therefore it supports Kung and Traub's conjecture [8]. The proposed class includes the methods of Bi et al. [6, 7].
In order to design the new methods, we will use divided differences. Let f(x) be a function defined on an interval I, where I is the smallest interval containing the k + 1 distinct nodes x_0, x_1, ..., x_k. The divided difference f[x_0, x_1, ..., x_k] of order k is defined recursively as follows:

f[x_i] = f(x_i), i = 0, 1, ..., k,
f[x_0, x_1, ..., x_k] = (f[x_1, ..., x_k] − f[x_0, ..., x_{k−1}]) / (x_k − x_0).

It is clear that the divided difference f[x_0, x_1, ..., x_k] is a symmetric function of its arguments x_0, x_1, ..., x_k. Moreover, if we assume that f ∈ C^{(k+1)}(I_x), where I_x is the smallest interval containing the nodes x_0, x_1, ..., x_k and x, then

f[x_0, x_1, ..., x_k, x] = f^{(k+1)}(ξ) / (k + 1)!, for some ξ ∈ I_x.

Moreover, we recall the so-called efficiency index defined by Ostrowski [38] as EI = p^{1/d}, where p is the order of convergence and d is the total number of functional evaluations per iteration.
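As a sketch, the recursive definition above can be implemented directly; the function name `divided_difference` is ours, not from the paper, and the naive recursion is exponential in k (a table-based scheme would be used in practice):

```python
# Recursive divided differences:
#   f[x_i] = f(x_i)
#   f[x_0,...,x_k] = (f[x_1,...,x_k] - f[x_0,...,x_{k-1}]) / (x_k - x_0)
def divided_difference(f, nodes):
    """k-th order divided difference f[x_0, ..., x_k] over distinct nodes."""
    if len(nodes) == 1:
        return f(nodes[0])
    return (divided_difference(f, nodes[1:])
            - divided_difference(f, nodes[:-1])) / (nodes[-1] - nodes[0])

# For f(x) = x^3 - 2 the second-order divided difference equals
# x_0 + x_1 + x_2, and symmetry lets us permute the nodes freely.
f = lambda x: x ** 3 - 2.0
print(divided_difference(f, [1.0, 2.0, 3.0]))   # → 6.0
print(divided_difference(f, [3.0, 1.0, 2.0]))   # → 6.0 (symmetry)
```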

Main Result: Development and Convergence Analysis of the New Methods
It is well known that Newton's method converges quadratically under standard conditions. To obtain a higher order of convergence and a higher efficiency index than Newton's scheme, we compose Newton's method with itself twice:

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − f(y_n)/f'(y_n),
x_{n+1} = z_n − f(z_n)/f'(z_n).

This scheme is eighth-order convergent, but it requires six functional evaluations per iteration, so its efficiency index EI = 8^{1/6} ≈ 1.414 is poor; we therefore need to reduce the number of functional evaluations.
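The composed scheme can be sketched as follows (a minimal illustration; the function name `newton_composed` and the test equation are ours):

```python
def newton_composed(f, fp, x, iters=3):
    """Three Newton substeps per iteration: eighth-order convergence, but
    six functional evaluations (f and f' at x, y, z), so EI = 8**(1/6)."""
    for _ in range(iters):
        y = x - f(x) / fp(x)   # evaluations 1-2
        z = y - f(y) / fp(y)   # evaluations 3-4
        x = z - f(z) / fp(z)   # evaluations 5-6
    return x

# Hypothetical test equation: the real root of x^3 - 2 from x0 = 1.5.
root = newton_composed(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, 1.5)
```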
In the third step, f'(z_n) can be approximated in a similar way as in [6]:

f'(z_n) ≈ f[z_n, y_n] + f[z_n, x_n, x_n](z_n − y_n),

where f[z_n, x_n, x_n] = (f[z_n, x_n] − f'(x_n))/(z_n − x_n). Also, the "frozen" derivative f'(x_n) can be used in the second step, and adequate weight functions will improve the efficiency of the second and last steps. So, the following three-step methods are proposed:

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − g(u_n) f(y_n)/f'(x_n), with u_n = f(y_n)/f(x_n),
x_{n+1} = z_n − h(v_n) f(z_n) / (f[z_n, y_n] + f[z_n, x_n, x_n](z_n − y_n)), with v_n = f(z_n)/f(x_n). (5)

It is clear that the proposed methods (5) require only four functional evaluations per iteration, while they are not, in general, eighth-order methods. To recover the optimal eighth order, we find suitable conditions on the introduced weight functions g(u) and h(v).
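To make the structure of the class concrete, here is a minimal sketch in Python. The function name `family5` and the call below are ours, and the choice of weight arguments u_n = f(y_n)/f(x_n) and v_n = f(z_n)/f(x_n) is our reading of the class, not code from the paper; the example weights g(u) = (2 − u)/(2 − 5u) and h(v) = 1 + 2v satisfy the conditions g(0) = h(0) = 1, g'(0) = h'(0) = 2, g''(0) = 10 required later:

```python
def family5(f, fp, x0, g, h, iters=2):
    """Three-step scheme of type (5): Newton predictor, weighted second
    step with the frozen derivative f'(x_n), and a third step in which
    f'(z_n) is approximated by f[z,y] + f[z,x,x](z - y).
    Four evaluations per iteration: f(x), f'(x), f(y), f(z)."""
    x = x0
    for _ in range(iters):
        fx = f(x)
        if fx == 0.0:                       # already at the root
            return x
        dfx = fp(x)
        y = x - fx / dfx                    # first step (Newton)
        fy = f(y)
        if fy == 0.0:
            return y
        z = y - g(fy / fx) * fy / dfx       # second step, weight g(u_n)
        fz = f(fz := z) if False else f(z)  # third-step evaluation
        if fz == 0.0 or z == y or z == x:   # converged / degenerate
            return z
        fzy = (fz - fy) / (z - y)                      # f[z, y]
        fzxx = ((fz - fx) / (z - x) - dfx) / (z - x)   # f[z, x, x]
        x = z - h(fz / fx) * fz / (fzy + fzxx * (z - y))  # third step
    return x

# Example weights satisfying the eighth-order conditions.
g = lambda u: (2 - u) / (2 - 5 * u)
h = lambda v: 1 + 2 * v
root = family5(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, 1.5, g, h)
```

Two iterations (eight functional evaluations in total) already reach full double precision for this example, consistent with eighth-order convergence.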
To find the weight functions g and h in (5) providing order eight, we will use the method of undetermined coefficients and Taylor series about 0, since u_n → 0 and v_n → 0 when n → ∞.
Let us consider the Taylor expansions of the weight functions around zero, g(u) = g(0) + g'(0)u + (1/2)g''(0)u^2 + O(u^3) and h(v) = h(0) + h'(0)v + O(v^2). The following result states suitable conditions for proving that the new class has eighth order of convergence.

Theorem 1.
Assume that f : I ⊆ ℝ → ℝ is a sufficiently differentiable function. Let us suppose that α ∈ I is a simple zero of f. If the initial estimation x_0 is close enough to α, then the sequence {x_n} generated by any method of the family (5) converges to α with eighth order of convergence, provided that g and h are sufficiently differentiable real functions satisfying g(0) = h(0) = 1, g'(0) = h'(0) = 2, and g''(0) = 10.
Proof. Let us introduce the notations e_n = x_n − α and c_k = f^{(k)}(α)/(k! f'(α)), k = 2, 3, .... Using Taylor's expansion and taking into account f(α) = 0, we have

f(x_n) = f'(α)[e_n + c_2 e_n^2 + c_3 e_n^3 + ⋯ + c_8 e_n^8] + O(e_n^9). (8)

Also, by direct differentiation, we obtain

f'(x_n) = f'(α)[1 + 2c_2 e_n + 3c_3 e_n^2 + ⋯ + 8c_8 e_n^7] + O(e_n^8). (9)

From (8) and (9) we get the expansion of the Newton correction f(x_n)/f'(x_n). Hence,

y_n − α = c_2 e_n^2 − 2(c_2^2 − c_3) e_n^3 + O(e_n^4).

Similar to (8), f(y_n) can be expanded in powers of y_n − α. Moreover, taking into account (8), the quotient u_n = f(y_n)/f(x_n) can be written as a power series in e_n. By using Taylor's expansion of g around zero and by using (11)–(15), we obtain the error equation of the second step. We now need to annihilate the terms in e_n^2 and e_n^3, not only to make the first two steps optimal but also to simplify the subsequent relations. It is enough to ask the weight function to satisfy the conditions g(0) = 1 and g'(0) = 2. Then z_n − α = O(e_n^4). For the third step, we also require the expansions of the divided differences f[z_n, y_n] and f[z_n, x_n, x_n]. Now let h(v_n) ≈ h(0) + h'(0) v_n.
Finally, taking g''(0) = 10 and h'(0) = 2, we obtain the error equation e_{n+1} = O(e_n^8), which shows that, under the stated conditions on the weight functions g and h, the methods (5) have eighth-order convergence and are optimal. This finishes the proof.
According to the above analysis, we can obtain the following special cases.

Some Concrete Methods
In this section, we put forward some particular three-step methods based on the general class designed in this work.

Methods 1 and 2.
Firstly, by combining the weight functions (26) and (27), the family (29) is obtained; a special case of (29) appears for a particular choice of the first weight function. All the methods (29)–(35) require three functional evaluations, namely f(x_n), f(y_n), and f(z_n), and one evaluation of the first derivative, namely f'(x_n), per iteration. Therefore, they are optimal in the sense of Kung and Traub's conjecture for n = 4 evaluations, with order 2^{n−1} = 2^3 = 8. Thus, if we assume that all the evaluations have the same cost, then EI = 8^{1/4} ≈ 1.682.
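Since the concrete weight functions (26)–(35) are not reproduced above, the following sketch checks Theorem 1's conditions numerically for two candidate Bi et al.-type weights; the specific forms below are our assumptions, and the finite-difference helpers `d1` and `d2` are ours:

```python
# Candidate weights (assumed forms):
#   g(u) = (2 - u)/(2 - 5u)          -- second-step weight
#   h(v) = (1 + (b + 2)v)/(1 + b*v)  -- third-step weight, any real b
def g(u):
    return (2 - u) / (2 - 5 * u)

def h(v, b=-1.0):
    return (1 + (b + 2) * v) / (1 + b * v)

def d1(w, t=1e-5):
    """Central-difference first derivative at 0."""
    return (w(t) - w(-t)) / (2 * t)

def d2(w, t=1e-4):
    """Central-difference second derivative at 0."""
    return (w(t) - 2 * w(0.0) + w(-t)) / t ** 2

print(g(0.0), d1(g), d2(g))   # ~ 1, 2, 10  (Theorem 1's conditions on g)
print(h(0.0), d1(h))          # ~ 1, 2      (conditions on h, for any b)
```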

Numerical Implementation and Comparisons
This section concerns the numerical results of the proposed methods (30)–(35). Moreover, they are compared with Kung–Traub's method presented in [8]. Numerical results have been carried out using Mathematica 8 with 400 digits of precision. In each table, ACOC stands for the Approximated Computational Order of Convergence (see [39]), which is given by

ACOC ≈ ln(|x_{n+1} − x_n| / |x_n − x_{n−1}|) / ln(|x_n − x_{n−1}| / |x_{n−1} − x_{n−2}|).

Among many test problems, the following four examples are considered. From Table 1, it can be seen that all the methods work properly; in particular, the results of methods (34) and (35) are especially good. Table 2 shows that the numerical results agree well with the theory; in this example, methods (34) and (35) do not behave as well as in Example 1. Table 3 represents an important case: although methods (34) and (35) work very well in Example 1, they do not produce convergent iterations here. It should be remarked that these divergent sequences show that some methods work better in some cases, while they may fail in others. Table 4 shows that all the methods behave in concordance with the theoretical results.
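The ACOC formula above can be computed directly from the last four iterates. As a minimal illustration (our code, not the paper's), we apply it to plain Newton iteration, whose order 2 is observable in double precision; verifying order 8 would require multiprecision arithmetic, as in the paper's 400-digit runs:

```python
import math

def acoc(xs):
    """Approximated computational order of convergence (see [39]) from
    the last four iterates of the sequence xs."""
    d1 = abs(xs[-3] - xs[-4])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-1] - xs[-2])
    return math.log(d3 / d2) / math.log(d2 / d1)

# Newton's method on x^2 - 2 from x0 = 1.5 (order 2).
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
print(round(acoc(xs), 2))   # → 2.0
```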

Conclusion
A new optimal class of three-step methods without memory has been obtained by generalizing the Bi et al. families. This class uses four functional evaluations per iteration and is optimal in the sense of Kung and Traub's conjecture. Some elements of the family have been presented and tested numerically in order to show their applicability and efficiency; the experiments confirm that these methods work properly and agree with the theoretical results.