A General Three-Step Class of Optimal Iterations for Nonlinear Equations

Many engineering problems reduce to solving a nonlinear equation numerically, and, as a result, special attention has been given in the literature to suggesting efficient and accurate root solvers. Inspired and motivated by the ongoing research in this area, this paper establishes an efficient general class of root solvers in which, per computing step, three evaluations of the function and one evaluation of the first-order derivative are used to achieve the optimal order of convergence eight. The without-memory methods from the developed class possess the optimal efficiency index 1.682. In order to show the applicability and validity of the class, some numerical examples are discussed.


Introduction
Numerical solution of nonlinear scalar equations plays a crucial role in many optimization and engineering problems. For example, many engineering systems can be modeled as neutral delay differential equations (NDDEs), which involve a time delay in the derivative of the highest order, in contrast to retarded delay differential equations (RDDEs), which do not. To illustrate, a system consisting of a mass mounted on a linear spring, to which a pendulum is attached via a hinged massless rod, is used to predict the dynamic response of structures to external forces using a set of actuators; it is modeled as an NDDE if the delay in the actuators is taken into consideration [1]. On the other hand, the stability of a delay differential equation can be investigated on the basis of the root location of its characteristic function. This simple example shows the importance of numerical root solvers in engineering problems.
There are numerical methods that find one root at a time, such as Newton's iteration and its variants, and schemes that find all the roots at once, namely, simultaneous methods, such as the Weierstrass method. Recently, many journals, such as Numerical Algorithms, Mathematical Problems in Engineering, and Applied Mathematics and Computation, have published new findings in this active topic of study; see, for example, [2-5] and the references therein. To briefly present some of the newest findings in this field, we mention the following.
Noor et al. [3] developed a quartically convergent iterative scheme (1.1), consisting of three steps and eight evaluations per full iteration. In 2010, an eighth-order method (1.2) was provided in [6] by using Ostrowski's method in the first two steps of a three-step cycle, wherein f[x_0, x_1, ..., x_k] denotes the divided difference of the function f. Soleymani and Mousavi [7] suggested a three-step without-memory iteration (1.3) with only four functional evaluations per iteration, in which the weights A and B are expressions built from the ratio f(y_n)/f(x_n). For further reading, one may consult [8], where a complete review of the methods given in the literature from 2000 to 2010 is furnished, and also [9] for a background on the applications of such root solvers. We remark here that the efficiency of different methods can be assessed by the efficiency index, defined as p^{1/n}, wherein p is the order of convergence and n is the total number of evaluations per iteration. We should also recall that Kung and Traub [10] conjectured that an iterative scheme without memory using n evaluations per cycle can arrive at the maximum order of convergence 2^{n-1}. Any without-memory iteration that attains this bound is called an optimal method in the literature.
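As a concrete illustration of the efficiency index p^{1/n} just defined, the following short Python sketch (our own illustration; the function name is not from the cited works) compares Newton's method (p = 2, n = 2) with an optimal eighth-order scheme using four evaluations (p = 8, n = 4):

```python
# Efficiency index E = p**(1/n), where p is the convergence order
# and n is the number of function/derivative evaluations per iteration.
def efficiency_index(p, n):
    return p ** (1.0 / n)

newton = efficiency_index(2, 2)    # classical Newton: about 1.414
optimal8 = efficiency_index(8, 4)  # optimal eighth-order class: about 1.682
```

The second value is exactly 8^{1/4} = 2^{3/4} ≈ 1.682, which is why four-evaluation eighth-order methods are attractive compared with Newton's method.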
After this short background, we give the main contribution in Section 2, where the convergence study of our general three-step class is also furnished and different optimal three-step iterations are produced from the contributed class. Section 3 discusses some numerical comparisons with existing methods from the literature, and finally Section 4 draws the conclusion of this research paper.

New Class of Iteration Methods
In order to contribute a general class of methods consistent with the optimality conjecture of Kung-Traub, an eighth-order iterative scheme without memory should be constructed in this section such that only four evaluations per computing step are used. Such schemes are also known as predictor-corrector methods, in which the first step is the Newton predictor, while the other two steps correct the obtained solution. To achieve our goal, we consider the following three-step scheme, in which the first two steps are King's fourth-order family with one free real parameter β ∈ R:

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [(f(x_n) + β f(y_n)) / (f(x_n) + (β − 2) f(y_n))] · f(y_n)/f'(x_n),    (2.1)
x_{n+1} = z_n − f(z_n)/f'(z_n).

Clearly, in (2.1), the evaluation f'(z_n) should be annihilated so that the order of convergence remains at the highest level with the smallest number of evaluations per iteration. Toward this end, we approximate it by a polynomial of degree two that fits f(x_n), f(y_n), and f(z_n). Therefore, we take into account f(t) ≈ A(t) = a_0 + a_1(t − y_n) + a_2(t − y_n)^2, where A'(t) = a_1 + 2a_2(t − y_n). Subsequently, by imposing f(x_n) = A(x_n), f(y_n) = A(y_n), and f(z_n) = A(z_n), we attain a_0 = f(y_n) together with the two linear equations

a_1(x_n − y_n) + a_2(x_n − y_n)^2 = f(x_n) − f(y_n),
a_1(z_n − y_n) + a_2(z_n − y_n)^2 = f(z_n) − f(y_n).    (2.2)

Solving the system of two linear equations with two unknowns, (2.2), gives us a_1 and a_2. Using the obtained relations for the unknowns in the approximation f'(z_n) ≈ A'(z_n) = a_1 + 2a_2(z_n − y_n) and simplifying, we have

f'(z_n) ≈ f[z_n, y_n] + f[z_n, x_n] − f[x_n, y_n].    (2.3)

Considering (2.3) in (2.1) and using the weight-function approach, we have the following general class of three-step without-memory iterations (2.4), wherein G(t), H(τ), and Q(γ) are three real-valued weight functions with t = f(z)/f(y), τ = f(z)/f(x), and γ = f(y)/f(x) (the index n dropped), which should be chosen such that the order of convergence arrives at the optimal level eight. We summarize this in the following theorem.
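The derivative estimate used in the third step can be checked numerically. The sketch below (our own illustration; the helper names are not from the paper) builds the node derivative A'(z) = f[z, y] + f[z, x] − f[x, y] of the quadratic interpolant from first-order divided differences and compares it with the true derivative of f(t) = exp(t); the error behaves like |f'''/6 · (z − x)(z − y)|, as expected for a quadratic interpolant:

```python
import math

def dd(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(b) - f(a)) / (b - a)

def interp_derivative(f, x, y, z):
    """Derivative at the node z of the quadratic interpolant
    through (x, f(x)), (y, f(y)), (z, f(z))."""
    return dd(f, y, z) + dd(f, x, z) - dd(f, x, y)

f = math.exp
x, y, z = 1.10, 1.02, 1.001
approx = interp_derivative(f, x, y, z)
exact = math.exp(z)
error = abs(approx - exact)  # ~ |f'''/6 * (z - x)(z - y)|, below 1e-3 here
```

Because the estimate reuses the already computed values f(x_n), f(y_n), f(z_n), no extra derivative evaluation is spent in the third step.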
Theorem 2.1. Let α ∈ D be a simple zero of a sufficiently differentiable function f : D ⊂ R → R on an open interval D containing the initial approximation x_0 of α. Then the three-step iteration (2.4), which includes four evaluations per full cycle, has the optimal convergence rate eight when the weight functions satisfy the conditions (2.5), and it satisfies the error equation below.

2.6
Proof. Defining e_n = x_n − α as the error of the iterative scheme at the nth iterate, applying Taylor series expansion in (2.4), and taking into account f(α) = 0, we have

f(x_n) = f'(α)(e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + ⋯),

where c_k = f^{(k)}(α)/(k! f'(α)) (2.9). Furthermore, we find

2.10
Similarly, we have the expansion (2.11), with remainder O(e_n^9).

2.12
Moreover, by using (2.11) and (2.5), we obtain a further relation. Considering this new relation together with (2.12) and (2.5) in the last step of (2.4) ends in the error equation (2.13), with remainder O(e_n^9). This concludes the proof and shows that our suggested general class of three-step without-memory methods (2.4)-(2.5) possesses the eighth order of convergence.

2.13
Remark 2.2. The class of three-step methods (2.4)-(2.5) requires four evaluations and has order of convergence eight. Therefore, this class is of optimal order and supports the Kung-Traub conjecture [10]. Hence, the efficiency index of the eighth-order derivative-involved methods from the class is 8^{1/4} ≈ 1.682. Some efficient methods from the contributed optimal three-step class are given below; per computing step, these methods are free from second- or higher-order derivative computations. The new contributed methods are

2.15
whose error equation has remainder O(e_n^9). We also mention here some typical forms of the weight functions G(t), H(τ), and Q(γ) in iteration (2.4) that satisfy (2.5) and hence make the order optimal. These forms are listed in Table 1.
Other than the very efficient methods (2.14) and (2.15) of optimal order eight, many more three-step without-memory iterations can be constructed using Table 1, that is, by using weight functions satisfying (2.5) in (2.4), together with different values of the free parameter β. Thus, in order to save space while still giving some other such optimal eighth-order methods according to (2.4) and (2.5), we list the interesting ones in Table 2. Note that we first require the weight functions to satisfy (2.5) and then derive the new error equations based on the data available in (2.13).
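To make the structure of the class concrete, here is a minimal Python sketch of the three-step skeleton: the Newton predictor, King's corrector with parameter β, and the third step using the interpolation-based estimate (2.3) of f'(z_n). We stress the assumptions: the weight functions G, H, and Q are set identically to one here, so this sketch illustrates the skeleton only and not the full eighth-order tuning required by (2.5); the function and variable names are ours, not the paper's.

```python
def three_step_skeleton(f, fprime, x0, beta=0.0, tol=1e-14, max_iter=10):
    """Skeleton of class (2.4): Newton predictor, King corrector,
    and a third step via the derivative estimate (2.3).
    Weight functions are taken as G = H = Q = 1 (an assumption)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = fprime(x)
        y = x - fx / dfx                  # step 1: Newton predictor
        fy = f(y)
        z = y - (fx + beta * fy) / (fx + (beta - 2.0) * fy) * fy / dfx  # step 2: King
        fz = f(z)
        if abs(fz) < tol or z == y or z == x:
            return z
        # step 3: f'(z) ~ f[z,y] + f[z,x] - f[x,y], i.e., estimate (2.3)
        dz = (fz - fy) / (z - y) + (fz - fx) / (z - x) - (fy - fx) / (y - x)
        x = z - fz / dz
    return x

# Example: the real cube root of 2 from the starting point 1.5
root = three_step_skeleton(lambda t: t**3 - 2.0, lambda t: 3.0 * t**2, 1.5)
```

Note that only f(x_n), f(y_n), f(z_n), and f'(x_n) are evaluated per cycle; the full methods (2.14)-(2.15) additionally apply the weight functions so that the optimal order eight is provably reached.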
Future research in this field can now turn to finding optimal sixteenth-order four-step without-memory iterations based on the general class (2.4)-(2.5). Furthermore, producing with-memory iterations according to this class can also be of interest for future studies.

Computational Examples
The contribution given in Section 2 is supported here through numerical experiments. We check the effectiveness of the novel methods (2.14) and (2.15) from our class of methods. For this reason, we have compared our new methods with Newton's method (NM) and the methods (1.1), (1.2), and (1.3). The nonlinear test functions are furnished in Table 3. The results of the comparisons are given in Table 4 in terms of the number of significant digits for each test function after a specified number of iterations.
All computations in this paper were performed in MATLAB 7.6 using variable precision arithmetic (VPA) to increase the number of significant digits. We have considered the stopping criterion |f(x_n)| ≤ 10^{-800}. In Table 4, an entry such as 0.2e-448 indicates that the absolute value of the given nonlinear function after three iterations is zero up to 448 decimal places. In Table 4, IN and TNE stand for iteration number and total number of evaluations, respectively. As shown in Table 4, the proposed method (2.14) is preferable to Newton's method and to some methods of fourth and eighth orders of convergence. It is evident that (2.14) is more robust than the other competing methods of various orders. We also recall an important concern in using multipoint iterations: high-order root solvers are very sensitive to initial guesses far from the root, while they are very powerful for starting points in the vicinity of the sought zero.
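The stopping rule |f(x_n)| ≤ 10^{-800} requires multiprecision arithmetic. Since the paper's runs use MATLAB's VPA, we sketch the same idea in Python with the standard `decimal` module (our own illustration; for brevity it uses Newton's method on x^2 − 2 rather than the method (2.14)):

```python
from decimal import Decimal, getcontext

getcontext().prec = 850  # work with ~850 significant digits, mimicking VPA

def newton_decimal(x0, max_iter=30):
    """Newton's method for f(x) = x^2 - 2 with the paper's
    stopping rule |f(x_n)| <= 10^(-800)."""
    tol = Decimal(10) ** -800
    x = Decimal(x0)
    for k in range(max_iter):
        fx = x * x - 2
        if abs(fx) <= tol:
            return x, k       # converged at iterate k
        x -= fx / (2 * x)     # Newton update
    return x, max_iter

root, iters = newton_decimal("1.5")
# Newton roughly doubles the number of correct digits per step,
# so on the order of a dozen iterations reach the tolerance.
```

An eighth-order method instead multiplies the number of correct digits by about eight per iteration, which is why the high-precision residuals in Table 4 shrink so quickly for the proposed class.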
Remark 3.1. If one needs to solve many equations arising from a large system of boundary-value problems, then the cost of function evaluations becomes important. Therefore, the proposed class (2.4)-(2.5) is valuable for solving such problems.

Concluding Remarks
In recent years, numerous works have focused on the development of more advanced and efficient methods for nonlinear scalar equations. Many methods have been developed that improve the convergence rate of Newton's method, yet a practical drawback of many schemes is their slow rate of convergence. This paper has developed and established a rapid class of eighth-order iterative methods. Per iteration, the methods from our class require three evaluations of the function and one evaluation of its first derivative; therefore, the efficiency index of the methods is 8^{1/4} ≈ 1.682, which is better than that of the classical Newton's method. Kung and Traub [10] conjectured that a multipoint iteration without memory based on n evaluations of f or its derivatives can achieve at most the optimal convergence order 2^{n-1}. Newton's method is an example that agrees with the Kung-Traub conjecture for n = 2, and the class of methods (2.4)-(2.5) is another example, agreeing with the Kung-Traub hypothesis for n = 4. Thus, the suggested class (2.4)-(2.5) is effective and should attract the attention of researchers.

Again, by substituting this relation into the first step of (2.4) and writing the Taylor series expansion of f(y_n), we obtain, respectively,

y_n = α + c_2 e_n^2 − 2(c_2^2 − c_3) e_n^3 + (4c_2^3 − 7c_2 c_3 + 3c_4) e_n^4 + ⋯,
f(y_n) = f'(α)(c_2 e_n^2 − 2(c_2^2 − c_3) e_n^3 + (5c_2^3 − 7c_2 c_3 + 3c_4) e_n^4 + ⋯).

Table 1 :
Typical forms of the weight functions satisfying (2.5).

Table 2 :
Interesting choices of β, G(0), and Q^{(4)}(0) in (2.13), which provide very efficient optimal root solvers.

Table 3 :
Test functions, their roots, and the starting points.

Table 4 :
Comparison of different methods for finding the simple roots of test functions.