An Optimal Fourth Order Iterative Method for Solving Nonlinear Equations and Its Dynamics

We present a new fourth order method for finding simple roots of a nonlinear equation f(x) = 0. In terms of computational cost, the method uses one evaluation of the function and two evaluations of its first derivative per iteration. Therefore, the method has optimal order with efficiency index 1.587, which is better than the efficiency index 1.414 of Newton method and the same as that of Jarratt method and King's family. Numerical examples are given to support that the method thus obtained is competitive with other similar robust methods. The conjugacy maps and extraneous fixed points of the presented method and other existing fourth order methods are discussed, and their basins of attraction are also given to demonstrate their dynamical behavior in the complex plane.


Introduction
Solving nonlinear equations is a common and important problem in science and engineering [1, 2]. Analytic methods for solving such equations are almost nonexistent, and therefore it is only possible to obtain approximate solutions by relying on numerical methods based on iterative procedures. With the advancement of computers, the problem of solving nonlinear equations by numerical methods has gained more importance than before.
In this paper, we consider the problem of finding a simple root α of a nonlinear equation f(x) = 0, where f(x) is a continuously differentiable function. Newton method is probably the most widely used algorithm for finding simple roots; it starts with an initial approximation x_0 close to the root α and generates a sequence of successive iterates {x_n}, n = 0, 1, 2, . . ., converging quadratically to simple roots (see [3]). It is given by

x_{n+1} = x_n − f(x_n)/f'(x_n),  n = 0, 1, 2, 3, . . . .
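A minimal sketch of this iteration; the test function f(x) = x^2 − 2 and the stopping rule are illustrative choices, not taken from the paper:

```python
# Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n) for a simple root.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)   # one f and one f' evaluation per iteration
        x -= step
        if abs(step) < tol:   # stop when the Newton step is negligible
            break
    return x

# Illustrative use: approximate sqrt(2) as the root of x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5)
```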
In order to improve the order of convergence of Newton method, many higher order multistep methods [4] have been proposed and analyzed by various researchers at the expense of additional evaluations of functions, derivatives, and changes in the points of iteration. An extensive survey of the literature dealing with these methods of improved order is found in [3, 5, 6] and the references therein. Euler method and Chebyshev method (see Traub [3]), Weerakoon and Fernando [7], Ostrowski's square root method [5], Halley [8], Hansen and Patrick [9], and so forth are well-known third order methods requiring the evaluation of f, f', and f'' per step. The famous Ostrowski method [5] is an important example of fourth order multipoint methods without memory. The method requires two f and one f' evaluations per step and is seen to be efficient compared to the classical Newton method. Another well-known example of fourth order multipoint methods with the same number of evaluations is King's family of methods [10], which contains Ostrowski's method as a special case. Chun et al. [11-13], Cordero et al. [14], and Kou et al. [15, 16] have also proposed fourth order methods requiring two f evaluations and one f' evaluation per iteration. Jarratt [17] proposed fourth order methods requiring one f evaluation and two f' evaluations per iteration. All of these methods are classified as multistep methods in which a Newton or weighted-Newton step is followed by a faster Newton-like step.
Through this work, we contribute a little more to the theory of iterative methods by developing a formula of optimal order four for computing simple roots of a nonlinear equation. The algorithm is based on the composition of two weighted-Newton steps and uses three function evaluations, namely, one f evaluation and two f' evaluations.
On the other hand, we analyze the behavior of this method in the complex plane using some tools from complex dynamics. Several authors have used these techniques on different iterative methods. In this sense, Curry et al. [18] and Vrscay and Gilbert [19, 20] described the dynamical behavior of some well-known iterative methods. The complex dynamics of various other known iterative methods, such as King's and Chebyshev-Halley's families and Jarratt method, has also been analyzed by various researchers; for example, see [13, 21-26].
The paper is organized as follows. Some basic definitions relevant to the present work are presented in Section 2. In Section 3, the method is developed and its convergence behavior is analyzed. In Section 4, we compare the presented method with some existing methods of fourth order on a series of numerical examples. In Section 5, we obtain the conjugacy maps and possible extraneous fixed points of these methods to make a comparison from the dynamical point of view. In Section 6, the methods are compared in the complex plane using basins of attraction. Concluding remarks are given in Section 7.

Basic Definitions
Definition 1. Let α be a simple root and let {x_n}, n ∈ N, be a sequence of real or complex numbers that converges towards α. Then one says that the order of convergence of the sequence is p if there exists a constant C ≠ 0 such that

lim_{n→∞} |x_{n+1} − α| / |x_n − α|^p = C,

where C is known as the asymptotic error constant.
Definition 2. Let e_n = x_n − α be the error in the nth iteration; one calls the relation

e_{n+1} = C e_n^p + O(e_n^{p+1})

the error equation. If one can obtain the error equation for an iterative method, then the value of p is its order of convergence.
Definition 3. Let d be the number of new pieces of information required by a method, where a "piece of information" typically is any evaluation of the function or one of its derivatives. The efficiency of the method is measured by the efficiency index [27], defined by

E = p^{1/d},

where p is the order of the method. For example, Newton method has E = 2^{1/2} ≈ 1.414, whereas an optimal fourth order method using three evaluations has E = 4^{1/3} ≈ 1.587.
Definition 4. Suppose that x_{n−1}, x_n, and x_{n+1} are three successive iterates close to the root α. Then the computational order of convergence ρ (see [7]) is approximated by

ρ ≈ ln |(x_{n+1} − α)/(x_n − α)| / ln |(x_n − α)/(x_{n−1} − α)|.
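This approximation can be evaluated directly from three successive iterates. The sketch below applies it to Newton iterates for f(x) = x^2 − 2, an illustrative choice; for Newton's method the computed ρ should be close to 2:

```python
import math

# Computational order of convergence from three successive iterates.
def coc(x_prev, x_curr, x_next, alpha):
    return (math.log(abs((x_next - alpha) / (x_curr - alpha)))
            / math.log(abs((x_curr - alpha) / (x_prev - alpha))))

# Generate three Newton iterates for f(x) = x^2 - 2 (root alpha = sqrt(2)).
alpha = math.sqrt(2.0)
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

rho = coc(xs[1], xs[2], xs[3], alpha)  # close to 2 for Newton's method
```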

Development of the Method
Let us consider a two-step weighted-Newton iteration scheme (6) in which a, b, c, and d are constants to be determined. A natural question arises: is it possible to choose a, b, c, and d such that the iteration method (6) has maximum order of convergence? The answer to this question is affirmative and is proved in the following theorem.
Proof. Let e_n be the error at the nth iteration; then e_n = x_n − α.
We denote this method by SBM.
Thus, we have derived the fourth order method (16) for finding simple roots of a nonlinear equation. It is clear that this method requires three evaluations per iteration, and therefore it is of optimal order, attaining order four with only three evaluations.
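As an illustration of this one-f, two-f' class of fourth order methods, here is a sketch of the classical Jarratt scheme [17]; the coefficients below are Jarratt's, not the coefficients of (16):

```python
# Jarratt's fourth order method: one f and two f' evaluations per iteration.
def jarratt(f, df, x0, tol=1e-12, max_iter=20):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - (2.0 / 3.0) * fx / dfx          # weighted-Newton predictor
        dfy = df(y)                              # second derivative evaluation
        x_new = x - 0.5 * ((3.0 * dfy + dfx) / (3.0 * dfy - dfx)) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative use: the real root of x^3 - 1.
root = jarratt(lambda t: t ** 3 - 1.0, lambda t: 3.0 * t ** 2, x0=2.0)
```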

Numerical Results
In this section, we present the numerical results obtained by employing the presented method SBM, equation (16), to solve some nonlinear equations. We compare the presented method with the quadratically convergent Newton method, denoted by NM and defined by (1), and with some existing fourth order iterative methods, namely, the method proposed by Chun et al. [13], the Cordero et al. method [14], King's family of methods [10], and the Kou et al. method [16]. These methods are given as follows.
Chun et al. method (CLM) is

Cordero et al. method (CM) is

King's family of methods (KM) is

where β ∈ R.

Kou et al. method (KLM) is
Test functions along with their roots α, correct up to 28 decimal places, are displayed in Table 1. Table 2 shows the values of the initial approximation x_0 chosen on both sides of the root for each method. Table 3 displays the computational order of convergence ρ defined by (5). The NFE is counted as the sum of the number of evaluations of the function and the number of evaluations of its derivatives. In the calculations, the NFE used for all the methods is 12; that means, for NM, the error is calculated at the sixth iteration, whereas for the remaining methods it is calculated at the fourth iteration. The results in Table 3 show that the computational order of convergence is in accordance with the theoretical order of convergence.
The numerical results of Table 2 clearly show that the presented method is competitive with the other existing fourth order methods considered for solving nonlinear equations. It can also be observed that there is no clear winner among the methods of fourth order, in the sense that one behaves better in one situation while others take the lead in other situations. All computations are performed in Mathematica [28] using 600 significant digits.
Since the present approach utilizes one f and two f' evaluations per iteration, the newly developed method is very useful in applications in which the derivative f' can be evaluated rapidly compared to f itself. Examples of this kind occur when f is defined by a polynomial or an integral.
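A sketch of the integral case mentioned above: f is defined through a quadrature (costly), while f'(x) = g(x) follows from the fundamental theorem of calculus (cheap). The integrand e^(−t²), the target value 0.4, and the use of a plain Newton loop are illustrative assumptions:

```python
import math

def g(t):
    return math.exp(-t * t)

def f(x, n=1000):
    # f(x) = integral of g from 0 to x, minus 0.4, via the composite
    # trapezoidal rule -- each evaluation of f costs ~n evaluations of g.
    h = x / n
    s = 0.5 * (g(0.0) + g(x))
    for k in range(1, n):
        s += g(k * h)
    return h * s - 0.4

def df(x):
    # Fundamental theorem of calculus: f'(x) = g(x), a single cheap call.
    return g(x)

# A one-f, two-f' method spends most of its budget on the cheap df here;
# a plain Newton iteration suffices to illustrate the setup.
x = 0.5
for _ in range(10):
    x -= f(x) / df(x)
```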

Corresponding Conjugacy Maps for Quadratic Polynomials
In this section, we discuss the rational maps arising from the various methods when applied to a generic polynomial with simple roots.

Extraneous Fixed Points. The methods discussed above can be written in the fixed point iteration form

z_{n+1} = z_n − H_f(z_n) f(z_n)/f'(z_n),   (28)

where the corresponding H_f(z_n) of the methods are listed in Table 4.
Clearly the root α of f(z) is a fixed point of the method. However, the points z ≠ α at which H_f(z) = 0 are also fixed points of the method since, with H_f(z) = 0, the second term on the right side of (28) vanishes. These points are called extraneous fixed points (see [20]). Moreover, a fixed point z* is called attracting if |R'(z*)| < 1, repelling if |R'(z*)| > 1, and neutral otherwise, where R is the iteration function. In addition, if |R'(z*)| = 0, the fixed point is superattracting.
In this section, we will discuss the extraneous fixed points of each method for the polynomial z^2 − 1.
Theorem 12. There are no extraneous fixed points for Newton method.
Proof. For Newton method (1), we have

H_f(z) = 1.   (30)

This function does not vanish, and therefore there are no extraneous fixed points.
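As an illustrative check for p(z) = z^2 − 1, the Newton iteration function and its fixed points are

```latex
N(z) \;=\; z - \frac{z^2 - 1}{2z} \;=\; \frac{z^2 + 1}{2z},
\qquad
N(z) = z \;\Longleftrightarrow\; \frac{z^2 - 1}{2z} = 0 \;\Longleftrightarrow\; z = \pm 1,
```

so the only fixed points of the Newton map are the two roots themselves, consistent with Theorem 12.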

Theorem 14 (KLM). There are four extraneous fixed points for KLM.

Theorem 15 (CM). There are six extraneous fixed points for CM.

Theorem 17 (SBM). There are four extraneous fixed points for SBM.
In the next section, we give the basins of attraction of these iterative methods in the complex plane.

Basins of Attraction
To study the dynamical behavior, we generate basins of attraction for two different polynomials using the above-mentioned methods. Following [29], we take a square [−1, 1] × [−1, 1] (⊆ C) with a grid of 1024 × 1024 points, which contains all the roots of the nonlinear equation concerned, and we apply the iterative method starting at every z_0 in the square. We assign a color to each point z_0 according to the simple root to which the corresponding orbit of the iterative method, starting from z_0, converges. If the corresponding orbit does not reach any root of the polynomial, with tolerance ε = 10^−3, in a maximum of 25 iterations, we mark those points z_0 in black.
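The procedure above can be sketched as follows. Newton's method on p(z) = z^2 − 1 is used as the illustrative iteration, and the encoding (integer labels per root, 0 for the black nonconvergent points) is an assumption standing in for the colors:

```python
import numpy as np

def newton_basins(n=1024, max_iter=25, tol=1e-3):
    # Grid of starting points z_0 on the square [-1, 1] x [-1, 1].
    xs = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs)
    Z = X + 1j * Y
    roots = np.array([1.0, -1.0])                # roots of z^2 - 1
    for _ in range(max_iter):
        Z = np.where(Z == 0, 1e-20, Z)           # avoid division by zero
        Z = Z - (Z * Z - 1.0) / (2.0 * Z)        # one Newton step for z^2 - 1
    # Label each point by the root its orbit reached; 0 means no convergence.
    labels = np.zeros(Z.shape, dtype=int)
    for k, r in enumerate(roots, start=1):
        labels[np.abs(Z - r) < tol] = k
    return labels
```

Each method's basins are produced by swapping in its own iteration step; plotting `labels` with a discrete colormap reproduces the pictures described in the text.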
For the first test problem p_1(z) = z^2 − 1, Figure 1 clearly shows that the proposed method SBM (Figure 1(f)) seems to produce larger basins of attraction than CLM (Figure 1(e)) and CM (Figure 1(d)), almost competitive basins of attraction with KLM (Figure 1(c)), and smaller basins of attraction than NM (Figure 1(a)) and KM (Figure 1(b)). In our next problem, we have taken the cubic polynomial p_2(z) = z^3 − 1. The results are given in Figure 2. Again the proposed method SBM (Figure 2(f)) seems to produce larger basins of attraction than KM (Figure 2(b)), CM (Figure 2(d)), and CLM (Figure 2(e)), almost competitive basins of attraction with KLM (Figure 2(c)), and smaller basins of attraction than NM (Figure 2(a)).

Conclusion

In this work, we have proposed an optimal method of fourth order for finding simple roots of nonlinear equations. The advantage of the proposed method is that it does not require the second order derivative. The numerical results have confirmed the robustness and efficiency of the proposed method. The presented basins of attraction also show the good performance of the proposed method as compared to other existing fourth order methods in the literature.

Table 2: Error using the same NFE = 12 for all methods.

Table 4: H_f(z_n) of the compared methods.