Some Iterative Methods Free from Derivatives and Their Basins of Attraction for Nonlinear Equations

First, we make Jain's derivative-free method optimal, thereby increasing its efficiency index from 1.442 to 1.587. Then, a novel three-step computational family of iterative schemes for solving single-variable nonlinear equations is given. The schemes require no derivative evaluations per full iteration. The optimal family is constructed by applying the weight-function approach alongside an approximation of the first derivative of the function in the last step, where the first two steps are the optimized derivative-free form of Jain's method. The convergence rates of the proposed optimal method and of the optimal family are studied. The efficiency index of each method of the family is 1.682. The superiority of the proposed contributions is illustrated by solving numerical examples and comparing the results with some existing methods from the literature. Finally, we provide the basins of attraction for several methods, both to observe the fractal beauty of iterative nonlinear solvers and to select the best method, namely the one with the largest basins of attraction.


Introduction
In order to approximate the solutions of nonlinear equations, it is suitable to use iterative methods that lead to monotone sequences. The construction of iterative methods for estimating the solutions of nonlinear equations or systems is an interesting task in numerical analysis [1]. In recent years, a huge number of papers devoted to iterative methods have appeared in many journals; see, for example, [2][3][4][5] and their bibliographies.
A very important aspect of an iterative process is the rate of convergence of the sequence {x_k}_{k=0}^{∞}, which approximates a solution α of f(x) = 0. This concept, along with the cost associated with the technique, allows one to establish the efficiency index of an iterative process. The classical efficiency index of an iterative process [6] is defined by the value p^{1/n}, where p is the convergence rate and n is the total number of evaluations per cycle. Consequently, the Newton and Steffensen schemes both have the same efficiency index 2^{1/2} ≈ 1.414.
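As a quick sanity check, the efficiency indices quoted throughout this paper follow directly from the definition p^{1/n}. The following small Python sketch (illustrative only, not part of the original study) reproduces them:

```python
# Classical efficiency index p**(1/n):
#   p = convergence order, n = evaluations per cycle.
def efficiency_index(order, evaluations):
    return order ** (1.0 / evaluations)

# Newton (order 2, two evaluations) and Steffensen (order 2, two
# evaluations) share the same index, about 1.414.
print(round(efficiency_index(2, 2), 3))  # Newton / Steffensen: 1.414
print(round(efficiency_index(4, 3), 3))  # optimal 4th order, 3 evals: 1.587
print(round(efficiency_index(8, 4), 3))  # optimal 8th order, 4 evals: 1.682
```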
In addition, Kung and Traub [7] conjectured that a multipoint iteration without memory consuming n evaluations per full iteration can reach the maximum convergence rate 2^{n−1}. Taking all this into account, many researchers have been trying to construct robust optimal methods; see, for example, [8, 9] and their bibliographies.
The remaining contents of this study are summarized as follows. In Section 2, we present an optimized form of the well-known cubically convergent Jain's method [10] with quartic convergence. Moreover, considering the Kung-Traub conjecture, we build a family of three-step without-memory iterative methods of optimal convergence order 8. To this end, we use an approximation of the first derivative of the function in the last step of a three-step cycle alongside a carefully chosen weight function. Analyses of convergence are given. A comparison with existing without-memory methods of various orders is provided in Section 3. We also investigate the basins of attraction of some of the derivative-free methods to exhibit the fractal behavior of such schemes in Section 4. Section 5 gives the concluding remarks of this research and presents future works.

Main Results
The main idea of this work is first to present a generalization of the well-known Jain's method with optimal order four and efficiency index 1.587, and then to construct a three-step family of derivative-free eighth-order methods with the optimal efficiency index 1.682.
Let us take into consideration Jain's derivative-free method [10], given as scheme (1) with x_0 given. Equation (1) is a cubical technique using three function evaluations per iteration, with 3^{1/3} ≈ 1.442 as its efficiency index. This efficiency index is not optimal in the sense of the Kung-Traub hypothesis. Therefore, in order to improve the efficiency index and make (1) optimal, we consider the iteration (2) with x_0 given, wherein w_n = x_n + βf(x_n) and β ∈ R \ {0}. If we use divided differences and define f[x_n, w_n] = (f(w_n) − f(x_n))/(βf(x_n)), then a novel modification of Jain's method (1), in a simpler format than (2), can be obtained as the scheme (3) with x_0 given, whose convergence order and efficiency index are optimal. Theorem 1 illustrates this fact: if x_0 is sufficiently close to α, then the method defined by (3) has the optimal convergence order four using only three function evaluations.
Proof. We expand each term of (3) around the simple zero α at the nth iterate, where c_k = f^{(k)}(α)/k!, k ≥ 1, and e_n = x_n − α. Therefore, we have f(x_n) = c_1 e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + O(e_n^5). Accordingly, by Taylor-series expansion of the first step of (3), we obtain (4). Now we expand f(y_n) around the simple root by using (4); this gives (5). Note that throughout this paper we omit many terms of the Taylor expansions of the error equations for the sake of simplicity. Additionally, by Taylor-series expansion in the second step of (3), we have (6). Using (4), (6), and the second step of (3), we attain (7). This shows that (3) is an optimal fourth-order derivative-free method with three evaluations per cycle. Hence, the proof is complete.
Remark 2. It should be remarked that if one uses y_n = x_n − f(x_n)^2/(f(x_n) − f(x_n − f(x_n))) in (3), that is, another variant of Steffensen's method based on a first-order backward finite difference, then a similar optimal method of quartic convergence is attained. To illustrate, using this variant yields the scheme (8) with x_0 given, wherein w_n = x_n − βf(x_n) with β ∈ R \ {0}, and its error equation is (9). Therefore, we have given a modification of the well-known cubical Jain's method with fourth-order convergence using the same number of evaluations as Jain's scheme, while the orders are independent of the free nonzero parameter β. The derivative-free iterations (3) and (8) satisfy the Kung-Traub conjecture for constructing optimal high-order multipoint iterations without memory. This is the first contribution of this research.
Remark 3. Although we sketch the proofs of the main theorems in R, the proposed Steffensen-type methods of this paper can also be applied to finding complex zeros. Toward such a goal, a complex initial approximation (seed) is needed.
Now, in order to further improve the convergence rate and the efficiency index, we append a Newton step as the third step of a three-step cycle in which the first two steps are (3) with w_n = x_n + βf(x_n) and β ∈ R \ {0}; this yields the scheme (10) with x_0 given. Obviously, (10) is an eighth-order method with five evaluations (four function evaluations and one first-derivative evaluation) per full iteration, reaching the efficiency index 8^{1/5} ≈ 1.515. This efficiency index is lower than that of (3). For this reason, we approximate the new first derivative by a combination of the already known data, in such a way that the order of (10) stays at eight while its number of evaluations drops from five to four.
First, we consider that the newly appearing first derivative at this step can be approximated as in (11). In fact, (11) is a linear combination of two divided differences, in which the best choice of the combination parameter a is zero, to attain a better order of convergence. However, (11) with a = 0 does not preserve the convergence rate of (10). As a result, the three-step method obtained by using (11) alone is of order six, which is not optimal. In order to reach optimality, we also use a weight function at the third step. Thus, we consider the three-step without-memory family (12) of derivative-free methods with x_0 given and a free real parameter, where w_n = x_n + βf(x_n). Theorem 4 shows that (12) reaches eighth-order convergence using only four function evaluations per full iteration. By combining these two ideas, that is, an approximation of the newly appearing first derivative in the last step and a weight function, we have furnished a novel family of iterations.
Proof. Using the same definitions and symbolic computations as in the proof of Theorem 1 results in (14). We also obtain, by using (5) and (14), the expansion (15). Note that all of these symbolic computations can be carried out using a simple Mathematica code, as given in Algorithm 1.
Additionally, applying (14) and (15) in the last step of (12) results in the error equation (16). This ends the proof and shows that (12) is an optimal eighth-order family using four function evaluations per iteration.
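While the paper verifies the orders symbolically (see Algorithm 1), the computational order of convergence (COC) offers a quick numerical cross-check. The sketch below is our own illustration, not the paper's code; it estimates the COC of the parametric Steffensen building block on a standard test function:

```python
import math

def steffensen_step(f, x, beta):
    # One parametric Steffensen step:
    #   x_new = x - beta*f(x)^2 / (f(x + beta*f(x)) - f(x))
    fx = f(x)
    return x - beta * fx * fx / (f(x + beta * fx) - fx)

# Computational order of convergence:
#   rho ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|
f = lambda x: x**3 + 4.0 * x**2 - 10.0   # standard test function
alpha = 1.3652300134140969               # its simple real zero
xs = [1.3]                               # initial guess near the zero
for _ in range(3):
    xs.append(steffensen_step(f, xs[-1], 0.01))
errs = [abs(x - alpha) for x in xs]
coc = math.log(errs[3] / errs[2]) / math.log(errs[2] / errs[1])
print(round(coc, 2))                     # quadratic building block: COC near 2
```

The same estimator applied to the full schemes (3) and (12) should report values near 4 and 8, respectively, matching the theorems.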
Remark 5. The efficiency index of (3) and (8) is 4^{1/3} ≈ 1.587, and that of (12) is 8^{1/4} ≈ 1.682; both are optimal according to the conjecture of Kung and Traub. One might ask how the weight functions in (12) were chosen so as to attain the highest possible convergence order with the smallest possible number of functional evaluations. Although we have tried to suggest a simple family of iterations in (12), the weight function can in general be chosen as in the scheme (17) with x_0 given, where w_n = x_n + βf(x_n) and three weight functions of suitable ratios of the computed function values satisfy the conditions (18), yielding the error equation (19).

Numerical Reports
The objective of this section is to provide a robust comparison between the presented schemes and some already known methods from the literature. For the numerical reports here, we have used the second-order Newton's method (NM), the quadratically convergent scheme of Steffensen (SM), our proposed optimal fourth-order technique (3) with β = 1, denoted by PM4, the optimal derivative-free eighth-order uniparametric family of iterative methods given by Kung and Traub in [7] (KT1) with β = 1, and our novel derivative-free eighth-order family (12) with a = 0 and β = 1, denoted by PM8. Due to the similarity of (3) and (8), we report only the numerical results of (3). The considered nonlinear test functions, their zeros, and the initial guesses in the neighborhood of the simple zeros are furnished in Table 1.
The results after three full iterations are summarized in Table 2. As they show, the novel schemes are competitive with all of the well-known methods. All numerical experiments were performed using 700-digit floating-point arithmetic. We computed the root of each test function starting from the initial guess x_0. As can be seen, the results in Table 2 are in harmony with the analytical development given in Section 2.
The proposed optimal fourth-order modification of Jain's method performs well in contrast to the classical one-step methods. We should remark that, in terms of computational complexity, our constructed derivative-free family (12) is more economical, due to its optimal order with only four function evaluations per full cycle.
In terms of the classical efficiency index of the without-memory methods compared in Table 2, NM and SM possess 1.414, (3) reaches 1.587, while KT1 and (12) reach 1.682.
An important aspect in the study of iterative processes is the choice of a good initial approximation. Moreover, it is known that the set of all starting points from which an iterative process converges to a solution of the equation can be visualized by means of attraction basins.
Thus, in the numerical examples we considered initial approximations close enough to the sought zeros to ensure convergence. A clear hybrid algorithm written in Mathematica [11] has recently been given in [12], which provides robust initial guesses for all the real zeros of a nonlinear function in an interval. The convergence of the iterative methods considered here can thus be guaranteed by following such hybrid algorithms to produce robust initial approximations.
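The hybrid algorithm of [12] is written in Mathematica; a minimal Python analogue (our own sketch, not the algorithm of [12]) might scan an interval for sign changes of f and then sharpen each bracket by a few bisection steps:

```python
def initial_guesses(f, a, b, samples=1000, sharpen=20):
    # Scan [a, b] uniformly for sign changes of f and return one rough
    # seed per bracketed simple zero, sharpened by bisection steps.
    seeds = []
    h = (b - a) / samples
    x0, f0 = a, f(a)
    for i in range(1, samples + 1):
        x1 = a + i * h
        f1 = f(x1)
        if f0 * f1 < 0:                  # sign change: a zero is bracketed
            lo, hi, flo = x0, x1, f0
            for _ in range(sharpen):     # crude bisection sharpening
                mid = 0.5 * (lo + hi)
                fm = f(mid)
                if flo * fm <= 0:
                    hi = mid
                else:
                    lo, flo = mid, fm
            seeds.append(0.5 * (lo + hi))
        x0, f0 = x1, f1
    return seeds

print(initial_guesses(lambda x: x * x - 2.0, -3.0, 3.0))
# two seeds, near -1.4142 and +1.4142
```

Each returned seed is already close to its zero, so it can safely start the Steffensen-type iterations of Section 2.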
In what follows, we give an application of the new scheme in Chemistry [13].
Application. An exothermic, first-order, irreversible reaction, A → B, is carried out in an adiabatic reactor. Upon combining the kinetic and energy-balance equations, a nonlinear equation f(x) = 0 is obtained; the results of solving it with the different methods are reported in Table 3.

Finding the Basins
The basin of attraction for complex Newton's method was first considered by Cayley [14]. The aim of this section is to use this graphical tool for showing the basins of the different methods. In order to view the basins of attraction for complex functions, we use the efficient programming package Mathematica [15] with double-precision arithmetic. We take a rectangle R = [−4, 4] × [−4, 4] ⊂ C and assign light to dark colors (based on the number of iterations) to each seed z_0 ∈ R according to the simple zero at which the corresponding iterative method starting from z_0 converges. See [16, 17] for more details.
The Julia set is shown in white-like colors. In this section, we use a convergence tolerance of 10^{−2}, a maximum of 30 iterations, and a grid of 400 × 400 points. The colors used are based on Figure 1.
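The basin-drawing procedure just described (an n × n grid over [−4, 4] × [−4, 4], at most 30 iterations, tolerance 10^{−2}) can be sketched outside Mathematica as well. The following Python sketch, using Newton's method on Cayley's classical example z^3 − 1 for illustration, records for every seed the zero reached and the iteration count; these two values drive the coloring:

```python
import cmath

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # zeros of z^3 - 1

def basin_data(n=400, lo=-4.0, hi=4.0, maxit=30, tol=1e-2):
    # For every seed on an n x n grid over [lo, hi]^2, record the index of
    # the zero of z^3 - 1 that Newton's method reaches and the iteration
    # count; label -1 marks non-convergent seeds (the white Julia-set region).
    h = (hi - lo) / (n - 1)
    data = []
    for row in range(n):
        line = []
        for col in range(n):
            z = complex(lo + col * h, lo + row * h)
            label, iters = -1, maxit
            for k in range(maxit):
                if abs(z) < 1e-12:               # derivative 3z^2 vanishes
                    break
                z = z - (z**3 - 1) / (3 * z**2)  # Newton step for z^3 - 1
                near = [i for i, r in enumerate(ROOTS) if abs(z - r) < tol]
                if near:
                    label, iters = near[0], k + 1
                    break
            line.append((label, iters))
        data.append(line)
    return data

grid = basin_data(n=40)    # a coarse 40 x 40 grid for a quick check
print(grid[19][24])        # a seed near z = 0.92 - 0.10i, in the basin of 1
```

To reproduce the figures for a Steffensen-type scheme, the Newton step above would be replaced by the derivative-free iteration under study, and the (label, iters) pairs mapped to the color scale of Figure 1.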
We compare the results of Steffensen's method with β = 0.01, the third-order method of Jain, and the quartically convergent method (3) for the two values β = 0.01 and β = 0.0001 in Figures 2-6, on test polynomials whose simple zeros include −0.5 ± 0.866025 i. We do not include the optimal eighth-order methods, since their high order yields very large basins of attraction.
As was stated in [18][19][20], known derivative-free schemes do not satisfy a scaling theorem, so dynamical conclusions drawn on a set of polynomials cannot be extended to other polynomials of the same degree; they are only particular cases. Indeed, comparing the behavior of the methods analyzed in those papers with the dynamical planes obtained in this paper, it is clear that the introduction of the parameter β plays an important role in the analysis.
Note that imposing tighter conditions in our codes would produce pictures of even higher quality. In Figures 2-6, darker blue areas indicate a small number of iterations, lighter blue areas require more iterations to converge, and red areas mean no convergence (or that a huge number of iterations is needed).
Based on Figures 2-6, we can see that method (3) with β = 0.0001 is the best method in terms of the least chaotic behavior in obtaining the solutions. It also has the largest basins for the solutions and is faster than the other methods. This clearly shows the significance of the free nonzero parameter β: the closer β is to zero, the larger the basins and the less chaotic the behavior.
In order to summarize these results, we attached a weight to the quality of the fractals obtained by each method: a weight of 1 for the smallest Julia set and a weight of 4 for the scheme with the most chaotic behavior. We then averaged these weights, so the smallest value corresponds to the best method overall and the highest to the worst. These data are presented in Table 4. The results show that (3) with β = 0.0001 is the best one.

Concluding Remarks
Many problems in science can be formulated in terms of finding zeros of nonlinear equations. This is why solving nonlinear equations or systems is important. In this work, we have presented some novel schemes of fourth- and eighth-order convergence. The fourth-order derivative-free methods possess an efficiency index of 1.587, and the eighth-order derivative-free methods possess 1.682. Per full cycle, the proposed techniques are free from derivative calculation. We have also shown the fractal behavior of some of the derivative-free methods, along with some numerical tests, to demonstrate the acceptable behavior of the new schemes. We have concluded that β has a very important effect on the convergence radius and the speed of convergence of Steffensen-type methods. Versions with memory of the obtained fourth- and eighth-order families could be considered in future studies.

Theorem 1.
Let α ∈ I be a simple zero of a sufficiently differentiable function f : I ⊆ R → R in an open interval I.

Theorem 4.
Let α ∈ I be a simple zero of a sufficiently differentiable function f : I ⊆ R → R in an open interval I. If x_0 is sufficiently close to α, then the method defined by (12) has the optimal local convergence order eight.

Figure 1: The distribution of colors.

Table 1: The test functions considered in this study.

Table 2: Results of comparisons for different methods after three iterations.

Table 3: Results of comparisons for different methods in solving f(x) = 0.

Table 4: Results of chaotic comparisons for different derivative-free methods.