Efficient Four-Parametric with-and-without-Memory Iterative Methods Possessing High Efficiency Indices

We construct a family of derivative-free optimal iterative methods without memory to approximate a simple zero of a nonlinear function. Error analysis demonstrates that the without-memory class has eighth-order convergence and is extendable to a with-memory class. The extension of the new family to the with-memory one is also presented; it attains the convergence order 15.5156 and a very high efficiency index 15.5156^{1/4} ≈ 1.9847. Some particular schemes of the with-memory family are also described. Numerical examples and some dynamical aspects of the new schemes are given to support the theoretical results.


Introduction
In this manuscript, the problem of finding numerical solutions of nonlinear equations f(x) = 0 is addressed. Iterative procedures are widely used to solve this problem (see, e.g., [1-3]). Traub [3] classified iterative methods as one-step and multistep schemes. The one-step Steffensen method is a well-known modification of Newton's method, since it avoids the derivative that Newton's method requires. The concept of an optimal iterative method was given by Kung and Traub [4]: a multistep iterative scheme without memory based on n + 1 functional evaluations per iteration can attain an order of convergence of at most 2^n.
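As an illustration of the derivative-free idea mentioned above, here is a minimal sketch of the classical one-step Steffensen method (the textbook scheme, not the family proposed in this paper); the test function x^2 - 2 is our own choice:

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's derivative-free method: the derivative in Newton's
    step is replaced by the divided difference f[x, x + f(x)]."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        # divided difference f[x, w] with w = x + f(x)
        dd = (f(x + fx) - fx) / fx
        x_new = x - fx / dd
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = steffensen(lambda x: x * x - 2.0, 1.0)
```

Like Newton's method, this scheme converges quadratically to a simple root, but it uses only evaluations of f itself.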
According to Ostrowski [2], if p is the convergence order of an iterative method and n is the total number of functional evaluations per iteration, then the index E = p^{1/n} is known as the efficiency index of the method. Since multistep iterative methods overcome the theoretical limits of one-step methods concerning the order of convergence and the efficiency index, several multistep iterative schemes have been developed for solving nonlinear equations (see, e.g., [5-7] and the overview [8]). Some optimal eighth-order methods without memory can be found in [9-14]; these methods, among others, have been designed by different techniques: composition of known schemes, elimination of functional evaluations via interpolation or rational approximation, and freezing of derivatives combined with the weight-function procedure.
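The efficiency index E = p^{1/n} quoted above is easy to check numerically; a small sketch reproducing the two figures used throughout the paper:

```python
# Efficiency index in the sense of Ostrowski: E = p**(1/n), where
# p is the convergence order and n the number of functional
# evaluations per iteration.
def efficiency_index(p, n):
    return p ** (1.0 / n)

e_without = efficiency_index(8, 4)       # optimal eighth-order, 4 evaluations
e_with = efficiency_index(15.5156, 4)    # with-memory extension, same cost
```

Both methods use four functional evaluations per iteration, so the gain in the index comes entirely from the increased order.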
Multistep iterative methods with memory, which use information from the current and previous iterations, can increase the convergence order and the efficiency index of multistep methods without memory with no additional functional evaluations. The increase in the order of convergence is based on one or more accelerating parameters which appear in the error equations of the methods without memory. For this reason, several multistep iterative methods with memory have been developed in recent years. For background on the acceleration of convergence order through memorization, see, for example, [8, 15].
Traub [3] developed the first method with memory by a slight change in Steffensen's scheme:

x_{k+1} = x_k - f(x_k) / f[x_k, w_k],  w_k = x_k + γ_k f(x_k),  k = 0, 1, 2, …,

where x_0 and γ_0 are given and γ_k is a self-accelerating parameter given by γ_{k+1} = -1/N_1'(x_{k+1}), N_1 being the first-degree Newton interpolating polynomial through the two newest points.
Motivated by these ideas, here we present an efficient family of iterative methods with memory based on a family of optimal derivative-free iterative schemes without memory. In other words, we first construct a family of derivative-free optimal eighth-order methods without memory, involving four parameters, in such a way that their corresponding error equations have the most suitable forms to achieve the highest possible convergence order and efficiency index when they are extended to with-memory methods. Then, by using Newton's interpolating polynomials passing through the best saved points, we approximate the accelerating parameters involved at each step to construct highly efficient with-memory methods. As far as we know, there are only a few iterative methods with memory in the literature involving four accelerators. It is necessary to remark that, in this work, without additional functional evaluations, the R-order of convergence increases from 8 to 15.5156, and thus the efficiency index is significantly improved from 8^{1/4} ≈ 1.68179 to 15.5156^{1/4} ≈ 1.9847.
The content of the rest of the paper is as follows. In Section 2, an optimal three-step derivative-free family of methods without memory involving four parameters is defined, which can be extended to methods with memory. Another derivative-free class of three-step optimal methods without memory is also presented as a special case of the proposed family. In Section 3, the extension of the new without-memory iterative methods to with-memory methods is given; this is done by approximating the accelerating parameters involved at each step by means of Newton's interpolating polynomials. Some weight functions and particular methods are given in Section 4. In Section 5, we give some dynamical aspects of the new methods. Section 6 includes numerical comparisons of the new methods with existing efficient methods, and conclusions.

A Family of Optimal Eighth-Order Methods without Memory
Let us consider the following three-step iterative scheme, obtained by composing Newton's method three times with "frozen" derivative. By using the approximation of the derivative by a divided difference, with w_k = x_k + γ_1 f(x_k), at the first step, and weight functions G and H (with variable t_k = f(y_k)/f(x_k)) at the second and third steps of (3), we get a three-step derivative-free class of methods of the form (7), where γ_1, γ_2, γ_3, and γ_4 are free parameters and the weight functions G(t_k) and H(t_k) will be chosen in such a way that method (7) achieves optimal order 8 for a given initial estimation x_0. Moreover, we consider two-step Ostrowski's method [2] and add a third step as in (8). By using the approximations (4) at the first step, a weight function H (with variable t_k = f(y_k)/f(x_k)) at the second step, and (6) at the third step of (8), we obtain the family of three-step derivative-free schemes (10). The function H(t_k) should be found in such a way that method (10) attains optimal eighth-order convergence. We note that expression (10) is a particular case of (7), taking G(t_k) = 1/(1 - 2t_k), where t_k = f(y_k)/f(x_k). The following result shows that family (7) has eighth-order convergence.
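For reference, Ostrowski's classical two-step fourth-order method [2], on which the construction above builds, can be sketched as follows. This is the textbook scheme with derivatives, not the derivative-free family (7) itself; the test function is our own choice. Note that the correction factor f(x)/(f(x) - 2f(y)) is exactly 1/(1 - 2t) with t = f(y)/f(x), the same weight that appears in (10):

```python
def ostrowski(f, df, x0, tol=1e-12, max_iter=30):
    """Ostrowski's two-step method (order 4, 3 evaluations/iteration):
        y     = x - f(x)/f'(x)
        x_new = y - f(y)/f'(x) * f(x) / (f(x) - 2 f(y))
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        dfx = df(x)
        y = x - fx / dfx
        fy = f(y)
        # weight 1/(1 - 2t) with t = fy/fx, written without cancellation
        x_new = y - fy / dfx * fx / (fx - 2.0 * fy)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = ostrowski(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0)
```

With three functional evaluations and order four, this classical method is optimal in the Kung-Traub sense (4 = 2^{3-1}).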
Theorem 1. Let α ∈ D be a simple zero of a sufficiently differentiable function f : D ⊆ ℝ → ℝ, where D is an open interval, and let the initial guess x_0 be close enough to α. Then class (7) has convergence order at least eight, provided the weight functions satisfy conditions (11) and (12). The corresponding error equation, valid for all values of γ_1, γ_2, γ_3, and γ_4, is expressed in terms of the constants c_j = f^{(j)}(α)/(j! f'(α)) and the errors e_k = x_k - α.
Similarly, we can find the expansion of f(w_k); thus, the error term of w_k follows. By applying conditions (11) and (12), we obtain the error term of y_k. Finally, by expanding f(z_k) and z_k in terms of e_k and replacing them in the third step, we obtain the error equation stated in Theorem 1.

Remark 2. It is clear from Theorem 1 that the convergence order of the class of methods without memory (7) is eight, with efficiency index 8^{1/4} ≈ 1.68179.
So, from this error analysis, it can be inferred that the derivative-free class (7) and the particular case (10) are extendable to procedures with memory.
Remark 4. Taking into account the variable used in the weight functions and the expressions of the different denominators that appear in the iterative formula, it is not possible to extend our parametric family to the multidimensional case for solving nonlinear systems.

The Iterative Methods with Memory
To extend the proposed optimal class of methods (7) to the with-memory one, we choose the involved parameters γ_1, γ_2, γ_3, and γ_4 in such a way that the order of convergence is increased, as indicated in the previous remarks. If we could choose γ_1 = -1/f'(α), γ_2 = -c_2, γ_3 = f'(α)c_3, and γ_4 = f'(α)c_4, the order of the method would increase up to sixteen. Since α is not available, the process is realized by selecting the accelerating parameters at each iterative step as

γ_{1,k} = -1/N_4'(x_k),  γ_{2,k} = -N_5''(w_k)/(2N_5'(w_k)),  γ_{3,k} = N_6'''(y_k)/6,  γ_{4,k} = N_7^{(iv)}(z_k)/24,  (21)

where N_4(t), N_5(t), N_6(t), and N_7(t) are Newton's interpolating polynomials of fourth, fifth, sixth, and seventh degree, respectively, passing through the best saved points, for any k ≥ 1. Now, we define the three-step with-memory extensions of (7) and (10), denoted by (23) and (24), where γ_{1,0}, γ_{2,0}, γ_{3,0}, and γ_{4,0} should be chosen suitably and the weight functions G(t_k) and H(t_k) have the same properties as in (7) and (10).

Lemma 5. If the accelerating parameters γ_{1,k}, γ_{2,k}, γ_{3,k}, and γ_{4,k} are computed by (21) for k = 1, 2, …, then the corresponding asymptotic error estimates hold, with the associated quantities given by (26) and (27).

Proof. The proof is similar to that of Lemma 3.1 in [23].
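The accelerator updates in (21) only require building a Newton interpolating polynomial through saved iterates and differentiating it. The helper below is our own illustration (not the paper's code): it builds the divided-difference coefficients and evaluates N'(t) by the product rule, then estimates γ_{1,k} = -1/N_4'(x_k) for a test function and hypothetical saved points of our choosing:

```python
import math

def divided_differences(xs, fs):
    """Return the Newton divided-difference coefficients for nodes xs."""
    coef = list(fs)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_poly_derivative(xs, coef, t):
    """Evaluate N'(t) for the Newton-form interpolating polynomial
    N(t) = c0 + sum_k c_k * prod_{i<k}(t - xs[i]), via the product rule."""
    n = len(xs)
    deriv = 0.0
    for k in range(1, n):
        for j in range(k):
            prod = coef[k]
            for i in range(k):
                if i != j:
                    prod *= t - xs[i]
            deriv += prod
    return deriv

# Example: gamma_1 = -1/N_4'(x_k) for f(x) = exp(x) - 2, using five
# hypothetical saved iterates clustered near the root ln(2) ~ 0.6931.
f = lambda x: math.exp(x) - 2.0
nodes = [0.60, 0.65, 0.70, 0.72, 0.693]
coef = divided_differences(nodes, [f(t) for t in nodes])
gamma1 = -1.0 / newton_poly_derivative(nodes, coef, nodes[-1])
```

Since N_4'(x_k) approximates f'(α) = 2 here, the computed γ_1 is close to the ideal value -1/f'(α) = -0.5, which is what drives the acceleration.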
Theorem 6. Let f(x) be a sufficiently differentiable nonlinear function and let x_0 be an initial approximation sufficiently close to its simple root α. If the parameters γ_{1,k}, γ_{2,k}, γ_{3,k}, and γ_{4,k} are recursively computed by the formulae given in (21), then the R-order of convergence of class (23) is at least 15.5156, with efficiency index 15.5156^{1/4} ≈ 1.9847.
Proof. We consider that the sequence {x_k}, generated by method (23), converges to the root α with R-order at least r; then we can write e_{k+1} ∼ D_{k,r} e_k^r, where e_k = x_k - α, and we are going to determine the value of r. We assume that the R-orders of the iterative sequences {w_k}, {y_k}, and {z_k} are at least p, q, and s, respectively. Considering (30), (31), (32), and Lemma 5, we obtain the corresponding error relations, where the involved quantities are given by (26) and (27), respectively.
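As a purely numerical cross-check (our own observation, not a statement taken from the proof above), the reported R-order coincides, to the quoted digits, with the positive root of a simple quadratic:

```latex
r^{2} - 15r - 8 = 0
\quad\Longrightarrow\quad
r = \frac{15 + \sqrt{257}}{2} \approx 15.5156\,,
```

which is consistent with the way limiting R-orders of with-memory methods typically arise as roots of low-degree polynomial equations derived from the coupled error relations.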
Remark 7. It can be observed that we have obtained the minimum R-order 15.5156, which gives the highest efficiency index 15.5156^{1/4} ≈ 1.9847 for the presented family with memory (23). Similarly, as (24) is a particular case of (23), their R-orders and efficiency indices coincide.

Some Dynamical Aspects of the New Methods
It is known (see, e.g., [24-26]) that the analysis of the dynamical behavior of iterative methods by means of complex dynamics tools (on low-degree polynomials) is an important resource that helps us to understand the reliability of the methods. In fact, these techniques cannot be applied directly to iterative methods with memory, which instead require multidimensional real dynamics (see the recent papers [27, 28]). However, the extremely high degree of the polynomials involved in the resulting rational functions makes this analysis infeasible for the methods proposed in this paper.
The dynamical behavior of the rational function associated with an iterative method on low-degree polynomials gives us important information about its stability and reliability. This analysis, applied to a family of iterative schemes, allows us to select the more stable members of the family and discard those with chaotic behavior. Let us now recall some dynamical concepts (see [29]) used in this work. Let R : Ĉ → Ĉ be a rational function, where Ĉ is the Riemann sphere. The orbit of a point z_0 ∈ Ĉ is defined as

{z_0, R(z_0), R^2(z_0), …, R^n(z_0), …}.  (43)

The phase plane of R is analyzed in order to classify the starting points according to the asymptotic behavior of their orbits.
Then, the basin of attraction of an attractor α* is defined as the set of starting points whose orbits tend to α*. We are going to show the dynamical planes of the proposed method on quadratic and cubic polynomials (see Figures 1 and 2, resp.). These planes are obtained as follows: in the region [-2, 2] × [-2, 2] of the complex plane, a mesh of 200 × 200 initial estimations is defined. From each of these initial points, the first estimation is calculated by using an initial value of 0.01 for all the accelerating parameters; the following elements of the orbit are then obtained from these two initial guesses. If the sequence generated by the iterative method reaches a root of the polynomial (a superattracting fixed point) with an error estimation lower than 10^{-5} within a maximum of 5 iterations, we decide that the initial point is in the basin of attraction of this root and we paint it in a color previously selected for that root. The roots of each polynomial are marked with a white star. Black denotes lack of convergence to any of the roots (within the maximum number of iterations) or convergence to infinity. These dynamical planes have been generated with the software described in [30], implemented in Matlab R2014a.
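The mesh-and-classify procedure just described can be sketched in a few lines. The stand-in below is our own simplification: Newton's method on p(z) = z^2 - 1 replaces the paper's with-memory schemes, and a coarse 40 × 40 mesh replaces the 200 × 200 one, but the classification logic (iterate each grid point, label it by the root it approaches) is the same:

```python
def newton_basins(n=40, max_iter=60, tol=1e-5):
    """Classify a mesh of complex starting points in [-2,2]x[-2,2] by the
    root of p(z) = z^2 - 1 to which Newton's method converges.
    Returns {root_index: count}; index -1 means no convergence."""
    roots = [1.0 + 0.0j, -1.0 + 0.0j]
    counts = {-1: 0, 0: 0, 1: 0}
    for i in range(n):
        for j in range(n):
            z = complex(-2 + 4 * i / (n - 1), -2 + 4 * j / (n - 1))
            label = -1
            for _ in range(max_iter):
                if z == 0:                       # derivative vanishes
                    break
                z = z - (z * z - 1) / (2 * z)    # Newton step for z^2 - 1
                hit = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if hit:
                    label = hit[0]
                    break
            counts[label] += 1
    return counts

counts = newton_basins()
```

For a graphical dynamical plane, each count bucket would be painted in its own color over the mesh; here we only tally the basin sizes, which for this symmetric polynomial split evenly across the imaginary axis.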
Let us observe that the basins of the roots are wide, in general, especially on the real axis (see Figures 1 and 2). This fact justifies the good convergence behavior of the methods when the initial estimation is not very close to the solution, as happens in the numerical section.

Numerical Experiments
In this section, we compare the proposed methods with some existing ones of a similar kind. All the numerical computations are carried out in the computer algebra system Maple 16. As far as we know, there exist few iterative methods with memory of such a high order of convergence. We compare our methods M1 and M2 with the triparametric with-memory methods of Soleymani et al. [20] (SM) and Lotfi et al. [19] (LM1), and with the four-parametric with-memory method of Lotfi and Assari [31] (LM2); their iterative expressions and the initial values of their accelerating parameters are given in the cited references.
We have considered three iterations for the comparison of the different methods with memory, applying 2000-digit fixed-floating-point arithmetic. In most applications there is no need for so many digits, but when high-order methods are compared and checked, such precision is necessary to distinguish them.
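The multi-precision requirement can be reproduced outside Maple; a minimal sketch with Python's standard decimal module (precision lowered to 100 digits here, whereas the paper used 2000, and plain Newton on x^2 - 2 standing in for the paper's schemes):

```python
from decimal import Decimal, getcontext

getcontext().prec = 100          # number of significant digits

# Newton's method for x^2 - 2 in 100-digit arithmetic
x = Decimal(1)
for _ in range(10):
    x = x - (x * x - Decimal(2)) / (2 * x)

# compare against a 100-digit sqrt(2)
err = abs(x - Decimal(2).sqrt())
```

With quadratic convergence the number of correct digits roughly doubles per step, so ten iterations saturate the working precision; high-order methods saturate even faster, which is why thousands of digits are needed to observe three full iterations of a 15th-order scheme.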
Table 1 shows the nonlinear test functions used for the comparison, along with their exact roots α and initial guesses x_0.
These examples have applications in engineering; let us describe the underlying phenomena.
Example 1 (the Shockley diode equation and electric circuit). Let us consider the equation describing the exact current flowing through a diode, given the voltage drop across the junction, the temperature of the junction, and the physical constants involved, where I is the diode current in amperes and I_S is the saturation current (10. An approximation of the root of this equation is 0.6714453666.

Example 2 (the beam positioning problem). We consider a beam positioning problem (see [32]) in which a 4-meter-long beam leans against the edge of a cubical box with sides of length 1 meter, so that one of its ends touches the wall and the other touches the floor, as shown in Figure 3.
What should be the distance along the floor from the base of the wall to the bottom of the beam? Let y be the distance in meters along the beam from the floor to the edge of the box and let x be the distance in meters from the bottom of the box to the bottom of the beam. Then we obtain a nonlinear equation in x, whose positive solutions, 0.3621999926 and 2.7609056329, are the solutions to the beam positioning problem.

Example 3 (continuous stirred tank reactor (CSTR)). Consider the isothermal continuous stirred tank reactor (CSTR).
Components A and R are fed to the reactor at given rates, and the following reaction scheme develops in the reactor (see [33]). The problem was analyzed by Douglas [34] in order to design simple feedback control systems; in his analysis, he derived an equation for the transfer function of the reactor in terms of K_C, the gain of the proportional controller.
So, we see that there are two simple roots; we take the root α = -1.45.
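As a quick sanity check of the roots quoted in Example 2: under the standard ladder-around-box formulation (our own reconstruction, since the paper's displayed equation is not reproduced here), a beam of length 4 touching the floor at distance 1 + x from the wall and passing over the box corner (1, 1) satisfies (1 + x)^2 + (1 + 1/x)^2 = 16, and simple bisection recovers both quoted values:

```python
def beam_residual(x):
    """Residual of (1 + x)^2 + (1 + 1/x)^2 - 16: the beam touches the
    floor at distance 1 + x from the wall, passes over the unit-box
    corner (1, 1), and has total length 4."""
    return (1 + x) ** 2 + (1 + 1 / x) ** 2 - 16

def bisect(f, a, b, tol=1e-12):
    """Plain bisection on a sign-changing bracket [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

small_root = bisect(beam_residual, 0.1, 1.0)
large_root = 1.0 / small_root    # the equation is symmetric under x -> 1/x
```

The reciprocal relation between the two roots reflects the geometric symmetry of swapping the roles of wall and floor.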
The numerical results displayed in Tables 2-5 illustrate that the proposed iterative methods with memory, M1 and M2, have much better accuracy than the methods of Soleymani et al. [20] (SM), Lotfi et al. [19] (LM1), and Lotfi and Assari [31] (LM2). We also observe that the CPU time of the proposed methods is less than or equal to that of the existing methods. For all the compared with-memory methods, we have considered small initial values of the accelerating parameters (-0.1, 0.1, and 0.01, as appropriate for each scheme); in particular, γ_{2,0} = 0.1 and γ_{1,0} = γ_{3,0} = γ_{4,0} = 0.01 for the proposed methods. It is worth noticing that, in the execution of the proposed with-memory method M1 (41), for k = 0 we obtain scheme (7) of maximum order 8, while for k ≥ 1 the successive iterates of M1 are computed using the most recently estimated information (already saved in memory), which provides an order of convergence of at least 15.5156. Hence, the efficiency of the with-memory methods cannot be observed in the first iteration: the accelerating parameters must be refined by the interpolation processes before the R-order of convergence of the with-memory methods becomes apparent. Figure 4 presents the comparison of the real efficiency indices (RE) of the without-memory method (7) and the with-memory methods SM, LM1, LM2, M1, and M2, computed by the expressions given in [19], which take into account the number of iterations performed. Figure 4 reveals the superiority of the proposed methods in terms of real efficiency indices.

Conclusions
In this paper, we have developed a new derivative-free without-memory family of optimal iterative methods that is extendable to with-memory ones. Error analysis demonstrates that the family of without-memory methods has optimal eighth-order convergence. We have also extended the proposed family to the with-memory one with low computational load, and two special cases, M1 and M2, of the proposed with-memory family have been obtained. It has been shown that the new methods possess a very high computational efficiency index, 15.5156^{1/4} ≈ 1.9847, which is even higher than that of many existing with-memory methods, including the two-step with-memory method with two accelerators given by Cordero et al. [16], whose index is 7^{1/3} ≈ 1.913. Moreover, the proposed methods can be applied even to nonsmooth functions, as they do not require any derivatives. Finally, the numerical results and the dynamical behavior of the new methods illustrate that the proposed with-memory iterative methods have much better accuracy and efficiency than existing methods of the same kind for finding roots of nonlinear functions.

Figure 4: Comparison of efficiency indices for different methods.
The control system is stable for values of K_C that yield roots of the transfer function with negative real part. If we choose K_C = 0, we get the poles of the open-loop transfer function as roots of the nonlinear equation.

Table 2: Comparison table for methods with memory on f_1(x).

Table 3: Comparison table for methods with memory on f_2(x).

Table 4: Comparison table for methods with memory on f_3(x).

Table 5: Comparison table for methods with memory on f_4(x).

Tables 2-5 display the errors |x_k - α| of the approximations to the sought zeros produced by the different methods at the first three iterations, where A(-h) denotes A × 10^{-h}. The initial approximation x_0 for each test function, the computational order of convergence r_c, and the CPU time are also included in these tables. The computational order of convergence is computed by the following expression [35]:

r_c ≈ ln(|x_{k+1} - x_k| / |x_k - x_{k-1}|) / ln(|x_k - x_{k-1}| / |x_{k-1} - x_{k-2}|).
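The computational order of convergence can be estimated from three consecutive error (or step-size) values; a small helper implementing the standard approximation (the exact expression used in [35] may differ in detail), checked against a synthetic eighth-order error sequence of our own:

```python
import math

def coc(e0, e1, e2):
    """Approximate computational order of convergence from three
    consecutive errors |x_k - alpha|:
        r_c = ln(e2 / e1) / ln(e1 / e0)."""
    return math.log(e2 / e1) / math.log(e1 / e0)

# synthetic error sequence obeying e_{k+1} = C * e_k**8 exactly
C, e0 = 0.5, 1e-2
e1 = C * e0 ** 8
e2 = C * e1 ** 8
order = coc(e0, e1, e2)
```

For an exact model e_{k+1} = C e_k^p, this quotient recovers p exactly (the asymptotic constant cancels), which is why three iterations suffice to read off the order in the tables.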