Solving Nondifferentiable Nonlinear Equations by New Steffensen-Type Iterative Methods with Memory

Two derivative-free Steffensen-type methods with memory for solving nonlinear equations are presented. By introducing a suitable self-accelerating parameter into existing optimal fourth- and eighth-order methods without memory, the order of convergence is increased without any extra function evaluation. Consequently, the efficiency index is also increased, which is the main contribution of this paper. The self-accelerating parameters are estimated using Newton's interpolation. To show the applicability of the proposed methods, some numerical illustrations are presented.


Introduction
Finding the root of a nonlinear equation occurs frequently in scientific computation. Newton's method is the most well-known method for solving nonlinear equations and has quadratic convergence. However, the existence of the derivative in a neighborhood of the required root is compulsory for the convergence of Newton's method, which restricts its applications in practice. To overcome this difficulty, Steffensen replaced the first derivative of the function in Newton's iterate by a forward finite-difference approximation. The resulting method also possesses quadratic convergence and the same efficiency as Newton's method. Some nice applications of iterative methods can be found in the literature; see [1][2][3][4][5][6][7][8].
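As a concrete illustration, Steffensen's replacement of the derivative by the forward difference f[x, x + f(x)] = (f(x + f(x)) − f(x))/f(x) can be sketched as follows. This is a minimal sketch; the function names, tolerance, and test equation are our own choices, not taken from the paper:

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's derivative-free iteration: Newton's f'(x) is
    replaced by the forward difference f[x, x + f(x)]."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx        # fx**2 / denom = f(x) / f[x, x + f(x)]
        if denom == 0.0:
            break                     # flat difference; give up at current x
        x = x - fx * fx / denom
    return x

# Quadratic convergence on f(x) = x^2 - 2 from x0 = 1.5:
root = steffensen(lambda x: x * x - 2.0, 1.5)
```

Note that each step costs two function evaluations, f(x) and f(x + f(x)), and no derivative, which is what allows the method to handle nondifferentiable equations.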
Kung and Traub are pioneers in constructing optimal general multistep methods without memory. They discussed two general n-step methods based on interpolation. Moreover, they conjectured that any multistep method without memory using n function evaluations per iteration can reach a convergence order of at most 2^{n−1} [9]. Thus, both Newton's and Steffensen's methods are optimal in the sense of Kung and Traub. The superiority of Steffensen's method over Newton's method is that it is derivative-free, so it can also be applied to nondifferentiable equations. To compare iterative methods theoretically, Ostrowski [10] introduced the idea of the efficiency index, given by p^{1/θ}, where p is the order of convergence and θ is the number of function evaluations per iteration. In other words, an iterative method with a higher efficiency index is more efficient. To improve the convergence order as well as the efficiency index without any new function evaluations, Traub in his book introduced the concept of a method with memory. In fact, he modified Steffensen's method slightly as follows (see [11, pp. 185–187]):

w_k = x_k + γ_k f(x_k),
x_{k+1} = x_k − f(x_k)/f[x_k, w_k],
γ_{k+1} = −1/f[x_{k+1}, x_k],  k = 0, 1, 2, ..., (1)

where x_0, γ_0 are given suitably and f[a, b] = (f(a) − f(b))/(a − b) denotes the first-order divided difference. The parameter γ_k is called a self-accelerator, and method (1) has R-order of convergence 2.414. The possibility of further increasing the convergence order by using more suitable parameters cannot be denied. Many authors during the last few years have constructed iterative methods without memory that support this conjecture with optimal order; see [12][13][14][15][16][17][18][19][20][21] and many more. Although the construction of optimal methods without memory is still an active field, attention has recently been shifting toward the development of more efficient methods with memory; for example, see [22][23][24][25].
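Traub's self-accelerating scheme (1) can likewise be sketched. Again this is a minimal illustration under our own choices of γ_0, tolerance, and stopping rule; the update γ_{k+1} = −1/f[x_{k+1}, x_k] is a secant estimate of −1/f'(ζ) computed entirely from already-available values:

```python
def traub_steffensen(f, x0, gamma0=0.01, tol=1e-12, max_iter=100):
    """Traub's Steffensen variant with memory, scheme (1):
    w_k = x_k + gamma_k * f(x_k),
    x_{k+1} = x_k - f(x_k) / f[x_k, w_k],
    gamma_{k+1} = -1 / f[x_{k+1}, x_k],
    which raises the R-order from 2 to about 2.414 at no extra cost."""
    x, gamma = x0, gamma0
    fx = f(x)
    for _ in range(max_iter):
        if abs(fx) < tol:
            break
        w = x + gamma * fx
        fw = f(w)
        if fw == fx:
            break                                  # degenerate difference
        x_new = x - fx * (w - x) / (fw - fx)       # x - f(x)/f[x, w]
        fx_new = f(x_new)
        if fx_new != fx:
            gamma = -(x_new - x) / (fx_new - fx)   # -1/f[x_{k+1}, x_k]
        x, fx = x_new, fx_new
    return x

root = traub_steffensen(lambda x: x * x - 2.0, 1.5)
```

The key point, which the rest of the paper generalizes, is that the acceleration reuses f(x_k) and f(x_{k+1}) from the ordinary iteration, so the per-step evaluation count stays at two.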

Mathematical Problems in Engineering
In the convergence analysis of the new methods, we employ the notation used in Traub's book [11]: if {f_k} and {g_k} are null sequences and f_k/g_k → C, where C is a nonzero constant, we write f_k = O(g_k) or f_k ∼ C g_k. We also use the concept of R-order of convergence introduced by Ortega and Rheinboldt [5]. Let {x_k} be a sequence of approximations generated by an iterative method (IM). If this sequence converges to a zero ζ of the function f with R-order O_R((IM), ζ) ≥ r, we write

e_{k+1} ∼ D_{k,r} e_k^r,  e_k = x_k − ζ, (2)

where D_{k,r} tends to the asymptotic error constant D_r of the iterative method (IM) as k → ∞.
The rest of the paper is organized as follows. In Section 2, we describe the existing two- and three-point Steffensen-type iterative schemes and then accelerate their convergence orders from four to six and from eight to twelve, respectively, without any extra evaluation. The proposed methods are obtained by using previous values as well as a suitable parameter, which is calculated using Newton's interpolation. A numerical study is presented in the next section to confirm the theoretical results. Finally, we give concluding remarks.

Development and Construction with Memory
The two-step and three-step repeated Newton's methods are given by

y_k = x_k − f(x_k)/f'(x_k),  x_{k+1} = y_k − f(y_k)/f'(y_k), (3)

and

y_k = x_k − f(x_k)/f'(x_k),  z_k = y_k − f(y_k)/f'(y_k),  x_{k+1} = z_k − f(z_k)/f'(z_k). (4)

The orders of convergence of methods (3) and (4) are four and eight, respectively. By repeating in the same way, one can obtain still higher-order methods. These methods certainly have higher convergence orders than the standard Newton's method, but there is no improvement in the efficiency index. For example, the efficiency indexes of the above two methods are 4^{1/4} = 1.4142 and 8^{1/6} = 1.4142, which are the same as the efficiency index 2^{1/2} = 1.4142 of the original Newton's method. To improve the efficiency index, Cordero et al. [26] reduced the number of function evaluations by approximating the derivatives in terms of function values. First, they approximated the derivative f'(x_k) by the forward difference

f'(x_k) ≈ f[x_k, w_k],  w_k = x_k + f(x_k). (5)

Then, to approximate the other two derivatives, they used rational approximations: f'(y_k) is approximated by a rational approximation of the first degree and f'(z_k) by a rational approximation of the second degree, which yields the expressions (6) with coefficients given by (7). Using these approximations of the derivatives in (3) and (4), they showed that the resulting methods retain the same orders of convergence with a reduced number of function evaluations, and thus the efficiency index is increased. In fact, the efficiency index becomes 4^{1/3} = 1.5874 and 8^{1/4} = 1.6818, respectively. For more detail, see [26]. A further advantage of these methods is that they can also be applied to nonsmooth functions. Now a natural question arises: is it possible to find still more efficient methods? The main aim of this paper is to answer this question.
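The efficiency-index comparison in this paragraph is easy to reproduce; the snippet below simply evaluates Ostrowski's formula E = p^{1/θ} for the orders and evaluation counts quoted in the text (the labels for the proposed methods with memory anticipate the orders established in Theorem 1):

```python
def efficiency_index(p, theta):
    """Ostrowski's efficiency index E = p**(1/theta): p = order of
    convergence, theta = function evaluations per iteration."""
    return p ** (1.0 / theta)

# (order, evaluations per iteration) for the methods discussed above.
methods = [
    ("Newton / Steffensen",             2, 2),
    ("two-step Newton (3)",             4, 4),
    ("three-step Newton (4)",           8, 6),
    ("derivative-free two-step [26]",   4, 3),
    ("derivative-free three-step [26]", 8, 4),
    ("Method I with memory",            6, 3),
    ("Method II with memory",          12, 4),
]
for name, p, theta in methods:
    print(f"{name:32s} E = {efficiency_index(p, theta):.4f}")
```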
For this purpose, we first introduce a nonzero real parameter γ into w_k = x_k + γ f(x_k) and consider the following two methods together with their error expressions.

Method I. For a suitably given x_0, the two-step scheme is (8), and its error expression is given by (9).

Method II. For a suitably given x_0, the three-step scheme is (10), and its error expression is given by (11), where A_1, A_2, A_3, and A_4 are defined as in (7) and c_j = f^{(j)}(ζ)/j!.

We are now concerned with extending the above schemes to methods with memory, since their error equations contain the parameter γ, which can be approximated in such a way that the local order of convergence increases. For this purpose, we put γ = γ_k and λ = λ_k, where

γ_k = −1/c̃_1, (12)
λ_k = −1/c̄_1. (13)

Here, c̃_1 = N'_3(x_k) and c̄_1 = N'_4(x_k) are two different approximations of c_1 = f'(ζ), where N_3 and N_4 are Newton's interpolatory polynomials of degrees three and four, respectively, built on the current iterate and values saved from the previous iteration. The theoretical order of convergence of the methods is now given by the following theorem.
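The estimates (12) and (13) require only divided differences over nodes that are already available, so no new function evaluations are needed. Below is a sketch of computing N'_m at the first node and the resulting γ_k; the helper names are ours, and in an actual implementation the stored function values from the previous iteration would be reused rather than recomputed:

```python
def divided_differences(xs, fs):
    """Return [f[t0], f[t0,t1], ..., f[t0,...,tm]] for nodes xs."""
    c = list(fs)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_derivative_at_first_node(xs, fs):
    """N'_m(t0), where N_m is the Newton interpolating polynomial on the
    nodes xs and t0 = xs[0] (the current iterate x_k in the methods above).
    Only the terms surviving differentiation at t0 are summed:
    N'(t0) = c1 + c2 (t0-t1) + c3 (t0-t1)(t0-t2) + ..."""
    c = divided_differences(xs, fs)
    deriv, prod = 0.0, 1.0
    for j in range(1, len(xs)):
        deriv += c[j] * prod
        prod *= xs[0] - xs[j]
    return deriv

def gamma_update(x_k, previous_nodes, f):
    """Self-accelerating parameter gamma_k = -1 / N'_3(x_k) from the
    current iterate plus three nodes kept from the previous iteration."""
    xs = [x_k] + list(previous_nodes)   # four nodes -> degree-3 polynomial
    fs = [f(t) for t in xs]             # in practice these values are stored
    return -1.0 / newton_derivative_at_first_node(xs, fs)
```

Since a degree-3 polynomial interpolates a cubic exactly, f(t) = t^3 gives N'_3(1) = 3 and hence γ = −1/3, which provides a simple correctness check.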
Theorem 1. If an initial approximation x_0 is sufficiently close to a simple zero ζ of f(x), and the parameters γ_k and λ_k in the iterative schemes (8) and (10) are recursively calculated by the forms given in (12) and (13), respectively, then the R-order of convergence of methods (8) and (10) with memory is at least six and twelve, respectively.
Proof. Before proving the main result, we first establish the following two claims, which we will use later.
Claim I. Consider

1 + γ_k c_1 ∼ (c_4/c_1) e_{k−1} e_{k−1,y} e_{k−1,w}. (15)

Claim II. Consider

1 + λ_k c_1 ∼ (c_5/c_1) e_{k−1} e_{k−1,y} e_{k−1,z} e_{k−1,w}, (16)

where e_{k,y} = y_k − ζ, e_{k,z} = z_k − ζ, and e_{k,w} = w_k − ζ. To prove these, suppose that there are m + 1 nodes t_0, t_1, ..., t_m from the interval I = [a, b], where a and b are the minimum and the maximum of these nodes, respectively. Then, for some ξ ∈ I, the error of Newton's interpolation polynomial N_m(t) of degree m is given by

f(t) − N_m(t) = (f^{(m+1)}(ξ)/(m + 1)!) (t − t_0)(t − t_1) ⋯ (t − t_m). (17)

For m = 3, the above equation assumes the form (keeping in mind t_0 = x_k, t_1 = w_{k−1}, t_2 = y_{k−1}, and t_3 = x_{k−1})

f(t) − N_3(t) = (f^{(4)}(ξ)/4!) (t − x_k)(t − w_{k−1})(t − y_{k−1})(t − x_{k−1}). (18)

Differentiating (18) with respect to t and putting t = x_k, we get

f'(x_k) − N'_3(x_k) = (f^{(4)}(ξ)/4!) (x_k − w_{k−1})(x_k − y_{k−1})(x_k − x_{k−1}). (19)

Now,

x_k − x_{k−1} = e_k − e_{k−1} ∼ −e_{k−1}. (20)

Similarly,

x_k − y_{k−1} ∼ −e_{k−1,y},  x_k − w_{k−1} ∼ −e_{k−1,w}. (21)

Using these relations in (19) and simplifying, we get

N'_3(x_k) ∼ c_1 (1 + (c_4/c_1) e_{k−1} e_{k−1,y} e_{k−1,w}), (22)

and thus

γ_k = −1/N'_3(x_k) ∼ −(1/c_1)(1 − (c_4/c_1) e_{k−1} e_{k−1,y} e_{k−1,w}), (23)

or

1 + γ_k c_1 ∼ (c_4/c_1) e_{k−1} e_{k−1,y} e_{k−1,w}, (24)

which shows the first part. Similarly, taking m = 4 in (17) and proceeding in the same manner, we can prove the second claim. Now we prove the main result. Assume that the R-orders of convergence of the sequences {x_k}, {w_k}, {y_k}, and {z_k} are at least r, r_1, r_2, and r_3, respectively, so that the corresponding error sequences satisfy relations of the form (25)–(28).

Method I. For method (8), the error relations (29)–(32) can be derived. Using the estimate of Claim I in (29), and comparing equal powers of e_{k−1} in the pairs (26)–(30), (27)–(31), and (25)–(32), we arrive at a nonlinear system (33) in r, r_1, and r_2. Solving these equations, we get r = 6, r_2 = 3, and r_1 = 2, which confirms the sixth-order convergence of method (8). An analogous argument with Claim II establishes the R-order twelve for method (10).

Application to Nonlinear Equations
In this section, we apply the proposed methods to solve some smooth as well as nonsmooth nonlinear equations and demonstrate the convergence behavior of the methods with and without memory. The numerical computations reported here were carried out in a Mathematica 8.0 environment. Table 1 shows the absolute value of the difference between the exact root ζ and the approximate root x_k, where the exact root is computed with 1000 significant digits (digits := 1000).
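For reproducing tables like Table 1, the order of convergence observed in practice is commonly estimated from three consecutive errors. The small helper below implements the standard computational-order-of-convergence estimate; it is a generic tool, not a quantity defined in this paper:

```python
import math

def computational_order(errors):
    """Standard COC estimate from three consecutive errors |x_k - zeta|:
    rho ~ ln(e_{k+1}/e_k) / ln(e_k/e_{k-1})."""
    e_prev, e_curr, e_next = errors[-3:]
    return math.log(e_next / e_curr) / math.log(e_curr / e_prev)
```

For a sequence converging with order p, the errors satisfy e_{k+1} ≈ C e_k^p, and the estimate recovers p; for example, errors [1e-2, 1e-4, 1e-8] give ρ = 2.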

Conclusion
In this paper, the efficiency of existing methods has been improved by employing information from the current and previous iterations without any additional evaluation of the function. The efficiency indexes of the proposed methods are 6^{1/3} = 1.8171 and 12^{1/4} = 1.8612, which are higher than the efficiency indexes 4^{1/3} = 1.5874 and 8^{1/4} = 1.6818 of the existing methods, respectively. The proposed methods have also been tested on several numerical examples. The numerical results show that the proposed methods are very useful for finding acceptable approximations to the exact solutions of nonlinear equations.

Table 1: Numerical results for nonlinear equations.