Multistep High-Order Methods for Nonlinear Equations Using Padé-Like Approximants

We present new high-order optimal iterative methods for solving a nonlinear equation, f(x) = 0, by using Padé-like approximants. We compose optimal methods of order 4 with Newton’s step and substitute the derivative by using an appropriate rational approximant, getting optimal methods of order 8. In the same way, increasing the degree of the approximant, we obtain optimal methods of order 16. We also perform different numerical tests that confirm the theoretical results.


Introduction
Many applied problems in different fields of science and technology require finding the solution of a nonlinear equation. Iterative methods are used to approximate its solutions. The performance of an iterative method can be measured by the efficiency index introduced by Ostrowski in [1]. In this sense, Kung and Traub conjectured in [2] that a multistep method without memory performing n + 1 functional evaluations per iteration can have at most convergence order 2^n, in which case it is said to be optimal.
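As an illustration (our own, not part of the paper), Ostrowski's efficiency index p^(1/d), where p is the convergence order and d the number of functional evaluations per iteration, grows along the Kung-Traub optimal family p = 2^n, d = n + 1:

```python
# Ostrowski's efficiency index p**(1/d): p = convergence order,
# d = functional evaluations per iteration.  For Kung-Traub optimal
# methods, p = 2**n with d = n + 1 evaluations.
def efficiency_index(p, d):
    return p ** (1.0 / d)

for n in range(1, 5):
    p, d = 2 ** n, n + 1
    print(f"order {p:2d}, {d} evaluations -> index {efficiency_index(p, d):.4f}")
```

The index increases with n (about 1.414, 1.587, 1.682, 1.741 for n = 1, ..., 4), which is the efficiency rationale for composing up to orders 8 and 16.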
Recently, different optimal eighth-order methods, with 4 functional evaluations per step, have been published. A very interesting survey can be found in [3]. Some of them are generalizations of the well-known optimal fourth-order Ostrowski method [4][5][6][7]. In [8], the authors start from the third-order method due to Potra-Pták, combine this scheme with Newton's method using a "frozen" derivative, and estimate the new functional evaluation. The procedure designed in [9] uses weight functions and a "frozen" derivative for the development of the schemes. As far as we know, beyond the family described by Kung and Traub in [2], a general technique to obtain new optimal methods has been presented only in [10]; the authors use inverse interpolation, and methods of sixteenth order have also been obtained.
While computational engineering has achieved significant maturity, computational costs can be extremely large when high-accuracy simulations are required. The development of practical high-order solution methods could alleviate this problem by significantly decreasing the computational time required to achieve an acceptable error level (see, e.g., [11]).
The existence of an extensive literature on higher-order methods (see, e.g., [3, 12] and the references therein) reveals that they are limited only by the nature of the problem to be solved: in particular, the numerical solutions of nonlinear equations and systems are needed in the study of dynamical models of chemical reactors [13] or in radiative transfer [14]. Moreover, many numerical applications require high precision in their computations; in [15], high-precision calculations are used to solve interpolation problems in astronomy; in [16] the authors describe the use of arbitrary-precision computations to improve the results obtained in climate simulations. The results of these numerical experiments show that high-order methods combined with multiprecision floating-point arithmetic are very useful, because they yield a clear reduction in the number of iterations. A motivation for arbitrary precision in interval methods can be found in [17], in particular for the calculation of zeros of nonlinear functions.
The objective of this paper is to present a general procedure to obtain optimal methods of order 2^n for n = 3, 4, starting from optimal methods of order 2^n for n = 2. The procedure consists of composing optimal methods of order 4 that use two evaluations of the function and one of the derivative with Newton's step, and approximating the derivative in this last step by an adequate rational function; this doubles the convergence order while introducing only one new functional evaluation per iteration.
In Section 2, we describe the process to generate the new eighth-order methods and establish their convergence order. In Section 3, the same procedure is used to obtain sixteenth-order methods by increasing the degree of the approximant. Finally, in Section 4, we collect several optimal methods of order 4 that are the starting point for our new methods and present numerical experiments that confirm the theoretical results.

Optimal Methods of Order 8
In this section, we describe a procedure that allows us to obtain new optimal methods of order 8, starting from optimal schemes of order 4. Let us denote by Ψ_{2^n} the set of iteration functions corresponding to optimal methods of order 2^n.
Consider the three-step method given by

z_1(x_k) = x_k - f(x_k)/f'(x_k),
z_2(x_k) = ψ_f(x_k),
z_3(x_k) = z_2(x_k) - f(z_2(x_k))/f'(z_2(x_k)),

where ψ_f ∈ Ψ_4 is a two-step fourth-order scheme whose first step z_1 is Newton's.
In order to simplify the notation, we will omit the argument x_k in the iterative process, so that we will write z_j(x_k) as z_j, j = 1, 2, 3, and z_0 = x_k.
Obviously, this three-step method has order 8, being the composition of schemes of orders 4 and 2, respectively (see [2], Th. 2.4); however, the method is not optimal because it introduces two new functional evaluations in the last step.
Thus, to maintain the optimality, we substitute f'(z_2(x_k)) with the derivative h_2'(z_2(x_k)) of the second-degree approximant

h_2(t) = (a_0 + a_1(t - z_0) + a_2(t - z_0)^2) / (1 + b(t - z_0))

verifying the conditions

h_2(z_0) = f(z_0),    h_2'(z_0) = f'(z_0),    h_2(z_1) = f(z_1),    h_2(z_2) = f(z_2).

From the first condition one has a_0 = f(z_0). Substituting in (4)-(6) we obtain the following linear system:

a_1 - f(z_0) b = f'(z_0),
a_1 + a_2(z_1 - z_0) - f(z_1) b = f[z_1, z_0],
a_1 + a_2(z_2 - z_0) - f(z_2) b = f[z_2, z_0],

where, as usual, f[x, y] denotes the divided difference of order 1, (f(x) - f(y))/(x - y). Applying Gaussian elimination, the following reduced system is obtained:

a_1 - f(z_0) b = f'(z_0),
a_2 - f[z_0, z_1] b = f[z_0, z_0, z_1],
f[z_0, z_1, z_2] b = -f[z_0, z_0, z_1, z_2].

In the divided differences with a repeated argument, one places the derivative instead of an undetermined quotient, that is, f[z_0, z_0] = f'(z_0). The coefficients of the approximant are obtained by backward substitution. Then, the derivative of the approximant in z_2 is

h_2'(z_2) = (a_1 + 2a_2(z_2 - z_0) - b f(z_2)) / (1 + b(z_2 - z_0)).

Substituting f'(z_2) by this value, we obtain an iterative method, z_3, defined by

z_3(x_k) = z_2(x_k) - f(z_2(x_k)) / h_2'(z_2(x_k)).

This method uses only 4 functional evaluations per iteration. By showing that it is of order 8, we will prove that it is optimal in the Kung-Traub sense.

Theorem 1. Let α ∈ I be a simple root of a function f : I ⊆ R → R sufficiently differentiable in an open interval I. For x_0 close enough to α, the method defined by (11)-(13) has optimal convergence order 2^3.
Proof. Let e_{k,j} be the error of z_j(x_k); that is, e_{k,j} = z_j(x_k) - α, j = 0, 1, 2, 3, for k = 0, 1, .... Then, by the definition of each step of the iterative method, we have (14)-(16). Consider the expansion of f(z_0) around α, where c_j = f^{(j)}(α)/j! for j = 1, 2, ...; then, using (11) and (14), we obtain (19). Substituting (19) in the expansion of f(z_1) around α we get (20), and using (15) we obtain (22). Using (17), (18), (20), and (22) in the determination of the coefficients of the rational approximant and in the expression of its derivative (9) gives (23). Now, Taylor's expansion of f'(z_2) around α gives (24), and the fact that z_2(x) is of fourth order allows us to establish (25). Using this expression and (23), one can write (26). The order of the method z_3 is obtained by computing (27); using (26) we have (28). From (19) it can be deduced (30). By substituting (30) in (28) and using that ψ_f ∈ Ψ_4, one obtains (31), which proves that method z_3 has optimal order 2^3.
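To make the construction concrete, the following is a minimal numerical sketch (our own, not from the paper) of the resulting order-8 scheme, taking Ostrowski's method as the scheme ψ_f ∈ Ψ_4; the function name order8_pade and the guard tolerance are our own choices:

```python
import numpy as np

def order8_pade(f, df, x0, iters=3, tol=1e-14):
    """Order-8 scheme: Ostrowski's fourth-order method (one admissible
    choice of psi in Psi_4) followed by a Newton-like step in which
    f'(z2) is replaced by h2'(z2), the derivative of the degree-2
    Pade-like approximant fitted to f at x, z1, z2 and to f' at x.
    Four functional evaluations per iteration: f(x), f'(x), f(z1), f(z2)."""
    x = x0
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:            # already converged
            break
        dfx = df(x)
        z1 = x - fx / dfx            # Newton predictor
        fz1 = f(z1)
        if abs(fz1) < tol:
            return z1
        z2 = z1 - fz1 / dfx * fx / (fx - 2.0 * fz1)   # Ostrowski corrector
        fz2 = f(z2)
        if abs(fz2) < tol or z2 == z1:
            return z2
        # Linear system for (a1, a2, b); a0 = f(x).  The rows encode
        # h2'(x) = f'(x) and the divided-difference form of h2(zi) = f(zi).
        s1, s2 = z1 - x, z2 - x
        A = np.array([[1.0, 0.0, -fx],
                      [1.0, s1, -fz1],
                      [1.0, s2, -fz2]])
        rhs = np.array([dfx, (fz1 - fx) / s1, (fz2 - fx) / s2])
        a1, a2, b = np.linalg.solve(A, rhs)
        # h2'(z2), simplified using h2(z2) = f(z2)
        h2p = (a1 + 2.0 * a2 * s2 - b * fz2) / (1.0 + b * s2)
        x = z2 - fz2 / h2p           # Newton-like step with h2'(z2)
    return x
```

For f(x) = cos x - x, a single iteration from x_0 = 1 already gives several correct digits, and two iterations essentially reach the accuracy of double-precision arithmetic.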

Optimal Methods of Order 16
The idea of this section is to extend the former process, performing a new step to obtain optimal methods of order 2^n starting from optimal methods of order 2^(n-1). For n = 4 the method z_4 can be defined as follows:

z_3(x_k) = φ_f(x_k),
z_4(x_k) = z_3(x_k) - f(z_3(x_k)) / h_3'(z_3(x_k)),

where ψ_f ∈ Ψ_4 gives the first two steps z_1, z_2 and φ_f ∈ Ψ_8 the first three. (See [2, 5, 6, 8, 14, 16, 18] for some optimal eighth-order methods.) Then, we start from a method that, in its first three steps, performs 4 functional evaluations, plus one additional evaluation, f(z_3(x_k)), in the last step, which allows us to construct the following rational approximant:

h_3(t) = (a_0 + a_1(t - z_0) + a_2(t - z_0)^2 + a_3(t - z_0)^3) / (1 + b(t - z_0)).

The coefficients are determined by imposing the following conditions:

h_3(z_i) = f(z_i), i = 0, 1, 2, 3,    h_3'(z_0) = f'(z_0).

Similarly to the former case, a_0 = f(z_0). Substituting in (36)-(39) we obtain the linear system

a_1 - f(z_0) b = f'(z_0),
a_1 + a_2(z_i - z_0) + a_3(z_i - z_0)^2 - f(z_i) b = f[z_i, z_0], i = 1, 2, 3.

The remaining coefficients are obtained by reducing the system to triangular form and solving it by backward substitution. The derivative of the rational approximant in z_3 is

h_3'(z_3) = (a_1 + 2a_2(z_3 - z_0) + 3a_3(z_3 - z_0)^2 - b f(z_3)) / (1 + b(z_3 - z_0)).

As in the previous case, this expression allows us to establish the error equation of the last step, and taking into account the fact that z_3(x) is of eighth order, it results, similarly to the eighth-order case, that z_4 has optimal convergence order 2^4.
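As a numerical check on the construction above (our own illustration; the names pade_like_3 and h3 are hypothetical), the linear system for the degree-3 approximant can be set up and solved directly, and the interpolation and derivative conditions verified:

```python
import numpy as np

def pade_like_3(f, df, z):
    """Fit h3(t) = (a0 + a1*s + a2*s^2 + a3*s^3)/(1 + b*s), s = t - z[0],
    to the conditions h3(z_i) = f(z_i), i = 0..3, and h3'(z0) = f'(z0).
    z = [z0, z1, z2, z3]; a0 = f(z0) comes from the first condition."""
    z0 = z[0]
    a0 = f(z0)
    rows, rhs = [[1.0, 0.0, 0.0, -a0]], [df(z0)]   # row for h3'(z0) = f'(z0)
    for zi in z[1:]:
        s = zi - z0
        # Interpolation condition divided by s:
        #   a1 + a2*s + a3*s^2 - f(zi)*b = f[zi, z0]
        rows.append([1.0, s, s * s, -f(zi)])
        rhs.append((f(zi) - a0) / s)
    a1, a2, a3, b = np.linalg.solve(np.array(rows), np.array(rhs))
    return a0, a1, a2, a3, b

def h3(t, z0, coef):
    """Evaluate the fitted Pade-like approximant at t."""
    a0, a1, a2, a3, b = coef
    s = t - z0
    return (a0 + a1 * s + a2 * s**2 + a3 * s**3) / (1.0 + b * s)
```

For example, fitting f = exp on the nodes 0, 0.1, 0.2, 0.3 reproduces f at all four nodes and f' at the first one to rounding accuracy.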

Numerical Experiments
First of all, we consider some optimal fourth-order methods that we have used for developing high-order methods with the procedure described; all of them use Newton's step as a predictor and another evaluation of the function f.
We have chosen the following examples. We have performed the computations in MATLAB, in variable precision arithmetic with 1000 digits of mantissa.
Tables 1 and 2 show the distance |x_k - α| for the first three iterations of the new methods of orders 8 and 16, respectively. When the exact solution α is known, as in example (a), the last column depicts the computational convergence order ρ (see [20]). The results from Tables 3 and 4 correspond to an equation whose exact solution is not known, so that |x_{k+1} - x_k| is computed instead of the actual error. In both cases, the numerical results support the optimality of the new methods, in agreement with the proven theoretical results.
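The computational convergence order can be estimated from three consecutive errors as ρ ≈ ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}). A small self-contained sanity check of this estimator (our own example, applying Newton's method, of order 2, to x^2 - 2):

```python
import math

def coc(errors):
    """Computational order of convergence from three consecutive errors
    e_{k-1}, e_k, e_{k+1}: rho ~= ln(e_{k+1}/e_k) / ln(e_k/e_{k-1})."""
    e0, e1, e2 = errors
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton's method on f(x) = x^2 - 2: the estimate should approach 2.
alpha = math.sqrt(2.0)
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
errs = [abs(x - alpha) for x in xs]
print(coc(errs[:3]), coc(errs[1:4]))
```

In multiprecision arithmetic the same estimator applied to the order-8 and order-16 iterates stabilizes near 8 and 16, respectively, which is what the tables report.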

Conclusions
In this paper, we have developed high-order iterative methods to solve nonlinear equations. The procedure to obtain the iteration functions is rigorously deduced and can be generalized. There are numerous applications where these schemes are needed because high precision is required in the computations, as occurs in dynamical models of chemical reactors, in radiative transfer, and in high-precision interpolation problems in astronomy, among others. Moreover, the methods presented are optimal in terms of efficiency, which makes them very competitive.