On a Family of High-Order Iterative Methods under Kantorovich Conditions and Some Applications

Abstract and Applied Analysis


Introduction
This paper deals with the approximation of nonlinear equations

\[
F(x) = 0, \tag{1.1}
\]

where F : Ω ⊆ X → Y is a nonlinear operator between Banach spaces, using the following family of high-order iterative methods:

\[
\begin{aligned}
y_n &= x_n - F'(x_n)^{-1} F(x_n),\\
x_{n+1} &= y_n - \bigl[I + L_F(x_n) + L_F(x_n)^2\, G_F(x_n)\bigr] F'(x_n)^{-1} F(y_n),
\end{aligned} \tag{1.2}
\]
where I is the identity operator on X and, for each x ∈ X, L_F(x) is the linear operator on Ω ⊆ X defined by

\[
L_F(x) = F'(x)^{-1} F''(x) F'(x)^{-1} F(x),
\]

assuming that F'(x)^{-1} exists, and G_F : Ω ⊆ X → L(X, X) is a given nonlinear operator (usually depending on the operator F and its derivatives); here L(X, X) denotes the space of bounded linear operators from X to X. The second step can be interpreted as an acceleration of the initial one (in our case, Newton's method). Indeed, this family was introduced for scalar equations f(t) = 0 in [1]; for any initial scheme, Traub's theorem states that the accelerated method has order of convergence min{p + 2, 2p}, where p is the order of Φ(x).
In this paper, we take as the function Φ(x) the classical Newton method, for mainly three reasons. First, we can recover many well-known high-order iterative methods. Second, the domain of convergence of Newton's method is bigger than that of higher-order schemes [2]. Finally, in practice it is a good strategy to start with a simple method when we are not sufficiently close to the solution [3].
On the other hand, conditions are imposed on x_0 and on F in order to ensure the convergence of {x_n}_n to a solution x* of F(x) = 0. This analysis, usually known as Kantorovich type, is based on a relationship between the problem in a Banach space and a single nonlinear scalar equation that governs the behavior of the problem. A priori error estimates, depending only on the initial conditions, and hence the order of convergence, can be obtained by using Kantorovich-type theorems.
A review of the literature on high-order iterative methods in the last two decades (see for instance [4] and its references, or the incomplete list of recent papers [5–16]) reveals the importance of high-order schemes. The main practical difficulty related to the classical third-order iterative methods is the evaluation of the second-order derivative. For a nonlinear system of m equations and m unknowns, the first Fréchet derivative is a matrix with m² entries, while the second Fréchet derivative has m³ entries. This implies a huge number of operations to evaluate each iteration. However, in some cases the second derivative is easy to evaluate. Clear examples are the approximation of Hammerstein equations, where the second Fréchet derivative is block diagonal, and quadratic equations, where it is constant.
The structure of this paper is as follows: in Section 2 we present some particular examples of methods included in the family, and in Section 3 we state convergence and uniqueness theorems (Kantorovich type). Finally, some numerical experiments are presented in Section 4. These applications include quadratic Riccati equations and Hammerstein integral equations. In all these problems the proposed methods seem more efficient than second-order methods.

A Family of High-Order Iterative Methods
As was indicated in the introduction, we are interested in the study of the following family of iterative methods:

\[
\begin{aligned}
y_n &= x_n - \frac{f(x_n)}{f'(x_n)},\\
x_{n+1} &= y_n - \bigl[1 + L_f(x_n) + L_f(x_n)^2\, g_f(x_n)\bigr] \frac{f(y_n)}{f'(x_n)},
\end{aligned} \tag{2.1}
\]

where L_f(x) = f(x) f''(x)/f'(x)².
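In scalar form, one step of the family (2.1) can be sketched as follows. This is a minimal illustration, not the paper's implementation; for simplicity g_f is taken here as a function of L_f(x_n), and g_f ≡ 0 recovers the fourth-order two-step method discussed below.

```python
# One step of the family (2.1) for a scalar equation f(x) = 0.
# The choice of g selects the particular method; g = 0 gives the
# fourth-order two-step method (M4 in the text).
def family_step(f, df, d2f, x, g=lambda L: 0.0):
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                       # Newton predictor y_n
    L = fx * d2f(x) / dfx ** 2             # L_f(x_n), degree of log. convexity
    return y - (1.0 + L + L ** 2 * g(L)) * f(y) / dfx

# Example: solve x^3 - 2 = 0 with g_f = 0 (fourth-order method).
x = 1.0
for _ in range(6):
    x = family_step(lambda t: t ** 3 - 2, lambda t: 3 * t ** 2,
                    lambda t: 6 * t, x)
```

Starting from x = 1.0, the iterates converge rapidly to the cube root of 2.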
Note that the method (2.1) is equivalent to iterating the function M_f given by

\[
M_f(x) = x - \frac{f(x)}{f'(x)} - \bigl[1 + L_f(x) + L_f(x)^2\, g_f(x)\bigr] \frac{f\bigl(x - f(x)/f'(x)\bigr)}{f'(x)}. \tag{2.2}
\]

Particular examples of schemes included in the family with nonsmooth functions g_f(x) are (1) Halley, (2) super-Halley, (3) Chebyshev, and (4) Chebyshev-like methods. For 0 ≤ α ≤ 2, we consider the α-methods

\[
x_{n+1} = y_n - \bigl[1 + L_f(x_n) + L_f(x_n)^2\, g_\alpha(x_n)\bigr] \frac{f(y_n)}{f'(x_n)}, \tag{2.8}
\]

with an α-dependent choice g_α of the function g_f.
These methods have order of convergence three, which is smaller than the estimate 4 = min{2 + 2, 2 · 2} given by Traub's theorem, since g_f(x) is nonsmooth. Indeed, all these methods have the function f in the denominator.
On the other hand, considering different smooth functions g_f(x), the following schemes are also particular examples in the family.
(1) The two-step method with g_f(x) = 0, denoted M4, has order four.
(2) The two-step method with g_f(x) given by (2.10) has order five. (3) We could also start with other iterative functions Φ(x) and develop a similar analysis. For instance, starting with Chebyshev's method we can consider a method, denoted M6, that has order six [3]. We use this scheme only in the numerical section.

Semilocal Convergence
Several techniques are usually considered to study the convergence of iterative methods, as we can see in the papers [4, 17–20]. Among these, the two most common are those based on the majorant principle and those based on recurrence relations.
In this section, we analyze the semilocal convergence of the introduced family (1.2) under a generalization of the Kantorovich conditions on F and on the starting point x_0. Under these hypotheses it is possible to find a cubic polynomial p(t) on an interval [a, b] such that p(a) > 0 > p(b), with p'(t) < 0, p''(t) > 0, and p'''(t) > 0 in [a, t*], where t* is the unique simple solution of p(t) = 0 in that interval, and such that p(t_0) > 0 for the given t_0 ∈ [a, b].
Some immediate properties of the polynomial follow from the conditions imposed above:
(1) p(t) is decreasing in the interval [a, t*], since p'(t) < 0 in that interval.
(2) p'(t) is increasing and p(t) is convex in [a, t*], since p''(t) > 0 in [a, t*].
(3) p''(t) is increasing in [a, t*], since p'''(t) > 0 in that interval.
From these properties the following facts hold: (a) The Newton map associated with p(t), N_p(t) = t − p(t)/p'(t), is increasing in [a, t*], with N_p(t*) = t* and N_p'(t*) = 0.
(b) The function L_p(t) = p(t) p''(t)/p'(t)² is positive in [a, t*), since p(t) and p''(t) are strictly positive in that interval. Furthermore, L_p(t*) = 0, since p(t*) = 0 and p'(t*) ≠ 0.
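Properties (a) and (b) can be checked numerically. The sketch below uses an illustrative cubic with hand-picked coefficients satisfying the sign conditions above; it is not the polynomial (3.2) of the paper.

```python
# Illustrative cubic with p(0) > 0, p'(t) < 0, p''(t) > 0, p'''(t) > 0
# on [0, t*], where t* is its smallest positive root.
p   = lambda t: t ** 3 / 6 + t ** 2 / 2 - 2 * t + 1
dp  = lambda t: t ** 2 / 2 + t - 2
d2p = lambda t: t + 1

N_p = lambda t: t - p(t) / dp(t)               # Newton map of p
L_p = lambda t: p(t) * d2p(t) / dp(t) ** 2     # degree of log. convexity

# Locate t* by bisection on [0, 1] (p(0) = 1 > 0 > p(1) = -1/3).
lo, hi = 0.0, 1.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if p(mid) > 0 else (lo, mid)
t_star = (lo + hi) / 2

# Sample points in [0, t*) to inspect monotonicity of N_p and sign of L_p.
grid = [i * t_star / 100 for i in range(100)]
```

On this grid, N_p is increasing and L_p stays positive, as properties (a) and (b) assert.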
In this paper, as in [21, page 43], we consider as the function p(t) the cubic polynomial

\[
p(t) = \frac{K}{6}\,t^3 + \frac{M}{2}\,t^2 - \frac{t}{\beta} + \frac{\eta}{\beta}. \tag{3.2}
\]
If this last condition holds, then the cubic polynomial p(t) has two positive roots t* and t** (t* ≤ t**). We can choose a and b such that 0 < a < t* and \( b > 2/\bigl(M\beta + \sqrt{M^2\beta^2 + 2K\beta}\bigr) \). Moreover, we need some extra conditions associated with the operator G_F and the function g_p; all the methods considered in the previous section have associated functions g_p that satisfy them. With the last two hypotheses on g_p and the definition of p, following [21, Corollary 2.2.2, page 31], the next result holds. We are now ready to prove the desired semilocal convergence.

3.5
If B(x_0, t* − t_0) ⊂ Ω, then the sequence (1.2) is well defined and converges to x*, the unique solution of F(x) = 0 in the corresponding domain, where the real majorizing sequence is given by

\[
s_n = t_n - \frac{p(t_n)}{p'(t_n)}, \qquad
t_{n+1} = s_n - \bigl[1 + L_p(t_n) + L_p(t_n)^2\, g_p(t_n)\bigr] \frac{p(s_n)}{p'(t_n)}. \tag{3.7}
\]
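The majorizing sequence (3.7) can be computed explicitly once p and g_p are fixed. Below is a minimal sketch with g_p = 0 (the choice corresponding to M4) and an illustrative cubic with hand-picked coefficients satisfying the sign conditions; it is not the polynomial (3.2) of the paper.

```python
# Illustrative cubic p with p(0) > 0 and a simple root t* in (0, 1).
p   = lambda t: t ** 3 / 6 + t ** 2 / 2 - 2 * t + 1
dp  = lambda t: t ** 2 / 2 + t - 2
d2p = lambda t: t + 1

def majorizing_step(t):
    """One step of (3.7) with g_p = 0."""
    s = t - p(t) / dp(t)                   # s_n = N_p(t_n)
    L = p(t) * d2p(t) / dp(t) ** 2         # L_p(t_n)
    return s - (1.0 + L) * p(s) / dp(t)    # t_{n+1}

ts = [0.0]
for _ in range(8):
    ts.append(majorizing_step(ts[-1]))
```

The sequence {t_n} increases monotonically toward the root t* of p, as the semilocal theory predicts.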
Proof. By an induction process it is possible to verify that (i) F'(x_n) is invertible, (ii) ‖F(x_n)‖ ≤ p(t_n), (iii) ‖L_F(x_n)‖ ≤ L_p(t_n), and (iv) ‖x_{n+1} − x_n‖ ≤ t_{n+1} − t_n.
The case n = 0 follows from the initial conditions on x_0 and t_0. We now assume that the conditions are valid for n and check them for n + 1.
(i) We write

\[
F'(x_{n+1}) = F'(x_n)\bigl[I - F'(x_n)^{-1}\bigl(F'(x_n) - F'(x_{n+1})\bigr)\bigr]. \tag{3.8}
\]
Applying Taylor's theorem, and using that p''(t) is increasing, we bound the norm of the bracketed perturbation.

By applying the general invertibility criterion, F'(x_{n+1}) is invertible, and

3.10
(ii) Using the Taylor expansion (3.14), we conclude that

\[
\|F(y_n)\| \le p(s_n). \tag{3.15}
\]
Similarly, from the expansion (3.17), the definition of the method, the main hypotheses on G_F, and the induction process, using that ‖F(y_n)‖ ≤ p(s_n), we obtain the desired inequality (3.18).
In this situation, the theorem holds by applying the previous estimates directly to the formulas that describe the methods; we refer to [21, pages 41–42] for more details.
The estimates given in the present paper are optimal in the sense that the scalar sequence associated with p satisfies the corresponding inequalities as equalities.

Numerical Experiments
We consider several problems where the presented high-order methods are a good alternative to second-order methods.

Approximation of Riccati Equations
In this first example we consider quadratic equations, for which the second Fréchet derivative is constant. Particular cases of this type of equation, which appear in many applications, are Riccati equations [22–24]. For instance, consider the problem of calculating feedback controls for systems modeled by partial differential or delay differential equations: a classical controller design objective is to find a control u(t) for the state x(t) such that a quadratic objective function is minimized, where R is a positive definite matrix and the observation operator satisfies C ∈ L(X, R^d). In practice, the control is calculated through approximation. This leads to solving an algebraic Riccati equation for a feedback operator (see [25, 26] for more details).
In the general case, an algebraic Riccati equation is given by [27]

\[
R(X) = XDX - XA - A^{T}X - C = 0,
\]

where D, A, C ∈ R^{n×n} are given matrices, D is symmetric, and X ∈ R^{n×n} is the unknown. In this case,

4.5
In particular, the second derivative is constant. In this case, the Kantorovich conditions for Newton's method take a compact form; moreover, this hypothesis also gives the convergence of the high-order methods [22]. Then, using a matrix norm,

4.7
Given a symmetric initial guess X_0 ∈ R^{n×n}, to obtain R'(X_0)^{-1} we solve the corresponding linear matrix equation. This equation has a solution if DX_0 − A is stable [27], that is, if all its eigenvalues have negative real part. In the following case,

4.11
In this case, the algebraic Riccati equation has an exact solution. Besides, from the aforesaid starting point it follows that DX_0 − A is a stable matrix. Now, considering the stopping criterion ‖X_n − X*‖ < 10^{−50}, we report in Table 1 the errors ‖X_n − X*‖. Analyzing the computational order of convergence [28],

\[
\rho \approx \frac{\ln\bigl(\|X_{n+1} - X^{*}\| / \|X_{n} - X^{*}\|\bigr)}{\ln\bigl(\|X_{n} - X^{*}\| / \|X_{n-1} - X^{*}\|\bigr)},
\]

we observe that method M6 has computational order of convergence at least six. See Table 2, where ρ_N, ρ_CH, and ρ_M6 denote, respectively, the computational orders of convergence of the three methods.
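A generic routine for the computational order of convergence can be sketched as follows; the error sequence used in the example is synthetic (a third-order model e_{n+1} = e_n³), not the data of Tables 1 and 2.

```python
import math

def coc(errors):
    """Computational order of convergence from successive error norms:
    rho_n ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})."""
    return [math.log(errors[i + 1] / errors[i]) /
            math.log(errors[i] / errors[i - 1])
            for i in range(1, len(errors) - 1)]

# Synthetic errors of an exactly third-order method: e_{n+1} = e_n ** 3.
errors = [1e-1, 1e-3, 1e-9, 1e-27]
rhos = coc(errors)
```

Applied to this synthetic sequence, every estimated order equals three, as expected.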
In comparison with the classical Newton method, the extra computational cost per iteration of method M6 is only two new evaluations of the operator F and two extra matrix-vector multiplications. Moreover, as with Newton's method, only one LU decomposition is necessary. Thus, M6 is more efficient.
See [29] for more details.

Approximation of Hammerstein Equations
We consider an important special case of integral equation, the Hammerstein equation (4.14). These equations are related to boundary value problems for differential equations. For some of them, high-order methods using second derivatives are useful for their effective discretized solution.
The discrete version of (4.14) is obtained by a quadrature formula with grid points 0 ≤ t_0 < t_1 < ··· < t_m ≤ 1, \(\int_0^1 f(t)\,dt \approx \sum_{j=0}^{m} \gamma_j f(t_j)\), setting x_i ≈ x(t_i). The second Fréchet derivative of the associated discrete system is block diagonal. The discretization of the particular Hammerstein equation we consider satisfies the Lipschitz condition of our Kantorovich theorem [4].
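The discretization step can be sketched as follows. The kernel s·t, the nonlinearity x(t)², and the constants are illustrative assumptions (the paper's concrete equation is not reproduced here), and a plain fixed-point iteration replaces the higher-order solvers, just to exhibit the trapezoidal discretization and the diagonal structure of the second derivative.

```python
# Discretize x(s) = 1 + (1/2) * integral_0^1 s*t*x(t)^2 dt  (assumed data)
# with the trapezoidal rule on m + 1 equally spaced nodes.
m = 20
h = 1.0 / m
t = [i * h for i in range(m + 1)]
w = [h / 2 if i in (0, m) else h for i in range(m + 1)]  # trapezoidal weights

def residual(x):
    # F_i(x) = x_i - 1 - (1/2) t_i * sum_j w_j t_j x_j^2.  The second
    # derivative of F_i is nonzero only for j = k, i.e. (block) diagonal.
    return [x[i] - 1.0 - 0.5 * t[i] *
            sum(w[j] * t[j] * x[j] ** 2 for j in range(m + 1))
            for i in range(m + 1)]

# Solve the discrete system by fixed-point iteration (a contraction here).
x = [1.0] * (m + 1)
for _ in range(80):
    s = sum(w[j] * t[j] * x[j] ** 2 for j in range(m + 1))
    x = [1.0 + 0.5 * t[i] * s for i in range(m + 1)]
```

The resulting vector x drives the residual of the discrete system to machine precision.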
We take m = 20 in the trapezoidal quadrature formula and, as exact solution, the one obtained numerically by Newton's method. In Table 3 we summarize the numerical results for different methods in the family: Newton, Halley, and M4. We consider as initial guess x_0(s) = 1.5.
Since the second derivative is block diagonal, its application has a computational cost of order O(m²). Thus, the computational cost of each iteration of the three schemes is, for m sufficiently large, of the same order O(m³) (due to the LU decomposition). Note that only one factorization is needed in each iteration of the three schemes. In conclusion, the scheme M4 (order four) is the most efficient for m sufficiently large.
See [30] for other related problems.

Conclusions
Summing up, in this paper we have studied a family of high-order iterative methods. The theoretical analysis ensures semilocal convergence conditions for all these schemes. We have established a priori error bounds for them and, consequently, their order of convergence. We have also presented different applications in which the analyzed high-order methods are more efficient than simpler second-order methods.