Convergence and Error Estimate of the Steffensen Method



Introduction
Let X and Y be real or complex Banach spaces, and let F : D ⊂ X → Y be a nonlinear Fréchet differentiable operator in some convex domain D. The well-known iteration for solving the equation F(x) = 0 is Newton's method, defined as follows:

x_{k+1} = x_k − F'(x_k)^{−1} F(x_k), k = 0, 1, 2, . . . .

In recent years, some authors [1][2][3][4][5] have discussed Newton's method and some of its modifications and have given convergence theorems and error estimates. In order to avoid computing the derivative f'(x) and to avoid f'(x) = 0, many authors have investigated the Steffensen method:

x_{k+1} = x_k − f(x_k)^2 / (f(x_k + f(x_k)) − f(x_k)), k = 0, 1, 2, . . . ,

which is given for solving algebraic equations (only in one dimension) [6]. For example, Kung and Traub presented a class of multipoint iterative functions without derivatives. Chen [7] studied a particular class of these methods which contains the Steffensen method as a special case, and X. Wu and H. Wu [8] studied a class of quadratically convergent iteration formulae without derivatives. Zheng et al. [9] gave a second-order parametric Steffensen-like method, which is derivative free and uses only two evaluations of the function per step; they also suggested a variant of the Steffensen-like method which is still derivative free and uses four evaluations of the function to achieve cubic convergence. Ren et al. [10] derived a one-parameter class of fourth-order methods for solving nonlinear equations. However, all the methods above are given only for solving algebraic equations. Newton's method can be extended to higher dimensions directly, whereas the Steffensen method cannot, so the Steffensen method in higher dimensions has seldom been studied. In [6], several forms of the Steffensen method in higher dimensions are mentioned. Recently, Amat et al. [11] generalized one form of the Steffensen method and gave its convergence theorem and error estimate. Hilout [12] studied the convergence of Steffensen-type algorithms and proposed a class of Steffensen-type algorithms for solving generalized equations on Banach spaces. Alarcón et al. [13] discussed a modified Steffensen-type iterative scheme for the numerical solution of a system of nonlinear equations.
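The one-dimensional Steffensen iteration described above replaces the derivative f'(x_k) by the divided difference (f(x_k + f(x_k)) − f(x_k))/f(x_k). A minimal sketch follows; the test equation x^2 − 2 = 0 and the tolerances are illustrative choices, not from the paper:

```python
# Minimal sketch of the classical one-dimensional Steffensen iteration
#     x_{k+1} = x_k - f(x_k)^2 / (f(x_k + f(x_k)) - f(x_k)).
# The test equation x^2 - 2 = 0 and the tolerances are illustrative choices.

def steffensen(f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        # Divided difference (f(x + f(x)) - f(x)) / f(x) replaces f'(x).
        x = x - fx * fx / (f(x + fx) - fx)
    return x

root = steffensen(lambda x: x**2 - 2.0, 1.0)  # approximately sqrt(2)
```

Like Newton's method, this iteration converges quadratically near a simple root, but it evaluates only f itself.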
In this paper, we discuss another form of the Steffensen method in R^n space. The form is defined as follows:

x_{k+1} = x_k − H_k^{−1} F(x_k), k = 0, 1, 2, . . . , (4)

where H_k is a derivative-free approximation of the Jacobian built from values of F, F(x) = (f_1(x), f_2(x), . . ., f_n(x))^T with f_i : R^n → R, and x = (x_1, x_2, . . ., x_n)^T is a vector.
Compared with Newton's iteration, the advantage of this form is that it avoids evaluating the derivative while achieving the same convergence order. In this paper, we also establish the convergence theorem under the Kantorovich condition and give the error estimate. The paper is organized as follows. In Section 2, the majorizing sequence and its properties are presented. In Section 3, we establish a Kantorovich-type theorem for this kind of method by using a majorizing function; moreover, an error estimate is given. In Section 4, the proof of the main theorem is presented. Finally, a numerical comparison with Newton's method is presented.
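As a sketch of how such a derivative-free iteration can look in R^n, the code below replaces the Jacobian by a divided-difference matrix whose step in the j-th coordinate is the j-th component of F(x_k). This particular choice of H_k and the test system are assumptions for illustration, not necessarily the exact operator defined in the paper:

```python
import numpy as np

def steffensen_nd(F, x0, tol=1e-10, max_iter=50):
    """Sketch of a derivative-free Steffensen-type iteration in R^n.

    The Jacobian is replaced by a divided-difference matrix H whose step
    in the j-th coordinate is the j-th component of F(x); this is one
    common choice and not necessarily the paper's exact operator H_k.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        H = np.empty((n, n))
        for j in range(n):
            # Guard against a zero component of F(x) in the step size.
            h = Fx[j] if abs(Fx[j]) > 1e-14 else 1e-8
            e = np.zeros(n)
            e[j] = h
            H[:, j] = (F(x + e) - Fx) / h
        x = x - np.linalg.solve(H, Fx)
    return x

# Illustrative system: x^2 + y^2 = 1, x - y = 0, root (1/sqrt(2), 1/sqrt(2)).
sol = steffensen_nd(lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]]),
                    [1.0, 0.5])
```

Each step costs n + 1 evaluations of F and one linear solve, but no derivative of F is ever formed.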

Majorizing Sequence and Its Properties
Let K, β, and η be nonnegative numbers, and let h be a majorant function defined by
Applying the iteration (2) to h, we obtain the real sequence {t_k}.
First, we have the following lemmas.
Lemma 1. When α = Kβη ≤ 1/2, the real function h has two positive roots t* and t** (t* ≤ t**).

Lemma 2. Under the assumptions of the previous lemma, the real sequence {t_n} is monotonically increasing and tends to the root t* of h.
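For concreteness, one standard Kantorovich-type majorant consistent with the condition α = Kβη ≤ 1/2 of Lemma 1 is the following (an assumed form; the paper's own definition of h may differ):

```latex
h(t) = \frac{K}{2}\,t^{2} - \frac{t}{\beta} + \frac{\eta}{\beta},
\qquad \alpha = K\beta\eta \le \tfrac{1}{2},
\qquad
t^{*} = \frac{1-\sqrt{1-2\alpha}}{K\beta},
\qquad
t^{**} = \frac{1+\sqrt{1-2\alpha}}{K\beta}.
```

With this choice the two positive roots coincide, t* = t**, exactly when α = 1/2.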
Lemma 3. Suppose that the sequence {t_k} is produced by the iteration (6). Then
where
Thus, we have
By (11), we get
Since b_k = t** − t* + a_k, it is easy to solve for a_k from (12). The lemma follows.

Main Result
Let F : D ⊂ R^n → R^n be a nonlinear Fréchet differentiable operator in some convex domain D and satisfy
For the iteration form (4), we have the following lemma and theorem.
Lemma 4. Let F : R^n → R^n, and let F satisfy the iteration (4); then one has
so
This ends the proof.

SRX Mathematics 3
For an initial value x_0 ∈ D, suppose that F'(x_0)^{−1} exists and F satisfies
where β and γ are positive constants. Then we have the following theorem.

Proof of the Main Theorem
Let F satisfy the conditions (13), (14), and (17). To prove Theorem 1, we need to prove some useful lemmas.
Lemma 5. If the sequence {x_k} is produced by the iteration (4), then

Proof. By (4) and Taylor expansion, the lemma can be verified directly.
where the sequence {t_k} is produced by the iteration (6).
Proof. The conclusions (a), (b), and (c) are obviously true for k = 0. Now assume that they remain valid for k ≤ n. First, by (b), we can see
Thus, x_{n+1} ∈ S(x_0, t*).
Proof of Theorem 1. By (b) in Lemma 6, we have that {x_n} is a Cauchy sequence. Let x* be the limit of x_n. As n → ∞, we have
which yields F(x*) = 0. This completes the proof of Theorem 1.

Numerical Examples
We have established a convergence theorem for the iteration (4). The best property of this method is that it does not use any derivative yet shares the good convergence properties of Newton's method. The following examples give sufficient illustration. The computations associated with the examples were performed using Maple 9.
Example 1. We consider the following equation in 2D:
The initial vector is (1/2, 1/2)^T. We solve the above equations by Newton's method and the Steffensen method, respectively. The numerical comparison of the methods is displayed in Table 1. From Table 1, we can see that the Steffensen method has the same convergence rate as Newton's method.
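The equations of Example 1 are not reproduced above, so as an illustration of the kind of side-by-side comparison reported in Table 1, the sketch below runs Newton's method (with an analytic Jacobian) against a Steffensen-type divided-difference iteration. The 2D system x^2 + y^2 = 4, xy = 1, the starting vector (2, 0.5)^T, and the choice of H_k are all assumptions for illustration:

```python
import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(v):  # analytic Jacobian, used only by Newton's method
    x, y = v
    return np.array([[2.0 * x, 2.0 * y], [y, x]])

x_newton = np.array([2.0, 0.5])
x_steff = np.array([2.0, 0.5])
for _ in range(6):
    # Newton step: solve J(x) d = F(x), then x <- x - d.
    x_newton = x_newton - np.linalg.solve(J(x_newton), F(x_newton))
    # Steffensen-type step: divided-difference Jacobian, no derivatives.
    Fx = F(x_steff)
    H = np.empty((2, 2))
    for j in range(2):
        h = Fx[j] if abs(Fx[j]) > 1e-14 else 1e-8
        e = np.zeros(2)
        e[j] = h
        H[:, j] = (F(x_steff + e) - Fx) / h
    x_steff = x_steff - np.linalg.solve(H, Fx)
# Both iterations approach the root near (1.93185, 0.51764) at a similar rate.
```

Printing the residual norms per step for both methods would reproduce the kind of comparison shown in Table 1.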
Example 2. We consider the following nonlinear boundary value problem of second order [14]:
To solve this problem by finite differences, we start by drawing the usual grid with points t_j = jh, where h = 1/n and n is an integer. Note that x_0 and x_n are given by the boundary conditions; thus x_0 = 0 = x_n. We first approximate the second derivative at these points:
Substituting this expression into the differential equation, we obtain the following system of nonlinear equations:
We therefore obtain an operator F.
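The specific differential equation of Example 2 appears in [14] and is not reproduced above; as a sketch of the finite-difference assembly just described, the code below discretizes a generic problem x'' = g(t, x) with x(0) = x(1) = 0 (the function g and the particular test case are placeholders, not the paper's problem):

```python
import numpy as np

def make_F(g, n):
    """Discretize x'' = g(t, x), x(0) = x(1) = 0 on n subintervals.

    The unknowns are the interior values x_1, ..., x_{n-1}; the second
    derivative is replaced by the central difference
    (x_{j-1} - 2 x_j + x_{j+1}) / h^2.
    """
    h = 1.0 / n
    t = np.linspace(0.0, 1.0, n + 1)

    def F(x):
        full = np.concatenate(([0.0], x, [0.0]))  # attach boundary values
        d2 = (full[:-2] - 2.0 * full[1:-1] + full[2:]) / h**2
        return d2 - g(t[1:-1], full[1:-1])

    return F

# Illustrative check: for g(t, x) = -pi^2 sin(pi t) the exact solution is
# sin(pi t), so the residual at the exact values is only O(h^2) truncation error.
F = make_F(lambda t, x: -np.pi**2 * np.sin(np.pi * t), 12)
x_exact = np.sin(np.pi * np.linspace(0.0, 1.0, 13)[1:-1])
```

The resulting operator F : R^{n-1} → R^{n-1} can then be handed to any derivative-free root finder.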
Now we apply the Steffensen method and Newton's method to approximate the solution of F(x) = 0. We choose n = 12, so (29) gives eleven equations. With the initial iterate chosen as x_j^(0) = t_j(t_j − 1), j = 1, 2, . . ., 11, after two iterates the Steffensen method is better than Newton's method (see Table 2).
Example 3. We consider the following nonlinear boundary value problem for the Burgers-Huxley equation:
The exact solution of this equation is
Replacing the derivatives by finite differences, we get the following system of nonlinear equations:
where I is the identity matrix, U_n = (u_{1,n}, u_{2,n}, . . ., u_{100,n})^T, and f(u_{i,n}) = u_{i,n}(1 − u_{i,n})(u_{i,n} − 0.4), i = 1, 2, . . ., 100.
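The reaction term f(u) = u(1 − u)(u − 0.4) given above can be sketched as a vectorized function evaluated at the 100 interior grid values (the grid itself is an illustrative assumption):

```python
import numpy as np

def f(u):
    """Reaction term of the Burgers-Huxley example: u (1 - u) (u - 0.4)."""
    return u * (1.0 - u) * (u - 0.4)

# 100 interior grid values on (0, 1); the grid is an illustrative choice.
u = np.linspace(0.0, 1.0, 102)[1:-1]
r = f(u)
```

The term vanishes at the equilibria u = 0, u = 0.4, and u = 1, which is what makes the discretized system genuinely nonlinear.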

Conclusion
Derivatives may prevent the application of iterative methods, especially when they are not easy to obtain. In this paper, we have shown that the Steffensen method uses only evaluations of the function yet maintains quadratic convergence. The new iterative method works well in our numerical results: we obtained the optimal order of convergence without any stability problems. With different choices of H_k in the iteration (4), we will obtain different iterations without any derivative; discussing these iterations is future work.

Table 1 :
The numerical comparison between Newton's method and the Steffensen method.

Table 2 :
The numerical comparison between Newton's method and the Steffensen method in terms of ‖F(x^(n))‖.

Table 3 :
The error between the approximate solution U_n and the exact solution U*.