Abstract and Applied Analysis, Hindawi Publishing Corporation, Volume 2014, Article ID 705674, doi:10.1155/2014/705674

Research Article

New Mono- and Biaccelerator Iterative Methods with Memory for Nonlinear Equations

T. Lotfi,¹ F. Soleymani,² S. Shateyi,³ P. Assari,¹ and F. Khaksar Haghani²

¹ Department of Mathematics, Hamedan Branch, Islamic Azad University, Hamedan, Iran
² Department of Mathematics, Shahrekord Branch, Islamic Azad University, Shahrekord, Iran
³ Department of Mathematics and Applied Mathematics, School of Mathematical and Natural Sciences, University of Venda, P. Bag X5050, Thohoyandou 0950, South Africa

Academic Editor: Alicia Cordero

Received 23 May 2014; Accepted 4 July 2014; Published 24 July 2014

Copyright © 2014 T. Lotfi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Acceleration of convergence is discussed for some families of iterative methods for solving scalar nonlinear equations. We construct mono- and biparametric methods with memory and study their convergence orders. It is shown that orders 12 and 14 can be attained using only 4 functional evaluations per cycle, which yields high computational efficiency indices. Numerical illustrations are also given to verify the theoretical discussion.

1. Introduction

Finding the zeros of nonlinear functions by iterative methods is a challenging problem in computational mathematics with many applications (see, e.g., [1, 2]). The solution α can be obtained as a fixed point of a function f : D ⊆ ℝ → ℝ by means of the fixed-point iteration

(1) x_{n+1} = φ(x_n),    n = 0, 1, ....

The most widely used method for this purpose is the classical Newton's method; its derivative-free counterpart is known as Steffensen's scheme. Both methods converge quadratically under the conditions that the function f is continuously differentiable and a good initial approximation x_0 is given.
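
As a concrete illustration (not part of the original paper), Steffensen's scheme replaces f'(x_n) in Newton's method by the divided difference f[x_n, x_n + f(x_n)]; a minimal Python sketch follows, where the tolerance and iteration cap are illustrative choices of our own:

```python
# Steffensen's derivative-free scheme:
# x_{n+1} = x_n - f(x_n)^2 / (f(x_n + f(x_n)) - f(x_n)).
def steffensen(f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx   # fx * f[x, x + fx]
        if denom == 0:
            break                # divided difference degenerated
        x = x - fx * fx / denom
    return x

# Sample equation x^2 - 2 = 0 with the simple root sqrt(2):
root = steffensen(lambda t: t * t - 2.0, 1.0)
```

Like Newton's method, the scheme needs a reasonably good x_0; unlike Newton's method, it uses two evaluations of f and no derivative per step.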

Building on these fundamental methods, many iterative methods without memory possessing optimal convergence order in the sense of the Kung-Traub conjecture have been constructed in the literature; see, for example, [6, 7] and the references therein. For applications, refer to [8, 9].

According to the recent trend of research in this topic, iterative methods with memory (also known as self-accelerating schemes) are worth studying. A method with memory can improve the order of convergence of a method without memory, without any additional functional evaluations, and as a result it attains a very high computational efficiency index. There are two kinds of iterative methods with memory, namely, Steffensen-type and Newton-type methods. In this paper, we consider only Steffensen-type methods with memory.

To review the literature briefly, we remark that optimal Steffensen-type families without memory for solving nonlinear equations have been introduced in a general form, and two-step self-accelerating Steffensen-type methods and their applications to the solution of nonlinear systems and nonlinear differential equations have also been discussed.

In 2012, Soleymani et al. proposed some multiparametric multistep optimal iterative methods without memory for nonlinear equations. For instance, they proposed

(2)
y_n = x_n - f(x_n)/f[k_n, x_n],    k_n = x_n + β f(x_n),    β ∈ ℝ \ {0},    n = 0, 1, 2, ...,
z_n = y_n - f(y_n) / ( f[y_n, x_n] + f[k_n, x_n, y_n](y_n - x_n) + a_3 (y_n - x_n)(y_n - k_n) ),    a_3 ∈ ℝ,
x_{n+1} = z_n - f(z_n)/ψ_n,    γ ∈ ℝ,

where, for simplicity,

ψ_n = f[x_n, z_n] + ( f[k_n, x_n, y_n] - f[k_n, x_n, z_n] - f[y_n, x_n, z_n] )(x_n - z_n) + γ (z_n - x_n)(z_n - k_n)(z_n - y_n)

is used throughout. Writing e_n = x_n - α and c_j = f^(j)(α)/(j! f'(α)), scheme (2) satisfies the error equation

(3) e_{n+1} = ( (1 + β f'(α))^4 c_2^2 ( a_3 + f'(α)(c_2^2 - c_3) ) ( -γ + a_3 c_2 + f'(α)(c_2^3 - c_2 c_3 + c_4) ) / f'(α)^2 ) e_n^8 + O(e_n^9).

They also proposed the scheme

(4)
y_n = x_n - f(x_n)/f[k_n, x_n],    k_n = x_n - β f(x_n),    β ∈ ℝ \ {0},    n = 0, 1, 2, ...,
z_n = y_n - f(y_n) / ( f[y_n, x_n] + f[k_n, x_n, y_n](y_n - x_n) + a_3 (y_n - x_n)(y_n - k_n) ),    a_3 ∈ ℝ,
x_{n+1} = z_n - f(z_n)/ψ_n,    γ ∈ ℝ,

with the error equation

(5) e_{n+1} = ( (-1 + β f'(α))^4 c_2^2 ( a_3 + f'(α)(c_2^2 - c_3) ) ( -γ + a_3 c_2 + f'(α)(c_2^3 - c_2 c_3 + c_4) ) / f'(α)^2 ) e_n^8 + O(e_n^9),

and also

(6)
y_n = x_n - f(x_n)/f[k_n, x_n],    k_n = x_n + β f(x_n),    β ∈ ℝ \ {0},    n = 0, 1, 2, ...,
z_n = y_n - ( f(y_n)/f[x_n, k_n] ) · 1/( 1 - f(y_n)/f(x_n) - f(y_n)/f(k_n) ),
x_{n+1} = z_n - f(z_n)/ψ_n,    γ ∈ ℝ,

where

(7) e_{n+1} = ( (1 + β f'(α))^4 c_2^2 (c_2^2 - c_3) ( -γ + f'(α)(c_2^3 - c_2 c_3 + c_4) ) / f'(α) ) e_n^8 + O(e_n^9).

Error relations (3), (5), and (7) play key roles in our study of convergence acceleration in the subsequent sections.

The purpose of this paper is to extend the results of Soleymani et al. by providing with-memory variants of the above three-step schemes. We contribute two types of memorization, namely, variants using one accelerator and variants using two accelerators. For background on such accelerations, one may refer to the literature on self-accelerating methods.

The remaining sections of this paper are organized as follows. Section 2 is devoted to the derivation of new root solvers with memory using one accelerator. Section 3 derives some new methods, without and with memory, possessing very high computational efficiency indices; the computational efficiency index is also discussed to reveal the applicability and efficacy of the proposed approaches. The performance is tested through numerical examples in Section 4, where the theoretical results concerning order of convergence and computational efficiency are confirmed and the presented methods are shown to be more efficient than their existing counterparts. Finally, concluding remarks are given in Section 5.

2. Development of Some Monoaccelerator Methods with Memory

Our motivation for constructing methods with memory is directly connected to a basic tenet of numerical analysis: an algorithm should deliver output as accurate as possible at minimal computational cost. In other words, it is necessary to search for algorithms of high computational efficiency.

Subsequently, we propose the following monoaccelerator methods, in which the parameter β is replaced by an iteratively updated β_n (n ≥ 1) to accelerate the speed of convergence:

(8)
y_n = x_n - f(x_n)/f[k_n, x_n],    k_n = x_n + β_n f(x_n),    β_n = -1/N_4'(x_n),    n = 0, 1, 2, ...,
z_n = y_n - f(y_n) / ( f[y_n, x_n] + f[k_n, x_n, y_n](y_n - x_n) + a_3 (y_n - x_n)(y_n - k_n) ),
x_{n+1} = z_n - f(z_n)/ψ_n,

or the following variant:

(9)
y_n = x_n - f(x_n)/f[k_n, x_n],    k_n = x_n - β_n f(x_n),    β_n = 1/N_4'(x_n),    n = 0, 1, 2, ...,
z_n = y_n - f(y_n) / ( f[y_n, x_n] + f[k_n, x_n, y_n](y_n - x_n) + a_3 (y_n - x_n)(y_n - k_n) ),
x_{n+1} = z_n - f(z_n)/ψ_n,

and also

(10)
y_n = x_n - f(x_n)/f[k_n, x_n],    k_n = x_n + β_n f(x_n),    β_n = -1/N_4'(x_n),    n = 0, 1, 2, ...,
z_n = y_n - ( f(y_n)/f[x_n, k_n] ) · 1/( 1 - f(y_n)/f(x_n) - f(y_n)/f(k_n) ),
x_{n+1} = z_n - f(z_n)/ψ_n.

Note that, throughout this work, N_l(t) stands for the Newton interpolating polynomial of degree l set through l + 1 available approximations (nodes) from the current and previous iteration(s).

In fact, the main idea in constructing methods with memory consists of updating the parameter β = β_n as the iteration proceeds by the formula β_n = -1/f̃'(α), or similar ones, where f̃'(α) is an approximation to f'(α). In essence, this minimizes the factor 1 + β_n f'(α) involved in the final error equation of the families without memory. This automatically increases the speed of convergence and is known as memorization.
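
The update rule β_n = -1/N_4'(x_n) can be sketched as follows. This is our own illustrative Python helper (the function names and the sample nodes are hypothetical, not from the paper): divided differences give the Newton-form coefficients, and a Horner-type recurrence differentiates the interpolating polynomial at x_n.

```python
def divided_differences(x, f):
    """Coefficients of the Newton form of the interpolating polynomial."""
    c = list(f)
    n = len(x)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (x[i] - x[i - j])
    return c

def newton_poly_derivative(x, f, t):
    """Derivative at t of the Newton interpolating polynomial through (x_i, f_i)."""
    c = divided_differences(x, f)
    p, dp = c[-1], 0.0             # Horner recurrence for value and derivative
    for k in range(len(x) - 2, -1, -1):
        dp = dp * (t - x[k]) + p
        p = p * (t - x[k]) + c[k]
    return dp

def beta_update(x, f, x_n):
    """Accelerator beta_n = -1 / N'(x_n)."""
    return -1.0 / newton_poly_derivative(x, f, x_n)

# For f(x) = x^2 - 2 the degree-4 interpolant is exact, so N_4'(t) = 2t:
nodes = [1.0, 1.2, 1.4, 1.5, 1.41]    # hypothetical recent approximations
fvals = [t * t - 2.0 for t in nodes]
beta_n = beta_update(nodes, fvals, nodes[-1])   # -1/(2 * 1.41)
```

In the actual methods with memory, the five nodes would be x_{n-1}, k_{n-1}, y_{n-1}, z_{n-1}, and x_n, so no extra function evaluations are required.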

It is also assumed that an initial estimate β_0 (and similarly for the other accelerators in the methods with memory) is chosen before starting the iterative process.

Now let us recall an important lemma due to Traub.

Lemma 1.

If β_n = -1/N_4'(x_n), n = 1, 2, ..., then the estimate

(11) 1 + β_n f'(α) ~ D e_{n-1,z} e_{n-1,y} e_{n-1,k} e_{n-1}

holds, where D is an asymptotic constant and e_{n-1,k} = k_{n-1} - α, e_{n-1,y} = y_{n-1} - α, e_{n-1,z} = z_{n-1} - α.

Theorem 2 determines the R-order of the three-step iterative methods with memory (8), (9), and (10). Note that N_l(t) is chosen in this paper so as to obtain as high a convergence order as possible. Obviously, if fewer nodes are used for the interpolating polynomials, slower acceleration is achieved.

Theorem 2.

If an initial estimate x_0 is close enough to a simple root α of f(x) = 0, f being a sufficiently differentiable function, then the R-order of convergence of the three-step methods with memory (8), (9), and (10) is at least 12.

Proof.

Let {x_n} be a sequence of approximations generated by an iterative method (IM). If this sequence converges to a root α of f(x) = 0 with R-order O_R((IM), α) ≥ R, we write

(12) e_{n+1} ~ Ψ_{n,R} e_n^R,    e_n = x_n - α,

where Ψ_{n,R} tends to the asymptotic error constant Ψ_R of (IM) as n → ∞. Hence

(13) e_{n+1} ~ Ψ_{n,R} e_n^R = Ψ_{n,R} (Ψ_{n-1,R} e_{n-1}^R)^R = Ψ_{n,R} Ψ_{n-1,R}^R e_{n-1}^{R^2}.

We assume that the R-orders of the iterative sequences {k_n}, {y_n}, and {z_n} are at least p, q, and s, respectively; that is,

(14)
e_{n,k} ~ Ψ_{n,p} e_n^p = Ψ_{n,p} (Ψ_{n-1,R} e_{n-1}^R)^p = Ψ_{n,p} Ψ_{n-1,R}^p e_{n-1}^{Rp},
e_{n,y} ~ Ψ_{n,q} e_n^q = Ψ_{n,q} (Ψ_{n-1,R} e_{n-1}^R)^q = Ψ_{n,q} Ψ_{n-1,R}^q e_{n-1}^{Rq},
e_{n,z} ~ Ψ_{n,s} e_n^s = Ψ_{n,s} (Ψ_{n-1,R} e_{n-1}^R)^s = Ψ_{n,s} Ψ_{n-1,R}^s e_{n-1}^{Rs}.

By (14) and Lemma 1, we obtain

(15) 1 + β_n f'(α) ~ D Ψ_{n-1,p} Ψ_{n-1,q} Ψ_{n-1,s} e_{n-1}^{p+q+s+1}.

Substituting this into the expressions for e_{n,k}, e_{n,y}, e_{n,z}, and e_{n+1}, we have

(16)
e_{n,k} ~ (1 + β_n f'(α)) e_n ~ D Ψ_{n-1,p} Ψ_{n-1,q} Ψ_{n-1,s} Ψ_{n-1,R} e_{n-1}^{(1+p+q+s)+R},
e_{n,y} ~ c_2 (1 + β_n f'(α)) e_n^2 ~ c_2 D Ψ_{n-1,p} Ψ_{n-1,q} Ψ_{n-1,s} Ψ_{n-1,R}^2 e_{n-1}^{(1+p+q+s)+2R},
e_{n,z} ~ a_{n,4} (1 + β_n f'(α))^2 e_n^4 ~ a_{n,4} D^2 Ψ_{n-1,p}^2 Ψ_{n-1,q}^2 Ψ_{n-1,s}^2 Ψ_{n-1,R}^4 e_{n-1}^{2(1+p+q+s)+4R},
e_{n+1} ~ a_{n,8} (1 + β_n f'(α))^4 e_n^8 ~ a_{n,8} D^4 Ψ_{n-1,p}^4 Ψ_{n-1,q}^4 Ψ_{n-1,s}^4 Ψ_{n-1,R}^8 e_{n-1}^{4(1+p+q+s)+8R}.

Equating the exponents of e_{n-1} in (14) and (16), and in (13) and the last relation, gives the following system of equations:

(17)
Rp - R - (p + q + s + 1) = 0,
Rq - 2R - (p + q + s + 1) = 0,
Rs - 4R - 2(p + q + s + 1) = 0,
R^2 - 8R - 4(p + q + s + 1) = 0.

This system has the solution p = 2 , q = 3 , s = 6 , and R = 12 , which specifies the R -order of convergence twelve for the derivative-free schemes with memory (8) and (10). Similar results are valid for (9). The proof is now complete.
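
The stated solution can be checked directly; the following snippet (ours, not part of the paper) verifies that p = 2, q = 3, s = 6, R = 12 satisfies system (17):

```python
# Direct verification of system (17) at the claimed solution.
p, q, s, R = 2, 3, 6, 12
t = p + q + s + 1                    # common term, equals 12 here
assert R * p - R - t == 0            # 24 - 12 - 12
assert R * q - 2 * R - t == 0        # 36 - 24 - 12
assert R * s - 4 * R - 2 * t == 0    # 72 - 48 - 24
assert R * R - 8 * R - 4 * t == 0    # 144 - 96 - 48
```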

Significant acceleration of convergence is attained without any additional functional evaluations, which provides very high computational efficiency for the proposed methods. A further advantage is that the proposed methods do not use derivatives.

Following the definition of the efficiency index ρ^{1/θ}, where ρ and θ stand for the rate of convergence and the number of functional evaluations per cycle, the computational efficiency index of the proposed variants with memory reaches 12^{1/4} ≈ 1.861, which is higher than the 8^{1/4} ≈ 1.682 of the families (2), (4), and (6).
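
As a quick numerical check (ours), the quoted indices follow directly from the definition ρ^{1/θ} with θ = 4:

```python
# Efficiency index rho**(1/theta) with theta = 4 functional evaluations per cycle.
def efficiency_index(order, evals=4):
    return order ** (1.0 / evals)

ei_memory_12 = efficiency_index(12)   # ~1.861, the with-memory variants
ei_optimal_8 = efficiency_index(8)    # ~1.682, the optimal eighth-order families
```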

3. Biaccelerator Methods with Memory

An accelerating approach similar to that used in the previous section will now be applied to construct three-step methods with memory. The calculation of two parameters is more complex since more information is needed per iteration.

3.1. The Development of Some Families without Memory

We first apply a free parameter in the denominator of the first substep of (4) to yield the following more general family of eighth-order methods without memory:

(18)
y_n = x_n - f(x_n) / ( f[k_n, x_n] + p f(k_n) ),    k_n = x_n + β f(x_n),    β ∈ ℝ \ {0},    p ∈ ℝ,    n = 0, 1, 2, ...,
z_n = y_n - f(y_n) / ( f[y_n, x_n] + f[k_n, x_n, y_n](y_n - x_n) + a_3 (y_n - x_n)(y_n - k_n) ),
x_{n+1} = z_n - f(z_n)/ψ_n.

Theorem 3.

Assume that the function f : D ⊆ ℝ → ℝ has a simple root α ∈ D, where D is an open interval, and that f is sufficiently differentiable in a neighborhood of α contained in D. Then, the order of convergence of the iterative family without memory defined by (18) is eight.

Proof.

The proof of this theorem is similar to those of the related schemes and is hence omitted; we give only the error equation of (18):

(19) e_{n+1} = (1 + β f'(α))^4 (p + c_2)^2 A_1 e_n^8 + O(e_n^9),

where A_1 = ( a_3 + f'(α) c_2 (p + c_2) - f'(α) c_3 ) ( -γ + c_2 ( a_3 + f'(α) c_2 (p + c_2) - f'(α) c_3 ) + f'(α) c_4 ) / f'(α)^2.

Similarly, we have the following new multiparametric family of methods:

(20)
y_n = x_n - f(x_n) / ( f[k_n, x_n] + p f(k_n) ),    k_n = x_n - β f(x_n),    β ∈ ℝ \ {0},    p ∈ ℝ,    n = 0, 1, 2, ...,
z_n = y_n - f(y_n) / ( f[y_n, x_n] + f[k_n, x_n, y_n](y_n - x_n) + a_3 (y_n - x_n)(y_n - k_n) ),    a_3 ∈ ℝ,
x_{n+1} = z_n - f(z_n)/ψ_n.

Similarly, we have the following theorem.

Theorem 4.

Assume that the function f : D ⊆ ℝ → ℝ has a simple root α ∈ D, where D is an open interval, and that f is sufficiently differentiable in a neighborhood of α contained in D. Then, the order of convergence of the iterative family without memory defined by (20) is eight.

Proof.

The proof of this theorem is similar to those of the related schemes and is hence omitted; we give only the error equation of (20), obtained by setting a_3 = 1:

(21) e_{n+1} = (-1 + β f'(α))^4 (p + c_2)^2 A_2 e_n^8 + O(e_n^9),

where A_2 = ( 1 + f'(α) c_2 (p + c_2) - f'(α) c_3 ) ( -γ + c_2 ( 1 + f'(α) c_2 (p + c_2) - f'(α) c_3 ) + f'(α) c_4 ) / f'(α)^2.

In the next subsection, we extend these schemes to methods with memory for solving scalar nonlinear equations.

3.2. The Development of Some Biaccelerator Methods with Memory

Now, using suitable Newton interpolating polynomials of appropriate degree passing through all the available nodes, we can propose the following methods with memory with two accelerators:

(22)
k_n = x_n + β_n f(x_n),    β_n = -1/N_4'(x_n),    p_n = -N_5''(k_n)/(2 N_5'(k_n)),    n = 0, 1, 2, ...,
y_n = x_n - f(x_n) / ( f[k_n, x_n] + p_n f(k_n) ),
z_n = y_n - f(y_n) / ( f[y_n, x_n] + f[k_n, x_n, y_n](y_n - x_n) + a_3 (y_n - x_n)(y_n - k_n) ),
x_{n+1} = z_n - f(z_n)/ψ_n,

(23)
k_n = x_n - β_n f(x_n),    β_n = 1/N_4'(x_n),    p_n = -N_5''(k_n)/(2 N_5'(k_n)),    n = 0, 1, 2, ...,
y_n = x_n - f(x_n) / ( f[k_n, x_n] + p_n f(k_n) ),
z_n = y_n - f(y_n) / ( f[y_n, x_n] + f[k_n, x_n, y_n](y_n - x_n) + a_3 (y_n - x_n)(y_n - k_n) ),
x_{n+1} = z_n - f(z_n)/ψ_n.
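
The second accelerator p_n = -N_5''(k_n)/(2 N_5'(k_n)) targets the factor c_2 + p_n in the error equation, since it approximates -c_2 = -f''(α)/(2 f'(α)). A sketch of this update (helper names and sample data are ours, not from the paper) extends the Horner recurrence for the Newton form to second derivatives:

```python
def divided_differences(x, f):
    """Coefficients of the Newton form of the interpolating polynomial."""
    c = list(f)
    n = len(x)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (x[i] - x[i - j])
    return c

def newton_poly_d1_d2(x, f, t):
    """First and second derivatives at t of the Newton interpolating polynomial."""
    c = divided_differences(x, f)
    p, dp, d2p = c[-1], 0.0, 0.0
    for k in range(len(x) - 2, -1, -1):
        d2p = d2p * (t - x[k]) + 2.0 * dp  # (P(t)(t-x_k)+c_k)'' = P''(t-x_k) + 2P'
        dp = dp * (t - x[k]) + p
        p = p * (t - x[k]) + c[k]
    return dp, d2p

def p_update(x, f, k_n):
    """Accelerator p_n = -N''(k_n) / (2 N'(k_n))."""
    d1, d2 = newton_poly_d1_d2(x, f, k_n)
    return -d2 / (2.0 * d1)

# For f(x) = x^3 the degree-4 interpolant is exact; at t = 2, f' = f'' = 12:
nodes = [0.5, 1.0, 1.5, 2.0, 2.5]   # hypothetical available approximations
fvals = [t ** 3 for t in nodes]
p_n = p_update(nodes, fvals, 2.0)   # -12 / 24 = -0.5
```

Both accelerators reuse only already-computed nodes, so the count of four functional evaluations per cycle is unchanged.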

Similarly, we have the following lemma.

Lemma 5.

If β_n = -1/N_4'(x_n) and p_n = -N_5''(k_n)/(2 N_5'(k_n)), n = 1, 2, ..., then the estimates

(24)
1 + β_n f'(α) ~ D_1 e_{n-1,z} e_{n-1,y} e_{n-1,k} e_{n-1},
c_2 + p_n ~ D_2 e_{n-1,z} e_{n-1,y} e_{n-1,k} e_{n-1}

hold, where D_1 and D_2 are some asymptotic constants.

Subsequently, the following theorem determines the convergence order of the three-step iterative method with memory (22).

Theorem 6.

If an initial estimate x_0 is close enough to a simple root α of f(x) = 0, f being a sufficiently differentiable function, then the R-order of convergence of the three-step method with memory (22) is at least 14.

Proof.

Let {x_n} be a sequence of approximations generated by an iterative method (IM). If this sequence converges to a root α of f(x) = 0 with R-order O_R((IM), α) ≥ R, we write

(25) e_{n+1} ~ Ψ_{n,R} e_n^R,    e_n = x_n - α,

where Ψ_{n,R} tends to the asymptotic error constant Ψ_R of (IM) as n → ∞. Hence

(26) e_{n+1} ~ Ψ_{n,R} e_n^R = Ψ_{n,R} (Ψ_{n-1,R} e_{n-1}^R)^R = Ψ_{n,R} Ψ_{n-1,R}^R e_{n-1}^{R^2}.

We assume that the R-orders of the iterative sequences {k_n}, {y_n}, and {z_n} are at least p, q, and s, respectively; that is,

(27) e_{n,k} ~ Ψ_{n,p} e_n^p = Ψ_{n,p} (Ψ_{n-1,R} e_{n-1}^R)^p = Ψ_{n,p} Ψ_{n-1,R}^p e_{n-1}^{Rp},
(28) e_{n,y} ~ Ψ_{n,q} e_n^q = Ψ_{n,q} (Ψ_{n-1,R} e_{n-1}^R)^q = Ψ_{n,q} Ψ_{n-1,R}^q e_{n-1}^{Rq},
(29) e_{n,z} ~ Ψ_{n,s} e_n^s = Ψ_{n,s} (Ψ_{n-1,R} e_{n-1}^R)^s = Ψ_{n,s} Ψ_{n-1,R}^s e_{n-1}^{Rs}.

By (27), (28), (29), and Lemma 5, we obtain

(30)
1 + β_n f'(α) ~ D_1 Ψ_{n-1,p} Ψ_{n-1,q} Ψ_{n-1,s} e_{n-1}^{p+q+s+1},
c_2 + p_n ~ D_2 Ψ_{n-1,p} Ψ_{n-1,q} Ψ_{n-1,s} e_{n-1}^{p+q+s+1}.

Substituting these into the expressions for e_{n,k}, e_{n,y}, e_{n,z}, and e_{n+1}, we obtain

(31) e_{n,k} ~ (1 + β_n f'(α)) e_n ~ C_3 Ψ_{n-1,p} Ψ_{n-1,q} Ψ_{n-1,s} Ψ_{n-1,R} e_{n-1}^{(1+p+q+s)+R},
(32) e_{n,y} ~ c_2 (1 + β_n f'(α))(c_2 + p_n) e_n^2 ~ C_4 Ψ_{n-1,p}^2 Ψ_{n-1,q}^2 Ψ_{n-1,s}^2 Ψ_{n-1,R}^2 e_{n-1}^{2(1+p+q+s)+2R},
(33) e_{n,z} ~ a_{n,4} (1 + β_n f'(α))^2 (c_2 + p_n) e_n^4 ~ C_5 Ψ_{n-1,p}^3 Ψ_{n-1,q}^3 Ψ_{n-1,s}^3 Ψ_{n-1,R}^4 e_{n-1}^{3(1+p+q+s)+4R},
(34) e_{n+1} ~ a_{n,8} (1 + β_n f'(α))^4 (c_2 + p_n)^2 e_n^8 ~ C_6 Ψ_{n-1,p}^6 Ψ_{n-1,q}^6 Ψ_{n-1,s}^6 Ψ_{n-1,R}^8 e_{n-1}^{6(1+p+q+s)+8R},

where C_i, i = 3, 4, 5, 6, are some asymptotic constants.

Equating the exponents of e_{n-1} in the pairs of relations (27)-(31), (28)-(32), (29)-(33), and (26)-(34) gives

(35)
Rp - R - (p + q + s + 1) = 0,
Rq - 2R - 2(p + q + s + 1) = 0,
Rs - 4R - 3(p + q + s + 1) = 0,
R^2 - 8R - 6(p + q + s + 1) = 0.

This system has the solution p = 2, q = 4, s = 7, and R = 14, which specifies the R-order of convergence fourteen of the derivative-free scheme with memory (22).
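
As with Theorem 2, the solution can be verified directly; the following check (ours, not part of the paper) confirms that p = 2, q = 4, s = 7, R = 14 satisfies system (35):

```python
# Direct verification of system (35) at the claimed solution.
p, q, s, R = 2, 4, 7, 14
t = p + q + s + 1                    # common term, equals 14 here
assert R * p - R - t == 0            # 28 - 14 - 14
assert R * q - 2 * R - 2 * t == 0    # 56 - 28 - 28
assert R * s - 4 * R - 3 * t == 0    # 98 - 56 - 42
assert R * R - 8 * R - 6 * t == 0    # 196 - 112 - 84
```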

The computational efficiency index of the proposed variants with memory (22) is 14^{1/4} ≈ 1.934, which is higher than the 12^{1/4} ≈ 1.861 of the families (8), (9), and (10). Note that biparametric acceleration by self-correcting parameters had not previously been applied to three-step iterative methods, which reveals the originality of this study.

4. Numerical Reports

The numerical results presented in this section point to the very high computational efficiency and demonstrate the fast convergence of the proposed methods.

The errors |x_k - α| of the approximations to the sought zeros are reported, where a(-b) stands for a × 10^{-b}. Moreover, r_c indicates the computational order of convergence (COC), computed by

(36) r_c = log|f(x_k)/f(x_{k-1})| / log|f(x_{k-1})/f(x_{k-2})|.
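
Formula (36) can be evaluated from three consecutive residuals; a minimal sketch (ours) is:

```python
import math

# COC estimate (36) from the residuals f(x_{k-2}), f(x_{k-1}), f(x_k).
def coc(f_km2, f_km1, f_k):
    return math.log(abs(f_k / f_km1)) / math.log(abs(f_km1 / f_km2))

# For residuals decaying like a second-order method (1e-2, 1e-4, 1e-8),
# the formula recovers order 2.
order = coc(1e-2, 1e-4, 1e-8)
```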

Note that the package Mathematica 9 with multiprecision arithmetic was used.

For comparison, in our numerical experiments we also tested several three-step iterative methods in what follows.

Kung and Traub's method is

(37)
y_n = x_n - f(x_n)/f[x_n, w_n],    n = 0, 1, 2, ...,
z_n = y_n - f(y_n) f(w_n) / ( (f(w_n) - f(y_n)) f[x_n, y_n] ),
x_{n+1} = z_n - f(y_n) f(w_n) ( y_n - x_n + f(x_n)/f[x_n, z_n] ) / ( (f(y_n) - f(z_n))(f(w_n) - f(z_n)) ) + f(y_n)/f[y_n, z_n].

Sharma et al.'s method is

(38)
y_n = x_n - f(x_n)/f[x_n, w_n],    n = 0, 1, 2, ...,
z_n = y_n - ( (1 + u_n)/(1 - v_n) ) f(y_n)/f[y_n, w_n],
x_{n+1} = z_n - f(z_n) / ( f[z_n, y_n] + f[z_n, y_n, x_n](z_n - y_n) + f[z_n, y_n, x_n, w_n](z_n - y_n)(z_n - x_n) ),

where u_n = f(y_n)/f(x_n) and v_n = f(y_n)/f(w_n).

Lotfi and Tavakoli's method is

(39)
y_n = x_n - f(x_n)/f[x_n, w_n],    n = 0, 1, 2, ...,
z_n = y_n - (1 + t_n) f(y_n)/f[y_n, w_n],
x_{n+1} = z_n - ( 1 + t_n + s_n + 2 t_n s_n + (-1 + φ_n) t_n^3 )( 1 + s_n^2 + υ_n^2 ) f(z_n)/f[z_n, w_n],

where t_n = f(y_n)/f(x_n), s_n = f(z_n)/f(y_n), υ_n = f(z_n)/f(x_n), and φ_n = 1/(1 + β_n f[x_n, w_n]).

Lotfi and Tavakoli's second method is

(40)
y_n = x_n - f(x_n)/f[x_n, w_n],    n = 0, 1, 2, ...,
z_n = y_n - (1 + t_n) f(y_n)/f[y_n, w_n],
x_{n+1} = z_n - ( (1/(1 + φ_n))(1 + t_n + s_n + 2 t_n s_n) + t_n^2 ) / ( 1/(1 + φ_n) + t_n^2 ) · ( 1 + s_n^2/(υ_n^2 + 1) ) f(z_n)/f[z_n, w_n].

Zheng et al.'s method is

(41)
y_n = x_n - f(x_n)/f[x_n, w_n],    n = 0, 1, 2, ...,
z_n = y_n - f(y_n) / ( f[y_n, x_n] + f[y_n, x_n, w_n](y_n - x_n) ),
x_{n+1} = z_n - f(z_n) / ( f[z_n, y_n] + f[z_n, y_n, x_n](z_n - y_n) + f[z_n, y_n, x_n, w_n](z_n - y_n)(z_n - x_n) ).

In Tables 1, 2, and 3, the test functions indicated in their captions are considered. Results for the second and third iterations are given only to demonstrate the convergence speed of the tested methods; such accuracy is rarely required in practical problems. From the tables, we observe the extraordinary accuracy of the produced approximations, obtained using only a few function evaluations. Such accuracy is not needed in practice but has theoretical importance. We emphasize that our primary aim was to construct very efficient three-step methods with memory.

Table 1: f_1(x) = e^{x^2 - 4} + sin(x - 2) - x^4 + 15, x_0 = 1.67, α = 2, β_0 = 0.01.

Methods    |x_1 - α|    |x_2 - α|    |x_3 - α|    r_c
Method (8) 1.9962 (−6) 5.7840 (−68) 6.2205 (−808) 12.025
Method (9) 1.6456 (−6) 2.2536 (−68) 1.0563 (−812) 12.032
Method (10) 1.2320 (−5) 2.7753 (−58) 9.2624 (−692) 12.032
Kung-Traub’s method 5.8921 (−4) 2.0646 (−40) 6.4593 (−477) 11.974
Sharma et al.’s method 9.9223 (−4) 4.4897 (−38) 4.8572 (−448) 11.937
Lotfi and Tavakoli’s method 1.3617 (−4) 1.1746 (−50) 3.3485 (−599) 11.908
Lotfi and Tavakoli’s method 2.2254 (−4) 1.0413 (−45) 1.7458 (−540) 11.971
Zheng et al.’s method 2.3735 (−6) 3.9324 (−67) 5.3263 (−798) 12.025
Method (22) 4.3356 (−4) 2.7252 (−53) 1.5758 (−737) 13.907
Method (23) 8.6117 (−4) 3.8041 (−49) 1.6820 (−679) 13.898

Table 2: f_2(x) = e^{-x^2}(1 + x^3 + x^6)(x - 2), x_0 = 1.8, α = 2, β_0 = 0.01.

Methods    |x_1 - α|    |x_2 - α|    |x_3 - α|    r_c
Method (8) 3.0652 (−6) 4.8857 (−66) 4.0327 (−783) 11.992
Method (9) 1.6119 (−6) 5.2218 (−70) 4.1933 (−831) 11.988
Method (10) 2.8081 (−6) 1.7460 (−66) 1.7496 (−788) 11.992
Kung-Traub’s method 5.9862 (−6) 1.8189 (−61) 1.7332 (−727) 11.997
Sharma et al.’s method 1.6364 (−6) 1.1449 (−69) 1.4239 (−826) 11.985
Lotfi and Tavakoli’s method 7.8686 (−6) 3.1496 (−60) 7.7390 (−713) 11.997
Lotfi and Tavakoli’s method 1.0173 (−5) 1.2131 (−58) 1.3423 (−693) 11.998
Zheng et al.’s method 7.8832 (−7) 5.4271 (−76) 3.3691 (−903) 11.960
Method (22) 1.1411 (−4) 5.7181 (−57) 1.0777 (−788) 13.991
Method (23) 1.0788 (−4) 2.9171 (−57) 8.7176 (−793) 13.992

Table 3: f_3(x) = e^{x^2 + x cos x - 1} sin x + x log(1 + x), x_0 = 0.6, α = 0, β_0 = 0.01.

Methods    |x_1 - α|    |x_2 - α|    |x_3 - α|    r_c
Method (8) 1.8675 (−2) 2.3638 (−15) 8.0916 (−170) 11.9949
Method (9) 1.2712 (−2) 2.1749 (−16) 2.0758 (−193) 11.971
Method (10) 1.8079 (−2) 1.6344 (−15) 9.6591 (−172) 11.951
Kung-Traub’s method 3.2701 (−2) 6.3936 (−12) 8.8536 (−128) 11.872
Sharma et al.’s method 3.0811 (−2) 1.5485 (−12) 1.4570 (−135) 11.891
Lotfi and Tavakoli’s method 3.8496 (−2) 2.6256 (−11) 1.2282 (−120) 11.852
Lotfi and Tavakoli’s method 4.2090 (−2) 9.7577 (−11) 1.4135 (−113) 11.823
Zheng et al.’s method 1.8803 (−2) 1.8332 (−15) 2.8135 (−171) 11.949
Method (22) 2.7837 (−1) 2.2953 (−6) 5.4520 (−83) 14.195
Method (23) 3.0041 (−1) 4.6086 (−6) 2.2147 (−79) 14.246
5. Summary

In this paper, we have shown that the three-step families of Soleymani et al. can be further accelerated without increasing the computational cost, which directly improves the computational efficiency of the modified methods. The main idea in constructing the higher order methods consists of the introduction of another parameter p and the improvement of the accelerating technique for the parameter β.

It is evident from Tables 1 to 3 that approximations to the roots possess great accuracy when the proposed methods with memory are applied.

Further research should be done to extend the presented methods to systems of nonlinear equations, or to propose with-memory versions using three or four accelerators. These could be the subjects of future studies.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The first author, Taher Lotfi, is thankful to Hamedan Branch, Islamic Azad University, for providing excellent research facilities and partial financial support.

References

[1] T. Lotfi, F. Soleymani, S. Sharifi, S. Shateyi, and F. Khaksar Haghani, "Multipoint iterative methods for finding all the simple zeros in an interval," Journal of Applied Mathematics, vol. 2014, Article ID 601205, 14 pages, 2014.
[2] F. Soleymani, E. Tohidi, S. Shateyi, and F. Khaksar Haghani, "Some matrix iterations for computing matrix sign function," Journal of Applied Mathematics, vol. 2014, Article ID 425654, 9 pages, 2014.
[3] J. F. Steffensen, "Remarks on iteration," Skandinavisk Aktuarietidskrift, vol. 16, pp. 64-72, 1933.
[4] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, New York, NY, USA, 1964.
[5] H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the Association for Computing Machinery, vol. 21, pp. 643-651, 1974.
[6] V. Kanwar, S. Bhatia, and M. Kansal, "New optimal class of higher-order methods for multiple roots, permitting f'(x_n) = 0," Applied Mathematics and Computation, vol. 222, pp. 564-574, 2013.
[7] T. Lotfi, S. Shateyi, and S. Hadadi, "Potra-Pták iterative method with memory," ISRN Mathematical Analysis, vol. 2014, Article ID 697642, 6 pages, 2014.
[8] F. Soleymani, M. Sharifi, and S. Shateyi, "Approximating the inverse of a square matrix with application in computation of the Moore-Penrose inverse," Journal of Applied Mathematics, vol. 2014, Article ID 731562, 8 pages, 2014.
[9] Y. Wei, Q. He, Y. Sun, and C. Ji, "Improved power flow algorithm for VSC-HVDC system based on high-order Newton-type method," Mathematical Problems in Engineering, vol. 2013, Article ID 235316, 10 pages, 2013.
[10] A. Cordero and J. R. Torregrosa, "Low-complexity root-finding iteration functions with no derivatives of any order of convergence," Journal of Computational and Applied Mathematics, 2014.
[11] Q. Zheng, F. Huang, X. Guo, and X. Feng, "Doubly-accelerated Steffensen's methods with memory and their applications on solving nonlinear ODEs," Journal of Computational Analysis and Applications, vol. 15, no. 5, pp. 886-891, 2013.
[12] F. Soleymani, D. K. R. Babajee, S. Shateyi, and S. S. Motsa, "Construction of optimal derivative-free techniques without memory," Journal of Applied Mathematics, vol. 2012, Article ID 497023, 24 pages, 2012.
[13] F. Soleymani, "Some optimal iterative methods and their with memory variants," Journal of the Egyptian Mathematical Society, vol. 21, no. 2, pp. 133-141, 2013.
[14] J. R. Sharma, R. K. Guha, and P. Gupta, "Some efficient derivative free methods with memory for solving nonlinear equations," Applied Mathematics and Computation, vol. 219, no. 2, pp. 699-707, 2012.
[15] T. Lotfi and E. Tavakoli, "On a new efficient Steffensen-like iterative class by applying a suitable self-accelerator parameter," The Scientific World Journal, vol. 2014, Article ID 769758, 9 pages, 2014.
[16] Q. Zheng, J. Li, and F. Huang, "An optimal Steffensen-type family for solving nonlinear equations," Applied Mathematics and Computation, vol. 217, no. 23, pp. 9592-9597, 2011.