Advances in Numerical Analysis, Hindawi Publishing Corporation, Article ID 957496, doi:10.1155/2013/957496

Research Article

Three New Optimal Fourth-Order Iterative Methods to Solve Nonlinear Equations

Gustavo Fernández-Torres (Petroleum Engineering Department, UNISTMO, 70760 Tehuantepec, OAX, Mexico) and Juan Vásquez-Aquino (Applied Mathematics Department, UNISTMO, 70760 Tehuantepec, OAX, Mexico)

Academic Editor: Michael Ng

Received 21 November 2012; Revised 24 January 2013; Accepted 2 February 2013; Published 24 March 2013

Copyright © 2013 Gustavo Fernández-Torres and Juan Vásquez-Aquino. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present new modifications to Newton's method for solving nonlinear equations. The analysis of convergence shows that these methods have fourth-order convergence. Each of the three methods uses three functional evaluations; thus, according to Kung-Traub's conjecture, they are optimal methods. Using the same ideas, we extend the analysis to functions with multiple roots. Several numerical examples illustrate that the presented methods perform better than Newton's classical method and other recently published methods of fourth-order convergence.

1. Introduction

One of the most important problems in numerical analysis is solving nonlinear equations. To solve these equations, we can use iterative methods such as Newton's method and its variants. Newton's classical method for a single nonlinear equation f(x) = 0 with simple root r is written as
(1) x_{n+1} = x_n − f(x_n)/f'(x_n),
which converges quadratically in some neighborhood of r.
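As a concrete reference point, the iteration (1) can be sketched in a few lines of Python (a minimal illustration of ours, not code from the paper; the function names and the stopping rule are our own choices):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x = x - step
        if abs(step) < tol:  # stop when |x_{n+1} - x_n| is small
            return x
    return x

# Example: f(x) = x^3 + 4x^2 - 10 has the simple root r ≈ 1.3652300134
root = newton(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x, 1.0)
```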

Taking y_n = x_n − f(x_n)/f'(x_n), many modifications of Newton's method were recently published. In [1], Noor and Khan presented a fourth-order optimal method defined by
(2) x_{n+1} = x_n − [f(x_n) − f(y_n)]/[f(x_n) − 2f(y_n)] · f(x_n)/f'(x_n),
which uses three functional evaluations.

In [2], Cordero et al. proposed a fourth-order optimal method defined by
(3) z_n = x_n − [f(x_n) + f(y_n)]/f'(x_n), w_n = z_n − f²(y_n)(2f(x_n) + f(y_n))/[f²(x_n) f'(x_n)],
which also uses three functional evaluations.

Chun presented a third-order iterative formula in [3], defined by
(4) x_{n+1} = x_n − (3/2) f(x_n)/f'(x_n) + (1/2) f(x_n) f'(φ(x_n))/f'²(x_n),
which uses three functional evaluations, where φ is any iterative function of second order.

Li et al. presented a fifth-order iterative formula in [4], defined by
(5) u_{n+1} = x_n − [f(x_n) − f(x_n − f(x_n)/f'(x_n))]/f'(x_n), x_{n+1} = u_{n+1} − f(u_{n+1})/f'(x_n − f(x_n)/f'(x_n)),
which uses five functional evaluations.

The main goal and motivation in the development of new methods is to obtain better computational efficiency. In other words, it is advantageous to obtain the highest possible convergence order with a fixed number of functional evaluations per iteration. In the case of multipoint methods without memory, this demand is closely connected with the optimal order stated in Kung-Traub's conjecture.

Kung-Traub's Conjecture (see [5]). Multipoint iterative methods (without memory) requiring n + 1 functional evaluations per iteration have order of convergence at most 2^n.

Multipoint methods which satisfy Kung-Traub's conjecture (still unproved) are usually called optimal methods; consequently, p = 2^n is the optimal order.

The computational efficiency of an iterative method of order p, requiring n function evaluations per iteration, is most frequently measured by Ostrowski-Traub's efficiency index [6], E = p^{1/n}.
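For instance, Newton's method has p = 2 with n = 2 evaluations (f and f'), while an optimal two-step method has p = 4 with n = 3. A quick computation of E = p^{1/n} (a small illustration of ours):

```python
def efficiency_index(p, n):
    """Ostrowski-Traub efficiency index E = p**(1/n) for order p and n evaluations."""
    return p ** (1.0 / n)

E_newton = efficiency_index(2, 2)    # 2^(1/2) ≈ 1.414
E_optimal4 = efficiency_index(4, 3)  # 4^(1/3) ≈ 1.587 (optimal fourth order, 3 evaluations)
```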

In the case of multiple roots, the quadratically convergent modified Newton's method [7] is
(6) x_{n+1} = x_n − m f(x_n)/f'(x_n),
where m is the multiplicity of the root.

For this case, several methods have recently been presented to approximate the root of the function. For example, the cubically convergent Halley's method [8] is a special case of Hansen-Patrick's method [9]:
(7) x_{n+1} = x_n − f(x_n) / [ ((m+1)/(2m)) f'(x_n) − f(x_n) f''(x_n)/(2 f'(x_n)) ].
Osada [10] developed a third-order method using the second derivative:
(8) x_{n+1} = x_n − (1/2) m(m+1) u_n + (1/2)(m−1)² f'(x_n)/f''(x_n),
where u_n = f(x_n)/f'(x_n).

Another third-order method [11], based on King's fifth-order method (for simple roots) [12], is the Euler-Chebyshev method of order three:
(9) x_{n+1} = x_n − [m(3−m)/2] f(x_n)/f'(x_n) − (m²/2) f²(x_n) f''(x_n)/f'³(x_n).

Recently, Chun and Neta [13] developed a third-order method using the second derivative:
(10) x_{n+1} = x_n − 2m² f²(x_n) f''(x_n) / [ m(3−m) f(x_n) f'(x_n) f''(x_n) + (m−1)² f'³(x_n) ].

All of the previous methods use the second derivative of the function to obtain a higher order of convergence. The objective of the new methods is to avoid the use of the second derivative.

The new methods are based on a mixture of Lagrange's and Hermite's interpolations, not on Hermite's interpolation alone; this is the novelty of the new methods. The interpolation process is a conventional tool for iterative methods; see [5, 7]. However, this tool has recently been applied in several ways. For example, in [14], Cordero and Torregrosa presented a family of Steffensen-type methods of fourth-order convergence for solving nonlinear smooth equations by using a linear combination of divided differences to achieve a better approximation to the derivative. Zheng et al. [15] proposed a general family of Steffensen-type methods with optimal order of convergence by using Newton's iteration for the direct Newtonian interpolation. In [16], Petković et al. investigated a general way to construct multipoint methods for solving nonlinear equations by using inverse interpolation. In [17], Džunić et al. presented a new family of three-point derivative-free methods by using a self-correcting parameter, calculated by applying the secant-type method in three different ways, together with Newton's interpolatory polynomial of second degree.

The three new methods (for simple roots) in this paper use three functional evaluations and have fourth-order convergence; thus, they are optimal methods, and their efficiency index is E = 4^{1/3} ≈ 1.587, which is greater than the efficiency index of Newton's method, E = 2^{1/2} ≈ 1.414. In the case of multiple roots, the method developed here is cubically convergent and uses three functional evaluations without requiring the second derivative of the function. Thus, the method performs better than Newton's modified method and the methods above, whose efficiency index is E = 3^{1/3} ≈ 1.4422.

2. Development of the Methods

In this paper, we consider iterative methods to find a simple root r ∈ I of a nonlinear equation f(x) = 0, where f: I ⊆ ℝ → ℝ is a scalar function on an open interval I. We suppose that f is sufficiently differentiable and f'(x) ≠ 0 for x ∈ I; since r is a simple root, we can define g = f⁻¹ on f(I). Taking x₀ ∈ I close to r and supposing that x_n has been chosen, we define
(11) z_n = x_n − f(x_n)/f'(x_n), K = f(x_n), L = f(z_n).

2.1. First Method FAM1

Consider the polynomial
(12) p₂(y) = A(y − K)(y − L) + B(y − K) + C,
with the conditions
(13) p₂(K) = g(K) = x_n, p₂(L) = g(L) = z_n, p₂'(K) = g'(K) = 1/f'(x_n).
Solving the conditions (13) simultaneously and using the common representation of divided differences for Hermite's inverse interpolation,
(14) g[f(x_n)] = x_n, g[f(x_n), f(x_n)] = 1/f'(x_n), g[f(x_n), f(z_n)] = (z_n − x_n)/(f(z_n) − f(x_n)), g[f(x_n), f(x_n), f(z_n)] = (g[f(x_n), f(z_n)] − g[f(x_n), f(x_n)])/(f(z_n) − f(x_n)),
we find
(15) A = g[f(x_n), f(x_n), f(z_n)], B = g[f(x_n), f(z_n)], C = g[f(x_n)],
and the polynomial (12) can be written as
(16) p₂(y) = g[f(x_n), f(x_n), f(z_n)](y − f(x_n))(y − f(z_n)) + g[f(x_n), f(z_n)](y − f(x_n)) + g[f(x_n)].
Setting y = 0 in (16), we obtain a new iterative method (FAM1):
(17) x_{n+1} = g[f(x_n)] − g[f(x_n), f(z_n)] f(x_n) + g[f(x_n), f(x_n), f(z_n)] f(x_n) f(z_n).
It can be written as
(18) x_{n+1} = z_n − f²(x_n) f(z_n) / [f'(x_n)(f(z_n) − f(x_n))²], z_n = x_n − f(x_n)/f'(x_n),
which uses three functional evaluations and has fourth-order convergence.
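The iteration (18) can be sketched in Python as follows (our own illustration; the function names, guards against exact cancellation, and stopping rule are ours, not the paper's):

```python
def fam1(f, df, x0, tol=1e-12, max_iter=25):
    """FAM1: z_n = x_n - f/f', then
    x_{n+1} = z_n - f(x_n)^2 f(z_n) / (f'(x_n) (f(z_n) - f(x_n))^2)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        dfx = df(x)
        z = x - fx / dfx          # Newton predictor (evaluations 1 and 2)
        fz = f(z)                 # third functional evaluation
        if fz == fx:              # guard against division by zero at roundoff level
            return z
        x_new = z - fx**2 * fz / (dfx * (fz - fx)**2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Same test problem as (1): f(x) = x^3 + 4x^2 - 10, r ≈ 1.3652300134
root = fam1(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x, 1.0)
```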

2.2. Second Method FAM2

Consider the polynomial
(19) p₃(y) = A(y − K)²(y − L) + B(y − K)(y − L) + C(y − K) + D,
with the conditions
(20) p₃(K) = g(K) = x_n, p₃(L) = g(L) = z_n, p₃'(K) = g'(K) = 1/f'(x_n).
Since (19) has four coefficients and (20) gives only three conditions, we take B = A(L − 2K) to fix the remaining degree of freedom. Solving the conditions (20) simultaneously and using the common representation of divided differences for Hermite's inverse interpolation, we find
(21) A = g[f(x_n), f(x_n), f(z_n)]/(f(z_n) − 2f(x_n)), B = g[f(x_n), f(x_n), f(z_n)],
(22) C = g[f(x_n), f(z_n)], D = g[f(x_n)],
and the polynomial (19) can be written as
(23) p₃(y) = [g[f(x_n), f(x_n), f(z_n)]/(f(z_n) − 2f(x_n))](y − f(x_n))²(y − f(z_n)) + g[f(x_n), f(x_n), f(z_n)](y − f(x_n))(y − f(z_n)) + g[f(x_n), f(z_n)](y − f(x_n)) + g[f(x_n)].
Then, setting y = 0 in (23), we obtain our second iterative method (FAM2):
(24) x_{n+1} = g[f(x_n)] − g[f(x_n), f(z_n)] f(x_n) + g[f(x_n), f(x_n), f(z_n)] f(x_n) f(z_n) − [g[f(x_n), f(x_n), f(z_n)]/(f(z_n) − 2f(x_n))] f²(x_n) f(z_n).
It can be written as
(25) x_{n+1} = x_n + f²(x_n)/[(f(z_n) − f(x_n)) f'(x_n)] − f²(z_n) f(x_n)(f(z_n) − 3f(x_n))/[(f(z_n) − f(x_n))²(f(z_n) − 2f(x_n)) f'(x_n)], z_n = x_n − f(x_n)/f'(x_n),
which uses three functional evaluations and has fourth-order convergence.
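A minimal Python sketch of the iteration (25) (again our own illustration; names, guards, and stopping rule are ours):

```python
import math

def fam2(f, df, x0, tol=1e-12, max_iter=25):
    """FAM2 step from formula (25), using the three evaluations f(x), f'(x), f(z)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0:
            return x
        z = x - fx / dfx
        fz = f(z)
        d = fz - fx
        if d == 0:                # guard against cancellation at roundoff level
            return z
        x_new = (x + fx**2 / (d * dfx)
                 - fz**2 * fx * (fz - 3*fx) / (d**2 * (fz - 2*fx) * dfx))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Test problem f2 from Table 1: cos(x) - x = 0, r ≈ 0.7390851332
root = fam2(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, 1.0)
```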

2.3. Third Method FAM3

Consider the polynomial
(26) p₃(y) = A(y − K)²(y − L) + B(y − K)(y − L) + C(y − K) + D,
with the conditions
(27) p₃(K) = g(K) = x_n, p₃(L) = g(L) = z_n, p₃'(K) = g'(K) = 1/f'(x_n),
(28) p₃''(K) = −2f(z_n)/[f²(x_n) f'(x_n)],
where we have used the approximation f''(x_n) ≈ 2f(z_n) f'²(x_n)/f²(x_n) together with the identity g''(f(x_n)) = −f''(x_n)/f'³(x_n). Solving the conditions (27) and (28) simultaneously and using the common representation of divided differences for Hermite's inverse interpolation, we have
(29) A = [g[f(x_n), f(x_n)] f(z_n)/f²(x_n) + g[f(x_n), f(x_n), f(z_n)]]/(f(z_n) − f(x_n)),
(30) B = g[f(x_n), f(x_n), f(z_n)], C = g[f(x_n), f(z_n)], D = g[f(x_n)].
Thus, the polynomial (26) can be written as
(31) p₃(y) = A(y − f(x_n))²(y − f(z_n)) + g[f(x_n), f(x_n), f(z_n)](y − f(x_n))(y − f(z_n)) + g[f(x_n), f(z_n)](y − f(x_n)) + g[f(x_n)],
with A as in (29). Setting y = 0 in (31), we have
(32) x_{n+1} = g[f(x_n)] − g[f(x_n), f(z_n)] f(x_n) + g[f(x_n), f(x_n), f(z_n)] f(x_n) f(z_n) − A f²(x_n) f(z_n).
It can be written as
(33) x_{n+1} = x_n + [f(z_n) f(x_n) − f²(x_n) − f²(z_n)]/(f(z_n) − f(x_n))² · f(x_n)/f'(x_n) − f³(z_n)(f(z_n) − 2f(x_n))/[(f(z_n) − f(x_n))³ f'(x_n)], z_n = x_n − f(x_n)/f'(x_n),
which uses three functional evaluations and has fourth-order convergence.
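The iteration (33) can likewise be sketched in Python (our own illustration; names, guards, and stopping rule are ours):

```python
import math

def fam3(f, df, x0, tol=1e-12, max_iter=25):
    """FAM3 step from formula (33), using the three evaluations f(x), f'(x), f(z)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0:
            return x
        z = x - fx / dfx
        fz = f(z)
        d = fz - fx
        if d == 0:                # guard against cancellation at roundoff level
            return z
        x_new = (x + (fz*fx - fx**2 - fz**2) / d**2 * fx / dfx
                 - fz**3 * (fz - 2*fx) / (d**3 * dfx))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Test problem f3 from Table 1: sin(x) - x/2 = 0, r ≈ 1.8954942670
root = fam3(lambda x: math.sin(x) - 0.5*x, lambda x: math.cos(x) - 0.5, 2.0)
```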

2.4. Method FAM4 (Multiple Roots)

Consider the polynomial
(34) p_m(x) = A(x − x_n + w)^m,
where m is the multiplicity of the root and p_m satisfies the conditions
(35) p_m(x_n) = f(x_n), p_m(z_n) = f(z_n),
with z_n = x_n − m f(x_n)/f'(x_n).

Solving the system, we obtain
(36) w = u_m(z_n − x_n)/(1 − u_m),
where
(37) u_m = [f(x_n)/f(z_n)]^{1/m}.
Since p_m vanishes (with multiplicity m) at x = x_n − w, we take
(38) x_{n+1} = x_n − w,
which can be written as
(39) x_{n+1} = z_n + m f(x_n)/f'(x_n) · 1/(1 − u_m), z_n = x_n − m f(x_n)/f'(x_n), u_m = [f(x_n)/f(z_n)]^{1/m},
which uses three functional evaluations and has third-order convergence.
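A Python sketch of the multiple-root iteration (39) (our own illustration; the names and the guards, which simply return the Newton-type predictor z when f(z_n) reaches roundoff level, are ours):

```python
def fam4(f, df, m, x0, tol=1e-12, max_iter=50):
    """FAM4 for a root of known multiplicity m, formula (39)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0:
            return x
        z = x - m * fx / dfx                 # modified Newton predictor
        fz = f(z)
        if fz == 0 or fz == fx or fx / fz <= 0:
            return z                         # f(z_n) at roundoff level; accept z_n
        u = (fx / fz) ** (1.0 / m)           # u_m = (f(x_n)/f(z_n))^(1/m)
        x_new = z + m * fx / dfx / (1.0 - u)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = (x^3 + 4x^2 - 10)^3 has a root of multiplicity m = 3 at r ≈ 1.3652300134
g = lambda x: (x**3 + 4*x**2 - 10)**3
dg = lambda x: 3 * (x**3 + 4*x**2 - 10)**2 * (3*x**2 + 8*x)
root = fam4(g, dg, 3, 2.0)
```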

3. Analysis of Convergence

Theorem 1.

Let f: I ⊆ ℝ → ℝ be a sufficiently differentiable function, and let r ∈ I be a simple zero of f(x) = 0 in an open interval I, with f'(x) ≠ 0 on I. If x₀ ∈ I is sufficiently close to r, then the methods FAM1, FAM2, and FAM3, as defined by (18), (25), and (33), have fourth-order convergence.

Proof.

Following a procedure analogous to the derivation of the error in Lagrange's and Hermite's interpolations, the polynomials (12), (19), and (26) in FAM1, FAM2, and FAM3, respectively, have the error
(40) E(y) = g(y) − p_n(y) = [g'''(ξ)/3! − βA](y − K)²(y − L),
for some ξ, where A is the coefficient in (15), (21), and (29) that appears in the polynomial p_n(y) in (12), (19), and (26), respectively.

Then, substituting y = 0 in E(y),
(41) r − x_{n+1} = −[g'''(ξ)/3! − βA] K²L = −[g'''(ξ)/3! − βA] f²(x_n) f(z_n), x_{n+1} − r = [g'''(ξ)/3! − βA] [f'(ξ₁)(x_n − r)]² [f'(ξ₂)(z_n − r)],
with ε_{n+1} = x_{n+1} − r and ε_n = x_n − r, where we have used f(x_n) = f(r) + f'(ξ₁)(x_n − r) = f'(ξ₁)(x_n − r) and, similarly, f(z_n) = f'(ξ₂)(z_n − r). Since z_n was taken from Newton's method, we know that z_n − r = (f''(ξ₃)/2f'(ξ₄)) ε_n² + O(ε_n³). Then we have
(42) ε_{n+1} ≈ [g'''(ξ)/3! − βA] f'²(ξ₁) ε_n² · f'(ξ₂) (f''(ξ₃)/2f'(ξ₄)) ε_n² = [g'''(ξ)/3! − βA] f'²(ξ₁) f'(ξ₂) f''(ξ₃)/(2f'(ξ₄)) ε_n⁴.
Now, in FAM1, we take β = 0; then
(43) ε_{n+1} ≈ (g'''(ξ)/3!) f'²(ξ₁) f'(ξ₂) f''(ξ₃)/(2f'(ξ₄)) ε_n⁴.
In FAM2 and FAM3, we take β = 1; then
(44) ε_{n+1} ≈ [g'''(ξ)/3! − A] f'²(ξ₁) f'(ξ₂) f''(ξ₃)/(2f'(ξ₄)) ε_n⁴.
Thus, FAM1, FAM2, and FAM3 have fourth-order convergence.

Theorem 2.

Let f: I ⊆ ℝ → ℝ be a sufficiently differentiable function, and let r ∈ I be a zero of f(x) = 0 with multiplicity m in an open interval I. If x₀ ∈ I is sufficiently close to r, then the method FAM4 defined by (34)–(39) is cubically convergent.

Proof.

The proof is based on the error of Lagrange's interpolation. Suppose that x_n has been chosen. We can see that
(45) f(x) − p_m(x) = [(f''(ξ₁) − p_m''(ξ₁))/2](x − x_n)(x − z_n),
for some ξ₁ ∈ I.

Taking x = r, expanding p_m(r) around x_{n+1}, and using p_m(x_{n+1}) = 0, we have
(46) −p_m'(ξ₂)(r − x_{n+1}) = [(f''(ξ₁) − p_m''(ξ₁))/2](r − x_n)(r − z_n),
with ξ₁, ξ₂ ∈ I.

Since z_n = x_n − m f(x_n)/f'(x_n), we know that
(47) z_n − r = [(f''(ξ₃) − p_m''(ξ₃))/(2 p_m'(ξ₄))] ε_n²,
for some ξ₃, ξ₄ ∈ I.

Thus,
(48) ε_{n+1} = [(f''(ξ₁) − p_m''(ξ₁))/(2 p_m'(ξ₂))] · [(f''(ξ₃) − p_m''(ξ₃))/(2 p_m'(ξ₄))] ε_n³.
Therefore, FAM4 has third-order convergence.

Note that p_m'' is not zero for m ≥ 2, and this fact keeps the limit lim_{n→∞}(ε_{n+1}/ε_n³) well defined.

4. Numerical Analysis

In this section, we use numerical examples to compare the new methods introduced in this paper with Newton's classical method (NM) and recent methods of fourth-order convergence, such as Noor's method (NOM) with E = 1.587 in [1], Cordero's method (CM) with E = 1.587 in [2], Chun's third-order method (CHM) with E = 1.442 in [3], and Li's fifth-order method (ZM) with E = 1.379 in [4], in the case of simple roots. For multiple roots, we compare the method developed here with the quadratically convergent Newton's modified method (NMM) and with the cubically convergent Halley's method (HM), Osada's method (OM), Euler-Chebyshev's method (ECM), and Chun-Neta's method (CNM). Tables 2 and 4 show the number of iterations (IT) and the number of functional evaluations (NOFE). The results obtained show that the methods presented in this paper are more efficient.

All computations were done using MATLAB 2010. We accept an approximate solution rather than the exact root, depending on the precision (ϵ) of the computer. We use the following stopping criteria for computer programs: (i) |xn+1-xn|<ϵ, (ii) |f(xn+1)|<ϵ. Thus, when the stopping criterion is satisfied, xn+1 is taken as the exact computed root r. For numerical illustrations in this section, we used the fixed stopping criterion ϵ=10-18.

We used the functions in Tables 1 and 3.

Table 1: List of functions for a single root.

f1(x) = x³ + 4x² − 10, r = 1.3652300134140968457608068290
f2(x) = cos(x) − x, r = 0.73908513321516064165531208767
f3(x) = sin(x) − (1/2)x, r = 1.8954942670339809471440357381
f4(x) = sin²(x) − x² + 1, r = 1.4044916482153412260350868178
f5(x) = x e^{x²} − sin²(x) + 3cos(x) + 5, r = −1.2076478271309189270094167584
f6(x) = x² − e^x − 3x + 2, r = 0.257530285439860760455367304944
f7(x) = (x − 1)³ − 2, r = 2.2599210498948731647672106073
f8(x) = (x − 1)² − 1, r = 2
f9(x) = 10x e^{−x²} − 1, r = 1.6796306104284499
f10(x) = (x + 2)e^x − 1, r = −0.4428544010023885831413280000
f11(x) = e^{−x} + cos(x), r = 1.746139530408012417650703089

Table 2: Comparison of the methods for a single root (IT = iterations, NOFE = number of functional evaluations).

f(x)    x0   | IT: NM NOM CHM CM ZM FAM1 FAM2 FAM3 | NOFE: NM NOM CHM CM ZM FAM1 FAM2 FAM3
f1(x)   1    | 6  4  5  4  4  3  3  3 | 12 12 15 12 12  9  9  9
f2(x)   1    | 5  3  4  2  3  3  3  2 | 10  9 12  6  9  9  9  6
f3(x)   2    | 5  2  4  3  3  2  3  2 | 10  6 12  9  9  6  9  6
f4(x)   1.3  | 5  3  4  4  3  3  3  2 | 10  9 12 12  9  9  9  6
f5(x)   -1   | 6  4  5  4  4  4  4  3 | 12 12 15 12 12 12 12  9
f6(x)   2    | 6  4  5  4  4  3  4  3 | 12 12 15 12 12  9 12  9
f7(x)   3    | 7  4  5  4  4  4  3  3 | 14 12 15 12 12 12  9  9
f8(x)   3.5  | 6  3  5  3  4  3  3  3 | 12  9 15  9 12  9  9  9
f9(x)   1    | 6  4  6  4  4  3  3  3 | 12 12 18 12 12  9  9  9
f10(x)  2    | 9  5  7  6  5  4  4  4 | 18 15 21 18 15 12 12 12
f11(x)  0.5  | 5  4  4  4  3  3  3  3 | 10 12 12 12  9  9  9  9

Table 3: List of functions for a multiple root.

f1(x) = (x³ + 4x² − 10)³, r = 1.3652300134140968457608068290 (m = 3)
f2(x) = (sin²(x) − x² + 1)², r = 1.4044916482153412260350868178 (m = 2)
f3(x) = (x² − e^x − 3x + 2)⁵, r = 0.2575302854398607604553673049 (m = 5)
f4(x) = (cos(x) − x)³, r = 0.7390851332151606416553120876 (m = 3)
f5(x) = ((x − 1)³ − 1)⁶, r = 2 (m = 6)
f6(x) = (x e^{x²} − sin²(x) + 3cos(x) + 5)⁴, r = −1.207647827130918927009416758 (m = 4)
f7(x) = (sin(x) − (1/2)x)², r = 1.8954942670339809471440357381 (m = 2)

Table 4: Comparison of the methods for a multiple root (IT = iterations, NOFE = number of functional evaluations).

f(x)    x0   | IT: NMM HM OM ECM CNM FAM4 | NOFE: NMM HM OM ECM CNM FAM4
f1(x)   2    |  5  3  3  3  3  3 |  10  9  9  9  9  9
f1(x)   1    |  5  3  4  3  3  3 |  10  9 12  9  9  9
f1(x)   -0.3 | 54  5  5  5  5  4 | 108 15 15 15 15 12

f2(x)   2.3  |  6  4  4  4  4  4 |  12 12 12 12 12 12
f2(x)   2    |  6  4  4  4  4  4 |  12 12 12 12 12 12
f2(x)   1.5  |  5  4  4  4  3  3 |  10 12 12 12  9  9

f3(x)   0    |  4  3  3  3  3  2 |   8  9  9  9  9  6
f3(x)   1    |  4  4  4  4  4  3 |   8 12 12 12 12  9
f3(x)   -1   |  5  4  4  4  4  3 |  10 12 12 12 12  9

f4(x)   1.7  |  5  4  4  4  4  3 |  10 12 12 12 12  9
f4(x)   1    |  4  3  3  3  3  3 |   8  9  9  9  9  9
f4(x)   -3   | 99  8  8  8  8  7 | 198 24 24 24 24 21

f5(x)   3    |  6  4  5  5  4  4 |  12 12 15 15 12 12
f5(x)   -1   | 10 11 24 23  5  6 |  20 33 72 79 15 18
f5(x)   5    |  8  9 15 15  5  5 |  16 27 45 45 15 15

f6(x)   -2   |  8  5  6  6  6  5 |  16 15 18 18 18 15
f6(x)   -1   |  5  3  4  3  3  5 |  10  9 12  9  9 15
f6(x)   -3   | 14  9 10 10 10  9 |  28 27 30 30 30 27

f7(x)   1.7  |  5  3  4  3  4  4 |  10  9 12  9 12 12
f7(x)   2    |  4  3  3  3  3  3 |   8  9  9  9  9  9
f7(x)   3    |  5  3  3  3  3  3 |  10  9  9  9  9  9

The computational results presented in Tables 2 and 4 show that, in almost all cases, the presented methods converge more rapidly than Newton's method, Newton's modified method, and the previously published methods, for both simple and multiple roots. The new methods require fewer functional evaluations, which means they are computationally more efficient than Newton's method and the other methods compared; furthermore, the method FAM3 produces the best results. For most of the functions we tested, the obtained methods perform at least as well as the other known methods of the same order.

5. Conclusions

In this paper, we introduce three new optimal fourth-order iterative methods to solve nonlinear equations. The analysis of convergence shows that the three new methods have fourth-order convergence; they use three functional evaluations, and thus, according to Kung-Traub's conjecture, they are optimal methods. In the case of multiple roots, the method developed here is cubically convergent and uses three functional evaluations without using the second derivative. Numerical analysis shows that these methods perform better than Newton's classical method, Newton's modified method, and other recent methods of third-order (multiple roots) and fourth-order (simple roots) convergence.

Acknowledgments

The authors wish to acknowledge the valuable participation of Professor Nicole Mercier and Professor Joelle Ann Labrecque in the proofreading of this paper. This paper is the result of the research project "Análisis Numérico de Métodos Iterativos Óptimos para la Solución de Ecuaciones No Lineales" ("Numerical Analysis of Optimal Iterative Methods for the Solution of Nonlinear Equations"), developed at the Universidad del Istmo, Campus Tehuantepec, by Researcher-Professor Gustavo Fernández-Torres.

References

[1] M. A. Noor and W. A. Khan, "Fourth-order iterative method free from second derivative for solving nonlinear equations," Applied Mathematical Sciences, vol. 6, no. 93–96, pp. 4617–4625, 2012.
[2] A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, "New modifications of Potra-Pták's method with optimal fourth and eighth orders of convergence," Journal of Computational and Applied Mathematics, vol. 234, no. 10, pp. 2969–2976, 2010.
[3] C. Chun, "A geometric construction of iterative formulas of order three," Applied Mathematics Letters, vol. 23, no. 5, pp. 512–516, 2010.
[4] Z. Li, C. Peng, T. Zhou, and J. Gao, "A new Newton-type method for solving nonlinear equations with any integer order of convergence," Journal of Computational Information Systems, vol. 7, no. 7, pp. 2371–2378, 2011.
[5] H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the Association for Computing Machinery, vol. 21, pp. 643–651, 1974.
[6] A. M. Ostrowski, Solution of Equations and Systems of Equations, Academic Press, New York, NY, USA, 1966.
[7] A. Ralston and P. Rabinowitz, A First Course in Numerical Analysis, McGraw-Hill, 1978.
[8] E. Halley, "A new, exact and easy method of finding the roots of equations generally and that without any previous reduction," Philosophical Transactions of the Royal Society of London, vol. 18, pp. 136–148, 1694.
[9] E. Hansen and M. Patrick, "A family of root finding methods," Numerische Mathematik, vol. 27, no. 3, pp. 257–269, 1977.
[10] N. Osada, "An optimal multiple root-finding method of order three," Journal of Computational and Applied Mathematics, vol. 51, no. 1, pp. 131–133, 1994.
[11] H. D. Victory and B. Neta, "A higher order method for multiple zeros of nonlinear functions," International Journal of Computer Mathematics, vol. 12, no. 3-4, pp. 329–335, 1983.
[12] R. F. King, "A family of fourth order methods for nonlinear equations," SIAM Journal on Numerical Analysis, vol. 10, pp. 876–879, 1973.
[13] C. Chun and B. Neta, "A third-order modification of Newton's method for multiple roots," Applied Mathematics and Computation, vol. 211, no. 2, pp. 474–479, 2009.
[14] A. Cordero and J. R. Torregrosa, "A class of Steffensen type methods with optimal order of convergence," Applied Mathematics and Computation, vol. 217, no. 19, pp. 7653–7659, 2011.
[15] Q. Zheng, J. Li, and F. Huang, "An optimal Steffensen-type family for solving nonlinear equations," Applied Mathematics and Computation, vol. 217, no. 23, pp. 9592–9597, 2011.
[16] M. S. Petković, J. Džunić, and B. Neta, "Interpolatory multipoint methods with memory for solving nonlinear equations," Applied Mathematics and Computation, vol. 218, no. 6, pp. 2533–2541, 2011.
[17] J. Džunić, M. S. Petković, and L. D. Petković, "Three-point methods with and without memory for solving nonlinear equations," Applied Mathematics and Computation, vol. 218, no. 9, pp. 4917–4927, 2012.