Mathematical Problems in Engineering
Volume 2011, Article ID 469512
doi:10.1155/2011/469512
Hindawi Publishing Corporation

Research Article

A General Three-Step Class of Optimal Iterations for Nonlinear Equations

Fazlollah Soleymani,1 Solat Karimi Vanani,2 and Abtin Afghani2

1 Young Researchers Club, Islamic Azad University, Zahedan Branch, Zahedan 98168, Iran
2 Department of Mathematics, Islamic Azad University, Zahedan Branch, Zahedan 98168, Iran

Academic Editor: Hung Nguyen-Xuan

Received 7 August 2011; Accepted 16 September 2011; Published 23 November 2011

Copyright © 2011 Fazlollah Soleymani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Many engineering problems reduce to solving a nonlinear equation numerically; as a result, special attention has been given in the literature to efficient and accurate root solvers. Inspired and motivated by the ongoing research in this area, this paper establishes an efficient general class of root solvers in which, per computing step, three evaluations of the function and one evaluation of the first-order derivative are used to achieve the optimal order of convergence eight. The without-memory methods from the developed class possess the optimal efficiency index 1.682. In order to show the applicability and validity of the class, some numerical examples are discussed.

1. Introduction

The numerical solution of nonlinear scalar equations plays a crucial role in many optimization and engineering problems. For example, many engineering systems can be modeled as neutral delay differential equations (NDDEs), which involve a time delay in the derivative of the highest order, in contrast to retarded delay differential equations (RDDEs), which do not. To illustrate, consider a system consisting of a mass mounted on a linear spring to which a pendulum is attached via a hinged massless rod; such a system is used to predict the dynamic response of structures to external forces by means of a set of actuators, and it is modeled as an NDDE when the delay in the actuators is taken into consideration [1]. The stability of a delay differential equation can then be investigated on the basis of the root location of its characteristic function. This simple example shows the importance of numerical root solvers in engineering problems.

There are numerical methods that find one root at a time, such as Newton's iteration and its variants, and schemes that find all the roots at once, namely, simultaneous methods such as the Weierstrass method [2]. Recently, many journals, such as Numerical Algorithms, Mathematical Problems in Engineering, and Applied Mathematics and Computation, have published new findings in this active topic of study; see the works cited in this paper and the references therein. To briefly present some of the newest findings in this field, we mention the following.

Noor et al. [3] developed the following fourth-order iterative scheme, consisting of three steps and eight evaluations per full iteration:

y_n = x_n - f(x_n)/f'(x_n),
z_n = y_n - 4f(y_n) / [f'(x_n) + 2f'((x_n + y_n)/2) + f'(y_n)],          (1.1)
x_{n+1} = z_n - 4f(z_n) / [f'(x_n) + 2f'((x_n + z_n)/2) + f'(z_n)].
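As a sketch (ours, not from the paper), the scheme (1.1) can be transcribed directly into code; the test equation x^3 - 2 = 0 and the starting point 1.0 are illustrative choices.

```python
def noor_step(f, df, x):
    """One full iteration of the three-step scheme (1.1)."""
    y = x - f(x) / df(x)                                        # Newton predictor
    z = y - 4 * f(y) / (df(x) + 2 * df((x + y) / 2) + df(y))
    return z - 4 * f(z) / (df(x) + 2 * df((x + z) / 2) + df(z))

# Illustrative use: solve x^3 - 2 = 0 (root 2^(1/3) ~ 1.2599).
f = lambda x: x**3 - 2
df = lambda x: 3 * x**2
x = 1.0
for _ in range(4):
    x = noor_step(f, df, x)
```

Since each iteration calls f three times and f' five times, the eight evaluations per cycle are visible directly in the code.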

In 2010, an eighth-order method was provided in [6] by using Ostrowski's method in the first two steps of a three-step cycle as follows:

y_n = x_n - f(x_n)/f'(x_n),
z_n = y_n - [f(x_n)/(f(x_n) - 2f(y_n))] · [f(y_n)/f'(x_n)],          (1.2)
x_{n+1} = z_n - [1 + f(z_n)/f(x_n) + (f(z_n)/f(x_n))^2] · f[x_n, y_n] f(z_n) / (f[x_n, z_n] f[y_n, z_n]),

wherein f[x_0, x_1, ..., x_k] denotes the divided differences of the function f.
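A compact sketch of (1.2) (our illustration, with an illustrative test problem); note how the divided differences in the third step reuse already-computed function values, so that only four evaluations per iteration are needed.

```python
def dd(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(a) - f(b)) / (a - b)

def ostrowski8_step(f, df, x):
    """One iteration of the eighth-order scheme (1.2)."""
    fx = f(x)
    y = x - fx / df(x)                        # Newton step
    fy = f(y)
    z = y - fx / (fx - 2 * fy) * fy / df(x)   # Ostrowski step
    fz = f(z)
    t = fz / fx
    return z - (1 + t + t * t) * dd(f, x, y) * fz / (dd(f, x, z) * dd(f, y, z))

f = lambda x: x**3 - 2
df = lambda x: 3 * x**2
x = 1.0
for _ in range(2):
    x = ostrowski8_step(f, df, x)
```

Two iterations from the crude guess 1.0 already drive the residual to machine precision in double arithmetic, consistent with eighth-order convergence.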

Soleymani and Mousavi [7] suggested a without-memory iterative scheme including three steps and only four functional evaluations per iteration as follows:

y_n = x_n - f(x_n)/f'(x_n),
z_n = x_n + [f(x_n) + f(y_n)]/f'(x_n) - 2f(x_n)^2 / [f'(x_n)(f(x_n) - f(y_n))],          (1.3)
x_{n+1} = z_n - f(z_n) {(1 + (f(z_n)/f(y_n))^2)(1 + 2f(z_n)/f(x_n))(1 - 6(f(y_n)/f(x_n))^3 - A) × B} / (f[z_n, y_n] + f[z_n, x_n, x_n](z_n - y_n)),

where A denotes 9(f(y_n)/f(x_n))^4 and B denotes (1 + (f(z_n)/f(x_n))^2)(1 + (f(y_n)/f(x_n))^3).

For further reading, one may consult the survey literature, where a complete review of the methods given from 2000 to 2010 is furnished, as well as sources providing a background on the applications of such root solvers. We remark here that the efficiency of different methods can be assessed by the efficiency index, defined as p^(1/n), wherein p is the order of convergence and n is the total number of evaluations per iteration. We should also remark that Kung and Traub [10] conjectured that an iterative scheme without memory using n evaluations per cycle can attain at most the order of convergence 2^(n-1). Any without-memory iteration that attains this bound is called an optimal method in the literature.
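For instance, the two efficiency indices quoted in this paper follow directly from the definition (a trivial illustration of ours):

```python
def efficiency_index(p, n):
    """Efficiency index p**(1/n): order p achieved with n evaluations per iteration."""
    return p ** (1.0 / n)

newton = efficiency_index(2, 2)    # Newton: quadratic, two evaluations (f and f')
optimal8 = efficiency_index(8, 4)  # optimal eighth order with four evaluations
```

Here optimal8 evaluates to about 1.682, above Newton's 1.414, which is the quantitative sense in which the class below improves on Newton's method.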

Having provided a short background in this section, we give the main contribution in Section 2, where the convergence study of our general three-step class is also furnished and different optimal three-step iterations are produced from the contributed class. Section 3 discusses numerical comparisons with existing methods from the literature, and finally Section 4 draws the conclusions of this research paper.

2. New Class of Iteration Methods

In order to give a general class of methods consistent with the optimality conjecture of Kung and Traub, an eighth-order iterative scheme without memory should be constructed in this section such that only four evaluations per computing step are used. Such schemes are also known as predictor-corrector methods, in which the first step (Newton's step) is the predictor, while the other two steps correct the obtained solution. To achieve our goal, we consider the following three-step scheme, whose first two steps are King's fourth-order family with one free real parameter β:

y_n = x_n - f(x_n)/f'(x_n),
z_n = y_n - [f(y_n)/f'(x_n)] · [f(x_n) + β f(y_n)] / [f(x_n) + (β - 2) f(y_n)],          (2.1)
x_{n+1} = z_n - f(z_n)/f'(z_n).

Clearly, in (2.1), f'(z_n) should be annihilated so that the order of convergence remains at the highest level with the smallest number of evaluations per iteration. Toward this end, we approximate it by a polynomial of degree two that fits f'(x_n), f(y_n), and f(z_n). Therefore, we take into account

f(t) ≈ A(t) = a_0 + a_1(t - y_n) + a_2(t - y_n)^2,

whence A'(t) = a_1 + 2a_2(t - y_n). Subsequently, by imposing f(y_n) = A(y_n), f'(x_n) = A'(x_n), and f(z_n) = A(z_n), we attain a_0 = f(y_n) and

a_1 + 2a_2(x_n - y_n) = f'(x_n),          (2.2)
a_1 + a_2(z_n - y_n) = (f(z_n) - f(y_n))/(z_n - y_n) = f[z_n, y_n].

Solving the system (2.2) of two linear equations in the two unknowns gives us a_1 and a_2. Using the obtained relations for the unknowns in the approximation f'(z_n) ≈ A'(z_n) = a_1 + 2a_2(z_n - y_n) and simplifying, we have

f'(z_n) ≈ [2f[z_n, y_n](x_n - z_n) + (z_n - y_n) f'(x_n)] / (2x_n - z_n - y_n).          (2.3)
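Since A(t) is the unique quadratic matching two values and one slope, the approximation (2.3) reproduces f'(z_n) exactly whenever f itself is a quadratic polynomial, and for general smooth f its error shrinks as the three points cluster. A quick numerical sanity check (our sketch, with hypothetical sample points rather than iterates of the method):

```python
import math

def approx_dfz(f, dfx, x, y, z):
    """Right-hand side of (2.3): f'(z) ~ (2 f[z,y](x-z) + (z-y) f'(x)) / (2x - z - y)."""
    fzy = (f(z) - f(y)) / (z - y)   # divided difference f[z, y]
    return (2 * fzy * (x - z) + (z - y) * dfx) / (2 * x - z - y)

# Exact for a quadratic: the fitted parabola A coincides with f.
q = lambda t: 3 * t**2 + 2 * t + 1
dq = lambda t: 6 * t + 2
approx = approx_dfz(q, dq(1.0), 1.0, 2.0, 4.0)  # equals q'(4) = 26

# Small error for a general smooth f at nearby points:
a = approx_dfz(math.sin, math.cos(0.30), 0.30, 0.32, 0.33)  # ~ cos(0.33)
```

In the quadratic case the formula is exact up to rounding; in the sin case the discrepancy from cos(0.33) is already below 1e-3 for this spacing.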

Considering (2.3) in (2.1) and using the weight function approach, we have the following general class of three-step without-memory iterations:

y_n = x_n - f(x_n)/f'(x_n),
z_n = y_n - [f(y_n)/f'(x_n)] · [f(x_n) + β f(y_n)] / [f(x_n) + (β - 2) f(y_n)],          (2.4)
x_{n+1} = z_n - [f(z_n)(2x_n - z_n - y_n) / (2f[z_n, y_n](x_n - z_n) + (z_n - y_n) f'(x_n))] · {G(t) + H(τ) + Q(γ)},

wherein G(t), H(τ), and Q(γ) are three real-valued weight functions with t = f(z)/f(y), τ = f(z)/f(x), and γ = f(y)/f(x) (without the index n), which should be chosen such that the order of convergence arrives at the optimal level eight. We summarize this in the following theorem.
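The class (2.4) can be transcribed directly into code with the weight functions supplied as parameters. The sketch below is ours, not from the paper; the weights chosen are the Form 1 choices of Table 1, which satisfy the conditions (2.5) of the theorem below, and the test equation x^3 - 2 = 0 with starting point 1.2 and β = 1 is purely illustrative.

```python
def general_step(f, df, x, beta, G, H, Q):
    """One iteration of the three-step class (2.4)."""
    fx = f(x)
    dfx = df(x)
    y = x - fx / dfx                          # Newton predictor
    fy = f(y)
    if fy == 0.0:
        return y
    z = y - fy / dfx * (fx + beta * fy) / (fx + (beta - 2) * fy)   # King's step
    fz = f(z)
    if fz == 0.0 or z == y:
        return z
    fzy = (fz - fy) / (z - y)                 # divided difference f[z, y]
    u = fz * (2 * x - z - y) / (2 * fzy * (x - z) + (z - y) * dfx)
    w = G(fz / fy) + H(fz / fx) + Q(fy / fx)  # weight sum {G(t) + H(tau) + Q(gamma)}
    return z - u * w

beta = 1.0
G = lambda t: 1 + t * t
H = lambda tau: (9.0 / 6.0) * tau
Q = lambda g: -((9 + 18 * beta) / 6.0) * g**3

f = lambda x: x**3 - 2
df = lambda x: 3 * x**2
x = 1.2
for _ in range(2):
    x = general_step(f, df, x, beta, G, H, Q)
```

The early returns guard the degenerate cases that arise once the iterate is already at the root in floating-point arithmetic.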

Theorem 2.1.

Let α ∈ D be a simple zero of a sufficiently differentiable function f : D ⊆ ℝ → ℝ on an open interval D, which contains x_0 as the initial approximation of α. Then the three-step iteration (2.4), which includes four evaluations per full cycle, has the optimal convergence rate eight when

G(0) = 1,  G'(0) = 0,  |G''(0)| < ∞,
H(0) = 0,  H'(0) = 9/6,  |H''(0)| < ∞,          (2.5)
Q(0) = Q'(0) = Q''(0) = 0,  Q'''(0) = -(9 + 18β),  |Q^(4)(0)| < ∞,

and it then satisfies the error equation

e_{n+1} = -(1/24) c_2 (c_2^2(1+2β) - c_3) [6c_2(9c_2c_3 - 4c_4 + 4c_2^3(5 + (4 - 3β)β)) + 12(c_3 - c_2^2(1+2β))^2 G''(0) + c_2^4 Q^(4)(0)] e_n^8 + O(e_n^9).          (2.6)

Proof.

By defining e_n = x_n - α as the error of the iterative scheme at the n-th iterate, applying Taylor's series expansion in (2.4), and taking into account f(α) = 0, we have

f(x_n) = f'(α)[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + c_6 e_n^6 + c_7 e_n^7 + c_8 e_n^8 + O(e_n^9)],          (2.7)

where c_k = (1/k!) f^(k)(α)/f'(α), k ≥ 2. Furthermore, we have

f'(x_n) = f'(α)[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + 5c_5 e_n^4 + 6c_6 e_n^5 + 7c_7 e_n^6 + 8c_8 e_n^7 + O(e_n^8)].          (2.8)

Dividing (2.7) by (2.8) gives us f(x_n)/f'(x_n) = e_n - c_2 e_n^2 + 2(c_2^2 - c_3) e_n^3 + (7c_2c_3 - 4c_2^3 - 3c_4) e_n^4 + ⋯ + O(e_n^8). Again, by substituting this relation in the first step of (2.4) and writing the Taylor series expansion of f(y_n), we obtain, respectively,

y_n = α + c_2 e_n^2 + 2(-c_2^2 + c_3) e_n^3 + (-7c_2c_3 + 4c_2^3 + 3c_4) e_n^4 + ⋯ + O(e_n^8),          (2.9)
f(y_n) = f'(α)[c_2 e_n^2 + 2(-c_2^2 + c_3) e_n^3 + ⋯ + O(e_n^8)].

Furthermore, we find

z_n - α = (-c_2c_3 + c_2^3(1+2β)) e_n^4 - 2(c_3^2 + c_2c_4 - 2c_2^2c_3(2+3β) + c_2^4(2+β(6+β))) e_n^5 + ⋯ + O(e_n^8).          (2.10)

Similarly, we have

f(z_n)(2x_n - z_n - y_n) / (2f[z_n, y_n](x_n - z_n) + (z_n - y_n)f'(x_n))
   = (-c_2c_3 + c_2^3(1+2β)) e_n^4 - 2(c_3^2 + c_2c_4 - 2c_2^2c_3(2+3β) + c_2^4(2+β(6+β))) e_n^5
     + (-7c_3c_4 + 6c_2^2c_4(2+3β) - 2c_2^3c_3(15+42β+8β^2) + 3c_2(-c_5 + c_3^2(6+8β)) + 2c_2^5(5+β(22+β(7+β)))) e_n^6 + ⋯ + O(e_n^8),          (2.11)

z_n - f(z_n)(2x_n - z_n - y_n) / (2f[z_n, y_n](x_n - z_n) + (z_n - y_n)f'(x_n))
   = α - (3/2) c_2c_3(-c_2c_3 + c_2^3(1+2β)) e_n^7
     + (6c_2c_3^3 + 5c_2^2c_3c_4 + c_2^7(1+2β)^2 - (3/4)c_2^3c_3^2(23+32β) - 2c_2^4c_4(1+2β) + (1/4)c_2^5c_3(29+2β(41+6β))) e_n^8 + O(e_n^9).

Moreover, by using (2.11) and (2.5), we attain

[f(z_n)(2x_n - z_n - y_n) / (2f[z_n, y_n](x_n - z_n) + (z_n - y_n)f'(x_n))] {G(f(z_n)/f(y_n)) + H(f(z_n)/f(x_n)) + Q(f(y_n)/f(x_n))}
   = (-c_2c_3 + c_2^3(1+2β)) e_n^4 - 2(c_3^2 + c_2c_4 - 2c_2^2c_3(2+3β) + c_2^4(2+β(6+β))) e_n^5
     + (-7c_3c_4 + 6c_2^2c_4(2+3β) - 2c_2^3c_3(15+42β+8β^2) + 3c_2(-c_5 + c_3^2(6+8β)) + 2c_2^5(5+β(22+β(7+β)))) e_n^6 + ⋯ + O(e_n^8).          (2.12)

Considering this new relation (2.12) together with (2.11) and (2.5) in the last step of (2.4) ends in

e_{n+1} = x_{n+1} - α = -(1/24) c_2 (c_2^2(1+2β) - c_3) [6c_2(9c_2c_3 - 4c_4 + 4c_2^3(5 + (4 - 3β)β)) + 12(c_3 - c_2^2(1+2β))^2 G''(0) + c_2^4 Q^(4)(0)] e_n^8 + O(e_n^9).          (2.13)

This concludes the proof and shows that our suggested general class of three-step without-memory methods (2.4)-(2.5) possesses the eighth order of convergence.

Remark 2.2.

The class of three-step methods (2.4)-(2.5) requires four evaluations per iteration and has convergence order eight. Therefore, this class is of optimal order and supports the Kung-Traub conjecture [10]. Hence, the efficiency index of the eighth-order derivative-involved methods from the class is 8^(1/4) ≈ 1.682.

Some efficient methods from the contributed optimal three-step class are given below. Per computing step, these methods are free from second- or higher-order derivative computations. Choosing β = -1/2 with G(t) = 1 + t^3, H(τ) = (9/6)τ, and Q(γ) = -(9/4)γ^4 gives the new method

y_n = x_n - f(x_n)/f'(x_n),
z_n = y_n - [f(y_n)/f'(x_n)] (2f(x_n) - f(y_n))/(2f(x_n) - 5f(y_n)),          (2.14)
x_{n+1} = z_n - [f(z_n)(2x_n - z_n - y_n)/(2f[z_n, y_n](x_n - z_n) + (z_n - y_n)f'(x_n))] {1 + (f(z_n)/f(y_n))^3 + (9/6)(f(z_n)/f(x_n)) - (9/4)(f(y_n)/f(x_n))^4},

whose error equation is e_{n+1} = (1/4)c_2^2c_3(9c_2c_3 - 4c_4)e_n^8 + O(e_n^9). Similarly, the choice β = 0 gives

y_n = x_n - f(x_n)/f'(x_n),
z_n = y_n - [f(y_n)/f'(x_n)] f(x_n)/(f(x_n) - 2f(y_n)),          (2.15)
x_{n+1} = z_n - [f(z_n)(2x_n - z_n - y_n)/(2f[z_n, y_n](x_n - z_n) + (z_n - y_n)f'(x_n))] {1 + (f(z_n)/f(y_n))^3 + (9/6)(f(z_n)/f(x_n)) - (9/6)(f(y_n)/f(x_n))^3 - 5(f(y_n)/f(x_n))^4},

whose error equation is e_{n+1} = -(1/4)c_2^2(c_2^2 - c_3)(9c_2c_3 - 4c_4)e_n^8 + O(e_n^9). We also mention some typical forms of the weight functions G(t), H(τ), and Q(γ) in iteration (2.4) that satisfy (2.5) and hence make the order optimal; these forms are listed in Table 1. Other than the very efficient methods (2.14) and (2.15) of optimal order eight, many more three-step without-memory iterations can be constructed by using the forms of Table 1, that is, (2.5) in (2.4), together with different values of the free parameter β. Thus, to save space while still presenting some of the other optimal eighth-order methods arising from (2.4)-(2.5), we list the interesting ones in Table 2. Note that we first require the weight functions to satisfy (2.5), and then we obtain the corresponding error equations from the general error equation (2.13).
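Method (2.14) in executable form (a double-precision sketch we add for illustration; the paper's own experiments use 800-digit arithmetic). The test function f_4 = 2cos(x) + sin(x) - x and the starting point 1.5 are taken from the numerical section below.

```python
import math

def step_214(f, df, x):
    """One iteration of method (2.14): King's step with beta = -1/2 plus the weighted third step."""
    fx = f(x)
    dfx = df(x)
    y = x - fx / dfx
    fy = f(y)
    if fy == 0.0:
        return y
    z = y - fy / dfx * (2 * fx - fy) / (2 * fx - 5 * fy)
    fz = f(z)
    if fz == 0.0 or z == y:
        return z
    fzy = (fz - fy) / (z - y)                 # divided difference f[z, y]
    u = fz * (2 * x - z - y) / (2 * fzy * (x - z) + (z - y) * dfx)
    # Weight {1 + (fz/fy)^3 + (9/6)(fz/fx) - (9/4)(fy/fx)^4} of (2.14):
    w = 1 + (fz / fy) ** 3 + 1.5 * (fz / fx) - 2.25 * (fy / fx) ** 4
    return z - u * w

f = lambda x: 2 * math.cos(x) + math.sin(x) - x
df = lambda x: -2 * math.sin(x) + math.cos(x) - 1
x = 1.5
for _ in range(2):
    x = step_214(f, df, x)
```

Two iterations (eight evaluations in total) already exhaust double precision for this problem; the guards simply return early once the residual vanishes at machine level.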

Future research in this field of study can now be directed to finding optimal sixteenth-order four-step without-memory iterations based on the general class (2.4)-(2.5). Furthermore, producing with-memory iterations according to this class can also be of interest for future studies.

Typical forms of the weight functions satisfying (2.5).

Weight function    Form 1                  Form 2                         Form 3
G(t)               1 + t^2                 1 + μ t^3, μ ≠ 0               (1 + μ t^2)^(1/μ), μ ≠ 0
H(τ)               (9/6) τ                 (9/6) τ + μ τ^2                (9/6) τ / (1 + μ τ)
Q(γ)               -((9+18β)/6) γ^3        -((9+18β)/6) γ^3 + μ γ^4       -((9+18β)/6) γ^3 + μ γ^4 + μ γ^5

t = f(z)/f(y), τ = f(z)/f(x), γ = f(y)/f(x) (without the index n), and β ∈ ℝ.

Interesting choices of β, G''(0), and Q^(4)(0) in (2.13), which provide very efficient optimal root solvers.

Method   β      G''(0)   Q^(4)(0)   Error equation
1        4/3    27/44    -219       e_{n+1} = (1/264) c_2 (11c_2^2 - 3c_3)(88c_2c_4 - 27c_3^2) e_n^8 + O(e_n^9)
2        4/3    0        -120       e_{n+1} = -(1/12) c_2^2 (11c_2^2 - 3c_3)(9c_2c_3 - 4c_4) e_n^8 + O(e_n^9)
3        -1/2   -1       -54        e_{n+1} = (1/4) c_2c_3 (9c_2^2c_3 - 2c_3^2 - 4c_2c_4) e_n^8 + O(e_n^9)
4        -1/2   0        0          e_{n+1} = (1/4) c_2^2c_3 (9c_2(c_2^2 + c_3) - 4c_4) e_n^8 + O(e_n^9)
5        0      -2       -96        e_{n+1} = -(1/4) c_2 (c_2^2 - c_3)(17c_2^2c_3 - 4c_3^2 - 4c_2c_4) e_n^8 + O(e_n^9)
3. Computational Examples

The contribution given in Section 2 is supported here through numerical examples. We check the effectiveness of the novel methods (2.14) and (2.15) from our class of methods. For this reason, we have compared our new methods with Newton's method (NM) and the schemes (1.1), (1.2), and (1.3). The nonlinear test functions are furnished in Table 3. The results of the comparisons are given in Table 4 in terms of the number of significant digits for each test function after the specified number of iterations.

Test functions, their roots, and the starting points.

Test function                      Root                                       Guess
f_1 = sin(tan(x) + x) - 1/2        0.2588298273352688443917065956960          0.4
f_2 = sin(x) - x + 2               2.55419595283704303782966617379187         2
f_3 = x^5 - 2x + 2cos(x)           -1.25644837202179610636347166071080        -1.3
f_4 = 2cos(x) + sin(x) - x         1.37318816571803393434603174397956         1.5
f_5 = 2x^3 - x cos(x) + 1/x^2      -0.94316510784112785413792698729205        -0.8
f_6 = x^9 + x^4 - x^3 + x^2 + 14   -1.41582823891434741754690479611753        -1.4
f_7 = x^9 - 14 sin(x)              -1.33663345183152516454641803558884        -1.31
f_8 = sin(x^2 + x - 3) + x         0.931908190324955905662501231761206        1.5
f_9 = cos(x^2 + x - 3) + x^3       0.671280744928817559650708950129257        1
f_10 = x tan(x) + x                -0.785398163397448309615660845819875       -0.6
f_11 = x^4 + 13x - 1 + tan(x)      0.714180223545189655580041854721814        1
f_12 = x^30 - x - 1 + sin(x)       1.004983564135382106859677582107315        1

Comparison of different methods for finding the simple roots of test functions.

Test function          NM         (1.1)      (1.2)      (1.3)      (2.14)     (2.15)

f_1    IN              8          4          3          3          3          3
       TNE             16         32         12         12         12         12
       |f(x)|          0.2e-273   0.4e-271   0.5e-415   0.7e-369   0.2e-448   0.1e-429

f_2    IN              9          4          3          3          3          3
       TNE             18         32         12         12         12         12
       |f(x)|          0.1e-485   0.8e-238   0.6e-372   0.6e-301   0.4e-409   0.1e-378

f_3    IN              9          4          3          3          3          3
       TNE             18         32         12         12         12         12
       |f(x)|          0.2e-595   0.6e-297   0.3e-608   0.6e-511   0.2e-610   0.2e-603

f_4    IN              8          4          3          3          3          3
       TNE             16         32         12         12         12         12
       |f(x)|          0.5e-393   0.2e-393   0.3e-683   0.2e-645   0.3e-729   0.2e-717

f_5    IN              8          5          3          3          3          3
       TNE             16         40         12         12         12         12
       |f(x)|          0.1e-798   0.1e-431   0.3e-600   0.3e-511   0.8e-641   0.3e-641

f_6    IN              9          4          3          3          3          3
       TNE             18         32         12         12         12         12
       |f(x)|          0.2e-660   0.2e-329   0.2e-685   0.5e-567   0.1e-708   0.7e-729

f_7    IN              9          4          3          3          3          3
       TNE             18         32         12         12         12         12
       |f(x)|          0.1e-538   0.8e-269   0.1e-562   0.3e-444   0.1e-580   0.1e-572

f_8    IN              8          4          3          3          3          3
       TNE             16         32         12         12         12         12
       |f(x)|          0.2e-204   0.3e-148   0.3e-239   0.7e-158   0.6e-205   0.5e-198

f_9    IN              9          4          3          3          3          3
       TNE             18         32         12         12         12         12
       |f(x)|          0.1e-332   0.1e-164   0.1e-273   0.5e-254   0.7e-336   0.1e-315

f_10   IN              8          4          3          3          3          3
       TNE             16         32         12         12         12         12
       |f(x)|          0.4e-329   0.1e-329   0.5e-689   0.2e-579   0.4e-696   0.5e-721

f_11   IN              8          4          3          3          3          3
       TNE             16         32         12         12         12         12
       |f(x)|          0.1e-309   0.6e-285   0.2e-291   0.7e-312   0.3e-219   0.2e-255

f_12   IN              9          4          3          3          3          3
       TNE             18         32         12         12         12         12
       |f(x)|          0.4e-576   0.4e-288   0.3e-619   0.4e-486   0.1e-692   0.5e-651

All computations in this paper were performed in MATLAB 7.6 using variable-precision arithmetic (VPA) to increase the number of significant digits. We have considered the stopping criterion |f(x_n)| ≤ 10^(-800). In Table 4, an entry such as 0.2e-448 means that the absolute value of the given nonlinear function after the listed number of iterations is zero up to 448 decimal places; IN and TNE stand for iteration number and total number of evaluations, respectively. As Table 4 shows, the proposed method (2.14) is preferable to Newton's method and to the fourth- and eighth-order methods considered; it is evident that (2.14) is more robust than its competitors of various orders. We also recall an important caveat in using multipoint iterations: high-order root solvers are very sensitive to initial guesses far from the root, whereas they are very powerful for starting points in the vicinity of the sought zero.
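Even in ordinary double precision one can observe the behavior that Table 4 quantifies in 800-digit arithmetic, namely the rapid growth of the number of correct decimal places per iteration. The following illustration (ours, not from the paper) tracks Newton's method on f_4, whose root and starting point are taken from Table 3; the digit count roughly doubles each step, while for an eighth-order method it would grow roughly eight times faster.

```python
import math

f = lambda x: 2 * math.cos(x) + math.sin(x) - x
df = lambda x: -2 * math.sin(x) + math.cos(x) - 1
root = 1.37318816571803393434603174397956   # from Table 3

x = 1.5
digits = []   # correct decimal places after each Newton step
for _ in range(4):
    x -= f(x) / df(x)
    digits.append(-math.log10(abs(x - root) + 1e-300))
```

In a typical run the digit counts come out near 2.5, 5.5, 11.7, and then the double-precision ceiling of about 16, which mirrors, on a small scale, the digit counts reported in Table 4.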

Remark 3.1.

If many nonlinear equations must be solved, for example those arising from large systems of discretized boundary-value problems, then the cost of function evaluations becomes important. Therefore, the proposed class (2.4)-(2.5) is valuable for solving such problems.

4. Concluding Remarks

In recent years, numerous works have focused on the development of more advanced and efficient methods for nonlinear scalar equations. Many methods improving the convergence rate of Newton's method have been proposed, yet a practical drawback of many of them is a slow rate of convergence relative to their cost per iteration. This paper has developed and established a rapid class of eighth-order iterative methods. Per iteration, the methods from our class require three evaluations of the function and one evaluation of its first derivative; therefore, the efficiency index of the methods equals 8^(1/4) ≈ 1.682, which is better than the 2^(1/2) ≈ 1.414 of the classical Newton's method. Kung and Traub [10] conjectured that a multipoint iteration without memory based on n evaluations of f or its derivatives can achieve at most the optimal convergence order 2^(n-1). Newton's method is an example that agrees with the Kung-Traub conjecture for n = 2, and the class of methods (2.4)-(2.5) is another example that agrees with the conjecture for n = 4. Thus, the suggested class (2.4)-(2.5) is effective and should attract the attention of researchers.

References

[1] Z. H. Wang, "Numerical stability test of neutral delay differential equations," Mathematical Problems in Engineering, vol. 2008, Article ID 698043, 2008.
[2] N. A. Mir, R. Muneer, and I. Jabeen, "Some families of two-step simultaneous methods for determining zeros of nonlinear equations," ISRN Applied Mathematics, vol. 2011, Article ID 817174, 2011.
[3] M. A. Noor, K. I. Noor, E. Al-Said, and M. Waseem, "Some new iterative methods for nonlinear equations," Mathematical Problems in Engineering, vol. 2010, Article ID 198943, 12 pages, 2010.
[4] P. Sargolzaei and F. Soleymani, "Accurate fourteenth-order methods for solving nonlinear equations," Numerical Algorithms, vol. 58, pp. 513-527, 2011.
[5] F. Soleymani, M. Sharifi, and B. S. Mousavi, "An improvement of Ostrowski's and King's techniques with optimal convergence order eight," Journal of Optimization Theory and Applications, in press.
[6] J. R. Sharma and R. Sharma, "A new family of modified Ostrowski's methods with accelerated eighth order convergence," Numerical Algorithms, vol. 54, no. 4, pp. 445-458, 2010.
[7] F. Soleymani and B. S. Mousavi, "A novel computational technique for finding simple roots of nonlinear equations," International Journal of Mathematical Analysis, vol. 5, pp. 1813-1819, 2011.
[8] A. Iliev and N. Kyurkchiev, Nontrivial Methods in Numerical Analysis (Selected Topics in Numerical Analysis), Lambert Academic Publishing, 2010.
[9] G. Adomian, Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic Publishers, Boston, Mass, USA, 1994.
[10] H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the Association for Computing Machinery, vol. 21, pp. 643-651, 1974.