Based on the Traub-Steffensen method, we present a derivative free three-step family of sixth-order methods for solving systems of nonlinear equations. The local convergence order of the family is determined using the first-order divided difference operator for functions of several variables and direct computation by Taylor's expansion. Computational efficiency is discussed, and the efficiencies of the proposed techniques are compared with those of existing ones. Numerical tests are performed to compare the methods of the proposed family with existing methods and to confirm the theoretical results. It is shown that the new family is especially efficient in solving large systems.
1. Introduction
The problem of finding a solution of the system of nonlinear equations F(x)=0, where F:D⊂Rn→Rn and D is an open convex domain, by iterative methods is an important and challenging task in numerical analysis and many applied scientific branches. One of the basic procedures for solving nonlinear equations is the quadratically convergent Newton method (see [1, 2]):
(1) x(k+1) = x(k) - [F′(x(k))]^-1F(x(k)), k = 0,1,2,…,
where [F′(x)]^-1 is the inverse of the first Fréchet derivative F′(x) of the function F(x).
In many practical situations, it is preferable to avoid the calculation of derivative F′(x) of the function F(x). In such situations, it is preferable to use only the computed values of F(x) and to approximate F′(x) by employing the values of F(x) at suitable points. For example, a basic derivative free iterative method is the Traub-Steffensen method [3], which also converges quadratically and follows the scheme
(2) x(k+1) = G1,2(x(k)) = x(k) - [w(k),x(k);F]^-1F(x(k)),
where [w(k),x(k);F]^-1 is the inverse of the first-order divided difference [w(k),x(k);F] of F, w(k) = x(k) + γF(x(k)), and γ is an arbitrary nonzero constant. Throughout this paper, Gi,p is used to denote the ith iteration function of convergence order p. For γ=1, the scheme (2) reduces to the well-known Steffensen method [4].
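As an illustration in the scalar case, the Traub-Steffensen iteration (2) can be sketched in a few lines of Python; the function name, the default γ = 0.01, and the test equation x^2 - 2 = 0 below are our own illustrative choices, not taken from the paper:

```python
def traub_steffensen(f, x0, gamma=0.01, tol=1e-12, maxit=50):
    """Scalar Traub-Steffensen iteration: x_{k+1} = x_k - f(x_k)/[w_k, x_k; f]."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx                 # w_k = x_k + gamma * f(x_k)
        dd = (f(w) - fx) / (w - x)         # first-order divided difference [w_k, x_k; f]
        x = x - fx / dd                    # quadratically convergent step
    return x
```

For example, traub_steffensen(lambda t: t*t - 2.0, 1.5) converges to √2 in a handful of iterations, without ever evaluating f′.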
In recent years, many derivative free higher-order methods of great efficiency have been developed for solving the scalar equation f(x)=0; see [5–14] and the references therein. For systems of nonlinear equations, however, the construction of efficient higher-order derivative free methods is a difficult task, and therefore not many such methods can be found in the literature. Recently, based on Steffensen's scheme, that is, taking γ=1 in (2), a family of seventh-order methods was proposed in [13]. Some important members of this family, as shown in [13], are given as follows:
(3) y(k) = G1,2(x(k)),
z(k) = G1,4(x(k), y(k)) = y(k) - ([y(k),x(k);F] + [y(k),w(k);F] - [w(k),x(k);F])^-1F(y(k)),
x(k+1) = G1,7(x(k), y(k), z(k)) = z(k) - ([z(k),x(k);F] + [z(k),y(k);F] - [y(k),x(k);F])^-1F(z(k)),
(4) y(k) = G1,2(x(k)),
z(k) = G2,4(x(k), y(k)) = y(k) - [y(k),x(k);F]^-1([y(k),x(k);F] - [y(k),w(k);F] + [w(k),x(k);F])[y(k),x(k);F]^-1F(y(k)),
x(k+1) = G2,7(x(k), y(k), z(k)) = z(k) - ([z(k),x(k);F] + [z(k),y(k);F] - [y(k),x(k);F])^-1F(z(k)).
Per iteration, both methods use four function evaluations, five first-order divided differences, and three matrix inversions. A notable feature of these algorithms is their simple design, which makes them easy to implement for solving systems of nonlinear equations. Here, the fourth-order method G1,4(x(k),y(k)) is the generalization of the method proposed by Ren et al. in [5], and G2,4(x(k),y(k)) is the generalization of the method by Liu et al. [6].
In this paper, our aim is to develop derivative free iterative methods that satisfy the basic requirements of quality numerical algorithms, that is, algorithms with (i) high convergence speed, (ii) minimum computational cost, and (iii) simple design. Accordingly, we propose a derivative free family of sixth-order methods. The scheme is composed of three steps, of which the first two consist of any derivative free fourth-order method with the Traub-Steffensen iteration (2) as its base, whereas the third step is a weighted Traub-Steffensen iteration. The algorithm of the present contribution is as simple as the methods (3) and (4), but with the additional advantage of high computational efficiency, especially when applied to large systems of equations.
The rest of the paper is summarized as follows. The sixth-order scheme with its convergence analysis is presented in Section 2. In Section 3, the computational efficiency of new methods is discussed and is compared with the methods which lie in the same category. Various numerical examples are considered in Section 4 to show the consistent convergence behavior of the methods and to verify the theoretical results. Section 5 contains the concluding remarks.
2. The Method and Its Convergence
Based on the above considerations of a quality numerical algorithm, we begin with the following three-step iteration scheme:
(5) y(k) = G1,2(x(k)),
z(k) = G4(x(k), y(k)),
x(k+1) = G6(x(k), y(k), z(k)) = z(k) - (aI + [w(k),x(k);F]^-1(b[y(k),x(k);F] + c[y(k),w(k);F]))[w(k),x(k);F]^-1F(z(k)),
where G4(x(k),y(k)) denotes any derivative free fourth-order scheme and a, b, c are some parameters to be determined.
In order to find the convergence order of scheme (5), we first define divided difference operator for multivariable function F (see [15]). The divided difference operator of F is a mapping [·,·;F]:D×D⊂Rn×Rn→L(Rn) defined by
(6) [x+h,x;F] = ∫0^1 F′(x+th)dt, ∀x, h∈Rn.
Expanding F′(x+th) in Taylor series at the point x and then integrating, we have
(7) [x+h,x;F] = ∫0^1 F′(x+th)dt = F′(x) + (1/2)F′′(x)h + (1/6)F′′′(x)h^2 + O(h^3),
where h^i = (h, h, …, h) (i times), h∈Rn.
Let e(k) = x(k) - α. Assuming that Γ = [F′(α)]^-1 exists and developing F(x(k)) and its first three derivatives in a neighborhood of α, we have
(8) F(x(k)) = F′(α)(e(k) + A2(e(k))^2 + A3(e(k))^3 + A4(e(k))^4 + A5(e(k))^5 + O((e(k))^6)),
(9) F′(x(k)) = F′(α)(I + 2A2e(k) + 3A3(e(k))^2 + 4A4(e(k))^3 + 5A5(e(k))^4 + O((e(k))^5)),
(10) F′′(x(k)) = F′(α)(2A2 + 6A3e(k) + 12A4(e(k))^2 + 20A5(e(k))^3 + O((e(k))^4)),
(11) F′′′(x(k)) = F′(α)(6A3 + 24A4e(k) + 60A5(e(k))^2 + O((e(k))^3)),
where Ai = (1/i!)ΓF(i)(α) ∈ Li(Rn,Rn) and (e(k))^i = (e(k), e(k), …, e(k)) (i times), e(k)∈Rn. The Ai are symmetric operators that are used later on.
We can now analyze the behavior of the scheme (5) through the following theorem.
Theorem 1.
Let the function F:D⊂Rn→Rn be sufficiently differentiable in an open neighborhood D of its zero α, and let G4(x(k),y(k)) be a fourth-order iteration function satisfying
(12) ez(k) = z(k) - α = B0(e(k))^4 + O((e(k))^5),
where B0∈L4(Rn,Rn) and e(k)=x(k)-α. If an initial approximation x(0) is sufficiently close to α, then the local order of convergence of method (5) is at least 6 provided a=3, b=-1, and c=-1.
Proof.
Let ew(k)=w(k)-α=x(k)+γF(x(k))-α. Then, using (8), it follows that
(13) ew(k) = (I + γF′(α))e(k) + γF′(α)A2(e(k))^2 + γF′(α)A3(e(k))^3 + O((e(k))^4).
Employing (7) for x+h=w(k), x=x(k), and h=ew(k)-e(k) and then using (9)–(11), we write
(14) [w(k),x(k);F] = F′(α)[I + A2(ew(k) + e(k)) + A3((ew(k))^2 + ew(k)e(k) + (e(k))^2) + A4((ew(k))^3 + (ew(k))^2e(k) + ew(k)(e(k))^2 + (e(k))^3) + O((e(k))^4)].
Expanding in formal power series, the inverse of the preceding operator can be written as
(15) [w(k),x(k);F]^-1 = [I - A2(ew(k) + e(k)) + (A2^2 - A3)((ew(k))^2 + (e(k))^2) + (2A2^2 - A3)ew(k)e(k) - (A2^3 - A3A2 - A2A3 + A4)((ew(k))^3 + (e(k))^3) - (3A2^3 - 2A3A2 - 2A2A3 + A4)((ew(k))^2e(k) + ew(k)(e(k))^2) + O((e(k))^4)]Γ.
Using (8) and (15) to the required terms in the first step of (5), we find that
(16) ey(k) = y(k) - α = A2ew(k)e(k) - (A2^2 - A3)((ew(k))^2e(k) + ew(k)(e(k))^2) + O((e(k))^4).
Equation (7) for x+h=y(k), x=x(k), and h=ey(k)-e(k) yields
(17) [y(k),x(k);F] = F′(α)[I + A2(ey(k) + e(k)) + A3(ey(k)e(k) + (e(k))^2) + A4(e(k))^3 + O((e(k))^4)].
Similarly, substituting x+h=y(k), x=w(k), and h=ey(k)-ew(k) in (7), we obtain
(18) [y(k),w(k);F] = F′(α)[I + A2(ey(k) + ew(k)) + A3(ey(k)ew(k) + (ew(k))^2) + A4(ew(k))^3 + O((e(k))^4)].
Then, using (15), (17), and (18) to the required terms, we find that
(19) aI + [w(k),x(k);F]^-1(b[y(k),x(k);F] + c[y(k),w(k);F]) = (a + b + c)I - A2(bew(k) + ce(k)) + (A2^2 - A3)(b(ew(k))^2 + c(e(k))^2) + (b + c)(A2^2 - A3)ew(k)e(k) + (b + c)A2ey(k) + O((e(k))^3).
Postmultiplying (19) by [w(k),x(k);F]^-1 and simplifying, we obtain
(20) θ = (aI + [w(k),x(k);F]^-1(b[y(k),x(k);F] + c[y(k),w(k);F]))[w(k),x(k);F]^-1 = [(a + b + c)I - (a + 2b + c)A2ew(k) - (a + b + 2c)A2e(k) + ((a + 3b + c)A2^2 - (a + 2b + c)A3)(ew(k))^2 + ((a + b + 3c)A2^2 - (a + b + 2c)A3)(e(k))^2 + (a + 2(b + c))(2A2^2 - A3)ew(k)e(k) + (b + c)A2ey(k) + O((e(k))^3)]Γ.
Taylor series of F(z(k)) about α yields
(21) F(z(k)) = F′(α)[ez(k) + A2(ez(k))^2 + O((ez(k))^3)].
Then, using (20) and (21) in the third step of (5), we obtain the error equation as
(22) e(k+1) = ez(k) - θF′(α)[ez(k) + O((ez(k))^2)] = -(a + b + c - 1)ez(k) + (a + 2b + c)A2ew(k)ez(k) + (a + b + 2c)A2ez(k)e(k) - ((a + 3b + c)A2^2 - (a + 2b + c)A3)(ew(k))^2ez(k) - ((a + b + 3c)A2^2 - (a + b + 2c)A3)ez(k)(e(k))^2 - (a + 2(b + c))(2A2^2 - A3)ew(k)ez(k)e(k) - (b + c)A2ey(k)ez(k) + O((e(k))^7).
In order to find a, b, and c, it is sufficient to equate the factors a+b+c-1, a+2b+c, and a+b+2c to zero. Solving the resulting system of equations
(23)a+b+c=1,a+2b+c=0,a+b+2c=0,
we obtain a=3, b=-1, and c=-1.
Thus, for this set of values, the above error equation reduces to
(24) e(k+1) = A2^2(ew(k) + e(k))^2ez(k) + 2A2ey(k)ez(k) - A3ew(k)ez(k)e(k) + O((e(k))^7) = B0((I + γF′(α))(6A2^2 - A3) + γ^2(F′(α))^2A2^2)(e(k))^6 + O((e(k))^7),
which shows the sixth order of convergence and hence the result follows.
Finally, the sixth-order family of methods is expressed by
(25) y(k) = G1,2(x(k)),
z(k) = G4(x(k), y(k)),
x(k+1) = G6(x(k), y(k), z(k)) = z(k) - M(x(k), w(k), y(k))F(z(k)),
wherein
(26) M(x(k), w(k), y(k)) = (3I - [w(k),x(k);F]^-1([y(k),x(k);F] + [y(k),w(k);F]))[w(k),x(k);F]^-1.
Thus, the scheme (25) defines a new three-step family of derivative free sixth-order methods whose first two steps form any fourth-order scheme with the Traub-Steffensen method (2) as its base. Some simple members of this family are as follows.
Method-I. The first method, which is denoted by G1,6, is given by
(27)y(k)=G1,2(x(k)),z(k)=G1,4(x(k),y(k)),x(k+1)=z(k)-M(x(k),w(k),y(k))F(z(k)),
where G1,4(x(k),y(k)) is the fourth-order method given in formula (3). It is clear that this formula uses four function evaluations, three first-order divided differences, and two matrix inversions per iteration.
Method-II. The second method, that we denote by G2,6, is given as
(28)y(k)=G1,2(x(k)),z(k)=G2,4(x(k),y(k)),x(k+1)=z(k)-M(x(k),w(k),y(k))F(z(k)),
where G2,4(x(k),y(k)) is the fourth-order method shown in (4). This method also requires the same evaluations as in the above method.
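To make the construction concrete, the following is a minimal Python sketch of Method-I (27), with the divided difference formed entrywise as in (30) of Section 3. All helper names, the stopping rule, and the test value γ = 0.01 are our own illustrative choices, not prescriptions from the paper:

```python
import numpy as np

def divided_difference(F, y, x):
    """First-order divided difference [y, x; F] with entries as in (30)."""
    n = len(x)
    M = np.empty((n, n))
    for j in range(n):
        u1 = np.concatenate([y[:j + 1], x[j + 1:]])      # (y1,...,yj, x_{j+1},...,xn)
        u2 = np.concatenate([y[:j], [x[j]], x[j + 1:]])  # (y1,...,y_{j-1}, xj,...,xn)
        u3 = np.concatenate([x[:j], y[j:]])              # (x1,...,x_{j-1}, yj,...,yn)
        u4 = np.concatenate([x[:j], [x[j]], y[j + 1:]])  # (x1,...,xj, y_{j+1},...,yn)
        M[:, j] = (F(u1) - F(u2) + F(u3) - F(u4)) / (2.0 * (y[j] - x[j]))
    return M

def G16(F, x0, gamma=0.01, tol=1e-10, maxit=20):
    """Sketch of Method-I (27): three divided differences, two matrix inversions."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(len(x))
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        w = x + gamma * Fx                                  # w(k) = x(k) + gamma*F(x(k))
        Dwx = divided_difference(F, w, x)
        Dwx_inv = np.linalg.inv(Dwx)
        y = x - Dwx_inv @ Fx                                # Traub-Steffensen step G_{1,2}
        Dyx = divided_difference(F, y, x)
        Dyw = divided_difference(F, y, w)
        z = y - np.linalg.solve(Dyx + Dyw - Dwx, F(y))      # fourth-order step G_{1,4}
        Mop = (3.0 * I - Dwx_inv @ (Dyx + Dyw)) @ Dwx_inv   # weight operator M of (26)
        x = z - Mop @ F(z)                                  # sixth-order step
    return x
```

For instance, on the illustrative system F(x) = (x1^2 - x2, x2^2 - x1) with starting point (1.5, 1.5), the sketch converges to the root (1, 1) in a few iterations.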
3. Computational Efficiency
Here, we estimate the computational efficiency of the proposed methods and compare it with that of the existing methods. To do this, we use the efficiency index, according to which the efficiency of an iterative method is given by E = ρ^(1/C), where ρ is the order of convergence and C is the computational cost per iteration. For a system of n nonlinear equations in n unknowns, the computational cost per iteration is given by (see [16])
(29)C(μ,n,l)=P0(n)μ+P(n,l).
Here, P0(n) denotes the number of evaluations of scalar functions used in the evaluation of F and [x,y;F], and P(n,l) denotes the number of products needed per iteration. The divided difference [x,y;F] of F is an n×n matrix with elements given as (see [17, 18])
(30) [y,x;F]ij = (fi(y1,…,yj-1, yj, xj+1,…,xn) - fi(y1,…,yj-1, xj, xj+1,…,xn) + fi(x1,…,xj-1, yj, yj+1,…,yn) - fi(x1,…,xj-1, xj, yj+1,…,yn))/(2(yj - xj)), 1⩽i,j⩽n.
In order to express the value of C(μ,n,l) in terms of products, a ratio μ>0 between products and evaluations of scalar functions and a ratio l⩾1 between products and quotients are required.
To compute F in any iterative function, we evaluate n scalar functions (f1,f2,…,fn), and if we compute a divided difference [x,y;F], then we evaluate 2n(n-1) scalar functions, where F(x) and F(y) are computed separately. We must add n2 quotients from any divided difference, n2 products for multiplication of a matrix with a vector or of a matrix by a scalar, and n products for multiplication of a vector by a scalar. In order to compute an inverse linear operator, we solve a linear system, where we have n(n-1)(2n-1)/6 products and n(n-1)/2 quotients in the LU decomposition and n(n-1) products and n quotients in the resolution of two triangular linear systems.
The computational efficiency of the present sixth-order methods G1,6 and G2,6 is compared with the existing fourth-order methods G1,4 and G2,4 and with the seventh-order methods G1,7 and G2,7. In addition, we also compare the present methods with each other. Let us denote efficiency indices of Gi,p by Ei,p and computational cost by Ci,p. Then, taking into account the above and previous considerations, we have
(31) C1,4 = (6n^2 - 3n)μ + (n/3)(2n^2 + 3n - 5 + 3l(4n + 1)), E1,4 = 4^(1/C1,4),
(32) C2,4 = (6n^2 - 3n)μ + (n/3)(2n^2 + 9n - 8 + 3l(4n + 2)), E2,4 = 4^(1/C2,4),
(33) C1,6 = (6n^2 - 2n)μ + (n/3)(2n^2 + 12n - 8 + 3l(4n + 3)), E1,6 = 6^(1/C1,6),
(34) C2,6 = (6n^2 - 2n)μ + (n/3)(2n^2 + 18n - 11 + 12l(n + 1)), E2,6 = 6^(1/C2,6),
(35) C1,7 = (10n^2 - 6n)μ + (n/2)(2n^2 + 3n - 5 + l(13n + 3)), E1,7 = 7^(1/C1,7),
(36) C2,7 = (10n^2 - 6n)μ + (n/2)(2n^2 + 7n - 7 + l(13n + 5)), E2,7 = 7^(1/C2,7).
3.1. Comparison between Efficiencies
To compare the computational efficiencies of the iterative methods, say Gi,p against Gj,q, we consider the ratio
(37) Ri,p;j,q = log Ei,p/log Ej,q = (Cj,q log p)/(Ci,p log q).
It is clear that if Ri,p;j,q>1, the iterative method Gi,p is more efficient than Gj,q.
G1,4 versus G1,6 Case. For this case, the ratio (37) is given by
(38) R1,6;1,4 = (log 6/log 4)·(2n^2 + n(18μ + 12l + 3) - 9μ + 3l - 5)/(2n^2 + n(18μ + 12l + 12) - 6μ + 9l - 8),
which shows that R1,6;1,4>1 for μ>0, l⩾1, and n⩾9. Thus, we have E1,6>E1,4 for μ>0, l⩾1, and n⩾9.
G2,4 versus G1,6 Case. In this case, the ratio (37) takes the following form
(39) R1,6;2,4 = (log 6/log 4)·(2n^2 + n(18μ + 12l + 9) - 9μ + 6l - 8)/(2n^2 + n(18μ + 12l + 12) - 6μ + 9l - 8).
In this case, it is easy to prove that R1,6;2,4>1 for μ>0, l⩾1, and n⩾2, which implies that E1,6>E2,4.
G1,4 versus G2,6 Case. The ratio (37) yields
(40) R2,6;1,4 = (log 6/log 4)·(2n^2 + n(18μ + 12l + 3) - 9μ + 3l - 5)/(2n^2 + n(18μ + 12l + 18) - 6μ + 12l - 11),
which shows that R2,6;1,4>1 for μ>0, l⩾1, and n⩾19. Thus, we conclude that E2,6>E1,4 for μ>0, l⩾1, and n⩾19.
G2,4 versus G2,6 Case. In this case, the ratio (37) is given by
(41) R2,6;2,4 = (log 6/log 4)·(2n^2 + n(18μ + 12l + 9) - 9μ + 6l - 8)/(2n^2 + n(18μ + 12l + 18) - 6μ + 12l - 11).
With the same range of μ, l as in the previous case and n⩾6, the ratio R2,6;2,4>1, which implies that E2,6>E2,4.
G1,6 versus G2,6 Case. In this case, it is enough to compare the corresponding values of C1,6 and C2,6 from (33) and (34). Thus, we find that E1,6>E2,6 for μ>0, l⩾1, and n⩾2.
G1,6 versus G1,7 Case. In this case, the ratio (37) is given by
(42) R1,6;1,7 = (log 6/log 7)·3(2n^2 + n(20μ + 13l + 3) - 12μ + 3l - 5)/(2(2n^2 + n(18μ + 12l + 12) - 6μ + 9l - 8)).
It is easy to show that R1,6;1,7>1 for μ>0, l⩾1, and n⩾4, which implies that E1,6>E1,7 for this range of values of the parameters (μ,n,l).
G1,6 versus G2,7 Case. The ratio (37) is given by
(43) R1,6;2,7 = (log 6/log 7)·3(2n^2 + n(20μ + 13l + 7) - 12μ + 5l - 7)/(2(2n^2 + n(18μ + 12l + 12) - 6μ + 9l - 8)).
With the same range of μ, l as in the previous cases and n⩾2, we have R1,6;2,7>1, which implies that E1,6>E2,7.
G2,6 versus G1,7 Case. The ratio (37) yields
(44) R2,6;1,7 = (log 6/log 7)·3(2n^2 + n(20μ + 13l + 3) - 12μ + 3l - 5)/(2(2n^2 + n(18μ + 12l + 18) - 6μ + 12l - 11)),
which shows that R2,6;1,7>1 for μ>0, l⩾1, and n⩾11, so it follows that E2,6>E1,7.
G2,6 versus G2,7 Case. For this case, the ratio (37) is given by
(45) R2,6;2,7 = (log 6/log 7)·3(2n^2 + n(20μ + 13l + 7) - 12μ + 5l - 7)/(2(2n^2 + n(18μ + 12l + 18) - 6μ + 12l - 11)).
In this case, also it is not difficult to prove that R2,6;2,7>1 for μ>0, l⩾1, and n⩾5, which implies that E2,6>E2,7.
We summarize the above results in the following theorem.
Theorem 2.
For μ>0 and l⩾1, we have the following:
E1,6>E1,4 for n⩾9;
E2,6>E1,4 for n⩾19;
E2,6>E2,4 for n⩾6;
E1,6>E1,7 for n⩾4;
E2,6>E1,7 for n⩾11;
E2,6>E2,7 for n⩾5;
{E1,6>E2,4,E1,6>E2,6,E1,6>E2,7} for n⩾2.
Otherwise, the comparison depends on μ, l, and n.
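The cost expressions (31)–(36) and the ratio (37) are easy to evaluate numerically. The Python sketch below (helper names and the sampled values of μ and l are our own) reproduces, for instance, the threshold n ⩾ 9 in the E1,6 versus E1,4 comparison and the threshold n ⩾ 4 in the E1,6 versus E1,7 comparison:

```python
import math

# Computational costs (31), (33), (35) in product units; mu is the cost ratio
# "scalar function evaluation / product" and l the ratio "quotient / product".
def C14(n, mu, l):
    return (6*n**2 - 3*n)*mu + (n/3)*(2*n**2 + 3*n - 5 + 3*l*(4*n + 1))

def C16(n, mu, l):
    return (6*n**2 - 2*n)*mu + (n/3)*(2*n**2 + 12*n - 8 + 3*l*(4*n + 3))

def C17(n, mu, l):
    return (10*n**2 - 6*n)*mu + (n/2)*(2*n**2 + 3*n - 5 + l*(13*n + 3))

def ratio(p, Cp, q, Cq, n, mu, l):
    """Ratio (37): > 1 means the order-p method is more efficient than the order-q one."""
    return Cq(n, mu, l) * math.log(p) / (Cp(n, mu, l) * math.log(q))
```

For sampled μ > 0 and l ⩾ 1, ratio(6, C16, 4, C14, n, mu, l) exceeds 1 for all n ⩾ 9, while for small n and small μ it drops below 1, in line with Theorem 2.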
4. Numerical Results
In this section, some numerical problems are considered to illustrate the convergence behavior and computational efficiency of the proposed methods. The performance is compared with the existing methods G1,4, G2,4, G1,7, and G2,7. All computations are performed in the programming package Mathematica [19] using multiple-precision arithmetic with 4096 digits. For every method, we record the number of iterations (k) needed to converge to the solution such that ∥x(k+1)-x(k)∥+∥F(x(k))∥<10^-200. In the numerical results, we also include the CPU time used in the execution of the program, computed by the Mathematica command “TimeUsed[]”. In order to verify the theoretical order of convergence, we calculate the computational order of convergence (ρc) using the formula
(46) ρc = log(∥F(x(k))∥/∥F(x(k-1))∥)/log(∥F(x(k-1))∥/∥F(x(k-2))∥)
(see [20]) taking into consideration the last three approximations in the iterative process.
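The formula (46) is straightforward to evaluate from the last three residual norms; a small Python helper (the function name is ours) is:

```python
import math

def comp_order(fnorms):
    """Computational order of convergence (46) from the last three residual norms."""
    f2, f1, f0 = fnorms[-1], fnorms[-2], fnorms[-3]
    return math.log(f2 / f1) / math.log(f1 / f0)
```

For an exactly fourth-order residual sequence such as 10^-2, 10^-8, 10^-32 it returns 4.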
To connect the analysis of computational efficiency with the numerical examples, the definition of the computational cost (29) is applied, which requires an estimate of the factor μ. For this, we express the cost of the evaluation of the elementary functions in terms of products, which depends on the computer, the software, and the arithmetic used (see [21, 22]). In Table 1, the elapsed CPU time (measured in milliseconds) in the computation of elementary functions and an estimation of the cost of the elementary functions in product units are displayed. The programs are run on an Intel(R) Core(TM) i5-480M CPU @ 2.67 GHz (64-bit machine) with Microsoft Windows 7 Home Basic 2009 and are compiled by Mathematica 7.0 using multiple-precision arithmetic. It can be observed from Table 1 that, for this hardware and software, the computational cost of a quotient with respect to a product is l=2.8.
CPU time and estimation of computational cost of the elementary functions, where x = √3 - 1 and y = √5.

| Functions | x*y | x/y | √x | e^x | ln x | sin x | cos x | cos⁻¹x | tan⁻¹x | x^y |
| CPU time | 0.0466 | 0.1305 | 0.0606 | 3.7746 | 3.6348 | 4.7532 | 4.7534 | 7.7356 | 7.5492 | 8.2948 |
| Cost | 1 | 2.8 | 1.3 | 81 | 78 | 102 | 102 | 166 | 162 | 178 |
The present methods G1,6 and G2,6 are tested by using the values -0.01, 0.01, and 0.5 for the parameter γ. The following problems are chosen for numerical tests.
Problem 1.
Consider the system of two equations:
(47) x1^2 - x2 + 1 = 0, x1 - cos(πx2/2) = 0.
In this problem, (n,μ)=(2,52) are the values used in (31)–(36) for calculating computational costs and efficiency indices of the considered methods. The initial approximation chosen is x(0)={0.25,0.5}T and the solution is α={0,1}T.
Problem 2.
Consider the system of three equations:
(48) 10x1 + sin(x1 + x2) - 1 = 0, 8x2 - cos^2(x3 - x2) - 1 = 0, 12x3 + sin x3 - 1 = 0,
with initial value x(0)={0.8,0.5,0.125}T towards the solution
(49) α = {0.0689783491726666…, 0.2464424186091830…, 0.0769289119875370…}T.
For this problem, (n,μ) = (3,103.33) are used in (31)–(36) to calculate computational costs and efficiency indices.
Problem 3.
Next, consider the following boundary value problem (see [23]):
(50) y′′ + y^3 = 0, y(0) = 0, y(1) = 1.
Assume the following partitioning of the interval [0,1]:
(51) u0 = 0 < u1 < u2 < ⋯ < um-1 < um = 1, uj+1 = uj + h, h = 1/m.
Let us define y0=y(u0)=0,y1=y(u1),…,ym-1=y(um-1),ym=y(um)=1. If we discretize the problem by using the numerical formula for second derivative,
(52) yk′′ = (yk-1 - 2yk + yk+1)/h^2, k = 1,2,3,…,m-1,
we obtain a system of m-1 nonlinear equations in m-1 variables:
(53) yk-1 - 2yk + yk+1 + h^2yk^3 = 0, k = 1,2,3,…,m-1.
In particular, we solve this problem for m=5 so that n=4 by selecting y(0)={0.5,0.5,0.5,0.5}T as the initial value. The solution of this problem is
(54) α = {0.21054188948074775…, 0.42071046387616439…, 0.62790045371805633…, 0.82518822786851363…}T
and concrete values of the parameters are (n,μ)=(4,4).
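As a check, the discretized system (53) can be assembled in a few lines and the tabulated solution substituted back. This Python sketch (the helper name is ours) uses float64 arithmetic, so residuals are only verified to modest accuracy:

```python
import numpy as np

def bvp_residuals(y_inner, m=5):
    """Residuals of system (53) for y'' + y^3 = 0, y(0) = 0, y(1) = 1."""
    h = 1.0 / m
    y = np.concatenate([[0.0], np.asarray(y_inner, float), [1.0]])  # attach boundaries
    k = np.arange(1, m)
    return y[k - 1] - 2.0 * y[k] + y[k + 1] + h**2 * y[k]**3

# Solution (54) quoted in the text, truncated to double precision
alpha = np.array([0.21054188948074775, 0.42071046387616439,
                  0.62790045371805633, 0.82518822786851363])
```

Substituting alpha gives residuals at roundoff level, confirming that (54) solves (53) for m = 5.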
Problem 4.
Consider the system of fifteen equations (see [16]):
(55) ∑j=1, j≠i^15 xj - e^(-xi) = 0, 1 ≤ i ≤ 15.
In this problem, the concrete values of the parameters (n,μ) are (15,81). The initial approximation assumed is x(0)={1,1,…,1}T and the solution of this problem is
(56) α = {0.066812203179582582…, 0.066812203179582582…, …, 0.066812203179582582…}T, with all components equal.
Problem 5.
Consider the system of fifty equations:
(57) xi^2 xi+1 - 1 = 0 (i = 1,2,…,49), x50^2 x1 - 1 = 0.
In this problem, (n,μ)=(50,2) are the values used in (31)–(36) for calculating computational costs and efficiency indices. The initial approximation assumed is x(0)={1.5,1.5,1.5,…,1.5}T for obtaining the solution α={1,1,1,…,1}T.
Problem 6.
Lastly, consider the nonlinear and nondifferentiable integral equation of mixed Hammerstein type (see [24]):
(58) x(s) = 1 + (1/2)∫0^1 G(s,t)(|x(t)| + x(t)^2)dt,
where x∈C[0,1]; s,t∈[0,1] and the kernel G is given as follows:
(59) G(s,t) = (1-s)t if t⩽s, s(1-t) if s⩽t.
We transform the above equation into a finite-dimensional problem by using Gauss-Legendre quadrature formula given as
(60) ∫0^1 f(t)dt ≈ ∑j=1^n ϖjf(tj),
where the abscissas tj and the weights ϖj are determined for n=8 by Gauss-Legendre quadrature formula. Denoting the approximation of x(ti) by xi(i=1,2,…,8), we obtain the system of nonlinear equations:
(61) 2xi - 2 - ∑j=1^8 aij(|xj| + xj^2) = 0, i = 1,2,…,8,
where
(62) aij = ϖjtj(1 - ti) if j⩽i, ϖjti(1 - tj) if i<j.
In this problem, (n,μ)=(8,10) are the values used in (31)–(36) for calculating computational costs and efficiency indices. The initial approximation assumed is x(0)={1,1,1,…,1}T for obtaining the solution
(63) α = {1.0115010875012980…, 1.0546781130247093…, 1.1098925075633296…, 1.1481439774759322…, 1.1481439774759322…, 1.1098925075633296…, 1.0546781130247093…, 1.0115010875012980…}T.
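The discrete system (61)-(62) is easy to assemble with standard Gauss-Legendre nodes. The sketch below (our own construction) maps NumPy's leggauss nodes from [-1,1] to [0,1] and uses a plain Newton iteration, rather than the paper's methods, simply to reproduce the solution; since the solution is positive, |x| is smooth there and sign(x) = 1:

```python
import numpy as np

# 8-point Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
xg, wg = np.polynomial.legendre.leggauss(8)
t = 0.5 * (xg + 1.0)
varpi = 0.5 * wg

# Matrix a_ij as in (62)
i_idx = np.arange(8)[:, None]
j_idx = np.arange(8)[None, :]
A = np.where(j_idx <= i_idx,
             varpi * t * (1.0 - t[:, None]),   # j <= i: w_j t_j (1 - t_i)
             varpi * t[:, None] * (1.0 - t))   # i < j : w_j t_i (1 - t_j)

def Fsys(x):
    """Residuals of the discretized Hammerstein system (61)."""
    return 2.0 * x - 2.0 - A @ (np.abs(x) + x**2)

# Plain Newton iteration (generalized Jacobian; valid here since x stays positive)
x = np.ones(8)
for _ in range(20):
    J = 2.0 * np.eye(8) - A * (np.sign(x) + 2.0 * x)[None, :]
    x = x - np.linalg.solve(J, Fsys(x))
```

The computed vector is symmetric about the midpoint of [0,1] and agrees in shape with the solution (63) quoted above.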
Table 2 shows the numerical results obtained for the considered problems by various methods. Displayed in this table are the number of iterations (k), the computational order of convergence (ρc), the computational costs (Ci,p) in terms of products, the computational efficiencies (Ei,p), and the mean elapsed CPU time (e-time). Computational cost and efficiency are calculated according to the corresponding expressions given by (31)–(36) by using the values of parameters n and μ as calculated in each problem, while taking l=2.8 in each case. The mean elapsed CPU time is calculated by taking the mean of 50 performances of the program, where we use ∥x(k+1)-x(k)∥+∥F(x(k))∥<10-200 as the stopping criterion in single performance of the program.
Comparison of the performance of methods.

Problem 1
| Methods | k | ρc | Ci,p | Ei,p | e-time |
| G1,4 | 7 | 4.000013 | 992.4 | 1.0013979 | 0.24518 |
| G2,4 | 7 | 4.000004 | 1004.0 | 1.0013817 | 0.24754 |
| G1,6 (γ=-.01) | 5 | 5.999999 | 1117.6 | 1.0016045 | 0.19709 |
| G1,6 (γ=.01) | 5 | 5.999999 | 1117.6 | 1.0016045 | 0.19854 |
| G1,6 (γ=.5) | 5 | 6.000000 | 1117.6 | 1.0016045 | 0.19773 |
| G2,6 (γ=-.01) | 5 | 6.000000 | 1129.2 | 1.0015880 | 0.21427 |
| G2,6 (γ=.01) | 5 | 6.000000 | 1129.2 | 1.0015880 | 0.21564 |
| G2,6 (γ=.5) | 5 | 6.000001 | 1129.2 | 1.0015880 | 0.21845 |
| G1,7 | 5 | 7.120115 | 1546.2 | 1.0012593 | 0.24891 |
| G2,7 | 5 | 7.078494 | 1557.8 | 1.0012499 | 0.25246 |

Problem 2
| Methods | k | ρc | Ci,p | Ei,p | e-time |
| G1,4 | 5 | 4.000000 | 4781.05 | 1.0002899 | 1.31891 |
| G2,4 | 5 | 4.000000 | 4804.45 | 1.0002886 | 1.32373 |
| G1,6 (γ=-.01) | 4 | 6.000000 | 5131.84 | 1.0003492 | 1.10755 |
| G1,6 (γ=.01) | 4 | 6.000000 | 5131.84 | 1.0003492 | 1.11075 |
| G1,6 (γ=.5) | 4 | 5.999930 | 5131.84 | 1.0003492 | 1.11752 |
| G2,6 (γ=-.01) | 4 | 6.000000 | 5155.24 | 1.0003476 | 1.11191 |
| G2,6 (γ=.01) | 4 | 6.000000 | 5155.24 | 1.0003476 | 1.11618 |
| G2,6 (γ=.5) | 4 | 5.999906 | 5155.24 | 1.0003476 | 1.11455 |
| G1,7 | 4 | 7.000000 | 7649.20 | 1.0002544 | 1.69473 |
| G2,7 | 4 | 7.000000 | 7672.60 | 1.0002537 | 1.70182 |

Problem 3
| Methods | k | ρc | Ci,p | Ei,p | e-time |
| G1,4 | 5 | 4.000123 | 578.4 | 1.0023996 | 0.17903 |
| G2,4 | 5 | 4.000035 | 617.6 | 1.0022472 | 0.18164 |
| G1,6 (γ=-.01) | 4 | 5.999970 | 660.8 | 1.0027152 | 0.13473 |
| G1,6 (γ=.01) | 4 | 5.999968 | 660.8 | 1.0027152 | 0.12900 |
| G1,6 (γ=.5) | 4 | 5.999887 | 660.8 | 1.0027152 | 0.12627 |
| G2,6 (γ=-.01) | 4 | 5.999957 | 700.0 | 1.0025629 | 0.14182 |
| G2,6 (γ=.01) | 4 | 5.999955 | 700.0 | 1.0025629 | 0.14082 |
| G2,6 (γ=.5) | 4 | 5.999870 | 700.0 | 1.0025629 | 0.14464 |
| G1,7 | 4 | 6.999331 | 930.0 | 1.0020946 | 0.19327 |
| G2,7 | 4 | 6.999216 | 969.2 | 1.0020098 | 0.19891 |

Problem 4
| Methods | k | ρc | Ci,p | Ei,p | e-time |
| G1,4 | 5 | 4.000000 | 110717.0 | 1.00001252 | 6.67681 |
| G2,4 | 5 | 4.000000 | 111194.0 | 1.00001246 | 7.69175 |
| G1,6 (γ=-.01) | 4 | 6.000000 | 112676.0 | 1.00001590 | 6.16291 |
| G1,6 (γ=.01) | 4 | 6.000000 | 112676.0 | 1.00001590 | 5.74189 |
| G1,6 (γ=.5) | 4 | 6.000000 | 112676.0 | 1.00001590 | 6.03452 |
| G2,6 (γ=-.01) | 4 | 6.000000 | 113153.0 | 1.00001584 | 6.53969 |
| G2,6 (γ=.01) | 4 | 6.000000 | 113153.0 | 1.00001584 | 6.62572 |
| G2,6 (γ=.5) | 4 | 6.000000 | 113153.0 | 1.00001584 | 6.72312 |
| G1,7 | 4 | 7.000000 | 182793.0 | 1.00001065 | 7.29228 |
| G2,7 | 4 | 7.000000 | 183270.0 | 1.00001062 | 7.95691 |

Problem 5
| Methods | k | ρc | Ci,p | Ei,p | e-time |
| G1,4 | 7 | 4.000000 | 143590.0 | 1.0000097 | 8.12436 |
| G2,4 | 7 | 4.000000 | 148680.0 | 1.0000093 | 8.32287 |
| G1,6 (γ=-.01) | 5 | 6.000000 | 151420.0 | 1.0000118 | 6.08475 |
| G1,6 (γ=.01) | 5 | 6.000000 | 151420.0 | 1.0000118 | 5.99152 |
| G1,6 (γ=.5) | 5 | 6.000000 | 151420.0 | 1.0000118 | 5.89734 |
| G2,6 (γ=-.01) | 5 | 6.000000 | 156510.0 | 1.0000114 | 6.61833 |
| G2,6 (γ=.01) | 5 | 6.000000 | 156510.0 | 1.0000114 | 6.45834 |
| G2,6 (γ=.5) | 5 | 6.000000 | 156510.0 | 1.0000114 | 6.66583 |
| G1,7 | 5 | 7.000000 | 223735.0 | 1.0000087 | 7.84567 |
| G2,7 | 5 | 7.000000 | 228825.0 | 1.0000085 | 7.89437 |

Problem 6
| Methods | k | ρc | Ci,p | Ei,p | e-time |
| G1,4 | 5 | 4.000000 | 5091.200 | 1.0002723 | 2.02945 |
| G2,4 | 5 | 4.000000 | 5233.600 | 1.0002649 | 2.13573 |
| G1,6 (γ=-.01) | 4 | 6.000000 | 5408.000 | 1.0003314 | 1.96338 |
| G1,6 (γ=.01) | 4 | 6.000000 | 5408.000 | 1.0003314 | 1.83227 |
| G1,6 (γ=.5) | 4 | 6.000000 | 5408.000 | 1.0003314 | 1.82236 |
| G2,6 (γ=-.01) | 4 | 6.000000 | 5550.400 | 1.0003229 | 1.79673 |
| G2,6 (γ=.01) | 4 | 6.000000 | 5550.400 | 1.0003229 | 1.80255 |
| G2,6 (γ=.5) | 4 | 6.000000 | 5550.400 | 1.0003229 | 1.90609 |
| G1,7 | 4 | 8.000000 | 8298.400 | 1.0002345 | 2.34291 |
| G2,7 | 4 | 8.000000 | 8440.800 | 1.0002306 | 2.48182 |
From the numerical results, we can observe that, like the existing methods, the present methods show consistent convergence behavior. Some methods do not preserve the theoretical order of convergence when applied to certain types of nonlinear systems; this can be observed in the last problem, the nondifferentiable mixed Hammerstein integral equation, where the seventh-order methods G1,7 and G2,7 yield eighth-order convergence. For the present methods, however, the computational order of convergence overwhelmingly supports the theoretical order. Comparison of the numerical values of the computational efficiencies, displayed in the second last column of Table 2, verifies the theoretical results of Theorem 2. Since computational efficiency is inversely proportional to the total CPU time needed to complete the iterative process, a method with higher efficiency should use less CPU time than one with lower efficiency. This is confirmed by the values of computational efficiency and elapsed CPU time displayed in the last two columns of Table 2, which are in complete agreement.
5. Concluding Remarks
In the foregoing study, we have proposed iterative methods with the sixth order of convergence for solving systems of nonlinear equations. The schemes are totally derivative free and therefore particularly suited to problems in which derivatives require lengthy computation. The first-order divided difference operator for functions of several variables and direct computation by Taylor's expansion are used to prove the local convergence order of the new methods. A comparison of the efficiencies of the new schemes with those of existing schemes is presented. It is observed that the present methods have an edge over similar existing methods, especially when applied to large systems of equations. Six numerical examples have been presented and the relevant performances compared with the existing methods. Computational results have confirmed the robust and efficient character of the proposed techniques. Similar numerical experiments have been carried out for a number of other problems, with results on a par with those presented here.
We conclude the paper with the remark that many numerical applications require multiprecision computations. The results of the numerical experiments justify that high-order efficient methods combined with multiple-precision floating-point arithmetic are very useful, because they yield a clear reduction in the number of iterations needed to achieve the required solution.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables.
[2] C. T. Kelley.
[3] J. F. Traub, Iterative Methods for the Solution of Equations.
[4] J. F. Steffensen, "Remarks on iteration".
[5] H. Ren, Q. Wu, and W. Bi, "A class of two-step Steffensen type methods with fourth-order convergence".
[6] Z. Liu, Q. Zheng, and P. Zhao, "A variant of Steffensen's method of fourth-order convergence and its applications".
[7] M. S. Petković, S. Ilić, and J. Džunić, "Derivative free two-point methods with and without memory for solving nonlinear equations".
[8] M. S. Petković and L. D. Petković, "Families of optimal multipoint methods for solving nonlinear equations: a survey".
[9] M. S. Petković, J. Džunić, and L. D. Petković, "A family of two-point methods with memory for solving nonlinear equations".
[10] Q. Zheng, J. Li, and F. Huang, "An optimal Steffensen-type family for solving nonlinear equations".
[11] J. Džunić and M. S. Petković, "On generalized multipoint root-solvers with memory".
[12] J. R. Sharma, R. K. Guha, and P. Gupta, "Some efficient derivative free methods with memory for solving nonlinear equations".
[13] X. Wang and T. Zhang, "A family of Steffensen type methods with seventh-order convergence".
[14] M. S. Petković, B. Neta, L. D. Petković, and J. Džunić, Multipoint Methods for Solving Nonlinear Equations.
[15] M. Grau-Sánchez, À. Grau, and M. Noguera, "Frozen divided difference scheme for solving systems of nonlinear equations".
[16] M. Grau-Sánchez and M. Noguera, "A technique to choose the most efficient method between secant method and some variants".
[17] M. Grau-Sánchez, À. Grau, and M. Noguera, "On the computational efficiency index and some iterative methods for solving systems of nonlinear equations".
[18] F. A. Potra and V. Pták.
[19] S. Wolfram.
[20] L. O. Jay, "A note on Q-order of convergence".
[21] L. Fousse, G. Hanrot, V. Lefevre, P. Pelissier, and P. Zimmermann, "MPFR: a multiple-precision binary floating-point library with correct rounding".
[22] http://www.mpfr.org/mpfr-2.1.0/timings.html
[23] M. Grau-Sánchez, J. M. Peris, and J. M. Gutiérrez, "Accelerated iterative methods for finding solutions of a system of nonlinear equations".
[24] J. A. Ezquerro, M. Grau-Sánchez, and M. Hernández, "Solving non-differentiable equations by a new one-point iterative method with memory".