A class of iterative methods for solving systems of nonlinear equations that does not require the computation of Fréchet derivatives, including multistep schemes, is presented. By considering a frozen Jacobian, we provide a class of m-step methods with order of convergence m+1. A new method, named the Steffensen-Schulz scheme, is also contributed. Numerical tests and comparisons with existing methods are included.
1. Introduction
In this paper, we take into account the following system of nonlinear equations:
(1)f1(x1,x2,…,xN)=0,f2(x1,x2,…,xN)=0,⋮fN(x1,x2,…,xN)=0,
wherein each function fi maps a vector x=(x1,x2,…,xN)T of the N-dimensional space ℝN into the real line ℝ. This system contains N nonlinear equations with N unknowns and can be expressed more compactly by defining a function F with F(x)=(f1(x),f2(x),…,fN(x))T. Hence, the nonlinear system (1) can be written in the form F(x)=0, where the functions f1(x), f2(x),…,fN(x) are the coordinate functions of F; see [1] for more details and [2] for application issues.
We assume that F(x) is a smooth function of x in the open convex set D⊆ℝN. There is a strong incentive to use derivative information as well as function values in order to solve traditionally the system (1). The most famous solver for such a problem is Newton's iteration [1], which is defined as
(2)x(n+1)=x(n)-F′(x(n))-1F(x(n)),n=0,1,2,….
In this iterative method, we use the N×N Jacobian matrix, that is, F′(x), with entries F′(x)jk=∂xkfj(x). To be more precise, there are plenty of solvers to tackle the problem (1) or its scalar case, such as those in [3–6]. Among such methods, the third-order iterative methods like the Halley and Chebyshev methods [7] are considered less practical from a computational point of view because they need to compute the expensive second-order Fréchet derivatives, in contrast to the quadratically convergent method (2), in which only the first-order Fréchet derivative needs to be calculated. As a matter of fact, for the considered problem (1), the first Fréchet derivative is a matrix with N2 entries, while the second-order Fréchet derivative has N3 entries (without considering the symmetry). On this account, a large number of operations is needed to evaluate the second derivative at each iteration.
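For concreteness, Newton's iteration (2) can be sketched for a 2×2 system as follows; this is a minimal illustration, and the test system and starting point are our own choices, not taken from the paper:

```python
def newton_2x2(F, J, x, tol=1e-12, max_iter=50):
    """Newton's iteration x <- x - F'(x)^{-1} F(x) for a 2x2 system.

    The 2x2 linear system J d = F(x) is solved by Cramer's rule.
    """
    for _ in range(max_iter):
        f1, f2 = F(x)
        if abs(f1) + abs(f2) < tol:
            break
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        dx = (f1 * d - b * f2) / det   # first component of J^{-1} F(x)
        dy = (a * f2 - c * f1) / det   # second component
        x = (x[0] - dx, x[1] - dy)
    return x

# Illustrative system: x^2 + y^2 = 4, x*y = 1 (a root near (1.932, 0.518)).
F = lambda v: (v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0)
J = lambda v: ((2 * v[0], 2 * v[1]), (v[1], v[0]))
root = newton_2x2(F, J, (2.0, 0.5))
```

As the text notes, the price of this quadratic convergence is the evaluation of the N2 Jacobian entries at every step.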
To avoid using the Jacobian, whose computation is time-consuming for large scale systems, the authors of [8] (Chapter 7) presented the definition of the divided difference in N-dimensional space as an N×N matrix with elements
(3)[x,y;f]i,j=(fi(x1,…,xj,yj+1,…,yN)-fi(x1,…,xj-1,yj,…,yN))×(xj-yj)-1,
to prevent computing the first-order Fréchet derivative. Note that (3) is a component-to-component definition. Traub, in the pioneering book [1], introduced another tool, denoted J(x,H), to estimate the Jacobian matrix and to derive Steffensen's method for nonlinear systems as follows:
(4)x(n+1)=x(n)-J(x(n),H(n))-1F(x(n)),n=0,1,2,…,
wherein
(5)J(x(n),H(n))=(F(x(n)+H(n)e1)-F(x(n)),…,F(x(n)+H(n)eN)-F(x(n)))H(n)-1,
with H(n)=diag(f1(x(n)),…,fN(x(n))).
The primary aim of the present study is to achieve a high rate of convergence in solving (1) using (5). Hence, we first propose a new iterative method with fourth-order convergence to find both real and complex solutions. The new method does not even need the evaluation of one first-order Fréchet derivative, let alone the higher-order ones. Next, by considering a frozen Jacobian matrix, we suggest a general m-step class of iterative methods with arbitrary order of convergence and a higher computational efficiency index.
The rest of this paper is organized as follows. In Section 2, the construction of a scheme is offered, together with the analysis of convergence showing that the suggested method has fourth order. In Section 3, we extend the new method to present a multistep class of iterations. Section 4 contains a discussion about the implementation and the efficiency of the iterative methods. Section 5 contains a further contribution of this study, introducing the Steffensen-Schulz iterative method for the first time. This is followed by Section 6, where numerical tests are furnished to illustrate the accuracy and efficiency of the proposed approach. Section 7 concludes the paper and points out future research directions.
2. Derivation
In [9], the authors presented the following iterative method:
(6)y(n)=x(n)-[x(n),w(n);F]-1F(x(n)),n=0,1,2,…,x(n+1)=y(n)-[x(n),y(n);F]-1×([x(n),y(n);F]-[y(n),w(n);F]+[x(n),w(n);F])×[x(n),y(n);F]-1F(y(n)),
wherein w(n)=x(n)+F(x(n)); it possesses fourth-order convergence to approximate a simple solution of the system (1). As can be seen, it requires the evaluations F(x(n)), F(y(n)), F(w(n)) and three first-order divided difference operators based on (3) to attain fourth-order convergence. This process is costly for challenging systems of nonlinear equations.
Now, in order to reach fourth order of convergence without requiring the computation of even the first-order Fréchet derivative or three divided difference operators, we consider a three-step structure with the same correcting factor as follows:
(7)y(n)=x(n)-M(n)-1F(x(n)),z(n)=y(n)-M(n)-1F(y(n)),x(n+1)=z(n)-M(n)-1F(z(n)),
wherein we could use the component-by-component approximation (3) [10]:
(8)M(n)=[x(n),w(n);F],
or the estimation introduced by Traub (5) as follows:
(9)M(n)=J(x(n),H(n)).
Note that (8) and (9) are not equal. Per computing step, the method (7) requires evaluating F at three different points, without computing any Fréchet derivatives, which are costly for large scale problems.
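The component-by-component divided difference (3) used in (8) can be sketched as follows; the linear test map is our own illustrative choice (for a linear F the operator reproduces the matrix exactly, and in general it satisfies the secant relation [x,y;F](x-y)=F(x)-F(y)):

```python
def divided_difference(F, x, y):
    """First-order divided difference matrix per (3), component by component.

    Column j uses the points (x1..xj, y_{j+1}..yN) and (x1..x_{j-1}, y_j..yN);
    assumes x_j != y_j for every j.
    """
    N = len(x)
    M = [[0.0] * N for _ in range(N)]
    for j in range(N):
        u = list(y); u[:j + 1] = x[:j + 1]   # (x1,...,xj, y_{j+1},...,yN)
        v = list(y); v[:j] = x[:j]           # (x1,...,x_{j-1}, y_j,...,yN)
        Fu, Fv = F(u), F(v)
        for i in range(N):
            M[i][j] = (Fu[i] - Fv[i]) / (x[j] - y[j])
    return M

# Illustrative linear map F(u) = A u with A = [[2, 1], [1, -3]]:
F = lambda u: (2 * u[0] + u[1], u[0] - 3 * u[1])
M = divided_difference(F, [1.0, 2.0], [0.5, -1.0])   # reproduces A
```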
The following theorem will be demonstrated by means of the N-dimensional Taylor expansion of the functions using the estimation (8) in (7). We here include some of the basic notions which are important in the proof. Let F:D⊆ℝN→ℝN be sufficiently Fréchet differentiable in D. By using the notation introduced in [11], the qth derivative of F at u∈ℝN, q≥1, is the q-linear function F(q)(u):ℝN×⋯×ℝN→ℝN such that F(q)(u)(v1,…,vq)∈ℝN. Thus, we have the following:
F(q)(u)(v1,…,vq-1,·)∈ℒ(ℝN),
F(q)(u)(vσ(1),…,vσ(q))=F(q)(u)(v1,…,vq), for all permutation σ of {1,2,…,q}.
Hence, we take into account F(q)(u)(v1,…,vq)=F(q)(u)v1,…,vq, and also F(q)(u)vq-1F(p)vp=F(q)(u)F(p)(u)vq+p-1. It is well known that, for x*+h∈ℝN lying in a neighborhood of a solution x* of the nonlinear system F(x)=0, Taylor's expansion might be written as follows (assuming that the Jacobian matrix F′(x*) is nonsingular): F(x*+h)=F′(x*)[h+∑q=2p-1Cqhq]+O(hp), where Cq=(1/q!)[F′(x*)]-1F(q)(x*), q≥2. We observe that Cqhq∈ℝN since F(q)(x*)∈ℒ(ℝN×⋯×ℝN,ℝN) and [F′(x*)]-1∈ℒ(ℝN).
In addition, we can express F′ as F′(x*+h)=F′(x*)[I+∑q=2p-1qCqhq-1]+O(hp), wherein I is the identity matrix of the same order as the Jacobian matrix. Therefore, qCqhq-1∈ℒ(ℝN). In the sequel we denote e(n)=x(n)-x* as the error in the nth iteration. The equation
(10)e(n+1)=Le(n)p+O(e(n)p+1),
where L is a p-linear function L∈ℒ(ℝN×⋯×ℝN,ℝN), is called the error equation and p is the order of convergence.
Remark 1.
e(n)=x(n)-x*, the error in the nth iteration, is a vector, and e(n)p denotes the p-tuple (e(n),e(n),…,e(n)) on which the p-linear operator L acts.
Theorem 2.
Let F:D⊆ℝN→ℝN be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x*∈ℝN, that is, a simple solution of the system F(x)=0. Let one suppose that F′(x) is continuous and nonsingular in x*. Then the sequence {x(n)}n≥0 obtained using the iterative method (7) using (8) converges to x* with convergence rate 4, and the error equation reads
(11)e(n+1)=C23(I+F′(x*))(2I+F′(x*))2e(n)4+O(e(n)5).
Proof.
Using notation and terminology similar to [11], we obtain F(x(n))=F′(x*)[e(n)+C2e(n)2+C3e(n)3+C4e(n)4]+O(e(n)5), and, denoting b(n)=e(n)+F(x(n)), we write
(12)F(w(n))=F′(x*)[b(n)+C2b(n)2+C3b(n)3+C4b(n)4]+O(b(n)5),
where Ck=(1/k!)[F′(x*)]-1F(k)(x*), k=2,3,…. Then,
(13)[x(n),w(n);F]-1F(x(n))=e(n)-C2(I+F′(x*))e(n)2-(C3(I+F′(x*))(2I+F′(x*))-C22(2I+F′(x*)(2I+F′(x*))))e(n)3+O(e(n)4),
and subsequently the expression for y(n) would be
(14)y(n)-x*=C2(I+F′(x*))e(n)2+⋯+O(e(n)4).
The Taylor expansion in the second step using (14) yields
(15)z(n)-x*=C22(I+F′(x*))(2I+F′(x*))e(n)3+⋯+O(e(n)5).
Therefore,
(16)F(z(n))=F′(x*)[C22(I+F′(x*))(2I+F′(x*))e(n)3+⋯+O(e(n)5)].
Next, from an analogous reasoning as in (15) and (16), we obtain the error equation (11). Consequently, taking into account (11), it can be concluded that the order of convergence of the proposed method is four.
Our main aim, however, is to derive a class of iterations free from derivatives using the estimation (5). For example, the method (7) using (9) results in
(17)y(n)=x(n)-J(x(n),H(n))-1F(x(n)),z(n)=y(n)-J(x(n),H(n))-1F(y(n)),x(n+1)=z(n)-J(x(n),H(n))-1F(z(n)).
Note that it could be shown, in a similar way to the previous theorem, that (17) possesses fourth order of convergence. The implementation of (17) depends on the involved linear algebra problems. An interesting point in the new method (17) is that the LU decomposition of J needs to be done only once, and it could effectively be used three times per computing step to increase the rate of convergence without imposing much computational burden.
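Under the stated assumptions, a minimal Python sketch of (17) follows; the simplified LU helpers (Doolittle, no pivoting) and the small cyclic test system are our own illustrative choices, not taken from the paper:

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting (illustration only)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for c in range(k, n):
                U[i][c] -= L[i][k] * U[k][c]
    return L, U

def lu_solve(L, U, b):
    """Reuse the factors: one forward and one backward substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

def pm4(F, x, tol=1e-12, max_cycles=30):
    """Method (17): Traub's estimate (5) is built and factored once per
    cycle, then the factors serve three right-hand sides."""
    N = len(x)
    for _ in range(max_cycles):
        fx = F(x)
        if max(abs(c) for c in fx) < tol:
            break
        Jm = [[0.0] * N for _ in range(N)]    # J(x, H) per (5)
        for j in range(N):
            xs = list(x); xs[j] += fx[j]
            Fs = F(xs)
            for i in range(N):
                Jm[i][j] = (Fs[i] - fx[i]) / fx[j]
        L, U = lu_factor(Jm)                  # factored only once per cycle
        y = [a - d for a, d in zip(x, lu_solve(L, U, fx))]
        z = [a - d for a, d in zip(y, lu_solve(L, U, F(y)))]
        x = [a - d for a, d in zip(z, lu_solve(L, U, F(z)))]
    return x

# Illustrative cyclic system x0*x1 = x1*x2 = x2*x0 = 1, root (1, 1, 1):
F = lambda v: (v[0] * v[1] - 1.0, v[1] * v[2] - 1.0, v[2] * v[0] - 1.0)
root = pm4(F, [1.2, 1.2, 1.2])
```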
3. An m-Step Class
This section presents a general class of multistep iteration methods. In fact, the new scheme (17) can simply be improved by considering the Jacobian matrix J(x(n),H(n)) to be frozen. In such a way, we are able to propose a general m-step multipoint class of iterative methods in the following structure:
(18)ϑ1(n)=x(n)-ϖ1(n),ϑ2(n)=ϑ1(n)-ϖ2(n),⋮x(n+1)=ϑm(n)=ϑm-1(n)-ϖm(n),
wherein ϖi(n) is obtained by solving the linear system J(x(n),H(n))ϖi(n)=F(ϑi-1(n)), i=1,…,m, with ϑ0(n)=x(n). We remark that, in this structure, the LU factorization of the Jacobian matrix is computed only once. This reduces the computational load of the linear algebra problems in implementing (18).
In the iterative process (18), each added step imposes one more N-dimensional function evaluation, whose cost is N scalar evaluations, while the convergence order is improved to 1+q, where q is the order attained by the previous substeps. By mathematical induction, it is easy to deduce the following theorem for (18).
Theorem 3.
Using the same conditions as in Theorem 2, the m-step iterative process (18) has local convergence order m+1, using m+1 evaluations of the function F and one first-order divided difference operator per full iteration.
Proof.
The proof of this theorem is based on mathematical induction and is straightforward.
As an example, the five-step sixth-order method from the new class has the following structure:
(19)ϑ1(n)=x(n)-ϖ1(n),ϑ2(n)=ϑ1(n)-ϖ2(n),ϑ3(n)=ϑ2(n)-ϖ3(n),ϑ4(n)=ϑ3(n)-ϖ4(n),x(n+1)=ϑ5(n)=ϑ4(n)-ϖ5(n).
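The general class (18) can be sketched for arbitrary m as follows; this is an illustrative implementation of ours (a production code would reuse one LU factorization for all m solves, whereas this sketch simply calls a dense solver m times with the same frozen matrix), and the cyclic test system is not from the paper:

```python
def solve(A, b):
    """Solve A d = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    d = [0.0] * n
    for k in range(n - 1, -1, -1):
        d[k] = (M[k][n] - sum(M[k][c] * d[c] for c in range(k + 1, n))) / M[k][k]
    return d

def m_step(F, x, m=5, tol=1e-12, max_cycles=40):
    """Class (18): one frozen Jacobian estimate per cycle, m cheap steps."""
    N = len(x)
    for _ in range(max_cycles):
        fx = F(x)
        if max(abs(c) for c in fx) < tol:
            break
        J = [[0.0] * N for _ in range(N)]    # Traub's estimate (5), frozen
        for j in range(N):
            xs = list(x); xs[j] += fx[j]
            Fs = F(xs)
            for i in range(N):
                J[i][j] = (Fs[i] - fx[i]) / fx[j]
        v = x
        for _ in range(m):                   # theta_i = theta_{i-1} - J^{-1} F(theta_{i-1})
            d = solve(J, F(v))
            v = [vi - di for vi, di in zip(v, d)]
        x = v
    return x

# Illustrative cyclic system x_i * x_{i+1} = 1 (indices mod 5), root (1,...,1):
F = lambda v: tuple(v[i] * v[(i + 1) % 5] - 1.0 for i in range(5))
root = m_step(F, [1.2] * 5, m=5)             # the five-step member (19)
```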
4. Complexity
In the iterative method (17), one may solve one linear system of equations per computing step, with three right-hand side vectors F(x(n)), F(y(n)), and F(z(n)), and a similar procedure must be applied for (19). In such a case, one could compute a factorization of the matrix and use it repeatedly. It is known that the cost (number of products/quotients) of solving the associated linear system by LU decomposition is (1/3)N3+N2-(1/3)N (including the LU factorization and two triangular systems), where N is the size of the system. Moreover, if one has k systems with the same matrix, then the final cost is (1/3)N3+kN2-(1/3)N.
The computational cost of (17) per computing step is as follows: N scalar function evaluations for F(x), N for F(y), N for F(z), and N2 further scalar evaluations F(x(n)+H(n)ej), j=1,…,N, for the estimation (5).
We provide a comparison of efficiency indices for the methods (2) denoted by NM, (4) denoted by SM, (6) denoted by ZM, and the new methods (17) denoted by PM4 and (19) denoted by PM6 based on the computational efficiency index which is also known as operational-efficiency index in the literature [11].
The computational efficiency index of an iterative method is defined as E=p1/C, where p is the order of convergence and C stands for the total computational cost per iteration in terms of the number of functional evaluations and the number of products/quotients in finding the LU decomposition and solving the two triangular systems. The log plot of the efficiency indices is given in Figure 1. It is clear that, for larger N, the new method (19) dominates the other well-known methods.
The log plot of the efficiency indices for different methods (a) when N=4,…,10 and (b) when N=30,…,50.
Note that ENM=ESM=2^(3/(N(2+6N+N^2))), EZM=4^(3/(N(-2+15N+5N^2))), and, for the proposed methods, EPM4=4^(3/(N(8+12N+N^2))) and EPM6=6^(3/(N(14+18N+N^2))).
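These closed forms can be evaluated directly; the snippet below simply codes the stated indices E=p^(1/C) and compares them for an illustrative size N=50:

```python
def eff_index(p, C):
    """Computational efficiency index E = p**(1/C)."""
    return p ** (1.0 / C)

def indices(N):
    """Efficiency indices of NM/SM, ZM, PM4, and PM6 for system size N,
    using the per-iteration costs C stated in the text (functional
    evaluations plus LU-related products/quotients)."""
    e_nm = eff_index(2, N * (2 + 6 * N + N ** 2) / 3.0)     # also E_SM
    e_zm = eff_index(4, N * (-2 + 15 * N + 5 * N ** 2) / 3.0)
    e_pm4 = eff_index(4, N * (8 + 12 * N + N ** 2) / 3.0)
    e_pm6 = eff_index(6, N * (14 + 18 * N + N ** 2) / 3.0)
    return e_nm, e_zm, e_pm4, e_pm6

e_nm, e_zm, e_pm4, e_pm6 = indices(50)   # PM6 dominates at this size
```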
Although the new scheme does not have the drawbacks of NM or ZM in terms of computing Fréchet derivatives or three different divided difference operators, special attention should be paid to large scale nonlinear systems. In fact, for large scale sparse nonlinear systems of equations, there are two main shortcomings: first, filling the Jacobian matrix with numeric values for Newton-type methods and, second, the LU decomposition, which is time-consuming. The suggested method in this work does not need the computation of Fréchet derivatives and thus resolves the first problem. For the second problem, the new scheme should be coded in inexact form (see, e.g., [12, 13]) using very efficient solvers for linear systems of equations, such as GMRES.
5. A Multiplication-Rich Scheme
A contribution of this study lies in a simple trick, used in this section, to produce derivative-free schemes that are inversion-free as well. As a matter of fact, the general class (18) is free from Fréchet derivatives per computing step, which is attractive, but a simple trick can also make the process inversion-free. That is to say, computing the inverse of J in (18) is expensive since J could be a dense matrix. Although we use the LU decomposition to proceed per cycle, we could apply an iterative inverse finder to avoid computing the inverse (or solving a system) at the price of further matrix-matrix multiplications.
Such a procedure could be done using the well-known Schulz-type methods (see, e.g., [14]). Here, we apply the Schulz inverse finder for this purpose, for one step of the matrix iterative method (18) as follows:
(20)Pn=(2/(σ12+σr2))Jn*,Pn+1=Pn(2I-JnPn),x(n+1)=x(n)-Pn+1F(x(n)),
wherein Jn=J(x(n),H(n))=(F(x(n)+H(n)e1)-F(x(n)),…,F(x(n)+H(n)eN)-F(x(n)))H(n)-1 and σ1, σr are the largest and smallest singular values of J. Unfortunately, convergence of this iterative scheme occurs only for initial matrices sufficiently close to the inverse, and it might not be quadratic. In fact, the Steffensen method (SM) and its variants obtained from the class (18), for instance, the approach (19), can become inversion-free easily and efficiently.
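A sketch of the Schulz inverse iteration itself is given below. Since the text's starting matrix 2/(σ1²+σr²)Jn* requires the singular values, this illustration substitutes the classical safe scaling X0=A*/(‖A‖1‖A‖∞), which guarantees convergence because σ1² ≤ ‖A‖1‖A‖∞; the matrix A is an arbitrary example of ours:

```python
def schulz_inverse(A, iters=60):
    """Schulz iteration X_{k+1} = X_k (2I - A X_k) converging to A^{-1}.

    Started from X0 = A^T / (||A||_1 ||A||_inf), a standard scaling
    guaranteeing convergence (substituted for the singular-value-based
    start used in the text)."""
    n = len(A)
    mat = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(n))
                         for j in range(n)] for i in range(n)]
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(A[i][j]) for j in range(n)) for i in range(n))
    X = [[A[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        AX = mat(A, X)
        R = [[(2.0 if i == j else 0.0) - AX[i][j] for j in range(n)]
             for i in range(n)]
        X = mat(X, R)                 # X <- X (2I - A X)
    return X

A = [[4.0, 1.0], [2.0, 3.0]]          # illustrative nonsingular matrix
X = schulz_inverse(A)                 # approximates A^{-1}
```

Each sweep costs two matrix-matrix multiplications, which is exactly the "multiplication-rich" trade-off described above.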
We now illustrate the basins of attraction for SM and PM4 in the complex square [-4,4]×[-4,4], with the stopping criterion |xn+1-xn|≤10-6. The convergence radius for tackling nonlinear problems can be broadened by introducing a free nonzero parameter β, which would be a constant array in the multivariate case [9]. Introducing β yields the following form of Steffensen's method:
(21)H(n)=diag(βf1(x(n)),…,βfN(x(n))),J(x(n),βH(n))=(F(x(n)+H(n)e1)-F(x(n)),…,F(x(n)+H(n)eN)-F(x(n)))H(n)-1,x(n+1)=x(n)-J(x(n),βH(n))-1F(x(n)).
Figure 2(a) shows the basins of attraction of SM without imposing small values for β, that is, with β=1, and Figure 2(b) shows the basins of attraction of PM4. Figure 3 shows the basins of attraction of SM and PM4, but with small values of β.
Attraction basins of the polynomial z3-1=0 in the complex plane (shaded according to the number of iterations).
β=1 in SM
β=1 in PM4
Attraction basins of the polynomial z3-1=0 in the complex plane (shaded according to the number of iterations).
β=0.0001 in SM
β=0.0001 in PM4
It is clear that choosing a small value for β results in larger basins of attraction [15]. Hence, we exploit this idea and propose an inversion-free method for solving nonlinear systems of equations as follows:
(22)choose a small value for β, compute x(1) by (21), and set T1=J(x(0),βH(0))-1,Tn+1=Tn(2I-JnTn),n=1,2,…,x(n+1)=x(n)-Tn+1F(x(n)),
wherein Jn=J(x(n),βH(n)).
We call this multiplication-rich method the Steffensen-Schulz iteration, since it is derivative-free and inversion-free at the same time. It requires only one matrix inversion in the whole process, namely, in computing x(1).
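A self-contained Python sketch of (22) follows; the Gauss-Jordan helper (used for the single inversion), the test system, and the starting vector are illustrative choices of ours, and the sketch assumes fj(x)≠0 so the shifts h=βfj(x) are valid:

```python
def steffensen_schulz(F, x, beta=1e-4, tol=1e-9, max_iter=80):
    """Sketch of the Steffensen-Schulz scheme (22): J(x(0), beta H(0)) is
    inverted once; afterwards only Schulz refreshes T <- T(2I - J T) and
    matrix-vector products are used."""
    n = len(x)

    def mat(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def jac(xv, fv):                      # J(x, beta H) per (21)
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            h = beta * fv[j]              # assumes f_j(x) != 0
            xs = list(xv); xs[j] += h
            Fs = F(xs)
            for i in range(n):
                J[i][j] = (Fs[i] - fv[i]) / h
        return J

    def invert(A):
        # Gauss-Jordan with partial pivoting: the only inversion in the run.
        M = [row[:] + [float(i == j) for j in range(n)]
             for i, row in enumerate(A)]
        for k in range(n):
            p = max(range(k, n), key=lambda r: abs(M[r][k]))
            M[k], M[p] = M[p], M[k]
            piv = M[k][k]
            M[k] = [c / piv for c in M[k]]
            for r in range(n):
                if r != k and M[r][k] != 0.0:
                    f = M[r][k]
                    M[r] = [a - f * b for a, b in zip(M[r], M[k])]
        return [row[n:] for row in M]

    fx = F(x)
    T = invert(jac(x, fx))                # T1 = J(x(0), beta H(0))^{-1}
    x = [xi - sum(T[i][j] * fx[j] for j in range(n))
         for i, xi in enumerate(x)]       # x(1) by (21)
    for _ in range(max_iter):
        fx = F(x)
        if max(abs(c) for c in fx) < tol:
            break
        J = jac(x, fx)
        R = [[(2.0 if i == j else 0.0) - r[j] for j in range(n)]
             for i, r in enumerate(mat(J, T))]
        T = mat(T, R)                     # Schulz refresh of the inverse
        x = [x[i] - sum(T[i][j] * fx[j] for j in range(n)) for i in range(n)]
    return x

# Illustrative system with a root at (1, 2): x0 + x1 = 3, x0*x1 = 2.
F = lambda v: (v[0] + v[1] - 3.0, v[0] * v[1] - 2.0)
root = steffensen_schulz(F, [1.2, 1.9])
```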
6. Numerical Testing
We employ here the second-order method of Newton (2), the second-order scheme of Steffensen (4), the fourth-order scheme of Zheng et al. (6), and the proposed fourth-order derivative-free method (17) to compare the numerical results obtained from these methods in solving test nonlinear systems. We also put (22) into test for Test Problems 1 and 4.
Test 1.
As the first problem, we take into account the following hard system of 10 nonlinear equations with 10 unknowns:
(23)5exp(x1-2)x2+8x3x4-5x63+2x7x10-x9=0,5tan(x1+2)+x23+7x34-2sin(x6)3+cos(x9x10)=0,x12+tan(x2)+2x3x4-5x63-x5x6x7x8x9x10=0,2tan(x12)+2x2+x32-5x53-x6+x8cos(x9)=0,10x12+cos(x2)+x32-5x63-4x9-2x8-x10=0,arccos(x12)sin(x2)+x32-2x54x6x9x10=0,x1x2x7+x35-5x53+x7-x8x10=0,x4sin(x2)+x3-15x52+x7+arccos(x8+x9-10x10)=0,10x1+x32-5x52+10x6x8+2x9-sin(x7)=0,x1sin(x2)-5x6-2x10x8-10x9+x10=0.
In this test problem, the approximate solution up to 5 decimal places is the following vector: x*≈(1.88885+0.20069I,0.57690-2.01025I,1.003311-0.271000I,2.94243+0.83281I,0.841597-0.133319I,-0.471176+0.882220I,0.123992+0.141636I, 1.58763-0.37199I,2.55259+0.18419I,-2.06453+1.58241I)T.
Test 2.
We consider the following nonlinear system:
(24)xixi+1-1=0,i=1,2,…,N-1,xNx1-1=0,
where its solution is the vector x*=(1,…,1)T for odd N.
Test 3.
We consider the following large scale nonlinear system:
(25)(xixi+1)2-3=0,i=1,2,…,N-1,xN(x1)2-1=0,
where its solution is the vector x*=(0.57735,3.0000,0.57735,3.0000,…,0.57735,3.0000)T.
We report the numerical results for solving Tests 1–3 in Tables 1–3 based on the given initial values. Note that an important aspect in implementing iterative methods for solving nonlinear systems is finding a robust initial guess to guarantee convergence. Some discussions regarding this are given in [16, 17].
Results of comparisons for different methods in Test 1 using x(0)=(1.88+0.2I, 0.57-2.01I, 1.00-0.27I, 2.94+0.83I, 0.84-0.13I, -0.47+0.88I, 0.12+0.14I, 1.58-0.37I, 2.55+0.18I, -2.06+1.58I)T.

Iterative methods        NM            SM            ZM            PM4
Number of iterations     7             9             4             5
The residual norm        3.73×10-164   1.49×10-153   4.14×10-149   9.37×10-173
COC                      2.01          1.99          3.99          4.00
The elapsed time (s)     0.35          1.00          1.23          0.37
Results of comparisons for different methods in Test 2 using x(0)=(2,…,2) and N=99.

Iterative methods        NM            SM            ZM            PM4
Number of iterations     8             8             5             5
The residual norm        2.86×10-121   2.86×10-121   0             0
COC                      2.00          2.00          4.00          4.00
The elapsed time (s)     0.70          0.79          2.82          0.65
Results of comparisons for different methods in Test 3 using x(0)=(2,…,2) and N=200.

Iterative methods        NM            SM            ZM            PM4
Number of iterations     9             17            5             7
The residual norm        2.56×10-110   1.24×10-126   5.66×10-75    2.13×10-107
COC                      1.97          2.00          2.69          3.97
The elapsed time (s)     5.95          8.59          23.23         4.95
The residual norm, along with the number of iterations and the computational time measured by the command Timing in Mathematica 8 [18], is reported in Tables 1–3. The computer specifications are Microsoft Windows XP, an Intel(R) Pentium(R) 4 CPU at 3.20 GHz, and 4 GB of RAM.
An efficient way to observe the behavior of the order of convergence is to use the local computational order of convergence (COC) that can be defined by ρ≈ln(∥F(x(n+1))∥/∥F(x(n))∥)/ln(∥F(x(n))∥/∥F(x(n-1))∥), in the N-dimensional case. We have used this index in the numerical comparisons listed in Tables 1–3 for each different method to illustrate their numerical orders.
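The COC formula above is a one-liner; the snippet below codes it directly and checks it on a synthetic residual sequence of ours that shrinks quadratically:

```python
import math

def coc(res):
    """Computational order of convergence from three successive residual
    norms ||F(x^(n-1))||, ||F(x^(n))||, ||F(x^(n+1))||."""
    r_prev, r_cur, r_next = res
    return math.log(r_next / r_cur) / math.log(r_cur / r_prev)

# A quadratically shrinking residual sequence gives COC close to 2:
rho = coc([1e-2, 1e-4, 1e-8])
```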
In the numerical comparisons, we have chosen 200-digit fixed point arithmetic using the command SetAccuracy[expr,200] in the written codes. Note that, for some iterative methods, the residual norm at the specified iteration exceeds the bound of 10-200; thus, we consider such approximations as the exact solution and denote the residual norm by 0 in some cells of the tables.
Results in Table 1 show that the new scheme can be considered for complex solutions of hard nonlinear systems. In this test, since the dimension of the nonlinear system is low, we have used the LU decomposition to avoid solving three separate linear systems. The computational time reported in Table 1 verifies that derivative-free methods with a small number of divided difference operators are reliable. Note that Figure 4(a) reveals the residual fall of (22) for solving Test 1. It also reveals a quadratic convergence in the obtained residuals per full cycle.
Using β=0.0001 in (22) so as to solve different tests.
Convergence history of (22) for Test 1
Convergence history of (22) for Test 4
In order to tackle large scale nonlinear systems, we have included Tests 2 and 3 in this work. As can be seen from Tables 2 and 3, the cases of size 99×99 and 200×200 are considered, respectively. It is obvious that for large scale nonlinear systems we face two difficulties in implementation: first, the LU decomposition takes time to compute, and, second, finding the numeric values of the Jacobian matrix in Newton-type methods is costly. However, to keep the comparison fair, we have not exploited the sparsity pattern of the Jacobian in the computations and have used the LU decomposition in solving the considered linear systems.
Test 4.
Consider the mixed Hammerstein integral equation [8]:
(26)x(s)=1+15∫01G(s,t)x(t)3dt,
where x∈C[0,1], s,t∈[0,1] and the kernel G is given by
(27)G(s,t)={(1-s)t,t≤s,s(1-t),t>s.
In order to solve this nonlinear integral equation, we transform it into a finite-dimensional problem by using the Gauss-Legendre quadrature formula ∫01f(t)dt≈∑j=1t wjf(tj), where the abscissas tj and the weights wj are determined for t=50 nodes. Denoting the approximation of x(ti) by xi (i=1,2,…,t), we obtain the system of nonlinear equations
(28)5xi-5-∑j=1taijxj3=0,
where, for i=1,2,…,t, we have
(29)aij={wjtj(1-ti), if j≤i, wjti(1-tj), if i<j,
wherein the abscissas tj and the weights wj are known.
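As a small numerical check of this discretization, the sketch below builds aij per (29) with a hardcoded 3-point Gauss-Legendre rule on [0,1] (instead of t=50) and solves (28) by a simple fixed-point iteration xi=1+(1/5)∑j aij xj^3 (instead of (22)); both simplifications are ours and serve only to keep the sketch dependency-free:

```python
import math

# 3-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1].
s = math.sqrt(0.6)
nodes = [(1 - s) / 2, 0.5, (1 + s) / 2]
weights = [5 / 18, 8 / 18, 5 / 18]
t = len(nodes)

# a_ij per (29): w_j t_j (1 - t_i) if j <= i, else w_j t_i (1 - t_j).
a = [[weights[j] * (nodes[j] * (1 - nodes[i]) if j <= i
                    else nodes[i] * (1 - nodes[j]))
      for j in range(t)] for i in range(t)]

# Fixed-point iteration for (28): x_i = 1 + (1/5) sum_j a_ij x_j^3.
# The map is a contraction here since the a_ij are small and positive.
x = [0.5] * t
for _ in range(100):
    x = [1 + 0.2 * sum(a[i][j] * x[j] ** 3 for j in range(t)) for i in range(t)]

# Residual of (28): 5 x_i - 5 - sum_j a_ij x_j^3.
residual = max(abs(5 * x[i] - 5 - sum(a[i][j] * x[j] ** 3 for j in range(t)))
               for i in range(t))
```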
Using the initial approximation x(0)=(0.5,…,0.5)T, we apply the proposed method (22) which is multiplication-rich to find the final solution vector of the nonlinear integral equation (28). Figure 4(b) puts on show the residual fall for solving the nonlinear integral equation (26) using (22) when t=50 is the size of the nonlinear system of equations.
From the numerical results in this section, it is clear that the accuracy increases in successive iterations, showing the stable nature of the methods. Also, like the existing methods, the presented methods show consistent convergence behavior. The computed computational order of convergence also verifies that the theoretical order is preserved.
7. Concluding Summary
We have presented a class of iterative methods for finding the solution of nonlinear systems. The construction of the suggested scheme lets us achieve a high convergence order while avoiding the computation of the Jacobian matrix, whose evaluation takes time for large scale nonlinear systems.
Some different numerical tests have been used to compare the consistency and stability of the proposed iterations in contrast to the existing methods. The numerical results obtained in Section 6 reverified the theoretical aspects of the paper. We have also revealed that the methods can efficiently be used for complex zeros as well.
Further modifications could be done to make the method hybrid so as to have a trust region. In sum, we can conclude that the novel iterative methods have an acceptable performance in solving systems of nonlinear equations.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors wish to thank the anonymous referees for their valuable suggestions on the first version of this paper.
References
[1] J. F. Traub, Iterative Methods for the Solution of Equations.
[2] E. S. Alaidarous, M. Z. Ullah, F. Ahmad, and A. S. Al-Fhaid, "An efficient higher-order quasilinearization method for solving nonlinear BVPs."
[3] Q. Zheng, F. Huang, X. Guo, and X. Feng, "Doubly-accelerated Steffensen's methods with memory and their applications on solving nonlinear ODEs."
[4] F. Soleymani and F. Soleimani, "Novel computational derivative-free methods for simple roots."
[5] F. Soleymani, "On a novel optimal quartically class of methods."
[6] W.-X. Wang, Y.-L. Shang, W.-G. Sun, and Y. Zhang, "Finding the roots of system of nonlinear equations by a novel filled function method."
[7] M. T. Darvishi and B.-C. Shin, "High-order Newton-Krylov methods to solve systems of nonlinear equations."
[8] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables.
[9] Q. Zheng, P. Zhao, and F. Huang, "A family of fourth-order Steffensen-type methods with the applications on solving nonlinear ODEs."
[10] M. Grau-Sánchez, M. Noguera, and S. Amat, "On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods."
[11] A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, "A modified Newton-Jarratt's composition."
[12] Z.-Z. Bai and H.-B. An, "A globally convergent Newton-GMRES method for large sparse systems of nonlinear equations."
[13] S. C. Eisenstat and H. F. Walker, "Globally convergent inexact Newton methods."
[14] F. Soleymani and P. S. Stanimirovic, "A higher order iterative method for computing the Drazin inverse."
[15] A. Cordero, F. Soleymani, J. R. Torregrosa, and S. Shateyi, "Basins of attraction for various Steffensen-type methods."
[16] F. Toutounian, J. Saberi-Nadjafi, and S. H. Taheri, "A hybrid of the Newton-GMRES and electromagnetic meta-heuristic methods for solving systems of nonlinear equations."
[17] S. Wagon.
[18] M. Trott.