Journal of Applied Mathematics, Volume 2014, Article ID 705375, doi:10.1155/2014/705375

Research Article

A Class of Steffensen-Type Iterative Methods for Nonlinear Systems

F. Soleymani,¹ M. Sharifi,¹ S. Shateyi,² and F. Khaksar Haghani¹

¹ Department of Mathematics, Islamic Azad University, Shahrekord Branch, Shahrekord, Iran
² Department of Mathematics, University of Venda, Private Bag X5050, Thohoyandou 0950, South Africa

Academic Editor: Alicia Cordero

Received 23 December 2013; Revised 17 February 2014; Accepted 3 March 2014; Published 10 April 2014

Copyright © 2014 F. Soleymani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A class of iterative methods without requiring the computation of Fréchet derivatives, including multistep variants, for solving systems of nonlinear equations is presented. By considering a frozen Jacobian, we provide a class of m-step methods with order of convergence m + 1. A new method, named the Steffensen-Schulz scheme, is also contributed. Numerical tests and comparisons with existing methods are included.

1. Introduction

In this paper, we consider the following system of nonlinear equations: (1) f_1(x_1, x_2, …, x_N) = 0, f_2(x_1, x_2, …, x_N) = 0, …, f_N(x_1, x_2, …, x_N) = 0, wherein each function f_i maps a vector x = (x_1, x_2, …, x_N)^T of the N-dimensional space ℝ^N into the real line ℝ. This system contains N nonlinear equations in N unknowns and can be expressed more compactly by defining the function F(x) = (f_1(x), f_2(x), …, f_N(x))^T. Hence, the nonlinear system (1) can be written in the form F(x) = 0, where f_1(x), f_2(x), …, f_N(x) are the coordinate functions of F; see the cited literature for further details and for application issues.

We assume that F(x) is a smooth function of x in the open convex set D ⊆ ℝ^N. There is a strong incentive to use derivative information as well as function values in order to solve the system (1). The most famous solver for such a problem is Newton's iteration, which is defined as (2) x^(n+1) = x^(n) − F′(x^(n))^{−1} F(x^(n)), n = 0, 1, 2, ….
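As a concrete illustration (our own sketch, not code from the paper), Newton's iteration (2) can be run on a hypothetical 2 × 2 system x² + y² = 4, xy = 1 with an analytic Jacobian, solving the linear correction step by Cramer's rule:

```python
# Minimal sketch of Newton's method (2) for a 2x2 system; the test system
# and the names `newton`, `F`, `J` are ours, chosen for illustration only.

def newton(F, J, x, iters=20):
    """x^(n+1) = x^(n) - F'(x^(n))^{-1} F(x^(n)); 2x2 solve via Cramer's rule."""
    for _ in range(iters):
        f1, f2 = F(x)
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        # Solve J * delta = F(x) for the Newton correction delta.
        d1 = (f1 * d - b * f2) / det
        d2 = (a * f2 - f1 * c) / det
        x = (x[0] - d1, x[1] - d2)
    return x

F = lambda v: (v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0)
J = lambda v: ((2 * v[0], 2 * v[1]), (v[1], v[0]))  # analytic Jacobian
root = newton(F, J, (2.0, 0.5))
```

For larger N one would of course solve the N × N linear system by a factorization rather than Cramer's rule.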

In this iterative method, we use the N × N Jacobian matrix F′(x), with entries F′(x)_{jk} = ∂f_j(x)/∂x_k. To be more precise, there are plenty of solvers to tackle the problem (1) or its scalar case. Among such methods, third-order iterative methods like the Halley and Chebyshev methods are considered less practical from a computational point of view, because they require the expensive second-order Fréchet derivative, in contrast to the quadratically convergent method (2), in which only the first-order Fréchet derivative needs to be calculated. As a matter of fact, for the considered problem (1), the first Fréchet derivative is a matrix with N² entries, while the second-order Fréchet derivative has N³ entries (without exploiting symmetry). On this account, a large amount of work is needed to evaluate the second derivative at each iteration.

To avoid using the Jacobian, whose computation is time-consuming for large scale systems, the divided difference in N-dimensional space was defined (Chapter 7 of the book of Ortega and Rheinboldt) as an N × N matrix with elements (3) [x, y; F]_{i,j} = (f_i(x_1, …, x_j, y_{j+1}, …, y_N) − f_i(x_1, …, x_{j−1}, y_j, …, y_N)) / (x_j − y_j), which prevents computing the first-order Fréchet derivative. Note that (3) is a component-by-component definition. Traub in his pioneering book introduced another tool, denoted J(x, H), to estimate the Jacobian matrix and to derive Steffensen's method for nonlinear systems as follows: (4) x^(n+1) = x^(n) − J(x^(n), H^(n))^{−1} F(x^(n)), n = 0, 1, 2, …, wherein (5) J(x^(n), H^(n)) = (F(x^(n) + H^(n) e_1) − F(x^(n)), …, F(x^(n) + H^(n) e_N) − F(x^(n))) H^(n)^{−1}, with H^(n) = diag(f_1(x^(n)), …, f_N(x^(n))).
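The estimate (5) can be sketched in code; the following is our own illustration on the same hypothetical 2 × 2 system (x² + y² = 4, xy = 1), with column j of J(x, H) built from the step h_j = f_j(x):

```python
# Sketch of Steffensen's method (4)-(5); the function names and the small
# guard on a vanishing step are our additions, not part of the paper.

def steffensen(F, x, tol=1e-12, iters=50):
    n = len(x)
    for _ in range(iters):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            break
        # Build J(x, H) of (5): column j is (F(x + h_j e_j) - F(x)) / h_j.
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            h = Fx[j] if abs(Fx[j]) > 1e-10 else 1e-10  # guard a vanishing step
            xp = list(x)
            xp[j] += h
            Fp = F(xp)
            for i in range(n):
                J[i][j] = (Fp[i] - Fx[i]) / h
        (a, b), (c, d) = J            # 2x2 case: solve J * delta = F by Cramer
        det = a * d - b * c
        x = [x[0] - (Fx[0] * d - b * Fx[1]) / det,
             x[1] - (a * Fx[1] - Fx[0] * c) / det]
    return x

F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
root = steffensen(F, [1.9, 0.5])
```

Note that the starting point must already be reasonably close to the root, which is exactly the small-basin behavior of Steffensen's scheme discussed later in Section 5.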

The primary aim of the present study is to achieve a high rate of convergence in solving (1) using (5). Hence, we first propose a new iterative method with fourth-order convergence that finds both real and complex solutions. The new method does not even need the evaluation of a single first-order Fréchet derivative, let alone higher-order ones. Next, by considering a frozen Jacobian matrix, we suggest a general m-step class of iterative methods with arbitrary order of convergence and a higher computational efficiency index.

The rest of this paper is organized as follows. In Section 2, the construction of the scheme is offered, together with the convergence analysis showing that the suggested method has fourth order. In Section 3, we extend the new method to a multistep class of iterations. Section 4 discusses the implementation and efficiency of the iterative methods. Section 5 contributes the Steffensen-Schulz iterative method, introduced here for the first time. Section 6 furnishes numerical tests to illustrate the accuracy and efficiency of the proposed approach. Section 7 ends the paper with short conclusions and pointers to future research.

2. Derivation

Zheng et al. presented the following iterative method: (6) y^(n) = x^(n) − [x^(n), w^(n); F]^{−1} F(x^(n)), n = 0, 1, 2, …, x^(n+1) = y^(n) − [x^(n), y^(n); F]^{−1} ([x^(n), y^(n); F] − [y^(n), w^(n); F] + [x^(n), w^(n); F]) [x^(n), y^(n); F]^{−1} F(y^(n)), wherein w^(n) = x^(n) + F(x^(n)); it possesses fourth-order convergence to a simple solution of the system (1). As can be seen, it requires the evaluations F(x^(n)), F(y^(n)), F(w^(n)) and three divided difference operators of order one, based on (3), to attain fourth-order convergence. This process is costly for challenging systems of nonlinear equations.

Now, in order to reach fourth-order convergence without imposing the computation of even the first-order Fréchet derivative or three divided difference operators, we consider a three-step structure with the same correcting factor in each substep as follows: (7) y^(n) = x^(n) − [M^(n)]^{−1} F(x^(n)), z^(n) = y^(n) − [M^(n)]^{−1} F(y^(n)), x^(n+1) = z^(n) − [M^(n)]^{−1} F(z^(n)), wherein we could use the component-by-component approximation (3): (8) M^(n) = [x^(n), w^(n); F], or the estimation (5) introduced by Traub: (9) M^(n) = J(x^(n), H^(n)).

Note that (8) and (9) are not equal. Per computing step, the method (7) requires computing F at three different points, without the computation of the Fréchet derivatives that are costly for large scale problems.
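A minimal sketch of the three-step scheme (7), with M^(n) estimated as in (9)/(5) and frozen across the three substeps, on the same hypothetical 2 × 2 system (x² + y² = 4, xy = 1); the names and the guard on a vanishing step are our own:

```python
# One cycle of (7): M is built once and reused in all three substeps.

def pm4_step(F, x):
    n = len(x)
    Fx = F(x)
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        h = Fx[j] if abs(Fx[j]) > 1e-10 else 1e-10   # guard a vanishing step
        xp = list(x)
        xp[j] += h
        Fp = F(xp)
        for i in range(n):
            M[i][j] = (Fp[i] - Fx[i]) / h
    (a, b), (c, d) = M              # 2x2 demo; larger N would factor M once
    det = a * d - b * c
    solve = lambda r: [(r[0] * d - b * r[1]) / det, (a * r[1] - r[0] * c) / det]
    y = [xi - di for xi, di in zip(x, solve(Fx))]       # first substep
    z = [yi - di for yi, di in zip(y, solve(F(y)))]     # second substep, M frozen
    return [zi - di for zi, di in zip(z, solve(F(z)))]  # third substep

F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
x = [1.9, 0.5]
for _ in range(4):
    if max(abs(v) for v in F(x)) < 1e-12:
        break
    x = pm4_step(F, x)
```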

The following theorem will be demonstrated by means of the N-dimensional Taylor expansion of the functions, using the estimation (8) in (7). We here include some basic notions which are important in the proof. Let F : D ⊆ ℝ^N → ℝ^N be sufficiently Fréchet differentiable in D. By using the notation introduced in the literature, the q-th derivative of F at u ∈ ℝ^N, q ≥ 1, is the q-linear function F^(q)(u) : ℝ^N × ⋯ × ℝ^N → ℝ^N such that F^(q)(u)(v_1, …, v_q) ∈ ℝ^N. Thus, we have the following:

F^(q)(u)(v_1, …, v_{q−1}, ·) ∈ L(ℝ^N),

F^(q)(u)(v_{σ(1)}, …, v_{σ(q)}) = F^(q)(u)(v_1, …, v_q), for all permutations σ of {1, 2, …, q}.

Hence, we write F^(q)(u)(v_1, …, v_q) = F^(q)(u) v_1 ⋯ v_q and also F^(q)(u) v^{q−1} F^(p)(u) v^p = F^(q)(u) F^(p)(u) v^{q+p−1}. It is well known that, for x* + h ∈ ℝ^N lying in a neighborhood of a solution x* of the nonlinear system F(x) = 0, Taylor's expansion may be written as follows (assuming that the Jacobian matrix F′(x*) is nonsingular): F(x* + h) = F′(x*)[h + Σ_{q=2}^{p−1} C_q h^q] + O(h^p), where C_q = (1/q!)[F′(x*)]^{−1} F^(q)(x*), q ≥ 2. We observe that C_q h^q ∈ ℝ^N, since F^(q)(x*) ∈ L(ℝ^N × ⋯ × ℝ^N, ℝ^N) and [F′(x*)]^{−1} ∈ L(ℝ^N).

In addition, we can express the Jacobian as F′(x* + h) = F′(x*)[I + Σ_{q=2}^{p−1} q C_q h^{q−1}] + O(h^p), wherein I is the identity matrix of the same order as the Jacobian matrix. Therefore, q C_q h^{q−1} ∈ L(ℝ^N). In the sequel we denote by e^(n) = x^(n) − x* the error in the n-th iteration. The equation (10) e^(n+1) = L (e^(n))^p + O((e^(n))^{p+1}), where L is a p-linear function, L ∈ L(ℝ^N × ⋯ × ℝ^N, ℝ^N), is called the error equation, and p is the order of convergence.

Remark 1.

The error e^(n) = x^(n) − x* in the n-th iteration is a vector, and (e^(n))^p = (e^(n), e^(n), …, e^(n)) denotes the p-tuple on which the p-linear function L acts.

Theorem 2.

Let F : D ⊆ ℝ^N → ℝ^N be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x* ∈ ℝ^N, a simple solution of the system F(x) = 0. Suppose that F′(x) is continuous and nonsingular at x*. Then the sequence {x^(n)}_{n≥0} obtained from the iterative method (7) with (8) converges to x* with convergence rate 4, and the error equation reads (11) e^(n+1) = C_2^3 (I + F′(x*)) (2I + F′(x*))^2 (e^(n))^4 + O((e^(n))^5).

Proof.

Using similar notation and terminology as above, we have F(x^(n)) = F′(x*)[e^(n) + C_2 (e^(n))^2 + C_3 (e^(n))^3 + C_4 (e^(n))^4] + O((e^(n))^5), and, setting b^(n) = e^(n) + F(x^(n)) (so that w^(n) − x* = b^(n)), we write (12) F(w^(n)) = F′(x*)[b^(n) + C_2 (b^(n))^2 + C_3 (b^(n))^3 + C_4 (b^(n))^4] + O((b^(n))^5), where C_k = (1/k!)[F′(x*)]^{−1} F^(k)(x*), k = 2, 3, …. Then, (13) [x^(n), w^(n); F]^{−1} F(x^(n)) = e^(n) − C_2 (I + F′(x*)) (e^(n))^2 − (C_3 (I + F′(x*)) (2I + F′(x*)) − C_2^2 (2I + F′(x*)(2I + F′(x*)))) (e^(n))^3 + O((e^(n))^4), and subsequently the expression for y^(n) becomes (14) y^(n) − x* = C_2 (I + F′(x*)) (e^(n))^2 + ⋯ + O((e^(n))^4). The Taylor expansion in the second step using (14) yields (15) z^(n) − x* = C_2^2 (I + F′(x*)) (2I + F′(x*)) (e^(n))^3 + ⋯ + O((e^(n))^5). Therefore, (16) F(z^(n)) = F′(x*)[C_2^2 (I + F′(x*)) (2I + F′(x*)) (e^(n))^3 + ⋯] + O((e^(n))^5). Next, reasoning analogously with (15) and (16) for the third substep, we obtain the error equation (11). Consequently, taking (11) into account, it can be concluded that the order of convergence of the proposed method is four.

More importantly, we can derive a class of iterations free from derivatives using the estimation (5). For example, the method (7) with (9) results in (17) y^(n) = x^(n) − J(x^(n), H^(n))^{−1} F(x^(n)), z^(n) = y^(n) − J(x^(n), H^(n))^{−1} F(y^(n)), x^(n+1) = z^(n) − J(x^(n), H^(n))^{−1} F(z^(n)).

Note that it could be shown, in a similar way to the previous theorem, that (17) possesses fourth order of convergence. The implementation of (17) depends on the involved linear algebra problems. An interesting point in the new method (17) is that the LU decomposition of J needs to be done only once, and it could effectively be used three times per computing step to increase the rate of convergence without imposing much computational burden.
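The factor-once, solve-three-times pattern can be sketched as follows (our own illustration on a hypothetical 3 × 3 matrix; Doolittle LU without pivoting for brevity, which production code would add):

```python
# One LU factorization serves every right-hand side of a cycle of (17).

def lu_factor(A):
    """Return (L, U) with A = L U, L unit lower triangular (no pivoting)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution L y = b, then back substitution U x = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]]
L, U = lu_factor(A)   # done once per cycle
solutions = [lu_solve(L, U, b)   # reused for each new right-hand side
             for b in ([5.0, 8.0, 8.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])]
```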

3. An m-Step Class

This section presents a general class of multistep iterative methods. In fact, the new scheme (17) can simply be extended by keeping the Jacobian estimate J(x^(n), H^(n)) frozen. In such a way, we are able to propose a general m-step multipoint class of iterative methods with the following structure: (18) ϑ_1^(n) = x^(n) − ϖ_1^(n), ϑ_2^(n) = ϑ_1^(n) − ϖ_2^(n), …, x^(n+1) = ϑ_m^(n) = ϑ_{m−1}^(n) − ϖ_m^(n), wherein ϖ_i^(n) is obtained from the linear system J(x^(n), H^(n)) ϖ_i^(n) = F(ϑ_{i−1}^(n)), i = 1, …, m, with ϑ_0^(n) = x^(n). We remark that, in this structure, the LU factorization of the Jacobian estimate is computed only once. This reduces the computational load of the linear algebra involved in implementing (18).

In the iterative process (18), each added step imposes one more N-dimensional function evaluation, whose cost is N scalar evaluations, while the convergence order improves by one over that of the previous (m − 1)-step scheme. By mathematical induction, it is easy to deduce the following theorem for (18).

Theorem 3.

Under the same conditions as in Theorem 2, the m-step iterative process (18) has local convergence order m + 1, using m + 1 evaluations of the function F and one first-order divided difference operator per full iteration.

Proof.

The proof of this theorem is based on mathematical induction and is straightforward.

As an example, the five-step sixth-order method from the new class has the following structure: (19) ϑ 1 ( n ) = x ( n ) - ϖ 1 ( n ) , ϑ 2 ( n ) = ϑ 1 ( n ) - ϖ 2 ( n ) , ϑ 3 ( n ) = ϑ 2 ( n ) - ϖ 3 ( n ) , ϑ 4 ( n ) = ϑ 3 ( n ) - ϖ 4 ( n ) , x ( n + 1 ) = ϑ 5 ( n ) = ϑ 4 ( n ) - ϖ 5 ( n ) .
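The whole class (18) can be sketched with m as a parameter; the following is our own illustration, again on the hypothetical 2 × 2 system x² + y² = 4, xy = 1, with m = 5 giving the sixth-order scheme (19):

```python
# One cycle of (18): the frozen operator is built once and reused m times.

def m_step_cycle(F, x, m):
    n = len(x)
    Fx = F(x)
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        h = Fx[j] if abs(Fx[j]) > 1e-10 else 1e-10   # guard a vanishing step
        xp = list(x)
        xp[j] += h
        Fp = F(xp)
        for i in range(n):
            M[i][j] = (Fp[i] - Fx[i]) / h
    (a, b), (c, d) = M                 # 2x2 demo; larger N would factor M once
    det = a * d - b * c
    theta = x
    for _ in range(m):                 # theta_i = theta_{i-1} - M^{-1} F(theta_{i-1})
        r = F(theta)
        theta = [theta[0] - (r[0] * d - b * r[1]) / det,
                 theta[1] - (a * r[1] - r[0] * c) / det]
    return theta

F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
x = [1.9, 0.5]
for _ in range(3):
    if max(abs(v) for v in F(x)) < 1e-12:
        break
    x = m_step_cycle(F, x, 5)   # the five-step sixth-order member (19)
```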

4. Complexity

In the iterative method (17), one solves one linear system of equations per computing step with three right-hand side vectors F(x^(n)), F(y^(n)), and F(z^(n)), and a similar procedure applies to (19). In such a case, one computes a factorization of the matrix once and uses it repeatedly. It is known that the cost (number of products/quotients) of solving the associated linear system by LU decomposition is (1/3)N³ + N² − (1/3)N (including the LU factorization and the two triangular solves), where N is the size of the system. Moreover, if one has k systems with the same matrix, then the total cost is (1/3)N³ + kN² − (1/3)N.
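These counts can be checked numerically; the sketch below (our own, with a hypothetical size N = 100) compares reusing one factorization for k = 3 right-hand sides against factoring three times:

```python
# Operation count from the formula above: (1/3)N^3 + k N^2 - (1/3)N
# products/quotients for k systems sharing one matrix.

def cost_k_systems(N, k):
    return N ** 3 / 3 + k * N ** 2 - N / 3

N = 100
reuse = cost_k_systems(N, 3)        # one factorization, three solves
refactor = 3 * cost_k_systems(N, 1) # factoring anew for each system
```

The factorization term N³/3 dominates, so sharing it across the substeps of (17) or (19) is where the class saves work.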

The functional cost of (17) is as follows: N scalar function evaluations for each of F(x^(n)), F(y^(n)), and F(z^(n)), plus N² − N further scalar evaluations for the estimation (5) (since F(x^(n)) has already been computed).

We provide a comparison of efficiency indices for the methods (2), denoted NM, (4), denoted SM, (6), denoted ZM, and the new methods (17), denoted PM4, and (19), denoted PM6, based on the computational efficiency index, also known in the literature as the operational efficiency index.

The computational efficiency index of an iterative method is defined by E = p^{1/C}, where p is the order of convergence and C stands for the total computational cost per iteration, in terms of the number of functional evaluations and the number of products/quotients for the LU decomposition and the two triangular solves. A log plot of the efficiencies according to this definition is given in Figure 1. It is evident that, for higher N, the new method (19) dominates the other well-known methods.

The log plot of the efficiency indices for different methods (a) when N = 4 , , 10 and (b) when N = 30 , , 50 .

Note that E_NM = E_SM = 2^{3/(N(2 + 6N + N²))}, E_ZM = 4^{3/(N(−2 + 15N + 5N²))}, and, for the proposed methods, E_PM4 = 4^{3/(N(8 + 12N + N²))} and E_PM6 = 6^{3/(N(14 + 18N + N²))}.
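These closed forms can be evaluated directly; the following sketch (our own, for the hypothetical size N = 40, inside the range of Figure 1(b)) reproduces the ordering visible in the plot:

```python
# Efficiency indices E = p^(1/C) with the per-iteration costs from above.

def eff(p, cost):
    return p ** (1.0 / cost)

N = 40
E_NM  = eff(2, N * (2 + 6 * N + N ** 2) / 3)        # Newton = Steffensen
E_ZM  = eff(4, N * (-2 + 15 * N + 5 * N ** 2) / 3)  # Zheng et al. (6)
E_PM4 = eff(4, N * (8 + 12 * N + N ** 2) / 3)       # proposed (17)
E_PM6 = eff(6, N * (14 + 18 * N + N ** 2) / 3)      # proposed (19)
```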

Although the new scheme avoids the drawbacks of NM or ZM, namely computing Fréchet derivatives or three different divided difference operators, special attention should be paid to large scale nonlinear systems. In fact, for large scale sparse nonlinear systems of equations, there are two main shortcomings: first, filling in numeric values of the Jacobian matrix for Newton-type methods and, second, the LU decomposition, which is time-consuming. The suggested method does not need the computation of Fréchet derivatives and thus resolves the first problem. To address the second problem, the new scheme should be coded in inexact form (see, e.g., [12, 13]) using very efficient solvers, such as GMRES, for the solution of the linear systems.

5. A Multiplication-Rich Scheme

A contribution of this study lies in a simple trick that produces derivative-free schemes which are inversion-free as well. The general class (18) is free from Fréchet derivatives per computing step, which makes it attractive, but a simple trick can also make the process inversion-free. That is to say, computing the inverse of J in (18) is expensive, since J can be a dense matrix. Although we use the LU decomposition per cycle, we could instead apply an iterative inverse finder to avoid computing an inverse (or solving a system), at the price of further matrix-matrix multiplications.

Such a procedure can be realized using the well-known Schulz-type methods. Here, we apply the Schulz inverse finder for one step of the matrix iterative method (18) as follows: (20) P_n = (2/(σ_1² + σ_r²)) J_n^*, P_{n+1} = P_n (2I − J_n P_n), x^(n+1) = x^(n) − P_{n+1} F(x^(n)), wherein J_n = J(x^(n), H^(n)) = (F(x^(n) + H^(n) e_1) − F(x^(n)), …, F(x^(n) + H^(n) e_N) − F(x^(n))) H^(n)^{−1}, and σ_1, σ_r are the largest and smallest singular values of J. Unfortunately, convergence of this iterative scheme happens only for carefully chosen initial matrices, and it might not be quadratic. In fact, the Steffensen method (SM) and its variants obtained by the class (18), for instance the approach (19), can become inversion-free easily and efficiently.
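The Schulz update X ← X(2I − A X) by itself can be sketched as follows. Our illustration replaces the singular-value initialization of (20) with the alternative standard choice X_0 = A^T/(‖A‖_1 ‖A‖_∞), which also guarantees convergence (since ‖A‖_2² ≤ ‖A‖_1 ‖A‖_∞) without requiring singular values:

```python
# Schulz iteration for the inverse of a small dense matrix, pure Python.

def schulz_inverse(A, iters=40):
    n = len(A)
    # X0 = A^T / (||A||_1 * ||A||_inf): all eigenvalues of A*X0 land in (0, 1],
    # so the residual I - A*X0 has spectral radius below 1.
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(A[i][j]) for j in range(n)) for i in range(n))
    X = [[A[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]
    def matmul(P, Q):
        return [[sum(P[i][k] * Q[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    for _ in range(iters):
        AX = matmul(A, X)
        R = [[(2.0 if i == j else 0.0) - AX[i][j] for j in range(n)]
             for i in range(n)]       # 2I - A X
        X = matmul(X, R)              # residual norm squares at every step
    return X

A = [[4.0, 1.0], [2.0, 3.0]]          # hypothetical test matrix, inverse known
X = schulz_inverse(A)
```

The residual I − AX squares at each step, which is the quadratic behavior that the coupled scheme (20) may lose when x^(n) moves.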

We now illustrate the basins of attraction for SM and PM4 in the complex square [−4, 4] × [−4, 4], with stopping criterion |x_{n+1} − x_n| ≤ 10^{−6}. The convergence radius for tackling nonlinear problems can be broadened by introducing a free nonzero parameter β, which would be a constant array in the multivariate case. Introducing β yields the following form of Steffensen's method: (21) H^(n) = diag(β f_1(x^(n)), …, β f_N(x^(n))), J(x^(n), β H^(n)) = (F(x^(n) + H^(n) e_1) − F(x^(n)), …, F(x^(n) + H^(n) e_N) − F(x^(n))) H^(n)^{−1}, x^(n+1) = x^(n) − J(x^(n), β H^(n))^{−1} F(x^(n)).

Figure 2(a) shows the basins for SM without imposing small values of β, that is, with β = 1, and Figure 2(b) shows the basins of attraction of PM4. Figure 3 shows the basins of attraction of SM and PM4 with a small value of β.

Attraction basins of the polynomial z 3 - 1 = 0 in the complex plane (shaded according to the number of iterations).

β = 1 in SM

β = 1 in PM4

Attraction basins of the polynomial z 3 - 1 = 0 in the complex plane (shaded according to the number of iterations).

β = 0.0001 in SM

β = 0.0001 in PM4

It is clear that choosing a small value for β results in larger basins of attraction. Hence, we exploit this idea and propose an inversion-free method for solving nonlinear systems of equations as follows: (22) choose a small value for β; compute x^(1) by (21) and set T_1 = J(x^(0), β H^(0))^{−1}; then T_{n+1} = T_n (2I − J_n T_n), x^(n+1) = x^(n) − T_{n+1} F(x^(n)), n = 1, 2, …, wherein J_n = J(x^(n), β H^(n)).

We name this multiplication-rich method the Steffensen-Schulz iteration, since it is derivative-free and inversion-free at the same time. It requires only one matrix inversion in the whole process, namely in computing x^(1).
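A rough sketch of (22) on the hypothetical 2 × 2 system x² + y² = 4, xy = 1 (the function names, the step floor, and the stopping test are our additions): the Jacobian estimate is inverted exactly once, and afterwards only matrix-matrix and matrix-vector products update the approximate inverse T.

```python
# Steffensen-Schulz sketch: one explicit inversion, then Schulz updates only.

def F(v):
    return [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]

def J(x, beta):
    """Derivative-free estimate (21): column j uses the step beta * f_j(x)."""
    Fx = F(x)
    n = len(x)
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        h = beta * Fx[j]
        if abs(h) < 1e-8:
            h = 1e-8            # floor the step near the root (our safeguard)
        xp = list(x)
        xp[j] += h
        Fp = F(xp)
        for i in range(n):
            M[i][j] = (Fp[i] - Fx[i]) / h
    return M

def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def steffensen_schulz(x, beta=1e-4, iters=30, tol=1e-10):
    (a, b), (c, d) = J(x, beta)
    det = a * d - b * c
    T = [[d / det, -b / det], [-c / det, a / det]]  # the only explicit inversion
    for _ in range(iters):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            break
        JT = matmul(J(x, beta), T)
        R = [[(2.0 if i == j else 0.0) - JT[i][j] for j in range(2)]
             for i in range(2)]
        T = matmul(T, R)                            # Schulz update of T
        x = [x[i] - sum(T[i][k] * Fx[k] for k in range(2)) for i in range(2)]
    return x

root = steffensen_schulz([1.9, 0.5])
```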

6. Numerical Testing

We employ here the second-order method of Newton (2), the second-order scheme of Steffensen (4), the fourth-order scheme of Zheng et al. (6), and the proposed fourth-order derivative-free method (17), and compare the numerical results obtained from these methods on test nonlinear systems. We also put (22) to the test on Test Problems 1 and 4.

Test 1.

As the first problem, we take into account the following hard system of 10 nonlinear equations in 10 unknowns: (23)
5 exp(x_1 − 2) x_2 + 8 x_3 x_4 − 5 x_6^3 + 2 x_7 x_{10} − x_9 = 0,
5 tan(x_1 + 2) + x_2^3 + 7 x_3^4 − 2 sin(x_6)^3 + cos(x_9 x_{10}) = 0,
x_1^2 + tan(x_2) + 2 x_3 x_4 − 5 x_6^3 − x_5 x_6 x_7 x_8 x_9 x_{10} = 0,
2 tan(x_1^2) + 2 x_2 + x_3^2 − 5 x_5^3 − x_6 + x_8 cos(x_9) = 0,
10 x_1^2 + cos(x_2) + x_3^2 − 5 x_6^3 − 4 x_9 − 2 x_8 − x_{10} = 0,
arccos(x_1^2) sin(x_2) + x_3^2 − 2 x_5^4 x_6 x_9 x_{10} = 0,
x_1 x_2 x_7 + x_3^5 − 5 x_5^3 + x_7 − x_8 x_{10} = 0,
x_4 sin(x_2) + x_3 − 15 x_5^2 + x_7 + arccos(x_8 + x_9 − 10 x_{10}) = 0,
10 x_1 + x_3^2 − 5 x_5^2 + 10 x_6 x_8 + 2 x_9 − sin(x_7) = 0,
x_1 sin(x_2) − 5 x_6 − 2 x_{10} x_8 − 10 x_9 + x_{10} = 0.

In this test problem, the approximate solution, to the digits shown, is the vector x* ≈ (1.88885 + 0.20069i, 0.57690 − 2.01025i, 1.003311 − 0.271000i, 2.94243 + 0.83281i, 0.841597 − 0.133319i, −0.471176 + 0.882220i, 0.123992 + 0.141636i, 1.58763 − 0.37199i, 2.55259 + 0.18419i, −2.06453 + 1.58241i)^T.

Test 2.

We consider the following nonlinear system: (24) x_i x_{i+1} − 1 = 0, i = 1, 2, …, N − 1, x_N x_1 − 1 = 0, whose solution is the vector x* = (1, …, 1)^T for odd N.
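Test 2 is easy to express as a residual function; the sketch below (the name F_test2 is ours) checks that x* = (1, …, 1)^T makes every residual vanish at the paper's size N = 99:

```python
# Residual function of Test 2: x_i * x_{i+1} - 1 = 0 for i = 1..N-1,
# closed cyclically by x_N * x_1 - 1 = 0.

def F_test2(x):
    N = len(x)
    return [x[i] * x[i + 1] - 1.0 for i in range(N - 1)] + [x[N - 1] * x[0] - 1.0]

residual = F_test2([1.0] * 99)   # evaluate at x* = (1, ..., 1)^T, N = 99
```

Note that (1, …, 1)^T annihilates the residual for any N; the restriction to odd N in the text concerns the structure of the real solution set, since for even N the system also admits alternating solutions of the form (c, 1/c, c, 1/c, …).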

Test 3.

We consider the following large scale nonlinear system: (25) (x_i x_{i+1})^2 − 3 = 0, i = 1, 2, …, N − 1, x_N (x_1)^2 − 1 = 0, whose solution is the vector x* ≈ (0.57735, 3.0000, 0.57735, 3.0000, …, 0.57735, 3.0000)^T.

We report the numerical results for solving Tests 1–3 in Tables 1–3, based on the given initial values. Note that an important aspect of implementing iterative methods for nonlinear systems is finding a robust initial guess that guarantees convergence. Some discussions in this regard are given in [16, 17].

Results of comparisons for different methods in Test 1 using x ( 0 ) = ( 1.88 + 0.2 I , 0.57 - 2.01 I , 1.00 - 0.27 I , 2.94 + 0.83 I , 0.84 - 0.13 I , - 0.47 + 0.88 I , 0.12 + 0.14 I , 1.58 - 0.37 I , 2.55 + 0.18 I , - 2.06 + 1.58 I ) T .

Iterative methods NM SM ZM PM4
Number of iterations 7 9 4 5
The residual norm 3.73 × 10^{−164} 1.49 × 10^{−153} 4.14 × 10^{−149} 9.37 × 10^{−173}
COC 2.01 1.99 3.99 4.00
The elapsed time 0.35 1.00 1.23 0.37

Results of comparisons for different methods in Test 2 using x ( 0 ) = ( 2 , , 2 ) and N = 99 .

Iterative methods NM SM ZM PM4
Number of iterations 8 8 5 5
The residual norm 2.86 × 10^{−121} 2.86 × 10^{−121} 0 0
COC 2.00 2.00 4.00 4.00
The elapsed time 0.70 0.79 2.82 0.65

Results of comparisons for different methods in Test 3 using x ( 0 ) = ( 2 , , 2 ) and N = 200 .

Iterative methods NM SM ZM PM4
Number of iterations 9 17 5 7
The residual norm 2.56 × 10^{−110} 1.24 × 10^{−126} 5.66 × 10^{−75} 2.13 × 10^{−107}
COC 1.97 2.00 2.69 3.97
The elapsed time 5.95 8.59 23.23 4.95

The residual norm, the number of iterations, and the computational time (using the command Timing in Mathematica 8) are reported in Tables 1–3. The computer specifications are Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU at 3.20 GHz, with 4 GB of RAM.

An efficient way to observe the behavior of the order of convergence is the local computational order of convergence (COC), which can be defined by ρ ≈ ln(‖F(x^(n+1))‖/‖F(x^(n))‖) / ln(‖F(x^(n))‖/‖F(x^(n−1))‖) in the N-dimensional case. We have used this index in the numerical comparisons listed in Tables 1–3 to illustrate the numerical order of each method.
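The COC formula is easy to compute from three successive residual norms; the sample residuals below are illustrative only (not taken from the paper's tables) and mimic a fourth-order method:

```python
import math

# rho ~ ln(||F(x^(n+1))|| / ||F(x^(n))||) / ln(||F(x^(n))|| / ||F(x^(n-1))||)

def coc(r_prev, r_curr, r_next):
    return math.log(r_next / r_curr) / math.log(r_curr / r_prev)

# Hypothetical residuals: each exponent quadruples, as for a 4th-order scheme.
rho = coc(1e-2, 1e-8, 1e-32)
```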

In the numerical comparisons, we have chosen 200 digits of fixed-point arithmetic using the command SetAccuracy[expr, 200] in the written codes. Note that, for some iterative methods, the residual norm at the specified iteration falls below the bound 10^{−200}; we consider such approximations as the exact solution and denote the residual norm by 0 in the corresponding cells of the tables.

Results in Table 1 show that the new scheme can also be applied to complex solutions of hard nonlinear systems. In this test, since the dimension of the nonlinear system is low, we have used the LU decomposition to avoid solving the 3 linear systems from scratch. The computational time reported in Table 1 confirms that derivative-free methods with a small number of divided difference operators are reliable. Note that Figure 4(a) shows the residual fall of (22) when solving Test 1; it reveals quadratic convergence in the obtained residuals per full cycle.

Using β = 0.0001 in (22) so as to solve different tests.

Convergence history of (22) for Test 1

Convergence history of (22) for Test 4

In order to tackle large scale nonlinear systems, we have included Tests 2 and 3 in this work. As can be seen from Tables 2 and 3, the cases 99 × 99 and 200 × 200 are considered, respectively. For large scale nonlinear systems there are two implementation difficulties: first, the LU decomposition takes time and, second, finding the numeric values of the Jacobian matrix in Newton-type methods is costly. However, to have a fair comparison, we have not exploited the sparsity pattern of the Jacobian in the computations and have used the LU decomposition for all the linear systems considered.

Test 4.

Consider the mixed Hammerstein integral equation: (26) x(s) = 1 + (1/5) ∫_0^1 G(s, t) x(t)^3 dt, where x ∈ C[0, 1], s, t ∈ [0, 1], and the kernel G is given by (27) G(s, t) = (1 − s)t if t ≤ s, and s(1 − t) if t > s. In order to solve this nonlinear integral equation, we transform it into a finite-dimensional problem by using the Gauss-Legendre quadrature formula ∫_0^1 f(t) dt ≈ Σ_{j=1}^{t} w_j f(t_j), where the abscissas t_j and the weights w_j are determined for t = 50 nodes. Denoting the approximation of x(t_i) by x_i (i = 1, 2, …, t), we obtain the system of nonlinear equations (28) 5 x_i − 5 − Σ_{j=1}^{t} a_{ij} x_j^3 = 0, where, for i = 1, 2, …, t, (29) a_{ij} = w_j t_j (1 − t_i) if j ≤ i, and w_j t_i (1 − t_j) if i < j, wherein the abscissas t_j and the weights w_j are known.

Using the initial approximation x^(0) = (0.5, …, 0.5)^T, we apply the proposed multiplication-rich method (22) to find the final solution vector of the nonlinear integral equation (28). Figure 4(b) displays the residual fall when solving the nonlinear integral equation (26) by (22) with t = 50 as the size of the nonlinear system of equations.

From the numerical results in this section, it is clear that the accuracy increases in successive iterations, showing the stable nature of the methods. Also, like the existing methods, the presented method shows consistent convergence behavior. From the calculation of the computational order of convergence, it is also verified that the theoretical order of convergence is preserved.

7. Concluding Summary

We have presented a class of iterative methods for finding solutions of nonlinear systems. The construction of the suggested scheme lets us achieve high convergence orders while avoiding the computation of the Jacobian matrix, whose evaluation takes time for large scale nonlinear systems.

Several numerical tests have been used to compare the consistency and stability of the proposed iterations with the existing methods. The numerical results obtained in Section 6 reverified the theoretical aspects of the paper. We have also shown that the methods can efficiently be used for complex zeros.

Further modifications could make the method hybrid so as to have a trust region. In sum, we conclude that the novel iterative methods have an acceptable performance in solving systems of nonlinear equations.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors wish to thank the anonymous referees for their valuable suggestions on the first version of this paper.

References

1. J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall Series in Automatic Computation, Prentice-Hall, New York, NY, USA, 1964.
2. E. S. Alaidarous, M. Z. Ullah, F. Ahmad, and A. S. Al-Fhaid, "An efficient higher-order quasilinearization method for solving nonlinear BVPs," Journal of Applied Mathematics, vol. 2013, Article ID 259371, 11 pages, 2013.
3. Q. Zheng, F. Huang, X. Guo, and X. Feng, "Doubly-accelerated Steffensen's methods with memory and their applications on solving nonlinear ODEs," Journal of Computational Analysis and Applications, vol. 15, no. 5, pp. 886–891, 2013.
4. F. Soleymani and F. Soleimani, "Novel computational derivative-free methods for simple roots," Fixed Point Theory, vol. 13, no. 1, pp. 247–258, 2012.
5. F. Soleymani, "On a novel optimal quartically class of methods," Far East Journal of Mathematical Sciences, vol. 58, no. 2, pp. 199–206, 2011.
6. W.-X. Wang, Y.-L. Shang, W.-G. Sun, and Y. Zhang, "Finding the roots of system of nonlinear equations by a novel filled function method," Abstract and Applied Analysis, vol. 2011, Article ID 209083, 9 pages, 2011.
7. M. T. Darvishi and B.-C. Shin, "High-order Newton-Krylov methods to solve systems of nonlinear equations," Journal of the Korean Society for Industrial and Applied Mathematics, vol. 15, no. 1, pp. 19–30, 2011.
8. J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, NY, USA, 1970.
9. Q. Zheng, P. Zhao, and F. Huang, "A family of fourth-order Steffensen-type methods with the applications on solving nonlinear ODEs," Applied Mathematics and Computation, vol. 217, no. 21, pp. 8196–8203, 2011.
10. M. Grau-Sánchez, M. Noguera, and S. Amat, "On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods," Journal of Computational and Applied Mathematics, vol. 237, no. 1, pp. 363–372, 2013.
11. A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, "A modified Newton-Jarratt's composition," Numerical Algorithms, vol. 55, no. 1, pp. 87–99, 2010.
12. Z.-Z. Bai and H.-B. An, "A globally convergent Newton-GMRES method for large sparse systems of nonlinear equations," Applied Numerical Mathematics, vol. 57, no. 3, pp. 235–252, 2007.
13. S. C. Eisenstat and H. F. Walker, "Globally convergent inexact Newton methods," SIAM Journal on Optimization, vol. 4, no. 2, pp. 393–422, 1994.
14. F. Soleymani and P. S. Stanimirović, "A higher order iterative method for computing the Drazin inverse," The Scientific World Journal, vol. 2013, Article ID 708647, 11 pages, 2013.
15. A. Cordero, F. Soleymani, J. R. Torregrosa, and S. Shateyi, "Basins of attraction for various Steffensen-type methods," Journal of Applied Mathematics, vol. 2014, Article ID 539707, 17 pages, 2014.
16. F. Toutounian, J. Saberi-Nadjafi, and S. H. Taheri, "A hybrid of the Newton-GMRES and electromagnetic meta-heuristic methods for solving systems of nonlinear equations," Journal of Mathematical Modelling and Algorithms, vol. 8, no. 4, pp. 425–443, 2009.
17. S. Wagon, Mathematica in Action, Springer, New York, NY, USA, 3rd edition, 2010.
18. M. Trott, The Mathematica GuideBook for Numerics, Springer, New York, NY, USA, 2006.