On Some Efficient Techniques for Solving Systems of Nonlinear Equations

We present iterative methods of convergence order three, five, and six for solving systems of nonlinear equations. The third-order method is composed of two steps, namely, a Newton iteration as the first step and a weighted-Newton iteration as the second step. The fifth- and sixth-order methods are composed of three steps, of which the first two are the same as those of the third-order method, whereas the third is again a weighted-Newton step. Computational efficiency in its general form is discussed, and the efficiencies of the proposed techniques are compared with those of existing ones. The performance is tested through numerical examples. Moreover, the theoretical results concerning order of convergence and computational efficiency are verified in the examples. It is shown that the present methods have an edge over similar existing methods, particularly when applied to large systems of equations.

In the quest for efficient methods that avoid the second Fréchet derivative, a variety of third- and higher-order methods have been proposed in recent years. For example, Frontini and Sormani in [9] and Homeier in [10] developed third-order methods, each requiring the evaluations of one function, two first-order derivatives, and two matrix inversions per iteration. Darvishi and Barati [11] presented a fourth-order method which uses two functions, three first derivatives, and two matrix inversions. Cordero and Torregrosa [12] developed two variants of Newton's method with third-order convergence; one of the variants requires the evaluations of one function, three first derivatives, and two matrix inversions per iteration.

The Fifth-Order Method
Based on the two-step scheme (20), we propose the following three-step scheme, where β_1 and β_2 are parameters that are to be determined.
From the above analysis, we can state the following theorem.
Theorem 3. Let the function F : D ⊆ R^n → R^n be sufficiently differentiable in an open neighborhood D of its zero α. If an initial approximation x^(0) is sufficiently close to α, then the local order of convergence of method (21) is at least 5.
The proposed algorithm (21) can now be written in its operational form. It is clear that this formula uses the evaluations of two functions, two derivatives, and only one matrix inversion in all.
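The cost structure just described can be sketched in code: every step of the iteration reuses the single factorization of F′(x). The snippet below implements the classical two-step frozen-Jacobian iteration y = x − F′(x)⁻¹F(x), x_new = y − F′(x)⁻¹F(y), which is a standard third-order method, not the paper's weighted scheme (21) itself, applied to an illustrative 2×2 system chosen for this sketch.

```python
import numpy as np

# Illustrative system F = (x^2 + y^2 - 4, xy - 1); not one of the paper's
# test problems.
def F(v):
    x, y = v
    return np.array([x * x + y * y - 4.0, x * y - 1.0])

def J(v):                      # Jacobian of F
    x, y = v
    return np.array([[2.0 * x, 2.0 * y], [y, x]])

def frozen_newton(x0, tol=1e-12, maxit=25):
    # Classical two-step frozen-Jacobian iteration (third order): both
    # linear solves use the same J(x), so only one factorization per
    # iteration is needed, which is the economy the text describes.
    x = np.asarray(x0, float)
    for _ in range(maxit):
        Jx = J(x)                                 # one Jacobian, one "inversion"
        y = x - np.linalg.solve(Jx, F(x))         # Newton step
        x_new = y - np.linalg.solve(Jx, F(y))     # reuses the same J(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = frozen_newton([2.0, 0.5])
```

In an optimized implementation one LU factorization of J(x) would serve both triangular solves, which is the source of the per-iteration savings of the weighted-Newton schemes.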

The Sixth-Order Method
Here, with the first two steps of the scheme (20), we consider the following three-step scheme, where β_1, β_2, and β_3 are some arbitrary constants.
In the following lines we obtain the local order of convergence of the above proposed scheme. Using (13), (17), (22), and (23) in the third step of (27), the error relation (28) follows. Combining (15), (22), and (28), it is easy to prove that for the parameters β_1 = 7/2, β_2 = −4, and β_3 = 3/2 the error equation (28) attains its maximum order. For this set of values the equation reduces to a relation showing sixth-order convergence. Thus, based on the above discussion, we can formulate the following theorem. Finally, the proposed scheme (27) is obtained in its final form. Like the fifth-order scheme, this formula also uses two function evaluations, two derivative evaluations, and only one matrix inversion in all.

Computational Efficiency
To obtain an assessment of the efficiency of the proposed methods, we make use of the efficiency index, according to which the efficiency of an iterative method is given by E = ρ^{1/C}, where ρ is the order of convergence and C is the computational cost per iteration. In order to do this, we must consider all possible factors which contribute to the total cost of computation. For a system of n nonlinear equations in n unknowns, the computational cost per iteration is given by (see [17])

C(μ_0, μ_1, n) = a_0(n) μ_0 + a_1(n) μ_1 + p(n).

Here a_0(n) represents the number of evaluations of scalar functions (f_1, f_2, ..., f_n) used in the evaluation of F, a_1(n) is the number of evaluations of scalar functions of F′, say ∂f_i/∂x_j, 1 ⩽ i, j ⩽ n, p(n) represents the number of products or quotients needed per iteration, and μ_0 and μ_1 are the ratios between products and evaluations required to express the value of C(μ_0, μ_1, n) in terms of products. We suppose that a quotient is equivalent to l products.
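The cost model and efficiency index above can be sketched numerically. In the snippet below, the per-iteration evaluation counts (two F-evaluations, two Jacobian evaluations) and the sample ratios mu0 = 5, mu1 = 10 are illustrative assumptions, not the paper's tabulated figures.

```python
# Efficiency index E = rho^(1/C) under the cost model
# C(mu0, mu1, n) = a0(n)*mu0 + a1(n)*mu1 + p(n).
def cost(a0, a1, p, mu0, mu1):
    return a0 * mu0 + a1 * mu1 + p

def efficiency(rho, C):
    return rho ** (1.0 / C)

n, mu0, mu1 = 10, 5.0, 10.0        # assumed sample values
a0 = 2 * n                         # 2n scalar-function evaluations
a1 = 2 * n * n                     # 2n^2 partial-derivative evaluations
p = n * (n - 1) * (2 * n - 1) // 6 + n * (n - 1)   # LU + triangular solves
C = cost(a0, a1, p, mu0, mu1)
E5, E6 = efficiency(5, C), efficiency(6, C)        # same cost, orders 5 and 6
```

At equal cost, the higher-order method has the larger index, which is why raising the order without extra inversions pays off.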
To compute F in any iterative method, we evaluate n scalar functions, whereas the number of scalar evaluations is n^2 for any new derivative F′. In addition, we must include the amount of computational work required to evaluate the inverse of a matrix. Instead of computing the inverse operator, we solve a linear system, where we have n(n−1)(2n−1)/6 products and n(n−1)/2 quotients in the LU decomposition, and n(n−1) products and n quotients in the resolution of the two triangular linear systems. Moreover, we must add n products for the multiplication of a vector by a scalar and n^2 products for the same operation on a matrix.
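These operation counts can be verified directly by instrumenting a textbook LU solve. The following sketch counts products and quotients in a Doolittle factorization followed by the two triangular solves; pivoting (omitted here) adds only comparisons.

```python
import numpy as np

def lu_solve_counted(A, b):
    # Solve Ax = b by Doolittle LU (no pivoting), counting products and
    # quotients to match the formulas quoted in the text.
    U = np.array(A, float)     # holds L (strict lower, unit diag) and U
    n = len(b)
    prods = quots = 0
    for k in range(n - 1):                     # factorization
        for i in range(k + 1, n):
            U[i, k] /= U[k, k]; quots += 1
            for j in range(k + 1, n):
                U[i, j] -= U[i, k] * U[k, j]; prods += 1
    y = np.array(b, float)                     # forward: L y = b (unit diag)
    for i in range(n):
        for j in range(i):
            y[i] -= U[i, j] * y[j]; prods += 1
    x = y.copy()                               # backward: U x = y
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            x[i] -= U[i, j] * x[j]; prods += 1
        x[i] /= U[i, i]; quots += 1
    return x, prods, quots

n = 6
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)) + n * np.eye(n)    # safely non-singular
b = rng.standard_normal(n)
x, prods, quots = lu_solve_counted(A, b)
```

For any n, the counters come out to exactly n(n−1)(2n−1)/6 + n(n−1) products and n(n−1)/2 + n quotients, in agreement with the text.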
These boundaries are lines with positive slope, where E_{1,3} > E_{2,3} above and E_{2,3} > E_{1,3} below each line.
E_{1,5} versus E_{1,6} Case. For this case the boundary R_{1,6;1,5} = 1 is expressed in terms of ln 5 and ln 6, the logarithms of the respective orders. Here we also draw the boundaries in the (μ_1, μ_0)-plane with the same set of values of n and l. The boundaries are lines with negative slope, where E_{1,6} > E_{1,5} above and E_{1,5} > E_{1,6} below each line (see Figure 6). The comparison is also shown graphically in Figure 8, using the same values of (μ_0, μ_1, n) that are used in the previous case.
We summarize the above results in the following theorem.

Numerical Results
In this section, some numerical problems are considered to illustrate the convergence behavior and computational efficiency of the proposed methods. The performance is compared with the existing methods M_{2,3}, M_{3,3}, M_{4,3}, M_{5,3}, M_{2,5}, and M_{2,6} that we have introduced in the previous section. All computations are performed in the programming package Mathematica [19] using multiple-precision arithmetic with 4096 digits. For every method, we record the number of iterations (k) needed to converge to the solution such that ‖x^(k+1) − x^(k)‖ + ‖F(x^(k))‖ < 10^{−200}. In the numerical results, we also include the CPU time utilized in the execution of the program, computed by the Mathematica command "TimeUsed[ ]". To verify the theoretical order of convergence, we calculate the computational order of convergence (ρ_c) using the formula [20]

ρ_c ≈ ln(‖x^(k+1) − x^(k)‖ / ‖x^(k) − x^(k−1)‖) / ln(‖x^(k) − x^(k−1)‖ / ‖x^(k−1) − x^(k−2)‖),

taking into consideration the last approximations in the iterative process.
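The computational order of convergence is easy to reproduce in a few lines. The check below applies the formula to a scalar Newton sequence for f(x) = x^2 − 2, whose theoretical order is 2; the setup is illustrative and is not one of the paper's seven test problems.

```python
import math

def coc(xs):
    # rho_c ~ ln(|e_{k+1}|/|e_k|) / ln(|e_k|/|e_{k-1}|),
    # with e_k = x^(k) - x^(k-1) taken from the last iterates.
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Newton iteration for f(x) = x^2 - 2 starting from x0 = 1.5
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

rho_c = coc(xs)
```

With double precision only the first few differences are usable; the multiple-precision setting of the paper allows the same formula to resolve orders five and six.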
To connect the analysis of computational efficiency with the numerical examples, we apply the definition of the computational cost (31), for which an estimation of the factors μ_0 and μ_1 is required. To this end, we express the cost of the evaluation of the elementary functions in terms of products, which depends on the computer, the software, and the arithmetic used (see [21, 22]). In Table 1, an estimation of the cost of the elementary functions in product units is shown, wherein the running time of one product is measured in milliseconds. For the hardware and the software used in the present numerical work, the computational cost of a quotient with respect to a product is l = 2.8 (see Table 1).
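A ratio such as l = 2.8 can be estimated by timing many products and quotients and dividing. The sketch below shows the procedure only: in CPython the interpreter overhead dominates both operations, so the measured ratio sits near 1, whereas the paper's value reflects multiple-precision arithmetic in Mathematica, where division is genuinely more expensive.

```python
import timeit

# Time repeated floating-point products and quotients; the operands are
# arbitrary non-trivial doubles.
x, y = 1.2345678901234567, 9.87654321
t_prod = timeit.timeit(lambda: x * y, number=200_000)
t_quot = timeit.timeit(lambda: x / y, number=200_000)
l_est = t_quot / t_prod      # hardware/software dependent estimate of l
```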
The number of iterations (k), the computational order of convergence (ρ_c), the computational cost (C_{i,j}) in terms of products, the computational efficiency (E_{i,j}), and the mean CPU time for each method are displayed in Table 2. Computational cost and efficiency are calculated according to the corresponding expressions given by (38)-(46), using the values of the parameters n, μ_0, and μ_1 shown at the end of each problem, while taking l = 2.8 in each case. The mean CPU time is calculated by taking the mean of 50 performances of the program. From the numerical results, we can observe that, like the existing methods, the present methods show consistent convergence behavior. It is also clear that the computational order of convergence overwhelmingly supports the theoretical order of convergence. As far as the verification of the results of Theorem 5 is concerned, it is simple to check the theoretical results of statement (i) using the numerical values of the efficiency indices displayed in the second-to-last column of Table 2. However, the results of statement (ii) are not so obvious to verify. In order to do this, we first find the four threshold quantities of statement (ii) using the values of n, l, and μ_1 obtained for each numerical problem, and then we compare the efficiencies as per the rules (a)-(d) of statement (ii). The results are displayed in Table 3. From Table 2 we can see that the numerical values of E_{1,3}, E_{2,3}, E_{1,5}, and E_{1,6} also confirm the results shown in Table 3.
Comparison of the numerical results shows that M_{2,3} is the most efficient among the third-order methods. However, the present third-order method M_{1,3} is more efficient than the remaining third-order methods in the majority of the problems. The present fifth-order method M_{1,5} and sixth-order method M_{1,6} are more efficient than the existing methods of the same and lower order for larger systems. This behavior can be observed in the numerical results of Problems 5-7. From the results in the last two columns of Table 2, one can conclude that the higher the efficiency of a method, the lower its computing time. This shows that the efficiency results are in complete agreement with the CPU time utilized in the execution of the program.

Concluding Remarks
In the foregoing study, we have developed iterative methods of third-, fifth-, and sixth-order convergence for solving systems of nonlinear equations. The computational efficiency in its general form is discussed, and a comparison between the efficiencies of the proposed methods and existing methods is made. It is proved that the present methods are at least competitive with existing methods of a similar nature; in particular, they are especially efficient for larger systems.
To illustrate the new techniques, seven numerical examples are presented and completely solved. The performance is compared with some known methods of similar character. The theoretical order of convergence and the analysis of computational efficiency are verified in the considered examples.
The numerical results have confirmed the robust and efficient nature of the proposed techniques.

Theorem 4.
Let the function F : D ⊆ R^n → R^n be sufficiently differentiable in an open neighborhood D of its zero α. If an initial approximation x^(0) is sufficiently close to α, then the local order of convergence of method (27) is at least 6, provided β_1 = 7/2, β_2 = −4, and β_3 = 3/2.

Table 2 :
Performance of methods.