International Journal of Engineering Mathematics (ISSN 2314-6109), Hindawi Publishing Corporation, Volume 2014, Article ID 828409, doi:10.1155/2014/828409

Research Article: Several New Third-Order and Fourth-Order Iterative Methods for Solving Nonlinear Equations

Anuradha Singh (http://orcid.org/0000-0001-7365-5540) and J. P. Jaiswal, Department of Mathematics, Maulana Azad National Institute of Technology, Bhopal 462051, India

Academic Editor: Viktor Popov

Received 17 August 2013; Revised 12 December 2013; Accepted 31 December 2013; Published 23 February 2014

Copyright © 2014 Anuradha Singh and J. P. Jaiswal. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In order to find the zeros of nonlinear equations, we propose in this paper a family of third-order and optimal fourth-order iterative methods and derive several particular cases of them. The methods are constructed through the weight function concept. The multivariate case of these methods is also discussed. Numerical results show that the proposed methods are more efficient than some existing third- and fourth-order methods.

1. Introduction

Newton’s iterative method is one of the most eminent methods for finding the roots of a nonlinear equation

(1) f(x) = 0.

Recently, researchers have focused on improving the order of convergence by evaluating additional functions and first derivatives. In order to improve the order of convergence and the efficiency index, many modified third-order methods have been obtained by using different approaches. Kung and Traub conjectured that an iterative method using n evaluations per iteration is optimal when its order is 2^(n-1). This means that Newton’s iteration, with two function evaluations per iteration, is optimal, with efficiency index 2^(1/2) ≈ 1.414. By using this optimality concept, many researchers have tried to construct iterative methods of optimal higher order of convergence. The third-order methods mentioned above require three function evaluations per full iteration, so their efficiency index is 3^(1/3) ≈ 1.442, which is not optimal. Very recently, the concept of weight functions has been used to obtain different classes of third- and fourth-order methods; see the references at the end of the paper.
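As a point of reference for the methods derived below, Newton's iteration can be sketched in a few lines (an illustrative sketch; the stopping rule and tolerance are our own choices, not the paper's):

```python
# Newton's method for f(x) = 0: one function and one derivative evaluation
# per step, quadratic convergence -- optimal in the Kung-Traub sense for
# two evaluations per iteration.
def newton(f, df, x0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve x^2 - 2 = 0 starting from 1.5.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

The higher-order methods of Section 2 keep this two-evaluation skeleton but add one extra derivative evaluation at an intermediate point.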

This paper is organized as follows. In Section 2, we present a new class of third-order and optimal fourth-order iterative methods constructed through the weight function concept, which includes some existing methods and also provides some new ones. Some of these methods are then extended to the multivariate case. Finally, we present some numerical examples and compare the performance of our proposed methods with some existing third- and fourth-order methods.

2. Methods and Convergence Analysis

First we give some definitions which we will use later.

Definition 1.

Let f(x) be a real valued function with a simple root α and let {x_n} be a sequence of real numbers that converges towards α. The order of convergence m is given by

(2) lim_{n→∞} (x_{n+1} - α)/(x_n - α)^m = ζ ≠ 0,

where ζ is the asymptotic error constant and m ∈ R^+.
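The order in Definition 1 can be estimated numerically from three consecutive errors via m ≈ log|e_{n+1}/e_n| / log|e_n/e_{n-1}|. A small sketch for Newton's method on f(x) = x^2 - 2 (the test function and starting point are our own illustrative choices); the estimate should approach 2:

```python
import math

# Three Newton steps for f(x) = x^2 - 2, recording the error at each step,
# then the order estimate m ~ log(e3/e2) / log(e2/e1).
alpha = math.sqrt(2.0)
x = 1.5
errors = []
for _ in range(3):
    x = x - (x * x - 2) / (2 * x)   # Newton step
    errors.append(abs(x - alpha))

e1, e2, e3 = errors
m = math.log(e3 / e2) / math.log(e2 / e1)
```

The same estimator is reused below to check the third- and fourth-order families.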

Definition 2.

Let n be the number of function evaluations of the new method. The efficiency of the new method is measured by the concept of efficiency index [8, 9] and defined as

(3) m^(1/n),

where m is the order of convergence of the new method.

2.1. Third-Order Iterative Methods

To improve the order of convergence of Newton’s method, some modified methods are given by Grau-Sánchez and Díaz-Barrero, Weerakoon and Fernando, Homeier, Chun and Kim, and so forth. Motivated by these papers, we consider the following two-step iterative method:

(4) y_n = x_n - a f(x_n)/f'(x_n),
x_{n+1} = x_n - A(t) f(x_n)/f'(x_n),

where t = f'(y_n)/f'(x_n) and a is a real constant. Now we find under what conditions it is of order three.
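A direct implementation of the family (4) takes the weight function A as a parameter. The sketch below (test function, starting point, and tolerances are our own choices) uses the admissible weight A(t) = (7 - 3t)/4, which satisfies the conditions of Theorem 3 for a = 2/3:

```python
# Two-step family (4): y = x - a f/f', then x+ = x - A(t) f/f'
# with t = f'(y)/f'(x).  The weight A is pluggable.
def two_step(f, df, A, x0, a=2.0 / 3.0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - a * fx / dfx
        t = df(y) / dfx          # t = f'(y)/f'(x)
        x_new = x - A(t) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Weight A(t) = (7 - 3t)/4 applied to f(x) = x^3 - 2.
root = two_step(lambda x: x ** 3 - 2, lambda x: 3 * x * x,
                lambda t: (7 - 3 * t) / 4, x0=1.5)
```

Each iteration costs one evaluation of f and two of f', matching the efficiency-index discussion above.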

Theorem 3.

Let α be a simple root of the function f and let f have a sufficient number of continuous derivatives in a neighborhood of α. The method (4) has third-order convergence when the weight function A(t) satisfies the following conditions:

(5) A(1) = 1, A'(1) = -1/(2a), |A''(1)| < +∞.

Proof.

Suppose e_n = x_n - α is the error in the nth iteration and c_h = f^(h)(α)/(h! f'(α)), h ≥ 1. Expanding f(x_n) and f'(x_n) around the simple root α with Taylor series, we have

(6) f(x_n) = f'(α)[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + c_5 e_n^5 + O(e_n^6)],
f'(x_n) = f'(α)[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + 5c_5 e_n^4 + O(e_n^5)].

Now it can easily be found that

(7) f(x_n)/f'(x_n) = e_n - c_2 e_n^2 + (2c_2^2 - 2c_3) e_n^3 + O(e_n^4).

By using (7) in the first step of (4), we obtain

(8) y_n = α + (1 - a) e_n + a c_2 e_n^2 + 2a(c_3 - c_2^2) e_n^3 + O(e_n^4).

At this stage, we expand f'(y_n) around the root by taking (8) into consideration. We have

(9) f'(y_n) = f'(α)[1 + 2(1 - a) c_2 e_n + (2a c_2^2 + 3(1 - a)^2 c_3) e_n^2 + (6(1 - a) a c_2 c_3 + 4a c_2 (c_3 - c_2^2) + 4(1 - a)^3 c_4) e_n^3 + O(e_n^4)].

Furthermore, we have

(10) f'(y_n)/f'(x_n) = 1 + {-2c_2 + 2(1 - a) c_2} e_n + {4c_2^2 - 4(1 - a) c_2^2 + 2a c_2^2 - 3c_3 + 3(1 - a)^2 c_3} e_n^2 + O(e_n^3).

By virtue of (10) and (4), using the conditions (5), we get

(11) A(t) f(x_n)/f'(x_n) = e_n - [(3a/2 - 1) c_3 - 2c_2^2 (-1 + a^2 A''(1))] e_n^3 + O(e_n^4).

Hence, from (11) and (4) we obtain the following general error equation, which shows third-order convergence:

(12) e_{n+1} = x_{n+1} - α = x_n - A(t) f(x_n)/f'(x_n) - α = [(3a/2 - 1) c_3 - 2c_2^2 (-1 + a^2 A''(1))] e_n^3 + O(e_n^4).

This proves the theorem.

Particular Cases. To derive particular third-order methods, we take a = 2/3 in (4).

Case 1.

If we take A(t) = (7 - 3t)/4 in (4), then we get the formula

(13) y_n = x_n - (2/3) f(x_n)/f'(x_n),
x_{n+1} = x_n - (7/4 - (3/4) f'(y_n)/f'(x_n)) f(x_n)/f'(x_n),

and its error equation is given by

(14) e_{n+1} = 2c_2^2 e_n^3 + (-9c_2^3 + 7c_2c_3 + c_4/9) e_n^4 + O(e_n^5).

Case 2.

If we take A(t) = 4t/(7t - 3) in (4), then we get the formula

(15) y_n = x_n - (2/3) f(x_n)/f'(x_n),
x_{n+1} = x_n - (4f'(y_n)/(7f'(y_n) - 3f'(x_n))) f(x_n)/f'(x_n),

and its error equation is given by

(16) e_{n+1} = -(1/3) c_2^2 e_n^3 + (1/9)(17c_2^3 - 21c_2c_3 + c_4) e_n^4 + O(e_n^5).

Case 3.

If we take A(t) = 4/(1 + 3t) in (4), then we get the formula

(17) y_n = x_n - (2/3) f(x_n)/f'(x_n),
x_{n+1} = x_n - 4f(x_n)/(f'(x_n) + 3f'(y_n)),

and its error equation is given by

(18) e_{n+1} = c_2^2 e_n^3 + (-3c_2^3 + 3c_2c_3 + c_4/9) e_n^4 + O(e_n^5).

Case 4.

If we take A(t) = (t + 7)/(1 + 7t) in (4), then we get the formula

(19) y_n = x_n - (2/3) f(x_n)/f'(x_n),
x_{n+1} = x_n - ((f'(y_n) + 7f'(x_n))/(f'(x_n) + 7f'(y_n))) f(x_n)/f'(x_n),

and its error equation is given by

(20) e_{n+1} = (5/6) c_2^2 e_n^3 + (1/36)(-79c_2^3 + 84c_2c_3 + 4c_4) e_n^4 + O(e_n^5).

Case 5.

If we take A(t) = (t + 3)/(4t) in (4), then we get the formula

(21) y_n = x_n - (2/3) f(x_n)/f'(x_n),
x_{n+1} = x_n - (f(x_n)/4)(1/f'(x_n) + 3/f'(y_n)),

which is Heun’s formula.
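All five weights above should exhibit third-order convergence. The sketch below estimates the computational order of each case on f(x) = x^2 - 2 with Python's `decimal` module (the test function, guess, and precision are our own choices; double precision saturates too quickly to observe order three):

```python
from decimal import Decimal, getcontext

getcontext().prec = 150  # high precision so several asymptotic steps are visible

def estimate_order(A):
    """Run four steps of family (4) with a = 2/3 on f(x) = x^2 - 2 and
    estimate the order from the last three errors."""
    root = Decimal(2).sqrt()
    x = Decimal(1)
    errs = []
    for _ in range(4):
        fx, dfx = x * x - 2, 2 * x
        y = x - Decimal(2) / 3 * fx / dfx
        t = (2 * y) / dfx            # f'(y)/f'(x) for f(x) = x^2 - 2
        x = x - A(t) * fx / dfx
        errs.append(abs(x - root))
    # m ~ ln(e4/e3) / ln(e3/e2)
    return float((errs[3] / errs[2]).ln() / (errs[2] / errs[1]).ln())

weights = [
    lambda t: (7 - 3 * t) / 4,        # Case 1
    lambda t: 4 * t / (7 * t - 3),    # Case 2
    lambda t: 4 / (1 + 3 * t),        # Case 3
    lambda t: (t + 7) / (1 + 7 * t),  # Case 4
    lambda t: (t + 3) / (4 * t),      # Case 5 (Heun)
]
orders = [estimate_order(A) for A in weights]
```

Each estimate should land near 3, in line with Theorem 3.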

Remark 4.

By taking different values of a and weight function A ( t ) in (4), one can get a number of third-order iterative methods.

2.2. Optimal Fourth-Order Iterative Methods

The order of convergence of the methods obtained in the previous subsection is three, with three function evaluations (one function and two derivatives) per step. Hence the efficiency index is 3^(1/3) ≈ 1.442, which is not optimal. To get optimal fourth-order methods we consider

(22) y_n = x_n - a f(x_n)/f'(x_n),
x_{n+1} = x_n - {A(t) B(t)} f(x_n)/f'(x_n),

where A(t) and B(t) are two real-valued weight functions with t = f'(y_n)/f'(x_n) and a is a real constant. The weight functions are chosen so that the order of convergence reaches the optimal level four without any additional function evaluations. The following theorem gives the conditions required of the weight functions and the constant a in (22) for optimal fourth-order convergence.
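The product form of (22) implements directly; both weights are evaluated at the same t, so the cost per step is unchanged. A sketch (test function and tolerances are our own choices) using the weights of Method 1 derived below, A(t) = (t + 3)/(4t) and B(t) = 11/8 - 3t/4 + 3t^2/8:

```python
# Fourth-order family (22): correction is the product A(t)*B(t)
# with t = f'(y)/f'(x) and a = 2/3.
def two_step_ab(f, df, A, B, x0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - (2.0 / 3.0) * fx / dfx
        t = df(y) / dfx
        x_new = x - A(t) * B(t) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = two_step_ab(lambda x: x ** 3 - 2, lambda x: 3 * x * x,
                   lambda t: (t + 3) / (4 * t),
                   lambda t: 11 / 8 - 3 * t / 4 + 3 * t * t / 8,
                   x0=1.5)
```

The same three evaluations per step now deliver order four, for the optimal efficiency index 4^(1/3).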

Theorem 5.

Let α be a simple root of the function f and let f have a sufficient number of continuous derivatives in a neighborhood of α. The method (22) has fourth-order convergence when a = 2/3 and the weight functions A(t) and B(t) satisfy the following conditions:

(23) A(1) = 1, A'(1) = -3/4, |A'''(1)| < +∞,
B(1) = 1, B'(1) = 0, B''(1) = 9/4 - A''(1), |B'''(1)| < +∞.

Proof.

Using (6) and putting a = 2/3 in the first step of (22), we have

(24) y_n = α + e_n/3 + (2c_2/3) e_n^2 + (4/3)(c_3 - c_2^2) e_n^3 + ⋯ + O(e_n^5).

Now we expand f'(y_n) around the root by taking (24) into consideration. Thus, we have

(25) f'(y_n) = f'(α)[1 + (2c_2/3) e_n + ((4c_2^2 + c_3)/3) e_n^2 + ⋯ + O(e_n^5)].

Furthermore, we have

(26) f'(y_n)/f'(x_n) = 1 - (4c_2/3) e_n + (4c_2^2 - (8/3)c_3) e_n^2 + ⋯ + O(e_n^5).

By virtue of (26) and (22), using the conditions (23), we obtain

(27) {A(t) B(t)} f(x_n)/f'(x_n) = e_n - (1/81)[(243 + 72A''(1) + 32A'''(1) + 32B'''(1)) c_2^3 - 81 c_2 c_3 + 9 c_4] e_n^4 + O(e_n^5).

Finally, from (27) and (22) we have the following general error equation, which reveals the fourth-order convergence:

(28) e_{n+1} = x_{n+1} - α = x_n - {A(t) B(t)} f(x_n)/f'(x_n) - α = (1/81)[(243 + 72A''(1) + 32A'''(1) + 32B'''(1)) c_2^3 - 81 c_2 c_3 + 9 c_4] e_n^4 + O(e_n^5).

This proves the theorem.

Particular Cases

Method 1.

If we take A(t) = (t + 3)/(4t) and B(t) = 11/8 - (3/4)t + (3/8)t^2, where t = f'(y)/f'(x), then the iterative method is given by

(29) y_n = x_n - (2/3) f(x_n)/f'(x_n),
x_{n+1} = x_n - [11/8 - (3/4)(f'(y_n)/f'(x_n)) + (3/8)(f'(y_n)/f'(x_n))^2] (1/f'(x_n) + 3/f'(y_n)) (f(x_n)/4),

and its error equation is given by

(30) e_{n+1} = (1/9)[23c_2^3 - 9c_2c_3 + c_4] e_n^4 + O(e_n^5).

Method 2.

If we take A(t) = (7 - 3t)/4 and B(t) = 17/8 - (9/4)t + (9/8)t^2, where t = f'(y)/f'(x), then the iterative method is given by

(31) y_n = x_n - (2/3) f(x_n)/f'(x_n),
x_{n+1} = x_n - [17/8 - (9/4)(f'(y_n)/f'(x_n)) + (9/8)(f'(y_n)/f'(x_n))^2] (7/4 - (3/4) f'(y_n)/f'(x_n)) f(x_n)/f'(x_n),

and its error equation is given by

(32) e_{n+1} = [3c_2^3 - c_2c_3 + c_4/9] e_n^4 + O(e_n^5).

Method 3.

If we take A(t) = 4t/(7t - 3) and B(t) = 13/16 + (3/8)t - (3/16)t^2, where t = f'(y)/f'(x), then the iterative method is given by

(33) y_n = x_n - (2/3) f(x_n)/f'(x_n),
x_{n+1} = x_n - [13/16 + (3/8)(f'(y_n)/f'(x_n)) - (3/16)(f'(y_n)/f'(x_n))^2] (4f'(y_n)/(7f'(y_n) - 3f'(x_n))) f(x_n)/f'(x_n),

and its error equation is given by

(34) e_{n+1} = (1/9)[-c_2^3 - 9c_2c_3 + c_4] e_n^4 + O(e_n^5).

Method 4.

If we take A(t) = 4/(1 + 3t) and B(t) = 25/16 - (9/8)t + (9/16)t^2, where t = f'(y)/f'(x), then the iterative method is given by

(35) y_n = x_n - (2/3) f(x_n)/f'(x_n),
x_{n+1} = x_n - [25/16 - (9/8)(f'(y_n)/f'(x_n)) + (9/16)(f'(y_n)/f'(x_n))^2] [4f(x_n)/(f'(x_n) + 3f'(y_n))],

and its error equation is

(36) e_{n+1} = [3c_2^3 - c_2c_3 + c_4/9] e_n^4 + O(e_n^5).

Method 5.

If we take A(t) = 4/(1 + 3t) and B(t) = 1 + (9/16)(t - 1)^2, where t = f'(y)/f'(x), then the iterative method is given by

(37) y_n = x_n - (2/3) f(x_n)/f'(x_n),
x_{n+1} = x_n - [1 + (9/16)(f'(y_n)/f'(x_n) - 1)^2] [4f(x_n)/(f'(x_n) + 3f'(y_n))],

which is the same as formula (11) of an earlier work.
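The fourth order of (37) can be checked empirically along the same lines as before (the test function f(x) = x^2 - 2, the guess, and the precision are our own choices for this sketch):

```python
from decimal import Decimal, getcontext

getcontext().prec = 150

# Three steps of (37) on f(x) = x^2 - 2, recording errors, then the
# computational order estimate m ~ ln(e3/e2) / ln(e2/e1).
root = Decimal(2).sqrt()
x = Decimal(1)
errs = []
for _ in range(3):
    fx, dfx = x * x - 2, 2 * x
    y = x - Decimal(2) / 3 * fx / dfx
    t = (2 * y) / dfx                     # f'(y)/f'(x)
    weight = 1 + Decimal(9) / 16 * (t - 1) ** 2
    x = x - weight * 4 * fx / (dfx + 3 * (2 * y))
    errs.append(abs(x - root))

m = float((errs[2] / errs[1]).ln() / (errs[1] / errs[0]).ln())
```

The estimate should land near 4, matching Theorem 5.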

Method 6.

If we take A(t) = (t + 7)/(1 + 7t) and B(t) = 47/32 - (15/16)t + (15/32)t^2, where t = f'(y)/f'(x), then the iterative method is given by

(38) y_n = x_n - (2/3) f(x_n)/f'(x_n),
x_{n+1} = x_n - [47/32 - (15/16)(f'(y_n)/f'(x_n)) + (15/32)(f'(y_n)/f'(x_n))^2] ((7f'(x_n) + f'(y_n))/(f'(x_n) + 7f'(y_n))) f(x_n)/f'(x_n),

and its error equation is

(39) e_{n+1} = [(101/36)c_2^3 - c_2c_3 + c_4/9] e_n^4 + O(e_n^5).

Remark 6.

By taking different values of A ( t ) and B ( t ) in (22), one can obtain a number of fourth-order iterative methods.

3. Further Extension to Multivariate Case

In this section, we extend some of the proposed third- and fourth-order methods to systems of nonlinear equations; the other methods can be extended similarly. The multivariate version of our third-order method (15) is given by

(40) Y^(k) = X^(k) - (2/3)[F'(X^(k))]^{-1} F(X^(k)),
X^(k+1) = X^(k) - 4[7F'(Y^(k)) - 3F'(X^(k))]^{-1} F'(Y^(k)) {[F'(X^(k))]^{-1} F(X^(k))},

where X^(k) = [x_1^(k), x_2^(k), …, x_n^(k)]^T (k = 0, 1, 2, …), and similarly Y^(k); I is the n × n identity matrix; F(X^(k)) = [f_1(x_1^(k), …, x_n^(k)), f_2(x_1^(k), …, x_n^(k)), …, f_n(x_1^(k), …, x_n^(k))]; and F'(X^(k)) is the Jacobian matrix of F at X^(k). Let ξ + H ∈ R^n be any point in a neighborhood of the exact solution ξ ∈ R^n of the nonlinear system F(X) = 0. If the Jacobian matrix F'(ξ) is nonsingular, then the multivariate Taylor expansion is given by

(41) F(ξ + H) = F'(ξ)[H + C_2 H^2 + C_3 H^3 + ⋯ + C_{p-1} H^{p-1}] + O(H^p),

where C_i = [F'(ξ)]^{-1} (F^(i)(ξ)/i!), i ≥ 2, and

(42) F'(ξ + H) = F'(ξ)[I + 2C_2 H + 3C_3 H^2 + ⋯ + (p - 1) C_{p-1} H^{p-2}] + O(H^{p-1}),

where I is an identity matrix. From the previous equation we can find

(43) [F'(ξ + H)]^{-1} = [F'(ξ)]^{-1}[I + L_1 H + L_2 H^2 + L_3 H^3 + ⋯ + L_{p-2} H^{p-2}] + O(H^{p-1}),

where L_1 = -2C_2, L_2 = 4C_2^2 - 3C_3, and L_3 = -8C_2^3 + 6C_2C_3 + 6C_3C_2 - 4C_4. We denote the error in the kth iteration by E^(k), that is, E^(k) = X^(k) - ξ. The order of convergence of method (40) is established by the following theorem.

Theorem 7.

Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable in a convex set D containing a root ξ of F(X) = 0. Suppose that F'(X) is continuous and nonsingular in D and that X^(0) is close to ξ. Then the sequence {X^(k)}_{k≥0} obtained by the iterative expression (40) converges to ξ with order three.

Proof.

For convenience of calculation, we replace 2/3 by β in the first step of (40). From (41), (42), and (43), we have

(44) F(X^(k)) = F'(ξ)[E^(k) + C_2 (E^(k))^2 + C_3 (E^(k))^3 + C_4 (E^(k))^4 + C_5 (E^(k))^5] + O((E^(k))^6),

(45) F'(X^(k)) = F'(ξ)[I + 2C_2 E^(k) + 3C_3 (E^(k))^2 + 4C_4 (E^(k))^3 + 5C_5 (E^(k))^4] + O((E^(k))^5),

(46) [F'(X^(k))]^{-1} = [F'(ξ)]^{-1}{I - 2C_2 E^(k) + (4C_2^2 - 3C_3)(E^(k))^2 + (-8C_2^3 + 6C_2C_3 + 6C_3C_2 - 4C_4)(E^(k))^3} + O((E^(k))^4),

where C_i = [F'(ξ)]^{-1}(F^(i)(ξ)/i!), i ≥ 2. Now from (46) and (44), we can obtain

(47) S = [F'(X^(k))]^{-1} F(X^(k)) = G_1 E^(k) + G_2 (E^(k))^2 + G_3 (E^(k))^3 + G_4 (E^(k))^4 + O((E^(k))^5),

where

(48) G_1 = I, G_2 = -C_2, G_3 = 2C_2^2 - 2C_3, G_4 = -3C_4 + 4C_2C_3 + 3C_3C_2 - 4C_2^3.

By virtue of (47) the first step of the method (40) becomes

(49) Y^(k) = ξ + (1 - β) E^(k) + β C_2 (E^(k))^2 + β(-2C_2^2 + 2C_3)(E^(k))^3 + β(4C_2^3 - 4C_2C_3 - 3C_3C_2 + 3C_4)(E^(k))^4 + O((E^(k))^5).

The Taylor expansion of the Jacobian matrix F'(Y^(k)) can be given as

(50) F'(Y^(k)) = F'(ξ)[I + 2C_2(1 - β) E^(k) + (2β C_2^2 + 3C_3 (1 - β)^2)(E^(k))^2 + (-4β C_2^3 + 4β C_2C_3 + 6β(1 - β) C_3C_2 + 4C_4 (1 - β)^3)(E^(k))^3] + O((E^(k))^4).

Now

(51) 7F'(Y^(k)) - 3F'(X^(k)) = 4F'(ξ)[I + (1/4)(A_1 E^(k) + A_2 (E^(k))^2 + A_3 (E^(k))^3)] + O((E^(k))^4),

where

(52) A_1 = (8 - 14β) C_2,
A_2 = 14β C_2^2 + 21(1 - β)^2 C_3 - 9C_3,
A_3 = -28β C_2^3 + 28β C_2C_3 + 42β(1 - β) C_3C_2 + 28(1 - β)^3 C_4 - 12C_4.

Taking the inverse of both sides of (51), we get

(53) 4[7F'(Y^(k)) - 3F'(X^(k))]^{-1} = [F'(ξ)]^{-1}[I + B_1 E^(k) + B_2 (E^(k))^2 + B_3 (E^(k))^3] + O((E^(k))^4),

where

(54) B_1 = -A_1/4, B_2 = -A_2/4 + A_1^2/16, B_3 = -A_3/4 - A_1^3/64 + (A_1A_2 + A_2A_1)/16.

By multiplying (53) and (50), we get

(55) 4[7F'(Y^(k)) - 3F'(X^(k))]^{-1} F'(Y^(k)) = I + E_1 E^(k) + E_2 (E^(k))^2 + E_3 (E^(k))^3 + O((E^(k))^4),

where

(56) E_1 = B_1 + D_1, E_2 = B_1D_1 + B_2 + D_2, E_3 = B_2D_1 + B_1D_2 + B_3 + D_3,

and the values of D_1, D_2, and D_3 are

(57) D_1 = 2(1 - β) C_2,
D_2 = 2β C_2^2 + 3(1 - β)^2 C_3,
D_3 = -4β C_2^3 + 4β C_2C_3 + 6β(1 - β) C_3C_2 + 4(1 - β)^3 C_4.
From multiplication of (55) and (47), we obtain

(58) 4[7F'(Y^(k)) - 3F'(X^(k))]^{-1} F'(Y^(k)) S = G_1 E^(k) + {G_2 + E_1G_1}(E^(k))^2 + {E_1G_2 + E_2G_1 + G_3}(E^(k))^3 + O((E^(k))^4).

Substituting this into the second step of (40), we get

(59) E^(k+1) = {I - G_1} E^(k) - {G_2 + E_1G_1}(E^(k))^2 - {E_1G_2 + E_2G_1 + G_3}(E^(k))^3 + O((E^(k))^4).

With β = 2/3, the final error equation of method (40) is given by

(60) E^(k+1) = -(1/3) C_2^2 (E^(k))^3 + O((E^(k))^4).

This completes the proof of Theorem 7.
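Method (40) is straightforward to implement with two linear solves per step. The sketch below applies it to the 2 × 2 system of Example 9 in Section 4 (F, the Jacobian, and a Cramer-rule 2 × 2 solver are written out explicitly so the snippet stays dependency-free; double precision rather than the paper's 1000-digit arithmetic):

```python
# Example 9: F(x, y) = (x^2 - y - 19, -x^2 + y^3/6 + y - 17), solution (5, 6).
def F(v):
    x, y = v
    return [x * x - y - 19, -x * x + y ** 3 / 6 + y - 17]

def J(v):
    x, y = v
    return [[2 * x, -1.0], [-2 * x, y * y / 2 + 1]]

def solve2(A, b):
    # Cramer's rule for a 2x2 system A z = b.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def matvec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

X = [5.1, 6.1]
for _ in range(5):
    S = solve2(J(X), F(X))                       # [F'(X)]^-1 F(X)
    Y = [X[0] - 2 * S[0] / 3, X[1] - 2 * S[1] / 3]
    JY, JX = J(Y), J(X)
    M = [[7 * JY[i][j] - 3 * JX[i][j] for j in range(2)] for i in range(2)]
    W = solve2(M, matvec(JY, S))                 # [7F'(Y)-3F'(X)]^-1 F'(Y) S
    X = [X[0] - 4 * W[0], X[1] - 4 * W[1]]
```

After a handful of iterations X agrees with (5, 6) to machine precision, consistent with the MM3 column of Table 4.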

The multivariate version of (33) is given by

(61) Y^(k) = X^(k) - (2/3)[F'(X^(k))]^{-1} F(X^(k)),
X^(k+1) = X^(k) - [(13/16) I + (3/8)([F'(X^(k))]^{-1} F'(Y^(k))) - (3/16)([F'(X^(k))]^{-1} F'(Y^(k)))^2] · 4[7F'(Y^(k)) - 3F'(X^(k))]^{-1} F'(Y^(k)) · [F'(X^(k))]^{-1} F(X^(k)).

The following theorem shows that this method has fourth-order convergence.

Theorem 8.

Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable in a convex set D containing a root ξ of F(X) = 0. Suppose that F'(X) is continuous and nonsingular in D and that X^(0) is close to ξ. Then the sequence {X^(k)}_{k≥0} obtained by the iterative expression (61) converges to ξ with order four.

Proof.

For convenience of calculation we replace 2/3 by β and put a_1 = 13/16, a_2 = 3/8, and a_3 = -3/16 in (61). From (46) and (50), we have

(62) t = [F'(X^(k))]^{-1} F'(Y^(k)) = I - 2β C_2 E^(k) + {6β C_2^2 + 3(β^2 - 2β) C_3}(E^(k))^2 + {-16β C_2^3 + (-6β^2 + 16β) C_2C_3 + 6β(2 - β) C_3C_2 + (4(1 - β)^3 - 4) C_4}(E^(k))^3 + O((E^(k))^4).

From the above equation we have

(63) t^2 = ([F'(X^(k))]^{-1} F'(Y^(k)))^2 = I - 4β C_2 E^(k) + {(12β + 4β^2) C_2^2 + 6(β^2 - 2β) C_3}(E^(k))^2 + {(-32β - 24β^2) C_2^3 + (-6β^3 + 32β) C_2C_3 + (-6β^3 + 24β) C_3C_2 + 2(4(1 - β)^3 - 4) C_4}(E^(k))^3 + O((E^(k))^4).

With the help of (62) and (63), we can obtain

(64) a_1 I + a_2 t + a_3 t^2 = (a_1 + a_2 + a_3) I + (-2β a_2 - 4β a_3) C_2 E^(k) + {(3(β^2 - 2β) a_2 + 6(β^2 - 2β) a_3) C_3 + (6β a_2 + (12β + 4β^2) a_3) C_2^2}(E^(k))^2 + {(-16β a_2 + (-32β - 24β^2) a_3) C_2^3 + ((-6β^2 + 16β) a_2 + (-6β^3 + 32β) a_3) C_2C_3 + (6β(2 - β) a_2 + (-6β^3 + 24β) a_3) C_3C_2 + ((4(1 - β)^3 - 4) a_2 + 2(4(1 - β)^3 - 4) a_3) C_4}(E^(k))^3 + O((E^(k))^4).
By multiplying (64) and (58), we have

(65) (a_1 I + a_2 t + a_3 t^2) · 4[7F'(Y^(k)) - 3F'(X^(k))]^{-1} F'(Y^(k)) S = T_1 E^(k) + T_2 (E^(k))^2 + T_3 (E^(k))^3 + T_4 (E^(k))^4 + O((E^(k))^5),

where

(66) T_1 = (a_1 + a_2 + a_3) G_1,
T_2 = (-2β a_2 - 4β a_3) C_2 G_1 + (a_1 + a_2 + a_3)(G_2 + E_1G_1),
T_3 = (a_1 + a_2 + a_3)(E_1G_2 + E_2G_1 + G_3) + (-2β a_2 - 4β a_3) C_2 (G_2 + E_1G_1) + [(3(β^2 - 2β) a_2 + 6(β^2 - 2β) a_3) C_3 + (6β a_2 + (12β + 4β^2) a_3) C_2^2] G_1,
T_4 = (G_2 + E_1G_1)[(3(β^2 - 2β) a_2 + 6(β^2 - 2β) a_3) C_3 + (6β a_2 + (12β + 4β^2) a_3) C_2^2] + (E_1G_2 + E_2G_1 + G_3)(-2β a_2 - 4β a_3) C_2 + (a_1 + a_2 + a_3)(E_2G_2 + E_3G_1 + E_1G_3 + G_4) + G_1[{-16β a_2 + (-32β - 24β^2) a_3} C_2^3 + {(-6β^2 + 16β) a_2 + (-6β^3 + 32β) a_3} C_2C_3 + {6β(2 - β) a_2 + (-6β^3 + 24β) a_3} C_3C_2 + {(4(1 - β)^3 - 4) a_2 + 2(4(1 - β)^3 - 4) a_3} C_4].

The final error equation of method (61) is given by

(67) E^(k+1) = (-(1/9) C_2^3 + 8 C_2C_3 - C_3C_2 + (1/9) C_4)(E^(k))^4 + O((E^(k))^5),

which confirms the theorem.
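The fourth-order scheme (61) can be sketched along the same lines as the third-order code above, again on the Example 9 system (all helpers are written out so the snippet is self-contained; double precision only):

```python
# Example 9 system and Jacobian, plus small dense-2x2 helpers.
def F(v):
    x, y = v
    return [x * x - y - 19, -x * x + y ** 3 / 6 + y - 17]

def J(v):
    x, y = v
    return [[2 * x, -1.0], [-2 * x, y * y / 2 + 1]]

def solve2(A, b):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def matvec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [5.1, 6.1]
for _ in range(4):
    JX = J(X)
    S = solve2(JX, F(X))
    Y = [X[0] - 2 * S[0] / 3, X[1] - 2 * S[1] / 3]
    JY = J(Y)
    # T = [F'(X)]^{-1} F'(Y), one linear solve per column of F'(Y)
    c0 = solve2(JX, [JY[0][0], JY[1][0]])
    c1 = solve2(JX, [JY[0][1], JY[1][1]])
    T = [[c0[0], c1[0]], [c0[1], c1[1]]]
    T2 = matmul(T, T)
    W = [[(13 / 16 if i == j else 0.0) + 3 / 8 * T[i][j] - 3 / 16 * T2[i][j]
          for j in range(2)] for i in range(2)]       # a1 I + a2 t + a3 t^2
    M = [[7 * JY[i][j] - 3 * JX[i][j] for j in range(2)] for i in range(2)]
    V = solve2(M, matvec(JY, S))      # [7F'(Y)-3F'(X)]^{-1} F'(Y) S
    WV = matvec(W, [4 * V[0], 4 * V[1]])
    X = [X[0] - WV[0], X[1] - WV[1]]
```

By the second iteration the error is already below double precision, in line with the MM4 column of Table 4.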

4. Numerical Testing

4.1. Single Variate Case

In this section, ten different test functions, listed in Table 1 together with their roots, are considered for the single variate case to illustrate the accuracy of the proposed iterative methods. All computations presented here have been performed in MATHEMATICA 8. Many streams of science and engineering require a very high degree of precision in scientific computations; we therefore use 1000-digit floating-point arithmetic via the "SetAccuracy[]" command. Here we compare the performance of our proposed methods with some well-established third-order and fourth-order iterative methods. In Table 2, HN3 denotes Heun’s method (21), M3 our proposed third-order method (15), SL4 an existing optimal fourth-order method, JM4 Jarratt’s fourth-order method, and M4 our proposed fourth-order method. The results are listed in Table 2.
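The experiment behind Table 2 can be imitated at reduced precision with Python's `decimal` module (the paper's runs use 1000-digit MATHEMATICA arithmetic; the choice of f6, the guess 0.7, 400-digit precision, and Method 1 (29) as the fourth-order representative are our own for this sketch):

```python
from decimal import Decimal, getcontext

getcontext().prec = 400

def f(x):                               # f6(x) = x^6 - 10x^3 + x^2 - x + 3
    return x ** 6 - 10 * x ** 3 + x * x - x + 3

def df(x):
    return 6 * x ** 5 - 30 * x * x + 2 * x - 1

def run(step, x0, iters=4):
    x = Decimal(x0)
    for _ in range(iters):
        x = step(x)
    return abs(f(x))

def heun_step(x):                       # formula (21), third order (HN3)
    y = x - Decimal(2) / 3 * f(x) / df(x)
    return x - f(x) / 4 * (1 / df(x) + 3 / df(y))

def m4_step(x):                         # formula (29), fourth order
    y = x - Decimal(2) / 3 * f(x) / df(x)
    t = df(y) / df(x)
    B = Decimal(11) / 8 - 3 * t / 4 + 3 * t * t / 8
    return x - B * (1 / df(x) + 3 / df(y)) * f(x) / 4

r3 = run(heun_step, "0.7")
r4 = run(m4_step, "0.7")
```

After four iterations (TNFE = 12 for both) the fourth-order residual is many orders of magnitude below the third-order one, mirroring the trend in Table 2.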

Functions and their roots.

f(x)                                                α
f1(x)  = sin^2(x) + x                               α1 = 0
f2(x)  = sin^2(x) - x^2 + 1                         α2 ≈ 1.404491648215341226035086817786
f3(x)  = e^(-x) + sin(x) - 1                        α3 ≈ 2.076831274533112613070044244750
f4(x)  = x^2 + sin(x) + x                           α4 = 0
f5(x)  = sin(2 cos(x)) - 1 - x^2 + e^(sin(x^3))     α5 ≈ 1.306175201846827825014842909066
f6(x)  = x^6 - 10x^3 + x^2 - x + 3                  α6 ≈ 0.658604847118140436763860014710
f7(x)  = x^4 - x^3 + 11x^2 - 7                      α7 ≈ 0.803511199110777688978137660293
f8(x)  = x^3 - cos(x) + 2                           α8 ≈ -1.172577964753970012673332714868
f9(x)  = sqrt(x) - cos(x)                           α9 ≈ 0.641714370872882658398565300316
f10(x) = log(x) - x^3 + 2 sin(x)                    α10 ≈ 1.297997743280371847164479238286

Comparison of the absolute value of each function by different methods after the fourth iteration (TNFE = 12, where TNFE denotes the total number of function evaluations).

| f | Guess HN3 M3 SL4 JM4 M4
| f 1 | 0.3 0.1 e - 57 0.3 e - 93 0.2 e - 172 0.4 e - 162 0.5 e - 199
0.2 0.5 e - 69 0.1 e - 91 0.1 e - 186 0.2 e - 198 0.1 e - 245
0.1 0.1 e - 90 0.6 e - 107 0.5 e - 241 0.2 e - 266 0.6 e - 339
- 0.1 0.2 e - 85 0.1 e - 93 0.1 e - 198 0.1 e - 247 0.7 e - 278
- 0.2 0.3 e - 59 0.9 e - 64 0.1 e - 99 0.8 e - 161 0.5 e - 165

| f 2 | 1.3 0.1 e - 92 0.3 e - 102 0.1 e - 244 0.4 e - 278 0.6 e - 297
1.2 0.1 e - 67 0.7 e - 75 0.2 e - 152 0.2 e - 197 0.7 e - 200
1.1 0.8 e - 53 0.1 e - 57 0.4 e - 94 0.5 e - 147 0.2 e - 132
1.4 0.6 e - 205 0.7 e - 217 0.1 e - 613 0.5 e - 634 0.1 e - 672
1.5 0.3 e - 99 0.9 e - 114 0.1 e - 296 0.2 e - 300 0.7 e - 374

| f 3 | 2.0 0.1 e - 112 0.2 e - 122 0.8 e - 362 0.1 e - 325 0.1 e - 418
2.3 0.1 e - 81 0.1 e - 102 0.1 e - 215 0.1 e - 229 0.5 e - 275
2.1 0.7 e - 157 0.1 e - 169 0.6 e - 493 0.1 e - 466 0.3 e - 543
2.2 0.3 e - 100 0.1 e - 116 0.4 e - 288 0.2 e - 288 0.1 e - 343
1.9 0.1 e - 81 0.5 e - 88 0.1 e - 223 0.1 e - 224 0.3 e - 272

| f 4 | 0.3 0.4 e - 78 0.9 e - 101 0.4 e - 157 0.3 e - 219 0.2 e - 257
0.2 0.1 e - 90 0.3 e - 109 0.5 e - 201 0.4 e - 258 0.2 e - 301
0.1 0.2 e - 113 0.2 e - 128 0.2 e - 279 0.1 e - 328 0.1 e - 379
- 0.2 0.9 e - 84 0.4 e - 90 0.7 e - 223 0.2 e - 229 0.1 e - 286
- 0.1 0.8 e - 110 0.5 e - 119 0.1 e - 285 0.3 e - 314 0.2 e - 483

| f 5 | 1.35 0.1 e - 101 0.6 e - 112 0.3 e - 252 0.1 e - 312 0.2 e - 320
1.31 0.1 e - 77 0.1 e - 86 0.3 e - 170 0.1 e - 236 0.1 e - 233
1.29 0.1 e - 69 0.4 e - 78 0.1 e - 141 0.5 e - 211 0.5 e - 203
1.15 0.8 e - 39 0.2 e - 42 0.7 e - 28 0.8 e - 107 0.1 e - 510
1.20 0.1 e - 46 0.4 e - 52 0.2 e - 54 0.3 e - 135 0.1 e - 101

| f 6 | 0.7 0.7 e - 109 0.1 e - 122 0.1 e - 288 0.2 e - 334 0.1 e - 380
0.6 0.4 e - 94 0.7 e - 104 0.9 e - 229 0.3 e - 286 0.8 e - 300
0.5 0.3 e - 57 0.2 e - 63 0.3 e - 95 0.4 e - 166 0.2 e - 154
0.8 0.1 e - 68 0.2 e - 87 0.2 e - 171 0.3 e - 207 0.2 e - 282
1.2 0.6 e - 36 0.6 e - 52 0.1 e - 151 0.2 e - 97 0.1 e - 112

| f 7 | 0.65 0.2 e - 294 0.1 e - 306 0.2 e - 588 0.2 e - 807 0.8 e - 810
0.75 0.2 e - 177 0.8 e - 187 0.5 e - 250 0.4 e - 462 0.1 e - 457
0.95 0.3 e - 129 0.5 e - 130 0.1 e - 134 0.2 e - 295 0.2 e - 290
0.90 0.1 e - 137 0.3 e - 140 0.2 e - 153 0.3 e - 322 0.9 e - 318
0.80 0.2 e - 160 0.3 e - 167 0.3 e - 207 0.9 e - 399 0.6 e - 395

| f 8 | - 1.0 0.2 e - 64 0.4 e - 72 0.1 e - 112 0.1 e - 193 0.5 e - 182
- 1.1 0.3 e - 96 0.2 e - 106 0.6 e - 224 0.8 e - 297 0.8 e - 301
- 1.2 0.6 e - 132 0.1 e - 144 0.3 e - 345 0.8 e - 412 0.6 e - 431
- 1.5 0.5 e - 50 0.6 e - 71 0.5 e - 100 0.2 e - 155 0.5 e - 275
- 0.9 0.1 e - 47 0.2 e - 52 0.5 e - 46 0.1 e - 135 0.6 e - 105

| f 9 | 0.9 0.1 e - 127 0.6 e - 133 0.1 e - 152 0.5 e - 315 0.2 e - 307
0.7 0.2 e - 178 0.1 e - 186 0.1 e - 315 0.5 e - 455 0.4 e - 451
0.6 0.2 e - 189 0.1 e - 206 0.1 e - 351 0.6 e - 482 0.2 e - 479
0.8 0.6 e - 144 0.1 e - 149 0.5 e - 206 0.9 e - 356 0.3 e - 350
1.0 0.2 e - 117 0.2 e - 123 0.3 e - 117 0.7 e - 295 0.1 e - 284

| f 10 | 1.2 0.4 e - 74 0.2 e - 81 0.3 e - 154 0.1 e - 213 0.8 e - 229
2.0 0.7 e - 26 0.1 e - 52 0.2 e - 75 0.5 e - 76 0.1 e - 107
1.5 0.2 e - 57 0.1 e - 79 0.3 e - 139 0.1 e - 170 0.2 e - 229
1.3 0.9 e - 214 0.7 e - 226 0.1 e - 612 0.4 e - 660 0.7 e - 708
1.8 0.3 e - 33 0.2 e - 76 0.1 e - 83 0.1 e - 97 0.7 e - 134

An effective way to compare the efficiency of methods is the CPU time utilized in the execution of the program. In the present work, the CPU time has been computed using the command "TimeUsed[]" in MATHEMATICA. It is well known that CPU time is not unique and depends on the specification of the computer. All computations in this paper were carried out on a machine running Microsoft Windows 8 (64-bit) with an Intel(R) Core(TM) i5-3210M CPU @ 2.50 GHz and 4.00 GB of RAM. The mean CPU time is calculated by taking the mean of 10 runs of the program. The mean CPU time (in seconds) for the different methods is given in Table 3.

Comparison of CPU time (in seconds) between some existing methods and our proposed methods.

Function CPU time
Guess HN3 M3 SL4 JM4 M4
f 1 0.3 0.2867 0.2644 0.3060 0.2449 0.2449
f 2 1.5 0.2943 0.2510 0.3049 0.2682 0.3043
f 3 2.3 0.3019 0.3658 0.3457 0.3562 0.3483
f 4 0.3 0.3091 0.2850 0.2832 0.2399 0.2428
f 5 1.35 0.3399 0.3694 0.3938 0.4149 0.3940
f 6 0.7 0.2896 0.2708 0.2388 0.2613 0.2550
f 7 0.65 0.2517 0.2356 0.2938 0.2644 0.2880
f 8 −1.00 0.2697 0.2279 0.2739 0.2934 0.2900
4.2. Multivariate Case

Further, six nonlinear systems (Examples 9–14) are considered for numerical testing of systems of nonlinear equations. Here we compare our proposed third-order method (40) (MM3) with Algorithm (2.2) (NR1) and Algorithm (2.3) (NR2) of Noor and Waseem, and our proposed fourth-order method (61) (MM4) with method (22) (SH4) of Sharma et al. and method (3.4) (BB4) of Babajee et al. The comparison of the norm of the function over the iterations is given in Table 4.

Norm of the functions by different methods after first, second, third, and fourth iteration.

Example Guess Method | | F ( x ( 1 ) ) | | | | F ( x ( 2 ) ) | | | | F ( x ( 3 ) ) | | | | F ( x ( 4 ) ) | |
Example 9 (5.1, 6.1) NR1 3.8774 e - 4 9.2700 e - 15 7.7652 e - 47 4.0858 e - 143
NR2 3.8774 e - 4 9.2700 e - 15 7.7652 e - 47 4.0858 e - 143
MM3 1.2657 e - 4 1.0705 e - 16 3.9789 e - 53 1.8320 e - 162
BB4 2.1416 e - 5 1.1267 e - 24 4.6477 e - 102 1.2561 e - 411
SH4 1.2923 e - 5 9.2420 e - 26 1.2710 e - 106 4.2184 e - 430
MM4 3.0768 e - 6 3.9419 e - 29 8.7758 e - 121 2.1039 e - 487

Example 10 (1, 0.5, 1.5) NR1 3.0006 e - 2 1.3681 e - 4 1.3174 e - 11 1.1754 e - 32
NR2 2.9765 e - 2 1.3230 e - 4 1.1848 e - 11 8.5484 e - 33
MM3 9.9051 e - 3 3.7473 e - 6 9.1835 e - 17 1.3035 e - 48
BB4 2.1133 e - 2 6.9602 e - 6 7.1401 e - 20 7.7987 e - 76
SH4 1.5676 e - 2 1.1309 e - 6 2.4814 e - 23 5.5195 e - 90
MM4 6.1451 e - 3 8.1169 e - 8 2.3264 e - 28 6.9642 e - 110

Example 11 (0.5, 0.5, 0.5, −0.2) NR1 2.3097 e - 3 7.5761 e - 10 5.6516 e - 30 2.8662 e - 91
NR2 2.3097 e - 3 7.5761 e - 10 5.6516 e - 30 2.8662 e - 91
MM3 9.4336 e - 4 1.6380 e - 11 1.7401 e - 35 2.5268 e - 108
BB4 9.1400 e - 4 1.8627 e - 14 8.7599 e - 59 6.9713 e - 238
SH4 5.3618 e - 4 1.4537 e - 15 2.1746 e - 63 1.7744 e - 256
MM4 7.7084 e - 5 2.1932 e - 20 3.4979 e - 84 3.6487 e - 341

Example 12 (1.0, 2.0) NR1 2.1427 e - 3 1.7987 e - 10 1.0504 e - 31 2.0958 e - 95
NR2 2.1498 e - 3 1.8262 e - 10 1.1001 e - 31 2.4077 e - 95
MM3 7.6174 e - 4 2.3592 e - 12 7.9632 e - 38 3.0435 e - 114
BB4 5.3124 e - 4 8.2104 e - 16 3.8411 e - 63 2.8216 e - 252
SH4 2.9895 e - 4 6.5567 e - 17 1.8332 e - 67 1.5913 e - 269
MM4 1.0131 e - 4 2.8562 e - 19 3.4019 e - 77 2.4842 e - 308

Example 13 (−0.8, 1.1, 1.1) NR1 2.9692 e - 4 5.5149 e - 14 3.8063 e - 40 1.2456 e - 118
NR2 2.9718 e - 5 5.5137 e - 14 3.8044 e - 40 1.2438 e - 118
MM3 9.8775 e - 6 6.7719 e - 16 2.3491 e - 46 9.7596 e - 138
BB4 4.0723 e - 6 1.6873 e - 21 9.1974 e - 83 7.8287 e - 328
SH4 2.1907 e - 6 8.6294 e - 23 3.6734 e - 87 1.1711 e - 349
MM4 1.0838 e - 6 8.4404 e - 25 1.2327 e - 96 4.3437 e - 384

Example 14 (0.5, 1.5) NR1 1.8661 e - 1 7.1492 e - 4 4.5647 e - 11 1.1889 e - 32
NR2 1.7417 e - 1 5.7596 e - 4 2.3870 e - 11 1.7000 e - 33
MM3 1.2770 e - 1 4.6794 e - 5 4.1708 e - 15 3.0222 e - 45
BB4 9.8299 e - 2 9.2624 e - 6 8.3046 e - 22 5.3716 e - 86
SH4 1.0359 e - 1 5.4166 e - 6 4.6302 e - 23 2.4821 e - 91
MM4 1.4490 e - 1 1.0558 e - 5 6.7122 e - 22 1.0964 e - 86
Example 9.

Consider

(68) x_1^2 - x_2 - 19 = 0,
-x_1^2 + x_2^3/6 + x_2 - 17 = 0,

with initial guess X^(0) = (5.1, 6.1)^T; one of its solutions is α = (5, 6)^T.

Example 10.

Consider

(69) -sin(x_1) + cos(x_2) = 0,
-1/x_2 + x_3^{x_1} = 0,
e^{x_1} - x_3^2 = 0,

with initial guess X^(0) = (1, 0.5, 1.5)^T; one of its solutions is α ≈ (0.9095, 0.6612, 1.5758)^T.

Example 11.

Consider

(70) x_2x_3 + x_4(x_2 + x_3) = 0,
x_1x_3 + x_4(x_1 + x_3) = 0,
x_1x_2 + x_4(x_1 + x_2) = 0,
x_1x_2 + x_1x_3 + x_2x_3 = 1,

with initial guess X^(0) = (0.5, 0.5, 0.5, -0.2)^T; one of its solutions is α ≈ (0.577350, 0.577350, 0.577350, -0.288675)^T.

Example 12.

Consider

(71) -e^{x_1} + tan^{-1}(x_2) + 2 = 0,
tan^{-1}(x_1^2 + x_2^2 - 5) = 0,

with initial guess X^(0) = (1.0, 2.0)^T; one of its solutions is α ≈ (1.12906503, 1.930080863)^T.

Example 13.

Consider

(72) -e^{-x_1} + x_2 + x_3 = 0,
-e^{-x_2} + x_1 + x_3 = 0,
-e^{-x_3} + x_1 + x_2 = 0,

with initial guess X^(0) = (-0.8, 1.1, 1.1)^T; one of its solutions is α ≈ (-0.8320, 1.1489, 1.1489)^T.

Example 14.

Consider

(73) log(x_2) - x_1^2 + x_1x_2 = 0,
log(x_1) - x_2^2 + x_1x_2 = 0,

with initial guess X^(0) = (0.5, 1.5)^T; one of its solutions is α = (1, 1)^T.
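The tabulated solutions can be sanity-checked by evaluating each system at its listed root (a quick sketch with the standard library; the roots are quoted to only a few digits, so the residuals are small but generally nonzero):

```python
import math

# (system as a function of the vector v, tabulated root) for Examples 9-14.
cases = [
    (lambda v: [v[0] ** 2 - v[1] - 19,
                -v[0] ** 2 + v[1] ** 3 / 6 + v[1] - 17], [5.0, 6.0]),
    (lambda v: [-math.sin(v[0]) + math.cos(v[1]),
                -1 / v[1] + v[2] ** v[0],
                math.exp(v[0]) - v[2] ** 2], [0.9095, 0.6612, 1.5758]),
    (lambda v: [v[1] * v[2] + v[3] * (v[1] + v[2]),
                v[0] * v[2] + v[3] * (v[0] + v[2]),
                v[0] * v[1] + v[3] * (v[0] + v[1]),
                v[0] * v[1] + v[0] * v[2] + v[1] * v[2] - 1],
     [0.577350, 0.577350, 0.577350, -0.288675]),
    (lambda v: [-math.exp(v[0]) + math.atan(v[1]) + 2,
                math.atan(v[0] ** 2 + v[1] ** 2 - 5)],
     [1.12906503, 1.930080863]),
    (lambda v: [-math.exp(-v[0]) + v[1] + v[2],
                -math.exp(-v[1]) + v[0] + v[2],
                -math.exp(-v[2]) + v[0] + v[1]], [-0.8320, 1.1489, 1.1489]),
    (lambda v: [math.log(v[1]) - v[0] ** 2 + v[0] * v[1],
                math.log(v[0]) - v[1] ** 2 + v[0] * v[1]], [1.0, 1.0]),
]
residuals = [max(abs(r) for r in Fv(a)) for Fv, a in cases]
```

All six residuals are tiny, confirming that the listed vectors are indeed (approximate) solutions of the respective systems.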

5. Conclusion

In the present work, we have provided a family of third-order and optimal fourth-order iterative methods which yields some existing as well as many new third- and fourth-order iterative methods. The multivariate case of these methods has also been considered. The efficiency of our methods is supported by Tables 2 and 4.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to express their sincerest thanks to the editor and reviewer for their constructive suggestions, which significantly improved the quality of this paper. The authors would also like to record their sincere thanks to Dr. F. Soleymani for providing his efficient cooperation.

References

[1] S. Weerakoon and T. G. I. Fernando, "A variant of Newton's method with accelerated third-order convergence," Applied Mathematics Letters, vol. 13, no. 8, pp. 87–93, 2000.
[2] H. H. H. Homeier, "On Newton-type methods with cubic convergence," Journal of Computational and Applied Mathematics, vol. 176, no. 2, pp. 425–432, 2005. doi:10.1016/j.cam.2004.07.027
[3] C. Chun and Y. Kim, "Several new third-order iterative methods for solving nonlinear equations," Acta Applicandae Mathematicae, vol. 109, no. 3, pp. 1053–1063, 2010. doi:10.1007/s10440-008-9359-3
[4] H. T. Kung and J. F. Traub, "Optimal order of one-point and multipoint iteration," Journal of the ACM, vol. 21, no. 4, pp. 643–651, 1974.
[5] F. Soleymani, "Two new classes of optimal Jarratt-type fourth-order methods," Applied Mathematics Letters, vol. 25, no. 5, pp. 847–853, 2012.
[6] M. Sharifi, D. K. R. Babajee, and F. Soleymani, "Finding the solution of nonlinear equations by a class of optimal methods," Computers and Mathematics with Applications, vol. 63, no. 4, pp. 764–774, 2012. doi:10.1016/j.camwa.2011.11.040
[7] S. K. Khattri and S. Abbasbandy, "Optimal fourth order family of iterative methods," Matematicki Vesnik, vol. 63, no. 1, pp. 67–72, 2011.
[8] W. Gautschi, Numerical Analysis: An Introduction, Birkhäuser, Boston, Mass, USA, 1997.
[9] J. F. Traub, Iterative Methods for the Solution of Equations, Chelsea Publishing, New York, NY, USA, 1997.
[10] M. Grau-Sánchez and J. L. Díaz-Barrero, "Zero-finder methods derived using Runge–Kutta techniques," Applied Mathematics and Computation, vol. 217, no. 12, pp. 5366–5376, 2011. doi:10.1016/j.amc.2010.11.059
[11] K. Heun, "Neue Methode zur approximativen Integration der Differentialgleichungen einer unabhängigen Variablen," Zeitschrift für Mathematik und Physik, vol. 45, pp. 23–38, 1900.
[12] F. Soleymani and D. K. R. Babajee, "Computing multiple zeros using a class of quartically convergent methods," Alexandria Engineering Journal, vol. 52, pp. 531–541, 2013.
[13] M. A. Noor and M. Waseem, "Some iterative methods for solving a system of nonlinear equations," Computers and Mathematics with Applications, vol. 57, no. 1, pp. 101–106, 2009. doi:10.1016/j.camwa.2008.10.067
[14] J. R. Sharma, R. K. Guha, and R. Sharma, "An efficient fourth-order weighted-Newton method for systems of nonlinear equations," Numerical Algorithms, vol. 62, pp. 307–323, 2013.
[15] D. K. R. Babajee, A. Cordero, F. Soleymani, and J. R. Torregrosa, "On a novel fourth-order algorithm for solving systems of nonlinear equations," Journal of Applied Mathematics, vol. 2012, Article ID 165452, 12 pages, 2012. doi:10.1155/2012/165452