International Journal of Engineering Mathematics (IJEM), Hindawi Publishing Corporation, 2016, Article ID 6390367. DOI: 10.1155/2016/6390367.

Research Article: A New Accurate and Efficient Iterative Numerical Method for Solving the Scalar and Vector Nonlinear Equations: Approach Based on Geometric Considerations

Grégory Antoni (ORCID: 0000-0002-9661-4757), Aix-Marseille Université, IFSTTAR, LBA UMR T24, 13016 Marseille, France. Academic Editor: José A. Tenreiro Machado. Received 31 March 2016; accepted 12 June 2016; published 7 August 2016.

Copyright © 2016 Grégory Antoni. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper deals with a new numerical iterative method for finding the approximate solutions of both scalar and vector nonlinear equations. The iterative method proposed here is an extended version of the numerical procedure originally developed in previous works. The present study shows that this new root-finding algorithm, combined with a stationary-type iterative method (e.g., Gauss-Seidel or Jacobi), is able to provide a more accurate solution than the classical Newton-Raphson method. A numerical analysis of the developed iterative method is presented and discussed on some specific equations and systems.

1. Introduction

Solving nonlinear equations and systems of nonlinear equations is a situation very often encountered in various fields of the formal and physical sciences. For instance, solid mechanics is a branch of physics where problems governed by nonlinear equations and systems arise frequently. In most cases, Newton's method (also known as the Newton-Raphson algorithm) is used for approximating the solutions of scalar and vector nonlinear equations. Over the years, however, several other numerical methods have been developed to provide, iteratively, the approximate solutions of nonlinear equations and/or systems. Some of them offer both high accuracy and strong efficiency by using a numerical procedure based on an enhanced Newton-Raphson algorithm. In this study, we propose to improve the iterative procedure developed in previous works [27, 28] for numerically finding the solution of both scalar and vector nonlinear equations. This study is organized as follows: (i) in Section 2, a new numerical geometry-based root-finding algorithm coupled with a stationary-type iterative method (such as Jacobi or Gauss-Seidel) is presented with the aim of solving any system of nonlinear equations [29, 30]; (ii) in Section 3, the numerical predictive abilities of the proposed iterative method are tested on some examples and compared with other algorithms [31, 32].

2. New Iterative Numerical Method for Solving the Scalar and Vector Nonlinear Equations Based on a Geometric Approach

2.1. Problem Statement

We consider a vector-valued function $F : X = \mathrm{trans}(\{x_1, x_2, x_3, \ldots, x_n\}) \in \Omega \subseteq \mathbb{R}^n \mapsto F(X) \in \mathbb{R}^n$, which is continuous and infinitely differentiable (i.e., $F \in C^{\infty}(\Omega)$), satisfying the following equation: (1) $F(X) = 0$, where $X = \mathrm{trans}(\{x_1, x_2, x_3, \ldots, x_n\})$ denotes the vector-valued variable (with $n \in \mathbb{N}^{*}$), $x_i$ is the $i$th component of the vector $X$ (with $1 \le i \le n$), $\mathrm{trans}(\cdot)$ is the transpose operator, and $C^{\infty}(\Omega)$ denotes the class of infinitely differentiable functions on the domain $\Omega$. It should be mentioned that: (i) the nonlinear function $F = \mathrm{trans}(\{F_1, F_2, F_3, \ldots, F_n\}) \in \mathbb{R}^n$ has a unique solution $X^{*}$ on the domain $\Omega$, an open subset of $\mathbb{R}^n$, that is, $X^{*} \in \Omega \subseteq \mathbb{R}^n$ such that $F(X^{*}) = 0 \in \mathbb{R}^n$; (ii) the case of a scalar equation $f$ with only one variable $x$ is obtained when $n = 1$, that is, $F : X \in \Omega \subseteq \mathbb{R} \mapsto F(X) \in \mathbb{R}$ reduces to $f : x \in \Omega \subseteq \mathbb{R} \mapsto f(x) \in \mathbb{R}$.

Equation (1) can also be rewritten as a system $(S_{nl})$ of $n$ scalar nonlinear equations, that is,

(2) $(S_{nl}):\quad \begin{cases} F_1(x_1, x_2, x_3, \ldots, x_i, \ldots, x_n) = 0 \\ F_2(x_1, x_2, x_3, \ldots, x_i, \ldots, x_n) = 0 \\ F_3(x_1, x_2, x_3, \ldots, x_i, \ldots, x_n) = 0 \\ \quad\vdots \\ F_i(x_1, x_2, x_3, \ldots, x_i, \ldots, x_n) = 0 \\ \quad\vdots \\ F_n(x_1, x_2, x_3, \ldots, x_i, \ldots, x_n) = 0 \end{cases} \quad \text{with } 1 \le i \le n,$

where $F_i$ denotes the $i$th component of the vector-valued function $F$ (see (1)), that is, the $i$th nonlinear equation of the system $(S_{nl})$. It should be noted that: (i) for $i \in [1, n]$ (with $n > 1$), the nonlinear system $(S_{nl})$ (2) has a unique solution set $\{x_1^{*}, x_2^{*}, \ldots, x_i^{*}, \ldots, x_n^{*}\}$ such that $F_i(x_1^{*}, x_2^{*}, \ldots, x_i^{*}, \ldots, x_n^{*}) = 0$; (ii) for $n = 1$, the nonlinear system $(S_{nl})$ (2) reduces to the scalar nonlinear equation $(E_{nl})$, which has a unique solution $x^{*}$ such that $f(x^{*}) = 0$; (iii) (1) and (2) are mathematically equivalent, that is, $X = \mathrm{trans}(\{x_1, x_2, \ldots, x_i, \ldots, x_n\})$.

With the aim of numerically solving the system $(S_{nl})$ (2), we adopt a Root-Finding Algorithm (RFA) coupled with a Stationary Iterative Procedure (SIP) such as Jacobi or Gauss-Seidel [26, 30]. The use of a SIP makes it possible to reduce the considered nonlinear system $(S_{nl})$ to a successive set of nonlinear equations in only one variable each, which can therefore be solved with an RFA. In the present study, we propose an extended version of the RFA already developed in [27, 28], combined with a Jacobi or Gauss-Seidel type iterative procedure, for handling any system of nonlinear equations.

2.2. Stationary Iterative Procedures (SIPs) with Root-Finding Algorithms (RFAs)

2.2.1. Jacobi and Gauss-Seidel Iterative Procedures

In order to solve a system of nonlinear equations, any RFA can be used provided it is combined with a SIP (i.e., Jacobi or Gauss-Seidel) [26, 29, 30]. A Jacobi or Gauss-Seidel type procedure applied to the nonlinear system $(S_{nl})$ (2) can be described as follows:

(3) $F_i(x_i^{k+1}; \Delta_i) = 0, \quad i = 1, \ldots, n; \; k = 0, \ldots, m,$

with

in the case of the Jacobi procedure:

(4) $\Delta_i = \{x_1^{k}, x_2^{k}, x_3^{k}, \ldots, x_{i-1}^{k}, x_{i+1}^{k}, \ldots, x_n^{k}\};$

in the case of the Gauss-Seidel procedure:

(5) $\Delta_i = \{x_1^{k+1}, x_2^{k+1}, x_3^{k+1}, \ldots, x_{i-1}^{k+1}, x_{i+1}^{k}, \ldots, x_n^{k}\},$

where $k$ (resp., $k+1$) denotes the $k$th (resp., $(k+1)$th) iteration of the variable (with $m \in \mathbb{N}$), $\Delta_i$ is the set of components held constant, and $\{\cdot\}$ represents a set of variables.
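To make the coupling concrete, the sketch below (in Python rather than the Matlab used for the paper's experiments) reduces a system to successive one-variable solves. The helper names and the small test system are hypothetical; `scalar_newton` merely stands in for whichever RFA is chosen.

```python
def scalar_newton(g, dg, x0, tol=1e-12, max_iter=50):
    """Solve the one-variable equation g(x) = 0 by Newton's method;
    any other root-finding algorithm (RFA) could be plugged in here."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            break
        x -= gx / dg(x)
    return x

def gauss_seidel_sweep(F, dF, X):
    """One Gauss-Seidel sweep over the system F_i(x_1, ..., x_n) = 0:
    each component equation is solved for x_i with the other components
    held fixed in Delta_i; freshly updated values are reused immediately
    (a Jacobi sweep would instead freeze the whole vector X)."""
    X = list(X)
    for i in range(len(X)):
        g = lambda xi, i=i: F[i](X[:i] + [xi] + X[i + 1:])
        dg = lambda xi, i=i: dF[i](X[:i] + [xi] + X[i + 1:])
        X[i] = scalar_newton(g, dg, X[i])
    return X

# Hypothetical weakly coupled test system with solution x1 = x2 = 10/11.
F = [lambda v: v[0] + 0.1 * v[1] - 1.0,
     lambda v: v[1] + 0.1 * v[0] - 1.0]
dF = [lambda v: 1.0, lambda v: 1.0]   # dF_i / dx_i
X = [0.0, 0.0]
for _ in range(25):
    X = gauss_seidel_sweep(F, dF, X)
```

Reusing the updated components inside the sweep (Gauss-Seidel) typically converges in fewer sweeps than freezing them (Jacobi), at the cost of losing the trivial parallelism of the Jacobi update.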

2.3. Root-Finding Algorithm (RFA) Used in This Study

In previous works [27, 28], a root-finding algorithm (RFA) was developed for approximating the solutions of scalar nonlinear equations. The new RFA presented here is an extended version of that algorithm, taking into account some geometric considerations. In this paper, we use this RFA coupled with Jacobi and Gauss-Seidel type procedures for iteratively solving the nonlinear system $(S_{nl})$. Hence, we adopt the new RFA for finding the approximate solution $x_i^{k+1}$ (for fixed $i$ and known set $\Delta_i$) associated with each nonlinear equation $F_i$ of the system $(S_{nl})$ (see Section 2.1). For each nonlinear equation $F_i$, parametrized by the set of variables $\Delta_i$ and depending only on the single variable $x_i^{k+1}$, we introduce the exact and inexact local curvatures of the curve representing the nonlinear equation in question.

The RFA used here is based on the following main steps:

In the first step, we consider the iterative tangent and normal straight lines $(T_i^{k}, N_i^{k}) = (\mathcal{T}_i(x_i; \Theta_i), \mathcal{N}_i(x_i; \Theta_i))$ associated with the nonlinear function $F_i$ at the point $p_i^{k} = (x_i^{k}, F_i^{k})$ (see Figure 1):
(6) $T_i^{k}(x_i) = F_i'^{k}\,(x_i - x_i^{k}) + F_i^{k}$, $\quad N_i^{k}(x_i) = -\dfrac{1}{F_i'^{k}}\,(x_i - x_i^{k}) + F_i^{k}$, $\quad F_i'^{k} \neq 0$, $\; i = 1, \ldots, n$, $\; k = 0, \ldots, m$,

where $F_i^{k} = F_i(x_i^{k}; \Delta_i)$ (resp., $F_i'^{k} = F_i'(x_i^{k}; \Delta_i)$) denotes the value (resp., first-order derivative) of the function $F_i$ at the point $x_i^{k}$, $\Theta_i = \{\Delta_i, x_i^{k}\}$ is the set of known variables, and $(\mathcal{T}_i, \mathcal{N}_i)$ are the two functionals associated with $(T_i, N_i)$.

In the second step, we introduce the iterative exact and inexact local curvatures associated with the curve representing the nonlinear function $F_i$ at the point $p_i^{k} = (x_i^{k}, F_i^{k})$ (see Figure 1):
(7) $\dfrac{1}{R_i^{k,ex}} = \dfrac{|F_i''^{k}|}{\left(1 + F_i'^{k\,2}\right)^{3/2}}$, $\quad \dfrac{1}{R_i^{k,in}} = \dfrac{|F_i''^{k}|}{\left(1 + F_i'^{k\,2}\right)^{1/2}}$, $\quad F_i'^{k} \neq 0$, $\; F_i''^{k} \neq 0$, $\; i = 1, \ldots, n$, $\; k = 0, \ldots, m$,

where $|\cdot|$ denotes the absolute-value function, $R_i^{k,\bullet} = \mathcal{R}_i^{\bullet}(x_i^{k}; \Delta_i)$ is the exact ($\bullet = ex$) or inexact ($\bullet = in$) radius of the osculating circle $C_i^{k,\bullet}$ at the point $x_i^{k}$, $\mathcal{R}_i^{\bullet}$ (with $\bullet = ex, in$) is the functional associated with $R_i^{\bullet}$, and $F_i''^{k} = F_i''(x_i^{k}; \Delta_i)$ is the second-order derivative of the function $F_i$ at the point $x_i^{k}$. It should be noted that: (i) the exact radius $R_i^{k,ex}$ is associated with the true osculating circle $C_i^{k,ex}$ at the point $p_i^{k} = (x_i^{k}, F_i^{k})$; (ii) in line with [27, 28], we consider an inexact radius associated with the osculating circle $C_i^{k,in}$ at the point $p_i^{k} = (x_i^{k}, F_i^{k})$, that is, $R_i^{k,ex} \neq R_i^{k,in}$ (see (7)).

In the third step, we define the iterative center $C_i^{k,\bullet} = (X_{c,i}^{k,\bullet}, Y_{c,i}^{k,\bullet})$ of the exact and inexact osculating circles at the point $x_i = x_i^{k}$; since the center lies on the normal line $N_i^{k}$ of (6), we have $F_i^{k} - Y_{c,i}^{k,\bullet} = \frac{1}{F_i'^{k}}(X_{c,i}^{k,\bullet} - x_i^{k})$, so that (see Figure 1)
(8) $\left(x_i^{k} - X_{c,i}^{k,\bullet}\right)^2 + \left(F_i^{k} - Y_{c,i}^{k,\bullet}\right)^2 = R_i^{k,\bullet\,2} \;\Longrightarrow\; \left(x_i^{k} - X_{c,i}^{k,\bullet}\right)^2 + \left(\dfrac{1}{F_i'^{k}}\left(X_{c,i}^{k,\bullet} - x_i^{k}\right)\right)^2 = R_i^{k,\bullet\,2} \;\Longrightarrow\; \left(X_{c,i}^{k,\bullet} - x_i^{k}\right)^2 = \dfrac{F_i'^{k\,2}}{1 + F_i'^{k\,2}}\, R_i^{k,\bullet\,2}$, $\quad F_i'^{k} \neq 0$, $\; F_i''^{k} \neq 0$, $\; i = 1, \ldots, n$, $\; k = 0, \ldots, m$.

By combining (7) and (8), the iterative centers $C_i^{k,\bullet} = (X_{c,i}^{k,\bullet}, Y_{c,i}^{k,\bullet})$ are (with $\bullet = ex, in$)
(9) $X_{c,i}^{k,ex} = x_i^{k} + \mathrm{sgn}\left(-F_i'^{k} F_i''^{k}\right) \left|\dfrac{F_i'^{k}}{F_i''^{k}}\right| \left(1 + F_i'^{k\,2}\right)$, $\quad Y_{c,i}^{k,ex} = -\dfrac{1}{F_i'^{k}}\left(X_{c,i}^{k,ex} - x_i^{k}\right) + F_i^{k}$,
$X_{c,i}^{k,in} = x_i^{k} + \mathrm{sgn}\left(-F_i'^{k} F_i''^{k}\right) \left|\dfrac{F_i'^{k}}{F_i''^{k}}\right|$, $\quad Y_{c,i}^{k,in} = -\dfrac{1}{F_i'^{k}}\left(X_{c,i}^{k,in} - x_i^{k}\right) + F_i^{k}$,
with $F_i'^{k} \neq 0$, $F_i''^{k} \neq 0$, $i = 1, \ldots, n$, $k = 0, \ldots, m$,

where $(X_{c,i}^{k,\bullet}, Y_{c,i}^{k,\bullet}) = (\mathcal{X}_{c,i}^{\bullet}(x_i^{k}; \Delta_i), \mathcal{Y}_{c,i}^{\bullet}(x_i^{k}; \Delta_i))$ are the iterative centers of the exact and inexact osculating circles $C_i^{k,\bullet}$ (with $\bullet = ex, in$) associated with each curve representing the nonlinear function $F_i$ at the point $x_i^{k}$, $(\mathcal{X}_{c,i}^{\bullet}, \mathcal{Y}_{c,i}^{\bullet})$ are the two functionals associated with $(X_{c,i}^{\bullet}, Y_{c,i}^{\bullet})$, and $\mathrm{sgn}(\cdot)$ is the sign function (i.e., $\mathrm{sgn}(y) = -1$ when $y < 0$, $\mathrm{sgn}(y) = 1$ when $y > 0$, and $\mathrm{sgn}(y) = 0$ when $y = 0$).
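As a sanity check on (9), the Python sketch below (hypothetical helper, not from the paper) computes both center abscissas and places the ordinates on the normal line; for the parabola $f(x) = x^2$ at $x = 1$, the exact formula reproduces the classical center of curvature $(-4, 7/2)$.

```python
def osculating_centers(x, F, dF, ddF):
    """Centers (X_c, Y_c) of the exact and inexact osculating circles at
    the point (x, F(x)), following Eq. (9); requires dF != 0 and ddF != 0."""
    s = -1.0 if dF * ddF > 0 else 1.0       # sgn(-F' F'')
    ratio = abs(dF / ddF)
    Xc_ex = x + s * ratio * (1.0 + dF**2)   # exact center abscissa
    Xc_in = x + s * ratio                   # inexact center abscissa
    # Both centers lie on the normal line N(x) of Eq. (6):
    Yc_ex = -(Xc_ex - x) / dF + F
    Yc_in = -(Xc_in - x) / dF + F
    return (Xc_ex, Yc_ex), (Xc_in, Yc_in)

# f(x) = x^2 at x = 1: F = 1, F' = 2, F'' = 2.
ex_c, in_c = osculating_centers(1.0, 1.0, 2.0, 2.0)
```

The inexact center simply drops the $(1 + F_i'^{k\,2})$ factor, which keeps the center on the normal but shrinks the radius, consistent with the inexact curvature of (7).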

In the fourth step, we introduce the iterative point $\sigma_i^{k} = W_i(x_i^{k}; \Delta_i)$ such that $T_i^{k}(\sigma_i^{k}) = 0$, that is,
(10) $\sigma_i^{k} = x_i^{k} - \dfrac{F_i^{k}}{F_i'^{k}}$, $\quad F_i'^{k} \neq 0$, $\; i = 1, \ldots, n$, $\; k = 0, \ldots, m$,

where W i is a functional associated with W i .

In the fifth step, we define the iterative straight line $G_i^{k,\bullet} = \mathcal{G}_i^{\bullet}(x_i; \Xi_i^{\bullet})$ passing through the two iterative points $(X_{c,i}^{k,\bullet}, Y_{c,i}^{k,\bullet})$ and $(\sigma_i^{k}, 0)$, that is (with $\bullet = ex, in$),
(11) $G_i^{k,\bullet}(x_i) = \dfrac{-\dfrac{1}{F_i'^{k}}\left(X_{c,i}^{k,\bullet} - x_i^{k}\right) + F_i^{k}}{X_{c,i}^{k,\bullet} - \sigma_i^{k}}\,\left(x_i - \sigma_i^{k}\right)$, $\quad F_i'^{k} \neq 0$, $\; i = 1, \ldots, n$, $\; k = 0, \ldots, m$,

where $\Xi_i^{\bullet} = \{\Theta_i, X_{c,i}^{k,\bullet}, \sigma_i^{k}\}$ is the set of known variables.

In the sixth step, we introduce the iterative straight line $H_i^{k,\bullet} = \mathcal{H}_i^{\bullet}(x_i; \Xi_i^{\bullet})$ passing through the point $(x_i^{k}, F_i^{k})$ and perpendicular to the iterative straight line $G_i^{k,\bullet}$, such that (with $\bullet = ex, in$)
(12) $H_i^{k,\bullet}(x_i) = \dfrac{\sigma_i^{k} - X_{c,i}^{k,\bullet}}{-\dfrac{1}{F_i'^{k}}\left(X_{c,i}^{k,\bullet} - x_i^{k}\right) + F_i^{k}}\,\left(x_i - x_i^{k}\right) + F_i^{k}$, $\quad F_i'^{k} \neq 0$, $\; i = 1, \ldots, n$, $\; k = 0, \ldots, m$,

where $\mathcal{H}_i^{\bullet}$ is the functional associated with $H_i^{\bullet}$.

In the last step, we define the iterative point $x_i^{k+1} = x_i$ as the solution of the following relation (with $\bullet = ex, in$):
(13) $G_i^{k,\bullet}(x_i) = H_i^{k,\bullet}(x_i) \;\Longleftrightarrow\; \Phi_i^{k,\bullet}\left(x_i - \sigma_i^{k}\right) = -\dfrac{1}{\Phi_i^{k,\bullet}}\left(x_i - x_i^{k}\right) + F_i^{k} \;\Longrightarrow\; x_i^{k+1} = \dfrac{1}{\Phi_i^{k,\bullet} + 1/\Phi_i^{k,\bullet}}\left(\Phi_i^{k,\bullet}\,\sigma_i^{k} + \dfrac{x_i^{k}}{\Phi_i^{k,\bullet}} + F_i^{k}\right)$

with
(14) $\Phi_i^{k,\bullet} = \mathcal{P}_i^{\bullet}(\Xi_i^{\bullet}) = \dfrac{-\dfrac{1}{F_i'^{k}}\left(X_{c,i}^{k,\bullet} - x_i^{k}\right) + F_i^{k}}{X_{c,i}^{k,\bullet} - \sigma_i^{k}}$, $\quad F_i'^{k} \neq 0$, $\; i = 1, \ldots, n$, $\; k = 0, \ldots, m$,

where $\mathcal{P}_i^{\bullet}$ is the functional associated with $\Phi_i^{\bullet}$.

In line with (10), (13) can be rewritten as
(15) $x_i^{k+1} = x_i^{k} + \dfrac{F_i^{k}}{\Phi_i^{k,\bullet} + 1/\Phi_i^{k,\bullet}}\left(1 - \dfrac{\Phi_i^{k,\bullet}}{F_i'^{k}}\right)$, $\quad F_i'^{k} \neq 0$, $\; i = 1, \ldots, n$, $\; k = 0, \ldots, m$.
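A minimal Python sketch of the update (15), assembled from Eqs. (9), (10), and (14) with the exact circle center (the selection between exact and inexact centers via the conditions [BC] is handled separately); the function name and the quadratic test equation are hypothetical.

```python
def aga_step(x, f, df, ddf):
    """One iterate x^{k+1} of Eq. (15), using the exact osculating-circle
    center of Eq. (9). Requires f'(x) != 0; when f''(x) = 0 the circle
    degenerates and we fall back to the Newton point sigma of Eq. (10)."""
    F, dF, ddF = f(x), df(x), ddf(x)
    sigma = x - F / dF                                   # Eq. (10)
    if ddF == 0.0:
        return sigma
    s = -1.0 if dF * ddF > 0 else 1.0                    # sgn(-F' F'')
    Xc = x + s * abs(dF / ddF) * (1.0 + dF**2)           # Eq. (9), exact center
    Phi = (-(Xc - x) / dF + F) / (Xc - sigma)            # Eq. (14)
    return x + F / (Phi + 1.0 / Phi) * (1.0 - Phi / dF)  # Eq. (15)

# Hypothetical check: f(x) = x^2 - 2, root sqrt(2), starting from x = 2.
x = 2.0
for _ in range(8):
    x = aga_step(x, lambda t: t * t - 2.0, lambda t: 2.0 * t, lambda t: 2.0)
```

Note that when $F_i^{k} = 0$ the correction vanishes and $x_i^{k}$ is a fixed point of (15), as expected for a root.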

Schematic diagram of the specific entities used by the new RFA applied to the $i$th component $F_i$ of the system $(S_{nl})$, in the case of monotonically increasing (a) and decreasing (b) evolution, with the known set of parameters $\Delta_i$.

The new iterative method, which we will hereafter refer to as the "Adaptive Geometric-based Algorithm" (AGA), provides a more convenient approximate solution for a system of nonlinear equations of type $(S_{nl})$, that is (see Figure 2),
(16) $x_i^{k+1} = \begin{cases} \Gamma(x_i^{k}; \beta_i^{\bullet}) & \text{if } [BC^{\bullet}] \\ \sigma_i^{k} & \text{else} \end{cases}$, $\quad i = 1, \ldots, n$, $\; k = 0, \ldots, m$,
with
(17) $\Gamma(x_i^{k}; \beta_i^{\bullet}) = x_i^{k} + \dfrac{F_i^{k}}{\Phi_i^{k,\bullet} + 1/\Phi_i^{k,\bullet}}\left(1 - \dfrac{\Phi_i^{k,\bullet}}{F_i'^{k}}\right)$, $\quad \Phi_i^{k,\bullet} = \dfrac{-\dfrac{1}{F_i'^{k}}\left(X_{c,i}^{k,\bullet} - x_i^{k}\right) + F_i^{k}}{X_{c,i}^{k,\bullet} - \sigma_i^{k}}$, $\quad X_{c,i}^{k,\bullet} = x_i^{k} + \mathrm{sgn}\left(-F_i'^{k} F_i''^{k}\right)\left|\dfrac{F_i'^{k}}{F_i''^{k}}\right| M_i^{k}$, $\quad F_i'^{k} \neq 0$, $\; F_i''^{k} \neq 0$, $\; i = 1, \ldots, n$, $\; k = 0, \ldots, m$,
and for conditions [BC1], [BC2], [BC4], and [BC5]:
(18) $M_i^{k} = \begin{cases} 1 + F_i'^{k\,2} & \text{if } [BC1] \text{ or } [BC4] \\ 1 & \text{if } [BC2] \text{ or } [BC5] \end{cases}$
and for conditions [BC3] and [BC6]:
(19) $M_i^{k} = \begin{cases} 1 + F_i'^{k\,2} & \text{if } \bullet = ex \\ 1 & \text{if } \bullet = in, \end{cases}$
where $\Gamma$ denotes the fixed-point function used by the considered RFA (i.e., AGA) and $\beta_i^{\bullet} = \{\Delta_i, X_{c,i}^{k,\bullet}\}$ is the set of known variables.

Geometric interpretation of the new RFA (i.e., AGA) applied to the $i$th component $F_i$ of the system $(S_{nl})$, in the case of monotonically increasing (a) and decreasing (b) evolution, with the known set of parameters $\Delta_i$.

The different conditions $[BC^{\bullet}]$ associated with the proposed RFA (i.e., AGA) are as follows (for $i = 1, \ldots, n$ and $k = 0, \ldots, m$):

First condition [BC1]:
(20) $\mathrm{sgn}(x_i^{k}) = \mathrm{sgn}\left(\Gamma(x_i^{k}; X_{c,i}^{k,ex}, \Delta_i)\right)$, $\quad \mathrm{sgn}\left(F_i(x_i^{k}; \Delta_i)\right) = \mathrm{sgn}\left(F_i\left(\Gamma(x_i^{k}; X_{c,i}^{k,ex}, \Delta_i)\right)\right)$, $\quad \left|F_i\left(\Gamma(x_i^{k}; X_{c,i}^{k,ex}, \Delta_i)\right)\right| < \left|F_i(\sigma_i^{k}; \Delta_i)\right|$.

Second condition [BC2]:
(21) $\mathrm{sgn}(x_i^{k}) = \mathrm{sgn}\left(\Gamma(x_i^{k}; X_{c,i}^{k,in}, \Delta_i)\right)$, $\quad \mathrm{sgn}\left(F_i(x_i^{k}; \Delta_i)\right) = \mathrm{sgn}\left(F_i\left(\Gamma(x_i^{k}; X_{c,i}^{k,in}, \Delta_i)\right)\right)$, $\quad \left|F_i\left(\Gamma(x_i^{k}; X_{c,i}^{k,in}, \Delta_i)\right)\right| < \left|F_i(\sigma_i^{k}; \Delta_i)\right|$.

Third condition [BC3]:
(22) $\mathrm{sgn}(x_i^{k}) = \mathrm{sgn}\left(\Gamma(x_i^{k}; X_{c,i}^{k,\bullet}, \Delta_i)\right)$, $\quad \mathrm{sgn}\left(F_i(x_i^{k}; \Delta_i)\right) = \mathrm{sgn}\left(F_i\left(\Gamma(x_i^{k}; X_{c,i}^{k,\bullet}, \Delta_i)\right)\right)$, $\quad \left|F_i\left(\Gamma(x_i^{k}; X_{c,i}^{k,\bullet}, \Delta_i)\right)\right| < \left|F_i(\sigma_i^{k}; \Delta_i)\right|$.

Fourth condition [BC4]:
(23) $\left|F_i\left(\Gamma(x_i^{k}; X_{c,i}^{k,ex}, \Delta_i)\right)\right| < \left|F_i(\sigma_i^{k}; \Delta_i)\right|$.

Fifth condition [BC5]:
(24) $\left|F_i\left(\Gamma(x_i^{k}; X_{c,i}^{k,in}, \Delta_i)\right)\right| < \left|F_i(\sigma_i^{k}; \Delta_i)\right|$.

Sixth condition [BC6]:
(25) $\left|F_i\left(\Gamma(x_i^{k}; X_{c,i}^{k,\bullet}, \Delta_i)\right)\right| < \left|F_i(\sigma_i^{k}; \Delta_i)\right|$.

3. Numerical Examples

3.1. Preliminary Remarks

In this section, we evaluate the predictive abilities of the numerical iterative method developed in Section 2.3 (i.e., AGA) on several examples, in the case of both scalar and vector nonlinear equations. Hence, AGA is compared with other iterative Newton-Raphson type methods [27, 28, 30-32] coupled with the Jacobi (J) and Gauss-Seidel (GS) techniques. All the numerical implementations of the presented iterative methods were carried out in Matlab (see [26, 34-39]).

The iterative methods used for the different examples are as follows (see [27, 28, 30-32]):

Newton-Raphson Algorithm (NRA):
(26) $X^{k+1} = X^{k} - \mathrm{inv}\left(D^{[1]}(F(X^{k}))\right) \cdot F(X^{k})$, $\quad k = 0, \ldots, m$,

where $D^{[1]}(F(X^{k}))$ denotes the first-order differential operator (i.e., the Jacobian matrix) of the nonlinear function $F$ at the point $X^{k} = \mathrm{trans}(\{x_1^{k}, \ldots, x_n^{k}\})$ and $\mathrm{inv}(\cdot)$ is the inverse operator. It is important to highlight that NRA can be used if and only if the operator $\mathrm{inv}(D^{[1]}(F(X^{k})))$ exists, that is, $\det(D^{[1]}(F(X^{k}))) \neq 0$ for all $X^{k} \in \Omega \subseteq \mathbb{R}^n$ ($\det(\cdot)$ is the determinant of an operator).

Standard Newton's Algorithm (SNA):
(27) $x_i^{k+1} = x_i^{k} - \dfrac{F_i(x_i^{k}; \Delta_i)}{F_i'(x_i^{k}; \Delta_i)}$, $\quad F_i'(x_i^{k}; \Delta_i) \neq 0$, $\; i = 1, \ldots, n$, $\; k = 0, \ldots, m$.

Third-order Modified Newton Method (TMNM):
(28) $x_i^{k+1} = x_i^{k} - \dfrac{F_i(x_i^{k}; \Delta_i)}{F_i'(x_i^{k}; \Delta_i)} - \dfrac{F_i''(x_i^{k}; \Delta_i)\, F_i(x_i^{k}; \Delta_i)^2}{2\, F_i'(x_i^{k}; \Delta_i)^3}$, $\quad F_i'(x_i^{k}; \Delta_i) \neq 0$, $\; i = 1, \ldots, n$, $\; k = 0, \ldots, m$.
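For $n = 1$, the scalar iterations (27) and (28) differ only by the curvature correction term; the Python sketch below (hypothetical helper names) contrasts them on the simple test equation $x^2 - 2 = 0$, which is not one of the paper's examples.

```python
import math

def sna_step(x, f, df):
    """Standard Newton iterate, Eq. (27)."""
    return x - f(x) / df(x)

def tmnm_step(x, f, df, ddf):
    """Third-order modified Newton (Chebyshev-type) iterate, Eq. (28):
    the extra term -F'' F^2 / (2 F'^3) corrects for local curvature."""
    F, dF, ddF = f(x), df(x), ddf(x)
    return x - F / dF - ddF * F**2 / (2.0 * dF**3)

f, df, ddf = (lambda t: t * t - 2.0), (lambda t: 2.0 * t), (lambda t: 2.0)
x_sna, x_tmnm = 2.0, 2.0
for _ in range(3):
    x_sna = sna_step(x_sna, f, df)
    x_tmnm = tmnm_step(x_tmnm, f, df, ddf)
root = math.sqrt(2.0)
```

On this equation the third-order iterate reaches machine-precision accuracy in fewer iterations than the second-order Newton step, at the cost of one extra derivative evaluation per iteration.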

In order to stop the iterative process of each considered algorithm, we consider three coupled stopping criteria for each class of nonlinear equations:

For scalar-valued equations ( E n l ) :

(C1S) on the iteration number,
(29) $m \le K_{max}^{s}$,

where $K_{max}^{s}$ denotes the maximum number of iterations for scalar-valued equations.

(C2S) on the residue error,
(30) $|f(x^{k+1})| \le \epsilon_{res}$, $\quad k = 0, \ldots, m$,

where $\epsilon_{res}$ is the tolerance parameter associated with the residue error criterion for scalar-valued equations and $|\cdot|$ is the absolute-value norm.

(C3S) on the approximation error,
(31) $|x^{k+1} - x^{k}| \le \epsilon_{ae}^{s}$, $\quad k = 0, \ldots, m$,

where $\epsilon_{ae}^{s}$ is the tolerance parameter associated with the absolute error criterion for scalar-valued equations.

For vector-valued equations ( S n l ) :

(C1V) on the iteration number, we adopt the same condition as (C1S), that is, $m \le K_{max}^{v} = K_{max}^{s}$ (where $K_{max}^{v}$ denotes the maximum number of iterations for vector-valued equations).

(C2V) on the residue error,
(32) $(C2V1):\; \|F(X^{k})\|_1 \le \epsilon_{rev}^{1} \quad \text{or} \quad (C2V2):\; \|F(X^{k})\|_2 \le \epsilon_{rev}^{2}$, $\quad k = 0, \ldots, m$,

where $\epsilon_{rev}^{1}$ (resp., $\epsilon_{rev}^{2}$) is the tolerance parameter associated with the residue error criterion for vector-valued equations and $\|\Upsilon\|_p = \left(\sum_{i=1}^{n} |\Upsilon_i|^p\right)^{1/p} = \left(|\Upsilon_1|^p + |\Upsilon_2|^p + \cdots + |\Upsilon_i|^p + \cdots + |\Upsilon_n|^p\right)^{1/p}$ is the vector $p$-norm (here, $p = 1, 2$). It is important to point out that $\|\Upsilon\|_2 = \sqrt{|\Upsilon_1|^2 + |\Upsilon_2|^2 + \cdots + |\Upsilon_i|^2 + \cdots + |\Upsilon_n|^2}$ is the so-called Euclidean norm.

(C3V) on the approximation error,
(33) $(C3V1):\; \|X^{k+1} - X^{k}\|_1 \le \epsilon_{aev}^{1} \quad \text{or} \quad (C3V2):\; \|X^{k+1} - X^{k}\|_2 \le \epsilon_{aev}^{2}$, $\quad k = 0, \ldots, m$,

where $\epsilon_{aev}^{1}$ and $\epsilon_{aev}^{2}$ are the tolerance parameters associated with the absolute error criterion for vector-valued equations.

Here, for the stopping criteria (C1S, C1V), (C2S, C2V), and (C3S, C3V) associated with the iterative process, we consider: (i) the maximum number of iterations $K_{max}^{s} = K_{max}^{v} = 20$; (ii) the tolerance parameters $\epsilon_{res} = \epsilon_{ae}^{s} = 10^{-30}$ for the scalar-valued equations; (iii) the tolerance parameters $\epsilon_{rev}^{i} = \epsilon_{aev}^{i} = 10^{-30}$ (with $i = 1, 2$) for the vector-valued equations.
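For the scalar case these three criteria combine into a single test, sketched below in Python with hypothetical names. Note that a tolerance of $10^{-30}$ lies far below double-precision resolution (machine epsilon is about $2.2 \times 10^{-16}$), so in a standard floating-point implementation the iteration cap or an exactly zero step typically triggers first.

```python
def stop_iteration(k, f_new, x_new, x_old,
                   k_max=20, eps_res=1e-30, eps_ae=1e-30):
    """Coupled scalar stopping test: (C1S) iteration cap m <= K_max,
    (C2S) residual |f(x^{k+1})| <= eps_res, and
    (C3S) step size |x^{k+1} - x^k| <= eps_ae."""
    return (k >= k_max
            or abs(f_new) <= eps_res
            or abs(x_new - x_old) <= eps_ae)
```

The vector-valued criteria (C1V)-(C3V) are the same test with the absolute values replaced by the 1-norm or the Euclidean 2-norm of the residual and step vectors.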

3.2. Examples

We consider the following nonlinear equations.

In case n = 1 (i.e., scalar-valued equations), one has the following:

Example 1.

Consider the following: (34) $(E_{nl}^{I}):\; f(x) = 10x^3 + 5\log(x) - 2 = 0$.

Example 2.

Consider the following: (35) $(E_{nl}^{II}):\; f(x) = 3\exp(x) + x^3\cos(x) - 30 = 0$.

In case n = 2 (i.e., vector-valued equations), one has the following:

Example 3.

Consider the following: (36) $(S_{nl}^{I}):\; F_1(x_1, x_2) = 3x_1^3 - 2x_2 - 5 = 0$, $\; F_2(x_1, x_2) = x_1^2 - 2x_1 x_2^2 = 0$.

Example 4.

Consider the following: (37) $(S_{nl}^{II}):\; F_1(x_1, x_2) = x_1 \ln(x_2) + 4x_1^3 - 5 = 0$, $\; F_2(x_1, x_2) = x_2^4 + x_1 x_2 - 6 = 0$.
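As a reference point for the comparisons that follow, Example 1 can be coded directly; this Python sketch assumes the log in (34) is the natural logarithm (an assumption, since the paper does not specify the base) and applies the standard Newton iteration (27) from the initial guess $x^0 = 5$ used in the paper, with a residual tolerance rescaled to double precision.

```python
import math

def f1(x):
    """Example 1, Eq. (34): f(x) = 10 x^3 + 5 log(x) - 2 (natural log assumed)."""
    return 10.0 * x**3 + 5.0 * math.log(x) - 2.0

def df1(x):
    """First derivative of f1."""
    return 30.0 * x**2 + 5.0 / x

x = 5.0                       # initial guess used in the paper for Example 1
for _ in range(20):           # iteration cap K_max = 20
    fx = f1(x)
    if abs(fx) <= 1e-14:      # residual criterion (C2S) at double-precision scale
        break
    x -= fx / df1(x)
```

Starting from $x^0 = 5$ the iterates remain in the domain $x > 0$ of the logarithm and converge to the root near $x \approx 0.716$.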

3.3. Results and Discussion

All the numerical results of Examples 1-4 are shown in Figures 3-32. For Example 1 (resp., Example 2) with the initial guess $x_1^0 = 5$ (resp., $x_1^0 = 10$), the approximate solutions $x_1^k$ provided by AGA with condition [BC1]/[BC3] (condition [BC1] is the same as [BC3] when $\bullet = ex$) are better than those of AGA with condition [BC2], NRA/SNA (for $n = 1$, NRA and SNA coincide), and TMNM. For Example 1 (resp., Example 2) with the initial guess $x_1^0 = 10^{-2}$ (resp., $x_1^0 = 1$), the approximate solutions $x_1^k$ provided by AGA with conditions [BC4], [BC5], and [BC6] are more accurate in the first iterations than those of NRA/SNA and TMNM. For Example 3 with the initial guess $(x_1^0, x_2^0) = (20, 20)$, the approximate solutions $(x_1^k, x_2^k)$ given by AGA using the Gauss-Seidel (GS) or Jacobi (J) procedure with: (i) condition [BC1] are numerically more accurate than NRA, TMNM, and SNA; (ii) conditions [BC2] and [BC3] are more accurate than NRA (only in the first iterations) and SNA. With the initial guess $(x_1^0, x_2^0) = (0.42, 0.42)$, the approximate solutions $(x_1^k, x_2^k)$ provided by AGA using: (i) the Gauss-Seidel (GS) procedure with condition [BC6] give much greater numerical accuracy than NRA and SNA; (ii) the Gauss-Seidel (GS) procedure with conditions [BC4] and [BC5] offer much greater numerical accuracy than NRA and SNA; (iii) the Jacobi (J) procedure with conditions [BC4], [BC5], and [BC6] are better than NRA (only in the first iterations) and SNA.
For Example 4 with the initial guess $(x_1^0, x_2^0) = (10, 10)$, the approximate solutions $(x_1^k, x_2^k)$ given by AGA using: (i) the Gauss-Seidel (GS) procedure with conditions [BC1] and [BC3] are numerically more accurate than SNA, TMNM, and NRA; (ii) the Gauss-Seidel (GS) procedure with condition [BC2] are more accurate than TMNM, NRA (only in the first iterations), and SNA; (iii) the Jacobi (J) procedure with conditions [BC1], [BC2], and [BC3] are numerically more accurate than NRA (only in the first iterations) and both TMNM and SNA. With the initial guess $(x_1^0, x_2^0) = (0.5, 0.5)$, the approximate solutions $(x_1^k, x_2^k)$ obtained by AGA using: (i) the Gauss-Seidel (GS) procedure with conditions [BC5] and [BC6] are numerically more accurate than SNA and NRA; (ii) the Jacobi (J) procedure with conditions [BC4], [BC5], and [BC6] are more accurate than NRA (only in the first iterations) and SNA. Overall, these numerical results show that the Adaptive Geometric-based Algorithm (AGA) provides quite accurate approximate solutions for both nonlinear equations and systems, and can provide a better or more suitable approximate solution than the Newton-Raphson Algorithm (NRA).

Evolution of approximate solutions x k associated with ( E n l I ) compared to k th iteration for Example 1 (where x 0 = 5 ) with NRA/SNA (black solid line with circles), TMNM (green solid line with circles), and AGA with condition [ B C 1 ] / [ B C 3 ] (blue solid line with circles) and condition [ B C 2 ] (red solid line with circles).

Evolution of approximate solutions x k associated with ( E n l I ) compared to k th iteration for Example 1 (where x 0 = 10 - 2 ) with NRA/SNA (black solid line with circles), TMNM (green solid line with circles), and AGA with condition [ B C 4 ] (blue solid line with circles) and condition [ B C 5 ] / [ B C 6 ] (magenta solid line with circles).

Evolution of residue error (C2S) and approximation error (C3S) associated with ( E n l I ) compared to k th iteration for Example 1 (where x 0 = 5 ) with NRA/SNA (black solid line with diamonds and squares), TMNM (green solid line with diamonds and squares), and AGA with condition [ B C 1 ] / [ B C 3 ] (blue solid line with diamonds and squares) and condition [ B C 2 ] (red solid line with diamonds and squares).

Evolution of residue error (C2S) and approximation error (C3S) associated with ( E n l I ) compared to k th iteration for Example 1 (where x 0 = 10 - 2 ) with NRA/SNA (black solid line with diamonds and squares), TMNM (green solid line with diamonds and squares), and AGA with condition [BC4] (blue solid line with diamonds and squares) and condition [ B C 5 ] / [ B C 6 ] (magenta solid line with diamonds and squares).

Evolution of approximate solutions x k associated with ( E n l I I ) compared to k th iteration for Example 2 (where x 0 = 10 ) with NRA/SNA (black solid line with circles), TMNM (green solid line with circles), and AGA with condition [ B C 1 ] / [ B C 3 ] (blue solid line with circles) and condition [ B C 2 ] (red solid line with circles).

Evolution of approximate solutions x k associated with ( E n l I I ) compared to k th iteration for Example 2 (where x 0 = 1 ) with NRA/SNA (black solid line with circles), TMNM (green solid line with circles), and AGA with condition [ B C 4 ] (blue solid line with circles), condition [ B C 5 ] (red solid line with circles), and condition [ B C 6 ] (magenta solid line with circles).

Evolution of residue error (C2S) and approximation error (C3S) associated with ( E n l I I ) compared to k th iteration for Example 2 (where x 0 = 10 ) with NRA/SNA (black solid line with diamonds and squares), TMNM (green solid line with diamonds and squares), and AGA with condition [ B C 1 ] / [ B C 3 ] (blue solid line with diamonds and squares) and condition [ B C 2 ] (red solid line with diamonds and squares).

Evolution of residue error (C2S) and approximation error (C3S) associated with ( E n l I I ) compared to k th iteration for Example 2 (where x 0 = 1 ) with NRA/SNA (black solid line with diamonds and squares), TMNM (green solid line with diamonds and squares), and AGA with condition [ B C 4 ] (blue solid line with diamonds and squares), condition [ B C 5 ] (red solid line with diamonds and squares), and condition [ B C 6 ] (magenta solid line with diamonds and squares).

Evolution of approximate solutions ( x 1 k , x 2 k ) associated with ( S n l I ) compared to k th iteration for Example 3 (where ( x 1 0 , x 2 0 ) = ( 20,20 ) ) with NRA (black solid line with circles) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with circles), TMNM-GS (green solid line with circles), AGA-GS with condition [ B C 1 ] (blue solid line with circles), condition [ B C 2 ] (red solid line with circles), and condition [ B C 3 ] (magenta solid line with circles).

Evolution of approximate solutions ( x 1 k , x 2 k ) associated with ( S n l I ) compared to k th iteration for Example 3 (where ( x 1 0 , x 2 0 ) = ( 20,20 ) ) with NRA (black solid line with circles) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with circles), TMNM-J (green dashed line with circles), and AGA-J with condition [ B C 1 ] (blue dashed line with circles), condition [ B C 2 ] (red dashed line with circles), and condition [ B C 3 ] (magenta dashed line with circles).

Evolution of residue error using (C2V1) and (C2V2) conditions associated with ( S n l I ) compared to k th iteration for Example 3 (where ( x 1 0 , x 2 0 ) = ( 20,20 ) ) with NRA (black solid line with diamonds) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with diamonds), TMNM-GS (green solid line with diamonds), and AGA-GS with condition [ B C 1 ] (blue solid line with diamonds), condition [ B C 2 ] (red solid line with diamonds), and condition [BC3] (magenta solid line with diamonds).

Evolution of residue error using (C2V1) and (C2V2) conditions associated with ( S n l I ) compared to k th iteration for Example 3 (where ( x 1 0 , x 2 0 ) = ( 20,20 ) ) with NRA (black solid line with diamonds) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with diamonds), TMNM-J (green dashed line with diamonds), and AGA-J with condition [ B C 1 ] (blue dashed line with diamonds), condition [ B C 2 ] (red dashed line with diamonds), and condition [ B C 3 ] (magenta dashed line with diamonds).

Evolution of the approximation error using (C3V1) and (C3V2) conditions associated with ( S n l I ) compared to k th iteration for Example 3 (where ( x 1 0 , x 2 0 ) = ( 20,20 ) ) with NRA (black solid line with squares) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with squares), TMNM-GS (green solid line with squares), and AGA-GS with condition [ B C 1 ] (blue solid line with squares), condition [ B C 2 ] (red solid line with squares), and condition [ B C 3 ] (magenta solid line with squares).

Evolution of the approximation error using (C3V1) and (C3V2) conditions associated with ( S n l I ) compared to k th iteration for Example 3 (where ( x 1 0 , x 2 0 ) = ( 20,20 ) ) with NRA (black solid line with squares) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with squares), TMNM-J (green dashed line with squares), and AGA-J with condition [ B C 1 ] (blue dashed line with squares), condition [ B C 2 ] (red dashed line with squares), and condition [ B C 3 ] (magenta dashed line with squares).

Evolution of approximate solutions ( x 1 k , x 2 k ) associated with ( S n l I ) compared to k th iteration for Example 3 (where ( x 1 0 , x 2 0 ) = ( 0.42,0.42 ) ) with NRA (black solid line with circles) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with circles) and AGA-GS with condition [ B C 4 ] (blue solid line with circles), condition [ B C 5 ] (red solid line with circles), and condition [ B C 6 ] (magenta solid line with circles).

Evolution of approximate solutions ( x 1 k , x 2 k ) associated with ( S n l I ) compared to k th iteration for Example 3 (where ( x 1 0 , x 2 0 ) = ( 0.42,0.42 ) ) with NRA (black solid line with diamonds and squares) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with circles) and AGA-J with condition [ B C 4 ] (blue dashed line with circles), condition [ B C 5 ] (red dashed line with circles), and condition [ B C 6 ] (magenta dashed line with circles).

Evolution of residue error using (C2V1) and (C2V2) conditions associated with ( S n l I ) compared to k th iteration for Example 3 (where ( x 1 0 , x 2 0 ) = ( 0.42,0.42 ) ) with NRA (black line) and some other algorithms coupled with Gauss-Seidel (GS) and Jacobi (J) procedures: SNA-GS/SNA-J (cyan line) and AGA-GS/AGA-J with condition [ B C 4 ] (blue line), condition [ B C 5 ] (red line), and condition [ B C 6 ] (magenta line).

Evolution of approximation error for (C3V1) and (C3V2) conditions associated with ( S n l I ) compared to k th iteration for Example 3 (where ( x 1 0 , x 2 0 ) = ( 0.42,0.42 ) ) with NRA (black line) and some other algorithms coupled with Gauss-Seidel (GS) and Jacobi (J) procedures: SNA-GS/SNA-J (cyan line) and AGA-GS/AGA-J with condition [ B C 4 ] (blue line), condition [ B C 5 ] (red line), and condition [ B C 6 ] (magenta line).

Evolution of approximate solutions ( x 1 k , x 2 k ) associated with ( S n l I I ) compared to k th iteration for Example 4 (where ( x 1 0 , x 2 0 ) = ( 10,10 ) ) with NRA (black solid line with circles) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with circles), TMNM-GS (green solid line with circles), and AGA-GS with condition [ B C 1 ] (blue solid line with circles), condition [ B C 2 ] (red solid line with circles), and condition [ B C 3 ] (magenta solid line with circles).

Evolution of approximate solutions ( x 1 k , x 2 k ) associated with ( S n l I I ) compared to k th iteration for Example 4 (where ( x 1 0 , x 2 0 ) = ( 10,10 ) ) with NRA (black solid line with circles) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with circles), TMNM-J (green dashed line with circles), and AGA-J with condition [ B C 1 ] (blue dashed line with circles), condition [ B C 2 ] (red dashed line with circles), and condition [ B C 3 ] (magenta dashed line with circles).

Evolution of residue error using (C2V1) and (C2V2) conditions associated with ( S n l I I ) compared to k th iteration for Example 4 (where ( x 1 0 , x 2 0 ) = ( 10,10 ) ) with NRA (black solid line with diamonds) and some other algorithms coupled with Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with diamonds), TMNM-GS (green solid line with diamonds), and AGA-GS with condition [ B C 1 ] (blue solid line with diamonds), condition [ B C 2 ] (red solid line with diamonds), and condition [ B C 3 ] (magenta solid line with diamonds).

Evolution of residue error using (C2V1) and (C2V2) conditions associated with ( S n l I I ) compared to k th iteration for Example 4 (where ( x 1 0 , x 2 0 ) = ( 10,10 ) ) with NRA (black solid line with diamonds) and some other algorithms coupled with Jacobi (J) procedure: SNA-J (cyan dashed line with diamonds), TMNM-J (green dashed line with diamonds), and AGA-J with condition [ B C 1 ] (blue dashed line with diamonds), condition [ B C 2 ] (red dashed line with diamonds), and condition [ B C 3 ] (magenta dashed line with diamonds).

Evolution of the approximation error under the (C3V1) and (C3V2) conditions associated with $(S_{nl}^{II})$ versus the $k$th iteration for Example 4 (where $(x_1^0, x_2^0) = (10, 10)$) with NRA (black solid line with squares) and some other algorithms coupled with the Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with squares), TMNM-GS (green solid line with squares), and AGA-GS with condition $[BC_1]$ (blue solid line with squares), condition $[BC_2]$ (red solid line with squares), and condition $[BC_3]$ (magenta solid line with squares).

Evolution of the approximation error under the (C3V1) and (C3V2) conditions associated with $(S_{nl}^{II})$ versus the $k$th iteration for Example 4 (where $(x_1^0, x_2^0) = (10, 10)$) with NRA (black solid line with squares) and some other algorithms coupled with the Jacobi (J) procedure: SNA-J (cyan dashed line with squares), TMNM-J (green dashed line with squares), and AGA-J with condition $[BC_1]$ (blue dashed line with squares), condition $[BC_2]$ (red dashed line with squares), and condition $[BC_3]$ (magenta dashed line with squares).

Evolution of the approximate solutions $(x_1^k, x_2^k)$ associated with $(S_{nl}^{II})$ versus the $k$th iteration for Example 4 (where $(x_1^0, x_2^0) = (0.5, 0.5)$) with NRA (black solid line with circles) and some other algorithms coupled with the Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with circles) and AGA-GS with condition $[BC_4]$ (blue solid line with circles), condition $[BC_5]$ (red solid line with circles), and condition $[BC_6]$ (magenta solid line with circles).

Evolution of the approximate solutions $(x_1^k, x_2^k)$ associated with $(S_{nl}^{II})$ versus the $k$th iteration for Example 4 (where $(x_1^0, x_2^0) = (0.5, 0.5)$) with NRA (black solid line with circles) and some other algorithms coupled with the Jacobi (J) procedure: SNA-J (cyan dashed line with circles) and AGA-J with condition $[BC_4]$ (blue dashed line with circles), condition $[BC_5]$ (red dashed line with circles), and condition $[BC_6]$ (magenta dashed line with circles).

Evolution of the residue error under the (C2V1) and (C2V2) conditions associated with $(S_{nl}^{II})$ versus the $k$th iteration for Example 4 (where $(x_1^0, x_2^0) = (0.5, 0.5)$) with NRA (black solid line with diamonds) and some other algorithms coupled with the Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with diamonds) and AGA-GS with condition $[BC_4]$ (blue solid line with diamonds), condition $[BC_5]$ (red solid line with diamonds), and condition $[BC_6]$ (magenta solid line with diamonds).

Evolution of the residue error under the (C2V1) and (C2V2) conditions associated with $(S_{nl}^{II})$ versus the $k$th iteration for Example 4 (where $(x_1^0, x_2^0) = (0.5, 0.5)$) with NRA (black solid line with diamonds) and some other algorithms coupled with the Jacobi (J) procedure: SNA-J (cyan dashed line with diamonds) and AGA-J with condition $[BC_4]$ (blue dashed line with diamonds), condition $[BC_5]$ (red dashed line with diamonds), and condition $[BC_6]$ (magenta dashed line with diamonds).

Evolution of the approximation error under the (C3V1) and (C3V2) conditions associated with $(S_{nl}^{II})$ versus the $k$th iteration for Example 4 (where $(x_1^0, x_2^0) = (0.42, 0.42)$) with NRA (black solid line with squares) and some other algorithms coupled with the Gauss-Seidel (GS) procedure: SNA-GS (cyan solid line with squares) and AGA-GS with condition $[BC_4]$ (blue solid line with squares), condition $[BC_5]$ (red solid line with squares), and condition $[BC_6]$ (magenta solid line with squares).

Evolution of the approximation error under the (C3V1) and (C3V2) conditions associated with $(S_{nl}^{II})$ versus the $k$th iteration for Example 4 (where $(x_1^0, x_2^0) = (0.42, 0.42)$) with NRA (black solid line with squares) and some other algorithms coupled with the Jacobi (J) procedure: SNA-J (cyan dashed line with squares) and AGA-J with condition $[BC_4]$ (blue dashed line with squares), condition $[BC_5]$ (red dashed line with squares), and condition $[BC_6]$ (magenta dashed line with squares).