Higher-Order Root-Finding Algorithms and Their Basins of Attraction

In this paper, we propose and analyze three new root-finding algorithms for solving nonlinear equations in one variable. We derive these algorithms with the help of the variational iteration technique and discuss their convergence criteria. The dominance of the proposed algorithms is illustrated by solving several test examples and comparing the results with those of other well-known iterative methods from the literature. In the end, we present the basins of attraction of the proposed algorithms for complex polynomials of different degrees to observe their fractal behavior and dynamical aspects.


Introduction
Many problems in mathematics, physics, and the engineering sciences are linked with the solution of nonlinear equations of the form f(x) = 0, where f: X ⊂ R ⟶ R is a scalar function defined on an open connected set X.
In most cases, the roots of such equations cannot be found directly, and therefore we need to adopt an iterative method for approximating them. In an iterative method, we start the process by choosing an initial guess x_0, which is refined sequentially by means of iterations until the approximate solution is achieved. Some of the basic and classical methods are given in [1][2][3][4][5][6][7][8][9] and the references therein. The most famous and well-known method for finding roots of nonlinear equations is

x_{n+1} = x_n − f(x_n)/f′(x_n),

which is the quadratically convergent Newton's method [10] for the solution of nonlinear equations. To improve its convergence, a large number of iteration schemes have been proposed by means of various techniques, such as the Adomian decomposition method, Taylor's series, perturbation methods, quadrature formulas, interpolation, finite difference techniques for removing higher-order derivatives, and variational iteration techniques; see [10][11][12][13][14][15][16][17][18][19][20][21][22] and the references therein.
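The classical Newton iteration can be sketched in a few lines of Python (the helper name `newton` and the tolerances are ours):

```python
# Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n), quadratically
# convergent for a simple root given a sufficiently close initial guess.
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)
    return x

# Example: the root of f(x) = x^2 - 2 near x0 = 1 is sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```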
In this paper, we propose three new algorithms using the variational iteration technique by considering two auxiliary functions ϕ(x) and ψ(x). The first one, ϕ(x), behaves like a predictor function of convergence order q, where q ≥ 1, and helps to attain iterative methods of convergence order q + r, where r ≥ 1 is the order of convergence of the second auxiliary function ψ(x). Using the variational iteration technique, we develop some new higher-order root-finding algorithms with high performance and efficiency. The variational iteration technique was introduced by Inokuti et al. [15] in 1978. Using this technique, Noor and Shah [20, 21] proposed some iterative methods for the solution of nonlinear equations. The technique has been used to solve a variety of diverse problems [12][13][14]. Now, we apply it to obtain higher-order root-finding algorithms. The new algorithms are very fast, requiring fewer iterations to reach the required root; they are free from third and higher derivatives and have ninth-order convergence, which raises their efficiency index. The rest of the paper is organized as follows. The new iteration schemes are described in Section 2. In Section 3, the convergence criteria of the proposed algorithms are discussed. In Section 4, various test examples are solved to compare their performance with other similar methods from the literature. The basins of attraction for some complex polynomials are presented in Section 5, showing the dynamical and fractal behavior of the proposed algorithms. Finally, the conclusion of the paper is given in Section 6.

Construction of Root-Finding Algorithms Using Variational Iteration Technique
In this section, we construct some new root-finding algorithms with the help of the variational iteration technique. These algorithms are multistep iterative methods which involve predictor and corrector steps, and they possess a higher order of convergence than one-step methods. By applying the variational iteration technique, we derive some new root-finding algorithms of order q + r, where q, r ≥ 1 are the orders of convergence of the auxiliary iteration functions ϕ(x) and ψ(x). Now, consider the nonlinear equation f(x) = 0. Suppose that α is a simple root and c is an initial guess sufficiently close to α. For better understanding and to convey the basic idea, we suppose an approximate solution x_n of (3). We consider ϕ(x) and ψ(x) as two iteration functions of order q and r, respectively. Then (5), where t = q/r, is a recurrence relation which generates iterative methods of order q + r; g(x) is an arbitrary function which later on is converted to g(ψ(x_n)), and μ is a parameter, called the Lagrange multiplier, which can be determined from (5) by using the optimality criterion (6). From (5) and (6), we get (7). Now, we apply (7) to construct a general iterative scheme. For this, suppose that

ϕ(x_n) = x_n − f(x_n)/f′(x_n) − f²(x_n) f″(x_n) / (2 f′³(x_n)), (8)

which is the well-known Householder's method with cubic convergence [7]. With the help of (7) and (8), we can write

ϕ′(x_n) = f(y_n) g(y_n) / [t y′_n (f′(y_n) g(y_n) + f(y_n) g′(y_n))]. (9)

Let z_n be given by the two-step iterative method (10), which has convergence of order six. Differentiating equation (10) with respect to x gives (12), and from Taylor's series we can write

f(z_n) ≈ f(y_n) + (z_n − y_n) f′(y_n),

where the last expression has been obtained by substituting the value of z_n − y_n from (10).
From (11) and (12), we obtain (13), and with the help of (9), (10), and (13), we get

x_{n+1} = z_n − 2 f(z_n) g(y_n) / [t (g′(y_n) f(y_n) + g(y_n) f′(y_n))], (14)

where t = 6/3 = 2, in accordance with the technique described above. Then, equation (14) becomes

x_{n+1} = z_n − f(z_n) g(y_n) / [g′(y_n) f(y_n) + g(y_n) f′(y_n)]. (15)
Relation (15) is the main and general iterative scheme, which we use to deduce some new root-finding algorithms by considering particular cases of the auxiliary function g, keeping in mind that g should not be zero or small at any iterate. If g vanishes, then (15) reduces to x_{n+1} = z_n, which coincides with (10), and no new algorithms can be derived.
Using these values in (15), we obtain the following algorithm.
Algorithm 1. For a given x_0, compute the approximate solution x_{n+1} by the following iterative schemes: Using these values in (15), we obtain the following algorithm.
Algorithm 2. For a given x_0, compute the approximate solution x_{n+1} by the following iterative schemes: Using these values in (15), we obtain the following algorithm.
Algorithm 3. For a given x_0, compute the approximate solution x_{n+1} by the following iterative schemes: To obtain the best results in all of the above algorithms, always choose the value of β that makes the denominator nonzero and largest in magnitude.
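The general scheme can be sketched numerically. Because the explicit formulas of Algorithms 1–3 and of steps (8) and (10) are not reproduced in this excerpt, the following Python sketch rests on assumptions that are ours alone: Householder's cubically convergent step as the predictor, a Newton correction for z_n as one sixth-order realization of the two-step method (10), and g(x) = e^{−βx} as one admissible auxiliary function that never vanishes.

```python
import math

# Sketch of the general scheme (15) under stated assumptions; this is
# not the authors' exact Algorithms 1-3.
def general_scheme(f, f1, f2, x0, beta=1.0, tol=1e-14, max_iter=50):
    g = lambda x: math.exp(-beta * x)            # assumed auxiliary g
    g1 = lambda x: -beta * math.exp(-beta * x)   # its derivative g'
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            return x
        # Assumed predictor: Householder's cubically convergent step.
        y = x - f(x) / f1(x) - f(x) ** 2 * f2(x) / (2.0 * f1(x) ** 3)
        # Assumed second step: Newton correction of y (order 2 * 3 = 6).
        z = y - f(y) / f1(y)
        # Corrector (15): x = z - f(z) g(y) / (g'(y) f(y) + g(y) f'(y)).
        x = z - f(z) * g(y) / (g1(y) * f(y) + g(y) * f1(y))
    return x

# Example: f(x) = x^3 - 1 with simple root 1.
root = general_scheme(lambda x: x ** 3 - 1.0,
                      lambda x: 3.0 * x ** 2,
                      lambda x: 6.0 * x,
                      1.5)
```

A nonvanishing g keeps the denominator in (15) away from zero, which is exactly the requirement stated above for choosing β.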

Convergence Analysis
In this section, we discuss the convergence criteria of the general iteration scheme described in relation (15).

Theorem 1. Let α ∈ I be a simple root of the differentiable function f: I ⊂ R ⟶ R on an open interval I.
If the initial guess x_0 is sufficiently close to α, then the convergence order of the main and general iteration scheme described in relation (15) is at least nine.
Using equations (20)–(38) in the general iteration scheme (15), we obtain an error relation of the form e_{n+1} = O(e_n⁹), which shows that the main and general iteration scheme (15) has ninth-order convergence, and hence all algorithms deduced from it have the same order of convergence.

Numerical Results
In this section, we include some nonlinear functions to demonstrate the performance of the newly proposed algorithms for β = 1. We compare these algorithms with the following well-known iterative methods:

Newton's Method (NM).
For a given x_0, compute the approximate solution x_{n+1} by the iterative scheme

x_{n+1} = x_n − f(x_n)/f′(x_n),

which is the well-known Newton's method [10] for finding zeros of nonlinear functions, having quadratic order of convergence.

Halley's Method (HM).
For a given x_0, compute the approximate solution x_{n+1} by the iterative scheme

x_{n+1} = x_n − 2 f(x_n) f′(x_n) / [2 f′²(x_n) − f(x_n) f″(x_n)],

which is the so-called Halley's method [2] for root-finding of nonlinear functions, and which converges cubically.
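A minimal Python sketch of this iteration (helper name and tolerances are ours):

```python
# Halley's method: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f''),
# cubically convergent for a simple root.
def halley(f, f1, f2, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d1, d2 = f1(x), f2(x)
        x = x - 2.0 * fx * d1 / (2.0 * d1 * d1 - fx * d2)
    return x

# Example: the real cube root of 2.
root = halley(lambda x: x ** 3 - 2.0,
              lambda x: 3.0 * x ** 2,
              lambda x: 6.0 * x,
              1.5)
```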

Traub's Method (TM).
For a given x_0, compute the approximate solution x_{n+1} by the following iterative schemes, known as Traub's method [10] for finding roots of nonlinear functions, which possesses fourth order of convergence.
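The exact two-step formula is not reproduced in this excerpt. The composed ("double") Newton step below is shown as one standard fourth-order two-step scheme; treating it as the method compared in the paper is our assumption.

```python
import math

# One fourth-order two-step scheme: a Newton predictor followed by a
# Newton corrector evaluated at the predicted point.
def two_step_newton(f, f1, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            return x
        y = x - f(x) / f1(x)  # predictor: Newton step at x_n
        x = y - f(y) / f1(y)  # corrector: Newton step at y_n
    return x

# Example: the root of cos(x) - x near x0 = 1.
root = two_step_newton(lambda x: math.cos(x) - x,
                       lambda x: -math.sin(x) - 1.0,
                       1.0)
```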

Modified Halley's Method (MHM).
For a given x_0, compute the approximate solution x_{n+1} by the following iterative schemes:

x_{n+1} = y_n − 2 f(x_n) f(y_n) f′(y_n) / [2 f(x_n) f′²(y_n) − f′²(x_n) f(y_n) + f′(x_n) f′(y_n) f(y_n)], (36)

which is the modified Halley's method [23] for finding the solution of nonlinear functions, having fifth order of convergence.
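A sketch of (36) in Python; taking y_n to be the Newton step is our assumption, since the predictor step is not reproduced in this excerpt.

```python
# Modified Halley's method as transcribed in (36), with an assumed
# Newton-step predictor for y_n.
def modified_halley(f, f1, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            return x
        y = x - f(x) / f1(x)  # assumed predictor step
        fx, fy, d1x, d1y = f(x), f(y), f1(x), f1(y)
        num = 2.0 * fx * fy * d1y
        den = 2.0 * fx * d1y ** 2 - d1x ** 2 * fy + d1x * d1y * fy
        x = y - num / den     # corrector step (36)
    return x

# Example: sqrt(2) as the root of x^2 - 2.
root = modified_halley(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```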
In order to compare the above described methods numerically with the newly developed algorithms, the following test examples have been solved, as shown in Table 1. Table 1 shows the numerical comparison of our developed algorithms (for β = 1) with Newton's method, Halley's method, Traub's method, and modified Halley's method. The columns represent the number of iterations N, the magnitude |f(x)| at the final estimate x_{n+1}, the approximate root x_{n+1}, and the computational order of convergence (COC), which can be expressed as

COC ≈ ln(|x_{n+1} − x_n| / |x_n − x_{n−1}|) / ln(|x_n − x_{n−1}| / |x_{n−1} − x_{n−2}|),

as suggested by Cordero and Torregrosa (2007) [24]. The numerical results of Table 1 show that our proposed algorithms perform best among the compared methods. For example, for the first test example f_1 of Table 1, Algorithm 3 is the best, as it took the fewest iterations among all compared methods with great precision. For the second, third, and fifth test examples, Algorithm 1 performed better than the others, and for the fourth one, Algorithm 3 is superior. In short, the proposed algorithms are best in terms of accuracy, speed, number of iterations, and computational order of convergence as compared to the other well-known iterative methods. All numerical examples have been solved using Maple 13, taking the accuracy ε = 10⁻¹⁵ in the stopping criterion (39). Table 2 shows the comparison of the number of iterations required by the different iterative methods and by our developed algorithms (for β = 1) to approximate the root of the given nonlinear function with accuracy ε = 10⁻¹⁰⁰. The columns represent the number of iterations for the different functions along with the initial guess x_0.
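The COC formula quoted above can be evaluated from the last four iterates; the Newton-iteration example below is ours, and its COC should approach the theoretical order 2.

```python
import math

# Computational order of convergence (COC) from four consecutive
# iterates, per the Cordero-Torregrosa formula.
def coc(x_prev2, x_prev1, x_curr, x_next):
    e2 = abs(x_next - x_curr)
    e1 = abs(x_curr - x_prev1)
    e0 = abs(x_prev1 - x_prev2)
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton iterates for f(x) = x^2 - 2 starting from x0 = 1.5.
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
order = coc(*xs[-4:])
```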
The numerical results shown in Table 2 again confirm the fast performance of the proposed algorithms in terms of the number of iterations for the above defined stopping criterion with the given accuracy. In all test examples, the proposed algorithms required fewer iterations than the other iterative methods. All calculations have been carried out using Maple 13. Table 3 shows the effect of the parameter on the proposed algorithms for three different values of β. We applied the proposed algorithms to different test examples, and the obtained results show that the numerical behavior of the proposed algorithms changes with the parameter β. A major change can be seen in the first test example f_1 of Table 3: Algorithm 1 took six iterations for β = 1, thirty-two iterations for β = 0.10, and thirteen iterations for β = 0.25. Similar changes can be observed in the other examples of Table 3. Looking at the overall results of Table 3, we conclude that β = 1 is the best choice of the parameter for the proposed algorithms, which we have already used in Tables 1 and 2.

Basins of Attraction
An attractor's basin of attraction is the region of the phase space, over which iterations are defined, such that any point (any initial condition) in that region will eventually be iterated into the attractor. Nonlinear equations or systems can give rise to a richer variety of behavior than linear systems. The basins of attraction describe the dynamical aspects and characteristics of the iterative method under consideration over a large number of examples and sets of parameter values; see [25, 26] and the references cited therein. The basin of attraction for complex Newton's method was first considered by Cayley [27]. The aim of this section is to present the basins of the proposed algorithms using graphical tools. To render the basins of attraction, we choose an initial rectangle R which contains the roots of the considered polynomial. Then, for every point z_0 in the region, we run an iterative method and color the point corresponding to z_0 depending on which root the truncated orbit approximately converges to, or on the lack of convergence. The resolution of the image depends upon our discretization of the rectangle R. A complex polynomial of degree n, determined by its zeros (roots) z_1, z_2, …, z_{n−1}, z_n, has exactly n roots, which may be distinct or repeated. The polynomial's degree thus gives the number of basins of attraction, and the roots determine their placement on the complex plane. The localization of the basins can be controlled manually. Usually, the colors of the basins of attraction depend upon the considered iterative method and the total number of iterations required to attain the approximate solution of the polynomial with a given accuracy. Detailed studies of basins of attraction, their theoretical background, and interesting applications are discussed in [28][29][30][31][32][33][34][35][36].
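The rendering procedure just described can be sketched as follows. Newton's method on p(z) = z³ − 1 is used purely for illustration, since the proposed algorithms' explicit formulas are not reproduced in this excerpt; swapping in another method only changes the marked update line. Grid size, rectangle, and tolerances are our choices.

```python
# Basin-of-attraction grid over the rectangle R = [-2, 2] x [-2, 2]:
# each seed z0 is labeled by the index of the root its orbit reaches.
def basin_grid(nx=200, ny=200, max_iter=50, tol=1e-6):
    # The three cube roots of unity, the zeros of p(z) = z^3 - 1.
    roots = [complex(1.0, 0.0),
             complex(-0.5, 3 ** 0.5 / 2),
             complex(-0.5, -(3 ** 0.5) / 2)]
    grid = []
    for j in range(ny):
        row = []
        for i in range(nx):
            # Discretize R into an nx-by-ny mesh of seeds z0.
            z = complex(-2 + 4 * i / (nx - 1), -2 + 4 * j / (ny - 1))
            label = -1  # -1 marks "no convergence", rendered black
            for _ in range(max_iter):
                if abs(z) < 1e-12:
                    break  # derivative nearly zero; orbit is lost
                z = z - (z ** 3 - 1) / (3 * z ** 2)  # update line
                near = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if near:
                    label = near[0]  # color index of the captured root
                    break
            row.append(label)
        grid.append(row)
    return grid

grid = basin_grid()
```

Coloring the grid by label (and shading by the iteration count at capture) reproduces the kind of fractal images discussed below.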

Applications.
In a numerical algorithm that is purely based on an iterative process, we always require a stopping criterion for the whole process, i.e., a test which indicates that the process has converged or is very close to the solution.

Example 6. Basins of attraction using the complex polynomial p_6(z) for the proposed algorithms.
In Examples 1-6, basins of attraction for the complex polynomials of different degrees through our proposed algorithms (for β � 1) have been shown.
In the first experiment, we ran all the proposed algorithms to obtain the simple zeros of the cubic polynomial p_1(z). The resulting basins of attraction are given in Figure 1. For each distinct root of the considered polynomial, there is a unique color on the corresponding basins of attraction, so three unique colors, namely, brown, yellow, and red, can easily be seen in Figure 1. In the next experiment, we consider the polynomial p_2(z), which has three distinct roots, each with multiplicity 2.
The basins of attraction are presented in Figure 2. Three unique colors corresponding to the distinct roots can be seen in Figure 2.
The repeated roots appear with the same colors on the basins of attraction. In Examples 3 and 4, we ran the proposed algorithms for the complex polynomials p_3(z) and p_4(z). The results are given in Figures 3 and 4. Both polynomials have four distinct roots, and the corresponding four colors can easily be seen in Figures 3 and 4, respectively. The latter polynomial, p_4(z), has all roots with multiplicity 2. The basins of attraction for the complex polynomials p_5(z) and p_6(z) through our proposed algorithms are presented in Figures 5 and 6, respectively.
These polynomials have five distinct roots, but the zeros of the latter polynomial p_6(z) are not simple and have multiplicity 2. The five unique colors corresponding to these roots appear on the basins of attraction in Figures 5 and 6. When we look at the generated images, we can read off two important characteristics. One of them is the convergence speed of the considered algorithm, which is depicted by the shade of the color: darker shades indicate fewer iterations of the considered algorithm, and vice versa. The second one is the dynamical behavior of the algorithm: the dynamics are low in areas with little variation of colors, whereas in areas with large variation of colors, the dynamics are high. The black color in the images locates those places where the solution cannot be reached within the given number of iterations. Areas with the same color indicate the same number of iterations needed to approximate the solution and give a look similar to contour lines on a map.

Concluding Remarks
Using the variational iteration technique, three new root-finding algorithms for the solution of nonlinear equations in one variable have been established, having ninth order of convergence. Using some test examples, the performance and efficiency of the proposed algorithms have been analyzed. Tables 1 and 2 show the best performance of the proposed algorithms in terms of accuracy, speed, number of iterations, and computational order of convergence as compared to other well-known iterative methods. We have also presented the basins of attraction of the newly developed algorithms for some complex polynomials, which describe the fractal behavior and dynamical aspects of the proposed algorithms. The variational iteration technique can be applied to derive a broad range of new algorithms for solving nonlinear equations in one variable.

Data Availability
All data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.