On Constructing Two-Point Optimal Fourth-Order Multiple-Root Finders with a Generic Error Corrector and Illustrating Their Dynamics

With an error corrector defined via the principal branch of the mth root of a function-to-function ratio, we propose optimal quartic-order multiple-root finders for nonlinear equations. The resulting optimal order satisfies the Kung-Traub conjecture made in 1974. Numerical experiments performed for various test equations demonstrate convergence behavior agreeing with theory, and the basins of attraction for several examples are presented.

Definition 1. Let {x_0, x_1, x_2, ..., x_n, ...} be a sequence converging to α and let e_n = x_n − α be the nth iterate error. If there exist real numbers p ∈ R and b ∈ R − {0} such that the error equation e_{n+1} = b·e_n^p + O(e_n^{p+1}) holds, then b (or |b|) is called the asymptotic error constant and p is called the order of convergence [5].
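As a numerical illustration of Definition 1 (not from the paper; the test function and constants below are chosen purely for demonstration), the ratios e_{n+1}/e_n^p for Newton's method on the simple root α = √2 of f(x) = x² − 2 should approach the asymptotic error constant b = f''(α)/(2f'(α)) with p = 2:

```python
import math

def newton(f, df, x0, n_iter):
    """Return the Newton iterates x_0, x_1, ..., x_{n_iter}."""
    xs = [x0]
    for _ in range(n_iter):
        x = xs[-1]
        xs.append(x - f(x) / df(x))
    return xs

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
alpha = math.sqrt(2.0)

# Error sequence e_n = |x_n - alpha| for a starting guess x_0 = 1.
errors = [abs(x - alpha) for x in newton(f, df, 1.0, 4)]

# e_{n+1}/e_n^2 should approach b = f''(alpha)/(2 f'(alpha)) = 1/(2*sqrt(2)).
ratios = [errors[n + 1] / errors[n] ** 2 for n in range(len(errors) - 1)]
b = 1.0 / (2.0 * math.sqrt(2.0))
```

The successive ratios settle near b ≈ 0.3536, confirming quadratic convergence in the sense of the error equation above.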
By a close inspection of iterative methods (1), (2), and (4)-(6), we find that an iterative method can be constructed in the form x_{n+1} = x_n − K_f, where K_f is implicitly dependent upon x_n; for example, K_f = m·f(x_n)/f'(x_n) in (1) and K_f = {1 + G(s)}·(f(x_n)/f'(x_n)) in (6). We may regard K_f as an error-correcting function; consequently, it is natural to call K_f the error corrector. Usually K_f takes the form K_f = u·h, with u = f(x_n)/f'(x_n) and h a weighting function widely used among many researchers. A more generic form of the error corrector K_f will be investigated in the course of developing the new quartic-order multiple-root finders.
The main aim of this paper is to design new two-step two-point optimal quartic-order multiple-root finders for roots of multiplicity m ≥ 1. The first step computes y_n from x_n using f(x_n) and f'(x_n), usually with a Newton-like method. The second step updates the first-step result by introducing an error corrector K_f formed from f(x_n)/f'(x_n) and a principal branch of [f(y_n)/f(x_n)]^{1/m}. We will check optimality against the Kung-Traub conjecture [3], which states that a multipoint method [15] without memory can achieve convergence order at most 2^{d−1} for d functional evaluations.
This paper is comprised of five sections as follows. Following this introductory section, Section 2 describes the main results with convergence analysis for the newly proposed two-point optimal fourth-order multiple-root finders. The principal branch of the logarithmic function plays a crucial role in developing the new methods, in view of the relation z^{1/m} = e^{(1/m)·Log z}, where Log denotes the principal branch of the logarithm. The convergence analysis includes the derivation of the error equation for the proposed methods. In Section 3, special cases of error-correcting functions are treated, with tabulated results and labeled case numbers. Two types of error-correcting functions are constructed, based on bivariate polynomials and on rational functions. In the first part of Section 4, with error-correcting functions properly chosen from Section 3, a variety of numerical examples are presented for a wide selection of test functions. A comparison of convergence behavior is made among the proposed methods and the listed existing methods (4)-(6). The second part of Section 4 discusses the dynamics of maps (8) behind their basins of attraction. Dynamical properties of the proposed methods, along with illustrative basins of attraction, are displayed with detailed analyses and comments. Section 5 gives the overall conclusion as well as possible future work.
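To make the role of the principal branch concrete, the following sketch (illustrative, not from the paper) verifies that the complex power operator realizes z^{1/m} = e^{(1/m)·Log z} with the principal logarithm, so that an mth root of a function ratio is single-valued:

```python
import cmath

def principal_mth_root(z, m):
    """Principal branch of the mth root: exp((1/m) * Log z), Arg z in (-pi, pi]."""
    return cmath.exp(cmath.log(z) / m)

z = -8.0 + 0.0j
m = 3
r1 = principal_mth_root(z, m)
r2 = z ** (1.0 / m)  # Python's complex power also uses the principal branch

# The principal cube root of -8 is 2*exp(i*pi/3) = 1 + i*sqrt(3), not -2.
```

Both computations agree, illustrating why the principal branch must be fixed explicitly: a naive real cube root of −8 would return −2, a different branch.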

Main Results
We first assume that a function f : C → C has a multiple root α of integer multiplicity m ≥ 1 and is analytic [16] in a small neighborhood of α. Then, with the concept of the error corrector K_f introduced in (7), we propose the new two-step iterative multiple-root finders below, given an initial guess x_0 sufficiently close to α:

y_n = x_n − γ·f(x_n)/f'(x_n),
x_{n+1} = x_n − K_f(u_n, h_n), for n = 0, 1, 2, ...,  (8)

where u_n = f(x_n)/f'(x_n), h_n is the principal branch of [f(y_n)/f(x_n)]^{1/m}, m ∈ N is a parameter, K_f : C² → C is holomorphic [17] in a neighborhood of (0, 0), and γ ∈ R is to be determined later for optimal quartic-order convergence. Since the principal branch of the mth root is one-to-one, h_n is well defined by (15). These suggested methods require one derivative and two function evaluations per iteration in order to achieve optimal order four. In this section, we establish a main theorem describing the convergence analysis of the proposed methods (8) and find out how to select the parameter γ and the error-correcting function K_f for optimal fourth-order convergence.
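The two-step structure of (8) can be sketched generically, with the error corrector passed in as a callable. Everything here is illustrative: the choice γ = m, the function names, and the corrector K_f(u, h) = m·u (the classical modified-Newton choice, only second order, ignoring h) stand in for the paper's optimal fourth-order correctors.

```python
import cmath

def two_step_finder(f, df, m, x0, corrector, tol=1e-10, max_iter=50):
    """Generic two-step template of the form
         y_n     = x_n - gamma * u_n,        u_n = f(x_n)/f'(x_n),
         x_{n+1} = x_n - K_f(u_n, h_n),      h_n = principal (f(y_n)/f(x_n))^(1/m),
       where the corrector K_f(u, h) is supplied by the caller."""
    gamma = m  # illustrative first-step parameter; the paper tunes gamma
    x = complex(x0)
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        u = fx / df(x)
        y = x - gamma * u
        fy = f(y)
        if fy == 0:  # landed exactly on a root in floating point
            return y
        h = cmath.exp(cmath.log(fy / fx) / m)  # principal mth root of f(y)/f(x)
        x_new = x - corrector(u, h)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Double root at x = 1 of f(x) = (x - 1)^2 (x + 2); modified-Newton corrector.
root = two_step_finder(lambda x: (x - 1.0) ** 2 * (x + 2.0),
                       lambda x: (x - 1.0) * (3.0 * x + 3.0),
                       m=2, x0=2.0,
                       corrector=lambda u, h: 2 * u)  # K_f = m*u, ignores h
```

Plugging in one of the tabulated fourth-order correctors from Section 3 in place of the lambda would recover the quartic behavior; the template itself only fixes the evaluation pattern (one derivative, two function values per iteration).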
Although a variety of forms of error-correcting functions K_f(u, h) are available in view of (26), we will limit ourselves to two classes of error correctors, comprising low-order bivariate polynomials or simple rational functions.
Case 1 (K_f with a bivariate polynomial). In this case, c_31, c_22, c_13, c_04, and c_40 can be regarded as free parameters to be chosen to satisfy the fourth-order optimality conditions. We list typical subcases with selected parameters and K_f(u, h) in Table 1, where SN stands for the corresponding subcase identification number.
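A bivariate-polynomial corrector of this type can be evaluated generically from its coefficient table. The sketch below is only structural: the coefficient dictionary mirrors the c_ij notation, but the values are placeholders, not the optimized choices tabulated in Table 1.

```python
def k_poly(u, h, coeffs):
    """Evaluate K_f(u, h) = sum over (i, j) of c_ij * u**i * h**j."""
    return sum(c * (u ** i) * (h ** j) for (i, j), c in coeffs.items())

# Placeholder coefficients c_ij -- NOT the tabulated optimal values.
coeffs = {(1, 0): 2.0, (1, 1): 1.0}
val = k_poly(0.1, 0.05, coeffs)  # 2*0.1 + 0.1*0.05
```

Storing the corrector as a coefficient table makes it easy to swap among the subcases labeled by SN without changing the iteration code.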
In Table 2, we list typical subcases with interesting choices of parameters and K_f(u, h).
Throughout the experiments, we have assigned 112 significant digits as the minimum precision, via the Mathematica [18] command $MinPrecision = 112, to achieve the accuracy ensuring convergence of the proposed methods. It is necessary to compute e_n = x_n − α with high accuracy for the desired numerical results. In case α is not known exactly, it is replaced by a more accurate value carrying a larger number of significant digits than the assigned $MinPrecision = 112.

Definition 4 (computational convergence order). Assume that the theoretical asymptotic error constant η = lim_{n→∞} |e_n|/|e_{n−1}|^p and the convergence order p ≥ 1 are known. Define p_n = log|e_n/η| / log|e_{n−1}| as the computational convergence order. Note that lim_{n→∞} p_n = p.
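Definition 4 can be applied directly to a computed error sequence. The sketch below is illustrative (standard double precision rather than 112-digit arithmetic, and Newton's method on a simple root, for which η and p = 2 are known in closed form):

```python
import math

f = lambda x: x * x - 3.0
df = lambda x: 2.0 * x
alpha = math.sqrt(3.0)
eta = 1.0 / (2.0 * alpha)  # f''(alpha)/(2 f'(alpha)) for Newton's method, p = 2

# Build the error sequence e_n = |x_n - alpha| starting from x_0 = 1.
x, errors = 1.0, []
for _ in range(4):
    errors.append(abs(x - alpha))
    x = x - f(x) / df(x)
errors.append(abs(x - alpha))

# Computational convergence order p_n = log|e_n / eta| / log|e_{n-1}|.
p_n = [math.log(errors[n] / eta) / math.log(errors[n - 1])
       for n in range(1, len(errors)) if errors[n] > 0]
```

The later entries of p_n approach the theoretical order p = 2, exactly as lim p_n = p predicts.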
If α and x_n both have the same accuracy of $MinPrecision = 112 digits, then e_n = x_n − α would be nearly zero as n becomes large, and thus computing |e_{n+1}/e_n^4| would unfavorably cause numeric overflow. Computed values of x_n are accurate up to 112 significant digits. To observe reliable convergence behavior, we desire α with accuracy 16 digits beyond $MinPrecision, that is, with 128 significant digits. To supply such an α, the following Mathematica command is used:

sol = FindRoot[f(x), {x, x_0}, PrecisionGoal → 16, ...

Although the numbers of significant digits of x_n and α are 112 and 128, respectively, we list the two values at most up to 15 significant digits due to the limited paper space. We set the error bound ε to (1/2) × 10^{−80}, requiring |x_n − α| < ε.
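An analogous high-precision reference root can be produced with Python's standard decimal module as a stand-in for the quoted Mathematica FindRoot call (the precision values and the target root √2 are illustrative): Newton iteration at 140-digit working precision yields a reference value accurate well beyond the 128 digits required above.

```python
from decimal import Decimal, getcontext

getcontext().prec = 140  # a few guard digits beyond the 128 wanted for alpha

def high_precision_sqrt2():
    """Newton (Babylonian) iteration for f(x) = x^2 - 2 in 140-digit decimals."""
    x = Decimal(1)
    two = Decimal(2)
    for _ in range(12):  # quadratic convergence: digits roughly double per step
        x = (x + two / x) / two
    return x

alpha = high_precision_sqrt2()
residual = alpha * alpha - Decimal(2)  # should vanish to working precision
```

The residual is on the order of the 140-digit rounding unit, so truncating this α to 128 digits gives a reference root whose error is far below the working precision of the iterates.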
Iterative methods (26) with all subcases of both Cases 1 and 2 were identified by Y1A, Y1B, Y1C, Y1D, Y1E, Y1F, and Y1G and by Y2A, Y2B, Y2C, Y2D, Y2E, Y2F, Y2G, and Y2H, respectively, being Y-prefixed. Among them, three typical methods have been successfully applied to the three test functions shown below. Methods Y1D, Y2B, and Y2E in Table 3 clearly confirm quartic-order convergence. Table 3 lists the iteration index n, approximate zero x_n, residual error |f(x_n)|, error |e_n|, and computational asymptotic error constant η_n, as well as the theoretical asymptotic error constant η and the computational convergence order p_n. The values of η_n agree with η up to 10 significant digits. Undoubtedly, the computed asymptotic order of convergence approaches 4 closely. The computational asymptotic error constant reveals good agreement with the theory developed in Section 3.
Additional functions, listed in (32), are tested to further confirm the convergence of methods (8). In Table 8, we directly compare the numerical errors |x_n − α| of the proposed methods Y1B, Y1C, Y1E, and Y1G in Case 1 with those of existing optimal fourth-order multiple-root finders. The abbreviations Kan, Zhou, and Sol denote the existing optimal fourth-order multiple-root finders obtained by Kanwar et al. (5), Zhou et al. (6) with weight function G(u) = 1 + u + 2u², and Soleymani and Babajee (4), respectively. The least errors within the prescribed error bound are highlighted in boldface. Method Y1E shows the best convergence for f_1, f_5, and f_7; method Y1G for f_2 and f_4; and method Sol for f_3 and f_6.
Likewise, in Table 9, we compare the numerical errors |x_n − α| of the proposed methods Y2A, Y2D, Y2F, and Y2G in Case 2 with those of the existing optimal fourth-order multiple-root finders. Method Y2G shows the best convergence for f_1, f_3, f_5, and f_7, and method Y2D for f_4 and f_6, while method Y2F shows the best convergence for f_2.
Recall that lim_{n→∞} (x_{n+1} − α)/(x_n − α)^p = R^{(p)}(α)/p! for an iterative method x_{n+1} = R(x_n) converging to α with order p; note that the iteration function R depends on the given nonlinear equation f(x) = 0. Consequently, for a given order of convergence p, the local convergence behavior depends on the function f, the initial value x_0, and the multiple zero α itself. Hence, for given test functions, zeros, and initial values, no method is expected to always show better performance than the others. The efficiency index [5] is defined by EI = p^{1/d}, where p is the order of convergence and d is the number of distinct functional or derivative evaluations per iteration. The proposed methods (8), as well as all the other listed methods, have the same EI = 4^{1/3} ≈ 1.587, in agreement with the Kung-Traub optimality conjecture.
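The efficiency-index comparison is a one-line computation. The sketch below (illustrative) reproduces EI = p^{1/d} for the optimal fourth-order methods (p = 4, d = 3) and contrasts it with modified Newton (p = 2, d = 2):

```python
def efficiency_index(p, d):
    """EI = p**(1/d): order p gained per d functional/derivative evaluations."""
    return p ** (1.0 / d)

ei_fourth = efficiency_index(4, 3)  # proposed methods (8) and the listed rivals
ei_newton = efficiency_index(2, 2)  # modified Newton for multiple roots
```

Since 4^{1/3} ≈ 1.587 exceeds 2^{1/2} ≈ 1.414, the optimal fourth-order methods extract more order per evaluation than modified Newton.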
At this point, we are ready to discuss the dynamical behavior of the iterative maps (8). It is important to select initial values sufficiently close to the zero α to guarantee convergence of the iterative methods. It is not, however, an easy task to determine how close an initial value must be to α. A convenient way of finding stable initial values is to inspect the basins of attraction directly; in such an inspection, a larger area of convergence implies a better method.
Clearly, measuring the area of convergence requires a quantitative analysis. To this end, we provide Tables 4-7, featuring statistical data on the average number of iterations per point and the number of divergent points, including CPU time. In the following four examples, we select a 6 × 6 square region centered at the origin and containing all the zeros of the given test polynomials. We then take a 600 × 600 uniform grid in the square as the set of initial points for the iterative methods and display the resulting basins of attraction. Each grid point is colored according to the number of iterations until convergence and the root to which it converged. In this way we can see whether the method converged within the maximum number of iterations allowed and whether it converged to the root closest to the initial grid point. To continue our discussion, let us first identify the four members of the iterative map (8) associated with Cases 1D, 1G, 2D, and 2G by Y1D, Y1G, Y2D, and Y2G, respectively.
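The grid scan described above can be sketched as follows. This is a reduced illustration, not the paper's experiment: a 40 × 40 grid instead of 600 × 600, modified Newton in place of the Y-methods, and the polynomial (x² − 1)⁷ with m = 7 from Section 4; the statistics mirror the TCON/AVG/TDIV abbreviations.

```python
def basin_stats(step, grid_n=40, box=3.0, max_iter=100, tol=1e-8):
    """Scan a grid_n x grid_n grid of complex starting points over [-box, box]^2.
       Returns TCON (convergent points), AVG (mean iterations over convergent
       points), and TDIV (divergent points)."""
    roots = [1.0 + 0.0j, -1.0 + 0.0j]
    tcon, tdiv, iters_total = 0, 0, 0
    for i in range(grid_n):
        for j in range(grid_n):
            x = complex(-box + 2 * box * i / (grid_n - 1),
                        -box + 2 * box * j / (grid_n - 1))
            for k in range(max_iter):
                if any(abs(x - r) < tol for r in roots):
                    tcon += 1
                    iters_total += k
                    break
                x = step(x)
            else:  # never came within tol of a root: count as divergent
                tdiv += 1
    return tcon, iters_total / max(tcon, 1), tdiv

# Modified Newton x - m*f/f' for f(x) = (x^2 - 1)^7 with m = 7,
# which simplifies algebraically to x - (x^2 - 1)/(2x).
def modified_newton_step(x):
    return x - (x * x - 1.0) / (2.0 * x)

tcon, avg, tdiv = basin_stats(modified_newton_step)
```

For this particular map every off-axis starting point converges (the basins of ±1 are the two half-planes), so TDIV is zero on this grid; for the methods compared in Tables 4-7 the same scan yields the nonzero TDIV counts reported there.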
To illustrate the complex dynamics of (8) with the desired basins of attraction, we apply the methods Y1D, Y1G, Y2D, and Y2G, together with Kan, Sol, and Zhou, to various polynomials having multiple roots with multiplicity m = 2, 3, 4, 7. Statistical data for the basins of attraction are tabulated in Tables 4-7. In these tables, the abbreviations CPU, TCON, AVG, and TDIV denote the CPU time for convergence, the total number of convergent points, the average number of iterations for convergence, and the number of divergent points, respectively.
In the first example, we have taken the polynomial p_1(x), whose roots α = −0.939221 and 0.82811 are both real with multiplicity m = 2. Based on Table 4 and Figure 1, we find that Y2G is best in view of its lower AVG and TDIV, followed by Kan and Y2D. As can be seen in Figure 1, Sol and Y1D show a considerable number of black points; this divergent behavior was expected from the last column of Table 4. The best CPU result is by Zhou and the worst by Kan.
Our next example has a triple root: the polynomial p_2(x) has three identical roots at the origin. The results are listed in Table 5 and Figure 2. The method Y2D performs best in view of its lower AVG and TDIV, and the proposed methods Y1D, Y1G, Y2D, and Y2G appear to perform better than Kan, Sol, and Zhou. As can be seen in Figure 2, Sol shows a considerable number of black points, while Y2G shows only a few. The best CPU result is by Zhou and the worst by Y2D.
As a third example, we take the polynomial p_3(x), whose roots are all of multiplicity four. The results are presented in Table 6 and Figure 3. The method Y2G is best in view of its lower AVG and TDIV. As can be seen in Figure 3, Sol shows a considerable number of black points, while Zhou shows several. The best CPU result is by Zhou and the worst by Kan.

In the last example, we use the polynomial having the two roots of unity ±1, p_4(x) = (x² − 1)⁷. (36) The results are given in Table 7 and Figure 4. The method Y2G is best in view of its lower AVG and TDIV. As can be seen in Figure 4, Sol shows many black points, while Y2G shows only a few. The best CPU result is by Y1D and the worst by Kan.

Conclusion
We have shown that optimal quartic-order multiple-root finders can be constructed with an error-correcting function generated by f(x_n)/f'(x_n) and a principal branch of [f(y_n)/f(x_n)]^{1/m}, along with the derivation of their relevant error equations.

Table 2 :
Typical subcases of Case 2 with selected parameters.