The Polynomial Pivots as Initial Values for a New Root-Finding Iterative Method

A new iterative method for polynomial root-finding, based on the development of two novel recursive functions, is proposed. In addition, the concept of polynomial pivots associated with these functions is introduced. The pivots have the property of lying close to some of the roots under certain conditions; this closeness leads us to propose them as efficient starting points for the proposed iterative sequences. Conditions for local convergence are studied, demonstrating that the new recursive sequences converge with linear velocity. Furthermore, an a priori checkable test for global convergence inside pivot-centered balls is proposed. In order to accelerate the convergence from linear to quadratic velocity, new recursive functions, together with their associated sequences, are constructed. Both the original recursive functions (linear convergence) and the corrected ones (quadratic convergence) are validated with two nontrivial numerical examples, in which the efficiency of the pivots as starting points, the quadratic convergence of the proposed functions, and the validity of the theoretical results are visualized.


Introduction
Perhaps the oldest problem in numerical analysis is the search for polynomial roots. Since Abel and Galois proved the nonexistence of radical-based solutions for general polynomials of order higher than four, the only way to obtain the complete set of roots is numerical computation, and in particular iterative methods. Any iterative method requires a recursive function together with an initial guess. Most methods focus on the local efficiency of the recursive schemes, their convergence conditions, and their velocity at the roots, whereas the study of where (and why) to start the iterative sequences receives less attention. Hence, the challenge is to find reasonably efficient initial guesses, that is, starting points for the iterative sequences lying close to some of the roots.
The problem of searching for zeros of functions has been extensively discussed in several books on numerical analysis; see, for instance, [1][2][3][4][5]. In particular, a survey of methods specially developed for polynomials has recently been published by McNamee [6], who has compiled an extensive bibliography [7][8][9][10] that probably includes most of the published root-finding methods. In addition, McNamee [6] has proposed an indicator to measure the efficiency of an iterative method and has applied it to the most common approaches. Other relevant reviews of algorithms for finding zeros have been presented by Pan [11][12][13] and by Pan and Zheng [14].
As mentioned, the location of the initial approximations is of special relevance in iterative schemes, so that success or failure may largely depend on it. The book by Kyurkchiev [15] lists a selection of initial approximations specially developed for simultaneous methods. The root bounds obtained from the polynomial coefficients have traditionally been a rough but useful tool for locating zeros; an interesting survey of these bounds is given in [6]. The quotient-difference method [16] also provides an initial approximation, although its construction requires all polynomial coefficients to be nonzero. Hubbard et al. [17], through the study of the convergence spectrum, have proposed a finite and relatively small set of starting points ensuring that, from at least one of them, the convergence of Newton's method is guaranteed. Bini et al. [18] developed improved initial conditions for the well-known QR method. Petković et al.
[19][20][21] have proposed parametric families of simultaneous root-finding methods based on the Hansen-Patrick formula [22]. Monsi et al. [23,24] have introduced the point symmetric single step procedure and its variants for finding zeros simultaneously. In addition, the initial conditions necessary to guarantee convergence using Smale's point estimation theory [25] have been reviewed in [26][27][28]. Zhu [29][30][31] has analyzed the initial conditions of convergence for Durand-Kerner's method and for the Newton-like simultaneous methods based on the parallel circular iteration. Lázaro et al. [32] proposed efficient recursive functions to reach eigenvalues in vibrating systems independently of the chosen starting point, showing global convergence in the whole complex plane. Kornerup and Muller [33] and Kjurkchiev [34] have also discussed the influence of starting points for certain Newton-Raphson iterations and for Euler-Chebyshev's method, respectively. Saidanlu et al. [35] studied the conditions for determining initial approximations of exact roots for a certain iterative matrix zero-finding method. The recently published book by Petković et al. [36] explores the development of powerful multipoint algorithms to solve nonlinear equations involved in research problems.
In this paper, we consider the n-th order polynomial of (1), whose coefficients satisfy a_k ∈ C for 0 ≤ k ≤ n − 1 and a_0 ≠ 0. A pair of novel recursive functions for polynomial root-finding is proposed. In addition, associated with each of these functions, a characteristic complex number can be calculated from the polynomial coefficients a_{n−1} and a_{n−2}. Both complex numbers, named pivots, lie close to some root of the polynomial when they become large (in absolute value) with respect to the rest of the polynomial coefficients. In fact, it is demonstrated that the pivots are attractive fixed points at the complex infinity. Conditions for local and global convergence of the new iterative scheme are provided; the convergence is demonstrated within a family of closed balls centered at the pivots, under certain a priori conditions that can be verified. Based on the theorem of global convergence, a test to identify the polynomial class for which convergence is ensured is proposed. It is also proved that the velocity of convergence is linear; in order to accelerate the iterative process up to quadratic order, new corrected recursive functions are proposed based on Steffensen's acceleration approach. The corrected recursive functions present the same properties as the originals with respect to the pivots, but with quadratic convergence.
In order to validate the theoretical results, two numerical examples are analyzed. In the first, the results of the proposed recursive functions are studied for single roots; in the second, for multiple roots. In both, the influence of the pivots is discussed; in addition, the test of global convergence is applied, directly relating the proposed pivots to the success of the iteration scheme.

Definitions and Previous Results.
Based on the polynomial of (1), the following definition introduces two complex-valued functions of a complex variable that are of special interest in this paper.
Definition 1 (recursive functions). Associated with the polynomial p(z) of (1), one introduces two functions g, h : C → C, defined in (2) and (3), that will be named recursive functions (RF).
As mentioned above, √• represents the square root of a complex number whose branch cut is R⁻ ∪ {0}. Since the origin is the only branch point of the square root function so defined, the region of analyticity A is determined by this branch cut. The following proposition relates the polynomial roots to the fixed points of the defined functions.

Proposition 2. If a_0 ≠ 0, then a complex number s ∈ C is a root of the polynomial p(z) if and only if it is a fixed point of either the function g(z) or h(z).
Proof. Starting from the general form of the polynomial given by (1), we obtain an equivalent expression, (7), in which the function defined in (4) appears. The right-hand side of (7) can then be rearranged as the product of three terms, among which we find the RF introduced in (2) and (3). Since a_0 ≠ 0, the value z = 0 cannot be a root. Hence, a complex number s satisfies p(s) = 0 if and only if it is a fixed point of one of the RF; that is, either s = g(s) or s = h(s).
(ii) If s = h(s), then the polynomial derivatives evaluated at z = s lead to the stated identities, and hence the relationship between root multiplicity and the derivatives h^(j)(s) is obtained following the same arguments as in (i).
Since each root of the polynomial is a fixed point of one of the functions g(z) or h(z), the question arises whether, starting at a certain point and iterating these functions, convergence to one of the roots holds. The first step in our research is to introduce the recursive sequences associated with the equally named functions, together with the concept of point of attraction.

Definition 4 (recursive sequence). Let x_0, y_0 ∈ C. One defines the recursive sequences {x_k} and {y_k}, associated with the functions g(z) and h(z), as the complex numbers calculated as x_k = g(x_{k−1}), y_k = h(y_{k−1}), k ≥ 1.

Definition 5 (point of attraction). A complex number s is said to be a point of attraction of g(z) if there exists an initial point x_0 ∈ C such that lim_{k→∞} x_k = s. Similarly, a point of attraction of h(z) is defined as the limit of the sequence {y_k}. Obviously, any point of attraction is a root of p(z), but the reciprocal affirmation is not true.
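The fixed-point scheme of Definition 4 can be sketched in a few lines. The contraction map `g` used below is a hypothetical stand-in (the paper's recursive functions are defined by (2) and (3), which are not reproduced in this excerpt), and all identifiers are illustrative.

```python
# Generic fixed-point iteration x_k = g(x_{k-1}) over the complex numbers.
# The map `g` is a hypothetical stand-in for the paper's recursive functions;
# any contractive complex map can be plugged in.

def fixed_point_iterate(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_k = g(x_{k-1}) until |x_k - x_{k-1}| < tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

if __name__ == "__main__":
    # Toy contraction with fixed point 1+2j; converges linearly with rate 0.5.
    root = 1 + 2j
    g = lambda z: 0.5 * (z - root) + root
    print(fixed_point_iterate(g, 0j))
```

The stopping rule on the step size is the usual practical surrogate for the error bound given by the Banach fixed point theorem invoked later in the text.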

Local Convergence. This subsection deals with the necessary conditions for the local convergence of the recursive sequences. The main result is presented in Theorem 8. Beforehand, we introduce the lemma of McLeod [37], in order to prove the Lipschitz continuity of complex-valued functions of a complex variable; this result will be used repeatedly throughout the present work. Let f : F → C be a complex function, analytic in a convex domain
Therefore, the hypotheses of Banach's fixed point theorem are verified [38], and the convergence of the sequence {x_k} to the (unique) fixed point s in the corresponding closed ball is guaranteed; furthermore, the error rate in each iteration can be bounded. Theorem 8 does not cover the case of multiple roots, since all of them verify g'(s) = 1, although this fact does not imply that these roots cannot be points of attraction. However, in such cases the sequences will converge more slowly [2].
The necessary condition imposed by (17) in the previous theorem allows predicting certain characteristics of the roots that present local convergence. Assuming that the absolute value of a root is greater than a certain 1/ε > 0, that is, |s| ≥ 1/ε, and denoting by M the maximum absolute value of the coefficients a_0, ..., a_{n−3}, after some operations we obtain (21). In view of this reasoning, it seems that roots with ε ≪ 1, that is, those with large absolute value, will present local convergence for some of the proposed recursive sequences. However, although intuitive, this result is not valid in general, since the inequalities given by (21) do not guarantee (17). Let us improve the necessary conditions to impose on the polynomial coefficients in order to check convergence. To that end, we define the concept of pivots of a polynomial.

Global Convergence.
For any iterative numerical scheme, it is always desirable to provide a priori information about the convergence. If a certain recursive sequence is convergent for any starting point inside a certain complex set, such a sequence is said to be globally convergent. This section aims to study the global convergence of the recursive sequences within closed balls centered at two characteristic points of the polynomial, u and v, named the pivots of the polynomial.

Definition 9 (pivots). One defines the pivots u and v of the polynomial p(z) as the complex numbers given by (22).

The pivots of a polynomial have the property of lying close to some root when they are relatively large (in absolute value) with respect to the rest of the polynomial coefficients. This may be an important advantage because the pivots can be used as effective initial guesses in a recursive scheme. This behavior is explained by the results of this section. The first result (Proposition 10) states that the pivot u is a point of attraction of g(z) at the (complex) infinity; the same argument holds for the pivot v and h(z).

Proposition 10. Let u, v ∈ A be the pivots of the polynomial p(z); then u and v are points of attraction of g(z) and h(z), respectively, at infinity.

Proof. (i) From the definition given in (4), the corresponding limits are calculated; then, from the expressions of g(z) and g'(z) given by (2), (18) and by (22), (24), the result follows. The proof for the function h(z) is analogous.
This result justifies the use of the family of closed balls centered at u and v as suitable sets for the global convergence of {x_k} and {y_k}, respectively. From here up to the next section, only the case of the recursive function g(z) will be rigorously analyzed; the proofs of the lemmas and theorems can easily be extrapolated to the case of h(z).
where the expression of the general term of the sum has been used. (ii) Using the previous result. (iii) Following the same reasoning as that of (27), where now the closed form of the corresponding sum has been used.

(iv) From the bounds calculated in (i) and (ii); here an auxiliary constant, depending on |a_{n−1}| and on |u|², has been introduced in order to simplify the notation in subsequent developments.

(v) Let us define the auxiliary function as the one verifying the following identity; after some direct operations and using the definitions above, the stated bound follows. The same conclusions of this lemma can easily be extrapolated to the pivot v, simply changing u to v in the above expressions. In this case, d = |v| − r > 0 is defined as the distance between the origin and the ball B(v, r); that is, d = min{|z| : z ∈ B(v, r)}.

Lemma 12. Under the same conditions of Lemma 11, let
Therefore, the Banach contraction principle can be applied, ensuring that there exists a unique fixed point s ∈ B(u, r) of the function g(z) such that lim_{k→∞} x_k = s for any x_0 ∈ B(u, r).

Lemma 12 and Theorem 13 have been presented to describe the conditions under which global convergence towards fixed points of g(z) in balls of the form B(u, r) can be guaranteed. We can also state versions of these results for the function h(z), for the pivot v, and for the family of balls B(v, r). In such a case, the convergence indexes have exactly the same mathematical form, although r now denotes the radius of a ball centered at v and d = |v| − r is the distance between the origin and B(v, r). In order to be consistent in the presentation of the results, we also write the associated lemma and theorem about the convergence of the sequences {y_k} within balls B(v, r).

Lemma 14. Under the same conditions of Lemma 11, let one consider the positive real numbers that depend on the radius r of the ball B(v, r) centered at the pivot v and on d = |v| − r.

The necessary conditions of this theorem are given in terms of three numbers, (30), (34), called convergence indexes and introduced in Lemmas 11, 12, and 14. These indexes can be used to construct an a priori test for global convergence, since their calculation requires no previous root-computing. Choosing a radius r, the indexes ensure the convergence of the RF provided that the first two are smaller than 1 and the third is at most 1. If these inequalities are verified for the ball B(u, r) as well as for B(v, r), then the two sequences {x_k} and {y_k} are convergent and the method reaches two different roots. Conversely, if no radius r verifies these inequalities for some ball B(u, r) or B(v, r), global convergence cannot be ensured a priori. The latter is obviously not synonymous with non-convergence, because local convergence could still arise. Let us illustrate the application of this test with an example, considering the 16th-order polynomial of (55). The test is equivalent to finding at least one radius r > 0 such that the three indexes satisfy the above inequalities for both sequences {x_k} and {y_k}. The indexes associated with the pivot v (global convergence of {y_k}) are represented as functions of the radius r in Figure 1. It can be observed that the three indexes are smaller than 1 in the range 0.208 < r < 1.296, consequently ensuring global convergence for any ball B(v, r) centered at v with radius in that range. Furthermore, an a priori estimation of the (linear) velocity of convergence can be calculated for r_min = 0.208, resulting in 0.156. Otherwise, testing balls of the form B(u, r) centered at the other pivot u (global convergence of {x_k}), we observe that one of the indexes is greater than 1 in the whole range 0 < r < |u|. Note that, although the recursive sequence {x_k} does not pass the test, it locally converges to a root. The polynomial of (55) is thus an example that behaves well with both recursive sequences, but only the sequence {y_k} passes the test. In general, the proposed test is somewhat conservative; in fact, several numerical experiments have shown that convergence of the recursive sequences can succeed (starting at the pivots) for polynomials that do not pass the test.
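The spirit of such an a priori test can be imitated numerically: sample a candidate ball, check that the map sends it into itself, and estimate a Lipschitz constant below 1, which are the two hypotheses of the Banach contraction principle used above. The sketch below is a crude Monte Carlo check with a generic map, not the paper's closed-form convergence indexes; every name in it is illustrative.

```python
import cmath
import random

def contraction_test(g, center, radius, n_samples=2000, seed=0):
    """Empirically check the Banach hypotheses on the closed ball B(center, radius):
    (a) g maps the ball into itself, and (b) a sampled Lipschitz bound < 1.
    Returns (maps_into_ball, lipschitz_estimate)."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n_samples):
        rho = radius * rng.random() ** 0.5          # area-uniform radial coordinate
        theta = 2 * cmath.pi * rng.random()
        pts.append(center + rho * cmath.exp(1j * theta))
    maps_in = all(abs(g(z) - center) <= radius for z in pts)
    lip = max(abs(g(w) - g(z)) / abs(w - z)
              for z, w in zip(pts[::2], pts[1::2]) if abs(w - z) > 1e-12)
    return maps_in, lip

if __name__ == "__main__":
    # Toy contraction on B(0, 1): g(z) = 0.5*z has Lipschitz constant 0.5.
    print(contraction_test(lambda z: 0.5 * z, 0j, 1.0))
```

The paper's convergence indexes play the role of rigorous, closed-form versions of these two sampled quantities, computed directly from the polynomial coefficients.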
To conclude this section, we make some remarks on the convergence to multiple roots. As proved in Proposition 3, a root s = g(s) with multiplicity m ≥ 2 verifies g'(s) = 1. Therefore, s ∈ A cannot be contained in any closed ball B(u, r) ⊂ A on which |g'(z)| is bounded strictly below 1. In this case, the Banach contraction principle does not provide information on the convergence, although if convergence holds the scheme will present sublinear velocity [2], as shown in the numerical examples.

Definitions and Previous Results
Since the velocity of the recursive scheme given by the proposed functions g(z) and h(z) is linear for single roots, the objective is to propose two new functions whose associated recursive sequences present quadratic convergence. To that end, we construct two functions, ĝ(z) and ĥ(z), based on Steffensen's acceleration method [39] for iterative fixed-point schemes (more details can be found in [2]). We name these functions corrected recursive functions (CRF); consequently, associated with them there exist two recursive fixed-point schemes, named corrected recursive sequences and presented in the following definition.
As will be demonstrated, the introduction of the functions ĝ(z) and ĥ(z) considerably improves the convergence velocity. In fact, both functions share the properties of Steffensen's method: (a) any fixed point of g(z) and h(z) is a point of attraction of ĝ(z) and ĥ(z), even for multiple roots, and (b) the convergence to the fixed points is quadratic for single roots.

Local Convergence. The present subsection deals with the local convergence of the sequences {x̂_k} and {ŷ_k}. Let us see that the introduction of ĝ(z) and ĥ(z) effectively accelerates the convergence. Indeed, writing F(z) = z − g(z) and G(z) = z − h(z), it is easy to prove that the corrected recursive sequences are Newton's schemes for F and G. Consequently, the convergence can be summarized in the following theorem; the details of the proof can be found in reference [2].
Theorem 17. Let s ∈ A be a root of the equation p(z) = 0 with multiplicity m ≥ 1.
The previous theorem states that any root of the polynomial p(z) is a point of attraction of some corrected recursive function: if the initial guess is close enough to a root, the sequence is assured to converge. However, since the corrected recursive sequences are, after all, two Newton schemes, they are sensitive to the chosen initial guesses. Fortunately, the behavior of the CRF with respect to the pivots is the same as that of the RF. Thus, when the pivots u and v (see Definition 9) are relatively large in absolute value, at least one of them lies close to some root, which in turn is also large (in absolute value) with respect to the other roots.

Global Convergence.
As shown in Section 2.3, the necessary conditions for the global convergence of the RF g(z) and h(z) are related to the location of u, v defined in (22). The main results were presented in Proposition 10 and Theorem 13. As with g(z) and h(z), u and v are also points of attraction of the corrected recursive functions at infinity. This property is demonstrated in the following proposition.

Proposition 18. If u, v ∈ A are the complex numbers defined by (22), then they are points of attraction of the corrected recursive functions at infinity.

Proof. The result follows from the limits obtained in Proposition 10. Differentiating (4) twice shows that the corresponding limit vanishes as well; hence the derivative of ĝ(z) tends to zero at infinity, and in the same way the derivative of ĥ(z) is verified to tend to zero. Calculating the derivatives of ĝ(z) and ĥ(z) and taking limits completes the proof.

The previous proposition suggests that, under conditions similar to those imposed for g(z) and h(z), global convergence in a closed ball centered at u or v can also be assured for ĝ(z) and ĥ(z). Therefore, choosing x̂_0 = u and ŷ_0 = v, the sequences {x̂_k} and/or {ŷ_k} may converge to a root provided that u, v are large in absolute value. This choice considerably improves the efficiency of the recursive scheme because two good approximations are available as starting points. In fact, as will be shown in the numerical examples, the approximation given by ĝ(u) or (and) ĥ(v) becomes a very accurate estimation of one (or two) root(s) of the polynomial.
We believe that the polynomial pivots u, v and their one-step initial approximations, ĝ(u) and ĥ(v), represent the main advantage of the present method, since they constitute by themselves good estimations of one or even two roots for certain classes of polynomials. Hence, they could be used as initial guesses not only for the proposed recursive sequences but also within other efficient root-finding algorithms [40]. Another contribution of the present paper is the quantitative identification of this class of polynomials. For that purpose, Theorem 13 on global convergence inside closed balls centered at the pivots, B(u, r) and B(v, r), is used. The theory shows that the polynomials that pass the test present global convergence in the previously defined closed balls. Numerical experiments show that the pivots of these classes of polynomials usually lie close to some root. Furthermore, ĝ(u) and/or ĥ(v) represent in such cases a much more accurate approximation of this root, which in general coincides with the largest one. This claim was qualitatively advanced in Theorem 8 and in the subsequent comments; see (21). The following section, focused on the convergence region, also validates this claim, showing a close relationship between a root's size (in the sense of its absolute value) and the quality of the proposed method.

Remarks on the Convergence Region.
As proved in the theorems presented in this section, local convergence is always guaranteed for the corrected recursive functions. Therefore, choosing a starting point close enough to a root, the recursive sequence will converge. However, the following question arises: which complex number should be chosen as the starting point? As is well known, this is a key issue, but one very difficult to answer for any non-globally convergent root finder. It was proved in the previous subsections that the pivots of the polynomial are a good choice under certain conditions. But how can the convergence region, that is, the set of valid initial points for which convergence holds, be related to the functions ĝ(z) and ĥ(z)? As expected from the previous argumentation, roots with larger absolute value also present a wider convergence radius.
To this end, it will first be demonstrated that if s is a fixed point of ĝ(z) or ĥ(z), then the higher derivatives vanish at infinity, ĝ^(j)(s) → 0 as |s| → ∞, j = 2, 3, . . ., which is (51). From the definitions given in (4) and (2), the approximation order (52) follows. Now, assuming that s is a single root (for multiple roots, the analysis is analogous and is omitted), the second and third derivatives of ĝ(z) at z = s can be calculated directly from its definition (44). Note that, since s has multiplicity m = 1, Proposition 3 gives g'(s) ≠ 1. Introducing the approximation order of (52) into (53), the stated estimates follow; these expressions can easily be generalized by induction, giving ĝ^(j)(s) = O(s^(−2−j)), and consequently (51) holds.
Secondly, let us consider the closed ball B(s, R) centered at the root s with radius R > 0. From the Earle-Hamilton theorem [41] on fixed points of analytic self-mappings, the sequence {x̂_k} starting at any point x̂_0 ∈ B(s, R) converges towards s if there exists a positive real number 0 < c(R) < 1 (in general depending on R) such that |ĝ(z) − s| < c(R)R. Since s is a single root, ĝ(s) = s and ĝ'(s) = 0. The expansion of the function ĝ(z) around z = s, approximated to second order, allows the distance between ĝ(z) and the root to be bounded as in (61). Intuitively, it can be generalized that the radius of convergence R_j associated with the j-th approximation of ĝ(z) also increases with the root, so that R_j → ∞ as s → ∞ (62). Therefore, the larger the root in absolute value, the larger the convergence region, in the sense that there exist many more starting points from which convergence holds. In the numerical examples, this behavior will be validated, showing that the proposed method converges towards the largest roots. In practice, polynomials whose pivots are relatively large in absolute value with respect to the rest of the polynomial coefficients will present good convergence behavior.

Numerical Examples
The local convergence of the recursive functions is directly related to the first derivatives g'(s) and h'(s) evaluated at the roots (see (18)), shown in absolute value in Table 1. In view of the results, it can be ensured that the convergence of the sequence x_k = g(x_{k−1}) only holds for the root −6i, since it is the only one for which |g'(−6i)| < 1. Let us take the initial values of the sequences to be equal to the pivots calculated from (22), that is, x_0 = u and y_0 = v. The derivatives take their lowest values at these roots, and therefore the first estimation of the convergence radius R_2 is higher than at the other roots; see (58). This fact explains the satisfactory behavior of the method in this example and with these roots. Some remarks on the influence of the pivots u and v can be made. The relative error between the root −6i and u = −0.76 − 5.72i is approximately 100|u + 6i|/|−6i| ≈ 13.6%. This confirms the theoretical result obtained in Proposition 10 and Theorem 13: roots relatively large (in absolute value) with respect to the rest are close to the values u and v defined by (22). If, for example, instead of the root −6i we take −12i, the pivot is updated to u = −0.42 − 11.89i and the relative error becomes 100|u + 12i|/|−12i| ≈ 3.64%. It is important to note that the approximations given by u and v are closed forms that depend on the polynomial coefficients, in particular on a_{n−1} and a_{n−2}. These pivots have been proved to be an excellent initial estimation under certain conditions. Moreover, if these conditions hold, improved closed forms can be constructed by simply evaluating ĝ(u) or ĥ(v), which depend on the complete set of coefficients. Thus, it can easily be verified that for the current example ĝ(u) = 0.0299 − 6.0178i, and the relative distance to the root is only 100|ĝ(u) + 6i|/|−6i| ≈ 0.58%, a very accurate one-step estimation.
In order to compare the proposed method with another one of quadratic convergence, the iteration error for Newton's method has also been represented in Figure 2. If {z_k} is Newton's sequence, it is well known that z_k = z_{k−1} − p(z_{k−1})/p'(z_{k−1}). Since the pivots of the polynomial, u and v, are relatively close to the roots −6i and −3.5i, one would expect Newton's sequence to converge to these roots when z_0 = u, v are taken as starting points. However, both sequences converge to −2i and −i, and they do so quadratically, as shown in Figure 2. Note that several iterations are needed before Newton's scheme finds the convergence region of the roots, after which the quadratic convergence can be visualized. This shortcoming does not occur in this example with the proposed method, which provides not only recursive functions but also efficient initial guesses given by the pivots.
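For reference, the Newton iteration used in this comparison can be sketched with a Horner-scheme evaluation of p and p' in one pass. The sample polynomial z² + 1 and the starting point are illustrative only, since the coefficients of Example 1 are not reproduced in this excerpt.

```python
def newton_poly(coeffs, z0, tol=1e-12, max_iter=100):
    """Newton's method z_{k+1} = z_k - p(z_k)/p'(z_k) for a polynomial given
    by its coefficients [a_n, ..., a_1, a_0] (highest degree first)."""
    def p_and_dp(z):
        # Horner evaluation of p and its derivative simultaneously
        p, dp = coeffs[0], 0
        for a in coeffs[1:]:
            dp = dp * z + p
            p = p * z + a
        return p, dp

    z = z0
    for _ in range(max_iter):
        p, dp = p_and_dp(z)
        if dp == 0:
            break
        z_new = z - p / dp
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

if __name__ == "__main__":
    # p(z) = z^2 + 1, roots ±i; a start in the upper half-plane converges to i.
    print(newton_poly([1, 0, 1], 0.5 + 0.8j))
```

As the text notes, Newton's scheme converges quadratically once inside the basin of a root; the pivots aim to remove the preliminary wandering by supplying starting points already close to a root.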

Example 2.
In this example, the behavior of the method for multiple roots is examined using a polynomial with the same roots as that of Example 1, though now −2i has multiplicity m = 2. As in Example 1, the relevant values related to convergence are listed in Table 2. It can be observed that the double root exactly verifies h'(−2i) = 1.0, as predicted by Proposition 3. Although this does not guarantee the convergence of {y_k}, it may occur that, depending on the initial point y_0, the approach path lies inside the region where |h'(y_k)| < 1 but close to unity. In this case the convergence order becomes sublinear, as can be seen for {y_k} in Figure 3. As predicted by Theorem 17, the acceleration through ĥ(z) leads, for this double root, to linear convergence, clearly visualized in Figure 3 for {ŷ_k}. However, the iteration error is not able to decrease below approximately 10⁻⁹. This limit is due to the singularity of the double root in ĥ(z) which, although removable, produces some numerical instabilities when the function is evaluated in the proximity of the root. The radius of convergence R_2 can also be estimated for the double root using a development parallel to that for single roots. The double root presents the largest value of R_2, a fact that can explain why the sequence, although only linearly, converges to this root from initial values relatively far away, for example, the freely chosen ŷ_0 = −4 − 4i.
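The sublinear stall at a fixed point whose derivative equals 1 is easy to reproduce with a toy map: g(x) = sin(x) has the fixed point 0 with g'(0) = 1, mimicking the double-root situation described above. This sketch is purely illustrative and unrelated to the paper's actual functions; for this map the iterates famously decay only like √(3/k).

```python
import math

def iterate(g, x0, n):
    """Return the n-th iterate of x_k = g(x_{k-1})."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

if __name__ == "__main__":
    # sin has fixed point 0 with derivative 1: convergence is sublinear,
    # x_k ~ sqrt(3/k), so after 100 iterations the error is still ~0.17.
    print(iterate(math.sin, 1.0, 100))
```

A corrected (Steffensen-type) map repairs exactly this kind of stall, which is the role played here by the CRF at multiple roots.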
The results of the proposed method for the functions g(z) and h(z) are similar to those of Example 1. As shown in Table 2, the sequence {x_k} again presents local convergence towards the root −6i, with an approximate convergence radius R_2 significantly higher than those of the rest of the fixed points of g(z).

Conclusions
In this paper, a new method for polynomial root-finding has been presented. The key idea is the construction of two complex functions, called recursive functions, which can be used in a fixed-point recursive scheme. Necessary conditions for local and global convergence are provided. The latter are studied in closed balls centered at two characteristic points, called the pivots of the polynomial. It is demonstrated that the pivots are fixed points of the recursive functions at the complex infinity. Necessary a priori conditions for global convergence are given in terms of the polynomial coefficients. In practice, if a root is relatively larger in absolute value than the others, one of the pivots lies in the proximity of that root. In such cases, the pivots have been demonstrated to be very good initial guesses, not only for the proposed recursive sequences but also for any existing iterative root-finding method. In practice, polynomials whose pivots are relatively larger than the rest of the coefficients usually present convergence; in addition, the larger the absolute value of the pivots, the faster the convergence.
The convergence of the recursive sequences is linear and is not guaranteed for every initial point in the complex plane. For these reasons, corrected recursive functions are constructed to accelerate the convergence. These functions are inspired by the well-known Steffensen's acceleration method and present the same properties: local convergence, with quadratic error-decay velocity for single roots and linear for multiple ones. The convergence region around the roots, that is, the set of complex values from which convergence holds, is also studied. It is concluded that a direct relationship exists between a root's absolute value and its convergence radius: the latter increases with the former.
Finally, the theoretical results are validated with two numerical examples. In the first, the convergence of the proposed recursive sequences for a polynomial with single roots is analyzed. This example clearly reveals the close connection between the pivots and the largest roots of the polynomial: in fact, under certain conditions, simply evaluating the corrected recursive functions at the pivots leads to a very accurate approximation of a root. In the second example, the efficiency of the proposed method is studied for multiple roots. There, as predicted by the theoretical results, the recursive sequences converge under a sublinear scheme, whereas the corrected sequences present linear convergence. Further work is currently under development to generalize the concept of polynomial pivots and to propose new, more accurate initial approximations based only on the polynomial coefficients.
Comparing these expressions with (35) and (36), it is clear that the bounds on |h'(z)| and |h(z) − v| have the same mathematical expressions as those on |g'(z)| and |g(z) − u| in B(u, r). Consequently, the proofs of Lemma 14 and Theorem 15 can be omitted, since they can be directly deduced from the proofs of Lemma 12 and Theorem 13.
Since ĝ''(s) = O(s⁻⁴) and ĝ'''(s) = O(s⁻⁵), the radius of convergence R_3 → ∞ when s → ∞. For the j-th order approximation of ĝ(z), new values of the radius of convergence R_j can be calculated as a result of the following inequality:

Table 1:
Results of local convergence for Example 1.

Hence, the recursive sequences {x_k}, {y_k} and the corrected {x̂_k}, {ŷ_k} can be built. To visualize the convergence process, the iteration errors are represented in Figure 2.
The convergence of the corrected recursive sequences {x̂_k} and {ŷ_k} is quadratic, and this allows reaching the roots −6i and −3.5i with four and seven iterations, respectively. A tolerance of 10⁻¹² for the iteration error has been assumed. Moreover, testing the recursive sequences for other starting points (not shown here), we find that they also converge in the great majority of cases. From Table 1, one can notice that |g'(s)| and |h'(s)| take their lowest values at these roots.

Table 2:
Results of local convergence for Example 2.