Dynamical Techniques for Analyzing Iterative Schemes with Memory

We construct a new biparametric three-point method with memory that greatly improves the computational efficiency of its original partner without adding functional evaluations. Through different estimations of two self-accelerating parameters, we modify an existing seventh-order method. The parameters are defined by means of a Hermite interpolating polynomial, which produces the accelerating effect; in particular, the R-order of the proposed iterative method with memory is increased from seven to ten. A real multidimensional analysis of the stability of this method with memory is carried out in order to study its dependence on the initial estimations. Taking into account that iterative methods with memory are usually more stable than their derivative-free partners, and in view of the results obtained in this study, the behavior of this scheme proves to be excellent except on a small domain of parameter values. Numerical examples and comparisons are also provided, confirming the theoretical results.


Introduction
In recent times, solving nonlinear problems described by f(x) = 0 has become a pressing issue in the modeling of real-world phenomena. In this direction, numerous iterative methods have been proposed (see, e.g., [1][2][3][4][5][6][7]). These iterative methods constitute a noteworthy area of research in numerical analysis, as they can be applied in several areas of pure and applied science. Among them, the most eminent one-point iterative method without memory is the Newton-Raphson scheme, x_{n+1} = x_n - f(x_n)/f'(x_n), n >= 0, which has second-order convergence. One drawback of this method is the requirement f'(x_n) != 0, which restricts its applications.
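As an illustration, the Newton-Raphson iteration described above can be sketched in a few lines; the helper name `newton` and the stopping criterion are ours, for illustration only.

```python
# Minimal sketch of the Newton-Raphson scheme x_{n+1} = x_n - f(x_n)/f'(x_n).
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Approximate a simple root of f starting from x0."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            # the restriction f'(x_n) != 0 mentioned above
            raise ZeroDivisionError("Newton step undefined: f'(x_n) = 0")
        x_next = x - f(x) / dfx
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Note how the guard on f'(x_n) makes explicit the restriction that motivates derivative-free and memory-based variants.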
The first objective when designing iterative methods for solving this kind of problem is to obtain the highest order of convergence at the least computational cost. Therefore, many researchers have devoted much attention to constructing optimal multipoint methods without memory in the sense of the Kung-Traub conjecture [8], which states that a method using n + 1 functional evaluations per iteration can reach at most order 2^n. This paper pursues three main goals: first, to avoid the restriction f'(x) != 0 in practice; second, to extend the proposed families to methods with memory in such a way that they achieve R-order of convergence 10 without any new functional evaluation; and third, to analyze the dependence of the resulting scheme on the set of initial estimations used.
Multipoint schemes have enormous practical significance, as they overcome the theoretical limits on order and efficiency of any one-point scheme. They also generate approximations of higher precision, and advances in computer arithmetic and symbolic calculation have allowed the efficient implementation of multipoint methods.
On the other hand, iterative schemes with memory utilize information from the current and preceding iterations. Traub, in [9], designed the first method with memory by a small change in the well-known Steffensen scheme: x_{n+1} = x_n - f(x_n)/f[x_n, w_n], w_n = x_n + rho_n f(x_n), n >= 0, (2) where x_0 and rho_0 are given, and rho_n is a self-accelerating parameter defined as rho_{n+1} = -1/N_1'(x_{n+1}), N_1(x) being the Newton interpolating polynomial of first degree. The order of convergence of (2) is approximately 2.41. More recently, some authors have constructed iterative schemes with memory from optimal procedures of different orders, mainly four (see, e.g., [10][11][12]) or eight ([13][14][15], among others), or even general n-point schemes [16,17]. Good reviews of the acceleration of convergence order by using memory are [18,19].
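A sketch of Traub's scheme (2) follows, with the divided difference f[x_{n+1}, x_n] playing the role of the slope N_1'(x_{n+1}) of the degree-one Newton interpolating polynomial. This is our illustrative implementation (function name, guards, and stopping rule are assumptions, not taken from [9]):

```python
def traub_with_memory(f, x0, rho0=-0.01, tol=1e-12, max_iter=50):
    """Steffensen-type iteration with one self-accelerating parameter rho_n:
    x_{n+1} = x_n - f(x_n)/f[x_n, w_n],  w_n = x_n + rho_n * f(x_n),
    rho_{n+1} = -1 / f[x_{n+1}, x_n].
    """
    x, rho = x0, rho0
    fx = f(x)
    for _ in range(max_iter):
        if fx == 0.0:          # x is already a root
            return x
        w = x + rho * fx
        slope = (f(w) - fx) / (w - x)   # divided difference f[x_n, w_n]
        x_new = x - fx / slope
        fx_new = f(x_new)
        if abs(x_new - x) < tol:
            return x_new
        rho = -(x_new - x) / (fx_new - fx)  # -1 / f[x_{n+1}, x_n]
        x, fx = x_new, fx_new
    return x

# Example: the positive root of f(x) = x^2 - 2.
root = traub_with_memory(lambda x: x * x - 2.0, 1.5)
```

The key point is that rho is recalculated at each step from already computed values, so the acceleration costs no extra functional evaluation.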
The convergence order of the new method is improved with respect to its partner without memory; the improvement is obtained by adding self-accelerating parameters while keeping the derivatives in the iterative expression. The accelerated convergence rate is achieved without additional functional evaluations, which results in higher computational efficiency. However, another important aspect of an iterative method is its numerical stability, that is, the analysis of how dependent the scheme is on the initial guesses used. The dynamical behavior of the rational functions associated to iterative schemes is a very useful tool to study this dependence on initial estimations. In recent years, complex discrete dynamics has been widely applied to iterative methods without memory (see, e.g., [20][21][22][23]). Nevertheless, it is known that iterative schemes with memory cannot be analyzed by means of these techniques. This is the reason why the authors of [24][25][26] focused their qualitative study on transforming such schemes into multidimensional dynamical systems, whose qualitative properties can be analyzed with standard tools, as is done for the classical Hénon map (see [27,28]).
In this manuscript, we obtain a multipoint method with memory to solve nonlinear equations, followed by its convergence and stability analysis. The rest of the paper is organized as follows: in Section 2, we design a biparametric three-point iterative scheme with memory. The method is obtained by employing two self-accelerating parameters, which use current and previous data from the available information. These parameters are recalculated at each iteration by using Hermite interpolating polynomials, increasing the order of convergence from 7 to 10. Section 3 is devoted to the qualitative analysis of the proposed scheme in terms of the associated real discrete dynamical system. After introducing some basic concepts, the stability of the method on a generic low-degree polynomial is studied. In general, it presents very stable behavior, but also chaotic behavior for values of the parameter in a small domain. In Section 4, some numerical examples are presented, confirming the results proven in Section 2.

Convergence Analysis
In this section, we demonstrate the improvement of the convergence order of the method given by Khattri and Argyros [29] by adding two different parameters in the first and third steps. Firstly, we consider the seventh-order scheme without memory presented in [29].
The error equations of each step of (6) show, by (9), that the order of convergence of (6) is still seven, independently of the parameters γ and λ. This order can be improved from seven to ten by taking γ = c_2 and λ = f'(α)c_3, but the root α is not known.
So, to improve the rate of convergence of (6), we recalculate the values of the parameters γ and λ at each iteration, taking γ ≈ c_2 and λ ≈ f'(α)c_3, since f'(α), f''(α), and f'''(α) are not available. We denote these estimations by γ_n and λ_n; they are computed using the current and previous iterations and satisfy lim_{n→∞} γ_n = c_2 = f''(α)/2f'(α) and lim_{n→∞} λ_n = f'(α)c_3 = f'''(α)/6. We consider the formulas (10) for γ_n and λ_n, based on Hermite interpolating polynomials of different degrees. Proof. It is known that the error expression of the Hermite interpolating polynomial H_m(x) can be obtained from (12). Differentiating (12) at x = y_n, we get (13). Taylor expansions about α of the derivatives of f at x_n, y_n ∈ I and ξ ∈ I provide f'''(y_n) = f'(α)(6c_3 + 24c_4 e_{n,y} + O(e²_{n,y})) (14) and (15), where e_ξ = ξ − α. Substituting (14) and (15) into (13), we obtain (16), which implies (17) and (18). The concept of the R-order of convergence [30] and the subsequent statement (see [4], p. 287) are used to estimate the order of convergence of (6).
Then, we can prove the following result.
Theorem 2. Let f: I ⊆ ℝ → ℝ be a sufficiently differentiable function, and let γ_n and λ_n be the varying parameters of iterative scheme (6), obtained by means of (10). If the initial estimate x_0 is close enough to a simple root α of f(x), then the R-order of the iterative method is at least 10. We denote this scheme by OM10.
Proof. Let {x_n} be the sequence generated by scheme OM10. As it converges to the root α of f(x) with R-order r, we write e_{n+1} ~ D_{n,r} e^r_n. (20) Let us now consider the sequences {y_n} and {z_n}, with R-orders p and q, respectively; then e_{n,y} ~ D_{n,p} e^p_n and e_{n,z} ~ D_{n,q} e^q_n. The following error expressions of the method with memory (6) can be obtained from (7), (8), and (9) together with the varying parameters γ_n and λ_n; higher-order terms are excluded in (24), (25), and (26).
Equating the exponents of the errors leads to a system whose unique positive solution, from (43), is r = 10, q = 5, and p = 3. So the R-order of method (6), with γ_n and λ_n calculated by (10), is at least 10.
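The theoretical R-order stated above can be checked numerically by means of the computational order of convergence, rho_n = ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}), where e_n = |x_n − α|. A small helper (ours, under the assumption that the exact root α is known, as in the test problems of Section 4):

```python
import math

def computational_order(xs, alpha):
    """Estimate the order of convergence from successive errors
    e_n = |x_n - alpha| via rho_n = ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})."""
    e = [abs(x - alpha) for x in xs]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)]

# Artificial cubically convergent errors e_{n+1} = e_n^3 around alpha = 0:
orders = computational_order([1e-1, 1e-3, 1e-9], 0.0)
```

For a method of R-order 10, the computed rho_n should approach 10 as the iterates close in on the root.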

Multidimensional Dynamical Study
In this section, we build the discrete dynamical system associated to OM10 in order to carry out its qualitative analysis. The general iterative expression of a fixed-point procedure that employs the two previous iterations to get the following one is x_{n+1} = g(x_{n−1}, x_n), where x_0 and x_1 are initial approximations. But if the scheme is a three-step one, as OM10 is, not only x_{n−1} but also the intermediate points y_{n−1} and z_{n−1} must be used to calculate the following iterate x_n with a high order of convergence. So the fixed-point iteration is expressed as x_{n+1} = g(x_{n−1}, y_{n−1}, z_{n−1}, x_n). To obtain the fixed points, we define the fixed-point multidimensional function G: ℝ⁴ → ℝ⁴ associated to g; a fixed point of G then satisfies x_{n+1} = x_n, and G defines a discrete dynamical system in ℝ⁴. Let G: ℝ⁴ → ℝ⁴ be a vectorial function. The orbit of a point x ∈ ℝ⁴ is defined as the set {x, G(x), G²(x), …}. The stability of fixed points of multivariate operators (see, e.g., [31]) is analyzed as follows.
In addition, a fixed point is hyperbolic if all the eigenvalues λ_j of the Jacobian matrix satisfy |λ_j| ≠ 1. In particular, if there exist λ_i such that |λ_i| < 1 and λ_j such that |λ_j| > 1, then the hyperbolic point is a saddle point.
Let us remark on the difference between the stability of a fixed point x* in one-dimensional and in multidimensional dynamics: in the scalar case, the stability of the fixed point depends on the value of the derivative of the operator at the point, R being the rational function associated to the iterative method applied on a polynomial p(x); in the multidimensional case, the eigenvalues of the Jacobian matrix of the fixed-point operator at the fixed point are the elements that determine its character.
Moreover, a point x is called critical if the associated Jacobian matrix G'(x) satisfies det G'(x) = 0 (in the one-dimensional case, a critical point is one that makes the derivative of the fixed-point operator vanish). One way to calculate critical points, for iterative methods of convergence order higher than two, is to find those points with null eigenvalues, λ_j = 0 ∀j. If such points are also fixed points, they are called superattracting, as an extension of the scalar case.
So, if x* is an attracting fixed point of G, its basin of attraction A(x*) is defined as the set of points whose orbit tends to x*. A key result by Julia and Fatou [32] establishes the relationship between the existence of free critical points (so called when they differ from the roots of p(x)) and the set of initial points that converge to an attracting periodic point: there is at least one critical point associated with each invariant Fatou component. So, by observing the orbits of the critical points, all the attracting behaviors can be found. This result is valid for complex as well as for real iterative functions. The main drawback is that often the analytic expression of the critical points cannot be found for high-order methods, because of the elevated degree of the polynomials involved in the rational function.
The union of all the basins of attraction defines the Fatou set, usually represented as a dynamical plane. It is constructed numerically by starting with a mesh of initial guesses, iterating the method on each of them, and assigning different colors depending on the basin they converge to.
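The mesh-coloring procedure just described can be sketched as follows. We use a one-dimensional simplification with Newton's iteration on p(x) = x² − c as a stand-in operator, since the OM10 operator is too long to reproduce here; all names are ours:

```python
def basins_1d(step, roots, xmin, xmax, n=400, max_iter=50, tol=1e-3):
    """Assign to each initial guess on a 1-D mesh the index of the root
    its orbit converges to (-1 if no convergence within max_iter)."""
    labels = []
    h = (xmax - xmin) / (n - 1)
    for i in range(n):
        x = xmin + i * h
        label = -1
        for _ in range(max_iter):
            x = step(x)
            for k, r in enumerate(roots):
                if abs(x - r) < tol:  # close enough: record the basin
                    label = k
                    break
            if label >= 0:
                break
        labels.append(label)
    return labels

# Stand-in operator: Newton's iteration on p(x) = x^2 - 2.
c = 2.0
newton_step = lambda x: x - (x * x - c) / (2.0 * x)
labels = basins_1d(newton_step, [c ** 0.5, -c ** 0.5], 0.5, 3.0)
```

In the two-dimensional planes of Section 3, the same idea is applied to a mesh of pairs (x, z), with one color per basin and black for non-convergent points.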

Dynamical Analysis of OM10
Now, we study the behavior of the operator associated to scheme OM10 on quadratic polynomials. To this end, the asymptotic stability of the fixed points of the associated rational vector function applied on p(x) = x² − c, with c ∈ ℝ, must be analyzed. It is clear that if c > 0, p(x) has two simple real roots; it has one double root at x = 0 if c = 0; and p(x) has no real roots if c < 0.
We need to calculate the fixed points of the fixed-point operator associated to method OM10 on p(x) = x² − c, M(z, zy, zz, x, c), which is a high-degree rational function depending on the parameter c of the polynomial. However, it is not viable to analyze the fixed points of this operator directly, as indeterminations usually appear when we calculate the fixed points, due to cancellations of factors such as (z − x), (zy − x), and (zz − x) in the denominator of M(z, zy, zz, x, c). In order to avoid this, and using the fact that all the fixed points have equal components, we first force zy = zz = x and, after simplifying, z = x. The result is a one-dimensional reduced operator m(x, c). Moreover, there are two values of c which reduce the rational function: for c = 1/4, there is only one (superattracting) fixed point that is a root of p(x), x = −1/2, and one strange fixed point, x = 5/6, which is repulsive. Proof. It is straightforward that the identity m(x, c) = x leads us to the fixed points. Specifically, the fixed points of m(x, c) are the roots of p(x) or the real roots of the polynomial 1 + c − 4x + 3x². Then, real strange fixed points s₁(c) and s₂(c) appear when c ≤ 1/3.
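The strange fixed points can be computed directly from the quadratic factor 1 + c − 4x + 3x², whose discriminant is 4(1 − 3c), so real roots exist precisely for c ≤ 1/3. A small helper (name ours):

```python
import math

def strange_fixed_points(c):
    """Real roots of 3x^2 - 4x + (1 + c): the strange fixed points
    s1(c) <= s2(c) of the reduced operator; empty when c > 1/3."""
    disc = 1.0 - 3.0 * c          # discriminant of the quadratic is 4*(1 - 3c)
    if disc < 0:
        return []
    s = math.sqrt(disc) / 3.0
    # roots (4 +- 2*sqrt(1 - 3c)) / 6 = 2/3 -+ sqrt(1 - 3c)/3
    return [2.0 / 3.0 - s, 2.0 / 3.0 + s]

s_low, s_high = strange_fixed_points(5.0 / 16.0)
```

For c = 5/16, the value at which s₂(c) becomes superattracting, this yields s₂ = 3/4; for c > 1/3, no real strange fixed points exist, consistent with the proof above.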

Regarding their stability, it can be checked that the eigenvalues of the reduced Jacobian matrix coincide with the derivative m_x(x, c), and then their stability is easily stated. In particular, m_x(s₂(c), c) = 0 for c = 5/16. This is checked graphically in Figure 1.
An example of the behavior stated in Proposition 1 is presented in Figure 2, where the basins of attraction for different values of c are shown. These pictures have been generated by means of a real mesh (x, z) of 800 × 800 points, following the routines appearing in [33]. For each initial pair (x, z), the intermediate values zy and zz have been generated from z by adding a small random value (in MATLAB code, zy = z + 0.01*rand(1) and zz = z + 0.01*rand(1)). When the method is executed on each initial estimation (x, z) under these conditions, the point of the mesh is represented in orange or blue if the process has converged to one of the roots √c or −√c within a distance of 10⁻³; other colors denote convergence to strange fixed points, and black points mark initial guesses for which the method diverges or does not converge after a maximum of 200 iterations.
Let us remark that c = 5/16 is the value of the parameter that makes the strange fixed point s₂(c) superattracting; even in this situation, it is difficult to find initial estimations converging to it (a convergent orbit to s₂(5/16) is plotted, where the initial estimation of the orbit is selected). Also, c = 0.1 is used, and convergence to both roots is observed (Figure 2(d)), with black areas of slow convergence. No other attractors appear. In Figure 2(e), the most frequent behavior is convergence to the roots, but a black line of convergence to a 2-periodic orbit near (1, 1) also appears. In the case of the double root, corresponding to c = 0 (see Figure 2(f)), convergence to x = z = 0 is very slow. On the other hand, critical points play a key role in the stability of the system, as they always appear in some basin of attraction. Then, it is necessary not only to calculate them but also to analyze their asymptotic behavior, in order to detect all the attracting elements: attracting fixed points, periodic orbits, strange attractors, and so on. This is why we calculate, in the following result, the free critical points of m(x, c) with equal components, by solving m_x(x, c) = 0 or, equivalently, by forcing the eigenvalues of the reduced Jacobian matrix to be zero. Other critical points of the multivariate rational function may exist, but their calculation is not feasible due to the high degree of the multivariate polynomials involved. (b) If c > 1/4, there also exist two more free critical points, c₃(c) and c₄(c).
Let us remark that, although the dynamics of the method seems stable from the dynamical planes plotted and from the analysis of the strange fixed points, the presence of any of these critical points can make them bifurcate into periodic orbits or generate chaos for specific values of c. In what follows, we analyze how the behavior changes depending on c and whether there exists any other kind of bifurcation leading to repulsive or attracting points, or to periodic or strange attractors.

Feigenbaum Diagrams.
In order to analyze bifurcations, we employ Feigenbaum diagrams of the multidimensional rational function related to OM10 on p(x), using as initial estimations the real critical points described in Proposition 2 and searching for the ranges where changes of behavior happen. The critical points are not independent but conjugated two-by-two.

Table 1: Nonlinear test functions along with their zeros.

By using the critical point c₁(c) as the starting guess, a bifurcation diagram is observed in Figure 3(a). In this case, it can be seen that the critical point lies in the basin of attraction of the root −√c, as it converges to it for c > 0. A symmetric behavior is obtained with c₂(c) as the initial estimation, which belongs to the basin of attraction of the root √c. In Figure 3(b), the critical point c₃(c) also shows convergence to one of the roots, but c₄(c) shows chaotic behavior for 1/4 < c < 1/3, with period-doubling bifurcation cascades, as can be observed in Figures 3(c) and 3(d).
We plot the iterates of m(x, c) for values of c in one of the blue regions of Figure 3(d), in the (x, z) space. In the sequence shown in Figure 4, several strange attractors appear, separated or unified depending on very close values of c. These plots have been generated by fixing a value of c and applying OM10 to the initial estimation x = 0.8, plotting one point per iteration (blue for the first 2000 iterations, green for iterations 2001 to 4000, and magenta for iterations 4001 to 10,000). The resulting images show that several attracting points join into wandering areas that are unified or separated depending on c.
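The generation of a Feigenbaum diagram follows the usual pattern: for each parameter value, iterate the operator from a critical point, discard a transient, and record the tail of the orbit. A generic sketch is given below, using the logistic map as a stand-in for the OM10 operator (whose expression is too long to reproduce); all names are ours:

```python
def bifurcation_data(step, x0, c_values, n_transient=200, n_keep=100):
    """For each parameter c, return the tail of the orbit of x0 under
    the map x -> step(x, c), after discarding a transient."""
    data = {}
    for c in c_values:
        x = x0
        for _ in range(n_transient):   # discard transient behavior
            x = step(x, c)
        tail = []
        for _ in range(n_keep):        # keep the asymptotic orbit
            x = step(x, c)
            tail.append(x)
        data[c] = tail
    return data

# Stand-in map: logistic map x -> c*x*(1 - x), started at its critical point 0.5.
logistic = lambda x, c: c * x * (1.0 - x)
data = bifurcation_data(logistic, 0.5, [2.0, 3.2])
```

Plotting the recorded tails against c reveals fixed points as single branches, period-doubling cascades as branch splittings, and chaotic windows as dense vertical bands, exactly the structures observed in Figures 3 and 4.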
A numerical test of methods with memory is usually made by using a starting point x_0 and also starting values of the accelerating parameters: γ_0 to calculate y_1 and z_1, and thereafter λ_0 to estimate x_1. We check the performance of the method using this strategy, in order to see whether the unstable behavior is avoided or, on the contrary, increased. For this purpose, we use γ_0 = 0.01 and λ_0 = 2. It can be observed in Figure 5 that the stability of the method is greatly improved when appropriate initial values are considered, not only x_0 but also the steps y_1 and z_1 and, subsequently, the second iterate x_1. The areas of convergence around the sought roots are wide in spite of the complexity of the nonlinear functions involved, and the behavior of the method is stable.

Conclusion
In this work, we have presented a new family of biparametric three-step schemes with memory to solve nonlinear equations. We have introduced two self-accelerating parameters, designed by means of Hermite interpolating polynomials, into a seventh-order method to achieve higher-order convergence without any additional functional evaluations: the R-order of OM10 increases up to 10. The stability analysis of this family has been made by transforming it into a discrete dynamical system and studying the asymptotic behavior of its fixed and critical points. The method has been shown to be very stable except in a few cases. These results have been checked by using dynamical planes, and the proposed method has been compared in performance and computational efficiency with several existing methods by means of numerical examples, confirming the validity of the theoretical results.
e_{n,y} ~ D_{n,p} e^p_n ~ D_{n,p} (D_{n−1,r} e^r_{n−1})^p = D_{n,p} D^p_{n−1,r} e^{rp}_{n−1},

e_{n,y} ~ −c₆ e_{n−1,y} e_{n−1,z} e²_{n−1} (D_{n−1,r} e^r_{n−1})² ~ −c₆ D_{n−1,p} D_{n−1,q} D²_{n−1,r} e^{p+q+2r+2}_{n−1}, (40)

e_{n,z} ~ c₆ e_{n−1,y} e_{n−1,z} e²_{n−1} (D_{n−1,r} e^r_{n−1})⁴ ~ c₆ D_{n−1,p} D_{n−1,q} D⁴_{n−1,r} e^{p+q+4r+2}_{n−1}, (41)

and e_{n+1} ~ (c₆² e_{n−1,y} e_{n−1,z} e²_{n−1} − c₇ f'(α) e_{n−1,y} e_{n−1,z} e²_{n−1}) (D_{n−1,r} e^r_{n−1})⁴ … (42)

The multidimensional system is built from the function G: ℝ⁴ → ℝ⁴ given by G(z, zy, zz, x) = (x, xy, xz, g(z, xy, xz, x)), (47) where the steps of the nth and (n−1)th iterations are denoted by z = x_{n−1}, zy = y_{n−1}, zz = z_{n−1}, xy = y_n, xz = z_n, and x = x_n. Fixed points (z, zy, zz, x) of G satisfy z = zy = zz = x and x = g(z, xy, xz, x). By imposing these conditions on the rational operator G, it can be reduced to a real-valued function g(x), in order to study the asymptotic stability of these points. If all the components of a fixed point of G are different from r, r being a zero of the nonlinear scalar function f, then it is called a strange fixed point.

Proposition 1. The number of real fixed points of m(x, c) (and their stability) depends on the value of the parameter c:
(a) If c < 0, p(x) has no real roots, so the fixed points s₁(c) = 2/3 − (1/3)√(1 − 3c) and s₂(c) = 2/3 + (1/3)√(1 − 3c) are strange, both being repulsive.
(b) If c = 0, the only (double) root of p(x), x = 0, is an attracting fixed point of m(x, 0), and x = 1/3 is a repulsive strange fixed point.
(c) When 0 < c < 1/4 or c > 1/4, the fixed points √c and −√c are superattracting (corresponding to the roots of p(x)). Moreover, s₁(c) and s₂(c) are strange fixed points if 1/4 < c ≤ 1/3; s₁(c) is always repulsive, and s₂(c) is attracting if c ∈ (7/25, 1/3) and superattracting if c = 5/16.

Figure 1: Absolute value of the eigenvalues λ_i of M(s_i(c), s_i(c), s_i(c), s_i(c), c), i = 1, 2.

Figure 3: Bifurcation diagrams of family OM10 on p(x) for free independent critical points.

Proposition 2. The number of real critical points of m(x, c) depends on the parameter c: ±√c, c > 0, are always critical points; moreover, (a) if c > 0, c₁(c) = 1 − √c and c₂(c) = 1 + √c are free critical points.

Figure 4: Bifurcation diagrams of family OM10 on p(x) for free independent critical points.
Indeed, the pairs are conjugated as c₁(c) = −1/c₂(c) and c₃(c) = −1/c₄(c). Nevertheless, it is necessary to analyze the bifurcation diagram corresponding to each one of them, as their performance is different.

Figure 5: Dynamical planes of OM10 on the test functions.

Table 2: Numerical comparison of the biparametric method with memory.