Some Recent Modifications of Fixed Point Iterative Schemes for Computing Zeros of Nonlinear Equations

In computational mathematics, it is a matter of deep concern to recognize which of the given iteration schemes converges quickly, with smaller error, to the desired solution. Fixed point iterative schemes are constructed to solve equations emerging in many fields of science and engineering. These schemes reformulate a nonlinear equation f(s) = 0 into a fixed point equation of the form s = g(s); this reformulation determines the solution of the original equation via the fixed point iterative method and is subject to existence and uniqueness conditions. In this manuscript, we introduce a new modified family of fixed point iterative schemes for solving nonlinear equations which includes further recursive methods as particular cases. We also prove the convergence of our suggested schemes. We also consider some mathematical models which are categorically nonlinear in essence to authenticate the performance and effectiveness of these schemes, which can be seen as an expansion and generalization of some existing techniques.


Introduction
In many disciplines of engineering and the mathematical sciences, a broad class of problems is studied in the framework of a nonlinear equation f(s) = 0, where f: D ⊆ R ⟶ R is a sufficiently smooth function close to a simple zero μ ∈ D. The development of various iterative methods for finding the approximate solution of a nonlinear equation f(s) = 0 has become an active area of research in many scientific fields. Several numerical techniques, such as Taylor series, decomposition methods, quadrature formulas, the modified homotopy perturbation method, and the variational iterative method, have been used to explore the diversity of iterative methods; for example, see . One of the most well-known and extensively used iterative methods for solving nonlinear equations is Newton's method, which was initiated by Traub [31]. Many numerical methods have been constructed as extensions of Newton's method and Newton-like methods. Weerakoon and Fernando [30] improved the convergence of Newton's method by approximating the indefinite integral in Newton's theorem by the rectangular and trapezoidal rules. Frontini and Sormani [13] extended their results to determine another variant of Newton's method which is cubically convergent. Later on, Ozban [25] introduced some improved forms of Newton's method by using the concept of the harmonic mean and the midpoint rule. Abbasbandy [1], Chun [7], and Darvishi and Barati [10] constructed and introduced different higher-order iterative methods by applying the decomposition technique of Adomian [32]. Implementation of this Adomian decomposition technique requires the evaluation of higher-order derivatives, which is a major pitfall of the method. To overcome this drawback, Daftardar-Gejji and Jafari [11] used different modifications of the Adomian decomposition method and introduced a simple technique which does not require derivative evaluation of the Adomian polynomial.
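For concreteness, the classical Newton scheme s_{n+1} = s_n − f(s_n)/f′(s_n) discussed above can be sketched in a few lines of Python. This is a minimal illustration; the test function, derivative, and starting guess below are arbitrary examples, not taken from the paper.

```python
# Minimal sketch of Newton's method for f(s) = 0.
def newton(f, fprime, s0, tol=1e-12, max_iter=100):
    s = s0
    for _ in range(max_iter):
        d = fprime(s)
        if d == 0:
            raise ZeroDivisionError("derivative vanished at s = %g" % s)
        s_new = s - f(s) / d  # Newton step
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

# Example: the simple root of f(s) = s^2 - 2 near s0 = 1.5
root = newton(lambda s: s * s - 2, lambda s: 2 * s, 1.5)
```

The quadratic convergence of this scheme is the baseline that the higher-order methods discussed below improve upon.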
This technique is used to rewrite the nonlinear equation as the sum of a linear and a nonlinear part. Bhalekar and Daftardar-Gejji [6] determined the convergence of the former method [11] in detail and established its equivalence with the Adomian decomposition method. Numerous researchers have extensively used the technique of [11] and derived several higher-order iterative methods for solving nonlinear equations. Ali et al. [2] introduced a family of iterative methods using a quadrature formula together with the fundamental theorem of calculus and checked the validity and performance of these methods by examining two mathematical models. In [18, 20], the decomposition technique of [11] is implemented and merged with a coupled system of equations to investigate iterative methods of various orders. Alharbi et al. [4] used the central idea of the decomposition technique along with an auxiliary function and investigated generalized and comprehensive forms of higher-order iterative methods for finding solutions of nonlinear equations. Ali et al. [3] constructed several new iterative methods by using the Taylor series expansion of the function g(s).
These iterative methods can be viewed as generalizations of some well-known methods such as Newton's method, Halley's method, and Traub's method.
Inspired and motivated by the continuing research in this direction, we consider the well-known fixed point iterative method [33], in which we rewrite the nonlinear equation f(s) = 0 as s = g(s), where g satisfies the standard conditions for the existence and uniqueness of a fixed point. Then, we use Newton's theorem, write the functional equation s = g(s) as a coupled system, and apply the decomposition technique presented in [11]. In the second section of this study, we introduce some new iterative methods and determine their special cases. The third section comprises the convergence analysis of the proposed iterative methods. The efficiency and performance of the newly constructed family of recursive approaches are tested in the last section by solving some test examples along with two models, i.e., the motion of a particle on an inclined plane and the Lennard-Jones potential applied to a minimization problem. We also present a graphical analysis for these models. The numerical results of the examples reveal and validate the efficacy of our newly proposed methods.
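The basic fixed point iteration s_{n+1} = g(s_n) underlying this construction can be sketched as follows. This is a minimal illustration with a hypothetical example equation; convergence requires |g′(s)| < 1 in a neighborhood of the fixed point.

```python
# Minimal sketch of the plain fixed point iteration s_{n+1} = g(s_n).
def fixed_point(g, s0, tol=1e-12, max_iter=200):
    s = s0
    for _ in range(max_iter):
        s_new = g(s)
        if abs(s_new - s) < tol:  # stop when consecutive iterates agree
            return s_new
        s = s_new
    return s

# Example: f(s) = s^2 - s - 1 = 0 rewritten as s = g(s) = 1 + 1/s,
# for which |g'(s)| < 1 near the positive root (the golden ratio).
phi = fixed_point(lambda s: 1 + 1 / s, 1.5)
```

This plain iteration converges only linearly; the schemes constructed in the next section accelerate it using Newton's theorem and the decomposition technique.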

New Class of Family of Iterative Schemes
Consider the nonlinear equation which can be rewritten as where λ is an initial guess sufficiently close to μ, a simple root of (1). Now, utilizing the technique of Noor et al. [22], approximate the function f(s) using the fundamental theorem of calculus and a quadrature formula: where the τ_i represent the knots, τ_i ∈ [0, 1], and the weights w_i satisfy the condition: Applying the technique of He [15] and writing the nonlinear equation as an equivalent coupled system of equations, it can be rewritten as where It is clear that M(s) is a nonlinear operator. Now, we construct a sequence of higher-order iterative methods by employing the decomposition technique initiated by Daftardar-Gejji and Jafari [11]. With the support of this technique, the solution of (8) can be represented in terms of the infinite series: The nonlinear operator can be decomposed as Thus, from equations (8), (11), and (12), we have which generates the following iterative scheme: It follows that the approximation of s is given by lim_{m⟶∞} s_m = s. For m = 0, we have Implementing (10), From (6), it can easily be computed that H(s_0) = 0, and using this in (18), we get For m = 1, Using (4) and (17), we have This formulation suggests Algorithm 1. Algorithms 2 and 3 are the main iterative schemes, which generate further special cases for different values of κ, w, and τ. Taking κ = 1, w_1 = 1, and τ_1 = 0, Algorithm 2 reduces to the subsequent iterative scheme:

Some Special Manifestations of Algorithm 3.
Picking κ = 1, w_1 = 1, and τ_1 = 0, Algorithm 3 reduces to the subsequent iterative scheme: To the best of our knowledge, Algorithms 7, 8, 9, 11, and 12 appear to be new.

Convergence Analysis
Theorem 1. Let f: I ⊆ R ⟶ R be a differentiable function, where I is an open interval, and let μ ∈ I be a simple zero of f(s) = 0 or, equivalently, of s = g(s), where g: I ⊆ R ⟶ R is sufficiently smooth in a neighborhood of the root. If s_0 is an initial guess sufficiently close to μ, then the multistep methods defined by Algorithms 2, 3, 7, 8, 9, 11, and 12 have convergence of order at least 3, 4, 3, 3, 4, 4, and 4, respectively. Proof. Let μ be the root of the nonlinear equation f(s) = 0 or, equivalently, of s = g(s). Let the errors at the nth and (n + 1)th iterations be e_n and e_{n+1}, respectively. Now, expand g(s_n) and g′(s_n) by Taylor's series about μ. (i) For a given s_0, the approximate solution s_{n+1} is computed by the following iterative scheme: s_{n+1} = (g(s_n) − s_n g′(s_n))/(1 − g′(s_n)), g′(s_n) ≠ 1, n = 0, 1, 2, …. (ii) Kang et al. [20] developed this algorithm, which has quadratic convergence. From (6) and (8), we have (17); using Algorithm 1 and simplifying, we have s (vi) By using Algorithm 1 and computing, we get s = ( ). (vii) This relation yields the following two-step algorithm. ALGORITHM 1: The proposed method.
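The quadratically convergent scheme stated in step (i) above, s_{n+1} = (g(s_n) − s_n g′(s_n))/(1 − g′(s_n)), can be sketched directly in Python. This is a minimal illustration of that single formula only, not of the multistep Algorithms 2 and 3; the example fixed point problem g(s) = cos(s) is our own choice, not from the paper.

```python
import math

# Sketch of the one-step scheme
#   s_{n+1} = (g(s_n) - s_n * g'(s_n)) / (1 - g'(s_n)),   g'(s_n) != 1,
# which is Newton's method applied to F(s) = s - g(s).
def fixed_point_newton(g, gprime, s0, tol=1e-14, max_iter=50):
    s = s0
    for _ in range(max_iter):
        gp = gprime(s)
        if gp == 1:
            raise ZeroDivisionError("g'(s) = 1; the scheme is undefined")
        s_new = (g(s) - s * gp) / (1 - gp)
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

# Fixed point of g(s) = cos(s), i.e., the root of s - cos(s) = 0
root = fixed_point_newton(math.cos, lambda s: -math.sin(s), 0.5)
```

Rewriting the scheme as s − (s − g(s))/(1 − g′(s)) makes the Newton structure, and hence the quadratic convergence claimed in (ii), explicit.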
(i) Let s_0 be an initial guess; then, one can compute s_{n+1} (the approximate solution) with the support of the subsequent recursive scheme: ALGORITHM 3: The proposed method.
(i) Let s_0 be an initial guess; then, one can compute the approximate solution s_{n+1} by the following recursive scheme: (i) Let s_0 be an initial guess; then, the approximate solution s_{n+1} is computed by the following iterative scheme: ALGORITHM 8: A newly designed scheme.
ALGORITHM 12: A new recursive approach.
g(s_n) = μ + e_n g′(μ) + (e_n²/2!) g″(μ) + ⋯,
g′(s_n) = g′(μ) + e_n g″(μ) + (e_n²/2!) g‴(μ) + ⋯.
Considering Algorithm 2 and following (26), where g′(μ) ≠ 1, we obtain the Taylor series expansion of Σ_{i=1}^{κ} w_i g′(s_n + τ_i(t_n − s_n)) about μ. By substituting (27)–(29) into Algorithm 2, we obtain the error term of Algorithm 2. Expanding g(u_n) in terms of the Taylor series about μ, expanding Σ_{i=1}^{κ} w_i g′(s_n + τ_i(u_n − s_n)) by Taylor's series, and applying (30)–(32) to Algorithm 3, we get the error term of Algorithm 3. Now, we investigate the convergence order of the special cases of Algorithms 2 and 3.
Expanding g′(s_n + 2t_n/3) in terms of the Taylor series about μ and starting with (23), (27), (29), and (34), we obtain the expansion of 4g(t_n) − t_n(g′(s_n) + 3g′(s_n + 2t_n/3)). Now, by substituting (33) and (34) into Algorithm 7 and simplifying, we obtain the error term of Algorithm 7. Expanding g′(s_n + t_n/2) in terms of Taylor's series about μ, employing (23), (27), (28), and (37), and simplifying, we then substitute (40) and (41) into Algorithm 8 to get the error term of Algorithm 8. Now, again consider the error term of Algorithm 4 investigated by Kwun et al. [21]. Expanding g(u_n) in terms of Taylor's series about μ and substituting (24) and (46) into Algorithm 9, we get the error term of Algorithm 9. Now, considering (41), expanding g(u_n) in terms of Taylor's series about μ, and expanding g′(s_n + 2u_n/3) in Taylor's series about μ, from (23), (48), and (49) and simplifying, we obtain the expansion of 4g(u_n) − u_n(g′(s_n) + 3g′(s_n + 2u_n/3)). By substituting (51) and (52) into Algorithm 11, we get the error term of Algorithm 11. From (42), considering the error equation of Algorithm 8, expanding g(u_n) in terms of Taylor's series about μ, and expanding g′(s_n + u_n/2) in Taylor's series about μ, from (55)–(57) we obtain the expansion of 2g(u_n) − u_n(g′(s_n) + g′(s_n + u_n/2)). By substituting (58) and (59) into Algorithm 12, we get the error term of Algorithm 12. This completes the proof.

Efficiency Index
Commonly in the literature, the efficiency index [31] of an algorithm supplies information about the numerical behavior and performance of the method under examination. It is also used to compare different iterative methods and is mathematically defined as EI = P^{1/m}, where P represents the order of the method and m is the total number of function evaluations (the function and the derivatives involved) per iteration required by the method. Taking this into account, one can calculate the EI of different iterative methods. Since the Ullah method (UM) [5] is quadratically convergent and needs three function evaluations per iteration, the EI for this method is 2^{1/4} ≈ 1.18921. Similarly, the EI of the Farooq method (FM) [28] is 3^{1/5} ≈ 1.24573, because the order of convergence of the method is three and it requires five function evaluations per iteration. The efficiency indexes of the Noor methods [22] with cubic and fourth order of convergence are 3^{1/5} ≈ 1.24573, 4^{1/6} ≈ 1.25992, and 4^{1/8} ≈ 1.18921. Now, we compute the efficiency indexes of the newly proposed algorithms. The convergence order of the methods described in Algorithms 7 and 8 is 3, see (37) and (42), and each requires four function evaluations per iteration. Thus, the efficiency index for both methods is 3^{1/4} ≈ 1.31607. The convergence order of the methods described in Algorithms 9, 11, and 12 is 4, see equations (47), (53), and (60), and the total number of evaluations per iteration is 4, 6, and 6, respectively. Thus, the efficiency indexes for these methods are EI = 4^{1/4} ≈ 1.41421, 4^{1/6} ≈ 1.25992, and 4^{1/6} ≈ 1.25992, respectively. Table 1 summarizes the efficiency indexes of the different algorithms computed above, and it can easily be noted that the efficiency indexes of the newly established algorithms AG1, AG2, AG3, AG4, and AG5 are better than those of the other iterative methods.
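The efficiency index EI = P^{1/m} is straightforward to tabulate. The sketch below recomputes a few of the values reported above (the method labels are taken from this section; the computation itself is just the definition).

```python
# Efficiency index EI = P**(1/m): P = convergence order,
# m = function evaluations per iteration.
def efficiency_index(order, evals):
    return order ** (1.0 / evals)

# Orders and evaluation counts as reported in this section.
methods = {
    "AG1/AG2 (order 3, 4 evals)": (3, 4),   # EI = 3^(1/4)
    "AG3 (order 4, 4 evals)":     (4, 4),   # EI = 4^(1/4)
    "AG4/AG5 (order 4, 6 evals)": (4, 6),   # EI = 4^(1/6)
    "FM (order 3, 5 evals)":      (3, 5),   # EI = 3^(1/5)
}
for name, (p, m) in methods.items():
    print(f"{name}: EI = {efficiency_index(p, m):.5f}")
```

Running this reproduces the comparison in Table 1: the highest index belongs to the fourth-order method that needs only four evaluations per iteration.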

Applications
In this section, we consider two well-known models related to mathematics, physics, and physical chemistry: a nonlinear model formed by the motion of a particle on an inclined plane and the Lennard-Jones potential, a renowned model that represents the interaction between neutral molecules or atoms. We also include some examples used by Chun [7] to elaborate the efficacy of the proposed algorithms. For the computational work, we implement codes in MAPLE software, with MATLAB for the graphical analysis, and the following stopping criterion is taken into account for all computations: |s_{n+1} − s_n| < ε. We display a comparative representation of the newly established methods introduced in this paper, Algorithm 7 (AG1), Algorithm 8 (AG2), Algorithm 9 (AG3), Algorithm 11 (AG4), and Algorithm 12 (AG5), with the second-order Ullah method (UM) [5], the third-order Farooq method (Algorithm 13) (FM) [28] with α = 0.9, and the Noor methods [22] {(Algorithm 2.8) (NR1), (Algorithm 2.12) (NR2), and (Algorithm 2.15) (NR3)}, to show that the proposed methods perform more efficiently; see Tables 2-4 and Figures 1-3. We obtain an estimated simple root rather than the exact one, based on the precision ε of the computer, and use ε = 10^{−15}. As far as the convergence criterion is concerned, it is required that the distance δ between two consecutive estimates of the zero is not more than 10^{−15}. In Tables 2-4, s_0 is the initial guess, (IT) represents the number of iterations, s_n is the approximate root, and f(s_n) is its corresponding functional value.
The computational order of convergence (COC) (see [9]) is computed to check the behavior of the proposed methods on the presented examples and is given by COC ≈ ln(|s_{n+1} − s_n| / |s_n − s_{n−1}|) / ln(|s_n − s_{n−1}| / |s_{n−1} − s_{n−2}|).
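The COC formula above can be evaluated from any four consecutive iterates. The sketch below applies it to Newton iterates for a hypothetical example, f(s) = s² − 2 (our own choice, not one of the paper's test problems), where the estimate should come out close to 2.

```python
import math

# Computational order of convergence from four consecutive iterates:
# COC ≈ ln(|s3 - s2| / |s2 - s1|) / ln(|s2 - s1| / |s1 - s0|)
def coc(s0, s1, s2, s3):
    return (math.log(abs(s3 - s2) / abs(s2 - s1))
            / math.log(abs(s2 - s1) / abs(s1 - s0)))

# Newton iterates for f(s) = s^2 - 2 starting from s = 1.5
its = [1.5]
for _ in range(3):
    s = its[-1]
    its.append(s - (s * s - 2) / (2 * s))

order = coc(*its)  # should be close to 2 for a quadratic method
```

In practice the COC is computed from the last iterates before the stopping criterion is met, where the asymptotic error behavior dominates.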
Example 1 (population growth model). We want to determine the value of λ which represents the constant birth rate of the population. For the computational work, we take s_0 = 3 as the initial estimate. The solution of this example, approximated to 16 decimal digits, is 0.1009979296857498. The numerical results for this problem are given in Table 2. Figure 1 shows the fall of the residuals for this example. It is clear from the computational results, in terms of the number of iterations, that the new fixed point iterative methods AG1, AG2, AG3, AG4, and AG5 are more efficient than the already known methods UM, FM, NR1, NR2, and NR3.
Example 2 (Lennard-Jones potential model [24]). Consider a specific model for the atomic potential referred to as the Lennard-Jones potential, which is a well-known model in atomic physics and physical chemistry,
where V_0 denotes the depth of the potential and μ is the length scale representing the distance at which the interparticle interaction between two atoms becomes zero. The function V(s) in (65) attains its minimum at s = 2^{1/6}μ. We choose V_0 = 1 and μ = 2; then, the actual minimizer of the function V(s) is s = 2^{7/6}. Now, differentiating V(s) in (65), the minimization problem is transformed into the problem of finding the solution of the nonlinear equation given by In this example, for the computational evaluations, we utilize s_0 = 1.4 as the initial estimate. The columns in Table 3 illustrate the comparison of the numerical results in terms of the number of iterations for this problem. Figure 2 shows the drop of the residuals for this example. It is concluded from Table 3 and Figure 2 that the effectiveness and performance of the proposed schemes are much better than those of other similar standard methods. The results of this example also show that the fixed point iterative methods converge more rapidly towards the solution than the existing ones.
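As a sketch of this minimization problem, the code below applies plain Newton iteration (not the paper's proposed schemes) to V′(s) = 0, assuming the standard Lennard-Jones form V(s) = 4V_0[(μ/s)¹² − (μ/s)⁶], which has depth V_0, zero at s = μ, and minimizer s = 2^{1/6}μ, consistent with the parameters stated above.

```python
# Newton's method on V'(s) = 0 for the assumed standard Lennard-Jones
# potential V(s) = 4*V0*((mu/s)**12 - (mu/s)**6), with V0 = 1, mu = 2.
V0, mu = 1.0, 2.0

def Vp(s):   # first derivative of V
    return 4 * V0 * (-12 * mu**12 / s**13 + 6 * mu**6 / s**7)

def Vpp(s):  # second derivative of V
    return 4 * V0 * (156 * mu**12 / s**14 - 42 * mu**6 / s**8)

s = 1.4  # initial estimate used in this section
for _ in range(100):
    step = Vp(s) / Vpp(s)
    s -= step
    if abs(step) < 1e-14:
        break
# s approaches the minimizer 2**(7/6) ≈ 2.2449
```

Even this baseline Newton iteration locates the minimizer; the comparison in Table 3 measures how many fewer iterations the proposed higher-order schemes require.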

Example 3 (transcendental and algebraic problems).
To numerically analyze the suggested algorithms, we consider the following transcendental and algebraic equations used by Chun [7]: In Table 4, we display the numerical results for the examples f_1(s), f_2(s), f_3(s), f_4(s), and f_5(s) to validate the theoretical results. The second column in Table 4 shows the number of iterations required to reach the stopping criterion (62). It is clear from the results obtained that the new methods need fewer iterations than the other methods. The columns in Table 5 represent the number of iterations for different nonlinear functions along with the initial guess s_0. Figure 3 shows the comparison of the iterative methods with respect to the number of iterations. A comparative representation of the number of iterations needed by the different methods and by our developed methods is presented, using the stopping criterion (62) with accuracy ε = 10^{−200}. It is clear from Table 5 that, setting the same convergence criterion for all the methods, the number of iterations required by the new methods remains smaller than the number of iterations needed by the other methods of the same order.

Conclusion
We have introduced a new modified family of recently developed iterative methods (Algorithms 2 and 3) by using a decomposition technique for solving nonlinear equations. Various new iterative methods of different orders have been constructed as special cases of the newly constructed family. The convergence of the newly proposed methods has been reviewed and inspected in order to establish their convergence order. In Tables 2-5 and Figures 1-3, we furnish a comparison, both numerical and graphical, of these strategies with a few known procedures by examining two models and several algebraic nonlinear equations. The numerical results and graphical depiction confirm the speed and superior performance of the methods with respect to the number of iterations, even when the accuracy is raised to ε = 10^{−200}.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare no conflicts of interest.