Study of Dynamical Behavior and Stability of Iterative Methods for Nonlinear Equation with Applications in Engineering

In this article, we first construct a family of optimal two-step iterative methods for finding a single root of a nonlinear equation using a weight-function procedure. We then extend these methods for determining all roots simultaneously. Convergence analysis is presented for both cases to show that the order of convergence is 4 in the case of the single-root finding methods and 6 for the simultaneous determination of all distinct as well as multiple roots of a nonlinear equation. The dynamical behavior is presented to analyze the stability of the fixed and critical points of the rational operator of the one-point iterative methods. The computational cost, basins of attraction, efficiency, log of the residual, and numerical test examples show that the newly constructed methods are more efficient as compared with the existing methods in the literature.


Introduction
Solving the nonlinear equation

f(s) = 0, (1)

is one of the oldest problems of engineering in general and of mathematics in particular. These nonlinear equations have diverse applications in many areas of science and engineering. To find the roots of (1), we look towards iterative schemes, which can be classified as schemes that approximate a single root of (1) and schemes that approximate all roots. In this article, we work on both types of iterative schemes. A lot of iterative methods of different convergence orders already exist in the literature (see [1][2][3][4][5][6][7][8][9][10][11]) to approximate the roots of (1). Ostrowski [7] defined the efficiency index I of these iterative methods in terms of their convergence order k and the number of function evaluations per iteration, say u, i.e., I = k^(1/u). An iterative method is said to be optimal according to the Kung-Traub conjecture [1] if k = 2^(u-1) holds. The aforementioned methods approximate one root at a time. However, mathematicians are also interested in finding all roots of (1) simultaneously. Simultaneous iterative methods are very popular because they have a wider region of convergence, are more stable as compared to single-root finding methods, and can be implemented for parallel computing as well. More detail on single as well as simultaneous determination of all roots can be found in [1,[12][13][14][15][16][17][18][19][20][21][22][23][24] and the references cited therein. The most famous single-root finding method is the classical Newton-Raphson method:

s_(i+1) = s_i - f(s_i)/f'(s_i). (4)

Method (4) requires one evaluation of the function and one of its first derivative per iteration to achieve optimal order 2, having efficiency index 2^(1/2) ≈ 1.41 by the Kung-Traub conjecture. Using the Weierstrass correction [15] in (4), we get the classical Weierstrass-Dochev method to approximate all roots of nonlinear equation (1), given as

s_i^(k+1) = s_i^(k) - f(s_i^(k)) / ∏_(j≠i) (s_i^(k) - s_j^(k)), i, j = 1, . . . , n. (6)

Method (6) has convergence order 2. Later, Aberth and Ehrlich [14] presented the 3rd-order simultaneous method given as

s_i^(k+1) = s_i^(k) - 1 / (N(s_i^(k)) - Σ_(j≠i) 1/(s_i^(k) - s_j^(k))),

where N(s_i) = f'(s_i)/f(s_i).
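To make the classical scheme above concrete, the following is a minimal Python sketch of the Newton-Raphson iteration (4); the test function f(s) = s^2 - 2 is our own illustrative choice, not one of the paper's test examples.

```python
def newton(f, fprime, s0, tol=1e-12, max_iter=50):
    """Classical Newton-Raphson iteration: s_{i+1} = s_i - f(s_i)/f'(s_i)."""
    s = s0
    for _ in range(max_iter):
        step = f(s) / fprime(s)
        s = s - step
        if abs(step) < tol:  # stop once the correction is negligible
            break
    return s

# Illustrative example: the positive root of f(s) = s^2 - 2 is sqrt(2).
root = newton(lambda s: s * s - 2.0, lambda s: 2.0 * s, s0=1.0)
```

Each iteration costs one evaluation of f and one of f', consistent with the optimal order 2 noted above.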
The main aim of this paper is to construct a family of optimal fourth-order single-root finding methods using a weight-function procedure and then to convert them into simultaneous iterative methods for finding all distinct as well as multiple roots of nonlinear equation (1). Using complex dynamics, we are able to choose those values of the parameters of the iterative methods that give a wider region of convergence with respect to the initial approximations.
Chun [26] gave the fourth-order optimal method (abbreviated as MM2), where u = f(y_i)/f(s_i). (9) Cordero et al. [3] proposed the fourth-order optimal method abbreviated as MM3. Chun [27] introduced the fourth-order optimal method abbreviated as MM4. Here, we propose the following family of iterative methods, where u = f(y_i)/f(s_i). (12) For iterative scheme (12), we have the following convergence theorem.

Theorem 1. Let ζ ∈ I be a simple root of a sufficiently differentiable function f: I ⊆ R ⟶ R in an open interval I. If s_0 is sufficiently close to ζ and Γ is a real-valued function satisfying Γ(0) = 0, Γ'(0) = 1, Γ''(0) = 4, and Γ'''(0) < ∞, then the convergence order of the family of iterative methods (12) is 4 and it satisfies the error equation (13).

Proof. Let ζ be a simple root of f, and s_i = ζ + e_i. By Taylor series expansion of f(s_i) about s = ζ, taking f(ζ) = 0, we get (14). Dividing (14) by the corresponding expansion of f'(s_i) gives the Newton correction; using the Taylor series again, we obtain the expansion of u. Now, taking Γ(0) = 0, Γ'(0) = 1, and Γ''(0) = 4 in equation (12) and simplifying gives the error equation (13). Hence, the family has fourth-order convergence.
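Scheme (12) itself is not reproduced in this extraction, so as an illustration of a two-step optimal fourth-order method with the same structure (a Newton predictor followed by a corrector that reuses f'(s_i)), here is the classical Ostrowski method in Python. It uses three function evaluations per iteration, matching the Kung-Traub optimal order 2^(3-1) = 4; it is a stand-in, not the paper's family.

```python
def ostrowski(f, fprime, s0, iters=4):
    """Ostrowski's two-step method (optimal fourth order):
       y  = s - f(s)/f'(s)                        (Newton predictor)
       s' = y - f(y)/f'(s) * f(s)/(f(s) - 2f(y))  (weighted corrector)"""
    s = s0
    for _ in range(iters):
        fs = f(s)
        if fs == 0.0:          # already at the root; avoid 0/0 below
            break
        y = s - fs / fprime(s)
        fy = f(y)
        s = y - (fy / fprime(s)) * fs / (fs - 2.0 * fy)
    return s

# Illustrative example: solve s^2 - 2 = 0 from s0 = 1.
root = ostrowski(lambda s: s * s - 2.0, lambda s: 2.0 * s, s0=1.0)
```

With fourth-order convergence, a handful of iterations already reaches machine precision from a reasonable starting guess.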

Construction of Simultaneous Methods
Suppose nonlinear equation (1) has n roots ζ_1, . . . , ζ_n, so that f(s) = ∏_(j=1)^n (s - ζ_j), which implies f(s_i)/f'(s_i) = 1/(Σ_(j=1)^n 1/(s_i - ζ_j)). Now, an approximation of f(s_i)/f'(s_i) is formed by replacing ζ_j with z_j as in (27). Using (27) in (4), we have the simultaneous scheme (28).
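As a concrete illustration of the simultaneous philosophy behind (28), the following Python sketch implements the classical Weierstrass-Dochev iteration (6) rather than the paper's sixth-order scheme, which is not reproduced here. The monic cubic with roots 1, 2, 3 and the starting values are our own illustrative choices.

```python
def weierstrass_step(f, approx):
    """One Weierstrass-Dochev sweep: s_i <- s_i - f(s_i) / prod_{j != i} (s_i - s_j)."""
    new = []
    for i, si in enumerate(approx):
        denom = 1.0
        for j, sj in enumerate(approx):
            if j != i:
                denom *= si - sj
        new.append(si - f(si) / denom)
    return new

f = lambda s: (s - 1.0) * (s - 2.0) * (s - 3.0)  # monic polynomial, roots 1, 2, 3
approx = [0.5, 1.8, 3.3]                          # one starting guess per root
for _ in range(20):
    approx = weierstrass_step(f, approx)
```

All three approximations are refined in each sweep, which is what makes such methods natural candidates for parallel computing.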

Convergence Analysis.
In this section, the convergence analysis of the family of simultaneous methods (M1-M5) is given in the form of the following theorem. Obviously, convergence of methods (28) follows from the convergence of methods (29) by Theorem 2 when the multiplicities of the roots are one.
Theorem 2. Let ζ_1, . . . , ζ_n be the simple roots of nonlinear equation (1). If the initial approximations of the roots are sufficiently close to the actual roots, respectively, then the order of convergence of method (28) is six.
Proof. Let ε_i = s_i - ζ_i and ε_i' = s_(i+1) - ζ_i be the errors in the approximations s_i and s_(i+1), respectively. Then, for distinct roots, we have the error relation of method (28); thus, for multiple roots, we have from (29) the corresponding relation (30). If it is assumed that the absolute values of all errors ε_j (j = 1, 2, 3, . . .) are of the same order, say |ε_j| = O(|ε|), then from (30) we obtain (32). Thus, (32) shows that the convergence order of methods M1-M5 is six. Hence, the theorem is proved.

Complex Dynamical Study of Families of Iterative Methods
Here, we discuss the stability of the family of iterative methods (BB1) only, within the framework of complex dynamics. The rational map arising from iterative method (BB1) is (33), where y = s - (f(s)/f'(s)) and u = (f(y)/f(s)).
Recalling some basic concepts of this theory (detailed information can be found in [2,4,6,8]), take a rational function R_f: C ⟶ C, where C denotes the Riemann sphere. The orbit of a point s_0 ∈ C is defined as the set {s_0, R_f(s_0), R_f^2(s_0), . . .}. An attracting point s* ∈ C defines the basin of attraction, R(s*), as the set of starting points whose orbits tend to s*.
Furthermore, the implementation of the dynamical plane of the rational operator corresponding to an iterative method divides the complex plane into a mesh of values, with the real part along the x-axis and the imaginary part along the y-axis. The initial estimates are colored depending on where their orbits converge, and thus the basins of attraction of the corresponding iterative methods are obtained. The scaling theorem allows a suitable change of coordinates that reduces the dynamics of the iteration of general maps to the study of a specific family of iterations of similar maps.
As iterative method (33) satisfies the scaling theorem, it allows the dynamical study of iterative function (BB1). The one-point iterative method (BB1) has a universal Julia set if there exists a rational map to which it is conjugated by a Möbius transformation.
Theorem 4. For the rational map Ω_g(s) arising from (33), where the Möbius transformation, considered as a map on C ∪ {∞}, is given by (35), we have the conjugated operator (36). Similarly, we can draw the following conclusions.

Theorem 6. For the rational map Ω_g(s) arising from (36), the fixed points of rational function (36) are s = 1, s = 0, and s = ∞. For the stability of the fixed points of iterative method (36), we calculate Ω_g'(s, β). It is evident from (36) that s = 0 and s = ∞ are always superattracting fixed points, but the stability of the other fixed points depends on the value of the parameter β. The operator Ω_g'(s, β) evaluated at s = 1 gives (46). Analyzing (46), as β ⟶ ±∞, we obtain the horizontal asymptote |Ω_g'(1, β)| = 0.
In the following result, we present the stability of the strange fixed point s = 1.

Theorem 7.
The character of the strange fixed point s = 1 is as follows. Let us consider β = a + ib as an arbitrary complex number. Then, the functions with which we examine the stability of iterative method (36) are given as follows. We observe that cr1 = 1/cr2 and cr3 = 1/cr4. Figure 2 presents the zones of stability of the strange fixed points. Fixed points 1 and 0 are represented by black dotted lines, while the critical points (-1, cr1, cr2, cr3, cr4) are represented by black, red, blue, green, and orange dotted lines, respectively (see Figure 3).

Theorem 8.
The only member of the family of iterative methods whose operator is always conjugated to the rational map s^2 is the element corresponding to β = 0.
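Theorem 8's conjugacy to s^2 mirrors a classical fact that can be checked numerically: Newton's method applied to a quadratic, say p(z) = z^2 - 1, is conjugated by the Möbius map M(z) = (z - 1)/(z + 1), which sends the roots ±1 to 0 and ∞, to the pure squaring map s ↦ s^2. The sketch below verifies this standard identity (not the paper's operator Ω_g) at a few sample points.

```python
def newton_map(z):
    """Newton iteration function for p(z) = z**2 - 1."""
    return z - (z * z - 1.0) / (2.0 * z)

def moebius(z):
    """Moebius map sending the roots 1 -> 0 and -1 -> infinity."""
    return (z - 1.0) / (z + 1.0)

# Conjugacy check: M(N(z)) should equal M(z)**2 away from the poles.
samples = [0.3 + 0.7j, -1.2 + 0.4j, 2.0 - 0.5j]
errors = [abs(moebius(newton_map(z)) - moebius(z) ** 2) for z in samples]
```

On the s-side the dynamics of s ↦ s^2 are transparent: |s| < 1 is attracted to 0 and |s| > 1 to ∞, with the unit circle as the Julia set, which is why such conjugacies are sought.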

Dynamical Planes.
The dynamical planes are generated similarly to the parametric plane. To produce them, the real and imaginary parts of the starting approximation are represented as the two axes over a mesh of 250 × 250 points in the complex plane. The stopping criteria are the same as in the parametric plane, but a different color is assigned to indicate the root to which the method converges, and black is used in the other case.
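The mesh procedure just described can be sketched as follows, using Newton's method on p(z) = z^3 - 1 rather than the paper's operator, and a coarser 100 × 100 mesh for speed. Each grid point is iterated and labeled by the root its orbit approaches, with -1 playing the role of the black "no convergence" color.

```python
import cmath

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of unity

def basin_label(z, max_iter=40, tol=1e-8):
    """Iterate Newton's map for z**3 - 1; return the index of the root reached, -1 otherwise."""
    for _ in range(max_iter):
        if z == 0:  # derivative vanishes; mark as non-convergent
            return -1
        z = z - (z ** 3 - 1.0) / (3.0 * z ** 2)
        for k, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return k
    return -1

# 100 x 100 mesh over the square [-2, 2] x [-2, 2] of starting points.
n = 100
mesh = [[basin_label(complex(-2 + 4 * i / (n - 1), -2 + 4 * j / (n - 1)))
         for i in range(n)] for j in range(n)]
```

Rendering `mesh` with one color per label reproduces the familiar three-lobed basins of attraction with their fractal boundary.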

Computational Aspect
Here, we compare the computational efficiency of the M. S. Petković method [32] and the new methods (M1-M5) given by (28). As presented in [32], the efficiency of an iterative method can be estimated using the efficiency index (52), where D is the computational cost and u is the order of convergence of the iterative method. The computational cost D is evaluated from the arithmetic operations per iteration, with each operation weighted according to its execution time. The weights used for division, multiplication, and addition plus subtraction are w_d, w_m, and w_as, respectively. For a given polynomial of degree m with n roots, the numbers of divisions, multiplications, and additions and subtractions per iteration for all roots are denoted by D_m, M_m, and AS_m. The cost of computation can then be calculated accordingly, and thus (52) becomes (53). The numbers of operations for a complex polynomial of degree m with real and complex roots, reduced to real arithmetic operations, are given in Table 2. Applying the efficiency index and the data given in Table 2, we calculate the percentage ratio ρ((M1 - M5), PJ6M) [32], showing the dominant efficiency of our methods (M1-M5) as compared with PJ6M.
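The comparison just described can be sketched in Python as follows. The operation counts and weights below are hypothetical placeholders, not the entries of Table 2, and the efficiency index is taken in the common Petković form E = log(order)/D, an assumption since (52) is not reproduced in this extraction.

```python
from math import log

def cost(divisions, multiplications, add_sub, wd=2.0, wm=1.0, was=0.3):
    """Weighted per-iteration operation count; the weights are illustrative."""
    return divisions * wd + multiplications * wm + add_sub * was

def efficiency(order, D):
    """Petkovic-style efficiency index E = log(order) / D."""
    return log(order) / D

def percentage_ratio(E_new, E_ref):
    """rho = (E_new / E_ref - 1) * 100; a positive value favors the new method."""
    return (E_new / E_ref - 1.0) * 100.0

# Two hypothetical sixth-order methods with different per-iteration costs.
E_new = efficiency(6, cost(10, 20, 30))   # cheaper method
E_ref = efficiency(6, cost(14, 24, 40))   # reference method
rho = percentage_ratio(E_new, E_ref)
```

Since both methods here share order six, the ratio reduces to the inverse ratio of their costs, which is exactly how a cheaper method of equal order dominates.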

Numerical Results
Here, some numerical examples are considered in order to demonstrate the performance of our family of two-step fourth-order single-root finding methods (BB1-BB5) and sixth-order simultaneous methods (M1-M5), respectively. We compare our family of optimal fourth-order single-root finding methods (BB1-BB5) with the methods MM1-MM4. The family of simultaneous methods (M1-M5) of convergence order six is compared with the M. S. Petković method [32] of the same order (abbreviated as the PJ6M method). All computations are performed using CAS Maple 18 with 2500 significant digits (64-digit floating-point arithmetic in the case of simultaneous methods). For single-root finding methods, the stopping criterion is as follows, whereas e_i = ‖f(s_i)‖_2 for simultaneous methods.
We take ϵ = 10^(-600) for the single-root finding methods and ϵ = 10^(-30) for the simultaneous determination of all roots of nonlinear equation (1). Numerical test examples from [22,[33][34][35]] are provided in Tables 3-12. In Tables 3, 4, 6, 7, 9, and 11, we present the numerical results for the simultaneous determination of all roots, while Tables 5, 8, 10, and 12 cover the single-root finding methods. In all tables, CO represents the convergence order, n the number of iterations, ρ the computational order of convergence, and CPU the computational time in seconds. The value of the arbitrary parameter β used in iterative methods BB1-BB5 and M1-M5 is 1.5 for test Examples 1-4. We observe that the numerical results of the single-root finding methods (BB1-BB5) as well as of the all-root finding methods (M1-M5) are better than those of MM1-MM4 and PJ6M, respectively, for the same number of iterations. We also record the CPU execution time; all calculations are done using Maple 18 on an Intel(R) Core(TM) i3-3110M CPU @ 2.4 GHz with a 64-bit operating system. We observe from the tables that the CPU time of methods M1-M5 is better than that of the PJ6M method, showing the efficiency of our methods (M1-M5) as compared with it.
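The computational order of convergence ρ reported in the tables can be estimated from three successive absolute errors via ρ ≈ log(e_(i+1)/e_i) / log(e_i/e_(i-1)). A small self-contained check, using Newton's method (order 2) on f(s) = s^2 - 2 as an illustrative stand-in for the paper's methods:

```python
from math import log, sqrt

def coc(e_prev, e_curr, e_next):
    """Computational order of convergence from three successive absolute errors."""
    return log(e_next / e_curr) / log(e_curr / e_prev)

# Generate Newton iterates for f(s) = s**2 - 2 starting from s0 = 1.
s, iterates = 1.0, []
for _ in range(4):
    s = s - (s * s - 2.0) / (2.0 * s)
    iterates.append(s)

errors = [abs(t - sqrt(2.0)) for t in iterates]
rho = coc(errors[1], errors[2], errors[3])  # expect a value close to 2
```

Only the last three errors that are still well above machine precision should be used, since once an iterate reaches machine accuracy the ratio degenerates.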

Applications in Engineering.
In this section, we discuss applications of the proposed methods to engineering problems.
For distinct roots, we used method (28), and for multiple roots, we used method (29).
Example 2 (for determination of all distinct and multiple roots, see [22]). Here, we consider another standard test function for the demonstration of the convergence behavior of the newly constructed methods. Consider the function with exact roots ζ_1 = 1, ζ_2 = 2, and ζ_3 = 2.5, as shown in Figure 14. The initial guessed values have been taken as s_1.

Example 3 (for uniform beam design, see [34]). Figure 15 shows a uniform beam subject to a linearly increasing distributed load. The equation for the resulting elastic curve is

y = (w_0 / (120 E I L)) (-s^5 + 2L^2 s^3 - L^4 s).

We have to determine the point of maximum deflection, which is the value of s where f'(s) = 0, i.e., the root of

f_3(s) = (w_0 / (120 E I L)) (-5s^4 + 6L^2 s^2 - L^4) = 0,

where L = 600 cm, E = 50,000 kN/cm^2, I = 30,000 cm^4, and w_0 = 2.5 kN/cm. The exact roots of (62) are ζ_1 = -600, ζ_2 = -268.328, ζ_3 = 268.328, and ζ_4 = 600, as shown in Figure 16.
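Example 3 is the classic maximum-deflection beam problem. Assuming the elastic-curve derivative f_3(s) = (w_0/(120EIL))(-5s^4 + 6L^2 s^2 - L^4), an assumption consistent with the stated roots ±600 and ±268.328, the four roots can be checked directly; `numpy.roots` stands in here for the paper's iterative schemes.

```python
import numpy as np

L_beam, E_mod, I_mom, w0 = 600.0, 50_000.0, 30_000.0, 2.5
c = w0 / (120.0 * E_mod * I_mom * L_beam)

# f3(s) = c * (-5 s^4 + 6 L^2 s^2 - L^4); coefficients from highest degree down.
coeffs = [-5.0 * c, 0.0, 6.0 * c * L_beam ** 2, 0.0, -c * L_beam ** 4]
roots = sorted(np.roots(coeffs).real)
# Analytically s^2 = L^2 or L^2 / 5, i.e. s = +-600 or +-600/sqrt(5) ~ +-268.328.
```

The physically meaningful solution is the root inside (0, L), namely s = L/sqrt(5) ≈ 268.328 cm, where the beam deflection is largest.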
The initial estimates for f_3(s) have been taken as follows.

Example 4 (for the continuous stirred tank reactor (CSTR), see [35]). We consider the isothermal continuous stirred tank reactor (CSTR). Components A and R are fed to the reactor at rates of Q and q - Q, respectively. The following complex reaction system then develops in the reactor (see [35]). This problem was tested by Douglas (see [36]) in order to construct simple feedback control systems. In his analysis of this system, he derived the following equation for the transfer function of the reactor with a proportional control system:

2.98 H_c (s + 2.25) / ((s + 1.45)(s + 2.85)^2 (s + 4.35)) = -1,

where H_c is the gain of the proportional controller. This control system is stable for values of H_c that yield roots of the transfer function having negative real parts. If we take H_c = 0, we get the nonlinear equation f_4(s) = s^4 + 11.50s^3 + 47.49s^2 + 83.06325s + 51.23266875 = 0.
The transfer function has four negative real roots, as shown in Figure 17, namely ζ_1 = -1.45, ζ_2 = ζ_3 = -2.85, and ζ_4 = -4.35. (68)
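The coefficients of f_4 factor exactly as (s + 1.45)(s + 2.85)^2 (s + 4.35), which can be verified numerically (numpy again stands in for the proposed methods). Note the double root at -2.85, which is why this example also exercises the multiple-root variants (29).

```python
import numpy as np

coeffs = [1.0, 11.50, 47.49, 83.06325, 51.23266875]

# Expand the claimed factorization (s+1.45)(s+2.85)^2(s+4.35) and compare.
factored = np.polymul(np.polymul([1.0, 1.45], [1.0, 2.85]),
                      np.polymul([1.0, 2.85], [1.0, 4.35]))

roots = np.roots(coeffs)  # all four roots have negative real parts
```

Since all roots lie in the left half-plane, the open-loop system is stable, consistent with the stability discussion above.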

Conclusion
Here, we have developed a family of five two-step single-root finding methods of optimal convergence order four (BB1-BB5) and five simultaneous iterative methods of order six (M1-M5), respectively. From Tables 3-10 and Figures 9, 10, and 12, we observe that our methods (BB1-BB5 and M1-M5) are superior in terms of efficiency, stability, CPU time, and residual error as compared with the methods MM1-MM4 and PJ6M, respectively [37].

Data Availability
No data were used to support this study.