An Integrated Genetic Algorithm and Homotopy Analysis Method to Solve Nonlinear Equation Systems

Solving nonlinear equation systems for engineering applications is one of the broadest and most essential numerical studies. Several methods and combinations have been developed to solve such problems, either by finding their roots mathematically or by formalizing them as an optimization task whose optimal solution is obtained through a predetermined objective function. This paper proposes a new algorithm for solving square and nonsquare nonlinear systems that combines the genetic algorithm (GA) and the homotopy analysis method (HAM). First, the GA is applied to find the solution. If it succeeds, the algorithm terminates at this stage, as the target solution has been determined. Otherwise, the HAM is initiated based on the initial guess and linear operator computed in the GA stage. Moreover, the GA is utilized to calculate the optimum value of the convergence control parameter (h) algebraically, without plotting the h-curves or identifying the valid region. Four test functions are examined to verify the proposed algorithm's accuracy and efficiency. The results are compared to the Newton HAM (NHAM) and the Newton homotopy differential equation (NHDE) methods and corroborate the superiority of the proposed algorithm in solving nonlinear equation systems efficiently.


Introduction
Various applications are modeled via systems of ordinary, partial, or even fractional differential equations [1][2][3][4][5][6][7]. Solving such systems can challenge mathematicians, and several iterative methods have been proposed to solve nonlinear equation systems, as summarized in the literature [8][9][10][11]. The homotopy analysis method (HAM) [12][13][14][15][16] is a well-known semianalytic approach for approximating series solutions of nonlinear equations. The HAM's characteristics, including the initial approximation, auxiliary linear operator, and auxiliary function, provide a powerful tool for nonlinear equation systems. Two types of convergence arise in the HAM: the convergence of the calculated series solutions to a finite value of the variable x in the nonlinear problem's domain, and the convergence of this value to the nonlinear equation's solution. The convergence region and the convergence rate of the HAM's series solutions depend on a convergence control parameter (h or c_0) introduced by Liao [17]. After the initial approximation, auxiliary linear operator, and auxiliary function are selected, the HAM provides one family of solutions. According to the value of this parameter, a member of this family is selected as an approximate solution to the nonlinear equation.
Consequently, the convergence region and the convergence rate of the series solutions calculated by the HAM depend on the convergence control parameter [18][19][20]. As proposed by Liao [17], the convergence control parameter is obtained by plotting the h-curves. A valid region (R_h) is a horizontal segment obtained by plotting curves of specific quantities versus h. Acceptable values of the convergence control parameter can be determined within this valid region by trial and error. However, the h-curve approach cannot give the optimal value of h in R_h. Several techniques have therefore been developed to accelerate the convergence of the series solution.
In [21, 22], an optimal homotopy asymptotic method (OHAM) was suggested as a HAM alternative. An auxiliary function is assumed based on a set of unknown constants, which are determined after applying the residual conditions. As the approximation order increases, solving the resulting nonlinear equations becomes more complex and CPU-time-consuming. In [23], a one-step optimal HAM was proposed for estimating the convergence control parameter to overcome this disadvantage. The main difference between the two approaches is that the one-step optimal HAM solves one algebraic equation at each order of approximation, which reduces the CPU time; it is consequently easier to apply using symbolic computation codes. However, convergence analysis is not guaranteed using this approach alone, so a multiple-step optimal approach as in [21] must be applied, followed by the one-step optimal procedure. In [24, 25], an approach based on adding a second auxiliary parameter to the zeroth-order deformation equation was proposed to further enhance the convergence characteristics. The additional parameter introduces a new dimension in the convergence region, allowing auxiliary operators to be selected more freely and generalizing the homotopy analysis method to a broader range of nonlinear problems. However, the higher-order derivatives are computationally intensive; as a result, the approach is recommended only for low-order homotopy iteration schemes.
Another way to determine the convergence control parameter's optimal value is to minimize the squared residual error, as in [26][27][28]. This residual is discretized by numerical rules such as the trapezoidal rule, as in [24, 29]. In [30], the residual error was discretized using Chebyshev points; the proposed approach determines the convergence control parameter's optimal value by minimizing the norm of the discrete residual function at each HAM approximation order and could find acceptable approximations for a class of nonlinear differential equations. In [31], a new scheme based on stochastic arithmetic (SA) was proposed to find the optimal convergence control parameter.
This scheme was called the CESTAC method, and the CADNA library is applied to implement it. Although the proposed approach obtained the optimal convergence control parameter and the optimal number of iterations, plotting the h-curves is still needed because the control parameter's proper initial value is chosen from the convergence region. Another way to calculate the convergence parameter is to minimize the residual as the objective function of a formulated optimization problem; the solution is then the value of the convergence parameter that reduces the residual to almost zero. The proposed optimization problem is classified as a type of general equilibrium problem [32]. Then, traditional methods can be applied to obtain the solution. However, they have limitations because of the derivative calculations and the influence of the initial guess. These previous modifications provided enhanced approaches for calculating the convergence control parameter, but they still have severe limitations, including the following:
(i) Plotting h-curves is a necessity
(ii) Convergence is hardly achieved without combining the method with other schemes
(iii) The calculations of higher-order derivatives are computationally intensive
(iv) The approaches that decrease the symbolic calculations are only suitable for low-dimensional systems
(v) The absence of rules for defining the required number of terms in the HAM series increases the computational time and slows down the convergence
The other type of modification depends on combining the HAM with other conventional schemes for solving nonlinear systems. These combinations aim to improve the traditional methods' performance and to avoid (if possible) exploring the h-curves to find the optimal value of the convergence control parameter h. In [33], a Newton homotopy differential equation (NHDE) was proposed to find the solution of nonlinear systems initiated from a poor initial approximation.
The procedure depends on constructing the Newton homotopy equation of the nonlinear system, which is then differentiated to create a linear ordinary differential equation in the two unknowns (x, q). Finally, the obtained equation is solved by the Euler method to obtain the desired solution values. Although this algorithm does not need to search for the optimal value of h, it depends on calculating partial derivatives. In large systems, calculating the partial derivatives imposes a heavy burden and is affected by singularity. In [34], another combination, of the Newton-Raphson scheme and the HAM, was proposed to construct a more efficient numerical algorithm named the Newton homotopy analysis method (NHAM). In this method, the value of h is determined by the Newton-Raphson scheme. However, this method was applied to nonlinear equations of only one variable and was not generalized to nonlinear systems.
In [35], a combination of the Newton method and the HAM (NHAM) was considered to solve algebraic and transcendental equations. It aimed to accelerate the HAM's convergence by beginning the calculations with the HAM to obtain a new initial point, then applying the Newton method from that point. If the Newton method does not converge after some iterations, the HAM is invoked again, initiated with the Newton method's solution. Although this method accelerated the convergence of the HAM and enabled it to start from any initial guess, it depends on defining an approximate value of h, which requires plotting h-curves or choosing h before starting the calculations. Choosing the convergence parameter at the beginning of the problem may lead to estimating unneeded HAM terms, affecting the convergence rate. Moreover, it depends on the Jacobian matrix, which is seriously affected by singularities. These modified methodologies have limitations summarized as follows:
(i) There is a need for plotting h-curves
(ii) The calculations of partial derivatives increase the computational burden
(iii) Problems of the traditional methods, like Jacobian singularity and stiffness, hinder the convergence
(iv) The undetermined number of HAM series terms affects the computation time
Genetic algorithms (GAs) are search-based algorithms founded on the concepts of natural selection and genetics. Holland developed them as a subset of evolutionary algorithms [36]. The algorithm begins by creating a pool, or population, of possible solutions to the given problem. These solutions then undergo recombination and mutation (as in natural genetics), producing new children, and the process is repeated over various generations. GAs have been used to solve many applications and optimization problems [37][38][39][40][41]. Genetic algorithms are randomized, but they perform much better than random local search.
GAs have various advantages that make them popular and suitable for multiple applications. They do not require any derivative information. They exhibit massive parallelism that helps explore the global search space, making them more efficient than traditional methods in this respect. They also optimize both continuous and discrete functions with single or multiple objectives, and they provide a list of "good" solutions rather than a single solution.
They are useful when the search space is enormous and many parameters are involved. Several hybridization techniques and combinations have been developed based on GAs [42][43][44][45][46]. These hybridization techniques aim to exploit the GA's advantages by accelerating the hybrid method, enhancing its initial conditions, reducing its computational burden, or avoiding gradient-based derivative calculations.
This work aims to derive an improved algorithm combining the benefits of the previous modifications of the HAM. It also seeks to use optimization methods in calculating the residual to determine the required number of terms in the series. Finally, it benefits from the parallelism and global convergence of the genetic algorithm. For this aim, a hybrid algorithm combining the genetic algorithm (GA) and the HAM is proposed to solve nonlinear equation systems. The proposed algorithm (GAHAM) first tries to determine the target solution through the GA without involving the HAM. If the GA fails to converge to the solution on its own, the HAM phase is utilized for this task. The GA then enhances the HAM's performance by introducing an improved initial guess. Moreover, the GA is used within the HAM calculations to determine the optimum value of the convergence control parameter without plotting the h-curves. The contributions of this work can be summarized as follows:
(i) The proposed combination of the GA and the HAM successfully solves nonlinear equation systems with less computational time and effort
(ii) The GA initiates the HAM with an improved initial guess if it fails to converge by itself, which improves the HAM procedure's performance and decreases the required number of HAM terms
(iii) The GA is used in the HAM calculations to determine the convergence parameter, which dispenses with the plotting of h-curves and decreases the computational burden
(iv) As all calculations in this algorithm are done algebraically, they can easily be repeated in high-dimensional or slowly convergent problems
The main benefits of the proposed algorithm are stated as follows:
(1) If the genetic algorithm succeeds in calculating the solution, neither the HAM's symbolic calculations nor the gradient calculations of the traditional methods are needed; as a result, the computational effort and time are reduced.
(2) If the genetic algorithm fails to detect the solution, it introduces an enhanced initial guess and linear operator to the HAM, yielding faster convergence and fewer symbolic computations.
(3) If the HAM is needed, the genetic algorithm is applied to calculate the convergence control parameter's optimum value. The genetic algorithm calculates that value without plotting the h-curves, decreasing the computation time and accelerating the convergence to the required solution.
The paper is organized as follows. Section 2 illustrates the arithmetic criteria, the recurrence relation of the HAM, and other details of the proposed algorithm with a flowchart. Section 3 shows the results of the proposed algorithm for four test functions compared with the results of the Newton homotopy differential equation (NHDE) [33] and the Newton homotopy analysis method (NHAM) [35]. Section 4 presents the result analysis and concluding remarks. Finally, Section 5 summarizes the conclusions.
Figure 1 illustrates the flowchart of the proposed algorithm. After choosing a random initial guess (X_o, F_o), the GA is applied first, resulting in a new point (X_n, F_n). If the resulting residual value is less than the specified tolerance, ‖F_n‖ ≤ Tol, this point is the solution of the investigated system, and the algorithm terminates without involving the HAM. Unfortunately, this is not guaranteed for all problems. In that case, the point resulting from the GA stage is utilized as an improved initial guess for the HAM in the second stage, where the HAM uses a definite number of terms in the solution series (X_o = X_n, F_o = F_n). The choice of this number is discussed in the section of concluding remarks. In stage 3, the HAM computations are completed, while the GA is applied to calculate an optimum value of the convergence control parameter without plotting h-curves.

The Proposed Algorithm
The new point (X_n, F_n) is obtained, and its residual is compared to the desired tolerance. If the terminating condition is satisfied, the process terminates, having used the GA and the HAM sequentially. If the solution is not reached, the algorithm is repeated from the first stage, with the initial population based on the point resulting from the HAM procedure. The described computational stages are defined as follows.
The system of nonlinear equations is defined as F(X) = 0, where X = [x_1, x_2, ..., x_n] and F = [f_1, f_2, ..., f_m], for m nonlinear equations in n unknowns. The residual function is calculated as the norm ‖F(X)‖, evaluated first at an initial random point X_o with residual ‖F_o‖. Assume that the enhanced point is X_1 = [x_11, x_12, ..., x_1n]. As a result of improving the initial guess, it is expected that ‖F_1‖ < ‖F_o‖. Figure 2 shows an assumed relation between the initial and the improved residuals at two points, x_o1 and x_11; every variable in the system has components at these two points. According to Figures 2(a) and 2(b), if a line segment joins the pairs (x_o1, ‖F_o‖) and (x_11, ‖F_1‖), there is a proposed angle θ between this line segment and the vertical line through x_o1. The residual may decrease with a decrease in the variable, as shown in Figure 2(a), or with an increase in the variable, as shown in Figure 2(b). Then, the change in the variable x_o1 can be calculated from the tangent of θ, as in (6). Because F_1 is still unknown, it is assumed that the first stage reaches the solution, i.e., ‖F_1‖ ≈ 0. Then, the improved initial point is calculated as X_1 = X_o + α Tan_n, where Tan_n = [tan θ_1, tan θ_2, ..., tan θ_n] (see Algorithm 1). It is noticeable that the number of angles equals the number of unknowns in the nonlinear system.
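For concreteness, the residual norm used as the fitness measure throughout the algorithm can be sketched as follows; this is a minimal sketch, and the 2×2 system `F` is a hypothetical example, not one of the paper's test functions.

```python
import numpy as np

def residual_norm(F, X):
    """Euclidean norm ||F(X)|| of the residual vector for a system of
    m equations in n unknowns."""
    return float(np.linalg.norm(np.asarray(F(X), dtype=float)))

# Hypothetical 2x2 system: f1 = x^2 + y - 2, f2 = x - y, root at (1, 1)
F = lambda X: [X[0] ** 2 + X[1] - 2.0, X[0] - X[1]]

print(residual_norm(F, [1.0, 1.0]))  # → 0.0
print(residual_norm(F, [0.0, 0.0]))  # → 2.0
```

The norm vanishes exactly at a root, so comparing it against the tolerance Tol is the stopping test used in every stage of the algorithm.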
As illustrated in (6), the variable change depends on the tan function, whose value can be any real number in (−∞, ∞). As a result, the values of the tan function cannot be entered directly into the GA as a search domain, unlike the values of θ. Therefore, the angles' values represent the GA chromosomes. Their domain is defined for i = 1, 2, ..., n (the number of independent variables) so that the new initial point has a residual smaller than that of the initial random guess. According to the proposed angle domain, the angle should lie in the third or fourth quadrant relative to ‖F_o‖, which confirms that the new residual will be less than the initial residual ‖F_o‖. An additional parameter α is proposed to adjust the step size and accelerate the calculations. The domain of this parameter is (0, ∞); it is entered through the tan function of an angle limited to [0, π/2). As a result, the total number of unknowns in the n-unknown system becomes n + 1.
After constructing the chromosomes based on these intervals, the values of tan θ_i and α are calculated, and the improved initial point is formed. The optimization problem in the GA stage is formulated as minimizing the residual ‖F(X_o + α Tan_n)‖ over the angle domains, where i = 1, 2, ..., n and θ_{n+1} ∈ [0, π/2). After completing the calculations in the GA stage, the resulting point is examined as a candidate optimum solution. If its residual is less than the predetermined tolerance, the algorithm stops after the first stage using the GA, and the obtained solution is confirmed. Otherwise, the resulting point is entered into the HAM stage as an enhanced initial guess.
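The construction of one GA individual from an angle chromosome can be sketched as follows, using the update X_i = X_o + α Tan_n from Algorithm 1; the function name `ga_candidate` and the example angles are illustrative assumptions.

```python
import numpy as np

def ga_candidate(X0, thetas):
    """Build one GA individual from a chromosome of n + 1 angles, using
    the update X_i = X_o + alpha * Tan_n from Algorithm 1, where
    Tan_n = [tan(theta_1), ..., tan(theta_n)] and alpha = tan(theta_{n+1})."""
    X0 = np.asarray(X0, dtype=float)
    thetas = np.asarray(thetas, dtype=float)
    tan_n = np.tan(thetas[:-1])   # direction components
    alpha = np.tan(thetas[-1])    # step-size parameter, theta_{n+1} in [0, pi/2)
    return X0 + alpha * tan_n

# Hypothetical example: 2 unknowns, so the chromosome holds 3 angles
X_new = ga_candidate([1.0, -1.0], [np.pi / 4, -np.pi / 4, np.pi / 4])
```

Mapping the unbounded tan values through bounded angle domains is what lets the GA search a finite chromosome space while still reaching any real-valued variable change.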

Stage 2: The HAM Procedure.
Based on the initial guess realized in the GA stage, the HAM is initiated. The solution is represented as a homotopy in the embedding parameter q ∈ [0, 1], where X_o is the initial guess: at q = 0 the homotopy yields the initial guess, and at q = 1 it yields the solution of the system.

Mathematical Problems in Engineering
X = X_o + qX_1 + q^2 X_2 + q^3 X_3 + ... + q^r X_r = Σ_{k=0}^{r} q^k X_k. The following homotopy function is considered to solve (1).
Then, (12) is rewritten in terms of the linear operator L and the convergence control parameter h.
Consider (16) with a nonsingular matrix B; the deformation equation, called the zeroth-order deformation equation, is calculated as in (17). The matrix B is calculated from the results of the first stage: both the enhanced X_o and F_o are taken from the GA stage. The matrix X_o is nonsquare. According to [1, 2], the product (X_o^T X_o) is invertible, and the pseudoinverse of X_o can be obtained from the formula X_o^+ = (X_o^T X_o)^{-1} X_o^T. With B so calculated, (17) is rewritten as (21). Differentiating (21) with respect to q results in (22). After substituting q = 0, and since F is a function of the vector X, which in turn is a function of the embedding parameter q, the derivative is computed as dX/dq|_{q=0} = X_1.
From (23), the term X_1 is calculated. Differentiating (22) with respect to q gives (26), which is rewritten as (27); substituting q = 0 in (27) results in (28). Because d²X/dq²|_{q=0} = 2X_2, and using (23), (28) is rewritten so that, multiplying both sides by B^{-1}/2, the term X_2 is calculated. Substituting q = 0 in (30) yields the next-order relation. Because d³X/dq³|_{q=0} = 6X_3, and from (13), (32) is rewritten; because d²X/dq²|_{q=0} = 2X_2, (33) is rewritten, and, multiplying both sides by B^{-1}/6, the term X_3 is calculated. By repeating the same procedure, the r-th-order deformation equation is obtained. This can be used to determine the solution series of algebraic equation systems. In general, the series term Σ_{k=0}^{r} q^k X_k at q = 1 is calculated as in (36). According to (36), the solution series obtained at the end of this stage is a function of the convergence control parameter (h). The last step is applying the GA to determine the optimum value of h and, therefore, the final solution.
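The left pseudoinverse X_o^+ = (X_o^T X_o)^{-1} X_o^T used above for the nonsquare case can be sketched as follows; the matrix `A` is a hypothetical full-column-rank example.

```python
import numpy as np

def left_pseudoinverse(A):
    """Left pseudoinverse A^+ = (A^T A)^(-1) A^T, valid when A has full
    column rank so that A^T A is invertible."""
    A = np.asarray(A, dtype=float)
    return np.linalg.inv(A.T @ A) @ A.T

# Hypothetical nonsquare 3x2 matrix with full column rank
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
A_plus = left_pseudoinverse(A)
print(np.allclose(A_plus @ A, np.eye(2)))  # → True
```

In practice `np.linalg.pinv`, which uses the SVD, gives the same result more robustly when A^T A is poorly conditioned.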

Stage 3: Calculating the Optimal Value of Convergence Control Parameter and Final Solution.
In this stage, the residual function, which is a function of the convergence control parameter (h), serves as the GA's objective function.
The domain of h must be defined before starting the GA optimization process. As illustrated earlier, past contributions depended on plotting the h-curves, where the resulting horizontal segment gives the desired search domain. In contrast, the proposed algorithm has the advantage of not requiring these curves. As reported in [4, 5], h ∈ R_c and R_c ⊂ R. Accordingly, the convergence control parameter's search domain is defined as (−∞, ∞). The domain of the convergence control parameter is then defined according to (37) to avoid indefinite values.
According to (37), the search for the optimum value of h over the indefinite domain of real numbers is performed through a search over the definite ϕ domain, and the optimization problem in the GA at this stage is formulated accordingly. At the end of this stage, the convergence control parameter's optimum value is obtained, and a new solution is calculated. Hence, stages 2 and 3 together represent the HAM procedure based on the enhanced initial guess. If the resulting HAM solution is optimal, the algorithm terminates and the solution is displayed. Otherwise, it is fed back to the first stage as a new initial guess, and the algorithm starts again until the desired tolerance is reached or no further improvement in the residual function is achieved. Then, the final solution of the nonlinear equations is declared.
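A minimal sketch of this stage follows, with a dense grid over the bounded angle standing in for the GA; the function `optimize_h`, the sampling density, and the toy residual are illustrative assumptions, and in the real algorithm the residual would come from the HAM series evaluated at q = 1.

```python
import numpy as np

def optimize_h(residual_of_h, n_samples=2001, phi_max=1.5):
    """Search for the convergence control parameter h over (almost) the
    whole real line by sampling a bounded angle phi and mapping
    h = tan(phi); a dense grid stands in for the paper's GA search."""
    phis = np.linspace(-phi_max, phi_max, n_samples)
    hs = np.tan(phis)                       # phi near ±pi/2 maps to large |h|
    residuals = np.array([residual_of_h(h) for h in hs])
    best = int(np.argmin(residuals))
    return float(hs[best]), float(residuals[best])

# Hypothetical residual with a known minimizer at h = -1
residual = lambda h: abs(3.0 * (h + 1.0))
h_opt, r_opt = optimize_h(residual)
```

The tan mapping is what makes the unbounded domain (−∞, ∞) searchable through a finite chromosome interval, which is the point of defining the ϕ domain in (37).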
It is worth mentioning that, according to this sequence of steps, the number of terms calculated by the HAM is fixed before starting the HAM stage. In the proposed algorithm, it is set to five; the reason for this choice is explained in the section on test functions. For further illustration of the algorithm's procedure, Algorithm 1 shows its pseudocode.
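The overall three-stage flow can be sketched as follows. This is a simplified stand-in, not the paper's implementation: the GA stage is replaced by a greedy random search over angle chromosomes with steps scaled by the current residual (an assumption motivated by the Figure 2 geometry), and the HAM stage is left as a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(F, X):
    # Euclidean norm of the residual vector ||F(X)||
    return float(np.linalg.norm(np.asarray(F(X), dtype=float)))

def ga_stage(F, X0, pop=50, gens=50):
    """Greedy stand-in for the paper's GA stage: sample chromosomes of
    n + 1 angles, build candidates X = X0 + r * alpha * tan(theta)
    (steps scaled by the current residual r), and keep the best found."""
    best_X = np.asarray(X0, dtype=float)
    best_r = residual(F, best_X)
    n = best_X.size
    for _ in range(gens):
        for _ in range(pop):
            thetas = rng.uniform(-1.4, 1.4, n)      # direction angles
            alpha = np.tan(rng.uniform(0.0, 1.4))   # step-size parameter
            cand = best_X + best_r * alpha * np.tan(thetas)
            r = residual(F, cand)
            if r < best_r:                          # keep improvements only
                best_X, best_r = cand, r
    return best_X, best_r

def gaham(F, X0, tol=1e-6, max_outer=20):
    """Outer loop of the proposed scheme: run the GA stage; stop if the
    residual meets the tolerance; otherwise the HAM stage would refine
    the guess (left as a placeholder here) and the loop restarts."""
    X = np.asarray(X0, dtype=float)
    for _ in range(max_outer):
        X, r = ga_stage(F, X)
        if r <= tol:
            break
        # HAM stage (stages 2-3 of the paper) would be seeded with (X, r) here
    return X, r

# Hypothetical 2x2 linear system with root (2, 1)
F = lambda X: [X[0] + X[1] - 3.0, X[0] - X[1] - 1.0]
X, r = gaham(F, [0.0, 0.0])
```

On easy systems the GA-like stage alone reaches the tolerance, mirroring the paper's observation that some test cases converge without ever entering the HAM stage.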

Test Functions and Result Analysis
Four test functions were utilized to investigate the performance of the proposed algorithm. The results are compared to those of the Newton HAM (NHAM) and the Newton homotopy differential equation (NHDE). For greater generality, both square and nonsquare systems are considered as test functions. In the nonsquare cases, the matrices X_o^+ and B^+ are calculated using the pseudoinverse method. The numbers of variables and equations are n and m, respectively. All these test functions are described in detail in [34], [41-44].
The proposed algorithm is coded in MATLAB 2020a and implemented on a computer with an Intel® Core™ i7 CPU, 3.2 GHz, and 16 GB RAM. For the computational studies, a population size of 50 and 50 generations are used; the crossover fraction is 0.8; the migration interval is 20; and the migration factor is selected to be 0.2.

Selected Test Function 1 (Beale Function) [42]

As seen in [41], the first test function is the Beale function. The exact solution of this system is X = [3.0, 0.5]. The comparison starts with the NHAM. Three different cases of initial guesses are proposed, and the results are summarized in Table 1. In both the NHAM and the GAHAM, the number of calculated terms is five. This choice was made carefully: it achieved the goal of improving or reaching the solution in the tested cases while reducing the time-consuming symbolic calculations.
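A sketch of the Beale problem in residual form, written as three equations in two unknowns (a nonsquare system) with root X = [3.0, 0.5]; the function name `beale_system` is illustrative.

```python
import numpy as np

def beale_system(X):
    """Beale test problem written as a nonsquare residual system of
    three equations in two unknowns, with exact root X = [3.0, 0.5]."""
    x, y = X
    return np.array([1.5 - x + x * y,
                     2.25 - x + x * y ** 2,
                     2.625 - x + x * y ** 3])

print(np.linalg.norm(beale_system([3.0, 0.5])))  # → 0.0
```

All three constants (1.5, 2.25, 2.625) and the root are exactly representable in floating point, so the residual at the root is exactly zero.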
According to Table 1, the NHAM in case 1 converged to the exact solution after one iteration of the HAM followed by 56 iterations of Newton's method. Completing the calculations required plotting the h-curve of X_1 to identify the valid region and, therefore, the optimum value of the convergence parameter. Figure 3 shows the h-curve and the corresponding valid region for the NHAM. The NHDE approach diverges despite increasing the number of iterations to 500. In the GAHAM, the solution is obtained after two iterations of the HAM and one iteration of the GA stage; each HAM iteration comprises the last two stages of the proposed algorithm. All calculations proceeded algebraically without drawing the corresponding h-curve, which accelerated the computations and yielded accurate results.
For case 2, the NHAM used one HAM iteration followed by 77 Newton iterations, and the NHDE needed 22 iterations to converge. In contrast, the GAHAM required no HAM iterations: convergence occurred after one iteration of the GA stage.
For case 3, the NHAM converges after two iterations of the HAM and 111 Newton iterations. It should be noted that, after the first HAM iteration, 100 Newton iterations were performed without convergence, which led to the second HAM iteration; after the second HAM iteration and 11 further Newton iterations, the final solution was reached. Furthermore, calculating the convergence parameter from the curves was time-consuming.
The NHAM's sequence prevented moving to the Newton-method step until the convergence-control-parameter calculations were complete, which slowed the convergence and increased the computational burden. Figure 4 shows the h-curves of the HAM iterations used to detect the valid region of the convergence control parameter. The NHDE converges after 105 iterations. In contrast, the GAHAM converges after one iteration of the GA stage, with no HAM iterations needed. The GAHAM thus requires the minimum number of iterations. Besides, it can converge from any initial guess, without plotting h-curves as in the NHAM and without the Jacobian required by the NHDE, thereby avoiding the singularity or divergence that may appear with the NHDE method.

Algorithm 1:
Initialize:
(i) the size of the population (N)
(ii) the maximum number of generations (G)
(iii) the initial random guess (X_o) with its corresponding residual function F(X_o)
(iv) the (n + 1) angles (θ_i) with their respective domains, where n is the number of system variables
(v) the number m of HAM series terms
(vi) the angle ϕ with its domain
(vii) the desired tolerance for stopping, Tol
While the stop criterion is not satisfied, do
Stage 1:
For It = 1 : G do
For i = 1 : N do
Tan_n = [tan θ_1, tan θ_2, ..., tan θ_n]
α = tan θ_{n+1}
Calculate the new individual X_i = X_o + α Tan_n
Calculate its fitness function

Selected Test Function 2 (Kowalik and Osborne Function) [43]

According to [41], the second test function is the Kowalik and Osborne function. The exact solution of this test function is X = [0.1922, 0.2257, 0.1426, 0.1484], and the results are summarized in Table 2. Figures 5(a) and 5(b) illustrate the variation of the residual versus iterations for each method: Figure 5(a) shows the results of the three methods over the whole number of iterations, while Figure 5(b) details the convergence during the first five iterations to show the difference in the calculated residuals of the NHAM and the GAHAM.
The GAHAM converged at the third iteration, whereas the NHAM required many iterations to get near the solution; hence, the NHAM needs a stopping condition for the case of no further improvement in the resulting solution. The NHDE found the exact solution after 495 iterations, which means the GAHAM was faster than the other two methods and more accurate than the NHAM.

Selected Test Function 3

According to [34], selected test function 3 is a system whose Jacobian may become singular, so divergence of the methods in [34] may result under certain conditions and initial guesses. Indeed, when the NHAM was applied to this function with a random initial guess and a five-term HAM series in each HAM iteration, it did not converge: the Newton method used 100 iterations, and the HAM used 15 iterations within the NHAM. Table 3 summarizes the results of the three methods. Figure 6(a) shows the residuals of the three methods up to 12 iterations; the GAHAM curve is not distinguishable in this figure because of the NHAM's high residual. On a compressed residual scale, Figure 6(b) shows that the residual decreases faster in the GAHAM and that it reaches the exact solution faster than the NHDE method.

Selected Test Function 4 (Penalty Function II) [44]

The fourth test function is the Penalty Function II, whose exact solution is given in [44]. As shown in Table 4, both the NHAM and the NHDE diverge. The NHAM was applied for five HAM iterations, each followed by up to 100 Newton iterations; however, the Newton iterative process was interrupted in each stage after ten consecutive diverging iterations, so the total number of Newton iterations was 61. The NHDE also diverged, even when the number of iterations was increased to 600. The exact solution was detected by the GAHAM after nine iterations of the three stages. Figure 7(a) shows the residuals of the three methods during all iterations, and Figure 7(b) shows the results during the first ten iterations in more detail. It shows that the GAHAM decreases the residual continuously during the iterations until the exact solution is reached.

Analysis of the Results and Concluding Remarks
Comparing the overall results of the proposed GAHAM with those of the NHAM and the NHDE methods raises the following remarks.
(i) The number of terms can be identified before the calculations, enabling control of the symbolic computations. This number varies according to the system's complexity and nonlinearity.
(ii) Drawing the h-curve to find the convergence parameter's optimum value is not needed, because the proposed domain includes all real numbers and therefore contains the valid region.
(5) The specified criterion and domain of angles in the GA stage allow convergence to the solution. As shown in the test functions, the GA stage could find the system's solution in some problems without involving any HAM stage.
(6) The proposed algorithm may omit the additional step-size parameter (α) in extensive systems and fast-convergence cases.
(7) The advantages of this approach can be summarized as follows:
(i) It benefits from the genetic algorithm's parallelism to find the solution without derivatives, initial-guess considerations, or (where possible) the symbolic calculations of the HAM.
(ii) If the HAM is required, the proposed approach introduces an initial guess as close as possible to the optimum solution. As a result, the symbolic HAM calculations are reduced, saving computational burden and time.
(iii) The proposed approach calculates the convergence parameter of the HAM without plotting the h-curves, which accelerates the computations and increases the ability to repeat calculations.
(iv) The ability to repeat the calculations enables computing the minimum number of HAM terms required to reach the optimum solution, which accelerates convergence and reduces the required computational time and burden.
(9) It is known that each algorithm or approach has its own limitations. The limitations of the proposed GAHAM algorithm are as follows:
(i) It depends on defining the number of HAM terms before starting the calculations. There is no definite rule for this, so it is determined according to the residual yielded by the GA stage: if this residual is not high, the required number is expected to be low. The present study found that the required number of terms is at most five.
(ii) The effect of the GA parameters has not been investigated in this study; their effect on the convergence of the algorithm should be examined in future work.

Conclusion
This paper proposes a new algorithm (GAHAM) combining the HAM and the GA for solving nonlinear equation systems. The algorithm calculates the solution in three sequential stages. First, the GA is utilized to find either the system's solution or the best initial guess for the HAM. If the first stage fails to reach the target solution, the second stage is performed using the HAM. The HAM calculations are based on a specific number of series terms and a linear operator matrix derived from the improved solution of the GA stage. In the third stage, the HAM's convergence control parameter is calculated using the GA to minimize the residual algebraically over a predefined search domain, without plotting the h-curves. The proposed algorithm's performance was investigated using four selected test functions with square and nonsquare systems, compared with the Newton HAM (NHAM) and the Newton homotopy differential equation (NHDE). The results demonstrate that, even in singular systems, the proposed algorithm succeeds in obtaining the solution where the compared methods fail. The proposed algorithm benefits from the parallelism of the genetic algorithm to calculate the required solution; if the GA fails to find that solution, the algorithm accelerates the HAM calculations by introducing an enhanced initial guess and linear operator and by calculating the convergence parameter without plotting h-curves. All results verify the speed and efficiency of the proposed algorithm in solving nonlinear equation systems.

Data Availability
All data related to the work are included in the paper and its relevant references. Clarification or data requirements that support the findings of this study are available from the author upon request.

Conflicts of Interest
The author declares no conflicts of interest.