Mathematical Problems in Engineering, vol. 2021, Article ID 5589322, Hindawi, ISSN 1024-123X (print), 1563-5147 (online), doi:10.1155/2021/5589322

Research Article: An Integrated Genetic Algorithm and Homotopy Analysis Method to Solve Nonlinear Equation Systems

Hala A. Omar (https://orcid.org/0000-0003-4769-4727), Department of Basic Engineering Sciences, Faculty of Engineering, Menoufia University, Menoufia, Egypt

Academic Editor: Muhammad Arif

Copyright © 2021 Hala A. Omar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Solving nonlinear equation systems is one of the broadest and most essential topics in numerical studies for engineering applications. Several methods and combinations have been developed to solve such problems, either by finding their roots mathematically or by formalizing them as an optimization task with a predetermined objective function. This paper proposes a new algorithm for solving square and nonsquare nonlinear systems that combines the genetic algorithm (GA) and the homotopy analysis method (HAM). First, the GA is applied to find the solution. If the required tolerance is reached, the algorithm terminates at this stage, as the target solution has been determined. Otherwise, the HAM is initiated with the initial guess and linear operator computed in the GA stage. Moreover, the GA is utilized to calculate the optimum value of the convergence control parameter (h) algebraically, without plotting the h-curves or identifying the valid region. Four test functions are examined to verify the proposed algorithm's accuracy and efficiency. The results are compared to the Newton HAM (NHAM) and the Newton homotopy differential equation (NHDE) methods and corroborate the superiority of the proposed algorithm in solving nonlinear equation systems efficiently.

1. Introduction

Many applications are modeled via systems of ordinary, partial, or even fractional differential equations . Solving such systems may be a challenge for mathematicians. To this end, several iterative methods have been proposed for solving nonlinear equation systems, as summarized in the literature . The homotopy analysis method (HAM)  is a well-known semianalytic approach for approximating series solutions of nonlinear equations. The HAM ingredients, including the initial approximation, the auxiliary linear operator, and the auxiliary function, provide a powerful tool for nonlinear equation systems. Two types of convergence matter for the HAM: the convergence of the calculated series solution to a finite value for each x in the nonlinear problem's domain, and the convergence of this value to the nonlinear equation's solution. The convergence region and the convergence rate of the HAM's series solutions depend on a convergence control parameter (h, also written c0) introduced by Liao . After selecting the initial approximation, auxiliary linear operator, and function, the HAM provides a one-parameter family of solutions. According to this parameter's value, a member of this family is selected as an approximate solution of the nonlinear equation.

Consequently, the convergence region and the convergence rate of the series solutions calculated by HAM depend on the convergence control parameter . As proposed by Liao , the convergence control parameter is obtained by plotting the h-curves. A valid region (Rh) is a horizontal segment obtained by plotting the curves of specific quantities versus h. The convergence control parameter’s acceptable values can be determined through this valid region by trial and error. However, the h-curve approach cannot give the optimal value of h in Rh. Several techniques were developed to accelerate the convergence of the series solution.

In [21, 22], an optimal homotopy asymptotic method (OHAM) is suggested as a HAM alternative. An auxiliary function is assumed based on a set of unknown constants, and these constants are determined by applying a condition on the residuals. As the approximation order increases, solving the resulting nonlinear equations becomes more complex and CPU-time-consuming. In , a one-step optimal HAM was proposed for estimating the convergence control parameter to overcome this disadvantage. The main difference between the two approaches is that only one algebraic equation is solved at each order of approximation in the one-step optimal HAM, which reduces the CPU time and makes it easier to apply using symbolic computation codes. However, convergence is not guaranteed by this approach alone. Hence, a multiple-step optimal approach is needed, as seen in , followed by the one-step optimal procedure. In [24, 25], an approach based on adding a second auxiliary parameter to the zeroth-order deformation equation is proposed to enhance the convergence characteristics further. The additional parameter introduces a new dimension in the convergence region, allowing auxiliary operators to be selected more freely and generalizing the homotopy analysis method to a broader range of nonlinear problems. However, the higher-order derivatives are computationally intensive. As a result, the approach is recommended only for low-order homotopy iteration schemes.

Another way to determine the convergence control parameter's optimal value is to minimize the squared residual error, as in . This residual is discretized by numerical rules such as the trapezoidal rule, as in [24, 29]. In , the residual error was discretized using Chebyshev points: the convergence control parameter's optimal value is determined by minimizing the norm of the discrete residual function at each HAM approximation order. This approach could find an acceptable approximation for a class of nonlinear differential equations. In , a new scheme based on stochastic arithmetic (SA), called the CESTAC method, was proposed to find the optimal convergence control parameter, and the CADNA library was applied to implement it. Although this approach obtains the optimal convergence control parameter and the optimal number of iterations, plotting the h-curves is still needed because the parameter's initial value must be chosen from the convergence region. Another way to calculate the convergence parameter is to minimize the residual as the objective function of a formulated optimization problem; the solution is then the value of the convergence parameter that reduces the residual to almost zero. The proposed optimization problem is classified as a type of general equilibrium problem . Traditional methods can then be applied to obtain the solution, but they have limitations because of the derivative calculations and the influence of the initial guess. These modifications provide enhanced approaches for calculating the convergence control parameter. However, they still have severe limitations, including the following:

Plotting h-curves is a necessity

It hardly converges without being combined with other schemes

The calculations of higher-order derivatives are computationally intensive

The approaches that decrease the symbolic calculations are only suitable for low-dimensional systems

The absence of rules for defining the required terms in the HAM series increases the computational time and slows down the convergence

The other type of modification depends on combining the HAM with conventional schemes for solving nonlinear systems. These combinations aim to improve the traditional methods' performance and to avoid (if possible) exploring the h-curves to find the optimal value of the convergence control parameter h. In , a Newton homotopy differential equation (NHDE) was proposed to find the solution of nonlinear systems initiated from a poor initial approximation. The procedure depends on constructing the Newton homotopy equation of the nonlinear system, which is then differentiated to create a linear ordinary differential equation in the two unknowns x and q. Finally, the obtained equation is solved by the Euler method to get the desired solution. Although this algorithm does not need to search for the optimal value of h, it depends on calculating partial derivatives. In large systems, calculating the partial derivatives is a heavy computational burden and is affected by singularities. In , another combination of the Newton–Raphson scheme and the HAM was proposed to construct a more efficient numerical algorithm named the Newton homotopy analysis method (NHAM), in which the value of h is determined by the Newton–Raphson scheme. However, this method was applied only to nonlinear equations in one variable and was not generalized to nonlinear systems.

In , a combination of the Newton method and the HAM (NHAM) was considered to solve algebraic and transcendental equations. It aims to accelerate the HAM's convergence by beginning the calculations with the HAM to get a new initial point and then applying the Newton method from that point. If the Newton method does not converge after some iterations, the HAM is invoked again, initiated with the Newton solution. Although this method accelerated the convergence of the HAM and enabled it to start from any initial guess, it depends on defining an approximate value of h, which requires plotting h-curves or choosing h before starting the calculations. Choosing the convergence parameter at the beginning of the problem may lead to computing unnecessary HAM terms and an undesirable convergence rate. Moreover, it depends on the Jacobian matrix, which is seriously affected by singularities. These modified methodologies have their limitations, summarized below:

There is a need for plotting h-curves

The calculations of partial derivatives increase the computational burden

The appearance of the problems in the traditional methods like Jacobian singularity and stiffness hinders the convergence

The undetermined number of the HAM series affects the computation time

Genetic algorithms (GAs) are search-based algorithms built on the concepts of natural selection and genetics. Holland developed them as a subset of evolutionary algorithms . The algorithm begins by creating a pool, or population, of possible solutions to the given problem. These solutions then undergo recombination and mutation (as in natural genetics), producing new children, and the process is repeated over many generations. GAs have been used to solve many applications and optimization problems . Genetic algorithms are randomized, but they perform much better than random local search. GAs have various advantages that make them popular and suitable for multiple applications. They do not require any derivative information, and their inherent parallelism helps them explore the global search space more efficiently than traditional methods. They optimize both continuous and discrete functions with single or multiple objectives and provide a list of "good" solutions rather than a single solution. They are useful when the search space is enormous and many parameters are involved. Several hybridization techniques and combinations have been developed based on GAs . These hybridizations aim to exploit the GA's advantages by accelerating the hybrid method, enhancing its initial conditions, reducing its computational burden, or avoiding gradient-based derivative calculations.
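The GA loop sketched above (a population refined by selection, recombination, and mutation over generations) can be illustrated with a minimal real-coded GA. This is an illustrative stand-in, not the solver used in the paper; the operator choices (tournament selection, blend crossover, uniform mutation) and all parameter defaults are assumptions:

```python
import random

def minimize_ga(obj, bounds, pop_size=50, gens=50, pc=0.8, pm=0.1, seed=0):
    """Minimal real-coded GA sketch: tournament selection, blend crossover,
    uniform mutation, with the best individual tracked across generations."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=obj)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 2), key=obj)   # tournament selection
            p2 = min(rng.sample(pop, 2), key=obj)
            child = list(p1)
            if rng.random() < pc:                    # blend crossover
                w = rng.random()
                child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            for i, (lo, hi) in enumerate(bounds):    # uniform mutation
                if rng.random() < pm:
                    child[i] = rng.uniform(lo, hi)
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=obj)
    return best, obj(best)
```

For example, minimizing (x − 1)² + (y + 2)² over the box [−5, 5]² drives the best objective value toward zero within a few dozen generations.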

This work aims to derive an improved algorithm combining the benefits of the previous modifications of the HAM. It also seeks to use optimization methods on the residual to determine the required number of series terms. Finally, it benefits from the parallelism and global convergence of the genetic algorithm. To this end, a hybrid algorithm combining the genetic algorithm (GA) and the HAM is proposed to solve nonlinear equation systems. The proposed algorithm (GAHAM) first tries to determine the target solution through the GA without involving the HAM. If the GA fails to converge to the solution by itself, the HAM phase is utilized for this task, with the GA enhancing the HAM's performance by providing an improved initial guess. Moreover, the GA is used within the HAM calculations to determine the optimum value of the convergence control parameter without plotting the h-curves. The contributions of this work can be summarized as follows:

The proposed combination of GA and HAM successfully solved nonlinear equation systems with less computational time and effort

The GA initiates the HAM with an improved initial guess when it fails to converge by itself, improving the HAM procedure's performance and decreasing the number of required HAM terms

The GA is used in the HAM calculations to determine the convergence parameter, which dispenses with the plotting of h-curves and decreases the computational burden

As all calculations are done algebraically during this algorithm, they can be easily repeated in high-dimension or slow convergent problems

The main benefits of the proposed algorithm are stated as follows:

If the genetic algorithm succeeds in calculating the solution, neither the HAM's symbolic calculations nor the gradient calculations of the traditional methods are needed. As a result, the computational effort and time are reduced.

If the genetic algorithm fails to detect the solution, it will introduce an enhanced initial guess and linear operator to the HAM, yielding faster convergence and fewer symbolic computations.

If the HAM is needed, the genetic algorithm is applied to calculate the convergence control parameter’s optimum value. The genetic algorithm calculates that value without plotting the h-curves, decreasing the computation time and accelerating the convergence to the required solution.

The paper is organized as follows. Section 2 illustrates the arithmetic criteria, recurrence relation of HAM, and other proposed algorithm details with a flowchart. Section 3 shows the results of the proposed algorithm for four test functions compared with the results of the Newton homotopy differential equation (NHDE)  and Newton homotopy analysis method (NHAM) . Section 4 illustrates the result analysis and concluding remarks. Finally, Section 5 summarizes the conclusions.

2. The Proposed Algorithm

Figure 1 illustrates the flowchart of the proposed algorithm. After choosing a random initial guess (Xo, Fo), the GA is applied first, resulting in a new point (Xn, Fn). If the resulting residual value is less than the specified tolerance (Fn ≤ Tol), this point is the solution of the investigated system, and the algorithm terminates without involving the HAM. Unfortunately, this is not guaranteed for all problems. In that case, the point resulting from the GA stage is utilized as an improved initial guess for the HAM in the second stage, where the HAM uses a fixed number of solution series terms (Xo = Xn, Fo = Fn). The choice of this number is discussed in the section of concluding remarks. In stage 3, the HAM computations are completed, while the GA is applied to calculate an optimum value of the convergence control parameter without plotting h-curves. The new point (Xn, Fn) is obtained, and its residual is compared to the desired tolerance. If the terminating condition is satisfied, the process terminates, having used the GA and the HAM sequentially. If the solution is not reached, the algorithm is repeated from the first stage, with the initial population depending on the point resulting from the HAM procedure.

Flowchart of GAHAM algorithm.

The described computational stages are defined as follows.

2.1. Stage 1: Improving the Initial Guess

The system of nonlinear equations is defined as
(1) F(X) = 0,
where
(2) F(X) = [f1(X), f2(X), f3(X), …, fm(X)] = 0,
in the variables X = (x1, x2, x3, …, xn).

For m nonlinear equations in n unknowns (variables), the residual function is calculated as
(3) F = f1² + f2² + ⋯ + fm²,
and, at an initial random point Xo = (xo1, xo2, …, xon), the residual is Fo.

Assume that the enhanced point is X1 = (x11, x12, …, x1n). As a result of improving the initial guess, it is required that
(4) F1 < Fo.

Figure 2 shows an assumed relation between the initial and the improved residuals at the two points xo1 and x11. Every variable in the system has components at these two points. According to Figures 2(a) and 2(b), if a line segment joins the pairs (xo1, Fo) and (x11, F1), there is an angle θ between this line segment and the vertical line through xo1. The residual may decrease as the variable decreases, as shown in Figure 2(a), or as the variable increases, as shown in Figure 2(b). Then, the change in the variable xo1 can be calculated as
(5) dx1 = (Fo − F1) tan θ1.

(a) Residual decreases as xo1 decreases. (b) Residual decreases as xo1 increases.

Because F1 is not yet known, it is assumed for simplicity that the first stage reaches the solution, so that F1 = 0. Thus, (5) can be rewritten for each variable component as
(6) dxi = Fo tan θi.

Then, the improved initial point is calculated as
(7) x1i = xoi + dxi.

It is noticeable that the number of angles equals the number of unknowns in the nonlinear system.

As illustrated in (6), the variable change depends on the tan function, which can take any real value in (−∞, ∞). As a result, the values of the tan function cannot be entered directly into the GA as a search domain, unlike the values of θ. Therefore, the angles' values represent the GA chromosomes. Their domain is defined as
(8) θi ∈ (π, 2π), for i = 1, 2, …, n (the number of independent variables).

This proposed domain of the angle ensures that the new point will have a residual less than that of the initial random guess: the angle lies in the third or fourth quadrant relative to Fo, which guarantees that the new residual is less than the initial residual Fo. An additional parameter α is proposed to adjust the step size and accelerate the calculations. The domain of this parameter is (0, ∞), and it is encoded through the tan function of an angle limited to (0, π/2). As a result, the total number of unknowns for an n-unknown system is n + 1.

After constructing the chromosomes based on these intervals, the values of tan θi and α are calculated, and the improved initial point is formed. The optimization problem in the GA stage is formulated as
(9) min F(X),
where
(10) the chromosome is [θ1, θ2, …, θn, θn+1], with θi ∈ (π, 2π) for i = 1, 2, …, n; θn+1 ∈ (0, π/2); dxi = Fo tan θi; α = tan θn+1; and x1i = xoi + α dxi.

After completing the calculations in GA stage, the resulting point is examined to be the optimum solution. If the residual is less than the predetermined tolerance, the algorithm stops after the first stage using GA, and the obtained solution is confirmed. Otherwise, the resulting point is entered into the HAM stage as an enhanced initial guess.
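The stage-1 search of (9) and (10) can be sketched with a plain random search standing in for the GA. The two-equation linear system used as F below is a hypothetical stand-in (solution (2, 1)), and the step rule dx_i = Fo·tan(θ_i) with θ_i ∈ (π, 2π) and α = tan(θ_{n+1}) follows the equations of this stage:

```python
import math
import random

def residual(x):
    # Hypothetical stand-in system: f1 = x1 + x2 - 3, f2 = x1 - x2 - 1,
    # solution (2, 1); residual is the sum of squared f_i as in eq. (3).
    f1 = x[0] + x[1] - 3.0
    f2 = x[0] - x[1] - 1.0
    return f1 * f1 + f2 * f2

def stage1(F, x0, trials=2000, seed=0):
    """Random-search stand-in for the stage-1 GA: each candidate encodes n
    angles theta_i in (pi, 2*pi) giving steps dx_i = F0*tan(theta_i), plus
    a step-size angle in (0, pi/2) giving alpha, as in eqs. (6)-(10)."""
    rng = random.Random(seed)
    n = len(x0)
    F0 = F(x0)
    best_x, best_F = list(x0), F0
    for _ in range(trials):
        dx = [F0 * math.tan(rng.uniform(math.pi, 2.0 * math.pi)) for _ in range(n)]
        alpha = math.tan(rng.uniform(1e-6, math.pi / 2.0 - 1e-6))
        cand = [x0[i] + alpha * dx[i] for i in range(n)]
        Fc = F(cand)
        if Fc < best_F:
            best_x, best_F = cand, Fc
    return best_x, best_F
```

Starting from x0 = (0, 0), the returned point has a strictly smaller residual than the initial guess, which is exactly the stage-1 acceptance condition (4).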

2.2. Stage 2: The HAM Procedure

Based on the initial guess realized in the GA stage, the HAM is initiated. The homotopy is represented as
(11) H : [0, 1] × R^n ⟶ R^n,
(12) H(q, x) = qF(x) + (1 − q)[F(x) − F(Xo)],
where Xo is the initial guess and q is the homotopy (embedding) parameter.

The solution is expanded in the series
(13) X = Xo + qX1 + q²X2 + q³X3 + ⋯ + q^r Xr = Σ_{k=0}^{r} q^k Xk.
At q = 0 and q = 1,
(14) H(0, x) = F(x) − F(Xo), H(1, x) = F(x),
so the homotopy function solves (1) when
(15) H(1, X) = 0 at the solution X.

Then, (12) is written as
(16) (1 − q) L[F(X) − F(Xo)] = qhF(x),
where L is the linear operator and h is the convergence control parameter.

Considering (16) with a nonsingular matrix B, the deformation equation, called the zeroth-order deformation equation, is calculated as
(17) (1 − q)[BX − F(Xo)] = qhF(x).

The matrix B is calculated from the results of the first stage as
(18) BXo = F(Xo) = Fo.

Both the enhanced Xo and Fo are calculated from the results of the GA stage. The matrix (Xo) is nonsquare. According to [1, 2], the product (Xo^T Xo) is invertible, and the pseudoinverse of Xo can be obtained from the formula
(19) Xo^+ = (Xo^T Xo)^(−1) Xo^T.

The matrix B is then calculated as
(20) B = Fo Xo^+.
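For a column vector Xo, eq. (19) reduces to Xo^+ = Xo^T/(Xo^T Xo), and (20) yields a rank-one matrix B satisfying B Xo = Fo by construction. A minimal sketch with plain Python lists (in the paper B may be nonsquare, with B^(−1) in later formulas read as a pseudoinverse; the numbers in the usage check are illustrative):

```python
def build_B(x0, F0):
    """B = F0 * x0+, as in eq. (20), using the vector pseudoinverse
    x0+ = (x0^T x0)^(-1) x0^T of eq. (19): row i of B is F0[i]*x0/(x0.x0)."""
    s = sum(v * v for v in x0)                       # x0^T x0, a scalar
    return [[fi * xj / s for xj in x0] for fi in F0]

def matvec(B, x):
    """Plain matrix-vector product, used to verify B x0 = F0."""
    return [sum(bij * xj for bij, xj in zip(row, x)) for row in B]
```

Multiplying B back onto x0 reproduces F0 exactly, since B x0 = F0 (x0^T x0)/(x0^T x0) = F0.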

Then, (17) is written as
(21) BX − F(Xo) − qBX + qF(Xo) = qhF(x).

Differentiating (21) with respect to q results in
(22) B(dX/dq) − qB(dX/dq) − BX + F(Xo) = qh(dF/dq) + hF(x).

After substituting q = 0 and BXo = F(Xo), the resulting equation is
(23) B(dX/dq)|_(q=0) = hF(Xo).

As F is a function of the vector X, which in turn is a function of the embedding parameter q, the derivative follows from (13) as
(24) (dX/dq)|_(q=0) = X1.

From (23), the term X1 is calculated as
(25) X1 = hB^(−1)F(Xo).

Differentiating (22) with respect to q results in
(26) B(d²X/dq²) − qB(d²X/dq²) − B(dX/dq) − B(dX/dq) = qh(d²F/dq²) + h(dF/dq) + h(dF/dq).
Equation (26) is written as
(27) B(d²X/dq²) − qB(d²X/dq²) = qh(d²F/dq²) + 2h(dF/dq) + 2B(dX/dq),
and substituting q = 0 in (27) results in
(28) B(d²X/dq²)|_(q=0) = 2B(dX/dq)|_(q=0) + 2h(dF/dq)|_(q=0).

Because (d²X/dq²)|_(q=0) = 2X2 and from (23), (28) is written as
(29) 2BX2 = 2hF(Xo) + 2h(dF/dq)|_(q=0).
Multiplying both sides by B^(−1)/2, the term X2 is calculated as
(30) X2 = X1 + hB^(−1)(dF/dq)|_(q=0).
Differentiating (27) with respect to q results in
(31) B(d³X/dq³) − qB(d³X/dq³) − B(d²X/dq²) = qh(d³F/dq³) + h(d²F/dq²) + 2h(d²F/dq²) + 2B(d²X/dq²),
and substituting q = 0 in (31) yields
(32) B(d³X/dq³)|_(q=0) = 3h(d²F/dq²)|_(q=0) + 3B(d²X/dq²)|_(q=0).

Because (d³X/dq³)|_(q=0) = 6X3 and from (13), (32) is written as
(33) 6BX3 = 3B(d²X/dq²)|_(q=0) + 3h(d²F/dq²)|_(q=0).

Because (d²X/dq²)|_(q=0) = 2X2, (33) is written as
(34) 6BX3 = 6BX2 + 3h(d²F/dq²)|_(q=0),
and, multiplying both sides by B^(−1)/6, the term X3 is calculated as
(35) X3 = X2 + (h/2)B^(−1)(d²F/dq²)|_(q=0).

By repeating the same procedure, the rth-order deformation equation is obtained. This can be used to determine the solution series of algebraic equation systems. In general, the terms of the series Σ_{k=0}^{r} q^k Xk at q = 1 are calculated as follows:
(36) Xk = Xo for k = 0; Xk = hB^(−1)F(Xo) for k = 1; Xk = Xk−1 + (h/(k − 1)!)B^(−1)(d^(k−1)F/dq^(k−1))|_(q=0) for k ≥ 2.

According to (36), the solution series is obtained at the end of this stage as a function of the convergence control parameter (h). The last step is applying GA to determine the optimum value of h and, therefore, the final solution.
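For a scalar polynomial equation, the recurrence (36) can be evaluated numerically for a fixed h, because (d^(k−1)F/dq^(k−1))|_(q=0)/(k − 1)! is simply the q^(k−1) coefficient of F(X(q)), so the factorials cancel. The sketch below uses the scalar B = f(x0)/x0 as the one-dimensional analogue of (18); the example equation x² − 4 = 0 and the trial value h = −0.2 are illustrative, not from the paper:

```python
def poly_mul(a, b, trunc):
    """Multiply two polynomials (coefficient lists), truncated at q^(trunc-1)."""
    out = [0.0] * trunc
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < trunc:
                out[i + j] += ai * bj
    return out

def ham_terms(c, x0, h, r=5):
    """Series terms X_0..X_r of recurrence (36) for a scalar polynomial
    equation f(x) = sum_j c[j]*x**j = 0, with B = f(x0)/x0 (so B*x0 = f(x0))
    and a fixed convergence control parameter h."""
    f = lambda x: sum(cj * x ** j for j, cj in enumerate(c))
    B = f(x0) / x0
    X = [x0, h * f(x0) / B]              # X_0 and X_1 = h*B^(-1)*f(x0)
    for k in range(2, r + 1):            # len(X) == k here: X(q) = sum X_j q^j
        Fq = [0.0] * k                   # F(X(q)) truncated at q^(k-1)
        power = [1.0] + [0.0] * (k - 1)  # X(q)**0
        for cj in c:
            Fq = [fi + cj * pi for fi, pi in zip(Fq, power)]
            power = poly_mul(power, X, k)
        # (d^(k-1)F/dq^(k-1))|_{q=0} / (k-1)! is the q^(k-1) coefficient of F
        X.append(X[k - 1] + h / B * Fq[k - 1])
    return X
```

With f(x) = x² − 4, x0 = 3, and h = −0.2, summing the five-term series (the paper's choice of term count) moves the approximation from 3 toward the root 2 and reduces the residual, consistent with h being selected from the negative part of the valid region.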

2.3. Stage 3: Calculating the Optimal Value of Convergence Control Parameter and Final Solution

In this stage, the residual function, which is a function of the convergence control parameter (h), is used as the GA's objective function. The domain of h should be defined to start the GA optimization process. As illustrated earlier, past contributions depended on plotting the h-curves, with the resulting horizontal segment giving the desired search domain. In contrast, the proposed algorithm has the advantage of not needing to draw these curves. As reported in [4, 5], h ∈ Rc and Rc ⊆ R. Accordingly, the convergence control parameter's search domain is (−∞, ∞). To avoid indefinite values, the domain of the convergence control parameter is defined according to
(37) h = tan ∅, where ∅ ∈ (−π/2, π/2).

According to (37), the search for the optimum value of h over the unbounded domain of real numbers is carried out through the bounded ∅ domain. The optimization problem in the GA at this stage is formulated as
(38) min_(h = tan ∅) F(h), where ∅ ∈ (−π/2, π/2).
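The reparameterization (37) and the search (38) can be sketched with a random search standing in for the stage-3 GA; the trial count and the quadratic test residual (h + 1)², whose minimizer is h = −1, are illustrative:

```python
import math
import random

def optimal_h(residual_of_h, trials=4000, seed=1):
    """Stand-in for the stage-3 GA: search h over all of R through the
    bounded reparameterization h = tan(phi), phi in (-pi/2, pi/2), eq. (37)."""
    rng = random.Random(seed)
    best_h, best_r = 0.0, residual_of_h(0.0)
    for _ in range(trials):
        phi = rng.uniform(-math.pi / 2.0 + 1e-6, math.pi / 2.0 - 1e-6)
        h = math.tan(phi)                 # unbounded h from a bounded angle
        r = residual_of_h(h)
        if r < best_r:
            best_h, best_r = h, r
    return best_h, best_r
```

Because tan maps (−π/2, π/2) onto (−∞, ∞), no prior bound on h (and hence no h-curve) is needed; sampling the angle densely locates the minimizer of the residual.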

At the end of this stage, the convergence control parameter's optimum value is obtained, and therefore a new solution is calculated. Hence, stages 2 and 3 together represent the HAM procedure based on the enhanced initial guess. If the resulting solution of the HAM procedure is optimal, the algorithm terminates, and the solution is displayed. Otherwise, it is entered into the first stage as a new initial guess, and the algorithm starts again until the desired tolerance is reached or no further improvement in the residual function is achieved. Then, the final solution of the nonlinear equations is declared.

It is worth mentioning that, according to this sequence of steps, the number of calculated terms using the HAM is identified before starting the HAM stage. In the proposed algorithm, it is selected as five. The reason for this choice is explained in the section of test functions. For more illustration of the algorithm procedure, Algorithm 1 shows the pseudocode of the proposed algorithm.

Algorithm 1: Pseudocode of the proposed algorithm.

Initialize:

the size of the population (N)

the maximum number of generations (G)

the initial random guess Xo with its corresponding residual F(Xo)

(n + 1) the number of angles (θi) with their respective domains, where n is the number of system variables

m, the number of HAM series terms

The angle ϕ with its domain

The desired tolerance for stopping = Tol

While the stop criterion is not satisfied, do

Stage 1

For It = 1 : G Do

For i = 1 : N Do

Tan_n = [tan θ1, tan θ2, …, tan θn]

α=tanθn+1

Calculate the new individual

Xi = Xo + α Fo Tan_n

Calculate its fitness function

Fi=FXi

End For

End For

Xb=Best X from stage 1

Fm=FXb

If Fm ≤ Tol

Then, display solution = Xb

Else

Xo=Xb

Fo=Fm

Stage 2

B = Fo Xo^+

Xk = Xo for k = 0; Xk = hB^(−1)F(Xo) for k = 1; Xk = Xk−1 + (h/(k − 1)!)B^(−1)(d^(k−1)F/dq^(k−1))|_(q=0) for k ≥ 2

X = Xo + Σ_{k=1}^{m} Xk = X(h)

F(h) = F(X)

Stage 3

For It = 1 : G Do

For i = 1 : N Do

hi=tanϕi

Fhi=Fhhi

End for

End for

hb = best h from stage 3

Xb=Xhb

Fm = F(Xb)

If Fm ≤ Tol

Then, display solution = Xb

Convergence control parameter optimum value = hb

Else

Xo=Xb

Fo=Fm

Go to stage 1

End while

3. Test Functions and Result Analysis

Four test functions were utilized to investigate the performance of the proposed algorithm. The results are compared to the Newton HAM (NHAM) and the Newton homotopy differential equation (NHDE) method. For generality, both square and nonsquare systems are considered as test functions. For nonsquare systems, the matrices Xo^+ and B^+ are calculated using the pseudoinverse. The numbers of variables and equations are n and m, respectively. All these test functions are described in detail in [34, 41–44].

The proposed algorithm is coded in MATLAB 2020a and run on a computer with an Intel® Core™ i7 CPU, 3.2 GHz, and 16 GB RAM. For the computational studies, a population size of 50 and 50 generations are used; the crossover fraction is 0.8; the migration interval is 20; and the migration fraction is 0.2.

3.1. Selected Test Function 1 (Beale Function) [42]

As seen in , the first test function is represented as
(39) n = 2, m = 3, y = (1.5, 2.25, 2.625), fi(x) = yi − x1(1 − x2^i), i = 1, 2, 3.

The exact solution of this system is X^=3.0,0.5. The comparison starts with the NHAM. Three different cases of initial guesses are proposed, and the results are summarized in Table 1. In the NHAM and the GAHAM, the number of terms calculated is five. This choice was selected carefully, where it achieved the goal of improving or reaching the solution with the tested cases while reducing the time-consuming symbolic calculations.
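The system (39) and its residual can be written down directly; at the exact solution X̂ = (3.0, 0.5) the residual vanishes, while at, for example, the case-1 initial guess of Table 1 it is large:

```python
def beale_system(x):
    """Test function 1, eq. (39): f_i(x) = y_i - x1*(1 - x2**i), i = 1..3."""
    y = (1.5, 2.25, 2.625)
    return [yi - x[0] * (1.0 - x[1] ** (i + 1)) for i, yi in enumerate(y)]

def residual(x):
    # Sum of squared equations, as in eq. (3)
    return sum(f * f for f in beale_system(x))
```

Evaluating residual([3.0, 0.5]) gives zero to machine precision, confirming the quoted exact solution.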

Comparison of results for test function 1 at different cases of the initial guess.

Case 1. Initial guess: (7.0611, 6.0747)
Method | Result | HAM iterations | Other iterations
NHAM | Converging | 1 (h = −0.612) | Newton: 56
GAHAM | Converging | 1 (h = −0.2341), 2 (h = 0.026) | GA: 1
NHDE | Diverging | None | 500

Case 2. Initial guess: (0.127, 0.9134)
Method | Result | HAM iterations | Other iterations
NHAM | Converging | 1 (h = 0.04) | Newton: 77
GAHAM | Converging | None | GA: 1
NHDE | Converging | None | 22

Case 3. Initial guess: (−1.8926, −1.0322)
Method | Result | HAM iterations | Other iterations
NHAM | Converging | 1 (h = −1.59), 2 (h = −0.86) | Newton: 100 + 11
GAHAM | Converging | None | GA: 1
NHDE | Converging | None | 105

According to Table 1, the NHAM in case 1 converged to the exact solution after one iteration of the HAM followed by 56 iterations of Newton’s method. The completion of calculations required the plotting of the h-curve of X1 to identify the valid region and, therefore, the convergence parameter optimum value. Figure 3 shows the h-curve and the corresponding valid region using the NHAM. The NHDE approach diverges despite increasing the number of iterations to 500. In the GAHAM, the solution is obtained after two iterations of the HAM and one iteration of the GA stage. The HAM iteration includes the last two stages of the proposed algorithm. All calculations proceeded algebraically without drawing the corresponding h-curve, which led to the acceleration of computations and accurate results.

The h–x1 curve for case 1 of the initial guess.

For case 2, the NHAM used one HAM iteration, followed by 77 Newton’s iterations. NHDE needed 22 iterations to converge. However, no HAM iterations were required in the GAHAM. Convergence occurred after one iteration of the first stage using the GA stage.

For case 3, NHAM converges after two iterations of HAM and 111 Newton’s iterations. It should be mentioned here that, after the first iteration of HAM, 100 Newton’s iterations were used without convergence which leads to the second iteration of the HAM. After the second iteration of the HAM and 11 Newton’s iterations, the final solution was reached. Furthermore, the calculation of the convergence parameter from curves was time-consuming.

The NHAM's sequence prevents moving to the Newton step until the convergence control parameter calculations are complete, which slows convergence and increases the computational burden. Figure 4 shows the h-curves of the HAM iterations used to detect the valid region of the convergence control parameter. The NHDE converges after 105 iterations. However, the GAHAM converges after one iteration of the GA stage, with no HAM iterations needed. The GAHAM thus has the minimum number of iterations. Besides, it converges regardless of the initial guess, without plotting h-curves as in the NHAM and without the Jacobian required by the NHDE, thereby avoiding the singularity and divergence problems that may appear when using the NHDE method.

The h–x1 curves for case 3 of the initial guess for both HAM iterations.

3.2. Selected Test Function 2 (Kowalik and Osborne Function) [43]

According to , selected test function 2 is represented as
(40) n = 4, m = 11, fi(x) = yi − x1(ui² + ui x2)/(ui² + ui x3 + x4), i = 1, …, 11,
with the data yi and ui given in [43].

The exact solution of this test function is X^=0.1922,0.2257,0.1426,0.1484, where the results are summarized in Table 2. Figures 5(a) and 5(b) illustrate the variation of residual versus iterations for each method.

Results of test function 2.

Test function 2
Method | Result | HAM iterations | Other iterations
NHAM | Almost converging | 1 | Newton: 302
NHDE | Converging | None | 495
GAHAM | Converging | 3 | GA: None

(a) Residuals of test function 2 during 500 iterations. (b) Results of test function 2 during 5 iterations.

Figure 5(a) shows the results of three methods based on the whole number of iterations. Figure 5(b) shows the details of convergence during the first five iterations to show the difference in calculated residuals for both NHAM and GAHAM. The GAHAM converged at the third iteration, whereas the NHAM required many iterations to reach near the solution. Hence, the NHAM needs a condition for stopping in case of no more improvement in the resulting solution. However, the NHDE found the exact solution after 495 iterations, which means that the GAHAM was faster than the other two methods and more accurate than the NHAM.

3.3. Selected Test Function 3

According to , selected test function 3 is represented as
(41) n = 4, m = 4, in the variables (x, y, z, d):
f1 = xyz + d − 31 = 0,
f2 = x + y + z + d − 11 = 0,
f3 = 2x + 3y + 4z + d − 35 = 0,
f4 = x + z − y + d − 1 = 0,
F = [f1 f2 f3 f4]^T.

The exact solution of this function is X̂ = (1.4671, 5, 4.1776, 0.3553). The exact solution calculated in  is obtained only under certain conditions on the initial guess. In contrast, when the NHAM was applied to this function with a random initial guess and a five-term HAM series in each HAM iteration, it did not converge: the Newton method used 100 iterations, and the HAM used 15 iterations in the NHAM. Table 3 summarizes the results of the three methods. Figure 6(a) shows the residuals of the three methods up to 12 iterations; the GAHAM is not visible in the figure because of the NHAM's high residual. Compressing the residual scale, Figure 6(b) shows that the residual decreases faster in the GAHAM and that it reaches the exact solution faster than the NHDE method.
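Reading the garbled constants of (41) as subtractions (a reconstruction, chosen because the quoted exact solution then satisfies the system to four decimal places), the residual can be checked numerically:

```python
def system3(v):
    """Test function 3, eq. (41), with the constants read as subtractions:
    f1 = x*y*z + d - 31, f2 = x + y + z + d - 11,
    f3 = 2x + 3y + 4z + d - 35, f4 = x + z - y + d - 1."""
    x, y, z, d = v
    return [x * y * z + d - 31.0,
            x + y + z + d - 11.0,
            2.0 * x + 3.0 * y + 4.0 * z + d - 35.0,
            x + z - y + d - 1.0]

def residual3(v):
    # Sum of squared equations, as in eq. (3)
    return sum(f * f for f in system3(v))
```

At the quoted solution (1.4671, 5, 4.1776, 0.3553) the residual is on the order of 1e-8, i.e., zero up to the four-decimal rounding of the printed solution.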

Results of test function 3.

Test function 3
Method | Result | HAM iterations | Other iterations
NHAM | Diverging | 15 | Newton: 100
NHDE | Converging | None | 11
GAHAM | Converging | 4 | GA: 4

(a) Residuals of test function 3 with full scale. (b) Compressed residual scale of test function 3.

3.4. Selected Test Function 4 (Penalty Function II) [44]

Selected test function 4 is represented as
$$n = 10,\quad m = 2n,\quad a = 10^{-5},\qquad y_i = e^{i/10} + e^{(i-1)/10},\tag{42}$$
$$f_1(x) = x_1 - 0.2,$$
$$f_i(x) = a^{1/2}\left(e^{x_i/10} + e^{x_{i-1}/10} - y_i\right),\quad \text{for } 2 \le i \le n,$$
$$f_i(x) = a^{1/2}\left(e^{x_{i-n+1}/10} - e^{-1/10}\right),\quad \text{for } n < i < 2n,$$
$$f_{2n}(x) = \sum_{j=1}^{n}(n-j+1)\,x_j^{2} - 1.$$

The exact solution of this function is
$$\hat{X} = (0.2000,\ 0.0304,\ 0.0162,\ 0.0523,\ 0.0165,\ 0.1650,\ 0.0716,\ 0.2845,\ 0.2301,\ 0.2516).\tag{43}$$
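To make the structure of test function 4 concrete, the following is a minimal sketch of its residual vector in Python, assuming the standard Moré–Garbow–Hillstrom formulation of Penalty function II (the function and variable names are illustrative, not from the paper):

```python
import math

def penalty2_residuals(x, a=1e-5):
    """Residual vector of the Penalty function II test problem:
    n variables, m = 2n equations (a minimal sketch)."""
    n = len(x)
    f = [0.0] * (2 * n)
    f[0] = x[0] - 0.2                              # f_1 = x_1 - 0.2
    sqrt_a = math.sqrt(a)
    for i in range(2, n + 1):                      # equations 2..n
        y_i = math.exp(i / 10.0) + math.exp((i - 1) / 10.0)
        f[i - 1] = sqrt_a * (math.exp(x[i - 1] / 10.0)
                             + math.exp(x[i - 2] / 10.0) - y_i)
    for i in range(n + 1, 2 * n):                  # equations n+1..2n-1
        f[i - 1] = sqrt_a * (math.exp(x[i - n] / 10.0) - math.exp(-0.1))
    # f_2n = sum_{j=1}^{n} (n - j + 1) x_j^2 - 1
    f[2 * n - 1] = sum((n - j + 1) * x[j - 1] ** 2
                       for j in range(1, n + 1)) - 1.0
    return f
```

Evaluating this residual at a candidate solution gives the vector whose norm the three compared methods attempt to drive to zero; note that the first component vanishes exactly whenever x₁ = 0.2, consistent with the first entry of the exact solution above.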

As shown in Table 4, both the NHAM and the NHDE diverge. The NHAM was applied for five HAM iterations, each followed by 100 iterations of Newton's method. However, Newton's iterative process was interrupted at each stage after ten consecutive diverging iterations, so the total number of Newton iterations was 61. The NHDE also diverged, even when the number of iterations was increased to 600. The exact solution was found by the GAHAM after nine iterations of the three stages. Figure 7(a) shows the residuals of the three methods over all iterations, and Figure 7(b) shows their results during the first ten iterations to illustrate their performance in more detail. It shows that the GAHAM decreases the residual continuously during the iterations until the exact solution is reached.

Results of test function 4.

Test function 4

Method | Result | HAM iterations | Other iterations
NHAM | Diverging | 5 | Newton: 61
NHDE | Diverging | None | 600
GAHAM | Converging | 9 | GA: none

Figure 7: (a) Residuals of test function 4 during 600 iterations. (b) Residuals of test function 4 during 10 iterations.

4. Analysis of the Results and Concluding Remarks

Comparing the proposed GAHAM's overall results with those of the NHAM and the NHDE methods yields the following remarks.

The proposed algorithm combines the parallelism and global search ability of the GA with the convergence of the HAM to find the solution of nonlinear equation systems.

Unlike Newton's method and other local algorithms, the proposed algorithm does not require derivative calculations to improve the initial guess, which simplifies the computations and allows convergence even for singular systems (test function 3). Moreover, calculations based on the Jacobian matrix are time-consuming and computationally expensive in large systems unless an approximation of the Jacobian is used, which may in turn affect the solution's convergence.
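The derivative-free idea behind this remark can be illustrated by posing the root-finding problem as minimizing the squared residual norm, as the GA stage does. The following is a minimal sketch under simplifying assumptions: a toy 2×2 linear system and a plain random search stand in for the GA, so that no Jacobian is ever formed (all names here are illustrative, not from the paper):

```python
import random

def residual_norm2(F, x):
    """Squared 2-norm of the residual vector F(x)."""
    return sum(f * f for f in F(x))

def derivative_free_search(F, x0, iters=20000, step=0.5, seed=0):
    """Toy derivative-free minimization of ||F(x)||^2: a random Gaussian
    perturbation is accepted only when it lowers the residual, so no
    derivatives or matrix factorizations are needed; the step size
    shrinks slowly on rejection to refine the estimate."""
    rng = random.Random(seed)
    best, best_r = list(x0), residual_norm2(F, x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in best]
        r = residual_norm2(F, cand)
        if r < best_r:
            best, best_r = cand, r
        else:
            step *= 0.999          # anneal the search radius
    return best, best_r

# Hypothetical system: x + y - 3 = 0, x - y - 1 = 0 (root at x=2, y=1).
F = lambda v: [v[0] + v[1] - 3.0, v[0] - v[1] - 1.0]
sol, res = derivative_free_search(F, [0.0, 0.0])
```

The acceptance rule (keep a candidate only if the residual drops) is also what makes the best residual non-increasing over iterations, in contrast to the oscillating residuals observed for the NHAM and the NHDE.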

In the NHAM and the NHDE methods, the residual does not decrease monotonically during the calculations; it may increase, which slows convergence and raises the number of required iterations. In contrast, the GAHAM always produces a continuous decrease of the residual, ensuring sustained improvement of the solution and speeding up convergence.

The main differences between the HAM calculations in the proposed algorithm and other HAM modifications are as follows:

The number of series terms can be specified before the calculations, enabling control of the symbolic computations. This number varies with the system's complexity and nonlinearity.

Plotting the h-curve to find the convergence parameter's optimum value is not needed, because the proposed search domain covers all real numbers and therefore contains the valid region.

The criterion and the domain of angles specified in the GA stage allow convergence to the solution. As the test functions show, in some problems the GA stage alone could find the system's solution without invoking any HAM stage.

In extensive systems and fast-convergence cases, the proposed algorithm may dispense with the additional step-size parameter (α).
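The point above about avoiding h-curves can be sketched as follows. In a minimal toy setting, a simple grid search over a real interval stands in for the GA search the paper actually uses, and a synthetic residual function stands in for the algebraic residual of the truncated HAM series (both are illustrative assumptions, not the paper's implementation):

```python
def best_h(residual, lo=-5.0, hi=5.0, samples=100001):
    """Choose the convergence-control parameter h by directly minimizing
    a residual function over a real search interval, instead of reading
    a flat region off a plotted h-curve."""
    step = (hi - lo) / (samples - 1)
    return min((lo + k * step for k in range(samples)), key=residual)

# Synthetic residual with its minimum at h = -1 (a classical HAM choice).
toy_residual = lambda h: (h + 1.0) ** 2 + 1e-3
h_opt = best_h(toy_residual)
```

The essential point is that the optimum h is obtained numerically from the residual itself, so the procedure can be repeated automatically, whereas visual inspection of h-curves cannot.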

The advantages of this approach can be summarized as follows:

It benefits from the genetic algorithm's parallelism to find the solution without derivatives, initial-guess considerations, or (where possible) the symbolic calculations of the HAM.

If the HAM is required, the proposed approach provides an initial guess as close as possible to the optimum solution. As a result, the symbolic HAM calculations are reduced, saving computational effort and time.

The proposed approach calculates the HAM convergence parameter without plotting h-curves, which accelerates the computations and makes them easy to repeat.

The ability to repeat the calculations enables computing the minimum number of HAM terms required to reach the optimum solution, which accelerates convergence and reduces the required computational time and effort.
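The last advantage, finding the minimum number of series terms that meets a tolerance by repeating the run, can be sketched with a generic truncated series. This is only an analogy under stated assumptions: the Taylor series of e stands in for the HAM series, and the distance to the target stands in for the residual (the helper name is hypothetical):

```python
import math

def min_terms_for_tolerance(term, target, tol=1e-6, max_terms=50):
    """Smallest number of leading series terms whose partial sum lies
    within `tol` of `target`: a stand-in for repeating the HAM run to
    find the minimum number of terms driving the residual below a
    tolerance."""
    partial = 0.0
    for k in range(max_terms):
        partial += term(k)
        if abs(partial - target) < tol:
            return k + 1
    return None  # tolerance not reached within max_terms

# Analogy: Taylor series of e = sum 1/k!; terms needed for 1e-6 accuracy.
n_terms = min_terms_for_tolerance(lambda k: 1.0 / math.factorial(k), math.e)
```

In the same spirit, the GAHAM can be rerun with an increasing term count until the residual tolerance is met, keeping the symbolic workload as small as possible.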

It is known that each algorithm or approach has its own limitations. The limitations of the proposed GAHAM algorithm are as follows:

It depends on defining the number of HAM terms before the calculations start. There is no definite rule for choosing this number, so it is determined from the residual yielded by the GA stage: if that residual is not high, the required number is expected to be low. In the present study, the required number of terms was at most five.

The effect of the GA parameters has not been investigated in this study; their influence on the algorithm's convergence should be examined in future work.

5. Conclusion

This paper proposed a new algorithm (GAHAM) combining the HAM and the GA for solving nonlinear equation systems. The algorithm calculates the solution in three sequential stages. First, the GA is used to find either the system's solution or the best initial guess for the HAM. If the first stage fails to reach the target solution, the second stage is performed using the HAM, whose calculations are based on a specified number of series terms and the linear operator matrix resulting from the improved solution of the GA stage. In the third stage, the HAM's convergence control parameter is calculated algebraically using the GA to minimize the residual over a predefined search domain, without plotting h-curves. The proposed algorithm's performance was investigated on four selected test functions with square and nonsquare systems and compared with the Newton HAM (NHAM) and the Newton homotopy differential equation (NHDE) methods. The results demonstrated that the proposed algorithm succeeded in obtaining the solution, even for singular systems, where the other methods failed. The proposed algorithm benefits from the parallelism of the genetic algorithm to calculate the required solution; if the GA alone fails to find that solution, it accelerates the HAM calculations by introducing an enhanced initial guess and linear operator and by calculating the convergence parameter without plotting h-curves. All results verified the speed and efficiency of the proposed algorithm in solving nonlinear equation systems.

Data Availability

All data related to the work are included in the paper and its relevant references. Clarification or data requirements that support the findings of this study are available from the author upon request.

Conflicts of Interest

The author declares no conflicts of interest.