Combining Legendre's Polynomials and Genetic Algorithm in the Solution of Nonlinear Initial-Value Problems

This work develops a method for solving ordinary differential equations, that is, initial-value problems, with solutions approximated by Legendre's polynomials. An iterative procedure for the adjustment of the polynomial coefficients, based on the genetic algorithm, is developed. This procedure is applied to several examples, comparing its results with the best polynomial fitting when numerical solutions by the traditional Runge-Kutta or Adams methods are available. The resulting algorithm provides reliable solutions even when numerical solutions are not available, that is, when the mass matrix is singular or the equation produces unstable running processes.


Introduction
Mathematical modeling is a constant in engineering daily life, and expressing phenomena by using differential equations is almost mandatory. Successful modeling implies the ability to make simplifying assumptions but, even under these assumptions, solving differential equations demands a great deal of analytical and numerical effort [1].
Considering these difficulties, and the occasional impossibility of obtaining analytical solutions, several numerical methods have been proposed and, nowadays, the great majority of these methods are part of symbolic and numeric software packages such as MATLAB and Mathematica [2]. These methods are accurate and fast, but each one has its limitations, depending on the type of equation to be solved. The main difficulties appear when the mass matrix is singular or the equation generates instability in the running process [3].
Here, in order to deal with this kind of problem for equations with continuous solutions, a method for initial-value problems in ordinary differential equations is developed, assuming that the space of solutions is a Hilbert space equipped with a Legendre's polynomial basis [4].
The central idea is to look for the best combination of polynomial coefficients satisfying the original differential equation over the whole interval, instead of searching for a polynomial that best fits a set of points in the given interval.
Consequently, a new error criterion must be defined, and a program using differential evolution methods with elitism [5-7] is developed, based on this new criterion.
The determination of the search space developed here is performed by using sequential contractions, allowing fast evolution with noticeable reductions in the error bands when compared with random search over large populations, as proposed in [5].
Several examples are solved both by the presented method and by classical numerical methods, and the accuracy of the results is compared.
In Section 2, the error criterion is discussed and a combination of global and local criteria is proposed, providing optimal fitting of the polynomial with regard to the function and its derivatives. Section 3 presents the analytical basis of the method, using Legendre's polynomials to optimize the solutions of an initial-value problem.
Section 4 describes the use of the differential genetic algorithm with elitism, which guarantees stability, applied to the solution of ordinary differential equations, emphasizing the choice of the coefficients under the optimizing criteria developed in Section 2. Finally, in Section 5, some examples are presented and the numerical quality of the solutions is discussed.

Error Criterion
Usually, the optimization criterion adopted to approximate a function by a polynomial in a given interval is to minimize the square error integral [8]. This kind of approach minimizes the global error along the whole interval, but produces a large local error at the starting and ending points of the interval.
Consequently, when this criterion is applied to an initial-value problem, the approximation does not produce the best result, as the error at the starting point of the interval remains large. To solve this problem, an error criterion considering both local and global results is proposed, minimizing the product of two factors: the maximum error at the starting point of the interval and the global integral error.
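As an illustration, the composite criterion described above can be sketched as follows. This is an illustrative sketch, not the authors' code: the function names and the trapezoidal quadrature are choices made here, for a first-order problem y' = g(x, y) with prescribed y(a).

```python
# Illustrative sketch of the composite error criterion: the product of the
# squared error at the starting point of the interval and the global
# integral of the squared residual of the ODE.
import math

def composite_error(y, dy, g, a, b, y0, n=400):
    """Product of local error at x = a and global integral residual.

    y, dy : candidate solution and its derivative (callables)
    g     : right-hand side of y' = g(x, y)
    y0    : prescribed initial value y(a)
    """
    local = (y(a) - y0) ** 2                  # error at the starting point
    h = (b - a) / n
    integral = 0.0
    for i in range(n + 1):                    # trapezoidal rule
        x = a + i * h
        r = (dy(x) - g(x, y(x))) ** 2         # squared residual of the ODE
        w = 0.5 if i in (0, n) else 1.0
        integral += w * r * h
    return local * integral

# Example: y' = y with y(0) = 1. The exact solution exp(x) gives zero error;
# a candidate with a wrong initial value and nonzero residual does not.
exact = composite_error(math.exp, math.exp, lambda x, y: y, 0.0, 1.0, 1.0)
approx = composite_error(lambda x: 1.05 + x, lambda x: 1.0,
                         lambda x, y: y, 0.0, 1.0, 1.0)
```

Note that the product form rewards candidates that are accurate both at the initial point and over the whole interval, which is exactly the behavior motivated above.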
To illustrate this procedure, consider the differential equation

y' + 2xy = 0, (2.1)

with initial condition y(-1) = e^(-1), which admits the solution

y = e^(-x^2) (2.2)

in the [-1, 1] interval.

If one tries to obtain, by using MATLAB, the 8th-degree Legendre's polynomial that best fits (2.2), the result, in double-precision format, is

y = 0.0263x^8 - 0.1551x^6 + 0.4963x^4 - 0.9996x^2 + 1.0000. (2.3)

Substituting y, given by (2.3), into the original differential equation (2.1), the integral error is 1.9 x 10^-8 over the whole interval.

If one continues the process, taking the result given by (2.3) as an initial solution candidate for the original equation (2.1) and using the method developed in the next sections, the solution

y = 0.0242x^8 - 0.1511x^6 + 0.4940x^4 - 0.9992x^2 + 1.0000, (2.4)

with an integral error of 8.7 x 10^-9 over the whole interval, is obtained.
This result is related to the fact that the polynomial given by (2.3) is adequate to fit the function given by (2.2), without considering its derivatives. In order to adapt the solution to the differential equation (2.1), the derivatives must be adjusted in the same way.
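A hedged numerical check of the example above can be sketched as follows. The assumption made here (labeled as such) is that the model problem behind (2.1)-(2.2) is y' + 2xy = 0 with y(-1) = e^(-1) and solution y = e^(-x^2); the two degree-8 polynomials are the ones quoted in (2.3) and (2.4).

```python
# Hedged check: assumes the model problem is y' + 2xy = 0, y(-1) = exp(-1),
# with solution exp(-x^2). The polynomials are the ones quoted in the text.
import math

def poly_val(c, x):
    """Evaluate a polynomial given ascending-power coefficients c."""
    return sum(ck * x**k for k, ck in enumerate(c))

def poly_der(c):
    """Ascending-power coefficients of the derivative polynomial."""
    return [k * ck for k, ck in enumerate(c)][1:]

def residual_integral(c, n=1000):
    """Trapezoidal integral of (p' + 2xp)^2 over [-1, 1]."""
    dc = poly_der(c)
    h = 2.0 / n
    total = 0.0
    for i in range(n + 1):
        x = -1.0 + i * h
        r = poly_val(dc, x) + 2 * x * poly_val(c, x)
        w = 0.5 if i in (0, n) else 1.0
        total += w * r * r * h
    return total

# (2.3): best L2 fit; (2.4): coefficients evolved by the method.
p_fit = [1.0000, 0, -0.9996, 0, 0.4963, 0, -0.1551, 0, 0.0263]
p_siv = [1.0000, 0, -0.9992, 0, 0.4940, 0, -0.1511, 0, 0.0242]
```

Both polynomials reproduce the initial value near e^(-1) ≈ 0.3679 at x = -1, and both leave a small but nonzero residual when substituted into the equation, which is the quantity the evolutionary step then minimizes.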

Analytic Foundations
In this section, the Hilbert space of piecewise continuous functions defined on a closed interval [a, b] is considered, with the scalar product between f and g, belonging to the space, defined as [4]

<f, g> = integral from a to b of f g dx. (3.1)
Considering the real interval [-1, 1], the set of Legendre's polynomials is an orthogonal basis for the space S of piecewise continuous functions defined on this interval; that is, any f in S can be written as f = sum over k of a_k P_k, with P_k being the k-degree normalized Legendre's polynomial and a_k = <f, P_k> [4, 9]. It is important to notice that both the integral and the derivative of a Legendre's polynomial can be expressed as linear combinations of the Legendre's polynomial basis [10], as the following theorems, with proofs presented in Appendix A, show.
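The expansion above can be sketched numerically. The following minimal illustration (not the authors' code) uses the classical, unnormalized Legendre polynomials, for which the coefficients become a_k = (2k+1)/2 <f, P_k>; the quadrature is a plain trapezoidal rule.

```python
# Minimal sketch of the expansion f = sum a_k P_k on [-1, 1], using the
# classical (unnormalized) Legendre polynomials.
def legendre(k, x):
    """P_k(x) by Bonnet's recurrence (j+1)P_{j+1} = (2j+1)xP_j - jP_{j-1}."""
    p0, p1 = 1.0, x
    if k == 0:
        return p0
    for j in range(1, k):
        p0, p1 = p1, ((2 * j + 1) * x * p1 - j * p0) / (j + 1)
    return p1

def legendre_coeff(f, k, n=2000):
    """a_k = (2k+1)/2 * integral_{-1}^{1} f(x) P_k(x) dx (trapezoidal)."""
    h = 2.0 / n
    s = 0.0
    for i in range(n + 1):
        x = -1.0 + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * f(x) * legendre(k, x) * h
    return (2 * k + 1) / 2.0 * s

# Example: x^2 = (1/3)P_0 + (2/3)P_2, so a_0 = 1/3, a_1 = 0, a_2 = 2/3.
a0 = legendre_coeff(lambda x: x * x, 0)
a1 = legendre_coeff(lambda x: x * x, 1)
a2 = legendre_coeff(lambda x: x * x, 2)
```

The exact coefficients of x^2 provide a quick sanity check of both the recurrence and the projection.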
Considering the theorems above about the integration and differentiation of Legendre's polynomials, it is convenient to express these results in matrix form.

Operational Matrices
As shown in [11], Theorem 3.3 allows the construction of matrices expressing any linear operation executed over the Hilbert space of continuous functions. Here, numerical methods for solving differential equations by using Legendre's polynomials are developed. Therefore, constructing these matrices for the integration and differentiation operations, expressed in a Legendre's polynomial basis, is useful.
Let f(u) be a linear combination of a generic basis {B_k} in the Hilbert space of continuous functions:

f(u) = sum from k = 0 to n of c_k B_k(u). (3.5)

Theorem 3.3. Let Z be an (n+1) x (n+1) matrix describing the coefficients resulting from a linear operation alpha over the generic basis {B_k}, expressed as functions of the same basis. Calling V = [v_0 v_1 ... v_n]^T the vector of coefficients resulting from the linear operation alpha over f(u), then V^T = C^T Z, that is, V = Z^T C, with C being the vector of the coefficients described in (3.5).
The procedure described by Theorem 3.3, proved in Appendix A, can be used for any basis of a Hilbert space, with the calculation of the matrix Z representing the main operation to be done.
Combining Theorems 3.1 and 3.3, taking Legendre's polynomials as the basis in the Hilbert space of continuous functions, if Z_Li is (n+2) x (n+2) and alpha is the integration operation, the nonzero elements of Z_Li follow from the identity

integral from -1 to u of P_k dξ = [P_(k+1)(u) - P_(k-1)(u)] / (2k+1), for k >= 1;
integral from -1 to u of P_0 dξ = P_0(u) + P_1(u), (3.6)

so that Z_Li(1, 1) = Z_Li(1, 2) = 1 and, for k >= 1, Z_Li(k+1, k+2) = 1/(2k+1) and Z_Li(k+1, k) = -1/(2k+1).

All the elements of Z_Li not defined by (3.6) are equal to zero. For instance, if n = 2, Z_Li can be written as

        [  1     1     0     0  ]
Z_Li =  [ -1/3   0    1/3    0  ]
        [  0   -1/5    0    1/5 ]
        [  0     0   -1/7    0  ]

It can be observed that the element Z_Li(n+2, n+3) = Z_Li(4, 5) does not appear, because the third-degree Legendre's polynomial was not integrated.
Combining Theorems 3.2 and 3.3, taking Legendre's polynomials as the basis in the Hilbert space of continuous functions, if Z_Ld is (n+1) x (n+1) and alpha is the derivative operation, the nonzero elements of Z_Ld follow from the identity

d/du P_k = sum over j = k-1, k-3, k-5, ... (j >= 0) of (2j+1) P_j, (3.8)

so that Z_Ld(k+1, j+1) = 2j+1 for j = k-1, k-3, .... All the elements of Z_Ld not defined by (3.8) are equal to zero. For instance, for n = 3, Z_Ld can be written as

        [ 0  0  0  0 ]
Z_Ld =  [ 1  0  0  0 ]          (3.9)
        [ 0  3  0  0 ]
        [ 1  0  5  0 ]

The integration matrix Z_Li is nonsingular and, therefore, its inverse always exists. On the other hand, Z_Ld is singular, since differentiation annihilates the constant term P_0: when a derivative is integrated back, the coefficient c_0 is not recovered. The correction to be done is to modify c_0, imposing the described condition, without changing the derivative of f(u).
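The differentiation matrix can be sketched as follows. This is an illustrative implementation, assuming the convention V = Z^T C of Theorem 3.3 and the classical identity d/du P_k = sum over j = k-1, k-3, ... of (2j+1) P_j.

```python
# Sketch of the operational differentiation matrix in a Legendre basis.
# Z[k][j] holds the coefficient of P_j in d/du P_k.
def diff_matrix(n):
    """(n+1) x (n+1) matrix of the derivative operation over P_0..P_n."""
    Z = [[0.0] * (n + 1) for _ in range(n + 1)]
    for k in range(n + 1):
        j = k - 1
        while j >= 0:          # j = k-1, k-3, ... with weight 2j+1
            Z[k][j] = 2 * j + 1
            j -= 2
    return Z

def apply_op(Z, c):
    """V = Z^T C: coefficients of the image of f = sum c_k P_k."""
    n = len(c)
    return [sum(Z[k][j] * c[k] for k in range(n)) for j in range(n)]

# Example: f = P_2 gives f' = 3 P_1; f = P_3 gives f' = 5 P_2 + P_0.
v = apply_op(diff_matrix(2), [0.0, 0.0, 1.0])
v3 = apply_op(diff_matrix(3), [0.0, 0.0, 0.0, 1.0])
```

The two examples reproduce the well-known derivatives P_2' = 3P_1 and P_3' = 5P_2 + P_0, confirming that the matrix rows encode the identity above.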
In [10], the operational matrix was derived for the particular case where alpha is an integration operation. Theorem 3.3, developed here, permits general matrix expressions for all linear alpha-operations, considering any kind of basis in the Hilbert space of continuous functions. Reference [12] shows, additionally, this development for the Fourier trigonometric and canonical bases.

Conservation of Initial Conditions in a Legendre's Basis
When using evolution methods to treat initial-value problems, expressing solutions in a Legendre's basis, the initial conditions must be conserved during the whole running process of the algorithm. This conservation depends on the correct combination of the coefficients c_i used to express the function at each step of the running process. This combination generates an algebraic linear system of equations that admits infinitely many solutions. It is solved by choosing a subset of q coefficients, with q representing the order of the original differential equation, and expressing them as functions of the others.
In order to obtain the coefficients of a function derivative, the differentiation matrix, built as explained in the former subsection, has to be used.
As an example, if a third-order equation with solutions in the [-1, 1] interval is studied, with seven coefficients for the expression in a Legendre's basis, one of the possible relationships among the coefficients expresses three of them as functions of the other four and of the vector of the initial conditions. If one needs to obtain the coefficients of the integral of the function, the integration matrix, built as explained in the former subsection, must be used, but the initial condition must be considered, as explained below.
Considering that F(u) = integral from -1 to u of f(ξ) dξ, that its Legendre expansion is given by F(u) = sum from k = 0 to n+1 of b_k P_k(u), with P_k being the k-degree Legendre's polynomial, and that the initial condition for the original differential equation is F(-1) = β, the coefficient b_0 obtained by the integration matrix must be replaced by b_0 + β. For a q-order differential equation, integrated q times, this process must be repeated q times.
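The integration step with the b_0 correction can be sketched as follows. This is an illustration (not the authors' code) built on the classical identity integral from -1 to u of P_k = [P_(k+1) - P_(k-1)]/(2k+1) for k >= 1, together with integral of P_0 = u + 1 = P_0 + P_1.

```python
# Sketch: integrate a Legendre series from -1 to u, then impose F(-1) = beta
# by correcting b_0 (the correction described in the text).
def integ_coeffs(c):
    """Coefficients b of F(u) = integral_{-1}^{u} f, for f = sum c_k P_k."""
    n = len(c)
    b = [0.0] * (n + 1)
    b[0] += c[0]                      # integral of P_0 is P_0 + P_1
    b[1] += c[0]
    for k in range(1, n):             # (P_{k+1} - P_{k-1}) / (2k+1)
        b[k + 1] += c[k] / (2 * k + 1)
        b[k - 1] -= c[k] / (2 * k + 1)
    return b

def impose_initial(b, beta):
    """Shift b_0 so that F(-1) = beta, without changing F'."""
    b = list(b)
    b[0] += beta
    return b

# Example: f = P_1 = u, so F(u) = (u^2 - 1)/2 = (P_2 - P_0)/3.
b = integ_coeffs([0.0, 1.0])
```

Because the matrix-produced primitive always vanishes at u = -1, shifting only b_0 enforces the initial condition while leaving the derivative of the series untouched.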

Domain Transposition
The purpose of this work is to develop numerical methods for solving differential equations with solutions defined on a generic closed real interval [a, b], but the analytical foundations exposed above, as well as other developed methods, consider only particular intervals [13, 14]. Here, a generic transformation providing tools to transpose any closed domain to a particular one is studied for up to fourth-order initial-value problems. Besides, the impact of this transformation on the original differential equation and initial conditions is analyzed.
Let u = T(x), T : [a, b] → [c, d], be a bijective transformation with inverse T^-1, and let y = f(x), f : [a, b] → R, be a solution of an initial-value problem, with a vector Y_0 representing the initial conditions.
The interval-transposed function y_t(u) : [c, d] → R is defined by y_t(u) = y(x), with x = T^-1(u). Calling λ = du/dx, the chain rule gives, for the first derivative,

D^1 y_t = (1/λ) D^1 y. (3.12)

Consequently, the second and third derivatives are

D^2 y_t = (1/λ^2) D^2 y - (D^1 λ / λ^3) D^1 y,

D^3 y_t = (1/λ^3) D^3 y - 3 (D^1 λ / λ^4) D^2 y + [3 (D^1 λ)^2 / λ^5 - D^2 λ / λ^4] D^1 y. (3.13)

For the sake of clarity, the fourth derivative is written by grouping the terms expressed as functions of the D^i y:

D^4 y_t = (1/λ^4) D^4 y - 6 (D^1 λ / λ^5) D^3 y + [15 (D^1 λ)^2 / λ^6 - 4 D^2 λ / λ^5] D^2 y + [10 (D^1 λ)(D^2 λ) / λ^6 - 15 (D^1 λ)^3 / λ^7 - D^3 λ / λ^5] D^1 y. (3.14)
As the solution of a differential equation is tied to the initial conditions, it is necessary to establish the domain transformations described above in the inverse direction. Therefore, calling ρ = dx/du = 1/(du/dx) and considering equations (3.12), (3.13), and (3.14) in the inverse way, with the replacement of λ by ρ, the initial conditions y_0, y^(1)_0, y^(2)_0, and y^(3)_0 are obtained.
Although the linear transformation is simple, it does not allow working with equations defined on an infinite domain. In order to take care of this problem, the transformation T, given by

u = c + (d - c) (arctan(x) - v_1) / (v_2 - v_1), (3.16)

where v_1 = arctan(a) and v_2 = arctan(b), can be used.
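Both transpositions can be sketched as follows. The function names are illustrative; the arctan-based map composes v = arctan(x) with the linear transposition of [v_1, v_2] onto [c, d], as described above.

```python
# Sketch of the two domain transpositions used in the text.
import math

def linear_T(x, a, b, c=-1.0, d=1.0):
    """Linear transposition of [a, b] onto [c, d]."""
    return c + (d - c) * (x - a) / (b - a)

def linear_T_inv(u, a, b, c=-1.0, d=1.0):
    """Inverse of the linear transposition."""
    return a + (b - a) * (u - c) / (d - c)

def arctan_T(x, a, b, c=-1.0, d=1.0):
    """Transpose via v = arctan(x), then map [v1, v2] linearly to [c, d]."""
    v1, v2 = math.atan(a), math.atan(b)
    return c + (d - c) * (math.atan(x) - v1) / (v2 - v1)

# For the transposed function y_t(u) = y(x), the chain rule then gives
# dy_t/du = (1/lambda) dy/dx, with lambda = du/dx.
u0 = linear_T(0.0, 0.0, 1.0)
u1 = linear_T(1.0, 0.0, 1.0)
```

The arctan map remains usable when a or b is pushed toward infinity, which is the motivation given in the text for introducing it.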

Evolution-Adaptive Process for Solving Initial-Value Problems
In this section, the main question to be answered is whether it is possible, with the computational power available, to generate random coefficients for an n-degree polynomial and, by applying a sequential algorithm such as the differential evolution algorithm over these coefficients, to obtain the exact solution of an initial-value problem. Due to the randomness of the original genetic algorithms, the regions of the solution space are visited in a random way, and there is no convergence. Nevertheless, when differential versions of genetic algorithms with elitism are applied, stable processes and satisfactory results are obtained for particular cases [15-18], including complex biological problems [19, 20] and quantum mechanics [21, 22].

Mathematical Problems in Engineering
Here, a differential genetic algorithm with an implicit learning process is developed: a good choice of the region of the space of solutions, based on a restriction scheme that continuously contracts the volume of the search space, guarantees stability.
Considering that an n-degree polynomial is the best candidate for the solution of an initial-value problem at a particular step of a sequential algorithm, under a certain error criterion, it is natural to ask how far from the exact solution it is. In order to treat this problem, the original differential equation is written with the highest-order derivative expressed as a function of the lower-order derivatives, the original function, and the independent variable:

D^q y = G(x, y, D^1 y, ..., D^(q-1) y). (4.1)

Replacing the best solution candidate as y into G, given by (4.1), it is possible to evaluate G over the defined interval and to fit the obtained results by a Legendre's polynomial of degree up to n - q. The fitting process follows the Gauss-Legendre quadrature method [23, 24]. After that, the obtained polynomial is integrated q times, by using the integration matrix Z_Li^T defined in Section 3.1, and considering the given initial conditions.
Comparing this result with the original candidate, the dispersion is obtained. This dispersion defines the variation band for the coefficients of the polynomial candidate in the next step of the algorithm. For the first step, the candidate is the null polynomial, providing the first variation band for each coefficient of the Legendre's expansion.
To perform the differential evolution algorithm, a population is randomly assembled inside the first variation band. Then, by using a combination of crossover, random selection, and mutation, new offspring are generated, feeding the sequence of the algorithm.
The program then starts a tournament selection. If the best output of the tournament overcomes the former winner, a new variation band is constructed. Otherwise, by using elitism with random crossover and mutation [25], the algorithm creates a new generation with each gene (coefficient) varying inside the current band, avoiding stagnation in local minima.
The described procedure is possible even when the mass matrix is singular [3], because the Gauss-Legendre abscissae do not include the interval extremes [23, 24]. If a numerical integration solution is available, it can be used as the starting point of the algorithm, instead of the null polynomial, shortening the running time of the procedure.
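The band-contracting evolutionary loop described in this section can be sketched in a deliberately simplified form: a plain power-basis polynomial, Gaussian perturbations inside the band, elitism, and band contraction on improvement. This is not the SIV implementation (it omits crossover and the Legendre machinery); it only illustrates the loop structure, for the toy problem y' = y, y(0) = 1 on [0, 1], with c_0 fixed by the initial condition.

```python
# Simplified illustration of the band-contracting evolutionary loop.
import random

def residual(c, n=50):
    """Integral of (p' - p)^2 on [0, 1] for p with ascending coefficients c."""
    h = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        p = sum(ck * x**k for k, ck in enumerate(c))
        dp = sum(k * ck * x**(k - 1) for k, ck in enumerate(c) if k > 0)
        w = 0.5 if i in (0, n) else 1.0
        total += w * (dp - p) ** 2 * h
    return total

def evolve(generations=60, pop_size=30, seed=1):
    random.seed(seed)
    band = 1.0                       # initial variation band for each gene
    best = [1.0, 0.0, 0.0, 0.0]     # null start; c0 fixed by y(0) = 1
    best_err = residual(best)
    for _ in range(generations):
        # new population around the elite member, genes varied inside the band
        pop = [[1.0] + [g + random.gauss(0.0, band) for g in best[1:]]
               for _ in range(pop_size)]
        winner = min(pop, key=residual)
        w_err = residual(winner)
        if w_err < best_err:         # elitism: accept only improvements,
            best, best_err = winner, w_err
            band *= 0.9              # and contract the search band
    return best, best_err

best, err = evolve()
```

Because the elite member is preserved and replaced only by strictly better candidates, the residual is monotonically non-increasing, mirroring the stability argument made for the full algorithm.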

Results and Analysis
Based on the analytical development presented in Section 3 and using a differential evolution algorithm, a method for solving initial-value problems, from now on abbreviated as SIV and detailed in Appendix B, was developed, considering that the solutions are expressed as linear combinations of Legendre's polynomials in the transposed domain.
Starting with an interval transposition, the SIV algorithm runs, adjusting the coefficients of the polynomial solution and minimizing the residual error obtained when the polynomial approximation, its derivatives, and the transposed initial conditions are replaced into the transposed differential equation. After a chosen number of generations, applying the inverse transposition, the solution is obtained in the original domain.
Here, the application of the SIV algorithm is shown considering four examples of nonlinear equations of up to fourth order. The obtained solutions are compared with the analytic ones, when they exist.
The algorithm is implemented with 66 chromosomes and a 5% mutation rate, on a dual-core PC with 2 GB of RAM, running MATLAB 7.
In the cases where it is not possible to express an analytic solution by a finite combination of elementary functions, the solution obtained by SIV is compared with the one obtained by Adams' method, with the maximum MATLAB precision [2, 3].

Linear Transposition: Zero-Order Bessel Equation
The first chosen example is the zero-order Bessel equation,

x D^2 y + D^1 y + x D^0 y = 0, (5.1)

with D^i denoting the ith derivative of the searched function with respect to the independent variable x, x ∈ [0, 1], and with the initial conditions given by the vector [1, 0]. Equation (5.1) has the analytic solution

y = J_0(x), (5.2)

the zero-order Bessel function of the first kind.
In order to transpose the domain from [0, 1] to [-1, 1], a linear transformation is used. For this kind of transformation, considering that a function is described on an interval [a, b] and will be transposed to an interval [c, d], the relation u = c + (d - c)(x - a)/(b - a) is used. The SIV algorithm was applied to (5.1) with a 26-coefficient polynomial approximation, starting with the null polynomial.
The final result was compared with the analytic solution (5.2). As, in this case, the mass matrix is singular [3, 23, 24], the Runge-Kutta and Adams algorithms cannot be initialized and, consequently, are not used for comparison.
After two hundred generations, an evolved Legendre's series approximation was obtained with very low error margins, as shown in Figure 1 for the obtained solution of (5.1) and the corresponding derivatives.
To give an idea about the precision of the method, Figure 2 shows the residue obtained by substituting the approximation of the function given by (5.2), obtained by the Legendre's polynomial fitting process [2], and the residue for the evolved solution of the same degree. Observing Figure 2, it can be noticed that the SIV algorithm improves the accuracy by at least two orders of magnitude. Besides, the problem of lack of accuracy at the interval extremes is considerably reduced.

Linear Interval Transposition: Fourth-Order Differential Equation
Now, an example of a fourth-order differential equation is considered. Denoting by D^i the ith derivative of the searched function, the equation to be solved is defined for x ∈ [0, π], with the initial conditions given by the vector [0, 0, 2, 0]. In order to transpose the domain from [0, π] to [-1, 1], a linear transformation, as shown in Section 5.1, is used.
Starting from the solution obtained by using MATLAB ode113 with maximum precision [2], the SIV method with 22 coefficients gives a solution with the error figures shown in Figure 3, after two hundred generations.
To give an idea about the precision of the method, Figure 4 shows the residue obtained by substituting the approximation of the function given by (5.4), obtained by the Legendre's polynomial fitting process [2], and the residue for the evolved solution of the same degree. Observing Figure 4, it can be noticed that the SIV algorithm improves the accuracy by at least two orders of magnitude. Besides, the problem of lack of accuracy at the interval extremes is considerably reduced.

Trigonometric Interval Transposition
Another example where an interval transposition is necessary is described here. Denoting by D^i the ith derivative of the searched function, the equation to be solved is defined for x ∈ [0, π], with the initial conditions given by the vector [0, 0, 2, 0]. In order to transpose the domain from [0, π] to [-1, 1], a trigonometric transformation is used. For this kind of transformation, considering that a function of x is defined on an interval [a, b], it is first transposed by means of a new function u = tan(v), defined on an interval [v_1, v_2] of v, such that v_1 = tan^-1(a) and v_2 = tan^-1(b).
Then, this interval is transposed to [c, d] for the variable u, by using the transformation (3.16). Starting with the null polynomial, with seven coefficients, and running the SIV process for six hundred generations (taking about one minute), a solution with the error figures shown in Figure 5 was obtained.
As a 6th-order approximation for the function given by (5.6) has a very small error, Figure 6 shows only the residue of the evolved series, since it is practically the exact solution.

Equation without Analytic Solution
There are equations whose analytic solutions cannot be expressed by a finite combination of elementary functions. Here, an example of this kind is presented. Denoting by D^i the ith derivative of the searched function, the equation to be solved is defined for x ∈ [0, 5], with the initial conditions given by the vector [0, 0, 0.4699]. After a linear interval transposition, the SIV algorithm started with a 21-degree null polynomial. After four hundred generations, a solution with the residual errors shown in Figure 7 was obtained.
As the analytic solution cannot be expressed as a finite combination of elementary functions, in order to establish comparisons, Adams' method, implemented with the maximum MATLAB precision [2], was used to perform the Legendre's polynomial fitting of the function. Figure 8 shows that the SIV algorithm improves the precision by at least one order of magnitude, attenuating the lack of accuracy at the domain extremes.

Conclusions
A method combining genetic algorithms with Legendre's polynomial approximation was proposed, in order to solve nonlinear initial-value problems for differential equations of up to fourth order.
The method presents several advantages.
(i) It provides analytical expressions for the solutions and their derivatives over the whole domain, instead of a low-degree piecewise polynomial;
(ii) the database is mounted in the transposed domain, according to Section 3.

B.2. Part 2-Adaptive-Evolutive Algorithm
(i) The number of coefficients of the expansion is chosen. The program used here is prepared for up to 41 coefficients.
(ii) Each chromosome (individual) is a set of coefficients of a series expansion of the solution, and each gene is a coefficient.
(iii) The zero element of the process is the null polynomial. The coefficients are adjusted following the idea expressed by (3.10), by changing q coefficients (q being the order of the equation), in order to obey the initial conditions. Consequently, the first winner appears.
(iv) The winner is replaced into (4.1), with G being evaluated at several points of the domain. In order to avoid the Runge phenomenon, the chosen points are the nodes of the polynomial basis of the evolutive process.
(v) The series expansion for G is obtained in the basis chosen for the evolutive process, by using Gauss-Legendre numerical integration.
(vi) Having the integration matrix and the initial conditions, the series representing G is integrated q times.
(vii) The winner coefficients are compared with the ones obtained in the former step. If they are equal, the solution is an exact one. If they are not equal, error bands can be determined for each coefficient, restricting the search space and improving the speed and efficiency of the process.
(viii) Having the coefficient error bands, the winner is used as the central element to construct a new population, with coefficients varying randomly within the band, which works as a standard deviation for each coefficient.
(ix) The population is submitted to a tournament, with the ranking depending on the residue obtained by substituting the candidate into the original equation. If the winner of the tournament surpasses the former winner, the process returns to step (iv). If it does not, the process goes to the next step.
(x) The individual evaluation in the tournament is performed simply by replacing the candidate in the equation and analyzing the residue.
(xi) The lower the residue, the better the candidate ranking.
(xii) A mutation is performed in a few genes (about 5%). The 20% worst individuals are discarded, and the champion is preserved for a new round. A new population is formed by combination of the former population members. The process returns to step (ix), up to the number of generations chosen when the algorithm started to run.
(xiii) The winner of the last generation is considered the final winner and, after a domain transposition, the results are presented.

Figure 1: Zero-order Bessel equation: error in the function and derivatives after 200 generations, comparing the SIV and analytic solutions.

Equation (5.3) has the analytic solution f(x) = x sin(x). Residual RMS: 4.18e-10 for the polynomial by best fitting; 2.33e-13 for the polynomial by SIV.

Figure 2: Zero-order Bessel equation: residue due to the best-fitting polynomial compared with the residue due to the SIV polynomial.

Equation (5.3) has the analytic solution f(x) = tan^-1(x^2).

Figure 3: A fourth-order nonlinear equation with linear domain transposition: error in the function and derivatives after 200 generations, comparing the SIV and analytic solutions.

Figure 4: A fourth-order nonlinear equation with linear domain transposition: residue due to the best-fitting polynomial compared with the residue due to the SIV polynomial.

Figure 5: A fourth-order nonlinear equation with trigonometric domain transposition: error in the function and derivatives after 600 generations, comparing the SIV and analytic solutions.