Approximate Periodic Solutions for Oscillatory Phenomena Modelled by Nonlinear Differential Equations

We apply the Fourier-least squares method (FLSM), which allows us to find approximate periodic solutions for a very general class of nonlinear differential equations modelling oscillatory phenomena. We illustrate the accuracy of the method on several significant nonlinear problems, including the cubic Duffing oscillator, the Van der Pol oscillator, and the Jerk equations. The results are compared to those obtained by other methods.


Introduction
Oscillatory phenomena are frequently encountered in many fields of science, such as physics, molecular biology, and numerous branches of engineering. Such phenomena can be modelled by nonlinear differential equations, which are among the most important mathematical tools for understanding them. As is well known, finding exact solutions of nonlinear differential equations is possible only in particular cases. This justifies the need for approximate methods that compute approximate periodic solutions, which in turn can provide important information about the phenomena studied.
Equation (1) is very general, being able to model a large class of oscillatory phenomena. Consequently, since our method can be applied to (1), it can be regarded as a powerful and widely applicable method.
We remark that there has recently been considerable interest in finding approximate periodic solutions of nonlinear differential equations of type (1) with conditions (2), and many approximate methods have been proposed for this purpose.
Among the methods used to compute such approximate periodic solutions we mention the following.
If the problem consisting of (1) and conditions (2) admits a periodic solution, the FLSM allows us to determine approximations of this solution as follows.
We will find approximate Fourier-solutions x̃ of the problem consisting of (1) and conditions (2) on R which satisfy conditions (6) and (7).

Definition 2. One calls an ε-approximate Fourier-solution of the problem consisting of (1) and conditions (2) a Fourier-function x̃ satisfying the relations (6) and (7).
Definition 3. One calls a weak ε-approximate Fourier-solution of the problem consisting of (1) and conditions (2) a Fourier-function x̃ satisfying the corresponding relation together with the initial conditions (7).
Definition 5. One considers a Fourier-sequence {x̃p(t)}, p ∈ N, satisfying the given conditions. We call the Fourier-sequence {x̃p(t)}, p ∈ N, convergent to the solution of the problem consisting of (1) and conditions (2) if the corresponding convergence conditions hold.

We will find a weak ε-approximate Fourier-solution of the type (8), where p ≥ 1 and the constants ã1, ..., ãp, b̃2, ..., b̃p, ω̃ are calculated using the following steps.
The following convergence theorem holds.

Theorem 6.
If the problem consisting of (1) and conditions (2) admits a periodic solution, then the Fourier-sequence x̃p(t) from (15) satisfies the stated convergence property. Moreover, for all ε > 0 there exists p0 ∈ N such that, for all p ∈ N with p > p0, x̃p(t) is a weak ε-approximate Fourier-solution of the problem consisting of (1) and conditions (2).
Proof. From the fact that the problem consisting of (1) and conditions (2) admits a periodic solution, it follows that the corresponding series exists and that its sequence of partial sums converges to the solution of the problem. Based on the way the Fourier-function x̃p(t) is computed, and taking into account relations (8)-(15), the stated inequality holds. Passing to the limit, we obtain that for all ε > 0 there exists p0 ∈ N such that, for all p ∈ N with p > p0, x̃p(t) is a weak ε-approximate Fourier-solution of the problem consisting of (1) and conditions (2).
Remark 7. Any ε-approximate Fourier-solution of the problem consisting of (1) and conditions (2) is also a weak ε-approximate Fourier-solution, but the converse is not always true. It follows that the set of weak ε-approximate Fourier-solutions of the problem also contains the ε-approximate Fourier-solutions of the problem.
Taking into account the above remark, in order to find ε-approximate Fourier-solutions of the problem consisting of (1) and conditions (2) by the Fourier-least squares method, we will first determine weak ε-approximate Fourier-solutions x̃. If |R(t, x̃)| < ε, then x̃ is also an ε-approximate Fourier-solution of the problem.
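This residual check can be sketched numerically (a minimal sketch, not the authors' SAGE code): sample |R(t, x̃)| on a fine grid over one period and compare its maximum with ε. The cubic-oscillator residual form, the one-harmonic candidate, its coefficient values, and the tolerance below are all our own illustrative assumptions.

```python
import numpy as np

EPS_NL = 1.25  # hypothetical nonlinearity parameter of a cubic oscillator

# Hypothetical one-harmonic candidate x(t) = a0 + a1*cos(w*t)
a0, a1, w = 0.0414, 1.9586, 2.1643
x   = lambda t: a0 + a1 * np.cos(w * t)
xpp = lambda t: -a1 * w ** 2 * np.cos(w * t)

# Sample the residual R(t, x) = x''(t) + x(t) + EPS_NL * x(t)^3 over one period
t = np.linspace(0.0, 2 * np.pi / w, 2000)
sup_R = np.max(np.abs(xpp(t) + x(t) + EPS_NL * x(t) ** 3))

eps_tol = 0.1  # hypothetical tolerance
print(sup_R, sup_R < eps_tol)
```

A one-harmonic candidate is typically only a weak ε-approximate solution for a tight tolerance: its sup-norm residual stays visibly nonzero even when the integral of R² has been minimized.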

Applications
The test problems included in this section are the Duffing oscillator (two cases: an autonomous one and one involving integral forcing terms) and the Jerk equations.
These problems have been studied extensively over the years, and various solutions, both approximate analytical and numerical, have been proposed.
The qualitative properties of these oscillators have also been studied extensively. Stability and bifurcation studies for the Duffing oscillators include [36-38], among many others. For Jerk-type equations, a comprehensive bifurcation study can be found in [39] and a study of the limit cycles in [40]. Following the computations presented in these papers, corresponding conclusions can be drawn for the problems studied in Sections 3.1-3.3. For example, in the case of the autonomous Duffing oscillator (23), for the parameter values considered in the computations (ε = 1.25 and A = 2), a quick computation similar to the one in [36] indicates that the only equilibrium point is the origin, which is a center; similar computations can be performed for the other problems.
In the following we compute approximate solutions for the Duffing oscillator and the Jerk equations and compare our results with similar analytical approximations previously computed by using other methods.

Application 1:
The Autonomous Duffing Oscillator. Our first test problem is the autonomous Duffing oscillator (23). The Duffing oscillator is extensively studied in the literature, and some relatively recent results are presented in [12, 19]. In [12], approximate solutions are computed using the rational harmonic balance method (RHB), and in [19] approximate solutions of (23) are computed using a variational iteration procedure (VI). The approximate frequency of the oscillations obtained by using these methods is compared with the exact value known in the literature.
In the following, in order to obtain our approximation of the frequency and of the solution, we will perform the steps described in the previous section.The computations were performed using the SAGE open source software (version 5.5, available at http://www.sagemath.org/).
We will present in detail our computations for the values ε = 1.25 and A = 2.
Since the computation of the minimum of the functional (14) is relatively difficult for large values of p in (8), we will actually use an iterative procedure, starting with p = 1 and increasing the value until the desired precision is achieved.

Approximate Solution for 𝑝 = 1.
Taking into account these considerations, we first choose the approximate solution (8) in one-harmonic form. In Step 1 we write the corresponding expression (13). Taking into account the initial conditions x̃(0) = A and x̃(1)(0) = 0, we obtain the relations ã0 = A − ã1 and b̃1 = 0. Replacing these values, we obtain the corresponding functional (14) from Step 2. In Step 3 we must compute the minimum of this functional with respect to ã1 and ω̃. For relatively simple problems such as this one, it is possible to compute directly the critical points of the functional and subsequently select the value corresponding to the minimum.
In general, the critical points of the functional (14), with respect to (ã1, ..., ãp, b̃2, ..., b̃p, ω̃), are the solutions of the system (28). For p = 1 this system can be solved directly. We used the "solve" command in SAGE and, after excluding the complex solutions, we found the critical points. In order to select the minimum, we use the second partial derivative test, which is easy enough to implement in SAGE, and find that ã1 = 1.95861253449, ω̃1 = 2.16432748538 is the local minimum. Using again the relations ã0 = −ã1 + 2 and b̃1 = 0, we replace all the values in the expression of the approximate solution (8) and obtain our first approximation, x̃1. In Figure 1 we present the comparison between our approximate solution x̃1 (solid line) and the numerical solution obtained by using a fourth-order Runge-Kutta method (dotted line).
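The p = 1 computation above can be reproduced numerically (a sketch under assumptions, not the authors' SAGE code: we take the oscillator in the form x'' + x + εx³ = 0 with x(0) = A, x'(0) = 0, discretize the functional as the integral of R² over one period, and use SciPy's derivative-free minimizer instead of symbolic critical-point analysis):

```python
import numpy as np
from scipy.optimize import minimize

EPS, A = 1.25, 2.0  # parameter values used in this section

def J(params, n=400):
    """Discretized least-squares functional: integral of R(t)^2 over one
    period for the one-harmonic ansatz x(t) = a0 + a1*cos(w*t), a0 = A - a1."""
    a1, w = params
    a0 = A - a1                        # enforces x(0) = A; x'(0) = 0 holds automatically
    t = np.linspace(0.0, 2 * np.pi / w, n)
    x = a0 + a1 * np.cos(w * t)
    xpp = -a1 * w ** 2 * np.cos(w * t)
    R = xpp + x + EPS * x ** 3         # residual of x'' + x + EPS*x^3 = 0
    return (2 * np.pi / w) * np.mean(R ** 2)

res = minimize(J, x0=[A, 2.0], method="Nelder-Mead")
a1_opt, w_opt = res.x
print(a1_opt, w_opt)  # compare with the reported 1.95861253449 and 2.16432748538
```

The minimizer lands near the values reported in the text; small discrepancies are expected, since the exact form of the functional (14) is not reproduced here.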
As we already saw, the approximate value of the frequency obtained here is ω̃1 = 2.16432748538, and it is already close to the exact value, which in this case is 2.150416169536.
We observe that, generally speaking, if the value of p in the expression of x̃(t) is larger than 1, then the system of equations (28) which gives the critical points cannot be solved directly. In the particular case of SAGE, the "solve" command fails to find the solutions and exits with an error.
In this situation it is still possible to find good approximations of the solutions of the problem by solving the system (28) numerically. More precisely, we can find approximate solutions of the given problem (23) by solving (28) with a SAGE implementation of the well-known Newton method.

Approximate Solution for 𝑝 > 1.
As the following results will show, the Newton method is able to find approximate solutions of (28) which can lead to highly accurate approximate solutions of the problem (23).
For p = 2 the approximate solution (8) has the corresponding two-harmonic form. After we compute the expressions of R(t, ã0, ã1, ã2, b̃1, b̃2, ω̃), of the functional (14), and of the system (28) (all too large to insert here), we apply Newton's method, taking as the starting point of the iteration (ã1,s, ã2,s, b̃2,s, ω̃s), where ã1,s and ω̃s are the values computed for the previous approximation p = 1, namely ã1,s = 1.95861253449 and ω̃s = 2.16432748538. In order for the sequence of approximations given by Newton's method to converge to the solution(s) of the system (28), ã2,s and b̃2,s successively take values on a given grid of the type G = Gã2 × Gb̃2, where each factor is a division of a symmetric interval centered at zero.
In the particular case of problem (23) it was actually sufficient to choose ã2,s = 0 and b̃2,s = 0, and the sequence converged to the minimum of the functional.
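The grid multistart strategy can be sketched as follows (a sketch under assumptions: SciPy's derivative-free Nelder-Mead minimizer is used in place of the Newton iteration on the gradient system (28), and the p = 2 ansatz with its initial-condition constraints is reconstructed by us, since the original expressions are not reproduced in the text):

```python
import itertools
import numpy as np
from scipy.optimize import minimize

EPS, A = 1.25, 2.0

def J(params, n=400):
    """Least-squares functional for the p = 2 ansatz
    x(t) = a0 + a1*cos(wt) + b1*sin(wt) + a2*cos(2wt) + b2*sin(2wt),
    with a0 = A - a1 - a2 and b1 = -2*b2 enforcing x(0) = A, x'(0) = 0."""
    a1, a2, b2, w = params
    a0 = A - a1 - a2
    b1 = -2.0 * b2
    t = np.linspace(0.0, 2 * np.pi / w, n)
    c1, s1 = np.cos(w * t), np.sin(w * t)
    c2, s2 = np.cos(2 * w * t), np.sin(2 * w * t)
    x = a0 + a1 * c1 + b1 * s1 + a2 * c2 + b2 * s2
    xpp = -w ** 2 * (a1 * c1 + b1 * s1) - 4 * w ** 2 * (a2 * c2 + b2 * s2)
    R = xpp + x + EPS * x ** 3
    return (2 * np.pi / w) * np.mean(R ** 2)

# Multistart over a small symmetric grid for the new unknowns a2, b2,
# reusing the p = 1 values as starting points for a1 and w.
best = None
for a2_s, b2_s in itertools.product([-0.2, 0.0, 0.2], repeat=2):
    r = minimize(J, x0=[1.95861253449, a2_s, b2_s, 2.16432748538],
                 method="Nelder-Mead")
    if best is None or r.fun < best.fun:
        best = r
print(best.x, best.fun)
```

Starting each local search from the previous-order solution keeps the iteration inside the basin of the relevant minimum, which is exactly the role the grid plays in the text.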
The process can be carried on for increasing values of p until the desired accuracy is reached.
While at first look the computations may seem long and tedious, by using a software system such as SAGE we were able to perform them easily and quickly, obtaining very good accuracy. Thus, for p = 7, we obtained the approximate frequency ω̃7 = 2.15041853722 and the corresponding approximate solution (8). In Figure 2 we present the comparison between the approximate solution x̃7 (solid line) and the numerical solution obtained by using a fourth-order Runge-Kutta method (dotted line).
In [19], approximate solutions for the problem (23) were computed for the cases ε = 1.25, A = 2 (studied above) and ε = 250, A = 2. For the case ε = 1.25, A = 2, Table 1 presents the comparison of the absolute errors (computed as the difference in absolute value between the approximate solution and the corresponding numerical solution given by the Runge-Kutta method) corresponding to the approximate solution xVI from [19] and to our x̃7.
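This absolute-error computation can be sketched numerically. Since the expressions of xVI and of the higher-order FLSM solution are not reproduced in the text, the sketch below uses the p = 1 approximation from the previous subsection as the candidate, and SciPy's RK45 integrator stands in for the paper's classical fourth-order Runge-Kutta method:

```python
import numpy as np
from scipy.integrate import solve_ivp

EPS, A = 1.25, 2.0

# p = 1 FLSM approximation: x1(t) = a0 + a1*cos(w*t), a0 = 2 - a1
a1, w = 1.95861253449, 2.16432748538
x1 = lambda t: (A - a1) + a1 * np.cos(w * t)

# Reference solution of x'' + x + EPS*x^3 = 0, x(0) = A, x'(0) = 0
def rhs(t, y):
    return [y[1], -y[0] - EPS * y[0] ** 3]

t_eval = np.linspace(0.0, 2 * np.pi / w, 500)   # roughly one period
sol = solve_ivp(rhs, (0.0, t_eval[-1]), [A, 0.0],
                t_eval=t_eval, rtol=1e-10, atol=1e-12)

abs_err = np.abs(sol.y[0] - x1(t_eval))
print(np.max(abs_err))
```

The one-harmonic approximation leaves a visible error (it cannot capture the higher harmonics of the true solution), which is precisely what the higher-order x̃7 reduces in Tables 1 and 2.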
For the case ε = 250, A = 2, we also computed the approximate solution using our method. Table 2 presents the comparison of the absolute errors (computed as the difference in absolute value between the approximate solution and the corresponding numerical solution given by the Runge-Kutta method) corresponding to the approximate solution xVI from [19] and to our x̃7.
Finally, in Table 3, we compare the approximate values of the frequency ω̃7 computed using our method with the approximate values computed in [19] (ωVI) and [12] (ωRHB). The comparison is made by means of the percentage error, which for a given approximate frequency ωapprox is defined as Δωapprox = 100 ⋅ (|ωexact − ωapprox|/ωexact), where ωexact is the corresponding exact frequency. It is easy to see that our approximations are far better than the ones previously computed, and they remain accurate even in the case of a very strong nonlinearity.
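Using the two frequency values quoted in this section, the percentage error can be evaluated directly (a small sketch; the variable names are ours):

```python
# Percentage error of an approximate frequency, as defined in the text:
# 100 * |w_exact - w_approx| / w_exact
w_exact  = 2.150416169536   # exact frequency quoted for eps = 1.25, A = 2
w_approx = 2.15041853722    # FLSM approximate frequency for p = 7

pct_err = 100.0 * abs(w_exact - w_approx) / w_exact
print(pct_err)  # on the order of 1e-4 percent
```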
Application 2:
The Duffing Oscillator with Integral Forcing Terms. The problem (38) (see [14, 15]) is a version of the well-known Duffing equation involving both integral and nonintegral forcing terms with separated boundary conditions. This equation has been studied in a series of recent papers, including [14, 15].
In [14], the authors applied a generalized quasilinearization technique to prove the existence and uniqueness of the solution of Duffing equation involving both integral and nonintegral forcing terms.They showed that there are sequences of approximate solutions converging monotonically and quadratically to the unique solution of the problem.
In [15], the authors gave a representation of the exact and approximate solutions of the Duffing equation involving both integral and nonintegral forcing terms in a reproducing kernel space (RKS). They represented the exact solution in the form of a series and showed that the n-term approximation of the exact solution converges to the exact solution.
Next we present our results for (38) using FLSM.Also, we will compare these results with those obtained in [15].
Thus, for p = 3, we obtained the corresponding approximate periodic solution (8). Since in [15] only the numerical results are presented, while the expression of the approximate solution is not, we cannot perform a direct graphical comparison of our approximate solution with the corresponding solution from [15]. Therefore, in Table 4, we present the comparison of several values of the absolute errors (computed as the difference in absolute value between the exact solution and the approximate solution) corresponding to the approximate solutions xRKS from [15] and to our approximate solutions, for t = 0.1, 0.2, ..., 1, as given in [15].
Application 3:
The Jerk Equations. Recently, Ma et al. [3], using the homotopy perturbation method, obtained high-order analytic approximate periods and periodic solutions of the Jerk equation (41). In [8, 9], Gottlieb used the lowest-order harmonic balance method to determine analytical approximations to the periodic solutions of the Jerk equations. Also, Leung and Guo [27] obtained approximations of the angular frequency and the limit cycle of (41) based on the residue harmonic balance approach.
Table 5 presents the comparison of the absolute errors (computed as the difference in absolute value between the approximate solution and the corresponding numerical solution given by the Runge-Kutta method) corresponding to the approximate solution xHPM from [3] and our approximate solution x̃3 for the case A = 0.5. Also, for A = 0.5, an approximate periodic solution xRHB was computed in [27]. Our approximate frequency and period are ω̃5 = 0.61535099365 and T̃5 = 10.2107339908715, with an error of −0.000264516312410058 for the period.
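As a quick consistency check (our own sketch, using only the values quoted above), the approximate period and frequency satisfy the expected relation T = 2π/ω:

```python
import math

w5 = 0.61535099365        # approximate frequency quoted above
T5 = 10.2107339908715     # approximate period quoted above

print(abs(2 * math.pi / w5 - T5))  # agreement to better than 1e-4
```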
Table 6 presents the comparison of the absolute errors (computed as the difference in absolute value between the approximate solution and the corresponding numerical solution given by the Runge-Kutta method) corresponding to the approximate solution xRHB from [27] and our approximate solution x̃5 for the case A = 0.5. In this case we omit the graphical representation of the approximate solutions xRHB and x̃5, since they are both very close to the numerical solution.
It is easy to see from Tables 5 and 6 that the solutions obtained by using FLSM are more accurate than the ones computed by using other methods. This fact is emphasized by Table 7, which presents a comparison of approximate periods computed in several papers, as presented in [27]. In this table, Tp denotes an approximate period obtained by means of an approximate solution containing terms up to sin(p⋅ω⋅t).

Conclusions
In the present paper the Fourier-least squares method (FLSM) is introduced as a straightforward and efficient method to compute approximate periodic solutions for a very general class of nonlinear differential equations modelling oscillatory phenomena. Since (1) is very general, being able to model a large class of oscillatory phenomena, FLSM can be considered a powerful and useful method.
The test problems include the cubic Duffing oscillator, the Van der Pol oscillator, and the Jerk equations. The computation of approximate solutions by FLSM clearly illustrates the accuracy of the method by comparison with approximate solutions previously computed using other methods.

Table 1 :
Comparison of the absolute errors of the approximate solutions for problem (23) in the case ε = 1.25, A = 2.

Table 2 :
Comparison of the absolute errors of the approximate solutions for problem (23) in the case ε = 250, A = 2.

Table 4 :
Comparison of the absolute errors of the approximate solutions for problem (38).

Table 5 :
Comparison of the absolute errors corresponding to the solution xHPM from [3] and to our solution x̃3.

Table 6 :
Comparison of the absolute errors corresponding to the solution xRHB from [27] and to our solution x̃5.

Table 7 :
Comparison of approximate periods.