Based on the fixed point concept in functional analysis, an improvement of the traditional spectral method is proposed for nonlinear oscillation equations with periodic solutions. The key idea of this new approach, the spectral fixed point method (SFPM), is to construct a contractive map that replaces the nonlinear oscillation equation by a sequence of linear oscillation equations, which can usually be solved relatively easily. Unlike other numerical methods, such as the well-known Runge-Kutta method, SFPM directly obtains the Fourier series solution of the nonlinear oscillation without resorting to the Fast Fourier Transform (FFT) algorithm. Meanwhile, a steepest descent seeking algorithm is proposed within the framework of SFPM to improve the computational efficiency. Finally, some typical cases are investigated by SFPM, and the comparison with the Runge-Kutta method shows that the present method is highly accurate and efficient.
1. Introduction
Oscillation phenomena are very common in nature and in industry [1–3], and they are of great interest to scientists and engineers. Most oscillating systems are inherently nonlinear, and the superposition principle does not hold for them, so they are more difficult to handle than linear ones [2, 3].
In this paper, we focus on the initial value problem of the free nonlinear oscillator with cyclic motion, governed by
(1)u¨-φ(t,u,u˙,u¨)=0,t≥0,
where the dot denotes the derivative with respect to the time t and u is a physical variable, such as displacement. Usually the free nonlinear oscillator with cyclic motion has a limit cycle, which is independent of initial conditions. Then without loss of generality, the following initial value condition is considered:
(2)u(0)=1,u˙(0)=0.
Thus far, two branches of approaches exist for handling the aforementioned nonlinear oscillation equations. The first is purely analytical; among these techniques, the well-known perturbation technique is widely applied to nonlinear oscillation equations [3–5]. Besides, the homotopy analysis method [6, 7] and the variational iteration method [8] have recently been used to investigate nonlinear oscillation equations.
On the other hand, numerical methods are adopted to solve nonlinear oscillations, for example, the Runge-Kutta method and the spectral method. When the aforementioned problem is numerically integrated by the Runge-Kutta method, the solution u(t) is obtained at the discrete times {tn=nΔt, n=0,1,2,3,…,N}, where the time step Δt is restricted by a stability condition. It is well known that a function u(t) with period T can be expressed by a Fourier series. To acquire spectral characteristics of u(t), such as the frequency and the amplitude-frequency distribution, a discrete Fourier analysis must be carried out on the discrete solution {u(tn)∣tn=nΔt, n=0,1,2,3,…,N}. When N becomes large, the Fast Fourier Transform (FFT) algorithm, which is somewhat complex, should be adopted for computational efficiency [9].
In the recent decade, the spectral method [9–12] has prevailed due to its high accuracy. The function u(t) is approximated by a sum of orthogonal functions, for example, the complex exponential functions for a 2π-periodic function:
(3)u(t)≈UN(t)=∑k=-KKu^kexp(ikt),N=2K+1.
Since the function u(t) is assumed to be real, the two Fourier coefficients with opposite values of k are complex conjugates; that is, u^k=u^-k¯, where the bar denotes the complex conjugate. Nonlinear terms, such as u3 and uu˙, are handled by the pseudospectral technique, which relies on the FFT algorithm for computational efficiency, especially for large N. Meanwhile, a dealiasing technique is needed to alleviate the aliasing error [9]. The accuracy of the approximate solution UN(t) is mainly decided by the number N; usually, the larger N is, the more accurate UN(t) becomes. The idea of the spectral method is clear and straightforward, but the appropriate N is problem dependent and cannot be determined beforehand. Usually several values, such as N1, N2, and N3 (N1<N2<N3), must be tried to find an N that satisfies the accuracy demand. What, then, is the relationship between UN1(t), UN2(t), and UN3(t)? The computational cost would be economical if the method had a succession property, meaning that the more accurate (higher order) approximation UN3(t) could be acquired from the less accurate (lower order) approximations UN1(t) and/or UN2(t) by adding correction terms, without discarding the existing less accurate ones. Unfortunately, the traditional spectral method does not have this succession property, and the valuable information provided by a less accurate approximation is not fully utilized when we seek more accurate ones.
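The truncated expansion (3) and the rapid coefficient decay that spectral methods exploit can be illustrated with a short numerical sketch; the smooth test function below is an arbitrary illustrative choice, not one of the oscillators studied in this paper:

```python
import numpy as np

# Truncated Fourier approximation U_N(t), Eq. (3), of a smooth
# 2*pi-periodic test function; the coefficients are computed with the
# equispaced trapezoid rule, which is spectrally accurate for periodic f.
def fourier_coeffs(f, K, n_samples=256):
    t = 2 * np.pi * np.arange(n_samples) / n_samples
    u = f(t)
    k = np.arange(-K, K + 1)
    # u_k = (1 / 2 pi) * integral_0^{2 pi} u(t) exp(-i k t) dt
    return np.array([np.mean(u * np.exp(-1j * kk * t)) for kk in k]), k

def evaluate(coeffs, k, t):
    return np.real(sum(c * np.exp(1j * kk * t) for c, kk in zip(coeffs, k)))

f = lambda t: 1.0 / (2.0 + np.cos(t))      # smooth and 2*pi-periodic
coeffs, k = fourier_coeffs(f, K=10)
t = np.linspace(0.0, 2.0 * np.pi, 101)
err = np.max(np.abs(evaluate(coeffs, k, t) - f(t)))
print(err)                                  # decays geometrically as K grows
```

For an analytic periodic function the error decays geometrically in K, which is the accuracy advantage the paper builds on; the conjugate symmetry of the coefficients reflects u^k=u^-k¯ for real u(t).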
Recently, the fixed point method [13–15], which is based on the fixed point concept in functional analysis, has been adopted to acquire series solutions of differential equations. In this paper, the ideas of the fixed point method and the traditional spectral method are combined, and the spectral fixed point method (SFPM) is proposed. By SFPM, we can directly obtain an explicit Fourier series solution of a nonlinear oscillation with cyclic motion. Moreover, SFPM inherits the high accuracy of the traditional spectral method, and the spectral characteristics of the solution are obtained simultaneously without resorting to the FFT algorithm. Furthermore, it is notable that SFPM possesses the succession property, so its computational cost is economical.
The organization of the rest of this paper is as follows. In Section 2 the idea of the spectral fixed point method is elaborated, and the steepest descent seeking algorithm is proposed to improve computational efficiency. In Section 3, two examples are investigated by SFPM in detail. Finally, Section 4 is devoted to concluding remarks.
2. The Spectral Fixed Point Method
2.1. The Key Idea of the Spectral Fixed Point Method
The fixed point is a basic concept in functional analysis [16, 17]. The famous Newton's method for nonlinear algebraic equations is based on the Banach fixed point theorem. In [13], the fixed point concept was extended to solve nonlinear differential equations, and the fixed point method (FPM) was proposed.
In the present work, the idea of FPM and the spectral method are combined; therefore, the spectral fixed point method (SFPM) is brought forward to investigate the nonlinear oscillation problem.
Let ω denote the frequency of u(t) governed by (1) and (2). When we introduce the transformation:
(4)τ=ωt,u=u(τ),
the original governing equation is rearranged as follows:
(5) 𝒜[u] = u′′ - φ(τ/ω, u, ωu′, ω²u′′)/ω² = 0, u(0)=1, u′(0)=0,
where the prime denotes the derivative with respect to τ and 𝒜[·] is a nonlinear operator. According to the transformation (4), it is clear that u(τ) is a function with period 2π; that is, u(τ)=u(τ+2π).
Here, a contractive map 𝒯[·] is constructed as follows:
(6)𝒯[u]=u-β·ℒC-1[𝒜[u]],
where ℒC[·] is chosen as a linear continuous bijective operator, called the linear characteristic operator, and ℒC-1[·] is its inverse. β is a real nonzero free parameter, called the relaxation factor, which can improve the convergence and stability of the iteration procedure; its optimal value βopt usually depends on the problem to be solved. From (6), an iteration procedure is built up as follows:
(7) un+1 = 𝒯[un] = un - βn+1·ℒC-1[𝒜[un]], un+1(0)=1, un+1′(0)=0, n=0,1,2,…,
which is equivalent to
(8) ℒC[un+1] = ℒC[un] - βn+1·𝒜[un], un+1(0)=1, un+1′(0)=0, n=0,1,2,…,
where {βn∣n=1,2,3,…} is a sequence of relaxation factors. According to (8), we can obtain a solution sequence {un∣n=0,1,2,…}. If the convergence of {un∣n=0,1,2,…} is ensured, then taking the limit on both sides of (8) shows that the limit u* is exactly a zero of the nonlinear operator 𝒜[·]; that is,
(9)𝒜[u*]=0,u*(0)=1,u*′(0)=0.
Then u* is called a fixed point of the contractive map 𝒯[·].
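The fixed point mechanism of (6)-(9) can already be seen in a scalar toy problem: iterating x ↦ x - β·A(x) converges to a zero of A whenever the map is contractive near that zero, and the relaxation factor β controls the contraction rate. The choice A(x) = x² - 2 below is purely illustrative, not from the paper:

```python
# Scalar analogue of the contractive map (6): x_{n+1} = x_n - beta * A(x_n).
# For A(x) = x**2 - 2, the iteration converges to the fixed point x* = sqrt(2)
# provided the map is contractive there, i.e. |1 - 2 * beta * x*| < 1.
def fixed_point_iterate(A, x0, beta, n_iter=100):
    x = x0
    for _ in range(n_iter):
        x = x - beta * A(x)     # one step of the contractive map
    return x

A = lambda x: x * x - 2.0
root = fixed_point_iterate(A, x0=1.0, beta=0.3)
print(root)                      # ~1.41421356, a zero of A
```

In SFPM the same scheme operates on functions rather than numbers, with ℒC-1 mapping the residual back into the solution space.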
2.2. The Linear Characteristic Operator ℒC[·]
Just as mentioned in Section 1, a 2π-periodic function u(τ) can be approximated by a sum of complex exponential functions {exp(ikτ)∣k=0,±1,±2,…} in the spectral method:
(10)u(τ)≈UN(τ)=∑k=-KKu^kexp(ikτ),N=2K+1.
The basis function exp(ikτ) satisfies the following 2nd-order differential equation:
(11)f′′+f=(1-k2)f
and has the following orthogonality relation:
(12) ∫02π exp(ikτ)·exp(-imτ) dτ = {0, k≠m; 2π, k=m.
More properties of exp(ikτ) can be found in the literature [9].
In consideration of the differential equation (11), here let us choose the linear characteristic operator ℒC[·] in (8) as follows:
(13)ℒC[u]=u′′+u.
It is clear that ℒC[u] is much simpler than the original nonlinear operator 𝒜[u]. The general solution of the linear equation ℒC[u]=0 is
(14)u=Cexp(iτ)+C-exp(-iτ),
where exp(iτ) and exp(-iτ) span the kernel of ℒC[·] and C is a complex constant of integration (C¯ denotes its complex conjugate).
When the basis function system {exp(ikτ)∣k=0,±1,±2,…} and the linear characteristic operator ℒC[·] are determined, each member of the solution sequence {un∣n=0,1,2,…} should be expressed as a sum of the basis functions:
(15)un=∑k=-MnMnan,kexp(ikτ),an,k=an,-k¯,
this is the so-called principle of completeness in SFPM. In Section 2.4, we will point out that this fundamental principle heuristically provides a solvability condition to determine the unknown frequency ω.
On account of the well-known Euler’s formula, (15) is rearranged as
(16)un=∑k=0Mn[bn,kcos(kτ)+cn,ksin(kτ)],
where the coefficients bn,k and cn,k are real numbers.
2.3. The Initial Guess u0
The iteration procedure (8) can start with an arbitrary initial value u0; usually, the closer u0 is to u*, the more rapidly the solution sequence {un∣n=0,1,2,…} converges to u*. Here, the initial guess u0 can be conveniently chosen as follows:
(17)ℒC[u0]=0,u0(0)=1,u0′(0)=0.
Hence,
(18) u0 = [exp(iτ) + exp(-iτ)]/2 = cos(τ).
2.4. Solvability Condition and Succession Property
Here, let us investigate the relationship between the lower order approximations and the higher order ones. If 𝒜[un] in (8) is expanded in the basis functions,
(19)𝒜[un]=∑kξn,kexp(ikτ),ξn,k=ξn,-k¯,
we can obtain the solution of the inhomogeneous linear equation (8) as follows:
(20) un+1 = un - βn+1∑k≠±1 [ξn,k/(1-k2)] exp(ikτ) + (i/2)βn+1ξn,1τexp(iτ) - (i/2)βn+1ξn,-1τexp(-iτ) + Cexp(iτ) + C¯exp(-iτ).
Terms such as τsin(τ) and τcos(τ) would violate the principle of completeness; that is to say, they cannot be expressed by a linear combination of the basis functions {exp(ikτ)∣k=0,±1,±2,…}, so the coefficients ξn,1 and ξn,-1 must vanish:
(21)ξn,1=ξn,-1=0,n=0,1,2,…,
which is named the solvability condition. The idea of avoiding the appearance of τsin(τ) and τcos(τ) is widely applied in perturbation methods; its physical meaning is to render the solution uniformly valid.
The aforementioned derivation of the solvability condition is heuristic. In fact, it can be obtained in a strict manner. Given the expression ℒC[u] in (13), the iteration procedure (8) can be rewritten as follows:
(22) un+1′′ + un+1 = un′′ + un - βn+1·𝒜[un;ωn+1] = un′′ + un - βn+1[un′′ - φ(τ/ωn+1, un, ωn+1un′, ωn+1²un′′)/ωn+1²], un+1(0)=1, un+1′(0)=0, n=0,1,2,…,
where ωn is the nth-order approximation to the exact frequency ω. Equation (22) is linear and inhomogeneous, so it has a solution if and only if the inhomogeneous parts satisfy some solvability conditions. Let us multiply both sides of (22) by exp(-iτ) and integrate (22) over the range 0≤τ≤2π. By integration by parts on the left-hand side, we obtain
(23)∫02πexp(-iτ)(un+1′′+un+1)dτ=[exp(-iτ)(un+1′+iun+1)]|02π=0
on account of the periodic property of u(τ); that is, u(τ)=u(τ+2π). Hence, the right-hand side is
(24) ∫02π exp(-iτ)(un′′ + un - βn+1·𝒜[un;ωn+1]) dτ = -βn+1∫02π exp(-iτ)·[∑k ξn,k exp(ikτ)] dτ = -2πβn+1ξn,1 = 0, (ξn,k=ξn,-k¯),
where the term un′′+un drops out because its exp(iτ) component carries the factor (1-k2)=0 at k=1.
In consideration of the orthogonality relation (12), we thus obtain
(25)ξn,1=0,n=0,1,2,….
Taking the complex conjugate of (25), an equivalent expression is
(26)ξn,-1=0,n=0,1,2,….
Once again we deduce the solvability condition rigorously; at the nth iteration step it provides exactly the equation that determines the approximate frequency ωn+1 appearing in (22).
Then, it follows from (20) and (21) that
(27) un+1 = un - βn+1∑k≠±1 [ξn,k/(1-k2)] exp(ikτ) + Cexp(iτ) + C¯exp(-iτ) = ∑k an+1,k exp(ikτ) = ∑k=0Mn+1 [bn+1,kcos(kτ) + cn+1,ksin(kτ)],
where the complex constant C is determined by the initial condition:
(28)un+1(0)=1,un+1′(0)=0.
From (27), it is clear that the higher order approximation un+1 is the sum of the lower order approximation un and the correction terms -βn+1∑k≠±1 [ξn,k/(1-k2)] exp(ikτ) + Cexp(iτ) + C¯exp(-iτ). In other words, SFPM has the succession property: the valuable information provided by the lower order approximation is preserved and fully utilized, and the accuracy of the approximation can be improved step-by-step to any desired accuracy.
2.5. The Steepest Descent Seeking Algorithm
As mentioned in Section 2.1, the relaxation factors {βn∣n=1,2,3,…} can improve the convergence and stability of the iteration procedure, and their optimal values usually depend on the problem to be solved. In the present work, an algorithm named the steepest descent seeking (SDS) algorithm is proposed to determine these optimal values.
Let Resn denote the square residual error of the iteration procedure (8):
(29)Resn=Resn(β1,β2,…,βn)=∫02π(𝒜[un;ωn])2dτ,n=1,2,3,…
which is a kind of global residual error and simultaneously evaluates the accuracy of un and ωn. The optimal value βn,opt is then taken as the value of βn at which the square residual error Resn attains its minimum min(Resn). For example, when n=1, the square residual error Res1(β1) is a function of β1 only, and the optimal value β1,opt can be obtained by solving the nonlinear algebraic equation:
(30)dRes1dβ1=0.
When n=2, the square residual error Res2(β1,β2) is dependent on β1 and β2. Because the optimal value β1,opt has been acquired from the previous step, the optimal value β2,opt is governed by the following nonlinear algebraic equation:
(31)dRes2dβ2=0.
Similarly, at the nth step, the square residual error Resn contains only one unknown relaxation factor βn, so the optimal value βn,opt is determined by the following nonlinear algebraic equation:
(32)dResndβn=0.
The name of the steepest descent seeking algorithm comes from this approach: each optimal value βn,opt is sought to minimize the corresponding square residual error Resn. In this way, only one nonlinear algebraic equation has to be solved at each iteration step, and the elements of the sequence {βn,opt∣n=1,2,3,…} are obtained sequentially and separately. For convenience, the spectral fixed point method combined with the steepest descent seeking algorithm is abbreviated as SFPM-SDS in this paper.
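As a numerical illustration of one SDS step (anticipating Example 1 of Section 3.1), the sketch below evaluates Res1(β1) of Eq. (36) for u1 of Eq. (40) with ω1² = 4/3 on an equispaced grid, where the mean rule integrates trigonometric polynomials exactly, and locates the minimum by a simple grid scan; a one-dimensional minimization routine would serve equally well:

```python
import numpy as np

# One SDS step for Example 1: minimize Res_1(beta_1) of Eq. (36),
# with u_1(tau) = (1 + beta/32) cos(tau) - (beta/32) cos(3 tau) from
# Eq. (40) and omega_1**2 = 4/3 from the solvability condition (39).
M = 512
tau = 2.0 * np.pi * np.arange(M) / M          # equispaced quadrature nodes

def res1(beta):
    a, b = 1.0 + beta / 32.0, beta / 32.0
    u = a * np.cos(tau) - b * np.cos(3.0 * tau)
    du = -a * np.sin(tau) + 3.0 * b * np.sin(3.0 * tau)
    d2u = -a * np.cos(tau) + 9.0 * b * np.cos(3.0 * tau)
    r = d2u + 0.75 * u + du * du * u          # A[u_1; omega_1]
    return 2.0 * np.pi * np.mean(r * r)       # exact for trig polynomials

betas = np.linspace(0.5, 1.5, 20001)          # simple grid scan
vals = np.array([res1(b) for b in betas])
i = int(np.argmin(vals))
print(betas[i], vals[i])                      # ~1.06449, ~0.0145074; cf. Eq. (41)
```

Minimizing over one β at a time, with all earlier factors frozen at their optima, is exactly the sequential strategy described above.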
3. Some Examples
In this section, SFPM-SDS is used to investigate some examples. All the calculations are implemented on a laptop PC with 2 GB RAM and Intel Core2 Duo 1.80 GHz CPU.
3.1. Example 1
Let us start from the first example [3–5]:
(33)u¨+u+u˙2u=0,u(0)=1,u˙(0)=0.
Introducing the transformation (4), we obtain
(34) u′′ + u/ω² + u′²u = 0, u(0)=1, u′(0)=0.
Hence, the iteration procedure is built as follows:
(35) un+1′′ + un+1 = un′′ + un - βn+1(un′′ + un/ωn+1² + un′²un), un+1(0)=1, un+1′(0)=0, n=0,1,2,…,
and the square residual error Resn is
(36) Resn = ∫02π (un′′ + un/ωn² + un′²un)² dτ, n=1,2,3,….
The initial guess is chosen as mentioned in Section 2.3,
(37) u0 = [exp(iτ) + exp(-iτ)]/2 = cos(τ).
According to SFPM-SDS, the equation of the first order approximation u1 is governed by
(38) u1′′ + u1 = u0′′ + u0 - β1(u0′′ + u0/ω1² + u0′²u0) = (3/4 - 1/ω1²)β1cos(τ) + β1cos(3τ)/4
with the associated initial condition u1(0)=1, u1′(0)=0. Then the solvability condition is
(39) 3/4 - 1/ω1² = 0 ⟹ ω1 = √(4/3) ≈ 1.154701,
and u1 is
(40) u1 = (1 + β1/32)cos(τ) - (β1/32)cos(3τ).
The corresponding Res1 is
(41) Res1 = 0.1963495 - 0.3436117β1 + 0.1641359β1² - 1.677791×10^-3β1³ - 3.445465×10^-5β1⁴ + 5.933596×10^-6β1⁵ + 1.210565×10^-7β1⁶,
β1,opt = 1.06449, min(Res1) = 0.0145074.
Hence, we obtain
(42)u1=1.033265cos(τ)-0.03326531cos(3τ).
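The first iteration above can be verified with a few lines of symbolic computation (sympy is used here purely for illustration): the solvability condition recovers ω1 of (39), and u1 of (40) is checked against the iteration ODE (38):

```python
import sympy as sp

# Symbolic check of the first SFPM-SDS iteration (38)-(40) for Example 1.
tau, om, beta = sp.symbols('tau omega beta', positive=True)
u0 = sp.cos(tau)
# A[u0] for Eq. (34): u'' + u/omega**2 + u'**2 * u
A0 = sp.diff(u0, tau, 2) + u0 / om**2 + sp.diff(u0, tau)**2 * u0
rhs = -beta * A0                  # u0'' + u0 = 0, so the RHS of (38) is -beta*A[u0]
# cos(tau) component of the forcing -> solvability condition (39)
c1 = sp.integrate(rhs * sp.cos(tau), (tau, 0, 2 * sp.pi)) / sp.pi
omega1 = sp.solve(sp.Eq(c1, 0), om)[0]
print(omega1, float(omega1))      # 2*sqrt(3)/3 ~ 1.154701
# verify that u1 of Eq. (40) satisfies u1'' + u1 = rhs at omega = omega1
u1 = (1 + beta / 32) * sp.cos(tau) - beta / 32 * sp.cos(3 * tau)
residual = sp.diff(u1, tau, 2) + u1 - rhs.subs(om, omega1)
print(sp.simplify(sp.expand_trig(residual)))      # 0
# numerical coefficients of Eq. (42) at beta_1,opt = 1.06449
print(1 + 1.06449 / 32, 1.06449 / 32)             # ~1.033265, ~0.03326531
```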
For the higher-order un and ωn, the procedure is similar and can be carried out with symbolic computation software, such as Maxima, Maple, or Mathematica. The first few lower order approximations obtained by SFPM-SDS are listed here:
(43)
u2 = 1.033287cos(τ) - 0.03583386cos(3τ) + 0.002638345cos(5τ) - 9.228024×10^-5cos(7τ) + 1.069526×10^-6cos(9τ),
u3 = 1.032637cos(τ) - 0.03516805cos(3τ) + 0.002767413cos(5τ) - 2.543989×10^-4cos(7τ) + 1.894233×10^-5cos(9τ) - 1.2290549×10^-6cos(11τ) + 6.680303×10^-8cos(13τ) - 2.911103×10^-9cos(15τ) + 1.036885×10^-10cos(17τ) - 3.002914×10^-12cos(19τ) + 6.569981×10^-14cos(21τ) - 9.747357×10^-16cos(23τ) + 8.541457×10^-18cos(25τ) - 3.320339×10^-20cos(27τ).
A closer look at the lower order approximations (42) and (43) shows that the absolute magnitudes of the coefficients bn,k in (16) decrease very rapidly as k grows, so it is not necessary to retain all the coefficients bn,k during the iteration: the effect of a higher-order wave component cos[(2k-1)τ] on the accuracy of un(τ) is negligible relative to the lower-order ones. A threshold value δ is therefore introduced to decide which bn,k to omit, and the computational cost shrinks if only the coefficients satisfying |bn,k|≥δ are preserved. In the following calculation, we set δ=10^-16. The details of βn,opt, Resn, and the CPU time (seconds) are shown in Table 1. Table 1 also shows that the βn,opt of a higher order approximation un(τ) deviates only a little from those of the lower order ones, so the higher order βn,opt can simply be inherited from a lower order one; a further reduction of the computational cost is expected in this manner. For convenience, let NSDS denote the number of executions of the SDS algorithm. Here we consider the case NSDS=5, which means that only the first five relaxation factors β1,opt~β5,opt are determined by the SDS algorithm, while the succeeding relaxation factors are set as follows:
(44)βn,opt=βNSDS,opt,n≥NSDS+1.
The results of this approach are also shown in Table 1 for comparison. Meanwhile, some |bn,k| of three different order approximations un(τ) are given in Figure 1, which confirms that |bn,k| decreases very rapidly as k becomes larger.
Table 1: The βn,opt, Resn, and CPU time (seconds) for Example 1.

Order n |        SFPM-SDS (δ=10^-16)          |   SFPM-SDS (NSDS=5; δ=10^-16)
        | βn,opt     Resn          CPU (s)    | βn,opt       Resn          CPU (s)
   1    | 1.06449    0.0145074     0.42       | 1.06449      0.0145074     0.42
   2    | 1.03305    3.70382e-4    0.42       | 1.03305      3.70382e-4    0.42
   3    | 0.975697   1.60740e-5    0.64       | 0.975697     1.60740e-5    0.64
   4    | 1.05497    6.31101e-7    2.4        | 1.05497      6.31101e-7    2.4
   5    | 0.987702   1.66964e-8    3.5        | 0.987702     1.66964e-8    3.5
   6    | 1.03241    8.14114e-10   4.0        | (0.987702)   8.43906e-10   0.22
   7    | 1.01344    1.77868e-11   4.5        | (0.987702)   1.99754e-11   0.23
   8    | 1.00465    8.70314e-13   4.5        | (0.987702)   8.08111e-13   0.23
   9    | 1.03958    2.15011e-14   4.5        | (0.987702)   3.04354e-14   0.23
  10    | 0.985266   8.52066e-16   4.5        | (0.987702)   7.18302e-16   0.23
  11    | 1.05171    2.86410e-17   4.5        | (0.987702)   3.66701e-17   0.24
  12    | 0.984677   8.28269e-19   4.5        | (0.987702)   9.53726e-19   0.24
  13    | 1.04180    3.69971e-20   4.6        | (0.987702)   3.27288e-20   0.25
  14    | 1.00273    8.34169e-22   4.6        | (0.987702)   1.40366e-21   0.25
  15    | 1.01658    4.17116e-23   4.7        | (0.987702)   3.14676e-23   0.26

Values in parentheses are inherited from the last SDS-determined βn,opt rather than recomputed.
Figure 1: The coefficients bn,k of un(τ) (NSDS=5; δ=10^-16).
The original equation (33) is also numerically integrated with high accuracy by a high-order Runge-Kutta method. Let uR-K and ωR-K denote the solution and the frequency calculated by the Runge-Kutta method, respectively. The comparison of ωn with ωR-K is given in Table 2, which shows that ωn agrees with ωR-K to 7 significant digits for n≥8. The comparison of un(t) with uR-K is given in Figures 2 and 3, which show that the maximum relative error [un(t)/uR-K(t)-1] between un(t) and uR-K is about 2.2e-3% for n=5 and 4.0e-7% for n=10. The accuracy of un can also be judged by the square residual error Resn. As shown in Table 1 and Figure 4, the solution sequence {un∣n=0,1,2,3,…} converges to the exact solution very rapidly.
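The Runge-Kutta reference value can be reproduced with a self-contained fixed-step RK4 integrator (written out here rather than calling a library, as a sketch): the frequency is read off the half period, that is, the first upward zero crossing of u˙, since the periodic solution of (33) contains only odd harmonics and is therefore symmetric in u:

```python
import numpy as np

# Fixed-step RK4 cross-check of Eq. (33): integrate u'' = -u - u'^2 u,
# u(0) = 1, u'(0) = 0, and estimate the frequency from the half period
# (the first upward zero crossing of v = u' after t = 0).
def f(y):
    u, v = y
    return np.array([v, -u - v * v * u])

dt = 1e-3
y = np.array([1.0, 0.0])
t = 0.0
t_half = None
for _ in range(10000):
    k1 = f(y); k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
    y_new = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    if t > 0.1 and y[1] < 0.0 <= y_new[1]:       # v crosses zero upward
        # linear interpolation of the crossing time within the step
        t_half = t + dt * y[1] / (y[1] - y_new[1])
        break
    y, t = y_new, t + dt
omega = np.pi / t_half
print(omega)                                     # ~1.136775, cf. Table 2
```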
Table 2: The frequency calculated by SFPM-SDS (NSDS=5; δ=10^-16) and the Runge-Kutta method.

Order n | ωn        | Relative error (ωn/ωR-K - 1)
   1    | 1.154701  | 1.6%
   2    | 1.138810  | 0.18%
   3    | 1.136445  | -0.030%
   4    | 1.136678  | -0.0085%
   5    | 1.136779  | 3.5e-4%
   6    | 1.136779  | 3.5e-4%
   7    | 1.136776  | 8.8e-5%
   8    | 1.136775  | 0.0%

ωR-K = 1.136775.
Figure 2: Comparison of u5(t) with uR-K (NSDS=5; δ=10^-16).
Figure 3: Relative error [un(t)/uR-K(t)-1] between un(t) and uR-K (NSDS=5; δ=10^-16); panels show the 5th approximation u5(t) and the 10th approximation u10(t).
Figure 4: The square residual error Resn (NSDS=5; δ=10^-16).
3.2. Example 2
The second example [4, 5] is governed by
(45)u¨(1+u2)+2u=0,u(0)=1,u˙(0)=0.
According to SFPM-SDS, the iteration procedure has the form:
(46) un+1′′ + un+1 = un′′ + un - βn+1(un′′ + 2un/ωn+1² + un′′un²), un+1(0)=1, un+1′(0)=0, n=0,1,2,….
The solution procedure is similar to that of Example 1, so only the results are provided. To investigate the effect of the SDS algorithm on the convergence of the approximate solution, the results corresponding to two different values of NSDS are shown in Table 3. It is found that NSDS has little effect on the convergence, so NSDS can usually take a small value to decrease the computational cost, especially when the first few βn,opt are close to each other. The comparison of the frequency ωn with ωR-K is given in Table 4, which shows that ωn agrees with ωR-K to 7 significant digits for n≥8.
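The first-order frequency in Table 4 can again be checked symbolically by imposing the solvability condition on A[u0] of (46) with u0 = cos(τ), in a sympy sketch analogous to Example 1:

```python
import sympy as sp

# Solvability condition for Example 2: remove the cos(tau) component
# of A[u0] = u0'' + 2 u0/omega**2 + u0'' u0**2 with u0 = cos(tau).
tau, om = sp.symbols('tau omega', positive=True)
u0 = sp.cos(tau)
A0 = sp.diff(u0, tau, 2) + 2 * u0 / om**2 + sp.diff(u0, tau, 2) * u0**2
c1 = sp.integrate(A0 * sp.cos(tau), (tau, 0, 2 * sp.pi)) / sp.pi
omega1 = sp.solve(sp.Eq(c1, 0), om)[0]
print(omega1, float(omega1))    # sqrt(8/7) ~ 1.069045, first row of Table 4
```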
Table 3: The βn,opt, Resn, and CPU time (seconds) for Example 2.

Order n |   SFPM-SDS (NSDS=5; δ=10^-16)          |   SFPM-SDS (NSDS=10; δ=10^-16)
        | βn,opt       Resn           CPU (s)    | βn,opt       Resn           CPU (s)
   1    | 0.6195087    0.011092067    0.31       | 0.6195087    0.011092067    0.31
   2    | 0.6696290    6.005123e-4    0.32       | 0.6696290    6.005123e-4    0.32
   3    | 0.6319373    5.224107e-5    0.56       | 0.6319373    5.224107e-5    0.56
   4    | 0.6797179    4.408895e-6    1.7        | 0.6797179    4.408895e-6    1.7
   5    | 0.6317019    4.341748e-7    2.6        | 0.6317019    4.341748e-7    2.6
   6    | (0.6317019)  4.324532e-8    0.14       | 0.6874748    4.067318e-8    3.0
   7    | (0.6317019)  4.872231e-9    0.15       | 0.6299683    4.158791e-9    3.3
   8    | (0.6317019)  5.515471e-10   0.15       | 0.6928416    4.080293e-10   3.7
   9    | (0.6317019)  6.640356e-11   0.15       | 0.6281442    4.247500e-11   4.0
  10    | (0.6317019)  8.019986e-12   0.15       | 0.6969528    4.271866e-12   4.1
  11    | (0.6317019)  9.979502e-13   0.15       | (0.6969528)  4.963906e-13   0.15
  12    | (0.6317019)  1.244679e-13   0.15       | (0.6969528)  6.181212e-14   0.15
  13    | (0.6317019)  1.576183e-14   0.15       | (0.6969528)  8.641084e-15   0.15
  14    | (0.6317019)  2.000453e-15   0.15       | (0.6969528)  1.252247e-15   0.15
  15    | (0.6317019)  2.560610e-16   0.16       | (0.6969528)  1.918912e-16   0.15

Values in parentheses are inherited from the last SDS-determined βn,opt rather than recomputed.
Table 4: The frequency calculated by SFPM-SDS (NSDS=5; δ=10^-16) and the Runge-Kutta method.

Order n | ωn        | Relative error (ωn/ωR-K - 1)
   1    | 1.069045  | -0.74%
   2    | 1.075638  | -0.13%
   3    | 1.076791  | -0.023%
   4    | 1.076952  | -0.0077%
   5    | 1.077020  | -0.0014%
   6    | 1.077029  | -5.6e-4%
   7    | 1.077034  | -9.3e-5%
   8    | 1.077035  | 0.0%

ωR-K = 1.077035.
4. Conclusion
In this paper, based on the fixed point concept in functional analysis, the spectral fixed point method (SFPM) is proposed for nonlinear oscillation equations with periodic solutions, and the steepest descent seeking (SDS) algorithm is brought forward in the framework of SFPM to improve the computational efficiency. Two typical examples are discussed in detail as applications of SFPM. The results show the following.
SFPM achieves high accuracy, as the traditional spectral method does.
SFPM possesses the succession property, so the accuracy of the approximation can be improved step-by-step to any desired accuracy at low computational cost.
In the framework of SFPM, the spectral characteristics of the oscillation are obtained as a byproduct, without resorting to the FFT algorithm.
The SDS algorithm can greatly improve the computational efficiency; therefore it is practicable to apply SFPM-SDS to handle various types of nonlinear oscillation equations with periodic solutions.
So far, only nonlinear ordinary differential equations have been investigated by SFPM in this paper, but SFPM is capable of handling nonlinear systems and partial differential equations, which will be discussed in future work.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The work is supported by National Natural Science Foundation of China (Approval nos. 11102150 and 11302165) and the Fundamental Research Funds for the Central Universities.
References
[1] S. Timoshenko, D. H. Young, and W. Weaver.
[2] A. Fidlin.
[3] A. H. Nayfeh and D. T. Mook.
[4] A. H. Nayfeh.
[5] A. H. Nayfeh.
[6] S.-J. Liao, "An analytic approximate technique for free oscillations of positively damped systems with algebraically decaying amplitude."
[7] S.-J. Liao, "An analytic approximate approach for free oscillations of self-excited systems."
[8] J.-H. He, "Variational iteration method—a kind of non-linear analytical technique: some examples."
[9] J. P. Boyd.
[10] J. Shen and T. Tang.
[11] L. N. Trefethen.
[12] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang.
[13] D. Xu and X. Guo, "Fixed point analytical method for nonlinear differential equations."
[14] D. Xu and X. Guo, "Application of fixed point method to obtain semi-analytical solution to Blasius flow and its variation."
[15] X. Guo, X. Wang, D. Xu, and G. N. Xie, "A Legendre series solution to Rayleigh stability equation of mixing layer."
[16] E. Zeidler.
[17] E. Zeidler.