Spectral Fixed Point Method for Nonlinear Oscillation Equation with Periodic Solution

Ding Xu¹, Xian Wang¹, and Gongnan Xie²

¹ State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, No. 28 Xianning West Road, Xi'an 710049, China
² School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China

Mathematical Problems in Engineering, Volume 2013, Article ID 538716, Hindawi Publishing Corporation. Received 16 September 2013; Accepted 23 October 2013. Academic Editor: Massimo Scalia.

Copyright © 2013 Ding Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Based on the fixed point concept in functional analysis, an improvement on the traditional spectral method is proposed for nonlinear oscillation equations with periodic solutions. The key idea of this new approach, namely, the spectral fixed point method (SFPM), is to construct a contractive map that replaces the nonlinear oscillation equation with a series of linear oscillation equations, which can usually be solved relatively easily. Different from other existing numerical methods, such as the well-known Runge-Kutta method, SFPM can directly obtain the Fourier series solution of the nonlinear oscillation without resorting to the Fast Fourier Transform (FFT) algorithm. Meanwhile, the steepest descent seeking algorithm is proposed in the framework of SFPM to improve the computational efficiency. Finally, some typical cases are investigated by SFPM, and the comparison with the Runge-Kutta method shows that the present method is of high accuracy and efficiency.

1. Introduction

Oscillation phenomena are very common in nature and industrial production, and they are of great interest to scientists and engineers. Most oscillation systems are inherently nonlinear, so the superposition principle does not apply, which makes them more difficult to handle than linear ones [2, 3].

In this paper, we focus on the initial value problem of the free nonlinear oscillator with cyclic motion, governed by (1) ü - φ(t, u, u̇, ü) = 0, t ≥ 0, where the dot denotes the derivative with respect to the time t and u is a physical variable, such as displacement. Usually the free nonlinear oscillator with cyclic motion has a limit cycle, which is independent of the initial conditions. Then, without loss of generality, the following initial value condition is considered: (2) u(0) = 1, u̇(0) = 0.

Thus far, there are two branches of methods for handling the aforementioned nonlinear oscillation equations. The first is purely analytical, among which the well-known perturbation techniques are widely applied to investigate nonlinear oscillation equations. Besides, the homotopy analysis method [6, 7] and the variational iteration method have recently been used to investigate nonlinear oscillation equations.

On the other hand, numerical methods are adopted to solve nonlinear oscillations, for example, the Runge-Kutta method and the spectral method. When the aforementioned problem is numerically integrated by the Runge-Kutta method, the solution u(t) at the discrete times {tn = nΔt, n = 0, 1, 2, …, N} is obtained, where the time step Δt is restricted by a stability condition. It is well known that a function u(t) with period T can be expressed by a Fourier series. To acquire the spectral characteristics of u(t), such as the frequency and the amplitude-frequency distribution, a discrete Fourier analysis should be carried out on the discrete solution {u(tn) ∣ tn = nΔt, n = 0, 1, 2, …, N}. When N becomes large, the Fast Fourier Transform (FFT) algorithm, which is somewhat complex, should be adopted for computational efficiency.
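The discrete Fourier analysis step described above can be sketched as follows. This is a toy stand-in: the signal and its frequency (1.5 rad/s) are assumed purely for illustration, playing the role of a sampled Runge-Kutta solution.

```python
import numpy as np

# Toy stand-in for a Runge-Kutta solution sampled at t_n = n*dt over one period.
# The signal and its angular frequency (1.5) are assumptions for this sketch.
omega_true = 1.5
T = 2 * np.pi / omega_true          # period of the assumed signal
N = 1024                            # number of samples
t = np.arange(N) * (T / N)          # uniform samples covering one full period
u = np.cos(omega_true * t) - 0.03 * np.cos(3 * omega_true * t)

c = np.fft.rfft(u) / N              # one-sided discrete Fourier coefficients
k = np.argmax(np.abs(c[1:])) + 1    # dominant harmonic index (skip the mean)
omega_est = k * 2 * np.pi / T       # recovered angular frequency
print(omega_est)                    # ≈ 1.5
```

Note that the FFT recovers the spectrum only after the time-domain solution has been computed and sampled; SFPM, introduced below, produces the Fourier coefficients directly.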

In the recent decade, the spectral method has been prevailing due to its high accuracy. The function u(t) is approximated by a sum of orthogonal functions, for example, the complex exponential functions for a periodic function: (3) u(t) ≈ UN(t) = ∑_{k=-K}^{K} ûk exp(ikt), N = 2K + 1. Since the function u(t) is assumed to be real, the two Fourier coefficients with opposite values of k are complex conjugates; that is, ûk = û̄₋ₖ, where the bar denotes the complex conjugate operation. The nonlinear terms, such as u³ and uu̇, are handled by the pseudospectral technique, which involves the FFT algorithm for computational efficiency, especially for large N. Meanwhile, an aliasing removal technique is needed to alleviate the aliasing error. The accuracy of the approximate solution UN(t) is mainly decided by the number N, and usually the larger N is, the more accurate UN(t) is. The idea of the spectral methods is clear and straightforward, but the appropriate N is problem dependent and cannot be determined beforehand. Usually several different values of N, such as N1, N2, and N3 (N1 < N2 < N3), should be tried to find one that satisfies the accuracy demand. What is the relationship between UN1(t), UN2(t), and UN3(t)? The computational cost would be economical if a method had the succession property, meaning that the more accurate (higher order) approximation UN3(t) could be acquired from the less accurate (lower order) approximations UN1(t) and/or UN2(t) by adding further correction terms, without discarding the existing ones. Unfortunately, the traditional spectral method does not have this succession property, and the valuable information provided by the less accurate approximations is not utilized sufficiently when we seek more accurate ones.

Recently, the fixed point method, which is based on the fixed point concept in functional analysis, has been adopted to acquire series solutions of differential equations. In this paper, the ideas of the fixed point method and the traditional spectral method are combined, and the spectral fixed point method (SFPM) is proposed. By SFPM, we can directly obtain an explicit Fourier series solution of the nonlinear oscillation with cyclic motion. Moreover, the high accuracy of the traditional spectral method is inherited by SFPM, and the spectral characteristics of the solution are obtained simultaneously without resorting to the FFT algorithm. Furthermore, it is notable that SFPM possesses the succession property, which means the computational cost of SFPM is economical.

The organization of the rest of this paper is as follows. In Section 2 the idea of the spectral fixed point method is elaborated, and the steepest descent seeking algorithm is proposed to improve computational efficiency. In Section 3, two examples are investigated by SFPM in detail. Finally, Section 4 is devoted to concluding remarks.

2. The Spectral Fixed Point Method

2.1. The Key Idea of the Spectral Fixed Point Method

The fixed point is a basic concept in functional analysis [16, 17]. The famous Newton’s method for nonlinear algebra equations is just based on the Banach fixed point theorem. In , the fixed point concept is extended to solve nonlinear differential equations and the fixed point method (FPM) is proposed.
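As a minimal illustration of the fixed point idea (not part of SFPM itself), Newton's method for f(x) = 0 can be written as iterating the contractive map T[x] = x - f(x)/f'(x) until a fixed point is reached; the sketch below finds √2 this way:

```python
# Newton's method viewed as a fixed-point iteration x_{n+1} = T(x_n),
# with T(x) = x - f(x)/f'(x). The helper name is ours, for illustration only.
def newton_fixed_point(f, df, x0, tol=1e-12, itmax=50):
    x = x0
    for _ in range(itmax):
        x_new = x - f(x) / df(x)    # the contractive map near the root
        if abs(x_new - x) < tol:    # x is (numerically) a fixed point of T
            return x_new
        x = x_new
    return x

root = newton_fixed_point(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)  # ≈ 1.41421356...
```

The fixed point of T is exactly a zero of f, which is the same structural idea SFPM applies to a nonlinear differential operator.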

In the present work, the idea of FPM and the spectral method are combined; therefore, the spectral fixed point method (SFPM) is brought forward to investigate the nonlinear oscillation problem.

Let ω denote the frequency of u(t) governed by (1) and (2). When we introduce the transformation (4) τ = ωt, u = u(τ), the original governing equation is rearranged as follows: (5) 𝒜[u] = u″ - φ(τ/ω, u, ωu′, ω²u″)/ω² = 0, u(0) = 1, u′(0) = 0, where the prime denotes the derivative with respect to τ and 𝒜[·] is a nonlinear operator. According to the above transformation (4), it is clear that u(τ) is a function with period 2π; that is, u(τ) = u(τ + 2π).

Here, a contractive map 𝒯[·] is constructed as follows: (6) 𝒯[u] = u - β·ℒC⁻¹[𝒜[u]], where ℒC[·] is chosen as a linear continuous bijective operator, named the linear characteristic operator, and ℒC⁻¹[·] is its inverse. β is a real nonzero free parameter, named the relaxation factor, which can improve the convergence and stability of the iteration procedure. The optimal value βopt usually depends on the problem to be solved. From (6), an iteration procedure is built up as follows: (7) un+1 = 𝒯[un] = un - βn+1·ℒC⁻¹[𝒜[un]], un+1(0) = 1, un+1′(0) = 0, n = 0, 1, 2, …, or equivalently, (8) ℒC[un+1] = ℒC[un] - βn+1·𝒜[un], un+1(0) = 1, un+1′(0) = 0, n = 0, 1, 2, …, where {βn ∣ n = 1, 2, 3, …} is a sequence of relaxation factors. According to (8), we can obtain a solution sequence {un ∣ n = 0, 1, 2, …}. If the convergence of {un ∣ n = 0, 1, 2, …} is ensured, then taking the limit on both sides of (8) shows that the limit u* is exactly a zero point of the nonlinear operator 𝒜[·]; that is, (9) 𝒜[u*] = 0, u*(0) = 1, u*′(0) = 0. Then u* is called a fixed point of the contractive map 𝒯[·].

2.2. The Linear Characteristic Operator ℒC[·]

Just as mentioned in Section 1, a 2π-periodic function u(τ) can be approximated by a sum of complex exponential functions {exp(ikτ) ∣ k = 0, ±1, ±2, …} in the spectral method: (10) u(τ) ≈ UN(τ) = ∑_{k=-K}^{K} ûk exp(ikτ), N = 2K + 1. The basis function exp(ikτ) satisfies the following 2nd-order differential equation: (11) f″ + f = (1 - k²)f and has the following orthogonality relation: (12) ∫_0^{2π} exp(ikτ)·exp(-imτ) dτ = 0 for k ≠ m, and 2π for k = m. Further relevant properties of exp(ikτ) can be found in the literature.
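The orthogonality relation (12) can be checked numerically. The sketch below uses a uniform rectangle rule, which integrates these complex exponentials exactly over a full period whenever |k - m| < N:

```python
import numpy as np

# Rectangle-rule check of the orthogonality relation (12): the uniform rule
# is exact for exp(i(k - m)*tau) on [0, 2*pi] as long as |k - m| < N.
N = 64
tau = 2 * np.pi * np.arange(N) / N
h = 2 * np.pi / N

def inner(k, m):
    """Discrete approximation of the integral in (12)."""
    return np.sum(np.exp(1j * k * tau) * np.exp(-1j * m * tau)) * h

print(abs(inner(3, 5)))    # ≈ 0       (k != m)
print(inner(4, 4).real)    # ≈ 2*pi    (k == m)
```

The k ≠ m case reduces to a geometric sum of roots of unity, which vanishes identically, so the result is zero up to round-off.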

In consideration of the differential equation (11), let us choose the linear characteristic operator ℒC[·] in (8) as follows: (13) ℒC[u] = u″ + u. It is clear that ℒC[u] is much simpler than the original nonlinear operator 𝒜[u]. The general solution of the linear equation ℒC[u] = 0 is (14) u = C exp(iτ) + C̄ exp(-iτ), where exp(iτ) and exp(-iτ) are named the kernels of ℒC[·] and C is a complex constant of integration.

When the basis function system {exp(ikτ) ∣ k = 0, ±1, ±2, …} and the linear characteristic operator ℒC[·] are determined, each member of the solution sequence {un ∣ n = 0, 1, 2, …} should be expressible as a sum of the basis functions: (15) un = ∑_{k=-Mn}^{Mn} an,k exp(ikτ), an,k = ān,-k, which is the so-called principle of completeness in SFPM. In Section 2.4, we will point out that this fundamental principle heuristically provides a solvability condition to determine the unknown frequency ω.

On account of the well-known Euler’s formula, (15) is rearranged as (16)un=k=0Mn[bn,kcos(kτ)+cn,ksin(kτ)], where the coefficients bn,k and cn,k are real numbers.

2.3. The Initial Guess u0

The iteration procedure (8) can start with an almost arbitrary initial value u0, and usually the closer u0 is to u*, the more rapidly the solution sequence {un ∣ n = 0, 1, 2, …} converges to u*. Here, the initial guess u0 can be conveniently chosen as follows: (17) ℒC[u0] = 0, u0(0) = 1, u0′(0) = 0. Hence, (18) u0 = [exp(iτ) + exp(-iτ)]/2 = cos(τ).

2.4. Solvability Condition and Succession Property

Here, let us investigate the relationship between the lower order approximations and the higher order ones. If 𝒜[un] in (8) is expanded as a sum of the basis functions, (19) 𝒜[un] = ∑_k ξn,k exp(ikτ), ξn,k = ξ̄n,-k, we obtain the solution of the inhomogeneous linear equation (8) as follows: (20) un+1 = un - βn+1 ∑_{k≠±1} [ξn,k/(1 - k²)] exp(ikτ) + (i/2)βn+1 ξn,1 τ exp(iτ) - (i/2)βn+1 ξn,-1 τ exp(-iτ) + C exp(iτ) + C̄ exp(-iτ). The appearance of secular terms such as τ sin(τ) and τ cos(τ) would disobey the principle of completeness; that is to say, these terms cannot be expressed as a linear combination of the basis functions {exp(ikτ) ∣ k = 0, ±1, ±2, …}, so the coefficients ξn,1 and ξn,-1 must vanish: (21) ξn,1 = ξn,-1 = 0, n = 0, 1, 2, …, which is named the solvability condition. The idea of avoiding the secular terms τ sin(τ) and τ cos(τ) is actually widely applied in perturbation methods, and its physical meaning is to render the solution uniformly valid.

The aforementioned derivation of the solvability condition is heuristic. In fact, it can be obtained in a strict manner. Given the expression of ℒC[u] in (13), the iteration procedure (8) can be rewritten as follows: (22) un+1″ + un+1 = un″ + un - βn+1·𝒜[un; ωn+1] = un″ + un - βn+1[un″ - φ(τ/ωn+1, un, ωn+1 un′, ωn+1² un″)/ωn+1²], un+1(0) = 1, un+1′(0) = 0, n = 0, 1, 2, …, where ωn is the nth-order approximation to the exact frequency ω. Equation (22) is linear and inhomogeneous, so it has a periodic solution if and only if the inhomogeneous part satisfies certain solvability conditions. Let us multiply both sides of (22) by exp(-iτ) and integrate over the range 0 ≤ τ ≤ 2π. By integration by parts, the left-hand side gives (23) ∫_0^{2π} exp(-iτ)(un+1″ + un+1) dτ = [exp(-iτ)(un+1′ + i un+1)]|_0^{2π} = 0 on account of the periodic property of u(τ); that is, u(τ) = u(τ + 2π). Hence, for the right-hand side, (24) ∫_0^{2π} exp(-iτ)(un″ + un - βn+1·𝒜[un; ωn+1]) dτ = -βn+1 ∫_0^{2π} exp(-iτ)·[∑_k ξn,k exp(ikτ)] dτ = 0, (ξn,k = ξ̄n,-k). In consideration of the orthogonality relation (12), we thus obtain (25) ξn,1 = 0, n = 0, 1, 2, …. Taking the complex conjugate of (25), an equivalent expression is (26) ξn,-1 = 0, n = 0, 1, 2, …. Once again we deduce the solvability condition, now rigorously, and it provides exactly one equation to determine the nth-order approximate frequency ωn.

Then it follows from (20) and (21) that (27) un+1 = un - βn+1 ∑_{k≠±1} [ξn,k/(1 - k²)] exp(ikτ) + C exp(iτ) + C̄ exp(-iτ) = ∑_k an+1,k exp(ikτ) = ∑_{k=0}^{Mn+1} [bn+1,k cos(kτ) + cn+1,k sin(kτ)], where the complex constant C is determined by the initial conditions: (28) un+1(0) = 1, un+1′(0) = 0. From (27), it is clear that the higher order approximation un+1 is the sum of the lower order approximation un and the correction terms -βn+1 ∑_{k≠±1} [ξn,k/(1 - k²)] exp(ikτ) + C exp(iτ) + C̄ exp(-iτ). In other words, SFPM has the succession property: the valuable information provided by the lower order approximation is preserved and utilized sufficiently, and the accuracy of the approximation can be improved step-by-step to any desired level.

2.5. The Steepest Descent Seeking Algorithm

As mentioned in Section 2.1, the relaxation factors {βn ∣ n = 1, 2, 3, …} can improve the convergence and stability of the iteration procedure, and usually their optimal values depend on the problem to be solved. In the present work, an algorithm named the steepest descent seeking (SDS) algorithm is adopted to determine the optimal values of the relaxation factors.

Let Resn denote the square residual error of the iteration procedure (8): (29) Resn = Resn(β1, β2, …, βn) = ∫_0^{2π} (𝒜[un; ωn])² dτ, n = 1, 2, 3, …, which is a kind of global residual error and can simultaneously evaluate the accuracy of un and ωn. It is then natural to take as the optimal relaxation factor βn,opt the value of βn at which the square residual error Resn attains its minimum min(Resn). For example, when n = 1, the square residual error Res1(β1) is a function of β1 only, and thus the optimal value β1,opt can be obtained by solving the nonlinear algebraic equation (30) dRes1/dβ1 = 0.

When n = 2, the square residual error Res2(β1, β2) depends on β1 and β2. Because the optimal value β1,opt has been acquired in the previous step, the optimal value β2,opt is governed by the following nonlinear algebraic equation: (31) dRes2/dβ2 = 0. Similarly, at the nth step, the square residual error Resn actually contains only one unknown relaxation factor, βn, so the optimal value βn,opt is determined by the following nonlinear algebraic equation: (32) dResn/dβn = 0. The name of the steepest descent seeking algorithm comes from this approach: every optimal value βn,opt is sought to minimize the corresponding square residual error Resn. In this way, only one nonlinear algebraic equation has to be solved at every iteration step, and the elements of the sequence {βn,opt ∣ n = 1, 2, 3, …} are obtained sequentially and separately. For convenience, the spectral fixed point method with the steepest descent seeking algorithm is abbreviated to SFPM-SDS in this paper.

3. Some Examples

In this section, SFPM-SDS is used to investigate some examples. All the calculations are implemented on a laptop PC with 2 GB RAM and Intel Core2 Duo 1.80 GHz CPU.

3.1. Example 1

Let us start from the first example: (33) ü + u + u̇²u = 0, u(0) = 1, u̇(0) = 0. Introducing the transformation (4), we obtain (34) u″ + u/ω² + u′²u = 0, u(0) = 1, u′(0) = 0. Hence, the iteration procedure is built as follows: (35) un+1″ + un+1 = un″ + un - βn+1(un″ + un/ωn+1² + un′²un), un+1(0) = 1, un+1′(0) = 0, n = 0, 1, 2, …, and the square residual error Resn is (36) Resn = ∫_0^{2π} (un″ + un/ωn² + un′²un)² dτ, n = 1, 2, 3, ….

The initial guess is chosen as mentioned in Section 2.3: (37) u0 = [exp(iτ) + exp(-iτ)]/2 = cos(τ).

According to SFPM-SDS, the first order approximation u1 is governed by (38) u1″ + u1 = u0″ + u0 - β1(u0″ + u0/ω1² + u0′²u0) = (3/4 - 1/ω1²)β1 cos(τ) + (β1/4) cos(3τ) with the associated initial conditions u1(0) = 1, u1′(0) = 0. Then the solvability condition is (39) 3/4 - 1/ω1² = 0, ω1 = √(4/3) ≈ 1.154701, and u1 is (40) u1 = (1 + β1/32) cos(τ) - (β1/32) cos(3τ). The corresponding Res1 is (41) Res1 = 0.1963495 - 0.3436117β1 + 0.1641359β1² - 1.677791×10^-3 β1³ - 3.445465×10^-5 β1⁴ + 5.933596×10^-6 β1⁵ + 1.210565×10^-7 β1⁶, which attains its minimum min(Res1) = 0.0145074 at β1,opt = 1.06449. Hence, we obtain (42) u1 = 1.033265 cos(τ) - 0.03326531 cos(3τ).
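This first iteration can be verified independently. The sketch below rederives the solvability condition (39) with SymPy and then minimizes the Res1 polynomial quoted in (41) numerically; SciPy's `minimize_scalar` stands in here for solving dRes1/dβ1 = 0 directly:

```python
import sympy as sp
from scipy.optimize import minimize_scalar

tau, w = sp.symbols('tau omega', positive=True)
u0 = sp.cos(tau)
# A[u0] for Example 1, Eq. (34): u'' + u/omega^2 + u'^2 * u
A0 = sp.diff(u0, tau, 2) + u0 / w**2 + sp.diff(u0, tau)**2 * u0
# solvability condition: the cos(tau) component of A[u0] must vanish
c1 = sp.integrate(A0 * sp.cos(tau), (tau, 0, 2 * sp.pi)) / sp.pi
w1 = sp.solve(sp.Eq(c1, 0), w)[0]
print(float(w1))                       # ≈ 1.154701, i.e. sqrt(4/3)

# the explicit Res1(beta1) polynomial quoted in Eq. (41)
def res1(b):
    return (0.1963495 - 0.3436117*b + 0.1641359*b**2 - 1.677791e-3*b**3
            - 3.445465e-5*b**4 + 5.933596e-6*b**5 + 1.210565e-7*b**6)

opt = minimize_scalar(res1, bounds=(0.0, 2.0), method='bounded')
print(opt.x, res1(opt.x))              # ≈ 1.06449, ≈ 0.0145074
```

The projection integral picks out exactly the ξ0,1 coefficient of (21), and the one-dimensional minimization is the n = 1 steepest descent seeking step of (30).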

For the higher-order un and ωn, the procedure is similar and can be carried out with symbolic computation software such as MAXIMA, MAPLE, or MATHEMATICA. The first few lower order approximations by SFPM-SDS are succinctly listed here:
(43) u2 = 1.033287 cos(τ) - 0.03583386 cos(3τ) + 0.002638345 cos(5τ) - 9.228024×10^-5 cos(7τ) + 1.069526×10^-6 cos(9τ),
u3 = 1.032637 cos(τ) - 0.03516805 cos(3τ) + 0.002767413 cos(5τ) - 2.543989×10^-4 cos(7τ) + 1.894233×10^-5 cos(9τ) - 1.2290549×10^-6 cos(11τ) + 6.680303×10^-8 cos(13τ) - 2.911103×10^-9 cos(15τ) + 1.036885×10^-10 cos(17τ) - 3.002914×10^-12 cos(19τ) + 6.569981×10^-14 cos(21τ) - 9.747357×10^-16 cos(23τ) + 8.541457×10^-18 cos(25τ) - 3.320339×10^-20 cos(27τ).

Closer inspection of the first few approximations (42) and (43) shows that the absolute magnitudes of the coefficients bn,k in (16) decrease very rapidly as k becomes larger, so it is not necessary to retain all the coefficients bn,k during the iteration. The effect of a higher-order wave component cos[(2k-1)τ] on the accuracy of un(τ) is expected to be negligible relative to the lower-order ones. A threshold value δ is therefore introduced to decide which bn,k are omitted, and the computational cost is reduced if we only preserve the coefficients satisfying |bn,k| ≥ δ. In the following calculations, we set δ = 10^-16. The details of βn,opt, Resn, and CPU time (seconds) are shown in Table 1. From Table 1, it is also found that the βn,opt of the higher order approximations deviate only a little from those of the lower order ones, so the higher order βn,opt can simply be inherited from the lower order ones; a further reduction of the computational cost is expected in this manner. For convenience, let NSDS denote the number of executions of the SDS algorithm. Here we consider the case NSDS = 5, which means only the first five relaxation factors β1,opt to β5,opt are determined by the SDS algorithm, and the succeeding relaxation factors are set as follows: (44) βn,opt = βNSDS,opt, n ≥ NSDS + 1. The result of this approach is also shown in Table 1 for comparison. Meanwhile, some |bn,k| of three different order approximations un(τ) are given in Figure 1, which shows that |bn,k| indeed decreases very rapidly as k becomes larger.
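The coefficient-thresholding rule can be sketched as a simple filter; the dictionary below reuses a few of the u3 coefficients from (43) for illustration:

```python
# Keep only Fourier coefficients with |b| >= delta; the sample values are
# taken from Eq. (43) (harmonic index k -> coefficient b_{3,k}).
delta = 1e-16
b3 = {1: 1.032637, 3: -0.03516805, 25: 8.541457e-18, 27: -3.320339e-20}
b_kept = {k: v for k, v in b3.items() if abs(v) >= delta}
print(sorted(b_kept))   # [1, 3] -- the k = 25 and k = 27 terms fall below delta
```

Dropping these sub-threshold harmonics shrinks the symbolic expressions carried through later iterations, which is where the cost saving comes from.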

Table 1: The βn,opt, Resn, and CPU time (seconds).

           SFPM-SDS (δ = 10^-16)               SFPM-SDS (NSDS = 5; δ = 10^-16)
Order n    βn,opt     Resn          CPU (s)    βn,opt       Resn          CPU (s)
1          1.06449    0.0145074     0.42       1.06449      0.0145074     0.42
2          1.03305    3.70382e-4    0.42       1.03305      3.70382e-4    0.42
3          0.975697   1.60740e-5    0.64       0.975697     1.60740e-5    0.64
4          1.05497    6.31101e-7    2.4        1.05497      6.31101e-7    2.4
5          0.987702   1.66964e-8    3.5        0.987702     1.66964e-8    3.5
6          1.03241    8.14114e-10   4.0        (0.987702)   8.43906e-10   0.22
7          1.01344    1.77868e-11   4.5        (0.987702)   1.99754e-11   0.23
8          1.00465    8.70314e-13   4.5        (0.987702)   8.08111e-13   0.23
9          1.03958    2.15011e-14   4.5        (0.987702)   3.04354e-14   0.23
10         0.985266   8.52066e-16   4.5        (0.987702)   7.18302e-16   0.23
11         1.05171    2.86410e-17   4.5        (0.987702)   3.66701e-17   0.24
12         0.984677   8.28269e-19   4.5        (0.987702)   9.53726e-19   0.24
13         1.04180    3.69971e-20   4.6        (0.987702)   3.27288e-20   0.25
14         1.00273    8.34169e-22   4.6        (0.987702)   1.40366e-21   0.25
15         1.01658    4.17116e-23   4.7        (0.987702)   3.14676e-23   0.26

Figure 1: The coefficients bn,k of un(τ) (NSDS = 5; δ = 10^-16).

The original equation (33) is also numerically integrated by a high-order Runge-Kutta method with high accuracy. Let uR-K and ωR-K denote the solution and the frequency calculated by the Runge-Kutta method, respectively. The comparison of ωn with ωR-K is given in Table 2, which shows that ωn agrees with ωR-K to 7 significant digits when n ≥ 8. The comparison of un(t) with uR-K is given in Figures 2 and 3, which shows that the maximum relative error [un(t)/uR-K(t) - 1] between un(t) and uR-K is about 2.2×10^-3 % and 4.0×10^-7 % for n = 5 and n = 10, respectively. The accuracy of un can also be judged by the square residual error Resn. As shown in Table 1 and Figure 4, it is clear that the solution sequence {un ∣ n = 0, 1, 2, 3, …} converges to the exact solution very rapidly.
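As a rough cross-check of this comparison, Example 1 can be integrated with SciPy's `solve_ivp` (a stand-in for the high-order Runge-Kutta code used in the paper), estimating the frequency from the spacing of successive maxima:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):                      # Example 1, Eq. (33): u'' = -u - u'^2 * u
    u, v = y
    return [v, -u - v**2 * u]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0.0, 50.0, 200001)
u = sol.sol(t)[0]
# local maxima of u(t); their mean spacing approximates the period
peaks = t[1:-1][(u[1:-1] > u[:-2]) & (u[1:-1] > u[2:])]
omega = 2 * np.pi / np.mean(np.diff(peaks))
print(omega)    # close to the quoted omega_R-K = 1.136775
```

The peak-spacing estimate is only grid-accurate, but it is enough to confirm the first several digits of the frequency reported in Table 2.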

Table 2: The frequency calculated by SFPM-SDS (NSDS = 5; δ = 10^-16) and the Runge-Kutta method.

Order n    ωn          ωR-K        Relative error (ωn/ωR-K - 1)
1          1.154701    1.136775    1.6%
2          1.138810                0.18%
3          1.136445                -0.030%
4          1.136678                -0.0085%
5          1.136779                3.5e-4 %
6          1.136779                3.5e-4 %
7          1.136776                8.8e-5 %
8          1.136775                0.0%

Figure 2: Comparison of u5(t) with uR-K (NSDS = 5; δ = 10^-16).

Figure 3: Relative error [un(t)/uR-K(t) - 1] between un(t) and uR-K (NSDS = 5; δ = 10^-16): (a) the 5th approximation u5(t); (b) the 10th approximation u10(t).

Figure 4: The square residual error Resn (NSDS = 5; δ = 10^-16).

3.2. Example 2

The second example [4, 5] is governed by (45) ü(1 + u²) + 2u = 0, u(0) = 1, u̇(0) = 0. According to SFPM-SDS, the iteration procedure has the form (46) un+1″ + un+1 = un″ + un - βn+1(un″ + 2un/ωn+1² + un″un²), un+1(0) = 1, un+1′(0) = 0, n = 0, 1, 2, …. The solution procedure is similar to that of Example 1, so only the results are provided. To investigate the effect of the SDS algorithm on the convergence of the approximate solution, the results for two different NSDS values are shown in Table 3. It is found that NSDS has only a slight effect on the convergence of the approximate solution, so NSDS can usually take a small value to decrease the computational cost, especially when the first few βn,opt are close to each other. The comparison of the frequency ωn with ωR-K is given in Table 4, which shows that ωn agrees with ωR-K to 7 significant digits when n ≥ 8.
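The first solvability condition of Example 2 can be checked the same way as in Example 1: with u0 = cos(τ), projecting 𝒜[u0] onto cos(τ) under the iteration (46) gives ω1 = √(8/7) ≈ 1.069045, matching the first row of Table 4. A minimal SymPy sketch:

```python
import sympy as sp

tau, w = sp.symbols('tau omega', positive=True)
u0 = sp.cos(tau)
# A[u0] for Example 2 after the transformation: u'' + 2u/omega^2 + u'' * u^2
A0 = sp.diff(u0, tau, 2) + 2 * u0 / w**2 + sp.diff(u0, tau, 2) * u0**2
# solvability condition: the cos(tau) component of A[u0] must vanish
c1 = sp.integrate(A0 * sp.cos(tau), (tau, 0, 2 * sp.pi)) / sp.pi
w1 = sp.solve(sp.Eq(c1, 0), w)[0]
print(float(w1))    # ≈ 1.069045
```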

Table 3: The βn,opt, Resn, and CPU time (seconds).

           SFPM-SDS (NSDS = 5; δ = 10^-16)          SFPM-SDS (NSDS = 10; δ = 10^-16)
Order n    βn,opt        Resn          CPU (s)      βn,opt        Resn          CPU (s)
1          0.6195087     0.011092067   0.31         0.6195087     0.011092067   0.31
2          0.6696290     6.005123e-4   0.32         0.6696290     6.005123e-4   0.32
3          0.6319373     5.224107e-5   0.56         0.6319373     5.224107e-5   0.56
4          0.6797179     4.408895e-6   1.7          0.6797179     4.408895e-6   1.7
5          0.6317019     4.341748e-7   2.6          0.6317019     4.341748e-7   2.6
6          (0.6317019)   4.324532e-8   0.14         0.6874748     4.067318e-8   3.0
7          (0.6317019)   4.872231e-9   0.15         0.6299683     4.158791e-9   3.3
8          (0.6317019)   5.515471e-10  0.15         0.6928416     4.080293e-10  3.7
9          (0.6317019)   6.640356e-11  0.15         0.6281442     4.247500e-11  4.0
10         (0.6317019)   8.019986e-12  0.15         0.6969528     4.271866e-12  4.1
11         (0.6317019)   9.979502e-13  0.15         (0.6969528)   4.963906e-13  0.15
12         (0.6317019)   1.244679e-13  0.15         (0.6969528)   6.181212e-14  0.15
13         (0.6317019)   1.576183e-14  0.15         (0.6969528)   8.641084e-15  0.15
14         (0.6317019)   2.000453e-15  0.15         (0.6969528)   1.252247e-15  0.15
15         (0.6317019)   2.560610e-16  0.16         (0.6969528)   1.918912e-16  0.15

Table 4: The frequency calculated by SFPM-SDS (NSDS = 5; δ = 10^-16) and the Runge-Kutta method.

Order n    ωn          ωR-K        Relative error (ωn/ωR-K - 1)
1          1.069045    1.077035    -0.74%
2          1.075638                -0.13%
3          1.076791                -0.023%
4          1.076952                -0.0077%
5          1.077020                -0.0014%
6          1.077029                -5.6e-4 %
7          1.077034                -9.3e-5 %
8          1.077035                0.0%
4. Conclusion

In this paper, based on the fixed point concept in functional analysis, the spectral fixed point method (SFPM) is proposed for nonlinear oscillation equations with periodic solutions, and the steepest descent seeking (SDS) algorithm is brought forward in the framework of SFPM to improve the computational efficiency. Two typical examples are discussed in detail as applications of SFPM. The results show the following.

SFPM achieves high accuracy, like the traditional spectral method.

SFPM possesses the succession property, so the accuracy of the approximation can be improved step-by-step to any desired level at a low computational cost.

In the framework of SFPM, the spectral characteristics of the oscillation equation are obtained as a byproduct, without resorting to the FFT algorithm.

The SDS algorithm can greatly improve the computational efficiency, so it is practicable to apply SFPM-SDS to various types of nonlinear oscillation equations with periodic solutions.

So far, only nonlinear ordinary differential equations have been investigated by SFPM in this paper, but SFPM is also capable of handling nonlinear systems and partial differential equations; this will be discussed in future work.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The work is supported by the National Natural Science Foundation of China (Grants nos. 11102150 and 11302165) and the Fundamental Research Funds for the Central Universities.

References

[1] S. Timoshenko, D. H. Young, and W. Weaver, Vibration Problems in Engineering, John Wiley & Sons, New York, NY, USA, 1974.
[2] A. Fidlin, Nonlinear Oscillations in Mechanical Engineering, Springer, Berlin, Germany, 2006.
[3] A. H. Nayfeh and D. T. Mook, Nonlinear Oscillations, John Wiley & Sons, New York, NY, USA, 1979.
[4] A. H. Nayfeh, Perturbation Methods, John Wiley & Sons, 1973.
[5] A. H. Nayfeh, Introduction to Perturbation Techniques, John Wiley & Sons, New York, NY, USA, 1981.
[6] S.-J. Liao, "An analytic approximate technique for free oscillations of positively damped systems with algebraically decaying amplitude," International Journal of Non-Linear Mechanics, vol. 38, no. 8, pp. 1173-1183, 2003.
[7] S.-J. Liao, "An analytic approximate approach for free oscillations of self-excited systems," International Journal of Non-Linear Mechanics, vol. 39, no. 2, pp. 271-280, 2004.
[8] J.-H. He, "Variational iteration method—a kind of non-linear analytical technique: some examples," International Journal of Non-Linear Mechanics, vol. 34, no. 4, pp. 699-708, 1999.
[9] J. P. Boyd, Chebyshev and Fourier Spectral Methods, Dover Publications, 2001.
[10] J. Shen and T. Tang, Spectral and High-Order Methods with Applications, Science Press, 2006.
[11] L. N. Trefethen, Spectral Methods in MATLAB, Society for Industrial and Applied Mathematics, 2000.
[12] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, Spectral Methods: Fundamentals in Single Domains, Springer, 2006.
[13] D. Xu and X. Guo, "Fixed point analytical method for nonlinear differential equations," Journal of Computational and Nonlinear Dynamics, vol. 8, no. 1, Article ID 011005, 2013.
[14] D. Xu and X. Guo, "Application of fixed point method to obtain semi-analytical solution to Blasius flow and its variation," Applied Mathematics and Computation, vol. 224, pp. 791-802, 2013.
[15] X. Guo, X. Wang, D. Xu, and G. N. Xie, "A Legendre series solution to Rayleigh stability equation of mixing layer," Applied Mathematics and Mechanics, vol. 34, pp. 782-794, 2013.
[16] E. Zeidler, Nonlinear Functional Analysis and Its Applications, I: Fixed-Point Theorems, Springer, 1986.
[17] E. Zeidler, Applied Functional Analysis: Applications to Mathematical Physics, Springer, 1995.