This paper introduces higher-order solutions of stochastic nonlinear differential equations with the Wiener-Hermite expansion and perturbation (WHEP) technique. The technique is used to study the quadratic nonlinear stochastic oscillatory equation with different orders, different numbers of corrections, and different strengths of the nonlinear term. The equivalent deterministic equations are derived up to third order and fourth correction. A model numerical integral solver is developed to solve the resulting set of equations. The numerical solver is tested and validated and then used in simulating the stochastic quadratic nonlinear oscillatory motion with different parameters. The solution ensemble average and variance are computed and compared in all cases. The current work extends the use of the WHEP technique in solving stochastic nonlinear differential equations.
1. Introduction
Analysis of the response of linear and nonlinear systems subjected to random excitations is of considerable interest in mechanical and structural engineering [1]. Stochastic differential equations driven by the white noise process provide a powerful tool for modeling the dynamics of complex and uncertain phenomena. In many practical situations, it is appropriate to assume that the nonlinear term affecting the phenomenon under study is small; its intensity is then controlled by means of a small parameter, say ε [2].
According to [3], the solution of stochastic partial differential equations (SPDEs) using Wiener-Hermite expansion (WHE) has the advantage of converting the problem to a system of deterministic equations that can be solved efficiently using the standard deterministic numerical methods. The main statistics, such as the mean, covariance, and higher-order statistical moments, can be calculated by simple formulae involving only the deterministic Wiener-Hermite coefficients. In WHE approach, there is no randomness directly involved in the computations. One does not have to rely on pseudorandom number generators, and there is no need to solve the stochastic PDEs repeatedly for many realizations. Instead, the deterministic system is solved only once.
The application of the WHE [4–10] aims at finding a truncated series solution to the solution process of a stochastic differential equation. The truncated series is composed of two major parts: the first is the Gaussian part, which consists of the first two terms, while the rest of the series constitutes the non-Gaussian part. In nonlinear cases, difficulties always arise in solving the resulting set of deterministic integrodifferential equations, obtained by applying a set of ensemble averages to the stochastic integrodifferential equation that results from the direct application of WHE. Many authors have introduced different methods to overcome these obstacles. Among them, the WHE with perturbation (WHEP) technique [4] uses the perturbation technique to solve perturbed nonlinear problems.
The WHE was originated and developed by Wiener in 1938 and 1958 [11]. Wiener constructed orthonormal random bases for expanding homogeneous chaos depending on white noise and used them to study problems in statistical mechanics. Cameron and Martin [12] developed a more explicit and intuitive formulation for WHE (now known as the Wiener chaos expansion, WCE). Their development is based on an explicit discretization of the white noise process through its Fourier expansion, which was missing from Wiener’s original formalism. This approach is much easier to understand and more convenient to use and hence replaced Wiener’s original formulation. Since Cameron and Martin’s work, WHE has become a useful tool in stochastic analysis involving white noise (Brownian motion) [3]. Another formulation, which we will denote the Imamura formulation, was suggested and applied by Imamura and his coworkers [13, 14]. They developed a theory of turbulence involving a truncated WHE of the velocity field. The randomness is taken up by a white-noise function associated, in the original version of the theory, with the initial state of the flow. The mechanical problem then reduces to a set of coupled integrodifferential equations for deterministic kernels. In [1], the WHE (Imamura formulation [13]) was used to compute the nonstationary random vibration of a Duffing oscillator, which has a cubic nonlinearity, under white-noise excitation. Solutions up to second order were obtained by solving the equivalent deterministic system with an iterative scheme. El-Tawil and his coworkers [4–10, 15, 16] used the WHEP technique to solve a perturbed nonlinear stochastic diffusion equation and the quadratic and cubic nonlinear stochastic oscillatory equations with first-order approximations.
In [17], nonlinear random vibration was analyzed using several methods: the equivalent linearization method [18], the stochastic averaging method [19], the WHE approach with nonstationary excitations [1], the WHEP technique [15], eigenfunction expansions [20], and the method of detailed balance [21]. All of these methods apply to nonlinear random oscillations of real systems subjected to random nonstationary (or stationary) excitations.
According to [5, 6], quadratic oscillation arises in many models in applied sciences and engineering when studying oscillatory systems [22]. These systems can be exposed to many uncertainties through the external forces, the damping coefficient, the frequency, and/or the initial or boundary conditions. These input uncertainties cause the output solution process to be uncertain as well. In most cases, obtaining the probability density function (p.d.f.) of the solution process may be impossible, so developing approximate techniques (through which approximate statistical moments can be obtained) is important and necessary. There are many techniques which can be used to obtain the statistical moments of such problems. The main goal of this paper is to introduce higher-order WHEP solutions and to suggest a numerical solver suitable for handling the equivalent deterministic system.
In [16], the WHEP technique is generalized to nth-degree nonlinearity, general order of WHE, and general number of corrections. The extension to handle white noise in more than one variable and general nonlinearities is also outlined there. The generalized algorithm is implemented and linked to MathML [23] to print out the resulting equivalent deterministic system.
In the current work, the WHE formulation suggested by Meecham and his coworkers (Imamura formulation) is used to solve the stochastic nonlinear differential models of the form
(1)L(x(t))=-εxn+f(t)+g(t)N(t),t∈(0,T]
with a proper set of initial conditions, which will be assumed deterministic. The differential operator L is a general linear operator. The nonlinearity is introduced as losses of degree n>1 strengthened by a deterministic small parameter (ε); for the quadratic nonlinearity, n equals 2. The uncertainty is introduced through the white noise N(t) scaled by a deterministic envelope function g(t). The function f(t) is a deterministic forcing function. Theorem 1 will be used in the derivation of the WHEP technique.
Theorem 1.
The solution of (1), if it exists, is a power series in ε; that is, x(t)=∑i=0∞εixi(t).
Proof.
Use Picard’s method, which generates a sequence of approximations that converges to the solution; that is, the solution is obtained as x(t)=limk→∞x(k)(t), where the kth approximation is computed as
(2)L(x(k)(t))=-ε[x(k-1)(t)]n+f(t)+g(t)N(t),k≥1,
apply the inverse operator, L-1, on both sides to get
(3)x(k)(t)=-εL-1([x(k-1)(t)]n)+L-1(f(t)+g(t)N(t)).
Let x(0)(t)=L-1(f(t)+g(t)N(t)) which is the solution at ε=0.
Then Picard’s kth approximation is computed as
(4)x(k)(t)=x(0)(t)-εL-1([x(k-1)(t)]n).
Now, we need to prove that the kth approximation is a power series in ε; that is, it can be written as
(5)x(k)(t)=∑i=0Mkεixi(k)(t),
where Mk+1 is the number of terms in the series. Using the mathematical induction, we need to prove that a power series solution will be obtained at k=1 and at k+1 provided that the kth approximation is a power series solution.
At k=1, Picard’s 1st approximation will be x(1)(t)=x(0)(t)-εL-1([x(0)(t)]n), which is a power series in ε. Now we need to prove that x(k+1)(t) is a power series in ε given that x(k)(t) is a power series in ε. Picard’s (k+1)th approximation is computed as
(6)x(k+1)(t)=x(0)(t)-εL-1([x(k)(t)]n),
or, after substituting the power series of x(k)(t), we get
(7)x(k+1)(t)=x(0)(t)-εL-1([∑i=0Mkεixi(k)(t)]n).
The second term in the right hand side can be expanded using the multinomial theorem to get
(8)[∑i=0Mkεixi(k)(t)]n=∑hch∏i=0Mk[εixi(k)(t)]jhi,
where ch=n!/∏i=0Mkjhi! and the counter h runs over all the \binom{n+M_k}{n} combinations of the nonnegative integers jh0,jh1,…,jhMk such that ∑i=0Mkjhi=n. In a more simplified form, we can write
(9)[∑i=0Mkεixi(k)(t)]n=∑hchεwh∏i=0Mk[xi(k)(t)]jhi,
where wh=∑i=0Mkijhi. Let ∏i=0Mk[xi(k)(t)]jhi=vh(t); then the expansion takes the form
(10)[∑i=0Mkεixi(k)(t)]n=∑hchεwhvh(t).
Substitute in Picard’s (k+1)th approximation to get
(11)x(k+1)(t)=x(0)(t)-εL-1(∑hchεwhvh(t))
or
(12)x(k+1)(t)=x(0)(t)-∑hchε1+whL-1(vh(t)),
which is a power series in ε. This completes the proof. We note that the previous theorem also applies in the case of a deterministic forcing term, that is, g(t)=0. The theorem also applies if the unknown function x depends on more than one variable [16].
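Theorem 1 can be illustrated numerically. The sketch below (Python with numpy) uses a toy first-order operator L = d/dt with f(t)=1, g=0, and n=2 — all illustrative choices, not the oscillator studied later — and runs three Picard iterations for a grid of ε values. Since M_k = 2M_{k-1}+1 here, the third iterate should be a polynomial in ε of degree M_3 = 7, which a degree-7 polynomial fit confirms to machine precision.

```python
import numpy as np

def picard(eps, k, T=1.0, n=2, m=400):
    """Picard iterates for x' = -eps*x^n + 1, x(0) = 0, i.e. L = d/dt:
    x_(0)(t) = t, and x_(k) = x_(0) - eps * I[(x_(k-1))^n], with I the
    antiderivative approximated by a cumulative trapezoid rule."""
    t = np.linspace(0.0, T, m + 1)
    def cumtrap(y):
        out = np.zeros_like(y)
        out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
        return out
    x = t.copy()                       # x_(0) = L^{-1} f
    for _ in range(k):
        x = t - eps * cumtrap(x ** n)  # eq. (4)
    return x[-1]                       # iterate evaluated at t = T

eps_grid = np.linspace(0.0, 0.5, 12)
vals = np.array([picard(e, k=3) for e in eps_grid])
coef = np.polyfit(eps_grid, vals, 7)   # M_3 = 7 for n = 2
residual = np.max(np.abs(np.polyval(coef, eps_grid) - vals))
assert residual < 1e-8                 # iterate is exactly polynomial in eps
```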
This paper is organized as follows. In Section 2, the WHEP technique is reviewed and the generalized WHEP derivation steps are outlined. The deterministic set of equations equivalent to the stochastic nonlinear oscillatory equation are tabulated in Section 3. The analytical and numerical solutions of the oscillatory equation are described in Section 4. The simulations up to third order and fourth correction are shown in Section 5.
2. WHEP Technique
As a consequence of the completeness of the Wiener-Hermite set [1], any arbitrary stochastic process can be expanded in terms of the Wiener-Hermite polynomial set, and this expansion converges to the original stochastic process with probability one.
The solution function x(t;w) can be expanded in terms of Wiener-Hermite functionals as [4]
(13) x(t;w) = x^{(0)}(t) + \int_{-\infty}^{\infty} x^{(1)}(t;t_1)\,H^{(1)}(t_1;w)\,dt_1 + \iint_{-\infty}^{\infty} x^{(2)}(t;t_1,t_2)\,H^{(2)}(t_1,t_2;w)\,dt_1\,dt_2 + \iiint_{-\infty}^{\infty} x^{(3)}(t;t_1,t_2,t_3)\,H^{(3)}(t_1,t_2,t_3;w)\,dt_1\,dt_2\,dt_3 + \cdots
or after eliminating the parameters, for the sake of brevity, we get
(14)x(t;w)=x(0)(t)+∑k=1∞∫Rkx(k)H(k)dτk,
where dτk=dt1dt2⋯dtk and ∫Rk is a k-dimensional integral over the variables t1,t2,…,tk. The first term in the expansion (14) is the nonrandom part or ensemble mean of the function. The first two terms represent the normally distributed (Gaussian) part of the solution. Higher-order terms in the expansion depart more and more from the Gaussian form [16]. The Gaussian approximation is usually a bad approximation for nonlinear problems, especially when high-order statistics are concerned [18].
The components x(j)(t;t1,t2,…,tj) are called the deterministic kernels of the WHE of x(t). They are functions of time and space variables and fully account for the time dependence of x(t) as well as for its statistical properties [3]. w is a random output of a triple probability space (Ω,B,P), where Ω is a sample space, B is a σ-algebra associated with Ω, and P is a probability measure. For simplicity, w will be dropped later on.
The functional H^{(n)}(t_1,t_2,\dots,t_n) is the nth-order Wiener-Hermite time-independent functional. The WH functionals form a complete set [1], and they satisfy the following recurrence relation for n≥2:
(15) H^{(n)}(t_1,\dots,t_n) = H^{(n-1)}(t_1,\dots,t_{n-1})\,H^{(1)}(t_n) - \sum_{m=1}^{n-1} H^{(n-2)}(t_1,\dots,\widehat{t_{n-m}},\dots,t_{n-1})\,\delta(t_{n-m}-t_n),
where the hat over t_{n-m} indicates that this argument is omitted,
with H(0)=1 and H(1)(t1)=N(t1): the white noise. By construction, the Wiener-Hermite functionals are symmetric in their arguments and are statistically orthonormal with respect to the weighting function e-(1/2)∑i=1nξ2(ti); that is,
(16)E[H(i)H(j)]=0∀i≠j.
The average of almost all Wiener-Hermite functionals vanishes, particularly,
(17)E[H(i)]=0∀i≥1.
The expectation and variance of the solution will be
(18) E[x(t)] = x^{(0)}(t), \qquad \mathrm{Var}[x(t)] = \sum_{k=1}^{m} k! \int_{R^k} \big(x^{(k)}\big)^2\, d\tau_k.
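For the Gaussian (first-order) part of the expansion, the mean and variance formulae (18) can be checked against a direct Monte Carlo simulation. In the sketch below (Python with numpy), the kernels x^(0) and x^(1) are arbitrary placeholders, not solutions of any particular equation; the white-noise integral is discretised as a sum of independent Gaussian increments dW_k ~ N(0, dt).

```python
import numpy as np

# x(w) = x^(0) + int x^(1)(t1) H^(1)(t1; w) dt1, with H^(1) the white noise.
# Discretised, the integral becomes sum_k x^(1)(t_k) dW_k, so eq. (18) gives
# E[x] = x^(0) and Var[x] = int (x^(1))^2 dt1.
rng = np.random.default_rng(0)
T, m = 1.0, 200
t = np.linspace(0.0, T, m, endpoint=False) + T / (2 * m)   # midpoints
dt = T / m
x0 = 0.7                               # zeroth kernel (placeholder)
x1 = np.exp(-t) * np.sin(3 * t)        # first kernel (placeholder shape)

n_samples = 200_000
dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, m))
samples = x0 + dW @ x1                 # one realisation of x per row

var_formula = np.sum(x1 ** 2) * dt     # eq. (18) with k = 1
assert abs(samples.mean() - x0) < 0.01
assert abs(samples.var() - var_formula) / var_formula < 0.05
```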
The WHE method can be used directly to solve stochastic differential equations by expanding the solution as well as the stochastic input processes via the WHE. The resulting equation is more complex than the original one, since it is a stochastic integrodifferential equation. Taking a set of ensemble averages together with using the statistical properties of the WHE functionals, a set of deterministic integrodifferential equations is obtained in the deterministic kernels x^{(i)}(t;t_1,t_2,\dots,t_i), i=0,1,2,\dots. To obtain approximate solutions for these deterministic kernels, one can use perturbation theory in the case of a perturbed system depending on a small parameter ε. Expanding the kernels as power series in ε, another set of simpler iterative equations in the kernel series components is obtained. This is the essence of the WHEP algorithm [4]. The technique was successfully applied to several nonlinear stochastic equations; see, for example, [4, 5, 10].
The WHEP technique for a general nonlinear exponent (n), general order (m), and general number of corrections (NC) proceeds in the following steps [16].
Step 1. Truncate the expansion (14) to contain only m+1 (m≥1) kernels x(j), 0≤j≤m; that is, x(t;w)=x(0)(t)+∑k=1m∫Rkx(k)H(k)dτk.
Step 2. Substitute into the stochastic oscillatory equation (1).
Step 3. Use the multinomial theorem to expand the nonlinear term xn in (1); for the quadratic case, n=2.
Step 4. Multiply by H(j), 0≤j≤m, and then apply the ensemble average. This leads to (m+1) equations in the kernels x(j), 0≤j≤m.
Step 5. For each kernel x(j), 0≤j≤m, apply the perturbation technique up to NC corrections; that is, x(j)=∑i=0NCεixi(j).
Step 6. Equate the coefficients of εk, 0≤k≤NC, on both sides to get NC+1 equations for each kernel x(j), 0≤j≤m.
This will lead to the following (m+1)(NC+1) equations:
(19) j!\,L\big(x_0^{(j)}\big) = \delta_{j0}\,f(t) + \delta_{j1}\,g(t)\,\delta(t-t_1), \quad 0\le j\le m,
(20) j!\,L\big(x_b^{(j)}\big) = -\sum_{f} c_f\, D_{f,b-1}^{(j)}\, E_{fj}, \quad 0\le j\le m,\ 1\le b\le NC,
where
(21) D_{f,b-1}^{(j)} = \int_{R^z} \Big( \sum_{\mathrm{Var}} \bar{c}_g \prod_{i=0}^{m} \prod_{p=0}^{NC} \big[x_p^{(i)}\big]^{h_{gp}} \Big)\, d\tau_z.
And the expectations E_{fj} are computed as
(22) E_{fj} = \Big\langle H^{(j)} \prod_{i=0}^{m} \big(H^{(i)}\big)^{k_{fi}} \Big\rangle.
It was explained in [16] how to express E_{fj} in terms of the Dirac delta functions and then use them to reduce the integrals that appear in D_{f,b-1}^{(j)}. The summation ∑Var means that all variations h_{qp}, 0≤q≤m, 0≤p≤NC, that satisfy the equality d=∑q=0m∑p=0NC p h_{qp} are selected. This can be done by a searching technique. For these variations, the factors c_g = k_{fi}!/∏p=0NC h_{gp}! are multiplied by each other to get \bar{c}_g; that is, \bar{c}_g = ∏_{Var} c_g. The Kronecker delta function is defined as
(23)δjl={1,j=l0,j≠l.
The counter f, in the summation on the right-hand side of (20), runs over all of the \binom{n+m}{n} combinations of the nonnegative integers k_{f0},k_{f1},\dots,k_{fm} such that ∑i=0m k_{fi}=n.
Equations (19) and (20) can always be solved in the proper sequence. The first m+1 equations, (19), are solved independently to get x0(j), 0≤j≤m; these are then used to compute the other components in (20). For j=0, the component x0(0) is obtained by solving L(x0(0))=f(t) with the original initial and boundary conditions. For j=1, the component x0(1) is obtained by solving L(x0(1))=g(t)δ(t-t1) with zero initial and boundary conditions. The other components x0(j), j≥2, will be zero due to the zero right-hand side and zero initial and boundary conditions. Equation (20) specifies the solution sequence to be followed. The component xi(j) is evaluated in terms of the previously computed components xk(p), p≤j, k<i. This means that the 1st corrections for all kernels, x1(j), 0≤j≤m, are solved first, then the 2nd corrections for all kernels, x2(j), 0≤j≤m, and so on up to the (NC)th corrections for all kernels, xNC(j), 0≤j≤m.
These results are consistent with the known results obtained using WHE. In WHE, higher-order kernels are driven by lower-order kernels, and at the bottom, the Gaussian kernels are driven by the random forcing directly. So, the lower-order kernels are usually dominant in magnitude [3].
The statistical properties of the solution will be calculated as
(24)E[x(t)]=∑i=0NCεixi(0),Var[x(t)]=∑k=1m(k!)∫Rk(∑i=0NCεixi(k))2dτk.
If x(j)=∑i=0∞εixi(j), then the series converges if [16]
(25) |\varepsilon| \le \left| \frac{x_i^{(j)}}{x_{i+1}^{(j)}} \right|
for t∈[t0,T]. This means that |ε| must obey an upper bound condition beyond which divergence occurs.
3. The Equivalent Deterministic Equations
Applying the WHEP algorithm above yields the following systems of equations for the quadratic (n=2) nonlinear oscillatory equation with first-order (m=1) Gaussian approximation and different numbers of corrections (NC). The initial conditions are assumed deterministic, and hence only the zero-order, zero-correction kernel equation (L(x0(0))=f(t)) carries the initial conditions (x0 and x˙0). The other kernel equations have zero initial conditions.
For the quadratic nonlinear oscillatory stochastic equation, the application of the WHEP technique results in the following set of equations (Tables 1, 2, and 3). The equations are written for the first, second, and third orders. For a given correction level, the deterministic system also includes the equations from the previous levels.
The equivalent deterministic system for first-order approximation with different numbers of corrections (NC).
In the case of zero initial conditions, f(t)=0, and g(t)=1, that is, RHS=-εω²x²+N(t), we have the following reduced system of equations:
(26)
L(x_0^{(0)}) = 0,
L(x_0^{(1)}) = \delta(t-t_1),
L(x_1^{(0)}) = -\omega^2 \int_R [x_0^{(1)}(t_1)]^2\,dt_1,
L(x_1^{(1)}) = 0,
L(x_2^{(0)}) = 0,
L(x_2^{(1)}) = -2\omega^2 x_1^{(0)} x_0^{(1)}(t_1),
L(x_3^{(0)}) = -\omega^2 [x_1^{(0)}]^2 - 2\omega^2 \int_R x_0^{(1)}(t_1)\,x_2^{(1)}(t_1)\,dt_1,
L(x_3^{(1)}) = 0,
L(x_4^{(0)}) = 0,
L(x_4^{(1)}) = -2\omega^2 x_1^{(0)} x_2^{(1)}(t_1) - 2\omega^2 x_3^{(0)} x_0^{(1)}(t_1),
L(x_5^{(0)}) = -2\omega^2 x_1^{(0)} x_3^{(0)} - 2\omega^2 \int_R x_0^{(1)}(t_1)\,x_4^{(1)}(t_1)\,dt_1 - \omega^2 \int_R [x_2^{(1)}(t_1)]^2\,dt_1,
L(x_5^{(1)}) = 0.
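The first few equations of (26) can be solved in closed form up to quadrature. The kernel solving L(x)=δ(t−t1) with zero initial conditions is the impulse response x0^(1)(t;t1)=h(t−t1), so the forcing of x1^(0) reduces to −ω²∫₀ᵗ h(s)² ds. The sketch below (Python with numpy) computes the first mean correction E[x] ≈ ε x1^(0) this way, using a Duhamel-convolution solve rather than the paper's FVM solver; ω=1, ξ=0.5, ε=0.1 are the parameters used later in the simulations.

```python
import numpy as np

w, xi, eps = 1.0, 0.5, 0.1
wd = w * np.sqrt(1.0 - xi ** 2)
h = lambda s: np.exp(-w * xi * s) * np.sin(wd * s) / wd   # impulse response of L

T, m = 10.0, 2000
t, dt = np.linspace(0.0, T, m + 1, retstep=True)

# forcing of x1^(0): -w^2 * int_0^t h(s)^2 ds (cumulative trapezoid)
hh = h(t) ** 2
forcing = -w ** 2 * np.concatenate(([0.0], np.cumsum(0.5 * (hh[1:] + hh[:-1]) * dt)))

# solve L(x1^(0)) = forcing with zero ICs via the Duhamel convolution x = h * forcing
x1_0 = np.convolve(h(t), forcing)[: m + 1] * dt
mean_response = eps * x1_0            # E[x] ~ x0^(0) + eps*x1^(0), with x0^(0) = 0

# quadratic loss pulls the mean down; steady value is -eps/2 for these parameters
assert mean_response[-1] < 0.0
assert abs(mean_response[-1] + eps * 0.5) < 0.005
```

The steady-state check uses ∫₀^∞ h(s)² ds = 1/2 for ω=1, ξ=0.5, so the mean settles near −ε/2.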
4. Oscillatory Equation and the Numerical Solver
For the linear oscillatory equation,
(27) \ddot{x} + 2\omega\xi\dot{x} + \omega^2 x = f(t),
the linear operator L will be
(28) L = \frac{d^2}{dt^2} + 2\omega\xi\frac{d}{dt} + \omega^2.
This means that the linear oscillatory equation can be written as L(x)=f(t). The parameter ω is the undamped angular frequency of the oscillator, and ξ is the damping ratio. The initial conditions are taken as x(0)=x0 and x˙(0)=x˙0. In the case of zero initial conditions, the exact solution, obtainable by standard methods such as the theory of linear differential equations or the Laplace transform, is the convolution
(29)x(t)=h(t)∘f(t),
where h(t)=(1/\omega_d)\,e^{-\omega\xi t}\sin(\omega_d t) with \omega_d=\omega\sqrt{1-\xi^2}, assuming underdamping (ξ<1).
For f(t)=e^{-t}, the solution will be x(t)=\int_0^t h(t-\tau)f(\tau)\,d\tau, which results in
(30) x(t) = \frac{1}{1-2\omega\xi+\omega^2}\left(e^{-t} - e^{-\omega\xi t}\cos(\omega_d t) + \frac{1-\omega\xi}{\omega_d}\,e^{-\omega\xi t}\sin(\omega_d t)\right).
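The closed form (30) can be verified numerically by evaluating the convolution of h with f(t)=e^{-t} on a grid. The sketch below (Python with numpy) uses the same parameters as the validation case, ω=1 and ξ=0.5.

```python
import numpy as np

w, xi = 1.0, 0.5
wd = w * np.sqrt(1.0 - xi ** 2)
T, m = 10.0, 4000
t, dt = np.linspace(0.0, T, m + 1, retstep=True)

h = np.exp(-w * xi * t) * np.sin(wd * t) / wd      # impulse response
f = np.exp(-t)
x_conv = np.convolve(h, f)[: m + 1] * dt           # Duhamel integral x = h * f

A = 1.0 / (1.0 - 2.0 * w * xi + w ** 2)
x_exact = A * (np.exp(-t) - np.exp(-w * xi * t) * np.cos(wd * t)
               + (1.0 - w * xi) / wd * np.exp(-w * xi * t) * np.sin(wd * t))

assert np.max(np.abs(x_conv - x_exact)) < 1e-2     # discrete-convolution error only
```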
The numerical solution can be obtained for a model equation and then used for all kernels in the proper sequence. The model equation in this case will take the form
(31)x¨+a1x˙+a0x=f(t),
where a1 and a0 are assumed constant. We can use any difference scheme, but since we are working with white noise, the Dirac delta function δ(t-a) is expected to appear in the equations of some kernels. Hence, an integral numerical scheme such as FEM or FVM is more suitable, as the integration of the Dirac delta function is easier to handle. In our case, the finite volume method (FVM) is considered. The time axis, t∈[0,T], is discretized into n equal intervals of size Δt. The interval extending from ti-1 to ti is taken as the control volume. In one dimension, this technique is equivalent to the trapezoidal integration rule. The second-order linear equation (31) is decomposed into two simultaneous first-order equations. This can be done by substituting x˙=y, and then we will have
(32)x˙=ywithx(0)=x0,y˙+a1y+a0x=f(t)withy0=x˙(0).
Integrate (32) along the control volume to get
(33)∫ti-1tix˙dt=∫ti-1tiydt.
Approximate the integral with the trapezoidal rule which is of accuracy proportional to (Δt)2 to get
(34)xi=xi-1+Δt2(yi-1+yi).
Also, integrate the second equation of (32) along the control volume and substitute xi from (34) to get
(35) y_i = \frac{4F_{i-1} + y_{i-1}\big(4 - 2a_1\Delta t - a_0(\Delta t)^2\big) - 4a_0\Delta t\,x_{i-1}}{4 + 2a_1\Delta t + a_0(\Delta t)^2},
where Fi-1=∫ti-1tif(t)dt=0.5(fi-1+fi)Δt. If the Dirac delta function appears in the right hand side, a special treatment for Fi-1 is considered. In this case Fi-1=∫ti-1tiq(t)δ(t-t1)dt=q(ti) when ti=t1 and Fi-1=0 when ti≠t1. Equation (35) will be solved at each node ti to get yi and then (34) computed to get xi.
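The update pair (34)–(35) can be sketched directly. The code below (Python with numpy) implements the scheme for the model equation (31) and checks it against the exact solution (30) for f(t)=e^{-t}, zero initial conditions, ω=1, ξ=0.5, and Δt=0.1; the loose error bound here is just a sanity check, not the paper's reported 0.03% figure.

```python
import numpy as np

def fvm_solve(a0, a1, f, T, dt, x0=0.0, y0=0.0):
    """FVM/trapezoidal march for x'' + a1 x' + a0 x = f(t), eqs. (34)-(35)."""
    n = int(round(T / dt))
    t = np.linspace(0.0, T, n + 1)
    x = np.zeros(n + 1); y = np.zeros(n + 1)
    x[0], y[0] = x0, y0
    fv = f(t)
    for i in range(1, n + 1):
        F = 0.5 * (fv[i - 1] + fv[i]) * dt               # F_{i-1} (trapezoid)
        y[i] = (4 * F + y[i - 1] * (4 - 2 * a1 * dt - a0 * dt ** 2)
                - 4 * a0 * dt * x[i - 1]) / (4 + 2 * a1 * dt + a0 * dt ** 2)  # (35)
        x[i] = x[i - 1] + 0.5 * dt * (y[i - 1] + y[i])   # (34)
    return t, x

w, xi = 1.0, 0.5
t, x_num = fvm_solve(a0=w ** 2, a1=2 * w * xi, f=lambda s: np.exp(-s), T=10.0, dt=0.1)

wd = w * np.sqrt(1.0 - xi ** 2)
A = 1.0 / (1.0 - 2.0 * w * xi + w ** 2)
x_exact = A * (np.exp(-t) - np.exp(-w * xi * t) * np.cos(wd * t)
               + (1.0 - w * xi) / wd * np.exp(-w * xi * t) * np.sin(wd * t))
assert np.max(np.abs(x_num - x_exact)) < 5e-3
```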
The numerical solution can be validated against the exact solution in the case f(t)=e^{-t} (see (30)). With x0=x˙0=0, ω=1, ξ=0.5, T=10, and Δt=0.1, the comparison in Figure 1 shows that the numerical solution has sufficient accuracy, with a relative error of 0.03%. The overall convergence rate of the developed scheme was tested; it has a convergence order of 1.5, as shown in Figure 2.
Comparison between the exact and the numerical solutions at Δt=0.1.
The convergence order of the developed numerical solver.
5. Results
The following output is simulated using the previously developed numerical solver. The solution (34) of the model equation (31) is used to get all kernels with the proper right-hand side. The mean response and the response variance are then calculated from the kernels using (24).
Figures 3 and 4 show the first-order response mean and variance for different values of ε. The simulations are done for the case of zero initial conditions, zero deterministic excitation, and unit envelope function multiplied by the white noise. The angular frequency is ω=1 and the damping ratio is ξ=0.5. For the response mean, the 3rd and 4th corrections are coincident. For the variance, the 2nd and 3rd corrections are coincident and also the 4th and 5th corrections.
First-order response mean and variance for (a) ε=0.1, (b) ε=0.3, and (c) ε=0.5.
First-order response mean and variance for (a) ε=0.7 and (b) ε=1.0.
As shown in the figures, the nonlinearity strength ε greatly affects the amplitudes of the mean and variance. It should not be increased beyond a certain value if a convergent solution is to be obtained; this value depends on the different parameters of the problem. As the nonlinearity strength ε increases, a higher number of corrections is needed. For ε=0.1 and at t=10 seconds, the ratio between the response mean of each correction and the preceding one is around 0.98, which is greater than ε. This means that the condition of convergence is satisfied and hence the solution converges in this case. The convergence condition is also satisfied for ε=0.3 and ε=0.5. In the case ε=0.7, the ratio between the 4th and 5th corrections is 0.6 and between the 2nd and 3rd corrections is 0.58. Both ratios are less than 0.7, which means that the solution diverges for ε=0.7. Likewise, ε=1.0 produces a divergent solution.
Figures 5 and 6 show the first-order response mean and variance using the same parameters as in Figures 3 and 4, but with the envelope function g(t) taken as e^{-0.5t}. As is clear from the figures, the effect of the nonlinearity strength ε on the response variance is negligible. The multiplication by e^{-0.5t} attenuates the effect of the white noise, which becomes negligible as time increases. The variance vanishes with time and the solution becomes nearly deterministic. Also, the response variance is nearly invariant with the number of corrections.
The first-order response mean and variance for (a) ε=0.1, (b) ε=0.3, and (c) ε=0.5. Case of exponential function e-0.5t multiplied by the white noise.
The first-order response mean and variance for (a) ε=0.7 and (b) ε=1.0. Case of exponential function e-0.5t multiplied by the white noise.
Figures 7 and 8 show the second-order response mean and variance using the same parameters as in the first-order simulations, Figures 3 and 4. Figure 9 shows the second-order response mean and variance for a longer time interval, T=20 seconds. This was done to ensure that T=10 seconds is a sufficient interval for the response mean and variance to reach their steady-state values. For the second-order approximation, the response variance is computed practically as
(36) \mathrm{Var}[x(t)] = \int_{R}\left(\Big(\sum_{i=0}^{NC}\varepsilon^i x_i^{(1)}\Big)^2 + 2\int_{R}\Big(\sum_{i=0}^{NC}\varepsilon^i x_i^{(2)}\Big)^2 dt_2\right)dt_1.
Figures 10 and 11 show the third-order response mean and variance with the same parameters used in the first- and second-order simulations. For the third-order approximation, the response variance is computed practically, for time and memory savings, as
(37) \mathrm{Var}[x(t)] = \int_{R}\left(\Big(\sum_{i=0}^{NC}\varepsilon^i x_i^{(1)}\Big)^2 + 2\int_{R}\left(\Big(\sum_{i=0}^{NC}\varepsilon^i x_i^{(2)}\Big)^2 + 3\int_{R}\Big(\sum_{i=0}^{NC}\varepsilon^i x_i^{(3)}\Big)^2 dt_3\right)dt_2\right)dt_1.
Table 4 shows a comparison between the computing times for different levels of correction for the third-order approximation. The computing times for the response mean and variance are also shown, together with the order of the integrals at each level. The time step was 0.2 seconds, and the computations were done on an Intel Core i5, 2.4 GHz machine running 32-bit Windows 7. Note that around 90% of the computing time is consumed in level 4 (the 4th correction level). This is due to the higher orders of the integrals computed at this level. As the correction level increases, the order of the integrals increases and hence so does the computational time.
The computing time for each level and for the response mean and variance compared with the total time.
            Time (sec.)    Order of integrals
Level 0     0.041          0
Level 1     0.015          1
Level 2     0.29           1 & 2
Level 3     0.82           1 & 2 & 3
Level 4     19.55          1 & 2 & 3 & 4
E[x]        0.0029         0
Var[x]      0.16           1 & 2 & 3
Total time  20.88
The second-order response mean and variance for (a) ε=0.1, (b) ε=0.3, and (c) ε=0.5.
The second-order response mean and variance for (a) ε=0.7 and (b) ε=1.0.
The second-order response mean and variance for ε=0.5 and time interval of 20 seconds.
The third-order response mean and variance for (a) ε=0.1, (b) ε=0.3, and (c) ε=0.5.
The third-order response mean and variance for (a) ε=0.7 and (b) ε=1.0.
Figures 12 and 13 show a comparison, at ε=0.3, between the response mean and variance for different orders and different numbers of corrections. As described earlier, the solution is convergent at ε=0.3. As shown in the figures, the response variance is more sensitive to the approximation order than the response mean. The variance converges as the approximation order increases. This means that, for a convergent solution, higher-order approximations are required for accurate prediction of the stochastic response.
Comparison between the response mean and variance for the first, second, and third orders with number of corrections 1 and 2, ε=0.3.
1st correction (mean of 1st, 2nd, and 3rd orders is coincident) (variance of 2nd and 3rd orders is coincident)
2nd correction (mean of 1st, 2nd, and 3rd orders is coincident)
Comparison between the response mean and variance for the first, second, and third orders with numbers of corrections 3 and 4, ε=0.3.
3rd correction (mean of 2nd and 3rd orders is coincident)
4th correction (mean of 2nd and 3rd orders is coincident)
Figures 14 and 15 show a comparison, at ε=1.0, between the response mean and variance for different orders and different numbers of corrections. In this case, the solution is divergent. The response variance diverges as the approximation order increases and also as the correction level increases.
Comparison between the response mean and variance for the first, second, and third orders with numbers of corrections 1 and 2, ε=1.0.
1st correction (mean of 1st, 2nd, and 3rd orders is coincident) (variance of 2nd and 3rd orders is coincident)
2nd correction (mean of 1st, 2nd, and 3rd orders is coincident)
Comparison between the response mean and variance for the first, second, and third orders with numbers of corrections 3 and 4, ε=1.0.
3rd correction (mean of 2nd and 3rd orders is coincident)
4th correction (mean of 2nd and 3rd orders is coincident)
It is worth noting that the WHEP technique used in the current work can be extended to solve stochastic PDEs with white noise in multiple dimensions and of different colors, as described in [16].
6. Summary and Conclusions
The mean response of the quadratic nonlinear oscillatory system subjected to nonstationary random excitation was investigated using the WHEP technique. The equivalent deterministic equations were derived up to third order. The solution is approximated up to the fifth correction for the first order and up to the fourth correction for the second and third orders. A numerical integral solution of the equivalent deterministic set of equations was implemented using the FVM. The numerical treatment was validated by comparing the results with the analytical solution. The numerical solver was used to simulate the mean and variance of the nonlinear stochastic oscillatory motion with higher orders, higher numbers of corrections, and different strengths of the nonlinear term. The values of the nonlinearity strength required for a convergent solution were estimated. It was found that the numerical solution is efficient and that higher-order approximations are required for accurate prediction of the stochastic response.
References
[1] A. Jahedi and G. Ahmadi, "Application of Wiener-Hermite expansion to nonstationary random vibration of a Duffing oscillator," 1983, vol. 50, no. 2, pp. 436–442. doi:10.1115/1.3167056.
[2] J. C. Cortes, J. V. Romero, M. D. Rosello, and R. J. Villanueva, "Applying the Wiener-Hermite random technique to study the evolution of excess weight population in the region of Valencia (Spain)," 2012, vol. 2, no. 4, pp. 274–281. doi:10.4236/ajcm.2012.24037.
[3] W. Lue, California Institute of Technology, Pasadena, Calif, USA, 2006.
[4] M. A. El-Tawil, "The application of WHEP technique on stochastic partial differential equations," 2003, vol. 7, no. 3, pp. 325–337.
[5] M. A. El-Tawil and A. S. Al-Johani, "Approximate solution of a mixed nonlinear stochastic oscillator," 2009, vol. 58, no. 11-12, pp. 2236–2259. doi:10.1016/j.camwa.2009.03.057.
[6] M. A. El-Tawil and A. S. El-Johani, "On solutions of stochastic oscillatory quadratic nonlinear equations using different techniques, a comparison study," 2008, vol. 96, no. 1.
[7] M. A. El-Tawil and A. F. Fareed, "Solution of stochastic cubic and quintic nonlinear diffusion equation using WHEP, Pickard and HPM methods," 2011, vol. 1, pp. 6–21. doi:10.4236/ojdm.2011.11002.
[8] M. A. El-Tawil and A. El-Shikhipy, "Approximations for some statistical moments of the solution process of stochastic Navier-Stokes equation using WHEP technique," 2012, vol. 6, no. 3, pp. 1095–1100.
[9] M. A. El-Tawil and A. A. El-Shekhipy, "Statistical analysis of the stochastic solution processes of 1-D stochastic Navier-Stokes equation using WHEP technique," 2013, vol. 37, no. 8, pp. 5756–5773. doi:10.1016/j.apm.2012.08.015.
[10] A. S. El-Johani, "Comparisons between WHEP and homotopy perturbation techniques in solving stochastic cubic oscillatory problems," in Proceedings of the AIP Conference, vol. 1148, pp. 743–752, 2010. doi:10.1063/1.3225426.
[11] N. Wiener, MIT Press/John Wiley, Cambridge, Mass, USA, 1958, ix+131 pp.
[12] R. H. Cameron and W. T. Martin, "The orthogonal development of non-linear functionals in series of Fourier-Hermite functionals," 1947, vol. 48, pp. 385–392. doi:10.2307/1969178.
[13] T. Imamura, W. C. Meecham, and A. Siegel, "Symbolic calculus of the Wiener process and Wiener-Hermite functionals," 1965, vol. 6, pp. 695–706. doi:10.1063/1.1704327.
[14] W. C. Meecham and D. T. Jeng, "Use of the Wiener-Hermite expansion for nearly normal turbulence," 1968, vol. 32, pp. 225–235.
[15] E. F. Abdel-Gawad and M. A. El-Tawil, "General stochastic oscillatory systems," 1993, vol. 17, no. 6, pp. 329–335.
[16] M. A. El-Beltagy and M. A. El-Tawil, "Toward a solution of a class of non-linear stochastic perturbed PDEs using automated WHEP algorithm," 2013, vol. 37, no. 12-13, pp. 7174–7192.
[17] X. Yong, X. Wei, and G. Mahmoud, "On a complex duffing system with random excitation," 2008, vol. 35, no. 1, pp. 126–132. doi:10.1016/j.chaos.2006.07.016.
[18] P. Spanos, "Stochastic linearization in structural dynamics," 1980, vol. 34, pp. 1–8.
[19] W. Q. Zhu, "Recent developments and applications of the stochastic averaging method in random vibration," 1996, vol. 49, no. 10, pp. 72–80. doi:10.1115/1.3101980.
[20] J. Atkinson, "Eigenfunction expansions for randomly excited nonlinear systems," 1973, vol. 30, no. 2, pp. 153–172. doi:10.1016/S0022-460X(73)80110-5.
[21] A. Bezen and F. Klebaner, "Stationary solutions and stability of second order random differential equations," 1996, vol. 233, no. 3-4, pp. 809–823. doi:10.1016/S0378-4371(96)00205-1.
[22] A. H. Nayfeh, John Wiley & Sons, New York, NY, USA, 1993, xi+556 pp.
[23] MathML, http://www.w3.org/Math/.