The nonlinear Klein-Gordon equation (KGE) models many nonlinear phenomena. In this paper, we propose a scheme for the numerical approximation of solutions of the one-dimensional nonlinear KGE. A common approach to solving a nonlinear system is first to linearize the equations by successive substitution or Newton's method and then solve a linear least squares problem. Here, we show that it can be advantageous instead to form the sum of squared residuals of the nonlinear problem and find a zero of the gradient. Our scheme is based on the Sobolev gradient method and solves the nonlinear least squares problem directly. The numerical results are compared with the Lattice Boltzmann Method (LBM). The L2, L∞, and Root-Mean-Square (RMS) error values indicate that the proposed method attains better accuracy with less computational effort.
1. Introduction
Many nonlinear phenomena in solid state physics, plasma physics, electrostatics, chemical kinetics, and fluid dynamics can be modeled through partial differential equations (PDEs) [1]. One example of such a PDE is the nonlinear KGE which arises in relativistic quantum mechanics and field theory [2]. This equation has attracted the attention of many scientists and has been widely used to study various laws related to motions of elementary particles in condensed matter and particle physics [3]. The KGE models a wide class of scientific phenomena, including propagation of dislocations in crystals and the behavior of elementary particles. The equation also arises in the study of solitons and perturbation theory [4–6].
The initial value problem for the one-dimensional nonlinear KGE is given by

(1) u_tt + α u_xx + g(u) = f(x, t),

where u = u(x, t) is the wave displacement at position x and time t, α is a constant, and g(u) is the nonlinear force. The residual of the equation can be written as

(2) M(u) = u_tt + α u_xx + g(u) − f(x, t).

The nonlinear KGE also has a conserved Hamiltonian energy

(3) E(u) = ∫ [ (1/2) u_t² + (1/2) u_x² + G(u) ] dx,

where G′(u) = g(u).
In this paper we find the numerical approximation of the nonlinear KGE with quadratic nonlinearity

(4) u_tt + α u_xx + β u + γ u² = f(x, t),

and also with cubic nonlinearity

(5) u_tt + α u_xx + β u + γ u³ = f(x, t),

where α, β, and γ are known constants.
A number of different numerical schemes have been proposed for the solution of the KGE. As noted in [1], many methods, such as the inverse scattering method, the Bäcklund transformation, the auxiliary equation method [7, 8], the pseudospectral method, and Riccati equation expansion methods, have been used to solve equations of this type (see [1] and the references therein). Spectral and pseudospectral approaches have recently been presented in [9]. In [10], spectral methods using Legendre polynomials (SMLP) as basis functions are applied to the KGE. The solution of the nonlinear KGE using a θ-weighted finite difference discretization with radial basis functions (RBF) is discussed in [11]. Many difference schemes are given in [12]; for these schemes, undesirable behavior such as instability and loss of spatial symmetry was observed for a large range of parameter values in the initial conditions.
Another approach to solving a boundary value problem is to formulate it as the minimization of a least squares functional built from the residuals of the equation. The least squares functional acts as a measure of the error, and the boundary conditions are conveniently satisfied in this gradient descent approach.
One such technique is the Sobolev gradient method. Sobolev gradients have been used in minimization of Ginzburg-Landau energy functionals [13] related to phase separation and ordering in finite difference settings [14], electrostatic potential equations [15], nonlinear elliptic problems [16], inverse problems in elasticity [17], groundwater modeling [18], and simulation of Bose Einstein condensates [19].
The paper is organized as follows. In Section 2 the Sobolev gradient method is discussed and some existence results for the proposed method are presented. In Section 3 we apply the method to the nonlinear KGE. In Section 4 results of numerical experiments on several test problems are presented, together with comparisons against other standard methods such as spectral and finite difference methods. Section 5 contains the summary and conclusions.
All numerical experiments in this paper were carried out on an Intel(R) Core i3 1.70 GHz processor with 4 GB RAM. All codes were written, and all graphs produced, in MATLAB; both are available upon request.
2. Sobolev Gradient Method
In this section we briefly review Sobolev gradients and steepest descent. A comprehensive treatment of Sobolev gradients is given in [13], from which the theory below is borrowed.
We first state the Riesz representation theorem, which is used throughout this work.
Theorem 1.
Every bounded linear functional F on a Hilbert space H can be represented in terms of the inner product; namely,

(6) F(x) = ⟨x, z⟩,

where z depends on F, is uniquely determined, and has norm ‖z‖ = ‖F‖.
Let m be a positive integer and let F be a real-valued C¹ function on R^m. The gradient ∇F is defined by

(7) lim_{t→0} (1/t)[F(x + th) − F(x)] = F′(x)h = ⟨h, ∇F(x)⟩_{R^m},  x, h ∈ R^m.

For the same F but with an inner product ⟨·,·⟩_s different from the standard inner product ⟨·,·⟩_{R^m}, there is a function

(8) ∇_s F : R^m → R^m

such that

(9) F′(x)h = ⟨h, ∇_s F(x)⟩_s,  x, h ∈ R^m.

Since every linear functional on a finite-dimensional vector space is bounded, by Theorem 1 the linear functional F′(x) can be represented using any inner product on R^m. We say that ∇_s F is the gradient of F with respect to the inner product ⟨·,·⟩_s; it is worth noticing that ∇_s F(x) has the same set of properties as ∇F.
From linear algebra there is a linear transformation

(10) T : R^m → R^m

relating the two inner products by

(11) ⟨x, y⟩_s = ⟨x, Ty⟩_{R^m},  x, y ∈ R^m,

and some reflection leads to

(12) ∇_s F(x) = T^{−1} ∇F(x),  x ∈ R^m.

Each point x ∈ R^m may be assigned its own inner product ⟨·,·⟩_x on R^m. Therefore, for x ∈ R^m, define

(13) ∇_x F : R^m → R^m

such that

(14) F′(x)h = ⟨h, ∇_x F(x)⟩_x,  x, h ∈ R^m.

We thus have a family of gradients whose numerical properties can differ tremendously depending on the choice of metric.
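The relation (12) is easy to see in finite dimensions. Since the paper's computations are in MATLAB, the following is only an illustrative NumPy sketch (all names and the toy functional are ours): for an SPD matrix T defining ⟨x, y⟩_s = ⟨x, Ty⟩, the gradient with respect to ⟨·,·⟩_s is obtained by solving T ∇_s F = ∇F.

```python
import numpy as np

# Toy functional F(x) = 0.5 x'Ax - b'x, whose Euclidean gradient is Ax - b.
rng = np.random.default_rng(0)
m = 5
A = np.diag(rng.uniform(1.0, 2.0, m))   # Hessian of the toy functional
b = rng.standard_normal(m)
x = rng.standard_normal(m)

g = A @ x - b                           # Euclidean gradient of F at x

M_ = rng.standard_normal((m, m))
T = M_ @ M_.T + m * np.eye(m)           # SPD matrix defining <x, y>_s = <x, T y>
g_s = np.linalg.solve(T, g)             # gradient with respect to <.,.>_s, cf. (12)

# Both gradients represent the same directional derivative F'(x)h in their
# respective inner products: <h, g> equals <h, g_s>_s = <h, T g_s>.
h = rng.standard_normal(m)
print(h @ g, h @ (T @ g_s))             # equal up to roundoff
```

Note that g and g_s generally point in different directions; it is exactly this freedom that the Sobolev gradient method exploits.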
The gradient of a function F defined in a finite or infinite dimensional Sobolev space is known as a Sobolev gradient.
The method of steepest descent can be classified into two categories: continuous steepest descent and discrete steepest descent.
Consider an inner product ⟨·,·⟩_s, a real-valued C¹ function F, and its gradient ∇_s F defined on a Hilbert space H.
Discrete steepest descent generates a sequence of points x_m such that

(15) x_m = x_{m−1} − δ_{m−1} ∇_s F(x_{m−1}),  m = 1, 2, …,

where the initial guess x_0 is given and δ_{m−1} is selected so that it minimizes, if possible,

(16) F(x_{m−1} − δ ∇_s F(x_{m−1})),  δ ∈ R.

Continuous steepest descent is the process of finding a function z : [0, ∞) → H such that

(17) dz/dt = −∇F(z(t)),  z(0) = z_initial.

Continuous steepest descent is regarded as a limiting case of discrete steepest descent, and (15) can be used as a numerical technique to extract solutions of (17).
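The iteration (15)–(16) can be sketched for a convex quadratic F(x) = (1/2)xᵀAx − bᵀx, where the minimizing step in (16) has the closed form δ = ⟨g, g⟩/⟨g, Ag⟩. This is our own minimal NumPy illustration, not the paper's MATLAB code:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10_000):
    """Discrete steepest descent (15) for F(x) = 0.5 x'Ax - b'x, A SPD."""
    x = x0.copy()
    for _ in range(max_iter):
        g = A @ x - b                     # gradient of F at the current iterate
        if np.linalg.norm(g) < tol:
            break
        delta = (g @ g) / (g @ (A @ g))   # exact minimizer of F(x - delta*g), cf. (16)
        x -= delta * g
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])    # SPD test matrix
b = np.array([1.0, -1.0])
x = steepest_descent(A, b, np.zeros(2))   # converges to the solution of Ax = b
```

The minimizer of F satisfies ∇F(x) = Ax − b = 0, so the iteration converges to the solution of the linear system.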
To prove the convergence of discrete steepest descent, we extract a theoretical initial point by using continuous steepest descent.
By (15) we desire u = lim_{k→∞} x_k such that

(18) F(u) = 0 or ∇_s F(u) = 0.

By (17), we desire u = lim_{t→∞} z(t) such that (18) is satisfied.
The Sobolev gradient technique is a process in which we discretize a gradient in a Sobolev space rather than in a Euclidean space. For the solution of a PDE, we represent the PDE by an error functional F in a least squares formulation. We seek a u in the domain of the PDE such that the residual M(u) is zero, while F and its gradient are given by

(19) F(u) = (1/2)‖M(u)‖²,  ∇F(u) = M′(u)ᵗ M(u).

The following theorem gives the existence and convergence of several linear and nonlinear forms of F.
Theorem 2.
Let F be a nonnegative C¹ function (a differentiable function whose derivative is continuous) on a Hilbert space H which has a locally Lipschitzian gradient. Then, for each x ∈ H, there exists a unique function z : [0, ∞) → H with

(20) z(0) = x,  z′(t) = −∇_s F(z(t)),  t ≥ 0.
Another observation which is useful in our work is as follows.
Theorem 3.
Suppose all the conditions in the hypothesis of Theorem 2 are satisfied and

(21) u = lim_{t→∞} z(t)

exists; then

(22) ∇_s F(u) = 0.
3. The Nonlinear Klein-Gordon Equation
Consider the problem

(23) ∂²u/∂t² + α ∂²u/∂x² + β u + γ u^k = f(x, t),  x ∈ Ω = (a, b), 0 < t ≤ T,

with initial conditions

(24) u(x, 0) = h₁(x),  u_t(x, 0) = h₂(x),  x ∈ Ω,

and Dirichlet boundary conditions

(25) u(x, t) = g(x, t),  x ∈ ∂Ω, 0 < t ≤ T,

where α, β, and γ are constants; f, h₁, h₂, and g are known functions; and u is unknown. For k = 2, 3 we have quadratic and cubic nonlinearity, respectively.
To solve the problem numerically, we work in a finite-dimensional vector space R^N on a uniform grid. We denote by L2 the vector space R^N equipped with the usual inner product ⟨x, y⟩ = Σ_i x(i) y(i). The operators D00, D11, D22 : R^N → R^{N−2} are defined by

(26) (D00 u)_i = (u_{i+2} + 2u_{i+1} + u_i)/4,
     (D11 u)_i = (u_{i+2} − u_i)/(2δx),
     (D22 u)_i = (u_{i+2} − 2u_{i+1} + u_i)/δx²,

for i = 1, 2, …, N−2, where δx = (b − a)/(N − 1) is the spacing between the nodes. D00 is the average operator; D11 and D22 are the standard central difference formulas approximating the first and second derivatives. The particular choice of difference formulas is not important to the theoretical development of this paper; other choices would also work.
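The operators in (26) can be realized as matrices acting on grid vectors. The following is a NumPy sketch of our own (the paper's code is in MATLAB; the dense assembly is an illustrative choice), checked on u(x) = x², for which the central differences are exact:

```python
import numpy as np

def difference_operators(N, a=0.0, b=1.0):
    """Build D00, D11, D22 from (26) as (N-2) x N matrices on a uniform grid."""
    dx = (b - a) / (N - 1)
    D00 = np.zeros((N - 2, N))
    D11 = np.zeros((N - 2, N))
    D22 = np.zeros((N - 2, N))
    for i in range(N - 2):
        D00[i, i:i+3] = [0.25, 0.5, 0.25]                   # average about node i+1
        D11[i, i], D11[i, i+2] = -1/(2*dx), 1/(2*dx)        # first derivative
        D22[i, i:i+3] = np.array([1.0, -2.0, 1.0]) / dx**2  # second derivative
    return D00, D11, D22, dx

# Check on u(x) = x^2: at the interior nodes, D11 u = 2x and D22 u = 2 exactly.
N = 11
D00, D11, D22, dx = difference_operators(N)
x = np.linspace(0.0, 1.0, N)
u = x**2
```

Each row of the operators references the stencil {i, i+1, i+2}, so the output naturally lives at the N − 2 interior nodes.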
The numerical version of the problem of evolving from a time t to t + δt is to solve

(27) D00[(1 + βδt²)u − 2f₁ + f₂ + γδt² u^k] + αδt² D22 u = 0,

where f₁ and f₂ are the values of u at the two previous time levels and u is the unknown value at the next time level. We can recast this problem as the minimization of a functional via steepest descent. Define M ∈ R^{N−2} by

(28) M(u) = D00[(1 + βδt²)u − 2f₁ + f₂ + γδt² u^k] + αδt² D22 u,

so that M represents the residual of the equation; when it approaches zero we have the desired u. The least squares functional associated with the residual is

(29) F(u) = (1/2)⟨M(u), M(u)⟩.

F has a minimum of nearly zero when M(u) approaches zero, so we look for the minimum of this functional.
3.1. Gradients and Minimization
The gradient ∇F(u) ∈ R^N of a functional F(u) in L2 is found by solving

(30) F(u + h) = F(u) + ⟨∇F(u), h⟩ + O(‖h‖²)

for test functions h. The gradient points in the direction of greatest increase of the functional, and −∇F(u) points in the direction of greatest decrease. This is the cornerstone of steepest descent algorithms.
We replace u by u − λ∇F(u) to reduce F(u), where λ is a positive step size, and repeat this process until F(u) or ∇F(u) is reduced by a fair margin. To satisfy the boundary conditions in the finite-dimensional analogue of the original problem we use a projection ϕ : R^N → R^N which maps vectors in R^N onto the subspace of vectors whose first and last entries are zero, and we use ϕ∇F(u) in place of ∇F(u). In this particular case,

(31) ϕ∇F(u) = ϕ[(1 + βδt² + kγδt² u^{k−1}) D00ᵗ M(u) + αδt² D22ᵗ M(u)]

gives the desired gradient in L2.
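The residual (28) and the L2 gradient (31) (before applying the projection ϕ) can be sanity-checked against a finite-difference gradient of F(u) = (1/2)⟨M(u), M(u)⟩. All names, parameter values, and the dense operator assembly below are our own illustrative choices, not the paper's MATLAB code:

```python
import numpy as np

def make_ops(N, dx):
    """D00 and D22 from (26) as dense (N-2) x N matrices."""
    D00 = np.zeros((N - 2, N)); D22 = np.zeros((N - 2, N))
    for i in range(N - 2):
        D00[i, i:i+3] = [0.25, 0.5, 0.25]
        D22[i, i:i+3] = np.array([1.0, -2.0, 1.0]) / dx**2
    return D00, D22

def M_res(u, f1, f2, D00, D22, dt, alpha=-1.0, beta=0.0, gamma=1.0, k=3):
    """Residual (28) for one time step."""
    return D00 @ ((1 + beta*dt**2)*u - 2*f1 + f2 + gamma*dt**2*u**k) \
           + alpha*dt**2*(D22 @ u)

def grad_L2(u, f1, f2, D00, D22, dt, alpha=-1.0, beta=0.0, gamma=1.0, k=3):
    """Unprojected L2 gradient, cf. (31); phi is applied afterwards."""
    M = M_res(u, f1, f2, D00, D22, dt, alpha, beta, gamma, k)
    return (1 + beta*dt**2 + k*gamma*dt**2*u**(k-1)) * (D00.T @ M) \
           + alpha*dt**2*(D22.T @ M)

N, dx, dt = 12, 1.0/11, 0.1
rng = np.random.default_rng(2)
D00, D22 = make_ops(N, dx)
u, f1, f2 = rng.standard_normal((3, N))

F = lambda v: 0.5 * np.dot(M_res(v, f1, f2, D00, D22, dt),
                           M_res(v, f1, f2, D00, D22, dt))
eps = 1e-6
num = np.array([(F(u + eps*e) - F(u - eps*e)) / (2*eps) for e in np.eye(N)])
ana = grad_L2(u, f1, f2, D00, D22, dt)   # should agree with num entrywise
```

The agreement of the analytic and finite-difference gradients confirms that (31) is the correct chain-rule gradient of (29) for the residual (28).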
For a nonlinear problem like the KGE, the L2 (Euclidean) gradient is not a good choice. Standard steepest descent using the L2 gradient becomes inefficient as the grid spacing is refined and as the size of the problem grows, since it suffers from the CFL condition: the step size cannot be taken larger than some computable quantity that depends on the discretization.
Rather than abandoning steepest descent, we reconsider the gradient in a different Sobolev space. We consider the Sobolev space H22, which is R^N with the inner product

(32) ⟨x, y⟩_s = ⟨D00 x, D00 y⟩ + ⟨D11 x, D11 y⟩ + ⟨D22 x, D22 y⟩.

It can be shown that the space defined by (32) is complete. The desired Sobolev gradient ∇_s F(u) in H22 is found by solving

(33) ϕ(D00ᵗ D00 + D11ᵗ D11 + D22ᵗ D22) ϕ ∇_s F(u) = ϕ∇F(u).
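Solving the projected system (33) can be sketched as follows. The dense assembly and the use of a least squares solve for the projected matrix (which is singular in the two boundary coordinates) are our assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def ops(N, dx):
    """D00, D11, D22 from (26) as dense (N-2) x N matrices."""
    D00 = np.zeros((N - 2, N)); D11 = np.zeros((N - 2, N)); D22 = np.zeros((N - 2, N))
    for i in range(N - 2):
        D00[i, i:i+3] = [0.25, 0.5, 0.25]
        D11[i, i], D11[i, i+2] = -1/(2*dx), 1/(2*dx)
        D22[i, i:i+3] = np.array([1.0, -2.0, 1.0]) / dx**2
    return D00, D11, D22

def sobolev_gradient(grad, N, dx):
    """Solve (33): phi (D00'D00 + D11'D11 + D22'D22) phi g_s = phi grad."""
    D00, D11, D22 = ops(N, dx)
    phi = np.eye(N); phi[0, 0] = phi[-1, -1] = 0.0    # boundary projection
    A = D00.T @ D00 + D11.T @ D11 + D22.T @ D22       # Gram matrix of (32)
    # phi A phi is singular on the boundary entries, so use a least squares
    # solve; the system is consistent because the rhs also has zero boundary.
    g_s, *_ = np.linalg.lstsq(phi @ A @ phi, phi @ grad, rcond=None)
    return phi @ g_s

N, dx = 20, 1.0 / 19
g = np.random.default_rng(1).standard_normal(N)       # stand-in for phi grad F
g_s = sobolev_gradient(g, N, dx)
```

In practice the matrix in (33) is fixed for a given grid, so a factorization can be computed once and reused at every descent step.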
4. Test Problems and Numerical Results
Numerical experiments for the solution of the nonlinear KGE were performed as follows. A system of N nodes was set up with internodal spacing δx, and the function u was evolved in time. The programs were terminated when the infinity norm of the residual M(u) fell below 10⁻⁶.
Example 4.
In this example we consider the nonlinear KGE with cubic nonlinearity

(34) ∂²u/∂t² + α ∂²u/∂x² + βu + γu³ = f(x, t),  x ∈ Ω = (0, 1), 0 < t ≤ T,

with constants α = −1, β = 0, and γ = 1 and the initial conditions

(35) u(x, 0) = B tan(Kx),  0 ≤ x ≤ 1,
     u_t(x, 0) = BcK sec²(Kx),  0 ≤ x ≤ 1.

The analytical solution to this problem is

(36) u(x, t) = B tan(K(x + ct)),

where B = √(β/γ) and K = √(−β/(2(α + c²))).
Example 4 is solved using both the Euclidean and the Sobolev gradient. We set δt = 0.2 for the time increment. Results for five time steps are summarized in Table 1. Here λ is the maximum step size for which convergence was obtained with the L2 and H22 gradients, "Iterations" denotes the total number of minimization steps to reach convergence, and "CPUs" denotes the time in seconds taken to reach convergence.
Table 1: Comparison of solution for Example 4 using Euclidean and Sobolev gradients with δt = 0.2, t = 1, and c = 0.05.

       |          L2 gradient            |       H22 gradient
  N    |  λ         Iterations   CPUs   |  λ    Iterations   CPUs
  10   |  1.0×10⁻⁴      36015   0.1228  |  1.0       6469   1.4608
  20   |  1.0×10⁻⁵     228336   6.6846  |  1.0       8060   4.7661
  50   |  1.0×10⁻⁶    2181749  99.6085  |  1.0       9985  26.0143
  70   |  —                 —        —  |  1.0      10671  63.8609
From Table 1, it can be seen that as the number of nodes increases, the L2 gradient becomes increasingly inefficient, whereas the Sobolev gradient does not.
The graphs of analytical solution and numerical solution obtained by the Sobolev gradient method for t=4 are shown in Figures 1 and 2. Figures 3 and 4 show the surface plot of the obtained solution.
Numerical and exact solution of Example 4 at t=4 with δt=0.1, N=20, and c=0.5.
Space-time graph of Example 4 up to t=4 with δt=0.1, N=20, and c=0.5.
Numerical and exact solution of Example 4 at t=4 with δt=0.5, N=30, and c=0.05.
Space-time graph of Example 4 up to t=4 with δt=0.5, N=30, and c=0.05.
In Tables 2 and 3, a comparison is shown between the Sobolev gradient method and LBM [20]. The L2,L∞, and Root-Mean-Square (RMS) errors are obtained for t=1,2,3,4.
Table 2: Comparison of L∞, L2, and RMS errors for Example 4 at c = 0.5 between the Sobolev gradient method and the LBM [20].

    |              Sobolev descent                        |                       LBM
  t | Nodes  δt   L∞           L2           RMS           | Nodes  δt        L∞           L2           RMS
  1 |  20   0.1   3.1518×10⁻⁴  9.3185×10⁻⁴  2.0838×10⁻⁴   |  100   5.0×10⁻⁵  1.4189×10⁻⁴  6.6508×10⁻⁴  6.6171×10⁻⁵
  2 |  10   0.2   2.4966×10⁻⁴  5.4618×10⁻⁴  1.7272×10⁻⁴   |  100   5.0×10⁻⁵  4.6601×10⁻⁴  1.5438×10⁻³  1.5362×10⁻⁴
  3 |  10   0.2   8.5185×10⁻⁴  1.8000×10⁻³  5.6788×10⁻⁴   |  100   5.0×10⁻⁵  1.9445×10⁻³  4.9588×10⁻³  4.9342×10⁻⁴
  4 |  20   0.1   3.400×10⁻³   1.0300×10⁻²  2.3000×10⁻³   |  100   5.0×10⁻⁵  2.8219×10⁻²  7.1870×10⁻²  7.1513×10⁻³
Table 3: Comparison of L∞, L2, and RMS errors for Example 4 at c = 0.05 between the Sobolev gradient method and the LBM [20].

    |              Sobolev descent                        |                       LBM
  t | Nodes  δt   L∞           L2           RMS           | Nodes  δt        L∞           L2           RMS
  1 |  20   0.1   5.7169×10⁻⁶  1.3781×10⁻⁵  3.5582×10⁻⁶   |  100   5.0×10⁻⁵  5.6970×10⁻⁵  2.9718×10⁻⁴  2.9570×10⁻⁵
  2 |  15   0.5   3.5225×10⁻⁶  8.3370×10⁻⁶  2.1526×10⁻⁶   |  100   5.0×10⁻⁵  7.4878×10⁻⁵  3.8699×10⁻⁴  3.8507×10⁻⁵
  3 |  15   0.5   6.2838×10⁻⁶  1.3009×10⁻⁵  3.3589×10⁻⁶   |  100   5.0×10⁻⁵  1.1972×10⁻⁴  5.2203×10⁻⁴  5.1944×10⁻⁵
  4 |  30   0.5   1.2084×10⁻⁶  3.4327×10⁻⁵  6.2672×10⁻⁷   |  100   5.0×10⁻⁵  1.4008×10⁻³  4.4143×10⁻⁴  4.3924×10⁻⁵
The results in Tables 2 and 3 show that the proposed method is more accurate than the LBM.
In Table 4 the RMS errors obtained by the Sobolev gradient method are compared with those of the RBF and SMLP methods given in [11, 21]. The values of the energy E(u) obtained by the proposed method at different time levels are also given.
Table 4: Comparison of RMS errors for Example 4 between Sobolev descent, SMLP [21], and RBF [11].

    |        |                     RMS
  t |  E(u)  |  SMLP         RBF          Sobolev descent
  1 | 0.0585 |  5.5509×10⁻⁸  1.7772×10⁻⁷  3.5582×10⁻⁶
  2 | 0.0686 |  1.1903×10⁻⁷  1.5306×10⁻⁷  2.1526×10⁻⁶
  3 | 0.0795 |  1.3990×10⁻⁷  1.7190×10⁻⁷  3.3589×10⁻⁶
  4 | 0.0815 |  1.0784×10⁻⁷  1.9997×10⁻⁷  6.2672×10⁻⁷
Table 4 shows that the accuracy obtained by the proposed method is comparable with that of the other numerical schemes; the computed energy E(u) at the different time levels is also reported.
Example 5.
Consider the nonlinear KGE with quadratic nonlinearity

(37) ∂²u/∂t² + α ∂²u/∂x² + βu + γu² = f(x, t),  x ∈ Ω = (0, 1), 0 < t ≤ T,

with constants α = −1, β = 0, and γ = 1 and initial conditions

(38) u(x, 0) = 0,  u_t(x, 0) = 0,  0 ≤ x ≤ 1.

The right-hand side function is

(39) f(x, t) = 6xt(x² − t²) + x⁶t⁶

and the analytical solution to this problem is

(40) u(x, t) = x³t³.

The boundary function g(x, t) is obtained from the exact solution.
The graph of the approximated and exact solutions at t = 5 is shown in Figure 5, and the space-time graph obtained with the Sobolev gradient method is shown in Figure 6.
Numerical solution of Example 5 at t=5 with δt=0.1, N=10, and c=0.05.
Space-time graph of Example 5 up to t=5 with δt=0.1, N=10, and c=0.05.
Table 5 shows the comparison between the Sobolev gradient method and the LBM [20]. The L2, L∞, and Root-Mean-Square (RMS) errors are calculated for t = 1, 3, 5, 7.
Table 5: Comparison of L∞, L2, and RMS errors for Example 5 at c = 0.05 between the Sobolev gradient method and the LBM [20].

    |              Sobolev descent                        |                       LBM
  t | Nodes  δt   L∞           L2           RMS           | Nodes  δt        L∞           L2           RMS
  1 |  10   0.1   2.4302×10⁻²  5.3120×10⁻²  1.6890×10⁻²   |  100   5.0×10⁻⁵  5.8742×10⁻⁴  1.9270×10⁻³  1.9174×10⁻⁴
  3 |  10   0.1   8.6234×10⁻³  1.9334×10⁻²  6.1398×10⁻³   |  100   5.0×10⁻⁵  1.5139×10⁻²  4.9465×10⁻²  4.9219×10⁻³
  5 |  10   0.1   4.3457×10⁻³  5.0021×10⁻³  1.6089×10⁻³   |  100   5.0×10⁻⁵  6.3219×10⁻²  9.3035×10⁻²  1.2970×10⁻²
  7 |  10   0.1   1.1123×10⁻³  2.1457×10⁻³  6.7938×10⁻⁴   |   —        —         —            —            —
Once again, for larger values of t, the results show better accuracy of the proposed method than of the LBM.
5. Summary and Conclusions
In this paper, we proposed a numerical scheme to solve the nonlinear KGE using Sobolev gradients. Table 1 shows that standard steepest descent using the L2 gradient compels us to choose a very small step size λ, leading to a huge number of iterations and sometimes a failure to reach convergence.
Steepest descent defined in an appropriate Sobolev space, on the other hand, uses a very large λ and fewer iterations. So the Sobolev gradient technique is preferable to the usual steepest descent technique as the spacing of the grid is made finer.
Sobolev gradient techniques may offer further benefits in certain cases; for example, the choice of initial guess does not affect the convergence of the method. The results of two examples were compared with the LBM in [20]. Our scheme uses a much larger time step δt than the LBM and gives better numerical results in terms of accuracy; in both examples, the results show that for larger values of the time t the Sobolev gradient approach is better than the LBM. An optimal choice of the metric defining the Sobolev space could further improve the numerical results, but how to choose such a metric remains an important question to be investigated.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The first author is thankful to University of Punjab for providing research grant via D/605/Est. 1 dated 24-02-2015.
References

[1] A.-M. Wazwaz, "New travelling wave solutions to the Boussinesq and the Klein-Gordon equations," Communications in Nonlinear Science and Numerical Simulation, vol. 13, no. 5, pp. 889–901, 2008.
[2] W. Greiner, Relativistic Quantum Mechanics: Wave Equations, 3rd edition, Springer, Berlin, Germany, 2000.
[3] P. J. Caudrey, J. C. Eilbeck, and J. D. Gibbon, "The sine-Gordon equation as a model classical field theory," vol. 25, no. 2, pp. 497–512, 1975.
[4] A. Biswas, A. Yildirim, T. Hayat, O. M. Aldossary, and R. Sassaman, "Soliton perturbation theory for the generalized Klein-Gordon equation with full nonlinearity," vol. 13, no. 1, pp. 32–41, 2012.
[5] A. Biswas, C. Zony, and E. Zerrad, "Soliton perturbation theory for the quadratic nonlinear Klein-Gordon equation," Applied Mathematics and Computation, vol. 203, no. 1, pp. 153–156, 2008.
[6] A. Biswas, G. Ebadi, M. Fessak, A. G. Johnpillai, S. Johnson, E. V. Krishnan, and A. Yildirim, "Solutions of the perturbed Klein-Gordon equations," vol. 36, no. 4, pp. 431–452, 2012.
[7] Sirendaoreji, "Auxiliary equation method and new solutions of Klein-Gordon equations," Chaos, Solitons & Fractals, vol. 31, no. 4, pp. 943–950, 2007.
[8] Sirendaoreji, "A new auxiliary equation and exact travelling wave solutions of nonlinear equations," Physics Letters A, vol. 356, no. 2, pp. 124–130, 2006.
[9] X. Li and B. Y. Guo, "A Legendre spectral method for solving nonlinear Klein-Gordon equation," vol. 15, no. 2, pp. 105–126, 1997.
[10] B. Y. Guo, X. Li, and L. Vazquez, "A Legendre spectral method for solving the nonlinear Klein-Gordon equation," vol. 15, no. 1, pp. 19–36, 1996.
[11] M. Dehghan and A. Shokri, "Numerical solution of the nonlinear Klein-Gordon equation using radial basis functions," Journal of Computational and Applied Mathematics, vol. 230, no. 2, pp. 400–410, 2009.
[12] M. A. M. Lynch, "Large amplitude instability in finite difference approximations to the Klein-Gordon equation," Applied Numerical Mathematics, vol. 31, no. 2, pp. 173–182, 1999.
[13] J. W. Neuberger, Sobolev Gradients and Differential Equations, vol. 1670 of Lecture Notes in Mathematics, Springer, New York, NY, USA, 1997.
[14] S. Sial, J. Neuberger, T. Lookman, and A. Saxena, "Energy minimization using Sobolev gradients: application to phase separation and ordering," Journal of Computational Physics, vol. 189, no. 1, pp. 88–97, 2003.
[15] J. Karatson and L. Loczi, "Sobolev gradient preconditioning for the electrostatic potential equation," Computers & Mathematics with Applications, vol. 50, no. 7, pp. 1093–1104, 2005.
[16] J. Karatson and I. Farago, "Preconditioning operators and Sobolev gradients for nonlinear elliptic problems," Computers & Mathematics with Applications, vol. 50, no. 7, pp. 1077–1092, 2005.
[17] B. M. Brown, M. Jais, and I. W. Knowles, "A variational approach to an elastic inverse problem," Inverse Problems, vol. 21, no. 6, pp. 1953–1973, 2005.
[18] I. Knowles and A. Yan, "On the recovery of transport parameters in ground-water modelling," Journal of Computational and Applied Mathematics, vol. 171, no. 1-2, pp. 277–290, 2004.
[19] J. Garcia-Ripoll, V. Konotop, B. Malomed, and V. Perez-Garcia, "A quasi-local Gross-Pitaevskii equation for attractive Bose-Einstein condensates," Mathematics and Computers in Simulation, vol. 62, no. 1-2, pp. 21–30, 2003.
[20] Q. Li, Z. Ji, Z. Zheng, and H. Liu, "Numerical solution of nonlinear Klein-Gordon equation using lattice Boltzmann method," Applied Mathematics, vol. 2, no. 12, pp. 1479–1485, 2011.
[21] F. Yin, T. Tian, J. Song, and M. Zhu, "Spectral methods using Legendre wavelets for nonlinear Klein-Gordon/Sine-Gordon equations," Journal of Computational and Applied Mathematics, vol. 275, pp. 321–334, 2015.