Explicit Solution of Telegraph Equation Based on Reproducing Kernel Method

We propose a reproducing kernel method for solving the telegraph equation with initial conditions, based on reproducing kernel theory. The exact solution is represented in the form of a series, and several numerical examples are studied to demonstrate the validity and applicability of the technique. The method is easy to implement and produces accurate results.


Introduction
In this paper, we consider the telegraph equation

∂²u/∂t² + 2α ∂u/∂t + β²u = ∂²u/∂x² + f(x, t),  (1.1)

over the region Ω = {(x, t) : 0 < x < 1, 0 < t < T}, where α and β are known constant coefficients, subject to the initial conditions

u(x, 0) = φ(x),  ∂u(x, 0)/∂t = ψ(x),  (1.2)

where u(x, t) can be the voltage or the current through the wire at position x and time t. In (1.1), the constants are determined by G, the conductance of the resistor, R, its resistance, L, the inductance of the coil, and C, the capacitance of the capacitor; u(x, t) is a function of the distance x and the time t, the constants depend on the given problem, and f, φ, and ψ are known continuous functions.
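For completeness, the relation between the telegraph equation and the line parameters G, R, L, C can be recovered from the standard telegrapher's equations; the normalization LC = 1 below is an assumption made here to match the constant-coefficient form above, not a statement from this paper.

```latex
% Telegrapher's equations for the voltage V(x,t) and current I(x,t):
\frac{\partial V}{\partial x} = -L\,\frac{\partial I}{\partial t} - R\,I,
\qquad
\frac{\partial I}{\partial x} = -C\,\frac{\partial V}{\partial t} - G\,V.
% Differentiating the first equation in x and eliminating I via the second:
\frac{\partial^2 V}{\partial x^2}
  = LC\,\frac{\partial^2 V}{\partial t^2}
  + (RC + GL)\,\frac{\partial V}{\partial t}
  + RG\,V.
% With the normalization LC = 1, this takes the form of (1.1), with
% 2\alpha = \frac{R}{L} + \frac{G}{C}
% \qquad\text{and}\qquad
% \beta^2 = \frac{RG}{LC}.
```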
Hyperbolic partial differential equations model the vibrations of structures (e.g., buildings, beams, and machines) and are the basis for fundamental equations of atomic physics. Equation (1.1), referred to as the second-order telegraph equation with constant coefficients, models a mixture of diffusion and wave propagation by introducing a term that accounts for finite-velocity effects into the standard heat or mass transport equation [1]. Moreover, (1.1) is commonly used in signal analysis for the transmission and propagation of electrical signals [2, 3].
In recent years, much attention has been given in the literature to the development, analysis, and implementation of stable methods for the numerical solution of second-order hyperbolic equations; see, for example, [4-11]. These methods are conditionally stable. In [12], Mohanty introduced a new technique to solve (1.1) that is unconditionally stable and of second-order accuracy in both the time and space components. Mohebbi and Dehghan [13] presented a high-order accurate method for solving one-space-dimensional linear hyperbolic equations and proved its high-order accuracy, due to the fourth-order discretization of the spatial derivative, and its unconditional stability. A compact finite difference approximation was presented in [14], using a fourth-order discretization of the spatial derivatives of the linear hyperbolic equation and a collocation method for the time component. Another approach approximates the solution by a polynomial at each grid point, with coefficients determined by solving a linear system of equations [15]. A method using collocation points and approximating the solution by thin plate spline radial basis functions was presented in [16].
In [17], the author used Chebyshev cardinal functions. Lakestani and Saray [18] used interpolating scaling functions. Ding et al. [19] constructed a class of new difference schemes based on a new nonpolynomial spline method to solve (1.1) and (1.2). Lakoud and Belakroum [20] studied the existence and uniqueness of the solution with an integral condition by using the Rothe time-discretization method. Dehghan et al. [21] used the variational iteration method to compute the solution of the linear, variable-coefficient, fractional-derivative, and multispace telegraph equations. Further, Biazar et al. [22] obtained an approximate solution by using the variational iteration method. Recently, Yao and Lin [23] investigated a nonlinear hyperbolic telegraph equation with an integral condition in a reproducing kernel space, for α = β = 0 in (1.1). In [24], Yousefi presented a numerical method using the Legendre multiwavelet Galerkin method.
In this paper, the RKHSM [25-47] will be used to investigate the telegraph equation (1.1). Several studies have been devoted to the application of the RKHSM to a wide class of stochastic and deterministic problems involving fractional differential equations, nonlinear oscillators with discontinuity, singular nonlinear two-point periodic boundary value problems, integral equations, and nonlinear partial differential equations [27-41]. The method is well suited to physical problems.
The efficiency of the method has led many authors to apply it to several scientific problems. Geng and Cui [27] applied the RKHSM to second-order boundary value problems. For more details about the RKHSM, its modified forms, and their effectiveness, see [25-47]. In the present work, we homogenize the initial conditions (1.2) by the transformation v(x, t) = u(x, t) − φ(x) − tψ(x), which turns (1.1) into an equation of the same form, (1.5), for v with homogeneous initial data. The paper is organized as follows. Section 2 is devoted to several reproducing kernel spaces, and a linear operator is introduced. The solution representation in W(Ω) is presented in Section 3, where we prove that the approximate solution converges uniformly to the exact solution. Some numerical examples are illustrated in Section 4. We provide some conclusions in the last section.
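The effect of the homogenizing substitution v = u − φ − tψ can be checked symbolically. The sketch below uses the data of Example 4.2 (exact solution u = e^{−πt} sin πx); only the form of the substitution is taken from the paper.

```python
import sympy as sp

x, t = sp.symbols('x t')

# Example 4.2 data: exact solution and the initial data it induces
u = sp.exp(-sp.pi * t) * sp.sin(sp.pi * x)
phi = u.subs(t, 0)                     # u(x, 0)   = sin(pi x)
psi = sp.diff(u, t).subs(t, 0)         # u_t(x, 0) = -pi sin(pi x)

# homogenizing transformation v = u - phi - t*psi
v = u - phi - t * psi

# both initial conditions of v vanish identically
print(sp.simplify(v.subs(t, 0)))                 # 0
print(sp.simplify(sp.diff(v, t).subs(t, 0)))     # 0
```

Note that v here equals u − sin(πx) + tπ sin(πx), which matches the transformation stated in Example 4.2.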

Preliminaries
Hilbert spaces can be completely classified: there is a unique Hilbert space, up to isomorphism, for every cardinality of an orthonormal basis. Since finite-dimensional Hilbert spaces are fully understood in linear algebra, and since morphisms of Hilbert spaces can always be reduced to morphisms of spaces of dimension ℵ₀ (aleph-null), functional analysis of Hilbert spaces mostly deals with the unique Hilbert space of dimension ℵ₀ and its morphisms. One of the open problems in functional analysis is to prove that every bounded linear operator on a Hilbert space has a proper invariant subspace. Many special cases of this invariant subspace problem have already been proven [48].

Reproducing Kernel Spaces
In this section, we define some useful reproducing kernel spaces.
The last condition is called the "reproducing property": the value of the function ϕ at the point t is reproduced by the inner product of ϕ with K(·, t), that is, ϕ(t) = ⟨ϕ, K(·, t)⟩.
We now fix the notation used in the development of the paper and define several spaces together with their inner products. The space W₂³[0, 1], equipped with the inner product and norm given in [26], is a Hilbert space; moreover, it is a reproducing kernel space, that is, for each fixed y ∈ [0, 1] and any v ∈ W₂³[0, 1] there exists a function R_y such that ⟨v, R_y⟩ = v(y). Similarly, we define the space T₂³[0, 1], with inner product and norm as in [26]; T₂³[0, 1] is a reproducing kernel Hilbert space, and its reproducing kernel function r_s is given in [26]. The space G₂¹[0, 1], with the inner product and norm defined in [26], is likewise a Hilbert space; it is a reproducing kernel space, and its reproducing kernel function Q_y is given in [26]. Finally, the space H₂¹[0, 1], with the inner product and norm defined in [26], is a Hilbert space; it is a reproducing kernel space, and its reproducing kernel function q_s is given in [26].
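The reproducing property can be checked numerically in a simpler model space than those above (whose kernels are given in [26]): for functions on [0, 1] with u(0) = 0 and inner product ⟨u, v⟩ = ∫₀¹ u′(x)v′(x) dx, the reproducing kernel is the classical K(x, y) = min(x, y). A minimal sketch:

```python
import numpy as np

# Model space (illustrative stand-in, not the paper's W_2^3):
# H = { u on [0,1] : u(0) = 0, u' in L^2 }, <u, v> = int_0^1 u'(x) v'(x) dx,
# with reproducing kernel K(x, y) = min(x, y).

def inner(df, dg, xs):
    """Trapezoidal approximation of int f'(x) g'(x) dx over the grid xs."""
    h = df(xs) * dg(xs)
    return float(np.sum((h[1:] + h[:-1]) * np.diff(xs)) / 2.0)

xs = np.linspace(0.0, 1.0, 20001)

u  = lambda x: np.sin(3.0 * x) * x                      # satisfies u(0) = 0
du = lambda x: 3.0 * np.cos(3.0 * x) * x + np.sin(3.0 * x)

y = 0.4
dKy = lambda x: (x < y).astype(float)                   # d/dx of min(x, y)

val = inner(du, dKy, xs)      # <u, K(., y)> = int_0^y u'(x) dx = u(y) - u(0)
print(val, u(y))              # both approximately u(0.4)
```

The inner product against K(·, y) integrates u′ over [0, y], which recovers u(y) exactly since u(0) = 0; the small discrepancy is quadrature error.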

Journal of Function Spaces and Applications
Now we have the following theorem.
Theorem 2.2. The space W₂³[0, 1] is a complete reproducing kernel space whose reproducing kernel R_y is given by (2.15).

Proof. Through iterative integration by parts in (2.15), we obtain (2.16). Note the reproducing property of the kernel; then by (2.16) we obtain (2.21). We also have (2.23). From (2.18) and (2.23), the unknown coefficients c_i(y) and d_i(y), i = 1, 2, …, 6, can be obtained. Thus R_y is given explicitly in piecewise form, with one expression for x ≤ y and one for x > y.

We now note that the space W(Ω), given in [26], is a reproducing kernel Hilbert space of functions of two variables; its inner product and norm are defined in [26].
Theorem 2.3. W(Ω) is a reproducing kernel space, and its reproducing kernel function is given by (2.28).

Similarly, a second reproducing kernel Hilbert space of functions of two variables is defined, with inner product and norm given in [26]; it is a reproducing kernel space, and its reproducing kernel function G_{y,s} is G_{y,s} = Q_y q_s (2.31).

Solution Representation in W(Ω)
In this section, the solution of (1.1) is given in the reproducing kernel space W(Ω). We define the linear operator L by (3.2).

Lemma 3.1. The operator L is a bounded linear operator.

Proof. Using the reproducing property of the kernel and the Schwarz inequality, for i = 0, 1 we obtain the estimates (3.5) and (3.6). Therefore we conclude (3.7), that is, L is bounded.
Now, if we choose a countable dense subset {(x₁, t₁), (x₂, t₂), …} of Ω = [0, 1] × [0, 1] and define (3.8), where L* is the adjoint operator of L, then the orthonormal system (3.9) can be derived by Gram–Schmidt orthonormalization. We then have the following theorem, stated in (3.13).

Proof. Note that {(x_i, t_i)}_{i=1}^∞ is dense in Ω; hence Lv(x, t) = 0. It follows that v = 0 from the existence of L⁻¹. So the proof is complete.
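The orthonormal system used above is produced from the ψ functions by Gram–Schmidt orthonormalization. The sketch below illustrates the process on sampled functions with a discrete L² inner product standing in for ⟨·,·⟩_W; the basis functions and quadrature weights are illustrative choices, not the paper's.

```python
import numpy as np

def gram_schmidt(funcs, w):
    """Orthonormalize the rows of `funcs` w.r.t. <f, g> = sum(f * g * w)."""
    ortho = []
    for f in funcs:
        v = f.astype(float).copy()
        for q in ortho:
            v = v - np.sum(v * q * w) * q   # subtract components along earlier q
        v = v / np.sqrt(np.sum(v * v * w))  # normalize
        ortho.append(v)
    return np.array(ortho)

xs = np.linspace(0.0, 1.0, 1001)
w = np.full_like(xs, xs[1] - xs[0])         # uniform quadrature weights
basis = np.array([xs**0, xs, xs**2])        # sample functions 1, x, x^2
Q = gram_schmidt(basis, w)

# the Gram matrix of the output is the identity w.r.t. the discrete product
G = np.array([[np.sum(Q[i] * Q[j] * w) for j in range(3)] for i in range(3)])
print(np.round(G, 6))
```

Because the orthonormalization is carried out with respect to the same discrete inner product used in the check, the Gram matrix is the identity up to rounding.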

The exact solution v(x, t) is represented by the series (3.15), and the approximate solution v_n(x, t) is obtained as its n-term truncation (3.16). Of course, it is also easy to show that ‖v_n(x, t) − v(x, t)‖ → 0 as n → ∞ (3.17).

Convergence Analysis
We assume that {(x_i, t_i)}_{i=1}^∞ is dense in Ω = [0, 1] × [0, 1] and discuss the convergence of the approximate solutions constructed in Section 3. Let v be the exact solution of (1.1) and v_n the n-term approximate solution of (1.1). Then we have the following theorem.

The convergence statement is given in (3.18); moreover, the sequence ‖v − v_n‖_{W(Ω)} is monotonically decreasing in n.

Proof. From (3.14) and (3.16), we obtain (3.20). In addition, (3.21) holds. Then, clearly, ‖v − v_n‖_{W(Ω)} is monotonically decreasing in n.
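The monotonicity reflects a general fact: v_n is the orthogonal projection of v onto the span of the first n orthonormal functions, so enlarging the span can only decrease the error. A discrete L² illustration (the target function and the polynomial basis are arbitrary choices made here, not the paper's ψ system):

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 2001)
dx = xs[1] - xs[0]
v = np.exp(xs) * np.sin(3.0 * xs)            # target function (arbitrary)

# orthonormal polynomial basis w.r.t. the discrete L^2 product, via QR
A = np.vander(xs, 8, increasing=True) * np.sqrt(dx)
Qmat, _ = np.linalg.qr(A)
Q = Qmat / np.sqrt(dx)                       # columns: orthonormal functions

errs = []
for n in range(1, 9):
    coeffs = np.array([np.sum(v * Q[:, i]) * dx for i in range(n)])
    vn = Q[:, :n] @ coeffs                   # projection onto first n functions
    errs.append(float(np.sqrt(np.sum((v - vn) ** 2) * dx)))

print([round(e, 5) for e in errs])           # a non-increasing sequence
```

Each added basis function removes one more orthogonal component of v, so the residual norm cannot increase, which mirrors the monotone decrease of ‖v − v_n‖_{W(Ω)}.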

Experimental Results for the Telegraph Equation
In this section, three numerical examples are provided to show the accuracy of the present method. All computations were performed with Maple 13. Since the RKHSM does not require discretization of the variables (time and space), it is not affected by computational round-off errors and does not require large computer memory or time. The accuracy of the RKHSM for problem (1.1) is controllable, and the absolute errors are small for the present choice of x and t (see Tables 1, 2, 3, 4, 5, and 6). Thus the numerical results justify the advantage of this methodology. Note that the solutions converge very rapidly when the RKHSM is utilized. Further, the series solution methodology can be applied to various types of linear or nonlinear single partial differential equations and systems of partial differential equations; see, for example, [25-30].
In Table 7, we compute the relative errors defined in (4.1).

Example 4.1. The exact solution is given as u(x, t) = sin(πx) sin(πt).

Then we have the estimates (4.5), reported in Table 1. Comparing [13, 21] with the present scheme gives Table 2. We have Figures 1, 2, and 3 for this example, where ES denotes the exact solution and AS the approximate solution.
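For the exact solution u(x, t) = sin(πx) sin(πt) of Example 4.1, the source term f can be recovered symbolically. The code below assumes the constant-coefficient form u_tt + 2αu_t + β²u = u_xx + f for (1.1), since the full statement of the example is not reproduced here.

```python
import sympy as sp

x, t, a, b = sp.symbols('x t alpha beta', positive=True)

# Example 4.1 exact solution
u = sp.sin(sp.pi * x) * sp.sin(sp.pi * t)

# source term forced by u_tt + 2*a*u_t + b**2*u = u_xx + f
f = sp.diff(u, t, 2) + 2 * a * sp.diff(u, t) + b**2 * u - sp.diff(u, x, 2)
f = sp.simplify(f)
print(f)

# u_tt and u_xx cancel (both equal -pi^2 u), leaving only the
# damping and reaction contributions
expected = 2 * a * sp.pi * sp.sin(sp.pi * x) * sp.cos(sp.pi * t) + b**2 * u
print(sp.simplify(f - expected))   # 0
```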
Example 4.2. Consider the following telegraph equation with initial conditions, whose exact solution is u(x, t) = e^{−πt} sin(πx). If we apply v(x, t) = u(x, t) − sin(πx) + tπ sin(πx) to (4.6), we obtain the transformed problem; similarly, Table 3 gives the exact and approximate solutions and the error terms.
The comparison is given in Table 4.
We have Figures 4, 5, and 6 for this example.

Example 4.3. Consider the problem (4.8), whose exact solution is u(x, t) = e^{2t} x⁴(x − 1)⁴. If we apply v(x, t) = u(x, t) − x⁴(x − 1)⁴ − 2t x⁴(x − 1)⁴ (4.9), we obtain the transformed problem. One may view Tables 1, 2, 3, 4, 5, and 6 for the reliability of the method and the comparison with the other methods. In Table 7, the computing time and the relative error are also given for each example.

Conclusion
In this paper, the RKHSM was used to solve the telegraph equation with initial conditions. The approximate solutions were calculated by the RKHSM without any need for linearization or perturbation of the equations. In closing, the RKHSM avoids the difficulties and massive computational work of other approaches in determining analytic solutions. We compared our solutions with the exact solutions and with the results of [19].
A clear conclusion can be drawn from the numerical results: the RKHSM algorithm provides highly accurate numerical solutions, without spatial discretization, for these partial differential equations. It is also worth noting that the methodology displays fast convergence of the solutions. The illustrations show that the rate of convergence depends on the character and behavior of the solutions, just as for closed-form solutions.

Figure 1: The compared results of the analytical and numerical solutions of Example 4.1 for t = 0.5.

Figure 3: The compared results of the analytical and numerical solutions of Example 4.1 for t = 1.5.

Figure 5: The compared results of the analytical and numerical solutions of Example 4.2 for t = 1.0.

Figure 6: The compared results of the analytical and numerical solutions of Example 4.2 for t = 1.5.

Figure 7: The compared results of the analytical and numerical solutions of Example 4.3 for t = 0.5.
Yao and Cui [28] and Wang et al. [29] investigated a class of singular boundary value problems by this method, and the obtained results were good. Zhou et al. [30] used the RKHSM effectively to solve second-order boundary value problems. In [31], the method was used to solve nonlinear infinite-delay differential equations. Wang and Chao [32], Li and Cui [33], and Zhou and Cui [34] independently employed the RKHSM for variable-coefficient partial differential equations. Geng and Cui [35] and Du and Cui [36] investigated the approximate solution of the forced Duffing equation with integral boundary conditions by combining the homotopy perturbation method and the RKHSM. Lv and Cui [37] presented a new algorithm to solve linear fifth-order boundary value problems. In [38, 39], the authors developed a new existence proof of solutions for nonlinear boundary value problems. Cui and Du [40] obtained the representation of the exact solution of nonlinear Volterra-Fredholm integral equations using the reproducing kernel space. Wu and Li [41] applied an iterative reproducing kernel method to obtain the analytical approximate solution of a nonlinear oscillator with discontinuities. Recently, the method was applied to fractional partial differential equations and multipoint boundary value problems [42-45].

Table 1: The exact solution, approximate solution, absolute error, and relative error of Example 4.1 at t = 0.5.

Table 2: The absolute error of Example 4.1 for the difference schemes and our scheme at t = 0.5.

Table 3: The exact solution, approximate solution, absolute error, and relative error of Example 4.2 at t = 0.8.

Table 4: The absolute error of Example 4.2 for the difference schemes and our scheme at t = 0.8.

Table 5: The exact solution, approximate solution, absolute error, and relative error of Example 4.3 at t = 1.0.

Table 6: The absolute error of Example 4.3 for the difference schemes and our scheme at t = 1.0.

Table 7: The relative errors and computational times for Examples 4.1-4.3 at different values of x and t.
Here the nodes are taken as x_i = t_i = i, for i = 0.1, …, 1.0. It is possible to refine the results by increasing the number of points. We constructed the figures for different values of x and t. It can be concluded from the figures that the speed of convergence decreases as the values of x and t increase.