Computation of Spectral Parameter of Discontinuous Dirac Systems with a Gaussian Multiplier



Introduction
Problems with a spectral parameter in the equations and boundary conditions form an important part of the spectral theory of linear differential operators. A bibliography of papers in which such problems were considered in connection with specific physical processes can be found in [1, 2]. Let σ > 0. The Paley–Wiener space B^2_σ is the space of all entire functions of exponential type σ which lie in L^2(R) when restricted to R, that is,

B^2_σ = { f entire : |f(λ)| ≤ C e^{σ|λ|} for some C > 0, and f|_R ∈ L^2(R) }.

Assume that f ∈ B^2_σ. The Whittaker–Kotel'nikov–Shannon (WKS) sampling theorem states that any such function f can be reconstructed via the classical sampling expansion [3–5]

f(λ) = Σ_{n=−∞}^{∞} f(nπ/σ) sin(σλ − nπ)/(σλ − nπ).  (2)

In (2), the series converges uniformly on R and converges absolutely and uniformly on compact subsets of C; cf. [6]. The WKS sampling series is widely used in approximation theory. It is used to approximate functions and their derivatives, solutions of differential and integral equations, integral transforms (Fourier, Laplace, Hankel, and Mellin), and the eigenvalues of boundary value problems; see, for example, [7–10]. The use of WKS sampling theory is called the sinc method; cf. [11–14]. The sinc method has a slow rate of decay at infinity, as slow as O(|λ|^{−1}). There have been several attempts to improve this rate of decay. One of the interesting ways is to multiply the sinc function in (2) by a kernel function; see, for example, [15–17].
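Since the WKS series converges only at the rate O(|λ|^{−1}), the effect of a Gaussian multiplier is easy to observe numerically. The following Python sketch compares a truncated WKS sum with a Gaussian-multiplied (sinc-Gaussian) sum for a bandlimited test function. The width parameter α = (π − hσ)/2 is one common choice in the sinc-Gaussian literature and should be treated as an assumption here, not as the paper's exact operator.

```python
import numpy as np

def plain_sinc(f, t, h, N):
    """Truncated WKS (classical sinc) sum using the 2N+1 samples nearest t."""
    n0 = int(np.floor(t / h))
    n = np.arange(n0 - N, n0 + N + 1)
    return np.sum(f(n * h) * np.sinc(t / h - n))  # np.sinc(x) = sin(pi x)/(pi x)

def sinc_gauss(f, t, h, N, sigma):
    """Same truncated sum multiplied by a Gaussian; alpha = (pi - h*sigma)/2
    is one common width choice in the sinc-Gaussian literature (assumption)."""
    alpha = (np.pi - h * sigma) / 2.0
    n0 = int(np.floor(t / h))
    n = np.arange(n0 - N, n0 + N + 1)
    d = t / h - n
    return np.sum(f(n * h) * np.sinc(d) * np.exp(-alpha * d ** 2 / N))

sigma = 0.5                                  # exponential type of the test function
f = lambda t: np.sinc(sigma * t / np.pi)     # f(t) = sin(sigma t)/(sigma t)

h, N = 1.0, 12
ts = np.linspace(0.05, 4.95, 50)
plain_err = max(abs(plain_sinc(f, t, h, N) - f(t)) for t in ts)
gauss_err = max(abs(sinc_gauss(f, t, h, N, sigma) - f(t)) for t in ts)
print(plain_err, gauss_err)  # the Gaussian multiplier gains several orders of magnitude
```

The Gaussian weight is chosen so that the truncation error of the local window and the spectral (aliasing) error it introduces are balanced, both decaying exponentially in N rather than like 1/N.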
In [23], Annaby and Tharwat computed the eigenvalues of a second-order operator pencil by the sinc-Gaussian method. The authors also introduced examples illustrating the sinc-Gaussian method, accompanied by a comparison with the sinc method, which showed that the sinc-Gaussian method gives remarkably better results; see also [24, 25].
Tharwat and Al-Harbi [26] computed the eigenvalues of a discontinuous Dirac system, but with the eigenparameter in only one boundary condition. In [14, 27], Tharwat et al. computed the eigenvalues of discontinuous Dirac systems approximately, with the eigenparameter appearing in the boundary conditions, by Hermite interpolation and regularized sinc methods, respectively. In the regularized sinc method, as in the Hermite interpolation method, the basic idea is that the eigenvalues are characterized as the zeros of an analytic function that can be written as the sum of a known part and an unknown part. The ingenuity of the approach lies in choosing the known part so that the unknown part belongs to B^2_σ and can therefore be approximated by the sampling theorem once its values at some equally spaced points are known; see [11–14]. Recall that, in the regularized sinc and Hermite interpolation methods, it is necessary that the unknown part be an L^2-function. In this paper we use the sinc-Gaussian sampling formula (5) to compute the eigenvalues of (12)–(15) numerically.
The proposed method reduces the error bounds remarkably (see the examples of Section 4). The basic idea is to write the function whose zeros are the eigenvalues as the sum of two terms, one known and the other unknown but an entire function of exponential type satisfying (6). In other words, the unknown term is not necessarily an L^2-function. We then approximate the unknown part using (5) and obtain better results. We would like to mention that papers on computing eigenvalues by the sinc-Gaussian method are few; see [22–26]. In Sections 2 and 3, we discuss some properties of the eigenvalues of the boundary value problems (12)–(15) and derive the sinc-Gaussian technique for computing the eigenvalues of (12)–(15) with error estimates. The last section contains some illustrative examples.

Some Important Results

In the following, we study some properties of the eigenvalues of the problems (12)–(15) that are needed in our method; see [26, 28]. For a function u(x) defined on [−1, 0) ∪ (0, 1] and having the finite limits u(±0) := lim_{x→±0} u(x), we denote by u^(1)(x) and u^(2)(x) the functions

u^(1)(x) = u(x) for x ∈ [−1, 0), with u^(1)(0) = u(−0);  u^(2)(x) = u(x) for x ∈ (0, 1], with u^(2)(0) = u(+0).
In the following lemma, we prove that the eigenvalues of the problems (12)–(15) are real; see [29, 30].
Then we may introduce the characteristic function Ω(λ). Note that all eigenvalues of problems (12)–(15) are just zeros of the function Ω(λ). Indeed, since the solution components satisfy the boundary condition (13) and both transmission conditions (15), to find the eigenvalues of (12)–(15) we insert these solutions into the boundary condition (14) and find the roots of the resulting equation. In the following lemma, we show that all eigenvalues of the problems (12)–(15) are simple.
Lemma 3. The eigenvalues of the boundary value problems (12)–(15) form an at most countable set without finite limit points. All eigenvalues of the boundary value problems (12)–(15), that is, all zeros of Ω(λ), are simple.

The Numerical Scheme
In this section we derive the method for computing the eigenvalues of problems (12)–(15) numerically. The basic idea of the scheme is to split Ω(λ) into two parts, a known part K(λ) and an unknown one U(λ). We then approximate U(λ) using (5) to obtain an approximation of Ω(λ) and compute its approximate zeros. We first split Ω(λ) into two parts,

Ω(λ) = K(λ) + U(λ),

where U(λ) is the unknown part involving integral operators and K(λ) is the known part. Then, from Lemmas 4 and 5, we have the following result.
Lemma 6. The function U(λ) is entire in λ and satisfies the growth estimate (66). Indeed, using the inequalities |sin λ| ≤ e^{|Im λ|} and |cos λ| ≤ e^{|Im λ|} for λ ∈ C, Lemmas 4 and 5 imply (66).
Here we use a computer algebra system, Mathematica, to obtain the approximate solutions with the required accuracy; a separate study of the effect of different numerical schemes and of the computational cost would nevertheless be interesting. Accordingly, we have an explicit expansion of the unknown part and therefore, cf. (10), a computable representation. Now let Ω_N(λ) := K(λ) + (G_{h,N} Ũ)(λ). From (69) and (72) we obtain (73). Let λ* be an eigenvalue and λ_N its desired approximation, that is, Ω(λ*) = 0 and Ω_N(λ_N) = 0. From (73) we obtain a bound on |Ω_N(λ*)|. Define the curves obtained by shifting Ω_N(λ) up and down by this bound; these curves enclose the curve of Ω(λ) for suitably large N. Hence the enclosure interval is determined by solving for the zeros of the two shifted curves. It is worthwhile to mention that the simplicity of the eigenvalues guarantees the existence of approximate eigenvalues, that is, of the λ_N for which Ω_N(λ_N) = 0. Next we estimate the error |λ* − λ_N| for the eigenvalue λ*.
Theorem 7. Let λ* be an eigenvalue of (12)–(15) and let λ_N be its approximation. Then, for λ ∈ R, we have the following estimate, where the interval is defined above.
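The whole pipeline of this section, splitting Ω into K + U, replacing U by its sinc-Gaussian approximation built from finitely many samples, and then locating zeros, can be sketched in Python on an invented toy characteristic function. This is not the paper's Ω(λ); the split chosen below, the exponential type σ = 1, and the Gaussian width α = (π − hσ)/2 are illustrative assumptions.

```python
import numpy as np

# Toy model (assumption): Omega = K + U with K known and U entire of type sigma = 1.
K = lambda lam: np.cos(2.0 * lam)
U = lambda lam: np.sinc(lam / np.pi)       # sin(lam)/lam, exponential type 1
Omega = lambda lam: K(lam) + U(lam)        # "true" characteristic function

def G(f, t, h, N, sigma):
    """Sinc-Gaussian operator G_{h,N}: approximates f(t) from samples f(nh).
    The width alpha = (pi - h*sigma)/2 follows the sinc-Gaussian literature
    (treated as an assumption here)."""
    alpha = (np.pi - h * sigma) / 2.0
    n0 = int(np.floor(t / h))
    n = np.arange(n0 - N, n0 + N + 1)
    d = t / h - n
    return np.sum(f(n * h) * np.sinc(d) * np.exp(-alpha * d ** 2 / N))

h, N, sigma = 1.0, 15, 1.0
Omega_N = lambda lam: K(lam) + G(U, lam, h, N, sigma)  # computable approximation

def bisect(g, a, b, tol=1e-12):
    """Plain bisection on a sign change of g in [a, b]."""
    fa = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * g(m) <= 0:
            b = m
        else:
            a, fa = m, g(m)
    return 0.5 * (a + b)

lam_exact = bisect(Omega, 1.0, 1.5)      # zero of the true Omega (near 1.23)
lam_approx = bisect(Omega_N, 1.0, 1.5)   # zero of the sinc-Gaussian approximation
print(lam_exact, lam_approx)
```

Because U is approximated with an exponentially small error in N, the zero of Ω_N agrees with the zero of Ω to far more digits than a comparable truncated sinc series would give.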

Numerical Examples
In this section, we introduce two examples illustrating the above method. In the following examples, we also observe that the exact solutions all lie inside the computed enclosure intervals.
In these two examples, we indicate the effect of the amplitude error in the method by determining enclosure intervals for different values of ε. We also indicate the effect of N and h by several choices. The eigenvalues of the following examples cannot be computed in closed form, so we use Mathematica to obtain the exact values. Mathematica is also used to round off the exact eigenvalues, which are square roots.
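The enclosure intervals used in these examples can be sketched numerically as follows. The toy split Ω = K + U and the amplitude-error bound E below are assumptions for illustration; in the paper the bound comes from the error estimates of Section 3.

```python
import numpy as np

# Toy split re-assumed for illustration: Omega = K + U, U bandlimited of type 1.
K = lambda lam: np.cos(2.0 * lam)
U = lambda lam: np.sinc(lam / np.pi)       # sin(lam)/lam
Omega = lambda lam: K(lam) + U(lam)

def G(f, t, h, N, sigma):
    alpha = (np.pi - h * sigma) / 2.0      # assumed Gaussian width parameter
    n0 = int(np.floor(t / h))
    n = np.arange(n0 - N, n0 + N + 1)
    d = t / h - n
    return np.sum(f(n * h) * np.sinc(d) * np.exp(-alpha * d ** 2 / N))

h, N, sigma = 1.0, 15, 1.0
Omega_N = lambda lam: K(lam) + G(U, lam, h, N, sigma)

E = 1e-4  # assumed bound |Omega - Omega_N| <= E on the search interval
upper = lambda lam: Omega_N(lam) + E       # curve shifted up by the bound
lower = lambda lam: Omega_N(lam) - E       # curve shifted down by the bound

def bisect(g, a, b, tol=1e-12):
    fa = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * g(m) <= 0:
            b = m
        else:
            a, fa = m, g(m)
    return 0.5 * (a + b)

# Omega decreases through its zero near 1.23, so the zero of `lower` lies to the
# left of the true eigenvalue and the zero of `upper` to its right.
lam_exact = bisect(Omega, 1.0, 1.5)
lam_minus = bisect(lower, 1.0, 1.5)
lam_plus = bisect(upper, 1.0, 1.5)
print(lam_minus, lam_exact, lam_plus)
```

Shrinking E (i.e., taking a smaller amplitude error ε or a larger N) shrinks the enclosure interval around the exact eigenvalue, which is exactly the behavior reported in the tables and figures below.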
Both the numerical results and the associated figures demonstrate the credibility of the method.
Example 1. Consider the system. As is clearly seen, the eigenvalues cannot be computed explicitly. Tables 1, 2, and 3 show the application of our technique to this problem and the effect of ε. By "exact" we mean the zeros of Δ(λ) computed by Mathematica.
Figures 1 and 2 illustrate the enclosure intervals dominating λ₀ for N = 25, h = 0.1, and ε = 10^{−2} and ε = 10^{−5}, respectively. The middle curve represents Δ(λ), while the upper and lower curves represent the enclosing curves. We notice that when ε = 10^{−5}, the two curves are almost identical. Figures 3 and 4 are similar.

Example 2. In this example we consider a second system. Tables 4 and 6 give the exact eigenvalues {λ_k}, k = −2, …, 1, and their approximations for different values of h, N, and ε. In Table 5, we give the absolute error for different values of h and N.

Conclusion
With a simple analysis, and with the values of solutions of initial value problems computed at only a few values of the eigenparameter, we have computed the eigenvalues of a Dirac system which has a discontinuity at one point and contains a spectral parameter in all boundary conditions, with a certain estimated error. The proposed method is a shooting procedure; that is, owing to the interior discontinuity, the problem is reformulated as two initial value problems of size two, and a miss-distance is defined at the right end of the interval of integration whose roots are the eigenvalues to be computed. The unknown part U(λ) of the miss-distance can be written in terms of an entire function of exponential type. We therefore approximate this term by means of a truncated cardinal series whose sampling values are obtained by numerically solving suitable initial value problems. Finally, in Section 4 we introduced two instructive examples, where both the numerical results and the associated figures demonstrated the credibility of the method.

Table 1 :
The approximate and the exact eigenvalues for different choices of h and N.

Table 3 :
For N = 25 and h = 0.1, the exact eigenvalues all lie inside the enclosure intervals for different values of ε.

Table 4 :
The approximate and the exact eigenvalues for different choices of h and N.

Table 6 :
For N = 25 and h = 0.2, the exact eigenvalues all lie inside the enclosure intervals for different values of ε.