The Distance between Points of a Solution of a Second Order Linear Differential Equation Satisfying General Boundary Conditions



Introduction
In a recent paper of the authors (see [1]) it was shown that the recursive application of the operator T: C[a, b] → C2[a, b] defined by (4), where p(t) is continuous on [a, b] and strictly positive almost everywhere on that same interval, provides a method to determine whether the second order linear differential equation (2) is either left disfocal or left nondisfocal in the interval [a, b], the concept of left disfocal alluding to the nonexistence of a nontrivial solution y(t) of (2) with zeroes in [a, b] such that y'(a) = 0 (a similar definition exists for right disfocal). The method was based on two features of the operator T, namely: (i) the fact that T is positive and monotonic, so that for u > v > w > 0 on ]a, b[ one has Tu > Tv > Tw on ]a, b]; (ii) the fact that lim n→∞ Tⁿu(x) = ∞ if (2) is left nondisfocal in an interval interior to [a, b] and lim n→∞ Tⁿu(x) = 0 for any value of x ∈ [a, b] as long as (2) is left disfocal in [a, b].
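The dichotomy in (ii) is easy to observe numerically. The sketch below is illustrative only and is not the exact operator of [1]: we assume the conjugate-case kernel G(x, s) = (min(x, s) − a)(b − max(x, s))/(b − a) of −y'' = f with y(a) = y(b) = 0 and take p ≡ 1, for which the critical length is π, iterating a discretized T while monitoring the norm ratio of successive iterates.

```python
import numpy as np

def norm_ratio(b, n_iter=20, a=0.0, m=400):
    """Iterate Tu(x) = ∫ G(x,s) p(s) u(s) ds and return ‖Tu‖2/‖u‖2 at the last step."""
    x = np.linspace(a, b, m)
    dx = x[1] - x[0]
    w = np.full(m, dx)
    w[0] = w[-1] = dx / 2                      # trapezoidal quadrature weights
    X, S = np.meshgrid(x, x, indexing="ij")
    G = (np.minimum(X, S) - a) * (b - np.maximum(X, S)) / (b - a)  # conjugate kernel
    p = np.ones(m)                             # p(t) ≡ 1: critical length is π
    u = (x - a) * (b - x)                      # arbitrary positive starting function
    ratio = 0.0
    for _ in range(n_iter):
        v = G @ (w * p * u)                    # discretized (Tu)(x_i)
        ratio = np.sqrt((w * p * v**2).sum() / (w * p * u**2).sum())
        u = v / np.sqrt((w * p * v**2).sum())  # renormalize to avoid over/underflow
    return ratio                               # → 1/λ1, i.e., (b/π)² for p ≡ 1

print(norm_ratio(3.0))   # b < π: ratio < 1, so Tⁿu → 0
print(norm_ratio(3.4))   # b > π: ratio > 1, so Tⁿu → ∞
```

The ratio converges to the largest eigenvalue 1/λ1 of T, which crosses 1 exactly at the critical length.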
The purpose of this paper is to extend the results of [1] to (2) with the more general boundary conditions (3), where G(x, s) (or G(a,b)(x, s), as we will name it in some sections) is the Green function of the problem (5). In consequence this paper will yield criteria (in fact infinitely many) to obtain lower and upper bounds for the minimum distance between points a and b satisfying (3) when y(t) is a solution of (2), bounds which will be shown to converge to the exact values of such a minimum distance as the index n grows. Besides extending the results of [1] to the problem (2)-(3), other results based on the same strategies will also be introduced, and it will be shown that they improve the results of [1] in many cases.
Operators of the type (4) have often been used in the determination of Lyapunov inequalities for different types of equations and boundary conditions since Nehari [2] first noted that a solution of (2) such that y(a) = y(b) = 0 satisfies (6), which implies (7), that is, (8); Beesack [3], Hinton [4], Levin [5, 6], and Reid [7-9] are other remarkable examples of such a use, as the excellent monograph on Lyapunov inequalities [10, Chapter 1] shows. Although formulae like (6) can be applied recursively if p(t) ≥ 0 and G(x, s) ≥ 0 to obtain more complex versions of (9), the fact is that the iterative application of T has rarely been proposed in any papers, with the exception of Harris [11], who suggested its application for the disfocal case (y(a) = y'(b) = 0 or y'(a) = y(b) = 0) without getting to prove that it guaranteed any improvement. We will show in this paper that under certain conditions on (3) the function G(x, s) is positive, so that the recursive application is possible and provides lower and upper bounds which improve all existing results as of today. We will also show that even in the case that G(x, s) gets to be negative it is still sometimes possible to obtain upper and lower bounds for the distance between a and b, bounds which are as close to the real distance as desired.
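Nehari's starting point, namely that a solution with y(a) = y(b) = 0 reproduces itself under the kernel, y(x) = ∫ G(x, s) p(s) y(s) ds, can be verified directly. A minimal numerical check, assuming the classical conjugate-case Green function on [0, π] and the solution y = sin t of y'' + y = 0:

```python
import numpy as np

a, b = 0.0, np.pi
x = np.linspace(a, b, 1501)
dx = x[1] - x[0]
w = np.full_like(x, dx)
w[0] = w[-1] = dx / 2                           # trapezoidal weights

# Green function of -y'' = f with y(a) = y(b) = 0
X, S = np.meshgrid(x, x, indexing="ij")
G = (np.minimum(X, S) - a) * (b - np.maximum(X, S)) / (b - a)

y = np.sin(x)              # solves y'' + p y = 0 with p ≡ 1 and y(0) = y(π) = 0
Ty = G @ (w * 1.0 * y)     # ∫ G(x,s) p(s) y(s) ds with p ≡ 1

print(np.max(np.abs(Ty - y)))   # ≈ 0: y reproduces itself under the kernel
```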
It is also worth remarking on the significant interest that the calculation of lower bounds (i.e., Lyapunov-type inequalities) has enjoyed in comparison with the problem of the determination of upper bounds, regardless of the type of boundary conditions (3). This fact was already noted by Došlý in [12] for the conjugate case (y(a) = y(b) = 0) and by the authors in [1, 13, 14] for the nondisfocal case. References [12, 15-20] are notable exceptions to this trend.
As indicated in the first paragraph, throughout the paper we will assume that p(t) is continuous on an interval I ⊂ R such that [a, b] ⊂ I and that p(t) is strictly positive almost everywhere on [a, b]. This allows defining the inner product (10) for u, v continuous on [a, b] (it is easy to prove that (10) satisfies all the conditions required of an inner product) and the associated norm ‖ ⋅ ‖2 defined by (11). Likewise, we will use the notation T to name the operator defined in (4), Tu or T{u} to name the function with domain [a, b] resulting from the application of T to u(t) ∈ C[a, b], Tu(x) or T{u}(x) to name the value of that function at the point x, and T(c,d) when other extremes of integration c, d, potentially different from a, b, are used in (4). In that latter case, any ‖ ⋅ ‖2 norm appearing in the same formula will be assumed to be calculated with c and d as integration extremes in (11).
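A quadrature sketch of the inner product (10) and norm (11), under the assumption (ours) that (10) is the p-weighted product ⟨u, v⟩ = ∫ p(t) u(t) v(t) dt:

```python
import numpy as np

def inner(u, v, p, a, b, m=2001):
    """⟨u, v⟩ = ∫_a^b p(t) u(t) v(t) dt via the trapezoidal rule."""
    t = np.linspace(a, b, m)
    dt = t[1] - t[0]
    w = np.full(m, dt)
    w[0] = w[-1] = dt / 2
    return float((w * p(t) * u(t) * v(t)).sum())

def norm2(u, p, a, b):
    """‖u‖2 = ⟨u, u⟩^(1/2)."""
    return inner(u, u, p, a, b) ** 0.5

one = lambda t: np.ones_like(t)
print(norm2(np.sin, one, 0.0, np.pi))                           # √(π/2) ≈ 1.2533
print(inner(np.sin, lambda t: np.sin(2 * t), one, 0.0, np.pi))  # ≈ 0 (orthogonality)
```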
We will say that points c, d are interior to an interval I when both belong to the interior of I. The organization of the paper is as follows. Section 2 will present the main properties of the operator T. Sections 3 and 4 will apply these properties in different ways to find upper and lower bounds for the minimum distance between points a and b satisfying (3). Section 5 will introduce some formulae which simplify the calculations required in Section 3. Section 6 will apply the method to several examples. Finally, Section 7 will draw several conclusions.

The Operator Ty(x) = ∫ G(x, s) p(s) y(s) ds

The purpose of this section will be to present the main properties of the operator T defined in (4) for p(t) as specified in the Introduction. As was done in [1], for the sake of clarity such properties will be presented in several lemmas, which will lead to Theorem 5, the key result of this section.

Lemma 3. If any of the following conditions (12) are met, then the Green function G(x, s) of the problem (5) exists and the operator T is self-adjoint.

Proof. Let us define y1(t) as the solution of (13) and y2(t) as the solution of (14). A straightforward calculation shows that if any of the conditions (12) are met, then the Wronskian of y1 and y2, namely (15), is not zero, which implies that y1 and y2 are not linearly dependent and therefore no nontrivial solutions of (2)-(3) exist. In consequence, from [22, Theorem IX.3] one has (16). Now, in order to prove self-adjointness, we need to prove that, given u, v ∈ C[a, b], ⟨Tu, v⟩ = ⟨u, Tv⟩. Thus, from (10) we have (17). Combining (17) and (16) one gets to (18), and integrating by parts the right-hand side of (18) one finally has (19).

Lemma 4. The operator T is bounded with the ‖ ⋅ ‖∞ norm and verifies (20)-(21), where the ‖ ⋅ ‖2 norm is defined as in (11).
Theorem 5. If any of conditions (12) are met, then the operator T has a countably infinite number of eigenvalues 1/λn and associated orthonormal eigenfunctions Φn(t), which allow expressing Tⁿu, n ≥ 1, as (22). Moreover, (i) if (2) does not have any nontrivial solution that satisfies (3) either at a, b or at any points interior to [a, b], then (23) and (24) hold; (ii) if there is a nontrivial solution of (2) that satisfies (3) at some c, d interior to [a, b], then (25)-(27) hold, the sign in (27) corresponding to that of ⟨u, Φ1⟩Φ1(x).
Proof. Let us consider the eigenvalue problem (28). From the theory of ordinary differential equations (see [22, Theorems V.8 and V.9]) it is known that there exist a countably infinite number of eigenvalues λn which form an increasing sequence with lim n→∞ λn = ∞, each of which has its corresponding orthonormal (with the norm (11)) eigenfunction Φn(t), and that the set of eigenfunctions Φn forms an orthonormal basis of C[a, b]. Applying the operator T to these eigenfunctions Φn(t) and integrating by parts, it is easy to show that (29) holds, which implies that the Φn are also the eigenfunctions of the operator T, with corresponding eigenvalues 1/λn. Since, from Lemmas 1, 2, and 3, T is linear, compact, and self-adjoint, we can apply [21, Theorem 7.5.2] and represent T in the canonical form (30). Applying T again to (30) yields (31), given that ⟨Φn, Φn⟩ = 1 and ⟨Φm, Φn⟩ = 0 for m ≠ n. Applying T recursively to (31) one gets to (32), which is in fact (22). And the application of Parseval's identity (see [21, Lemma 1.5.14]) to (32) leads to (33). Since the λn form an increasing sequence, from (33) it is clear that (34) holds. Now, let us note that if (2) does not have any nontrivial solution that satisfies (3) either at a, b or at any points interior to [a, b], the first eigenvalue λ1 (and therefore all the others) must be strictly greater than 1. In that case, from (34) one has (23) and (35). From (35) and Lemma 4 one gets (24). Likewise, if there is a nontrivial solution of (2) that satisfies (3) at some c, d interior to [a, b], then λ1 must be such that λ1 < 1. From this and (33) one gets (25). If, in addition, ⟨u, Φ1⟩ ≠ 0, from (33) we get to (36), which is (26). On the other hand, we can write (37). We can divide both sides of (37) by ⟨u, Φ1⟩/λ1 to yield (38). Applying Parseval's identity to (38) one gets (39), which implies that there exists an index n0 such that (40)-(41) hold. From Lemma 4 and (41) one has (42) for n > n0 + 1; that is, (43) for n > n0 + 1. Since Φ1(x) ≠ 0 by hypothesis and λ1 < 1, (43) leads to (27).

Remark 6.
Theorem 5 provides two types of methods to obtain upper and lower bounds for the minimum distance between points , satisfying (3), one based on comparing the norm of with some other constants and another based on comparing the value of at concrete points of [ , ] for different functions . These will be addressed separately in the next two sections.

Bounds for the Distance between a and b Based on ‖Tⁿu‖2

As stated before, Theorem 5 provides methods to get upper and lower bounds for the minimum distance between points a, b satisfying (3), which are based on the comparison of ‖Tⁿu‖2 with some thresholds for different values of the extremes of integration of (4) and (11).
Thus, on the one hand, if a, b are such that there is a nontrivial solution of (2) which satisfies (3) at some c, d interior to [a, b], from (26) (and as long as ⟨u, Φ1⟩ ≠ 0, i.e., u is "close" to Φ1) it is clear that ‖Tⁿu‖2 will grow with the index n regardless of the choice of u, and accordingly there will be an index n0 such that (23) is violated. This allows us to define an algorithm to find progressively better "outer" bounds of the values satisfying (3) by fixing one of them, say a, and calculating the minimum values of the extremes bn which give (44) for different values of n, as the following theorem shows.

Proof. The fact that bn ≥ b is obvious from (23). Now, let us pick an ε > 0. From (26), lim n→∞ ‖Tⁿ(a,b+ε)u‖2 = ∞, which means that there exists an index n1 such that (45) holds. Given that ‖Tⁿ(a,b−ε)u‖2 − ‖u‖2 < 0 for n ≥ n1, from (23) and the continuity of ‖Tⁿ(a,x)u‖2 − ‖u‖2 on x (continuity guaranteed by the hypothesis), there must exist a value bn ∈ [b, b + ε[ such that ‖Tⁿ(a,bn)u‖2 − ‖u‖2 = 0 for each n > n1. This proves the second assertion of the theorem.
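The algorithm above can be sketched as a bisection on the extreme b. The example below is illustrative (conjugate-case kernel, p ≡ 1, starting function u(t) = (t − a)(b − t), all choices of ours), so the exact extreme is b* = π and the roots bn of ‖Tⁿu‖2 = ‖u‖2 decrease towards it as n grows:

```python
import numpy as np

def log_norm_ratio(b, n, a=0.0, m=400):
    """log(‖Tⁿu‖2 / ‖u‖2), conjugate-case kernel, p ≡ 1, u(t) = (t-a)(b-t)."""
    x = np.linspace(a, b, m)
    dx = x[1] - x[0]
    w = np.full(m, dx)
    w[0] = w[-1] = dx / 2
    X, S = np.meshgrid(x, x, indexing="ij")
    G = (np.minimum(X, S) - a) * (b - np.maximum(X, S)) / (b - a)
    u = (x - a) * (b - x)
    u = u / np.sqrt((w * u**2).sum())       # normalize so that ‖u‖2 = 1
    total = 0.0
    for _ in range(n):
        v = G @ (w * u)
        nv = np.sqrt((w * v**2).sum())
        total += np.log(nv)                 # accumulate the log of each step ratio
        u = v / nv
    return total

def outer_bound(n, lo=3.0, hi=3.6):
    """Bisect on b until ‖Tⁿu‖2 = ‖u‖2; the root bn is an upper bound of b* = π."""
    for _ in range(45):
        mid = 0.5 * (lo + hi)
        if log_norm_ratio(mid, n) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(outer_bound(5), outer_bound(20))   # both slightly above π, decreasing with n
```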
The application of the method based on Theorem 7 is quite straightforward, given that the right-hand side of (44), that is, ‖u‖2², is easy to calculate once u is fixed. However, it is worth remarking that, from (34), the closer the selected function u is to Φ1, the smaller the terms ⟨u, Φn⟩ for n > 1 will be and the faster the sequence bn will converge to b. Therefore, although the method can work with any continuous function u such that ⟨u, Φ1⟩ ≠ 0, it is desirable to select one that may be as close as possible to the expected Φ1 associated to the problem (28).
On the other hand, if a, b are such that no nontrivial solution of (2) satisfies (3) either at a, b or at some c, d interior to [a, b], from (24) it is clear that ‖Tⁿu‖2 will shrink with the index n regardless of the choice of u, and accordingly there will be an index n0 such that (25) is violated. As happened before, this allows us to define an algorithm to find progressively better "inner" bounds of the values satisfying (3) by fixing one of them, say a, and calculating the values of the extremes bn which give (46), where μn,b is any lower bound of the right-hand side of (25) which has (in turn) a positive lower bound that does not depend on n. This is shown in the following theorem.

Proof. From (24), lim n→∞ ‖Tⁿ(a,b−ε)u‖2 = 0. This and the fact that μn,b−ε is bounded below by a positive amount which does not vary with n imply that there exists an index n2 such that (47) holds. Given that ‖Tⁿ(a,b+ε)u‖2 − μn,b+ε > 0 for n ≥ n2, from (25) and the continuity of ‖Tⁿ(a,x)u‖2 − μn,x on x, there must exist a value bn ∈ ]b − ε, b] such that ‖Tⁿ(a,bn)u‖2 − μn,bn = 0 for each n > n2. This proves the second assertion of the theorem.
Unlike what happens with the method based on Theorem 7, the method based on Theorem 8 presents some difficulties in its application, due to the fact that neither ⟨u, Φn⟩ nor the eigenvalues λn are known. We can overcome them partially by discarding, in the right-hand side of (25), the terms of the series of the eigenfunctions beyond the first one, that is, by converting (25) into (48), given that the discarded terms are positive (in fact ⟨u, Φ1⟩² may be a possible positive lower bound for some types of terms μn,b used in (46), one which does not depend on n). The resulting method to obtain lower bounds for the distance between a and b will work in the same way as the one described in the previous paragraphs, at the expense of requiring greater values of the index n to violate (48). But even with such a simplification there is still a need to obtain a lower bound for ⟨u, Φ1⟩, which is not evident at all. The following lemmas aim at finding cases (fundamentally choices of u and boundary conditions (3)) where the values of ⟨u, Φ1⟩, ⟨u, Φn⟩, and λn are bounded in a way that allows getting a lower bound for the right-hand side of (25) (or for the right-hand side of (48) if there is no way to do it with (25)), overcoming the problem.
Proof. From (10) one has (51). And since Φn satisfies (28), one yields (52). As G(x, s) is the Green function of the problem (5), the right-hand side of (52) is the value at x of the function satisfying (3) whose second derivative is Φn''/λn, that is, Φn(x)/λn. This proves (49). As for (50), note simply that (53) holds.

Proof. First of all, let us note that p(t) can always be decomposed in the mentioned manner, given that p(t) > 0, and therefore ln p(t) exists in [a, b] and can be expressed as the difference between two increasing functions, that is, (58), which, taking exponentials, becomes (59), with both factors nondecreasing. Now let us consider the functional defined by (60). Using integration by parts and the fact that Φn is orthonormal with respect to the norm (11), it is easy to prove (61). From (3), (61), and the hypothesis one gets (62). And given that the two functions of the decomposition are increasing on [a, b] and that Φn(t) verifies (28) for λ = λn, one has (63)-(64). The application of (60), (63), and (64) to (62) leads to (65). From (3), (60), and (65) it is straightforward to obtain (54)-(57).

Lemma 11. Let λn and Φn be defined as in Theorem 5. Then one has (66).

Proof. From [23, Theorem 8], if we define the angle function φ(t) by (67), where λ > 0 is a real constant, then φ(t) satisfies the equation (68), being, therefore, a decreasing function. Let us fix ]−π/2, π/2[ as the range of the arctan function. In that case, (i) if a12 = 0 we will set φ(a) = π/2; else, we will set φ(a) = arctan(−a11/a12). With this in mind, we will also define (69)-(71). The problem for (72) to be used to get upper bounds for λn is the dependence of its right-hand side on √λn. We can eliminate this dependence by obtaining a suitable upper bound. Accordingly we will define (73)-(74). Taking squares in (74) one finally gets to (66).
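The angle function of the proof is, up to a sign convention, the classical Prüfer angle (the sketch below uses the increasing convention: for y'' + λ p y = 0 with y = r sin θ and y' = r cos θ one gets θ' = cos²θ + λ p(t) sin²θ). Assuming, for illustration only, Dirichlet conditions Φ(a) = Φ(b) = 0 in (28), the eigenvalues λn can be computed by shooting on θ(b) = nπ; with p ≡ 1 on [0, π] this reproduces λn = n²:

```python
import math

def theta_end(lam, p, a, b, m=2000):
    """Integrate the Prüfer angle θ' = cos²θ + λ p(t) sin²θ, θ(a) = 0, with RK4."""
    h = (b - a) / m
    t, th = a, 0.0
    def f(t, th):
        return math.cos(th) ** 2 + lam * p(t) * math.sin(th) ** 2
    for _ in range(m):
        k1 = f(t, th)
        k2 = f(t + h / 2, th + h * k1 / 2)
        k3 = f(t + h / 2, th + h * k2 / 2)
        k4 = f(t + h, th + h * k3)
        th += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return th

def eigenvalue(n, p, a, b, lo=0.0, hi=100.0):
    """n-th Dirichlet eigenvalue of Φ'' + λ p Φ = 0: the λ with θ(b) = nπ."""
    for _ in range(60):                         # θ(b) is increasing in λ, so bisect
        mid = 0.5 * (lo + hi)
        if theta_end(mid, p, a, b) > n * math.pi:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(eigenvalue(1, lambda t: 1.0, 0.0, math.pi))  # ≈ 1
print(eigenvalue(2, lambda t: 1.0, 0.0, math.pi))  # ≈ 4
```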
Finally, by (25) and Lemmas 9-11 we are in a position to prove the next theorem, which allows a straightforward application of the method based on Theorem 8.

Remark 13. It is possible to improve the results of Theorem 12 by using better lower bounds for the squared values of Φn and its derivative at a and b than the ones displayed in Lemma 10, by using better upper bounds for λn than the one presented in Lemma 11, or by calculating more terms in the sums of (75)-(78) before using the integrals to get lower bounds for the remainders of the series.

Bounds for the Distance between a and b Based on Concrete Values of Tⁿu(x)
This section will elaborate on the results of Section 2 in order to obtain upper and lower bounds for the distance between the extremes a, b in a similar way as was done in [1]. A critical condition for that is the positivity of the Green function G(x, s), which is ensured under certain boundary conditions by the next lemma.

Lemma 14. Let a, b be any real numbers such that a ≠ b and x, s ∈ [a, b]. If the boundary conditions (3) verify any of the hypotheses (86)-(89), respectively, then we can represent G(x, s) as follows. We can divide the analysis of (91) in four cases.
is a linear function, positive at the extremes 0 and b − a by the hypothesis, and therefore it must be positive for any x − s ∈ [0, b − a].

Remark 15. Conditions (86)-(89) imply conditions (12) of Lemma 3. In consequence they will replace the latter in the rest of the results of this section.
Once the conditions for the positivity of the Green function have been determined, we can proceed with the key results of this method. In a similar manner we can show that ∂G(x, s)/∂b ≥ 0; that is, fixing x, s, and a, either G(x, s) increases (a21 ≠ 0) or G(x, s) stays constant (a21 = 0) as b increases. This proves (100) and the lemma.

Proof. We will only prove the last case (the proof of the other cases is very similar). Thus, from (97) it follows that, fixing s, G(x, s) is a function increasing with x for x ≤ s and decreasing with x for s < x. Therefore it will have a maximum at x = s, which is exactly (102).
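As a concrete illustration of such a representation, the Green function can be written in closed form for the simplest equation. The sketch below assumes separated conditions α u(a) − β u'(a) = 0, γ u(b) + δ u'(b) = 0 with α, β, γ, δ ≥ 0 (a sign pattern chosen by us for illustration) and the equation −u'' = f: then G(x, s) = y1(min(x, s)) y2(max(x, s))/C with y1(x) = β + α(x − a), y2(x) = δ + γ(b − x), and C = γβ + αδ + αγ(b − a) > 0, so every factor is nonnegative and the positivity is visible at a glance.

```python
import numpy as np

def make_green(a, b, alpha, beta, gamma, delta):
    """Green function of -u'' = f with α u(a) - β u'(a) = 0, γ u(b) + δ u'(b) = 0."""
    C = gamma * beta + alpha * delta + alpha * gamma * (b - a)  # = -Wronskian > 0
    y1 = lambda x: beta + alpha * (x - a)       # satisfies the condition at a
    y2 = lambda x: delta + gamma * (b - x)      # satisfies the condition at b
    return lambda x, s: y1(np.minimum(x, s)) * y2(np.maximum(x, s)) / C

# check against the textbook case u(0) = u(1) = 0, -u'' = 1  =>  u(x) = x(1-x)/2
G = make_green(0.0, 1.0, 1.0, 0.0, 1.0, 0.0)
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
w = np.full_like(x, dx)
w[0] = w[-1] = dx / 2
X, S = np.meshgrid(x, x, indexing="ij")
u = (G(X, S) * w[None, :]).sum(axis=1)          # u(x) = ∫ G(x,s) · 1 ds
print(np.max(np.abs(u - x * (1 - x) / 2)))      # ≈ 0
```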
where is defined by .
In both cases the convergence is uniform for x ∈ [a, b − ε]. Therefore, for every ε > 0 there will exist infinitely many n such that the corresponding inequality holds, which contradicts (120) and (121) and, taking that into account, also contradicts (122).
the sign depending on that of ⟨u, Φ1⟩Φ1(x). Let us suppose that such a sign is positive (the negative case can be treated in the same manner). In that case, for every ε > 0 there will exist infinitely many n such that the corresponding inequalities hold. Since the values at the left-hand side of (126)-(127) are upper bounds of Tⁿ(a,b)u(x), (130) leads to a contradiction and proves the assertion.

The Calculation of Tⁿu(x)
A common key point of Sections 3 and 4 is the importance of the calculation of Tⁿu(x) for different functions u. Bearing this in mind, our aim for this section is to find ways to facilitate such a calculation. That will be done with the following theorems, in all of which the inner product ⟨⋅, ⋅⟩ is assumed to be calculated on the variable s.
Remark 27. The advantage of (133) is that, given (2) and fixing the value n we want to apply to T, it allows easily testing different functions u (as many as we want) in Theorems 7 and 18-23 in a simple way, leaving the complication of the method to the calculation of the iterated kernel Gn−1(x, s) alone.
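In discretized form, the idea of Remark 27 amounts to computing a power of the kernel matrix once and reusing it for as many candidate functions u as desired. An illustrative sketch (trapezoidal discretization, conjugate-case kernel, p ≡ 1, all assumptions of ours):

```python
import numpy as np

a, b, m = 0.0, 1.0, 301
x = np.linspace(a, b, m)
dx = x[1] - x[0]
w = np.full(m, dx)
w[0] = w[-1] = dx / 2
X, S = np.meshgrid(x, x, indexing="ij")
K = (np.minimum(X, S) - a) * (b - np.maximum(X, S)) / (b - a)   # G(x,s), p ≡ 1

T = K * w[None, :]                       # discretized T: u ↦ T @ u
T3 = np.linalg.matrix_power(T, 3)        # iterated kernel, computed once

u = np.sin(np.pi * x)
direct = T @ (T @ (T @ u))               # three successive applications of T
print(np.max(np.abs(T3 @ u - direct)))   # ≈ 0: both give T³u
```

Once T3 is stored, trying a new candidate function costs a single matrix-vector product.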

Some Examples
Throughout this section we will introduce examples where Theorems 7 and 12 and Corollary 20 will be used to determine upper and lower bounds for the distance between the extremes a and b of a solution of (2) for different functions p(t) and boundary conditions (3). For the sake of simplicity, the analysis will fix the value of the starting point a (in all cases zero) and will search for upper and lower bounds of the adjacent right extreme b. The boundary conditions addressed will be disfocality, in the first three examples, and disconjugacy, in the last two, and the examples used to illustrate the disfocal case will be the same as those introduced in [1], in order to compare the bounds obtained via Theorems 7 and 12 with those coming from Corollary 20, which was already obtained in [1] for the disfocal case. A comparison between these bounds and the bounds obtained via other existing methods, like that of Brown and Hinton (see [24]) for lower bounds, will also be made.
In all examples the calculation of Tⁿu will be done numerically and the number of iterations n will be restricted case by case.

Example 1. Let us consider the following boundary value problem
for different values of the constant. The application of Theorem 7 to the chosen starting function, together with Theorem 12 and Corollary 20 (already applied to this problem in [1], with starting function equal to the function Φ(t) of [1, Equation (47)]), gives Table 1.
As can be seen from Table 1, the resulting bounds improve those of [24] or, when equal, require a higher number of iterations than [1]. And, as was mentioned in [1], it is worth remarking on the good accuracy that Brown and Hinton's method gives for lower bounds with relatively low calculation effort.
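The convergence behind tables of this kind can be reproduced with a short computation. The sketch below is illustrative rather than a reproduction of the paper's data: it assumes p ≡ K constant and the focal conditions y(0) = 0, y'(b) = 0, whose Green function for −y'' = f is G(x, s) = min(x, s) and whose exact critical distance is π/(2√K); the roots bn of ‖Tⁿu‖2 = ‖u‖2 then approach that value:

```python
import numpy as np

def log_ratio(bb, n, K, m=400):
    """log(‖Tⁿu‖2/‖u‖2) for Tu(x) = ∫ min(x,s) K u(s) ds (y(0) = 0, y'(b) = 0)."""
    x = np.linspace(0.0, bb, m)
    dx = x[1] - x[0]
    w = np.full(m, dx)
    w[0] = w[-1] = dx / 2
    X, S = np.meshgrid(x, x, indexing="ij")
    G = np.minimum(X, S)
    u = x / np.sqrt((w * K * x**2).sum())    # starting function u(t) = t, normalized
    total = 0.0
    for _ in range(n):
        v = G @ (w * K * u)
        nv = np.sqrt((w * K * v**2).sum())
        total += np.log(nv)
        u = v / nv
    return total

def outer_bound(n, K, lo=0.5, hi=3.0):
    """Bisect on b: the root of ‖Tⁿu‖2 = ‖u‖2 converges to π/(2√K)."""
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if log_ratio(mid, n, K) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for K in (1.0, 4.0):
    print(K, outer_bound(20, K), np.pi / (2 * np.sqrt(K)))   # bn ≈ exact distance
```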
Example 2. Let us consider the following boundary value problem: for different values of the constant. The application of Theorem 7 to the chosen starting function, together with Theorem 12 and Corollary 20 (already applied to this problem in [1]), gives Table 2, where one can observe results and trends similar to those commented on in the previous example.
Example 3. Let us consider the following boundary value problem: for different values of the constant. The application of Theorem 7 to the chosen starting function, together with Theorem 12 and Corollary 20 (already applied to this problem in [1]), gives Table 3.
In this third example one can observe that Theorem 12 finally gets to yield greater (i.e., better) lower bounds of b than those provided by Corollary 20 and Brown and Hinton's method [24], with a smaller number of iterations n.
Example 4. Let us consider the following boundary value problem for different values of the constant. The application of Theorem 7 and Corollary 20, (118), to the function v(t) defined by (140), together with Theorem 12 and Corollary 20, (117), gives Table 4.
In this example the methods based on the comparison of norms yield better results than those based on the comparison of values of Tⁿv. The reason for this may be that the boundary conditions of this case do not allow knowing in advance the value of the maximum of Tⁿv to be used in the comparison (note that in the three previous examples the maximum was always placed at b), causing the selection of Tⁿv((a + b)/2) as comparison value in the case of Corollary 20, (118), and the use of G(x, s) in the integral in the case of Corollary 20, (117), both of which require further iterations to surpass the threshold value 1.
Example 5. Let us consider the following boundary value problem for different values of the constant. The application of Theorem 7 and Corollary 20, (118), to the function v(t) defined by (140), together with Theorem 12 and Corollary 20, (117), gives Table 5, where one can observe results and trends similar to those commented on in the previous example.

Conclusions
Throughout this paper two different types of methods have been provided to assess whether there are nontrivial solutions of (2) satisfying the boundary conditions (3) at extremes inner, equal, or outer to other fixed extremes a, b. Both rely on an iterative application of the operator T defined in (4) and allow obtaining sequences of extremes an, bn that converge to the exact values of the extremes for which a solution satisfies (3). The latter characteristic is by far the most relevant one when one compares them with other methods present in the literature, since these latter only provide bounds (in fact, very often only lower bounds or Lyapunov inequalities, since the number of methods dealing with upper bounds is quite low) which cannot be improved following the same approach.
In addition it is worth remarking that the set of methods presented covers almost all possible boundary conditions (3), despite the fact that some of them (namely, the method based on Theorem 12 and those based on Theorems 18 and 19) only apply under a limited subset of boundary conditions (3). The reason for this is, on the one hand, that the only conditions common to all these methods are those of Lemma 3 (which indeed are relatively easy to satisfy) and, on the other hand, that Theorem 7 does not require any additional conditions at all and Theorem 12 requires additional conditions on (3) which complement those required by Theorems 18 and 19. As for which method is better in which case, it is difficult to decide, but the examples show that in the quest for outer bounds the method based on Theorem 7 normally converges much faster than the one based on Theorem 18. In the search for inner bounds, as indicated before, Theorems 12 and 19 cannot be applied simultaneously to most of the boundary conditions (3), but in the cases where that is possible, the speed of convergence depends on the concrete boundary conditions and the concrete function p(t). In general it seems that boundary conditions aiming for a maximum of the solution at a point different from a or b tend to favour the method based on Theorem 12 over the method based on Theorem 19, the opposite occurring (with exceptions, as Example 3 shows) when the boundary conditions force maxima of the solution at either a or b.
As for the main drawbacks, we can comment on two. On the one hand, the speed of convergence of the method depends heavily on the selected starting function and can be low in some cases, many iterations being necessary to approximate the values of the extremes a and b that satisfy (3). On the other hand, as they have been presented, the methods cannot deal with functions p(t) which are zero or negative on a subset of positive measure. However it can be shown, as was done in [1], that these methods can be extended to the case of p(t) zero or negative, in a way similar to most of the criteria published so far for the same problem.
All in all, and taking the mentioned constraints into consideration, the authors believe that the advantages of the methods presented in this paper largely outweigh their drawbacks and that they can become a very powerful tool to assess problems of conjugacy/disconjugacy, nondisfocality/disfocality, and so forth, for (2), whether directly or by means of other methods based on them which in turn improve them.