Delta-Nabla Type Maximum Principles for Second-Order Dynamic Equations on Time Scales and Applications

where
$$A := \{t \in \mathbb{T} : t \text{ is left-dense and right-scattered}\}, \quad \mathbb{T}_A := \mathbb{T} \setminus A,$$
$$B := \{t \in \mathbb{T} : t \text{ is left-scattered and right-dense}\}, \quad \mathbb{T}_B := \mathbb{T} \setminus B. \tag{17}$$

Corollary 10 (see [22]). If $f : \mathbb{T} \to \mathbb{R}$ is $\Delta$-differentiable and $f^\Delta$ is continuous on $\mathbb{T}$, and $g : \mathbb{T} \to \mathbb{R}$ is $\nabla$-differentiable and $g^\nabla$ is continuous on $\mathbb{T}_k$, then
$$f^\nabla(t) = f^\Delta(\rho(t)) \quad \text{for } t \in \mathbb{T}_k, \qquad g^\Delta(t) = g^\nabla(\sigma(t)) \quad \text{for } t \in \mathbb{T}^k. \tag{18}$$

Theorem 11 (see [21]). Assume $f, g : \mathbb{T} \to \mathbb{R}$ are differentiable at $t \in \mathbb{T}$. Then:
(i) the sum $f + g : \mathbb{T} \to \mathbb{R}$ is differentiable at $t$ with
$$(f + g)^\Delta(t) = f^\Delta(t) + g^\Delta(t); \tag{19}$$
(ii) for any constant $\alpha$, $\alpha f : \mathbb{T} \to \mathbb{R}$ is differentiable at $t$ with
$$(\alpha f)^\Delta(t) = \alpha f^\Delta(t); \tag{20}$$
(iii) the product $fg : \mathbb{T} \to \mathbb{R}$ is differentiable at $t$ with
$$(fg)^\Delta(t) = f^\Delta(t) g(t) + f(\sigma(t)) g^\Delta(t) = f(t) g^\Delta(t) + f^\Delta(t) g(\sigma(t)); \tag{21}$$
(iv) if $g(t) g(\sigma(t)) \neq 0$, then $f/g$ is differentiable at $t$ with
$$\left(\frac{f}{g}\right)^\Delta(t) = \frac{f^\Delta(t) g(t) - f(t) g^\Delta(t)}{g(t) g(\sigma(t))}. \tag{22}$$

Theorem 12 (see [22]). Assume $f, g : \mathbb{T} \to \mathbb{R}$ are nabla differentiable at $t \in \mathbb{T}_k$. Then:
(i) the sum $f + g : \mathbb{T} \to \mathbb{R}$ is nabla differentiable at $t$ with
$$(f + g)^\nabla(t) = f^\nabla(t) + g^\nabla(t); \tag{23}$$
(ii) for any constant $\alpha$, $\alpha f : \mathbb{T} \to \mathbb{R}$ is nabla differentiable at $t$ with
$$(\alpha f)^\nabla(t) = \alpha f^\nabla(t); \tag{24}$$
(iii) the product $fg : \mathbb{T} \to \mathbb{R}$ is nabla differentiable at $t$ with
$$(fg)^\nabla(t) = f^\nabla(t) g(t) + f(\rho(t)) g^\nabla(t) = f(t) g^\nabla(t) + f^\nabla(t) g(\rho(t)); \tag{25}$$
(iv) if $g(t) g(\rho(t)) \neq 0$, then $f/g$ is nabla differentiable at $t$ with
$$\left(\frac{f}{g}\right)^\nabla(t) = \frac{f^\nabla(t) g(t) - f(t) g^\nabla(t)}{g(t) g(\rho(t))}. \tag{26}$$

Theorem 13 (see [22]). If $f$, $f^\Delta$, and $f^\nabla$ are continuous, then
(i) $\left[\int_a^t f(t,s)\,\Delta s\right]^\Delta = \int_a^t f^\Delta(t,s)\,\Delta s + f(\sigma(t), t)$;
(ii) $\left[\int_a^t f(t,s)\,\Delta s\right]^\nabla = \int_a^t f^\nabla(t,s)\,\Delta s + f(\rho(t), \rho(t))$;
(iii) $\left[\int_a^t f(t,s)\,\nabla s\right]^\Delta = \int_a^t f^\Delta(t,s)\,\nabla s + f(\sigma(t), \sigma(t))$;
(iv) $\left[\int_a^t f(t,s)\,\nabla s\right]^\nabla = \int_a^t f^\nabla(t,s)\,\nabla s + f(\rho(t), t)$.

Definition 14 (see [21]).
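On $\mathbb{T} = \mathbb{Z}$ the delta and nabla derivatives reduce to the forward and backward differences, with $\sigma(t) = t + 1$ and $\rho(t) = t - 1$. The following sketch (our own, for illustration; the function choices `f` and `g` are arbitrary) checks the product rules of Theorems 11(iii) and 12(iii) pointwise:

```python
# On T = Z: h^Delta(t) = h(t+1) - h(t) and h^nabla(t) = h(t) - h(t-1).
f = lambda t: t * t
g = lambda t: 2 * t + 1

def delta(h, t):   # delta derivative on Z (sigma(t) = t + 1)
    return h(t + 1) - h(t)

def nabla(h, t):   # nabla derivative on Z (rho(t) = t - 1)
    return h(t) - h(t - 1)

fg = lambda t: f(t) * g(t)
for t in range(-3, 4):
    # (fg)^Delta = f^Delta g + f(sigma) g^Delta   (Theorem 11(iii))
    assert delta(fg, t) == delta(f, t) * g(t) + f(t + 1) * delta(g, t)
    # (fg)^nabla = f^nabla g + f(rho) g^nabla     (Theorem 12(iii))
    assert nabla(fg, t) == nabla(f, t) * g(t) + f(t - 1) * nabla(g, t)
```

Both identities hold exactly for any pair of sequences, since each side telescopes to $f(\sigma(t))g(\sigma(t)) - f(t)g(t)$ (resp. $f(t)g(t) - f(\rho(t))g(\rho(t))$).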
One says that a function $p : \mathbb{T} \to \mathbb{R}$ is regressive provided $1 + \mu(t) p(t) \neq 0$ holds for all $t \in \mathbb{T}$. The set of all regressive and rd-continuous functions $f : \mathbb{T} \to \mathbb{R}$ will be denoted by $\mathcal{R} = \mathcal{R}(\mathbb{T}) = \mathcal{R}(\mathbb{T}, \mathbb{R})$.

Definition 15 (see [21]). One defines the cylinder transformation $\xi_h(z) = (1/h)\log(1 + zh)$ ($\xi_h : C_h \to Z_h$), where $h > 0$. If $p \in \mathcal{R}$, then one defines the exponential function by
$$e_p(t, s) = \exp\left(\int_s^t \xi_{\mu(\tau)}(p(\tau))\,\Delta\tau\right) \quad \text{for } s, t \in \mathbb{T}.$$
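On $\mathbb{T} = \mathbb{Z}$ (graininess $\mu \equiv 1$) the exponential of Definition 15 reduces to a finite product, $e_p(t, t_0) = \prod_{s=t_0}^{t-1}(1 + p(s))$, and regressivity simply means $1 + p(s) \neq 0$. A small sketch (our own, illustrative) verifying that this product solves the first-order dynamic IVP $y^\Delta(t) = p(t)y(t)$, $y(t_0) = 1$:

```python
from functools import reduce

def e_p(p, t, t0):
    """Time-scale exponential on T = Z: prod over s in [t0, t) of (1 + p(s))."""
    return reduce(lambda acc, s: acc * (1 + p(s)), range(t0, t), 1.0)

p = lambda s: 0.5  # a constant regressive choice: 1 + 0.5 != 0

# e_p solves  y^Delta(t) = p(t) y(t)  with  y(t0) = 1:
for t in range(0, 6):
    y, y_next = e_p(p, t, 0), e_p(p, t + 1, 0)
    assert abs((y_next - y) - p(t) * y) < 1e-12
assert e_p(p, 0, 0) == 1.0  # e_p(t0, t0) = 1, as used repeatedly below
```

The normalization $e_p(t_0, t_0) = 1$ is the property invoked in the maximum-principle proofs of Section 3.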


Introduction
Maximum principles are a well-known tool for studying differential equations; they can be used to obtain a priori information about solutions of differential inequalities, to construct lower and upper solutions of differential equations, and so on. Maximum principles fall into continuous maximum principles and discrete maximum principles, and many results and applications are known for both; for these theories and applications, we refer to [1-15] and the references therein. On the other hand, Hilger [16] established the theory of time scales calculus in 1990 to unify continuous and discrete calculus. Since then, ordinary dynamic equations and partial dynamic equations on time scales have been extensively studied; see, for example, [17-23] and the references therein. However, little work has been done on maximum principles on time scales; for this, we refer to Stehlík and Thompson's recent works [24, 25].
Inspired by the above works, we devote this paper to delta-nabla type maximum principles for second-order dynamic equations on one-dimensional time scales and to applications of these maximum principles.
This paper is organized as follows. In Section 2, we state and prove some basic notations and results on time scales. In Section 3, we first prove some delta-nabla type maximum principles for second-order dynamic equations on time scales; then, using these maximum principles, we derive maximum principles for second-order mixed forward and backward difference dynamic systems and discuss the oscillation of second-order mixed delta-nabla differential equations. In Section 4, we apply the maximum principles proved in Section 3 to obtain uniqueness of solutions, approximation techniques for solutions, an existence theorem, and construction techniques for lower and upper solutions of second-order linear initial value problems. In Section 5, we do the same for second-order linear boundary value problems. Finally, in Section 6, we extend the results established for linear operators in Sections 4 and 5 to nonlinear operators.

Preliminaries
Definition 1 (see [22]). A time scale $\mathbb{T}$ is a nonempty closed subset of the real numbers. Throughout this paper, $\mathbb{T}$ denotes a time scale.

Definition 2 (see [22]). Let $\mathbb{T}$ be a time scale. For $t \in \mathbb{T}$ one defines the forward jump operator $\sigma : \mathbb{T} \to \mathbb{T}$ by $\sigma(t) := \inf\{s \in \mathbb{T} : s > t\}$, and the backward jump operator $\rho : \mathbb{T} \to \mathbb{T}$ by $\rho(t) := \sup\{s \in \mathbb{T} : s < t\}$. The nabla derivative $f^\nabla(t)$ is defined whenever the corresponding limit (12) exists as a finite number. In this case:

(iv) if $f$ is nabla differentiable at $t$, then
$$f(\rho(t)) = f(t) - \nu(t) f^\nabla(t).$$
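The jump operators of Definition 2 can be illustrated on a concrete finite time scale. The following sketch is our own (the sample set `T` and function names are chosen for illustration only):

```python
import bisect

# A closed, bounded subset of R serving as a finite time scale.
T = [0, 1, 2, 2.5, 4, 4.5, 5]

def sigma(t):
    """Forward jump: smallest point of T strictly greater than t (t itself at max T)."""
    i = bisect.bisect_right(T, t)
    return T[i] if i < len(T) else t

def rho(t):
    """Backward jump: largest point of T strictly less than t (t itself at min T)."""
    i = bisect.bisect_left(T, t)
    return T[i - 1] if i > 0 else t

def mu(t):   # forward graininess mu(t) = sigma(t) - t
    return sigma(t) - t

def nu(t):   # backward graininess nu(t) = t - rho(t)
    return t - rho(t)

# t = 2 is right-scattered here: sigma(2) = 2.5 and mu(2) = 0.5 > 0.
assert sigma(2) == 2.5 and mu(2) == 0.5
# Endpoints jump to themselves: sigma(max T) = max T, rho(min T) = min T.
assert sigma(5) == 5 and rho(0) == 0
```

A point $t$ is right-scattered when $\mu(t) > 0$ and right-dense when $\mu(t) = 0$; the left-hand notions use $\nu$ in the same way.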
According to the above theorems and definitions, we can obtain the following corollary.
Corollary 19. Suppose (28) is regressive and fix $t_0 \in \mathbb{T}^k$. If one chooses $p(t) \equiv p$, where $p$ is a positive constant, then the following equality holds on $\mathbb{T}^k$.
According to the above theorems and definitions, we can obtain the following corollary.
Corollary 26. Suppose (35) is regressive and fix $t_0 \in \mathbb{T}_k$. If one chooses $p(t) \equiv p$, where $p$ is a negative constant, then the following equality holds on $\mathbb{T}_k$.
Carrying out the computation step by step, we obtain the stated equality.

Theorem 27 (see [22]). Let $f$ be a continuous function on $[a, b]_{\mathbb{T}}$ that is delta differentiable on $[a, b)_{\mathbb{T}}$. Then $f$ is increasing, decreasing, nondecreasing, or nonincreasing on $[a, b]_{\mathbb{T}}$ according as $f^\Delta(t) > 0$, $f^\Delta(t) < 0$, $f^\Delta(t) \ge 0$, or $f^\Delta(t) \le 0$ for all $t \in [a, b)_{\mathbb{T}}$, respectively.

Definition 28. One says that a function $f : \mathbb{T} \to \mathbb{R}$ attains a local left-maximum at $t_0$ provided that (i) if $t_0$ is left-scattered, then $f(t_0) \ge f(\rho(t_0))$; (ii) if $t_0$ is left-dense, then there is a neighbourhood $U$ of $t_0$ such that $f(t_0) > f(t)$ for all $t \in U$ with $t_0 > t$.
Theorem 31. Suppose $f : \mathbb{T} \to \mathbb{R}$ is nabla differentiable at $t_0$. If $f$ attains a local left-minimum at $t_0$, then $f^\nabla(t_0) \le 0$; if $f$ attains a local left-maximum at $t_0$, then $f^\nabla(t_0) \ge 0$.

Proof. Suppose that $f$ attains its local left-minimum at $t_0$. To show that $f^\nabla(t_0) \le 0$, we assume the opposite, that is, $f^\nabla(t_0) > 0$. Then $f$ is left-increasing by Theorem 29, contrary to the assumption that $f$ attains its local left-minimum at $t_0$. Thus, we must have $f^\nabla(t_0) \le 0$. The second statement can be shown similarly.

Delta-Nabla Type Maximum Principles
In this paper, we denote by $\Lambda := [a, b]_{\mathbb{T}}$ an interval on time scales. We study those functions defined on $\Lambda$ which belong to $D(\Lambda)$, where $D(\Lambda)$ is the set of all functions $u : \Lambda \to \mathbb{R}$ such that $u^\Delta$ is continuous on $[a, b)_{\mathbb{T}}$, $u^\nabla$ is continuous on $(a, b]_{\mathbb{T}}$, and $u^{\Delta\nabla}$ exists in $(a, b)_{\mathbb{T}}$.
First we give a necessary condition for $u(t) \in D(\Lambda)$ to attain its maximum at some point $t_0 \in (a, b)_{\mathbb{T}}$.
Lemma 34. If $u(t) \in D(\Lambda)$ attains a maximum at a point $t_0 \in (a, b)_{\mathbb{T}}$, then $u^\nabla(t_0) \ge 0$ and $u^\Delta(t_0) \le 0$. The strict inequality in the last two inequalities can occur only at left-scattered points.
Proof. Let us divide our proof into three parts.
(i) If $t_0$ is left-scattered, then the maximality of $u$ at $t_0$ implies that $u^\nabla(t_0) \ge 0$ and $u^\Delta(t_0) \le 0$, and consequently the conclusion holds.

(ii) If $t_0$ is left-dense and right-scattered, then $u^\Delta(t_0) \le 0$. If there is no positive sequence $\{h_n\}$ such that $\lim_{n\to\infty} h_n = 0$ and $u^\Delta(t_0 - h_n) \ge 0$, then there exists a $\delta > 0$ such that $u^\Delta(t) < 0$ for each $t \in [t_0 - \delta, t_0)_{\mathbb{T}}$; by Theorem 27, this contradicts the fact that $u$ attains its maximum at the interior point $t_0$ of $(a, b)_{\mathbb{T}}$. Thus, there exists $\{h_n\}$ such that $\lim_{n\to\infty} h_n = 0$ and $u^\Delta(t_0 - h_n) \ge 0$. This yields $\lim_{n\to\infty} u^\Delta(t_0 - h_n) \ge 0$. Furthermore, the continuity of the delta derivative $u^\Delta(t)$ implies that $u^\Delta(t_0) \ge 0$, and consequently $u^\Delta(t_0) = 0$. Then by using Corollary 10 we have that $u^\nabla(t_0) = u^\Delta(\rho(t_0)) = u^\Delta(t_0) = 0$.

(iii) If $t_0$ is left-dense and right-dense, then the maximality of $u$ at $t_0$ and standard continuous necessary conditions imply that $u^\Delta(t_0) = u^\nabla(t_0) = 0$.

According to Lemma 34, we can obtain the first simple maximum principle for the time scale.
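Lemma 34 is easy to check numerically on $\mathbb{T} = \mathbb{Z}$, where every interior point is left-scattered: at an interior maximum the backward difference is nonnegative and the forward difference is nonpositive. A small illustrative sketch (the concave sequence is our own choice):

```python
# A sequence with an interior maximum at t0 = 3 on [0, 6] in Z.
u = {t: -(t - 3) ** 2 for t in range(0, 7)}

t0 = max(range(1, 6), key=lambda t: u[t])  # interior maximum point
assert t0 == 3
assert u[t0] - u[t0 - 1] >= 0   # u^nabla(t0) >= 0
assert u[t0 + 1] - u[t0] <= 0   # u^Delta(t0) <= 0
```

Here both inequalities are in fact strict, which is consistent with the lemma: strict inequality is possible precisely because $t_0$ is left-scattered.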
We give a variant of Corollary 35 in which we weaken the condition $u^{\Delta\nabla} > 0$.
Finally, $e_\lambda(t_0, t_0) = 1$ shows that $w$ attains its maximum in $(a, t_1)_{\mathbb{T}}$. However, this contradicts Corollary 35. Let us define a function $w(t) \in D(\Lambda)$ by
$$w(t) := u(t) + \varepsilon z(t),$$
where $\varepsilon > 0$ is chosen sufficiently small. Furthermore, the definition of $w$, together with $z(t_0) = 0$, yields that $w$ attains its maximum in $(t_1, b)_{\mathbb{T}}$, which is a contradiction with Corollary 35. The proof is completed.
As a natural extension of the above simple maximum principle, we consider an operator $L$ of the following type. By the above results, we can obtain Theorem 37.
Proof. We suppose that $L[u](t_0) > 0$ at some point $t_0 \in (a, b)_{\mathbb{T}}$ and that $u$ attains its maximum at a point $t_0$. We divide our proof into two parts.

(i) If $t_0$ is left-scattered, then multiplying $L[u](t_0)$ by $\nu(t_0)$, we obtain $\nu(t_0) L[u](t_0) > 0$. However, it follows from Lemma 34 and the conditions that $\nu(t_0) L[u](t_0) \le 0$, which is a contradiction.
(ii) If $t_0$ is left-dense, then by Lemma 34 we know that $u^\Delta(t_0) = u^\nabla(t_0) = 0$. Therefore, $L[u](t_0)$ reduces to a positive multiple of $u^{\Delta\nabla}(t_0)$, which is a contradiction with Lemma 34. Combining the proofs of (i) and (ii), we get that $u$ cannot attain its maximum at $t_0$. The proof is completed.
Proof. Assume that $u$ attains its maximum $M$ at a point $t_0$ in $(a, b)_{\mathbb{T}}$ but does not identically equal $M$. That is, $u(t_0) = M$, and there exists $t^* \in (a, b)_{\mathbb{T}}$ such that $u(t^*) < M$. Let us assume first that $t_0 < t^*$ and define a function $z(t) \in D(\Lambda)$ as above. Thus, by (93) we can take $\lambda > 0$ sufficiently large, and set $w(t) := u(t) + \varepsilon z(t)$, where $\varepsilon > 0$ is chosen suitably. If $t_1 = \rho(t_0) = t_0$, since $e_\lambda(t, t_0) < 1$ for $t < t_0$, we have that $z(t) < 0$ there. Moreover, the definition of $w$ and $e_\lambda(t_0, t_0) = 1$ imply that $w(t_0) = M$. It follows that $w$ has a maximum in $(a, t^*)_{\mathbb{T}}$, which is a contradiction with Theorem 37. If $t_1 < t_0$, then we have $u(t_1) < M$. It follows that $w$ has a maximum in $(t_1, t^*)_{\mathbb{T}}$. This is again a contradiction with Theorem 37. Thus, we have proved that if $t_0 \in (a, b)_{\mathbb{T}}$ is a maximum point, then $u(t) = M$ for any $t \ge t_0$. Let $c$ denote the infimum of such maximum points. From this, we obtain that $u(c) = M$ and $u^\Delta(c) = 0$. If $c$ is left-scattered, then, since $L[u](c) \ge 0$, we multiply $L[u](c)$ by $\nu(c)$ and obtain a contradiction. If $c$ is left-dense, we define $w(t) := u(t) + \varepsilon z(t)$ as before, with $\varepsilon > 0$ and $\lambda > 0$ chosen suitably. By Theorem 37 we know that $w$ cannot attain its maximum in $(a, b)_{\mathbb{T}}$. Note that $w(c) = u(c) = M$ is then the maximum of $w$ on $[a, b]_{\mathbb{T}}$. Since $u(t) = M$ for any $t \ge c$ and $z(t)$ is increasing for $t \ge c$, we have that $w^\Delta(c) \ge 0$; however, we also have that $w^\Delta(c) < 0$. This is a contradiction. The proof is completed.
In Theorem 38, if we take $\mathbb{T} = \mathbb{R}$, we have the following corollary, which is the result that appeared in [3].
In Theorem 38, if we take $\mathbb{T} = \mathbb{Z}$, where $\mathbb{Z}$ is the set of all integers, we obtain the following new maximum principle for second-order mixed $\Delta$ and $\nabla$ difference dynamic systems.

Corollary 40. Assume that the functions $g_1$ and $g_2$
then $u$ cannot attain its maximum $M$ in $(a, b)_{\mathbb{Z}}$, unless $u \equiv M$.
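On $\mathbb{T} = \mathbb{Z}$ the mixed second derivative is the familiar second central difference, $u^{\Delta\nabla}(t) = u(t+1) - 2u(t) + u(t-1)$. The following sketch (our own, with an illustrative convex sequence) shows the simplest instance of the discrete maximum principle: if $u^{\Delta\nabla} \ge 0$ on the interior of $[a, b]_{\mathbb{Z}}$, the maximum sits on the boundary:

```python
# Discrete maximum principle on [0, 10] in Z for the operator u^{Delta nabla}.
a, b = 0, 10
u = [t * t - 3 * t for t in range(a, b + 1)]  # convex: u^{Delta nabla}(t) = 2

# u^{Delta nabla}(t) = u(t+1) - 2u(t) + u(t-1) >= 0 at every interior point:
assert all(u[t + 1] - 2 * u[t] + u[t - 1] >= 0 for t in range(1, b))

# ... hence the maximum over the whole interval is attained at the boundary:
assert max(u) == max(u[0], u[-1])
```

The corollary sharpens this: a nonconstant $u$ with the stated inequalities cannot attain its maximum at any interior point at all.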
To show that conditions (91), (92), and (93) are necessary for the validity of our results, we give the following examples.
Example 41. Let $\mathbb{T} = \{q^n : n \in \mathbb{Z}\} \cup \{0\}$, where $\mathbb{Z}$ is the set of all integers and $q > 1$, and let $u$ be defined as indicated. Then the relevant coefficient is bounded on any closed subinterval of $[1, q^9]_{\mathbb{T}}$. Thus, conditions (91) and (93) hold, but (92) does not hold. The conclusion of Theorem 38 also does not hold, since $u$ attains its maximum at $q^8$ in $(1, q^9)_{\mathbb{T}}$, but $u$ is not constant.
In Theorem 44, if we take $\mathbb{T} = \mathbb{R}$, we have the following corollary, which is an improvement of the result that appeared in [3].
In Theorem 44, if we take $\mathbb{T} = \mathbb{Z}$, where $\mathbb{Z}$ is the set of all integers, we obtain the following new maximum principle for second-order mixed $\Delta$ and $\nabla$ difference dynamic systems.
If $t^* > t_0$, we define a function $z(t) \in D(\Lambda)$ as in the proof of Theorem 38, and, similarly to that proof, choose $\lambda$ sufficiently large and $\varepsilon > 0$ suitably, setting $w(t) := u(t) + \varepsilon z(t)$. Since $e_\lambda(t, t_0) < 1$ for $t < t_0$, while $e_\lambda(t_0, t_0) = 1$ implies that $w(t_0) = M$, it follows that $w$ has a maximum in $(a, t^*)_{\mathbb{T}}$; on $(t_1, t^*)_{\mathbb{T}}$ this contradicts Theorem 37. Thus, we have proved that if $t_0 \in (a, b)_{\mathbb{T}}$ is a maximum point, then $u(t) = M$ for any $t \ge t_0$. Let $c$ denote the infimum of such maximum points; then $u(c) = M$ and $u^\Delta(c) = 0$. If $c$ is left-scattered, then, similarly to the proof of Theorem 38, we obtain a contradiction. If $c$ is left-dense, let $w(t) := u(t) + \varepsilon z(t)$, where $\varepsilon > 0$, and choose the point $d$ close enough to $c$ that $1 - \lambda\mu > 0$ and $1 - \lambda\nu > 0$ on $[c, d]_{\mathbb{T}}$, with $\lambda > 0$ chosen suitably. Therefore, $w(c) = u(c) = M$ is the maximum of $w$ on $[a, b]_{\mathbb{T}}$. This implies that $w^\Delta(c) \ge 0$; however, we also have that $w^\Delta(c) < 0$. This is a contradiction. The proof is completed.
In Theorem 47, if we take $\mathbb{T} = \mathbb{R}$, we have the following corollary, which is the result that appeared in [3]. In Theorem 47, if we take $\mathbb{T} = \mathbb{Z}$, where $\mathbb{Z}$ is the set of all integers, we obtain the following new maximum principle for second-order mixed $\Delta$ and $\nabla$ difference dynamic systems.
All of the above results investigate the behavior of functions inside the considered interval. Now, we discuss the behavior of functions at the boundary points.
Proof. We suppose that $u$ attains its nonnegative maximum $M$ at $a$, that is, $u(a) = M$, and that there exists a point $t_0 \in [a, b]_{\mathbb{T}}$ such that $u(t_0) < M$. We define a function $z(t) \in D(\Lambda)$, where $\lambda > 0$; similarly to the proof of Theorem 38, we can choose $\lambda$ large enough. Moreover, we define a function $w(t) \in D(\Lambda)$ by $w(t) := u(t) + \varepsilon z(t)$, with $\varepsilon > 0$ chosen suitably. Applying Theorem 47 to $w$ on $[a, t_0]_{\mathbb{T}}$, we get that $w$ attains its maximum at $a$ or $t_0$. Note that $w(a) = M > w(t_0)$, and thus $w$ attains its maximum at $a$. Therefore, the unilateral derivative of $w$ at $a$ is not positive: $w^\Delta(a) \le 0$. However, $z^\Delta(a) > 0$, and hence $u^\Delta(a) < 0$. If $u(b) = M$, we can prove $u^\nabla(b) > 0$ in a similar way.
The proof is completed.
In Theorem 51, if we take $\mathbb{T} = \mathbb{R}$, we have the following corollary, which is the result that appeared in [3]. (1) If $u$ attains its nonnegative maximum at the point $a$, then $u'(a) < 0$; (2) if $u$ attains its nonnegative maximum at the point $b$, then $u'(b) > 0$.
In Theorem 51, if we take $\mathbb{T} = \mathbb{Z}$, where $\mathbb{Z}$ is the set of all integers, we obtain the following new maximum principle for second-order mixed $\Delta$ and $\nabla$ difference dynamic systems.
From Theorems 47 and 51 and Lemma 54, we obtain the following theorem.
In Theorem 55, if we take $\mathbb{T} = \mathbb{Z}$, where $\mathbb{Z}$ is the set of all integers, we obtain the following new maximum principle for second-order mixed $\Delta$ and $\nabla$ difference inequalities.
Corollary 57. Assume that the functions $h$, $w$, $g_1$, and $g_2$ satisfy the above conditions. To show the value of Theorem 55, we need the following definition.
Remark 59. Theorem 55 shows that a function $u$ which satisfies (151) cannot oscillate too rapidly. In fact, if $u > 0$ between two of its sign-change points $t = c$ and $t = d$, then $u/w$ must have a positive maximum between them, so Theorem 55 would be violated. Thus, we have the following corollary.

Corollary 60. Assume $u(t) \in D(\Lambda)$ satisfies $(L + h)[u] \ge 0$; then $u$ can have at most two sign-change points (between which $u$ is negative) in any interval $(a, b)_{\mathbb{T}}$ in which Theorem 55 holds.
By applying the same reasoning to both $u$ and $-u$, we can obtain the following corollary.

Lemma 63. Let $u(t)$ be a solution of the equation under consideration, where $g_1$, $g_2$, $h$, and $w$ satisfy the conditions of Theorem 55. If $u$ is not identically zero and $u(a) = 0$, then $u$ cannot vanish in some right neighbourhood of $a$.
Proof. If $a$ is right-scattered, then $u(\sigma(a)) \neq 0$. Otherwise, we have that $u^\Delta(a) = 0$; this shows that $v := u/w$ satisfies $v(a) = 0$. Then we can obtain $v(t) \equiv 0$. In fact, according to Theorem 55, $v = u/w$ cannot attain its maximum nor its minimum at $a$. If $v$ attains its maximum in $(a, b)_{\mathbb{T}}$, then $v \equiv 0$ since $v(a) = 0$. If $v$ attains its maximum at $b$, then $-v$ attains its maximum in $(a, b)_{\mathbb{T}}$. Next we apply Theorem 47 to $-v$ and obtain that $-v(t)$ is constant; then $v(t) \equiv 0$ since $-v(a) = 0$.
Thus, in all cases we get that $v(t) \equiv 0$; this implies that $u(t) \equiv 0$, which is a contradiction with the assumption.
If $a$ is right-dense, we claim that $u$ cannot vanish in some right neighbourhood of $a$. In fact, if it did, there would exist a sequence $t_n \to a^+$ with $u(t_n) = 0$; then $u^\Delta(a) = \lim_{n\to\infty} (u(a) - u(t_n))/(a - t_n) = 0$. Again we obtain that $u(t) \equiv 0$ by a proof similar to the above, which is a contradiction with the assumption. Thus, $u$ cannot vanish in some right neighbourhood of $a$. On the other hand, if $d$ is any point in $(a, a^*)_{\mathbb{T}}$, a function $w$ can be found so that $u/w$ satisfies the maximum principle of Theorem 55. To see this, we observe first that $u(t)$ is bounded from below by a positive number on any subinterval $[c, d]_{\mathbb{T}}$ contained in $(a, a^*)_{\mathbb{T}}$. Consequently, for sufficiently small $\varepsilon > 0$, the function $w(t) = u(t) + \varepsilon(2 - e_\lambda(t, a))$ is positive on $[c, d]_{\mathbb{T}}$. If $\lambda$ is selected so that $(L + h)[2 - e_\lambda(t, a)] \le 0$ in $(c, d)_{\mathbb{T}}$, then $w$ is a function for which Theorem 55 holds. Thus, we get the following result. In Theorem 65, if we take $\mathbb{T} = \mathbb{R}$, we have the following corollary, which is the result that appeared in [3]. In Theorem 65, if we take $\mathbb{T} = \mathbb{Z}$, where $\mathbb{Z}$ is the set of all integers, we obtain the following new maximum principle for second-order mixed $\Delta$ and $\nabla$ difference inequalities.

Applications to Initial Value Problems
In this section, as an application of the maximum principles established in Section 3, we first prove a uniqueness theorem for the solution of the initial value problem (192) in $D(\Lambda)$. Secondly, we discuss the existence of lower and upper solutions of (192). Thirdly, we give a general scheme for obtaining upper and lower solutions.

Proof. We define a function $v(t) \in D(\Lambda)$ by $v(t) := u_1(t) - u_2(t)$. Since both $u_1$ and $u_2$ satisfy (192), the function $v$ satisfies the corresponding homogeneous problem. According to Theorem 51, $v$ cannot attain its maximum nor its minimum at $a$. If $v$ attains its maximum at an interior point of $\Lambda$, then $v \equiv 0$ since $v(a) = 0$. If $v$ attains its maximum at $b$, then $-v$ attains its maximum at an interior point of $\Lambda$. Next we apply Theorem 47 to $-v$ and obtain that $-v(t)$ is constant; then $v(t) \equiv 0$ since $-v(a) = 0$. The proof is completed.

Proof. We define a function $w(t) \in D(\Lambda)$ by $w(t) := u_1(t) - u_2(t)$. Since both $u_1$ and $u_2$ satisfy (192), the function $w$ satisfies (194). We give our proof in two steps.
(2) Let $c$ be defined as in step (1). If $c = b$, then the conclusion of Theorem 70 is proved.
Remark 71. Theorems 68 and 69 show that the solution of the initial value problem is unique. On the other hand, in many cases it is difficult to find a solution of the initial value problem directly, and it therefore becomes important to find lower and upper solutions.
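On $\mathbb{T} = \mathbb{Z}$ the uniqueness asserted by Theorems 68 and 69 can be seen very concretely: writing $u^{\Delta\nabla}(t) = u(t+1) - 2u(t) + u(t-1)$, the equation is a two-term recursion, so the initial data $u(a)$ and $u^\Delta(a)$ determine every later value. A sketch (our own; the right-hand side `F` and all names are illustrative, not the paper's):

```python
def solve(F, a, b, u_a, du_a):
    """March the IVP u^{Delta nabla}(t) = F(t) forward on [a, b] in Z."""
    u = {a: u_a, a + 1: u_a + du_a}            # u^Delta(a) = u(a+1) - u(a)
    for t in range(a + 1, b):
        u[t + 1] = 2 * u[t] - u[t - 1] + F(t)  # recursion fixes u(t+1) uniquely
    return u

F = lambda t: 2          # with u(0) = 0, u^Delta(0) = 1, then u(t) = t^2 solves it
u1 = solve(F, 0, 10, 0, 1)
u2 = solve(F, 0, 10, 0, 1)
assert u1 == u2                                   # same data, same solution
assert all(u1[t] == t * t for t in range(0, 11))  # matches the closed form
```

Two solutions with the same initial data must agree at every step of the recursion, which is exactly the discrete shadow of the uniqueness argument via $v = u_1 - u_2$.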
Assume that $g_1$, $g_2$, and $h$ are bounded on $(a, b)_{\mathbb{T}}$, that $h(t) \le 0$ on $(a, b)_{\mathbb{T}}$, and that (91), (92), (93), and (130) hold for each $t \in (a, b)_{\mathbb{T}}$. If we can find a function $z_1(t) \in D(\Lambda)$ satisfying (200) and (201), we define $v_1(t) := u(t) - z_1(t)$, where $u(t)$ is the solution of (192). Thus, $v_1$ has a nonnegative maximum on any interval $[a, t_0]_{\mathbb{T}}$, and using Theorem 47, we know that the maximum point must be $a$ or $t_0$. However, $v_1^\Delta(a) \ge 0$, and by Theorem 51 the maximum point cannot be $a$ unless $v_1(t) \equiv$ constant. Thus, we obtain the maximum of $v_1$ at $t_0$. Since $t_0 \in (a, b)_{\mathbb{T}}$ is arbitrary, we can deduce the corresponding bound on all of $(a, b)_{\mathbb{T}}$; using $b$ in place of $t_0$, inequalities (204), (205), and (206) then yield the desired estimates. Similarly, assume that we can find a function $z_2(t) \in D(\Lambda)$ satisfying the analogous inequalities. Defining $v_2(t) := z_2(t) - u(t)$ in the same way, we obtain the corresponding lower estimates. Therefore, we have established the following theorem, which gives a sufficient condition for the lower and upper solutions.
In the following, we will discuss the existence of the lower and upper solutions.
Proof. It follows from (166) that we can select $\beta > 0$ large enough that the relevant coefficient is positive. We show that, under the stated assumptions, the function $z_1$ so defined satisfies (200) and (201): each inequality follows by direct substitution. Similarly, we can choose $z_2$, and (210) and (211) are verified in the same way. Thus, conclusion (1) holds. Conclusion (2) can be deduced from Theorem 72. The proof is completed.
As is well known, the accuracy of the approximation depends on how well we can choose the functions $z_1(t)$ and $z_2(t)$. So we next describe a general scheme for obtaining upper and lower bounds. Suppose we divide the interval $[a, b]_{\mathbb{T}}$ into $n$ subintervals. On each subinterval, we select $z_1(t)$ in the indicated form, and choose the coefficients so that $z_1(a) = \gamma_1$, $z_1^\Delta(a) = \gamma_2$, and $z_1 \in D(\Lambda)$. Also, $z_1$ will be selected so that inequality (200) holds in each subinterval $(t_{i-1}, t_i)_{\mathbb{T}}$. The constants $A_i$, $B_i$, $C_i$, $i = 0, 1, 2, \ldots, n - 1$, and the number $n$ of subintervals will be chosen so that all the required conditions are satisfied. We proceed in a step-by-step manner starting with the interval $(t_0, t_1)_{\mathbb{T}}$. The initial conditions require that $A_0 = \gamma_1$ and $B_0 = \gamma_2$. Next, we divide our proof into three parts.
(ii) If $t_0$ is right-scattered and $\sigma(t_0)$ is right-dense, we let $t_1 > \sigma(t_0)$, and then the inequality becomes (231); at $t = \sigma(t_0)$ it reduces to an identity. Thus, if $g_1$, $g_2$, $f$, and $h$ are bounded, then $t_1$ can be selected so close to $\sigma(t_0)$, and $C_0$ can be taken so large, that (231) holds for $t \in (t_0, t_1)_{\mathbb{T}}$. Moreover, when $t_1$ is sufficiently close to $\sigma(t_0)$, we can select $C_0$ so that (231) is close to an equality; then $z_1(t)$ is also close to the solution of (192) in $(t_0, t_1)_{\mathbb{T}}$.
(iii) If $t_0$ is right-dense, the inequality becomes (234). If $g_1$, $g_2$, and $h$ are bounded, then $t_1$ can be selected so close to $t_0$ that the relevant coefficient stays above a positive constant $\delta > 0$. If, in addition, $f$ is bounded, then $C_0$ can be taken so large that (234) holds for all $t$ in $(t_0, t_1)_{\mathbb{T}}$. Moreover, when $t_1$ is sufficiently close to $t_0$, we can select $C_0$ so that (234) is close to an equality; then $z_1(t)$ is also close to the solution of (192) in $(t_0, t_1)_{\mathbb{T}}$. By all of the above, we have proved that there exist a $t_1 > t_0$ and a large enough $C_0$ such that (200) holds for all $t$ in $(t_0, t_1)_{\mathbb{T}}$. We now turn to the interval $(t_1, t_2)_{\mathbb{T}}$, with $z_1(t)$ defined analogously. To ensure the continuity of $z_1$, $z_1^\Delta$, and $z_1^\nabla$ at $t_1$, we choose the new coefficients by matching values; computing the one-sided limits shows that $z_1$ and $z_1^\Delta$ are continuous at $t_1$, and $z_1^\nabla$ is left-dense continuous at $t_1$. In the interval $(t_1, t_2)_{\mathbb{T}}$, we apply the same reasoning as on $(t_0, t_1)_{\mathbb{T}}$ and get that there exist a $t_2 > t_1$ and a large enough $C_1$ such that (200) holds for all $t$ in $(t_1, t_2)_{\mathbb{T}}$.
Proceeding in this fashion, we determine each $A_i$, $B_i$ so that $z_1$ and $z_1^\Delta$ are continuous everywhere and $z_1^\nabla$ is left-dense continuous everywhere; and if $t_i$ is a left-dense point, we always take the interval $(t_i, t_{i+1})_{\mathbb{T}}$ so small that the coefficient of $C_i$ stays above a positive constant $\delta_i > 0$. Also, we take the constant $C_i$ large enough that $(L + h)[z_1] \ge f(t)$ holds on $(t_i, t_{i+1})_{\mathbb{T}}$. In fact, the quantities $A_i$, $B_i$ are determined by recursion formulas. In an actual computation, to determine the $C_i$ it is convenient to replace $f$ by its maximum on the $i$th subinterval and to replace $g_1$, $g_2$, and $h$ by either their maximum or minimum, whichever is appropriate for making $(L + h)[z_1] \ge f(t)$ throughout.
In a similar manner we may construct lower bounds. The constants $A_i$, $B_i$ are selected in exactly the same way, and the quantities $-C_i$ are taken so large that $(L + h)[z_2] \le f(t)$ holds everywhere.
If $f$, $g_1$, $g_2$, and $h$ are continuous, it can be shown by the above process that, as the maximum length of the subintervals tends to zero, the upper and lower bounds both tend to the solution $u$. The above discussion leads to the following theorem.
Thus far in this section, we have assumed that $h(t) \le 0$. We now take up the problem of approximating the solution of the equation with initial conditions when the function $h(t)$ may be positive. Under these circumstances we employ the generalized maximum principle (Theorem 51). To do so, we suppose that there is a function $w$ which is positive on $[a, b]_{\mathbb{T}}$ and which has the property that $(L + h)[w] \le 0$. For example, we can take the function defined in Lemma 62. We saw in Section 3 that $v = u/w$ satisfies an equation of the same form with modified coefficients. The first of these sets of inequalities gives the bounds (251); the second set yields (253). Since $w$ is positive on $[a, b]_{\mathbb{T}}$, we obtain bounds for $u$. If $w^\Delta(t) \le 0$, we may substitute the upper bound of $v(t)$ as given in (251) into the left side of (253), and the lower bound of $v(t)$ into the right side of (253).
If $w^\Delta(t) \ge 0$, we use the lower bound of $v(t)$ on the left and the upper bound of $v(t)$ on the right. We thus find the corresponding estimates. Inequalities (251) and (254) give bounds for $u(t)$ and $u^\Delta(t)$ which are more precise when $z_1(t) - z_2(t)$ and $z_1^\Delta(t) - z_2^\Delta(t)$ are smaller.
It is always possible to find a positive function $w$ which satisfies $(L + h)[w] \le 0$ on a sufficiently small interval, but in general there is no such function if the interval is too large. Once more we resort to breaking up the interval and piecing together functions defined on subintervals. Let $w > 0$ and $(L + h)[w] \le 0$ on an interval $[a, a^*]_{\mathbb{T}}$, and let $w^*$ be another positive function which satisfies $(L + h)[w^*] \le 0$ on an interval $[a^*, b]_{\mathbb{T}}$. We wish to find bounds for the solution $u$ of the initial value problem (192) on the whole interval $[a, b]_{\mathbb{T}}$.
Let $z_1(t)$ and $z_2(t)$ satisfy the required conditions on the interval $[a, a^*]_{\mathbb{T}}$. From these, we get the bounds for $u(a^*)$ and $u^\Delta(a^*)$.

Applications to Boundary Value Problems
In this section, by applying the maximum principles proved in Section 3 to some general boundary value problems, we discuss the uniqueness of solutions, the existence of upper and lower solutions, and some necessary and sufficient conditions for the existence of approximate solutions. First, we consider the boundary value problems (264) and (265).

Proof. We define a function $v(t) \in D(\Lambda)$ by $v(t) := u_1(t) - u_2(t)$. Since both $u_1$ and $u_2$ satisfy (264) and (265), the function $v(t)$ satisfies the corresponding homogeneous problem. It follows from Theorem 47 that $v(t) \le 0$ for each $t \in (a, b)_{\mathbb{T}}$. Since $-v(t)$ satisfies the same boundary value problem, we have $-v(t) \le 0$ for each $t \in (a, b)_{\mathbb{T}}$, and thus $v(t) \equiv 0$ for each $t \in [a, b]_{\mathbb{T}}$.
Next we study general boundary value problems of the form (268), (269). Proof. We define a function $v(t) \in D(\Lambda)$ by $v(t) := u_1(t) - u_2(t)$. Since both $u_1$ and $u_2$ satisfy (268) and (269), the function $v(t)$ satisfies the corresponding homogeneous conditions. It is clear that $v(t) \equiv C$ satisfies all the above conditions if and only if $h(t) \equiv 0$ and $\theta = \varphi = 0$. Then we assume first that $v(t) > 0$ at some point and that $v(t)$ is not constant. Using Theorem 47 we know that $v(t)$ attains its maximum $M$ at $a$ or $b$. Suppose that $v(a) = M$; by Theorem 51 we get $v^\Delta(a) < 0$, which does not satisfy (272). Suppose that $v(b) = M$; by Theorem 51 we get $v^\nabla(b) > 0$, which does not satisfy (273). Thus, we obtain $v(t) \le 0$. We can also prove that $-v(t) \le 0$, and then $v(t) \equiv 0$ for each $t \in [a, b]_{\mathbb{T}}$.
As with the initial value problems, in most cases it is impossible to find such a solution explicitly. But it is frequently desirable to approximate a solution in such a way that an explicit bound for the error is known. Such an approximation is equivalent to determining both upper and lower bounds for the values of the solution. Thus, in the following, we discuss the existence of upper and lower solutions for boundary value problems.
We will assume that the functions $f$, $g_1$, $g_2$, and $h$ are bounded and $h(t) \le 0$ on $[a, b]_{\mathbb{T}}$. Under these circumstances it is possible to use the maximum principle in Theorem 55 to obtain a bound for a solution $u$ without any actual knowledge of $u$ itself.
Suppose we can find a function $z_1(t) \in D(\Lambda)$ satisfying the stated inequalities. Then the function $v_1(t) := u(t) - z_1(t)$ satisfies the hypotheses of the maximum principle given in Theorem 47 in Section 3, and we conclude that $v_1(t) \le 0$ on $[a, b]_{\mathbb{T}}$. That is, $u(t) \le z_1(t)$: the function $z_1(t)$ is an upper bound for $u(t)$.
Similarly, a lower bound for $u(t)$ may be obtained by finding a function $z_2(t)$ with the corresponding properties, unless $\theta = \varphi = 0$ and $h(t) \equiv 0$.
If $w(t)$ satisfies (302) and (303) with equality rather than inequality, we may add a multiple of $w$ to a solution $u$ of (268) and (269) to obtain another solution. That is, the solution is not unique. Of course, there may be no solution at all; but if there is at least one, then there are many. Therefore, if there is a positive function $w(t)$ that satisfies (302) and (303) but such that not all the inequalities are equalities, we obtain bounds that must be nonnegative.

Remark 64.
Under the conditions of Lemma 63, if $u$ has any sign-change point to the right of $a$, we denote the first one by $a^*$ and call it the conjugate sign-change point of $a$. Thus, $u$ does not change its sign in the interval $(a, a^*)_{\mathbb{T}}$. Without loss of generality, we assume that
$$u(t) > 0 \quad \text{for } t \in (a, a^*)_{\mathbb{T}}. \tag{189}$$
Then the function $u/w$ is positive in $(a, a^*)_{\mathbb{T}}$, and $a^*$ is also a sign-change point of $u/w$. By the definition of sign-change point, we have that $u(a^*)/w(a^*) \le 0$. Hence, $u/w$ has a maximum in $(a, a^*)_{\mathbb{T}}$. Therefore, by Theorem 55, $w$ cannot satisfy $(L + h)[w] \le 0$. That is, in this case there is no function $w$ satisfying the condition of Theorem 51.

Theorem 65.
If $a^*$ is the conjugate sign-change point of $a$, and $g_1$, $g_2$, and $h(t)$ are bounded on $(a, b)_{\mathbb{T}}$ such that (92) and (166) hold, then there exists a $w(t) > 0$ such that Theorem 55 holds on the interval $[a, b]_{\mathbb{T}}$ if and only if $b < a^*$. If $\eta(t)$ (the solution of (186) which satisfies $\eta(a) = 0$) has no sign-change point to the right of $a$, one sets $a^* = \infty$, and Theorem 55 holds on every interval $[a, b]_{\mathbb{T}}$.

Corollary 66.
Assume that $a^*$ is the conjugate sign-change point of $a$, and the functions $g, h : [a, b] \to \mathbb{R}$ are bounded on $(a, b)$; then there exists a function $w(t) > 0$ such that Corollary 52 holds on the interval $[a, b]$ if and only if $b < a^*$. If $\eta(t)$ (the solution of $\eta'' + g\eta' + h\eta = 0$ which satisfies $\eta(a) = 0$) has no sign-change point to the right of $a$, one sets $a^* = \infty$, and Corollary 52 holds on every interval $[a, b]$.
which satisfies $\eta(a) = 0$) has no sign-change point to the right of $a$, one sets $a^* = \infty$, and Corollary 53 holds on every interval $[a, b]_{\mathbb{Z}}$.
Let $g_1$, $g_2$, and $h(t)$ be bounded on $(a, b)_{\mathbb{T}}$ such that (92) and (166) hold, and assume that $u_1$ and $u_2$ are solutions of the initial value problem (192). If $b < a^*$, where $a^*$ is the conjugate sign-change point of $a$, then $u_1 \equiv u_2$. More generally, we can prove the following theorem, which shows that the conclusion of Theorem 69 holds on any interval $[a, b]_{\mathbb{T}}$: let $g_1$, $g_2$, and $h(t)$ be bounded on $(a, b)_{\mathbb{T}}$ such that (92) and (166) hold; if $u_1$ and $u_2$ are solutions of the initial value problem (192), then $u_1 \equiv u_2$.
From Theorem 65, we get Theorem 69.