We give further improvements of the Jensen inequality and its converse on time scales, also allowing negative weights. These results generalize the Jensen inequality and its converse for both the discrete and continuous cases. Further, we investigate the exponential and logarithmic convexity of the differences between the left-hand and right-hand sides of these inequalities and present several families of functions to which these results can be applied.
1. Preliminaries
The combined dynamic derivative, also called diamond-α(⋄α) dynamic derivative (α∈[0,1]), was introduced as a linear convex combination of the well-known delta and nabla dynamic derivatives on time scales. By a time scale T we mean any nonempty closed subset of real numbers. Using the delta and nabla derivatives, the notions of delta and nabla integrals were defined (see [1]). We assume, throughout this paper, that the basic notions of the time scales are well known and understood.
Definition 1 (diamond-α integral [1, Definition 3.2]).
Let f:T→R be a continuous function and let a,b∈T. Then, the diamond-α integral of f from a to b is defined by
(1)∫abf(t)⋄αt=α∫abf(t)Δt+(1-α)∫abf(t)∇t,0≤α≤1.
Remark 2.
From the above definition it is clear that for α=1 the diamond-α integral reduces to the standard delta integral and for α=0 the diamond-α integral reduces to the standard nabla integral.
Moreover, if T=R, then
(2)∫abf(t)⋄αt=∫abf(t)dt;
if T=Z, then
(3)∫abf(t)⋄αt=α∑t=ab-1f(t)+(1-α)∑t=a+1bf(t);
if T=hZ, where h>0, then
(4)∫abf(t)⋄αt=h(α∑k=a/hb/h-1f(kh)+(1-α)∑k=a/h+1b/hf(kh));
if T=qN0, where q>1, then
(5)∫abf(t)⋄αt=(q-1)(α∑k=logq(a)logq(b)-1qkf(qk)+(1-α)∑k=logq(a)+1logq(b)qk-1f(qk)).
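The special cases above are easy to verify numerically. The following sketch (not part of the paper; the function name is ours) implements (3) for T=Z as a convex combination of the delta sum (left endpoints) and the nabla sum (right endpoints):

```python
# Sketch (not from the paper): the diamond-alpha integral (3) on the time
# scale T = Z, computed as a convex combination of the delta sum (left
# endpoints) and the nabla sum (right endpoints).

def diamond_alpha_Z(f, a, b, alpha):
    """Diamond-alpha integral of f from a to b on T = Z, per (3)."""
    delta = sum(f(t) for t in range(a, b))          # sum_{t=a}^{b-1} f(t)
    nabla = sum(f(t) for t in range(a + 1, b + 1))  # sum_{t=a+1}^{b} f(t)
    return alpha * delta + (1 - alpha) * nabla

f = lambda t: t * t
print(diamond_alpha_Z(f, 0, 4, 1.0))  # 14.0: the delta integral 0+1+4+9
print(diamond_alpha_Z(f, 0, 4, 0.0))  # 30.0: the nabla integral 1+4+9+16
print(diamond_alpha_Z(f, 0, 4, 0.5))  # 22.0: halfway between the two
```

For α=1 and α=0 this reproduces the delta and nabla sums of Remark 2.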
Recently, the Jensen inequality, an improvement of the Jensen inequality, and their converses have been given for time scale integrals (see [2–4]).
Let T be a time scale, and let a,b∈T. Then the time scale interval is denoted and defined by [a,b]T=[a,b]∩T.
Theorem 3 (see [2]).
Let c,d∈R. If g∈C([a,b]T,(c,d)),w∈C([a,b]T,R) with ∫ab|w(t)|⋄αt>0 and Φ∈C((c,d),R) is convex, then
(6)Φ(∫abg(t)|w(t)|⋄αt∫ab|w(t)|⋄αt)≤∫abΦ(g(t))|w(t)|⋄αt∫ab|w(t)|⋄αt.
Note that in this Jensen inequality we have nonnegative weights. In order to give a better version of this Jensen inequality on time scales, Dinu [4] gives the definition of α-Steffensen-Popoviciu (α-SP) weight.
Definition 4 (α-SP weight [4, Definition 1]).
Let g∈C(T,R). Then, w∈C(T,R) is an α-Steffensen-Popoviciu (α-SP) weight for g on [a,b]T if
(7)∫abw(t)⋄αt>0,∫abΦ+(g(t))w(t)⋄αt≥0,
for every convex function Φ∈C([m,M],R), where
(8)m=inft∈[a,b]Tg(t),M=supt∈[a,b]Tg(t).
In the following lemma he gives a characterization of α-SP weights for a nondecreasing function g on time scales.
Lemma 5 (see [4, Lemma 2]).
Let w∈C(T,R) be such that ∫abw(t)⋄αt>0. Then, w is an α-SP weight for a nondecreasing function g∈C([a,b]T,R) if and only if it satisfies the following condition:
(9)∫as(g(s)-g(t))w(t)⋄αt≥0,∫sb(g(t)-g(s))w(t)⋄αt≥0,
for every s∈[a,b]T.
If the following stronger (but more suitable) condition holds
(10)0≤∫asw(t)⋄αt≤∫abw(t)⋄αtforeverys∈[a,b]T,
then w is also an α-SP weight for the nondecreasing continuous function g.
As shown in [4], all positive weights are α-SP weights, for any continuous function g and every α∈[0,1]. However, there are also α-SP weights that take some negative values. The Jensen inequality on time scales in which the weight function is allowed to take some negative values is given in the following theorem.
Theorem 6 (see [4, Theorem 2]).
Let g∈C([a,b]T,[m,M]) and let w∈C([a,b]T,R) such that ∫abw(t)⋄αt>0. Then, the following two statements are equivalent:
w is an α-SP weight for g on [a,b]T;
for every convex function Φ∈C([m,M],R), it holds
(11)Φ(∫abg(t)w(t)⋄αt∫abw(t)⋄αt)≤∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt.
Remark 7.
Let g be a nondecreasing function. If T=N, then Theorem 6 is equivalent to the Jensen-Steffensen inequality given by Steffensen in [5] (see also [6, page 57]). On the other hand, if we take T=R in Theorem 6, we obtain the integral version of the Jensen-Steffensen inequality given by Boas [7] (see also [6, page 59]).
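To illustrate Theorem 6 numerically in the discrete case of Remark 7, here is a small sketch with hypothetical data: a weight w with one negative entry whose partial sums satisfy condition (10), a nondecreasing g, and the convex function Φ(x)=x²; inequality (11) still holds:

```python
# Illustration (hypothetical data): the discrete Jensen-Steffensen inequality
# of Remark 7. The weight w has a negative entry, but its partial sums satisfy
# 0 <= sum_{i<=s} w_i <= sum w_i (condition (10)) and g is nondecreasing, so
# (11) still holds for the convex function Phi(x) = x^2.

g = [1.0, 2.0, 3.0, 4.0]   # nondecreasing
w = [2.0, -1.0, 2.0, 1.0]  # one negative weight

# condition (10): partial sums stay within [0, total]
total = sum(w)
partials = [sum(w[:s + 1]) for s in range(len(w))]
assert total > 0 and all(0 <= p <= total for p in partials)

phi = lambda x: x * x      # convex test function
gbar = sum(gi * wi for gi, wi in zip(g, w)) / total
lhs = phi(gbar)                                        # Phi of the mean
rhs = sum(phi(gi) * wi for gi, wi in zip(g, w)) / total  # mean of Phi
print(lhs <= rhs)  # True
```

Here gbar = 2.5, lhs = 6.25, and rhs = 8, so (11) holds even though w is not nonnegative.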
Considering the converse of the Jensen inequality, Dinu [4] gives the following definition of α-Hermite-Hadamard (α-HH) weight, its characterization for a nondecreasing function g on time scales, and the improvement of the converse of the Jensen inequality for some negative weights.
Definition 8 (α-HH weight [4, Definition 2]).
Let g∈C(T,R). Then w∈C(T,R) is an α-Hermite-Hadamard (α-HH) weight for g on [a,b]T if
(12)∫abw(t)⋄αt>0,∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt≤M-g¯αM-mΦ(m)+g¯α-mM-mΦ(M),
for every convex function Φ∈C([m,M],R), where
(13)m=inft∈[a,b]Tg(t),M=supt∈[a,b]Tg(t),g¯α=∫abg(t)w(t)⋄αt∫abw(t)⋄αt.
Lemma 9 (see [4, Lemma 3]).
Let w∈C([a,b]T,R) be such that ∫abw(t)⋄αt>0. Then w is an α-HH weight for a nondecreasing function g∈C([a,b]T,R) if and only if it satisfies the following condition:
(14)g(b)-g(s)g(b)-g(a)∫as(g(t)-g(a))w(t)⋄αt+g(s)-g(a)g(b)-g(a)∫sb(g(b)-g(t))w(t)⋄αt≥0
for every s∈[a,b]T.
In the next result Dinu [4] gives the connection between these two classes of weights on a time scale.
Theorem 10 (see [4, Theorem 3]).
Let g∈C(T,R). Then every α-SP weight for g on [a,b]T is an α-HH weight for g on [a,b]T, for all α∈[0,1].
In the following two sections of our paper we give some further generalizations of the Jensen-type inequalities on time scales allowing negative weights, and we also give the mean-value theorems of the Lagrange and Cauchy type for the functionals obtained by taking the difference of the left-hand side and right-hand side of these new inequalities. These results also generalize the results given in [8] for continuous and discrete cases. Section 4 in our paper deals with exponential convexity and logarithmic convexity of the functionals obtained in two previous sections. Finally, in Section 5 we present several families of exponentially convex functions which fulfil the conditions of our results. The results from Sections 4 and 5 generalize the results given in [9] for continuous and discrete cases.
2. Improvement of the Jensen Inequality on Time Scales
Let m,M∈R, where m≠M. Consider the Green function G:[m,M]×[m,M]→R defined by
(15)G(x,y)={(x-M)(y-m)M-mform≤y≤x,(y-M)(x-m)M-mforx≤y≤M.
The function G is convex and continuous with respect to both x and y.
It is well known that (see, e.g., [8–11]) any function Φ∈C2([m,M],R) can be represented by
(16)Φ(x)=M-xM-mΦ(m)+x-mM-mΦ(M)+∫mMG(x,y)Φ′′(y)dy,
where the function G is defined in (15). Using (16), we now derive several interesting results concerning the Jensen-type inequalities.
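Representation (16) can be sanity-checked numerically. The following sketch (our own, not from the paper) approximates the integral in (16) by a plain midpoint rule for the sample choice Φ(x)=e^x on [m,M]=[0,2]:

```python
import math

# Sanity check (not part of the paper) of representation (16): for a smooth
# Phi on [m, M], Phi(x) equals the linear interpolant at the endpoints plus
# the integral of G(x, y) * Phi''(y) over [m, M], with G from (15).

m, M = 0.0, 2.0

def G(x, y):
    # Green function (15); both branches agree at y = x
    return (x - M) * (y - m) / (M - m) if y <= x else (y - M) * (x - m) / (M - m)

phi = math.exp   # sample Phi(x) = e^x
phi2 = math.exp  # its second derivative

def rep(x, n=20000):
    # midpoint-rule approximation of the right-hand side of (16)
    h = (M - m) / n
    integral = sum(G(x, m + (k + 0.5) * h) * phi2(m + (k + 0.5) * h)
                   for k in range(n)) * h
    linear = (M - x) / (M - m) * phi(m) + (x - m) / (M - m) * phi(M)
    return linear + integral

for x in (0.3, 1.0, 1.7):
    print(abs(rep(x) - phi(x)) < 1e-6)  # True
```

The agreement up to quadrature error is exactly the content of (16).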
In the following theorem, we give the generalization of the Jensen inequality on time scales, where negative weights are also allowed.
Theorem 11.
Let g∈C([a,b]T,R) be such that g([a,b]T)⊆[m,M]. Let w∈C([a,b]T,R) be such that ∫abw(t)⋄αt≠0 and ∫abg(t)w(t)⋄αt/∫abw(t)⋄αt∈[m,M]. Then, the following two statements are equivalent:
for every convex function Φ∈C([m,M],R)(17)Φ(∫abg(t)w(t)⋄αt∫abw(t)⋄αt)≤∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt
holds;
for all y∈[m,M](18)G(∫abg(t)w(t)⋄αt∫abw(t)⋄αt,y)≤∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt
holds, where G:[m,M]×[m,M]→R is defined in (15).
Furthermore, the statements (i) and (ii) are also equivalent if we change the sign of inequality in both (17) and (18).
Proof.
(i) ⇒ (ii): let (i) hold. As the function G(·,y)(y∈[m,M]) is continuous and convex, it follows that (18) holds.
(ii) ⇒ (i): let (ii) hold. Let Φ∈C2([m,M],R). Then by using (16) we get
(19)∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt-Φ(∫abg(t)w(t)⋄αt∫abw(t)⋄αt)=∫mM[∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt-G(∫abg(t)w(t)⋄αt∫abw(t)⋄αt,y)]Φ′′(y)dy.
If the function Φ is also convex, then Φ′′(y)≥0 for all y∈[m,M], and hence it follows that for every convex function Φ∈C2([m,M],R) inequality (17) holds. Moreover, it is not necessary to demand the existence of the second derivative of the function Φ (see [6, page 172]). The differentiability condition can be directly eliminated by using the fact that it is possible to approximate uniformly a continuous convex function by convex polynomials.
The last part of our theorem can be proved analogously.
Remark 12.
Let the conditions of Theorem 11 hold. Then the following two statements are equivalent.
For every concave function Φ∈C([m,M],R) the reverse inequality in (17) holds.
For all y∈[m,M] inequality (18) holds.
Moreover, statements (i′) and (ii′) are also equivalent if we change the sign of inequality in both statements (i′) and (ii′).
Remark 13.
Consider (19). Suppose that g is nondecreasing and that it has the first derivative. Let m=g(a) and M=g(b), and make the substitution y=g(s). Then we get
(20)∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt-Φ(∫abg(t)w(t)⋄αt∫abw(t)⋄αt)=∫ab[∫abG(g(t),g(s))w(t)⋄αt∫abw(t)⋄αt-G(∫abg(t)w(t)⋄αt∫abw(t)⋄αt,g(s))]×Φ′′(g(s))g′(s)ds.
Since g is nondecreasing, we have that g′(s)≥0 for all s∈[a,b]T. If Φ∈C2([m,M],R) is convex, then Φ′′(g(s))≥0 for all s∈[a,b]T. Hence, inequality (17) holds for every continuous convex function Φ if and only if
(21)G(∫abg(t)w(t)⋄αt∫abw(t)⋄αt,g(s))≤∫abG(g(t),g(s))w(t)⋄αt∫abw(t)⋄αt
holds for all s∈[a,b]T.
Combining the result from Theorem 11 with Theorem 6 and Lemma 5, we get the following two corollaries.
Corollary 14.
Let g∈C([a,b]T,[m,M]) and let w∈C([a,b]T,R) be such that ∫abw(t)⋄αt>0. Then w is an α-SP weight for g on [a,b]T if and only if
(22)G(∫abg(t)w(t)⋄αt∫abw(t)⋄αt,y)≤∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt
holds for all y∈[m,M], where G is defined in (15).
Corollary 15.
Let g∈C([a,b]T,[m,M]) be a nondecreasing function and let w∈C([a,b]T,R) be such that ∫abw(t)⋄αt>0. Then
(23)∫as(g(s)-g(t))w(t)⋄αt≥0,∫sb(g(t)-g(s))w(t)⋄αt≥0
hold for all s∈[a,b]T, if and only if
(24)G(∫abg(t)w(t)⋄αt∫abw(t)⋄αt,y)≤∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt
holds for all y∈[m,M], where G is defined in (15).
To shorten the formulas, in the sequel we use the following notation:
(25)g¯α=∫abg(t)w(t)⋄αt∫abw(t)⋄αt.
Under the assumptions of Theorem 11, we define the following functional J1(g,Φ):
(26)J1(g,Φ)={∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt-Φ(∫abg(t)w(t)⋄αt∫abw(t)⋄αt), if ∀y∈[m,M] inequality (18) holds; Φ(∫abg(t)w(t)⋄αt∫abw(t)⋄αt)-∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt, if ∀y∈[m,M] the reverse inequality in (18) holds,
where the function Φ is defined on [m,M]. Clearly, if Φ is continuous and convex, then J1(g,Φ) is nonnegative.
Theorem 16.
Let g∈C([a,b]T,R) and Φ∈C2([m,M],R). Let J1 be defined as in (26). Then there exists ξ∈[m,M] such that
(27)J1(g,Φ)=Φ′′(ξ)J1(g,Φ0)
holds, where Φ0(x)=x2/2.
Proof.
Since the function Φ′′ is continuous and
(28)∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt-G(∫abg(t)w(t)⋄αt∫abw(t)⋄αt,y)
does not change sign on [m,M], applying the integral mean-value theorem to (19) we get that there exists ξ∈[m,M] such that
(29)∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt-Φ(∫abg(t)w(t)⋄αt∫abw(t)⋄αt)=Φ′′(ξ)∫mM[∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt-G(∫abg(t)w(t)⋄αt∫abw(t)⋄αt,y)]dy
holds. As in [12], it is easily checked that
(30)∫mMG(x,y)dy=∫mx(x-M)(y-m)M-mdy+∫xM(y-M)(x-m)M-mdy=12(x-m)(x-M).
Calculating the integral on the right-hand side of (29), we get
(31)∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt-Φ(∫abg(t)w(t)⋄αt∫abw(t)⋄αt)=Φ′′(ξ)[1∫abw(t)⋄αt∫ab(∫mMG(g(t),y)dy)w(t)⋄αt-∫mMG(g¯α,y)dy]=Φ′′(ξ)[1∫abw(t)⋄αt∫ab12(g(t)-m)(g(t)-M)w(t)⋄αt-12(g¯α-m)(g¯α-M)]=12Φ′′(ξ)[∫ab(g(t))2w(t)⋄αt∫abw(t)⋄αt-g¯α2]
and the proof is completed.
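Identity (30), on which the last step of the proof rests, can be checked numerically; the sketch below (ours, with the arbitrary choice m=-1, M=3) compares a midpoint-rule approximation of ∫mMG(x,y)dy with the closed form (1/2)(x-m)(x-M):

```python
# Numeric check (illustrative) of identity (30): the integral of the Green
# function (15) over y in [m, M] equals (1/2)(x - m)(x - M).

m, M = -1.0, 3.0

def G(x, y):
    # Green function (15)
    return (x - M) * (y - m) / (M - m) if y <= x else (y - M) * (x - m) / (M - m)

def int_G(x, n=20000):
    # midpoint rule; G is piecewise linear in y, so this is very accurate
    h = (M - m) / n
    return sum(G(x, m + (k + 0.5) * h) for k in range(n)) * h

for x in (-0.5, 1.0, 2.5):
    exact = 0.5 * (x - m) * (x - M)
    print(abs(int_G(x) - exact) < 1e-6)  # True
```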
Remark 17.
Theorem 16 can also be proved by using the following two convex functions:
(32)ϕ1(x)=ζ2x2-Φ(x),ϕ2(x)=Φ(x)-η2x2,
where
(33)η=minx∈[m,M]Φ′′(x),ζ=maxx∈[m,M]Φ′′(x).
Since ϕ1 and ϕ2 are continuous and convex, we have
(34)J1(g,ϕ1)≥0,J1(g,ϕ2)≥0.
This implies that
(35)ηJ1(g,Φ0)≤J1(g,Φ)≤ζJ1(g,Φ0).
Hence, as the function Φ′′ is continuous, there exists ξ∈[m,M] such that (27) holds.
Theorem 18.
Let g∈C([a,b]T,R) and Φ,Ψ∈C2([m,M],R). Let J1 be defined as in (26). Then there exists ξ∈[m,M] such that
(36)J1(g,Φ)J1(g,Ψ)=Φ′′(ξ)Ψ′′(ξ)
holds, provided that the denominator on the left-hand side of (36) is nonzero.
Proof.
Let χ be defined as the linear combination of functions Φ and Ψ by
(37)χ(x)=J1(g,Ψ)Φ(x)-J1(g,Φ)Ψ(x).
Then χ∈C2([m,M],R). By applying Theorem 16 on χ, it follows that there exists ξ∈[m,M] such that
(38)J1(g,χ)=χ′′(ξ)J1(g,Φ0).
After a short calculation we get that J1(g,χ)=0. Moreover, J1(g,Φ0)≠0 (otherwise, applying Theorem 16 to Ψ would give J1(g,Ψ)=0, a contradiction with the hypothesis), so it follows that
(39)χ′′(ξ)=0,
which is equivalent to (36).
Remark 19.
In Theorem 18, if the inverse of the function Φ′′/Ψ′′ exists, then (36) gives
(40)ξ=(Φ′′Ψ′′)-1(J1(g,Φ)J1(g,Ψ)).
Remark 20.
Note that setting the function Ψ as Ψ(x)=x2/2 in Theorem 18, we get the statement of Theorem 16.
As a consequence of the above two mean-value theorems, the following corollaries easily follow.
Corollary 21.
Let g∈C([a,b]T,[m,M]), Φ,Ψ:[m,M]→R, and let w∈C([a,b]T,R) be an α-SP weight for g. Let J1 be defined as in (26). Then the following two statements hold.
If Φ∈C2([m,M],R), then there exists ξ∈[m,M] such that (27) holds.
If Φ,Ψ∈C2([m,M],R), then there exists ξ∈[m,M] such that (36) holds.
Proof.
Statement (i) (resp., statement (ii)) follows directly from Theorem 16 (resp., Theorem 18) and Corollary 14.
Corollary 22.
Let g∈C([a,b]T,[m,M]) be a monotone function, Φ,Ψ:[m,M]→R, and let w∈C([a,b]T,R) be such that ∫abw(t)⋄αt>0. Let (9) hold for all s∈[a,b]T. Let J1 be defined as in (26). Then the following two statements hold.
If Φ∈C2([m,M],R), then there exists ξ∈[m,M] such that (27) holds.
If Φ,Ψ∈C2([m,M],R), then there exists ξ∈[m,M] such that (36) holds.
Proof.
Statement (i) (resp., statement (ii)) follows directly from Theorem 16 (resp., Theorem 18) and Corollary 15.
3. Improvement of the Converse of the Jensen Inequality on Time Scales
Using a method similar to that of the previous section, in the following theorem we obtain the generalization of the converse of the Jensen inequality on time scales, where negative weights are also allowed.
Theorem 23.
Let g∈C([a,b]T,R) be such that g([a,b]T)⊆[m,M] and let c,d∈[m,M](c≠d) be such that c≤g(t)≤d for all t∈[a,b]T. Let w∈C([a,b]T,R) be such that ∫abw(t)⋄αt≠0. Then, the following two statements are equivalent.
For every convex function Φ∈C([m,M],R)(41)∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt≤d-g¯αd-cΦ(c)+g¯α-cd-cΦ(d)
holds.
For all y∈[m,M](42)∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt≤d-g¯αd-cG(c,y)+g¯α-cd-cG(d,y)
holds, where the function G:[m,M]×[m,M]→R is defined in (15).
Furthermore, the statements (i) and (ii) are also equivalent if we change the sign of inequality in both (41) and (42).
Proof.
The idea of the proof is very similar to the proof of Theorem 11.
(i) ⇒ (ii): let (i) hold. As the function G(·,y)(y∈[m,M]) is continuous and convex, it follows that (42) holds.
(ii) ⇒ (i): let (ii) hold. Let Φ∈C2([m,M],R). Then by using (16) we get
(43)d-g¯αd-cΦ(c)+g¯α-cd-cΦ(d)-∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt=∫mM[d-g¯αd-cG(c,y)+g¯α-cd-cG(d,y)-∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt]Φ′′(y)dy.
If the function Φ is also convex, then Φ′′(y)≥0 for all y∈[m,M], and hence it follows that for every convex function Φ∈C2([m,M],R) the inequality (41) holds. Moreover, it is not necessary to demand the existence of the second derivative of the function Φ (see [6, page 172]). The differentiability condition can be directly eliminated by using the fact that it is possible to approximate uniformly a continuous convex function by convex polynomials.
The last part of our theorem can be proved analogously.
Remark 24.
Let the conditions of Theorem 23 hold. Then the following two statements are equivalent.
For every concave function Φ∈C([m,M],R), the reverse inequality in (41) holds.
For all y∈[m,M] inequality (42) holds.
Moreover, statements (i′) and (ii′) are also equivalent if we change the sign of inequality in both statements (i′) and (ii′).
Remark 25.
Note that in all the results in this section we allow the mean value g¯α to lie outside the interval [m,M], while in the results of the previous section we demanded that g¯α∈[m,M].
Setting c=m and d=M in Theorem 23, we get the following result.
Corollary 26.
Let g∈C([a,b]T,R) be such that g([a,b]T)⊆[m,M]. Let w∈C([a,b]T,R) be such that ∫abw(t)⋄αt≠0. Then, the following two statements are equivalent.
For every convex function Φ∈C([m,M],R)(44)∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt≤M-g¯αM-mΦ(m)+g¯α-mM-mΦ(M)
holds.
For all y∈[m,M](45)∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt≤0
holds, where the function G:[m,M]×[m,M]→R is defined as in (15).
Furthermore, statements (i) and (ii) are also equivalent if we change the sign of inequality in both (44) and (45).
Remark 27.
As a consequence of Corollary 26, we get the result from Lemma 9.
Let c=m and d=M. Then (43) transforms into
(46)M-g¯αM-mΦ(m)+g¯α-mM-mΦ(M)-∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt=-∫mM∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αtΦ′′(y)dy.
Let ∫abw(t)⋄αt>0, and suppose that g is nondecreasing and that it has the first derivative. Now, similarly as in [4], we derive the result from Lemma 9. Let m=g(a) and M=g(b), and make the substitution y=g(s). Then we get
(47)M-g¯αM-mΦ(m)+g¯α-mM-mΦ(M)-∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt=-1∫abw(t)⋄αt∫ab(∫abG(g(t),g(s))w(t)⋄αt)×Φ′′(g(s))g′(s)ds.
Since g is nondecreasing, we have that g′(s)≥0 for all s∈[a,b]T. If Φ∈C2([m,M],R) is convex, then Φ′′(g(s))≥0 for all s∈[a,b]T. Hence, inequality (44) holds for every continuous convex function Φ if and only if
(48)∫abG(g(t),g(s))w(t)⋄αt≤0
holds for all s∈[a,b]T. Note that (48) is equivalent to condition (14).
Corollary 28.
Let g∈C([a,b]T,[m,M]) and let w∈C([a,b]T,R) be an α-SP weight for g on [a,b]T. Then
(49)∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt≤0
holds for all y∈[m,M].
Proof.
The proof follows directly from Theorem 10 and Corollary 26.
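A discrete illustration of Corollary 28 with hypothetical data (T=Z and α=1, so the diamond-α integrals reduce to plain sums): since G≤0 on [m,M]×[m,M], positive weights give (49) immediately, and the converse Jensen bound (44) follows for a convex Φ:

```python
# Discrete illustration (hypothetical data, T = Z with alpha = 1, i.e. plain
# sums): for positive weights, sum_t G(g(t), y) w(t) <= 0 for every y in
# [m, M] (Corollary 28), and the converse Jensen bound (44) then holds for
# the convex function Phi(x) = x^2.

g = [0.5, 1.8, 2.4, 0.9]
w = [1.0, 2.0, 1.5, 0.5]   # positive weights: an alpha-SP weight
m, M = 0.0, 3.0

def G(x, y):
    # Green function (15); G <= 0 on the whole square [m, M] x [m, M]
    return (x - M) * (y - m) / (M - m) if y <= x else (y - M) * (x - m) / (M - m)

ys = [m + k * (M - m) / 200 for k in range(201)]
ok = all(sum(G(gt, y) * wt for gt, wt in zip(g, w)) <= 1e-12 for y in ys)
print(ok)  # True: inequality (49)

phi = lambda x: x * x
total = sum(w)
gbar = sum(gt * wt for gt, wt in zip(g, w)) / total
lhs = sum(phi(gt) * wt for gt, wt in zip(g, w)) / total
rhs = (M - gbar) / (M - m) * phi(m) + (gbar - m) / (M - m) * phi(M)
print(lhs <= rhs)  # True: inequality (44)
```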
Under the assumptions of Theorem 23, we define the following functional J2(g,Φ):
(50)J2(g,Φ)={d-g¯αd-cΦ(c)+g¯α-cd-cΦ(d)-∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt, if ∀y∈[m,M] inequality (42) holds; ∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt-d-g¯αd-cΦ(c)-g¯α-cd-cΦ(d), if ∀y∈[m,M] the reverse inequality in (42) holds,
where the function Φ is defined on [m,M]. Clearly, if Φ is continuous and convex, then J2(g,Φ) is nonnegative.
Theorem 29.
Let g∈C([a,b]T,R) and Φ∈C2([m,M],R). Let J2 be defined as in (50). Then there exists ξ∈[m,M] such that
(51)J2(g,Φ)=Φ′′(ξ)J2(g,Φ0)
holds, where Φ0(x)=x2/2.
Proof.
The idea of the proof is very similar to the proof of Theorem 16.
Since the function Φ′′ is continuous and
(52)d-g¯αd-cG(c,y)+g¯α-cd-cG(d,y)-∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt
does not change sign on [m,M], applying the integral mean-value theorem to (43) we get that there exists ξ∈[m,M] such that
(53)d-g¯αd-cΦ(c)+g¯α-cd-cΦ(d)-∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt=Φ′′(ξ)∫mM[d-g¯αd-cG(c,y)+g¯α-cd-cG(d,y)-∫abG(g(t),y)w(t)⋄αt∫abw(t)⋄αt]dy.
Calculating the integral on the right-hand side of (53), we get
(54)d-g¯αd-cΦ(c)+g¯α-cd-cΦ(d)-∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt=12Φ′′(ξ)[d-g¯αd-c·c2+g¯α-cd-c·d2-∫ab(g(t))2w(t)⋄αt∫abw(t)⋄αt]
and we get statement (51) of our theorem.
Remark 30.
Note that (54) can also be expressed as
(55)d-g¯αd-cΦ(c)+g¯α-cd-cΦ(d)-∫abΦ(g(t))w(t)⋄αt∫abw(t)⋄αt=12Φ′′(ξ)[g¯α(c+d)-cd-∫ab(g(t))2w(t)⋄αt∫abw(t)⋄αt].
Theorem 31.
Let g∈C([a,b]T,R) and Φ,Ψ∈C2([m,M],R). Let J2 be defined as in (50). Then there exists ξ∈[m,M] such that
(56)J2(g,Φ)J2(g,Ψ)=Φ′′(ξ)Ψ′′(ξ)
holds, provided that the denominator on the left-hand side of (56) is nonzero.
Proof.
The proof is very similar to the proof of Theorem 18.
4. Exponential and Logarithmic Convexity
First we recall some definitions and facts about exponentially convex and logarithmically convex functions (see, e.g., [13, 14] or [9]) which we need for our results.
Definition 32.
A function f:I→R(I⊆R) is n-exponentially convex in the Jensen sense on I, if
(57)∑i,j=1nξiξjf(xi+xj2)≥0
holds for all choices ξi∈R and xi∈I, i=1,…,n.
A function f:I→R is n-exponentially convex if it is n-exponentially convex in the Jensen sense and continuous on I.
Remark 33.
It is clear from the definition that 1-exponentially convex functions in the Jensen sense are in fact nonnegative functions. Also, n-exponentially convex functions in the Jensen sense are k-exponentially convex in the Jensen sense for every k∈N,k≤n.
By definition of positive semidefinite matrices and some basic linear algebra, we have the following proposition.
Proposition 34.
If f is an n-exponentially convex function in the Jensen sense, then the matrix [f((xi+xj)/2)]i,j=1k is positive semidefinite for every k∈N, k≤n. In particular, det[f((xi+xj)/2)]i,j=1k≥0 for every k∈N, k≤n.
Definition 35.
A function f:I→R is exponentially convex in the Jensen sense on I, if it is n-exponentially convex in the Jensen sense for all n∈N.
A function f:I→R is exponentially convex if it is exponentially convex in the Jensen sense and continuous.
Remark 36.
Some examples of exponentially convex functions (see [15]) are as follows:
f:I→R defined by f(x)=cekx, where c≥0 and k∈R;
f:R+→R defined by f(x)=x-k, where k>0;
f:R+→R+ defined by f(x)=e-kx, where k>0.
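For instance, for f(x)=e^{kx} (case (i) with c=1) the quadratic form (57) factors as a perfect square, which is a quick way to see the exponential convexity; a short check (ours, with arbitrary sample points):

```python
import math

# Check (illustrative) of Definition 32 / Proposition 34 for the exponentially
# convex function f(x) = e^{kx}: the quadratic form (57) is a perfect square,
#   sum_{i,j} xi_i xi_j e^{k (x_i + x_j)/2} = (sum_i xi_i e^{k x_i / 2})^2 >= 0,
# since e^{k(x_i + x_j)/2} = e^{k x_i/2} * e^{k x_j/2}.

k = 0.7
f = lambda x: math.exp(k * x)

xs = [0.1, 0.9, 2.3]     # arbitrary points x_i in I
xis = [1.5, -2.0, 0.4]   # arbitrary coefficients xi_i

quad = sum(xi_i * xi_j * f((x_i + x_j) / 2)
           for xi_i, x_i in zip(xis, xs)
           for xi_j, x_j in zip(xis, xs))
square = sum(xi * math.exp(k * x / 2) for xi, x in zip(xis, xs)) ** 2
print(quad >= 0, abs(quad - square) < 1e-9)  # True True
```

The same factorization argument works for every n, so f is exponentially convex in the Jensen sense.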
Remark 37.
It is known (and easy to show) that f:I→R+ is log-convex in the Jensen sense on I if and only if
(58)ρ2f(x)+2ρσf(x+y2)+σ2f(y)≥0
holds for every ρ, σ∈R and x,y∈I. It follows that a positive function is log-convex in the Jensen sense if and only if it is 2-exponentially convex in the Jensen sense. Also, using basic convexity theory, it follows that a positive function is log-convex if and only if it is 2-exponentially convex.
The following lemma is equivalent to the definition of convex function (see [6, page 2]).
Lemma 38.
If x1,x2,x3∈I are such that x1<x2<x3, then the function f:I→R is convex if and only if the following inequality holds:
(59)(x3-x2)f(x1)+(x1-x3)f(x2)+(x2-x1)f(x3)≥0.
We will also need the following result (see, e.g., [6]).
Proposition 39.
If f:I→R is a convex function and x1,x2,y1,y2∈I are such that x1≤y1, x2≤y2, x1≠x2, and y1≠y2, then the following inequality is valid:
(60)f(x2)-f(x1)x2-x1≤f(y2)-f(y1)y2-y1.
When dealing with functions with different degrees of smoothness, divided differences turn out to be very useful.
Definition 40.
The second order divided difference of a function f:I→R at mutually different points x0,x1,x2∈I is defined recursively by
(61)[xi;f]=f(xi),i=0,1,2,[xi,xi+1;f]=f(xi+1)-f(xi)xi+1-xi,i=0,1,[x0,x1,x2;f]=[x1,x2;f]-[x0,x1;f]x2-x0.
Remark 41.
The value [x0,x1,x2;f] is independent of the order of the points x0, x1, and x2. This definition may be extended to include the case in which some or all of the points coincide (see [6, page 16]). Namely, taking the limit x1→x0 in (61), we obtain
(62)limx1→x0[x0,x1,x2;f]=[x0,x0,x2;f]=f(x2)-f(x0)-f′(x0)(x2-x0)(x2-x0)2,x2≠x0
provided that f′ exists, and furthermore, taking the limits xi→x0, i=1,2, in (61), we obtain
(63)limx2→x0limx1→x0[x0,x1,x2;f]=[x0,x0,x0;f]=f′′(x0)2
provided that f′′ exists.
A function f:I→R is convex if and only if [x0,x1,x2;f]≥0 holds for every choice of three mutually different points x0,x1,x2∈I.
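A small sketch (ours) of Definition 40 and of the criteria just mentioned: the second-order divided difference of a convex function is nonnegative, and with nearly coincident points it approaches f′′(x0)/2 as in (63):

```python
# Sketch (not from the paper) of Definition 40: second-order divided
# differences, the convexity criterion above, and the limit (63),
# [x0, x0, x0; f] = f''(x0)/2, approximated with nearly coincident points.

def divided2(f, x0, x1, x2):
    """Second-order divided difference [x0, x1, x2; f] per (61)."""
    d01 = (f(x1) - f(x0)) / (x1 - x0)
    d12 = (f(x2) - f(x1)) / (x2 - x1)
    return (d12 - d01) / (x2 - x0)

f = lambda x: x ** 4  # convex on R, with f''(x) = 12 x^2

print(divided2(f, 0.0, 1.0, 3.0) >= 0)  # True: f is convex

# limit (63): with x1, x2 -> x0 = 2 the value approaches f''(2)/2 = 24
approx = divided2(f, 2.0, 2.0 + 1e-4, 2.0 + 2e-4)
print(abs(approx - 24.0) < 1e-2)  # True
```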
Now, we use an idea from [15] to give an elegant method of producing n-exponentially convex and exponentially convex functions by applying the functionals J1 and J2 to a given family of functions with the same property.
Theorem 42.
Let Ji, i=1,2, be linear functionals defined in (26) and (50), respectively. Let Ω={Φρ:ρ∈J}, where J is an interval in R, be a family of functions Φρ∈C([m,M],R) such that the function ρ↦[x0,x1,x2;Φρ] is n-exponentially convex in the Jensen sense on J for every choice of three mutually different points x0,x1,x2∈[m,M]. Then ρ↦Ji(g,Φρ) is an n-exponentially convex function in the Jensen sense on J. If the function ρ↦Ji(g,Φρ) is also continuous on J, then it is n-exponentially convex on J.
Proof.
Define the function ν:I→R by
(64)ν(x)=∑j,k=1nξjξkΦrjk(x),
where ξj,ξk∈R, rj,rk∈J, 1≤j,k≤n, rjk=(rj+rk)/2, and Φrjk∈Ω. Using the assumption that for every choice of three mutually different points x0,x1,x2∈[m,M] the function ρ↦[x0,x1,x2;Φρ] is n-exponentially convex in the Jensen sense on J, we obtain
(65)[x0,x1,x2;ν]=∑j,k=1nξjξk[x0,x1,x2;Φrjk]≥0.
Therefore, ν is a convex (and continuous) function on I. Hence Ji(g,ν)≥0, i=1,2, which implies that
(66)∑j,k=1nξjξkJi(g,Φrjk)≥0.
We conclude that the function ρ↦Ji(g,Φρ) is n-exponentially convex on J in the Jensen sense.
If ρ↦Ji(g,Φρ) is continuous on J, then ρ↦Ji(g,Φρ) is n-exponentially convex by definition.
The following corollary is an immediate consequence of Theorem 42.
Corollary 43.
Let Ji, i=1,2, be linear functionals defined in (26) and (50), respectively. Let Ω={Φρ:ρ∈J}, where J is an interval in R, be a family of functions Φρ∈C([m,M],R) such that the function ρ↦[x0,x1,x2;Φρ] is exponentially convex in the Jensen sense on J for every choice of three mutually different points x0,x1,x2∈[m,M]. Then ρ↦Ji(g,Φρ) is an exponentially convex function in the Jensen sense on J. If the function ρ↦Ji(g,Φρ) is also continuous on J, then it is exponentially convex on J.
Corollary 44.
Let Ji, i=1,2, be linear functionals defined in (26) and (50), respectively. Let Ω={Φρ:ρ∈J}, where J is an interval in R, be a family of functions Φρ∈C([m,M],R) such that the function ρ↦[x0,x1,x2;Φρ] is 2-exponentially convex in the Jensen sense on J for every choice of three mutually different points x0,x1,x2∈[m,M]. Then, the following statements hold.
ρ↦Ji(g,Φρ) is a 2-exponentially convex function in the Jensen sense on J.
If ρ↦Ji(g,Φρ) is continuous on J, then it is also 2-exponentially convex on J. If ρ↦Ji(g,Φρ) is additionally strictly positive, then it is also log-convex on J, and for r,s,t∈J such that r<s<t, we have
(67)(Ji(g,Φs))t-r≤(Ji(g,Φr))t-s(Ji(g,Φt))s-r.
If ρ↦Ji(g,Φρ) is strictly positive and differentiable function on J, then for every p,q,u,v∈J such that p≤u, q≤v, we have
(68)Mp,q(g,Ji,Ω)≤Mu,v(g,Ji,Ω),i=1,2,
where
(69)Mp,q(g,Ji,Ω)={(Ji(g,Φp)Ji(g,Φq))1/(p-q),p≠q;exp((d/dp)Ji(g,Φp)Ji(g,Φp)),p=q,
for Φp,Φq∈Ω.
Proof.
(i) and the first part of (ii) are immediate consequences of Theorem 42. If ρ↦Ji(g,Φρ) is continuous and strictly positive, its log-convexity is an immediate consequence of Remark 37. Now, applying Lemma 38 to the function f(x)=logJi(g,Φx) and r,s,t∈J (r<s<t), we get
(70)(t-s)Ji(g,Φr)+(r-t)Ji(g,Φs)+(s-r)Ji(g,Φt)≥0,
which is equivalent to inequality (67).
To prove (iii), let ρ↦Ji(g,Φρ) be strictly positive and differentiable and therefore continuous too. By (ii), the function ρ↦Ji(g,Φρ) is log-convex on J; that is, the function ρ↦logJi(g,Φρ) is convex on J, and by Proposition 39 we obtain
(71)logJi(g,Φp)-logJi(g,Φq)p-q≤logJi(g,Φu)-logJi(g,Φv)u-v
for p≤u, q≤v, p≠q, and u≠v, concluding that
(72)Mp,q(g,Ji,Ω)≤Mu,v(g,Ji,Ω).
The cases p=q and u=v follow from (71) as limit cases.
Remark 45.
Note that the results from Theorem 42, Corollary 43, and Corollary 44 still hold when two of the points x0,x1,x2∈[m,M] coincide, for a family of differentiable functions Φρ such that the function ρ↦[x0,x1,x2;Φρ] is n-exponentially convex in the Jensen sense (exponentially convex in the Jensen sense and log-convex in the Jensen sense), and furthermore, they still hold when all three points coincide for a family of twice differentiable functions with the same property. The proofs are obtained by recalling Remark 41 and suitable characterization of convexity.
5. Examples
In this section we vary the choice of the family Ω={Φρ:ρ∈J} in order to construct different examples of exponentially convex functions and to construct some means.
Example 46.
Consider a family of functions
(73)Ω1={κρ:R⟶[0,∞);ρ∈R}
defined by
(74)κρ(x)={1ρ2eρx,ρ≠0;12x2,ρ=0.
We have (d2/dx2)κρ(x)=eρx>0, which shows that κρ is convex on R for every ρ∈R. From Remark 36, it follows that ρ↦(d2/dx2)κρ(x) is exponentially convex. Therefore, ρ↦[x0,x1,x2;κρ] is exponentially convex (see [15]) (and so exponentially convex in the Jensen sense). Now, using Corollary 43, we conclude that ρ↦Ji(g,κρ) (i=1,2) are exponentially convex in the Jensen sense. It is easy to verify that these mappings are continuous, so they are exponentially convex.
For this family of functions, Mp,q(g,Ji,Ω) from (69) becomes
(75)Mp,q(g,Ji,Ω1)={(Ji(g,κp)Ji(g,κq))1/(p-q),p≠q;exp(Ji(g,id·κp)Ji(g,κp)-2p),p=q≠0;exp(Ji(g,id·κ0)3Ji(g,κ0)),p=q=0,
and using (68) we have that it is monotone in the parameters p and q.
If Ji (i=1,2) are positive, then applying Theorems 18 and 31 with Φ=κp∈Ω1 and Ψ=κq∈Ω1, it follows that
(76)ℵp,q(g,Ji,Ω1)=logMp,q(g,Ji,Ω1)(i=1,2)
satisfy ℵp,q(g,Ji,Ω1)∈[m,M]. If we set g([a,b]T)=[m,M], then ℵp,q(g,Ji,Ω1) are means (of the function g). Note that by (68) they are monotone means.
Example 47.
Consider a family of functions
(77)Ω2={βρ:(0,∞)⟶R;ρ∈R}
defined by
(78)βρ(x)={xρρ(ρ-1),ρ≠0,1;-logx,ρ=0;xlogx,ρ=1.
We have (d2/dx2)βρ(x)=xρ-2=e(ρ-2)logx>0, which shows that βρ is a convex function for x>0. Also, from Remark 36, it follows that ρ↦(d2/dx2)βρ(x) is exponentially convex. Therefore ρ↦[x0,x1,x2;βρ] is exponentially convex (and so exponentially convex in the Jensen sense). Here we assume that [m,M]⊂(0,∞), so our family Ω2 of functions βρ fulfills the conditions of Corollary 43. In this case, Mp,q(g,Ji,Ω) from (69) becomes
(79)Mp,q(g,Ji,Ω2)={(Ji(g,βp)Ji(g,βq))1/(p-q),p≠q;exp(1-2pp(p-1)-Ji(g,β0βp)Ji(g,βp)),p=q≠0,1;exp(1-Ji(g,β02)2Ji(g,β0)),p=q=0;exp(-1-Ji(g,β0β1)2Ji(g,β1)),p=q=1.
If Ji (i=1,2) are positive, then applying Theorems 18 and 31 with Φ=βp∈Ω2 and Ψ=βq∈Ω2, it follows that there exist ξi∈[m,M], i=1,2, such that
(80)ξip-q=Ji(g,βp)Ji(g,βq).
Since the function ξi↦ξip-q is invertible for p≠q, we have
(81)m≤(Ji(g,βp)Ji(g,βq))1/(p-q)≤M.
Also, Mp,q(g,Ji,Ω2) is continuous, symmetric, and monotone (by (68)). If we set g([a,b]T)=[m,M], then we have that
(82)m=mint∈[a,b]T{g(t)}≤(Ji(g,βp)Ji(g,βq))1/(p-q)≤maxt∈[a,b]T{g(t)}=M, for i=1,2,
which shows that Mp,q(g,Ji,Ω2) are means (of the function g).
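A discrete illustration of (82) with hypothetical data (T=Z, α=1, positive weights, so J1 is the Jensen difference from the first branch of (26)); the quotient mean from (79) with p≠q stays between min g and max g:

```python
# Discrete illustration (hypothetical data, T = Z, alpha = 1) of Example 47:
# for positive weights, the quotient mean M_{p,q}(g, J1, Omega_2) from (79)
# with p != q lies between min g and max g, as in (82).

g = [1.0, 2.0, 5.0, 3.0]
w = [1.0, 3.0, 2.0, 2.0]   # positive weights
total = sum(w)
gbar = sum(gi * wi for gi, wi in zip(g, w)) / total

def beta(p, x):
    # beta_p from (78), for p != 0, 1
    return x ** p / (p * (p - 1))

def J1(p):
    # Jensen difference: first branch of (26) for Phi = beta_p
    return sum(beta(p, gi) * wi for gi, wi in zip(g, w)) / total - beta(p, gbar)

p, q = 3.0, 2.0
mean = (J1(p) / J1(q)) ** (1.0 / (p - q))
print(min(g) <= mean <= max(g))  # True
```

For this data the mean evaluates to roughly 3.11, safely inside [1, 5].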
Now we introduce one additional parameter r, in the case g([a,b]T)=[m,M]. For r≠0, substituting g↦gr, p↦p/r, and q↦q/r in (82), we get
(83)m=mint∈[a,b]T{gr(t)}≤(Ji(gr,βp)Ji(gr,βq))r/(p-q)≤maxt∈[a,b]T{gr(t)}=M, for i=1,2.
We define new generalized means as follows:
(84)Mp,q;r(g,Ji,Ω2)={(Mp/r,q/r(gr,Ji,Ω2))1/r,r≠0;Mp,q(logg,Ji,Ω1),r=0.
These new generalized means are also monotone.
Example 48.
Consider a family of functions
(85)Ω3={γρ:(0,∞)⟶(0,∞):ρ∈(0,∞)}
defined by
(86)γρ(x)={ρ-x(logρ)2,ρ≠1;x22,ρ=1.
We have (d2/dx2)γρ(x)=ρ-x>0, which shows that γρ is a convex function for every ρ>0. Also, from Remark 36, it follows that ρ↦(d2/dx2)γρ(x) is exponentially convex. Therefore ρ↦[x0,x1,x2;γρ] is exponentially convex (and so exponentially convex in the Jensen sense). Here we assume that [m,M]⊂(0,∞), so our family Ω3 of functions γρ fulfills the conditions of Corollary 43. In this case, Mp,q(g,Ji,Ω) from (69) becomes
(87)Mp,q(g,Ji,Ω3)={(Ji(g,γp)Ji(g,γq))1/(p-q),p≠q;exp(-Ji(g,id·γp)pJi(g,γp)-2plogp),p=q≠1;exp(-2Ji(g,id·γ1)3Ji(g,γ1)),p=q=1,
and by (68) it is a monotone function of the parameters p and q.
If Ji (i=1,2) are positive, then applying Theorems 18 and 31 with Φ=γp∈Ω3 and Ψ=γq∈Ω3, it follows that
(88)ℵp,q(g,Ji,Ω3)=-L(p,q)logMp,q(g,Ji,Ω3)
satisfies ℵp,q(g,Ji,Ω3)∈[m,M]. Here L(p,q) is the logarithmic mean defined by L(p,q)=(p-q)/(logp-logq) for p≠q, L(p,p)=p.
Example 49.
Consider a family of functions
(89)Ω4={δρ:(0,∞)⟶(0,∞):ρ∈(0,∞)}
defined by
(90)δρ(x)=e-x√ρρ.
We have (d2/dx2)δρ(x)=e-x√ρ>0, which shows that δρ is a convex function for every ρ>0. Also, from Remark 36, it follows that ρ↦(d2/dx2)δρ(x) is exponentially convex. Therefore ρ↦[x0,x1,x2;δρ] is exponentially convex (and so exponentially convex in the Jensen sense). Here we assume that [m,M]⊂(0,∞), so our family Ω4 of functions δρ fulfills the conditions of Corollary 43. In this case, Mp,q(g,Ji,Ω) from (69) becomes
(91)Mp,q(g,Ji,Ω4)={(Ji(g,δp)Ji(g,δq))1/(p-q),p≠q;exp(-Ji(g,id·δp)2√pJi(g,δp)-1p),p=q,
and it is a monotone function of the parameters p and q by (68).
If Ji (i=1,2) are positive, then applying Theorems 18 and 31 with Φ=δp∈Ω4 and Ψ=δq∈Ω4, it follows that
(92)ℵp,q(g,Ji,Ω4)=-(√p+√q)logMp,q(g,Ji,Ω4)
satisfies ℵp,q(g,Ji,Ω4)∈[m,M].
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] Q. Sheng, M. Fadag, J. Henderson, and J. M. Davis, "An exploration of combined dynamic derivatives on time scales and their applications," Nonlinear Analysis: Real World Applications, vol. 7, no. 3, pp. 395–413, 2006.
[2] M. R. S. Ammi, R. A. C. Ferreira, and D. F. M. Torres, "Diamond-α Jensen's inequality on time scales," Journal of Inequalities and Applications, vol. 2008, Article ID 576876, 2008.
[3] M. Anwar, R. Bibi, M. Bohner, and J. Pečarić, "Integral inequalities on time scales via the theory of isotonic linear functionals," Abstract and Applied Analysis, vol. 2011, Article ID 483595, 2011.
[4] C. Dinu, "A weighted Hermite Hadamard inequality for Steffensen-Popoviciu and Hermite-Hadamard weights on time scales," vol. 17, pp. 77–90, 2009.
[5] J. F. Steffensen, "On certain inequalities and methods of approximation," vol. 51, pp. 274–297, 1919.
[6] J. E. Pečarić, F. Proschan, and Y. L. Tong, Convex Functions, Partial Orderings, and Statistical Applications, vol. 187 of Mathematics in Science and Engineering, Academic Press, Boston, Mass, USA, 1992.
[7] R. P. Boas Jr., "The Jensen-Steffensen inequality," no. 302–319, pp. 1–8, 1970.
[8] J. Pečarić, I. Perić, and M. Rodić Lipanović, "Uniform treatment of Jensen type inequalities," vol. 16(66), no. 2, 2014.
[9] A. R. Khan, J. Pečarić, and M. Rodić Lipanović, "n-exponential convexity for Jensen-type inequalities," Journal of Mathematical Inequalities, vol. 7, no. 3, pp. 313–335, 2013.
[10] C. P. Niculescu and L.-E. Persson, Convex Functions and Their Applications: A Contemporary Approach, vol. 23 of CMS Books in Mathematics, Springer, New York, NY, USA, 2006.
[11] D. V. Widder, "Completely convex functions and Lidstone series," Transactions of the American Mathematical Society, vol. 51, pp. 387–398, 1942.
[12] I. Franjić, S. Khalid, and J. Pečarić, "On the refinements of Jensen-Steffensen inequalities," Journal of Inequalities and Applications, vol. 2011, article 12, 2011.
[13] M. Klaričić Bakula, J. Pečarić, and J. Perić, "Extensions of the Hermite-Hadamard inequality with applications," Mathematical Inequalities & Applications, vol. 15, no. 4, pp. 899–921, 2012.
[14] J. Pečarić and J. Perić, "Improvements of the Giaccardi and the Petrović inequality and related Stolarsky type means," vol. 39, no. 1, pp. 65–75, 2012.
[15] J. Jakšetić and J. Pečarić, "Exponential Convexity Method," Journal of Convex Analysis, vol. 20, no. 1, pp. 181–197, 2013.