We consider the Pλ,τM policy of a dam in which the water input is an increasing Lévy process. The release rate of the water is changed from 0 to M and from M to 0 (M>0) at the moments when the water level upcrosses level λ and downcrosses level τ (τ<λ), respectively. We determine the potential of the dam content and compute the total discounted as well as the long-run average cost. We also find the stationary distribution of the dam content. Our results extend results in the literature obtained under the assumption that the water input is a Poisson process.
1. Introduction and Summary
Lam and Lou [1] consider the control of a finite dam where the water input is a Wiener process, using Pλ,τM policies. In these policies, the water release rate is assumed to be zero until the water reaches level λ>0; as soon as this happens, the water is released at rate M>0 until the water content reaches level τ (0<τ<λ). Abdel-Hameed and Nakhi [2] discuss the optimal control of a finite dam using Pλ,τM policies, using the total discounted as well as the long-run average costs; they consider the cases where the water input is a Wiener process and a geometric Brownian motion process. Lee and Ahn [3] consider the long-run average cost case when the water input is a compound Poisson process. Abdel-Hameed [4] treats the case where the water input is a compound Poisson process with a positive drift; he obtains the total discounted cost as well as the long-run average cost. Bae et al. [5] consider the Pλ,0M policy in assessing the workload of an M/G/1 queueing system. Bae et al. [6] consider the long-run average cost for the Pλ,τM policy in a finite dam, when the input process is a compound Poisson process. In this paper, we consider the Pλ,τM policy for the more general case where the water input is assumed to be an increasing Lévy process. At any time, the release rate can be increased from 0 to M with a starting cost K1M or decreased from M to zero with a closing cost K2M. Moreover, for each unit of output, a reward R is received. Furthermore, there is a penalty cost which accrues at a rate f, where f is a bounded measurable function on the state space of the content process.
We will use the term “increasing” to mean “nondecreasing” throughout this paper.
In Section 2, we discuss the potentials of the processes of interest as well as the other results that are needed to compute the total discounted and long-run average costs. In Section 3, we obtain formulas for the cost functionals using the total discounted as well as the long-run average cost cases. In Section 4, we discuss the special cases where the water input is an increasing compound Poisson process as well as inverse Gaussian process.
2. Basic Results
The content process is best described by the bivariate process B=(Z,R), where Z={Zt, t≥0} and R={Rt, t≥0} describe the dam content and the release rate, respectively. We define the following sequence of stopping times: T̂0=inf{t≥0: Zt≥λ}, T0*=inf{t≥T̂0: Zt≤τ}, T̂n=inf{t≥Tn-1*: Zt≥λ}, Tn*=inf{t≥T̂n: Zt≤τ}, n=1,2,….
The process B has as its state space the pair of line segments S=[[0,λ)×{0}]∪[(τ,∞)×{M}].
Let I={It, t≥0} be an increasing Lévy process with drift a≥0. For each t≥0, we let It*=It-Mt. From the definition of the Pλ,τM policy, it follows that Zt=It for each t∈[0,T̂0), and
Zt = It for t∈⋃n=0∞[Tn*,T̂n+1),  Zt = It* for t∈⋃n=0∞[T̂n,Tn*).
Furthermore, IT̂n* = IT̂n, n=0,1,…. It follows that the content process Z is a delayed regenerative process, with the regeneration points being the Tn*, n=1,2,…. The penalty cost rate function is defined as follows:
f(z,r) = g(z) for (z,r)∈[0,λ)×{0},  f(z,r) = g*(z) for (z,r)∈(τ,∞)×{M},
where g:[0,λ)→R+ and g*:(τ,∞)→R+ are bounded measurable functions.
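To make the policy dynamics concrete, the following is a minimal discretized simulation of the content process Z under the Pλ,τM policy, assuming (purely as an illustration) compound Poisson input with exponential jump sizes — one concrete increasing Lévy process. The step size, the parameters, and the helper `_poisson` are illustrative choices of ours, not part of the model above.

```python
import random

def simulate_content(x0, lam, tau, M, rate, mean_jump, a=0.0, T=200.0, dt=0.01, seed=1):
    """Simulate the dam content Z under the P(lambda, tau, M) policy.

    Input: compound Poisson jumps (Poisson rate `rate`, exponential jump
    sizes with mean `mean_jump`) plus drift a -- an increasing Levy process.
    Release is 0 until Z upcrosses `lam`, then M until Z downcrosses `tau`.
    Illustrative Euler discretization only; exact event-driven simulation is
    possible for compound Poisson input.
    """
    random.seed(seed)
    z, releasing, path = x0, False, []
    steps = round(T / dt)
    for _ in range(steps):
        jump = sum(random.expovariate(1.0 / mean_jump)
                   for _ in range(_poisson(rate * dt)))
        z += a * dt + jump                # increasing Levy input over [t, t+dt]
        if releasing:
            z = max(z - M * dt, tau)      # released at rate M, floored at tau
        if not releasing and z >= lam:
            releasing = True              # upcrossing of lambda: open at rate M
        elif releasing and z <= tau:
            releasing = False             # downcrossing of tau: close
        path.append(z)
    return path

def _poisson(mu):
    # Knuth's method: number of input jumps in a small interval of mean mu
    l, k, p = pow(2.718281828459045, -mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1
```

Running this with M larger than the mean input rate produces the regenerative cycles described above: the content climbs to λ, drains to τ, and repeats.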
For any process Y={Yt, t≥0} with state space E, any Borel set A⊂E, and any functional f, Ey(f) denotes the expectation of f conditional on Y0=y, Py(A) denotes the corresponding probability measure, and IA(·) is the indicator function of the set A. Throughout, we let R=(-∞,∞), R+=[0,∞), N={1,2,…}, and N+={0,1,…}. For x,y∈R, we define x∨y=max(x,y) and x∧y=min(x,y). Throughout, we define Wλ=inf{t≥0: It≥λ} and Wτ*=inf{t≥0: It*≤τ}. For any x<λ and y>τ, let Cgα(0,x,λ) and Cg*α(M,y,τ) be the expected discounted penalty costs during the intervals (0,Wλ) and (0,Wτ*), respectively. Furthermore, let Cg(0,x,λ) and Cg*(M,y,τ) be the expected nondiscounted penalty costs during the same intervals. It follows that
Cgα(0,x,λ) = Ex∫0Wλ e-αt g(It)dt,  Cg*α(M,y,τ) = Ey∫0Wτ* e-αt g*(It*)dt,
Cg(0,x,λ) = Ex∫0Wλ g(It)dt,  Cg*(M,y,τ) = Ey∫0Wτ* g*(It*)dt.
The functionals above, which we aim to evaluate, are basic ingredients in computing the total discounted and long-run average costs associated with the Pλ,τM policy as discussed in Section 3.
Let a≥0 and ν be the drift term and the Lévy measure of the input process I, respectively. Then, for all t≥0, x≥0, and α≥0, the Laplace transform of It is of the form
Ex[e-αIt] = e-αx-tϕ(α).
The function ϕ(α) is known as the Lévy exponent and is given by ϕ(α) = αa + ∫0∞(1-e-αx)ν(dx),
where ν is a measure on [0,∞)satisfying∫0∞(x∧1)ν(dx)<∞,ν({0})=0.
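As a quick numerical illustration of the Lévy exponent, the sketch below evaluates ϕ(α)=aα+∫0∞(1-e-αx)ν(dx) for compound Poisson input with rate u and exponential(θ) jumps, for which ν(dx)=uθe-θx dx and ϕ(α)=uα/(α+θ); a midpoint-rule quadrature is compared against this closed form. All parameters are illustrative.

```python
import math

def phi_compound_poisson_exp(alpha, u, theta):
    """Closed-form Levy exponent for compound Poisson input (zero drift)
    with rate u and exponential(theta) jumps: phi(a) = u*a/(a + theta)."""
    return u * alpha / (alpha + theta)

def phi_numeric(alpha, u, theta, n=100000, xmax=50.0):
    """phi(a) = integral of (1 - e^{-a x}) nu(dx), with
    nu(dx) = u*theta*e^{-theta x} dx; midpoint rule (a sketch only)."""
    h = xmax / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += (1.0 - math.exp(-alpha * x)) * u * theta * math.exp(-theta * x) * h
    return total
```

The truncation at `xmax` is harmless here because the exponential tail of ν is negligible beyond it.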
Increasing Lévy processes include increasing compound Poisson processes, inverse Gaussian processes, gamma processes, and stable processes.
We assume that the expected value of I1 is finite throughout this paper.
To evaluate the cost functionals and other parameters of the content process, we define the Lévy process killed at Wλ as follows: X={It, t<Wλ}.
From Theorem 3.3.12 of Blumenthal and Getoor [7], it follows that the process X is a strong Markov process.
Definition 2.1.
Let Y be a Markov process with a state space E. For each α≥0, the α-potential of Y (denoted by UYα) is defined for any bounded measurable function on Eand every x∈E via ((1.8.9), p.41 of [8])
UYαf(x)≝∫Ef(z)UYα(x,dz)=Ex∫0∞e-αtf(Yt)dt.
Remark 2.2.
Throughout, we denote the α-potential of the process I by Uα. Since the process I has stationary independent increments, it follows that Uα(x,dy)=Uα(0,dy-x), for each x and y in the state space of the process I satisfying y≥x. We denote Uα(0,dy) by Uα(dy) throughout.
Since the process Iis increasing and has stationary independent increments, it follows that
Cgα(0,x,λ)=Uαg(x)=∫xλg(y)Uα(x,dy)=∫0λ-xg(x+y)Uα(dy),Cg(0,x,λ)=U0g(x)=∫xλg(y)U0(x,dy)=∫0λ-xg(x+y)U0(dy).
The following lemma follows by taking g(x)=1 for all x∈[0,λ) in (2.11) and (2.12), respectively.
Lemma 2.3.
For x≤λ one has
Ex(exp(-αWλ)) = 1 - αUαI[0,λ-x)(0) = αUαI[λ-x,∞)(0),  Ex(Wλ) = U0I[0,λ-x)(0).
The following lemma gives the Laplace transform of IWλ as well as the expected value of IWλ.
Lemma 2.4.
(a) For x<λ and α≥0,
Ex[exp(-αIWλ)]=exp(-αx)[1-ϕ(α)∫[0,λ-x)exp(-αz)U0(dz)].
(b) For x<λ,
Ex(IWλ)=x+E0(I1)E0(Wλ-x).
Proof of (a).
For x<λ and α≥0, since the process I has stationary independent increments, we have
Ex[exp(-αIWλ)] = E0[exp(-α(x+IWλ-x))]
= exp(-αx)[ϕ(α)∫[λ-x,∞) exp(-αz)U0(dz)]
= exp(-αx)[ϕ(α){∫[0,∞) exp(-αz)U0(dz) - ∫[0,λ-x) exp(-αz)U0(dz)}]
= exp(-αx)[ϕ(α){1/ϕ(α) - ∫[0,λ-x) exp(-αz)U0(dz)}]
= exp(-αx)[1 - ϕ(α)∫[0,λ-x) exp(-αz)U0(dz)],
where the second equation follows from (8) of Alili and Kyprianou [9], and the fourth equation follows from the definitions of ϕ(α) and U0.
Proof of (b).
For x<λ,
Ex(IWλ) = x + E0(IWλ-x)
= x + limα→0 [(1 - E0[exp(-αIWλ-x)])/α]
= x + limα→0 [(ϕ(α)/α)∫[0,λ-x) exp(-αz)U0(dz)]
= x + ϕ′(0)U0I[0,λ-x)(0)
= x + E0(I1)U0I[0,λ-x)(0)
= x + E0(I1)E0(Wλ-x),
where the first equation follows since the process I is a Lévy process, the third equation follows from (2.15), the fourth equation follows because ϕ(0)=0, the fifth equation follows since ϕ′(0)=E0(I1), and the last equation follows from (2.14).
To derive Cg*α(M,y,τ), Cg*(M,y,τ), Ey(exp(-αWτ*)), and Ey(Wτ*), we define X*={It*, t<Wτ*}.
Clearly, the state space of the process X* is (τ,∞). From Theorem 3.3.12 of Blumenthal and Getoor [7], it follows that the process X* is a strong Markov process.
Throughout, we assume that M≥a. Using Doob's optional sampling theorem, the following is easy to see.
Lemma 2.5.
For x≥τ,
Ex[exp(-αWτ*)]=exp(-(x-τ)η(α)),
where η(α) is the solution of the integral equation
Mη(α)=α+ϕ(η(α)).
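The equation Mη(α)=α+ϕ(η(α)) rarely admits a closed-form solution, but it is easy to solve numerically. The sketch below uses fixed-point iteration h ↦ (α+ϕ(h))/M, which is a contraction whenever M>ϕ′(0)=E0(I1); the compound Poisson example with exponential jumps is an illustrative assumption of ours.

```python
def eta(alpha, M, phi, tol=1e-12, max_iter=10000):
    """Solve M*eta = alpha + phi(eta) by fixed-point iteration
    eta <- (alpha + phi(eta)) / M.  Since phi is concave increasing with
    phi'(0) = E0(I1), the map is a contraction when M > E0(I1), and the
    iteration converges to the unique positive root."""
    h = alpha / M
    for _ in range(max_iter):
        h_new = (alpha + phi(h)) / M
        if abs(h_new - h) < tol:
            return h_new
        h = h_new
    return h

# Example: compound Poisson input with rate u and exponential(theta) jumps,
# for which phi(h) = u*h/(h + theta).  Illustrative parameters; note
# E0(I1) = u/theta = 0.5 < M, so the iteration converges.
u, theta, M, alpha = 1.0, 2.0, 1.5, 0.7
phi = lambda h: u * h / (h + theta)
root = eta(alpha, M, phi)
```

Substituting `root` back into the equation provides a direct check that it solves (2.21).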
The following lemma gives, among other things, a formula for computing Ex(Wτ*) and a condition under which this expectation is finite.
Lemma 2.6.
(a) η(0+)=0 if and only if M-E0(I1)>0.
(b) The function η(α) is a concave increasing function on R+.
(c) For x≥τ,
Ex(Wτ*) = (x-τ)/(M-E0(I1)) if M-E0(I1)>0,  Ex(Wτ*) = ∞ otherwise.
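Part (c) can be checked by simulation. The sketch below samples Wτ* exactly (event-driven, no discretization) for compound Poisson input with zero drift — an illustrative special case of ours — and compares the Monte Carlo mean with (x-τ)/(M-E0(I1)).

```python
import random

def sample_W_tau_star(x, tau, M, rate, mean_jump, rng):
    """One sample of W*_tau = inf{t: I*_t <= tau}, with I*_t = I_t - M*t and
    I a compound Poisson process (Poisson rate `rate`, exponential jumps of
    mean `mean_jump`, zero drift).  Event-driven: between jumps the level
    falls at rate M, so it hits tau after (level - tau)/M unless a jump
    arrives first."""
    t, level = 0.0, x
    while True:
        e = rng.expovariate(rate)            # time to next input jump
        if M * e >= level - tau:             # tau is reached before the jump
            return t + (level - tau) / M
        t += e
        level += rng.expovariate(1.0 / mean_jump) - M * e

rng = random.Random(7)
x, tau, M, rate, mean_jump = 3.0, 1.0, 2.0, 1.0, 1.0   # E0(I1) = rate*mean_jump = 1 < M
n = 200000
est = sum(sample_W_tau_star(x, tau, M, rate, mean_jump, rng) for _ in range(n)) / n
# Lemma 2.6(c) predicts E_x(W*_tau) = (x - tau)/(M - E0(I1)) = 2.0 here
```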
Proof of (a).
From (2.20), it follows that η(α) is an increasing function on R+ and limα→∞ η(α)=∞. Let f(x)=η-1(x); using (2.21), it follows that f(x)=Mx-ϕ(x). Furthermore, η(0+) is the largest root of f, and 0 is a root of f; since η(α) is an increasing function, f is an increasing function on the domain [η(0+),∞). It follows that the only root of the function f above is zero if and only if f′(0)>0. Observe that
f′(x)=M-ϕ′(x)=M-a-∫0∞ye-xyν(dy),
where the interchange of the differentiation and integration in the second equation is permissible using the Lebesgue dominated convergence theorem, since for each x≥0,y≥0,ye-xy<y and ∫0∞yν(dy)=E0(I1)-a<∞. The rest of the proof follows since f′(0)=M-E0(I1).
Proof of (b).
To prove part (b), first observe that f′(x) is an increasing function of its argument, and hence f(x) is a convex function of its argument. Since f(x)=η-1(x), it follows that η(α) is a concave function.
Proof of (c).
The proof of part (c) follows since, from (2.20), Wτ*<∞ almost everywhere if and only if η(0+)=0; in this case, Ex(Wτ*) = (x-τ)η′(0+) = (x-τ)/f′(0) = (x-τ)/(M-E0(I1)).
Remark 2.7.
The equation given in part (c) of Lemma 2.6 is consistent with the well-known fact about the expected busy period of the M/G/1 queue.
Let Uα* be the potential of the process X*. To find Uα*, we first need to introduce the following definition.
Definition 2.8.
A Lévy process is said to be spectrally positive (negative) if it has no negative (positive) jumps.
Clearly, a Lévy process L is spectrally positive if and only if the process -L is spectrally negative. Furthermore, the process I* is spectrally positive with bounded variation.
For θ,t∈R+, we have E[e-θIt*] = etψ(θ),
where ψ(θ) = Mθ - ϕ(θ).
We note that the function η is the right-hand inverse of the function ψ.
We now define the α-scale function, which plays a major role in the applications of spectrally positive (negative) Lévy processes. This function is closely connected to the two-sided exit problem of such processes (cf. Bertoin [10]).
Definition 2.9.
For α≥0, the α-scale function (of the process It*) W(α):R→R+ is the unique function whose restriction to R+ is continuous and has Laplace transform
∫0∞ e-θx W(α)(x)dx = 1/(ψ(θ)-α),  θ>η(α),
and is defined to be identically zero on the interval (-∞,0).
Letting α=0, we get the 0-scale function, which is referred to as the "scale function" in the literature. We denote this function by W (instead of W(0)) throughout. We note that ψ(θ) = Mθ-ϕ(θ) = (Mθ-aθ) - ∫0∞(1-e-θx)ν(dx) = Nθ - θ∫0∞ e-θx ν[x,∞)dx, where N=M-a>0. Let μ=∫0∞ xν(dx)=∫0∞ ν[x,∞)dx. For every x∈R+, let F(x)=(1/μ)∫0x ν[y,∞)dy be the equilibrium distribution function corresponding to ν. Let ρ=μ/N and assume that ρ<1. It follows that
W(x) = (1/N)∑k=0∞ ρk F(k)(x),
where F(k) is the kth convolution of F. Furthermore, we note that for α,x∈R+,W(α)(x)=∑k=0∞αkW(k+1)(x),
where W(k) is the kth convolution of W.
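The series W(x)=(1/N)∑k ρkF(k)(x) is directly computable. The sketch below evaluates it for compound Poisson input with exponential(θ) jumps and zero drift (so N=M), for which the equilibrium distribution F is again exponential(θ) and F(k) is the Erlang(k,θ) distribution; the closed form it is compared against is our own summation of the geometric/Erlang series, included only as a consistency check.

```python
import math

def scale_W_series(x, M, u, theta, kmax=60):
    """0-scale function W(x) = (1/N) * sum_k rho^k F^(k)(x) for compound
    Poisson input (rate u, exponential(theta) jumps, drift a = 0, N = M).
    F is the equilibrium distribution of the jumps, here exponential(theta),
    so F^(k) is the Erlang(k, theta) distribution function."""
    rho = (u / theta) / M                 # rho = mu/N, with mu = u/theta
    total = 0.0
    for k in range(kmax):
        if k == 0:
            fk = 1.0                      # F^(0) = point mass at 0
        else:
            # Erlang(k, theta) CDF: 1 - e^{-theta x} sum_{j<k} (theta x)^j / j!
            s = sum((theta * x) ** j / math.factorial(j) for j in range(k))
            fk = 1.0 - math.exp(-theta * x) * s
        total += rho ** k * fk
    return total / M

def scale_W_closed(x, M, u, theta):
    """Closed form obtained by summing the series above (our derivation,
    used here only as a check): W(x) = (1 - rho e^{-theta(1-rho)x})/(M(1-rho))."""
    rho = (u / theta) / M
    return (1.0 - rho * math.exp(-theta * (1.0 - rho) * x)) / (M * (1.0 - rho))
```

With ρ<1 the series converges geometrically, so a modest truncation `kmax` already agrees with the closed form to machine precision.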
We are now in a position to state and prove a lemma that characterizes Uα*.
Lemma 2.10.
Uα* is absolutely continuous with respect to the Lebesgue measure on [τ,∞), and its density is given as follows:
Uα*(x,y) = e-η(α)(x-τ) W(α)(y-τ) - W(α)(y-x),  x,y∈[τ,∞).
Proof.
Define the process I∧ to be equal to -I*; it follows that I∧ is a spectrally negative Lévy process. For a,b∈R, we let
Tb+=inf{t≥0:I∧t≥b},Ta-=inf{t≥0:I∧t≤a}.
Suprun [11] proved that (for b>0) the α-potential of the process obtained by killing the process I∧ at Tb+∧T0- is absolutely continuous with respect to the Lebesgue measure on [0,b], and its density is equal to
W(α)(x)W(α)(b-y)/W(α)(b) - W(α)(x-y),  x,y∈[0,b].
It follows that, for a,b∈R, a<b, the α-potential of the process obtained by killing the process I∧ at Tb+∧Ta- is absolutely continuous with respect to the Lebesgue measure on [a,b], and its density is equal to
W(α)(x-a)W(α)(b-y)/W(α)(b-a) - W(α)(x-y),  x,y∈[a,b].
From Lemma 4 of Pistorius [12], we have W(α)(x)=O(eη(α)x) as x→∞. Letting a→-∞ in the last density above, the α-potential of the process obtained by killing the process I∧ at Tb+ is absolutely continuous with respect to the Lebesgue measure on (-∞,b], and its density (denoted by ubα(x,y)) is as follows:
ubα(x,y) = e-η(α)(b-x)W(α)(b-y) - W(α)(x-y),  x,y∈(-∞,b].
Observe that for any A⊂(τ,∞) and x∈(τ,∞),
Px{Xt*∈A} = P{It*∈A, t<Wτ* ∣ I0*=x} = P{I∧t∈-A, t<T-τ+ ∣ I∧0=-x}.
Thus,
uα*(x,y)=u-τα(-x,-y)=e-η(α)(x-τ)W(α)(y-τ)-W(α)(y-x),x,y∈[τ,∞).
It is seen that, for x≥τ,
Cg*α(M,x,τ) = Uα*g*(x) = ∫τ∞ g*(y)Uα*(x,dy),  Cg*0(M,x,τ) = U0*g*(x) = ∫τ∞ g*(y)U0*(x,dy).
Theorem 2.11.
For any α≥0:
(a) for x≤λ,
Ex[exp(-αT*0)]=Mη(α)exp(-η(α)(x-τ))∫[λ-x,∞)exp(-zη(α))Uα(dz),
(b) for x>λ,
Ex[exp(-αT*0)]=exp(-η(α)(x-τ)).
Proof of (a).
Let Ϝ be the sigma algebra generated by (Wλ,IWλ); then we have
Ex[exp(-αT*0)] = Ex[exp(-α(Wλ+(T*0-Wλ)))]
= Ex[Ex[exp(-α(Wλ+(T*0-Wλ))) ∣ Ϝ]]
= Ex[exp(-αWλ)EIWλ exp(-αWτ*)]
= Ex[exp(-αWλ)exp(-η(α)(IWλ-τ))]
= E0[exp(-αWλ-x)exp(-η(α)(IWλ-x+x-τ))]
= exp(-η(α)(x-τ))E0[exp(-αWλ-x)exp(-η(α)(IWλ-x))]
= (α+ϕ(η(α)))exp(-η(α)(x-τ))∫[λ-x,∞) exp(-zη(α))Uα(dz)
= Mη(α)exp(-η(α)(x-τ))∫[λ-x,∞) exp(-zη(α))Uα(dz),
where the third equation follows from the second, since given Ϝ, T*0-Wλ=Wτ* almost everywhere; the fourth equation follows from (2.20) above; the seventh equation follows from (8) of Alili and Kyprianou [9]; and the last equation follows from (2.21) above.
Proof of (b).
The proof of part (b) of the theorem follows from (2.20), since for x>λ, Wλ=0 and T*0=Wτ* almost everywhere.
3. The Total Discounted, Long-Run Average Costs and the Stationary Distribution of the Dam Content
We now discuss the computation of the cost functionals using the total discounted cost as well as the long-run average cost criteria. Let W be the length of the first cycle, that is, W=T*1-T*0, and let Cα(x) be the expected discounted cost during the interval [0,T*0) when Z0=x. Since the content process Z is a delayed regenerative process with regeneration points T*0,T*1,…, using the delayed regeneration property, it follows that the total discounted cost associated with a Pλ,τM policy is given by
Cα(λ,τ) = Cα(x) + Ex(exp(-αT*0))EτCα(1)/[1-Eτ(exp(-αW))],
where Cα(1) is the total discounted cost during the interval (0,W). From the definition of Cα(x), it follows that, for x>λ,
Cα(x) = M{K1 - REx∫0Wτ* e-αt dt} + Cg*α(M,x,τ).
To compute Cα(x) for x≤λ, we let Ϝ be the sigma algebra generated by (Wλ,IWλ) and proceed as follows:
Cα(x) = M{K2 + K1Ex(e-αWλ) - REx∫WλT*0 e-αt dt} + Ex∫0Wλ e-αt g(Zt)dt + Ex∫WλT*0 e-αt g*(Zt)dt
= M{K2 + K1Ex(e-αWλ) - (R/α)[Ex(e-αWλ) - Ex(e-αT*0)]} + Ex∫0Wλ e-αt g(It)dt + Ex∫WλT*0 e-αt g*(Zt)dt
= M{K2 + K1Ex(e-αWλ) - (R/α)[Ex(e-αWλ) - Ex(e-αT*0)]} + Cgα(0,x,λ) + ExEx(∫WλT*0 e-αt g*(Zt)dt ∣ Ϝ)
= M{K2 + K1Ex(e-αWλ) - (R/α)[Ex(e-αWλ) - Ex(e-αT*0)]} + Cgα(0,x,λ) + Ex[e-αWλ EIWλ(∫0Wτ* e-αt g*(It*)dt)]
= M{K2 + K1Ex(e-αWλ) - (R/α)[Ex(e-αWλ) - Ex(e-αT*0)]} + Cgα(0,x,λ) + Ex[e-αWλ Cg*α(M,IWλ,τ)],
where the second equation follows from the definition of the process Z, the third equation follows from the definition of Cgα(0,x,λ), the fourth equation follows from the definition of the content process Z and since, given Ϝ, T*0-Wλ=Wτ* almost everywhere, and the last equation follows from the definition of Cg*α(M,y,τ).
We note that EτCα(1) = Cα(τ).
The following lemma shows how Eτ(exp(-αW))(given in (3.1)) can be computed and also gives a formula for computing the expected value of W, which we will need later on to compute the long-run average cost.
Lemma 3.1.
Let W be the length of the first cycle as defined above. Then:
(a) for each α≥0, Eτ(e-αW) = 1 - Mη(α)∫0λ-τ exp(-zη(α))Uα(dz);
(b) Eτ(W) = ME0(Wλ-τ)/(M-E0(I1)) if E0(I1)<M, and Eτ(W) = ∞ otherwise.
Proof of (a).
We note that, given Z0=τ, T0*=W almost everywhere. Thus, for each α≥0,
Eτ(e-αW) = Eτ(e-αT0*) = Mη(α)∫[λ-τ,∞) exp(-zη(α))Uα(dz)
= Mη(α)[∫0∞ exp(-zη(α))Uα(dz) - ∫0λ-τ exp(-zη(α))Uα(dz)]
= Mη(α)[1/(Mη(α)) - ∫0λ-τ exp(-zη(α))Uα(dz)]
= 1 - Mη(α)∫0λ-τ exp(-zη(α))Uα(dz),
where the second equation follows from (2.37) upon substituting τ for x, and the third equation follows from the definition of Uα and (2.21).
Proof of (b).
From (3.5), it is evident that, starting at τ, W is finite almost everywhere if and only if η(0+)=0. From part (a) of Lemma 2.6, it follows that W is finite almost everywhere if and only if E0(I1)<M. From (2.14) and (3.5), we have
Eτ(W) = Mη′(0)E0(Wλ-τ) if E0(I1)<M,  Eτ(W) = ∞ otherwise.
The proof of (b) is complete, since, as shown in the proof of part (c) of Lemma 2.6, η′(0) = 1/(M-E0(I1)) if E0(I1)<M.
Now, we turn our attention to computing the long-run average cost per unit of time. Let M*=M-E0(I1) and assume that M*>0. From (3.1), (3.3), and (3.4), it follows, by a Tauberian theorem, that the long-run average cost per unit of time, denoted by C(λ,τ), is given by
C(λ,τ) = [M{K + RE0(Wλ-τ)} + Cg(0,τ,λ) + Eτ(Cg*(M,IWλ,τ))]/Eτ(W) - RM
= [KM* + (M*/M){Cg(0,τ,λ) + Eτ(Cg*(M,IWλ,τ))}]/E0(Wλ-τ) - RE0(I1),
where K=K1+K2, and the second equation follows from (3.6) and the first equation.
Remark 3.2.
Assume that both penalty functions g and g* are identically zero on their domains and that M* defined above is greater than zero. The following follows from (3.10) above:
C(λ,τ) = KM*/E0(Wλ-τ) - RE0(I1).
Letting R=0, K=0, g(x)=I[τ,z](x) for x∈[0,λ), and g*(x)=I[τ,z](x) for x∈[τ,∞) in (3.10), we get the following proposition, which generalizes the results obtained by Lee and Ahn [3], who assumed that the input process is a compound Poisson process and τ=0.
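The zero-penalty average cost of Remark 3.2 is easy to evaluate and to check numerically. The sketch below does so for compound Poisson input with rate u and exponential(θ) jumps (an illustrative special case): there E0(I1)=u/θ, and E0(Wλ-τ)=R0(λ-τ)/u=(1+θ(λ-τ))/u, since the renewal function of the exponential(θ) distribution is R0(x)=1+θx. A Monte Carlo estimate of E0(Wλ-τ) is included as a sanity check.

```python
import random

def avg_cost(lam, tau, K, R, M, u, theta):
    """Long-run average cost with zero penalty functions (Remark 3.2),
    for compound Poisson input with rate u and exponential(theta) jumps:
    C = K*M*/E0(W_{lam-tau}) - R*E0(I1), assuming M* = M - u/theta > 0."""
    M_star = M - u / theta
    E_W = (1.0 + theta * (lam - tau)) / u     # E0(W_{lam-tau}) = R0(lam-tau)/u
    return K * M_star / E_W - R * u / theta

def mc_E_W(level, u, theta, n=200000, seed=3):
    """Monte Carlo check of E0(W_level): time at which the compound Poisson
    input first reaches `level` (interarrival times exponential(u))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t, z = 0.0, 0.0
        while z < level:
            t += rng.expovariate(u)           # wait for the next jump
            z += rng.expovariate(theta)       # add an exponential jump
        total += t
    return total / n
```

With λ-τ=3, u=θ=1 the renewal formula gives E0(Wλ-τ)=4, which the simulation reproduces.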
Proposition 3.3.
Assume that M>E0(I1). Let Z=limt→∞ Zt, and let H(z) be the distribution function of Z. Then, for z∈[τ,∞),
H(z) = (M*/M)E0(W(λ∧z)-τ)/E0(Wλ-τ) + (M*/M)Eτ[U0*I[τ,z](IWλ)]/E0(Wλ-τ).
4. Special Cases
In this section, we give the basic identities needed to compute the cost functionals when the input process is an inverse Gaussian process and a compound Poisson process, respectively.
Case 1.
Assume that I is an inverse Gaussian process with transition function defined, for x≥0, y≥0, μ>0, and σ2>0, by
p(t,x,y) = [t/(σ√(2π(y-x)3))] exp[-(μ(y-x)-t)2/(2(y-x)σ2)],  y≥x,
p(t,x,y) = 0,  y<x.
It follows that the process I is an increasing Lévy process with state space R+, Lévy measure
ν(dy) = [1/(σ√(2πy3))] e-yμ2/(2σ2) dy,
and Lévy exponent
ϕ(α) = (√(2ασ2+μ2) - μ)/σ2.
Furthermore, E0(I1)=1/μ.
Substituting this Lévy exponent in (2.21), it is seen that the solution of this equation is as follows (we omit the proof):
η(α) = α/M + [(1-Mμ) + √(2αMσ2 + (1-Mμ)2)]/(M2σ2).
To find the α-potential of the process I, for each x≥0 and β≥0, we define fβ(x)=exp(-βx), and it is easily seen that
Uαfβ(0) = σ2/(ασ2 + √(2βσ2+μ2) - μ).
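The closed form for η(α) can be verified directly: substituting it back into equation (2.21) with the inverse Gaussian Lévy exponent should return an identity. The parameters below are illustrative (chosen so that M>E0(I1)=1/μ).

```python
import math

def phi_ig(h, mu, sigma2):
    """Levy exponent of the inverse Gaussian input process:
    phi(h) = (sqrt(2*h*sigma^2 + mu^2) - mu)/sigma^2."""
    return (math.sqrt(2.0 * h * sigma2 + mu * mu) - mu) / sigma2

def eta_ig(alpha, M, mu, sigma2):
    """Closed-form solution of M*eta = alpha + phi(eta) for the inverse
    Gaussian Levy exponent, as displayed above."""
    return (alpha / M
            + ((1.0 - M * mu)
               + math.sqrt(2.0 * alpha * M * sigma2 + (1.0 - M * mu) ** 2))
            / (M * M * sigma2))
```

Note that for M>1/μ the formula gives η(0+)=0, in agreement with Lemma 2.6(a).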
Throughout, we let φZ(·) denote the standard normal density function, and let erf(·) and erfc(·) denote the well-known error and complementary error functions, respectively. Inverting the above function with respect to β, we have Uα(dy) = uα(y)dy, where
uα(y) = (σ/√y)φZ(√y μ/σ) + [(μ-ασ2)/2] eαy((ασ2/2)-μ) erfc(√y(ασ2-μ)/√(2σ2)).
From (2.13), it follows that, for x≤λ,
Ex(exp(-αWλ)) = αUαI[λ-x,∞)(0)
= [(ασ2-μ)/(ασ2-2μ)] eα(λ-x)((ασ2/2)-μ) erfc(√(λ-x)(ασ2-μ)/√(2σ2)) - [μ/(ασ2-2μ)] erfc(√(λ-x) μ/√(2σ2)),
where the last equation follows by integrating Uα(dy) over the interval [λ-x,∞).
Inverting the right-hand side of (4.8) with respect to α, it follows that, given I0=x≤λ, the distribution function of Wλ (denoted by FWλ(·)) is given by
FWλ(t) = (1/2)erfc{((λ-x)μ-t)/√(2σ2(λ-x))} - (1/2)e2μt/σ2 erfc{((λ-x)μ+t)/√(2σ2(λ-x))},  t≥0.
Furthermore, for x≤λ,
Ex(Wλ) = U0I[0,λ-x)(0) = σ∫0λ-x (1/√y)φZ(√y μ/σ)dy + (μ/2)∫0λ-x erfc(-√y μ/√(2σ2))dy
= (λ-x)μ/2 + σ√(λ-x) φZ(√(λ-x) μ/σ) + [(λ-x)μ/2 + σ2/(2μ)] erf(√(λ-x) μ/√(2σ2)),
where the last equation follows from the second upon tedious calculations, which we omit.
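The omitted calculation can be checked numerically: integrating the 0-potential density u0 over (0, λ-x) by quadrature should reproduce the closed form for Ex(Wλ). The sketch below substitutes y=s2 to remove the integrable 1/√y singularity at the origin; parameters are illustrative.

```python
import math

def phi_Z(z):
    # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def u0(y, mu, sigma):
    """0-potential density of the inverse Gaussian input (alpha = 0):
    u0(y) = (sigma/sqrt(y)) phi_Z(sqrt(y) mu/sigma)
            + (mu/2) erfc(-sqrt(y) mu / sqrt(2 sigma^2))."""
    return (sigma / math.sqrt(y)) * phi_Z(math.sqrt(y) * mu / sigma) \
        + (mu / 2.0) * math.erfc(-math.sqrt(y) * mu / (math.sqrt(2.0) * sigma))

def E_W_lambda_closed(c, mu, sigma):
    """Closed form for E_x(W_lambda), with c = lambda - x."""
    r = math.sqrt(c)
    return (c * mu / 2.0
            + sigma * r * phi_Z(r * mu / sigma)
            + (c * mu / 2.0 + sigma * sigma / (2.0 * mu))
            * math.erf(r * mu / (math.sqrt(2.0) * sigma)))

def E_W_lambda_numeric(c, mu, sigma, n=20000):
    """Midpoint-rule integral of u0 over (0, c) after the substitution
    y = s^2, which makes the integrand smooth at the origin."""
    r = math.sqrt(c)
    h = r / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += u0(s * s, mu, sigma) * 2.0 * s * h
    return total
```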
We now turn our attention to computing the distribution function of IWλ (denoted by FIWλ(x)). We first need the following identity, which expresses the Lévy exponent ϕ(α) given in (4.3) in a form suitable for computing FIWλ. The proof of this identity follows from (4.3) after some simple algebraic manipulations, which we omit:
ϕ(α) = (√(2ασ2+μ2) - μ)/σ2 = (2/σ2)[α/ϕ(α) - μ].
For each β∈R+, we write
∫λ∞ e-βx FIWλ(x)dx = (ϕ(β)/β)∫λ∞ e-βx u0(x)dx
= (2/σ2)[(1/ϕ(β))∫λ∞ e-βx u0(x)dx - (μ/β)∫λ∞ e-βx u0(x)dx]
= (2/σ2)[∫0∞ e-βx u0(x)dx ∫λ∞ e-βx u0(x)dx - (μ/β)∫λ∞ e-βx u0(x)dx]
= (2/σ2)∫λ∞ e-βx {∫λx (u0(x-y)-μ)u0(y)dy}dx,
where the first equation follows from the second equation in the proof of part (a) of Lemma 2.4 by letting x=0, the second equation follows from (4.11), the third equation follows from (4.5) upon letting α=0, and the fourth equation follows from the third through integration by parts.
From (4.12), it follows that, for each x≥λ,
FIWλ(x) = (2/σ2)∫λx {u0(x-y)-μ}u0(y)dy.
Case 2.
Assume that I is an increasing compound Poisson process with intensity u and with F as the distribution function of the size of each jump. This model is treated in detail in references [3, 4, 6]. Here, we give the basic entities involved when the drift term a=0. For the proofs of these entities and a more in-depth analysis of this case, the reader is referred to the above-mentioned references.
It is obvious that
ϕ(α) = u∫0∞(1-e-αx)F(dx),  E0(I1) = uμ, where μ is the expected jump size of the compound Poisson process.
Define, for any α≥0 and y≥0, Fα(y)=(u/(u+α))F(y). For n∈N+, we let Fα(n)(y) be the nth convolution of Fα(y), where Fα(0)(y)=1 for all y≥0. For each y≥0, we define Rα(y)=∑n=0∞ Fα(n)(y) to be the renewal function corresponding to Fα(y). It follows that
Uα(dy) = [1/(u+α)]Rα(dy).
Furthermore, for x≤λ,
Ex(exp(-αWλ)) = 1 - αUαI[0,λ-x)(0) = 1 - [α/(u+α)]Rα(λ-x),  Ex(Wλ) = U0I[0,λ-x)(0) = (1/u)R0(λ-x).
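These identities are easy to check for exponential(θ) jumps. Summing the geometric/Erlang series for Rα in that case gives Ex(exp(-αWλ)) = (u/(u+α)) e-θα(λ-x)/(u+α) — a closed form we derive here only as an illustration; it is not stated in the text. The sketch below compares it with a direct Monte Carlo estimate.

```python
import math, random

def laplace_W_lambda(alpha, gap, u, theta):
    """E_x(e^{-alpha W_lambda}) = 1 - (alpha/(u+alpha)) R_alpha(lam - x)
    for compound Poisson input with rate u and exponential(theta) jumps;
    summing the geometric/Erlang series for R_alpha yields the closed form
    below (gap = lam - x; our derivation, used for illustration)."""
    p = u / (u + alpha)
    return p * math.exp(-theta * alpha * gap / (u + alpha))

def mc_laplace(alpha, gap, u, theta, n=200000, seed=11):
    """Monte Carlo: W_lambda is the first jump epoch at which the
    accumulated exponential jumps exceed `gap`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t, z = 0.0, 0.0
        while z < gap:
            t += rng.expovariate(u)        # exponential interarrival time
            z += rng.expovariate(theta)    # exponential jump size
        total += math.exp(-alpha * t)
    return total / n
```

Letting α→0 in the closed form gives 1, as it must, and its α-derivative at 0 recovers Ex(Wλ)=(1+θ(λ-x))/u, matching R0(x)=1+θx.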
Also,
E0(e-αIWλ) = ϕ(α)∫λ∞ e-αx U0(dx)
= ∫λ∞ e-αx R0(dx) - ∫0∞ e-αx F(dx) ∫λ∞ e-αx R0(dx)
= ∫λ∞ e-αx R0(dx) - ∫λ∞ e-αx ∫λx F(dx-y)R0(dy),
where the first equation follows from the second equation in the proof of part (a) of Lemma 2.4, by letting x=0. Furthermore, the second equation follows from (4.14) and (4.15).
Inverting (4.17) with respect to α, the distribution function of IWλ, denoted by G, is given through
G(dx) = [R0(dx) - ∫λx F(dx-y)R0(dy)]I[λ,∞)(x) = [F(dx) + ∫(0,λ) F(dx-y)R0(dy)]I[λ,∞)(x).
Acknowledgment
This research was supported, in part, by a 2010 Summer Research Grant from the College of Business and Economics, UAE University.
References
[1] Y. Lam and J. H. Lou, "Optimal control of a finite dam: Wiener process input," Journal of Applied Probability, vol. 24, no. 1, pp. 186–199, 1987.
[2] M. Abdel-Hameed and Y. Nakhi, "Optimal control of a finite dam using Pλ,τM policies and penalty cost: total discounted and long run average cases," Journal of Applied Probability, vol. 28, no. 4, pp. 888–898, 1990.
[3] E. Y. Lee and S. K. Ahn, "PλM-policy for a dam with input formed by a compound Poisson process," Journal of Applied Probability, vol. 35, no. 2, pp. 482–488, 1998.
[4] M. Abdel-Hameed, "Optimal control of a dam using Pλ,τM policies and penalty cost when the input process is a compound Poisson process with positive drift," Journal of Applied Probability, vol. 37, no. 2, pp. 408–416, 2000.
[5] J. Bae, S. Kim, and E. Y. Lee, "A PλM policy for an M/G/1 queueing system," vol. 26, pp. 929–939, 2002.
[6] J. Bae, S. Kim, and E. Y. Lee, "Average cost under the Pλ,τM policy in a finite dam with compound Poisson inputs," Journal of Applied Probability, vol. 40, no. 2, pp. 519–526, 2003.
[7] R. M. Blumenthal and R. K. Getoor, Markov Processes and Potential Theory, Pure and Applied Mathematics, Vol. 29, Academic Press, New York, NY, USA, 1968.
[8] J. Lamperti, Stochastic Processes: A Survey of the Mathematical Theory, Springer, New York, NY, USA, 1977.
[9] L. Alili and A. E. Kyprianou, "Some remarks on first passage of Lévy processes, the American put and pasting principles," The Annals of Applied Probability, vol. 15, no. 3, pp. 2062–2080, 2005.
[10] J. Bertoin, Lévy Processes, Cambridge University Press, Cambridge, UK, 1996.
[11] V. N. Suprun, "The ruin problem and the resolvent of a killed independent increment process," vol. 28, pp. 39–45, 1976.
[12] M. R. Pistorius, "On exit and ergodicity of the spectrally one-sided Lévy process reflected at its infimum," Journal of Theoretical Probability, vol. 17, no. 1, pp. 183–220, 2004.