On the Preservation of Infinite Divisibility under Length-Biasing

Let $X$ be a positive random variable with finite mean $\mu$ and let $\hat X$ denote its length-biased version, whose distribution function is $\hat F(x)=\mu^{-1}\int_0^x y\,dF(y)$. It is known that $\mathcal{L}(X)$ is infinitely divisible if and only if $\hat X \stackrel{d}{=} X+Z$, where $Z$ is independent of $X$. Here we assume this relation and ask whether $\mathcal{L}(Z)$ or $\mathcal{L}(\hat X)$ is infinitely divisible. Examples show that both, neither, or exactly one of the components of the pair $(\mathcal{L}(X),\mathcal{L}(\hat X))$ can be infinitely divisible. Some general algorithms facilitate exploring the general question. It is shown that length-biasing up to the fourth order preserves infinite divisibility when $\mathcal{L}(X)$ has a certain compound Poisson law or the Lambert law, and it is conjectured for these examples that this extends to all orders of length-biasing.


Introduction
Let $X$ be a nonnegative random variable whose distribution function (DF) and Laplace-Stieltjes transform (LST) are denoted by $F(x)$ and $\phi(s)$, respectively. If the law of $X$, $\mathcal{L}(X)$, is infinitely divisible (infdiv), then $\phi(s)=\exp(-\Phi(s))$, where the Laplace exponent (or cumulant function) $\Phi(s)$ is a Bernstein function, denoted by $\Phi\in\mathbf{B}$. This means that there is a measure (called the Lévy measure) $\Pi$ on $(0,\infty)$ satisfying $\int_0^\infty (x\wedge 1)\,\Pi(dx)<\infty$ and a constant $\delta\ge 0$ such that
$$\Phi(s)=\delta s+\int_0^\infty\bigl(1-e^{-sx}\bigr)\Pi(dx). \qquad (1)$$
In this case $\delta=\inf\operatorname{supp}(F)$ is the left extremity of the support of $F$, and hence we will set $\delta=0$, thereby losing no generality. Differentiation of $\phi$ yields $-\phi'(s)=\phi(s)\Phi'(s)$, which in turn is equivalent to the convolution identity
$$\int_0^x y\,dF(y)=\int_{[0,x]}F(x-y)\,K(dy), \qquad (2)$$
where $K(dy)=y\,\Pi(dy)$. Conversely, if a DF $F$ satisfies (2) where $K$ is a measure having a finite LST, then $F$ is the DF of an infdiv law. See Theorem 4.10 in [1] and the reference there to Steutel's original formulation of this result.
Suppose now that the first moment $\mu=E(X)\in(0,\infty)$, and let $\hat F(x)=\mu^{-1}\int_0^x y\,dF(y)$ denote the DF of the length-biased (or size-biased) version of $\mathcal{L}(X)$. Clearly $\mu=-\phi'(0)=\int_0^\infty x\,dF(x)$, and hence $\mu^{-1}K[0,x]$ is the DF of a random variable, $Z$ say. So if $\hat X$ denotes a random variable having the DF $\hat F$ and $\mathcal{L}(X)$ is infdiv, then (2) has the random variable formulation
$$\hat X \stackrel{d}{=} X+Z, \qquad (3)$$
where $\stackrel{d}{=}$ denotes equality in law and the random variables on the right-hand side are independent. (Note that it is always assumed that random variables occurring on the right-hand side of in-law equalities are independent.) The equality (3) underlines the fact that length-biasing is an increasing operation with respect to the stochastic order: $P(\hat X>x)\ge P(X>x)$ if $x>0$. Conversely, if (3) holds for some nonnegative defect random variable $Z$, then $\mathcal{L}(X)$ is infdiv. See [2] for more on this result.
The primary question we address is: if (3) holds, is $\mathcal{L}(\hat X)$ also infdiv? A secondary question is whether $\mathcal{L}(Z)$ is infdiv; if it is, then clearly $\mathcal{L}(\hat X)$ is infdiv. The primary question can be extended to asking whether arbitrary-order length-biasing of an infdiv law $\mathcal{L}(X)$ also is infdiv. Here, if $r>0$, then we define the order-$r$ length-biased version of $\mathcal{L}(X)$ by
$$\hat F_r(x)=\mathcal{L}_r(F)(x)=\mu_r^{-1}\int_0^x y^r\,dF(y),$$
where $\mu_r:=E(X^r)\in(0,\infty)$. Thus $\hat F=\hat F_1$, and $\mathcal{L}_r$ denotes the length-bias operator of order $r$ acting on distribution functions having a finite moment of order $r$. We extend this notation by writing $\hat X_r=\mathcal{L}_r X$ for a random variable whose DF is $\hat F_r$, and we denote the corresponding LST by $\hat\phi_r(s)$. See [3] for various properties of $\mathcal{L}_r$.
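As a quick sanity check on the order-$r$ operator, the following self-contained Python sketch (the function names are mine, not from the paper) verifies numerically that order-$r$ length-biasing of the gamma$(\alpha)$ law yields the gamma$(\alpha+r)$ law, by checking that the mean of $\hat F_r$, namely $\mu_{r+1}/\mu_r$, equals $\alpha+r$:

```python
import math

def gamma_density(x, alpha):
    """Density of the standard gamma(alpha) law."""
    return x ** (alpha - 1) * math.exp(-x) / math.gamma(alpha)

def simpson(f, a, b, n=20000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def length_biased_mean(alpha, r):
    """Mean of the order-r length-biased version of gamma(alpha): mu_{r+1}/mu_r."""
    mu_r = simpson(lambda x: x ** r * gamma_density(x, alpha), 1e-12, 80.0)
    m = simpson(lambda x: x ** (r + 1) * gamma_density(x, alpha), 1e-12, 80.0)
    return m / mu_r

alpha, r = 2.5, 3
print(length_biased_mean(alpha, r))  # should be close to alpha + r = 5.5
```

This also previews the fact, used later, that the defect laws of the gamma family are particularly simple.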
The paper is structured as follows. Definitions of some important classes of infdiv laws are collected in Section 2. Generalities about length-biasing and infinite divisibility are addressed in Section 3. First, several examples show that both, neither, or exactly one member of the pair $(\mathcal{L}(X),\mathcal{L}(\hat X))$ can be infdiv. Second, if $\mathcal{L}(\hat X_r)$ is infdiv, then there is a defect law $\mathcal{L}(Z_r)$ whose infdiv status can in turn be investigated. This is the approach taken for the particular cases discussed in Sections 4 and 5. Theorem 7 provides a convenient recursive method of computing the LSTs of the defect laws $\mathcal{L}(Z_r)$. The section ends with a general additive representation for $\mathcal{L}(\hat X_r)$, assuming that it is infdiv, and also Lemma 8, which gives a general formula for the result of applying $\mathcal{L}_r$ to the sum of two independent random variables.
In Section 4 we examine the case where $\mathcal{L}(X)$ has a unit rate compound Poisson law with exponentially distributed jumps, and Section 5 deals with the case which first motivated this paper; that is, $\mathcal{L}(X)$ is the Lambert law introduced by Pakes [4]. As mentioned, the approach in both cases is, having established that $\mathcal{L}(\hat X_r)$ is infdiv, to investigate as far as possible the infdiv status of $\mathcal{L}(Z_r)$. We thereby establish in both cases that $\mathcal{L}(\hat X_r)$ is infdiv for $r=1,\ldots,4$, and we conjecture that this is the case for all $r$. Our algebraic manipulations become more involved as $r$ increases, suggesting that other methods are needed to resolve our conjectures one way or the other.
We end this introduction by reminding the reader that length-biasing is important in contexts such as random sampling of regions in spatial problems, for example, the distribution of the volume or surface area of spheres given the distribution of radii; see page 170 in [5]. Another context is measuring income inequality via the Lorenz curve, that is, the set $\{(F(x),\hat F(x)):x\ge0\}$; see page 40 in [6]. Many probability laws can be characterized through relations involving length-biasing, as in [3, 7]. The first of this pair of references cites other occurrences of length-biasing.

Classes of Infinitely Divisible Laws
Self-decomposable laws comprise the limit laws of affine transformations of sums of independent random variables; equivalently, $\mathcal{L}(X)$ is self-decomposable (SD) if for each $c\in(0,1)$ there is a random variable $X_c$, independent of $X$, such that $X\stackrel{d}{=}cX+X_c$.
Following terminology in [8], say that the Laplace exponent $\Phi\in\mathbf{B}$ is a complete Bernstein function, written $\Phi\in\mathbf{CB}$, if the Lévy measure has a density $\ell$ with $\ell\in\mathbf{CM}$, the class of completely monotone functions; that is,
$$\Phi(s)=\delta s+\int_0^\infty\bigl(1-e^{-sx}\bigr)\ell(x)\,dx,\qquad \ell\in\mathbf{CM}. \qquad (4)$$
The corresponding class of laws are termed Bondesson (BO) laws in [8] and $T_2$ laws in [9], which term replaces the original descriptive designation GCMED, meaning "generalised convolution of mixtures of exponential distributions." This last term underlines the fact that the BO laws comprise the smallest class which is closed under convolutions and weak limits and contains the exponential mixtures. Finally, we observe that, like $\mathbf{B}$, $\mathbf{CB}$ is closed under composition.
An important subclass of complete Bernstein Lévy exponents is the set of Thorin Bernstein functions $\mathbf{TB}$, defined by the requirement that $x\ell(x)\in\mathbf{CM}$. Thus $\mathbf{TB}\subset\mathbf{CB}$. Note that the composition of two functions in $\mathbf{TB}$ need not be in $\mathbf{TB}$, although it will be in $\mathbf{CB}$. Infdiv laws $\mathcal{L}(X)$ having a Lévy exponent in $\mathbf{TB}$ comprise those laws which are the weak limits of sums of independent gamma distributed random variables, and hence they are called generalized gamma convolutions (GGCs). The class of GGCs is the smallest which contains the gamma laws and is closed under convolutions and weak limits. They have become important because in recent years the self-decomposability of many familiar continuous laws has been demonstrated by showing that they are GGCs; see [9] or [1]. The Lévy exponent of a GGC has the form
$$\Phi(s)=\delta s+\int_{(0,\infty)}\log\Bigl(1+\frac{s}{u}\Bigr)U(du), \qquad (5)$$
where the Thorin measure $U$ satisfies $\int_{(0,1]}|\log u|\,U(du)<\infty$ and $\int_{(1,\infty)}u^{-1}\,U(du)<\infty$. Computing the derivative of the Laplace exponent (5) and noting that $(s+u)^{-1}$ is the LST of $e^{-ux}$, we obtain $\Phi'(s)=\delta+\int_{(0,\infty)}(s+u)^{-1}U(du)$. Comparing this with the derivative obtained from (4), we obtain the following known result (page 352 in [1]).
Finally, we say that a positive-valued function $h(x)$ ($x\ge0$) is hyperbolically completely monotone (HCM) if, for each $u>0$, the function of $v>0$ equal to $h(uv)h(u/v)$ is a completely monotone function of $w=v+v^{-1}$. A key result is that a law which has an HCM density is a GGC and hence is self-decomposable. Example 3 below gives two examples and [9] presents many more. A recent summary with examples of the above ideas is [10].
We end this section with the following result about LSTs having the following form. Let $\{p_j : j=1,2,\ldots\}$ be a discrete law, $\{c(j)\}$ a sequence of positive numbers, and $0\le a(j)\le2$ constants. The gamma mixture LST
$$\phi(s)=\sum_{j\ge1}p_j\Bigl(1+\frac{s}{c(j)}\Bigr)^{-a(j)} \qquad (7)$$
will arise in examples below. Let $\beta(a,b)$ have the beta density $\propto x^{a-1}(1-x)^{b-1}$ on $(0,1)$, where, conventionally, $a,b>0$. It is consistent with the moment structure of the beta laws to specify $P(\beta(0,b)=0)=1$ and $P(\beta(a,0)=1)=1$. Also, let $\gamma_a$ be a random variable having the standard gamma law with density $\propto x^{a-1}e^{-x}$.
Proof. It follows from the well-known gamma-beta identity $\beta(a,b)\gamma_{a+b}\stackrel{d}{=}\gamma_a$ that, with $b=2-a(j)$, the $j$th term of (7) is the LST of $c(j)^{-1}\beta(a(j),2-a(j))\gamma_2$. Hence the form (7) follows by computing the expectation at (8). The infdiv assertion follows because scale mixtures of the gamma(2) law are infdiv; see page 344 in [1].
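The gamma-beta identity $\beta(a,b)\gamma_{a+b}\stackrel{d}{=}\gamma_a$ invoked here can be checked exactly through moments, since $E[\beta(a,b)^k]=\prod_{i=0}^{k-1}(a+i)/(a+b+i)$ and $E[\gamma_a^k]=\prod_{i=0}^{k-1}(a+i)$. A small illustrative script (helper names are mine), using exact rational arithmetic:

```python
from fractions import Fraction

def rising(a, k):
    """Rising factorial a(a+1)...(a+k-1) as an exact fraction."""
    out = Fraction(1)
    for i in range(k):
        out *= a + i
    return out

def beta_moment(a, b, k):
    """E[beta(a,b)^k] = prod_{i<k} (a+i)/(a+b+i)."""
    return rising(a, k) / rising(a + b, k)

def gamma_moment(a, k):
    """E[gamma_a^k] = a(a+1)...(a+k-1)."""
    return rising(a, k)

a, b = Fraction(3, 2), Fraction(5, 2)
for k in range(1, 8):
    # independence gives E[(beta * gamma_{a+b})^k] = E[beta^k] E[gamma_{a+b}^k]
    lhs = beta_moment(a, b, k) * gamma_moment(a + b, k)
    rhs = gamma_moment(a, k)
    assert lhs == rhs
print("gamma-beta identity verified through 7 moments")
```

Since both laws are determined by their moments, agreement of all moments is a genuine verification of the identity.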
We note that scale mixtures of the gamma$(a)$ law may, or may not, be infdiv if $a>2$; see page 409 in [1]. Finally, although we will not use this, it is worth observing that a limit argument based on (7) and Lemma 2 shows that if $G$ is a DF on $[0,\infty)$ and $a(v)$ is a function on this set with values in $[0,2]$, then
$$\phi(s)=\int_0^\infty\Bigl(1+\frac{s}{v}\Bigr)^{-a(v)}G(dv)$$
is an infdiv LST.

General Observations
We begin with examples which illustrate what is possible in relation to the infinite divisibility of $\mathcal{L}(X)$ and its length-biased versions. The first simply reminds readers that there is a rich collection of infdiv laws whose length-biased versions of all orders also are infdiv.
Example 3. Suppose that $a$ is a real constant, $\sigma>0$, and $N$ has the standard normal law. Then $X=\exp(a+\sigma N)$ has the lognormal law whose density function is $\propto x^{(a/\sigma^2)-1}\exp(-(\log x)^2/2\sigma^2)$. This density function is HCM (page 59 in [9]), and clearly $\hat X_r$ has a lognormal law for all real $r$.
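The closure of the lognormal family under length-biasing follows by completing the square: $x^r$ times the lognormal$(a,\sigma)$ density is $\mu_r=\exp(ra+r^2\sigma^2/2)$ times the lognormal$(a+r\sigma^2,\sigma)$ density. A hypothetical numerical check (function names are illustrative only):

```python
import math

def lognormal_pdf(x, a, sigma):
    """Density of exp(a + sigma*N), N standard normal."""
    return math.exp(-(math.log(x) - a) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

def biased_pdf(x, a, sigma, r):
    """Order-r length-biased lognormal density x^r f(x)/mu_r,
    with mu_r = E[X^r] = exp(r*a + r^2 sigma^2 / 2)."""
    mu_r = math.exp(r * a + r ** 2 * sigma ** 2 / 2)
    return x ** r * lognormal_pdf(x, a, sigma) / mu_r

a, sigma, r = 0.3, 0.8, 2.0
for x in (0.2, 1.0, 3.0, 10.0):
    # the biased density coincides with the lognormal(a + r*sigma^2, sigma) density
    assert abs(biased_pdf(x, a, sigma, r)
               - lognormal_pdf(x, a + r * sigma ** 2, sigma)) < 1e-12
print("length-biased lognormal is lognormal with shifted location")
```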
More examples of HCM densities may be found in [9], together with the fact that if $\mathcal{L}(X)$ has an HCM density, then so does $\mathcal{L}(X^q)$ if $|q|\ge1$.
The next example shows the existence of laws $\mathcal{L}(X)$ whose length-biased versions $\mathcal{L}(\hat X_r)$ are not infdiv for any $r\ge0$.
Example 4. It is known that infinite divisibility of a positive law $\mathcal{L}(X)$ imposes constraints on how fast the survivor function $\bar F(x)=P(X>x)$ can go to zero as $x\to\infty$. In fact, if $\mathcal{L}(X)$ is infdiv, then it follows from (9.6) on page 114 in [1] that $\bar F(x)\ge\exp(-cx\log x)$ for some $c>0$ and all large $x$. Laws with a bounded support comprise the extreme contradiction of this condition; they are not infdiv and neither are their length-biased versions. Laws with unbounded support where $\bar F(x)=O(\exp(-cx^{\rho}))$, with $c>0$ and $\rho>1$, likewise are not infdiv; their right-hand tail is too thin. Clearly $\mu_r<\infty$ if $r>0$, and the length-biased laws of all positive orders satisfy a similar tail constraint and hence are not infdiv.
Our next example shows that length-biasing laws which are not infdiv can yield infdiv laws.
On the other hand, if $a>2$ then $\mathcal{L}(X)$ is not infdiv (page 409 in [1]). For any $a>0$, the order-$r$ ($r>0$) length-biased version of $\mathcal{L}(X)$ is the gamma$(a+r)$ law, and hence it is infdiv.
We will look at the case $a\le2$ in more detail. The LST of the defect law in (3) can be written down directly. Suppose first that $0<a\le1$ and let $\tau_\alpha$ denote a random variable having the stable law of index $\alpha$, denoted by stable$(\alpha)$ and having the LST $\exp(-s^{\alpha})$. Composing with this stable transform yields the LST of a scaled positive Linnik law (also called the Mittag-Leffler law); see page 219 in [12] (and its references) and page 38 in [9] for a proof that Linnik laws are GGCs.
Next, let $\theta\ge0$ and let $\mathcal{E}_\theta$ denote the exponential tilt operator: $F_\theta(x):=\mathcal{E}_\theta F(x)\propto\int_0^x e^{-\theta y}\,dF(y)$. The corresponding LST is $\phi_\theta(s)=\phi(s+\theta)/\phi(\theta)$, and hence $F_\theta$ is infdiv if $F$ is infdiv. Each of the classes SD, BO, and GGC is closed under exponential tilting. If $\mathcal{L}(X)$ has the DF $F$, then $X_\theta:=\mathcal{E}_\theta X$ denotes a random variable whose DF is $F_\theta$.
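A minimal check of the tilting formula $\phi_\theta(s)=\phi(s+\theta)/\phi(\theta)$: tilting the Exp(1) law by $\theta$ gives the Exp$(1+\theta)$ law, since $[(1+s+\theta)^{-1}]/[(1+\theta)^{-1}]=(1+\theta)/(1+\theta+s)$. The function names below are illustrative only:

```python
def exp_lst(s, lam):
    """LST of the Exp(lam) law: lam/(lam+s)."""
    return lam / (lam + s)

def tilted_lst(phi, s, theta):
    """LST of the exponentially tilted law: phi_theta(s) = phi(s+theta)/phi(theta)."""
    return phi(s + theta) / phi(theta)

theta = 0.7
for s in (0.1, 1.0, 5.0):
    lhs = tilted_lst(lambda u: exp_lst(u, 1.0), s, theta)
    rhs = exp_lst(s, 1.0 + theta)  # tilting Exp(1) by theta gives Exp(1 + theta)
    assert abs(lhs - rhs) < 1e-12
print("tilting Exp(1) by theta yields Exp(1 + theta)")
```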
The next example shows that length-biasing an infdiv law can result in a non-infdiv law. This possibility is noted in Example 12.6 of [1] (page 413) for two cases where $\mathcal{L}(\hat X)$ is not infdiv, although it is not clear whether or not $\mathcal{L}(X)$ is infdiv.
The following example rests on the fact that if $\mathcal{L}(X)$ is infdiv then $\phi(s)=\exp(-\Phi(s))\ne0$ if $\Re s>\sigma_\phi$, where $\sigma_\phi\le0$ is the abscissa of convergence of the integral defining $\phi$. Note that if $\phi$ can be analytically continued into $\Re s<\sigma_\phi$, then it can have zeros there.

Example 6. If $b>1$ then $\mathcal{L}(X)$, having an LST of the two-component gamma mixture form covered by Lemma 2, is a scale mixture of gamma(2) laws, and hence it is infdiv. This LST is holomorphic in $\Re s>-1$.
Since $\hat\phi_1(s)=-\phi'(s)/\mu$, it is clear that $\mathcal{L}(\hat X)$ is a mixture of gamma(3) laws, which may, or may not, be infdiv. We show now that the latter can occur. Calculation shows that $\hat\phi_1(s)=0$ at a conjugate pair of points $s_0,\bar s_0$. The real part of $s_0$ has the same sign as a bracketed expression which, with $b=a^3$, is a cubic polynomial in $a$. Hence $\hat\phi_1(s)$ has a conjugate pair of zeros in the half-plane $\Re s>-1$ if $b$ exceeds a critical value, and hence $\mathcal{L}(\hat X)$ is not infdiv even though $\mathcal{L}(X)$ is infdiv.
Restricting the length-bias order to positive integer values $r=1,2,\ldots$, it is clear that the LST of $\mathcal{L}(\hat X_r)$ is $\hat\phi_r(s)=(-1)^r\phi^{(r)}(s)/\mu_r$. It follows from (3) that $\mathcal{L}(\hat X_r)$ is infdiv if and only if there is a defect law $\mathcal{L}(Z_r)$ such that
$$\hat X_{r+1}\stackrel{d}{=}\hat X_r+Z_r, \qquad (21)$$
in which case the LST of $Z_r$ is
$$\beta_r(s)=\frac{\hat\phi_{r+1}(s)}{\hat\phi_r(s)}=-\frac{\mu_r}{\mu_{r+1}}\,\frac{\phi^{(r+1)}(s)}{\phi^{(r)}(s)}. \qquad (22)$$
Thus $Z=Z_0$.
The following result provides a recursive computation scheme for $\beta_r(s)$ which is simpler to use than (22). Assuming the moments are finite, the first equality in (22) defines a real-analytic function $\beta_r$ irrespective of whether or not $\mathcal{L}(\hat X_r)$ is infdiv and, in addition, $\beta_r(s)\in(0,1)$ for $s>0$. The recursion (23) can be used as follows. If $\mathcal{L}(\hat X_r)$ is known to be infdiv, then $\beta_r\in\mathbf{CM}$ and in principle it can be inverted to yield the density $b_r(x)$ (or DF) of $\mathcal{L}(Z_r)$. If also $-\beta_r'/\beta_r\in\mathbf{CM}$, then $\mathcal{L}(Z_r)$ is infdiv and hence so is $\mathcal{L}(\hat X_{r+1})$, and the process can be continued. However, as shown in Example 5, it is possible that $\mathcal{L}(\hat X_{r+1})$ is infdiv even though $\mathcal{L}(Z_r)$ is not. Although (24) provides in principle a direct evaluation of $b_r(x)$, deciding whether $\mathcal{L}(Z_r)$ is infdiv usually requires a consideration of its LST.
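For the gamma$(\alpha)$ law, (22) can be evaluated in closed form: since $\phi^{(r)}(s)=(-1)^r\alpha(\alpha+1)\cdots(\alpha+r-1)(1+s)^{-\alpha-r}$, every defect LST collapses to $\beta_r(s)=(1+s)^{-1}$; that is, each defect law is Exp(1), reflecting gamma$(\alpha+r+1)\stackrel{d}{=}$ gamma$(\alpha+r)+$Exp(1). The following sketch (names are mine) checks this:

```python
def rising(a, k):
    """Rising factorial a(a+1)...(a+k-1)."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def gamma_lst_deriv(s, alpha, r):
    """r-th derivative of the gamma(alpha) LST phi(s) = (1+s)^(-alpha)."""
    return (-1) ** r * rising(alpha, r) * (1 + s) ** (-alpha - r)

def defect_lst(s, alpha, r):
    """beta_r(s) = -(mu_r/mu_{r+1}) phi^{(r+1)}(s)/phi^{(r)}(s), as in (22),
    with mu_r = alpha(alpha+1)...(alpha+r-1) for the gamma(alpha) law."""
    mu_r, mu_r1 = rising(alpha, r), rising(alpha, r + 1)
    return -(mu_r / mu_r1) * gamma_lst_deriv(s, alpha, r + 1) / gamma_lst_deriv(s, alpha, r)

alpha = 2.5
for r in range(5):
    for s in (0.2, 1.0, 4.0):
        # each defect law is Exp(1): beta_r(s) = 1/(1+s)
        assert abs(defect_lst(s, alpha, r) - 1 / (1 + s)) < 1e-12
print("gamma defect laws are all Exp(1)")
```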
We end this section with two simple deductions from (3). First, assuming that $\mathcal{L}(\hat X_j)$ is infdiv for $j=0,1,\ldots,r-1$ (with $\hat X_0=X$), iterating (21) yields the additive representation
$$\hat X_r\stackrel{d}{=}X+Z_0+Z_1+\cdots+Z_{r-1}.$$
Weakening the assumption here to requiring only that $\mathcal{L}(X)$ is infdiv, a second, more complicated representation is implied by the following lemma about the application of $\mathcal{L}_r$ to the sum of independent random variables. We use the following compact notation. Suppose that $\mathcal{L}(Y)$ satisfies $E(Y^r)<\infty$ for a positive integer $r$ and $J$ is a random variable taking values in $\{1,\ldots,r\}$. Then $\hat Y_J$ denotes a random variable whose value is $\hat Y_j$ with probability $P(J=j)$.
The LST of $Y(1)+Y(2)$ is the product $\phi_1(s)\phi_2(s)$. Applying the Leibniz differentiation rule to this product shows that the LST of $\mathcal{L}_r(Y(1)+Y(2))$ is
$$\frac{(-1)^r(\phi_1\phi_2)^{(r)}(s)}{E\bigl[(Y(1)+Y(2))^r\bigr]}=\sum_{j=0}^{r}\binom{r}{j}\frac{\mu_j(1)\,\mu_{r-j}(2)}{E\bigl[(Y(1)+Y(2))^r\bigr]}\,\hat\phi_{1,j}(s)\,\hat\phi_{2,r-j}(s),$$
where $\mu_j(i)=E(Y(i)^j)$ and $\hat\phi_{i,j}$ denotes the LST of $\mathcal{L}_jY(i)$. The mixture LST on the right-hand side is that of the right-hand side of (28).
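The Leibniz step can be verified concretely for gamma LSTs $\phi_i(s)=(1+s)^{-a_i}$, whose derivatives are available in closed form; by the Vandermonde convolution identity the expansion collapses to the $r$-th derivative of $(1+s)^{-(a+b)}$, the LST of the sum. An illustrative check (helper names are assumptions, not from the paper):

```python
from math import comb

def rising(a, k):
    """Rising factorial a(a+1)...(a+k-1)."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def gamma_lst_deriv(s, alpha, r):
    """r-th derivative of the gamma(alpha) LST phi(s) = (1+s)^(-alpha)."""
    return (-1) ** r * rising(alpha, r) * (1 + s) ** (-alpha - r)

a, b, r, s = 1.5, 2.0, 4, 0.7
# Leibniz: (phi1 phi2)^{(r)} = sum_j C(r,j) phi1^{(j)} phi2^{(r-j)}
leibniz = sum(comb(r, j) * gamma_lst_deriv(s, a, j) * gamma_lst_deriv(s, b, r - j)
              for j in range(r + 1))
# gamma(a) + gamma(b) = gamma(a+b), whose LST is (1+s)^(-(a+b))
direct = gamma_lst_deriv(s, a + b, r)
assert abs(leibniz - direct) < 1e-9 * abs(direct)
print("Leibniz expansion matches the r-th derivative of the product LST")
```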

A Compound Poisson Law
We illustrate Theorem 7 with an example which also illustrates the mounting algebraic difficulties as $r$ is increased. It has a relation to the Lambert example discussed in Section 5.
Case $r=2$. It follows from the $r=1$ case that $\mathcal{L}(\hat X_2)\in\mathbf{BO}$. Using Theorem 7 and (23) with $r=1$, we obtain the defect LST $\beta_2(s)$, whose partial fraction form inverts to the density $b_2(x)$. Since the Laplace transform of $\cosh cx$ is $s/(s^2-c^2)$, the inverse transform in question can be evaluated explicitly and is nonnegative. Hence $\ell_2(x)$ is a Lévy density and $\mathcal{L}(Z_2)$ is infdiv. In addition, similar to the above treatment of $\ell_1(x)$, $\ell_2(x)$ has an inverse Laplace transform comprising a linear combination of unit step functions whose right-continuous version is identically zero in $[0,1)$ and positive-valued in $[1,\infty)$. Hence $\ell_2\in\mathbf{CM}$, so $\mathcal{L}(Z_2)\in\mathbf{BO}$. The inverse transform of $x\ell_2(x)$ is a linear combination of two positive and two negative point masses, so $\mathcal{L}(Z_2)\notin\mathbf{GGC}$.
Finally, the Lévy density $\ell_3(x)$ has an inverse transform which again is a linear combination of unit step functions and which is positive-valued in $[1,\infty)$. Hence $\mathcal{L}(Z_3)\in\mathbf{BO}$ but, similar to the above, $\mathcal{L}(Z_3)\notin\mathbf{GGC}$. It follows from our calculations that $\mathcal{L}(\hat X_r)\in\mathbf{BO}$ for $r=1,\ldots,4$, and we conjecture that this holds for all $r$.

The Lambert Law
The principal solution $W(s)$ of the functional equation $W(s)e^{W(s)}=s$ defines the Lambert W function. Many properties and applications are given in [13], and it is classified as an elementary function in [14].
The key property for our purposes is that $W\in\mathbf{B}$. See [4, 15, 16] for various proofs of this fact. It is shown in [4] (Theorem 3.1) that
$$W(s)=\int_0^\infty\bigl(1-e^{-sx}\bigr)\ell(x)\,dx,$$
where the Lévy density $\ell$ is expressed in terms of a probability measure $\nu$ with $\operatorname{supp}(\nu)=[e^{-1},\infty)$. Consequently, as observed in the remarks following the proof of Theorem 3.1 in [4], $W\in\mathbf{TB}$.
Lagrange's reversion of series yields the known expansion
$$W(s)=\sum_{n\ge1}\frac{(-n)^{n-1}}{n!}\,s^n,$$
converging if $|s|<e^{-1}$ and implying that the Lambert law $\mathcal{L}(X)$, with LST $\phi(s)=e^{-W(s)}=W(s)/s$, has finite moments of all positive orders, $\mu_r=(r+1)^{r-1}$. In particular $\mu=1$. Differentiating $\phi(s)=e^{-W(s)}$ and using $W'(s)=W(s)/\bigl(s(1+W(s))\bigr)$ together with $e^{-W(s)}=W(s)/s$, we find that the LST of $\mathcal{L}(\hat X)$ is
$$\hat\phi(s)=-\phi'(s)=\frac{e^{-2W(s)}}{1+W(s)}.$$
Knowing that $\mathcal{L}(X)$ is infdiv, it follows that (3) holds and that the defect LST is
$$\beta(s)=\frac{\hat\phi(s)}{\phi(s)}=\frac{e^{-W(s)}}{1+W(s)}.$$
The factor $(1+W(s))^{-1}$ is the LST of $\mathcal{L}(S(T))$, the exponentially stopped Lambert subordinator; here $T\sim\mathrm{Exp}(1)$ is independent of the subordinator $(S(t))$ whose Laplace exponent is $W$. It follows that (3) takes the specific form
$$\hat X\stackrel{d}{=}X+X'+S(T),$$
where $X'\stackrel{d}{=}X$. The right-hand side is the sum of independent infdiv random variables, and hence $\mathcal{L}(\hat X)$ is infdiv.
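The identities used above, $e^{-W(s)}=W(s)/s$ and $\hat\phi(s)=-\phi'(s)=e^{-2W(s)}/(1+W(s))$ (recall $\mu=1$), can be checked numerically with a hand-rolled Newton iteration for $W$; the helper names below are mine:

```python
import math

def lambert_w(s, tol=1e-14):
    """Principal branch W(s) for s > 0, via Newton iteration on w*e^w = s."""
    w = math.log(1 + s)  # reasonable starting point for s > 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - s) / (ew * (1 + w))
        w -= step
        if abs(step) < tol:
            break
    return w

def phi(s):
    """Lambert-law LST phi(s) = exp(-W(s))."""
    return math.exp(-lambert_w(s))

s = 0.5
w = lambert_w(s)
assert abs(math.exp(-w) - w / s) < 1e-10  # e^{-W(s)} = W(s)/s

# mu = 1, so the LST of the length-biased law is -phi'(s);
# compare a central finite difference with e^{-2W(s)}/(1+W(s))
h = 1e-6
phi_hat_fd = -(phi(s + h) - phi(s - h)) / (2 * h)
phi_hat_formula = math.exp(-2 * w) / (1 + w)
assert abs(phi_hat_fd - phi_hat_formula) < 1e-6
print("length-biased Lambert LST matches e^{-2W(s)}/(1+W(s))")
```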
This bare conclusion follows from Example 3.2.6 in [9], which asserts that the length-biased version of a GGC law is infdiv. The above working yields some extra detail and, in particular, the following refinement. Instead of invoking Theorem 7, computation of $\phi^{(r)}(s)$ for $r=1,2,3$ suggests the following result, which follows by an easy induction argument from (22). It is shown in [4] that the coefficients concerned are positive if $r\ge2$ and $0\le j<r$.