1. Introduction

The three-parameter Macdonald distribution (Nagar et al. [1, 2]) is defined by the probability density function (p.d.f.)
$$f_M(y;\alpha,\beta,\sigma)=\frac{\sigma^{-\beta}\,y^{\beta-1}\,\Gamma(\alpha;\sigma^{-1}y)}{\Gamma(\beta)\,\Gamma(\alpha+\beta)},\quad y>0,\ \sigma>0,\ \beta>0,\ \alpha+\beta>0,\tag{1}$$
where the extended gamma function $\Gamma(\nu;\sigma)$ is defined as
$$\Gamma(\nu;\sigma)=\int_0^\infty t^{\nu-1}\exp\left(-t-\frac{\sigma}{t}\right)\mathrm{d}t,\quad \sigma>0.\tag{2}$$
For $\operatorname{Re}(\nu)>0$ and $\sigma=0$, the extended gamma function reduces to the classical gamma function, $\Gamma(\nu;0)=\Gamma(\nu)$. The extended gamma function has proved very useful in various problems in engineering and physics; see, for example, Aslam Chaudhry and Zubair [3–8].
We write $Y\sim M(\alpha,\beta,\sigma)$ if the random variable $Y$ follows the Macdonald distribution; if $\sigma=1$ in the density above, we simply write $Y\sim M(\alpha,\beta)$. It has been shown in Nagar et al. [9] that the product of two independent gamma variables follows a Macdonald distribution. A random variable $W$ is said to have a two-parameter gamma distribution, denoted by $W\sim Ga(\kappa,\theta)$, if its p.d.f. is given by
$$\frac{w^{\kappa-1}\exp(-w/\theta)}{\theta^{\kappa}\,\Gamma(\kappa)},\quad w>0.\tag{3}$$
Note that for $\theta=1$ the above distribution reduces to the standard gamma distribution, in which case we write $W\sim Ga(\kappa)$. There are several univariate, multivariate, and matrix variate generalizations of the gamma distribution; see, for example, standard texts such as Johnson et al. [10], Kotz et al. [11], and Gupta and Nagar [12]. Kalla et al. [13], using the generalized gamma function defined and studied by Al-Musallam and Kalla [14, 15], have defined a generalization of the gamma distribution which includes a number of distributions as special cases. In a recent article, Gupta et al. [16] have generalized the matrix variate gamma distribution. If $W_1$ and $W_2$ are independent, $W_1\sim Ga(\kappa_1,\theta_1)$ and $W_2\sim Ga(\kappa_2,\theta_2)$, then $W_1W_2\sim M(\kappa_2-\kappa_1,\kappa_1,\theta_1\theta_2)$.
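The product representation above suggests a simple numerical check of the density (1). The following sketch (a minimal Monte Carlo check, assuming NumPy and SciPy are available; parameter values are illustrative) simulates the product of two independent gamma variables and compares its empirical distribution function with the cumulative probabilities obtained by quadrature of the Macdonald density, with the extended gamma function computed from (2).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

rng = np.random.default_rng(0)

def ext_gamma(nu, sigma):
    # Extended gamma function, Eq. (2)
    val, _ = quad(lambda t: t**(nu - 1.0) * np.exp(-t - sigma / t), 0.0, np.inf)
    return val

def macdonald_pdf(y, alpha, beta, sigma):
    # Macdonald density, Eq. (1)
    return (sigma**(-beta) * y**(beta - 1.0) * ext_gamma(alpha, y / sigma)
            / (G(beta) * G(alpha + beta)))

# W1 ~ Ga(k1, th1), W2 ~ Ga(k2, th2), independent
k1, th1, k2, th2 = 2.0, 1.5, 3.5, 0.8
w1 = rng.gamma(k1, th1, 200_000)
w2 = rng.gamma(k2, th2, 200_000)
y = w1 * w2                     # claimed: Y ~ M(k2 - k1, k1, th1 * th2)

alpha, beta, sigma = k2 - k1, k1, th1 * th2
for c in (2.0, 5.0, 10.0):
    emp = np.mean(y <= c)                                              # empirical CDF
    num, _ = quad(lambda t: macdonald_pdf(t, alpha, beta, sigma), 0.0, c)
    print(c, emp, num)          # the two values should agree up to Monte Carlo error
```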
Recently, Nagar et al. [17] have used the Macdonald distribution to construct a new bivariate distribution which they call the Macdonald-gamma distribution. The random variables $X$ and $Y$ are said to have a Macdonald-gamma distribution, denoted by $(X,Y)\sim MG(\alpha,\beta,\sigma)$, if their joint p.d.f. is given by
$$f(x,y;\alpha,\beta,\sigma)=\frac{x^{\alpha-1}y^{\beta-1}\exp\left(-x-\dfrac{y}{\sigma x}\right)}{\sigma^{\beta}\,\Gamma(\beta)\,\Gamma(\alpha+\beta)},\quad x>0,\ y>0,\tag{4}$$
where $\beta>0$, $\alpha+\beta>0$, and $\sigma>0$. The bivariate distribution defined by the above density has many interesting features. For example, the marginal and conditional distributions of $Y$ are Macdonald and gamma, respectively, the marginal distribution of $X$ is gamma, and the conditional distribution of $X$ given $Y$ is extended gamma. The Macdonald-gamma distribution is positively likelihood ratio dependent (PLRD). Several results pertaining to this distribution, such as marginal and conditional distributions, moments, entropies, the information matrix, and the distribution of the sum, are derived in Nagar et al. [17]. The distributions of the product and the ratio of independent or correlated Macdonald random variables are derived in Nagar et al. [9].
In this article, we study a multivariate generalization of the Macdonald distribution defined by the density (1) and derive properties such as marginal and conditional distributions, moments, distributions of sums, and factorizations. We also propose a multivariate generalization of the Macdonald-gamma distribution and study its properties.
2. Some Definitions and Preliminary Results

In this section, we give some definitions and results which are used in subsequent sections.
The gamma function was first introduced by Leonhard Euler in 1729, first as the limit of a discrete expression and later as an absolutely convergent improper integral,
$$\Gamma(\nu)=\int_0^\infty t^{\nu-1}\exp(-t)\,\mathrm{d}t,\quad \operatorname{Re}(\nu)>0.\tag{5}$$
The gamma function has many beautiful properties and is used in almost all branches of science and engineering. Replacing $t$ by $z/\sigma$, $\sigma>0$, in (5), the gamma function with an additional parameter $\sigma>0$ can be written as
$$\Gamma(\nu)=\sigma^{-\nu}\int_0^\infty z^{\nu-1}\exp\left(-\frac{z}{\sigma}\right)\mathrm{d}z,\quad \operatorname{Re}(\nu)>0.\tag{6}$$
The extended gamma function is closely related to the modified Bessel function of the second kind (type 2). An integral representation of the modified Bessel function of type 2 (Gradshteyn and Ryzhik [18, Eq. 3.471.9]) is
$$K_\nu\!\left(2\sqrt{ab}\right)=\frac{1}{2}\left(\frac{a}{b}\right)^{\nu/2}\int_0^\infty t^{\nu-1}\exp\left(-at-\frac{b}{t}\right)\mathrm{d}t,\tag{7}$$
where $\operatorname{Re}(a)>0$ and $\operatorname{Re}(b)>0$. Comparing (2) and (7), it is easily seen that
$$\Gamma(\nu;\sigma)=2\sigma^{\nu/2}K_\nu\!\left(2\sqrt{\sigma}\right).\tag{8}$$
Also, substituting $x=\sigma/t$ in (2), it can be checked that
$$\Gamma(\nu;\sigma)=\sigma^{\nu}\,\Gamma(-\nu;\sigma).\tag{9}$$
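Identities (8) and (9) can be verified numerically. The sketch below (assuming SciPy, with arbitrary test values of $\nu$ and $\sigma$) compares direct quadrature of (2) with the Bessel-function form (8) and the reflection formula (9).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def ext_gamma(nu, sigma):
    # Extended gamma function, Eq. (2)
    val, _ = quad(lambda t: t**(nu - 1.0) * np.exp(-t - sigma / t), 0.0, np.inf)
    return val

for nu, sigma in [(0.7, 0.5), (2.3, 1.0), (1.2, 3.0)]:
    direct = ext_gamma(nu, sigma)
    bessel = 2.0 * sigma**(nu / 2.0) * kv(nu, 2.0 * np.sqrt(sigma))   # Eq. (8)
    reflected = sigma**nu * ext_gamma(-nu, sigma)                     # Eq. (9)
    print(nu, sigma, direct, bessel, reflected)   # all three should coincide
```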
Finally, we define the beta type 1, beta type 2, and Dirichlet type 1 distributions. These definitions can be found in Johnson et al. [19].
Definition 1. The random variable $X$ is said to have a beta type 1 distribution with parameters $(a,b)$, $a>0$, $b>0$, denoted as $X\sim B1(a,b)$, if its p.d.f. is given by
$$\frac{x^{a-1}(1-x)^{b-1}}{B(a,b)},\quad 0<x<1,\tag{10}$$
where $B(a,b)$ is the beta function defined by
$$B(a,b)=\frac{\Gamma(a)\,\Gamma(b)}{\Gamma(a+b)},\quad \operatorname{Re}(a)>0,\ \operatorname{Re}(b)>0.\tag{11}$$
Definition 2. The random variable $X$ is said to have a beta type 2 distribution with parameters $(a,b)$, $a>0$, $b>0$, denoted as $X\sim B2(a,b)$, if its p.d.f. is given by
$$\frac{x^{a-1}(1+x)^{-(a+b)}}{B(a,b)},\quad x>0.\tag{12}$$
Definition 3. The random variables $X_1,\ldots,X_n$ are said to have a Dirichlet type 1 distribution with parameters $\alpha_i>0$, $i=1,\ldots,n$, and $\beta>0$, denoted by $(X_1,\ldots,X_n)\sim D_1(\alpha_1,\ldots,\alpha_n;\beta)$, if their joint p.d.f. is
$$\frac{\Gamma\!\left(\sum_{i=1}^n\alpha_i+\beta\right)}{\prod_{i=1}^n\Gamma(\alpha_i)\,\Gamma(\beta)}\prod_{i=1}^n x_i^{\alpha_i-1}\left(1-\sum_{i=1}^n x_i\right)^{\beta-1},\quad x_i>0,\ i=1,\ldots,n,\ \sum_{i=1}^n x_i<1.\tag{13}$$
The matrix variate generalizations of beta type 1 and beta type 2 distributions have been defined and studied extensively. For example, see Gupta and Nagar [12].
If $Y\sim M(\alpha,\beta,\sigma)$, then the $r$th moment of $Y$ is given by
$$E(Y^r)=\frac{\sigma^{r}\,\Gamma(\beta+r)\,\Gamma(\alpha+\beta+r)}{\Gamma(\beta)\,\Gamma(\alpha+\beta)}.\tag{14}$$
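Formula (14) can be checked by integrating $y^r$ against the density (1), with the extended gamma function evaluated through (8). A small sketch (assuming SciPy; parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma as G

alpha, beta, sigma = 1.3, 2.0, 0.8

def macdonald_pdf(y):
    # Density (1), with the extended gamma function written via Eq. (8)
    ext = 2.0 * (y / sigma)**(alpha / 2.0) * kv(alpha, 2.0 * np.sqrt(y / sigma))
    return sigma**(-beta) * y**(beta - 1.0) * ext / (G(beta) * G(alpha + beta))

for r in (1, 2):
    numeric, _ = quad(lambda y: y**r * macdonald_pdf(y), 0.0, np.inf)
    analytic = sigma**r * G(beta + r) * G(alpha + beta + r) / (G(beta) * G(alpha + beta))
    print(r, numeric, analytic)   # Eq. (14): the two columns should agree
```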
Theorem 4. If $(X,Y)\sim MG(\alpha,\beta,\sigma)$, then the p.d.f. of the sum $S=X+Y$ is given by
$$f_S(s;\alpha,\beta,\sigma)=\frac{\Gamma(\alpha+1)\,s^{\alpha+\beta-1}\exp(-s)}{\sigma^{\beta}\,\Gamma(\beta)\,\Gamma(\alpha+\beta)}\sum_{m=0}^{\infty}L_m\!\left(\frac{1}{\sigma}\right)\frac{\Gamma(\beta+m)}{\Gamma(\alpha+\beta+m+1)}\,{}_1F_1(\beta+m;\alpha+\beta+m+1;s),\quad s>0,\tag{15}$$
where $\alpha+1>0$, $\beta>0$, $\sigma>0$, $L_m(\cdot)$ is the Laguerre polynomial of degree $m$, and ${}_1F_1$ denotes the confluent hypergeometric function.
Proof. See Nagar et al. [17].
3. Density Function

We propose a multivariate generalization of the Macdonald distribution as follows.
Definition 5. The random variables $Y_1,\ldots,Y_n$ are said to have a multivariate Macdonald distribution with parameters $\alpha,\beta_1,\ldots,\beta_n$, and $\sigma$, denoted as $(Y_1,\ldots,Y_n)\sim MM(\alpha,\beta_1,\ldots,\beta_n,\sigma)$, if their joint p.d.f. is given by
$$C(\alpha,\beta_1,\ldots,\beta_n,\sigma)\prod_{i=1}^n y_i^{\beta_i-1}\,\Gamma\!\left(\alpha;\frac{\sum_{i=1}^n y_i}{\sigma}\right),\quad y_i>0,\ i=1,\ldots,n,\tag{16}$$
where $\beta_i>0$, $i=1,\ldots,n$, $\alpha+\sum_{i=1}^n\beta_i>0$, $\sigma>0$, and $C(\alpha,\beta_1,\ldots,\beta_n,\sigma)$ is the normalizing constant.
Since the integral of the p.d.f. (16) over its support is one, we have
$$C(\alpha,\beta_1,\ldots,\beta_n,\sigma)\int_0^\infty\cdots\int_0^\infty\prod_{i=1}^n y_i^{\beta_i-1}\,\Gamma\!\left(\alpha;\frac{\sum_{i=1}^n y_i}{\sigma}\right)\mathrm{d}y_1\cdots\mathrm{d}y_n=1.\tag{17}$$
Replacing $\Gamma\!\left(\alpha;\sum_{i=1}^n y_i/\sigma\right)$ by its equivalent integral, namely,
$$\Gamma\!\left(\alpha;\frac{\sum_{i=1}^n y_i}{\sigma}\right)=\int_0^\infty x^{\alpha-1}\exp\left(-x-\frac{\sum_{i=1}^n y_i}{\sigma x}\right)\mathrm{d}x,\tag{18}$$
in (17), the normalizing constant is derived as
$$\begin{aligned}
C(\alpha,\beta_1,\ldots,\beta_n,\sigma)^{-1}
&=\int_0^\infty x^{\alpha-1}\exp(-x)\prod_{i=1}^n\int_0^\infty y_i^{\beta_i-1}\exp\left(-\frac{y_i}{\sigma x}\right)\mathrm{d}y_i\,\mathrm{d}x\\
&=\sigma^{\sum_{i=1}^n\beta_i}\prod_{i=1}^n\Gamma(\beta_i)\int_0^\infty x^{\alpha+\sum_{i=1}^n\beta_i-1}\exp(-x)\,\mathrm{d}x\\
&=\sigma^{\sum_{i=1}^n\beta_i}\prod_{i=1}^n\Gamma(\beta_i)\,\Gamma\!\left(\alpha+\sum_{i=1}^n\beta_i\right).
\end{aligned}\tag{19}$$
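For $n=2$ the constant (19) can be checked by two-dimensional quadrature of the unnormalized density (16), using (8) for the extended gamma function. A rough numerical sketch (assuming SciPy; dblquad may warn about slow convergence, and the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import kv, gamma as G

alpha, b1, b2, sigma = 1.2, 0.8, 1.5, 0.7

def ext_gamma(nu, s):
    # Extended gamma function via the modified Bessel function, Eq. (8)
    return 2.0 * s**(nu / 2.0) * kv(nu, 2.0 * np.sqrt(s))

# Unnormalized bivariate (n = 2) Macdonald density, Eq. (16)
kernel = lambda y2, y1: y1**(b1 - 1) * y2**(b2 - 1) * ext_gamma(alpha, (y1 + y2) / sigma)

numeric, _ = dblquad(kernel, 0.0, np.inf, lambda y1: 0.0, lambda y1: np.inf)
analytic = sigma**(b1 + b2) * G(b1) * G(b2) * G(alpha + b1 + b2)   # Eq. (19)
print(numeric, analytic)   # both equal C(alpha, b1, b2, sigma)^{-1}
```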
Theorem 6. If $(Y_1,\ldots,Y_n)\sim MM(\alpha,\beta_1,\ldots,\beta_n,\sigma)$, then for $1\le s\le n$, $(Y_1,\ldots,Y_s)\sim MM\!\left(\alpha+\sum_{i=s+1}^n\beta_i,\beta_1,\ldots,\beta_s,\sigma\right)$.
Proof. By using (18), the joint density of $Y_1,\ldots,Y_n$ can be written as
$$C(\alpha,\beta_1,\ldots,\beta_n,\sigma)\prod_{i=1}^n y_i^{\beta_i-1}\int_0^\infty x^{\alpha-1}\exp\left(-x-\frac{\sum_{i=1}^n y_i}{\sigma x}\right)\mathrm{d}x,\tag{20}$$
where $y_i>0$, $i=1,\ldots,n$. Integrating out $y_{s+1},\ldots,y_n$ in the above density, we obtain the marginal density of $Y_1,\ldots,Y_s$ as
$$C(\alpha,\beta_1,\ldots,\beta_n,\sigma)\,\sigma^{\sum_{i=s+1}^n\beta_i}\prod_{i=s+1}^n\Gamma(\beta_i)\prod_{i=1}^s y_i^{\beta_i-1}\int_0^\infty x^{\alpha+\sum_{i=s+1}^n\beta_i-1}\exp\left(-x-\frac{\sum_{i=1}^s y_i}{\sigma x}\right)\mathrm{d}x.\tag{21}$$
Writing the above integral in terms of the extended gamma function and simplifying, we obtain the desired result.
Corollary 7. If $(Y_1,\ldots,Y_n)\sim MM(\alpha,\beta_1,\ldots,\beta_n,\sigma)$, then for $s=1,\ldots,n$ the marginal distribution of $Y_s$ is Macdonald, $Y_s\sim M\!\left(\alpha+\sum_{i=1,\,i\ne s}^n\beta_i,\beta_s,\sigma\right)$.
Theorem 8. Let $(Y_1,\ldots,Y_n)\sim MM(\alpha,\beta_1,\ldots,\beta_n,\sigma)$ and define $Z=\sum_{j=1}^n Y_j$ and $Z_i=Y_i/Z$ for $i=1,\ldots,n-1$. Then $(Z_1,\ldots,Z_{n-1})$ and $Z$ are independent, $(Z_1,\ldots,Z_{n-1})\sim D_1(\beta_1,\ldots,\beta_{n-1};\beta_n)$, and $Z\sim M\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. Substituting $z=\sum_{j=1}^n y_j$ and $z_i=y_i/z$ for $i=1,\ldots,n-1$, with the Jacobian $J(y_1,\ldots,y_{n-1},y_n\to z_1,\ldots,z_{n-1},z)=z^{n-1}$, in (16), the joint density of $Z_1,\ldots,Z_{n-1}$ and $Z$ is derived as
$$C(\alpha,\beta_1,\ldots,\beta_n,\sigma)\,z^{\sum_{i=1}^n\beta_i-1}\,\Gamma\!\left(\alpha;\frac{z}{\sigma}\right)\prod_{j=1}^{n-1}z_j^{\beta_j-1}\left(1-\sum_{j=1}^{n-1}z_j\right)^{\beta_n-1},\tag{22}$$
where $z_i>0$, $i=1,\ldots,n-1$, $\sum_{i=1}^{n-1}z_i<1$, and $z>0$. From the above factorization, it is clear that $(Z_1,\ldots,Z_{n-1})$ and $Z$ are independent, $(Z_1,\ldots,Z_{n-1})\sim D_1(\beta_1,\ldots,\beta_{n-1};\beta_n)$, and $Z\sim M\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Corollary 9. If $(Y_1,\ldots,Y_n)\sim MM(\alpha,\beta_1,\ldots,\beta_n,\sigma)$, then
$$\frac{\sum_{i=1}^s Y_i}{\sum_{j=1}^n Y_j}\sim B1\!\left(\sum_{i=1}^s\beta_i,\sum_{i=s+1}^n\beta_i\right),\quad s<n.\tag{23}$$
4. Joint Moments

In this section, we derive the joint moments of random variables jointly distributed as multivariate Macdonald. These moments facilitate the computation of quantities such as means, variances, and correlations.
Using (16) and (19), the joint moments of $Y_1,\ldots,Y_n$ are obtained as
$$\begin{aligned}
E\!\left(Y_1^{r_1}\cdots Y_n^{r_n}\right)
&=C(\alpha,\beta_1,\ldots,\beta_n,\sigma)\int_0^\infty\cdots\int_0^\infty\prod_{i=1}^n y_i^{\beta_i+r_i-1}\,\Gamma\!\left(\alpha;\frac{\sum_{i=1}^n y_i}{\sigma}\right)\mathrm{d}y_1\cdots\mathrm{d}y_n\\
&=\frac{C(\alpha,\beta_1,\ldots,\beta_n,\sigma)}{C(\alpha,\beta_1+r_1,\ldots,\beta_n+r_n,\sigma)}
=\frac{\sigma^{\sum_{i=1}^n r_i}\prod_{i=1}^n\Gamma(\beta_i+r_i)\,\Gamma\!\left(\alpha+\sum_{i=1}^n(\beta_i+r_i)\right)}{\prod_{i=1}^n\Gamma(\beta_i)\,\Gamma\!\left(\alpha+\sum_{i=1}^n\beta_i\right)}.
\end{aligned}\tag{24}$$
From the above expression, together with Theorem 6 (so that for $n>2$ the remaining $\beta_k$, $k\ne i,j$, are absorbed into $\alpha$), the $(r,s)$th joint moment of $Y_i$ and $Y_j$, $i\ne j$, is given by
$$E\!\left(Y_i^{r}Y_j^{s}\right)=\frac{\sigma^{r+s}\,\Gamma(\beta_i+r)\,\Gamma(\beta_j+s)\,\Gamma(\alpha+\beta_i+\beta_j+r+s)}{\Gamma(\beta_i)\,\Gamma(\beta_j)\,\Gamma(\alpha+\beta_i+\beta_j)}.\tag{25}$$
Substituting $s=r$, $\beta_i=\beta$, and $\beta_j=\beta+1/2$ in (25) and using the duplication formula, the $r$th moment of $2\sqrt{Y_iY_j}$ is obtained as
$$E\!\left[\left(2\sqrt{Y_iY_j}\right)^{r}\right]=\frac{\sigma^{r}\,\Gamma(2\beta+r)\,\Gamma(\alpha+2\beta+1/2+r)}{\Gamma(2\beta)\,\Gamma(\alpha+2\beta+1/2)}.\tag{26}$$
Comparing the above expression with (14), we conclude that $2\sqrt{Y_iY_j}\sim M(\alpha+1/2,2\beta,\sigma)$. For $s=-r$, (25) reduces to
$$E\!\left(Y_i^{r}Y_j^{-r}\right)=\frac{\Gamma(\beta_i+r)\,\Gamma(\beta_j-r)}{\Gamma(\beta_i)\,\Gamma(\beta_j)},\tag{27}$$
which shows that $Y_i/Y_j$ has a standard beta type 2 distribution with parameters $\beta_i$ and $\beta_j$. Substituting appropriately in (25), the means and variances of $Y_i$ and $Y_j$ and the covariance between $Y_i$ and $Y_j$ are computed as
$$\begin{aligned}
E(Y_k)&=\sigma\beta_k(\alpha+\beta_i+\beta_j),\quad k=i,j,\\
E(Y_k^2)&=\sigma^2\beta_k(\beta_k+1)(\alpha+\beta_i+\beta_j)(\alpha+\beta_i+\beta_j+1),\quad k=i,j,\\
\operatorname{Var}(Y_k)&=\sigma^2\beta_k(\alpha+\beta_i+\beta_j)(\alpha+\beta_i+\beta_j+\beta_k+1),\quad k=i,j,\\
E(Y_iY_j)&=\sigma^2\beta_i\beta_j(\alpha+\beta_i+\beta_j)(\alpha+\beta_i+\beta_j+1),\\
\operatorname{Cov}(Y_i,Y_j)&=\sigma^2\beta_i\beta_j(\alpha+\beta_i+\beta_j).
\end{aligned}\tag{28}$$
The correlation coefficient between $Y_i$ and $Y_j$ is
$$\rho(Y_i,Y_j)=\sqrt{\frac{\beta_i\beta_j}{(\alpha+2\beta_i+\beta_j+1)(\alpha+\beta_i+2\beta_j+1)}}.\tag{29}$$
Further, for $\beta_i=\beta_j=\beta$, the correlation coefficient between $Y_i$ and $Y_j$ reduces to
$$\rho(Y_i,Y_j)=\frac{\beta}{\alpha+3\beta+1}.\tag{30}$$
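Expression (30) can be checked by simulation. The integral representation (18) suggests a sampler for (16): draw $X\sim Ga\!\left(\alpha+\sum_i\beta_i\right)$ and then, given $X$, draw the $Y_i$ independently as $Ga(\beta_i)$ with scale $\sigma X$; integrating $x$ out of the resulting joint density recovers (16). The sketch below (a minimal Monte Carlo check with NumPy; parameter values are arbitrary) uses this sampler with $n=2$ and $\beta_1=\beta_2=\beta$.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, sigma, m = 1.5, 2.0, 0.6, 500_000

# (Y1, Y2) ~ MM(alpha, beta, beta, sigma) via the gamma mixture implied by (18):
# X ~ Ga(alpha + 2*beta); given X, Y1 and Y2 are independent Ga(beta, scale = sigma*X).
x = rng.gamma(alpha + 2 * beta, 1.0, m)
y1 = rng.gamma(beta, 1.0, m) * sigma * x
y2 = rng.gamma(beta, 1.0, m) * sigma * x

emp_rho = np.corrcoef(y1, y2)[0, 1]
analytic_rho = beta / (alpha + 3 * beta + 1)      # Eq. (30)
print(emp_rho, analytic_rho)   # should agree up to Monte Carlo error
```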
5. Factorizations

In this section, we give several factorizations of the multivariate Macdonald density.
Theorem 10. Let $(Y_1,\ldots,Y_n)\sim MM(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. For $i=1,\ldots,n-1$, define $U_i=\sum_{j=1}^{i}Y_j\big/\sum_{j=1}^{i+1}Y_j$ and $U_n=\sum_{j=1}^n Y_j$. Then the random variables $U_1,\ldots,U_n$ are independent, $U_i\sim B1\!\left(\sum_{j=1}^i\beta_j,\beta_{i+1}\right)$, $i=1,\ldots,n-1$, and $U_n\sim M\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. From the transformation given in the theorem, we obtain $y_1=u_n\prod_{i=1}^{n-1}u_i$, $y_2=u_n(1-u_1)\prod_{i=2}^{n-1}u_i,\ldots,y_{n-1}=u_n(1-u_{n-2})u_{n-1}$, and $y_n=u_n(1-u_{n-1})$, with the Jacobian $J(y_1,\ldots,y_n\to u_1,\ldots,u_n)=\prod_{i=2}^n u_i^{\,i-1}$. Making these substitutions in the joint p.d.f. of $Y_1,\ldots,Y_n$, we obtain
$$C(\alpha,\beta_1,\ldots,\beta_n,\sigma)\left(u_n\prod_{i=1}^{n-1}u_i\right)^{\beta_1-1}\prod_{j=2}^{n}\left(u_n(1-u_{j-1})\prod_{i=j}^{n-1}u_i\right)^{\beta_j-1}\Gamma\!\left(\alpha;\frac{u_n}{\sigma}\right)\prod_{i=2}^n u_i^{\,i-1}.\tag{31}$$
Further, writing
$$C(\alpha,\beta_1,\ldots,\beta_n,\sigma)=\left[\prod_{j=1}^{n-1}B\!\left(\sum_{i=1}^j\beta_i,\beta_{j+1}\right)\right]^{-1}\left[\sigma^{\sum_{i=1}^n\beta_i}\,\Gamma\!\left(\sum_{i=1}^n\beta_i\right)\Gamma\!\left(\alpha+\sum_{i=1}^n\beta_i\right)\right]^{-1},\tag{32}$$
the above expression simplifies to
$$\prod_{j=1}^{n-1}\frac{u_j^{\sum_{i=1}^j\beta_i-1}(1-u_j)^{\beta_{j+1}-1}}{B\!\left(\sum_{i=1}^j\beta_i,\beta_{j+1}\right)}\;\frac{u_n^{\sum_{i=1}^n\beta_i-1}\,\Gamma(\alpha;u_n/\sigma)}{\sigma^{\sum_{i=1}^n\beta_i}\,\Gamma\!\left(\sum_{i=1}^n\beta_i\right)\Gamma\!\left(\alpha+\sum_{i=1}^n\beta_i\right)},\tag{33}$$
where $0<u_1,\ldots,u_{n-1}<1$ and $u_n>0$. The result now follows from the above factorization.
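Theorem 10 can also be illustrated numerically. The sketch below (Monte Carlo with NumPy and SciPy, $n=3$, arbitrary parameters) simulates from (16) via the gamma mixture implied by (18), forms $U_1$ and $U_2$, and compares their means with the corresponding beta type 1 means; the near-zero sample correlation is consistent with the claimed independence.

```python
import numpy as np
from scipy.stats import beta as beta1

rng = np.random.default_rng(2)
alpha, b, sigma, m = 1.0, np.array([0.7, 1.4, 2.1]), 0.5, 400_000

# (Y1, Y2, Y3) ~ MM(alpha, b1, b2, b3, sigma) via the gamma mixture implied by (18):
# X ~ Ga(alpha + sum(b)); given X, Yi ~ Ga(bi) with scale sigma * X.
x = rng.gamma(alpha + b.sum(), 1.0, m)
y = rng.gamma(b, 1.0, size=(m, 3)) * (sigma * x)[:, None]

u1 = y[:, 0] / (y[:, 0] + y[:, 1])          # Theorem 10: U1 ~ B1(b1, b2)
u2 = (y[:, 0] + y[:, 1]) / y.sum(axis=1)    # Theorem 10: U2 ~ B1(b1 + b2, b3)

print(u1.mean(), beta1.mean(b[0], b[1]))
print(u2.mean(), beta1.mean(b[0] + b[1], b[2]))
print(np.corrcoef(u1, u2)[0, 1])            # near 0, consistent with independence
```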
Theorem 11. Let $(Y_1,\ldots,Y_n)\sim MM(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. Define $Z_n=\sum_{j=1}^n Y_j$ and $Z_i=Y_{i+1}\big/\sum_{j=1}^{i}Y_j$ for $i=1,\ldots,n-1$. Then $Z_1,\ldots,Z_n$ are independent, $Z_i\sim B2\!\left(\beta_{i+1},\sum_{j=1}^i\beta_j\right)$, $i=1,\ldots,n-1$, and $Z_n\sim M\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. The result is obtained from Theorem 10 by observing that $Z_i=(1-U_i)/U_i$ for $i=1,\ldots,n-1$, $Z_n=U_n$, and $(1-U_i)/U_i\sim B2\!\left(\beta_{i+1},\sum_{j=1}^i\beta_j\right)$, where $U_i\sim B1\!\left(\sum_{j=1}^i\beta_j,\beta_{i+1}\right)$.
Theorem 12. Let $(Y_1,\ldots,Y_n)\sim MM(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. Define $W_n=\sum_{j=1}^n Y_j$ and $W_i=\sum_{j=1}^{i}Y_j\big/Y_{i+1}$ for $i=1,\ldots,n-1$. Then $W_1,\ldots,W_n$ are independent, $W_i\sim B2\!\left(\sum_{j=1}^i\beta_j,\beta_{i+1}\right)$ for $i=1,\ldots,n-1$, and $W_n\sim M\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. The result is obtained from Theorem 11 by taking into account that $W_i=1/Z_i$ for $i=1,\ldots,n-1$, $W_n=Z_n$, and $1/Z_i\sim B2\!\left(\sum_{j=1}^i\beta_j,\beta_{i+1}\right)$, where $Z_i\sim B2\!\left(\beta_{i+1},\sum_{j=1}^i\beta_j\right)$.
Theorem 13. Let $(Y_1,\ldots,Y_n)\sim MM(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. Define $V_n=\sum_{j=1}^n Y_j$ and $V_i=Y_i\big/\sum_{j=i}^{n}Y_j$ for $i=1,\ldots,n-1$. Then $V_1,\ldots,V_n$ are independent, $V_i\sim B1\!\left(\beta_i,\sum_{j=i+1}^n\beta_j\right)$ for $i=1,\ldots,n-1$, and $V_n\sim M\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. Making the substitutions $y_1=v_nv_1$, $y_2=v_nv_2(1-v_1),\ldots,y_{n-1}=v_nv_{n-1}(1-v_1)\cdots(1-v_{n-2})$, and $y_n=v_n(1-v_1)\cdots(1-v_{n-1})$, with the Jacobian $J(y_1,\ldots,y_n\to v_1,\ldots,v_n)=v_n^{\,n-1}\prod_{i=1}^{n-2}(1-v_i)^{n-i-1}$, in (16), we obtain the joint p.d.f. of $V_1,\ldots,V_n$ as
$$C(\alpha,\beta_1,\ldots,\beta_n,\sigma)\prod_{i=1}^{n-1}\left(v_nv_i\prod_{j=1}^{i-1}(1-v_j)\right)^{\beta_i-1}\left(v_n\prod_{j=1}^{n-1}(1-v_j)\right)^{\beta_n-1}\Gamma\!\left(\alpha;\frac{v_n}{\sigma}\right)v_n^{\,n-1}\prod_{i=1}^{n-2}(1-v_i)^{n-i-1},\tag{34}$$
which can be rewritten as
$$\prod_{i=1}^{n-1}\frac{v_i^{\beta_i-1}(1-v_i)^{\sum_{j=i+1}^n\beta_j-1}}{B\!\left(\beta_i,\sum_{j=i+1}^n\beta_j\right)}\;\frac{v_n^{\sum_{i=1}^n\beta_i-1}\,\Gamma(\alpha;v_n/\sigma)}{\sigma^{\sum_{i=1}^n\beta_i}\,\Gamma\!\left(\sum_{i=1}^n\beta_i\right)\Gamma\!\left(\alpha+\sum_{i=1}^n\beta_i\right)}.\tag{35}$$
The desired result now follows from the above factorization.
Theorem 14. Let $(Y_1,\ldots,Y_n)\sim MM(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. Define $Z_n=\sum_{j=1}^n Y_j$ and $Z_i=Y_i\big/\sum_{j=i+1}^{n}Y_j$ for $i=1,\ldots,n-1$. Then $Z_1,\ldots,Z_n$ are independent, $Z_i\sim B2\!\left(\beta_i,\sum_{j=i+1}^n\beta_j\right)$ for $i=1,\ldots,n-1$, and $Z_n\sim M\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. The result is obtained from Theorem 13 by noting that $Z_i=V_i/(1-V_i)$ for $i=1,\ldots,n-1$, $Z_n=V_n$, and $V_i/(1-V_i)\sim B2\!\left(\beta_i,\sum_{j=i+1}^n\beta_j\right)$, where $V_i\sim B1\!\left(\beta_i,\sum_{j=i+1}^n\beta_j\right)$.
Theorem 15. Let $(Y_1,\ldots,Y_n)\sim MM(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. Define $W_n=\sum_{j=1}^n Y_j$ and $W_i=\sum_{j=i+1}^{n}Y_j\big/Y_i$ for $i=1,\ldots,n-1$. Then $W_1,\ldots,W_n$ are independent, $W_i\sim B2\!\left(\sum_{j=i+1}^n\beta_j,\beta_i\right)$ for $i=1,\ldots,n-1$, and $W_n\sim M\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. The result follows from Theorem 14 by observing that $W_i=1/Z_i$ for $i=1,\ldots,n-1$, $W_n=Z_n$, and $1/Z_i\sim B2\!\left(\sum_{j=i+1}^n\beta_j,\beta_i\right)$, where $Z_i\sim B2\!\left(\beta_i,\sum_{j=i+1}^n\beta_j\right)$.
6. The Multivariate Macdonald-Gamma Distribution

We propose a multivariate generalization of the Macdonald-gamma distribution as follows.
Definition 16. The random variables $X,Y_1,\ldots,Y_n$ are said to have a multivariate Macdonald-gamma distribution with parameters $\alpha,\beta_1,\ldots,\beta_n$, and $\sigma$, denoted as $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$, if their joint p.d.f. is
$$C(\alpha,\beta_1,\ldots,\beta_n,\sigma)\,x^{\alpha-1}\prod_{i=1}^n y_i^{\beta_i-1}\exp\left(-x-\frac{\sum_{i=1}^n y_i}{\sigma x}\right),\quad x>0,\ y_i>0,\ i=1,\ldots,n,\tag{36}$$
where $\beta_i>0$, $i=1,\ldots,n$, $\alpha+\sum_{i=1}^n\beta_i>0$, $\sigma>0$, and the normalizing constant $C(\alpha,\beta_1,\ldots,\beta_n,\sigma)$ is given by (19).
Theorem 17. If $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$, then for $1\le s\le n$, $(X,Y_1,\ldots,Y_s)\sim MMG\!\left(\alpha+\sum_{i=s+1}^n\beta_i,\beta_1,\ldots,\beta_s,\sigma\right)$.
Proof. Integrating out $y_{s+1},\ldots,y_n$ in (36), we obtain the marginal density of $X,Y_1,\ldots,Y_s$ as
$$C\!\left(\alpha+\sum_{i=s+1}^n\beta_i,\beta_1,\ldots,\beta_s,\sigma\right)x^{\alpha+\sum_{i=s+1}^n\beta_i-1}\prod_{i=1}^s y_i^{\beta_i-1}\exp\left(-x-\frac{\sum_{i=1}^s y_i}{\sigma x}\right),\tag{37}$$
where $x>0$ and $y_i>0$, $i=1,\ldots,s$.
Corollary 18. If $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$, then for $s=1,\ldots,n$ the joint distribution of $X$ and $Y_s$ is Macdonald-gamma, $(X,Y_s)\sim MG\!\left(\alpha+\sum_{i=1,\,i\ne s}^n\beta_i,\beta_s,\sigma\right)$.
Theorem 19. If $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$, then for $1\le s\le n$, $(Y_1,\ldots,Y_s)\sim MM\!\left(\alpha+\sum_{i=s+1}^n\beta_i,\beta_1,\ldots,\beta_s,\sigma\right)$.
Proof. Integrating out $x$ in (37) by using the definition of the extended gamma function, we obtain the desired result.
Corollary 20. If $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$, then for $s=1,\ldots,n$ the marginal distribution of $Y_s$ is Macdonald, $Y_s\sim M\!\left(\alpha+\sum_{i=1,\,i\ne s}^n\beta_i,\beta_s,\sigma\right)$.
Theorem 21. Let $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$ and define $Z=\sum_{j=1}^n Y_j$ and $Z_i=Y_i/Z$ for $i=1,\ldots,n-1$. Then $(Z_1,\ldots,Z_{n-1})$ and $(X,Z)$ are independent, $(Z_1,\ldots,Z_{n-1})\sim D_1(\beta_1,\ldots,\beta_{n-1};\beta_n)$, and $(X,Z)\sim MG\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. Substituting $z=\sum_{j=1}^n y_j$ and $z_i=y_i/z$ for $i=1,\ldots,n-1$, with the Jacobian $J(y_1,\ldots,y_{n-1},y_n\to z_1,\ldots,z_{n-1},z)=z^{n-1}$, in (36), the joint density of $Z_1,\ldots,Z_{n-1}$, $X$, and $Z$ is derived as
$$C(\alpha,\beta_1,\ldots,\beta_n,\sigma)\,x^{\alpha-1}z^{\sum_{i=1}^n\beta_i-1}\exp\left(-x-\frac{z}{\sigma x}\right)\prod_{j=1}^{n-1}z_j^{\beta_j-1}\left(1-\sum_{j=1}^{n-1}z_j\right)^{\beta_n-1},\tag{38}$$
where $x>0$, $z_i>0$, $i=1,\ldots,n-1$, $\sum_{i=1}^{n-1}z_i<1$, and $z>0$. From the above factorization, it is clear that $(Z_1,\ldots,Z_{n-1})$ and $(X,Z)$ are independent, $(Z_1,\ldots,Z_{n-1})\sim D_1(\beta_1,\ldots,\beta_{n-1};\beta_n)$, and $(X,Z)\sim MG\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Corollary 22. If $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$, then
$$\frac{\sum_{i=1}^s Y_i}{\sum_{j=1}^n Y_j}\sim B1\!\left(\sum_{i=1}^s\beta_i,\sum_{i=s+1}^n\beta_i\right),\quad s<n.\tag{39}$$
Corollary 23. If $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$ and $Z=\sum_{j=1}^n Y_j$, $Z_i=Y_i/Z$ for $i=1,\ldots,n-1$, then $(Z_1,\ldots,Z_{n-1})$ and $Z/X$ are independent, $(Z_1,\ldots,Z_{n-1})\sim D_1(\beta_1,\ldots,\beta_{n-1};\beta_n)$, and $Z/X\sim Ga\!\left(\sum_{i=1}^n\beta_i,\sigma\right)$.
Theorem 24. If $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$, then the p.d.f. of $S=X+\sum_{j=1}^n Y_j$ is given by
$$\frac{\Gamma(\alpha+1)\,s^{\alpha+\sum_{i=1}^n\beta_i-1}\exp(-s)}{\sigma^{\sum_{i=1}^n\beta_i}\,\Gamma\!\left(\sum_{i=1}^n\beta_i\right)\Gamma\!\left(\alpha+\sum_{i=1}^n\beta_i\right)}\sum_{m=0}^{\infty}L_m\!\left(\frac{1}{\sigma}\right)\frac{\Gamma\!\left(\sum_{i=1}^n\beta_i+m\right)}{\Gamma\!\left(\alpha+\sum_{i=1}^n\beta_i+m+1\right)}\,{}_1F_1\!\left(\sum_{i=1}^n\beta_i+m;\alpha+\sum_{i=1}^n\beta_i+m+1;s\right),\quad s>0,\tag{40}$$
where $\alpha+1>0$ and $\sigma>0$.
Proof. From Theorem 21, the joint distribution of $X$ and $\sum_{j=1}^n Y_j$ is Macdonald-gamma, $\left(X,\sum_{j=1}^n Y_j\right)\sim MG\!\left(\alpha,\sum_{j=1}^n\beta_j,\sigma\right)$. The distribution of $S$ then follows from Theorem 4.
Using (36) and (19), the joint moments of $X,Y_1,\ldots,Y_n$ are obtained as
$$\begin{aligned}
E\!\left(X^{s}Y_1^{r_1}\cdots Y_n^{r_n}\right)
&=C(\alpha,\beta_1,\ldots,\beta_n,\sigma)\int_0^\infty x^{\alpha+s-1}\int_0^\infty\cdots\int_0^\infty\prod_{i=1}^n y_i^{\beta_i+r_i-1}\exp\left(-x-\frac{\sum_{i=1}^n y_i}{\sigma x}\right)\mathrm{d}y_1\cdots\mathrm{d}y_n\,\mathrm{d}x\\
&=\frac{C(\alpha,\beta_1,\ldots,\beta_n,\sigma)}{C(\alpha+s,\beta_1+r_1,\ldots,\beta_n+r_n,\sigma)}
=\frac{\sigma^{\sum_{i=1}^n r_i}\prod_{i=1}^n\Gamma(\beta_i+r_i)\,\Gamma\!\left(\alpha+\sum_{i=1}^n(\beta_i+r_i)+s\right)}{\prod_{i=1}^n\Gamma(\beta_i)\,\Gamma\!\left(\alpha+\sum_{i=1}^n\beta_i\right)}.
\end{aligned}\tag{41}$$
Next, we give several factorizations of the multivariate Macdonald-gamma density.
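Formula (41) can be checked by simulation directly from the density (36): factoring (36) shows that the marginal of $X$ is $Ga\!\left(\alpha+\sum_i\beta_i\right)$ and that, given $X$, the $Y_i$ are independent $Ga(\beta_i)$ with scale $\sigma X$. A minimal Monte Carlo sketch (with NumPy and SciPy; the parameters and moment orders are arbitrary):

```python
import numpy as np
from scipy.special import gamma as G

rng = np.random.default_rng(3)
alpha, b, sigma, m = 1.1, np.array([0.9, 1.6]), 0.75, 1_000_000

# (X, Y1, Y2) ~ MMG(alpha, b1, b2, sigma): factor (36) as
# X ~ Ga(alpha + b1 + b2) and, given X, Yi ~ Ga(bi) with scale sigma * X.
x = rng.gamma(alpha + b.sum(), 1.0, m)
y = rng.gamma(b, 1.0, size=(m, 2)) * (sigma * x)[:, None]

s, r = 1, np.array([1, 2])
empirical = np.mean(x**s * np.prod(y**r, axis=1))
analytic = (sigma**r.sum() * np.prod(G(b + r)) * G(alpha + b.sum() + r.sum() + s)
            / (np.prod(G(b)) * G(alpha + b.sum())))                    # Eq. (41)
print(empirical, analytic)   # should agree up to Monte Carlo error
```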
Theorem 25. Let $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. For $i=1,\ldots,n-1$, define $U_i=\sum_{j=1}^{i}Y_j\big/\sum_{j=1}^{i+1}Y_j$ and $U_n=\sum_{j=1}^n Y_j$. Then $U_1,\ldots,U_{n-1}$ and $(X,U_n)$ are independent, $U_i\sim B1\!\left(\sum_{j=1}^i\beta_j,\beta_{i+1}\right)$, $i=1,\ldots,n-1$, and $(X,U_n)\sim MG\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. Similar to the proof of Theorem 10.
Theorem 26. Let $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. Define $Z_n=\sum_{j=1}^n Y_j$ and $Z_i=Y_{i+1}\big/\sum_{j=1}^{i}Y_j$ for $i=1,\ldots,n-1$. Then $Z_1,\ldots,Z_{n-1}$ and $(X,Z_n)$ are independent, $Z_i\sim B2\!\left(\beta_{i+1},\sum_{j=1}^i\beta_j\right)$, $i=1,\ldots,n-1$, and $(X,Z_n)\sim MG\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. The result is obtained from Theorem 25 by observing that $Z_i=(1-U_i)/U_i$ for $i=1,\ldots,n-1$, $Z_n=U_n$, and $(1-U_i)/U_i\sim B2\!\left(\beta_{i+1},\sum_{j=1}^i\beta_j\right)$, where $U_i\sim B1\!\left(\sum_{j=1}^i\beta_j,\beta_{i+1}\right)$.
Theorem 27. Let $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. Define $W_n=\sum_{j=1}^n Y_j$ and $W_i=\sum_{j=1}^{i}Y_j\big/Y_{i+1}$ for $i=1,\ldots,n-1$. Then $W_1,\ldots,W_{n-1}$ and $(X,W_n)$ are independent, $W_i\sim B2\!\left(\sum_{j=1}^i\beta_j,\beta_{i+1}\right)$ for $i=1,\ldots,n-1$, and $(X,W_n)\sim MG\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. The result is obtained from Theorem 26 by taking into account that $W_i=1/Z_i$ for $i=1,\ldots,n-1$, $W_n=Z_n$, and $1/Z_i\sim B2\!\left(\sum_{j=1}^i\beta_j,\beta_{i+1}\right)$, where $Z_i\sim B2\!\left(\beta_{i+1},\sum_{j=1}^i\beta_j\right)$.
Theorem 28. Let $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. Define $V_n=\sum_{j=1}^n Y_j$ and $V_i=Y_i\big/\sum_{j=i}^{n}Y_j$ for $i=1,\ldots,n-1$. Then $V_1,\ldots,V_{n-1}$ and $(X,V_n)$ are independent, $V_i\sim B1\!\left(\beta_i,\sum_{j=i+1}^n\beta_j\right)$ for $i=1,\ldots,n-1$, and $(X,V_n)\sim MG\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. Similar to the proof of Theorem 13.
Theorem 29. Let $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. Define $Z_n=\sum_{j=1}^n Y_j$ and $Z_i=Y_i\big/\sum_{j=i+1}^{n}Y_j$ for $i=1,\ldots,n-1$. Then $Z_1,\ldots,Z_{n-1}$ and $(X,Z_n)$ are independent, $Z_i\sim B2\!\left(\beta_i,\sum_{j=i+1}^n\beta_j\right)$ for $i=1,\ldots,n-1$, and $(X,Z_n)\sim MG\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. The result is obtained from Theorem 28 by noting that $Z_i=V_i/(1-V_i)$ for $i=1,\ldots,n-1$, $Z_n=V_n$, and $V_i/(1-V_i)\sim B2\!\left(\beta_i,\sum_{j=i+1}^n\beta_j\right)$, where $V_i\sim B1\!\left(\beta_i,\sum_{j=i+1}^n\beta_j\right)$.
Theorem 30. Let $(X,Y_1,\ldots,Y_n)\sim MMG(\alpha,\beta_1,\ldots,\beta_n,\sigma)$. Define $W_n=\sum_{j=1}^n Y_j$ and $W_i=\sum_{j=i+1}^{n}Y_j\big/Y_i$ for $i=1,\ldots,n-1$. Then $W_1,\ldots,W_{n-1}$ and $(X,W_n)$ are independent, $W_i\sim B2\!\left(\sum_{j=i+1}^n\beta_j,\beta_i\right)$ for $i=1,\ldots,n-1$, and $(X,W_n)\sim MG\!\left(\alpha,\sum_{i=1}^n\beta_i,\sigma\right)$.
Proof. The result follows from Theorem 29 by observing that $W_i=1/Z_i$ for $i=1,\ldots,n-1$, $W_n=Z_n$, and $1/Z_i\sim B2\!\left(\sum_{j=i+1}^n\beta_j,\beta_i\right)$, where $Z_i\sim B2\!\left(\beta_i,\sum_{j=i+1}^n\beta_j\right)$.