By applying a moment inequality for negatively dependent random variables, the
complete convergence for weighted sums of sequences of negatively dependent random variables
is discussed. As a result, complete convergence theorems for sequences of negatively
dependent random variables are extended.
1. Introduction and Lemmas
Definition 1.1.
Random variables X and Y are said to be negatively dependent (ND) if
P(X≤x,Y≤y)≤P(X≤x)P(Y≤y)  (1.1)
for all x,y∈R. A collection of random variables is said to be pairwise negatively dependent (PND) if every pair of random variables in the collection satisfies (1.1).
It is important to note that (1.1) implies that
P(X>x,Y>y)≤P(X>x)P(Y>y)  (1.2)
for all x,y∈R. Moreover, it follows that (1.2) implies (1.1), and, hence, (1.1) and (1.2) are equivalent. However, (1.1) and (1.2) are not equivalent for a collection of 3 or more random variables. Consequently, the following definition is needed to define sequences of negatively dependent random variables.
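For a pair of random variables this equivalence can be checked mechanically. The following sketch (our own illustration, not part of the paper) sweeps a one-parameter family of 2×2 joint laws of binary X, Y and confirms that the lower-orthant condition (1.1) holds at every threshold exactly when the upper-orthant condition (1.2) does:

```python
def lower_nd(joint, xs, ys):
    """Check (1.1): P(X<=x, Y<=y) <= P(X<=x)P(Y<=y) at every threshold pair."""
    ok = True
    for x in xs:
        for y in ys:
            pxy = sum(p for (a, b), p in joint.items() if a <= x and b <= y)
            px = sum(p for (a, b), p in joint.items() if a <= x)
            py = sum(p for (a, b), p in joint.items() if b <= y)
            ok = ok and pxy <= px * py + 1e-12
    return ok

def upper_nd(joint, xs, ys):
    """Check (1.2): P(X>x, Y>y) <= P(X>x)P(Y>y) at every threshold pair."""
    ok = True
    for x in xs:
        for y in ys:
            pxy = sum(p for (a, b), p in joint.items() if a > x and b > y)
            px = sum(p for (a, b), p in joint.items() if a > x)
            py = sum(p for (a, b), p in joint.items() if b > y)
            ok = ok and pxy <= px * py + 1e-12
    return ok

# Sweep the 2x2 joint laws of binary X, Y with both marginals 1/2:
# P(0,0) = P(1,1) = s and P(0,1) = P(1,0) = 1/2 - s for 0 <= s <= 1/2.
for k in range(51):
    s = k / 100.0
    joint = {(0, 0): s, (0, 1): 0.5 - s, (1, 0): 0.5 - s, (1, 1): s}
    assert lower_nd(joint, [0, 1], [0, 1]) == upper_nd(joint, [0, 1], [0, 1])
print("(1.1) and (1.2) agree at every threshold across the sweep")
```

This is only a numerical check on one family of distributions, not a proof; the two-variable equivalence itself follows by inclusion-exclusion.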
Definition 1.2.
The random variables X1,…,Xn are said to be negatively dependent (ND) if, for all real x1,…,xn,
P(⋂j=1n (Xj≤xj)) ≤ ∏j=1n P(Xj≤xj), P(⋂j=1n (Xj>xj)) ≤ ∏j=1n P(Xj>xj).  (1.3)
An infinite sequence of random variables {Xn;n≥1} is said to be ND if every finite subset X1,…,Xn is ND.
Definition 1.3.
Random variables X1,X2,…,Xn, n≥2, are said to be negatively associated (NA) if, for every pair of disjoint subsets A1 and A2 of {1,2,…,n},
cov(f1(Xi;i∈A1),f2(Xj;j∈A2))≤0,
where f1 and f2 are increasing in every variable (or decreasing in every variable), provided this covariance exists. A random variables sequence {Xn;n≥1} is said to be NA if every finite subfamily is NA.
The definition of PND was given by Lehmann [1], the concept of ND was introduced by Bozorgnia et al. [2], and the definition of NA was introduced by Joag-Dev and Proschan [3]. These concepts of dependent random variables have proved very useful in reliability theory and its applications.
First, note that by letting f1(X1,X2,…,Xn-1)=I(X1≤x1,X2≤x2,…,Xn-1≤xn-1), f2(Xn)=I(Xn≤xn) and f¯1(X1,X2,…,Xn-1)=I(X1>x1,X2>x2,…,Xn-1>xn-1), f¯2(Xn)=I(Xn>xn), respectively, it is easy to see that NA implies (1.3). Hence, NA implies ND. However, there are many examples of random variables that are ND but not NA. We list the following two.
Example 1.4.
Let Xi be a binary random variable such that P(Xi=1)=P(Xi=0)=0.5 for i=1,2,3. Let (X1,X2,X3) take the values (0,0,1), (0,1,0), (1,0,0), and (1,1,1), each with probability 1/4.
It can be verified that all the ND conditions hold. However,
P(X1+X3≤1, X2≤0) = 4/8 ≰ 3/8 = P(X1+X3≤1)P(X2≤0).
Hence, X1, X2, and X3 are not NA.
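The displayed violation can be checked by direct enumeration. A minimal sketch (our own verification of the example's arithmetic, not part of the paper):

```python
# Support of (X1, X2, X3); each listed outcome has probability 1/4
prob = {(0, 0, 1): 0.25, (0, 1, 0): 0.25, (1, 0, 0): 0.25, (1, 1, 1): 0.25}

# Left side: P(X1 + X3 <= 1, X2 <= 0)
lhs = sum(p for (x1, x2, x3), p in prob.items() if x1 + x3 <= 1 and x2 <= 0)

# Right side: P(X1 + X3 <= 1) * P(X2 <= 0)
rhs = (sum(p for (x1, x2, x3), p in prob.items() if x1 + x3 <= 1)
       * sum(p for (x1, x2, x3), p in prob.items() if x2 <= 0))

print(lhs, rhs)  # 0.5 and 0.375, so 4/8 <= 3/8 indeed fails
assert lhs > rhs
```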
The next example, due to Joag-Dev and Proschan [3], gives an X=(X1,X2,X3,X4) that is ND but not NA.
Example 1.5.
Let Xi be a binary random variable such that P(Xi=1)=0.5 for i=1,2,3,4. Let (X1,X2) and (X3,X4) have the same bivariate distribution, and let X=(X1,X2,X3,X4) have the joint distribution shown in Table 1.
It can be verified that all the ND conditions hold. However,
P(Xi=1,i=1,2,3,4)>P(X1=X2=1)P(X3=X4=1),
violating NA.
Table 1

(X3,X4) \ (X1,X2)   (0,0)    (0,1)    (1,0)    (1,1)    Marginal
(0,0)               .0577    .0623    .0623    .0577    .24
(0,1)               .0623    .0677    .0677    .0623    .26
(1,0)               .0623    .0677    .0677    .0623    .26
(1,1)               .0577    .0623    .0623    .0577    .24
Marginal            .24      .26      .26      .24
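The NA-violating inequality (and that the entries of Table 1 form a probability distribution) can likewise be checked by enumeration; a small sketch with the table hard-coded:

```python
# Joint law from Table 1, keyed by ((X3, X4), (X1, X2))
cells = {}
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
cols = [(0, 0), (0, 1), (1, 0), (1, 1)]
vals = [
    [.0577, .0623, .0623, .0577],
    [.0623, .0677, .0677, .0623],
    [.0623, .0677, .0677, .0623],
    [.0577, .0623, .0623, .0577],
]
for r_, row in zip(rows, vals):
    for c_, v in zip(cols, row):
        cells[(r_, c_)] = v
assert abs(sum(cells.values()) - 1.0) < 1e-9  # the entries sum to 1

p_all_ones = cells[((1, 1), (1, 1))]                             # P(X1=X2=X3=X4=1)
p_x12 = sum(p for (rc, cc), p in cells.items() if cc == (1, 1))  # P(X1=X2=1) = .24
p_x34 = sum(p for (rc, cc), p in cells.items() if rc == (1, 1))  # P(X3=X4=1) = .24

print(p_all_ones, p_x12 * p_x34)  # .0577 > .0576, violating NA
assert p_all_ones > p_x12 * p_x34
```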
The above examples show that ND does not imply NA; thus ND is strictly weaker than NA. In the papers listed earlier, a number of well-known multivariate distributions are shown to possess the ND properties, such as (a) multinomial, (b) convolution of unlike multinomials, (c) multivariate hypergeometric, (d) Dirichlet, (e) Dirichlet compound multinomial, and (f) multinomials having certain covariance matrices. Because of their wide applications, ND random variables have received increasing attention recently, and a series of useful results have been established (cf. Bozorgnia et al. [2], Amini [4], Fakoor and Azarnoosh [5], Nili Sani et al. [6], Klesov et al. [7], and Wu and Jiang [8]). Hence, extending the limit properties of independent or NA random variables to ND random variables is highly desirable and of considerable significance in both theory and applications. In this paper we obtain some probability inequalities and some complete convergence theorems for weighted sums of sequences of negatively dependent random variables.
In the following, let an≪bn (an≫bn) denote that there exists a constant c>0 such that an≤cbn (an≥cbn) for sufficiently large n, and let an≈bn mean that both an≪bn and an≫bn. Also, let log x denote ln(max(e,x)), and write Sn =̂ ∑j=1n Xj.
Lemma 1.6 (see [2]).
Let X1,…,Xn be ND random variables and let {fn;n≥1} be a sequence of Borel functions, all monotone increasing (or all monotone decreasing). Then {fn(Xn);n≥1} is still a sequence of ND random variables.
Lemma 1.7 (see [2]).
Let X1,…,Xn be nonnegative ND random variables. Then
E(∏j=1nXj)≤∏j=1nEXj.
In particular, let X1,…,Xn be ND, and let t1,…,tn be all nonnegative (or non-positive) real numbers. Then
E(exp(∑j=1ntjXj))≤∏j=1nE(exp(tjXj)).
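As a sanity check, both inequalities of Lemma 1.7 can be verified on a concrete ND pair. The two-point pair below is our own illustrative example (not from the paper); one can check that (1.1) and (1.2) hold at every threshold, so the pair is ND:

```python
import math

# A maximally negatively dependent pair: (X, Y) = (0, 1) or (1, 0), each with
# probability 1/2.  Both orthant conditions (1.1)-(1.2) hold at every
# threshold, so X and Y are ND (and nonnegative).
joint = {(0, 1): 0.5, (1, 0): 0.5}

ex = sum(p * x for (x, y), p in joint.items())       # EX  = 0.5
ey = sum(p * y for (x, y), p in joint.items())       # EY  = 0.5
exy = sum(p * x * y for (x, y), p in joint.items())  # EXY = 0
assert exy <= ex * ey  # product inequality of Lemma 1.7

# Exponential-moment version on a grid of nonnegative t1, t2
for i in range(11):
    for j in range(11):
        t1, t2 = 0.3 * i, 0.3 * j
        lhs = sum(p * math.exp(t1 * x + t2 * y) for (x, y), p in joint.items())
        rhs = (sum(p * math.exp(t1 * x) for (x, y), p in joint.items())
               * sum(p * math.exp(t2 * y) for (x, y), p in joint.items()))
        assert lhs <= rhs + 1e-9
print("both inequalities of Lemma 1.7 hold on this example")
```

For this particular pair the exponential inequality reduces to (e^t1 − 1)(e^t2 − 1) ≥ 0, which is why it holds for all nonnegative (or all nonpositive) t1, t2.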
Lemma 1.8.
Let {Xn;n≥1} be an ND sequence with EXn=0 and E|Xn|p<∞,p≥2. Then for Bn=∑i=1nEXi2,
E|Sn|^p ≤ cp{∑i=1n E|Xi|^p + Bn^(p/2)},  (1.9)
E(max1≤i≤n |Si|^p) ≤ cp log^p n {∑i=1n E|Xi|^p + Bn^(p/2)},  (1.10)
where cp>0 depends only on p.
Remark 1.9.
If {Xn;n≥1} is a sequence of independent random variables, then (1.9) is the classic Rosenthal inequality [9]. Therefore, (1.9) is a generalization of the Rosenthal inequality.
Proof of Lemma 1.8.
Let a>0, Xi′=min(Xi,a), and Sn′=∑i=1n Xi′. By Lemma 1.6, {Xi′;i≥1} is still a negatively dependent sequence. Noting that (e^x−1−x)/x^2 is a nondecreasing function of x on R and that EXi′≤EXi=0 and tXi′≤ta, we have
E(e^(tXi′)) = 1 + tEXi′ + E(((e^(tXi′)−1−tXi′)/(t^2 Xi′^2))·t^2 Xi′^2) ≤ 1 + (e^(ta)−1−ta)a^(−2)EXi′^2 ≤ 1 + (e^(ta)−1−ta)a^(−2)EXi^2 ≤ exp{(e^(ta)−1−ta)a^(−2)EXi^2}.
Here the last inequality follows from 1+x≤ex, for all x∈R.
Noting that Bn=∑i=1n EXi^2 and that {Xi′;i≥1} is ND, we conclude from the above inequality and Lemma 1.7 that, for any x>0 and h>0,
e^(−hx)E(e^(hSn′)) = e^(−hx)E(∏i=1n e^(hXi′)) ≤ e^(−hx)∏i=1n E(e^(hXi′)) ≤ exp{−hx + (e^(ha)−1−ha)a^(−2)Bn}.  (1.12)
Letting h = ln(xa/Bn + 1)/a > 0, we get
(e^(ha)−1−ha)a^(−2)Bn = x/a − (Bn/a^2)ln(xa/Bn + 1) ≤ x/a.
Putting this one into (1.12), we get furthermore
e^(−hx)E(e^(hSn′)) ≤ exp{x/a − (x/a)ln(xa/Bn + 1)}.
Setting t = x/a in the above inequality, we get
P(Sn ≥ x) ≤ ∑i=1n P(Xi > a) + P(Sn′ ≥ x) ≤ ∑i=1n P(Xi > a) + e^(−hx)E e^(hSn′) ≤ ∑i=1n P(Xi > x/t) + exp{t − t ln(x^2/(tBn) + 1)} = ∑i=1n P(Xi > x/t) + e^t (1 + x^2/(tBn))^(−t).
Letting -Xi take the place of Xi in the above inequality, we can get
P(−Sn ≥ x) = P(Sn ≤ −x) ≤ ∑i=1n P(−Xi > x/t) + e^t (1 + x^2/(tBn))^(−t) = ∑i=1n P(Xi < −x/t) + e^t (1 + x^2/(tBn))^(−t).
Thus
P(|Sn| ≥ x) ≤ P(Sn ≥ x) + P(Sn ≤ −x) ≤ ∑i=1n P(|Xi| > x/t) + 2e^t (1 + x^2/(tBn))^(−t).  (1.17)
Multiplying (1.17) by px^(p−1), letting t = p, and integrating over 0 < x < +∞, according to
E|X|^p = p∫0^+∞ x^(p−1)P(|X| ≥ x)dx,
we obtain
E|Sn|^p = p∫0^+∞ x^(p−1)P(|Sn| ≥ x)dx ≤ p∑i=1n ∫0^+∞ x^(p−1)P(|Xi| ≥ x/p)dx + 2pe^p ∫0^+∞ x^(p−1)(1 + x^2/(pBn))^(−p)dx ≤ p^(p+1)∑i=1n E|Xi|^p + pe^p (pBn)^(p/2) ∫0^+∞ u^(p/2−1)(1 + u)^(−p)du = p^(p+1)∑i=1n E|Xi|^p + p^(p/2+1) e^p B(p/2, p/2) Bn^(p/2),  (1.19)
where B(α,β) = ∫0^1 x^(α−1)(1−x)^(β−1)dx = ∫0^+∞ x^(α−1)(1+x)^(−(α+β))dx, α,β>0, is the Beta function. Letting cp = max(p^(p+1), p^(1+p/2) e^p B(p/2, p/2)), we deduce (1.9) from (1.19). From (1.9), (1.10) can be proved in a similar way to Stout [10, Theorem 2.3.1].
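The exponential tail bound (1.17) can be sanity-checked by simulation in the independent case, independence being a special case of ND. The sketch below uses Rademacher summands (our own choice of distribution and parameters), so Bn = n and the truncation term vanishes whenever x/t exceeds 1:

```python
import math
import random

random.seed(1)
n, x, t = 1000, 100.0, 4.0
Bn = float(n)  # Bn = sum of E Xi^2; each Rademacher sign has E Xi^2 = 1

# Tail bound (1.17) with a = x/t:
#   P(|Sn| >= x) <= sum_i P(|Xi| > x/t) + 2 e^t (1 + x^2/(t*Bn))^(-t).
# Here |Xi| = 1 < x/t = 25, so the truncation term is 0.
bound = 2 * math.exp(t) * (1 + x * x / (t * Bn)) ** (-t)

trials = 2000
hits = sum(
    1 for _ in range(trials)
    if abs(sum(random.choice((-1, 1)) for _ in range(n))) >= x
)
empirical = hits / trials
print(empirical, "<=", bound)
assert empirical <= bound
```

The bound here is far from tight (the empirical tail is orders of magnitude smaller), which is expected: (1.17) only needs to hold uniformly over all ND sequences with the given Bn.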
Lemma 1.10.
Let {Xn;n≥1} be a sequence of ND random variables. Then there exists a positive constant c such that, for any x≥0 and all n≥1,
(1 − P(max1≤k≤n |Xk| > x))^2 ∑k=1n P(|Xk| > x) ≤ cP(max1≤k≤n |Xk| > x).
Proof.
Let Ak=(|Xk|>x) and αn=1-P(⋃k=1nAk)=1-P(max1≤k≤n|Xk|>x). Without loss of generality, assume that αn>0. Note that {I(Xk>x)-EI(Xk>x);k≥1} and {I(Xk<-x)-EI(Xk<-x);k≥1} are still ND by Lemma 1.6. Using (1.9), we get
E(∑k=1n (I_Ak − EI_Ak))^2 = E(∑k=1n [(I(Xk > x) − EI(Xk > x)) + (I(Xk < −x) − EI(Xk < −x))])^2 ≤ 2E(∑k=1n (I(Xk > x) − EI(Xk > x)))^2 + 2E(∑k=1n (I(Xk < −x) − EI(Xk < −x)))^2 ≤ c∑k=1n P(Ak).
Combining with the Cauchy-Schwarz inequality, we obtain
∑k=1n P(Ak) = ∑k=1n P(Ak, ⋃j=1n Aj) = ∑k=1n E(I_Ak I_{⋃j=1n Aj}) = E[(∑k=1n (I_Ak − EI_Ak)) I_{⋃j=1n Aj}] + ∑k=1n P(Ak)P(⋃j=1n Aj) ≤ (E(∑k=1n (I_Ak − EI_Ak))^2 · EI_{⋃j=1n Aj})^(1/2) + (1 − αn)∑k=1n P(Ak) ≤ ((c(1 − αn)/αn) · αn ∑k=1n P(Ak))^(1/2) + (1 − αn)∑k=1n P(Ak) ≤ (1/2)(c(1 − αn)/αn + αn ∑k=1n P(Ak)) + (1 − αn)∑k=1n P(Ak).
Thus
αn^2 ∑k=1n P(Ak) ≤ c(1 − αn),
that is,
(1 − P(max1≤k≤n |Xk| > x))^2 ∑k=1n P(|Xk| > x) ≤ cP(max1≤k≤n |Xk| > x).
2. Main Results and the Proofs
The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins [11] as follows. A sequence {Yn;n≥1} of random variables converges completely to the constant c if ∑n=1∞ P(|Yn − c| > ɛ) < ∞ for all ɛ > 0. In view of the Borel-Cantelli lemma, this implies that Yn → c almost surely. Therefore, complete convergence is one of the most important problems in probability theory. Hsu and Robbins [11] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Baum and Katz [12] proved that if {X,Xn;n≥1} is a sequence of i.i.d. random variables with mean zero, then E|X|^(p(t+2)) < ∞ (1 ≤ p < 2, t ≥ −1) is equivalent to the condition that ∑n=1∞ n^t P(|∑i=1n Xi|/n^(1/p) > ɛ) < ∞ for all ɛ > 0. Recent results on complete convergence can be found in Li et al. [13], Liang and Su [14], Wu [15, 16], and Sung [17].
In this paper we study the complete convergence for negatively dependent random variables. As a result, we extend some complete convergence theorems for independent random variables to negatively dependent random variables without imposing any extra conditions.
Theorem 2.1.
Let {X,Xn;n≥1} be a sequence of identically distributed ND random variables, let {ank;1≤k≤n, n≥1} be an array of real numbers, and let r>1, p>2. If, for some 2≤q<p,
N(n, m+1) =̂ #{k ≥ 1: |ank| ≥ (m+1)^(−1/p)} ≈ m^(q(r−1)/p), n, m ≥ 1,  (2.1)
EX = 0 for 1 ≤ q(r−1),  (2.2)
∑k=1n ank^2 ≪ n^δ for 2 ≤ q(r−1) and some 0 < δ < 2/p,  (2.3)
then, for r≥2,
E|X|^(p(r−1)) < ∞  (2.4)
if and only if
∑n=1∞ n^(r−2) P(max1≤k≤n |∑i=1k ani Xi| > ɛn^(1/p)) < ∞, ∀ɛ > 0.  (2.5)
For 1 < r < 2, (2.4) implies (2.5); conversely, (2.5) together with the assumption that n^(r−2)P(max1≤k≤n |ank Xk| > n^(1/p)) is decreasing in n implies (2.4).
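Condition (2.1) is non-vacuous: for instance, the weights ank = k^(−1/(q(r−1))), 1 ≤ k ≤ n (our own illustrative choice, not from the paper), satisfy N(n, m+1) ≈ m^(q(r−1)/p) as long as (m+1)^(q(r−1)/p) ≤ n. A quick numerical check with sample parameters:

```python
# Illustrative weights for condition (2.1): a_{nk} = k^(-1/(q(r-1))), 1 <= k <= n.
# Then |a_{nk}| >= (m+1)^(-1/p) iff k <= (m+1)^(q(r-1)/p), so
# N(n, m+1) = min(n, floor((m+1)^(q(r-1)/p))), which is of exact order
# m^(q(r-1)/p) while (m+1)^(q(r-1)/p) <= n.
p, q, r = 3.0, 2.0, 2.0   # sample parameters: 2 <= q < p, r > 1
n = 10_000
a = [k ** (-1.0 / (q * (r - 1))) for k in range(1, n + 1)]

def N(m_plus_1):
    """Count of weights at least the threshold (m+1)^(-1/p)."""
    thr = m_plus_1 ** (-1.0 / p)
    return sum(1 for w in a if w >= thr)

for m in (1, 5, 50, 500):
    ratio = N(m + 1) / m ** (q * (r - 1) / p)
    assert 0.5 <= ratio <= 2.5  # bounded above and below: the "≈" in (2.1)
print("N(n, m+1) is of order m^(q(r-1)/p) on the tested range")
```

With these weights one can also check numerically that ∑i ani^(q′(r−1)) stays bounded for any q′ > q, which is the content of the bound (2.15) derived in the proof.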
For p=2, q=2, we have the following theorem.
Theorem 2.2.
Let {X,Xn;n≥1} be a sequence of identically distributed ND random variables, let {ank;1≤k≤n, n≥1} be an array of real numbers, and let r>1. If
N(n, m+1) =̂ #{k: |ank| ≥ (m+1)^(−1/2)} ≈ m^(r−1), n, m ≥ 1, EX = 0, 1 ≤ 2(r−1),  (2.6)
∑k=1n |ank|^(2(r−1)) = O(1),  (2.7)
then, for r≥2,
E|X|^(2(r−1)) log|X| < ∞  (2.8)
if and only if
∑n=1∞ n^(r−2) P(max1≤k≤n |∑i=1k ani Xi| > ɛn^(1/2)) < ∞, ∀ɛ > 0.  (2.9)
For 1 < r < 2, (2.8) implies (2.9); conversely, (2.9) together with the assumption that n^(r−2)P(max1≤k≤n |ank Xk| > n^(1/2)) is decreasing in n implies (2.8).
Remark 2.3.
Since NA random variables are a special case of ND random variables, Theorems 2.1 and 2.2 extend the work of Liang and Su [14, Theorem 2.1].
Remark 2.4.
Since, for some 2 ≤ q ≤ p, the condition ∑k |ank|^(q(r−1)) ≪ 1 as n → ∞ implies that
N(n, m+1) =̂ #{k ≥ 1: |ank| ≥ (m+1)^(−1/p)} ≪ m^(q(r−1)/p) as n → ∞,
taking r = 2, conditions (2.1) and (2.6) are weaker than conditions (2.13) and (2.9) of Li et al. [13]. Therefore, Theorems 2.1 and 2.2 not only extend and improve the work of Li et al. [13, Theorem 2.2] from i.i.d. random variables to the ND setting but also establish the necessity parts and relax the range of r.
Proof of Theorem 2.1.
(2.4)⇒(2.5): to prove (2.5), it suffices to show that
∑n=1∞ n^(r−2) P(max1≤k≤n |∑i=1k ani^± Xi| > ɛn^(1/p)) < ∞, ∀ɛ > 0,
where ani^+ = max(ani, 0) and ani^− = max(−ani, 0). Thus, without loss of generality, we can assume that ani > 0 for all n ≥ 1, i ≤ n. For 0 < α < 1/p small enough and a sufficiently large integer K, both to be determined later, let
Xni(1) = −n^α I(ani Xi < −n^α) + ani Xi I(ani|Xi| ≤ n^α) + n^α I(ani Xi > n^α),
Xni(2) = (ani Xi − n^α) I(n^α < ani Xi < ɛn^(1/p)/K),
Xni(3) = (ani Xi + n^α) I(−ɛn^(1/p)/K < ani Xi < −n^α),
Xni(4) = ani Xi − Xni(1) − Xni(2) − Xni(3) = (ani Xi + n^α) I(ani Xi ≤ −ɛn^(1/p)/K) + (ani Xi − n^α) I(ani Xi ≥ ɛn^(1/p)/K),
Snk(j) = ∑i=1k Xni(j), j = 1, 2, 3, 4; 1 ≤ k ≤ n, n ≥ 1.
Thus Snk =̂ ∑i=1k ani Xi = ∑j=1^4 Snk(j). Note that
(max1≤k≤n |Snk| > 4ɛn^(1/p)) ⊆ ⋃j=1^4 (max1≤k≤n |Snk(j)| > ɛn^(1/p)).
So, to prove (2.5) it suffices to show that
Ij =̂ ∑n=1∞ n^(r−2) P(max1≤k≤n |Snk(j)| > ɛn^(1/p)) < ∞, j = 1, 2, 3, 4.
For any q′>q,
∑i=1n ani^(q′(r−1)) = ∑j=1∞ ∑_{i: (j+1)^(−1) ≤ ani^p < j^(−1)} ani^(q′(r−1)) ≤ ∑j=1∞ (N(n, j+1) − N(n, j)) j^(−q′(r−1)/p) ≪ ∑j=1∞ N(n, j)(j^(−q′(r−1)/p) − (j+1)^(−q′(r−1)/p)) ≪ ∑j=1∞ j^(−1−(q′−q)(r−1)/p) < ∞.  (2.15)
Now, we prove that
n^(−1/p) max1≤k≤n |ESnk(1)| → 0, n → ∞.  (2.16)
(i) For 0 < q(r−1) < 1, taking q < q′ < p such that 0 < q′(r−1) < 1, by (2.4) and (2.15) we get
n^(−1/p) max1≤k≤n |ESnk(1)| ≤ n^(−1/p) ∑i=1n (E|ani Xi| I(|ani Xi| ≤ n^α) + n^α P(|ani Xi| > n^α)) ≤ n^(−1/p) (∑i=1n E[|ani Xi|^(q′(r−1)) |ani Xi|^(1−q′(r−1)) I(|ani Xi| ≤ n^α)] + n^(α−αq′(r−1)) ∑i=1n E|ani Xi|^(q′(r−1))) ≪ n^(−1/p+α−αq′(r−1)) → 0, n → ∞.
(ii) For 1 ≤ q(r−1), letting q < q′ < p, by (2.2), (2.4), and (2.15) we get
n^(−1/p) max1≤k≤n |ESnk(1)| ≤ n^(−1/p) ∑i=1n (E|ani Xi| I(|ani Xi| > n^α) + n^α P(|ani Xi| > n^α)) ≤ n^(−1/p) ∑i=1n (E[|ani Xi| (|ani Xi|/n^α)^(q′(r−1)−1) I(|ani Xi| > n^α)] + n^(α−αq′(r−1)) E|ani Xi|^(q′(r−1))) ≪ n^(−1/p+α−αq′(r−1)) → 0.
Hence, (2.16) holds. Therefore, to prove I1<∞ it suffices to prove that
Ĩ1 =̂ ∑n=1∞ n^(r−2) P(max1≤k≤n |Snk(1) − ESnk(1)| > ɛn^(1/p)) < ∞, ∀ɛ > 0.
Note that {Xni(1); 1≤i≤n, n≥1} is still ND by the definition of Xni(1) and Lemma 1.6. Using the Markov inequality and Lemma 1.8, for a suitably large M, to be determined later, we get
Ĩ1 ≪ ∑n=1∞ n^(r−2−M/p) E(max1≤k≤n |Snk(1) − ESnk(1)|)^M ≪ ∑n=1∞ n^(r−2−M/p) log^M n [∑i=1n E|Xni(1)|^M + (∑i=1n E(Xni(1))^2)^(M/2)] =̂ Ĩ11 + Ĩ12.
Taking M > max(2, p(r−1)(1−αq′)/(1−αp)), we have r−2−M/p+αM−αq′(r−1) < −1, and by (2.15) we get
Ĩ11 ≤ ∑n=1∞ n^(r−2−M/p) log^M n ∑i=1n (E|ani Xi|^M I(|ani Xi| ≤ n^α) + n^(Mα) P(|ani Xi| > n^α)) ≤ ∑n=1∞ n^(r−2−M/p) log^M n ∑i=1n (n^(α(M−q′(r−1))) E|ani Xi|^(q′(r−1)) + n^(α(M−q′(r−1))) E|ani Xi|^(q′(r−1))) ≪ ∑n=1∞ n^(r−2−M/p+αM−αq′(r−1)) log^M n < ∞.
(i) For q(r−1) < 2, taking q < q′ < p such that q′(r−1) < 2 and taking M > max(2, 2p(r−1)/(2−2αp+αpq′(r−1))), from (2.15) and r−2−M/p+αM−Mαq′(r−1)/2 < −1, we have
Ĩ12 ≤ ∑n=1∞ n^(r−2−M/p) log^M n [∑i=1n (E(ani Xi)^2 I(|ani Xi| ≤ n^α) + n^(2α) P(|ani Xi| > n^α))]^(M/2) ≪ ∑n=1∞ n^(r−2−M/p) log^M n [n^(2α−αq′(r−1)) ∑i=1n E|ani Xi|^(q′(r−1))]^(M/2) ≪ ∑n=1∞ n^(r−2−M/p+αM−Mαq′(r−1)/2) log^M n < ∞.
(ii) For q(r−1) ≥ 2, taking q < q′ < p and M > max(2, 2p(r−1)/(2−pδ)), where δ is given in (2.3), we get from (2.3), (2.4), (2.15), and r−2−M/p+δM/2 < −1 that
Ĩ12 ≪ ∑n=1∞ n^(r−2−M/p) log^M n [∑i=1n ani^2 + n^(2α−αq′(r−1)) ∑i=1n E|ani Xi|^(q′(r−1))]^(M/2) ≪ ∑n=1∞ n^(r−2−M/p+δM/2) log^M n < ∞.
Since
(∑i=1n Xni(2) > ɛn^(1/p)) = (∑i=1n (ani Xi − n^α) I(n^α < ani Xi < ɛn^(1/p)/K) > ɛn^(1/p)) ⊆ (there exist at least K indices k such that ank Xk > n^α),
we have
P(∑i=1n Xni(2) > ɛn^(1/p)) ≤ ∑_{1≤i1<i2<⋯<iK≤n} P(a_{ni1} X_{i1} > n^α, a_{ni2} X_{i2} > n^α, …, a_{niK} X_{iK} > n^α).
By Lemma 1.6, {aniXi;1≤i≤n,n≥1} is still ND. Hence, for q<q′<p we conclude that
P(∑i=1n Xni(2) > ɛn^(1/p)) ≤ ∑_{1≤i1<i2<⋯<iK≤n} ∏j=1K P(a_{nij} X_{ij} > n^α) ≤ (∑i=1n P(|ani Xi| > n^α))^K ≤ (n^(−αq′(r−1)) ∑i=1n E|ani Xi|^(q′(r−1)))^K ≪ n^(−αq′(r−1)K),  (2.26)
via (2.4) and (2.15). Note that Xni(2) ≥ 0 by the definition of Xni(2). Hence, by (2.26) and by taking α > 0 and K such that r−2−αKq′(r−1) < −1, we have
I2 = ∑n=1∞ n^(r−2) P(∑i=1n Xni(2) > ɛn^(1/p)) ≪ ∑n=1∞ n^(r−2−αq′(r−1)K) < ∞.
Similarly, Xni(3) ≤ 0 and I3 < ∞.
Last, we prove that I4<∞. Let Y=KX/ɛ. By the definition of Xni(4) and (2.1), we have
P(max1≤k≤n |Snk(4)| > ɛn^(1/p)) ≤ P(∑i=1n |Xni(4)| > ɛn^(1/p)) ≤ P(⋃i=1n (ani|Xi| > ɛn^(1/p)/K)) ≤ ∑i=1n P(ani|Xi| > ɛn^(1/p)/K) ≤ ∑j=1∞ ∑_{i: (j+1)^(−1) ≤ ani^p < j^(−1)} P(|Y| > (nj)^(1/p)) ≤ ∑j=1∞ (N(n, j+1) − N(n, j)) ∑_{l=nj}^∞ P(l ≤ |Y|^p < l+1) = ∑_{l=n}^∞ ∑_{j=1}^{[l/n]} (N(n, j+1) − N(n, j)) P(l ≤ |Y|^p < l+1) ≈ ∑_{l=n}^∞ (l/n)^(q(r−1)/p) P(l ≤ |Y|^p < l+1).
Combining with (2.15),
I4 ≈ ∑n=1∞ n^(r−2) ∑_{l=n}^∞ (l/n)^(q(r−1)/p) P(l ≤ |Y|^p < l+1) = ∑_{l=1}^∞ (∑_{n=1}^l n^(r−2−q(r−1)/p)) l^(q(r−1)/p) P(l ≤ |Y|^p < l+1) ≈ ∑_{l=1}^∞ l^(r−1) P(l ≤ |Y|^p < l+1) ≈ E|Y|^(p(r−1)) ≈ E|X|^(p(r−1)) < ∞.
Now we prove (2.5)⇒(2.4). Since
max1≤j≤n|anjXj|≤max1≤j≤n|∑i=1janiXi|+max1≤j≤n|∑i=1j-1aniXi|,
then from (2.5) we have
∑n=1∞nr-2P(max1≤j≤n|anjXj|>n1/p)<∞.
Combining with the hypotheses of Theorem 2.1,
P(max1≤j≤n|anjXj|>n1/p)⟶0,n⟶∞.
Thus, for sufficiently large n,
P(max1≤j≤n|anjXj|>n1/p)<12.
By Lemma 1.6, {anjXj;1≤j≤n,n≥1} is still ND. By applying Lemma 1.10 and (2.1), we obtain
∑k=1n P(|ank Xk| > n^(1/p)) ≤ 4cP(max1≤k≤n |ank Xk| > n^(1/p)).
Substituting the above inequality in (2.5), we get
∑n=1∞nr-2∑k=1nP(|ankXk|>n1/p)<∞.
So, by the same computation as in the proof of I4 < ∞,
E|X|p(r-1)≈∑n=1∞nr-2∑k=1nP(|ankXk|>n1/p)<∞.
Proof of Theorem 2.2.
Let p = 2, α < 1/p = 1/2, and K > 1/(2α). Using the same notation and method as in Theorem 2.1, we only need to give the parts that differ.
Letting (2.7) take the place of (2.15), similarly to the proof of (2.19) and (2.26), we obtain
n^(−1/2) max1≤k≤n |ESnk(1)| ≪ n^(−1/2+α−2α(r−1)) → 0, n → ∞.
Taking M>max(2,2(r-1)), we have
Ĩ11 ≪ ∑n=1∞ n^(−1−(1−2α)(M/2−(r−1))) log^M n < ∞.
For r−1 ≤ 1, taking M > max(2, 2(r−1)/(1−2α+2α(r−1))), we get
Ĩ12 ≪ ∑n=1∞ n^(−1+(r−1)−(1−2α+2α(r−1))M/2) log^M n < ∞.
For r−1 > 1, EX^2 < ∞ by (2.8). Letting M > 2(r−1)^2, by the Hölder inequality,
Ĩ12 ≪ ∑n=1∞ n^(r−2−M/2) log^M n [∑i=1n ani^2 + n^(2α−2α(r−1)) ∑i=1n E|ani Xi|^(2(r−1))]^(M/2) ≪ ∑n=1∞ n^(r−2−M/2) log^M n [(∑i=1n ani^(2(r−1)))^(1/(r−1)) (∑i=1n 1)^((r−2)/(r−1))]^(M/2) ≪ ∑n=1∞ n^(−1−M/(2(r−1))+(r−1)) log^M n < ∞.
By the definition of K,
I2≪∑n=1∞n-1-(r-1)(2αK-1)<∞.
Similarly to the proof of (2.31), we have
I4 ≪ ∑_{l=1}^∞ (∑_{n=1}^l n^(−1)) l^(r−1) P(l ≤ |Y|^2 < l+1) ≈ ∑_{l=1}^∞ l^(r−1) log l P(l ≤ |Y|^2 < l+1) ≈ E(|Y|^(2(r−1)) log|Y|) ≈ E(|X|^(2(r−1)) log|X|) < ∞.
(2.9)⇒(2.8): using the same method as in the necessity part of Theorem 2.1, we easily get
E(|X|2(r-1)log|X|)≈∑n=1∞nr-2∑k=1nP(|ankXk|>n1/2)<∞.
Acknowledgments
The author is very grateful to the referees and the editors for their valuable comments and helpful suggestions that improved the clarity and readability of the paper. This work was supported by the National Natural Science Foundation of China (11061012), the Support Program of the New Century Guangxi Ten-Hundred-Thousand Talents Project (2005214), and the Guangxi Science Foundation (2010GXNSFA013120). Professor Qunying Wu's research interests are probability and statistics.
References
[1] E. L. Lehmann, "Some concepts of dependence," Annals of Mathematical Statistics, vol. 37, no. 5, pp. 1137-1153, 1966.
[2] A. Bozorgnia, R. F. Patterson, and R. L. Taylor, "Limit theorems for ND r.v.'s," Technical report, University of Georgia, Athens, GA, USA, 1993.
[3] K. Joag-Dev and F. Proschan, "Negative association of random variables, with applications," The Annals of Statistics, vol. 11, no. 1, pp. 286-295, 1983.
[4] M. Amini, 2000.
[5] V. Fakoor and H. A. Azarnoosh, "Probability inequalities for sums of negatively dependent random variables," vol. 21, no. 3, pp. 257-264, 2005.
[6] H. R. Nili Sani, M. Amini, and A. Bozorgnia, "Strong laws for weighted sums of negative dependent random variables," vol. 16, no. 3, pp. 261-265, 2005.
[7] O. Klesov, A. Rosalsky, and A. I. Volodin, "On the almost sure growth rate of sums of lower negatively dependent nonnegative random variables," Statistics & Probability Letters, vol. 71, no. 2, pp. 193-202, 2005.
[8] Q. Y. Wu and Y. Y. Jiang, "The strong consistency of M estimator in a linear model for negatively dependent random samples," Communications in Statistics - Theory and Methods, vol. 40, no. 3, pp. 467-491, 2011.
[9] H. P. Rosenthal, "On the subspaces of L^p (p > 2) spanned by sequences of independent random variables," Israel Journal of Mathematics, vol. 8, pp. 273-303, 1970.
[10] W. F. Stout, Almost Sure Convergence, Academic Press, New York, NY, USA, 1974.
[11] P. L. Hsu and H. Robbins, "Complete convergence and the law of large numbers," Proceedings of the National Academy of Sciences of the United States of America, vol. 33, no. 2, pp. 25-31, 1947.
[12] L. E. Baum and M. Katz, "Convergence rates in the law of large numbers," Transactions of the American Mathematical Society, vol. 120, pp. 108-123, 1965.
[13] D. L. Li, M. B. Rao, T. F. Jiang, and X. C. Wang, "Complete convergence and almost sure convergence of weighted sums of random variables," Journal of Theoretical Probability, vol. 8, no. 1, pp. 49-76, 1995.
[14] H.-Y. Liang and C. Su, "Complete convergence for weighted sums of NA sequences," Statistics & Probability Letters, vol. 45, no. 1, pp. 85-95, 1999.
[15] Q. Y. Wu, "Convergence properties of pairwise NQD random sequences," vol. 45, no. 3, pp. 617-624, 2002.
[16] Q. Y. Wu, "Complete convergence for negatively dependent sequences of random variables," Journal of Inequalities and Applications, vol. 2010, Article ID 507293, 10 pages, 2010.
[17] S. H. Sung, "Moment inequalities and complete moment convergence," Journal of Inequalities and Applications, vol. 2009, Article ID 271265, 14 pages, 2009.