Discrete Dynamics in Nature and Society, Volume 2010, Article ID 630608, doi:10.1155/2010/630608, Hindawi Publishing Corporation (ISSN 1607-887X, 1026-0226)

Research Article

Complete Convergence for Weighted Sums of ρ*-Mixing Random Variables

Soo Hak Sung, Department of Applied Mathematics, Pai Chai University, Taejon 302-735, South Korea (pcu.ac.kr)

Academic Editor: Leonid Shaikhet

Received 2 August 2009; Accepted 24 March 2010; Published 30 March 2010

Copyright © 2010 Soo Hak Sung. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We obtain the complete convergence for weighted sums of ρ*-mixing random variables. Our result extends the result of Peligrad and Gut (1999) on unweighted averages to weighted averages under a mild condition on the weights. Our result also generalizes and sharpens the result of An and Yuan (2008).

1. Introduction

In many stochastic models, the assumption that random variables are independent is not plausible. So it is of interest to extend the concept of independence to dependent settings. One of these dependence structures is ρ*-mixing.

Let {X_n, n ≥ 1} be a sequence of random variables defined on a probability space (Ω, ℱ, P), and let ℱ_n^m denote the σ-algebra generated by the random variables X_n, X_{n+1}, …, X_m. For any S ⊂ ℕ, define ℱ_S = σ(X_i, i ∈ S). Given two σ-algebras 𝒜, ℬ in ℱ, put

ρ(𝒜, ℬ) = sup{corr(X, Y) : X ∈ L_2(𝒜), Y ∈ L_2(ℬ)},

where corr(X, Y) = (EXY − EX·EY)/√(Var(X) Var(Y)). Define the ρ*-mixing coefficients by

ρ*_n = sup{ρ(ℱ_S, ℱ_T) : S, T ⊂ ℕ with dist(S, T) ≥ n}.

Obviously, 0 ≤ ρ*_{n+1} ≤ ρ*_n ≤ ρ*_0 = 1. The sequence {X_n, n ≥ 1} is called ρ*-mixing (or ρ̃-mixing) if there exists k ∈ ℕ such that ρ*_k < 1. Note that if {X_n, n ≥ 1} is a sequence of independent random variables, then ρ*_n = 0 for all n ≥ 1.
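The maximal correlation in this definition ranges over all square-integrable functions of the two index blocks, so it is not directly computable in general; for a fixed pair of random variables, however, corr(X, Y) is just the usual Pearson coefficient. As a minimal numerical illustration (not part of the paper; the sample data and the helper name `corr` are our own), one can estimate it from paired samples:

```python
import math

def corr(xs, ys):
    """Sample version of corr(X, Y) = (E[XY] - E[X]E[Y]) / sqrt(Var(X) Var(Y))."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum(x * y for x, y in zip(xs, ys)) / n - mx * my
    vx = sum(x * x for x in xs) / n - mx * mx
    vy = sum(y * y for y in ys) / n - my * my
    return cov / math.sqrt(vx * vy)

# A linear relation Y = 2X attains the supremum value 1; for independent
# variables the ρ*-mixing coefficients of the sequence vanish.
print(corr([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]))  # 1.0 (up to rounding)
```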

A number of limit results for ρ*-mixing sequences of random variables have been established by many authors. We refer to Bradley [1] for the central limit theorem; Bryc and Smoleński [2], Peligrad and Gut [3], and Utev and Peligrad [4] for moment inequalities; Gan [5], Kuczmaszewska [6], and Wu and Jiang [7] for almost sure convergence; and An and Yuan [8], Cai [9], Gan [5], Kuczmaszewska [10], Peligrad and Gut [3], and Zhu [11] for complete convergence.

The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins [12]. A sequence {X_n, n ≥ 1} of random variables converges completely to the constant θ if

∑_{n=1}^∞ P(|X_n − θ| > ϵ) < ∞ for all ϵ > 0.

In view of the Borel–Cantelli lemma, this implies that X_n → θ almost surely. Therefore, complete convergence is a very important tool in establishing almost sure convergence of sums as well as weighted sums of random variables. Hsu and Robbins [12] proved that the sequence of arithmetic means of independent and identically distributed random variables converges completely to the expected value if the variance of the summands is finite. Erdős [13] proved the converse. The result of Hsu–Robbins–Erdős is a fundamental theorem in probability theory and has been generalized and extended in several directions by many authors. One of the most important generalizations is the Baum and Katz [14] strong law of large numbers.
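To make the definition concrete, the following Monte Carlo sketch (our own illustration, not part of the paper; the sample sizes, tolerance ϵ = 0.25, and trial count are arbitrary) estimates P(|X̄_n − 1/2| > ϵ) for means of iid Uniform(0,1) variables. Complete convergence of the means requires these probabilities to be summable in n, which forces them to decay rapidly, as the estimates suggest:

```python
import random

def tail_prob(n, eps=0.25, trials=2000, seed=0):
    """Monte Carlo estimate of P(|mean of n iid Uniform(0,1) - 1/2| > eps)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        m = sum(rng.random() for _ in range(n)) / n
        if abs(m - 0.5) > eps:
            hits += 1
    return hits / trials

# For n = 1 the probability is exactly 1/2; for n = 100 a deviation of 0.25
# is a large-deviation event (probability ~ e^{-0.4 n}), so the estimate is
# essentially zero.
print(tail_prob(1), tail_prob(100))
```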

Theorem 1.1 (Baum and Katz [14]).

Let p ≥ 1/α and 1/2 < α ≤ 1. Let {X_n, n ≥ 1} be a sequence of independent and identically distributed random variables with EX_1 = 0. Then the following statements are equivalent:

(i) E|X_1|^p < ∞;

(ii) ∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j X_i| > ϵ n^α) < ∞ for all ϵ > 0.

Peligrad and Gut [3] extended the result of Baum and Katz [14] to ρ*-mixing random variables.

Theorem 1.2 (Peligrad and Gut [3]).

Let p > 1/α and 1/2 < α ≤ 1. Let {X_n, n ≥ 1} be a sequence of identically distributed ρ*-mixing random variables with EX_1 = 0. Then the following statements are equivalent:

(i) E|X_1|^p < ∞;

(ii) ∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j X_i| > ϵ n^α) < ∞ for all ϵ > 0.

Cai [9] complemented Theorem 1.2 for the case p = 1/α.

Recently, An and Yuan [8] obtained a complete convergence result for weighted sums of identically distributed ρ*-mixing random variables.

Theorem 1.3 (An and Yuan [8]).

Let p > 1/α and 1/2 < α ≤ 1. Let {X_n, n ≥ 1} be a sequence of identically distributed ρ*-mixing random variables with EX_1 = 0. Assume that {a_{ni}, 1 ≤ i ≤ n, n ≥ 1} is an array of real numbers satisfying

∑_{i=1}^n |a_{ni}|^p = O(n^δ) for some 0 < δ < 1, (1.4)

#A_{nk} = #{1 ≤ i ≤ n : |a_{ni}|^p > (k+1)^{−1}} ≥ n e^{−1/k} for all k ≥ 1, n ≥ 1. (1.5)

Then the following statements are equivalent:

(i) E|X_1|^p < ∞;

(ii) ∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j a_{ni} X_i| > ϵ n^α) < ∞ for all ϵ > 0.

Note that the result of An and Yuan [8] is not an extension of Peligrad and Gut's result [3], since condition (1.4) fails for the array with a_{ni} = 1 for 1 ≤ i ≤ n, n ≥ 1. An and Yuan [8] proved the implication (i) ⇒ (ii) under condition (1.4), and proved the converse under conditions (1.4) and (1.5). However, no array satisfies both (1.4) and (1.5). Noting that #A_{nk}/(k+1) ≤ ∑_{i=1}^n |a_{ni}|^p = O(n^δ), we have n e^{−1/k} ≤ #A_{nk} ≤ (k+1) O(n^δ). But this cannot hold when k is fixed and n is large enough.
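The counting argument above can be checked numerically. The sketch below (our own illustration; the choices δ = 1/2, k = 1, and implied constant C = 1 are arbitrary) compares the lower bound n e^{−1/k} on #A_{nk} demanded by (1.5) with the upper bound (k+1)·C n^δ implied by (1.4): with k fixed, the linear lower bound eventually overtakes the O(n^δ) upper bound, so the two conditions are incompatible for large n.

```python
import math

def bounds(n, delta=0.5, k=1, C=1.0):
    """Lower bound on #A_nk from (1.5) and upper bound from (1.4):
    (1.5) requires #A_nk >= n * e^(-1/k), while (1.4) gives
    #A_nk <= (k+1) * sum |a_ni|^p <= (k+1) * C * n^delta."""
    lower = n * math.exp(-1.0 / k)
    upper = (k + 1) * C * n ** delta
    return lower, upper

lo, up = bounds(1000)
print(lo > up)  # True: for n = 1000 the required lower bound exceeds the upper bound
```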

In this paper, we obtain a new complete convergence result for weighted sums of identically distributed ρ*-mixing random variables. Our result extends the result of Peligrad and Gut [3], and generalizes and sharpens the result of An and Yuan [8].

Throughout this paper, the symbol C denotes a positive constant which is not necessarily the same in each appearance, [x] denotes the integer part of x, and a ∧ b = min{a, b}.

2. Main Result

To prove our main result, we need the following lemma which is a Rosenthal-type inequality for ρ*-mixing random variables.

Lemma 2.1 (Utev and Peligrad [4]).

Let {X_n, n ≥ 1} be a sequence of ρ*-mixing random variables with EX_n = 0 and E|X_n|^r < ∞ for some r ≥ 2 and all n ≥ 1. Let k be such that ρ*_k < 1. Then there exists a constant D = D(r, k, ρ*_k) depending only on r, k, and ρ*_k such that for any n ≥ 1,

E(max_{1≤j≤n} |∑_{i=1}^j X_i|^r) ≤ D{∑_{i=1}^n E|X_i|^r + (∑_{i=1}^n EX_i^2)^{r/2}}. (2.1)

Now we state the main result of this paper.

Theorem 2.2.

Let p > 1/α and 1/2 < α ≤ 1. Let {X_n, n ≥ 1} be a sequence of identically distributed ρ*-mixing random variables with EX_1 = 0. Assume that {a_{ni}, 1 ≤ i ≤ n, n ≥ 1} is an array of real numbers satisfying

∑_{i=1}^n |a_{ni}|^q = O(n) for some q > p. (2.2)

If E|X_1|^p < ∞, then

∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j a_{ni} X_i| > ϵ n^α) < ∞ for all ϵ > 0. (2.3)

Conversely, if (2.3) holds for any array {a_{ni}} satisfying (2.2), then E|X_1|^p < ∞.
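Condition (2.2) is easy to verify for concrete arrays. The sketch below (our own illustration, not part of the paper; the helper `satisfies_22`, the choice q = 3, and the two example arrays are ours) checks ∑_{i≤n} |a_{ni}|^q ≤ Cn for the unweighted case a_{ni} = 1 and for an unbounded array that still satisfies the condition, which is the situation handled by Lemma 2.4 below:

```python
def satisfies_22(weights_for_n, q, C=1.0, n_values=range(1, 201)):
    """Check sum_{i=1}^n |a_ni|^q <= C * n for each n in n_values.
    weights_for_n(n) returns the row (a_n1, ..., a_nn); a tiny relative
    tolerance absorbs floating-point rounding."""
    return all(
        sum(abs(a) ** q for a in weights_for_n(n)) <= C * n * (1 + 1e-9)
        for n in n_values
    )

q = 3.0  # any q > p works in (2.2)

# Unweighted averages: a_ni = 1, so the row sum is exactly n.
print(satisfies_22(lambda n: [1.0] * n, q))                          # True

# Unbounded weights: one entry of size n^(1/q), the rest zero; row sum is n.
print(satisfies_22(lambda n: [n ** (1 / q)] + [0.0] * (n - 1), q))   # True
```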

To prove Theorem 2.2, we first prove the following lemma which is the sufficiency of Theorem 2.2 when the array is bounded.

Lemma 2.3.

Let {X_n, n ≥ 1} be a sequence of identically distributed ρ*-mixing random variables with EX_1 = 0 and E|X_1|^p < ∞ for some p > 1/α and 1/2 < α ≤ 1. Assume that {a_{ni}, 1 ≤ i ≤ n, n ≥ 1} is an array of real numbers satisfying |a_{ni}| ≤ 1 for 1 ≤ i ≤ n and n ≥ 1. Then (2.3) holds.

Proof.

For 1 ≤ i ≤ n and n ≥ 1, define X′_{ni} = X_i I(|X_i| ≤ n^α). Since EX_i = 0 and ∑_{i=1}^n |a_{ni}| ≤ n, we have that

n^{−α} max_{1≤j≤n} |∑_{i=1}^j a_{ni} EX′_{ni}| = n^{−α} max_{1≤j≤n} |∑_{i=1}^j a_{ni} EX_i I(|X_i| > n^α)| ≤ n^{−α} ∑_{i=1}^n |a_{ni}| E|X_1| I(|X_1| > n^α) ≤ n^{1−α} E|X_1| I(|X_1| > n^α) ≤ n^{1−pα} E|X_1|^p I(|X_1| > n^α) → 0

as n → ∞. Hence for n large enough, we have n^{−α} max_{1≤j≤n} |∑_{i=1}^j a_{ni} EX′_{ni}| < ϵ/2. It follows that

∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j a_{ni} X_i| > ϵ n^α) ≤ ∑_{n=1}^∞ n^{pα−2} ∑_{i=1}^n P(|X_i| > n^α) + ∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j a_{ni} X′_{ni}| > ϵ n^α) ≤ ∑_{n=1}^∞ n^{pα−1} P(|X_1| > n^α) + C ∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j a_{ni}(X′_{ni} − EX′_{ni})| > ϵ n^α / 2) =: I + CJ.

Noting that ∑_{n=1}^∞ n^{pα−1} P(|X_1| > n^α) ≤ C E|X_1|^p < ∞, we have I < ∞. Thus, it remains to show that J < ∞.

We have by Markov's inequality and Lemma 2.1 that for any r ≥ 2,

J ≤ (2/ϵ)^r ∑_{n=1}^∞ n^{pα−rα−2} E max_{1≤j≤n} |∑_{i=1}^j a_{ni}(X′_{ni} − EX′_{ni})|^r ≤ C ∑_{n=1}^∞ n^{pα−rα−2} {(∑_{i=1}^n a_{ni}^2 E|X′_{ni}|^2)^{r/2} + ∑_{i=1}^n |a_{ni}|^r E|X′_{ni}|^r} ≤ C ∑_{n=1}^∞ n^{pα−rα−2+r/2} (E|X_1|^2 I(|X_1| ≤ n^α))^{r/2} + C ∑_{n=1}^∞ n^{pα−rα−1} E|X_1|^r I(|X_1| ≤ n^α) =: C J_1 + C J_2.

In the last inequality, we used the fact that |a_{ni}| ≤ 1 for 1 ≤ i ≤ n and n ≥ 1.

If p ≥ 2, then we take r large enough such that r > max{(pα−1)/(α−1/2), p}. Since r > (pα−1)/(α−1/2), we get

J_1 ≤ C ∑_{n=1}^∞ n^{pα−rα−2+r/2} < ∞.

Since r > p, we also get

J_2 = ∑_{n=1}^∞ n^{pα−rα−1} ∑_{i=1}^n E|X_1|^r I((i−1)^α < |X_1| ≤ i^α) = ∑_{i=1}^∞ E|X_1|^r I((i−1)^α < |X_1| ≤ i^α) ∑_{n=i}^∞ n^{pα−rα−1} ≤ C ∑_{i=1}^∞ E|X_1|^r I((i−1)^α < |X_1| ≤ i^α) i^{pα−rα} ≤ C E|X_1|^p < ∞. (2.9)

If p < 2, then we take r = 2. Since r > p, (2.9) still holds, and so J_1 = J_2 < ∞.

We next prove the sufficiency of Theorem 2.2 when the array is unbounded.

Lemma 2.4.

Let {X_n, n ≥ 1} be a sequence of identically distributed ρ*-mixing random variables with EX_1 = 0 and E|X_1|^p < ∞ for some p > 1/α and 1/2 < α ≤ 1. Assume that {a_{ni}, 1 ≤ i ≤ n, n ≥ 1} is an array of real numbers satisfying a_{ni} = 0 or |a_{ni}| > 1, and

∑_{i=1}^n |a_{ni}|^q ≤ n for some q > p. (2.10)

Then (2.3) holds.

Proof.

If p < 2, then we can take δ > 0 such that p < p + δ < min{2, q}. Since a_{ni} = 0 or |a_{ni}| > 1, we have that ∑_{i=1}^n |a_{ni}|^{p+δ} ≤ ∑_{i=1}^n |a_{ni}|^q ≤ n. Thus we may assume that (2.10) holds for some p < q < 2 when p < 2.

Let S_{nj} = ∑_{i=1}^j a_{ni} X_i I(|a_{ni} X_i| ≤ n^α) for 1 ≤ j ≤ n and n ≥ 1. In view of EX_i = 0, we get

n^{−α} max_{1≤j≤n} |ES_{nj}| = n^{−α} max_{1≤j≤n} |∑_{i=1}^j a_{ni} EX_i I(|a_{ni} X_i| > n^α)| ≤ n^{−α} ∑_{i=1}^n E|a_{ni} X_i| I(|a_{ni} X_i| > n^α) ≤ n^{−pα} ∑_{i=1}^n E|a_{ni} X_i|^p I(|a_{ni} X_i| > n^α) ≤ n^{−pα} ∑_{i=1}^n |a_{ni}|^p E|X_1|^p ≤ n^{−pα} (∑_{i=1}^n |a_{ni}|^q)^{p/q} n^{1−p/q} E|X_1|^p ≤ n^{1−pα} E|X_1|^p → 0,

since pα > 1. Hence for n large enough, we have that n^{−α} max_{1≤j≤n} |ES_{nj}| < ϵ/2. It follows that

∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j a_{ni} X_i| > ϵ n^α) ≤ ∑_{n=1}^∞ n^{pα−2} P(max_{1≤i≤n} |a_{ni} X_i| > n^α) + ∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |S_{nj}| > ϵ n^α) ≤ ∑_{n=1}^∞ n^{pα−2} ∑_{i=1}^n P(|a_{ni} X_i| > n^α) + C ∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |S_{nj} − ES_{nj}| > ϵ n^α / 2) =: I + CJ.

For 1 ≤ j ≤ n−1 and n ≥ 2, let I_{nj} = {1 ≤ i ≤ n : n^{1/q}(j+1)^{−1/q} < |a_{ni}| ≤ n^{1/q} j^{−1/q}}. Then {I_{nj}, 1 ≤ j ≤ n−1} are disjoint, ∪_{j=1}^{n−1} I_{nj} = {1 ≤ i ≤ n : a_{ni} ≠ 0}, and ∑_{j=1}^k #I_{nj} ≤ k+1 for 1 ≤ k ≤ n−1, since

n ≥ ∑_{1≤i≤n : a_{ni}≠0} |a_{ni}|^q = ∑_{j=1}^{n−1} ∑_{i∈I_{nj}} |a_{ni}|^q ≥ n ∑_{j=1}^k (j+1)^{−1} #I_{nj} ≥ (n/(k+1)) ∑_{j=1}^k #I_{nj}.

For convenience of notation, let t = 1/(α − 1/q). Since a_{ni} = 0 or |a_{ni}| > 1, and ∑_{i=1}^n |a_{ni}|^q ≤ n, we have a_{11} = 0. It follows that

I = ∑_{n=2}^∞ n^{pα−2} ∑_{j=1}^{n−1} ∑_{i∈I_{nj}} P(|a_{ni} X_i| > n^α) ≤ ∑_{n=2}^∞ n^{pα−2} ∑_{j=1}^{n−1} P(|X_1|^t > n j^{t/q}) #I_{nj} ≤ ∑_{n=2}^∞ n^{pα−2} ∑_{j=1}^{n−1} #I_{nj} ∑_{k≥[n j^{t/q}]} P(k < |X_1|^t ≤ k+1) ≤ ∑_{n=2}^∞ n^{pα−2} ∑_{k=n}^∞ P(k < |X_1|^t ≤ k+1) ∑_{j=1}^{(n−1)∧[((k+1)/n)^{q/t}]} #I_{nj} ≤ ∑_{n=2}^∞ n^{pα−2} ∑_{k=n}^∞ P(k < |X_1|^t ≤ k+1)([((k+1)/n)^{q/t}] + 1) ≤ ∑_{n=1}^∞ n^{pα−2} ∑_{k=n}^{[n^{1+t/q}]} P(k < |X_1|^t ≤ k+1)([((k+1)/n)^{q/t}] + 1) + ∑_{n=1}^∞ n^{pα−1} ∑_{k=[n^{1+t/q}]+1}^∞ P(k < |X_1|^t ≤ k+1) =: I_1 + I_2.

Since pα − 2 − q/t = −α(q−p) − 1 < −1, we obtain

I_1 ≤ C ∑_{n=1}^∞ n^{pα−2−q/t} ∑_{k=n}^{[n^{1+t/q}]} P(k < |X_1|^t ≤ k+1) k^{q/t} ≤ C ∑_{k=1}^∞ P(k < |X_1|^t ≤ k+1) k^{q/t} ∑_{n=[k^{q/(q+t)}]}^k n^{pα−2−q/t} ≤ C ∑_{k=1}^∞ P(k < |X_1|^t ≤ k+1) k^{q/t − qα(q−p)/(q+t)} ≤ C E|X_1|^p < ∞.

We also obtain

I_2 ≤ ∑_{k=1}^∞ P(k < |X_1|^t ≤ k+1) ∑_{n=1}^{[k^{q/(t+q)}]} n^{pα−1} ≤ C ∑_{k=1}^∞ P(k < |X_1|^t ≤ k+1) k^{p(α−1/q)} ≤ C E|X_1|^p < ∞.

From I_1 < ∞ and I_2 < ∞, we have I < ∞. Thus, it remains to show that J < ∞.

We have by Markov's inequality and Lemma 2.1 that for any r ≥ 2,

J ≤ C ∑_{n=1}^∞ n^{pα−rα−2} E max_{1≤j≤n} |S_{nj} − ES_{nj}|^r ≤ C ∑_{n=1}^∞ n^{pα−rα−2} (∑_{i=1}^n E|a_{ni} X_i|^2 I(|a_{ni} X_i| ≤ n^α))^{r/2} + C ∑_{n=1}^∞ n^{pα−rα−2} ∑_{i=1}^n E|a_{ni} X_i|^r I(|a_{ni} X_i| ≤ n^α) =: J_1 + J_2.

Observe that for r ≥ q and n > m,

n ≥ ∑_{j=1}^{n−1} ∑_{i∈I_{nj}} |a_{ni}|^q ≥ n ∑_{j=1}^{n−1} (j+1)^{−1} #I_{nj} ≥ n (m+1)^{r/q−1} ∑_{j=m}^{n−1} (j+1)^{−r/q} #I_{nj}.

So ∑_{j=m}^{n−1} j^{−r/q} #I_{nj} ≤ C m^{−(r/q−1)} for r ≥ q and n > m.

For J1 and J2, we proceed with two cases.

(i) If p ≥ 2, then we take r large enough such that r > max{(pα−1)/(α−1/2), q}. Then we obtain that

J_1 ≤ C ∑_{n=1}^∞ n^{pα−rα−2} (∑_{i=1}^n |a_{ni}|^2)^{r/2} ≤ C ∑_{n=1}^∞ n^{pα−rα−2} (∑_{i=1}^n |a_{ni}|^q)^{r/2} ≤ C ∑_{n=1}^∞ n^{pα−rα−2+r/2} < ∞.

The second inequality follows from the fact that a_{ni} = 0 or |a_{ni}| > 1.

Noting that a_{11} = 0, we also obtain that

J_2 = ∑_{n=2}^∞ n^{pα−rα−2} ∑_{j=1}^{n−1} ∑_{i∈I_{nj}} E|a_{ni} X_i|^r I(|a_{ni} X_i| ≤ n^α) ≤ ∑_{n=2}^∞ n^{pα−rα−2+r/q} ∑_{j=1}^{n−1} j^{−r/q} #I_{nj} E|X_1|^r I(|X_1|^t ≤ n(j+1)^{t/q}) ≤ ∑_{n=2}^∞ n^{pα−rα−2+r/q} ∑_{j=1}^{n−1} j^{−r/q} #I_{nj} ∑_{0≤k≤[n(j+1)^{t/q}]} E|X_1|^r I(k < |X_1|^t ≤ k+1) = ∑_{n=2}^∞ n^{pα−rα−2+r/q} ∑_{j=1}^{n−1} j^{−r/q} #I_{nj} ∑_{k=0}^{2n} E|X_1|^r I(k < |X_1|^t ≤ k+1) + ∑_{n=2}^∞ n^{pα−rα−2+r/q} ∑_{j=1}^{n−1} j^{−r/q} #I_{nj} ∑_{k=2n+1}^{[n(j+1)^{t/q}]} E|X_1|^r I(k < |X_1|^t ≤ k+1) =: J_3 + J_4.

Since pα−rα−2+r/q < qα−rα−2+r/q = −(r−q)(α−1/q) − 1 < −1 and q > p, we have that

J_3 = ∑_{n=2}^∞ n^{pα−rα−2+r/q} ∑_{k=0}^{2n} E|X_1|^r I(k < |X_1|^t ≤ k+1) ∑_{j=1}^{n−1} j^{−r/q} #I_{nj} ≤ C ∑_{n=2}^∞ n^{pα−rα−2+r/q} ∑_{k=0}^{2n} E|X_1|^r I(k < |X_1|^t ≤ k+1) ≤ C ∑_{k=1}^∞ E|X_1|^r I(k < |X_1|^t ≤ k+1) ∑_{n=[k/2]}^∞ n^{pα−rα−2+r/q} ≤ C ∑_{k=1}^∞ E|X_1|^r I(k < |X_1|^t ≤ k+1) k^{pα−rα−1+r/q} ≤ C ∑_{k=1}^∞ P(k < |X_1|^t ≤ k+1) k^{pα−1} ≤ C E|X_1|^p < ∞.

Since 1/t + 1/q − α = 0 and pα − 2 − q/t = −α(q−p) − 1 < −1, we also have that

J_4 ≤ ∑_{n=2}^∞ n^{pα−rα−2+r/q} ∑_{k=2n+1}^{[n^{(q+t)/q}]} E|X_1|^r I(k < |X_1|^t ≤ k+1) ∑_{j=[(k/n)^{q/t}]−1}^{n−1} j^{−r/q} #I_{nj} ≤ C ∑_{n=2}^∞ n^{pα−rα−2+r/q} ∑_{k=2n+1}^{[n^{(q+t)/q}]} E|X_1|^r I(k < |X_1|^t ≤ k+1)([(k/n)^{q/t}] − 1)^{−(r/q−1)} ≤ C ∑_{k=5}^∞ E|X_1|^r I(k < |X_1|^t ≤ k+1) k^{−(r−q)/t} ∑_{n=[k^{q/(q+t)}]}^{[k/2]} n^{pα−2−q/t} ≤ C ∑_{k=5}^∞ E|X_1|^r I(k < |X_1|^t ≤ k+1) k^{−(r−q)/t − (α−1/q)(q−p)} ≤ C E|X_1|^p < ∞.

From J_3 < ∞ and J_4 < ∞, we have J_2 < ∞.

(ii) If p < 2, then we take r = 2. As noted above, we may assume that p < q < 2, so r > q. As in the case p ≥ 2, we have J_1 = J_2 ≤ C E|X_1|^p < ∞.

We now prove Theorem 2.2 by using Lemmas 2.3 and 2.4.

Proof of Theorem 2.2.

Sufficiency.

Without loss of generality, we may assume that ∑_{i=1}^n |a_{ni}|^q ≤ n for some q > p. For n ≥ 1, let A_n = {1 ≤ i ≤ n : |a_{ni}| ≤ 1} and B_n = {1 ≤ i ≤ n : |a_{ni}| > 1}, and let a′_{ni} = a_{ni} if i ∈ A_n and a′_{ni} = 0 otherwise, and a″_{ni} = a_{ni} if i ∈ B_n and a″_{ni} = 0 otherwise. Then

max_{1≤j≤n} |∑_{i=1}^j a_{ni} X_i| ≤ max_{1≤j≤n} |∑_{i=1}^j a′_{ni} X_i| + max_{1≤j≤n} |∑_{i=1}^j a″_{ni} X_i|.

It follows that

∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j a_{ni} X_i| > ϵ n^α) ≤ ∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j a′_{ni} X_i| > ϵ n^α / 2) + ∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j a″_{ni} X_i| > ϵ n^α / 2) =: I + J.

By Lemma 2.3, we have I < ∞. By Lemma 2.4, we have J < ∞. Hence (2.3) holds.

Necessity.

Choose, for each n ≥ 1, a_{n1} = ⋯ = a_{nn} = 1. Then {a_{ni}} satisfies (2.2). By (2.3), we obtain that

∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |∑_{i=1}^j X_i| > ϵ n^α) < ∞ for all ϵ > 0,

which implies that

∑_{n=1}^∞ n^{pα−2} P(max_{1≤j≤n} |X_j| > ϵ n^α) < ∞ for all ϵ > 0.

Observe that

∞ > ∑_{i=1}^∞ ∑_{n=2^{i−1}+1}^{2^i} n^{pα−2} P(max_{1≤j≤n} |X_j| > ϵ n^α) ≥ ∑_{i=1}^∞ (2^{i−1})^{pα−2} 2^{i−1} P(max_{1≤j≤2^{i−1}} |X_j| > ϵ (2^i)^α) if pα ≥ 2, and ∞ > ∑_{i=1}^∞ (2^i)^{pα−2} 2^{i−1} P(max_{1≤j≤2^{i−1}} |X_j| > ϵ (2^i)^α) if 1 < pα < 2; hence ∞ > ∑_{i=1}^∞ P(max_{1≤j≤2^{i−1}} |X_j| > ϵ (2^i)^α) if pα ≥ 2, and ∞ > 2^{pα−2} ∑_{i=1}^∞ P(max_{1≤j≤2^{i−1}} |X_j| > ϵ (2^i)^α) if 1 < pα < 2.

Hence we have that for any ϵ > 0, P(max_{1≤j≤2^{i−1}} |X_j| > ϵ (2^i)^α) → 0 as i → ∞, and so P(max_{1≤j≤n} |X_j| > n^α) → 0 as n → ∞. The rest of the proof is the same as that of Peligrad and Gut [3] and is omitted.

Remark 2.5.

Taking a_{ni} = 1 for 1 ≤ i ≤ n and n ≥ 1, we immediately obtain Theorem 1.2 from Theorem 2.2. If the array {a_{ni}} satisfies (1.4), then it satisfies (2.2): taking q such that p < q < p/δ, we have

∑_{i=1}^n |a_{ni}|^q ≤ (max_{1≤i≤n} |a_{ni}|^{q−p}) ∑_{i=1}^n |a_{ni}|^p ≤ C n^{δ(q−p)/p} n^δ ≤ C n.

So the implication (i) ⇒ (ii) of Theorem 1.3 follows from Theorem 2.2. As noted after Theorem 1.3, the implication (ii) ⇒ (i) of Theorem 1.3 is not true. Therefore, our result extends the result of Peligrad and Gut [3] to a weighted average, and generalizes and sharpens the result of An and Yuan [8].

Acknowledgments

The author is grateful to the editor Leonid Shaikhet and the referees for the helpful comments and suggestions that considerably improved the presentation of this paper. This work was supported by the Korea Science and Engineering Foundation (KOSEF) Grant funded by the Korea government (MOST) (no. R01-2007-000-20053-0).

References

1. R. C. Bradley, "On the spectral density and asymptotic normality of weakly dependent random fields," Journal of Theoretical Probability, vol. 5, no. 2, pp. 355–373, 1992.
2. W. Bryc and W. Smoleński, "Moment conditions for almost sure convergence of weakly correlated random variables," Proceedings of the American Mathematical Society, vol. 119, no. 2, pp. 629–635, 1993.
3. M. Peligrad and A. Gut, "Almost-sure results for a class of dependent random variables," Journal of Theoretical Probability, vol. 12, no. 1, pp. 87–104, 1999.
4. S. Utev and M. Peligrad, "Maximal inequalities and an invariance principle for a class of weakly dependent random variables," Journal of Theoretical Probability, vol. 16, no. 1, pp. 101–115, 2003.
5. S. Gan, "Almost sure convergence for ρ̃-mixing random variable sequences," Statistics & Probability Letters, vol. 67, no. 4, pp. 289–298, 2004.
6. A. Kuczmaszewska, "On Chung-Teicher type strong law of large numbers for ρ*-mixing random variables," Discrete Dynamics in Nature and Society, vol. 2008, Article ID 140548, 10 pages, 2008.
7. Q. Wu and Y. Jiang, "Some strong limit theorems for ρ̃-mixing sequences of random variables," Statistics & Probability Letters, vol. 78, no. 8, pp. 1017–1023, 2008.
8. J. An and D. Yuan, "Complete convergence of weighted sums for ρ*-mixing sequence of random variables," Statistics & Probability Letters, vol. 78, no. 12, pp. 1466–1472, 2008.
9. G.-H. Cai, "Strong law of large numbers for ρ*-mixing sequences with different distributions," Discrete Dynamics in Nature and Society, vol. 2006, Article ID 27648, 7 pages, 2006.
10. A. Kuczmaszewska, "On complete convergence for arrays of rowwise dependent random variables," Statistics & Probability Letters, vol. 77, no. 11, pp. 1050–1060, 2007.
11. M.-H. Zhu, "Strong laws of large numbers for arrays of rowwise ρ*-mixing random variables," Discrete Dynamics in Nature and Society, vol. 2007, Article ID 74296, 6 pages, 2007.
12. P. L. Hsu and H. Robbins, "Complete convergence and the law of large numbers," Proceedings of the National Academy of Sciences of the United States of America, vol. 33, pp. 25–31, 1947.
13. P. Erdős, "On a theorem of Hsu and Robbins," Annals of Mathematical Statistics, vol. 20, pp. 286–291, 1949.
14. L. E. Baum and M. Katz, "Convergence rates in the law of large numbers," Transactions of the American Mathematical Society, vol. 120, pp. 108–123, 1965.