The complete moment convergence of weighted sums for arrays of rowwise φ-mixing random variables is investigated. By using a moment inequality and the truncation method, sufficient conditions for the complete moment convergence of weighted sums for arrays of rowwise φ-mixing random variables are obtained. The results of Ahmed et al. (2002) are complemented. As an application, the complete moment convergence of moving average processes based on a φ-mixing random sequence is obtained, which improves the result of Kim and Ko (2008).
1. Introduction
Hsu and Robbins [1] introduced the concept of complete convergence. A sequence of random variables {Xn,n=1,2,…} is said to converge completely to a constant C if
(1.1)∑n=1∞P(|Xn-C|>ϵ)<∞,∀ϵ>0.
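The name "complete" reflects that (1.1) is strictly stronger than almost sure convergence; the following one-line argument (added here for illustration, not part of the source) makes the link explicit.

```latex
% By the Borel--Cantelli lemma, (1.1) yields, for every \epsilon > 0,
\sum_{n=1}^{\infty} P(|X_n - C| > \epsilon) < \infty
  \;\Longrightarrow\;
  P\bigl(|X_n - C| > \epsilon \ \text{infinitely often}\bigr) = 0,
% so with probability one |X_n - C| \le \epsilon for all large n,
% i.e. X_n \to C almost surely. No independence is needed for this step.
```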
Moreover, they proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. The converse was proved by Erdős [2]. This result has been generalized and extended in several directions; see Baum and Katz [3], Chow [4], Gut [5], Taylor et al. [6], and Cai and Xu [7]. In particular, Ahmed et al. [8] obtained the following result in a Banach space setting.
Theorem A.
Let {Xni;i≥1,n≥1} be an array of rowwise independent random elements in a separable real Banach space (B,∥·∥). Let P(∥Xni∥>x)≤CP(|X|>x) for some random variable X, constant C and all n,i and x>0. Suppose that {ani,i≥1,n≥1} is an array of constants such that
(1.2)supi≥1|ani|=O(n-r),for some r>0,∑i=1∞|ani|=O(nα),for some α∈[0,r).
Let β be such that α+β≠-1 and fix δ>0 such that 1+α/r<δ≤2. Denote s=max(1+(α+β+1)/r,δ). If E|X|s<∞ and Sn=∑i=1∞aniXni→0 in probability, then ∑n=1∞nβP(∥Sn∥>ϵ)<∞ for all ϵ>0.
Chow [4] established the following refinement, which is a complete moment convergence result for sums of i.i.d. random variables.
Theorem B.
Let EX1=0, 1≤p<2 and r≥p. Suppose that E[|X1|r+|X1|log(1+|X1|)]<∞. Then
(1.3)∑n=1∞n(r/p)-2-(1/p)E(|∑i=1nXi|-ϵn1/p)+<∞,∀ϵ>0.
The main purpose of this paper is to revisit the above results for arrays of rowwise φ-mixing random variables. Taking inspiration from [8], we discuss the complete moment convergence of weighted sums for arrays of rowwise φ-mixing random variables by applying truncation methods. The results of Ahmed et al. [8] are extended to the φ-mixing case. As an application, the corresponding results for moving average processes based on a φ-mixing random sequence are obtained, which extend and improve the result of Kim and Ko [9].
For the proofs of the main results, we need to restate a few definitions and lemmas for easy reference. Throughout this paper, C will represent a positive constant whose value may change from one place to another. The symbol I(A) denotes the indicator function of A; [x] denotes the greatest integer not exceeding x. For a finite set B, the symbol ♯B denotes the number of elements of B.
Definition 1.1.
A sequence of random variables {Xn,n≥1} is said to be a sequence of φ-mixing random variables if
(1.4)φ(m)=supk≥1{|P(B∣A)-P(B)|:A∈ℑ1k,B∈ℑk+m∞,P(A)>0}→0,as m→∞,
where ℑjk=σ{Xi;j≤i≤k}, 1≤j≤k≤∞.
Definition 1.2.
A sequence {Xn,n≥1} of random variables is said to be stochastically dominated by a random variable X (write {Xi}≺X) if there exists a constant C, such that P{|Xn|>x}≤CP{|X|>x} for all x≥0 and n≥1.
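For illustration (this observation is not part of the source), identical distribution is the simplest case of stochastic domination:

```latex
% If every X_n has the same distribution as X, then for all x \ge 0
P\{|X_n| > x\} = P\{|X| > x\},
% so \{X_n\} \prec X holds with C = 1; the constant C in the definition
% merely allows the tails of X_n to be bounded by a multiple of the tail of X.
```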
The following lemma is a well-known result.
Lemma 1.3.
Let the sequence {Xn,n≥1} of random variables be stochastically dominated by a random variable X. Then for any p>0 and x>0,
(1.5)E|Xn|pI(|Xn|≤x)≤C[E|X|pI(|X|≤x)+xpP{|X|>x}],
(1.6)E|Xn|pI(|Xn|>x)≤CE|X|pI(|X|>x).
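Lemma 1.3 follows from the tail-integral representation of moments; a short sketch of (1.5) under the domination assumption (ours, for the reader's convenience) reads:

```latex
% Sketch of (1.5), assuming P\{|X_n| > t\} \le C P\{|X| > t\} for t \ge 0:
E|X_n|^p I(|X_n| \le x)
  \le \int_0^{x^p} P\{|X_n| > t^{1/p}\}\,dt
  \le C \int_0^{x^p} P\{|X| > t^{1/p}\}\,dt
  = C\,E\min(|X|^p, x^p)
  = C\bigl[E|X|^p I(|X| \le x) + x^p P\{|X| > x\}\bigr].
```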
Definition 1.4.
A real-valued function l(x), positive and measurable on [A,∞) for some A>0, is said to be slowly varying if limx→∞ l(λx)/l(x)=1 for each λ>0.
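A standard example (added for illustration): l(x)=log x is slowly varying, while any nonzero power is not.

```latex
% For l(x) = \log x on [A,\infty), A > 1, and any \lambda > 0:
\frac{l(\lambda x)}{l(x)} = \frac{\log \lambda + \log x}{\log x}
  = 1 + \frac{\log \lambda}{\log x} \longrightarrow 1 \qquad (x \to \infty),
% whereas l(x) = x^{\delta} with \delta \ne 0 fails,
% since l(\lambda x)/l(x) = \lambda^{\delta} \ne 1.
```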
By the properties of slowly varying function, we can easily prove the following lemma. Here we omit the details of the proof.
Lemma 1.5.
Let l(x)>0 be a slowly varying function as x→∞. Then there exist constants C1, C2 (depending only on r) such that
(i) C1kr+1l(k)≤∑n=1knrl(n)≤C2kr+1l(k) for any r>-1 and positive integer k;
(ii) C1kr+1l(k)≤∑n=k∞nrl(n)≤C2kr+1l(k) for any r<-1 and positive integer k.
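A quick sanity check of Lemma 1.5 with the trivial slowly varying function l≡1 (illustration only):

```latex
% For l \equiv 1, comparison with integrals gives, for r > -1,
\sum_{n=1}^{k} n^{r} \asymp \int_{1}^{k} x^{r}\,dx \asymp \frac{k^{r+1}}{r+1},
% and, for r < -1,
\sum_{n=k}^{\infty} n^{r} \asymp \int_{k}^{\infty} x^{r}\,dx
  = \frac{k^{r+1}}{-(r+1)},
% both of exact order k^{r+1} l(k), as the two-sided bounds of the lemma assert.
```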
The following lemma will play an important role in the proof of our main results. The proof is due to Shao [10].
Lemma 1.6.
Let {Xi,i≥1} be a sequence of φ-mixing random variables with mean zero. Suppose that there exists a sequence {Cn} of positive numbers such that E(∑i=k+1k+mXi)2≤Cn for any k≥0, n≥1, and m≤n. Then for any q≥2, there exists C=C(q,φ(·)) such that
(1.7)Emax1≤j≤n|∑i=k+1k+jXi|q≤C[Cnq/2+Emaxk+1≤i≤k+n|Xi|q].
Lemma 1.7.
Let {Xi,i≥1} be a sequence of φ-mixing random variables with ∑i=1∞φ1/2(i)<∞. Then there exists C such that for any k≥0 and n≥1,
(1.8)E(∑i=k+1k+nXi)2≤C∑i=k+1k+nEXi2.
Proof.
By Lemma 5.4.4 in [11] and Hölder's inequality, we have
(1.9)E(∑i=k+1k+nXi)2=∑i=k+1k+nEXi2+2∑k+1≤i<j≤k+nEXiXj≤∑i=k+1k+nEXi2+4∑k+1≤i<j≤k+nφ1/2(j-i)(EXi2)1/2(EXj2)1/2≤∑i=k+1k+nEXi2+2∑i=k+1k+n-1∑j=i+1k+nφ1/2(j-i)(EXi2+EXj2)≤(1+4∑i=1nφ1/2(i))∑i=k+1k+nEXi2.
Therefore, (1.8) holds.
2. Main Results
Now we state our main results. The proofs will be given in Section 3.
Theorem 2.1.
Let {Xni,i≥1,n≥1} be an array of rowwise φ-mixing random variables with EXni=0, {Xni}≺X, and ∑m=1∞φ1/2(m)<∞. Let l(x)>0 be a slowly varying function, and let {ani,i≥1,n≥1} be an array of constants such that
(2.1)supi≥1|ani|=O(n-r),for some r>0,∑i=1∞|ani|=O(nα),for some α∈[0,r).
(a) If α+β+1>0 and there exists some δ>0 such that (α/r)+1<δ≤2, and s=max(1+((α+β+1)/r),δ), then E|X|sl(|X|1/r)<∞ implies
(2.2)∑n=1∞nβl(n)E[supk≥1|∑i=1kaniXni|-ϵ]+<∞,∀ϵ>0.
(b) If β=-1 and α>0, then E|X|1+(α/r)(1+l(|X|1/r))<∞ implies
(2.3)∑n=1∞n-1l(n)E[supk≥1|∑i=1kaniXni|-ϵ]+<∞,∀ϵ>0.
Remark 2.2.
If α+β+1<0, then E|X|<∞ implies that (2.2) holds. In fact,
(2.4)∑n=1∞nβl(n)E[supk≥1|∑i=1kaniXni|-ϵ]+≤∑n=1∞nβl(n)∑i=1∞|ani|E|Xni|+ϵ∑n=1∞nβl(n)≤C∑n=1∞nβ+αl(n)E|X|+ϵ∑n=1∞nβl(n)<∞.
Remark 2.3.
Note that
(2.5)∞>∑n=1∞nβl(n)E[supk≥1|∑i=1kaniXni|-ϵ]+=∑n=1∞nβl(n)∫0∞P{supk≥1|∑i=1kaniXni|-ϵ>x}dx=∫0∞∑n=1∞nβl(n)P{supk≥1|∑i=1kaniXni|>x+ϵ}dx.
Therefore, from (2.5), we obtain that the complete moment convergence implies the complete convergence, that is, under the conditions of Theorem 2.1, result (2.2) implies
(2.6)∑n=1∞nβl(n)P{supk≥1|∑i=1kaniXni|>ϵ}<∞,
and (2.3) implies
(2.7)∑n=1∞n-1l(n)P{supk≥1|∑i=1kaniXni|>ϵ}<∞.
Corollary 2.4.
Under the conditions of Theorem 2.1,
(a) if α+β+1>0 and there exists some δ>0 such that (α/r)+1<δ≤2, and s=max(1+((α+β+1)/r),δ), then E|X|sl(|X|1/r)<∞ implies
(2.8)∑n=1∞nβl(n)E[|∑i=1∞aniXni|-ϵ]+<∞,∀ϵ>0,
(b) if β=-1 and α>0, then E|X|1+(α/r)(1+l(|X|1/r))<∞ implies
(2.9)∑n=1∞n-1l(n)E[|∑i=1∞aniXni|-ϵ]+<∞,∀ϵ>0.
Corollary 2.5.
Let {Xni,i≥1,n≥1} be an array of rowwise φ-mixing random variables with EXni=0,{Xni}≺X and ∑m=1∞φ1/2(m)<∞. Suppose that l(x)>0 is a slowly varying function.
(1) Let p>1 and 1≤t<2. If E|X|ptl(|X|t)<∞, then
(2.10)∑n=1∞np-2-(1/t)l(n)E[max1≤k≤n|∑i=1kXni|-ϵn1/t]+<∞,∀ϵ>0.
(2) Let 1<t<2. If E|X|t[1+l(|X|t)]<∞, then
(2.11)∑n=1∞n-1-(1/t)l(n)E[max1≤k≤n|∑i=1kXni|-ϵn1/t]+<∞,∀ϵ>0.
Corollary 2.6.
Suppose that Xn=∑i=-∞∞ai+nYi,n≥1, where {ai,-∞<i<∞} is a sequence of real numbers with ∑-∞∞|ai|<∞, and {Yi,-∞<i<∞} is a sequence of φ-mixing random variables with EYi=0, {Yi}≺Y and ∑m=1∞φ1/2(m)<∞. Let l(x) be a slowly varying function.
(1) Let 1≤t<2 and r≥1+(t/2). If E|Y|rl(|Y|t)<∞, then
(2.12)∑n=1∞n(r/t)-2-(1/t)l(n)E[|∑i=1nXi|-ϵn1/t]+<∞,∀ϵ>0.
(2) Let 1<t<2. If E|Y|t[1+l(|Y|t)]<∞, then
(2.13)∑n=1∞n-1-(1/t)l(n)E[|∑i=1nXi|-ϵn1/t]+<∞,∀ϵ>0.
Remark 2.7.
Corollary 2.6 establishes the complete moment convergence of moving average processes based on a φ-mixing random sequence with different distributions. We extend the results of Chen et al. [12] from complete convergence to complete moment convergence. The result of Kim and Ko [9] is a special case of Corollary 2.6 (1). Moreover, our result covers the case r=t, which was not considered by Kim and Ko.
3. Proofs of the Main Results
Proof of Theorem 2.1.
Without loss of generality, we can assume
(3.1)supi≥1|ani|≤n-r,∑i=1∞|ani|≤nα.
Let Snk(x)=∑i=1kaniXniI(|aniXni|≤n-rx) for any k≥1, n≥1, and x≥0. First note that E|X|sl(|X|1/r)<∞ implies E|X|t<∞ for any 0<t<s. Therefore, for x>nr,
(3.2)x-1nrsupk≥1|ESnk(x)|=x-1nrsupk≥1|E∑i=1kaniXniI(|aniXni|>n-rx)| (since EXni=0)≤∑i=1∞E|aniXni|I(|aniXni|>n-rx)≤C∑i=1∞E|aniX|I(|aniX|>n-rx)≤C∑i=1∞|ani|E|X|I(|X|>x)≤CnαE|X|I(|X|>x)≤Cxα/rE|X|I(|X|>x)≤CE|X|1+(α/r)I(|X|>nr)→0 as n→∞.
Hence, for n large enough, we have supk≥1|ESnk(x)|<(ϵ/2)n-rx for all x>nr. Then
(3.3)∑n=1∞nβl(n)E[supk≥1|∑i=1kaniXni|-ϵ]+=∑n=1∞nβl(n)∫ϵ∞P{supk≥1|∑i=1kaniXni|≥x}dx=∑n=1∞nβ-rl(n)ϵ∫nr∞P{supk≥1|∑i=1kaniXni|≥ϵn-rx}dx≤C∑n=1∞nβ-rl(n)∫nr∞P{supi|aniXni|>n-rx}dx+C∑n=1∞nβ-rl(n)∫nr∞P{supk≥1|Snk(x)-ESnk(x)|≥n-rxϵ2}dx:=I1+I2.
Noting that α+β>-1, by Lemma 1.5, Markov inequality, (1.6), and (3.1), we have
(3.4)I1≤C∑n=1∞nβ-rl(n)∫nr∞∑i=1∞P{|aniXni|>n-rx}dx≤C∑n=1∞nβ-rl(n)∫nr∞nrx-1∑i=1∞E|aniXni|I(|aniXni|>n-rx)dx≤C∑n=1∞nβ+αl(n)∫nr∞x-1E|X|I(|X|>x)dx≤C∑n=1∞nβ+αl(n)∑k=n∞∫krkr+1x-1E|X|I(|X|>x)dx≤C∑n=1∞nβ+αl(n)∑k=n∞k-1E|X|I(|X|>kr)≤C∑k=1∞k-1E|X|I(|X|>kr)∑n=1knβ+αl(n)≤C∑k=1∞kβ+αl(k)E|X|I(|X|>kr)≤CE|X|1+((1+α+β)/r)l(|X|1/r)<∞.
Now we estimate I2. Noting that ∑m=1∞φ1/2(m)<∞, by Lemma 1.7, we have
(3.5)sup1≤m<∞E(∑i=1maniXniI(|aniXni|≤n-rx)-E∑i=1maniXniI(|aniXni|≤n-rx))2≤C∑i=1∞Eani2Xni2I(|aniXni|≤n-rx).
By Lemma 1.6, Markov inequality, Cr inequality, and (1.5), for any q≥2, we have
(3.6)P{supk≥1|Snk(x)-ESnk(x)|≥n-rxϵ2}≤Cnrqx-qEsupk≥1|Snk(x)-ESnk(x)|q≤Cnrqx-q[(∑i=1∞Eani2Xni2I(|aniXni|≤n-rx))q/2+∑i=1∞E|aniXni|qI(|aniXni|≤n-rx)]≤Cnrqx-q(∑i=1∞Eani2X2I(|aniX|≤n-rx))q/2+Cnrqx-q∑i=1∞E|aniX|qI(|aniX|≤n-rx)+C(∑i=1∞P{|aniX|>n-rx})q/2+C∑i=1∞P{|aniX|>n-rx}:=J1+J2+J3+J4.
So,
(3.7)I2≤∑n=1∞nβ-rl(n)∫nr∞(J1+J2+J3+J4)dx.
From (3.4), we have ∑n=1∞nβ-rl(n)∫nr∞J4dx<∞.
For J1, we consider the following two cases.
If s>2, then EX2<∞. Taking q≥2 such that β+(q(α-r)/2)<-1, we have
(3.8)∑n=1∞nβ-rl(n)∫nr∞J1dx≤C∑n=1∞nβ-r+rql(n)∫nr∞x-q(∑i=1∞ani2)q/2dx≤C∑n=1∞nβ-r+rql(n)nq(α-r)/2nr(-q+1)≤C∑n=1∞nβ+(q(α-r)/2)l(n)<∞.
If s≤2, we choose s′ such that 1+(α/r)<s′<s. Taking q≥2 such that β+(qr/2)(1+(α/r)-s′)<-1, we have
(3.9)∑n=1∞nβ-rl(n)∫nr∞J1dx≤C∑n=1∞nβ-r+rql(n)∫nr∞x-q(∑i=1∞|ani||ani|s′-1E|aniX|2-s′|X|s′I(|aniX|≤n-rx))q/2dx≤C∑n=1∞nβ-r+rql(n)nqα/2n-(qr/2)(s′-1)∫nr∞x-q(n-rx)(q/2)(2-s′)dx≤C∑n=1∞nβ+(qr/2)(1+(α/r)-s′)l(n)<∞.
So, ∑n=1∞nβ-rl(n)∫nr∞J1dx<∞.
Now, we estimate J2. Set Inj={i≥1∣(n(j+1))-r<|ani|≤(nj)-r},j=1,2,…. Then ∪j≥1Inj=N, where N is the set of positive integers. Note also that for all k≥1,n≥1,
(3.10)nα≥∑i=1∞|ani|=∑j=1∞∑i∈Inj|ani|≥∑j=1∞(♯Inj)(n(j+1))-r≥n-r∑j=k∞(♯Inj)(j+1)-rq(k+1)rq-r.
Hence, we have
(3.11)∑j=k∞(♯Inj)j-rq≤Cnα+rkr-rq.
Note that
(3.12)∑n=1∞nβ-rl(n)∫nr∞J2dx=C∑n=1∞nβ-r+rql(n)∫nr∞x-q∑j=1∞∑i∈InjE|aniX|qI(|aniX|≤n-rx)dx=C∑n=1∞nβ-r+rql(n)∑j=1∞(♯Inj)(nj)-rq∑k=n∞∫kr(k+1)rx-qE|X|qI(|X|≤x(j+1)r)dx≤C∑n=1∞nβ-r+rql(n)∑j=1∞(♯Inj)(nj)-rq∑k=n∞kr(-q+1)-1E|X|qI(|X|≤(k+1)r(j+1)r)=C∑n=1∞nβ-rl(n)∑k=n∞kr(-q+1)-1∑j=1∞(♯Inj)j-rq∑i=0(k+1)(j+1)-1E|X|qI(ir<|X|≤(i+1)r)≤C∑n=1∞nβ-rl(n)∑k=n∞kr(-q+1)-1∑j=1∞(♯Inj)j-rq∑i=02(k+1)-1E|X|qI(ir<|X|≤(i+1)r)+C∑n=1∞nβ-rl(n)∑k=n∞kr(-q+1)-1∑j=1∞(♯Inj)j-rq∑i=2(k+1)(k+1)(j+1)E|X|qI(ir<|X|≤(i+1)r):=J2′+J2′′.
Taking q≥2 large enough such that β+α-rq+r<-1, for J2′, by Lemma 1.5 and (3.11), we get
(3.13)J2′≤C∑n=1∞nβ-rl(n)∑k=n∞kr(-q+1)-1nα+r∑i=02(k+1)-1E|X|qI(ir<|X|≤(i+1)r)=C∑k=1∞kr(-q+1)-1∑i=02(k+1)-1E|X|qI(ir<|X|≤(i+1)r)∑n=1knβ+αl(n)≤C∑k=1∞kβ+α-rq+rl(k)∑i=02(k+1)-1E|X|qI(ir<|X|≤(i+1)r)≤C+C∑i=3∞E|X|qI(ir<|X|≤(i+1)r)∑k=[i/2]∞kβ+α-rq+rl(k)≤C+C∑i=3∞iβ+α-rq+r+1l(i)E|X|qI(ir<|X|≤(i+1)r)≤C+CE|X|1+((β+α+1)/r)l(|X|1/r)<∞.
For J2′′, we obtain
(3.14)J2′′≤C∑n=1∞nβ-rl(n)∑k=n∞kr(-q+1)-1∑j=1∞(♯Inj)j-rq∑i=2(k+1)(j+1)(k+1)E|X|qI(ir<|X|≤(i+1)r)≤C∑n=1∞nβ-rl(n)∑k=n∞kr(-q+1)-1∑i=2(k+1)∞E|X|qI(ir<|X|≤(i+1)r)∑j=[i(k+1)-1]-1∞(♯Inj)j-rq≤C∑n=1∞nβ-rl(n)∑k=n∞kr(-q+1)-1∑i=2(k+1)∞nr+αir(1-q)k-r(1-q)E|X|qI(ir<|X|≤(i+1)r)=C∑k=1∞k-1∑i=2(k+1)∞ir(1-q)E|X|qI(ir<|X|≤(i+1)r)∑n=1knβ+αl(n)≤C∑k=1∞kβ+αl(k)∑i=2(k+1)∞ir(1-q)E|X|qI(ir<|X|≤(i+1)r)≤C∑i=4∞iβ+α+1+r-rqE|X|qI(ir<|X|≤(i+1)r)≤CE|X|1+((β+α+1)/r)l(|X|1/r)<∞.
So ∑n=1∞nβ-rl(n)∫nr∞J2dx<∞. Finally, we prove ∑n=1∞nβ-rl(n)∫nr∞J3dx<∞. In fact, noting that 1+(α/r)<s′<s and β+(qr/2)(1+(α/r)-s′)<-1, and using Markov inequality and (3.1), we get
(3.15)∑n=1∞nβ-rl(n)∫nr∞J3dx≤C∑n=1∞nβ-rl(n)∫nr∞(∑i=1∞nrs′x-s′E|aniX|s′)q/2dx≤C∑n=1∞nβ-rl(n)nqrs′/2n-r(s′-1)(q/2)nα(q/2)∫nr∞x-s′(q/2)dx≤C∑n=1∞nβ-r+r(q/2)+α(q/2)l(n)nr(-s′(q/2)+1)≤C∑n=1∞nβ+(qr/2)(1+(α/r)-s′)l(n)<∞.
Thus, the proof of (a) is completed. Next, we prove (b). Note that E|X|1+(α/r)<∞ implies that (3.2) holds. Therefore, in view of the proof of (a), to complete the proof of (b), we only need to prove
(3.16)I2=C∑n=1∞n-1-rl(n)∫nr∞P{supk≥1|Snk(x)-ESnk(x)|≥n-rxϵ2}dx<∞.
In fact, note that β=-1, α+β+1>0, α+β-r<-1, and E|X|1+(α/r)l(|X|1/r)<∞. By taking q=2 in the proofs of (3.12), (3.13), and (3.14), we get
(3.17)C∑n=1∞n-1+rl(n)∫nr∞x-2∑i=1∞Eani2X2I(|aniX|≤n-rx)dx≤C+CE|X|1+(α/r)l(|X|1/r)<∞.
Then, by (3.17), we have
(3.18)I2≤C∑n=1∞n-1-rl(n)∫nr∞n2rx-2Esupk≥1|Snk(x)-ESnk(x)|2dx≤C∑n=1∞n-1+rl(n)∫nr∞x-2∑i=1∞Eani2Xni2I(|aniXni|≤n-rx)dx≤C∑n=1∞n-1+rl(n)∫nr∞x-2∑i=1∞Eani2X2I(|aniX|≤n-rx)dx+C∑n=1∞n-1-rl(n)∫nr∞∑i=1∞P{|aniX|>n-rx}dx≤C∑n=1∞n-1+rl(n)∫nr∞x-2∑i=1∞Eani2X2I(|aniX|≤n-rx)dx+C<∞.
The proof of Theorem 2.1 is completed.
Proof of Corollary 2.4.
Note that
(3.19)[|∑i=1∞aniXni|-ϵ]+≤[supk≥1|∑i=1kaniXni|-ϵ]+.
Therefore, (2.8) and (2.9) hold by Theorem 2.1.
Proof of Corollary 2.5.
By applying Theorem 2.1 with β=p-2, ani=n-1/t for 1≤i≤n, and ani=0 for i>n (so that (2.1) holds with r=1/t and α=1-(1/t)), we obtain (2.10). Similarly, taking β=-1, ani=n-1/t for 1≤i≤n, and ani=0 for i>n, we obtain (2.11) by Theorem 2.1.
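As a consistency check (ours, not in the source), these parameter choices reproduce the moment condition stated in Corollary 2.5:

```latex
% With a_{ni} = n^{-1/t} (i \le n), the weight conditions hold with
% r = 1/t and \alpha = 1 - 1/t; together with \beta = p - 2,
s = \max\Bigl(1 + \frac{\alpha + \beta + 1}{r},\, \delta\Bigr)
  = \max\Bigl(1 + t\bigl(p - \tfrac{1}{t}\bigr),\, \delta\Bigr)
  = \max(pt, \delta),
% and for t < \delta \le \min(2, pt) the moment condition of Theorem 2.1
% becomes E|X|^{pt} l(|X|^{t}) < \infty, exactly as assumed in Corollary 2.5.
```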
Proof of Corollary 2.6.
Let Xni=Yi and ani=n-1/t∑j=1nai+j for all n≥1, -∞<i<∞. Since ∑i=-∞∞|ai|<∞, we have supi|ani|=O(n-1/t) and ∑i=-∞∞|ani|=O(n1-1/t), so (2.1) holds with 1/t in place of r and 1-(1/t) in place of α. By applying Corollary 2.4 with β=(r/t)-2, we obtain
(3.20)∑n=1∞n(r/t)-2-(1/t)l(n)E[|∑i=1nXi|-ϵn1/t]+=∑n=1∞nβl(n)E[|∑i=-∞∞aniXni|-ϵ]+<∞,∀ϵ>0.
Therefore, (2.12) and (2.13) hold.
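A parallel consistency check (ours) for the parameters used in this proof; note that the r of Corollary 2.6 is the moment order of Y, while the exponent playing the role of r in Theorem 2.1 is 1/t:

```latex
% With the weight exponents 1/t and \alpha = 1 - 1/t, and \beta = (r/t) - 2:
\alpha + \beta + 1 = \frac{r - 1}{t}, \qquad
s = \max\Bigl(1 + \frac{(r-1)/t}{1/t},\, \delta\Bigr) = \max(r, \delta),
% so for t < \delta \le \min(2, r) the required moment condition is
% E|Y|^{r} l(|Y|^{t}) < \infty, matching (2.12); the hypothesis
% r \ge 1 + t/2 > t guarantees that such a \delta exists.
```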
Acknowledgment
This work was supported by the National Natural Science Foundation of China (nos. 11271020 and 11201004), the Key Project of the Chinese Ministry of Education (no. 211077), the Natural Science Foundation of the Education Department of Anhui Province (no. KJ2012ZD01), and the Anhui Provincial Natural Science Foundation (nos. 10040606Q30 and 1208085MA11).
References
[1] P. L. Hsu and H. Robbins, "Complete convergence and the law of large numbers," vol. 33, pp. 25–31, 1947.
[2] P. Erdős, "On a theorem of Hsu and Robbins," vol. 20, pp. 286–291, 1949.
[3] L. E. Baum and M. Katz, "Convergence rates in the law of large numbers," vol. 120, pp. 108–123, 1965.
[4] Y. S. Chow, "On the rate of moment convergence of sample sums and extremes," vol. 16, no. 3, pp. 177–201, 1988.
[5] A. Gut, "Complete convergence and Cesàro summation for i.i.d. random variables," vol. 97, no. 1-2, pp. 169–178, 1993. doi:10.1007/BF01199318.
[6] R. L. Taylor, R. F. Patterson, and A. Bozorgnia, "A strong law of large numbers for arrays of rowwise negatively dependent random variables," vol. 20, no. 3, pp. 643–656, 2002. doi:10.1081/SAP-120004118.
[7] G. H. Cai and B. Xu, "Complete convergence for weighted sums of ρ-mixing sequences and its application," vol. 26, no. 4, pp. 419–422, 2006.
[8] S. E. Ahmed, R. G. Antonini, and A. Volodin, "On the rate of complete convergence for weighted sums of arrays of Banach space valued random elements with application to moving average processes," vol. 58, no. 2, pp. 185–194, 2002. doi:10.1016/S0167-7152(02)00126-8.
[9] T. S. Kim and M. H. Ko, "Complete moment convergence of moving average processes under dependence assumptions," vol. 78, no. 7, pp. 839–846, 2008. doi:10.1016/j.spl.2007.09.009.
[10] Q. M. Shao, "A moment inequality and its applications," vol. 31, pp. 736–747, 1988.
[11] W. F. Stout, Academic Press, New York, NY, USA, 1974.
[12] P. Y. Chen, T. C. Hu, and A. Volodin, "Limiting behaviour of moving average processes under φ-mixing assumption," vol. 79, no. 1, pp. 105–111, 2009. doi:10.1016/j.spl.2008.07.026.