Complete Convergence for Weighted Sums of ρ*-Mixing Random Variables



Introduction
In many stochastic models, the assumption that random variables are independent is not plausible, so it is of interest to extend the concept of independence to dependent cases. One of these dependence structures is ρ*-mixing.

Let {X_n, n ≥ 1} be a sequence of random variables defined on a probability space (Ω, F, P), and let F_n^m denote the σ-algebra generated by the random variables X_n, X_{n+1}, ..., X_m. For any S ⊂ N, define F_S = σ(X_i, i ∈ S). Given two σ-algebras A, B in F, put

$$\rho(A,B)=\sup\{|\operatorname{corr}(X,Y)| : X\in L_2(A),\ Y\in L_2(B)\},\tag{1.1}$$

where corr(X, Y) = (EXY − EX·EY)/√(Var X · Var Y). Define the ρ*-mixing coefficients by

$$\rho^*(n)=\sup\{\rho(F_S,F_T) : S,T\subset\mathbb{N}\ \text{with}\ \operatorname{dist}(S,T)\ge n\}.\tag{1.2}$$

Obviously, 0 ≤ ρ*(n+1) ≤ ρ*(n) ≤ ρ*(0) = 1. The sequence {X_n, n ≥ 1} is called ρ*-mixing (or ρ̃-mixing) if there exists k ∈ N such that ρ*(k) < 1. Note that if {X_n, n ≥ 1} is a sequence of independent random variables, then ρ*(n) = 0 for all n ≥ 1.

A number of limit results for ρ*-mixing sequences of random variables have been established by many authors. We refer to Bradley [1] for the central limit theorem, Bryc and Smoleński [2], Peligrad and Gut [3], and Utev and Peligrad [4] for moment inequalities, Gan [5], Kuczmaszewska [6], and Wu and Jiang [7] for almost sure convergence, and An and Yuan [8], Cai [9], Gan [5], Kuczmaszewska [10], Peligrad and Gut [3], and Zhu [11] for complete convergence.
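To make the definitions concrete, here is a minimal numerical illustration (ours, not from the paper): for the 1-dependent sequence X_n = Z_n + Z_{n+1} with {Z_n} iid, σ-algebras generated by index sets at distance at least 2 involve disjoint Z's, so ρ*(2) = 0 and the sequence is ρ*-mixing. The sketch checks that sample correlations between functions of well-separated blocks are negligible, while adjacent variables are visibly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-dependent sequence X_n = Z_n + Z_{n+1} built from iid N(0,1) noise.
# Index sets S, T with dist(S, T) >= 2 involve disjoint Z's, so F_S and
# F_T are independent and rho*(2) = 0, i.e., {X_n} is rho*-mixing.
reps = 200_000
Z = rng.standard_normal((reps, 8))
X = Z[:, :-1] + Z[:, 1:]                    # columns are X_1, ..., X_7

f = X[:, 0] * X[:, 1]                       # function of (X_1, X_2)
g = X[:, 4] + X[:, 5] ** 2                  # function of (X_5, X_6); dist = 3
print(np.corrcoef(f, g)[0, 1])              # ~ 0 (independent blocks)
print(np.corrcoef(X[:, 0], X[:, 1])[0, 1])  # ~ 0.5 (shared Z_2)
```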
The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins [12]. A sequence {X_n, n ≥ 1} of random variables converges completely to the constant θ if

$$\sum_{n=1}^{\infty}P(|X_n-\theta|>\varepsilon)<\infty\quad\text{for all}\ \varepsilon>0.\tag{1.3}$$

In view of the Borel–Cantelli lemma, this implies that X_n → θ almost surely. Therefore, complete convergence is a very important tool in establishing the almost sure convergence of sums of random variables as well as weighted sums of random variables. Hsu and Robbins [12] proved that the sequence of arithmetic means of independent and identically distributed random variables converges completely to the expected value if the variance of the summands is finite. Erdős [13] proved the converse. The Hsu–Robbins–Erdős result is a fundamental theorem in probability theory and has been generalized and extended in several directions by many authors. One of the most important generalizations is the Baum and Katz [14] strong law of large numbers.
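As a concrete illustration (ours, not part of the paper), one can estimate the tail probabilities P(|S_n/n| > ε) by Monte Carlo for iid mean-zero summands with finite variance; by the Hsu–Robbins theorem the series Σ_n P(|S_n/n| > ε) converges, so the partial sums of the estimates should flatten.

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate P(|S_n / n| > eps) for iid N(0,1) summands, n = 1..200, and
# accumulate the series from the Hsu-Robbins theorem. Gaussian tails decay
# like exp(-n * eps**2 / 2), so the partial sums stabilize quickly.
eps, reps, N = 0.5, 20_000, 200
X = rng.standard_normal((reps, N))
means = np.cumsum(X, axis=1) / np.arange(1, N + 1)
tail = (np.abs(means) > eps).mean(axis=0)   # estimates of P(|S_n/n| > eps)
print(np.cumsum(tail)[[9, 49, 199]])        # partial sums at n = 10, 50, 200
```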
Theorem 1.1 (Baum and Katz [14]). Let p ≥ 1/α and 1/2 < α ≤ 1. Let {X_n, n ≥ 1} be a sequence of independent and identically distributed random variables with EX_1 = 0. Then the following statements are equivalent:

(i) E|X_1|^p < ∞;
(ii) $\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}X_i\right|>\varepsilon n^{\alpha}\right)<\infty$ for all ε > 0.

Peligrad and Gut [3] extended the result of Baum and Katz [14] to ρ*-mixing random variables.

Theorem 1.2 (Peligrad and Gut [3]). Let p > 1/α and 1/2 < α ≤ 1. Let {X_n, n ≥ 1} be a sequence of identically distributed ρ*-mixing random variables with EX_1 = 0. Then the following statements are equivalent:

(i) E|X_1|^p < ∞;
(ii) $\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}X_i\right|>\varepsilon n^{\alpha}\right)<\infty$ for all ε > 0.

Cai [9] complemented Theorem 1.2 when p = 1/α. Recently, An and Yuan [8] obtained a complete convergence result for weighted sums of identically distributed ρ*-mixing random variables.

Theorem 1.3 (An and Yuan [8]). Let p > 1/α and 1/2 < α ≤ 1. Let {X_n, n ≥ 1} be a sequence of identically distributed ρ*-mixing random variables with EX_1 = 0. Assume that {a_ni, 1 ≤ i ≤ n, n ≥ 1} is an array of real numbers satisfying

$$\sum_{i=1}^{n}|a_{ni}|^{p}=O(n^{\delta})\quad\text{for some}\ 0<\delta<1.\tag{1.4}$$

Then the following statements are equivalent:

(i) E|X_1|^p < ∞;
(ii) $\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>\varepsilon n^{\alpha}\right)<\infty$ for all ε > 0.

Note that the result of An and Yuan [8] is not an extension of the result of Peligrad and Gut [3], since condition (1.4) does not hold for the array with a_ni = 1, 1 ≤ i ≤ n, n ≥ 1. An and Yuan [8] proved the implication (i) ⇒ (ii) under condition (1.4), and proved the converse under condition (1.4) together with an additional condition (1.5) on the array. However, an array satisfying both (1.4) and (1.5) does not exist; the bound required by (1.5) does not hold when k is fixed and n is large enough.
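A quick numeric restatement of the point about a_ni = 1 (ours; the values of p and δ are arbitrary): in that case Σ_{i=1}^n |a_ni|^p = n, and n/n^δ → ∞ for every δ < 1, so (1.4) cannot hold.

```python
import numpy as np

# For a_{ni} = 1, sum_{i=1}^{n} |a_{ni}|**p = n, so (1.4) would force
# n = O(n**delta) with delta < 1 -- impossible, as the ratio blows up.
p, delta = 2.0, 0.9
for n in [10, 10**3, 10**6]:
    a = np.ones(n)
    print(n, (np.abs(a) ** p).sum() / n ** delta)   # grows without bound
```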
In this paper, we obtain a new complete convergence result for weighted sums of identically distributed ρ*-mixing random variables. Our result extends the result of Peligrad and Gut [3], and generalizes and sharpens the result of An and Yuan [8].
Throughout this paper, the symbol C denotes a positive constant which is not necessarily the same in each appearance, [x] denotes the integer part of x, and a ∧ b = min{a, b}.

Main Result
To prove our main result, we need the following lemma, which is a Rosenthal-type inequality for ρ*-mixing random variables.

Lemma 2.1 (Utev and Peligrad [4]). Let {X_n, n ≥ 1} be a sequence of ρ*-mixing random variables with EX_n = 0 and E|X_n|^r < ∞ for some r ≥ 2 and all n ≥ 1. Then there exists a constant D = D(r, k, ρ*(k)) depending only on r, k, and ρ*(k) such that for any n ≥ 1,

$$E\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}X_i\right|^{r}\right)\le D\left(\sum_{i=1}^{n}E|X_i|^{r}+\left(\sum_{i=1}^{n}EX_i^{2}\right)^{r/2}\right).\tag{2.1}$$

We now state our main result.

Theorem 2.2. Let p > 1/α and 1/2 < α ≤ 1. Let {X_n, n ≥ 1} be a sequence of identically distributed ρ*-mixing random variables. Assume that {a_ni, 1 ≤ i ≤ n, n ≥ 1} is an array of real numbers satisfying

$$\sum_{i=1}^{n}|a_{ni}|^{q}=O(n)\quad\text{for some}\ q>p.\tag{2.2}$$

If EX_1 = 0 and E|X_1|^p < ∞, then

$$\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>\varepsilon n^{\alpha}\right)<\infty\quad\text{for all}\ \varepsilon>0.\tag{2.3}$$

Conversely, if (2.3) holds for any array {a_ni} satisfying (2.2), then EX_1 = 0 and E|X_1|^p < ∞.
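Condition (2.2) is straightforward to check numerically for a candidate array. The sketch below is ours: the helper name ratio_2_2 and the example weights a_ni = √(i/n) are illustrative choices, not from the paper. The helper computes n^{−1} Σ_{i=1}^n |a_ni|^q row by row; (2.2) asks that these ratios stay bounded for some q > p.

```python
import numpy as np

def ratio_2_2(rows, q):
    """Return the max over n of (1/n) * sum_i |a_{ni}|**q, where
    rows[n-1] = (a_{n1}, ..., a_{nn}).  Condition (2.2) holds for this
    family if the ratios stay bounded as more rows are added."""
    return max((np.abs(r) ** q).sum() / len(r) for r in rows)

# Example: a_{ni} = sqrt(i/n) with q = 3 > p = 2.  Then
# sum_i (i/n)**1.5 <= n, so the ratios stay bounded by 1.
rows = [np.sqrt(np.arange(1, n + 1) / n) for n in range(1, 401)]
print(ratio_2_2(rows, q=3.0))   # 1.0, attained at n = 1; ~0.4 for large n
```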
To prove Theorem 2.2, we first prove the following lemma, which gives the sufficiency part of Theorem 2.2 when the array is bounded.

Lemma 2.3. Let p > 1/α and 1/2 < α ≤ 1. Let {X_n, n ≥ 1} be a sequence of identically distributed ρ*-mixing random variables with EX_1 = 0 and E|X_1|^p < ∞. Assume that {a_ni, 1 ≤ i ≤ n, n ≥ 1} is an array of real numbers with |a_ni| ≤ 1 for all 1 ≤ i ≤ n and n ≥ 1. Then (2.3) holds.

Proof. Let ε > 0, and for n ≥ 1 and 1 ≤ i ≤ n let Y_ni = a_ni X_i I(|a_ni X_i| ≤ n^α). Since |a_ni| ≤ 1, E|X_1|^p < ∞, and pα > 1,

$$\sum_{n=1}^{\infty}n^{p\alpha-2}\sum_{i=1}^{n}P(|a_{ni}X_i|>n^{\alpha})\le\sum_{n=1}^{\infty}n^{p\alpha-1}P(|X_1|>n^{\alpha})\le CE|X_1|^{p}<\infty,$$

and, in view of EX_i = 0,

$$\max_{1\le j\le n}n^{-\alpha}\left|\sum_{i=1}^{j}EY_{ni}\right|\le n^{-\alpha}\sum_{i=1}^{n}E|a_{ni}X_i|I(|a_{ni}X_i|>n^{\alpha})\le n^{1-p\alpha}E|X_1|^{p}\to 0$$

as n → ∞. Hence for n large enough, we have max_{1≤j≤n} |Σ_{i=1}^{j} EY_ni| ≤ εn^α/2, and it suffices to bound the centered truncated sums. We have by Markov's inequality and Lemma 2.1 that for any r ≥ 2,

$$\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}(Y_{ni}-EY_{ni})\right|>\frac{\varepsilon n^{\alpha}}{2}\right)\le C\sum_{n=1}^{\infty}n^{p\alpha-2-r\alpha}\left(\sum_{i=1}^{n}E|Y_{ni}|^{r}+\left(\sum_{i=1}^{n}EY_{ni}^{2}\right)^{r/2}\right).\tag{2.7}$$

Since |a_ni| ≤ 1, we have the fact that

$$E|Y_{ni}|^{r}\le E|X_1|^{r}I(|X_1|\le n^{\alpha})+n^{r\alpha}P(|X_1|>n^{\alpha}).\tag{2.8}$$

Since r > p, we also get

$$\sum_{n=1}^{\infty}n^{p\alpha-2-r\alpha}\sum_{i=1}^{n}E|Y_{ni}|^{r}\le\sum_{n=1}^{\infty}n^{p\alpha-1-r\alpha}E|X_1|^{r}I(|X_1|\le n^{\alpha})+\sum_{n=1}^{\infty}n^{p\alpha-1}P(|X_1|>n^{\alpha})\le CE|X_1|^{p}<\infty.\tag{2.9}$$

The second term on the right-hand side of (2.7) is bounded similarly for a suitably large choice of r, and collecting these bounds yields (2.3). This completes the proof.
We next prove the sufficiency part of Theorem 2.2 when the array is unbounded.

Lemma 2.4. Let p > 1/α and 1/2 < α ≤ 1. Let {X_n, n ≥ 1} be a sequence of identically distributed ρ*-mixing random variables with EX_1 = 0 and E|X_1|^p < ∞. Assume that {a_ni, 1 ≤ i ≤ n, n ≥ 1} is an array of real numbers satisfying a_ni = 0 or |a_ni| > 1, and

$$\sum_{i=1}^{n}|a_{ni}|^{q}\le n\quad\text{for some}\ q>p.\tag{2.10}$$

Then (2.3) holds.

Proof. If p < 2, then we can take δ > 0 such that p < p + δ < min{2, q}. Since a_ni = 0 or |a_ni| > 1, we have |a_ni|^{p+δ} ≤ |a_ni|^q, and hence Σ_{i=1}^{n} |a_ni|^{p+δ} ≤ Σ_{i=1}^{n} |a_ni|^q ≤ n. Thus we may assume that (2.10) holds for some p < q < 2 when p < 2.
Let S_nj = Σ_{i=1}^{j} a_ni X_i I(|a_ni X_i| ≤ n^α) for 1 ≤ j ≤ n and n ≥ 1. In view of EX_i = 0, we get

$$\max_{1\le j\le n}n^{-\alpha}|ES_{nj}|\le n^{-\alpha}\sum_{i=1}^{n}E|a_{ni}X_i|I(|a_{ni}X_i|>n^{\alpha})\le n^{-p\alpha}E|X_1|^{p}\sum_{i=1}^{n}|a_{ni}|^{p}\le n^{1-p\alpha}E|X_1|^{p}\to 0$$

since pα > 1 and Σ_{i=1}^{n} |a_ni|^p ≤ Σ_{i=1}^{n} |a_ni|^q ≤ n. Hence for n large enough, we have that

$$P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>\varepsilon n^{\alpha}\right)\le\sum_{i=1}^{n}P(|a_{ni}X_i|>n^{\alpha})+P\left(\max_{1\le j\le n}|S_{nj}-ES_{nj}|>\frac{\varepsilon n^{\alpha}}{2}\right).\tag{2.16}$$

We also obtain

$$\sum_{n=1}^{\infty}n^{p\alpha-2}\sum_{i=1}^{n}P(|a_{ni}X_i|>n^{\alpha})<\infty.\tag{2.17}$$

We have by Markov's inequality and Lemma 2.1 that for any r ≥ 2,

$$\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le j\le n}|S_{nj}-ES_{nj}|>\frac{\varepsilon n^{\alpha}}{2}\right)\le C\sum_{n=1}^{\infty}n^{p\alpha-2-r\alpha}\sum_{i=1}^{n}E|a_{ni}X_i|^{r}I(|a_{ni}X_i|\le n^{\alpha})+C\sum_{n=1}^{\infty}n^{p\alpha-2-r\alpha}\left(\sum_{i=1}^{n}E(a_{ni}X_i)^{2}I(|a_{ni}X_i|\le n^{\alpha})\right)^{r/2}=:J_{1}+J_{2}.\tag{2.18}$$

For j ≥ 1, let I_nj = {1 ≤ i ≤ n : (n/(j+1))^{1/q} < |a_ni| ≤ (n/j)^{1/q}}. Since a_ni = 0 or |a_ni| > 1 and Σ_{i=1}^{n} |a_ni|^q ≤ n, every nonzero a_ni lies in exactly one I_nj with 1 ≤ j ≤ n − 1, and Σ_{j=1}^{n−1} #I_nj/(j+1) ≤ n^{−1} Σ_{i=1}^{n} |a_ni|^q ≤ 1. Observe that for r ≥ q and n > m,

$$\sum_{j=m}^{n-1}j^{-r/q}\,\#I_{nj}=\sum_{j=m}^{n-1}j^{-r/q}(j+1)\frac{\#I_{nj}}{j+1}\le 2m^{-(r/q-1)}\sum_{j=m}^{n-1}\frac{\#I_{nj}}{j+1}.\tag{2.19}$$

So Σ_{j=m}^{n−1} j^{−r/q} #I_nj ≤ C m^{−(r/q−1)} for r ≥ q and n > m.
For J_1 and J_2, we proceed with two cases.

(i) If p ≥ 2, then we take r large enough such that r > max{(pα − 1)/(α − 1/2), q}. Then, decomposing the inner sum in J_1 over the sets I_nj and applying (2.19), we obtain that J_1 < ∞; the second inequality in this estimate follows by the fact that a_ni = 0 or |a_ni| > 1. Noting that r(α − 1/2) > pα − 1, we also obtain that J_2 < ∞.

(ii) If p < 2, then we take r = 2. As noted above, we may assume that p < q < 2. Since r > q, arguing as in the case p ≥ 2, we have J_1 < ∞ and J_2 < ∞. Hence (2.3) holds, which completes the proof.

We now prove Theorem 2.2 by using Lemmas 2.3 and 2.4.

Proof of Theorem 2.2.
Sufficiency. Without loss of generality, we may assume that Σ_{i=1}^{n} |a_ni|^q ≤ n for some q > p. For n ≥ 1, let

$$A_n=\{1\le i\le n : |a_{ni}|\le 1\},\qquad B_n=\{1\le i\le n : |a_{ni}|>1\},\tag{2.24}$$

and let a'_ni = a_ni if i ∈ A_n, a'_ni = 0 otherwise, and a''_ni = a_ni if i ∈ B_n, a''_ni = 0 otherwise. Then

$$\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}a_{ni}X_i\right|>2\varepsilon n^{\alpha}\right)\le I+J,\tag{2.25}$$

where I and J denote the corresponding series with a_ni replaced by a'_ni and a''_ni, respectively. The array {a'_ni} satisfies the hypothesis of Lemma 2.3, and the array {a''_ni} satisfies that of Lemma 2.4. By Lemma 2.3, we have I < ∞. By Lemma 2.4, we have J < ∞. Hence (2.3) holds.

Necessity. Choose, for each n ≥ 1, a_n1 = ⋯ = a_nn = 1. Then {a_ni} satisfies (2.2). By (2.3), we obtain that

$$\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}X_i\right|>\varepsilon n^{\alpha}\right)<\infty\quad\text{for all}\ \varepsilon>0.\tag{2.26}$$

Hence we have that for any ε > 0, P(max_{1≤j≤2^{i−1}} |X_j| > ε2^{iα}) → 0 as i → ∞, and so P(max_{1≤j≤n} |X_j| > εn^α) → 0 as n → ∞. The rest of the proof is the same as that of Peligrad and Gut [3] and is omitted.
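A simulation (ours, not from the paper) can illustrate the sufficiency half of Theorem 2.2 in the simplest ρ*-mixing case, an independent sequence, for which ρ*(n) = 0. With p = 2, α = 1, and the weights a_ni = √(i/n), which satisfy (2.2) with q = 3, the factor n^{pα−2} equals 1, so (2.3) requires the probabilities themselves to be summable; the estimates below decay rapidly with n, consistent with that.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo estimates of P(max_j |sum_{i<=j} a_{ni} X_i| > eps * n**alpha)
# for iid N(0,1) X_i (so EX_1 = 0, E|X_1|**p < infty) and a_{ni} = sqrt(i/n).
# Here p = 2 and alpha = 1, so n**(p*alpha - 2) = 1 and (2.3) needs these
# probabilities to be summable in n.
alpha, eps, reps = 1.0, 0.5, 50_000
for n in [4, 8, 16, 32, 64]:
    a = np.sqrt(np.arange(1, n + 1) / n)
    partial = np.cumsum(a * rng.standard_normal((reps, n)), axis=1)
    prob = (np.abs(partial).max(axis=1) > eps * n ** alpha).mean()
    print(n, prob)   # decays rapidly, like a Gaussian tail in sqrt(n)
```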

Remark 2.5. Taking a_ni = 1 for 1 ≤ i ≤ n and n ≥ 1, we can immediately get Theorem 1.2 from Theorem 2.2. If the array {a_ni} satisfies (1.4), then it satisfies (2.2): taking q such that p < q < p/δ, we have

$$\sum_{i=1}^{n}|a_{ni}|^{q}\le\left(\max_{1\le i\le n}|a_{ni}|\right)^{q-p}\sum_{i=1}^{n}|a_{ni}|^{p}\le Cn^{\delta(q-p)/p}\,n^{\delta}\le Cn,\tag{2.30}$$

since max_{1≤i≤n} |a_ni| ≤ (Σ_{i=1}^{n} |a_ni|^p)^{1/p} ≤ Cn^{δ/p} and δq/p < 1. So the implication (i) ⇒ (ii) of Theorem 1.3 follows from Theorem 2.2. As noted after Theorem 1.3, the implication (ii) ⇒ (i) of Theorem 1.3 is not true. Therefore, our result extends the result of Peligrad and Gut [3] to weighted sums, and generalizes and sharpens the result of An and Yuan [8].
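The inequality chain (2.30) can be sanity-checked numerically. In the sketch below (ours; the array a_ni = n^{δ/p}/i and the parameter values are illustrative, not from the paper), the array satisfies (1.4) with p = 2 and δ = 1/2, and for q = 3 ∈ (p, p/δ) the ratios corresponding to (1.4) and (2.2) both stay bounded.

```python
import numpy as np

# a_{ni} = n**(delta/p) / i satisfies sum_i |a_{ni}|**p <= C * n**delta
# (here C = pi**2 / 6), so (1.4) holds.  Pick q with p < q < p/delta and
# verify that sum_i |a_{ni}|**q / n stays bounded, i.e., condition (2.2).
p, delta = 2.0, 0.5
q = 3.0                                      # p < q < p/delta = 4
for n in [10, 10**3, 10**5]:
    a = n ** (delta / p) / np.arange(1, n + 1)
    print(n,
          (np.abs(a) ** p).sum() / n ** delta,   # bounded: (1.4)
          (np.abs(a) ** q).sum() / n)            # bounded: (2.2)
```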
