A Note on Strong Convergence of Sums of Dependent Random Variables

For a sequence of dependent square-integrable random variables {X_n, n ≥ 1} and a sequence of positive constants {b_n, n ≥ 1}, conditions are provided under which the partial sums Σ_{i=1}^n (X_i − EX_i)/b_i converge almost surely as n → ∞. These conditions are weaker than those provided by Hu et al. (2008).


Introduction and Results
Let {X_n, n ≥ 1} be a sequence of square-integrable random variables defined on a probability space (Ω, F, P) and let {b_n, n ≥ 1} be a sequence of positive constants. The random variables {X_n, n ≥ 1} are not assumed to be independent. Past research has focused on conditions that ensure the strong convergence of two distinct but related series: the series Σ_{n=1}^∞ (X_n − EX_n)/b_n and the sequence (1/b_n) Σ_{i=1}^n (X_i − EX_i). If the second sequence converges to 0 almost surely, then {X_n, n ≥ 1} is said to obey the strong law of large numbers (SLLN).
Assume that there exists a sequence of constants {ρ_k, k ≥ 1} such that (1.2) holds. To motivate the general nature of our result, consider the following example. Let {X_n} be a sequence of zero-mean random variables satisfying

Σ_{k=1}^∞ ρ_k k^q < ∞ for some q ∈ (0, 1), (1.8)

which, in this example, is a constraint on the γ_k. In Chapter 2 of Stout [4] the condition on the variances is shown to be close to optimal for sequences of orthogonal random variables. Lyons [6] provides an SLLN for random variables with bounded variances under the condition Σ_{k=1}^∞ ρ_k/k < ∞. One might conjecture that condition (1.8) could be relaxed to Σ_{k=1}^∞ ρ_k/k < ∞. The above theorem, whilst allowing for far more general models than (1.7), moves us closer to this constraint on the ρ_k values.
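The gap between these two summability constraints can be seen numerically. A minimal sketch, using the hypothetical decay sequence ρ_k = 1/(k (log k)^2) (chosen purely for illustration; it does not come from the paper): this sequence satisfies the conjectured condition Σ ρ_k/k < ∞ yet fails Σ ρ_k k^q < ∞ for every q ∈ (0, 1).

```python
import math

# Hypothetical decay sequence rho_k = 1/(k (log k)^2), chosen only to
# illustrate the gap between the two conditions; not from the paper.
def partial_sum(weight, N):
    """Partial sum of sum_{k=2}^{N} rho_k * weight(k)."""
    return sum(weight(k) / (k * math.log(k) ** 2) for k in range(2, N + 1))

# Conjectured condition: sum_k rho_k / k -- the partial sums stay bounded.
conjectured = partial_sum(lambda k: 1.0 / k, 100_000)

# Condition (1.8) with q = 0.5: sum_k rho_k * k^q -- diverges (slowly).
condition_18 = partial_sum(lambda k: math.sqrt(k), 100_000)
```

Even at N = 100,000 the first partial sum remains below 1, while the second has already passed 3 and keeps growing.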
For long-range dependent stationary processes we have ρ_k = O(k^{−d} L(k)), where 0 < d < 1 and L(·) is a slowly varying function. Theorem 1.1 enables the strong convergence result to be extended to processes where the correlation decays at a slower rate than O(k^{−d}) for d > 0.
Applying Kronecker's lemma, the strong law of large numbers is an immediate consequence of the above theorem.
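Kronecker's lemma states that if b_n increases to infinity and Σ_n x_n/b_n converges, then b_n^{−1} Σ_{i=1}^n x_i → 0; this is the bridge from the series convergence to the SLLN. A minimal numerical sketch, with the illustrative choice x_n = (−1)^n and b_n = n (these values are for demonstration only, not from the paper):

```python
# Illustration of Kronecker's lemma with x_n = (-1)^n and b_n = n.
# The series sum_n x_n / b_n is the alternating series -1 + 1/2 - 1/3 + ...
# (convergent, with sum -log 2), so the lemma forces the normalized
# partial sums (1/n) * sum_{i<=n} x_i to 0.
N = 100_000
series = sum((-1) ** n / n for n in range(1, N + 1))
normalized = sum((-1) ** i for i in range(1, N + 1)) / N
```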

Corollary 1.2. Under the conditions of Theorem 1.1, if {b_n} is monotone increasing, the strong law of large numbers holds, that is, b_n^{−1} Σ_{i=1}^n (X_i − EX_i) → 0 almost surely.
There are strong law results under weaker conditions than (1.5) but with stronger conditions on the variance; see, e.g., Lyons [6] and Chobanyan et al. [3]. Both papers show that if the summands have bounded variance, then (1.5) can be weakened to Σ_{k=1}^∞ ρ_k/k < ∞. Our approach focuses on the convergence of the series in (1.6) and relies on Kronecker's lemma to obtain the strong law. If the aim is purely to obtain the SLLN, then alternative conditions might be possible, as it is possible to construct sequences {x_n} and {b_n} for which the strong law holds but the series in (1.6) diverges. For example, take b_n = n and x_n = (log n)^{−1}.
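This counterexample is easy to check numerically: with b_n = n and x_n = (log n)^{−1}, the averages n^{−1} Σ_{i=2}^n x_i tend to 0 while the partial sums of Σ_n 1/(n log n) grow without bound, at the glacial rate log log n. A small sketch (the cut-offs 10^3 and 10^6 are arbitrary):

```python
import math

# b_n = n and x_n = 1/log(n): the strong law holds (the averages vanish),
# but the series sum_n x_n / b_n = sum_n 1/(n log n) diverges like log log n.
def series_partial(N):
    return sum(1.0 / (n * math.log(n)) for n in range(2, N + 1))

def average(N):
    return sum(1.0 / math.log(n) for n in range(2, N + 1)) / N

s_small, s_big = series_partial(10**3), series_partial(10**6)
a_small, a_big = average(10**3), average(10**6)
```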

Proofs
Throughout this paper, the symbol C denotes a generic constant (0 < C < ∞) which is not necessarily the same at each appearance. We first prove a number of lemmas that enable us to obtain tighter bounds for key expressions in the proof of Theorem 1 of Hu et al. [5].

Lemma 2.1. Let {X_n, n ≥ 1} be a sequence of square-integrable random variables and suppose that there exists a sequence of constants {ρ_k, k ≥ 1} such that (1.2) holds and a sequence {b_n} satisfying (1.3). Then for all n ≥ 0, m ≥ n + 2,

Journal of Probability and Statistics
Proof. For all n ≥ 0, m ≥ n + 2,

(2.3)
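The displays of Lemma 2.1 and its proof are missing from this copy. If condition (1.2) is, as its later use suggests, a covariance bound of the form |Cov(X_i, X_j)| ≤ ρ_{|i−j|} (Var X_i Var X_j)^{1/2}, then the natural starting point is the second-moment expansion of the weighted tail sum (a reconstruction of the idea, not the paper's display):

```latex
\mathbb{E}\Biggl(\sum_{i=n+1}^{m}\frac{X_i-\mathbb{E}X_i}{b_i}\Biggr)^{2}
  = \sum_{i=n+1}^{m}\frac{\operatorname{Var}X_i}{b_i^{2}}
    + 2\sum_{n+1\le i<j\le m}\frac{\operatorname{Cov}(X_i,X_j)}{b_i b_j}
  \le \sum_{i=n+1}^{m}\frac{\operatorname{Var}X_i}{b_i^{2}}
    + 2\sum_{n+1\le i<j\le m}\rho_{j-i}\,
      \frac{\sqrt{\operatorname{Var}X_i \operatorname{Var}X_j}}{b_i b_j}.
```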
Proof. Note that x/(log x)^2 is an increasing function for x ≥ e^2. Thus, for x ≥ k > e^2,

(2.5)
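The monotonicity claim at the start of the proof follows from a one-line derivative calculation:

```latex
\frac{d}{dx}\,\frac{x}{(\log x)^{2}}
  = \frac{1}{(\log x)^{2}} - \frac{2}{(\log x)^{3}}
  = \frac{\log x - 2}{(\log x)^{3}} \ge 0
  \quad\text{for } x \ge e^{2}.
```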
Lemma 2.3. For a > 0, define

Proof. The result for S_0(a) is the sum of a standard geometric progression. The general result follows by noting

(2.9)
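The definition of S_r(a) is elided here. A standard setup consistent with the proof sketch (a guess at the form, not necessarily the paper's definition) takes S_r(a) = Σ_{j≥1} j^r e^{−aj}; then S_0(a) is a geometric series, and each higher-order sum follows by differentiating the previous one with respect to a:

```latex
S_0(a) = \sum_{j\ge 1} e^{-aj} = \frac{e^{-a}}{1-e^{-a}},
\qquad
S_r(a) = \sum_{j\ge 1} j^{r} e^{-aj} = -\frac{d}{da} S_{r-1}(a), \quad r \ge 1.
```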
Proof of Theorem 1.1. We will follow the method of proof of Theorem 1 in Hu et al. [5]. To prove (1.6) we first show that {Σ_{i=1}^n (X_i − EX_i)/b_i, n ≥ 1} is a Cauchy sequence for convergence in L^2, which will imply convergence in probability. Using Lemmas 2.1 and 2.2,

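The step from L^2-Cauchy to Cauchy in probability is Chebyshev's inequality applied to the tail sums:

```latex
\mathbb{P}\Biggl(\biggl|\sum_{i=n+1}^{m}\frac{X_i-\mathbb{E}X_i}{b_i}\biggr| > \varepsilon\Biggr)
  \le \frac{1}{\varepsilon^{2}}\,
      \mathbb{E}\Biggl(\sum_{i=n+1}^{m}\frac{X_i-\mathbb{E}X_i}{b_i}\Biggr)^{2}
  \longrightarrow 0 \quad\text{as } m > n \to \infty.
```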
Therefore there exists a random variable S ∈ L^2 such that

(2.11)
Next we will show that S_{2^n} → S a.s. Let ε > 0 be arbitrary. Note that

where the last line follows by using (1.4) and (1.5). Thus, by the Borel–Cantelli lemma, S_{2^n} → S almost surely. To finish the proof we utilize the generalization of the Rademacher–Menchoff maximal inequality given by Serfling [7] and argue as in Hu et al. [5]. It is sufficient to show that, for any ε > 0,

(2.13)
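The Borel–Cantelli step above rests on a Chebyshev bound along the subsequence 2^n; the elided display presumably takes the form

```latex
\sum_{n\ge 1}\mathbb{P}\bigl(|S_{2^{n}} - S| > \varepsilon\bigr)
  \le \frac{1}{\varepsilon^{2}} \sum_{n\ge 1} \mathbb{E}\,|S_{2^{n}} - S|^{2} < \infty,
```

so that, for each ε > 0, only finitely many of the events {|S_{2^n} − S| > ε} occur, almost surely.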
Combining Serfling's inequality with (3.8) from Hu et al. [5] shows that the relevant sum is finite (2.14), which completes the proof.

Theorem 1.1. Let {X_n, n ≥ 1} be a sequence of square-integrable random variables and suppose that there exists a sequence of constants {ρ_k, k ≥ 1} such that (1.2) holds. Let {b_n, n ≥ 1} be a sequence of positive constants. Assume that there exists a constant K such that, for all n ≥ 1,

(In the motivating example of the introduction, {ξ_n} is a stationary time series with autocovariance function {γ_k} and {ν_n} is a sequence of independent, zero-mean random variables distributed independently of {ξ_n}.)