MODERATE DEVIATIONS FOR BOUNDED SUBSEQUENCES

Let (X_n)_{n≥1} be a sequence of random variables on a probability space (Ω, Ᏺ, P) and p ≥ 1 a fixed real number. We say that (X_n)_{n≥1} is L^p-bounded if it has uniformly bounded pth moments, that is, ‖X_n‖_p ≤ C for some C > 0 and all n ≥ 1. Let ε > 0; finding the rate of convergence of the moderate deviation probabilities P[|∑_{k=1}^n X_k| > εa_n] with a_n = (n log n)^{1/2} or (n log log n)^{1/2} is known in the literature as Davis' problems. More precisely, let δ = δ(p) ≥ 0 be a function of p ≥ 1 and consider the series

∑_{n=2}^∞ ((log n)^δ / n) P[|∑_{k=1}^n X_k| > ε(n log n)^{1/2}]  and  ∑_{n=3}^∞ (1 / (n (log n)^δ)) P[|∑_{k=1}^n X_k| > ε(n log log n)^{1/2}].


Introduction and main results
Let (X_n)_{n≥1} be a sequence of random variables on a probability space (Ω, Ᏺ, P) and p ≥ 1 a fixed real number. We say that (X_n)_{n≥1} is L^p-bounded if it has uniformly bounded pth moments, that is, ‖X_n‖_p ≤ C for some C > 0 and all n ≥ 1. Let ε > 0; finding the rate of convergence of the moderate deviation probabilities P[|∑_{k=1}^n X_k| > εa_n] with a_n = (n log n)^{1/2} or (n log log n)^{1/2} is known in the literature as Davis' problems. More precisely, let δ = δ(p) ≥ 0 be a function of p ≥ 1 and consider the series

∑_{n=2}^∞ ((log n)^δ / n) P[|∑_{k=1}^n X_k| > ε(n log n)^{1/2}],  ∑_{n=3}^∞ (1 / (n (log n)^δ)) P[|∑_{k=1}^n X_k| > ε(n log log n)^{1/2}];   (1.1)

the convergence of the series in (1.1) has been studied by Davis (see [7, 8]) and Gut (see [10]) when (X_n)_{n≥1} are L^p-bounded i.i.d. sequences, and by Stoica (see [14, 15]) when (X_n)_{n≥1} are L^p-bounded martingale difference sequences.
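As a quick numerical illustration of the probabilities entering (1.1) (a sketch under an assumed model, not part of the argument): for i.i.d. Rademacher signs, which are L^p-bounded for every p, one can estimate P[|∑_{k=1}^n X_k| > ε(n log n)^{1/2}] by Monte Carlo. The function name below is ours.

```python
import math
import random

def moderate_dev_prob(n, eps, trials=1000, rng=random):
    """Monte Carlo estimate of P[|S_n| > eps*(n log n)^{1/2}], where
    S_n is a sum of n i.i.d. Rademacher (+/-1) variables."""
    threshold = eps * math.sqrt(n * math.log(n))
    hits = sum(
        1
        for _ in range(trials)
        if abs(sum(rng.choice((-1, 1)) for _ in range(n))) > threshold
    )
    return hits / trials

random.seed(1)
# Estimated tail probabilities for a few sample sizes.
estimates = {n: moderate_dev_prob(n, eps=1.0) for n in (100, 400, 1600)}
print(estimates)
```

For this model the central limit theorem suggests the tail probability behaves like a Gaussian tail at level (log n)^{1/2}, so the estimates shrink slowly as n grows, which is why the weighted series in (1.1) probe the precise decay rate.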
In the sequel, we are interested in Davis' theorems under the sole assumption that (X_n)_{n≥1} is L^p-bounded. Our results rely on the "subsequence principle": given any sequence of L^p-bounded random variables, one can find a subsequence that satisfies, together with all its further subsequences, the same type of limit laws as do i.i.d. variables (or martingale difference sequences) with similar moment bounds. This principle was introduced by Chatterji (see [4, 5, 6]) and unifies results by Banach and Saks, Komlós, Révész, and Steinhaus in the context of the law of large numbers, the law of the iterated logarithm, and the central limit theorem; extensions to exchangeable sequences were given by Aldous [1] and Berkes and Péter [3]. Also note that Gut [11] and Asmussen and Kurtz [2] gave necessary and sufficient conditions for subsequences to satisfy the famous Hsu-Robbins-Erdős complete convergence result related to the law of large numbers. Our results go a step further: we replace the i.i.d. assumption by L^p-boundedness, and consider the Davis weights (log n)^δ/n and 1/(n(log n)^δ) from (1.1) instead of complete convergence. We thus have the following.

Theorem 1.1. Let p > 2 and 0 ≤ δ < p/2 − 1. Then any L^p-bounded sequence (X_n)_{n≥1} contains a subsequence (Y_k)_{k≥1} such that, for any ε > 0,

∑_{n=2}^∞ ((log n)^δ / n) P[|∑_{k=1}^n Y_k| > ε(n log n)^{1/2}] < ∞.

Theorem 1.2. Let δ > 1, or δ = 1 and p > 2. Then any L^p-bounded sequence (X_n)_{n≥1} contains a subsequence (Y_k)_{k≥1} such that, for any ε > 0,

∑_{n=3}^∞ (1 / (n (log n)^δ)) P[|∑_{k=1}^n Y_k| > ε(n log log n)^{1/2}] < ∞.
In the case of martingale difference sequences, Theorem 1.1 fails if δ ≥ p/2 − 1 and Theorem 1.2 fails if 0 ≤ δ < 1 (see [14]); therefore, Theorems 1.1 and 1.2 are the best results one can expect in the L^p-bounded case.

Proofs
Proof of Theorem 1.1. In the sequel we will make use of the so-called c_r-inequality (see [12, page 57]), which says that E|X + Y|^p ≤ 2^{p−1}(E|X|^p + E|Y|^p) for any random variables X, Y and p > 1. Throughout the paper, C denotes a constant that depends on p and ε (but not on k, n, N), and may change from line to line, and even within the same line.
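A minimal numerical sanity check of the c_r-inequality (an illustration only; the sampling model and function name are our own): by convexity, |x + y|^p ≤ 2^{p−1}(|x|^p + |y|^p) pointwise, so empirical moments satisfy the same bound for every sample.

```python
import random

def empirical_cr_bound(xs, ys, p):
    """Return (E|X+Y|^p, 2^{p-1}(E|X|^p + E|Y|^p)) as empirical averages."""
    n = len(xs)
    lhs = sum(abs(x + y) ** p for x, y in zip(xs, ys)) / n
    rhs = 2 ** (p - 1) * (
        sum(abs(x) ** p for x in xs) / n + sum(abs(y) ** p for y in ys) / n
    )
    return lhs, rhs

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(10_000)]
ys = [random.uniform(-2, 2) for _ in range(10_000)]
checks = {p: empirical_cr_bound(xs, ys, p) for p in (1.5, 2.0, 3.0)}
for p, (lhs, rhs) in checks.items():
    assert lhs <= rhs  # c_r-inequality holds sample by sample, hence on average
```

Since the inequality holds outcome by outcome, it holds for any pair of random variables, regardless of dependence, which is exactly how it is used in the estimates below.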
As (X_n)_{n≥1} is bounded in L^p, according to [9, Corollary IV.8.4] it is weakly sequentially compact. Denote by (Y_n)_{n≥1} a subsequence of (X_n)_{n≥1} that converges weakly in L^p to some Y ∈ L^p. Subtracting Y from each element of (Y_n)_{n≥1}, we reduce the problem to a sequence (Y_n)_{n≥1} that converges weakly in L^p to 0. Further, for any n ≥ 1, we choose a simple random variable Z_n (i.e., Z_n takes only a finite number of distinct values) such that (2.1) holds. Using Markov's inequality and (2.1), one obtains the estimate (2.3); according to (2.1) and assumption (2.2) (we used that the subsequence (n_k)_{k≥1} is strictly increasing, so n_k ≥ k), the last series in (2.3) converges. To prove Theorem 1.1, it suffices to exhibit a subsequence (n_k)_{k≥1} such that (2.7) holds. One can see that (Z_n)_{n≥1} also converges weakly in L^p to 0. Indeed, for any Q ∈ L^q, where 1/p + 1/q = 1, we have E(Z_n Q) = E((Z_n − Y_n)Q) + E(Y_n Q); the first term on the right-hand side tends to 0 by Hölder's inequality and (2.1), while the second tends to 0 because Y_n converges weakly in L^p to 0. By induction, one may choose a subsequence of natural numbers 1 ≤ n_1 < n_2 < ··· such that

|E[Z_{n_k} | Z_i, i ∈ I]| ≤ 1/2^k for each I ⊂ {n_1, ..., n_{k−1}},   (2.8)

where E[Z_{n_k} | Z_i, i ∈ I] denotes the conditional expectation of Z_{n_k} given the σ-algebra σ(Z_i, i ∈ I) generated by (Z_i)_{i∈I}. This can be done because σ(Z_i, i ∈ I) is generated by a finite partition of Ω and, as Z_n → 0 weakly in L^p, we have ∫_A Z_n dP → 0 for any A ∈ σ(Z_i, i ∈ I).
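To see why the inductive choice in (2.8) is possible, recall that for simple random variables the σ-algebra σ(Z_i, i ∈ I) is generated by a finite partition of Ω, and conditioning on it just replaces a variable by its probability-weighted average over each atom. A toy computation on a four-point space (all names and values here are our own illustration, not the paper's construction):

```python
def conditional_expectation(Z, partition, prob):
    """E[Z | G] for G generated by a finite partition: on each atom,
    replace Z by its probability-weighted average over that atom."""
    out = {}
    for atom in partition:
        mass = sum(prob[w] for w in atom)
        avg = sum(Z[w] * prob[w] for w in atom) / mass
        for w in atom:
            out[w] = avg
    return out

# Toy space Omega = {0, 1, 2, 3} with the uniform measure.
prob = {w: 0.25 for w in range(4)}
Z = {0: 1.0, 1: 3.0, 2: -2.0, 3: 0.0}
partition = [[0, 1], [2, 3]]  # atoms of some sigma(Z_i, i in I)
cz = conditional_expectation(Z, partition, prob)
print(cz)  # {0: 2.0, 1: 2.0, 2: -1.0, 3: -1.0}
```

Since each such conditional expectation is determined by finitely many averages ∫_A Z_{n_k} dP / P(A) over atoms A, and these integrals tend to 0 along a weakly null sequence, the bound 1/2^k in (2.8) can indeed be arranged by induction.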
We now prove that (n_k)_{k≥1} is the required subsequence in (2.7). Indeed, one can write the decomposition (2.14). Using (2.9)-(2.12), the series in (2.14) is dominated by the series in (2.15). The latter is convergent if and only if either δ > 1, or δ = 1 and p > 2; thus (2.14) holds and Theorem 1.2 is proved.
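The threshold δ > 1 is the standard Bertrand-series criterion. As a reminder (stated here for the model series ∑_{n≥3} 1/(n(log n)^δ), which carries the same δ-dependence), the integral test gives:

```latex
% Integral test for \sum_{n \ge 3} \frac{1}{n(\log n)^{\delta}}:
% substitute u = \log x, so that du = dx/x.
\int_{3}^{\infty} \frac{dx}{x(\log x)^{\delta}}
   = \int_{\log 3}^{\infty} \frac{du}{u^{\delta}}
   = \begin{cases}
       \dfrac{(\log 3)^{1-\delta}}{\delta - 1} < \infty, & \delta > 1, \\[4pt]
       \infty, & 0 \le \delta \le 1,
     \end{cases}
```

so a series of this type converges exactly when δ > 1; in the boundary case δ = 1, the convergence asserted for the series in (2.15) relies on the additional assumption p > 2.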