A SURVEY OF LIMIT LAWS FOR BOOTSTRAPPED SUMS

Concentrating mainly on independent and identically distributed (i.i.d.) real-valued parent sequences, we give an overview of first-order limit theorems available for bootstrapped sample sums for Efron’s bootstrap. As a light unifying theme, we expose by elementary means the relationship between corresponding conditional and unconditional bootstrap limit laws. Some open problems are also posed.


Introduction.
Bootstrap samples were introduced and first investigated by Efron [41]. As applied to a sequence X = (X_1, X_2, ...) of arbitrary random variables defined on a probability space (Ω, Ᏺ, P), and with a bootstrap sample size not necessarily equal to the original sample size, his notion of a bootstrap sample is as follows. Let {m(1), m(2), ...} be a sequence of positive integers and for each n ∈ N, let the random variables {X*_{n,j}, 1 ≤ j ≤ m(n)} result from sampling m(n) times with replacement from the n observations X_1, ..., X_n such that for each of the m(n) selections, each X_k has probability 1/n of being chosen. Alternatively, for each n ∈ N we have X*_{n,j} = X_{Z(n,j)}, 1 ≤ j ≤ m(n), where {Z(n,j), 1 ≤ j ≤ m(n)} are independent random variables uniformly distributed over {1, ..., n} and independent of X; we may and do assume without loss of generality that the underlying space (Ω, Ᏺ, P) is rich enough to accommodate all these random variables with the joint distributions as stated. Then X*_{n,1}, ..., X*_{n,m(n)} are conditionally independent and identically distributed (i.i.d.) given X_n = (X_1, ..., X_n), with P{X*_{n,1} = X_k | X_n} = n^{-1} almost surely, 1 ≤ k ≤ n, n ∈ N. For any sample size n ∈ N, the sequence {X*_{n,1}, ..., X*_{n,m(n)}} is referred to as Efron's nonparametric bootstrap sample from X_1, ..., X_n with bootstrap sample size m(n).
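The resampling scheme just described is easy to sketch in code. The following minimal Python illustration (function and variable names are ours, not from the survey) draws X*_{n,j} = X_{Z(n,j)} with Z(n,j) uniform on {1, ..., n}:

```python
import random

def efron_bootstrap(sample, m, rng):
    """Draw an Efron bootstrap sample of size m: each pick is X_{Z(n,j)}
    with Z(n,j) uniform on {1,...,n}, independent of everything else."""
    n = len(sample)
    return [sample[rng.randrange(n)] for _ in range(m)]

rng = random.Random(0)
x = [rng.gauss(0.0, 1.0) for _ in range(50)]   # observations X_1, ..., X_n
star = efron_bootstrap(x, m=200, rng=rng)      # bootstrap sample size m(n) = 200
# every bootstrap value is one of the original observations
all_from_x = all(v in x for v in star)
```

Conditionally on the observed sample, the bootstrap values are i.i.d. draws from the empirical distribution, exactly as in the definition above.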
Being one of the most important ideas of the last half century in the practice of statistics, the bootstrap also introduced a wealth of innovative probability problems, which in turn formed the basis for the creation of new mathematical theories. Most of these theories have been worked out for the case, dominant also in statistical practice, when the underlying sequence X consists of i.i.d. random variables. Thus most of the classical main types of limit theorems for the partial sums Σ_{k=1}^{n} X_k of the original sequence have counterparts for the row sums Σ_{j=1}^{m(n)} X*_{n,j} in the triangular array of all bootstrapped samples pertaining to the sequence X. There are seven such types or classes that can be delineated at this writing: central limit theorems (CLTs) and related results on asymptotic distributions, weak laws of large numbers (WLLNs), strong laws of large numbers (SLLNs), laws of the (noniterated) logarithm, complete convergence theorems, moderate and large deviation results, and Erdős-Rényi laws. In each of the bootstrap versions of the seven classes there are potentially two kinds of asymptotic results: one is conditional, either on the whole infinite sequence X or on its initial sample segment X_n, and the other is unconditional; the latter kind is less frequently spelled out in the existing literature. Paraphrasing somewhat part of the introductory discussion by Hall [49] in our extended context, not necessarily intended by him in this form, conditional laws are of interest to the statistician who likes to think in probabilistic terms for his particular sample, while their unconditional counterparts allow for classical frequentist interpretations.
Celebrating the 25th anniversary of the publication of [41], the primary aim of this expository note is to survey the main results for bootstrapped sums in the seven categories listed above, in seven corresponding sections, connecting, as a light unifying theme, the conditional and unconditional statements by means of the following two elementary lemmas, where throughout "a.s." is an abbreviation for "almost surely" or "almost sure." Some open problems are posed in these sections, and an extra section (Section 9) is devoted to exposing a new problem area for an eighth type of limit theorem which is missing from the above list.

Lemma 1.1. Let A ∈ Ᏺ be any event and let Ᏻ ⊂ Ᏺ be any σ-algebra. Then

P{A} = 1 if and only if P{A | Ᏻ} = 1 a.s. (1.1)

Proof. It is a well-known property of the integral that if U is a random variable such that U ≥ 0 a.s. and E(U) = 0, then U = 0 a.s. Taking U = 1 − V, it follows that if V is a random variable such that V ≤ 1 a.s. and E(V) = 1, then V = 1 a.s. Noting that P{A} = E(I_A) = E(E(I_A | Ᏻ)) = E(P{A | Ᏻ}), where I_A = I(A) is the indicator of A, and taking V = P{A | Ᏻ}, the necessity half of the lemma follows, while the sufficiency half is immediate.
The second lemma is an easy special case of the moment convergence theorem (see, e.g., [24, Corollary 8.1.7, page 277]), where →_Ᏸ and →_P denote convergence in distribution and convergence in probability, respectively. If not specified otherwise, all convergence relations are meant as n → ∞.

Lemma 1.2. If V_n and V are real- or complex-valued random variables, uniformly bounded in absolute value, such that V_n →_Ᏸ V, then E(V_n) → E(V). Consequently, if {A_n}_{n=1}^∞ is a sequence of events and {Ᏻ_n}_{n=1}^∞ is a sequence of σ-algebras in Ᏺ such that P{A_n | Ᏻ_n} →_P p for some constant p, then P{A_n} → p.
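The mechanism behind both lemmas is the identity P{A} = E(P{A | Ᏻ}) together with passage to the limit. As a numerical illustration, one can take A = {X + Y > 0} for independent standard normals X and Y, with Ᏻ = σ(X), so that P{A | Ᏻ} = Φ(X) a.s.; the following hedged Python sketch (all parameters are our choices) compares the two sides by Monte Carlo:

```python
import random
from statistics import NormalDist

# A = {X + Y > 0} with X, Y independent standard normal and Ᏻ = σ(X).
# Then P{A | Ᏻ} = Φ(X) a.s., and P{A} = E(P{A | Ᏻ}), the identity used in
# the proof of Lemma 1.1; both sides equal 1/2 here by symmetry.
rng = random.Random(1)
phi = NormalDist().cdf
trials = 100_000
xs = [rng.gauss(0.0, 1.0) for _ in range(trials)]
ys = [rng.gauss(0.0, 1.0) for _ in range(trials)]
p_direct = sum(1 for a, b in zip(xs, ys) if a + b > 0) / trials  # estimates P{A}
p_tower = sum(phi(a) for a in xs) / trials                       # estimates E(P{A|Ᏻ})
```

Both estimates agree with each other and with 1/2 up to Monte Carlo error, which is the bounded-convergence mechanism that Lemma 1.2 exploits.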
The notation introduced so far will be used throughout. We mainly concentrate on the basic situation when X_1, X_2, ... are i.i.d. real random variables, in which case X = X_1 will denote a generic variable, always assumed to be nondegenerate, F(x) = P{X ≤ x}, x ∈ R, will stand for the common distribution function, and Q(s) = inf{x ∈ R: F(x) ≥ s}, 0 < s < 1, for the pertaining quantile function, where R is the set of all real numbers. When a sequence {a_n}_{n=1}^∞ is nondecreasing and a_n → ∞, we write a_n ↑ ∞. It will always be assumed that m(n) → ∞, but, most of the time, not necessarily monotonically. We will write m(n) ≈ n if 0 < lim inf_{n→∞} m(n)/n ≤ lim sup_{n→∞} m(n)/n < ∞. With a single deviation, we deal exclusively with sums Σ_{j=1}^{m(n)} X*_{n,j} resulting from Efron's nonparametric bootstrap, and we focus only on probability limit theorems for these sums without entering into the related theory of bootstrapped empirical processes or into any discussion of the basic underlying statistical issues. In general, one may start exploring the enormous literature from the monographs by Efron [42], Beran and Ducharme [17], Hall [53], Mammen [67], Efron and Tibshirani [43], Janas [58], Barbe and Bertail [15], Shao and Tu [73], and Politis et al. [71], listed in chronological order, or the fine collections of papers edited by Jöckel et al. [59] and LePage and Billard [61]. For review articles focusing on either theoretical aspects or practical issues of the bootstrap methodology or both, see Beran [16], Swanepoel [76], Wellner [77], Young [78], Babu [14], and Giné [45]. Our single deviation from the bootstrap sums Σ_{j=1}^{m(n)} X*_{n,j} is to bootstrapped moderately trimmed means in Section 2.4, which contains an apparently new result.

Asymptotic distributions.
In the whole section we assume that the parent sequence X_1, X_2, ... consists of i.i.d. random variables. In the first three subsections we survey the results on the asymptotic distribution of the corresponding bootstrap sums Σ_{j=1}^{m(n)} X*_{n,j}, while in the fourth one we consider bootstrapping moderately trimmed means based on {X_n}.

Central limit theorems. The a.s. conditional bootstrap CLT asserts that

lim_{n→∞} sup_{x∈R} | P{ Σ_{j=1}^{m(n)} X*_{n,j} − m(n)X̄_n ≤ a_n(m(n)) x | X_n } − Φ(x) | = 0 a.s. (2.1)
for some normalizing sequence {a_n(m(n))}_{n=1}^∞ of positive constants, where X̄_n = n^{-1} Σ_{k=1}^{n} X_k is the sample mean and Φ(x) = P{N(0,1) ≤ x}, x ∈ R, is the standard normal distribution function. Assuming that σ² = Var(X) < ∞, this was proved by Singh [75] for m(n) ≡ n and by Bickel and Freedman [19] for arbitrary m(n) → ∞; a simple proof of the general result appears in both Arcones and Giné [3] and Giné [45], and in this case one can of course always take a_n(m(n)) ≡ σ√(m(n)). Allowing any random centering sequence, different from {m(n)X̄_n}, it was shown by Giné and Zinn [46] for m(n) ≡ n (the proof is also in [45]) and then by Arcones and Giné [3] for all m(n) ↑ ∞ satisfying inf_{n≥1} m(n)/n > 0 that the a.s. conditional bootstrap CLT in (2.1) does not hold for any norming sequence {a_n(m(n))} when E(X²) = ∞, even if the distribution of X is in the domain of attraction of the normal law, written here as F ∈ D(Φ) and characterized by the famous normal convergence criterion obtained independently by Feller, Khinchin, and Lévy in 1935: lim_{x→∞} x² P{|X| > x}/E(X² I(|X| ≤ x)) = 0. Arcones and Giné [3] show that the condition inf_{n≥1} m(n)/n > 0 may even be weakened to inf_{n≥4} [m(n) log log n]/n > 0 and the a.s. conditional bootstrap CLT in (2.1) still fails for any norming sequence {a_n(m(n))}. However, Arcones and Giné also prove in [3, 4] positive results for smaller bootstrap sample sizes, with a norming sequence built from a suitable positive function ℓ(•) slowly varying at zero, the square of which can be taken from [31, Corollary 1]. From the statistical point of view, those versions of (2.1) in which a_n(m(n)) is estimated from the sample X_1, ..., X_n are of course more desirable. Setting σ_n² = n^{-1} Σ_{k=1}^{n} (X_k − X̄_n)² for the square of the sample standard deviation σ_n, when E(X²) < ∞ the natural counterpart of the a.s. conditional bootstrap CLT in (2.1) states that

lim_{n→∞} sup_{x∈R} | P{ Σ_{j=1}^{m(n)} X*_{n,j} − m(n)X̄_n ≤ σ_n √(m(n)) x | X_n } − Φ(x) | = 0 a.s., (2.3)

and this remains true whenever m(n) → ∞, as expected. Accompanying the Giné and Zinn [46] necessary condition mentioned above, Csörgő and Mason [33] and Hall [51] independently proved that if E(X²) = ∞, then (2.3) also fails for m(n) ≡ n, even when F ∈ D(Φ) at the same time; again, the proof is streamlined by Giné [45].
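The content of the finite-variance case can be watched numerically: conditionally on a fixed realization X_n, the bootstrap replicates of {Σ_{j=1}^{m(n)} X*_{n,j} − m(n)X̄_n}/(σ_n√(m(n))) should be approximately standard normal. A rough Python sketch (the exponential parent and all sample sizes are our choices, not from the survey):

```python
import math, random, statistics

# Conditionally on the observed sample X_n, the bootstrap replicates of
# {sum_j X*_{n,j} - m(n)*Xbar_n} / (sigma_n * sqrt(m(n))) should look
# standard normal when E(X^2) < infinity (here an exponential parent).
rng = random.Random(2)
n, m, B = 500, 500, 2000
x = [rng.expovariate(1.0) for _ in range(n)]   # the fixed realization X_n
xbar = statistics.fmean(x)
sigma_n = statistics.pstdev(x)                 # sample standard deviation

z = []
for _ in range(B):                             # B independent bootstrap sums
    s = sum(x[rng.randrange(n)] for _ in range(m))
    z.append((s - m * xbar) / (sigma_n * math.sqrt(m)))
z_mean = statistics.fmean(z)                   # should be near 0
z_sd = statistics.pstdev(z)                    # should be near 1
```

By construction the conditional mean of each replicate is 0 and its conditional variance is exactly 1, so the empirical mean and standard deviation of z stabilize near 0 and 1.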
For the same m(n) ≡ n, Hall [51] in fact proved the following general a.s. necessity statement: if there exist measurable functions C_n and A_n > 0 of X_n such that, conditionally on X_n, A_n^{-1}{Σ_{j=1}^{n} X*_{n,j} − C_n} converges in distribution a.s. to a nondegenerate limit, then either E(X²) < ∞ and the limit law is normal, or one of the tails of F is slowly varying and dominates the other, and the limit law is the Poisson-type law of [51] presented in (2.6) and (2.7).
Assume now that E(X²) = ∞ but F ∈ D(Φ). In this case the condition F ∈ D(Φ) ensures that E(|X|) < ∞, and hence σ_n → ∞ a.s. by E(X²) = ∞ and the SLLN. Then, even though the a.s. statement in (2.1) fails if inf_{n≥4} [m(n) log log n]/n > 0 and the a.s. statement in (2.3) fails if m(n) ≡ n, Athreya [10] and Hall [51] for m(n) ≡ n, Csörgő and Mason [33] for m(n) ≈ n, and finally Arcones and Giné [3, 4] for any m(n) → ∞ proved that (2.4) holds for either choice of the random norming sequence a*_n(m(n)) in (2.5), in the lower branch of which the first sum extends over all the (n choose m(n)) combinations (j_1, ..., j_{m(n)}) such that 1 ≤ j_1 < ··· < j_{m(n)} ≤ n. Strictly speaking, (2.4) holds for the choice a*_n(m(n)) ≡ σ_n √(m(n)) whenever m(n) ≈ n, and the generally satisfactory modification of the random norming sequence in (2.5), given in [4], is needed for "small" sequences {m(n)}. Conversely, it was proved in [33] that if (2.4) holds for m(n) ≈ n and a*_n(m(n)) ≡ σ_n √(m(n)), then necessarily F ∈ D(Φ). Both the proof of (2.4) and that of its converse are briefly sketched by Giné [45], both for deterministic and for random norming sequences, in the simplest case m(n) ≡ n. Further necessary conditions associated with the conditional bootstrap CLT in probability, in (2.4), are in [3, 4]. More generally, Arcones and Giné [3, 4] proved for any m(n) ↑ ∞ that, with any random centering and nondecreasing deterministic norming going to infinity, the conditional limit distribution of Σ_{j=1}^{m(n)} X*_{n,j} in probability, if it exists, must be a deterministic infinitely divisible law with a suitable change of the random centering. One of the important special cases of this general necessity condition will be spelled out in Section 2.2. They could treat the converse sufficiency direction, when F is in the domain of partial attraction of an infinitely divisible law along a subsequence m(n) ↑ ∞ of proposed bootstrap sample sizes, only in special cases.
At this point it would be difficult to resist relating the beautiful result by Hall [51] for m(n) ≡ n. He proves that there exist measurable functions C_n and A_n > 0 of X_n such that A_n^{-1}{Σ_{j=1}^{n} X*_{n,j} − C_n} has a nondegenerate a.s. conditional limiting distribution if and only if either E(X²) < ∞, in which case the limit is normal, or one of the tails of F is slowly varying and dominates the other, that is, either 1 − F is slowly varying at ∞ and P{X < −x}/P{|X| > x} → 0 as x → ∞, as in (2.6), or F is slowly varying at −∞ and P{X > x}/P{|X| > x} → 0 as x → ∞; in the latter cases the limit law, presented in (2.7), is determined by a random variable Y which has the Poisson distribution with mean 1. Needless to say, the primary sums Σ_{k=1}^{n} X_k do not have an asymptotic distribution when the i.i.d. terms are from a distribution with one of the tails slowly varying and dominating the other one. Hall's illuminating discussion [51] of many related issues is also noteworthy.
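A quick way to see where a Poisson law with mean 1 enters results such as (2.6) and (2.7): for m(n) ≡ n, the number of times a fixed observation X_k is selected into the bootstrap sample is Binomial(n, 1/n), which converges to Poisson(1). A small simulation (parameters are ours):

```python
import math, random

# For m(n) = n, the multiplicity of a fixed observation X_k in the bootstrap
# sample is Binomial(n, 1/n), approximately Poisson with mean 1 -- the law of
# the variable Y appearing in the limit theorems above.
rng = random.Random(3)
n, B = 200, 5000
counts = []
for _ in range(B):
    counts.append(sum(1 for _ in range(n) if rng.randrange(n) == 0))
p0_hat = counts.count(0) / B        # should be close to e^{-1} ~ 0.368
mean_hat = sum(counts) / B          # should be close to 1
```

The fraction of bootstrap samples that miss the fixed observation altogether approaches e^{−1}, and the mean multiplicity approaches 1, as the Poisson approximation predicts.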
On setting X̄*_{n,m(n)} ≡ m(n)^{-1} Σ_{j=1}^{m(n)} X*_{n,j} for the bootstrap mean and σ*_{n,m(n)} for the bootstrap sample standard deviation, respectively, related recent deep results of Mason and Shao [68] are available for the bootstrapped Student t-statistic T*_{n,m(n)} defined at (2.8). By the lemmas of Section 1, the conditional statements above yield the unconditional bootstrap CLT, with the choice a_n(m(n)) ≡ σ√(m(n)) when Var(X) < ∞, and with the possible choices of a_n(m(n)) ≡ a*_n(m(n)) when (2.4) holds. Under the respective conditions, the unconditional statements for the asymptotic distribution of T*_{n,m(n)} also follow, along with the unconditional versions of the Poisson convergence theorems in (2.6) and (2.7).
While the framework of this paper does not allow us to go into methodological detail, we note the following. Since, by the very definition of the bootstrap, max_{1≤j≤m(n)} P{|X*_{n,j}| > ε a_n | X} → 0 a.s. for all ε > 0 by the SLLN for any numerical sequence a_n → ∞, the conditional probability, given X, that the row-wise independent array {a_n^{-1} X*_{n,j}, 1 ≤ j ≤ m(n)} is infinitesimal (uniformly asymptotically negligible) is 1. Hence, in both directions, most of the results in the present section, including those in Sections 2.2 and 2.3, have been or might have been obtained by checking the conditions of the classical criteria for convergence in distribution of row sums to given infinitely divisible laws, as described by Gnedenko and Kolmogorov [47]. For the a.s. versions, this goes by modifications of the techniques that have been worked out for the proof of the law of the iterated logarithm for the parent sequences X, while the direct halves of the versions in probability and in distribution (above and in Sections 2.2 and 2.3) are sometimes proved by showing that any subsequence contains an a.s. convergent further subsequence with the same limit. Obtaining the necessary and sufficient conditions in the characterizations and the use of random norming sequences depend on certain nice extra criteria, such as those stating that R_n ≡ max_{1≤j≤n} X_j² / Σ_{k=1}^{n} X_k² → 0 a.s. if and only if E(X²) < ∞, while R_n →_P 0 if and only if F ∈ D(Φ). In comparison, Hall's [51] necessary and sufficient tail conditions for Poisson convergence imply that extreme terms entirely dominate the whole sums; see [33, 51] for references to the original sources of these results. We also refer to the introduction of [33] for a general discussion of a.s. and in-probability conditional bootstrap asymptotic distributions, their interrelationship, and their role in deriving statistically applicable statements.
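The criterion R_n ≡ max_{1≤j≤n} X_j²/Σ_{k=1}^{n} X_k² → 0 a.s. iff E(X²) < ∞ mentioned above is easy to probe by simulation: for light tails R_n collapses, while for tails so heavy that E(X²) = ∞ the largest term keeps a nonvanishing share of the sum. A sketch (the Pareto-type choice X = U^{−2} and all parameters are ours):

```python
import random

def r_n(xs):
    """R_n = max X_j^2 / sum X_k^2 for one sample."""
    sq = [v * v for v in xs]
    return max(sq) / sum(sq)

rng = random.Random(4)
n, reps = 2000, 50
# Light tails (E(X^2) < infinity): R_n collapses toward 0.
mean_light = sum(r_n([rng.gauss(0.0, 1.0) for _ in range(n)])
                 for _ in range(reps)) / reps
# Very heavy tails (X = U^{-2}, so E(X^2) = infinity): the largest term
# keeps a nonvanishing share of the sum.
mean_heavy = sum(r_n([rng.random() ** -2.0 for _ in range(n)])
                 for _ in range(reps)) / reps
```

Averaging R_n over several replications makes the contrast stable: the light-tailed average is a fraction of a percent, while the heavy-tailed average stays bounded away from zero.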
Arenal-Gutiérrez and Matrán [5] showed that if E(X²) < ∞, then the a.s. conditional CLT can also be derived from the unconditional CLT. They first show that if a sequence of bootstrap statistics converges in distribution a.s., conditionally on X_n, then the limiting distribution is deterministic, that is, the same for almost all ω ∈ Ω. From this result, handling a.s. conditional tightness separately and identifying the limit, they derive that if √(m(n)) {X̄*_{n,m(n)} − X̄_n} converges in distribution (unconditionally), then it does so conditionally on X_n, a.s., as well, with the same limit. Interestingly, they obtain the unconditional CLT, which of course is equivalent to (2.9) with a_n(m(n)) ≡ σ√(m(n)), from another conditional CLT by an application of Lemma 1.2.

The proof of the latter recalls a suitable algebraic identity for the bootstrap sum and then checks that the Lindeberg condition holds conditionally a.s. along subsequences. In fact, all this is done in [5] for a more general weighted bootstrap, of which Efron's bootstrap is a special case.
For F ∈ D(α) with α ∈ (0, 2), Athreya [10] proved a conditional bootstrap convergence result in probability to the corresponding stable law, stated as (2.11), with a random centering sequence X^{(τ)}_{n,m(n)} defined for any real τ > 0; the truncation level τ may be taken as 0, so that X^{(τ)}_{n,m(n)} = 0 if α ∈ (0, 1), and may be taken as ∞ if α ∈ (1, 2). In the form (2.11) this is a version of Athreya's theorem from Arcones and Giné [3, Corollary 2.6] as far as the choice of the centering sequence goes, while for a regular m(n) ↑ ∞ for which [m(n) log log n]/n → 0, [3, Theorem 3.4] ensures a.s. convergence in (2.11). Furthermore, for α ∈ (1, 2) they prove in [4] the self-normalized version (2.12) under the same condition; for different derivations and properties of G_α(•) see Logan et al. [65] and Csörgő [28]. The ratio statistic in (2.12) is of course closely related to the bootstrapped Student t-statistic T*_{n,m(n)} considered above at (2.8), and Mason and Shao [68] point out that under the conditions for (2.12) the corresponding statement indeed holds for T*_{n,m(n)} as well. Along with a converse and the a.s. variant due to Arcones and Giné [3], Athreya's theorem above may be particularly nicely stated for the case F ∈ D(α) with α ∈ (1, 2), when it really matters from the statistical point of view; this is the important special case of the general necessity condition of Arcones and Giné [3] mentioned after (2.5) in Section 2.1. In this case the sample mean may be used for centering, but if the growth condition [m(n) log log n]/n → 0 is violated, then the last a.s. convergence does not hold. Not specifying the norming sequence a_{α,n} ↑ ∞, these results are stated, with sketches of parts of the proofs also provided, as [45, Theorems 1.4 and 1.5] by Giné. In general, the random centering sequence in (2.11) has the unpleasant feature of depending on the deterministic norming sequence, while the random norming sequence in the result in (2.12), which is limited to α ∈ (1, 2), changes the asymptotic stable distribution. Using the quantile-transform approach from [30, 31], Deheuvels et al.
[38] gave a common version of these results, which is valid for all α ∈ (0, 2) and not only deals with both these aspects, but also reveals the role of extremes when bootstrapping the mean with heavy underlying tails. Let F_n(x) = n^{-1} Σ_{k=1}^{n} I(X_k ≤ x), x ∈ R, be the sample distribution function with the pertaining sample quantile function Q_n(•), where X_{1,n} ≤ ··· ≤ X_{n,n} are the order statistics of the sample X_1, ..., X_n. For a given bootstrap size m(n), consider the Winsorized quantile function in (2.13), where, with ⌊•⌋ denoting the usual integer part, k_n = ⌊n/m(n)⌋, and the Winsorized sample variance s_n²(k_n) in (2.14). Then Deheuvels et al. [38] prove the nice result that if F ∈ D(α) for some α ∈ (0, 2), then the conditional convergence in probability (2.15) holds whenever m(n) → ∞ such that m(n)/n → 0, and this convergence takes place a.s. whenever m(n) → ∞ such that [m(n) log log n]/n → 0, without any regularity requirement on {m(n)}. Note that k_n → ∞ and k_n/n → 0, and the moderately trimmed mean n^{-1} Σ_{l=k_n+1}^{n−k_n} X_{l,n}, with the smallest and the largest k_n observations deleted, is always a good centering sequence. Bootstrapping this trimmed mean, in turn, is considered in Section 2.4.
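For concreteness, here is a hedged computational sketch of the quantities entering this construction: k_n = ⌊n/m(n)⌋, the moderately trimmed mean n^{−1} Σ_{l=k_n+1}^{n−k_n} X_{l,n}, and one common form of a Winsorized sample variance. The precise s_n²(k_n) of (2.14) is in [38]; the variance below is only an illustrative stand-in:

```python
import statistics

def trimmed_and_winsorized(x, m):
    """k_n = n // m(n); the moderately trimmed mean (divided by n, as in the
    survey's display) and one common form of a Winsorized sample variance.
    The exact s_n^2(k_n) of (2.14) is in Deheuvels et al. [38]; this is
    only an illustrative stand-in."""
    n = len(x)
    k = n // m                           # k_n = integer part of n/m(n)
    xs = sorted(x)                       # order statistics X_{1,n} <= ... <= X_{n,n}
    trimmed_mean = sum(xs[k:n - k]) / n  # smallest and largest k_n deleted
    wins = [xs[k]] * k + xs[k:n - k] + [xs[n - k - 1]] * k  # Winsorized sample
    return trimmed_mean, statistics.pvariance(wins)

tm, wv = trimmed_and_winsorized(list(range(1, 21)), m=5)  # n = 20, k_n = 4
```

On the toy sample 1, ..., 20 with m(n) = 5 one has k_n = 4, so the four smallest and four largest observations are deleted from the trimmed mean and replaced by the adjacent order statistics in the Winsorized sample.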
For the unconditional variant, the mode of convergence in (2.15) is irrelevant: the unconditional statement (2.16) follows by Lemma 1.2 again, along with the unconditional versions of (2.11), (2.12), and of the statement for T*_{n,m(n)}, for all m(n) → ∞ for which m(n)/n → 0. For a general discussion of the statistical impact of such small bootstrap sample sizes, we refer to Bickel et al. [20] and, concretely for the bootstrap mean with rather negative conclusions coming from very different angles, to Hall and Jing [54] and del Barrio et al. [39]. Recalling the notation T*_{n,m(n)} for the Student statistic defined at (2.8) and assuming that m(n) → ∞ such that m(n)/n → 0, we finally note here that an interesting result of Hall and LePage [55] directly ensures that the conditional distribution function of T*_{n,m(n)}, given X_n, and the distribution function of the Student t-statistic of the original sample approach each other, as stated in (2.17), under an unusual set of conditions that make E(|X|^{1+δ}) < ∞ for some δ > 0 but allow F to be outside of every domain of attraction and, hence, neither of the two distribution functions in (2.17) may converge weakly; the conditions may even be satisfied for F in the domain of partial attraction of every stable law with exponent α ∈ (1, 2], where α = 2 refers to the normal type of distributions. Moreover, if m(n)/n → 0 is strengthened to m(n)[log n]/n → 0, they show that a.s. convergence prevails in (2.17).

Random asymptotic distributions.
The reason for restricting attention to "small" bootstrap sample sizes in the preceding subsection is that Bretagnolle [23], Athreya [9, 11], and subsequently Knight [60] and Arcones and Giné [3] showed that if m(n)/n ↛ 0, then the bootstrap may not work; we cited the result from [3], through [45], above, and this also follows as a special case of a more general later theorem of Mammen [66, Theorem 1]. What happens, then, if m(n)/n ↛ 0? As in the preceding subsection, we suppose throughout this subsection that F ∈ D(α) for some α ∈ (0, 2) and consider first the choice m(n) ≡ n. Then, as a special case of a corresponding multivariate statement, Athreya [9] proves that the conditional characteristic functions of the normalized bootstrap sums converge, as in (2.18), where i is the imaginary unit and φ(t) = ∫_{−∞}^{∞} e^{itx} dG(x) is a random infinitely divisible characteristic function without a normal component, given by Athreya in terms of random Lévy measures depending on the three basic underlying situations α < 1, α = 1, and α > 1 and on a further underlying parameter measuring skewness. Thus G(•) = G(•, ω) is a random infinitely divisible distribution function on the real line for almost all ω ∈ Ω. From this, he derives the conditional convergence in distribution stated in (2.19). Knight [60] and Hall [51] independently gave new direct derivations of (2.19), very similar to each other, with a rather explicit description of G(•); in fact, both Athreya [11] and Hall [51] go as far as proving a distributional convergence of the random distribution functions G_n(•) themselves, which in particular implies that P{G_n(x) ≤ y} → P{G(x) ≤ y} for each pair (x, y) in the plane R². Furthermore, with a different random infinitely divisible limiting distribution function H(•), Athreya [11] and subsequently Knight [60] and Hall [51] obtained also a version of (2.18) and (2.19), and Athreya [11] and Hall [51] even obtained a version of the weak convergence result in Skorohod spaces on compacta, when the norming sequence n^{1/α} L(1/n) is replaced by a "modulus order statistic" from X_1, ..., X_n, the largest in absolute value. We also refer to Athreya [12] for his
pioneering work in this area. It follows from the general necessity condition of Arcones and Giné [3, 4], mentioned after (2.5) in Section 2.1, that the convergence in (2.19) cannot be improved to convergence in probability. That the same is true for the version with random norming, and with H(•) replacing G(•), was already pointed out by Giné and Zinn [46]. Both versions follow at once from Hall's theorem [51], stated in Section 2.1.
The corresponding unconditional asymptotic distribution may be approached in two different ways. Directly, by (2.19) and Lemma 1.2, we obtain the unconditional limit (2.20). On the other hand, by (2.18) and Lemma 1.2, the unconditional characteristic functions converge as in (2.21). For more general bootstrap sampling rates m(n), satisfying lim_{n→∞} m(n)/n = c for some c ∈ (0, ∞), a special case of Cuesta-Albertos and Matrán [37, Theorem 11] gives, for any real τ > 0, the extension (2.22) of (2.18), where r_n ≡ m(n)/n and X^{(•)}_{n,m(n)} is defined as for (2.11), and the limiting random infinitely divisible characteristic function φ(•) now depends also on c and τ besides the parameters mentioned at (2.18). What is more interesting is that, setting τ = 1, (2.22) continues to hold even in the case when m(n)/n → ∞, so that r_n → ∞, in which case φ(t) = exp{−V t²/2} for all t ∈ R, where V is a positive, completely asymmetric stable random variable with exponent α/2. This statement is derived from Cuesta-Albertos and Matrán [37, Theorem 6]. In the corresponding counterparts of (2.19) and (2.20), therefore, G(•) is a random normal distribution function with mean 0 and variance V.
Even if one starts out from a single sequence X, as we did so far, the bootstrap yields a triangular array to deal with, as was noted in Section 1. Cuesta-Albertos and Matrán [37] and del Barrio et al. [39, 40] begin with what they call an "impartial" triangular array of row-wise i.i.d. random variables, bootstrap the rows, and thoroughly investigate the conditional asymptotic distribution of the row sums of the resulting bootstrap triangular array. The flavor of their fine "in law in law" results with random infinitely divisible limiting distributions is that of the results above in this subsection, and, using the second approach, commencing from (2.18), similar unconditional asymptotic distributions may be derived from those results.
The finiteness of the second moment is the strongest moment condition under which results on the asymptotic distribution of bootstrapped sums are entertained above. We do not go into discussions of rates of convergence in these limit theorems, for which fascinating results were proved by Hall [50] when E(|X|^α) < ∞ for α ∈ [2, 3], particularly the various asymptotic expansions under higher-order moment conditions, which are of extreme importance for the statistical analysis of the performance of bootstrap methods. The first step in this direction was made by Singh [75], and for later developments we refer to Hall [53]; see also Section 7.

Bootstrapping intermediate trimmed means.
Since a normal distribution is stable with exponent 2, for the sake of unifying notation we put D(2) = D(Φ) as usual. As the first asymptotic normality result for intermediate trimmed sums, Csörgő et al. [32] proved that if F ∈ D(α) for some α ∈ (0, 2], then the suitably centered and normalized trimmed sums Σ_{l=k_n+1}^{n−k_n} X_{l,n} converge in distribution to N(0, 1) for every sequence {k_n} of positive integers such that k_n → ∞ and k_n/n → 0, where the slowly varying function in the norming sequence is as in Section 2.1. Deheuvels et al. [38] point out that the theoretical norming factor, evaluated at k_n/n, may here be replaced by the Winsorized empirical standard deviation s_n(k_n) pertaining to the given k_n, given in (2.14), in view of which the asymptotic stability statement in (2.16), which holds for m(n) ≡ ⌊n/k_n⌋ if {k_n} is given first, is rather "curious" for α ∈ (0, 2). Subsequent to [32], picking another sequence {r_n} of positive integers such that r_n → ∞ and r_n/n → 0, Csörgő et al. [30] determined all possible subsequential limiting distributions of the intermediate trimmed sums Σ_{l=k_n+1}^{n−r_n} X_{l,n}, suitably centered and normalized, and discovered the necessary and sufficient conditions for asymptotic normality along the whole sequence of natural numbers. So, these conditions are satisfied for r_n ≡ k_n whenever F ∈ D(α) for some α ∈ (0, 2].
For the bootstrap sampling rate m(n) ≡ n, let X**_{1,n} ≤ ··· ≤ X**_{n,n} be the order statistics belonging to the bootstrap sample X*_{n,1}, ..., X*_{n,n}. As a special case for r_n ≡ k_n, Deheuvels et al. [38] prove that the necessary and sufficient conditions for asymptotic normality of Σ_{l=k_n+1}^{n−k_n} X_{l,n}, obtained in [30], are also sufficient for the conditional asymptotic normality of the bootstrapped trimmed sums Σ_{l=k_n+1}^{n−k_n} X**_{l,n}, Winsorized-normalized as at (2.14). Thus, as a special case of the case r_n ≡ k_n of the general theorem [38, Theorem 3.2], also pointed out in [38], we obtain the following result: if F ∈ D(α) for some α ∈ (0, 2], then the conditional asymptotic normality (2.24) holds, and hence, by Lemma 1.2, the unconditional statement (2.25) also holds, for every sequence {k_n} of positive integers such that k_n → ∞ and k_n/n → 0, where the norming may be taken either as s*_n(k_n) or as s_n(k_n), the Winsorized standard deviation of either the bootstrap or the original sample.
Recently, Csörgő and Megyesi [34] proved that the trimmed-sum normal convergence criterion in question is satisfied more generally: whenever F is in the domain of geometric partial attraction of any semistable law of index α ∈ (0, 2] whose Lévy functions do not have flat stretches, in the sense that their generalized inverses are continuous; see [34, 69] for the discussion of such domains. It follows that (2.24) and (2.25) hold for any such F for every sequence {k_n} of positive integers such that k_n → ∞ and k_n/n → 0, while if the continuity condition is violated, then there still exists a sequence k_n → ∞, k_n/n → 0, such that (2.24) and (2.25) prevail. In fact, even asymmetric trimming is possible in this generality, that is, the results continue to hold for the asymmetrically trimmed sums, where the corresponding Winsorized standard deviation s*_n(k_n, r_n) is defined in [38] and the precise conditions are those of [34, Theorems 1 and 2], provided that [38, conditions (3.9) and (3.10)] are also satisfied; the latter conditions always hold for r_n ≡ k_n. These results, apparently new, are of some theoretical interest precisely because if F is in the domain of geometric partial attraction of a semistable law of index α ∈ (0, 2], then the original partial sums Σ_{j=1}^{n} X_j do not in general have an asymptotic distribution along the whole sequence of natural numbers; limiting semistable distributions exist, with appropriate centering and norming, only along subsequences, one of which does not grow faster than some geometric sequence.

Weak laws of large numbers. Bickel and Freedman [19] and Athreya [8] proved the a.s. conditional WLLN (3.1) for the bootstrap means from a sequence {X_n} of i.i.d. random variables with E(|X|) < ∞.
By Lemma 1.2 again, the corresponding unconditional statement is immediate. This unconditional result was also obtained directly, without using the conditional result (3.1), by Athreya et al. [13], by Csörgő [29], and by Arenal-Gutiérrez et al. [7]. While the proof in [29] is probably the simplest possible, the weak laws in [7] apply to very general parent sequences {X_n} of neither necessarily independent nor necessarily identically distributed variables.
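The content of the unconditional bootstrap WLLN is easy to check numerically: the bootstrap sample mean tracks E(X) when only a first moment is assumed. A minimal Python sketch (the exponential parent with E(X) = 2 and the sample sizes are our choices):

```python
import random, statistics

# Unconditional bootstrap WLLN: the bootstrap sample mean tracks E(X)
# under a first-moment assumption (exponential parent with E(X) = 2 here).
rng = random.Random(6)
n, m = 5000, 5000
x = [rng.expovariate(0.5) for _ in range(n)]          # E(X) = 2
boot_mean = statistics.fmean(x[rng.randrange(n)] for _ in range(m))
```

The bootstrap mean differs from E(X) by two small fluctuations, that of the sample mean around E(X) and that of the resampling around the sample mean, both of which vanish as n and m(n) grow.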
Strong laws of large numbers.

The conditional bootstrap SLLN (4.1) and its unconditional counterpart (4.2) hold under various sufficient moment and growth conditions, and necessary conditions are also available, in particular in [36]. Although these necessary conditions were good enough to effectively rule out some erroneous statements in the literature already in [36], as discussed in these papers, the gaps between the available sufficient conditions and the necessary conditions above are still quite large. So, there are several open problems. The one formulated the easiest is this: for the naive bootstrap, when m(n) ≡ n, what is the weakest moment condition under which (4.1) and (4.2) hold? Is not the finiteness of E(|X|) sufficient at least in one bootstrap model? Alternatively, assuming only that lim inf_{n→∞} m(n)/n^{1/α} > 0 and E(|X|^α) < ∞ for some α ≥ 1, do we have (4.1) and (4.2) in some bootstrap model?
We return to i.i.d. parent sequences {X_n}. Replacing E(X) by X̄_n and assuming that E(|X|^α) < ∞ for some α ∈ (0, 2] together with a suitable growth condition on {m(n)}, Mikosch [70] obtained a Marcinkiewicz-Zygmund-type conditional bootstrap SLLN, which again is universal and by Lemma 1.1 is equivalent to its unconditional counterpart. For α = 1, this does not give the strong law described above, but if E(|X|^α) < ∞ and lim inf_{n→∞} m(n)/[n^{1/α} log n] > 0 for some α ∈ (1, 2) and simultaneously lim sup_{n→∞} m(n)/[n log n] < ∞, such as for m(n) ≡ n^β or for m(n) ≡ n^β log n with β ∈ (1/α, 1), then, by the primary Marcinkiewicz-Zygmund law for the parent sequence, one obtains a rate of convergence in (4.2) for special sequences {m(n)} with a restricted growth rate, which in turn is also equivalent to its conditional counterpart. For the special case m(n) ≡ n, Mikosch [70] also states the complete analogue of the Marcinkiewicz-Zygmund theorem. Results related to those in this section were proved by Bozorgnia et al. [22], Hu and Taylor [57], and more recently by Ahmed et al. [1], as applications of strong laws for row sums of row-wise independent triangular arrays, in [1, 22] for bootstrap means in Banach spaces. The real-valued special cases of these results are dominated by the corresponding ones above.
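The Marcinkiewicz-Zygmund-type normalization can likewise be probed by simulation: with E(|X|^α) < ∞ for some α ∈ (1, 2) and m(n) ≡ n, the normalized centered bootstrap sum n^{−1/α} Σ_{j=1}^{n} (X*_{n,j} − X̄_n) is already small for moderate n. A rough sketch (parameter choices are ours; this illustrates the normalization only, not the a.s. statement itself):

```python
import random

# Marcinkiewicz-Zygmund-type normalization for the centered bootstrap sum:
# with E(|X|^alpha) < infinity for alpha in (1,2) (exponential parent) and
# m(n) = n, the quantity n^{-1/alpha} * sum_j (X*_{n,j} - Xbar_n) is small.
rng = random.Random(7)
alpha = 1.5
n = 100_000
x = [rng.expovariate(1.0) for _ in range(n)]
xbar = sum(x) / n
boot_sum = sum(x[rng.randrange(n)] for _ in range(n))
t = (boot_sum - n * xbar) / n ** (1.0 / alpha)   # normalized centered sum
```

Conditionally on the sample, t has mean 0 and standard deviation of order n^{1/2 − 1/α}, which tends to 0 since α < 2.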

5. Laws of the logarithm.

Mikosch [70] discovered that the law of the iterated logarithm for i.i.d. parent sequences {X_n} reduces to a law of the (noniterated) logarithm for the row sums of the resulting bootstrap triangular array. Specifically, for a sequence {X_n} of i.i.d. random variables, he established the conditional bootstrap bounded law of the logarithm in the following form: if the three conditions collected in (5.1) hold, then the bounded law (5.2) holds, where σ_n is the sample standard deviation as in Section 2.1. One of the conditions in (5.1) always holds, regardless of the tail behavior of X beyond a finite mean. If E(X²) < ∞, then R_n → 0 a.s., as noted in Section 2.1, so n log n = ᏻ(m(n)) is always sufficient for the third condition in (5.1). But if E(X²) = ∞, then by the SLLN this third condition holds if and only if (log n)(max_{1≤j≤n} X_j²)/m(n) → 0 a.s., which in turn holds if and only if Σ_{n=2}^{∞} P{X² > ε m(n)/log n} < ∞ for every ε > 0, as Mikosch notes. This leads to sufficient conditions for (5.2), tying together moment behavior and allowable growth rates of {m(n)}. Assuming m(n)/log n ↑ ∞, examples of such conditions are the following: (i) X is bounded; (ii) E(e^{tX}) < ∞ for all t ∈ (−t*, t*) for some t* > 0, together with an additional growth condition on {m(n)}; (iii) E(|X|^{2α}) < ∞ and n^{1/α} log n = ᏻ(m(n)) for some α ≥ 1. Again, all these bounded laws of the logarithm are universal, and while m(n)/log n ↑ ∞ is a wholly reasonable assumption in view of the degeneracy result cited from [36] in the previous section, the strict optimality of the conditions is unclear.
However, there is no question about the optimality of the form of (5.2) in general: for universal laws, perhaps surprisingly at first sight, the correct factor in (5.2) is indeed log n and not log log n or log log m(n). But, just as for necessary conditions for SLLNs, the proof of any sharp version of (5.2) requires assumptions on the joint distribution of the bootstrap samples {X*_{n,1}, ..., X*_{n,m(n)}}, n ∈ N. Assuming, in addition to the conditions in (5.1), that these samples are conditionally independent given X (one of the two assumptions in [36] and the assumption in [35]), Mikosch in fact proves the sharper statement (5.3), and he even states that under the same conditions (5.4) holds. These last statements also say that under (5.1) the correct norming sequence is random, the factor σ_n in which a.s. diverges to ∞ when E(X²) = ∞. However, when E(X²) < ∞, then σ_n → σ = √(Var(X)) a.s., and (5.2), (5.3), and (5.4) all hold with σ_n replaced by σ.
It is at this last point where Ahmed et al. [2] pick up the line. Assuming that the bootstrap samples {X*_{n,1}, ..., X*_{n,m(n)}}, n ∈ N, are conditionally independent given X, they obtain general conditions on a sequence {X_n} of not necessarily independent or identically distributed random variables, for which lim_{n→∞} σ_n(ω) = σ(ω) > 0 for almost every ω ∈ Ω, that ensure the law (5.5). Their result is general enough to include as special cases new results for sequences of pairwise i.i.d. random variables and for stationary ergodic sequences, and also (5.3) for an i.i.d. sequence {X_n}, at least under the conditions of the third example above, that is, when E(|X|^{2α}) < ∞, m(n)/log n ↑ ∞, and n^{1/α} log n = O(m(n)) for some α ≥ 1. An example in which the limit σ(ω), ω ∈ Ω, is not constant is provided in [2].
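A simulation sketch in the spirit of (but not reproducing) the example in [2] shows how a non-constant limit σ(ω) can arise: take X_k = V·Y_k with {Y_k} i.i.d. standard normal and V > 0 a random scale drawn once, so that the sequence is identically distributed but not independent and σ_n → V a.s. The construction and all names below are ours.

```python
import random

def sigma_n(v, n, seed):
    """Sample standard deviation of X_1, ..., X_n with X_k = v * Y_k,
    {Y_k} i.i.d. standard normal; as n grows this converges a.s. to v."""
    rng = random.Random(seed)
    xs = [v * rng.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(xs) / n
    return (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5

# Two realizations of the random scale V give two different a.s. limits.
s1 = sigma_n(1.0, 100_000, 1)  # realization with V = 1
s2 = sigma_n(3.0, 100_000, 2)  # realization with V = 3
assert abs(s1 - 1.0) < 0.05 and abs(s2 - 3.0) < 0.1
```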
Letting A be any of the events in (5.2), (5.3), (5.4), or (5.5), by Lemma 1.1 the unconditional bootstrap laws of the logarithm, stating under the respective conditions that P{A} = 1, also follow.

6. Complete convergence.
The original motivation here, for a sequence {X_n} of i.i.d. random variables, comes from the classical result of Erdős, Hsu, and Robbins, from 1947 to 1950, which states that the sequence of sample means X̄_n = (X_1 + ··· + X_n)/n converges completely to the mean E(X) if and only if E(X²) < ∞, that is, ∑_{n=1}^∞ P{|X̄_n − E(X)| ≥ ε} < ∞ for every ε > 0 precisely when the variance is finite. Its extensions are also important, such as the one due to Baum and Katz from 1965, stating that ∑_{n=1}^∞ n^{r/p−2} P{|S_n − nE(X)| ≥ εn^{1/p}} < ∞ for every ε > 0 and some p ∈ (0, 2) and r ≥ p if and only if E(|X|^r) < ∞, where S_n = X_1 + ··· + X_n. The case r = p = 1 is a celebrated result by Spitzer from 1956. See, for example, [25, 26, 35, 36, 48] for the classical references. Furthermore, assuming that E(X²) < ∞, Heyde [56] refined the direct half of the Erdős-Hsu-Robbins theorem, establishing the neat fact that lim_{ε→0} ε² ∑_{n=1}^∞ P{|X̄_n − E(X)| ≥ ε} = σ². For the most important bootstrap sample sizes m(n) ≡ n^{1/α}, the unconditional bootstrap analogue of the Erdős-Hsu-Robbins theorem, proved by Csörgő and Wu [36], is completely satisfactory: for any α ≥ 1, (6.2) holds if and only if E(|X|^{1+α}) < ∞. This and the Erdős-Hsu-Robbins theorem imply (6.3) under the same conditions. The proof of (6.2) uses the Baum-Katz theorem in both directions. Unfortunately, no result of the type of (6.2) and (6.3) is known for α ∈ (0, 1), for which case it is conjectured in [25] that the necessary and sufficient condition for (6.2) is E(X²) < ∞, so that (6.2) would hold for any α > 0 if and only if E(|X|^{1+max{1,α}}) < ∞. It is harder to think about a necessary and sufficient condition for (6.3) for any given α > 0.
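Heyde's refinement lends itself to a quick numerical check in the normal case, where P{|X̄_n − E(X)| ≥ ε} = erfc(ε√(n/2)/σ) is available in closed form; the truncation threshold and the function name below are ours.

```python
import math

def heyde_sum(eps, sigma=1.0):
    """eps^2 * sum_{n>=1} P{|mean of n i.i.d. N(0, sigma^2)| >= eps},
    the series truncated once its terms become negligible.  By Heyde's
    theorem this tends to sigma^2 as eps -> 0."""
    total, n = 0.0, 1
    while True:
        term = math.erfc(eps * math.sqrt(n / 2.0) / sigma)  # P{|mean_n| >= eps}
        total += term
        if term < 1e-12 and n > 10:
            break
        n += 1
    return eps**2 * total

# Small eps already lands close to sigma^2 (= 1 and 4 below).
assert abs(heyde_sum(0.05) - 1.0) < 0.02
assert abs(heyde_sum(0.1, sigma=2.0) - 4.0) < 0.1
```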
By the monotone convergence theorem, (6.2) and (6.3) imply for α ≥ 1 the respective conditional statements (the case p = 1 and r = 0 in (6.5)), but the conditions for the implied a.s. conditional convergence statements will be suboptimal. Indeed, extending the case m(n) ≡ n in Li et al. [63], but otherwise deriving the result from a general theorem in [63], Csörgő [25] proves for an arbitrary sequence {X_k}_{k=1}^∞ of identically distributed random variables and for every α > 0 that if the corresponding moment and sample-size conditions hold for some p ∈ (0, 2), then (6.5) holds for every ε > 0 and r ≥ −1. Notice that for p = 1, that is, for complete convergence with an arbitrary polynomial outside rate, the finiteness of the population mean is not needed for the large bootstrap sample sizes n^{1/α} with α < 1. Furthermore, as a consequence of (6.5) it is deduced that if {X_k}_{k=1}^∞ is a sequence of pairwise i.i.d. random variables, p ∈ (0, 1], and either α < 1 and E(|X|) < ∞, or α ≥ 1 and E(max{|X|, [|X| log ...]}) < ∞, then (6.6) holds for every ε > 0 and r ≥ −1. The arbitrariness of r clearly suggests that the moment conditions are rough in all the cases of both (6.5) and (6.6); no fine behavior depending on r is picked up. The main result in [25] is the bootstrap analogue of Heyde's theorem above, accompanying the case p = 1 and r = 0 in (6.5): let X_1, X_2, ... be i.i.d. random variables; then, under the appropriate moment condition, the limit relation (6.7) holds,
where Γ(x) = ∫_0^∞ u^{x−1} e^{−u} du, x > 0, is the usual gamma function. For α = 1, the limit in (6.7) is σ² as in Heyde's theorem; in general it may be written as E(Z^{2α})σ^{2α}, where Z is a standard normal random variable. For α > 1, the optimality of the moment condition is an open problem; a few possibilities are mentioned in [25].
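The constant E(Z^{2α}) can be evaluated through the gamma function by the standard identity E(Z^{2α}) = 2^α Γ(α + 1/2)/Γ(1/2) for a standard normal Z. A minimal check (the function name is ours) confirms that α = 1 recovers Heyde's constant and that α = 2 gives the fourth moment E(Z⁴) = 3:

```python
import math

def ez_2alpha(alpha):
    """E(Z^{2*alpha}) for standard normal Z:
    2^alpha * Gamma(alpha + 1/2) / Gamma(1/2), with Gamma(1/2) = sqrt(pi)."""
    return 2.0**alpha * math.gamma(alpha + 0.5) / math.sqrt(math.pi)

assert abs(ez_2alpha(1.0) - 1.0) < 1e-12  # alpha = 1: E(Z^2) = 1, limit sigma^2
assert abs(ez_2alpha(2.0) - 3.0) < 1e-12  # alpha = 2: E(Z^4) = 3
```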
Finally, we note here that Heyde-type asymptotics in the Baum-Katz and Spitzer theorems were recently proved by Gut and Spătaru [48]. Motivated by those asymptotics, Csörgő [26] establishes corresponding Baum-Katz-type extensions of (6.7), along with a Spitzer-type boundary case (r = −1 in (6.5) as ε ↓ 0). It is conjectured in [25] that, under perhaps stronger moment conditions, the behavior in (6.7) might be inherited by the unconditional form in (6.3). However, the corresponding behavior even in the special case of p = 1 and r = 0 in (6.5) and, particularly, in the unconditional series in (6.2) as ε ↓ 0 is completely unknown.
7. Large deviations. Let X_1, X_2, ... again be i.i.d. random variables. The first deep result, for moderate deviations, was proved by Hall [52], taking m(n) ≡ n and assuming the finiteness of the moment generating function M(t) = E(e^{tX}) for t in a neighborhood of the origin together with Cramér's smoothness condition lim sup_{|t|→∞} |E(e^{itX})| < 1. He developed a complete asymptotic expansion, holding a.s. for all nonnegative x_n = o(√n), for the conditional tail probability ratio of the standardized bootstrap sum; the beauty and usefulness of the expansion lie in the fact that it is the exact empirical counterpart of the corresponding expansion for the primary ratio for the standardized partial sums of {X_n}, obtained upon replacing all the cumulants in the latter by their sample versions; see also [53, Appendix V].
Turning now to large deviations proper, suppose that the moment generating function M(t) = E(e^{tX}) is finite for all t ∈ R, and for a set A ⊂ R define Λ(A) = inf_{x∈A} sup_{t∈R} (tx − log M(t)). Then, as an application of a result by Bolthausen [21], for any m(n) → ∞, Li et al. [64] recently proved the conditional bootstrap large deviation principle comprised of the two statements that for every open set A ⊂ R the lower bound (7.1) holds, and that the matching upper bound, with the corresponding upper limit at most e^{−Λ(A)} a.s., holds as well (7.2). Note that for every open set A ⊂ R, by Jensen's inequality, Fatou's lemma, and finally (7.1), the unconditional version of (7.1) prevails. We are unable to determine whether the unconditional counterpart to (7.2) holds or not. Even for the conditional forms (7.1) and (7.2), it is an open question whether the finiteness of M(·) only in a neighborhood of zero, not on the whole line, suffices as in the primary result for partial sums of {X_n}. Do (7.1) and (7.2) hold, for example, when X has an exponential distribution?

8. Erdős-Rényi laws. The "new law of large numbers" of Erdős and Rényi [44], with an earlier, less effective version by Shepp [74], for a sequence {X_n} of i.i.d. random variables was an interesting development that has had numerous important applications in several directions; see [27] for an early elaboration.

For the bootstrap version, suppose that E(X) = 0 and, as in Section 7, that M(t) = E(e^{tX}) is finite for all t ∈ R. Put S*_{n,0} = 0 and S*_{n,k} = ∑_{j=1}^k X*_{n,j}, k = 1, ..., n, and introduce M*_n(c) = max_{0≤k≤n−⌊c log n⌋} (S*_{n,k+⌊c log n⌋} − S*_{n,k}), n ∈ N, and α(c) = sup{x ∈ R : inf_{t∈R} e^{−tx} M(t) ≥ e^{−1/c}}, c > 0. Then the recent bootstrap Erdős-Rényi law of Li and Rosalsky [62] states that for every c > 0, P{lim inf_{n→∞} M*_n(c)/⌊c log n⌋ = α(c) | X} = 1 a.s., and if, in addition, the bootstrap samples {X*_{n,j}, 1 ≤ j ≤ n}, n ∈ N, are conditionally independent given X (again, one of the two models mentioned in Section 4, and the one in Section 5), then, for every c > 0, the corresponding statement with lim sup in place of lim inf holds as well. The limiting function α(·) being the same as in the original Erdős-Rényi law is particularly striking in the given bootstrap environment in that it determines the distribution of X. As in Section 7, the question again arises concerning the possible sufficiency of the finiteness of M(·) only in a neighborhood of the origin, or, for that matter, the validity of the bootstrap analogues of the half-sided versions of the original law, given in the appendix of [27].

9. Pointwise asymptotic distributions. This is the class of limit theorems missing from our list in the introduction. The last fifteen years witnessed the development of an exciting new field of probability, that of pointwise asymptotic distributions, which started out from the pointwise CLT published independently by Brosamler and Schatte in 1988; see the overview of the state of the art five years ago by Berkes [18]. We are unaware of any result of this type for any bootstrap variable. Letting X_1, X_2, ... be i.i.d. random variables with E(X²) < ∞, the first question would be about the bootstrap analogue of the staple result in the primary field: what are the bootstrap sample sizes m(n) → ∞, if there are any, for which the conditional pointwise bootstrap CLT holds for all x ∈ R? These conditional and unconditional pointwise bootstrap CLTs would of course be equivalent by Lemma 1.1, and already the case m(n) ≡ n would be of interest, even under an extra moment condition for a starter.
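The rate function Λ(·) of Section 7 and the Erdős-Rényi function α(·) of Section 8 are both Legendre-type transforms of log M(·) and can be computed explicitly in simple cases. The following minimal sketch (all function names are ours, and the grid search is purely illustrative) checks the closed forms for a standard normal X, where M(t) = e^{t²/2}, Λ(x) = x²/2, and α(c) = √(2/c):

```python
import math

def log_mgf_normal(t):
    """log M(t) = t^2/2 for X ~ N(0, 1)."""
    return 0.5 * t * t

def rate(x):
    """Lambda(x) = sup_t (t*x - log M(t)), evaluated by a crude grid search
    over t in [-10, 10]; for N(0,1) the supremum is x^2/2, attained at t = x."""
    grid = (i / 1000.0 for i in range(-10_000, 10_001))
    return max(t * x - log_mgf_normal(t) for t in grid)

def alpha(c):
    """alpha(c) = sup{x : inf_t e^{-t x} M(t) >= e^{-1/c}}.  Since
    inf_t e^{-t x} M(t) = e^{-Lambda(x)}, this is sup{x : Lambda(x) <= 1/c},
    which for N(0,1) equals sqrt(2/c)."""
    return math.sqrt(2.0 / c)

assert abs(rate(1.5) - 1.125) < 1e-6        # Lambda(1.5) = 1.5^2 / 2
assert abs(rate(alpha(4.0)) - 0.25) < 1e-6  # Lambda(alpha(c)) = 1/c at c = 4
```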