Convergence Rates in the Law of Large Numbers for Arrays of Banach Valued Martingale Differences

For notation, as usual, we write $\mathbb{N} = \{1, 2, \ldots\}$, $\mathbb{N}^* = \{0\} \cup \mathbb{N}$, and $\mathbb{R} = (-\infty, \infty)$.

2. Maximal Inequalities for Banach Valued Martingales

In this section, we show new maximal inequalities for Banach valued martingales. Let $(\Omega, \mathcal{F}, \mathrm{P})$ be a probability space and $(\mathbf{B}, \|\cdot\|)$ a real separable Banach space. For any real number $p \ge 1$, denote by $L^p_{\mathbf{B}}$ the space of $\mathbf{B}$-valued random variables $X$ for which $\|X\|_{L^p_{\mathbf{B}}} = (\mathrm{E}\|X\|^p)^{1/p}$ is finite. Let $\mathcal{F}_0 = \{\emptyset, \Omega\} \subset \mathcal{F}_1 \subset \cdots$ be an increasing sequence of sub-$\sigma$-fields of $\mathcal{F}$. Let $\{(X_j, \mathcal{F}_j)\}_{j=1}^n$ be an adapted sequence of $\mathbf{B}$-valued random variables defined on $(\Omega, \mathcal{F}, \mathrm{P})$; that is, for every $j \ge 1$, $X_j$ is $\mathcal{F}_j$-measurable. We call it a sequence of $\mathbf{B}$-valued martingale differences if additionally $\mathrm{E}[X_j \mid \mathcal{F}_{j-1}] = 0$ a.s. and $X_j \in L^1_{\mathbf{B}}$ for every $j \ge 1$, and a sequence of $\mathbf{B}$-valued supermartingale differences if additionally $\mathrm{E}[X_j \mid \mathcal{F}_{j-1}] \le 0$ a.s. and $X_j \in L^1_{\mathbf{B}}$ for every $j \ge 1$. Following Pisier [36], we say that a Banach space $(\mathbf{B}, \|\cdot\|)$ is $\gamma$-smooth ($1 < \gamma \le 2$) if there exists an equivalent norm $\|\cdot\|_{\mathbf{B}}$ such that
$$\sup_{t>0} \Big\{ \frac{1}{t^{\gamma}} \sup \big\{ \|x + ty\|_{\mathbf{B}} + \|x - ty\|_{\mathbf{B}} - 2 : \|x\|_{\mathbf{B}} = \|y\|_{\mathbf{B}} = 1 \big\} \Big\} < \infty. \quad (8)$$
Set
$$S_0 = 0, \qquad S_n = \sum_{j=1}^n X_j, \qquad S_n^* = \max_{1 \le j \le n} \|S_j\| \quad \forall n \in \mathbb{N}^*, \quad (9)$$
and set
$$X_0^* = 0, \qquad X_n^* = \max_{1 \le j \le n} \|X_j\| \quad \forall n \in \mathbb{N}^*. \quad (10)$$
For $\gamma > 0$, let
$$m(\gamma, n) = \sum_{j=1}^n \mathrm{E}\big[\|X_j\|^{\gamma} \mid \mathcal{F}_{j-1}\big]. \quad (11)$$
Accordingly, for an infinite $\mathbf{B}$-valued adapted sequence $\{(X_j, \mathcal{F}_j)\}_{j \ge 1}$, we write
$$X_\infty^* = \sup_{j \ge 1} \|X_j\|, \qquad S_\infty^* = \sup_{j \ge 1} \|S_j\|, \qquad S_\infty = \sum_{j=1}^{\infty} X_j \quad (12)$$
if the series converges, and
$$m(\gamma, \infty) = \sum_{j=1}^{\infty} \mathrm{E}\big[\|X_j\|^{\gamma} \mid \mathcal{F}_{j-1}\big]. \quad (13)$$
In the following, we consider relations among $\sum_{j=1}^n \mathrm{P}\{\|X_j\| > \varepsilon\}$, $\mathrm{P}\{X_n^* > \varepsilon\}$, $\mathrm{P}\{S_n^* > \varepsilon\}$, and $\mathrm{P}\{\|S_n\| > \varepsilon\}$.
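To make the objects in (9)–(11) concrete, here is a minimal numerical sketch (ours, not part of the paper) in the scalar case $\mathbf{B} = \mathbb{R}$ with independent Rademacher increments, a special case of martingale differences for which $\mathrm{E}[|X_j|^{\gamma} \mid \mathcal{F}_{j-1}] = 1$ and hence $m(\gamma, n) = n$ exactly; the helper names are ours:

```python
import random

def rademacher_path(n, rng):
    """n independent Rademacher (+1/-1) increments: a martingale-difference
    sequence with E[X_j | F_{j-1}] = 0 and E[|X_j|^gamma | F_{j-1}] = 1."""
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def maximal_stats(xs):
    """Return (S_n*, X_n*) as in (9)-(10): the maxima of |S_j| and |X_j|."""
    s, s_star, x_star = 0.0, 0.0, 0.0
    for x in xs:
        s += x
        s_star = max(s_star, abs(s))
        x_star = max(x_star, abs(x))
    return s_star, x_star

rng = random.Random(0)
xs = rademacher_path(100, rng)
s_star, x_star = maximal_stats(xs)
m_gamma_n = float(len(xs))  # m(gamma, n) = n here, for every gamma > 0
print(s_star, x_star, m_gamma_n)
```

By construction $X_n^* = 1$ and $S_n^* \ge |S_n|$ on every path, which mirrors the role these maxima play in the inequalities below.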
Our first theorem describes relations between $\mathrm{P}\{X_n^* > \varepsilon\}$ and $\sum_{j=1}^n \mathrm{P}\{\|X_j\| > \varepsilon\}$ for an adapted sequence of $\mathbf{B}$-valued random variables $\{(X_j, \mathcal{F}_j)\}_{j=1}^n$.

Theorem 1. Let $\{(X_j, \mathcal{F}_j)\}_{j=1}^n$ be an adapted sequence of $\mathbf{B}$-valued random variables. Then, for any $\varepsilon, \gamma > 0$ and $q \ge 1$,
$$\mathrm{P}\{X_n^* > \varepsilon\} \le \sum_{j=1}^n \mathrm{P}\{\|X_j\| > \varepsilon\} \le (1 + \varepsilon^{-\gamma})\, \mathrm{P}\{X_n^* > \varepsilon\} + \varepsilon^{-\gamma}\, \mathrm{E}\, m^q(\gamma, n). \quad (14)$$

Our second theorem shows relations between $\mathrm{P}\{S_n^* > \varepsilon\}$ and $\mathrm{P}\{X_n^* > \varepsilon\}$ for a sequence of $\mathbf{B}$-valued martingale differences $\{(X_j, \mathcal{F}_j)\}_{j=1}^n$; that is, for each (integer) $1 \le j \le n$, $X_j$ is $\mathcal{F}_j$-measurable and belongs to $L^1_{\mathbf{B}}$, and $\mathrm{E}[X_j \mid \mathcal{F}_{j-1}] = 0$ a.s.

Theorem 2. Let $\{(X_j, \mathcal{F}_j)\}_{j=1}^n$ be a finite sequence of $\mathbf{B}$-valued martingale differences. For any $\varepsilon > 0$, $\gamma \in (1, 2]$, $q \ge 1$, and $L \in \mathbb{N}$, if $\mathbf{B}$ is $\gamma$-smooth, then
$$\mathrm{P}\{X_n^* > 2\varepsilon\} \le \mathrm{P}\{S_n^* > \varepsilon\} \le \mathrm{P}\Big\{X_n^* > \frac{\varepsilon}{4(L+1)}\Big\} + \varepsilon^{-q\gamma(1+L)/(q+L)}\, C(\gamma, q, L)\, \big(\mathrm{E}\, m^q(\gamma, n)\big)^{(1+L)/(q+L)}, \quad (15)$$
where $C(\gamma, q, L)$ is a constant depending only on $\gamma$, $q$, and $L$.

Corollary 3. Let $\{(X_j, \mathcal{F}_j)\}_{j \ge 1}$ be a sequence of $\mathbf{B}$-valued martingale differences. Suppose that, for some $\gamma \in (1, 2]$,
$$\sum_{j=1}^{\infty} \mathrm{E}\|X_j\|^{\gamma} < \infty. \quad (16)$$
If $\mathbf{B}$ is $\gamma$-smooth, then $S_\infty$ converges a.s. and the inequalities (14) and (15) hold with $n$ replaced by $\infty$.

We obtain Theorems 1 and 2 by a refinement of the method of Alsmeyer [18].

Proof of Theorem 1. The first inequality is obvious. We only consider the second one. Clearly,
$$\mathrm{P}\{X_n^* > \varepsilon\} = \sum_{j=1}^n \mathrm{P}\{X_{j-1}^* \le \varepsilon,\ \|X_j\| > \varepsilon\} = \sum_{j=1}^n \mathrm{P}\{\|X_j\| > \varepsilon\} - \sum_{j=2}^n \mathrm{P}\{X_{j-1}^* > \varepsilon,\ \|X_j\| > \varepsilon\}. \quad (17)$$
Since $\{(X_j, \mathcal{F}_j)\}_{j=1}^n$ is an adapted sequence of $\mathbf{B}$-valued random variables, by Markov's inequality (conditional on $\mathcal{F}_{j-1}$),
$$\mathrm{P}\{X_{j-1}^* > \varepsilon,\ \|X_j\| > \varepsilon\} = \int_{\{X_{j-1}^* > \varepsilon\}} \mathbf{1}_{\{\|X_j\| > \varepsilon\}}\, d\mathrm{P} = \int_{\{X_{j-1}^* > \varepsilon\}} \mathrm{P}\{\|X_j\| > \varepsilon \mid \mathcal{F}_{j-1}\}\, d\mathrm{P} \le \varepsilon^{-\gamma} \int_{\{X_{j-1}^* > \varepsilon\}} \mathrm{E}[\|X_j\|^{\gamma} \mid \mathcal{F}_{j-1}]\, d\mathrm{P} \le \varepsilon^{-\gamma} \int_{\{X_n^* > \varepsilon\}} \mathrm{E}[\|X_j\|^{\gamma} \mid \mathcal{F}_{j-1}]\, d\mathrm{P}. \quad (18)$$
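As a sanity check (our own illustration, not from the paper), the three quantities in (14) can be estimated by Monte Carlo for independent Uniform(−1, 1) increments, a case where $\mathrm{E}[|X_j|^{\gamma} \mid \mathcal{F}_{j-1}] = 1/(\gamma+1)$, so with $q = 1$ we have $\mathrm{E}\,m(\gamma, n) = n/(\gamma+1)$; the function name `mc_check_theorem1` is ours:

```python
import random

def mc_check_theorem1(n=5, eps=0.9, gamma=2.0, trials=20000, seed=1):
    """Monte Carlo estimate of the three quantities in inequality (14) with
    q = 1, for independent Uniform(-1, 1) increments: here
    E[|X_j|^gamma | F_{j-1}] = 1/(gamma + 1), so E m(gamma, n) = n/(gamma + 1).
    Returns (P{X_n* > eps}, sum_j P{|X_j| > eps}, right-hand side of (14))."""
    rng = random.Random(seed)
    hit_xstar = 0   # trials on which X_n* > eps
    hit_total = 0   # total exceedance count, across j and trials
    for _ in range(trials):
        xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        if max(abs(x) for x in xs) > eps:
            hit_xstar += 1
        hit_total += sum(1 for x in xs if abs(x) > eps)
    p_xstar = hit_xstar / trials
    sum_p = hit_total / trials
    e_m = n / (gamma + 1.0)   # exact E m(gamma, n) in this toy model
    upper = (1.0 + eps ** -gamma) * p_xstar + eps ** -gamma * e_m
    return p_xstar, sum_p, upper

p_xstar, sum_p, upper = mc_check_theorem1()
print(p_xstar <= sum_p <= upper)
```

The left inequality of (14) holds pathwise in the simulation (each trial contributing to the first counter contributes at least once to the second), while the right-hand bound is far from tight here, as expected from its generality.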
Hence, by summing, we obtain
$$\sum_{j=2}^n \mathrm{P}\{X_{j-1}^* > \varepsilon,\ \|X_j\| > \varepsilon\} \le \varepsilon^{-\gamma} \int_{\{X_n^* > \varepsilon\}} m(\gamma, n)\, d\mathrm{P} = \varepsilon^{-\gamma} \int_{\{m(\gamma,n) \le 1,\ X_n^* > \varepsilon\}} m(\gamma, n)\, d\mathrm{P} + \varepsilon^{-\gamma} \int_{\{m(\gamma,n) > 1,\ X_n^* > \varepsilon\}} m(\gamma, n)\, d\mathrm{P} \le \varepsilon^{-\gamma}\, \mathrm{P}\{X_n^* > \varepsilon\} + \varepsilon^{-\gamma} \int_{\{m(\gamma,n) > 1\}} m^q(\gamma, n)\, d\mathrm{P} \le \varepsilon^{-\gamma}\, \mathrm{P}\{X_n^* > \varepsilon\} + \varepsilon^{-\gamma}\, \mathrm{E}\, m^q(\gamma, n). \quad (19)$$
Therefore, combining (17) with the upper bound in (19) yields the second inequality of (14).

Proof of Theorem 2. The first inequality is obvious, because if $\max_{1 \le j \le n} \|S_j\| \le \varepsilon$, then
$$\max_{1 \le j \le n} \|X_j\| = \max_{1 \le j \le n} \|S_j - S_{j-1}\| \le 2\varepsilon. \quad (20)$$
We will prove the second inequality. For any $\varepsilon > 0$, $n \in \mathbb{N}$, and $L \in \mathbb{N}$,
$$\mathrm{P}\{S_n^* > 2\varepsilon\} \le \mathrm{P}\Big\{X_n^* > \frac{\varepsilon}{2(L+1)}\Big\} + \mathrm{P}\Big\{S_n^* > 2\varepsilon,\ X_n^* \le \frac{\varepsilon}{2(L+1)}\Big\}. \quad (21)$$
Define
$$T(0) = 0, \qquad T(j) = \inf\Big\{i \in (T(j-1), n] : \|S_i - S_{T(j-1)}\| > \frac{\varepsilon}{L+1}\Big\} \quad \text{for } 1 \le j \le L+1, \quad (22)$$
where by convention $\inf \emptyset = +\infty$. It is easily seen that the $T(j)$ are stopping times (cf. e.g. [34] for the definition) with respect to the filtration $\mathcal{F}_0 = \{\emptyset, \Omega\} \subset \mathcal{F}_1 \subset \cdots \subset \mathcal{F}_n \subset \mathcal{F}_{n+1} \subset \cdots$, where we take $\mathcal{F}_k = \mathcal{F}_n$ for all $k \ge n$. As usual, we write $\mathcal{F}_{T(j)} = \{A \in \mathcal{F}_n : A \cap \{T(j) = k\} \in \mathcal{F}_k,\ 1 \le k \le n\}$. Notice that for $1 \le j \le L+1$, if $T(j) < \infty$, then $\|S_{T(j)} - S_{T(j-1)}\| > \varepsilon/(L+1)$; conversely, if there exists a positive integer $i \in (T(j-1), n]$ such that $\|S_i - S_{T(j-1)}\| > \varepsilon/(L+1)$, then $T(j) < \infty$. We proceed in three steps to estimate the second term on the right-hand side of (21).

(a) We first prove that
$$\Big\{S_n^* > 2\varepsilon,\ X_n^* \le \frac{\varepsilon}{2(L+1)}\Big\} \subset \bigcap_{j=1}^{L+1} \{T(j) < \infty\}. \quad (23)$$
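The stopping times in (22) are easy to compute path by path; the following sketch (ours, in the scalar case, with `None` standing for $T(j) = +\infty$) may help fix ideas:

```python
def stopping_times(xs, eps, L):
    """Compute T(1), ..., T(L+1) from (22) for one scalar path xs:
    T(j) is the first index i in (T(j-1), n] with |S_i - S_{T(j-1)}|
    exceeding eps/(L+1); None stands for T(j) = +infinity (the inf of
    the empty set)."""
    n = len(xs)
    prefix = [0.0]                       # prefix[i] = S_i
    for x in xs:
        prefix.append(prefix[-1] + x)
    thresh = eps / (L + 1)
    ts, last = [], 0
    for _ in range(L + 1):
        if last is None:
            t = None                     # once T(j-1) = infinity, so is T(j)
        else:
            t = next((i for i in range(last + 1, n + 1)
                      if abs(prefix[i] - prefix[last]) > thresh), None)
        ts.append(t)
        last = t
    return ts

print(stopping_times([1, 1, -3, 1], 1.5, 1))  # -> [1, 2]
```

On the sample path above with $\varepsilon = 1.5$ and $L = 1$, the threshold is $\varepsilon/(L+1) = 0.75$: the partial sums first leave the band at $i = 1$ and again, relative to $S_{T(1)}$, at $i = 2$.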


Introduction
The convergence rates in the law of large numbers have been considered by many authors. Let $(X_n)_{n \ge 1}$ be a sequence of independent and identically distributed (i.i.d.) real valued random variables defined on a probability space $(\Omega, \mathcal{F}, \mathrm{P})$ with $\mathrm{E} X_1 = 0$, and set $S_n = \sum_{j=1}^n X_j$. By the law of large numbers, $\mathrm{P}\{|S_n/n| > \varepsilon\} \to 0$ for $\varepsilon > 0$. Hsu and Robbins [1] introduced the notion of complete convergence and showed that $\sum_n \mathrm{P}\{|S_n| > \varepsilon n\} < \infty$ for all $\varepsilon > 0$ if $\mathrm{E} X_1^2 < \infty$; Erdős [2, 3] proved that the converse also holds. Spitzer [4] showed that $\sum_n n^{-1}\, \mathrm{P}\{|S_n| > \varepsilon n\} < \infty$ for all $\varepsilon > 0$ whenever $\mathrm{E} X_1 = 0$. Katz [5] and Baum and Katz [6] proved that, for $p = 1/b$ and $b \ge 1/2$, or $p > 1/b$ and $b > 1/2$,
$$\sum_n n^{pb-2}\, \mathrm{P}\{|S_n| > \varepsilon n^b\} < \infty \quad \text{for all } \varepsilon > 0$$
if and only if $\mathrm{E}|X_1|^p < \infty$. Lai [7] studied the limiting case where $p > 2$ and $b = 1/2$. Gafurov and Slastnikov [8] considered the case where $(n^{pb-2})$ and $(n^b)$ are replaced by more general sequences. Many authors have considered the generalization of the theorem of Baum and Katz [6] to arrays of independent (but not necessarily identically distributed) random variables; see for example Li et al. [9], Hu et al. [10–12], Kuczmaszewska [13], Sung et al. [14], and Kruglov et al.
[15]. Let $(X_n)_{n \ge 1}$ be a sequence of real-valued martingale differences defined on a probability space $(\Omega, \mathcal{F}, \mathrm{P})$, adapted to a filtration $(\mathcal{F}_n)$, with $\mathcal{F}_0 = \{\emptyset, \Omega\}$. This means that for each (integer) $n \ge 1$, $X_n$ is $\mathcal{F}_n$-measurable and $\mathrm{E}[X_n \mid \mathcal{F}_{n-1}] = 0$ a.s. A natural question is whether the aforementioned theorem of Baum and Katz [6] is still valid for martingale differences $(X_n)$. Lesigne and Volný [16] proved that, for $p \ge 2$, $\sup_{n \ge 1} \mathrm{E}|X_n|^p < \infty$ implies
$$\mathrm{P}\{|S_n| > n\} = O(n^{-p/2})$$
(as usual, we write $a_n = o(b_n)$ if $\lim_{n \to \infty} a_n/b_n = 0$ and $a_n = O(b_n)$ if the sequence $(a_n/b_n)$ is bounded) and that the exponent $p/2$ is the best possible, even for strictly stationary and ergodic sequences of martingale differences. Therefore, the theorem of Baum and Katz does not hold for martingale differences without additional conditions. (Stoica [17] claimed that the theorem of Baum and Katz still holds for $p > 2$ in the case of martingale differences without additional assumptions, but his claim contradicts the conclusion of Lesigne and Volný [16], and his proof contains an error: when $p > 2$, one cannot choose a parameter satisfying (6) of [17].) Alsmeyer [18] proved that the theorem of Baum and Katz for $p > 1/b$ and $1/2 < b \le 1$ still holds for martingale differences $(X_n)_{n \ge 1}$ if, for some $\gamma$, the conditional moment condition (5) holds, where $\|\cdot\|_q$ denotes the $L^q$ norm. This is a nice result; nevertheless, it is not always satisfied in applications. For example: (a) it does not apply to "nonhomogeneous" cases, such as martingales of the form $S_n = \sum_{j=1}^n j^a Y_j$, where $a > 0$ and the $Y_j$ are i.i.d., as in this case the condition (5) (with $X_j = j^a Y_j$) is never satisfied; (b) in applications, instead of a single martingale we often need to consider martingale arrays: for example, when we use the decomposition of a random sequence into martingale differences (such as in the study of directed polymers in a random environment), the summands usually depend on $n$: $X_j = X_{n,j}$, where $\mathcal{F}_0 = \{\emptyset, \Omega\}$ and $\mathcal{F}_j = \sigma(X_1, \ldots, X_j)$ for $j \ge 1$.
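As a quick numerical illustration of the tail decay behind these results (a Monte Carlo sketch of ours, not from the paper), the probability $\mathrm{P}\{|S_n| > \varepsilon n\}$ appearing in the Hsu–Robbins and Baum–Katz series collapses rapidly as $n$ grows for bounded i.i.d. increments:

```python
import random

def tail_prob(n, eps, trials, rng):
    """Monte Carlo estimate of P{|S_n| > eps * n} for i.i.d. Rademacher
    increments -- the probability whose (weighted) summability the
    Hsu-Robbins and Baum-Katz theorems characterize."""
    hits = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        if abs(s) > eps * n:
            hits += 1
    return hits / trials

rng = random.Random(42)
p10 = tail_prob(10, 0.5, 5000, rng)
p100 = tail_prob(100, 0.5, 5000, rng)
print(p10, p100)  # the tail probability collapses as n grows
```

For increments with all moments finite the decay is in fact faster than any power of $n$; the martingale results above concern how much of this decay survives under moment conditions only.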
Our second main objective is to extend another important theorem of Baum and Katz [6], which states that for i.i.d. real valued random variables $X_j$ with $\mathrm{E} X_1 = 0$ and for $p \ge 1$, $\mathrm{P}\{|X_1| > x\} = O(x^{-p})$ if and only if $\mathrm{P}\{|S_n| > \varepsilon n\} = O(n^{-(p-1)})$ for all $\varepsilon > 0$. In fact, we prove that a similar result holds for a large class of Banach valued martingale arrays: under a simple moment condition on $\sum_{j=1}^{\infty} \mathrm{E}[\|X_{n,j}\|^{\gamma} \mid \mathcal{F}_{n,j-1}]$ for some $\gamma \in (1, 2]$, we obtain sufficient conditions for
$$\varphi(n)\, \mathrm{P}\{\|S_{n,\infty}\| > \varepsilon\} = o(1) \quad (\text{resp., } O(1)), \quad (7)$$
where $\varphi$ and $S_{n,\infty}$ are defined as before and $\varepsilon > 0$. The result is new and sharp even for independent but not identically distributed real valued random variables.
The consideration of a Banach valued martingale array (rather than a single Banach valued martingale) makes our results well suited to the study of weighted sums of identically distributed Banach valued random variables. Many authors have contributed to this subject. Gut [20] and Lanzinger and Stadtmüller [21] considered weighted sums of i.i.d. random variables. Li et al. [9] and Wang et al. [22] studied weighted sums of independent random variables. Yu [23] considered weighted sums of martingale differences (see also the references therein). Ghosal and Chandra [19] considered weighted sums of arrays of martingale differences. As applications of our main results, we generalize or improve some of their results. For example, we prove a new theorem about the convergence rate for weighted sums of identically distributed Banach valued martingale differences.
The rest of the paper is organized as follows. In Section 2, we establish some maximal inequalities for Banach valued martingales. In Section 3, we show our main results on the convergence rates for Banach valued martingale arrays, which improve and complete Theorem 2 of Ghosal and Chandra [19]. In Section 4, we consider the important special case of triangular Banach valued martingale arrays and obtain an extension of Theorems 1 and 2 of Alsmeyer [18]. We also generalize a result of Chow and Teicher (cf. [34, page 393]) about the complete convergence of sums of independent real valued random variables. In Section 5, we study the convergence rates for the maxima of sequences of arbitrary Banach valued random variables, in order to obtain further equivalent conditions on the convergence rates for Banach valued martingales in the following section. In Section 6, we consider the convergence rates for Banach valued martingales. Our results extend Theorems 1–4 of Baum and Katz [6] for i.i.d. real valued random variables and generalize Theorems 1 and 2 of Alsmeyer [18]. As applications, in Section 7, we obtain new results on the convergence rates for weighted sums of Banach valued martingale differences, which extend Theorems 2 and 3 of Lanzinger and Stadtmüller [21] on weighted sums of the form $\sum_{j=1}^n j^{a-1} X_j$. In Section 8, we consider more general weighted sums of Banach valued martingale differences, for which we extend Theorem 3.3 of Baxter et al. [35], Corollary 1 of Ghosal and Chandra [19], and Theorems 2.2–2.4 of Li et al. [9], and generalize Theorem 2 of Yu [23].

Convergence Rates for Arrays of Banach Valued Martingale Differences
In this section, we consider the convergence rates in the law of large numbers for arrays of Banach valued martingale differences.
Let $(\Omega, \mathcal{F}, \mathrm{P})$ be a probability space and $(\mathbf{B}, \|\cdot\|)$ a real separable Banach space. For every $n \ge 1$, let $\mathcal{F}_{n,0} = \{\emptyset, \Omega\} \subset \mathcal{F}_{n,1} \subset \cdots$ be an increasing sequence of sub-$\sigma$-fields of $\mathcal{F}$, and let $\{(X_{n,j}, \mathcal{F}_{n,j})\}_{j \ge 1}$ be a sequence of $\mathbf{B}$-valued martingale differences defined on $(\Omega, \mathcal{F}, \mathrm{P})$, adapted to the filtration $(\mathcal{F}_{n,j})_{j \ge 0}$: that is, for every $j \ge 1$, $X_{n,j}$ is $\mathcal{F}_{n,j}$-measurable and belongs to $L^1_{\mathbf{B}}$, and $\mathrm{E}[X_{n,j} \mid \mathcal{F}_{n,j-1}] = 0$ a.s. Set $S_{n,0} = 0$, $S_{n,k} = \sum_{j=1}^k X_{n,j}$ for $k \ge 1$, and $S_{n,\infty} = \sum_{j=1}^{\infty} X_{n,j}$ if the series converges. We will call the double sequence $\{(X_{n,j}, \mathcal{F}_{n,j}) : n \ge 1, j \ge 1\}$ an array of $\mathbf{B}$-valued martingale differences.
We are interested in the convergence rates of the probabilities $\mathrm{P}\{\sup_{k \ge 1} \|S_{n,k}\| > \varepsilon\}$ and $\mathrm{P}\{\|S_{n,\infty}\| > \varepsilon\}$. We will describe their rates of convergence by comparing them with an auxiliary function $\varphi(n)$ and by considering the convergence of the related series.

Convergence Rates for Triangular Arrays of Banach Valued Martingale Differences
In this section, we consider the convergence rates in the law of large numbers for triangular arrays of Banach valued martingale differences. Let $(\Omega, \mathcal{F}, \mathrm{P})$ be a probability space and $(\mathbf{B}, \|\cdot\|)$ a real separable Banach space. For every $n \ge 1$, let $\{(X_{n,j}, \mathcal{F}_{n,j})\}_{j=1}^n$ be a sequence of $\mathbf{B}$-valued martingale differences defined on $(\Omega, \mathcal{F}, \mathrm{P})$, adapted to the filtration $(\mathcal{F}_{n,j})$: that is, for every $1 \le j \le n$ and every $n \ge 1$, $X_{n,j}$ is $\mathcal{F}_{n,j}$-measurable and belongs to $L^1_{\mathbf{B}}$, and $\mathrm{E}[X_{n,j} \mid \mathcal{F}_{n,j-1}] = 0$ a.s. We will call the double sequence $\{(X_{n,j}, \mathcal{F}_{n,j}) : 1 \le j \le n, n \ge 1\}$ a triangular array of $\mathbf{B}$-valued martingale differences. In the following, we first give a sufficient condition for the convergence of triangular arrays of $\mathbf{B}$-valued martingales. For $b \in \mathbb{R}$ and $n \ge 1$, let $a_n = n^b$. We are interested in the convergence rates of the probabilities $\mathrm{P}\{\max_{1 \le j \le n} \|S_{n,j}\| > \varepsilon a_n\}$ and $\mathrm{P}\{\|S_{n,n}\| > \varepsilon a_n\}$. We will describe their rates of convergence by comparing them with an auxiliary function $\varphi(n)$ and by considering the convergence of the related series.
We begin with some relations among these probabilities. … $= o(1)$. (89) Thus, the condition (81) holds, so that the conclusion follows from Theorem 14.
As a special case, we obtain the following extension of a result of Chow and Teicher [34, page 393] about the complete convergence of sums of independent random variables.

Convergence Rate for the Maxima of Arbitrary Banach Valued Random Variables
In this section, we study the convergence rate for the maxima of a sequence of arbitrary $\mathbf{B}$-valued random variables, in order to obtain further equivalent conditions on the convergence rate for a Banach valued martingale in Section 6. Let $(\mathbf{B}, \|\cdot\|)$ be a separable Banach space and $(X_n)_{n \ge 1}$ a sequence of arbitrary $\mathbf{B}$-valued random variables. For any $x \in [1, \infty)$, let $[x]$ be the integer part of $x$. Set $X_0^* = 0$; then, for any $n \ge 1$, $X_n^* = \max_{0 \le j \le n} \|X_j\|$. For any $b > 0$ and $\varepsilon > 0$, set $a_n = n^b l(n)$, where $l(\cdot) > 0$ is a function slowly varying at $\infty$. Recall that a function $l(t) > 0$ slowly varying at $\infty$ has the representation
$$l(t) = c(t) \exp\Big\{\int_{t_0}^{t} \frac{\eta(u)}{u}\, du\Big\}, \quad t \ge t_0,$$
for some $t_0 > 0$, where $\eta(\cdot)$ is measurable and $c(t) \to c \in (0, \infty)$, $\eta(t) \to 0$ as $t \to \infty$. The function $c(\cdot)$ plays no role for our purpose; we can take $c(t) \equiv 1$ without loss of generality.
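The Karamata representation above is easy to experiment with numerically. The following sketch (ours, not from the paper) takes $c \equiv 1$, $t_0 = e$, and $\eta(u) = 1/\log u$; after the substitution $u = e^v$ the integral becomes $\int_1^{\log t} dv/v$, so the resulting $l$ equals $\log t$ exactly, and slow variation means $l(\lambda t)/l(t) \to 1$ for each fixed $\lambda$:

```python
import math

def slowly_varying(t, steps=4000):
    """Karamata representation l(t) = c * exp(int_{t0}^t eta(u)/u du) with
    c = 1, t0 = e, and eta(u) = 1/log(u), so eta(u) -> 0 at infinity.
    After the substitution u = exp(v) the integral is int_1^{log t} dv/v,
    approximated here by the midpoint rule; the exact value is log(t)."""
    if t <= math.e:
        return 1.0
    a, b = 1.0, math.log(t)
    h = (b - a) / steps
    integral = sum(h / (a + (k + 0.5) * h) for k in range(steps))
    return math.exp(integral)

# slow variation: l(2t) / l(t) -> 1 as t grows
r1 = slowly_varying(2e3) / slowly_varying(1e3)
r2 = slowly_varying(2e6) / slowly_varying(1e6)
print(r1, r2)
```

The ratios drift toward 1 as $t$ increases (here they equal $\log(2t)/\log t$ up to quadrature error), in contrast with a regularly varying factor $t^b$, whose ratio would stay at $2^b$.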
We are interested in the convergence rates of $\mathrm{P}\{X_n^* > \varepsilon a_n\}$ and $\mathrm{P}\{\sup_{k \ge n} a_k^{-1} X_k^* > \varepsilon\}$. Notice that $\mathrm{P}\{\sup_{k \ge n} a_k^{-1} X_k^* > \varepsilon\} \to 0$ for any $\varepsilon > 0$ if and only if $X_n^*/a_n \to 0$ a.s. So, our results in this section describe the rate of convergence for the almost sure convergence of $X_n^*/a_n$. The following result shows that $\mathrm{P}\{X_n^* > \varepsilon a_n\}$ and $\mathrm{P}\{X^*_{[t]} > \varepsilon a(t)\}$ have similar asymptotic properties. More precise comparisons will be given in Theorems 22 and 24.

Convergence Rates for Banach Valued Martingales
In this section, we consider the convergence rate in the law of large numbers for a sequence of Banach valued martingales.
Proof of Theorem 28. Applying Theorem 10, we obtain the implications (153)⇔(156)….

Let $(\mathbf{B}, \|\cdot\|)$ be a separable Banach space. In this section, we give a Marcinkiewicz–Zygmund type strong law of large numbers for weighted sums of $\mathbf{B}$-valued martingale differences $\{(X_j, \mathcal{F}_j)\}_{j \ge 1}$, and we obtain a Baum–Katz type theorem for weighted sums of identically distributed $\mathbf{B}$-valued martingale differences which extends Theorems 2 and 3 of Lanzinger and Stadtmüller [21]. Our results will be obtained by means of our main Theorems 2 and 10. We will need the following elementary result.
The following theorem is a Marcinkiewicz-Zygmund type strong law of large numbers for the weighted sums (161).
Proof of Theorem 32. Clearly, …. By Theorem 22, we see that … if and only if …. By the proof of Lemma 31, we have …, and by Theorem 2 with $q = 1$ and $L = 0$, we know that …. By Theorem 1, we see that …. By (166), we have …. From (173)–(175), we see that (171) holds. Thus, (170) holds, so that (168) holds.
To establish a general Baum–Katz type theorem for the weighted sums (161), we first introduce a definition and a technical lemma.

Definition 33. For a function $\varphi_{\rho}$ regularly varying at $\infty$ of index $\rho \neq 0$, we define $\varphi_{\rho}^{\leftarrow}$ as its inverse function.
The following lemma shows that the inverse function of a regularly varying function of index $\rho \neq 0$ is again regularly varying.
Proof. Let $s = \varphi_{\rho}(t)$ and define $l_2(s) := l(\varphi_{\rho}^{\leftarrow}(s))$. We have …. We will prove that $l_2(s)$ is slowly varying at $\infty$. We see that …. After a change of variable, we have …, where the factor $\exp\{\int \cdots\, dv\}$ is slowly varying at $\infty$, and so is $l^{*}(s)$, which proves the desired result.
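A numerical sketch of this fact (ours, not from the paper): take $\varphi(t) = t^2 \log t$, regularly varying of index $\rho = 2$ and increasing on $[e, \infty)$, invert it by bisection, and observe that the inverse scales like $s^{1/\rho}$, i.e. $\varphi^{\leftarrow}(\lambda s)/\varphi^{\leftarrow}(s) \to \lambda^{1/2}$:

```python
import math

def phi(t):
    """A regularly varying function of index rho = 2: phi(t) = t^2 log(t)."""
    return t * t * math.log(t)

def phi_inv(s, lo=math.e, hi=1e12, iters=200):
    """Numerical inverse of phi (increasing on [e, inf)) by bisection,
    bisecting in log scale because the bracket spans many decades."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if phi(mid) < s:
            lo = mid
        else:
            hi = mid
    return lo

# the inverse is regularly varying of index 1/rho = 1/2:
s = 1e10
ratio = phi_inv(4 * s) / phi_inv(s)
print(ratio)  # tends to 4 ** 0.5 = 2 as s -> infinity
```

At finite $s$ the ratio is slightly below 2 because of the slowly varying factor $\log t$, exactly the correction that the lemma's slowly varying function $l_2$ accounts for.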
if and only if ….

(b) When $b = \cdots/(\cdots + 1)$, (183) is implied by …, where $l$ is defined in (181); conversely, if $l(\infty) = \infty$ and the function $\varphi_0$ defined by (180)….

Remark 36. Theorem 35 also holds if (182) is replaced by … for some $\gamma \in (1, 2]$, $q \in [1, \infty]$, and $\delta \in (0, \ldots)$, where …. Of particular interest are the cases where the slowly varying functions $l$ and $l_1$ are constants or powers of the logarithmic function, which will be studied in the following corollaries. We first consider the case where $l$ and $l_1$ are constants.
if and only if …. Notice that the condition on $\varphi_p(t)$ implies in particular $\mathrm{E}\,\varphi_p(\|X_1\|) < \infty$, giving $\mathrm{E}\|X_1\|^p < \infty$. Therefore, the conclusion of the corollary is interesting only when the exponents in (189) are greater than $p$.
For any $t \ge 1$, we have …, where $l$ is defined in (181).
Proof. We proceed as in the proof of Lemma 3.4 of Gut [20].
Case 2 ($a = 1$). By the change of variables indicated, we see that (198) holds if and only if …. With the change of variable $v = \varphi^{\leftarrow}(u)$, we know that (206) holds if and only if …, which is equivalent to the first condition of (184). Thus, (198) is equivalent to the first condition of (184).

Law of Large Numbers for Weighted Sums of Banach Valued Martingale Differences

Let $(\mathbf{B}, \|\cdot\|)$ be a separable Banach space. In this subsection, we find sufficient conditions for the convergence of weighted sums of $\mathbf{B}$-valued martingale differences $(X_j)$. Letting …, we see that (226) follows from Theorem 4.
Let $\{a_{n,i} : n \ge 1, i \ge 1\}$ be a Toeplitz summation matrix; Theorem 2 of Pruitt [43] states that, for a sequence $(X_i)_{i \ge 1}$ of i.i.d. random variables with $\mathrm{E}|X_1| < \infty$ and …. In the following, we also consider the similar problem for arrays of $\mathbf{B}$-valued martingale differences. In this subsection, we consider complete convergence of weighted sums of $\mathbf{B}$-valued martingale differences $(X_i)$ of the form $\sum_{i=1}^{\infty} a_{n,i} X_i$. We extend and improve Corollary 1 of Ghosal and Chandra [19] and Theorems 2.2–2.4 of Li et al. [9]. We also generalize Theorem 2 of Yu [23].
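For intuition (a sketch of ours; Pruitt's actual hypotheses are conditions on the matrix $\{a_{n,i}\}$), the simplest Toeplitz summation matrix is the Cesàro one, $a_{n,i} = 1/n$ for $i \le n$ and $0$ otherwise, and the corresponding weighted sums of i.i.d. integrable variables converge to the mean:

```python
import random

def toeplitz_row(n):
    """Row n of the Cesaro summation matrix: a_{n,i} = 1/n for i <= n, else 0.
    This satisfies the Toeplitz conditions: each fixed column tends to 0
    as n grows, and every row sums to 1."""
    return [1.0 / n] * n

def weighted_sum(n, xs):
    """The weighted sum sum_i a_{n,i} X_i for row n of the Cesaro matrix."""
    return sum(a * x for a, x in zip(toeplitz_row(n), xs))

rng = random.Random(7)
xs = [rng.uniform(0.0, 2.0) for _ in range(20000)]  # i.i.d. with E X_1 = 1
print(weighted_sum(100, xs), weighted_sum(20000, xs))  # both near E X_1 = 1
```

The Cesàro row reduces the weighted sum to the ordinary sample mean; general Toeplitz matrices spread the mass differently across indices, which is where the martingale-difference results of this section come into play.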

7. Convergence Rates for Weighted Sums of Banach Valued Martingale Differences of the Form $\sum_{j=1}^n j^{a-1} X_j$