Optimal Bounds for the Variance of Self-Intersection Local Times


Let $\ell(n,x) = \sum_{k=1}^{n} \mathbf{1}(S_k = x)$ be the local time of $(S_k)_{k\in\mathbb{N}_0}$ at the site $x \in \mathbb{Z}^d$, and define for a positive integer $p$ the $p$-fold self-intersection local time
$$J_n(p) := \sum_{x\in\mathbb{Z}^d} \ell(n,x)^p. \tag{1}$$
We will denote the corresponding quantities for simple random walk in $\mathbb{Z}^d$ by $J_n^{\mathrm{SRW}}(p,d)$, or simply $J_n^{\mathrm{SRW}}(p)$ when the dimension is clear from the context.
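As a concrete illustration of these definitions, the local time and the $p$-fold self-intersection local time can be computed directly on a sampled path. The function name below is ours, and a one-dimensional simple random walk is assumed purely for the example:

```python
import numpy as np
from collections import Counter

def self_intersection_local_time(steps, p):
    """Compute J_n(p) = sum_x l(n, x)^p for one sampled path.

    `steps` are the walk increments; l(n, x) counts the visits of the
    partial sums S_1, ..., S_n to the site x, following the paper's
    definition of the local time."""
    path = np.cumsum(steps)              # S_1, ..., S_n
    counts = Counter(path.tolist())      # l(n, x) for every visited x
    return sum(c ** p for c in counts.values())

rng = np.random.default_rng(0)
steps = rng.choice([-1, 1], size=1000)   # 1d simple random walk
J2 = self_intersection_local_time(steps, 2)

# Sanity check: J_n(1) = sum_x l(n, x) = n for every path.
assert self_intersection_local_time(steps, 1) == 1000
```

Since $\ell(n,x)^2 \ge \ell(n,x)$, one always has $J_n(2) \ge n$, which the sampled value `J2` reflects.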
Let $\Sigma^+$ and $\Sigma$ be, respectively, the semigroup and the group generated by the support of $X$:
$$\Sigma^+ := \{x \in \mathbb{Z}^d \mid \mathbb{P}(S_n = x) > 0 \text{ for some } n \ge 0\}, \qquad \Sigma := \{x \in \mathbb{Z}^d \mid x = y - z \text{ for some } y, z \in \Sigma^+\}. \tag{2}$$
Following Spitzer [1], we call the random variable $X$ and the random walk it generates genuinely $d$-dimensional if the group $\Sigma$ is $d$-dimensional.
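The notion of a genuinely $d$-dimensional walk can be checked numerically: the dimension of $\Sigma$ equals the rank of the lattice spanned by differences of support points. The sketch below is our own helper (numerical matrix rank is used as a proxy for the lattice rank); it distinguishes a walk supported on the diagonal of $\mathbb{Z}^2$ from the planar simple random walk:

```python
import numpy as np
from itertools import product

def group_dimension(support):
    """Dimension of the group Sigma generated by the support of X:
    the rank of the lattice spanned by the differences y - z of
    support points (rational rank via numpy, which coincides with
    the lattice dimension)."""
    diffs = [np.subtract(y, z) for y, z in product(support, repeat=2)]
    return int(np.linalg.matrix_rank(np.array(diffs)))

# Supported on the diagonal of Z^2: not genuinely 2-dimensional.
assert group_dimension([(1, 1), (-1, -1)]) == 1
# Planar simple random walk support: genuinely 2-dimensional.
assert group_dimension([(1, 0), (-1, 0), (0, 1), (0, -1)]) == 2
```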
The quantity $J_n(p)$ has received considerable attention in the literature due to its relation to self-avoiding walks and random walks in random scenery. In particular, let the random scenery $\{\xi_x,\ x \in \mathbb{Z}^d\}$ be a collection of i.i.d. random variables, independent of $(S_k)_k$, and define the process $Z_0 = 0$, $Z_n = \sum_{k=1}^{n} \xi_{S_k}$. Then $(Z_n)_n$ is commonly referred to as random walk in random scenery and was introduced in Kesten and Spitzer [2], where functional limit theorems were obtained for $Z_{[nt]}$ under appropriate normalization in the case $d = 1$. The case $d = 2$, with $X$ centered and with nonsingular covariance matrix, was treated in [3], where it was shown that $Z_{[nt]}/\sqrt{n \log n}$ converges weakly to Brownian motion. As is obvious from the identities $Z_n = \sum_{x\in\mathbb{Z}^d} \ell(n,x)\,\xi_x$ and $\mathrm{var}(Z_n) = \mathbb{E}[J_n(2)]\,\mathrm{var}(\xi)$, limit theorems for $(Z_n)_n$ usually require asymptotic results for the local times of the random walk $(S_n)_n$.
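The identity $Z_n = \sum_x \ell(n,x)\,\xi_x$ is a pathwise statement and can be verified exactly on a simulated path. The setup below (one-dimensional walk, Gaussian scenery) is an illustrative assumption of ours, not taken from the paper:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
n = 500
steps = rng.choice([-1, 1], size=n)
path = np.cumsum(steps)                          # S_1, ..., S_n

sites = sorted(set(path.tolist()))
xi = {x: rng.standard_normal() for x in sites}   # scenery on visited sites

Z_n = sum(xi[s] for s in path.tolist())          # Z_n = sum_k xi_{S_k}
ell = Counter(path.tolist())                     # local times l(n, x)
Z_n_alt = sum(ell[x] * xi[x] for x in sites)     # sum_x l(n, x) xi_x

# The two representations agree path by path (up to float rounding).
assert abs(Z_n - Z_n_alt) < 1e-9
```

Averaging `Z_n ** 2` over many independent sceneries and paths would likewise estimate $\mathrm{var}(Z_n) = \mathbb{E}[J_n(2)]\,\mathrm{var}(\xi)$ for centered $\xi$.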
In this paper, motivated by the results of Spitzer [1] for genuinely $d$-dimensional random walks and the approach of Becker and König [10], we study the asymptotic behavior of $\mathrm{var}(J_n(p))$ without imposing any moment assumptions on the random walk. The central idea behind our approach is to compare the self-intersection local times $J_n(p)$ of a general $d$-dimensional walk with those of its symmetrised version. In addition, we compare the self-intersection local times of a general $d$-dimensional random walk with those of the $d$-dimensional simple symmetric random walk $(S_n^{\mathrm{SRW}})_{n\in\mathbb{N}_0}$, whose variance asymptotics are well known up to positive constants. Several other cases have been treated in the literature, using a variety of methods.
A careful look at the literature reveals that the most difficult case in $d = 2$ is the near-transient recurrent case, where $\mathbb{P}(S_n = 0) \sim c/n$; this corresponds to genuinely 2-dimensional symmetric recurrent random walks and will be referred to as the critical case. Surprisingly enough, the variance of the self-intersection local times in the critical case is asymptotically the largest.

Theorem 1. Let $X, X_1, X_2, \ldots$ be independent, identically distributed, and genuinely $d$-dimensional $\mathbb{Z}^d$-valued random variables, for any $d \ge 1$. Then there exist positive constants $C > c > 0$, depending on $d$ and the distribution of $X$, such that the stated bounds hold for all $n$ large enough.

The result was motivated by [1, 10] and improves related results of Becker and König for $d = 3$ and $d = 4$. Several cases treated in [3, 4, 10-13] can then be obtained as particular cases.
Moreover, we also show the surprising converse. More precisely, we show that the right asymptotic behaviour of $\mathrm{var}(J_n)$ implies that the jumps must have zero mean and finite second moment.

Theorem 2. Let $X, X_1, X_2, \ldots$ be independent, identically distributed, and genuinely $d$-dimensional with $d \le 3$. If $\mathrm{var}(J_n)$ has the asymptotic behaviour of the finite-variance case, then $\mathbb{E}|X|^2 < \infty$ and $\mathbb{E}X = 0$.
For any genuinely $d$-dimensional random walk with finite second moments and zero mean, the asymptotic behaviour of $\mathrm{var}(J_n(p))$ is similar to that of the $d$-dimensional simple symmetric random walk. Moreover, it follows from our general bounds (see Proposition 4 and Corollary 7) that the asymptotic results for the genuinely $d$-dimensional random walk can be reproduced by those of a symmetric one-dimensional random walk with appropriately chosen heavy tails, as was indicated by Kesten and Spitzer [2]. The proofs are based on adapting the Tauberian approach developed in [13].
Notice that, since we assume that $s_1 \le t_1$, the terms with $v = 1$ vanish, while the terms with $v = 2$ will be considered separately.
Terms with $v \ge 3$. We first consider the sum over the terms with $v \ge 3$, for which we drop the negative part to obtain an upper bound, and then sum over the free index $k_0$. For any $k = (k_1, \ldots, k_{2p-1})$ with $v(k) = v$, exactly $2p - 1 - v$ of its elements are equal to $0$, and therefore Assumption (A) with $x = 0$ applies to the corresponding factors. Letting $(\tilde S_n)_{n\in\mathbb{N}_0}$ denote an independent copy of the random walk $(S_n)_{n\in\mathbb{N}_0}$, we may assume without loss of generality the corresponding ordering of the two index sequences.

International Journal of Stochastic Analysis
Let $F_n := \sum_{i=0}^{n} f(i)$. Since $f$ is nonincreasing, each successive factor can be bounded, and iterating this procedure, for $v \ge 3$, we obtain $\Delta_{n,v} \le \Delta_{n,3}\,\rho_n^{\,v-3}$ for an appropriate factor $\rho_n$. Combining the two bounds and summing over $v = 3, \ldots, 2p-1$, we arrive at a bound with a constant $K(p)$ depending only on $p$.
Terms with $v = 2$. Next we consider the sum over the terms with $v = 2$, which occur when, for some $i$, the indices $t_1, \ldots, t_p$ all lie in $[s_i, s_{i+1}]$. It is then easy to see that this sum is bounded above accordingly.

The following corollary provides explicit bounds in the cases that are usually considered in the literature.
Corollary 5. Assume that the conditions of Proposition 4 are satisfied with $f(n) = c\,n^{-\alpha}$ and $g(m, v) = c\,m^{-\alpha-1}(m \wedge v)$. Then the stated bounds hold.

It is straightforward to see that Corollary 5 includes random walks with mean zero and finite second moment; for example, $d = 2$ corresponds to $\alpha = 1$ and $d = 3$ to $\alpha = 3/2$. Therefore several relevant results in [3, 7-13] are obtained as special cases of Corollary 5 and extended to the case of independent but not necessarily identically distributed variables, for example by applying the local limit theorem, as conducted in [8].
Also, when the random walk increment $X$ is in the domain of attraction of the one-dimensional symmetric Cauchy law [13, 14], or in the case of planar random walks with finite second moments [3, 7-9, 11], it is well known that the conditions of Proposition 4 are satisfied with $f(n) = c/n$ and $g(m, v) = c\,m^{-2}(m \wedge v)$.
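The planar estimate $f(n) = \mathbb{P}(S_n = 0) \sim c/n$ can be checked exactly for the simple random walk: after a 45-degree rotation its two coordinates evolve as independent one-dimensional simple walks, giving $\mathbb{P}(S_{2n} = 0) = \bigl(\binom{2n}{n} 4^{-n}\bigr)^2 \sim 1/(\pi n)$. A quick numerical confirmation (our own sketch):

```python
from math import comb, pi

def return_prob_2d(n):
    """P(S_{2n} = 0) for the planar simple random walk: the square of
    the one-dimensional return probability, by the 45-degree rotation
    into two independent 1d simple walks."""
    p1 = comb(2 * n, n) / 4 ** n     # 1d: P(S_{2n} = 0) ~ 1/sqrt(pi n)
    return p1 ** 2

# P(S_{2n} = 0) ~ 1/(pi n), i.e. f(m) ~ (2/pi)/m along even times m.
n = 400
assert abs(return_prob_2d(n) * pi * n - 1) < 0.01
```

The modest choice $n = 400$ keeps the exact binomial computation within floating-point range while the relative error of the approximation is already of order $1/n$.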
However, we can do better for symmetric variables and show that condition (A) implies (B), which, together with the comparison technique, motivates the following results. For a real number $x$, we write $[x]$ for the integer part of $x$.
Then there exists another positive constant $C = C(p, d, X)$ such that $\mathrm{var}(J_n(p))$ satisfies the stated upper bound.

Proof of Proposition 6. Using the notation of Proposition 4, we work with positive integers $m$, $j$, $n$, and $v$, with $j + m \le n$, signs $\epsilon_i = \pm 1$, and arbitrary $x \in \mathbb{Z}^d$. To find $g(m, v)$, notice that since $f(n) \ge 0$, a telescoping argument yields one bound, while for $m \le v$ a tighter bound is available. Combining the two bounds above, it follows that (B) is satisfied with $g(m, v) := f([m/2]) \min(m, v)/m$. Thus all conditions of Proposition 4 are satisfied and the result follows.
The following corollary allows for the case where $f(n)$ is regularly varying.
The following example of a genuinely 2-dimensional recurrent walk with infinite variance was motivated by Spitzer [1, p. 87].

Bounds for Identically Distributed Variables
Proposition 9 (general upper bound for i.i.d.). Let $X, X_1, X_2, \ldots$ be independent, identically distributed, $\mathbb{Z}^d$-valued random variables. Suppose that for any $x \in \mathbb{Z}^d$ and all positive integers $m$, $j$, $n$, and $v$, with $j + m \le n$, the stated condition holds, where $\{f(n)\}_{n\in\mathbb{N}_0}$ is a nonincreasing sequence. Then for some constant $C = C(p)$ the stated upper bound holds.

Proof of Proposition 9. By inspecting the proof of Proposition 6, we notice that we only need to bound the remaining term. Consider a typical ordering and change variables to $(k_0, \ldots, k_{2p})$ with $k_0 + \cdots + k_{2p} = n$. We keep $k_i$ fixed for $i \ne j, j+l$ and sum over $k = k_j + k_{j+l}$ from $0$ to some $K = K(n, \{k_i\}_{i \ne j, j+l})$. Then, for given $k_{j+1}, \ldots, k_{j+l-1}$, the term in the sum can be bounded in terms of $k^* = \max\{k_{j+1}, \ldots, k_{j+l-1}\}$. The result follows by summing over all indices apart from $k^*$ and changing the order of summation.

Proofs of Main Results
Proof of Theorem 1. We apply a comparison argument that has been found useful in many areas (e.g., Montgomery-Smith and Pruss [15], and Lefèvre and Utev [16]). More specifically, we bound the quantity $\mathrm{var}(J_n)$ by the corresponding quantity for the symmetrised random walk. Following Spitzer's argument, we notice that this reduction holds with $\varphi(\theta) = \mathbb{E}[\exp(i\theta \cdot X_1)]$. Since $|\varphi(\theta)|^2$ is the characteristic function of a symmetric random variable in $\mathbb{Z}^d$, for some positive $c$ we have $1 - |\varphi(\theta)|^2 \ge c|\theta|^2$, and hence the required decay follows. The result follows from Proposition 9 applied with $f(n) = c\,n^{-d/2}$.
The proof of Theorem 2 will be based on the following lemma.
Note that for $d = 1$ the situation is much simpler, since there $\mathrm{var}(J_n^{\mathrm{SRW}}(p))$ can be read off directly from the expected local times $\mathbb{E}\,\ell^{\mathrm{SRW}}(n, x)$.

Proof of Theorem 3. We first give the proof for the case $d = 1$.
As in the proof of Proposition 4, we begin from expression (10) and define the sequences $s_i$ and $t_i$ for $i = 1, \ldots, 2p-1$, and the quantity $v(k) = \sum_{i=1}^{2p-1} |\epsilon_i|$. Recall that $v(k)$ measures the interlacement of the two sequences $s_1, \ldots, s_p$ and $t_1, \ldots, t_p$. For example, $v(k) = 1$ occurs when either $s_p \le t_1$ or $t_p \le s_1$, in which case the contribution vanishes by the Markov property. On the other hand, $v(k) = 2$ occurs when, for example, $t_1, \ldots, t_p \in [s_i, s_{i+1}]$ for some $i$. Finally, $v(k) = 3$ occurs when, for example, part of the $t$-sequence lies in one gap of the $s$-sequence and the rest in another. From the proof of Proposition 4, and using the bound $\mathbb{P}(S_n = 0) \le c/n$, the terms of the sum are bounded above by $c\,n^2 (\log n)^{2p-1-v(k)}$, and thus the leading term appears when $v(k) = 2$ or $3$, with the other terms giving strictly lower order. We will therefore analyze these two situations in detail in order to derive the exact asymptotic constants. When $v = 3$, the two terms in the difference individually give the correct order and can be treated by classical Tauberian theory. For $v = 2$, however, the two terms give the correct order only when considered together. This forbids the use of Karamata's Tauberian theorem, since the monotonicity restriction would roughly require $X$ to be symmetric. Thus the complex Tauberian approach developed in [13] is required to justify the answer.
Case 1 ($v(k) = 3$). Assume that part of the sequence $l = \{t_1, \ldots, t_p\}$ lies between $s_i$ and $s_{i+1}$ and the rest between $s_j$ and $s_{j+1}$. Then, using a change of variables, we rewrite the positive term in (10), and an asymptotic expansion from [13] applies. By direct calculations and the Fourier inversion formula, the positive term is evaluated to leading order. Next we consider the negative term in (10); by direct calculations, (6), and Fourier inversion, the internal sum behaves in the same way to leading order. The approach developed in [13] can then be used to bound the error terms and show that the resulting quantity is asymptotically $c_p\, n^2 (\log n)^{2p-4}$, with $c_p$ an explicit constant depending on $p$.
The case $d = 2$ is very similar, so we move on to the case $d = 3$.
The difference of the two terms is trickier to compute. As usual, we consider the associated power series.