Properties of the Parabolic Anderson Model and the Anderson Polymer Model

In this article we examine some properties of the solutions of the parabolic Anderson model. In particular, we discuss intermittency of the field of solutions of this random partial differential equation: when it occurs, and what the field looks like when intermittency does not hold. We also explore the behavior of a polymer model defined by a Gibbs measure based on solutions of the parabolic Anderson equation.


Introduction
It has been twenty years since the publication of the seminal work "Parabolic Anderson Problem and Intermittency" by Carmona and Molchanov. Their memoir has inspired an enormous amount of research on the subject in the intervening years. In this paper we hope to give an account of what is now known about the behavior of solutions of the parabolic Anderson equation, as well as the behavior of typical paths under the Anderson polymer measure. Perhaps the initial, and still most compelling, reason for studying the parabolic Anderson model is physical: in the three-dimensional case it provides a model for the growth of magnetic fields in young stars. In addition, it has an interpretation as a population growth model. Further, since the work of [1] it has also provided a model for a polymer in a random medium, and it is part of the theory of stochastic partial differential equations. Besides its interest as what can be described as a canonical object, in the sense that Brownian motion is a canonical object, its interest also derives from its relations to many other important models. In particular, it is a close relative of, for example, the stepping stone model [2], catalytic branching [3], the super random walk, the Burgers equation [4], and the KPZ equation [4]. The original motivation of Anderson concerned the question of whether there are bound states for electrons in crystals with impurities. The crystal structure is taken to be Z^d, and the impurities are modeled by means of an i.i.d. random field {ξ_x : x ∈ Z^d}. The phenomenon of localization can be expressed as the existence of eigenfunctions of the Hamiltonian H = κΔ + ξ.
The equation satisfied by a magnetic field generated by a turbulent flow leads naturally to an analogous parabolic equation with a time-varying random field {v(t, x) : t ∈ [0, ∞), x ∈ R^3}, as opposed to the time-stationary field {ξ_x : x ∈ Z^d} which arises in the original localization question. The difference is that the random medium changes rapidly in the case of the magnetic field generated by turbulent flow, whereas the impurities in the localization problem can be considered unchanging in time. In the latter case, the fluctuations of the medium are slow compared to the phenomenon of interest, the capture of electrons.
The magnetic field in a star is generated by the turbulent flow of electrical charges. Turbulent flows are modeled by means of a randomly evolving velocity field; see, for example, [5] or [6] for a canonical example. Let {v(t, x) : t ∈ [0, ∞), x ∈ R^3} denote such a velocity field on R^3. Typically, this field is incompressible, Markov in time, and Gaussian, together with other spectral properties. The magnetic field generated by charges carried by the Lagrangian flow of {v(t, x)}, H = (H_1, H_2, H_3), is incompressible and evolves according to an equation of the form ∂_t H = κΔH + ∇ × (v × H) (1), where in this case Δ is the standard Laplacian on R^3. This equation has been studied in [7][8][9][10][11], to name but a few references. We mention now some of the results of [9] on a tractable version of the model, obtained when R^3 is replaced by Z^3 and the time correlation of the velocity field is taken to 0. In this version of the model, the field {v(t, x) : t ∈ [0, ∞), x ∈ R^3} is replaced by a 3 × 3 matrix Wiener process {W(t, x) : t ≥ 0, x ∈ Z^3} on some probability space (Ω, F_t, t ≥ 0, P), whose entries satisfy E[W_{ij}(t, x)W_{kl}(s, y)] = (t ∧ s) δ_0(x − y) Γ_{ijkl}.
This last inequality is the definition of full intermittency: the second moment grows at a strictly faster exponential rate than twice that of the first moment. As a consequence, the field H(t, x) has widely separated large peaks. This explains the well-known phenomenon of sun spots: they are widely separated sites of high magnetic field energy. The main question of interest in astrophysics is whether the magnitude of H grows exponentially a.s.; in other words, is the a.s. Lyapunov exponent positive? This is the question of whether lim_{t→∞} (1/t) log |H(t, x)| > 0 a.s. A further question regards the physically relevant asymptotic behavior of the a.s. Lyapunov exponent λ(κ) as κ → 0.
This regime is relevant since κ is the inverse Reynolds number, which in this situation is very small. Another interesting question is whether e^{−t/2} H(t, x) has a limiting distribution. These questions have an affirmative answer in the scalar case and remain difficult open problems in the multidimensional model just discussed.

Parabolic Anderson Model
The commonly studied parabolic Anderson equation is a scalar version of the magnetic field equation (1). Most progress has occurred on Z^d, and we will treat this case first. The velocity field {v(t, x) : t ≥ 0, x ∈ R^d} may be replaced by a random environment that is either a time-stationary random field {ξ_x : x ∈ Z^d} or an evolving random field {ξ_x(t) : x ∈ Z^d, t ≥ 0}. Typically, the variables ξ_x and ξ_x(·), x ∈ Z^d, are in both cases assumed to be i.i.d. The time-stationary field can be thought of as a model where the phenomenon of interest evolves much more rapidly than the environment. The nonstationary case models a phenomenon whose evolution is on a time scale comparable to that of the random environment.
An enormous literature has been developed in the time-stationary case; a partial sample of works on this topic includes [12][13][14][15][16][17]. In this paper we will only discuss the nonstationary case. When considering the discrete model, that is, on Z^d, we take the operator Δ to be the discrete Laplacian mentioned above, whereas it is the ordinary Laplacian when the model is set in R^d. The parameter κ > 0 in the model is called the diffusivity. The parabolic Anderson model is defined as a parabolic partial differential equation; a canonical version of this model has the random environment provided by a white noise potential. In the Z^d case, one takes W = {W_x : x ∈ Z^d} to be i.i.d. standard one-dimensional Brownian motions defined on some probability space (Ω, F_t, P), where F_t = σ{W_x(s) : x ∈ Z^d, 0 ≤ s ≤ t}. This field is δ-correlated in time and space, that is, E[W_x(t)W_y(s)] = (t ∧ s) δ_0(x − y), where E denotes expectation with respect to P.
The differential form of the parabolic Anderson equation is then du(t, x) = κΔu(t, x) dt + u(t, x) ∂W_x(t). The notation ∂W_x(t) indicates the Stratonovich differential (see [18] for a description of Stratonovich differentials), and this is preferred over the Itô differential because of the simplicity of the Feynman-Kac representation which will appear shortly. The equivalent integral formulation is obtained by integrating in time. The differential formulation can be expressed, converting to the Itô differential, as du(t, x) = κΔu(t, x) dt + (1/2)u(t, x) dt + u(t, x) dW_x(t). At times it will be convenient to discuss the Itô solution u_I defined by du_I(t, x) = κΔu_I(t, x) dt + u_I(t, x) dW_x(t). The relation between the two is that u_I(t, x) = e^{−t/2} u(t, x). Typically, it is assumed that u_0 ≥ 0 and that it is bounded from above, which guarantees existence and uniqueness of the solution. The positivity of u_0 ensures that u(t, x) ≥ 0 for all (t, x) ∈ [0, ∞) × Z^d. Fundamental information on this equation, including existence and uniqueness results, and its applications are contained in [19]. The field of solutions U(t) ≡ {u(t, x) : x ∈ Z^d} exhibits interesting behavior, as revealed by the growth properties of its moments, which will be discussed at length below. At the risk of stating the obvious, we stress that the random variables in the field {u(t, x) : x ∈ Z^d} are dependent; the correlation structure of this field is examined in detail in Theorem 2 later in this paper. The solution of (20) has a Feynman-Kac representation as an average over a path space. This is done by means of a family of measures {P^x_κ : x ∈ Z^d}, taken to be the ones that make {X_s, s ≥ 0} the pure jump Markov process on Z^d with generator κΔ. This process satisfies P^x_κ(X_0 = x) = 1; X waits at x for a random amount of time τ = inf{s : X_s ≠ x} which is exponentially distributed with parameter 2dκ, P^x_κ(τ ≥ t) = e^{−2dκt}, and then selects, using the uniform measure on its neighbors, one of the 2d neighbors of x, jumps there, and then proceeds afresh as if starting from time 0 at the new position.
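The jump dynamics just described can be sketched in a few lines of code. This is an illustrative simulation, not from the paper; the function name and interface are our own, but the mechanism (Exp(2dκ) holding times, uniform choice among the 2d neighbors) is the one stated above.

```python
import random

def simulate_walk(x0, t_max, kappa, d, rng):
    """Simulate the continuous-time simple random walk on Z^d with
    generator kappa*Delta: the walk waits at each site an Exp(2*d*kappa)
    amount of time, then jumps to one of its 2d nearest neighbours,
    chosen uniformly."""
    t, x = 0.0, list(x0)
    path = [(0.0, tuple(x))]                  # (jump time, position) pairs
    while True:
        t += rng.expovariate(2 * d * kappa)   # exponential holding time
        if t > t_max:
            break
        axis = rng.randrange(d)               # uniform neighbour choice
        x[axis] += rng.choice((-1, 1))
        path.append((t, tuple(x)))
    return path
```

Averaging a functional of such paths against the noise, as in the Feynman-Kac formula below, is then a matter of repeated calls to this routine.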
The solution to (20) can then be expressed as u(t, x) = E^x_κ[u_0(X_t) exp(∫_0^t dW_{X_{t−s}}(s))]. Unless explicitly mentioned otherwise, we will take u_0(x) ≡ 1.

Relations to Other Processes and Equations
The parabolic Anderson equation is closely related to equations for many other models.
If h = −1 in (26) and u_0(x) + v_0(x) ≡ 1, then u(t, x) + v(t, x) ≡ 1 for t ≥ 0, and u(t, x) solves the stepping stone model from population genetics, which has been the subject of the works [2, 16, 26] among others. Finally, when h = 1 in (26) and u_0 ≡ v_0, one gets u(t, x) ≡ v(t, x), and u(t, x) is the solution of (20). Other versions of noise have been considered as well, and this can lead to substantial technical difficulties. For example, the recent works [27][28][29][30][31] replace the field W by catalysts coming from interacting particle systems such as the voter model or exclusion processes. Among the problems unresolved here is the existence of the a.s. Lyapunov exponent: the subadditive argument which worked so well for the environment W fails for these models.

Moment Lyapunov Exponents
One of the most interesting properties of the solutions of (20) is intermittency, which is defined in terms of the moment Lyapunov exponents. These are the limits, for p > 0, γ_p(κ) = lim_{t→∞} (1/t) log E[u(t, x)^p]. The existence of these limits was proved in [19]. For p = 1 one uses (116) to compute γ_1(κ). This is easily done with the observation that, for any fixed path {X(t − s) : 0 ≤ s ≤ t}, the random variable ∫_0^t dW_{X(t−s)}(s) is Gaussian with mean 0 and variance t. Thus, by Fubini's theorem, E[u(t, x)] = e^{t/2}, so that γ_1(κ) = 1/2. An interesting quantity, the overlap ∫_0^t δ_0(X_s − X′_s) ds of two independent copies X, X′ of the walk, arises when computing γ_2(κ). Here we will use P^{x,⊗2}_κ to denote the product measure of two such copies. This implies E[u(t, x)^2] = e^t E^{x,⊗2}_κ[exp(∫_0^t δ_0(X_s − X′_s) ds)]. For general p ≥ 1, Hölder's inequality implies that γ_p(κ)/p is nondecreasing, and p → γ_p(κ) is convex. Full intermittency is then the property that γ_p(κ) is strictly convex on [1, ∞), that is, γ_2(κ) − 2γ_1(κ) > 0. It is by now classical, and was proven in [19], that in dimensions d = 1, 2 full intermittency holds for all κ > 0, but in dimensions d ≥ 3 full intermittency holds only for 0 < κ < κ_2(d), where κ_2(d) is a dimension-dependent constant. This was done in [19] by noting that in dimensions d = 1 and 2 the operator 2κΔ + δ_0 has a positive eigenvalue for any κ > 0. By contrast, in dimensions d ≥ 3 there is a positive constant κ_2(d) such that 2κΔ + δ_0 has a positive eigenvalue only for 0 < κ < κ_2(d). When there is a positive eigenvalue λ_0(κ), one can conclude that the second moment of the Itô solution grows at the strictly positive exponential rate λ_0(κ). When the spectrum of 2κΔ + δ_0 has no positive eigenvalue, that is, in dimension d ≥ 3 with κ > κ_2(d), the second moment of the Itô solution grows subexponentially. Using this with (120), we have the following consequence due to Carmona and Molchanov [19].
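The computation γ_1(κ) = 1/2 rests on the lognormal moment identity E[e^Z] = e^{t/2} for Z ~ N(0, t). A quick numerical check of that identity (a sketch with our own function name; plain trapezoidal integration of the Gaussian density):

```python
import math

def expected_exponential(t, half_width=40.0, n=200000):
    """Numerically evaluate E[exp(Z)] for Z ~ N(0, t) by trapezoidal
    integration.  The lognormal moment formula gives exp(t/2), which is
    the source of gamma_1(kappa) = 1/2."""
    h = 2 * half_width / n
    total = 0.0
    for i in range(n + 1):
        z = -half_width + i * h
        f = math.exp(z) * math.exp(-z * z / (2 * t)) / math.sqrt(2 * math.pi * t)
        total += f if 0 < i < n else f / 2    # trapezoid end weights
    return total * h
```

Since E[u(t, x)] is an average of such exponentials over paths, Fubini's theorem turns this one-dimensional identity into E[u(t, x)] = e^{t/2}.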
Below we will give a probabilistic proof of this result which identifies the value κ_2(d). In addition, it was shown in [19] that there is a sequence κ_p(d), proved to be strictly increasing in [32], marking the onset of p-th moment intermittency. This has been refined, extended, and improved in the recent work of Greven and den Hollander [32]. The physical phenomenon of intermittency is the property that a random field possesses widely separated high peaks. The best-known field exhibiting this property is the magnetic field energy in a star. In our sun, this exhibits itself as sun spots, where most of the magnetic field energy is concentrated, thereby lowering the temperature and causing the darkening which appears as a spot. Intermittency properties of the field {u(t, x) : x ∈ Z^d} were established in [19], and this will be discussed below.
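The spectral criterion can be checked numerically in d = 1, where the eigenvalue of κΔ + δ_0 is available in closed form. The sketch below is our own code (a finite cycle is used to approximate the operator on Z); the intermittency criterion itself uses 2κ in place of κ.

```python
import math
import numpy as np

def top_eigenvalue(kappa, N=401):
    """Largest eigenvalue of kappa*Delta + delta_0 on the cycle Z/NZ,
    with Delta f(x) = sum over neighbours of (f(y) - f(x)); for large N
    this approximates the operator on Z, i.e. the d = 1 case."""
    H = np.zeros((N, N))
    for i in range(N):
        H[i, (i + 1) % N] += kappa
        H[i, (i - 1) % N] += kappa
        H[i, i] -= 2.0 * kappa
    H[0, 0] += 1.0                      # the rank-one potential delta_0
    return float(np.linalg.eigvalsh(H)[-1])
```

In d = 1 the eigenvalue equation reduces, via the lattice Green function, to λ(λ + 4κ) = 1, that is, λ(κ) = √(4κ² + 1) − 2κ, which is positive for every κ > 0, consistent with the claim that in low dimensions a positive eigenvalue always exists.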

Itô Solution and Probabilistic Proof of Intermittency
The Itô solution u_I(t, x) is the solution when the Itô differential is used in (20). The relation between u and u_I is simply u_I(t, x) = e^{−t/2} u(t, x). Defining the Itô moment exponents γ_p^I(κ) accordingly, full intermittency holds if and only if γ_2^I(κ) > 0. We show that in dimensions greater than 2 the Itô solution u_I(t, x) has bounded second moment for κ > κ_2(d) and that the second moment grows exponentially for κ ≤ κ_2(d). This was shown in [19] using the spectral considerations outlined in the previous section, so here we give a probabilistic argument which was alluded to in [19] but does not seem to have appeared anywhere. First, recall that the jump rate of X is 2dκ. Thus, if X′ is an independent copy of X, the jump rate of the difference walk X − X′ is 4dκ. Now we introduce some helpful notation: the successive sojourn and excursion times of the difference walk at the origin. If the underlying chain is recurrent, then all of these stopping times are finite; if the chain is transient, then only a finite number of them are finite. Set, for i ≥ 1, S_i to be the duration of the i-th sojourn at 0 and T_i to be the duration of the i-th excursion from 0. The distribution of each S_i with respect to P^{⊗2}_0 is exponential with parameter 4dκ. In the recurrent case, the random variables S_1, S_2, . . . and T_1, T_2, . . . are all finite and independent; they are also independent in the transient case "up to the time they all become infinite." The skeletal random walk of the difference walk can now be defined using these stopping times: this is the Markov chain on Z^d which keeps track of the sites visited, namely Z_0 = X_0 − X′_0 and Z_i the position after the i-th excursion, i ≥ 1.
Observe that if K = Σ_{i≥0} δ_0(Z_i) counts the visits of the skeletal walk to the origin, then K is a geometric random variable with parameter q = P^{⊗2}_0(no return to the origin), which is independent of κ. For d ≥ 3 it is well known that q > 0; see, for example, [2]. Since K and the random variables S_i are independent, an easy argument shows that the total time spent at the origin is Σ_{i=1}^{K} S_i, an exponentially distributed random variable with parameter 4dκq with respect to the measure P^{⊗2}_0. We remark that, by stationarity of the field W and the choice of u_0 ≡ 1, the second moment of the Itô solution is controlled by the exponential moment of this total time at the origin. Consequently, since an exponential random variable with parameter 4dκq has a finite exponential moment precisely when 4dκq > 1, the second moment stays bounded for large κ, and there is no intermittency for large κ in dimensions d ≥ 3.
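The constant q = P(no return to the origin) can be estimated by simulation. The following sketch is our own code (the truncation at a finite number of steps biases the estimate slightly upward) and illustrates the Polya dichotomy behind the d ≥ 3 assumption:

```python
import random

def no_return_probability(d=3, n_walks=4000, n_steps=400, seed=1):
    """Monte Carlo estimate of q = P(discrete-time simple random walk on
    Z^d never returns to 0), truncated at n_steps steps.  By Polya's
    theorem q > 0 iff d >= 3; the true value for d = 3 is about 0.66."""
    rng = random.Random(seed)
    escapes = 0
    for _ in range(n_walks):
        x = [0] * d
        for _ in range(n_steps):
            axis = rng.randrange(d)
            x[axis] += rng.choice((-1, 1))
            if not any(x):
                break                 # returned to the origin
        else:
            escapes += 1              # no return within n_steps
    return escapes / n_walks
```

For d = 1 or 2 the estimate shrinks toward 0 as the truncation horizon grows, reflecting recurrence.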
For d = 1 or 2, however, we proceed as follows to show that there is intermittency for all κ > 0. It will be useful to make a change of variables and to set up some more notation.
We return now to d ≥ 3, so that q > 0, and show that intermittency holds for κ < 1/(4dq). Again, it suffices to show lim_{t→∞} (1/t) log E^{1/2}_0[e^{L(t)}] > 0, where L(t) denotes the local time at the origin. Recall that the S_i are exponentially distributed with parameter 1 with respect to P^{1/2}_0. Standard large deviation estimates (see, e.g., [33, Theorem 9.3]) show that, for fixed a > 0, the sums of the sojourn times S_i and of the excursion times T_i can be controlled. As in the previous paragraph, for every ε > 0 we can find a constant such that, choosing the parameters as in (61) and recalling (56), for all sufficiently large t the exponential moment of L(t) grows at a strictly positive exponential rate. Therefore, lim_{t→∞} (1/t) log E^{1/2}_0[e^{L(t)}] > 0 in the stated range of κ, and the analogous argument shows that intermittency holds in dimension d = 1 or 2 for every κ > 0.

Covariance Structure and Association
Now we will examine the covariance structure of the field U(t) in the intermittent regime, that is, when κ < κ_2(d), so that 2γ_1(κ) < γ_2(κ). Recall that this range of κ corresponds exactly to the range for which the operator 2κΔ + δ_0 has a positive eigenvalue. We will also establish that this field is associated. The mixed second moments E[u(t, x)u(t, y)] are significant in understanding the structure of the field U(t), and their asymptotics can be evaluated as follows. There is a scaling relation, since the time-changed process X_{(2κ)^{−1}·} is a rate-2 simple symmetric random walk on Z^d with respect to P^x_{2κ}. The function w_{t,κ}(x) ≡ w^1_{t,κ}(x) arises as the partition function of a homopolymer [34], and by the Feynman-Kac formula, w_{t,κ}(x) solves ∂_t w_{t,κ}(x) = κΔw_{t,κ}(x) + δ_0(x)w_{t,κ}(x), w_{0,κ}(x) ≡ 1. The spectrum of the operator on the right-hand side lies in [−4dκ, 0] ∪ {λ_0(κ)}, and for d ≥ 3 there is a dimension-dependent κ_c(d) such that the positive eigenvalue exists only for κ < κ_c(d). Here λ_0(κ) > 0 is a simple eigenvalue, and the portion [−4dκ, 0] is purely a.c. spectrum. In fact, one now sees how κ_c(d) is determined by the constant κ_2(d) from the section on intermittency. We denote by ψ_κ the eigenfunction corresponding to λ_0(κ) and note that it is given, see [34], by an explicit formula involving the symbol (Fourier transform) of κΔ and an integral over the d-dimensional torus T^d. The representation (74) can be used to establish the exponential decay of ψ_κ: there are positive constants c = c(κ, d) and C such that ψ_κ(x) ≤ Ce^{−c|x|}. By the spectral theorem, letting E_λ denote the resolution of the identity for the operator, one obtains a spectral representation for w_{t,κ}. The following result was used in [19] to prove a central limit theorem for sums of the form Σ_{x∈Λ} u(t, x), which will be described later in this paper.
This implies exponential decay in the spatial variable for the covariance of solutions of (20). Recalling the equivalence in law stated in (69), it follows from (78) that:

Corollary 3. For d = 1, 2 and any κ > 0, or for d ≥ 3 and 0 < κ < κ_2(d), Cov(u(t, x), u(t, y)) decays exponentially in |x − y|.

Consequently, note that (84) gives a quantitative expression for the intermittency condition γ_2(κ) > 2γ_1(κ); since γ_1(κ) = 1/2, the size of the gap is explicit. We note that λ_0(κ) → 0 as κ ↗ κ_2(d); its rate of decay depends on the dimension. An important property of the field U(t) is that the random variables in this field are associated. A collection of random variables {X_i, i ∈ I}, where I is a countable set, is said to be associated if for any n, any coordinate-wise increasing functions f, g : R^n → R, and any finite subcollection X_{i_1}, X_{i_2}, . . ., X_{i_n}, it holds that E[f(X_{i_1}, . . ., X_{i_n}) g(X_{i_1}, . . ., X_{i_n})] ≥ E[f(X_{i_1}, . . ., X_{i_n})] E[g(X_{i_1}, . . ., X_{i_n})]. This notion was introduced in [35] and is of course related to the FKG inequality. One important aspect of this property was developed in [36], where Newman established a central limit theorem for a collection {X_i, i ∈ Z^d} under the assumptions that the {X_i, i ∈ Z^d} are associated, stationary, and satisfy the finite susceptibility condition Σ_y Cov(X_0, X_y) < ∞. Note that the bound provided by (82) implies that the field U(t) has finite susceptibility in the intermittent regime. A classical application of Newman's central limit theorem is to take X_i = σ_i(η), η ∈ {0, 1}^{Z^d}, the spins of a ferromagnetic stochastic Ising model, and derive a central limit theorem for sums Σ_{i∈Λ} σ_i(η) over growing boxes Λ ⊂ Z^d with respect to a Gibbs state. The spins are correlated, but they possess the property of being associated and stationary with respect to the Gibbs state.
The solutions of the parabolic Anderson equation (20) are associated. The following result was established in [14]. The proof uses a result of Pitt [37], which states that a necessary and sufficient condition for a Gaussian vector to be associated is the point-wise nonnegativity of its correlation function. Since association is preserved by convergence in distribution, the result below is proved using a simple approximation procedure.

Theorem 4. Let {W_x : x ∈ Z^d} be a field of i.i.d. standard one-dimensional Brownian motions on some probability space (Ω, P). Then the field of solutions U(t) = {u(t, x) : x ∈ Z^d} of (20) is associated.

Almost Sure Lyapunov Exponents
In the previous section we examined the behavior of the moments of u(t, x). We now turn our attention to the a.s. behavior of the solution of (20); that is, we consider the a.s. Lyapunov exponent defined by λ(κ) = lim_{t→∞} (1/t) log u(t, x). The existence of this limit was first established by Carmona and Molchanov in [19] in the case when either u_0(x) = δ_0(x) or, more generally, when u_0 has compact support. The technique of proof used Liggett's subadditive ergodic theorem [38]. The subadditivity, when u_0 = δ_0, is an easy consequence of the Feynman-Kac representation (116). The Markov property is used in going from line 2 to line 3, and this technique breaks down at this step in the case u_0 ≡ 1. The latter case was established in [39] using a block argument from percolation theory; this type of block argument originated in [40]. Using the fact that the time increments of the field W are independent over disjoint space-time blocks in [0, ∞) × Z^d, the proof established an oriented percolation scheme and applied a recurrence result from [41] for such schemes.
In view of the application to stellar magnetic fields, a significant aspect of λ(κ) is its asymptotic behavior as κ ↘ 0. The exact asymptotics were established independently in [39] and in [42].
The asymptotics are derived from the Feynman-Kac representation (116) through analysis of the Gaussian field {Z_t(ξ) : ξ ∈ D_t} indexed by paths, where Z_t(ξ) = ∫_0^t dW_{ξ(t−s)}(s). This field places a natural metric d on D_t by means of d(ξ, η)² = E[(Z_t(ξ) − Z_t(η))²]. The index set D_t of the field {Z_t(ξ) : ξ ∈ D_t} is too large from the point of view of the metric entropy (see [43] for an explanation of metric entropy) induced by the metric d. Thus we restrict the index set by specifying the number of jumps of its elements ξ in the interval. So, using n(ξ, t) to denote the number of jumps of ξ in [0, t], we can define the space of paths D_{t,n} with that number of jumps. The superadditive functional S_{t,n} = sup{Z_t(ξ) : ξ ∈ D_{t,n}} is the supremum of a Gaussian field indexed by the set D_{t,n}. This set has suitably bounded entropy, which, by a theorem of Fernique and Talagrand, implies a bound on E[S_{t,n}]. This bound allows, by means of Liggett's subadditive ergodic theorem, the conclusion that there is a positive constant c such that lim_{t→∞} S_{t,n}/t = c for the relevant choices of n. An interesting and presumably difficult problem is to determine the proper scaling of S_{t,n} − ct in order to obtain a limit law. It is conjectured that (S_{t,n} − ct)/t^{2/3} should have a nontrivial limit law, possibly related to the Airy distribution. This conjecture comes from related results arising in random matrix theory, such as [44, 45]. Similarly, one would expect nontrivial limit laws for (log u(t, x) − λ(κ)t)/t^{2/3}.
The asymptotics established in [39, 42] for λ(κ) as κ → 0 are arrived at by decomposing the Feynman-Kac representation (116) according to the number of jumps of the path, where L= denotes equality in law (distribution). Note that the only change has been to make the time direction the same in both W and X. The intuition now is that the conditional expectation in the n-th term should be nearly S_{t,n}. One quickly realizes that the sum over n outside a suitable window [at, bt], for suitable choices of a and b, is not significant, so that only terms of the form S_{t,n} matter. But, by Brownian scaling, S_{t,n} can be related to a fixed-time supremum. Using this in (97), together with simple large deviation results for the Poisson-distributed n(ξ, t), leads to an upper bound for the asymptotics of λ(κ). The lower bound comes from looking at a particular path ξ that dominates the Feynman-Kac expectation and using similar estimates.
Thus, for small κ, λ(κ) ≪ γ_1(κ) = 1/2, which says that the a.s. behavior is much smaller than the first moment behavior. This reflects the fact that the expectation of u(t, x) is dominated by large values of u(t, x) which occur with small, but not too small, probability. This is related to intermittency and will be examined in the section on sums over boxes below.

Solution of PAM as Interacting Diffusion
An interesting point of view regarding the solution of (20) was proposed in [47, 48], growing out of work on the stepping stone model in [26]. In [48], Shiga and Shimizu view the entire field U_κ(t) = {u_κ(t, x) : x ∈ Z^d} as a Markov process on a subset of a particular weighted ℓ^1-space. In [47, 48] and the related works [32, 49], a more general underlying Markov process is used than the one with generator κΔ; for simplicity, however, we will confine our discussion to the case where the operator appearing in (20) is κΔ. Take any summable collection of positive numbers {γ_x : x ∈ Z^d} that also satisfies, for some positive constant M, Σ_y p(x, y)γ_y ≤ Mγ_x, where p is the transition kernel of the underlying walk. Then set, for p > 0, L^p(γ) = {u : Z^d → R : Σ_x γ_x |u(x)|^p < ∞}. The space L^1(γ) is endowed with the product topology. We recall the following theorem of [48].
Theorem 6. Given U_κ(0) ∈ L^1(γ), the SDE (20) has a unique strong solution with (U_κ(t))_{t≥0} ⊂ L^1(γ) and strongly continuous paths in L^2(γ) a.s. The process (U_κ(t))_{t≥0} is a Markov process on L^1(γ) with semigroup T_t, whose generator acts in the expected way on functions depending smoothly on only finitely many coordinates, and each coordinate is a Feller diffusion.
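A minimal simulation of the system of interacting diffusions in Theorem 6 is easy to write down. The following is a sketch under our own naming and discretization choices (Euler-Maruyama for the Itô form of (20) on a finite cycle, which sidesteps the infinite-volume space L^1(γ)):

```python
import numpy as np

def simulate_pam_ito(N=128, kappa=1.0, t_max=1.0, dt=1e-3, seed=0):
    """Euler-Maruyama sketch of the Ito form of (20) on the cycle Z/NZ:
    du(t,x) = kappa*Delta u(t,x) dt + u(t,x) dW_x(t),  u(0,.) = 1,
    with independent Brownian motions W_x at each site.  (The
    Stratonovich solution is exp(t/2) times this one.)"""
    rng = np.random.default_rng(seed)
    u = np.ones(N)
    for _ in range(int(t_max / dt)):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u   # discrete Laplacian
        dW = rng.normal(0.0, np.sqrt(dt), size=N)        # Brownian increments
        u = u + kappa * lap * dt + u * dW
    return u
```

Each coordinate evolves as a diffusion coupled to its neighbors only through the Laplacian term, which is the interacting-diffusion picture of this section.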
One interesting aspect of this theorem is the door it opens to applying the techniques of diffusion processes to the process U_κ. So one can ask: what are the invariant measures for the semigroup T_t? Since the components of U_κ are interacting processes, it also brings the point of view of interacting particle systems onto the scene. In the latter field, characterization of all the T_t-invariant, shift-invariant, and ergodic measures is a common theme; the reader is referred to [50] or [51] for examples and many references. One example of such a result is the following, due to Shiga [47]. To describe his results we introduce some notation and relevant objects. Denote by P(L^2(γ)) the probability measures on (L^2(γ), B(L^2(γ))), where B(L^2(γ)) is the Borel field on L^2(γ). Denote by T*_t the action of the dual of T_t on the space P(L^2(γ)); that is, with ⟨μ, f⟩ = ∫_{L^2(γ)} f(u) μ(du), ⟨T*_t μ, f⟩ = ⟨μ, T_t f⟩, f ∈ C_b(L^2(γ)). (107) Then we can define the invariant measures for the process U_κ as those μ with T*_t μ = μ. We can also consider the class of probability measures that are invariant with respect to the group of shifts θ_x, x ∈ Z^d. Consider the initial configuration U_κ(0) = Θ_ρ, which means all coordinates take on the value ρ ∈ [0, ∞); view this as starting the process U_κ with initial distribution δ_{Θ_ρ}. An important result of Shiga, with additions made by [32, 49], explains the asymptotic behavior of U_κ in the nonintermittent regime, κ > κ_2(d).
(iii) the measure ν_κ is associated.
We observe that the association of ν_κ follows from the association of the field U_κ(t), since this property is preserved by limits in distribution. As a complement, Shiga [47] also gave the behavior of the field U_κ when d = 1, 2, so that the process is in the intermittent regime. More recently, this was extended to d ≥ 3 by Cox and Greven [49], and then by Greven and den Hollander [32], into the intermittent region 0 < κ < κ_2(d). In this case, it turns out that the process U_κ(t) dies out, in the sense that its law tends to the point mass on the configuration of all 0's, that is, the element 0 ∈ [0, ∞)^{Z^d} all of whose components are 0; we denote the point mass on 0 by δ_0.

Theorem 8. If d = 1, 2, then for every κ > 0 and every shift-invariant initial distribution μ, T*_t μ → δ_0.

As pointed out in [32], one may take μ = δ_{Θ_ρ}, ρ ∈ [0, ∞), in which case E[u(t, x)] = ρ for all x ∈ Z^d, t ≥ 0. Thus, while the system is dying out, this implies that there are very high, widely separated peaks in the field U_κ(t) for large values of t. This is the phenomenon of intermittency.

Intermittency and Sums over Boxes
In an effort to quantify the intermittency effect, we consider sums of the field of solutions U(t) = {u(t, x) : t ≥ 0, x ∈ Z^d} over boxes in Z^d that grow in size as t → ∞. This is inspired as well by the developments of [52], where limit laws for sums of products of exponentials of nonnegative i.i.d. random variables {X_i} were studied. Under a Cramér-type condition, E[e^{θX}] < ∞ for some θ > 0, a weak law of large numbers, a central limit theorem, and convergence to stable laws were established for such sums under appropriate rates of growth of the number of summands and proper centerings and scalings. Earlier efforts in this direction for time-stationary models were made in [53, 54]. The transition to this setting is provided by the Feynman-Kac representation of solutions of the parabolic Anderson model, which resembles sums of the form (113), with X(s), s ≥ 0, a random walk on Z^d started at x under the measure P^x_κ. Recall that the solution of the stochastic equation is given by means of the Feynman-Kac formula (116). Thus, the analog of (113) would be the normalized sum (1/|Λ_L|) Σ_{x∈Λ_L} u(t, x), where Λ_L = {x ∈ Z^d : ‖x‖ ≤ L} and |Λ_L| is the number of lattice points in Λ_L. Two observations follow from our previous considerations. First, if L is fixed while t → ∞, then the existence of the a.s. Lyapunov exponent implies that the exponential growth rate of the normalized sum is λ(κ); we will refer to (118) as the quenched average. On the other hand, if t remains fixed while L → ∞, then the ergodic theorem implies that the normalized sum converges to E[u(t, 0)]; we will refer to (119) as the annealed average. When d ≥ 3 and κ > κ_2(d), there is no discrepancy between the quenched and annealed averages, since, as was shown above, λ(κ) = γ_1(κ) = 1/2. However, when d = 1 or 2, or when d ≥ 3 and κ < κ_2(d), one has λ(κ) < γ_1(κ), and as a result a discrepancy arises between the quenched and annealed averages. This discrepancy is a manifestation of intermittency, and if we allow L = L(t) to grow at an appropriate rate, we can begin to quantify how large a box Λ_L must be in order to capture the large peaks in the field that give rise to the annealed average.
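The quenched/annealed discrepancy is already visible in the toy setting of [52]-type sums of exponentials of i.i.d. variables. A sketch (our own code and function name):

```python
import math
import random

def quenched_vs_annealed(n=200000, seed=2):
    """For i.i.d. X_i ~ N(0, 1), the empirical ('annealed') average of
    exp(X_i) converges to E[exp(X)] = exp(1/2), while the typical value
    exp(mean of the X_i) converges to exp(0) = 1: the average is carried
    by rare large summands.  This toy computation mirrors the gap
    lambda(kappa) < gamma_1(kappa) between quenched and annealed
    averages of u(t, x)."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    annealed = sum(math.exp(x) for x in xs) / n
    typical = math.exp(sum(xs) / n)
    return annealed, typical
```

The gap between the two outputs is the elementary analogue of the box needing to grow with t before the spatial average feels the high peaks.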
Recall that for κ < κ_2(d), full intermittency of the field {u(t, x) : x ∈ Z^d} occurs, which was defined as γ_2(κ) − 2γ_1(κ) > 0. It is the condition (120) which is at work behind the scenes in [55], whose main result gives some information on the spread of the high peaks as t → ∞. We refer the reader to Grimmett and Kesten [56] for a more comprehensive exposition on intermittency.

Association and Central Limit Theorem
In a previous section we discussed the property of association for the field U(t). This property also held for the limiting laws ν_κ of the field U_κ(t), which arise when U_κ(0) has distribution δ_{Θ_ρ}.
A central limit theorem for sums of the elements of the field U(t) was established in [55]. This theorem took the form of a Gaussian limit for the normalized sums (127) over Λ_L = {x ∈ Z^d : ‖x‖ < L}, provided that L = L(t) grows exponentially at a sufficiently large rate. The proof followed Bernstein's method of decomposing the sum (127) into sums over disjoint, slightly separated boxes. The proof in [55] was quite technical and relied on approximation of the solution u(t, x) to obtain some degree of independence, together with a difficult large deviation result from [39]. An alternate proof of a stronger result, Theorem 11, using Newman's ideas about associated random variables, was given in [49]. This proof is simpler than the one in [55] and yields more information about the variance of Σ_{x∈Λ_L} u(t, x), relating it to the first eigenfunction of 2κΔ + δ_0. The new proof also gives the joint distribution of these sums over disjoint growing boxes.
The key to the proof of a central limit theorem for associated random variables is the following inequality of [36].
Theorem 10 (Newman). Suppose X_1, X_2, . . ., X_n have finite variance and are associated. Then, for any r_1, . . ., r_n ∈ R,

|E[exp(i Σ_j r_j X_j)] − Π_j E[exp(i r_j X_j)]| ≤ (1/2) Σ_{j≠k} |r_j||r_k| Cov(X_j, X_k).

The content of this theorem is that if the sum of the covariances can be controlled, then the distribution of associated random variables can be compared to the distribution of independent random variables. The application of Newman's ideas only requires an extension to triangular arrays of random variables and a verification of the finite susceptibility condition in Theorem 10.
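For a Gaussian vector, both sides of Newman's inequality are available in closed form, since the joint characteristic function is exp(−r′Σr/2). The following sketch (our own code, two-variable case; by Pitt's theorem the Gaussian pair is associated exactly when the covariance s12 is nonnegative) evaluates both sides:

```python
import math

def newman_two_point(s11, s12, s22, r1, r2):
    """Left and right sides of Newman's inequality for a centered
    bivariate Gaussian with covariance matrix [[s11, s12], [s12, s22]],
    s12 >= 0.  The characteristic function E exp(i(r1 X1 + r2 X2)) is
    exp(-(r' Sigma r)/2), so no sampling is needed."""
    joint = math.exp(-0.5 * (r1 * r1 * s11 + 2 * r1 * r2 * s12 + r2 * r2 * s22))
    product = math.exp(-0.5 * r1 * r1 * s11) * math.exp(-0.5 * r2 * r2 * s22)
    lhs = abs(joint - product)
    rhs = abs(r1) * abs(r2) * s12   # (1/2) sum_{j != k} |r_j||r_k| Cov
    return lhs, rhs
```

The gap between the two sides shrinks with the covariance, which is exactly why finite susceptibility yields a comparison with independent variables.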
Switching to T to denote time, we will be concerned with sums of the variables u(T, x) over boxes in Z^d. The central limit result is then the following.
Theorem 11. Suppose 0 < κ < κ_2(d) and let {u(T, x) : x ∈ Z^d} be the solution of the parabolic Anderson equation (20). Define the random variables S_Λ(T) = Σ_{x∈Λ_{L(T)}} u(T, x) over the boxes Λ_{L(T)}. If L(T) grows exponentially at a sufficiently large rate, then the normalized sums converge in distribution to a field of i.i.d. N(0, σ²) random variables, with σ² expressed in terms of the eigenfunction ψ_κ. In order to verify the finite susceptibility condition, one uses (82), which gives exponential decay of the covariances, and finite susceptibility follows. Define σ²(T) = Var(S_Λ(T))/|Λ_{L(T)}|, which, by stationarity, does not depend on x ∈ Z^d. By inverting the Fourier transform in (74), one can conclude by (137) that lim_{T→∞} σ²(T) = σ², which gives the variance of the limit in the theorem.
Several open problems remain. The first question is whether a central limit theorem holds for a field with distribution ν_κ. This would only require checking the finite susceptibility condition and applying (87); work similar to this is found in [57]. The second question is whether an invariance principle holds; there are invariance principles for associated fields established in [58][59][60]. Finally, one expects convergence of properly normalized and centered sums to a stable law of index α ∈ (0, 2) when L(T) grows at an appropriate rate depending on α. Results that suggest this have appeared in [52, 61].

Large Deviations and Concentration Effects
In this section we examine some results on the distribution of the elements of the field U(t). We begin with a concentration of measure result for an element of the field U(t).
Early works on the concentration of measure phenomenon appeared in [62], for example. Recently, this subject has justifiably received a lot of attention; typical references include [63][64][65] and the wonderful monograph of Ledoux [66], which has an extensive list of references. We recall an observation of Talagrand from [65]. The Chernoff bound for i.i.d. Bernoulli random variables X_i states that P(|Σ_{i=1}^n X_i − np| ≥ nε) ≤ 2e^{−2nε²}. Talagrand's observation is that "a random variable that smoothly depends on the influence of many independent random variables satisfies Chernoff-type bounds." The solution of (20), u(t, x), depends smoothly on many independent random variables, namely the increments of the Brownian field W(t) ≡ {W_x(s) : x ∈ Z^d, 0 ≤ s ≤ t}. In order to make sense of this, it is natural to make use of the Malliavin calculus. Perhaps the first use of Malliavin calculus in disordered systems appeared in [67]. In the context of solutions of (20), Rovira and Tindel [68] used the Malliavin calculus to establish a concentration inequality.
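The Chernoff-type bound quoted above can be compared with the exact binomial tail. A sketch (our own code; we use the one-sided Hoeffding form exp(−2nε²) of the bound):

```python
import math

def binomial_upper_tail(n, p, k):
    """Exact P(S_n >= k) for S_n ~ Binomial(n, p)."""
    return sum(math.comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(k, n + 1))

def chernoff_hoeffding_bound(n, eps):
    """Chernoff-Hoeffding bound P(S_n - n*p >= n*eps) <= exp(-2 n eps^2)
    for sums of n independent Bernoulli(p) variables."""
    return math.exp(-2.0 * n * eps * eps)
```

The exact tail sits well below the bound, but both decay exponentially in n, which is the behavior Talagrand's observation transfers to smooth functionals of many independent variables.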

ISRN Probability and Statistics
The use of the Malliavin calculus (see [69] for more details) starts by expressing the functional, for a fixed path x, as a stochastic integral against the Brownian field, where h(s, y) depends on x through the relation h(s, y) = δ_y(x(s)).
The Malliavin derivative D of a square integrable random variable F defined on this space is, when it exists, a random element of L²([0, t] × Z^d) that we will view as a stochastic process DF = (D_{s,y}F), indexed by time and space. The Malliavin derivative D_{s,y}F is heuristically equal to ∂F/∂(dW_y(s)) and can be formally computed as such. The Malliavin derivative of W_x(t) is thus the element of L²([0, t] × Z^d) given by D_{s,y}W_x(t) = δ_{x,y}1_{[0,t]}(s) (147). Using the chain rule again, one obtains an expression in terms of the Gibbs probability measure μ_{t,x} on D^t. The concentration results are thus closely related to large deviation results established in [39, 71–73]. For example, by means of a block argument, it was shown in [39] that for every ε > 0 there is a c(ε) > 0 such that P((1/t) log u(t, x) ≥ λ(κ) + ε) ≤ e^{−c(ε)t} (158). The probability of lower deviations of (1/t) log u(t, x) below λ(κ) − ε has a much smaller order of magnitude than the probability of deviations above λ(κ) + ε. This is similar to phenomena found in first passage percolation [74] or in increasing subsequences of i.i.d. samples as in [75], to mention just two instances. However, in contrast to the present situation, the random variables involved in the cited examples are positive. Since the model in [74] is close to the present case, we will give a brief description of the first passage percolation model and refer the reader to [16, 56, 76, 77]. The functionals involved in first passage percolation stand in close analogy to the functional Λ_{t,x} of Section 7. In first passage percolation on Z^d the edge set E is the set of edges between adjacent vertices in the usual lattice structure. The edges are endowed with an i.i.d. random field {τ(e) : e ∈ E} of nonnegative random variables with common distribution function F. The random variable τ(e) assigned to an edge is thought of as the time required to traverse the edge e. One then considers "up-right" paths in the lattice, that is, sequences γ = (v_0, e_1, v_1, e_2, ..., e_n, v_n) of alternating vertices and edges, where v_k and v_{k+1} are connected by the edge e_{k+1} and the components of v_{k+1} are greater than or equal to those of v_k. The passage time of γ is T(γ) = Σ_{k=1}^n τ(e_k). In [56] it was shown that the passage time functional, suitably normalized, converges to a time constant μ(F). It was also observed there that μ(F) = 0 if and only if F(0) ≥ p_c, where p_c is the critical probability for the existence of an infinite cluster in Bernoulli percolation on Z^d. By means of a subadditive argument, Chow and Zhang [74] showed that the probability of the upper deviation event {T_{0,n} ≥ n(μ + ε)} decays exponentially in n^d, whereas the probability of the lower deviation event {T_{0,n} ≤ n(μ − ε)} decays only exponentially in n. The intuitive reason for the difference is that for the event {T_{0,n} ≥ n(μ + ε)} to occur, the entire medium needs to be anomalous. However, for the event {T_{0,n} ≤ n(μ − ε)} to occur, the medium only needs to be anomalous along a single path from 0 ∈ Z^d to (n, n, ..., n).
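The passage time just described can be computed exactly on finite boxes by Dijkstra's algorithm. The following sketch is our illustration: restricting paths to the box {0, …, n}² only gives an upper bound on the true Z² passage time, and the Exp(1) weight distribution is an arbitrary choice.

```python
import heapq
import random

def passage_time(n, weight, start=(0, 0), end=None):
    """Dijkstra on the box {0,...,n}^2: first-passage time from start to end.

    weight(u, v) returns the edge time tau(e) for the edge {u, v}; paths are
    confined to the box, which only overestimates the Z^2 passage time.
    """
    end = end or (n, n)
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == end:
            return d
        if d > dist.get(u, float("inf")):
            continue
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] <= n and 0 <= v[1] <= n:
                nd = d + weight(u, v)
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
    return float("inf")

# With deterministic unit weights the passage time to (n, n) is exactly 2n.
assert passage_time(10, lambda u, v: 1.0) == 20.0

# With i.i.d. Exp(1) weights (cached so each edge gets one value),
# a_{0,n}/n is a rough finite-n proxy for the time constant mu.
random.seed(0)
cache = {}
def exp_weight(u, v):
    e = (min(u, v), max(u, v))
    if e not in cache:
        cache[e] = random.expovariate(1.0)
    return cache[e]

n = 30
print(passage_time(n, exp_weight) / n)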
In [73], a similar phenomenon was pointed out for the functional Λ_{t,x} defined in (93). The limit (95) holds as in the case of first passage percolation. There are two differences. The first is the rather minor one that the functional Λ_{t,x} involves a sup rather than an inf; this has the effect of switching the roles of upper versus lower deviations. Now the event {Λ_{t,x} ≥ t(λ + ε)} can occur if the medium is anomalous along one path, whereas {Λ_{t,x} ≤ t(λ − ε)} can only occur if the entire medium up to time t is anomalous. The second is that the random variables involved in the functional Λ_{t,x} can be negative. However, since the negative tails are not too heavy, this difficulty can be overcome and the following result holds.

Theorem 12. For Λ_{t,x} as defined above in Model 1 in dimension d and ε > 0, the probability of the lower large deviation event {Λ_{t,x} ≤ t(λ − ε)} is of much smaller order of magnitude than that of the upper large deviation event {Λ_{t,x} ≥ t(λ + ε)}.

An analogous result was established in [72] for solutions of (20). Generalizations of these results appeared in [71].
Theorem 13. For Model 1 and each ε > 0, one has exponential bounds for the lower large deviations of (1/t) log u(t, x) and for the upper large deviations, with the lower deviation probabilities again of much smaller order of magnitude.

Parabolic Anderson in R^d
The situation becomes technically more difficult when one considers the R^d version of (20). For the R^d model we let {W_x(t) : x ∈ R^d} be a Gaussian field of identically distributed standard Brownian motions, defined on a probability space (Ω, F, P). We can no longer assume that the motions are independent and still obtain a solution to the analog of (20). The dependence needs to be spatially smooth, so denote the correlation function of the field by Γ, with Cov(W_x(t), W_y(t)) = tΓ(x − y).

Notice that we have the symmetry Γ(x) = Γ(−x). We will assume that Γ is continuously differentiable with first derivative Hölder continuous of order α − 1 for some α > 1.
The normalization Γ(0) = 1 is forced by the assumption that each W_x is a standard Brownian motion. Together with the smoothness assumption this gives the important approximation 1 − Γ(x) ~ c|x|^α as x → 0, for some constant c ≥ 0. It is necessary to assume that c > 0 in order to avoid the degenerate case Γ ≡ 1.
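To make the role of the correlation function concrete, here is a minimal simulation sketch of a spatially correlated Brownian field with Cov(W_x(t), W_y(t)) = tΓ(x − y). The particular smooth choice Γ(x) = exp(−|x|²) and the one-dimensional grid are our illustrative assumptions, not taken from the text.

```python
import numpy as np

# Sketch: sample a spatially correlated Brownian field {W_x(t)} with
# Cov(W_x(t), W_y(t)) = t * Gamma(x - y).  The smooth choice
# Gamma(x) = exp(-|x|^2) is an illustrative assumption.
def gamma(x):
    return np.exp(-np.sum(np.asarray(x, dtype=float) ** 2))

points = [np.array([i * 0.5]) for i in range(5)]       # small spatial grid
cov = np.array([[gamma(p - q) for q in points] for p in points])

assert np.isclose(cov[0, 0], 1.0)   # normalization Gamma(0) = 1
assert np.allclose(cov, cov.T)      # symmetry Gamma(x) = Gamma(-x)

# Simulate the field at time 1 by accumulating correlated increments.
rng = np.random.default_rng(1)
L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(points)))
dt, steps = 0.01, 100
W = np.zeros(len(points))
for _ in range(steps):
    W += np.sqrt(dt) * L @ rng.standard_normal(len(points))
# W is one sample of (W_x(1))_x with the prescribed spatial correlations.
print(W)
```

The Cholesky factor turns independent Gaussian increments into increments with covariance matrix dt·(Γ(x_i − x_j)), which is exactly the spatial smoothness of the noise that the continuum model requires.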
The continuous space version of (20) in integrated form is thus (170), where Δ denotes the Laplacian in R^d. Equation (170) is called the parabolic Anderson model in R^d. As noted in [78], unless the function Γ(x) is twice continuously differentiable, (170) will not have a solution, in that any prospective solution would lack a well-defined spatial Laplacian. Accordingly, the equation was reformulated as an integral equation in terms of p(t − s, x, y), the Gaussian kernel corresponding to speed κ Brownian motion. This integral equation has the same solution as (170) in the case of a sufficiently smooth field {W_x : x ∈ R^d}. In [78], existence of solutions and other results were established. In particular, the Feynman-Kac representation remains valid; in it, the expectation is taken with respect to {X(s) : s ≥ 0}, a speed κ, d-dimensional Brownian motion, that is, the diffusion with generator (κ/2)Δ.
The principal result of [79] is the existence of the a.s. Lyapunov exponent for (170). The subadditivity arguments which worked so readily in the discrete spatial setting do not easily carry over to the present case. This technical difficulty can be overcome by a probabilistic version of a parabolic Harnack inequality.

Theorem 14. There exists a positive constant λ(κ) such that for any nonnegative bounded function u_0 on R^d with u_0 > 0 on a set of strictly positive Lebesgue measure in R^d, the solution u of (170) with u(0, ·) = u_0(·) satisfies lim_{t→∞} (1/t) log u(t, x) = λ(κ) a.s. for any x ∈ R^d.

The κ → 0 asymptotics are of a different order of magnitude in R^d than in Z^d. In [79] it was shown that c₁κ^{1/3} ≤ λ(κ) ≤ c₂κ^{1/5} as κ → 0 (174). However, recently, Rael [80] has shown that κ^{1/3} is the correct order of magnitude. It is an open problem to show that for some constant c > 0, lim_{κ→0} λ(κ)/κ^{1/3} = c.

Anderson Polymer Models
The field W(t) can be viewed as a random medium through which the random walk {X(s) : 0 ≤ s ≤ t} started at x evolves up to time t. The influence of the medium on the walk is obtained by a change of measure. The resulting measure on D^t is viewed as a measure on polymers. The Anderson polymer model is the Gibbs measure μ_{t,x,κ} on D^t = D([0, t], Z^d) defined by normalizing the Feynman-Kac weight with the partition function Z_{t,x,κ}. The functions u(t, x) and Z_{t,x,κ} thus have the same distribution, so one can make use of the properties of u(t, x) and apply them to Z_{t,x,κ}. In particular, as in the case of the a.s. Lyapunov exponent, the limit lim_{t→∞} (1/t) log Z_{t,x,κ} exists a.s. Brownian scaling relates the exponents at different values of κ; thus, by (96), the corresponding asymptotics follow. We will use the notation μ_t, μ_{t,κ}, and Z_{t,κ} when x = 0. Similar discrete models have received considerable attention in recent years. Earlier works by [87, 96–98] and many others focused on a discrete model. In this model, the path space is O = {ω = (ω_n) : ω_n ∈ Z^d, n ≥ 1} and X_n(ω) = ω_n is the coordinate map. The process is simple random walk started at x in Z^d. The random medium {g(n, y) : y ∈ Z^d, n ≥ 1} consists of i.i.d. random variables, defined on a probability space (Ω, P). A typical choice is standard centered Gaussians, so that the log moment generating function λ(β) = log E[e^{βg(n,y)}] = β²/2 is defined for all β ∈ R.
Then the discrete polymer measure on O is defined via the partition function Z_n(β) = E[exp(Σ_{k=1}^n (βg(k, ω_k) − β²/2))]. The questions now focus on the behavior of the paths {ω_k : 1 ≤ k ≤ n} with respect to the measure μ_{n,β}. It was observed by Bolthausen [96] that Z_n(β) is a positive martingale with respect to the σ-field generated by the environment up to time n. Returning to the continuous model, Z_{t,κ}e^{−tβ²/2} is a positive martingale with respect to F_t = σ{W_y(s) : y ∈ Z^d, 0 ≤ s ≤ t}, which therefore converges: lim_{t→∞} Z_{t,κ}e^{−tβ²/2} = Z_∞. Again the weak disorder-strong disorder distinction is defined by the dichotomy P(Z_∞ = 0) = 0 or P(Z_∞ = 0) = 1. Denote the product measure of μ_{t,κ} with itself by μ⊗2_{t,κ}. In [99], two versions of the overlap were considered. The quantity I_{t,κ} measures, for two independent samples X and Y sharing the same environment, the proportion of time spent together. The version R_{t,κ} of the overlap measures the amount of time up to t that the endpoints of independent samples drawn with respect to the measure μ_{t,κ} agree. By taking the logarithmic Malliavin derivative of the partition function with respect to W_x(t), one arrives naturally at I_{t,κ}. By taking the logarithmic Itô derivative of the partition function, one arrives at R_{t,κ}. The overlap occurs in statistical mechanics in a natural way, and of course the present model is essentially similar to standard statistical mechanical models in that it involves a Gibbs measure. In statistical mechanics, counterparts of these overlaps can be found for the Sherrington-Kirkpatrick model and other models of disordered systems. Coming via integration by parts, the first overlap has been the most successful in the last decade [100–103] for studying the low temperature regime. In [68, 84, 87], it was demonstrated that strong disorder and weak localization are equivalent, in the sense that strong disorder holds if and only if ∫_0^∞ μ⊗2_{t,κ}(X(t) = Y(t)) dt = ∞.
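Bolthausen's martingale observation above can be checked numerically: for a Gaussian environment each normalized weight exp(βg − β²/2) has mean one, so E[Z_n(β)] = 1 for every n. The sketch below is our illustration; the one-dimensional walk and the values β = 0.3, n = 6 are arbitrary choices. It computes Z_n(β) exactly over walk paths by dynamic programming and then averages over independent environments.

```python
import numpy as np

# Sketch of the discrete polymer partition function
#   Z_n(beta) = E_omega[ exp( sum_{k=1}^n (beta g(k, omega_k) - beta^2/2) ) ]
# for 1-d simple random walk in an i.i.d. standard Gaussian environment.
# Each factor exp(beta g - beta^2/2) has mean one over the environment,
# so E[Z_n] = 1: Z_n is the positive martingale observed by Bolthausen.
def partition_function(g, beta):
    """Exact expectation over SRW paths by dynamic programming.

    g[k-1] holds the environment at time k, indexed by site y via y + k.
    """
    n = len(g)
    f = {0: 1.0}                                   # walk starts at the origin
    for k in range(1, n + 1):
        new = {}
        for y0, w in f.items():
            for y in (y0 - 1, y0 + 1):             # simple random walk step
                env = g[k - 1][y + k]              # shifted site index
                new[y] = new.get(y, 0.0) + 0.5 * w * np.exp(beta * env - beta**2 / 2)
        f = new
    return sum(f.values())

rng = np.random.default_rng(2)
beta, n, trials = 0.3, 6, 4000
zs = [partition_function([rng.standard_normal(2 * k + 1) for k in range(1, n + 1)], beta)
      for _ in range(trials)]
print(np.mean(zs))   # should be close to E[Z_n] = 1
```

The same dynamic program, run for growing n, is a standard way to explore the weak/strong disorder dichotomy numerically, since it gives Z_n exactly for each sampled environment.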
In [85], the stronger result was established that sup_y μ_{t,κ}(X(t) = y) ≥ c for some c > 0. There are several related results on localization. Strong concentration for the directed polymer in a random environment for the parabolic Anderson model (with space-dependent potential only) with a Pareto potential was established in [17]. The main difference is that there the favourable sites in the environment have a simple characterization in terms of the potential. For similar conclusions in the discrete time case with heavy-tailed potentials, see [82]. When the tails are less heavy, the favourite corridors can no longer be characterized by maxima of the potential, and they are no longer explicit, but complete site localization can still be proved [95]. Note that in the discrete case, only little is known about the random geodesics [104] in first passage percolation, which are the zero-temperature favourite paths. For the solution of the KPZ equation in one dimension, the distribution of the favourite end-point has recently been computed in [93]; it is the argmax of an Airy_2 process minus a parabola.
In [99], the behavior of the overlaps I_{t,κ} and R_{t,κ} was quantified. This used, among other things, the relation (182): weak disorder holds when β²/κ lies below some critical value Υ_d ∈ [0, ∞) depending only on the dimension. This is the weak disorder regime.
In dimension d ≥ 3, it is known by the second moment method [85] that Z_{t,κ} exp{−tβ²/2} converges to a positive limit, so that equality holds in the left member of (202) for β²/κ small. Hence, Υ_d > 0 in that case.
There are many interesting open problems on Anderson polymers.