CENTRAL LIMIT THEOREM FOR SOLUTIONS OF RANDOM INITIALIZED DIFFERENTIAL EQUATIONS: A SIMPLE PROOF

I. Iribarren and J. R. León

We study central and noncentral limit theorems for the convolution of a certain kernel h with F(ξ(·)), where ξ is a stationary Gaussian process and F is a square integrable function with respect to the standard Gaussian measure. Our method consists in showing that, in the weak dependence case, we can use the Lindeberg method, approximating the initial Gaussian process by an m-dependent process. In this way, only variance computations are needed to obtain the two types of limits. We then apply the results to the solutions of certain differential equations.


Introduction
In this work, we use an m-dependent approximation to obtain the asymptotic behavior as t → ∞ of the solutions of several differential equations with random initial condition F(ξ(x)), where ξ is a stationary Gaussian process. As usual, this behavior depends heavily on the integrability of the covariance function of the process ξ. The main novelty of our approach is that we do not use the method of moments, thereby avoiding lengthy calculations. Via the m-dependent approximation, we only need L² computations and a slight generalization of the Hoeffding-Robbins theorem [9] for m-dependent random variables. The problem is set in a general framework, considering the class of processes that can be written as the convolution of a certain kernel h(t,x) with F(ξ(x)).
We can justify our study through three basic examples. First we consider Burgers' equation with random initial condition, also known in the literature as Burgers' turbulence (see Section 6.1 for its definition and properties); then the equation

∂u/∂t = −(1/2) ∂⁴u/∂x⁴,  (1.1)

which can be interpreted as a heat-type equation of fourth order; and finally the linear KdV (Korteweg-de Vries) equation, which is a third-order heat-type equation. Note that in all these examples the solution is equal, or close, to a convolution integral

u(t,x) = ∫_{−∞}^{∞} h(t, x−y) u₀(y) dy,

for a certain kernel h. Thus the problem of solving differential equations with initial condition u₀(·) = F(ξ(·)) is set in a general framework, considering, besides solutions of partial differential equations, the class of processes that can be written as the convolution of a certain kernel h(t,·) with F(ξ(·)).
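As a concrete illustration of this convolution structure, equation (1.1) can be solved numerically on a periodic grid by applying the Fourier multiplier e^{−λ⁴t/2} to the initial data. The following Python sketch is ours, not the paper's: the grid sizes and the toy initial condition F(z) = z² are illustrative choices.

```python
import numpy as np

def solve_fourth_order_heat(u0_vals, x, t):
    """Evolve u_t = -(1/2) u_xxxx on a periodic grid: multiply the
    Fourier transform of the data by exp(-lam^4 * t / 2)."""
    lam = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    return np.real(np.fft.ifft(np.exp(-lam**4 * t / 2) * np.fft.fft(u0_vals)))

# toy stationary Gaussian initial process xi: spectrally smoothed white noise
x = np.linspace(-50.0, 50.0, 4096, endpoint=False)
rng = np.random.default_rng(0)
spec = np.exp(-(np.fft.fftfreq(x.size, d=x[1] - x[0]) * 5.0) ** 2)
xi = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(x.size)) * spec))
xi /= xi.std()

u0 = xi**2                     # initial condition F(xi) with F(z) = z^2
u = solve_fourth_order_heat(u0, x, t=1.0)
```

Since the multiplier equals 1 at frequency zero and is at most 1 elsewhere, the spatial mean is preserved and the fluctuations are damped, which is the smoothing the rescaling of Section 1 compensates for.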
Why are these situations of interest? The answer comes from three different sources.
When dealing with Burgers' equation with a random initial condition, the observed solutions u(t,x) correspond better to reality, as noticed by Woyczyński [17]: "If one measures the velocity v(t,x) in a turbulent flow for two different time intervals, then the profiles look totally different and are not repeatable. However, if one concentrates on the probability distribution of the measured turbulent signal, one obtains a predictable object." Thus it seems natural to consider the solutions as stochastic processes.
For the heat-type equation of fourth order, we refer to the work of Tanner and Berry [15], who consider the experiment of spinning oil onto a smooth flat surface, which provides an optically flat film. This film may be disturbed in a randomly distributed way by rolling. The profile of the disturbed surface u(t,x,y) is shown to satisfy a fourth-order equation in which γ and τ are parameters and u(0,x,y) = ξ(x,y) is a bidimensional stationary Gaussian process. Nevertheless, our work concentrates only on the asymptotic behavior for the problem in a one-dimensional space. Finally, let us consider the third example, based on the paper of Beghin et al. [2]. These authors studied the asymptotic behavior of the third-order heat-type equation, recalling that this type of equation emerges in the context of trimolecular chemical reactions and also as a linear approximation of the Korteweg-de Vries equation.
Here we study the asymptotic behavior of the rescaled functional U_t given in (1.3). Let us explain the type of rescaling we use by taking the case of the fourth-order heat-type equation (1.1). The solution of such an equation can be written as u(t,x) = ∫_{−∞}^{∞} p_t(x−y) u₀(y) dy, where p_t(·) is the inverse Fourier transform of the function e^{−|γ|⁴t/2}. We look for a solution which is self-similar, that is, λu(λ⁴t, λx) ≡ u(t,x). Note that

lim_{λ→∞} λu(λ⁴t, λx) = (1/t^{1/4}) p₁(x/t^{1/4}) := V(t,x),  (1.5)

and it is straightforward to check that V(t,x) is self-similar. By considering t = 1 and λ = t^{1/4}, we obtain that t^{1/4} u(t, t^{1/4}x) converges to p₁(x); thus the above change of scale stabilizes the solution at infinity. This type of rescaling will be very useful to obtain the asymptotic normality of the solution with random initial condition.
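The self-similarity p_t(x) = t^{−1/4} p₁(x t^{−1/4}) behind (1.5) follows from the change of variables λ ↦ λ t^{−1/4} in the Fourier integral, and can be checked numerically. The quadrature grid below is our own choice:

```python
import numpy as np

def kernel(t, xs):
    """p_t(x) = (1/2pi) * int exp(i*lam*x - lam^4*t/2) dlam, computed by a
    plain Riemann sum; only the cosine part survives by symmetry."""
    lam = np.linspace(-20.0, 20.0, 20001)
    dlam = lam[1] - lam[0]
    damp = np.exp(-lam**4 * t / 2)
    return np.array([(np.cos(lam * x) * damp).sum() * dlam
                     for x in np.atleast_1d(xs)]) / (2 * np.pi)

xs = np.linspace(-3.0, 3.0, 13)
t = 16.0
lhs = kernel(t, xs)                          # p_t(x)
rhs = kernel(1.0, xs / t**0.25) / t**0.25    # t^{-1/4} p_1(x / t^{1/4})
```

The two arrays agree up to quadrature error; at the origin, p₁(0) = Γ(5/4) 2^{1/4}/π ≈ 0.343 by Gaussian-type integration of e^{−λ⁴/2}.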
As indicated in the abstract, to prove this asymptotic behavior we proceed by an approach different from the method of moments and the diagram formula used by several authors; see, for instance, [1, 4, 5, 11, 17].
We define the filtered processes by

U_t(ξ,x) = ∫_{−∞}^{∞} h(t,y) F(ξ(x−y)) dy,  Ū_t(ξ,x) = ∫_{−u(t)}^{u(t)} h(t,y) F(ξ(x−y)) dy,

where h is the filter and
(1) F is a function belonging to L²(R; ϕ(z)dz), ϕ(z)dz being the standard Gaussian measure;
(2) u(·) is a positive function tending to infinity; it will be chosen such that the limit theorems for U_t(ξ,x) can be deduced from those obtained for Ū_t(ξ,x);
(3) h is a continuous bounded function such that h ∈ L²(R) and, for 0 < β < 1, where ȟ(x) = h(−x) and ∗ denotes the convolution;
(4) h(t,y) is the scaling of h given by v(t) and w(t), positive functions tending to infinity as t → ∞ and satisfying, for any δ > 0.
Remark 2.1. In the two examples considered below, Burgers' turbulence and the fourth-order heat equation, we will choose v(t) = w²(t) with w(t) = t^{1/2} in the former case and v(t) = w(t) with w(t) = t^{1/4} in the latter. These choices correspond to the type of scaling indicated in Section 1, which stabilizes the solution at infinity.
We are looking for the asymptotic behavior of U_t(ξ,x) and Ū_t(ξ,x) as t → ∞.

Asymptotic variance of U_t(ξ,x)
Recall that the set of Hermite polynomials H_m, defined by H_m(x) = (−1)^m e^{x²/2} (d^m/dx^m) e^{−x²/2}, is a complete orthogonal system in the Hilbert space L²(R, ϕ(z)dz).
Hence the function F has the Hermite expansion

F(x) = ∑_{m=0}^{∞} (C_m/m!) H_m(x),  where C_m = ∫_R F(x) H_m(x) ϕ(x) dx.

Note that E[F²(ξ(0))] = ∑_{m=0}^{∞} C_m²/m! < ∞. Let us assume that C₁ ≠ 0. Recall also Mehler's formula [16], which allows computing easily the variance of nonlinear functionals of stationary Gaussian processes:

E[H_m(ξ(x)) H_k(ξ(y))] = δ_{mk} m! Σ^m(x−y).

We will consider the cases when Σ satisfies one of the following two assumptions.
(H1) Σ is such that κ ∈ L^p((0,∞)) for p = 1, 2.
(H2) There exist 0 < α < 1 and a slowly varying function L such that Σ(x) = L(x) x^{−α}; notice that then Σ ∈ L^p((0,∞)) for every p > 1/α.
By using (3.4), (3.5), (3.6), and (3.7), we compute the variance V of U_t(ξ,x) and we obtain the following theorem.
Theorem 3.1. (1) If Σ satisfies assumption (H1), then the suitably normalized variance V converges to a finite limit; (2) if Σ satisfies assumption (H2), then the normalized variance converges to a limit involving the constant γ_α defined in (2.2).
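Both ingredients of this section, the Hermite coefficients C_m and the consequence of Mehler's formula E[H_m(X)H_k(Y)] = δ_{mk} m! ρ^m for a standard bivariate Gaussian pair with correlation ρ, can be verified with Gauss-Hermite quadrature. The test function F(x) = x² and the correlation value below are arbitrary choices of ours:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# nodes/weights for the weight exp(-x^2/2); renormalize to the N(0,1) law
nodes, weights = hermegauss(40)
weights = weights / np.sqrt(2 * np.pi)

def H(m, x):
    """Probabilists' Hermite polynomial H_m."""
    c = np.zeros(m + 1); c[m] = 1.0
    return hermeval(x, c)

def cross_moment(m, k, rho):
    """E[H_m(X) H_k(Y)] for standard bivariate normal (X, Y), corr rho,
    via Y = rho*X + sqrt(1-rho^2)*Z and a tensorized quadrature."""
    X, Z = nodes[:, None], nodes[None, :]
    Y = rho * X + np.sqrt(1.0 - rho**2) * Z
    W = weights[:, None] * weights[None, :]
    return float((W * H(m, X) * H(k, Y)).sum())

rho = 0.6
mehler = {(m, k): cross_moment(m, k, rho) for m in range(4) for k in range(4)}

# Hermite coefficients C_m = E[F(N) H_m(N)] for F(x) = x^2: C_0 = 1, C_2 = 2
F_vals = nodes**2
C = [float((weights * F_vals * H(m, nodes)).sum()) for m in range(4)]
```

The quadrature is exact for these polynomial integrands, so the cross moments reproduce δ_{mk} m! ρ^m to machine precision.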
Proof. Let us consider the first case. We have the following estimate. Then the result follows, under (H1) and formula (3.10), by applying Fubini's theorem and Lebesgue's dominated convergence theorem. To apply Fubini's theorem, we bound the corresponding double integral; moreover, since κ(·) is bounded, the dominated convergence theorem yields the result.
Let us assume (H2). On the one hand, if k < 1/α, we have an equivalence at infinity, denoted below by the symbol ∼. On the other hand, for k ≥ 1/α, we use that Σ ∈ L^k for all k > 1/α. Letting N_α := max{k : k < 1/α}, we conclude, because the two last terms in the above sum tend to zero.
Remark 3.2. In the preceding estimation, we assumed C₁ ≠ 0; otherwise, the rate of convergence would be determined by the first nonzero term in the Hermite expansion. For instance, if C₁ = 0 and C₂ ≠ 0, assumption (H2) for 1/2 < α < 1 entails that Σ² belongs to L¹, and we recover the behavior of the first case. Hence only the case 0 < α < 1/2 must be considered; the rate of convergence is then v(t)/L(w(t))w^{1−α}(t) and the variance limit is C₂² γ_{2α}.
From the above theorem, we can deduce the following proposition.
Proposition 3.3. If Σ satisfies assumption (H1), then the normalized difference between U_t(ξ,x) and Ū_t(ξ,x) tends to zero in L²; the same holds, with the corresponding normalization, if Σ satisfies assumption (H2).
Proof. Assumption (H1) yields the needed bound. To show this, it is enough to prove the stated estimate; we consider only the first integral, the second being similar. The second part of the proposition is straightforward.

Main result: central limit theorem
We have the following two theorems, which give the central limit theorem for the suitably normalized functional U_t(ξ,x): Theorem 4.1 corresponds to the case in which Σ satisfies (H2), and Theorem 4.2 to the case in which Σ satisfies (H1).

Proof of theorems
By using Proposition 3.3, we only need to prove the above theorems for Ū_t(ξ,x) instead of U_t(ξ,x).
Proof of Theorem 4.1. We have the decomposition (5.1). Using Theorem 3.1, the second term tends to zero in probability as t → ∞; the first one is Gaussian with mean zero, and the computation of its asymptotic variance yields the result.
Proof of Theorem 4.2. The proof of this theorem, strongly inspired by the methods of Malevich [12] and Berman [3], will be divided into several lemmas. Let M be a positive integer, and let us define the corresponding truncation. We can write Σ(x) = ∫_{−∞}^{∞} e^{ixλ} g(λ) dλ, where g is the spectral density of the process ξ. The process ξ has the Wiener spectral representation ξ(x) = ∫_{−∞}^{∞} e^{ixλ} √(g(λ)) dW(λ), where W is a complex Brownian motion satisfying W(−λ) = W̄(λ). Let us suppose that the L² norm of φ with respect to Lebesgue measure equals one. We then define the approximating process ξ_ε; its covariance function implies that ξ_ε is 1/ε-dependent (since ψ vanishes outside of [−1,1]).
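The role of the m-dependent approximating process can be illustrated in discrete time: a moving average of white noise over a window of m+1 points is a stationary Gaussian sequence that is exactly m-dependent, a discrete analogue of the band-limited spectral construction above. The window shape and sizes below are our own illustrative choices:

```python
import numpy as np

def m_dependent_gaussian(n, m, rng):
    """Stationary Gaussian sequence that is exactly m-dependent: a moving
    average of iid N(0,1) noise with a flat window of m+1 taps, so the
    covariance vanishes for all lags larger than m."""
    window = np.ones(m + 1) / np.sqrt(m + 1.0)   # unit-variance normalization
    z = rng.standard_normal(n + m)
    return np.convolve(z, window, mode="valid")   # length n

def sample_cov(x, lag):
    xc = x - x.mean()
    return float(np.mean(xc[: x.size - lag] * xc[lag:]))

rng = np.random.default_rng(1)
m = 5
xi_eps = m_dependent_gaussian(200_000, m, rng)
# theoretical covariance at lag k <= m is (m + 1 - k) / (m + 1), 0 beyond
```

Empirical covariances match the triangular theoretical covariance up to Monte Carlo error and vanish past lag m, which is the property the Lindeberg/Hoeffding-Robbins argument exploits.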
Lemma 5.1. For every ε > 0, there exists a process ξ_ε that is 1/ε-dependent and close to ξ in the L² sense.
Proof. We have (5.9). Let us consider the case k = 1. By using the spectral representation, we obtain the stochastic integral of ∫ e^{−isλ} h(s) ds with respect to dW(λ), as in (5.11). Thus we can write the remainder as J(t,ε). Let us show that E[J(t,ε)²] converges to zero, when t tends to infinity, uniformly in ε. In fact, using the Itô-Wiener isometry and (5.12), we get (5.14). The same type of computation holds true for the process ξ_ε.
Hence we get the claim. For k ≥ 2, we use a decomposition involving σ_ε(z) = E[ξ(y+z) ξ_ε(y)], and we consider each of the terms in the above sum separately. It is easy to see that the limits in (5.19) hold; we finish by studying the middle term of (5.17). For the next estimate, by using (3.10) we get the required bound, since the integrand is the tail of a convergent series. Finally, we give a version of the Hoeffding-Robbins theorem [9]; the proof is included for completeness.
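The Hoeffding-Robbins theorem asserts asymptotic normality of normalized sums of an m-dependent stationary sequence, with limit variance γ₀ + 2∑_{k=1}^{m} γ_k. A quick Monte Carlo check, where the 2-dependent sequence is our own toy example:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 2000, 2000

# X_i = Z_i + Z_{i+1} + Z_{i+2} is 2-dependent with
# gamma_0 = 3, gamma_1 = 2, gamma_2 = 1, hence limit variance 3 + 2*(2+1) = 9
z = rng.standard_normal((reps, n + 2))
x = z[:, :-2] + z[:, 1:-1] + z[:, 2:]
s = x.sum(axis=1) / np.sqrt(n)           # normalized sums S_n / sqrt(n)

# fraction of samples within one limiting standard deviation (sigma = 3)
frac_one_sigma = float(np.mean(np.abs(s) < 3.0))
```

The empirical variance of the normalized sums approaches 9 and the one-sigma frequency approaches the Gaussian value 0.683, in line with the theorem.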
Lemma 5.3. If Σ satisfies (H1), then the normalized functional built from the 1/ε-dependent process ξ_ε satisfies the central limit theorem.
Proof. For a fixed integer M, let us define the block sums in (5.28). By using (3.5), we have (5.30), with the terms given in (5.32). By computations similar to those of the preceding section, we control each term, and for the first and last intervals we have the bound

∫∫ h(t,z) h(t,z−y) dz dy,  (5.34)

together with (5.37). To prove this statement, we have, for every k, an expansion with the notation of (5.39), where A_M = {j = (j₁, j₂, j₃) : …}. Given that Σ_ε(|y|) = 0 when |y| > 1/ε, and that Σ_ε and h are bounded, the remainder tends to zero as t → ∞. This yields the result.
Theorem 4.2 then follows from the three preceding lemmas, Theorem 3.1, and Proposition 3.3.

Applications
Theorems 4.1 and 4.2 can be applied to study the asymptotic behavior not only of the solutions of linear differential equations with random initial conditions, but also of the solutions of some particular nonlinear equations, such as Burgers' equation.

Burgers' equation.
Burgers' equation with random initial data is known as the Burgers' turbulence problem. Burgers' turbulence has been considered as a model for various physical phenomena, from hydrodynamic turbulence to the evolution of the density of matter in the universe. The equation can be viewed as a simplified version of the Navier-Stokes equation with the pressure term omitted. See [1, 4, 5, 11, 17].
We consider the one-dimensional Burgers' equation with viscosity parameter μ > 0,

∂u/∂t + u ∂u/∂x = μ ∂²u/∂x²,  t > 0, x ∈ R.  (6.1)

Let us suppose that the initial potential is v(x) = F(ξ(x)), where ξ denotes a mean zero stationary Gaussian process with covariance function Σ.
It is well known (see [14]) that the solution of problem (6.1) can be written, via the Hopf-Cole transformation, in terms of the functional J(x,t). If Σ satisfies (H1) or (H2), then V[J(x,t)] → 0 as t → ∞. This implies that J(x,t) tends to E[J(x,t)] = C₀ in probability, uniformly in x (C₀ denoting the first Hermite coefficient of F). Then we are led to consider only the remaining functional: (1) if Σ satisfies (H1), and (2) if Σ satisfies (H2), we obtain the corresponding limits for p = 1, 2, and we have the expression (6.13) for the asymptotic variances.
Example 6.3. Consider F(x) = x²; in this case, a straightforward computation gives the Hermite coefficients. According to Remark 3.2, if (H2) holds, for 0 < α < 1/2 we have the rate and variance limit indicated there. For a slightly different treatment of this problem, see Leonenko and Orsingher [11].
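For Example 6.3 one can check the Hermite coefficients numerically. Taking F(x) = x² and, for illustration, μ = 1 (our own choice) with the Hopf-Cole nonlinearity g(x) = e^{−F(x)/2μ} that carries the coefficients in this section, Gaussian integration gives the closed forms C₀ = 1/√2 and C₂ = −1/(2√2), with odd coefficients vanishing:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

nodes, weights = hermegauss(60)
weights = weights / np.sqrt(2 * np.pi)   # standard Gaussian expectation

def hermite_coeff(g_vals, m):
    """C_m = E[g(N) H_m(N)], N ~ N(0,1), via Gauss-Hermite quadrature."""
    c = np.zeros(m + 1); c[m] = 1.0
    return float((weights * g_vals * hermeval(nodes, c)).sum())

mu = 1.0
g = np.exp(-nodes**2 / (2.0 * mu))   # Hopf-Cole nonlinearity for F(x) = x^2
C = [hermite_coeff(g, m) for m in range(5)]
# Gaussian integration: C_0 = 1/sqrt(2), C_1 = C_3 = 0, C_2 = -1/(2*sqrt(2))
```

In particular the first-order coefficient vanishes, which is why Remark 3.2 (Hermite rank two) governs the rate in this example.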

Extension. The results can be easily generalized to initial conditions of the form F(ξ(y)) with F : Rᵈ → R. We consider the initial process defined as ξ(y) = (ξ₁(y),...,ξ_d(y)), (6.19) where the ξ_i(y), i = 1,...,d, are independent standard stationary Gaussian processes with common covariance function Σ. This type of problem has been considered, using the method of moments, in [5]; however, their approach allows a more general form of the function F and of the process ξ.
In this case, the coefficients C in (6.20) are the Hermite coefficients of the function e^{−F(x)/2μ}. Here the process I_t(ξ) is defined by

I_t(ξ) = ∫_{−∞}^{∞} h(t,y) e^{−F(ξ₁(x−y),...,ξ_d(x−y))/2μ} dy.  (6.21)

By the independence assumption, the covariances factorize coordinatewise. With the obvious modifications, we can also prove that if (H1) or (H2) holds, then the analogous limit theorems hold; the expression for the variance is given in (6.27).
Remark 6.4. The extension of Theorem 4.2 is immediate by using the independence between the coordinate processes. This allows us to approximate the process (ξ₁(x),...,ξ_d(x)) by the d-dimensional process (ξ_{1,ε}(x),...,ξ_{d,ε}(x)), where each coordinate is defined in the same way as in the proof of Theorem 4.2, the coordinates remaining independent.
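The coordinatewise factorization used in this extension, namely E[∏ᵢ H_{kᵢ}(ξᵢ(x)) H_{kᵢ}(ξᵢ(y))] = ∏ᵢ kᵢ! Σ^{kᵢ}(x−y) for independent coordinates, can be checked by simulation; d = 2, the multi-index, and the correlation value are our own illustrative choices:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def H(m, x):
    """Probabilists' Hermite polynomial H_m."""
    c = np.zeros(m + 1); c[m] = 1.0
    return hermeval(x, c)

rho = 0.4      # plays the role of Sigma(x - y)
k = (2, 3)     # multi-index for d = 2 independent coordinate processes

# Monte Carlo estimate of E[H_2(X1)H_2(Y1) * H_3(X2)H_3(Y2)], where the two
# coordinate pairs are independent and corr(Xi, Yi) = rho within each pair
rng = np.random.default_rng(3)
N = 1_000_000
X = rng.standard_normal((2, N))
Y = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal((2, N))
joint_mc = float(np.mean(H(k[0], X[0]) * H(k[0], Y[0]) *
                         H(k[1], X[1]) * H(k[1], Y[1])))
pair_mc = float(np.mean(H(2, X[0]) * H(2, Y[0])))

# independence + Mehler predict: prod_i k_i! * rho^{k_i} = 2! * 3! * rho^5
predicted = 2.0 * 6.0 * rho**5
```

The simulated joint moment matches 12 ρ⁵, and the single-pair moment matches 2! ρ², up to Monte Carlo error.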

Linear differential equations.
The heat-type equation of order higher than two has been considered for a long time. This study began in 1960 with the seminal paper of Krylov [10]; since then, a huge literature on these equations has developed. We can mention in particular the papers [6, 7] of Hochberg, who used the fundamental solutions of this type of equation to build signed measures related to the Wiener measure. For these measures, he also obtained results analogous to the law of large numbers and the central limit theorem. More recently, similar problems have been treated in [8, 13]. Let us study, for instance, the fourth-order heat-type equation, the extension to higher order being always possible.
Thus let us define the fourth-order heat-type equation in R₊ × R with random initial condition:

∂u/∂t = −(1/2) ∂⁴u/∂x⁴,  u(0,x) = F(ξ(x)).  (6.32)
We obtain, as a corollary of Theorem 3.1 and Proposition 3.3, the following theorem.
We keep the same hypotheses as in the general case. It is well known that problem (6.32) admits a solution of the form u(t,x) = ∫_{−∞}^{∞} p_t(x−y) F(ξ(y)) dy. We conclude that v(t) = w(t) = t^{1/4}, in accordance with our previous computations; choosing u(t) = t^{1/2}, the functions v, w, and u verify (2.4). We then define

Ū_t(ξ,x) = ∫_{−u(t)}^{u(t)} h(t,y) F(ξ(x−y)) dy,  U_t(ξ,x) = ∫_{−∞}^{∞} h(t,y) F(ξ(x−y)) dy.  (6.35)