Quantitative Fourth Moment Theorem of Functions on the Markov Triple and Orthogonal Polynomials

In this paper, we consider a quantitative fourth moment theorem for functions (random variables) defined on the Markov triple (E, μ, Γ), where μ is a probability measure and Γ is the carré du champ operator. A new technique is developed to derive a fourth moment bound in the normal approximation of a random variable associated with a general Markov diffusion generator, not necessarily belonging to a fixed eigenspace, whereas previous works deal only with random variables belonging to a fixed eigenspace. Applying this technique to the setting studied by Bourguin et al. (2019), we obtain a new result in the case where the chaos grade of an eigenfunction of the Markov diffusion generator is greater than two. We also introduce a new notion of chaos grade, called the lower chaos grade, to obtain a better estimate than the previous one.


Introduction
The aim of this paper is to find a fourth moment bound in the normal approximation of a random variable related to a general Markov diffusion generator. A central limit theorem, known as the fourth moment theorem, was first discovered by Nualart and Peccati in [1], where the authors found a necessary and sufficient condition for a sequence of random variables, belonging to a fixed Wiener chaos, to converge in distribution to a Gaussian random variable.
Throughout this paper, we use the mathematical expectation for the integral on a probability space (Ω, F, ℙ), so that, for example, the integral ∫_Ω F dℙ is denoted by E[F], and a real-valued measurable function defined on a probability space will be called a random variable. Also, we define the variance of a random variable F in L²(Ω, ℙ) as Var(F) = E[(F − E[F])²].

Theorem 1 (fourth moment theorem). Fix an integer q ≥ 2, and consider a sequence of random variables {F_n, n ≥ 1} belonging to the qth Wiener chaos with E[F_n²] = 1 for all n ≥ 1. Then F_n →_L Z if and only if E[F_n⁴] → 3, where Z is a standard Gaussian random variable and the notation →_L denotes convergence in distribution.
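Theorem 1 can be illustrated numerically. The following is a minimal Monte Carlo sketch with an example of our own choosing (not from the paper): F_n = (2n)^{-1/2} ∑_{i≤n} (X_i² − 1), with X_i i.i.d. standard Gaussian, lies in the second Wiener chaos, satisfies E[F_n²] = 1 and E[F_n⁴] = 3 + 12/n, so the fourth moment converges to 3 and F_n is asymptotically standard Gaussian.

```python
import numpy as np

# Monte Carlo sketch of Theorem 1 for an assumed example in the second Wiener
# chaos: F_n = (2n)^(-1/2) * sum_{i=1}^n (X_i^2 - 1) with X_i i.i.d. N(0, 1).
# One can check E[F_n^2] = 1 and E[F_n^4] = 3 + 12/n, so E[F_n^4] -> 3.
rng = np.random.default_rng(0)

def sample_F(n, n_samples):
    """Draw n_samples independent realizations of F_n."""
    X = rng.standard_normal((n_samples, n))
    return (X**2 - 1).sum(axis=1) / np.sqrt(2 * n)

for n in (5, 50, 500):
    F = sample_F(n, 200_000)
    print(f"n={n:3d}  E[F^2]~{np.mean(F**2):.3f}  E[F^4]~{np.mean(F**4):.3f}"
          f"  (exact {3 + 12/n:.3f})")
```

As n grows, the empirical fourth moment approaches 3, in line with the equivalence stated in Theorem 1.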
Such a result constitutes a dramatic simplification of the method of moments and cumulants. In the paper [2], the fourth moment theorem is expressed in terms of the Malliavin derivative. However, the results given in [1, 2] do not provide any estimates, whereas the authors in [3] prove that Theorem 1 can be recovered from an estimate of the Kolmogorov (or total variation, or Wasserstein) distance obtained by combining Malliavin calculus (see, e.g., [4-6]) with Stein's method for the normal approximation (see, e.g., [7-9]). We also refer to the papers [3, 4, 10-13] for an exposition of these techniques.
For estimates in the normal approximation, we consider the Kolmogorov distance

d_Kol(F, Z) = sup_{z∈ℝ} |ℙ(F ≤ z) − ℙ(Z ≤ z)|,    (2)

where Z is a standard Gaussian random variable. The following statement is the remarkable achievement of the Nourdin-Peccati approach (see Theorem 3.1 in [3]). Let F ∈ D^{1,2} be such that E[F] = 0 and E[F²] = 1. Then, the following bound holds:

d_Kol(F, Z) ≤ √(Var(Γ₁(F))),    (3)

where Γ₁(F) = ⟨DF, −DL⁻¹F⟩_H. The notations D^{1,2}, D, and L⁻¹, related to Malliavin calculus, are explained in [5] or [6]. In the particular case where F is an element of the qth Wiener chaos of X with E[F²] = 1, the upper bound in (3) becomes

d_Kol(F, Z) ≤ √((q − 1)/(3q)) √(E[F⁴] − 3).    (4)

Here, E[F⁴] − 3 is just the fourth cumulant κ₄(F) of F.
Recently, the author in [14] proves, from a purely spectral point of view, that the fourth moment theorem also holds in the general framework of Markov diffusion generators. Precisely, under a certain spectral condition on a Markov diffusion generator, a sequence of eigenfunctions of such a generator satisfies the bound given in (4) with a different constant. In particular, this new technique avoids the use of a complicated product formula. The authors in [15] introduce a Markov chaos of eigenfunctions that is less restrictive than the notion of Markov chaos defined in [14], and also obtain a quantitative fourth moment theorem for the convergence of the eigenfunctions towards Gaussian, gamma, and beta distributions. Furthermore, Bourguin et al. in [16] prove that the convergence of elements of a Markov chaos to a Pearson distribution can still be bounded using just the first four moments, by an estimate (5) in which d is a suitable distance, Z is a random variable with law belonging to the Pearson family, and G is a chaotic random variable defined on (E, μ). Here, P and Q in (5) are polynomials of degree four, and the constants ξ₁ and ξ₂ are determined in terms of the chaos grade defined in Definition 3.5 of [16].
In this paper, we find a bound of the form (6) for the Kolmogorov distance between a standard Gaussian random variable Z and a random variable F defined on (E, μ, Γ) and related to a Markov diffusion generator L with invariant measure μ. Since the central limit theorem (normal approximation) is usually the main topic in convergence in distribution, we confine our interest to the normal approximation.
In the line of this research, the motivations and contributions of our work in comparison with other works are summarized below:

(i) Compared to previous works [14-16], our study is not limited to an element belonging to a fixed eigenspace of Markov chaos. Our result is a notable extension in comparison with other works dealing only with random variables in a fixed eigenspace. To achieve our goal, the starting point is the bound (7) obtained in [16], in which L⁻¹ is the pseudo-inverse of the underlying Markov generator L and Γ is the carré du champ operator. However, in order to find the fourth moment bound (6), we introduce a new technique relying on two types of operators given in [17, 18]. First, we prove that the right-hand side of (7) can be represented as the sum of two integrals involving the operators mentioned above, and we use this representation to prove that the fourth moment bound (6) holds. This is the main innovation of this work.

(ii) If the upper chaos grade of F is strictly greater than two, then the constant ξ₂ and Q(G) in the bound (5) are given as follows: ξ₂ > 0 and Q(G) = Var(G²). This means that the fourth moment theorem in Theorem 1 does not apply. However, the technique developed in this paper can eliminate the second term in (5), which means that, for such a random variable G, the fourth moment theorem holds, unlike the previous result in [16], where the upper bound (5) for a sequence {F_n} of chaotic random variables is given in (111) of Remark 12.

(iii) In this paper, another notion of chaos grade, called the lower chaos grade, is introduced and used to provide a better estimate than the one obtained from (5) in [16]. Throughout this paper, the existing chaos grade in Definition 3.5 of [16] will be called the upper chaos grade.

The rest of the paper is organized as follows: Section 2 introduces some basic notations and reviews results on the Markov diffusion generator.
In Section 3, a new notion of chaos grade for a finite sum of Markov chaoses is defined, and orthogonal polynomials are considered in order to illustrate the concept of chaos grades. In Section 4, our main result is given in Theorem 8. Finally, as an application of our main result, in Section 5, we derive upper bounds in the Kolmogorov distance for an eigenfunction belonging to a fixed Markov chaos.

Preliminaries
In this section, we recall some basic facts about the Markov diffusion generator. The reader is referred to [19] for a more detailed explanation. We begin with the definition of the Markov triple (E, μ, Γ) in the sense of [19]. To the infinitesimal generator L of a Markov semigroup P = (P_t)_{t≥0} with L²(μ)-domain D(L), we associate a bilinear form Γ. Assume that we are given a vector space A₀ ⊆ D(L) such that for every pair (F, G) of random variables defined on a probability space (E, F, μ), the product FG is in D(L) (A₀ is an algebra). On this algebra A₀, the bilinear map (carré du champ operator) Γ is defined by

Γ(F, G) = (1/2) [L(FG) − F LG − G LF]    (8)

for every (F, G) ∈ A₀ × A₀. As the carré du champ operator Γ and the measure μ completely determine the symmetric Markov generator L, we will work throughout this paper with the Markov triple (E, μ, Γ), equipped with a probability measure μ on a state space (E, F) and a symmetric bilinear map Γ on A₀ × A₀ such that Γ(F, F) ≥ 0.
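The carré du champ identity can be made concrete with a small symbolic sketch (our own illustration): for the one-dimensional Ornstein-Uhlenbeck generator Lf = f″ − xf′, the operator Γ(F, G) is known to reduce to F′G′.

```python
import sympy as sp

# Symbolic sketch of the identity Γ(F, G) = (1/2)(L(FG) - F·LG - G·LF) for the
# 1-d Ornstein-Uhlenbeck generator L f = f'' - x f', for which Γ(F, G)
# reduces to F'G'.  The test functions F and G below are our own choice.
x = sp.symbols('x')
L = lambda f: sp.diff(f, x, 2) - x * sp.diff(f, x)

F = x**3 - 3*x   # the Hermite polynomial H_3
G = x**2 - 1     # the Hermite polynomial H_2

gamma = sp.expand((L(F * G) - F * L(G) - G * L(F)) / 2)
assert sp.expand(gamma - sp.diff(F, x) * sp.diff(G, x)) == 0  # Γ(F, G) = F'G'
print(gamma)  # 6*x**3 - 6*x, i.e. F'(x) G'(x)
```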
Next, we construct the domain D(E) of the Dirichlet form E by completion of A₀, and then obtain, from this Dirichlet domain, the domain D(L) of L. Recall that the Dirichlet form E is given by

E(F, G) = E[Γ(F, G)].    (9)

If A₀ is endowed with the norm

‖F‖_E = (‖F‖²_{L²(μ)} + E(F, F))^{1/2},    (10)

the completion of A₀ with respect to this norm turns it into a Hilbert space embedded in L²(μ). Once the Dirichlet domain D(E) is constructed, the domain D(L) ⊆ D(E) is defined as the set of all elements F ∈ D(E) such that, for all G ∈ D(E),

|E(F, G)| ≤ c_F ‖G‖_{L²(μ)},    (11)

where c_F is a finite constant depending only on F. On these domains, a relation between L and Γ holds, namely the integration by parts formula

E[Γ(F, G)] = −E[F LG] = −E[G LF].    (12)

By the integration by parts formula (12) and Γ(F, F) ≥ 0, the operator −L is nonnegative and symmetric, and therefore the spectrum S of −L is contained in [0, ∞).
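The integration by parts formula (12) can likewise be checked symbolically for the Ornstein-Uhlenbeck triple, where μ is the standard Gaussian measure and Γ(F, G) = F′G′ (a sketch with test functions of our own choice):

```python
import sympy as sp

# Sketch of integration by parts E[Γ(F, G)] = -E[F LG] for the 1-d
# Ornstein-Uhlenbeck triple: μ is the standard Gaussian measure on ℝ,
# L f = f'' - x f', and Γ(F, G) = F'G'.
x = sp.symbols('x')
mu = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)      # invariant (Gaussian) density

def Emu(f):
    """Expectation under the Gaussian measure μ."""
    return sp.integrate(f * mu, (x, -sp.oo, sp.oo))

L = lambda f: sp.diff(f, x, 2) - x * sp.diff(f, x)
F, G = x**3, x**2 + x

lhs = Emu(sp.diff(F, x) * sp.diff(G, x))         # E[Γ(F, G)]
rhs = -Emu(F * L(G))                             # -E[F LG]
assert sp.simplify(lhs - rhs) == 0
```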
A full Markov triple is a standard Markov triple for which there is an extended algebra A ⊇ A₀, with no requirements of integrability for elements of A, satisfying the conditions given in Section 3.4.3 of [19]. In particular, the diffusion property holds: for any smooth function ψ and any F ∈ A,

L ψ(F) = ψ′(F) LF + ψ″(F) Γ(F, F).    (13)

We also define the operator L⁻¹, called the pseudo-inverse of L, satisfying, for any F ∈ D(L),

L L⁻¹ F = L⁻¹ L F = F − E[F].    (14)

Chaos Grade and Orthogonal Polynomials

3.1. Chaos Grade. Fix a probability space (E, F, μ). We assume that −L has a discrete spectrum Λ = {0 = λ₀ < λ₁ < ⋯ < λ_k < ⋯}. Obviously, zero is always an eigenvalue, with constant eigenfunction satisfying −L(1) = 0. By the assumption on the spectrum of L, one has the chaos decomposition

L²(E, μ) = ⊕_{k≥0} Ker(L + λ_k Id).    (15)

Now, we define chaotic random variables as follows.
Definition 2. The random variable F is called chaotic if there exist u > 1 and g < 1 such that uλ_{ℓ_M} ∈ Λ and gλ_{ℓ_M} ∈ Λ satisfy (16). In this case, the largest number g satisfying (16) is called the lower chaos grade of F. On the other hand, the smallest number u satisfying (16) is called the upper chaos grade of F.

Remark 3.
(1) The authors in [16] define the chaos grade of F, corresponding to the upper chaos grade in Definition 2, in the case where F is an eigenfunction with respect to an eigenvalue of the generator L. In this paper, we introduce the lower chaos grade of F, which will be used to obtain a better estimate for the fourth moment theorem than the estimate given in Theorem 3.9 of [16] in the particular case where the target distribution is a standard Gaussian distribution.

(2) The square F² can be expanded as a sum of eigenfunctions with the eigenvalue of the largest magnitude λ^{max}_{ℓ_i, ℓ_j} and the eigenvalue of the smallest magnitude λ^{min}_{ℓ_i, ℓ_j}. From (17), it follows that (18) holds, where λ_min = min_{1≤i,j≤M} λ^{min}_{ℓ_i, ℓ_j}.

(3) In the paper [16], the authors describe how the chaos grade, corresponding to the upper chaos grade in our work, behaves under tensorization. An analogous statement holds for the lower chaos grade: if the factors have lower chaos grades g_{ℓ_i}, then the lower chaos grade of their tensorization is expressed in terms of the g_{ℓ_i}. See Corollary 4.1 in [16] for the upper chaos grade of F.
Next, we consider a finite-dimensional eigenfunction and a finite sum of eigenfunctions to illustrate the concept of chaos grades.

3.2. Ornstein-Uhlenbeck Operator.
We consider the d-dimensional Ornstein-Uhlenbeck generator L, defined for any test function f by

Lf(x) = ∑_{i=1}^{d} [∂²f/∂x_i²(x) − x_i ∂f/∂x_i(x)],

whose spectrum is Λ = {0, 1, 2, ⋯}. Let us set

F = ∑_{|m|=q} a_m ∏_{i=1}^{d} H_{m_i}(x_i),

where H_{m_i}, m_i = 0, 1, ⋯, denotes the Hermite polynomial of order m_i. Then, we have that F ∈ Ker(L + q Id). By the well-known product formula of Hermite polynomials,

H_m H_n = ∑_{r=0}^{m∧n} r! (m r)(n r) H_{m+n−2r},

applied coordinatewise, together with a change of variables, the square F² expands in the Hermite basis with coefficients

C(m, n, r) = a_m a_n r! (m r)(n r),    (26)

where (m r) denotes the binomial coefficient. Hence, F² can be expanded as a sum of eigenfunctions whose smallest eigenvalue, among positive eigenvalues, equals 2. Obviously, λ_min = 2, so that the lower chaos grade is g = 2/q. On the other hand, the largest eigenvalue in the expansion of F², as a sum of eigenfunctions, is given by |m + n| = |m| + |n| = 2q. Therefore, the upper chaos grade is given by u = 2.
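The eigenvalues appearing in the expansion of H_q² can be enumerated directly from the product formula; the following short sketch (our own illustration) confirms that the largest is 2q and the smallest positive one is 2.

```python
# Combinatorial sketch of the chaos grades of F = H_q under the
# Ornstein-Uhlenbeck generator: the product formula
#   H_m H_n = sum_{r=0}^{m ∧ n} r! (m r)(n r) H_{m+n-2r}
# shows that H_q^2 contains exactly the degrees (eigenvalues) 2q - 2r, so the
# largest is 2q (upper grade u = 2) and the smallest positive one is 2
# (lower grade g = 2/q).
def product_degrees(m, n):
    """Degrees of the Hermite polynomials appearing in the expansion of H_m H_n."""
    return sorted({m + n - 2*r for r in range(min(m, n) + 1)})

q = 7
degrees = product_degrees(q, q)
positive = [d for d in degrees if d > 0]
assert max(degrees) == 2 * q       # upper chaos grade u = 2q/q = 2
assert min(positive) == 2          # lower chaos grade g = 2/q
print(degrees)                     # [0, 2, 4, 6, 8, 10, 12, 14]
```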
If F = ∑_{i=1}^{M} H_{q_i}(x), where 0 < q₁ < ⋯ < q_M, then the lower and upper chaos grades are given, respectively, by

g = (2 ∧ min_{1≤i<j≤M} (q_j − q_i))/q_M,  u = 2.

3.3. Jacobi Operator. For α, β > −1, we consider the d-dimensional Jacobi generator L, defined for any test function f by

Lf(x) = ∑_{i=1}^{d} [(1 − x_i²) ∂²f/∂x_i²(x) − ((α + β) x_i + α − β) ∂f/∂x_i(x)].

Its spectrum Λ is of the form

Λ = {λ_k : k = 0, 1, 2, ⋯},

where λ_k = k(k + α + β − 1). Let us set F as a linear combination of the products ∏_{i=1}^{d} P^{α,β}_{m_i}(x_i), where P^{α,β}_{m_i}(x), m_i = 0, 1, ⋯, denotes the m_i-th Jacobi polynomial. Recall that ₚF_q denotes the generalized hypergeometric function with p numerator and q denominator parameters, given by

ₚF_q(a₁, ⋯, a_p; b₁, ⋯, b_q; z) = ∑_{n=0}^{∞} [(a₁)_n ⋯ (a_p)_n / ((b₁)_n ⋯ (b_q)_n)] (zⁿ/n!),

where (a)_n = a(a + 1)⋯(a + n − 1) denotes the Pochhammer symbol. Then, the Jacobi polynomials can be expressed as

P^{α,β}_m(x) = ((α)_m/m!) ₂F₁(−m, m + α + β − 1; α; (1 − x)/2).

The well-known product formula of Jacobi polynomials yields an expansion (37) of the product P^{α,β}_m P^{α,β}_n with linearization coefficients C(m, n, r). First, observe that (38) holds. It follows, from (39), that the corresponding estimate holds for any indices m and x satisfying the stated constraint, where the notation deg(F) denotes the degree of F.

Journal of Function Spaces
Successive applications of the arguments used for (39) yield the corresponding estimate. For any other index x such that |x| = deg(F) and λ_{|x|} = λ, we have, from (41), the inequality (42), so that the restriction (43) holds. Let f(x) = ∑_{i=1}^{d} m_i x_i. Now, we find a point yielding the maximum value of f(x) under the restriction given by (43). Obviously, Lagrange's method gives x_i = m_i/(2δ), where δ is a Lagrange multiplier. Plugging x_i = m_i/(2δ) into (43) yields 4δ² = 1, so that x_i = m_i for all i = 1, ⋯, d. Therefore, it follows from (40) that the maximal eigenvalue in the expansion of F² is attained at x = m. Hence, the upper chaos grade follows. Next, we find the lower chaos grade of F. From (37), the square of F can be expanded as a sum of eigenfunctions whose smallest eigenvalue, among positive eigenvalues, is ∑_{i=1}^{d} λ_{|m_i − n_i|}. When |m_i − n_i| = 0 for i ≠ k, ℓ ∈ {1, ⋯, d} and |m_i − n_i| = 1 for i = k, ℓ ∈ {1, ⋯, d}, the sum ∑_{i=1}^{d} λ_{|m_i − n_i|} attains its minimum value. Hence, λ_min = 2λ₁, so that the lower chaos grade is given by g = 2(α + β)/λ.
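The spectrum λ_k = k(k + α + β − 1) used in this subsection can be verified symbolically. The sketch below (our own illustration) uses the one-dimensional generator Lf = (1 − x²)f″ − ((α + β)x + α − β)f′, a normalization consistent with these eigenvalues; SymPy's `jacobi(k, a, b, x)` is the classical Jacobi polynomial, which corresponds to a = α − 1, b = β − 1 in this convention.

```python
import sympy as sp

# Sketch verifying the Jacobi spectrum λ_k = k(k + α + β - 1) for the 1-d
# generator L f = (1 - x^2) f'' - ((α + β) x + α - β) f'.  SymPy's classical
# jacobi(k, a, b, x) corresponds to a = α - 1, b = β - 1 here.
x = sp.symbols('x')
alpha, beta = sp.Rational(3, 2), sp.Rational(5, 2)   # sample parameters

def L(f):
    return ((1 - x**2) * sp.diff(f, x, 2)
            - ((alpha + beta) * x + alpha - beta) * sp.diff(f, x))

for k in range(1, 5):
    P = sp.jacobi(k, alpha - 1, beta - 1, x)
    lam = k * (k + alpha + beta - 1)
    assert sp.simplify(L(P) + lam * P) == 0          # L P = -λ_k P
```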

Fourth Moment Theorem
In this section, we derive an upper bound on the Kolmogorov distance d_Kol(F, Z), where F, not necessarily belonging to a fixed eigenspace, is a random variable related to a Markov diffusion generator L, and Z is a standard Gaussian random variable.
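The Kolmogorov distance can also be estimated empirically; the sketch below (our own illustration, with an example chaotic variable of our choosing) compares a genuinely Gaussian sample with a pure second-chaos variable.

```python
import numpy as np
from math import erf, sqrt

# Empirical sketch of the Kolmogorov distance
#   d_Kol(F, Z) = sup_z |P(F <= z) - P(Z <= z)|
# between a sample of F and a standard Gaussian Z.
def kolmogorov_distance(samples):
    """Sup-distance between the empirical CDF of samples and the N(0,1) CDF."""
    xs = np.sort(np.asarray(samples))
    n = len(xs)
    Phi = np.array([0.5 * (1 + erf(t / sqrt(2))) for t in xs])
    upper = np.max(np.abs(np.arange(1, n + 1) / n - Phi))   # ECDF from above
    lower = np.max(np.abs(np.arange(0, n) / n - Phi))       # ECDF from below
    return max(upper, lower)

rng = np.random.default_rng(1)
Z = rng.standard_normal(50_000)
F = (Z**2 - 1) / np.sqrt(2)       # a second-chaos variable, far from Gaussian
print(kolmogorov_distance(Z))     # small: the sample is Gaussian
print(kolmogorov_distance(F))     # bounded away from 0 (about 0.24)
```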

Lemmas.
We begin by stating a useful lemma, which will be used frequently in this section. The lemmas appearing in this section are well known in the particular case where F is a functional of a Gaussian random field.

Lemma 6.
Suppose that Γ_j(F) ∈ D(L), j = 0, 1, 2, with E[F] = 0. Then the identity (58) holds.

Proof. By the definition of the operator Γ₁* and the carré du champ operator Γ, we have the decomposition (59). Using the diffusion property, the first expectation in (59) can be written as (60). Lemma 4 shows that the second expectation in (59) can be computed as (61). The diffusion property shows that the third expectation in (59) can be represented as (62). Plugging (60), (61), and (62) into (59) yields (63). From Lemma 4 and (63), it follows that (64) holds, which in turn gives (65). Hence, the desired result follows.
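The diffusion property used repeatedly in the proof above, L ψ(F) = ψ′(F) LF + ψ″(F) Γ(F, F), can be checked symbolically for the one-dimensional Ornstein-Uhlenbeck generator, where Γ(F, F) = (F′)² (a sketch; ψ and F below are our own choice).

```python
import sympy as sp

# Sketch of the diffusion property L ψ(F) = ψ'(F) LF + ψ''(F) Γ(F, F) for the
# 1-d Ornstein-Uhlenbeck generator L f = f'' - x f' with Γ(F, F) = (F')^2.
x, y = sp.symbols('x y')
L = lambda f: sp.diff(f, x, 2) - x * sp.diff(f, x)

psi = y**4 + sp.sin(y)        # an arbitrary smooth function ψ
F = x**2 - 1

lhs = L(psi.subs(y, F))
rhs = (sp.diff(psi, y).subs(y, F) * L(F)
       + sp.diff(psi, y, 2).subs(y, F) * sp.diff(F, x)**2)
assert sp.simplify(lhs - rhs) == 0
```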
Lemma 7. For any random variable F related to a Markov diffusion generator L such that VarðΓ 1 ðFÞÞ > 0, one has that CðFÞ ≠ ∅.
Theorem 8 (fourth moment bound). Suppose that Γ_j(F) ∈ D(L), j = 0, 1, 2, with E[F] = 0 and Var(Γ₁(F)) > 0. If the law of F is absolutely continuous with respect to the Lebesgue measure, there exists a constant c ≠ 2 such that the bound (67) holds.

Proof. By Stein's equation, we have, for z ∈ ℝ,

ℙ(F ≤ z) − ℙ(Z ≤ z) = E[f_z′(F) − F f_z(F)],    (68)

where f_z is a solution of Stein's equation. Since E[F] = 0, we have F = LL⁻¹F. Therefore, by the integration by parts formula (12) and the derivation property of Γ, the right-hand side of (68) can be written as (69). Since ‖f_z‖_∞ ≤ 1, we have, from Lemma 5 and Lemma 7 together with (69), the bound (70).

Remark 9.
(1) If E[Γ₃(F)] ≥ 0, then we see, from the proof of Lemma 7, that c ∈ (−∞, 2), and hence the value inside the square root in (67) is positive. On the other hand, if E[Γ₃(F)] < 0, then c ∈ (2, ∞). This means that the value inside the square root of the upper bound is again positive.

(2) If 2E[Γ₃(F)] = E[Γ₃*(F)], then, by (70), F is a random variable having a standard Gaussian distribution. Conversely, suppose that F is a random variable having a standard Gaussian distribution. Then, we have −LF = F and Γ(F, −L⁻¹F) = Γ(F) = 1. Hence, 2Γ₃(F) = Γ₃*(F) = 0.

As far as we know, the following is the first result of a quantitative fourth moment theorem for a random variable F belonging to a sum of Wiener chaoses.

Corollary 10.
Let F = H_p(x) + H_q(x) for p > q ≥ 2, where H_p and H_q are Hermite polynomials of order p and q, respectively. Then, one has the bound (71), where the lower chaos grade g of F is given by

g = (2 ∧ (p − q))/p.    (72)

Proof. We compute the expectation E[Γ₃*(F)]. First, note the preliminary identity, and set the quantities A_i(c) and B_i(c), i = 1, 2, 3. Obviously, the well-known product formula and the definition of the carré du champ operator prove that, for i, j = p, q, the identity (76) holds. From the right-hand side of (76), it follows that (77) holds. On the other hand, since L⁻¹c = 0 for any constant c, we obtain (78), so that (79) holds. From (77) and (79), we have (80). Obviously, when c = 2/p, we have A₁(c) ≥ 0. Similarly, if c = (p − q)/p, then A₂(c) ≥ 0. Therefore, if c = (2 ∧ (p − q))/p, then A_i(c) ≥ 0 for i = 1, 2, 3, so that (83) holds. By a similar computation as for (77), one has (84). On the other hand, the identity (85) holds. Therefore, one has, from (84) and (85), that (86) holds. Using the same arguments as for A_i(c), i = 1, 2, 3, yields that B_i(c) ≥ 0, i = 1, 2, 3, for c = (2 ∧ (p − q))/p. This implies (87). Similarly, (88) and (89) hold. Combining the results (83), (87), (88), and (89), we obtain (90). Hence, the proof of this corollary is completed.
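The value of the lower chaos grade in Corollary 10 can be cross-checked combinatorially via the Hermite product formula; the sketch below (our own illustration) enumerates the degrees appearing in the expansion of (H_p + H_q)².

```python
# Combinatorial cross-check of the lower chaos grade g = (2 ∧ (p - q))/p in
# Corollary 10: by the Hermite product formula, the expansion of
# (H_p + H_q)^2 contains the degrees 2p - 2r, 2q - 2r and p + q - 2r, so the
# smallest positive degree is 2 ∧ (p - q).
def expansion_degrees(p, q):
    """Degrees appearing in the Hermite expansion of (H_p + H_q)^2."""
    degs = set()
    for m, n in [(p, p), (q, q), (p, q)]:
        degs |= {m + n - 2*r for r in range(min(m, n) + 1)}
    return degs

for p, q in [(5, 3), (5, 4), (7, 2)]:
    smallest = min(d for d in expansion_degrees(p, q) if d > 0)
    assert smallest == min(2, p - q)
    print(p, q, smallest / p)   # the lower chaos grade g
```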

Markov Chaos
In this section, as an application of Theorem 8, chaotic random variables F ∈ Ker(L + λId) will be considered.
Theorem 11. Let F be a chaotic eigenfunction of −L with respect to the eigenvalue λ, with E[F] = 0 and Var(Γ₁(F)) > 0.
Suppose that F has an upper chaos grade u and a lower chaos grade g.