The Maximal Strichartz Family of Gaussian Distributions: Fisher Information, Index of Dispersion, and Stochastic Ordering

We define and study several properties of what we call the Maximal Strichartz Family of Gaussian Distributions. This is a subfamily of the family of Gaussian Distributions that arises naturally in the context of the Linear Schrödinger Equation and Harmonic Analysis, as the set of maximizers of certain norms introduced by Strichartz. From a statistical perspective, this family carries with itself some extra structure with respect to the general family of Gaussian Distributions. In this paper, we analyse this extra structure in several ways. We first compute the Fisher Information Matrix of the family, then introduce some measures of statistical dispersion, and, finally, introduce a Partial Stochastic Order on the family. Moreover, we indicate how these tools can be used to distinguish between distributions which belong to the family and distributions which do not. We show also that all our results are in accordance with the dispersive PDE nature of the family.


Introduction
The most important multivariate distribution is the Multivariate Normal (MVN) Distribution. To fix the notation, we give here its definition.

Definition 1. One says that a random variable X is distributed as a Multivariate Normal Distribution if its probability density function (pdf) f_X : R^n → R takes the form

f_X(x) = (2π)^(−n/2) |Σ|^(−1/2) exp(−(1/2)(x − μ)^T Σ^(−1)(x − μ)), (1)

where μ := E[X] ∈ R^n is the mean value vector and Σ := Var(X) ∈ Sym^+(n) is the n × n positive definite symmetric Variance-Covariance Matrix.
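For concreteness, formula (1) can be evaluated directly. The following sketch (in Python with NumPy; the function name is ours, not part of the text) implements the density of Definition 1:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Density of MVN(mu, Sigma) at x, for x, mu in R^n, following (1)."""
    n = len(mu)
    diff = x - mu
    # (x - mu)^T Sigma^{-1} (x - mu), via a linear solve instead of an inverse
    quad = diff @ np.linalg.solve(Sigma, diff)
    norm_const = np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

# standard bivariate normal at the origin: value 1/(2*pi)
p = mvn_pdf(np.zeros(2), np.zeros(2), np.eye(2))
```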
Its importance derives mainly (but not only) from the Multivariate Central Limit Theorem, which has the following statement.

Theorem 2 (Multivariate Central Limit Theorem). Let X_1, X_2, . . . be independent and identically distributed random vectors in R^n with mean vector μ and Variance-Covariance Matrix Σ, and let X̄_k := (1/k) Σ_{i=1}^k X_i. Then √k (X̄_k − μ) converges in distribution, as k → +∞, to MVN(0, Σ).
Due to its importance, several authors have tried to give characterizations of this family of distributions. See, for example, [1, 2] for an extended discussion on multivariate distributions and their properties. Here, we concentrate on characterizing the MVN through variational principles, such as the maximization of certain functionals. A well-known characterization of the Gaussian Distribution is through the maximization of the Differential Entropy, under the constraint of fixed variance Σ. We focus on the case when the support of the pdf is the whole Euclidean Space R^n.

Theorem 3. Let X be a random variable whose pdf is f_X. The Differential Entropy h(X) is defined by the following functional:

h(X) := −∫_{R^n} f_X(x) log f_X(x) dx.

The Multivariate Normal Distribution has the largest Differential Entropy h(X) amongst all the random variables X with equal variance Σ. Moreover, the maximal value of the Differential Entropy is h(MVN(Σ)) = (1/2) log[(2πe)^n |Σ|].
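The closed form in Theorem 3 can be checked numerically: since h(X) = −E[log f_X(X)], a Monte Carlo average of −log f_X over samples of X should reproduce (1/2) log[(2πe)^n |Σ|]. A minimal sketch (Python with NumPy; the sample size and the test covariance are our choices):

```python
import numpy as np

def gaussian_entropy(Sigma):
    """Closed form h(MVN(Sigma)) = (1/2) log((2 pi e)^n det(Sigma)), in nats."""
    n = Sigma.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(Sigma))

rng = np.random.default_rng(0)
Sigma = np.diag([1.0, 4.0])
X = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)

# h(X) = -E[log f_X(X)], estimated by the empirical mean over the sample
quad = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)
log_f = -0.5 * quad - 0.5 * np.log((2 * np.pi) ** 2 * np.linalg.det(Sigma))
mc_entropy = -log_f.mean()
```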
We refer to Appendix for a proof of this well-known theorem. This characterization is, in some sense, not completely satisfactory, because it is given only under the restriction of fixed variance. A more general characterization of the Gaussian Distribution has been given in a setting which, at first sight, seems very far from statistics: the one of Harmonic Analysis and Partial Differential Equations. We first introduce the so-called admissible exponents.

Definition 4. Fix n ≥ 1. One calls a pair of exponents (q, r) admissible if 2 ≤ q, r ≤ +∞, (q, r, n) ≠ (2, +∞, 2), and 2/q + n/r = n/2.

Remark 5. These exponents are characteristic quantities of certain norms, the Strichartz Norms, naturally arising in the context of Dispersive Equations, and can vary from equation to equation. We refer to [3] for more details.
Here is the precise characterization of the Multivariate Normal Distribution, through Strichartz Estimates.
Remark 7. This characterization does not need the restriction of fixed variance, as the one achieved using the Differential Entropy Functional does, and so it is, in some sense, more "general." The result is conjectured to be true in any dimension n ≥ 1. See, for example, [7], where the optimal constant has been computed in any dimension n ≥ 1, under the hypothesis that the maximizers are Gaussians also in dimension n ≥ 3.
We refer to [7] for the relation of this result with harmonic analysis and restriction theorems.
Strichartz Estimates are a fundamental tool in the problem of global well-posedness of PDEs and measure a particular type of dispersion (see, e.g., [3-5, 7, 12, 13]). Strichartz Estimates bring with themselves some interesting statistical features, and this is what we want to analyse in the present paper.
The symmetries of the functional in (6) give rise to a family of distributions that we call the Maximal Strichartz Family of Gaussian Distributions; we refer to Section 2 for its precise construction. This is a subfamily of the family of Gaussian Distributions and, among other things, it has the feature that the Mean Vector μ and the Variance-Covariance Matrix Σ depend on common parameters. Therefore, from a statistical perspective, this family carries with itself some extra structure with respect to the general family of Gaussian Distributions. This extra structure becomes evident from the form of the Fisher Information Metric of the family.
Theorem 8. Consider p(x, θ), a probability distribution function belonging to the Maximal Strichartz Family of Gaussian Distributions F, defined in (9), and let θ be the vector of parameters indexing F. Then the Fisher Information Matrix of p(x, θ) can be computed explicitly, (i) in the spherical case and (ii) in the elliptical case (see Section 3 for the precise definitions).

Remark 9. Technically, the only possible case inside the Maximal Strichartz Family of Gaussian Distributions is when A A^T = I_{n×n}, since A ∈ O(n) (the spherical case, with σ² = 1). The form of the Fisher Information Matrix, in that case, simplifies to a lower dimension. Nevertheless, the computation performed in the way we did gives the possibility to compute a distance (in some sense centred at the Maximal Strichartz Family of Gaussian Distributions) between members of the Maximal Strichartz Family of Gaussian Distributions and other Gaussian Distributions, for which the orthogonal matrix condition A A^T = I_{n×n} is not necessarily satisfied. In particular, it can distinguish between Gaussians evolving through the PDE flow (see Section 2) and Gaussians which do not.
Remark 10. We believe that using the flow of a Partial Differential Equation is a natural way to produce probability density functions; in particular, this is so in our case, since the flow of the PDE that we are using preserves the probability constraint. See Section 2.2 for more details on this comment.
As we said, Strichartz Estimates are a way to measure the dispersion caused by the flow of the PDE to which they are related. In statistics, dispersion describes how stretched or squeezed a distribution is. A measure of statistical dispersion is a nonnegative real number which is small for data which are very concentrated and increases as the data become more spread out. Common examples of measures of statistical dispersion are the variance, the standard deviation, and the range, among many others. Here, we connect the two closely related concepts (dispersion in statistics and in PDEs) by introducing some measures of statistical dispersion, like the Index of Dispersion in Definition 38 (see Section 4), which reflect the dispersive PDE nature of the Maximal Strichartz Family of Gaussian Distributions.

Definition 11. Consider the norms ‖ · ‖_a and ‖ · ‖_b on the space of Variance-Covariance Matrices Σ and ‖ · ‖_m on the space of mean values μ. One defines an Index of Dispersion, with μ(t) ≠ 0, in terms of these norms, where μ(t) is the mean value vector and Σ(t) the Variance-Covariance Matrix of the family at time t. One calls I_m(t) the m-Dispersion Index of the Maximal Family of Gaussians, and one calls I_m(0) the m-Static Dispersion Index of the Maximal Family of Gaussians.
We compute this Index of Dispersion for our family of distributions and show that it is consistent with PDE results.We refer to Definition 38 for more details.
Another important concept in probability and statistics is the one of Stochastic Order. A Stochastic Order is a way to consistently put a set of random variables in a sequence. Most Stochastic Orders are partial orders, in the sense that an order between some random variables exists, but not all the random variables can be put in the same sequence. Many different Stochastic Orders exist and have different applications. For more details on Stochastic Orders, we refer to [14]. Here, we use our Index of Dispersion to define a Stochastic Order on the Maximal Strichartz Family of Gaussian Distributions and see that there are natural ways of partially ranking the distributions of the family (see Section 5), in agreement with the flow of the PDE.

Definition 12. Consider two random variables X_1 and X_2 such that E[X_1](θ_1) = E[X_2](θ_2), for any θ_1 and θ_2. One says that the two random variables are ordered according to their Dispersion Index I if and only if the following condition is satisfied: I(X_1) ≤ I(X_2).

Remark 13. In this definition, the index I can vary according to the context and to the choice of the norms in the definition of the index.
An important tool, which will be fundamental in our analysis, is what we call the 1/α-Characteristic Function (see Section 2 and [7, 15]). We conclude the paper with an appendix in which, among other things, we use the concept of 1/α-Characteristic Function to define generalized types of Momenta that exist also for the Multivariate Cauchy Distributions.

Construction of the Maximal Strichartz Family of Gaussian Distributions
This section is devoted to the construction of the Maximal Strichartz Family of Gaussian Distributions; see Figure 1. This is basically done through PDE methods. The program is the following.
(3) By means of 1/α-Characteristic Functions, we give the explicit expression of p(t, x), the generator of the family.
(4) We use symmetries and invariances to build the complete family F.
2.1. The 1/α-Characteristic Functions. Following the program, we first need to introduce the tool of 1/α-Characteristic Functions to characterize F. It is basically the Fourier Transform but, differently from the Characteristic Function, the average is taken not with the pdf itself but with a power of the pdf.
Definition 14. Consider f : R^n → C a Schwartz function, namely, a function belonging to the space

S(R^n) := { f ∈ C^∞(R^n) : ‖f‖_{α,β} < +∞ for all multi-indices α and β },

endowed with the following norms: ‖f‖_{α,β} := sup_{x ∈ R^n} |x^α D^β f(x)|. Moreover, suppose that ∫_{R^n} |f(x)|² dx = 1, namely, |f|² defines a continuous probability distribution function. Then, one defines the 1/α-Characteristic Function CF_f^{1/α}(ξ) of f as the Fourier Transform of f^{1/α}, and the Inverse 1/α-Characteristic Function through the Inverse Fourier Transform in the corresponding way. We refer to the note [15] for examples and properties of 1/α-Characteristic Functions and to Appendix for a simple straightforward application of this tool. In particular, we notice that the Inverse 1/α-Characteristic Function inverts the 1/α-Characteristic Function.
Remark 15. If f is complex valued (not just real valued) and, for example, α = k ∈ N, then there are k distinct complex kth roots of |f|². In our discussion, this will not create any problem, because our process starts with f and produces |f|². We want to remark that the map |f| ↦ f is a multivalued function. For this reason, we cannot reconstruct uniquely a generator, given the family that it generates. See formula (39) below and [15] for more details.
Remark 16. We could define 1/α-Characteristic Functions for more general functions f : G → K, with G a locally compact abelian group and K a general field. We do not pursue this direction here and leave it for a future work. We notice that CF_f^{1/α}(ξ) can be considered also as a 1/α-Expected Value.
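Numerically, a 1/α-Characteristic Function can be approximated by quadrature. The sketch below (Python with NumPy) adopts one plausible convention, CF_f^{1/α}(ξ) = ∫ f(x)^{1/α} e^{−ixξ} dx; the sign and the absence of 2π factors are our assumptions, not fixed by the text. For the standard Gaussian pdf and α = 2, f^{1/2} is again a Gaussian, so the integral has a closed form to compare against:

```python
import numpy as np

def one_over_alpha_cf(pdf, xi, alpha, x):
    """Riemann-sum approximation of CF(xi) = ∫ pdf(x)^(1/alpha) exp(-i x xi) dx.
    The convention (sign, no 2*pi factor) is an assumption."""
    dx = x[1] - x[0]
    return np.sum(pdf(x) ** (1.0 / alpha) * np.exp(-1j * x * xi)) * dx

x = np.linspace(-20.0, 20.0, 20001)
f = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)   # standard Gaussian pdf
phi = one_over_alpha_cf(f, xi=1.0, alpha=2.0, x=x)

# closed form: f^(1/2) = (2 pi)^(-1/4) exp(-x^2/4), whose Fourier integral at
# xi = 1 equals (2 pi)^(-1/4) * sqrt(4 pi) * exp(-1)
exact = (2 * np.pi) ** (-0.25) * np.sqrt(4 * np.pi) * np.exp(-1.0)
```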

2.2. Conservation of Mass and Flow on the Space of Probability Measures. In this subsection, we show that if p_0(x) = |u_0(x)|² defines a probability distribution, then p_t(x) = |e^{itΔ} u_0(x)|² also defines a probability distribution. This is mainly a consequence of the property of e^{itΔ} of being a unitary operator.
Theorem 17. Consider P(R^n), the set of all probability distributions on R^n, and u : (0, ∞) × R^n → C a solution to (27). Then u induces a flow in the space of probability distributions.
Remark 18. This situation is in striking contrast with the case of the heat equation, where, if one starts with a probability distribution as initial datum, the constraint of being a probability measure is instantaneously broken.
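Theorem 17 can be illustrated numerically: the free Schrödinger propagator acts in Fourier space as a unimodular multiplier, so ∫ |u(t, x)|² dx is conserved and |u(t)|² stays a probability density once normalized. A sketch (Python with NumPy; the grid sizes and the sign convention i∂_t u + Δu = 0, giving the multiplier e^{−ik²t}, are our assumptions — mass conservation holds for either sign):

```python
import numpy as np

N, L, t = 1024, 40.0, 0.7
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

u0 = np.exp(-x**2)                                   # Gaussian initial datum
u_t = np.fft.ifft(np.exp(-1j * k**2 * t) * np.fft.fft(u0))

mass0 = np.sum(np.abs(u0) ** 2) * dx                 # ∫ |u_0|^2 dx = sqrt(pi/2)
mass_t = np.sum(np.abs(u_t) ** 2) * dx               # conserved along the flow
```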

2.3. Fundamental Solution for the Linear Schrödinger Equation Using 1/α-Characteristic Functions. In this subsection, we solve the Linear Schrödinger Equation with initial datum u_0(x) = e^{−|x|²} ∈ S(R^n). This will produce a natural generator p(t, x) of a family of probability distributions, due to Theorem 17.
We first notice that the initial datum becomes a probability density function only if multiplied by a suitable constant. But, since the equation is linear, we can carry out the computation without that constant and then include it at the end.
Remark 19.These computations are well known, but we perform them in detail here, in order to clarify what we will compute in the context of 1/-Characteristic Functions.
Since u_0(x) ∈ S(R^n), then also ∂_t u(t, x) ∈ S(R^n) and Δu(t, x) ∈ S(R^n). So, we can apply the 1/2-Characteristic Function to both sides of (27) and get an ordinary differential equation in time for CF_u^{1/2}(t, ξ), whose solution is explicit. We now need to compute the 1/2-Characteristic Function of the initial datum and then the Inverse 1/2-Characteristic Function of CF_u^{1/2}(t, ξ) to get the explicit form of the solution. The 1/2-Characteristic Function of the initial datum can be computed by using contour integrals and a simple change of variables. With this, completing the square and performing a further change of variables, we obtain the explicit solution u(t, x). Now, we have to find a constant c > 0 such that, for every t ∈ R, the function p(t, x) = c² |u(t, x)|² is a probability density function. The normalization condition implies c = (π/2)^{−n/4}. Therefore, the function c u(t, x) induces the probability density function p(t, x), which is going to be the generator of the family of distributions F.
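The spreading of the generator can be cross-checked numerically. Under the 1D convention i∂_t u + ∂²_x u = 0 with u_0(x) = e^{−x²} (the convention is our assumption), the closed-form solution gives |u(t, x)|² ∝ exp(−2x²/(1 + 16t²)), a Gaussian with variance σ²(t) = (1 + 16t²)/4. A spectral evolution reproduces this:

```python
import numpy as np

N, L, t = 2048, 80.0, 0.5
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

u0 = np.exp(-x**2)
u_t = np.fft.ifft(np.exp(-1j * k**2 * t) * np.fft.fft(u0))

p = np.abs(u_t) ** 2
p /= p.sum() * dx                       # normalise |u(t,.)|^2 to a pdf
var = np.sum(x**2 * p) * dx             # mean is 0 by symmetry
# predicted spreading: sigma^2(t) = (1 + 16 t^2) / 4 -> 1.25 at t = 0.5
```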
Remark 20. This procedure works because the Gaussian Distribution is, up to constants, a fixed point of the Fourier Transform.

Moreover, if one defines S_h(n) := S_h(n, 2 + 4/n, 2 + 4/n), then, for every n ≥ 1 (always supposing that Gaussians are maximizers), one has that S_h(n) is a decreasing function of n. For any n ≥ 1 and any admissible pair (q, r), the Sharp Dual Homogeneous Strichartz Constant S_d(n, q, r) = S_d(q, r) is defined analogously, and one has that S_h(q, r) = S_d(q, r).

International Journal of Differential Equations 7
This is the version of the theorem on Strichartz Estimates without the restriction ‖u_0‖²_{L²(R^n)} = 1, as proved in [7]. From this, we can very easily deduce Theorem 6.

Proof of Theorem 6. Just substitute the condition ‖u_0‖²_{L²(R^n)} = 1 in all the statements.
As explained, for example, in [12], Strichartz Estimates are invariant under the following set of symmetries.

The only point here is that not all these symmetries leave the set of probability distributions P(R^n) invariant. Therefore, we need to reduce the set of symmetries in our treatment and, in particular, we need to combine the scaling and the parabolic dilations, in order to keep the whole family inside the space of probability distributions P(R^n).
Lemma 24. Consider u_{λ,μ}(t, x) = μ u(λ²t, λx) such that |u_{λ,μ}|² ∈ P(R^n) maximizes (6); then μ = λ^{n/2}.

Proof. Consider the effect of the parabolic scaling on the L² norm; a direct computation gives the claim.

Remark 25. We notice that some of the symmetries can be seen just at the level of the generator u of the family but not by the family of probability distributions p_θ(x): for example, the phase shifts.

Remark 28. Let p(t, x) be the pdf defined in (39). Then, choosing p_θ(x) ∈ F with the parameters suitably normalized, one gets p_θ(x) = p(t, x) ∈ F. For this reason, we call p(t, x) the Family Generator of F. We notice also that, in the definition of the family and with respect to Theorem 26, we used λ^{1/2} as scale parameter instead of λ. This is done without loss of generality, since λ > 0.
Right away we can compute the Variance-Covariance Matrix and Mean Vector of the family.
Corollary 29. Suppose X is a random variable with pdf p_θ(x) ∈ F. Then its Expected Value μ(t) and its Variance-Covariance Matrix Σ(t) can be computed explicitly.

Proof. The proof is a direct computation.
Remark 30. We see here that, differently from the general family of Gaussian Distributions, the Mean Vector and the Variance-Covariance Matrix are here related through a common parameter, which represents the time flow.
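The extra structure of Remark 30 can be encoded directly: mean and covariance are functions of shared parameters, above all the time t. The sketch below (Python with NumPy) assumes the transport form μ(t) = x_0 + v_0 t and a spherical covariance with the spreading factor (1 + 16t²)/4 obtained above for the datum e^{−|x|²}; both the letters and the normalization are our assumptions, not the paper's exact formulas:

```python
import numpy as np

def family_mean(t, x0, v0):
    """Assumed mean vector mu(t) = x0 + v0 * t (free transport of the centre)."""
    return x0 + v0 * t

def family_cov(t, lam, n):
    """Assumed spherical covariance Sigma(t) = lam^2 (1 + 16 t^2)/4 * I_n."""
    return lam**2 * (1 + 16 * t**2) / 4 * np.eye(n)

mu = family_mean(0.5, np.zeros(2), np.array([1.0, 0.0]))
Sigma = family_cov(0.5, 1.0, 2)
# mu and Sigma share the parameter t: moving in time shifts the mean
# and inflates the covariance simultaneously.
```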

The Fisher Information Metric of the Maximal Strichartz Family F
Information geometry is a branch of mathematics that applies the techniques of differential geometry to the fields of statistics and probability theory. This is done by interpreting the probability distributions of a statistical model as the points of a Riemannian Manifold, forming in this way a statistical manifold. The Fisher Information Metric provides a natural Riemannian Metric for this manifold, but it is not the only possible one. With this tool, we can define and compute meaningful distances between probability distributions, in both the discrete and the continuous cases. Crucial, then, are the set of parameters on which a certain family of distributions is indexed and the geometrical structure of the parameter set. We refer to [16] for a general reference on information geometry. The first to introduce the notion of distance between two probability distributions was Rao in [17], who used the Fisher Information Matrix as a Riemannian Metric on the space of parameters.
In this section, we restrict our attention to the Fisher Information Metric of the Maximal Strichartz Family of Gaussian Distributions F and provide details on the additional structure that the family has with respect to the hyperbolic model of the general family of Gaussian Distributions. See, for example, [18-20].

3.1. The Fisher Information Metric for the Multivariate Gaussian Distribution. First, we give the general definition of the Fisher Information Metric.

Definition 31. Consider a statistical manifold S, with coordinates given by θ = (θ_1, θ_2, . . . , θ_p) and with probability density function f(x; θ). Here, x is a specific observation of the discrete or continuous random variable X. The probability is normalized, so that ∫ f(x, θ) dx = 1 for every θ ∈ S. The Fisher Information Metric g_{jk} is defined by the following formula:

g_{jk}(θ) = ∫ (∂ log f(x, θ)/∂θ_j)(∂ log f(x, θ)/∂θ_k) f(x, θ) dx.

Remark 32. The integral is performed over all values x that the random variable X can take. Again, the variable θ is understood as a coordinate on the statistical manifold S, intended as a Riemannian Manifold. Under certain regularity conditions (any that allow integration by parts), g_{jk} can be rewritten as

g_{jk}(θ) = −∫ (∂² log f(x, θ)/∂θ_j ∂θ_k) f(x, θ) dx.

Now, to compute explicitly the Fisher Information Matrix of the family F, we use the following theorem, which can be found in [21].

Theorem 33. The Fisher Information Matrix for an n-variate Gaussian Distribution can be computed in the following way. Let

μ(θ) = [μ_1(θ), μ_2(θ), . . . , μ_n(θ)]^T (54)

be the vector of Expected Values and let Σ(θ) be the Variance-Covariance Matrix. Then, the typical element I_{i,j}, 0 ≤ i, j < p, of the Fisher Information Matrix for X ∼ N(μ(θ), Σ(θ)) is

I_{i,j} = (∂μ/∂θ_i)^T Σ^{−1} (∂μ/∂θ_j) + (1/2) tr(Σ^{−1} (∂Σ/∂θ_i) Σ^{−1} (∂Σ/∂θ_j)),

where (·)^T denotes the transpose of a vector, tr(·) denotes the trace of a square matrix, and ∂μ/∂θ_i and ∂Σ/∂θ_i denote the componentwise derivatives with respect to the parameter θ_i. Now, we have just to compute the Fisher Information Matrix entry by entry, following the theorem. We recall here that we are considering the family of Gaussian Distributions F and that, in particular, the Expected Value of a random variable X with distribution belonging to the family F is the vector μ(t) of Corollary 29, while the Variance-Covariance Matrix is the matrix Σ(t) of Corollary 29.

Remark 34. We remark again that μ and Σ depend on some common parameters, like the time t.
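Theorem 33 translates directly into code. The sketch below (Python with NumPy; the finite-difference step and the 1D test family are our choices) evaluates the two terms of I_{i,j} with numerical θ-derivatives and recovers the classical diag(1/σ², 2/σ²) for a 1D Gaussian with θ = (μ, σ):

```python
import numpy as np

def gaussian_fim(mu_fn, Sigma_fn, theta, eps=1e-6):
    """Fisher Information Matrix of N(mu(theta), Sigma(theta)) via Theorem 33:
    I_ij = dmu_i^T S^-1 dmu_j + (1/2) tr(S^-1 dS_i S^-1 dS_j),
    with central finite differences for the theta-derivatives."""
    theta = np.asarray(theta, dtype=float)
    p = len(theta)
    Sinv = np.linalg.inv(Sigma_fn(theta))
    dmu, dS = [], []
    for i in range(p):
        e = np.zeros(p)
        e[i] = eps
        dmu.append((mu_fn(theta + e) - mu_fn(theta - e)) / (2 * eps))
        dS.append((Sigma_fn(theta + e) - Sigma_fn(theta - e)) / (2 * eps))
    I = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            I[i, j] = (dmu[i] @ Sinv @ dmu[j]
                       + 0.5 * np.trace(Sinv @ dS[i] @ Sinv @ dS[j]))
    return I

# 1D sanity check, theta = (mu, sigma): expect diag(1/sigma^2, 2/sigma^2)
fim = gaussian_fim(lambda th: th[:1],
                   lambda th: np.array([[th[1] ** 2]]),
                   [0.0, 2.0])
```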

3.2. The General Multivariate Gaussian Distribution. As pointed out in [18, 19], for general Multivariate Normal Distributions, the explicit form of the Fisher distance has not been computed in closed form yet, even in simple cases such as when the parameters vanish (e.g., x_0 = v_0 = 0). From a technical point of view, as pointed out in [18, 19], the main difficulty arises from the fact that the sectional curvatures of the Riemannian Manifold induced by F and endowed with the Fisher Information Metric are not all constant. We remark again here that the distance induced by our Fisher Information Matrix is centred at the Maximal Strichartz Family of Gaussian Distributions, to highlight the difference between members of the Maximal Strichartz Family of Gaussian Distributions and other Gaussian Distributions, for which A A^T = I_{n×n} is not necessarily satisfied. In particular, our metric distinguishes between Gaussians evolving through the PDE flow (see Section 2) and Gaussians which do not.
Remark 35. We say that two parameters θ_i and θ_j are orthogonal if the elements of the corresponding rows and columns of the Fisher Information Matrix are zero. Orthogonal parameters are easy to deal with, in the sense that their maximum likelihood estimates are independent and can be calculated separately. In particular, for our family F, the parameters x_0 and v_0 are both orthogonal to both the parameters λ and σ². Some partial results, for example, when either the mean or the variance is kept constant, can be deduced. See, for example, [18-20].
Remark 36. The Fisher Information Metric is not the only possible choice to compute distances between pdfs of the family of Gaussian Distributions. For example, in [20], the authors parametrize the family of normal distributions as the symmetric space SL(n + 1)/SO(n + 1), endowed with a suitable invariant metric. Moreover, the authors in [20] computed the Riemann Curvature Tensor of the metric and, in any dimension, the distance between two normal distributions with the same mean and different variance, as well as the distance between two normal distributions with the same variance and different mean.
Remark 37. If we consider just the submanifold given by the restriction to the mean coordinates μ_i, i = 1, . . . , n, together with a constrained scale parameter, we recover the hyperbolic distance. The geometry, however, does not seem to be the one of a product space, at least considering the fact that the mixed entries of the metric are not zero in our parametrization.

Overdispersion, Equidispersion, and Underdispersion for the Family F
As we said, Strichartz Estimates are a way to measure the dispersion caused by the flow of the PDE to which they are related. In statistics, dispersion describes how spread out a distribution is.
Here, we discuss some particular cases and compute the Dispersion Indexes for certain specific choices of the norms ‖ · ‖_a, ‖ · ‖_b, and ‖ · ‖_m.
(i) In the case t = 0, the m-Static Dispersion Index of the Maximal Family of Gaussians that we choose is given by the variance of the distribution. We choose ‖Σ‖_a = det(Σ) and, in the spherical case A A^T = σ² I_{n×n}, the index can be computed explicitly. Therefore, with ‖Σ‖_a = det(Σ), the type of dispersion does not depend on the dimension n.
Remark 39. In the strictly Strichartz case σ² = 1, we have that the dispersion is measured just by the scaling factor λ.
Choosing instead ‖Σ‖_a = tr(Σ) for the Static Dispersion Index of the Maximal Family of Gaussians, we find some small differences: in the spherical case A A^T = σ² I_{n×n}, the index acquires an explicit dependence on the dimension. So, with ‖Σ‖_a = tr(Σ), the type of dispersion does depend on the dimension n.
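The two matrix "norms" used above can be compared concretely in the spherical case Σ = s² I_n, where det(Σ) = (s²)^n while tr(Σ) = n s², so only the trace-based summary carries an explicit factor of the dimension n. A minimal sketch (Python with NumPy; det and tr are treated simply as the two scalar summaries chosen in the text, how the resulting classification depends on n is discussed above):

```python
import numpy as np

def static_dispersion(Sigma, norm="det"):
    """Scalar summary of a covariance matrix: det(Sigma) or tr(Sigma),
    the two choices of ||Sigma||_a used in the text."""
    return np.linalg.det(Sigma) if norm == "det" else np.trace(Sigma)

s2, n = 0.5, 3                            # spherical case Sigma = s2 * I_n
Sigma = s2 * np.eye(n)
d_det = static_dispersion(Sigma, "det")   # (s2)**n
d_tr = static_dispersion(Sigma, "tr")     # n * s2
```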
(ii) In the case t ∈ R, when t is different from zero, we can express Σ as a function of μ. For example, if we now choose x_0 = 0 and v_0 = (1, 0, . . . , 0)^T, the index can be computed explicitly.

Remark 40. In particular, from this, we notice that, if at t = 0 the distribution is m-equidispersed, an instant later the distribution is m-overdispersed. This is in agreement with the dispersive properties of the family F and legitimates, in some sense, our choice of Indexes of Dispersion. Moreover, by Theorem 6, one has that 0 ≤ I ≤ 1. When the index is close to one, the distribution is close, in some sense, to the family F, while, when the index is close to zero, the distribution is very far from F. This index clearly does not distinguish between distributions in the family F. It would be very interesting to see if the closeness to one of the Indexes of Dispersion I, computed on a general distribution, implies a proximity to the Maximal Family of Gaussian Distributions also from the distributional point of view and not just from the point of view of dispersion.

Partial Stochastic Ordering on F
Using the concept of Index of Dispersion, we can give a Partial Stochastic Order to the family F. For a more complete treatment on Stochastic Orders, we refer to [14].We start the analysis of this section with the definition of Mean-Preserving Spread.
Definition 43. A Mean-Preserving Spread (MPS) is a map from P(R^n) to itself, taking f(x; θ_1) to f(x; θ_2), where f(x; θ_1) and f(x; θ_2) are, respectively, the pdfs of the random variables X_1 and X_2, with the property of leaving the Expected Value unchanged: E[X_1] = E[X_2], for any θ_1 and θ_2 in the space of parameters.
The concept of a Mean-Preserving Spread provides a partial ordering of probability distributions according to their level of dispersion.We then give the following definition.
Definition 44. Consider two random variables X_1 and X_2 such that E[X_1](θ_1) = E[X_2](θ_2), for any θ_1 and θ_2. One says that the two random variables are ordered according to their Dispersion Index I if and only if the following condition is satisfied: I(X_1) ≤ I(X_2). Now, we give some examples of ordering according to the Indexes of Dispersion that we discussed previously.
(i) In the case t = 0, we choose ‖Σ‖_a = det(Σ); in the spherical case A A^T = σ² I_{n×n}, this index yields a partial ordering of the family which does not depend on the dimension n. By choosing instead ‖Σ‖_a = tr(Σ), again in the spherical case, we obtain the same ordering as before. This order again does not depend on the dimension n, and this seems to suggest that, even if the value of the Dispersion Index might depend on the choice of the norms, the Partial Order is less sensitive to it.
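The partial ordering can be realised concretely: along the flow, any dispersion index that is monotone in |t| ranks family members by time. The sketch below (Python; the explicit form σ²(t) = λ²(1 + 16t²)/4 of the index is the assumed spherical spreading, not the paper's exact expression) sorts a few time slices accordingly:

```python
def dispersion_index(t, lam=1.0):
    """Assumed det-based index in 1D: the variance lam^2 (1 + 16 t^2)/4,
    monotone increasing in |t|."""
    return lam**2 * (1 + 16 * t**2) / 4

# later times dominate earlier ones in the partial order, for t >= 0
times = [0.0, 0.25, 1.0]
ordered = sorted(times, key=dispersion_index)
```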
Remark 45. In the strictly Strichartz case σ² = 1, we have that the Stochastic Order is determined just by the scaling factor λ.
(ii) In the case when t is different from zero, if we now choose x_0 = 0 and v_0 = (1, 0, . . . , 0)^T, the ordering can again be computed explicitly.

Remark 47. In the case of the m-Static Dispersion Index of the Maximal Family of Gaussians, the roles of λ² and σ² seem interchangeable. This suggests a dimensional reduction in the parameter space but, when t ≠ 0, σ² and the parameter t decouple and start to play slightly different roles. This suggests again a way to distinguish between Gaussian Distributions which come from the family F and Gaussians which do not and so to distinguish between Gaussians which are solutions of the Linear Schrödinger Equation and Gaussians which are not.
Remark 48. Using the definition of Entropy, we deduce that, for Gaussian Distributions, h(X) = (1/2) log[(2πe)^n det(Σ)]. We see that, for our family F, the Entropy increases every time we increase t, σ², or λ, but not when we increase x_0 or v_0. In particular, the fact that the Entropy increases with t is in accordance with the Second Principle of Thermodynamics.
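Remark 48 can be made quantitative with the closed form of the Entropy: h(t) = (1/2) log[(2πe)^n det Σ(t)] is increasing in t ≥ 0 as soon as det Σ(t) is. A sketch (Python with NumPy; the spherical covariance Σ(t) = λ²(1 + 16t²)/4 · I_n is our assumed model, not the paper's exact formula):

```python
import numpy as np

def family_entropy(t, lam=1.0, n=1):
    """h(t) = (1/2) log((2 pi e)^n det Sigma(t)) for the assumed spherical
    covariance Sigma(t) = lam^2 (1 + 16 t^2)/4 * I_n."""
    det_Sigma = (lam**2 * (1 + 16 * t**2) / 4) ** n
    return 0.5 * np.log((2 * np.pi * np.e) ** n * det_Sigma)

h = [family_entropy(t) for t in (0.0, 0.5, 1.0)]
# entropy is nondecreasing along the flow for t >= 0, as in Remark 48
```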
Remark 49. It seems that the construction of similar indexes can be performed in more general situations. In particular, we think that an index similar to I_m can be computed in every situation in which a family of distributions has a Variance-Covariance Matrix and an Expected Value depending on common parameters.

Conclusions
In this paper, we have constructed and studied the Maximal Strichartz Family of Gaussian Distributions. This subfamily of the family of Gaussian Distributions arises naturally in the context of Partial Differential Equations and Harmonic Analysis, as the set of maximizers of certain functionals introduced by Strichartz [4] in the context of the Schrödinger Equation. We analysed the Fisher Information Matrix of the family and showed that this matrix possesses an extra structure with respect to the general family of Gaussian Distributions. We studied the spherical and the elliptical cases and computed explicitly the Fisher Information Metric in both. We interpreted the Fisher Information Metric as a distance which can distinguish between Gaussians which maximize the Strichartz Norm and Gaussians which do not, and also as a distance between Gaussians which are solutions of the Linear Schrödinger Equation and Gaussians which are not. After this, we introduced some measures of statistical dispersion, which we called the Dispersion Index and the Static Dispersion Index of the Maximal Family of Gaussian Distributions. We showed that these Indexes of Dispersion are consistent with the dispersive nature of the Schrödinger Equation and can be used, again, to distinguish between Gaussians belonging to the family F and other Gaussians. Moreover, we showed that our Indexes of Dispersion induce a Partial Stochastic Order on the Maximal Strichartz Family of Gaussian Distributions, which is in accordance with the flow of the PDE.

B. On the 1/α-Momenta of Order k and the Cauchy Distribution
In this subsection, we discuss another application of the concept of 1/α-Characteristic Functions. In particular, we build 1/α-Momenta in a way similar to what happens for the usual Characteristic Function and the usual Momenta. We apply this tool to the case of the Cauchy Distribution and see that, in certain cases, in contrast to the well-known case α = 1, we can build some finite generalized Momenta. We refer to [15] for a more detailed discussion of 1/α-Characteristic Functions.

Proof. This is a direct and simple computation.
Remark 51. Here, we do not consider the possibility of the different roots of unity that can appear in the computation of the 1/α-Characteristic Function. We refer to [15] for the precise theory.
From now on, we concentrate only on the case of the Multivariate Cauchy Distribution, where the constant C may vary from step to step. So, we have that the 1/α-Momenta of order k exist and are finite when 0 < α < (n + 1)/(k + n). In the case k = n = 1, we need 0 < α < 1 and, in general, we need 0 < α < (n + 1)/(k + n) in order to have that the 1/α-Moment of order k, E_{1/α}[X^k], is well defined.
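The finiteness threshold can be checked numerically in the case n = 1, k = 2, α = 1/2 < (n + 1)/(k + n) = 2/3. With the (unnormalised, for simplicity) Cauchy kernel f(x) = 1/(1 + x²), the generalized second moment ∫ x² f(x)^{1/α} dx = ∫ x²/(1 + x²)² dx = π/2 is finite, even though the ordinary second moment of the Cauchy Distribution is infinite. A sketch (Python with NumPy; the normalizing constants are deliberately dropped):

```python
import numpy as np

# alpha = 1/2, k = 2, n = 1: f^(1/alpha) = f^2 decays like x^(-4),
# so the generalized second moment converges (the ordinary one does not)
x = np.linspace(-1000.0, 1000.0, 2_000_001)
dx = x[1] - x[0]
m2 = np.sum(x**2 / (1 + x**2) ** 2) * dx   # close to pi/2, up to truncated tails
```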
For ‖Σ‖_a = ‖Σ‖_b = det(Σ) and ‖μ‖²_m = Σ_{i=1}^n |μ_i|², one gets, in the spherical case A A^T = σ² I_{n×n}, an explicit value of the Index of Dispersion; whenever the computed index is actually different from the value attained on F, we can argue that the Gaussian Distribution that we are analysing does not come from the Maximal Strichartz Family of Gaussian Distributions.

Remark 41. This index is different from the Fisher Index, which is basically the variance-to-mean ratio and is the natural one for count data. The index I_m is instead dimensionless and more appropriate for families of distributions not related to the Poisson distribution. In fact, in our case, and in contrast with the Poisson case, the Variance-Covariance Matrix scales as the square of the Expected Value: Σ ≃ μ².