Asymptotic Behavior of the Likelihood Function of Covariance Matrices of Spatial Gaussian Processes

The covariance structure of spatial Gaussian predictors (also known as Kriging predictors) is generally modeled by parameterized covariance functions; the associated hyperparameters in turn are estimated via the method of maximum likelihood. In this work, the asymptotic behavior of the maximum likelihood of spatial Gaussian predictor models as a function of its hyperparameters is investigated theoretically. Asymptotic sandwich bounds for the maximum likelihood function in terms of the condition number of the associated covariance matrix are established. As a consequence, the main result is obtained: optimally trained nondegenerate spatial Gaussian processes cannot feature arbitrarily ill-conditioned correlation matrices. The implication of this theorem for Kriging hyperparameter optimization is exposed. A nonartificial example is presented, where maximum likelihood-based Kriging model training is necessarily bound to fail.


Introduction
Spatial Gaussian processing, also known as best linear unbiased prediction, refers to a statistical data interpolation method which is nowadays applied in a wide range of scientific fields, including computer experiments in a modern engineering context; see, for example, [1-5]. As a powerful tool for geostatistics, it was pioneered by Krige in 1951 [6], and to pay tribute to his achievements, the method is also termed Kriging; see [7, 8] for geostatistical background.
In practical applications, the data's covariance structure is modeled through covariance functions depending on so-called hyperparameters. These, in turn, are estimated by optimizing the corresponding maximum likelihood function. It has been demonstrated by many authors that the accuracy of Kriging predictors relies heavily both on hyperparameter-based model training and, from the numerical point of view, on the condition number of the associated Kriging correlation matrix. In this regard, we refer to the following, nonexhaustive selection of papers: Warnes and Ripley [9] and Mardia and Marshall [10].

Kriging in a Nutshell
Kriging is a statistical approach for estimating an unknown scalar function based on a finite data set of sample locations $x_1, \dots, x_n \in U \subset \mathbb{R}^d$ with corresponding responses $y_1 := y(x_1), \dots, y_n := y(x_n) \in \mathbb{R}$ obtained from measurements or numerical computations. The collection of responses is denoted by $Y := (y_1, \dots, y_n)^T \in \mathbb{R}^n$.

The function $y : U \to \mathbb{R}$ to be estimated is assumed to be the realization of an underlying random process given by a regression model and a random error function $\epsilon(x)$ with zero mean. More precisely,

$$y(x) = \sum_{i=0}^{p} \beta_i f_i(x) + \epsilon(x) = f(x)\beta + \epsilon(x), \tag{2.2}$$

where the components of the row vector function $f : \mathbb{R}^d \to \mathbb{R}^{p+1}$ are the basis functions of the regression model and $\beta = (\beta_0, \dots, \beta_p)^T$ is the corresponding vector of regression coefficients. By assumption, $E[\epsilon(x)] = 0$ for all $x$.

The component functions of $f$ can be chosen arbitrarily, yet they should form a function basis suitable to the specific application. The most common choices for practical applications are constant and linear regression functions,

$$f(x) = 1, \qquad f(x) = (1, x_1, \dots, x_d), \tag{2.4}$$

and higher-order polynomials.
Introducing the regression design matrix

$$F := \begin{pmatrix} f(x_1) \\ \vdots \\ f(x_n) \end{pmatrix} \in \mathbb{R}^{n \times (p+1)}, \tag{2.5}$$

the vector of errors at the sampled sites can be written as

$$\epsilon := \big(\epsilon(x_1), \dots, \epsilon(x_n)\big)^T = Y - F\beta. \tag{2.6}$$

Note that the first column of $F$ equals $\mathbf{1} \in \mathbb{R}^n$ for all polynomial regression models.
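To make the regression quantities concrete, here is a minimal helper for the linear basis (2.4) and the design matrix (2.5); this is an illustrative sketch, not code from the paper:

```python
import numpy as np

def f_linear(x):
    """Row vector of regression basis functions f(x) = (1, x_1, ..., x_d); cf. (2.4)."""
    return np.concatenate([[1.0], np.asarray(x, dtype=float)])

def design_matrix(X):
    """Regression design matrix F of (2.5): rows are f(x_i); first column is all ones."""
    return np.vstack([f_linear(x) for x in X])

# Example: four sample sites in R^2 yield F of shape (4, 3)
F = design_matrix(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]))
```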
The Kriging predictor $\hat y$ estimates $y$ at an untried site $x$ as a linear combination of the sampled data,

$$\hat y(x) = \sum_{i=1}^{n} \omega_i(x)\, y_i = \omega(x)^T Y. \tag{2.7}$$

For each $x \in \mathbb{R}^d$, the unique vector of weights $\omega(x) = (\omega_1(x), \dots, \omega_n(x))^T$ that leads to an unbiased prediction minimizing the mean squared error is given by the solution of the Kriging equation system

$$\begin{pmatrix} C & F \\ F^T & 0 \end{pmatrix} \begin{pmatrix} \omega(x) \\ \mu(x) \end{pmatrix} = \begin{pmatrix} c(x) \\ f(x)^T \end{pmatrix}. \tag{2.8}$$

Here, $C := \big(\operatorname{Cov}(y_i, y_j)\big)_{i,j=1}^{n} \in \mathbb{R}^{n \times n}$ and $c(x) := \big(\operatorname{Cov}(y(x), y_1), \dots, \operatorname{Cov}(y(x), y_n)\big)^T \in \mathbb{R}^n$ denote the covariance matrix and the covariance vector, respectively, and the entries of the vector $\mu = (\mu_0, \dots, \mu_p)^T$ are Lagrange multipliers. Solving (2.8) by Schur matrix complement inversion and substituting in (2.7) leads to the Kriging predictor formula

$$\hat y(x) = f(x)\beta^* + c(x)^T C^{-1} \big(Y - F\beta^*\big), \tag{2.10}$$

where $\beta^*$ is the generalized least squares solution to the regression problem $F\beta \approx Y$; see (2.19) below. The covariance structure is modeled by a parameterized spatial correlation function $\operatorname{scf}$,

$$\operatorname{Cov}\big(y(x_i), y(x_j)\big) = \sigma^2 \operatorname{scf}(\theta, x_i, x_j). \tag{2.11}$$

Here, $\theta = (\theta_1, \dots, \theta_d) \in \mathbb{R}^d$ is a vector of distance weights, which models the influence of the coordinate-wise spatial correlation on the prediction. The correlation matrix is defined by

$$R(\theta) := \big(\operatorname{scf}(\theta, x_i, x_j)\big)_{i,j=1}^{n} \in \mathbb{R}^{n \times n}. \tag{2.12}$$
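For concreteness, the saddle-point system (2.8) can be assembled and solved directly. The following sketch assumes the Gaussian correlation function (2.14) and constant regression; the function name and parameter choices are illustrative:

```python
import numpy as np
from scipy.spatial.distance import cdist

def kriging_weights(x, X, theta, sigma2=1.0):
    """Solve the Kriging equation system (2.8) for the weights omega(x).

    x : (d,) untried site; X : (n, d) sample sites; theta : (d,) distance weights.
    Gaussian correlation (2.14) and constant regression (f(x) = 1) are assumed.
    """
    n = X.shape[0]
    Xs = X * np.sqrt(theta)                                   # coordinate-wise scaling
    C = sigma2 * np.exp(-cdist(Xs, Xs, "sqeuclidean"))        # covariance matrix C
    c = sigma2 * np.exp(-cdist(x[None] * np.sqrt(theta), Xs,
                               "sqeuclidean")).ravel()        # covariance vector c(x)
    F = np.ones((n, 1))                                       # design matrix, f(x) = 1
    A = np.block([[C, F], [F.T, np.zeros((1, 1))]])           # augmented system (2.8)
    b = np.concatenate([c, [1.0]])                            # right-hand side
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]                                   # weights, Lagrange multiplier
```

The prediction at $x$ is then $\hat y(x) = \omega(x)^T Y$ as in (2.7); equivalently, one can use the closed form (2.10).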
In order to avoid ambiguity due to different parameterizations of the correlation models, we fix the following convention for the rest of the paper.

Convention 1
Large distance weight values correspond to weak spatial correlation, and small distance weight values correspond to strong spatial correlation. More precisely, we assume that feasible spatial correlation functions are always parameterized such that

$$\lim_{\theta \to 0} \operatorname{scf}(\theta, p, q) = 1, \qquad \lim_{\theta \to \infty} \operatorname{scf}(\theta, p, q) = 0 \tag{2.13}$$

at distinct locations $p \neq q$. All correlation models applied in the papers briefly reviewed in the introduction can be parameterized accordingly. A collection of spatial correlation functions is given in several publications on Cokriging/Kriging, including [21, Table 2.1]. For example, the Gaussian correlation function parameterized with respect to the convention above is given by

$$\operatorname{scf}(\theta, p, q) = \exp\Big(-\sum_{k=1}^{d} \theta_k\, |p_k - q_k|^2\Big). \tag{2.14}$$
The results and proofs presented below hold true without change for every admissible spatial correlation model, assuming that the sample errors are normally distributed and that the process variance is stationary; that is, $\sigma$ is independent of the locations $x_i$, $x_j$. In this setting, hyperparameter training for Kriging models consists of optimizing the corresponding maximum likelihood function

$$L(\theta, \sigma^2, \beta) = \big(2\pi\sigma^2\big)^{-n/2} \det\!\big(R(\theta)\big)^{-1/2} \exp\!\left(-\frac{(Y - F\beta)^T R(\theta)^{-1} (Y - F\beta)}{2\sigma^2}\right). \tag{2.15}$$

For $\theta$ fixed, optima for $\sigma = \sigma(\theta)$ and $\beta = \beta(\theta)$ can be derived analytically, see [20], so that hyperparameter training for Kriging models is reduced to the following optimization problem:

$$\min_{\theta}\ ML(\theta) := \big(2\pi\sigma^2(\theta)\big)^{n/2} \det\!\big(R(\theta)\big)^{1/2} e^{n/2}, \tag{2.16}$$

where the dependency on $\theta$ is as follows:

$$\sigma^2(\theta) = \frac{1}{n}\,\Sigma(\theta)^T R(\theta)^{-1}\,\Sigma(\theta), \tag{2.17}$$

$$\Sigma(\theta) = Y - F\beta^*(\theta), \tag{2.18}$$

$$\beta^*(\theta) = \big(F^T R(\theta)^{-1} F\big)^{-1} F^T R(\theta)^{-1} Y, \tag{2.19}$$

and $\beta^*(\theta)$ is the generalized least squares solution to the regression problem $F\beta \approx Y$; for details, see, for example, [20, 21]. Because the logarithm is monotonic, this is equivalent to minimizing the so-called condensed log-likelihood function

$$n \log\big(\sigma^2(\theta)\big) + \log \det\big(R(\theta)\big), \tag{2.20}$$

often encountered in the literature.
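Numerically, the condensed log-likelihood (2.20) is typically evaluated via a Cholesky factorization of $R(\theta)$, which yields both the linear solves in (2.17)-(2.19) and $\log \det R(\theta)$. The following is a minimal sketch, assuming the Gaussian correlation model (2.14) and constant regression (ordinary Kriging); names and setup are illustrative, not from the paper:

```python
import numpy as np
from scipy.spatial.distance import cdist

def condensed_log_likelihood(theta, X, Y):
    """Condensed log-likelihood n*log(sigma^2) + log det R; cf. (2.16)-(2.20).

    theta : (d,) distance weights; X : (n, d) sample sites; Y : (n,) responses.
    Gaussian correlation (2.14) and constant regression are assumed.
    """
    n = X.shape[0]
    # Gaussian correlation matrix R(theta), cf. (2.12) and (2.14)
    Xs = X * np.sqrt(theta)
    R = np.exp(-cdist(Xs, Xs, "sqeuclidean"))
    # Cholesky factorization; raises LinAlgError if R is numerically indefinite
    L = np.linalg.cholesky(R)
    F = np.ones((n, 1))                                   # constant regression basis
    Ri_Y = np.linalg.solve(L.T, np.linalg.solve(L, Y))    # R^{-1} Y
    Ri_F = np.linalg.solve(L.T, np.linalg.solve(L, F))    # R^{-1} F
    beta = np.linalg.solve(F.T @ Ri_F, F.T @ Ri_Y)        # GLS solution, cf. (2.19)
    resid = Y - (F @ beta).ravel()                        # error vector, cf. (2.18)
    sigma2 = resid @ np.linalg.solve(L.T, np.linalg.solve(L, resid)) / n  # (2.17)
    logdetR = 2.0 * np.sum(np.log(np.diag(L)))            # log det R from Cholesky
    return n * np.log(sigma2) + logdetR
```

The Cholesky factorization fails exactly when $R(\theta)$ is numerically indefinite, which foreshadows the conditioning issues analyzed in the next section.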

Asymptotic Behavior of the Maximum Likelihood Function, or Why Kriging Model Training Is Tricky: Part I
The condition number of a regular matrix $R \in \mathbb{R}^{n \times n}$ with respect to a given matrix norm $\|\cdot\|$ is defined as $\operatorname{cond}(R) := \|R\| \cdot \|R^{-1}\|$. For the matrix norm induced by the Euclidean vector norm,

$$\operatorname{cond}_2(R) = \frac{\lambda_{\max}}{\lambda_{\min}},$$

where $\lambda_{\max}$, $\lambda_{\min}$ are the largest and the smallest eigenvalue of $R$, respectively. In order to prevent the solution of the Kriging equation system (2.8) from being spoiled by numerical errors, it is important to prevent the covariance matrix from being severely ill-conditioned. However, the next theorem shows that eventually, when the condition number approaches infinity, so does the associated likelihood function. Keep in mind that we have formulated likelihood estimation as a minimization problem; see (2.16). Throughout this section, we will assume that the regression design matrix $F$ from (2.5) features the vector $\mathbf{1} \in \mathbb{R}^n$ as its first column. This is the case of the highest practical relevance and, in fact, of particular difficulty, since in this case the first column of $F$ coincides with a limit eigenvector of the correlation matrix, as will be seen in the following.
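The blowup of $\operatorname{cond}(R(\theta))$ as $\theta \to 0$ is easy to observe numerically. A small illustrative script, with the same assumptions as the earlier sketches (Gaussian correlation, random sample sites):

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(15, 2))      # 15 random sample sites in [0, 1]^2

for theta in [10.0, 1.0, 0.1, 0.01, 0.001]:
    R = np.exp(-theta * cdist(X, X, "sqeuclidean"))
    # For symmetric positive definite R, the default 2-norm condition number
    # equals lambda_max / lambda_min
    print(f"theta = {theta:7.3f}   cond(R) = {np.linalg.cond(R):.3e}")
```

As $\theta$ shrinks, $R(\theta)$ approaches the rank-one matrix $\mathbf{1}\mathbf{1}^T$ and the condition number explodes, in line with the discussion above.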
Theorem 3.1. Let $x_1, \dots, x_n \in \mathbb{R}^d$ and $Y := (y(x_1), \dots, y(x_n))^T \in \mathbb{R}^n$ be a data set of sampled sites and responses. Let $R(\theta)$ be the associated spatial correlation matrix, and let $\Sigma(\theta)$ be the vector of errors with respect to the chosen regression model. Furthermore, let $\operatorname{cond}(R(\theta))$ be the condition number of $R(\theta)$. Suppose that the following conditions hold true:
(1) the eigenvalues $\lambda_j(\tau\mathbf{1})$ of $R(\tau\mathbf{1})$ are mutually distinct for $\tau > 0$ sufficiently close to zero;
(2) the derivatives of the eigenvalues do not vanish in the limit: $(d/d\tau)|_{\tau=0}\, \lambda_j(\tau\mathbf{1}) = \lambda_j'(0) \neq 0$ for all $j = 2, \dots, n$;
(3) the limit error vector satisfies $\Sigma(0) \notin \operatorname{span}\{\mathbf{1}\} \cup \mathbf{1}^{\perp}$.
Then there exist constants $c_1, c_2 \in \mathbb{R}$ such that, asymptotically for $\tau \to 0$, the maximum likelihood function $ML$ is bounded from below and from above by $c_1$ and $c_2$ times positive powers of $\operatorname{cond}(R(\tau\mathbf{1}))$; in particular, $ML(\tau\mathbf{1}) \to \infty$ as $\operatorname{cond}(R(\tau\mathbf{1})) \to \infty$.

Remark 3.2. (1) The conditions given in the above theorem cannot be proven to hold true in general, since they depend on the data set in question. However, they hold true for nondegenerate data sets. In Appendix A, a relationship between condition (2) and the regularity of $R'(0)$ is established, giving strong support that condition (2) is generally valid. Concerning the third condition, it will be shown in Lemma 3.4 that the limit $\Sigma(0)$ exists, given conditions (1) and (2). Note that the set $\operatorname{span}\{\mathbf{1}\} \cup \mathbf{1}^{\perp}$ is of Lebesgue measure zero in $\mathbb{R}^n$. In all practical applications known to the author, these conditions were fulfilled.
(2) It holds that $\lim_{\theta \to \infty} R(\theta) = I \in \mathbb{R}^{n \times n}$. Hence, the likelihood function approaches a constant limit for $\theta \to \infty$, and $\lim_{\theta \to \infty} \operatorname{cond}(R(\theta)) = 1$. The corresponding predictor behavior is investigated in Section 4.
(3) Even though Theorem 3.1 shows that the model likelihood becomes arbitrarily bad for hyperparameters $\theta \to 0$, the optimum might lie very close to the blowup region of the condition number, leading to still quite ill-conditioned covariance matrices [11]. This fact, as well as the general behavior of the likelihood function as predicted by Theorem 3.1, is illustrated in Figure 1.
(4) Figure 2(b) provides an additional illustration of Theorem 3.1.
(5) Theorem 3.1 offers a strategy for choosing starting solutions for the optimization problem (2.16): take each $\theta_k$, $k \in \{1, \dots, d\}$, as small as possible such that the corresponding correlation matrix is still numerically positive definite; a sketch of this strategy is given after this remark.
(6) A related investigation of interpolant limits has been performed in [18], but for standard radial basis functions.
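The starting-solution strategy of point (5) can be implemented by attempting Cholesky factorizations for growing $\theta$. A minimal sketch with hypothetical names, again assuming the Gaussian model and an isotropic weight $\theta = \tau\mathbf{1}$:

```python
import numpy as np
from scipy.spatial.distance import cdist

def smallest_feasible_theta(X, theta0=1e-8, growth=10.0):
    """Return a small isotropic distance weight theta such that R(theta * 1)
    is still numerically positive definite (Remark 3.2, point (5))."""
    theta = theta0
    while True:
        R = np.exp(-theta * cdist(X, X, "sqeuclidean"))
        try:
            np.linalg.cholesky(R)            # succeeds iff R is numerically SPD
            return theta
        except np.linalg.LinAlgError:
            theta *= growth                  # R too close to 11^T; increase theta
```

Growing $\theta$ multiplicatively costs only a few extra factorizations and returns a starting hyperparameter close to, but outside of, the blowup region of the condition number.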
In order to support readability, we divide the proof of Theorem 3.1 into smaller units, organized as follows. As a starting point, we establish two auxiliary lemmata on the existence of limits of eigenvalue quotients and of error vectors. Subsequently, the proof of the main theorem is conducted relying on the lemmata.

Lemma 3.3. In the setting of Theorem 3.1, let $\lambda_i(\theta)$, $i = 1, \dots, n$, be the eigenvalues of $R(\theta)$, ordered by size. Then,

$$\lim_{\tau \to 0} \frac{\lambda_i(\tau\mathbf{1})}{\lambda_j(\tau\mathbf{1})} = \frac{\lambda_i'(0)}{\lambda_j'(0)} = \text{const.} > 0 \quad \forall i, j = 2, \dots, n. \tag{3.4}$$
Proof. Because of (2.13), it holds that $R(0) = \mathbf{1}\mathbf{1}^T \in \mathbb{R}^{n \times n}$ for every admissible spatial correlation function. Since $\mathbf{1}\mathbf{1}^T \mathbf{1} = n\mathbf{1} \in \mathbb{R}^n$ and $\mathbf{1}\mathbf{1}^T W = 0$ for all $W \in \mathbf{1}^{\perp} \subset \mathbb{R}^n$, the limit eigenvalues of the correlation matrix ordered by size are given by $\lambda_1(0) = n$ and $\lambda_j(0) = 0$, $j = 2, \dots, n$. Under the present conditions, the eigenvalues $\lambda_i$ are differentiable with respect to $\theta$, and the claim follows by applying l'Hôpital's rule to the quotients $\lambda_i(\tau\mathbf{1})/\lambda_j(\tau\mathbf{1})$.

Lemma 3.4. In the setting of Theorem 3.1, the limit $\Sigma(0) := \lim_{\tau \to 0} \Sigma(\tau\mathbf{1})$ of the error vectors exists.

Proof. Using the eigenvector decomposition

$$R(\tau\mathbf{1}) = Q(\tau)\Lambda(\tau)Q(\tau)^T, \qquad Q = (X_1, \dots, X_n), \quad \Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n), \tag{3.7}$$

we can restate (2.19) as

$$\beta^*(\tau) = \big(F^T Q \Lambda^{-1} Q^T F\big)^{-1} F^T Q \Lambda^{-1} Q^T Y. \tag{3.8}$$

Writing $F = (F_0, F_1, \dots, F_p)$ columnwise, a direct computation expands (3.8) in terms of the scalar products $\langle F_i, X_j \rangle$ (3.9). Note that $F_0 = \mathbf{1} = \sqrt{n}\, X_1(0)$ for the default choices of regression basis functions, so that $\langle F_0, X_1(0) \rangle \neq 0$ and $\langle F_0, X_j(0) \rangle = 0$ for $j = 2, \dots, n$. The desired result follows from (3.6) and Lemma 3.3.

Remark 3.5. Actually, one cannot prove in general that the matrix $F^T Q \Lambda^{-1} Q^T F$ is regular, since this matrix depends on the chosen sample locations. It might be possible to artificially choose samples such that, for example, $F$ does not have full rank. Yet if so, the whole Kriging exercise cannot be performed, since (2.19) is not well defined in this case. For constant regression, that is, $F = \mathbf{1}$, this is impossible. Note that $F$ is independent of $\theta$. Now, let us prove Theorem 3.1 using the notation introduced above.
Proof. As shown in the proof of Lemma 3.3,

$$\lambda_1(0) = n, \qquad \lambda_j(0) = 0 \quad \text{for } j = 2, \dots, n. \tag{3.10}$$

Because the correlation matrix is symmetric and positive definite, a decomposition $R(\theta) = Q(\theta)\Lambda(\theta)Q(\theta)^T$ with $Q$ orthogonal and $\Lambda$ diagonal exists; write $W(\theta) := Q(\theta)^T \Sigma(\theta)$ for the error vector expressed in the eigenbasis.

Case 1. Suppose that $W_i(0) \neq 0$ for all $i = 1, \dots, n$. The lower bound on $ML$ then follows by estimating the eigenvalue quotients via Lemma 3.3; see (3.15).

Case 2. For the index $m$ defined by $\lambda_m := \min_{j \in J}\{\lambda_j\}$, it holds that $m \in \{2, \dots, n\}$. By Lemma 3.3, the result can be established as in Case 1.

The estimate of the upper bound of $ML$ is obtained in an analogous manner, using Bernoulli's inequality, Lemma 3.3, and the fact that $n/(n-1) \leq 2$.
Remark 3.6. The extension of the main theorem to Cokriging prediction [7, 20] is a straightforward exercise, since the limit eigenvectors of the Cokriging correlation matrix corresponding to nonzero eigenvalues can also be determined explicitly.

Why Kriging Model Training Is Tricky: Part II
The following simple observation illustrates the Kriging predictor behavior for large distance weights $\theta$. Notation is to be understood as introduced in Section 3.
Observation 1. Suppose that sample locations $\{x_1, \dots, x_n\} \subset \mathbb{R}^d$ and responses $y_i = y(x_i) \in \mathbb{R}$, $i = 1, \dots, n$, are given. Let $\hat y$ be the corresponding Kriging predictor according to (2.10). Then, for $\mathbb{R}^d \ni x \notin \{x_1, \dots, x_n\}$ and distance weights $\theta \to \infty$, it holds that

$$\hat y(x) \longrightarrow f(x)\beta^*.$$

Put in simple words: if too large distance weights are chosen, then the resulting predictor function has the shape of the regression model, $x \mapsto f(x)\beta$, with peaks at the sample sites; compare Figure 2, dashed curve.
Proof. According to (2.10), it holds that $\hat y(x) = f(x)\beta^* + c(x)^T C^{-1}(Y - F\beta^*)$, where $C = \sigma^2 R$. By (2.13), it holds that $R(\theta) \to I$ for $\theta \to \infty$ for every admissible spatial correlation model of the form (2.11), and likewise $c(x) \to 0$ for $x \notin \{x_1, \dots, x_n\}$. By the Cauchy-Schwarz inequality, the second term vanishes in the limit, which proves the claim.

Remark 4.1. The same predictor behavior arises at locations far away from the sampled sites, that is, for $\operatorname{dist}(x, \{x_1, \dots, x_n\}) \to \infty$. This has to be considered when extrapolating beyond the sample data set.
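Observation 1 is easy to reproduce numerically: as $\theta$ grows, the predictor collapses onto the regression part. A small one-dimensional sketch with made-up data, assuming Gaussian correlation and constant regression:

```python
import numpy as np
from scipy.spatial.distance import cdist

X = np.array([[0.0], [0.3], [0.7], [1.0]])    # sample sites (illustrative)
Y = np.array([0.0, 1.0, -1.0, 0.5])           # responses (illustrative)
x = np.array([[0.5]])                          # untried site

for theta in [1.0, 10.0, 100.0, 1000.0]:
    R = np.exp(-theta * cdist(X, X, "sqeuclidean"))
    r = np.exp(-theta * cdist(x, X, "sqeuclidean")).ravel()
    Ri = np.linalg.inv(R)
    ones = np.ones(len(Y))
    beta = (ones @ Ri @ Y) / (ones @ Ri @ ones)   # GLS coefficient, cf. (2.19)
    y_hat = beta + r @ Ri @ (Y - beta * ones)     # predictor, cf. (2.10)
    print(f"theta = {theta:7.1f}   y_hat(0.5) = {y_hat:+.6f}   beta = {beta:+.6f}")
# As theta grows, R -> I and r -> 0, so y_hat(x) -> beta: the regression model
```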
Figure 2 shows an example data set for which the Kriging maximum likelihood function is constant over a large range of $\theta$ values. This example was not constructed artificially but occurred in the author's daily work of computing approximate fluid flow solutions based on proper orthogonal decomposition (POD) followed by coefficient interpolation, as described in [23, 24]. The sample data set is given in Table 1. The Kriging estimator given by the dashed line shows the behavior predicted by Observation 1. Note that from the model training point of view, all distance weights $\theta > 1$ are equally likely, yet they lead to quite different predictor functions. Since the ML function features no local minimum, classical hyperparameter estimation is impossible.

A. On the Validity of Condition 2 in Theorem 3.1
The next lemma strongly indicates that the second condition of the main Theorem 3.1 is satisfied in nondegenerate cases.

Lemma A.1. Let $\mathbb{R}^d \ni \theta \mapsto R(\theta) \in \mathbb{R}^{n \times n}$ be the correlation matrix function corresponding to a given set of Kriging data and a fixed spatial correlation model. Let $\lambda_i(\theta)$, $i = 1, \dots, n$, be the eigenvalues of $R$ ordered by size with corresponding eigenvector matrix $Q = (X_1, \dots, X_n)$, and define $\theta : \mathbb{R} \to \mathbb{R}^d$, $\tau \mapsto \theta(\tau) := \tau\mathbf{1}$. Suppose that the eigenvalues are mutually distinct for $\tau > 0$ close to zero. Denote the directional derivative in the direction $\mathbf{1}$ with respect to $\tau$ by a prime, that is, $(d/d\tau) R(\tau\mathbf{1}) = R'(\tau)$ and so forth. Then it holds that $\lambda_i'(0) \neq 0$ for all $i = 1, \dots, n$, with at most one possible exception.
Proof. For every admissible spatial correlation function $\operatorname{scf}(\theta, \cdot, \cdot)$ of the form (2.11), it holds that $\lim_{\tau \to 0} \operatorname{scf}(\tau\mathbf{1}, p, q) = 1$. Thus, $R(0) = \mathbf{1}\mathbf{1}^T \in \mathbb{R}^{n \times n}$. It holds that $\mathbf{1}\mathbf{1}^T \mathbf{1} = n\mathbf{1} \in \mathbb{R}^n$ and $\mathbf{1}\mathbf{1}^T W = 0$ for all $W \in \mathbf{1}^{\perp} \subset \mathbb{R}^n$; therefore, the limits of the eigenvalues of the correlation matrix ordered by size are given by $\lambda_1(0) = n$ and $\lambda_j(0) = 0$, $j = 2, \dots, n$. The assumption that no multiple eigenvalues occur ensures that the eigenvalues $\lambda_i$ and corresponding normalized, oriented eigenvectors $X_i$ are differentiable with respect to $\tau$. Let $Q(\tau) = (X_1(\tau), \dots, X_n(\tau)) \in \mathbb{R}^{n \times n}$ be the orthogonal matrix of eigenvectors, such that $R(\tau\mathbf{1}) = Q(\tau)\Lambda(\tau)Q(\tau)^T$, where $Q(0)$ is the continuous extension of $Q(\tau)$; see, for example, [22]. Hence, $\lambda_i(\tau) = X_i(\tau)^T R(\tau\mathbf{1}) X_i(\tau)$ and, by first-order eigenvalue perturbation, $\lambda_i'(0) = X_i(0)^T R'(0) X_i(0)$. Now, aiming at a contradiction, assume that there exist two indices $j_0 \neq k_0$ such that $\lambda_{k_0}'(0) = 0 = \lambda_{j_0}'(0)$.
For most correlation models, the derivative $R'(0)$ can be computed explicitly.
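For the Gaussian model (2.14) with $\theta = \tau\mathbf{1}$, for instance, $R(\tau)_{ij} = \exp(-\tau \|x_i - x_j\|^2)$, so that $R'(0)_{ij} = -\|x_i - x_j\|^2$. A hypothetical numerical spot-check of condition (2) along these lines, illustrative only:

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.uniform(size=(10, 2))                 # illustrative sample sites

Rp0 = -cdist(X, X, "sqeuclidean")             # R'(0) for the Gaussian model (2.14)

# Approximate lambda_j'(0) for the n-1 eigenvalues with lambda_j(0) = 0 by the
# finite difference lambda_j(tau) / tau at a small tau.
tau = 1e-6
R = np.exp(tau * Rp0)                         # R(tau * 1), Gaussian model
lam = np.sort(np.linalg.eigvalsh(R))          # eigenvalues in ascending order
print(lam[:-1] / tau)                         # should be bounded away from zero
```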

B. Test Setting Corresponding to Figure 1
In order to produce Figure 1, a test function $y : [0, 100] \times [0, 100] \to \mathbb{R}$ has been applied. The Kriging predictor function displayed in this figure has been constructed based on the fifteen randomly chosen sample points shown in Table 2.

Figure 1: Ordinary Kriging estimation (a) of a two-dimensional analytical test function (b) based on 15 sample points. (c), (d): Two views of the associated log-ML, scaled by a factor of 1/100. This example shows that hyperparameter optima might lie very close to the blowup of the log-ML due to ill-conditioning, which is proved to occur by Theorem 3.1. Model function and sample locations are listed in Appendix B.

Figure 2: Kriging of data stemming from reduced-order modeling of solutions to the Navier-Stokes equations via POD coefficient interpolation; see, for example, [23]. (a) Kriging predictors for different choices of the distance weight $\theta$. (b) Corresponding condensed log-likelihood function, showing the limit behavior predicted by Theorem 3.1. Numerically, the function features no local minimum; thus, it is impossible to say which one of the estimator functions in the left picture is the most likely.


Table 1: Sampled sites corresponding to the example displayed in Figure 2.