SOME ASYMPTOTIC THEORY FOR FUNCTIONAL REGRESSION AND CLASSIFICATION

Exploiting an expansion for analytic functions of operators, the asymptotic distribution of an estimator of the functional regression parameter is obtained in a rather simple way; the result is applied to testing linear hypotheses. The expansion is also used to obtain a quick proof for the asymptotic optimality of a functional classification rule, given Gaussian populations.


Introduction
Certain functions of the covariance operator, such as the square root of a regularized inverse, are important components of many statistics employed for functional data analysis. If Σ is a covariance operator on a Hilbert space, Σ̂ a sample analogue of this operator, and ϕ a function on the complex plane that is analytic on a domain containing a contour around the spectrum of Σ, a tool of generic importance is the comparison of ϕ(Σ̂) and ϕ(Σ) by means of a Taylor expansion (1.1). It should be noted that the derivative φ̇_Σ appearing in (1.1) is not in general equal to multiplication by ϕ′(Σ), where ϕ′ is the numerical derivative of ϕ; see also Section 3. In this paper, two further applications of the approximation in (1.1) will be given, both related to functional regression. The first application (Section 4) concerns the functional regression estimator itself. Hall and Horowitz [1] have shown that the IMSE of their estimator, based on a Tikhonov-type regularized inverse, is rate optimal. In this paper, as a complementary result, the general asymptotic distribution is obtained, with potential application to testing linear hypotheses of arbitrary finite dimension, mentioned in Cardot et al. [2] as an open problem; these authors concentrate on testing a simple null hypothesis. Cardot et al. [3] establish convergence in probability and almost sure convergence of their estimator, which is based on spectral cutoff regularization of the inverse of the sample covariance operator. In the present paper, the covariance structure of the Gaussian limit will be completely specified. The proof turns out to be routine thanks to a "delta method" for ϕ(Σ̂) − ϕ(Σ), which is almost immediate from (1.1).
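To fix ideas, a plausible form of the expansion (1.1) and of the derivative term appearing in it, consistent with the resolvent calculus reviewed in Section 3 (the sign convention R(z) = (zI − Σ)^{-1} is ours), is

\[
\varphi(\widehat\Sigma)
  = \varphi(\Sigma) + \dot\varphi_{\Sigma}(\widehat\Sigma-\Sigma) + \text{remainder},
\qquad
\dot\varphi_{\Sigma}(\Pi)
  = \frac{1}{2\pi i}\oint_{\Gamma}\varphi(z)\,(zI-\Sigma)^{-1}\,\Pi\,(zI-\Sigma)^{-1}\,dz .
\]

Only when Π commutes with Σ can the two resolvent factors be merged, in which case φ̇_Σ(Π) does reduce to ϕ′(Σ)Π.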
The second application (Section 5) concerns functional classification, according to a slight modification of a method by Hastie et al. [4] exploiting penalized functional regression. It will be shown that this method is asymptotically optimal (Bayes) when the two populations are represented by equivalent Gaussian distributions with the same covariance operator. The simple proof is based on an upper bound for the norm of ϕ(Σ̂) − ϕ(Σ), which follows at once from (1.1).
Let us conclude this section with some comments and further references. The expansion in (1.1) can be found in Gilliam et al. [5], and the ensuing delta method is derived and applied to regularized canonical correlation in Cupidon et al. [6]. For functional canonical correlation see also Eubank and Hsing [7], He et al. [8], and Leurgans et al. [9]. When the perturbation Σ̂ − Σ commutes with Σ, the expansion (1.1) can already be found in Dunford and Schwartz [10, Chapter VII], and the derivative does indeed reduce to the numerical derivative. This condition is fulfilled only in very special cases, for instance, when the random function whose covariance operator is Σ is a second-order stationary process on the unit interval. In this situation, the eigenfunctions are known and only the eigenvalues are to be estimated. This special case, which will not be considered here, is discussed in Johannes [11], who in particular deals with regression function estimators and their IMSE in Sobolev norms when the regressor is such a stationary process. General information about functional data analysis can be found in the monographs by Ramsay and Silverman [12] and Ferraty and Vieu [13]. Functional time series are considered in Bosq [14]; see also Mas [15].

Preliminaries
As will be seen in the examples below, it is expedient to consider functional data as elements of an abstract Hilbert space H of infinite dimension, separable, and over the real numbers. Inner product and norm in H will be denoted by ⟨·,·⟩ and ‖·‖, respectively. Let (Ω, F, P) be a probability space, X : Ω → H a Hilbert space valued random variable (i.e., measurable with respect to the σ-field of Borel sets B_H in H), and η : Ω → R a real-valued random variable. For all that follows it will be sufficient to assume that X and η possess finite moments of sufficiently high order. The mean and covariance operator of X will be denoted by μ_X and Σ_{X,X}, respectively, where a ⊗ b is the tensor product in H. The Riesz representation theorem guarantees that these quantities are uniquely determined by their defining relations. Throughout, Σ_{X,X} is assumed to be one-to-one. Let L denote the Banach space of all bounded linear operators T : H → H, equipped with the norm ‖·‖_L. An operator U ∈ L is called Hilbert–Schmidt if its Hilbert–Schmidt norm ‖U‖_HS, the number in (2.4), is finite. The tensor product for elements a, b ∈ H will be denoted by a ⊗ b, and that for elements U, V ∈ L_HS by U ⊗_HS V. The two problems to be considered in this paper both deal with cases where the best predictor of η in terms of X is linear; see (2.7). Just as in the univariate case (Rao [17, Section 4g]), we have the relation (2.8). It should be noted that if Σ_{X,X} is one-to-one and Σ_{X,η} lies in its range, we can solve (2.8) and obtain f = Σ_{X,X}^{-1} Σ_{X,η} (2.9). Since the underlying distribution is arbitrary, the empirical distribution, given a sample (X_1, η_1), ..., (X_n, η_n) of independent copies of (X, η), can be substituted for it. The minimization property is now the least squares property, and the same formulas are obtained with μ_X, Σ_{X,X}, μ_η, and Σ_{X,η} replaced with their estimators. Let us next specify the two problems.
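In our rendering (the displays themselves are not reproduced above), the population quantities and the normal equation just referred to take the standard forms

\[
\mu_X=\mathbb{E}X,\qquad
\Sigma_{X,X}=\mathbb{E}\,(X-\mu_X)\otimes(X-\mu_X),\qquad
\Sigma_{X,\eta}=\mathbb{E}\,(X-\mu_X)(\eta-\mu_\eta),
\]
\[
\Sigma_{X,X}\,f=\Sigma_{X,\eta},
\]

with the empirical versions obtained by replacing expectations with sample averages, for example Σ̂_{X,X} = n^{-1} ∑_{i=1}^{n} (X_i − X̄) ⊗ (X_i − X̄).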

Functional Regression Estimation
The model here is η = α + ⟨f, X⟩ + ε, where ε is a real-valued error variable and the following assumption (2.15) is satisfied: ε is independent of X. A predictor that is quadratic in the inner product of H is in fact linear in the inner product of L_HS, because ⟨X, a⟩⟨X, b⟩ = ⟨X ⊗ X, a ⊗ b⟩_HS; we will not pursue this example here.
In the infinite-dimensional case Σ̂_{X,X} cannot be one-to-one, and in order to estimate f from the sample version of (2.9), a regularized inverse of Tikhonov type will be used, as in Hall and Horowitz [1]. Thus, we arrive at the estimator (see also (2.11) and (2.13))

f̂_δ = (δ + Σ̂_{X,X})^{-1} Σ̂_{X,η} .     (2.20)
Let us also introduce the population analogue f_δ = (δ + Σ_{X,X})^{-1} Σ_{X,η} = (δ + Σ_{X,X})^{-1} Σ_{X,X} f (2.21). In Section 4, the asymptotic distribution of this estimator will be obtained, and the result will be applied to testing.
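As a purely illustrative numerical sketch of (2.20) (the function name, the grid-weighting convention, and the simulated data below are our own choices, not the paper's), the estimator can be computed for curves observed on a common grid as follows.

import numpy as np

def tikhonov_functional_regression(X, y, delta, grid_spacing=None):
    """Tikhonov-regularized slope estimate f_hat_delta on a common grid.

    X : (n, p) array; row i contains the curve X_i evaluated at p grid points.
    y : (n,) array of scalar responses eta_i.
    delta : regularization parameter (> 0).
    grid_spacing : quadrature weight approximating the L^2 inner product;
        defaults to 1/p for an equispaced grid on the unit interval.
    """
    n, p = X.shape
    if grid_spacing is None:
        grid_spacing = 1.0 / p
    Xc = X - X.mean(axis=0)                      # centered curves
    yc = y - y.mean()                            # centered responses
    # Discretized sample covariance operator and cross-covariance:
    # (Sigma_hat g)(s) ~ sum_t c_hat(s, t) g(t) * grid_spacing.
    Sigma_hat = (Xc.T @ Xc / n) * grid_spacing
    cov_X_eta = Xc.T @ yc / n
    # f_hat_delta = (delta I + Sigma_hat)^{-1} Sigma_hat_{X, eta}, cf. (2.20).
    return np.linalg.solve(delta * np.eye(p) + Sigma_hat, cov_X_eta)

# Example with simulated data: n = 200 Brownian-like curves on p = 50 points.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50)).cumsum(axis=1) / np.sqrt(50)
f_true = np.sin(np.linspace(0.0, np.pi, 50))
y = X @ f_true / 50 + 0.1 * rng.normal(size=200)
f_hat = tikhonov_functional_regression(X, y, delta=0.01)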

Functional Classification
The method discussed here is essentially the one in Hastie et al. In this case, the distribution of X is π_1 P_1 + π_2 P_2, with mean μ_X = π_1 μ_1 + π_2 μ_2 (2.23), and covariance operator Σ_{X,X} = Σ + π_1 π_2 (μ_1 − μ_2) ⊗ (μ_1 − μ_2) (2.24), where Σ denotes the common covariance operator of the two populations.

Hastie et al. [19] now introduce the indicator response variables η_j = 1_{{j}}(I), j = 1, 2, and assume that each η_j satisfies (2.7) for some α_j ∈ R and f_j ∈ H. Note that

2.26
Since η_j is Bernoulli, we have, of course, E(η_j | X) = P{I = j | X}. Precisely as for matrices (Rao and Toutenburg [20, Theorem A.18]), the inverse of the operator in (2.24) equals

Σ_{X,X}^{-1} = Σ^{-1} − (π_1 π_2 / (1 + π_1 π_2 ⟨μ_1 − μ_2, Σ^{-1}(μ_1 − μ_2)⟩)) (Σ^{-1}(μ_1 − μ_2)) ⊗ (Σ^{-1}(μ_1 − μ_2)),

provided that the following assumption is satisfied.
Assumption 2.5. The vector μ_1 − μ_2 lies in the range of Σ, that is, μ_1 − μ_2 = Σh for some h ∈ H (2.28). It will also be assumed that condition (2.29) is in force. Assuming (2.28), equation (2.8) can be solved and yields, after some algebra,

2.30
If only X, and not I, is observed, the rule in Hastie et al. [19] assigns X to P_1 if and only if (2.32) holds. Because of assumption (2.29), the rule reduces to (2.33). Hastie et al. [19] claim that in the finite-dimensional case their rule reduces to Fisher's linear discriminant rule, and to the usual rule when the distributions are normal. This remains in fact true in the present infinite-dimensional case. Let us assume that P_j = G(μ_j, Σ), j = 1, 2 (2.34), where G(μ, Σ) denotes a Gaussian distribution with mean μ and covariance operator Σ. It is well known [21–23] that under Assumption 2.5 these Gaussian distributions are equivalent. This is important since there is no "Lebesgue measure" on H [24]. However, the densities of P_1 and P_2 with respect to P_1 can now be considered; it is well known that they have the explicit form (2.35). This leads at once to (2.33) as an optimal Bayes rule, equal in appearance to the one for the finite-dimensional case.
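A plausible rendering of (2.35) and of the ensuing rule, for Gaussian measures with common covariance Σ and μ_1 − μ_2 in the range of Σ, is the familiar Cameron–Martin type expression

\[
\log\frac{dP_1}{dP_2}(x)
  = \Bigl\langle\, x-\tfrac{1}{2}(\mu_1+\mu_2),\ \Sigma^{-1}(\mu_1-\mu_2)\,\Bigr\rangle ,
\]

so that the Bayes rule assigns x to P_1 whenever π_1 dP_1(x) ≥ π_2 dP_2(x), that is, whenever the inner product above is at least log(π_2/π_1); this has the same appearance as in the finite-dimensional normal case.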
In most practical situations, μ_1, μ_2, and Σ are not known, but a training sample (I_1, X_1), ..., (I_n, X_n) of independent copies of (I, X) is given. Let n_j, X̄_j, and Σ̂ denote the group sample sizes, the group sample means, and the pooled sample covariance operator (2.36); we then have, cf. (2.24), a corresponding decomposition of Σ̂_{X,X}. Once again the operator Σ̂ (and Σ̂_{X,X}, for that matter) cannot be one-to-one. In order to obtain an empirical analogue of the rule (2.32), Hastie et al. [4] employ penalized regression, and Hastie et al. [19] also suggest using a regularized inverse. The methods are related. Here the latter method will be used, and X will be assigned to P_1 if and only if (2.39) holds. Section 5 is devoted to showing that this rule is asymptotically optimal when Assumption 2.5 is fulfilled.
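Purely as an illustrative sketch of such a regularized rule (the discretization, the use of the pooled within-group covariance for Σ̂, and the zero threshold are our own choices), a plug-in version of (2.33) can be coded as follows.

import numpy as np

def classify_regularized_centroid(x, X1, X2, delta, grid_spacing=None):
    """Assign the curve x to population 1 or 2 via a regularized centroid rule.

    X1, X2 : (n1, p) and (n2, p) arrays of training curves on a common grid.
    x      : (p,) new curve to be classified.
    delta  : Tikhonov regularization parameter for the inverse covariance.
    """
    n1, p = X1.shape
    n2 = X2.shape[0]
    if grid_spacing is None:
        grid_spacing = 1.0 / p
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    pooled = np.vstack([X1 - m1, X2 - m2])
    Sigma_hat = (pooled.T @ pooled / (n1 + n2)) * grid_spacing
    # Regularized discriminant direction (delta I + Sigma_hat)^{-1}(m1 - m2).
    w = np.linalg.solve(delta * np.eye(p) + Sigma_hat, m1 - m2)
    # Centroid rule with zero threshold (equal priors assumed for this sketch).
    score = (x - 0.5 * (m1 + m2)) @ w * grid_spacing
    return 1 if score >= 0 else 2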

A Review of Some Relevant Operator Theory
It is well known [16] that the covariance operator Σ is nonnegative, Hermitian, of finite trace, and hence Hilbert–Schmidt and therefore compact. The assumption that Σ is one-to-one is equivalent to assuming that Σ is strictly positive. Consequently, Σ has eigenvalues σ²_1 > σ²_2 > ··· ↓ 0, all of finite multiplicity. If we let P_1, P_2, ... be the corresponding eigenprojections, so that ΣP_k = σ²_k P_k, we have the spectral representation Σ = ∑_{k≥1} σ²_k P_k (3.1).

The spectrum of Σ equals σ(Σ) = {0, σ²_1, σ²_2, ...} ⊂ [0, σ²_1]. Let us introduce a rectangular contour Γ around the spectrum as in Figure 1. We are interested in approximations of ϕ(Σ̂) = ϕ(Σ + Π), where Π ∈ L is a perturbation. The application we have in mind arises for Π = Σ̂ − Σ and yields an approximation of ϕ(Σ̂); see also Watson [25] for the matrix case. Therefore, we will not in general assume that Π and Σ commute. In the special case where X is stationary, as considered in Johannes [11], there exists a simpler estimator Σ̃ of Σ such that Σ and Σ̃ − Σ do commute, which results in a simpler theory; see also Remark 4.1.
The resolvent of Σ (3.3) is analytic on the resolvent set ρ(Σ), and the operator ϕ(Σ), defined by the contour integral (3.4), is well defined. For the present operator Σ, as given in (3.1), the resolvent can be written more explicitly as (3.5). Substitution of (3.5) in (3.4) and application of the Cauchy integral formula yields (3.6). Example 3.1. The two functions ϕ_1 and ϕ_2 (cf. (3.7)) are analytic on a domain that satisfies the conditions above. With the help of these functions we may write f̂_δ and f_δ, cf. (2.20) and (2.21), as functions of Σ̂ and Σ, respectively. Regarding the following brief summary and slight extension of some of the results in [5], we also refer to Dunford and Schwartz [10], Kato [26], and Watson [25]. Henceforth, we will assume that ‖Π‖_L ≤ δ/4 (3.10). For such perturbations we have σ(Σ + Π) ⊂ Ω, so that the resolvent set of Σ + Π satisfies (3.12). It should also be noted that
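For orientation, plausible explicit forms of (3.5)–(3.7), consistent with the spectral representation (3.1) (with the sign convention R(z) = (zI − Σ)^{-1}, which may differ from the paper's), are

\[
R(z) = (zI-\Sigma)^{-1} = \sum_{k\ge 1}\frac{P_k}{z-\sigma_k^{2}},
\qquad
\varphi(\Sigma) = \frac{1}{2\pi i}\oint_{\Gamma}\varphi(z)\,R(z)\,dz
  = \sum_{k\ge 1}\varphi(\sigma_k^{2})\,P_k ,
\]
\[
\varphi_1(z)=\frac{1}{\delta+z},\qquad
\varphi_2(z)=\frac{z}{\delta+z},
\qquad\text{so that plausibly}\qquad
\widehat f_\delta=\varphi_1(\widehat\Sigma_{X,X})\,\widehat\Sigma_{X,\eta},
\quad
f_\delta=\varphi_2(\Sigma_{X,X})\,f .
\]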

The basic expansion (similar to Watson [25]) can be written as (3.14), useful for analyzing the error probability for δ → 0 as n → ∞, and also as (3.15), useful for analyzing the convergence in distribution of the estimators. Let us decompose the contour Γ into the two parts Γ_0 = {−(1/2)δ + iy : −1 ≤ y ≤ 1} and Γ_1 = Γ \ Γ_0, write M_ϕ = max_{z∈Γ} |ϕ(z)|, and observe that (3.10) and (3.12) entail (3.16). We now have

3.17
Multiplying both sides by ϕ(z), taking (1/2πi) ∮_Γ, and using 0 < C < ∞ as a generic constant that does not depend on Π or δ, (3.14) and (3.15) yield the following.

3.19
where φ̇_Σ : L → L is a bounded operator, given by (3.20). Remark 3.3. If Σ and Π commute, then so do P_k and Π, and R(z) and Π, and the expressions simplify considerably. In particular, (3.20) reduces to (3.21). Since ‖Π‖_L = ‖Σ̂ − Σ‖_L → 0 in probability, condition (3.10) is fulfilled with arbitrarily high probability for n sufficiently large. Expansions (3.14) and (3.15) and the resulting inequalities hold true for Σ̂ replaced with Σ̂(ω) = Σ + Π(ω) for ω ∈ {‖Π‖_L ≤ δ/4}. Example 3.4. Application to asymptotic distribution theory. In this application δ > 0 will be kept fixed; see also Section 4.1. It is based on the delta method for functions of operators [6], which follows easily from (3.19). In conjunction with (3.22) this yields

3.24
In turn this yields, for any f ∈ H, the corresponding convergence in H, by the continuous mapping theorem. This result will be used in Section 4.
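A plausible reading of the conclusion of Example 3.4, given the convergence √n(Σ̂ − Σ) →_d G_Σ in (3.22), is

\[
\sqrt{n}\,\bigl(\varphi(\widehat\Sigma)-\varphi(\Sigma)\bigr)
  \xrightarrow{\ d\ } \dot\varphi_{\Sigma}(G_\Sigma)
  \quad\text{in }\mathcal L,
\qquad
\sqrt{n}\,\bigl(\varphi(\widehat\Sigma)f-\varphi(\Sigma)f\bigr)
  \xrightarrow{\ d\ } \dot\varphi_{\Sigma}(G_\Sigma)\,f
  \quad\text{in } H,
\]

the second statement following from the first because the map Π ↦ Πf is continuous from L to H.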

Example 3.5. Application to classification. Here we will let δ = δ_n ↓ 0, as n → ∞, and write ϕ_{1,n}(z) = 1/(δ_n + z) to stress the dependence on the sample size. Since max_{z∈Γ} |ϕ_{1,n}(z)| ≤ 1/δ_n (3.26), it is immediate from (3.18) that a corresponding bound holds, a result that will be used in Section 5.

The Asymptotic Distribution
The central limit theorem in Hilbert spaces entails at once (4.1), where G_0 is a zero-mean Gaussian random variable in H, and (4.2), where G_Σ is a zero-mean random variable in L_HS. These convergence results remain true with μ_X replaced by X̄ and, because ε ⊥⊥ X by assumption (2.15), we also have the joint convergence, where G_Σ is the same as in (3.22), and

Because the limiting variables are generated by the sums of iid variables on the left in (4.1) and (4.2), we have (4.5) and (4.6) for the respective covariance structures. These are important for further specifying the limiting distribution of the regression estimator, as will be seen in Section 4.2.
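Plausible forms of (4.1), (4.2) and of the covariance structures (4.5), (4.6), written in our notation, are

\[
\sqrt{n}\,(\bar X-\mu_X)\xrightarrow{\ d\ }G_0 \ \text{in } H,
\qquad
\sqrt{n}\,(\widehat\Sigma_{X,X}-\Sigma_{X,X})\xrightarrow{\ d\ }G_\Sigma \ \text{in } \mathcal L_{HS},
\]
\[
\mathbb{E}\,\langle G_0,x\rangle\langle G_0,y\rangle=\langle \Sigma_{X,X}\,x,\,y\rangle,
\qquad
\mathbb{E}\,\langle G_\Sigma,U\rangle_{HS}\,\langle G_\Sigma,V\rangle_{HS}
  =\mathbb{E}\,\langle W,U\rangle_{HS}\,\langle W,V\rangle_{HS},
\]

where W = (X − μ_X) ⊗ (X − μ_X) − Σ_{X,X}, x, y ∈ H, and U, V ∈ L_HS (finite fourth moments of ‖X‖ are assumed for the second convergence).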
Let us write, for brevity, where, according to (2.20) and (2.21),
With statistical applications in mind, it would be interesting if there existed numbers a_n ↑ ∞ and δ_n ↓ 0, as n → ∞, such that a_n(f̂_{δ_n} − f) →_d H, as n → ∞, in H (4.9), where H is a nondegenerate random vector. It has been shown in Cardot et al. [28], however, that such a convergence in distribution, when we center at f, is not in general possible. Further information about the structure of the covariance operator of the random vector H on the right in (4.10) will be needed in order to exploit the theorem for statistical inference. This will be addressed in the next section.

Further Specification of the Limiting Distribution
It follows from (4.5) that G_0 has a Karhunen–Loève expansion, where the real-valued random variables Z_j, j ∈ N, are iid N(0, 1).
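Since the covariance operator of G_0 is Σ, with eigenvalues σ²_j and eigenvectors p_j as in Section 3, this expansion plausibly reads

\[
G_0=\sum_{j\ge 1}\sigma_j\,Z_j\,p_j ,
\qquad Z_j \ \text{iid } N(0,1).
\]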

4.12
Accordingly, H_1 in (4.10) can be further specified as (4.13). The Gaussian operator in (4.2) has been investigated in Dauxois et al. [27], and here we will briefly summarize some of their results in our notation. By evaluating the inner product in L_HS in the basis p_1, p_2, ..., it follows from (4.6) that

4.14
This last expression does not in general simplify further. However, if we assume that the regressor X satisfies (4.20), it can easily be seen that the expression in (4.14) equals zero if (j, k) ≠ (α, β). As in Dauxois et al. [27], we obtain in this case a diagonal structure, where v²_{j,k} = 2σ⁴_j if j = k, and v²_{j,k} = σ²_j · σ²_k if j ≠ k.

Consequently, the p_j ⊗ p_k (j ∈ N, k ∈ N) are an orthonormal basis of eigenvectors of the covariance operator of G_Σ, with eigenvalues v²_{j,k}. Hence G_Σ has a Karhunen–Loève expansion in L_HS, with the random variables defined accordingly. Let us, for brevity, write (see (3.7) for ϕ_2)

4.22
Summarizing, we have the following result.

Asymptotics under the Null Hypothesis
Let us recall that f_δ is related to f according to (2.21), so that the equivalence below, where again δ > 0 is fixed, holds true. The following is immediate from Theorem 4.1.
The distribution on the right in (4.31) is rather complicated if q_1, ..., q_M remain arbitrary. But if we are willing to assume (4.20), it follows from (4.27) that

4.32
A simplification is possible if we are willing to modify the hypothesis in (4.30) and use a so-called neighborhood hypothesis. This notion has a rather long history and has been investigated by Hodges and Lehmann [29] for certain parametric models. Dette and Munk [30] have rekindled interest in it with an application in nonparametric regression. In the present context we might replace (4.30) with the neighborhood hypothesis H_{0,ε}: ‖Q^⊥ f_δ‖² ≤ ε², for some ε > 0 (4.33). It is known from the literature that the advantage of using a neighborhood hypothesis is not only that such a hypothesis might be more realistic and that the asymptotics are much simpler, but also that, without extra complication, we may interchange the null hypothesis and the alternative. In the current situation this means that we might as well test the null hypothesis that ‖Q^⊥ f_δ‖² ≥ ε², which could be more suitable, in particular in goodness-of-fit problems. The functional g ↦ ‖Q^⊥ g‖², g ∈ H, has a Fréchet derivative at f_δ given by the functional g ↦ 2⟨g, Q^⊥ f_δ⟩, g ∈ H. Therefore, the delta method in conjunction with Theorem 4.1 entails the following result.
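Under our reading of Theorem 4.1, with √n(f̂_δ − f_δ) →_d H, the result announced here should take roughly the form

\[
\sqrt{n}\,\Bigl(\|Q^{\perp}\widehat f_\delta\|^{2}-\|Q^{\perp}f_\delta\|^{2}\Bigr)
  \xrightarrow{\ d\ } 2\,\bigl\langle H,\;Q^{\perp}f_\delta\bigr\rangle ,
\]

a zero-mean normal limit whose variance, 4 E⟨H, Q^⊥ f_δ⟩², is the quantity referred to in (4.36).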

The limiting distribution on the right in (4.35) is normal with mean zero and a complicated variance, given in (4.36).

4.36
Remark 4.7. As we see from the expressions in (4.24), (4.32), and (4.36), the limiting distributions depend on infinitely many parameters that must be suitably estimated in order to be in a position to use the statistics for actual testing. Estimators for the individual parameters are not too hard to obtain. The eigenvalues σ²_j and eigenvectors p_j of Σ, for instance, can in principle be estimated by the corresponding quantities of Σ̂. Although in any practical situation only a finite number of these parameters can be estimated, theoretically this number must increase with the sample size, and some kind of uniform consistency will be needed for a suitable approximation of the limiting distribution. This interesting question of uniform consistency seems to require quite some technicalities and will not be addressed in this paper.
Remark 4.8. In this paper we have dealt with the situation where Σ is entirely unknown. It has been observed in Johannes [11] that if X is a stationary process on the unit interval, the eigenfunctions p_j of the covariance operator form the same, known system of trigonometric functions, and only its eigenvalues σ²_j are unknown. Knowing the p_j leads to several simplifications. In the first place, Σ can now be estimated by the expression on the right in (3.1) with only the σ²_k replaced with estimators. If Σ̃ is this estimator, it is clear that Σ and Π̃ = Σ̃ − Σ commute, so that the derivative φ̇_{2,Σ} now simplifies considerably (see Remark 3.3). Secondly, we might consider the special case of H_0 in (4.30) where q_j = p_j, j = 1, ..., M. We now have f_δ ∈ M := span{p_1, ..., p_M} ⟺ f ∈ M (4.37), so that even for fixed δ we can test the actual regression function. In the third place, under the null hypothesis in (4.37), the number of unknown parameters in (4.32) reduces considerably, because now Q^⊥ p_j = 0 for j = 1, ..., M. When the p_j are known, in addition to all the changes mentioned above, the limiting distribution of Σ̃ also differs from that of Σ̂. Considering all the modifications that would be needed, it seems better not to include this important special case in this paper.

Asymptotic Optimality of the Classification Rule
In addition to Assumption 2.5 and (2.34), it will be assumed that the smoothness parameter δ = δ_n in (2.39) satisfies

δ_n → 0,   δ_n / n^{−1/4} → ∞,   as n → ∞.     (5.1)
We will also assume that the sizes of the training samples, n_1 and n_2 (see (2.36)), are deterministic and satisfy

n = n_1 + n_2,   0 < lim inf_{n→∞} n_j/n ≤ lim sup_{n→∞} n_j/n < 1,  j = 1, 2.     (5.3)
Since ‖X̄_j − μ_j‖ = O_p(n^{−1/2}) and ‖ϕ_{1,n}(Σ)‖_L = O(δ_n^{−1}), and in view of (3.21), it follows from (5.1) that the limit of the misclassification probability equals

5.5
where Φ is the standard normal cdf. For (5.5) we have used the well-known property of regularized inverses that ‖(δ + Σ)^{−1} Σ f − f‖ → 0, as δ → 0, for all f ∈ H, and the fact that we may choose f = Σ^{−1}(μ_1 − μ_2) by Assumption 2.5. Since rule (2.33) is optimal when the parameters are known, we have obtained the following result.
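For orientation, in the balanced case π_1 = π_2 = 1/2 (an assumption added here only for the sketch), the optimal limiting value in (5.5) is the usual Gaussian discrimination error

\[
\Phi\!\Bigl(-\tfrac{1}{2}\Delta\Bigr),
\qquad
\Delta^{2}=\bigl\langle \mu_1-\mu_2,\ \Sigma^{-1}(\mu_1-\mu_2)\bigr\rangle ,
\]

which is finite precisely because μ_1 − μ_2 lies in the range of Σ; the result announced here then says that the data-based rule (2.39) attains this error in the limit.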