Stochastic analysis of Gaussian processes via Fredholm representation

We show that every separable Gaussian process with integrable variance function admits a Fredholm representation with respect to a Brownian motion. We extend the Fredholm representation to a transfer principle and use it to develop stochastic analysis. We demonstrate the convenience of the Fredholm representation by giving applications to equivalence in law, bridges, series expansions, stochastic differential equations, and maximum likelihood estimation.


Introduction
The stochastic analysis of Gaussian processes that are not semimartingales is challenging. One way to overcome the challenge is to represent the Gaussian process under consideration, $X$ say, in terms of a Brownian motion, and then develop a transfer principle so that the stochastic analysis can be done on the "Brownian level" and then transferred back to the level of $X$.
One of the most studied representations in terms of a Brownian motion is the so-called Volterra representation. A Gaussian Volterra process is a process that can be represented as
$$X_t = \int_0^t K(t,s)\,\mathrm{d}W_s, \quad t\in[0,T], \tag{1}$$
where $W$ is a Brownian motion and $K\in L^2([0,T]^2)$. Here the integration goes only up to $t$, hence the name "Volterra." This Volterra nature is very convenient: it means that the filtration of $X$ is included in the filtration of the underlying Brownian motion $W$. Gaussian Volterra processes and their stochastic analysis have been studied, for example, in [1, 2], just to mention a few. Arguably the most famous Gaussian process admitting a Volterra representation is the fractional Brownian motion, and its stochastic analysis has indeed been developed mostly by using its Volterra representation; see, for example, the monographs [3, 4] and references therein.
In discrete finite time the Volterra representation (1) is nothing but the Cholesky lower-triangular factorization of the covariance of $X$, and hence every Gaussian process is a Volterra process. In continuous time this is not true; see Example 16 in Section 3.
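The discrete-time claim above can be illustrated numerically. The following minimal sketch (not part of the original argument; the Brownian-motion covariance is chosen purely as an example) computes the Cholesky factor of a covariance matrix, which plays the role of the lower-triangular Volterra kernel:

```python
import numpy as np

# Discrete-time analogue of the Volterra representation (1): the Cholesky
# factor of a covariance matrix is a lower-triangular "kernel" K with
# R = K K^T, so X = K Z (Z standard normal) has covariance R.
# Illustration with standard Brownian motion, R(t, s) = min(t, s).
n, T = 100, 1.0
t = np.linspace(T / n, T, n)                 # grid, excluding t = 0
R = np.minimum.outer(t, t)                   # covariance matrix min(t_i, t_j)
K = np.linalg.cholesky(R)                    # lower triangular: Volterra nature

assert np.allclose(K @ K.T, R)               # K reproduces the covariance
assert np.allclose(K, np.tril(K))            # K is lower triangular
```

The lower triangularity of `K` is the discrete counterpart of the integration in (1) going only up to $t$.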
There is a more general representation than (1) by Hida; see [5, Theorem 4.1]. However, this Hida representation may include an infinite number of Brownian motions. Consequently, it seems very difficult to apply the Hida representation to build the transfer principle needed by stochastic analysis. Moreover, the Hida representation is not completely general. Indeed, it requires, among other things, that the Gaussian process is purely nondeterministic. The Fredholm representation (2) does not require pure nondeterminism. Example 16 in Section 3, which admits a Fredholm representation, does not admit a Hida representation, and the reason is precisely the lack of pure nondeterminism.
The problem with the Volterra representation (1) is the Volterra nature of the kernel $K$, as far as generality is concerned. Indeed, if one considers Fredholm kernels, that is, kernels where the integration is over the entire interval $[0,T]$ under consideration, one obtains generality. A Gaussian Fredholm process is a process that admits the Fredholm representation
$$X_t = \int_0^T K_T(t,s)\,\mathrm{d}W_s, \quad t\in[0,T], \tag{2}$$
where $W$ is a Brownian motion and $K_T\in L^2([0,T]^2)$. In this paper we show that every separable Gaussian process with integrable variance function admits representation (2). The price we have to pay for this generality is twofold:

(i) The process $X$ is generated, in principle, from the entire path of the underlying Brownian motion $W$.
Consequently, $X$ and $W$ do not necessarily generate the same filtration. This is unfortunate in many applications.
(ii) In general the kernel $K_T$ depends on $T$ even if the covariance $R$ does not, and consequently the derived operators also depend on $T$. This is why we use the cumbersome notation of explicitly stating the dependence when there is one. In stochastic analysis this dependence on $T$ seems to be a minor inconvenience, however. Indeed, even in the Volterra case as examined, for example, by Alòs et al. [1], one cannot avoid the dependence on $T$ in the transfer principle.
Of course, for statistics, where one would like to let $T$ tend to infinity, this is a major inconvenience.
Let us note that the Fredholm representation has already been used, without proof, in [6], where the Hölder continuity of Gaussian processes was studied.
Let us mention a few papers that study the stochastic analysis of Gaussian processes. Several different approaches have been proposed in the literature. In particular, fractional Brownian motion has been a subject of active study (see the monographs [3, 4] and references therein). More general Gaussian processes have been studied in the already mentioned work by Alòs et al. [1]. They considered Gaussian Volterra processes whose kernel satisfies certain technical conditions. In particular, their results cover fractional Brownian motion with Hurst parameter $H>1/4$. Later Cheridito and Nualart [7] introduced an approach based on the covariance function itself rather than on the Volterra kernel $K$. Kruk et al. [8] developed stochastic calculus for processes having finite 2-planar variation, in particular covering fractional Brownian motion with $H\ge 1/2$. Moreover, Kruk and Russo [9] extended the approach to cover singular covariances, hence covering fractional Brownian motion with $H<1/2$. Furthermore, Mocioalca and Viens [10] studied processes which are close to processes with stationary increments. More precisely, their results cover cases where $\mathrm{E}(X_t-X_s)^2\sim\sigma^2(|t-s|)$, where $\sigma$ satisfies some minimal regularity conditions. In particular, their results cover some processes which are not even continuous. The latest development we are aware of is a paper by Lei and Nualart [11], who developed stochastic calculus for processes having absolutely continuous covariance by using the extended domain of the divergence introduced in [9]. Finally, we would like to mention Lebovits [12], who used the S-transform approach and obtained results similar to ours, although his notion of integral is not as elementary as ours.
The results presented in this paper give a unified approach to stochastic calculus for Gaussian processes; only the integrability of the variance function is required. In particular, our results cover processes that are not continuous.
The paper is organized as follows.
Section 2 contains some preliminaries on Gaussian processes and isonormal Gaussian processes and related Hilbert spaces.
Section 3 provides the proof of the main theorem of the paper: the Fredholm representation.
In Section 4 we extend the Fredholm representation to a transfer principle in three contexts of growing generality: first we prove the transfer principle for Wiener integrals in Section 4.1; then we use the transfer principle to define the multiple Wiener integral in Section 4.2; and, finally, in Section 4.3 we prove the transfer principle for Malliavin calculus, thus showing that the definition of the multiple Wiener integral via the transfer principle in Section 4.2 is consistent with the classical definitions involving Brownian motion or other Gaussian martingales. Indeed, classically one defines multiple Wiener integrals either by building an isometry with removed diagonals or by spanning higher chaoses by using the Hermite polynomials. In the general Gaussian case one cannot, of course, remove the diagonals, but the Hermite polynomial approach is still valid. We show that this approach is equivalent to the transfer principle. In Section 4.3 we also prove an Itô formula for general Gaussian processes, and in Section 4.4 we extend the Itô formula even further by using the technique of the extended domain in the spirit of [7]. This Itô formula is, as far as we know, the most general version for Gaussian processes in the literature so far.
Finally, in Section 5 we show the power of the transfer principle in some applications. In Section 5.1 the transfer principle is applied to the question of equivalence of laws of general Gaussian processes. In Section 5.2 we show how one can construct a canonical-type representation for generalized Gaussian bridges, that is, for a Gaussian process that is conditioned by multiple linear functionals of its path. In Section 5.3 the transfer principle is used to provide series expansions for general Gaussian processes.

Preliminaries
Our general setting is as follows: let $T>0$ be a fixed finite time-horizon and let $X=(X_t)_{t\in[0,T]}$ be a Gaussian process with covariance $R$ that may or may not depend on $T$. Without loss of any interesting generality we assume that $X$ is centered. We also make the very weak assumption that $X$ is separable in the sense of the following definition.
Example 2. If the covariance $R$ is continuous, then $X$ is separable. In particular, all continuous Gaussian processes are separable.
Definition 3. The Hilbert space $\mathcal{H}_T$ is defined by the following properties: (i) the indicators $1_t := 1_{[0,t)}$, $t\le T$, belong to $\mathcal{H}_T$; (ii) $\mathcal{H}_T$ is endowed with the inner product $\langle 1_t, 1_s\rangle_{\mathcal{H}_T} := R(t,s)$. Definition 4 states that $X(h)$ is the image of $h\in\mathcal{H}_T$ under the isometry that extends the relation $X(1_t) := X_t$.

Definition 5 (Wiener integral). $X(h)$ is the Wiener integral of the element $h\in\mathcal{H}_T$ with respect to $X$. One will also denote $X(h)=\int_0^T h(t)\,\mathrm{d}X_t$.

Remark 6. Eventually, all of the following will mean the same: $X(h)$, $\int_0^T h(t)\,\mathrm{d}X_t$, and $\int_0^T h_t\,\delta X_t$.

Remark 7. The Hilbert space $\mathcal{H}_T$ is separable if and only if $X$ is separable.

Remark 11. Note that it may be that $f\in\mathcal{H}_T^0$ but for some $T'<T$ we have $f1_{[0,T']}\notin\mathcal{H}_{T'}^0$; compare [14] for an example with fractional Brownian motion with Hurst index less than half. For this reason we keep the notation $\mathcal{H}_T$ instead of simply writing $\mathcal{H}$. For the same reason we include the dependence on $T$ whenever there is one.

Fredholm Representation
Theorem 12 (Fredholm representation). Let $X=(X_t)_{t\in[0,T]}$ be a separable centered Gaussian process. Then there exist a kernel $K_T\in L^2([0,T]^2)$ and a Brownian motion $W=(W_s)_{s\ge 0}$, independent of $T$, such that
$$X_t = \int_0^T K_T(t,s)\,\mathrm{d}W_s \tag{8}$$
in law if and only if the covariance $R$ of $X$ satisfies the trace condition
$$\int_0^T R(t,t)\,\mathrm{d}t < \infty. \tag{9}$$
Representation (8) is unique in the sense that any other representation with kernel $K_T'$, say, is connected to (8) by a unitary operator $U$ on $L^2([0,T])$ such that $K_T' = K_T U$. Moreover, one may assume that $K_T$ is symmetric.

Proof. Let $(\lambda_i)_{i=1}^\infty$ and $(e_i)_{i=1}^\infty$ be the eigenvalues and the eigenfunctions of the covariance operator
$$R_T f(t) = \int_0^T f(s)\,R(t,s)\,\mathrm{d}s,$$
so that the Mercer expansion
$$R(t,s) = \sum_{i=1}^\infty \lambda_i\, e_i(t)\, e_i(s) \tag{10}$$
holds. Moreover, $(e_i)_{i=1}^\infty$ is an orthonormal system on $L^2([0,T])$. Now $R_T$, being a covariance operator, admits a square root operator $K_T$ defined by the relation
$$\langle K_T e_i, K_T e_j\rangle_{L^2([0,T])} = \langle R_T e_i, e_j\rangle_{L^2([0,T])} \tag{12}$$
for all $e_i$ and $e_j$. Now, condition (9) means that $R_T$ is trace class and, consequently, $K_T$ is a Hilbert-Schmidt operator. In particular, $K_T$ is a compact operator. Therefore, it admits a kernel. Indeed, a kernel $K_T$ can be defined by using the Mercer expansion (10) as
$$K_T(t,s) = \sum_{i=1}^\infty \sqrt{\lambda_i}\, e_i(t)\, e_i(s). \tag{13}$$
This kernel is obviously symmetric. Now, it follows that
$$R(t,s) = \int_0^T K_T(t,u)\,K_T(s,u)\,\mathrm{d}u,$$
and representation (8) follows from this. Finally, let us note that the uniqueness up to a unitary transformation is obvious from the square root relation (12).
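The Mercer square-root construction in the proof has a direct finite-dimensional analogue. The following numerical sketch (an illustration, not part of the proof; the fractional-Brownian-motion covariance with Hurst index $H=0.7$ is an assumed example) builds the symmetric square root of a covariance matrix from its eigendecomposition:

```python
import numpy as np

# Discrete analogue of the Mercer square-root kernel (13): for a covariance
# matrix R, eigendecompose R = E diag(lam) E^T and set K = E diag(sqrt(lam)) E^T.
# K is symmetric and R = K K^T: the discrete Fredholm representation X = K Z.
# Example covariance: fractional Brownian motion with Hurst index H = 0.7.
H, n, T = 0.7, 60, 1.0
t = np.linspace(T / n, T, n)
R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
           - np.abs(t[:, None] - t[None, :]) ** (2 * H))
lam, E = np.linalg.eigh(R)                   # eigenvalues / eigenvectors
lam = np.clip(lam, 0.0, None)                # guard tiny negative round-off
K = E @ np.diag(np.sqrt(lam)) @ E.T          # symmetric square root

assert np.allclose(K, K.T)                   # the kernel is symmetric
assert np.allclose(K @ K.T, R, atol=1e-8)    # reproduces the covariance
```

In contrast to the Cholesky factor, this square root is symmetric but not triangular, mirroring the Fredholm (rather than Volterra) nature of the kernel.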

International Journal of Stochastic Analysis
Remark 13. The Fredholm representation (8) holds also for infinite intervals, that is, $T=\infty$, if the trace condition (9) holds. Unfortunately, this is seldom the case.

Remark 14. The above proof shows that the Fredholm representation (8) holds in law. However, one can also construct the process $X$ via (8) for a given Brownian motion $W$. In this case, representation (8) holds of course in $L^2$. Finally, note that in general it is not possible to construct the Brownian motion in representation (8) from the process $X$. Indeed, there might not be enough randomness in $X$. To construct $W$ from $X$ one needs that the indicators $1_t$, $t\in[0,T]$, belong to the range of the operator $K_T$.
Remark 15. We remark that the separability of $X$ ensures a representation of form (8) in which the kernel $K_T$ only satisfies the weaker condition $K_T(t,\cdot)\in L^2([0,T])$ for all $t\in[0,T]$; this may happen if the trace condition (9) fails. In this case, however, the kernel $K_T$ does not belong to $L^2([0,T]^2)$, which may be undesirable.
Example 17. Consider a truncated series expansion
$$X_t = \sum_{i=1}^n \xi_i \int_0^t \tilde e_i(s)\,\mathrm{d}s,$$
where the $\xi_i$ are independent standard normal random variables and $\tilde e_i$, $i\in\mathbb{N}$, is an orthonormal basis in $L^2([0,T])$. Now it is straightforward to check that this process is not purely nondeterministic (see [15] for the definition) and, consequently, $X$ cannot have a Volterra representation, while it is clear that $X$ admits a Fredholm representation. On the other hand, by choosing the functions $\tilde e_i$ to be the trigonometric basis on $L^2([0,T])$, $X$ is a finite-rank approximation of the Karhunen-Loève representation of standard Brownian motion on $[0,T]$. Hence by letting $n$ tend to infinity we obtain the standard Brownian motion and hence a Volterra process.
Example 18. Let $W$ be a standard Brownian motion on $[0,T]$ and consider the Brownian bridge. Now, there are two representations of the Brownian bridge (see [16] and references therein on the representations of Gaussian bridges). The orthogonal representation is
$$B_t = W_t - \frac{t}{T} W_T.$$
Consequently, $B$ has a Fredholm representation with kernel $K_T(t,s) = 1_t(s) - t/T$. The canonical representation of the Brownian bridge is
$$B_t = (T-t)\int_0^t \frac{\mathrm{d}W_s}{T-s}.$$
Consequently, the Brownian bridge also has a Volterra-type representation with kernel $K(t,s) = (T-t)/(T-s)$.
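Both bridge kernels can be checked numerically against the bridge covariance $R(t,s)=\min(t,s)-ts/T$. The following grid sketch (midpoint quadrature; an illustration only, not part of the example) does this for the Fredholm and the Volterra kernel:

```python
import numpy as np

# Check on a grid that both Brownian-bridge kernels reproduce the bridge
# covariance R(t, s) = min(t, s) - t s / T (midpoint-rule approximation).
T, n = 1.0, 400
ds = T / n
s = (np.arange(n) + 0.5) * ds                # integration midpoints
t = s                                        # evaluate at the same points
R_exact = np.minimum.outer(t, t) - np.outer(t, t) / T

# Orthogonal (Fredholm) kernel: K_T(t, s) = 1_{[0,t)}(s) - t / T
K_fred = (s[None, :] < t[:, None]).astype(float) - t[:, None] / T
R_fred = K_fred @ K_fred.T * ds

# Canonical (Volterra) kernel: K(t, s) = (T - t) / (T - s) for s < t
K_volt = np.where(s[None, :] < t[:, None],
                  (T - t[:, None]) / (T - s[None, :]), 0.0)
R_volt = K_volt @ K_volt.T * ds

assert np.max(np.abs(R_fred - R_exact)) < 0.02
assert np.max(np.abs(R_volt - R_exact)) < 0.02
```

Note how the Volterra kernel is supported on $s<t$ while the Fredholm kernel uses the whole interval, yet both integrate to the same covariance.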

Transfer Principle and Stochastic Analysis
Definition 19 (adjoint associated operator). The adjoint associated operator $\Gamma^*$ of a kernel $\Gamma\in L^2([0,T]^2)$ is defined by linearly extending the relation
$$\Gamma^* 1_t = \Gamma(t,\cdot).$$

Remark 20. The name and notation of "adjoint" for $K_T^*$ comes from Alòs et al. [1], where it was shown that in their Volterra context $K^*$ admits a kernel and is an adjoint of $K$ in the sense that
$$\int_0^T (K^* f)(t)\,g(t)\,\mathrm{d}t = \int_0^T f(t)\,(K g)(t)\,\mathrm{d}t$$
for step functions $f$ and $g$ belonging to $L^2([0,T])$. It is straightforward to check that this statement is valid also in our case.
Example 21. Suppose the kernel $\Gamma(\cdot,s)$ is of bounded variation for all $s$ and that $f$ is nice enough. Then $\Gamma^* f$ can be computed by integration by parts.

Theorem 22 (transfer principle for Wiener integrals). Let $X$ be a separable centered Gaussian process with representation (8) and let $f\in\mathcal{H}_T$. Then
$$\int_0^T f(t)\,\mathrm{d}X_t = \int_0^T (K_T^* f)(t)\,\mathrm{d}W_t.$$

Proof. Assume first that $f$ is an elementary function of the form
$$f(t) = \sum_k a_k 1_{A_k}(t)$$
for some disjoint intervals $A_k=(t_{k-1},t_k]$. Then the claim follows from the very definition of the operator $K_T^*$ and of the Wiener integral with respect to $X$, together with representation (8). Furthermore, this shows that $K_T^*$ provides an isometry between $\mathcal{H}_T$ and $L^2([0,T])$. Hence $\mathcal{H}_T$ can be viewed as the closure of the elementary functions with respect to $\|f\|_{\mathcal{H}_T} = \|K_T^* f\|_{L^2([0,T])}$, which proves the claim.
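The transfer principle has a transparent matrix analogue: in discrete time the adjoint associated operator is simply the transpose of the kernel matrix. The following sketch (a finite-dimensional illustration under an arbitrary randomly generated covariance, not part of the proof) checks both the pathwise transfer and the isometry:

```python
import numpy as np

# Discrete sketch of the transfer principle: with X = K Z (so R = K K^T),
# the Wiener-integral sum c . X equals the "Brownian" integral (K^T c) . Z,
# and the isometry reads  E[(c . X)^2] = c^T R c = ||K^T c||^2.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
R = A @ A.T                                  # a generic covariance matrix
lam, E = np.linalg.eigh(R)
K = E @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ E.T   # symmetric square root

c = rng.standard_normal(n)                   # coefficients of an elementary f
Z = rng.standard_normal(n)                   # underlying "Brownian" noise
X = K @ Z                                    # the discrete Fredholm process

assert np.isclose(c @ X, (K.T @ c) @ Z)      # pathwise transfer
assert np.isclose(c @ R @ c, (K.T @ c) @ (K.T @ c))     # isometry
```

Here `K.T` plays the role of $K_T^*$, mapping the coefficient vector of $f$ to the coefficient vector of $K_T^* f$.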

Multiple Wiener Integrals.
The study of multiple Wiener integrals goes back to Itô [17], who studied the case of Brownian motion. Later Huang and Cambanis [18] extended the notion to general Gaussian processes. Dasgupta and Kallianpur [19, 20] and Perez-Abreu and Tudor [21] studied multiple Wiener integrals in the context of fractional Brownian motion. In [19, 20] a method involving a prior control measure was used, and in [21] a transfer principle was used.
Our approach here extends the transfer principle method used in [21]. We begin by recalling multiple Wiener integrals with respect to Brownian motion, and then we apply the transfer principle to generalize the theory to an arbitrary Gaussian process.
Let $f$ be an elementary function on $[0,T]^p$ that vanishes on the diagonals; that is,
$$f = \sum_{i_1,\dots,i_p} a_{i_1\cdots i_p}\, 1_{\Delta_{i_1}\times\cdots\times\Delta_{i_p}},$$
where $\Delta_i := [t_{i-1},t_i)$ and $a_{i_1\cdots i_p}=0$ whenever $i_k=i_\ell$ for some $k\neq\ell$. For such $f$ we define the multiple Wiener integral as
$$I_p^W(f) := \sum_{i_1,\dots,i_p} a_{i_1\cdots i_p}\,\Delta W_{i_1}\cdots\Delta W_{i_p},$$
where we have denoted $\Delta W_i := W_{t_i}-W_{t_{i-1}}$. For $p=0$ we set $I_0^W(a)=a$. Now, it can be shown that elementary functions that vanish on the diagonals are dense in $L^2([0,T]^p)$. Thus, one can extend the operator $I_p^W$ to the space $L^2([0,T]^p)$. This extension is called the multiple Wiener integral with respect to the Brownian motion.
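For tensor products of step functions with disjoint supports, the defining sum can be evaluated directly. The following sketch (an illustration with arbitrary chosen supports, not from the paper) checks that for $f=a\otimes b$ with disjoint supports the diagonal-free sum factors into a product of first-order integrals:

```python
import numpy as np

# Sketch: for a tensor f = a ⊗ b of step functions with disjoint supports,
# the diagonal-free sum defining I_2^W(f) equals the product W(a) W(b)
# pathwise, since no diagonal terms (i = j) can contribute.
rng = np.random.default_rng(1)
n = 10
dW = rng.standard_normal(n)                  # Brownian increments ΔW_i
a = np.array([1., 2., 3.] + [0.] * 7)        # supported on Δ_1..Δ_3
b = np.array([0.] * 7 + [1., -1., 2.])       # supported on Δ_8..Δ_10

coef = np.outer(a, b)                        # coefficients of f = a ⊗ b
np.fill_diagonal(coef, 0.0)                  # vanishes on the diagonal anyway

I2 = np.einsum('ij,i,j->', coef, dW, dW)     # Σ a_i b_j ΔW_i ΔW_j, i ≠ j
assert np.isclose(I2, (a @ dW) * (b @ dW))   # equals W(a) · W(b)
```

With overlapping supports the diagonal correction would not vanish, which is exactly the obstruction addressed by the Hermite-polynomial approach below.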
Remark 23. It is well-known that $I_p^W(f)$ can be understood as a multiple or iterated Itô integral if and only if $f(t_1,\dots,t_p)=0$ unless $t_1\le\cdots\le t_p$. In this case we have
$$I_p^W(f) = \int_0^T\int_0^{t_p}\cdots\int_0^{t_2} f(t_1,\dots,t_p)\,\mathrm{d}W_{t_1}\cdots\mathrm{d}W_{t_p}.$$
For Gaussian processes that are not martingales this fact is totally useless.
For a general Gaussian process $X$, recall first the Hermite polynomials:
$$H_p(x) := \frac{(-1)^p}{p!}\, e^{x^2/2}\,\frac{\mathrm{d}^p}{\mathrm{d}x^p}\, e^{-x^2/2}.$$
For any $p\ge 1$, let the $p$th Wiener chaos of $X$ be the closed linear subspace of $L^2(\Omega)$ generated by the random variables $\{H_p(X(h)) : h\in\mathcal{H}_T,\ \|h\|_{\mathcal{H}_T}=1\}$, where $H_p$ is the $p$th Hermite polynomial. It is well-known that the mapping $I_p^X(h^{\otimes p}) = p!\,H_p(X(h))$ provides a linear isometry between the symmetric tensor product $\mathcal{H}_T^{\odot p}$ and the $p$th Wiener chaos. The random variables $I_p^X(h^{\otimes p})$ are called multiple Wiener integrals of order $p$ with respect to the Gaussian process $X$.

Let us now consider the multiple Wiener integrals $I_{p,T}$ for a general Gaussian process $X$. We define the multiple integral $I_{p,T}$ by using the transfer principle in Definition 25 and later argue that this is the "correct" way of defining them. So, let $X$ be a centered Gaussian process on $[0,T]$ with covariance $R$ and representation (8) with kernel $K_T$.
Definition 24 ($p$-fold adjoint associated operator). Let $K_T$ be the kernel in (8) and let $K_T^*$ be its adjoint associated operator. Define
$$K_{T,p}^* := (K_T^*)^{\otimes p}.$$
In the same way, define $\mathcal{H}_T^{\otimes p} := \mathcal{H}_T\otimes\cdots\otimes\mathcal{H}_T$. Here the tensor products are understood in the sense of Hilbert spaces; that is, they are closed under the inner product corresponding to the $p$-fold product of the underlying inner product.
Definition 25. Let $X$ be a centered Gaussian process with representation (8) and let $f\in\mathcal{H}_T^{\otimes p}$. Then
$$I_{p,T}(f) := I_p^W\big(K_{T,p}^* f\big).$$
The following example should convince the reader that this is indeed the correct definition.
Example 26. Let $p=2$ and let $h=h_1\otimes h_2$, where both $h_1$ and $h_2$ are step functions. Then
$$I_{2,T}(h) = I_2^W\big((K_T^* h_1)\otimes(K_T^* h_2)\big) = X(h_1)X(h_2) - \langle h_1, h_2\rangle_{\mathcal{H}_T}.$$

The following proposition shows that our approach to defining multiple Wiener integrals is consistent with the traditional approach, in which multiple Wiener integrals for a more general Gaussian process $X$ are defined via the closed linear space generated by Hermite polynomials.

Proposition 27. Let $H_p$ be the $p$th Hermite polynomial and let $h\in\mathcal{H}_T$. Then
$$I_{p,T}(h^{\otimes p}) = p!\,\|h\|_{\mathcal{H}_T}^p\, H_p\!\left(\frac{X(h)}{\|h\|_{\mathcal{H}_T}}\right).$$

Proof. First note that without loss of generality we can assume $\|h\|_{\mathcal{H}_T}=1$. Now, by the definition of the multiple Wiener integral with respect to $X$, we have
$$I_{p,T}(h^{\otimes p}) = I_p^W\big(K_{T,p}^* h^{\otimes p}\big) = I_p^W\big((K_T^* h)^{\otimes p}\big),$$
where $\|K_T^* h\|_{L^2([0,T])}=1$. Consequently, by [22, Proposition 1.1.4] we obtain
$$I_p^W\big((K_T^* h)^{\otimes p}\big) = p!\,H_p\big(W(K_T^* h)\big),$$
which together with Theorem 22 implies the result.
Proposition 27 extends to the following product formula, which is well-known in the Gaussian martingale case but apparently new for general Gaussian processes. Again, the proof is a straightforward application of the transfer principle.

Proposition 28. Let $f\in\mathcal{H}_T^{\otimes p}$ and $g\in\mathcal{H}_T^{\otimes q}$. Then
$$I_{p,T}(f)\,I_{q,T}(g) = \sum_{r=0}^{p\wedge q} r!\binom{p}{r}\binom{q}{r}\, I_{p+q-2r,T}\big(f\,\tilde\otimes_r\, g\big),$$
where $\tilde\otimes_r$ denotes the symmetrized contraction over $r$ variables. Hence the claim follows by the transfer principle from the corresponding Brownian formula.

Remark 30. In the literature, multiple Wiener integrals are usually defined as the closed linear space spanned by Hermite polynomials. In such a case Proposition 27 is clearly true by the very definition. Furthermore, one has a multiplication formula of the same contraction type (see, e.g., [23]), where $\otimes_{\mathcal{H}_T,r}$ denotes the symmetrized contracted tensor product and $\{e_k,\ k=1,2,\dots\}$ is a complete orthonormal basis of the Hilbert space $\mathcal{H}_T$. Clearly, by Proposition 27, both formulas coincide. This also shows that (39) is well-defined.

Malliavin Calculus.
Definition 31. Denote by $\mathcal{S}$ the set of smooth random variables of the form
$$F = f\big(X(h_1),\dots,X(h_n)\big),$$
where $f\in C_b^\infty(\mathbb{R}^n)$; that is, $f$ and all its derivatives are bounded. The Malliavin derivative $D_T F = D_{X,T} F$ of $F$ is an element of $L^2(\Omega;\mathcal{H}_T)$ defined by
$$D_T F = \sum_{i=1}^n \partial_i f\big(X(h_1),\dots,X(h_n)\big)\,h_i.$$
In particular, $D_T X_t = 1_t$.
Definition 32. Let $\mathbb{D}^{1,2}=\mathbb{D}_{X,T}^{1,2}$ be the Hilbert space of all square integrable Malliavin differentiable random variables, defined as the closure of $\mathcal{S}$ with respect to the norm
$$\|F\|_{1,2}^2 := \mathrm{E}|F|^2 + \mathrm{E}\|D_T F\|_{\mathcal{H}_T}^2.$$
The divergence operator $\delta_T$ is defined as the adjoint operator of the Malliavin derivative $D_T$.
Definition 33. The domain $\mathrm{Dom}\,\delta_T$ of the operator $\delta_T$ is the set of random variables $u\in L^2(\Omega;\mathcal{H}_T)$ satisfying
$$\big|\mathrm{E}\langle D_T F, u\rangle_{\mathcal{H}_T}\big| \le c_u\,\|F\|_{L^2}$$
for any $F\in\mathbb{D}^{1,2}$ and some constant $c_u$ depending only on $u$.
For $u\in\mathrm{Dom}\,\delta_T$ the divergence $\delta_T(u)$ is the square integrable random variable defined by the duality relation
$$\mathrm{E}\big(F\,\delta_T(u)\big) = \mathrm{E}\langle D_T F, u\rangle_{\mathcal{H}_T}$$
for all $F\in\mathbb{D}^{1,2}$.
We use the notation
$$\delta_T(u) = \int_0^T u_s\,\delta X_s. \tag{50}$$

Theorem 35 (transfer principle for Malliavin calculus). Let $X$ be a separable centered Gaussian process with Fredholm representation (8). Let $D_T$ and $\delta_T$ be the Malliavin derivative and the Skorohod integral with respect to $X$ on $[0,T]$. Similarly, let $D_T^W$ and $\delta_T^W$ be the Malliavin derivative and the Skorohod integral with respect to the Brownian motion $W$ of (8) restricted to $[0,T]$. Then
$$\delta_T = \delta_T^W K_T^*, \qquad K_T^* D_T = D_T^W.$$
Furthermore, we have the relation
$$\mathrm{E}\langle u, D_T F\rangle_{\mathcal{H}_T} = \mathrm{E}\langle K_T^* u, D_T^W F\rangle_{L^2([0,T])} \tag{53}$$
for any smooth random variable $F$ and $u\in L^2(\Omega;\mathcal{H}_T)$.

Proof. The proof follows directly from the transfer principle and the isometry provided by $K_T^*$, with the same arguments as in [1]. Indeed, by isometry we have
$$\mathcal{H}_T = (K_T^*)^{-1}\big(L^2([0,T])\big),$$
where $(K_T^*)^{-1}$ denotes the preimage, which implies the result.

Now we are ready to show that the definition of the multiple Wiener integral $I_{p,T}$ in Section 4.2 is correct in the sense that it agrees with the iterated Skorohod integral.

Proposition 37. Let $h\in\mathcal{H}_T^{\otimes p}$ be of the form $h(t_1,\dots,t_p)=\prod_{k=1}^p h_k(t_k)$. Then $h$ is iteratively $p$ times Skorohod integrable and
$$\int_0^T\!\!\cdots\!\int_0^T h(t_1,\dots,t_p)\,\delta X_{t_1}\cdots\delta X_{t_p} = I_{p,T}(h). \tag{55}$$
Moreover, if $h\in\mathcal{H}_{0,T}^{\otimes p}$ is such that it is $p$ times iteratively Skorohod integrable, then (55) still holds.
Proof. Again, the idea is to use the transfer principle together with induction. Note first that the statement is true for $p=1$ by definition, and assume next that the statement is valid for $k=1,\dots,p$. We denote $h_p(t_1,\dots,t_p) := \prod_{k=1}^p h_k(t_k)$. Hence, by the induction assumption, we have
$$\int_0^T\!\!\cdots\!\int_0^T h_p\,\delta X_{t_1}\cdots\delta X_{t_p} = I_{p,T}(h_p).$$
Put now $F = I_{p,T}(h_p)$ and $u(t)=h_{p+1}(t)$. Hence, by [22, Proposition 1.3.3] and by applying the transfer principle, we obtain that $Fu$ belongs to $\mathrm{Dom}\,\delta_T$ and
$$\delta_T(Fu) = F\,\delta_T(u) - \langle D_T F, u\rangle_{\mathcal{H}_T}.$$
Hence the result is valid also for $p+1$ by Proposition 28 with $q=1$.
The claim for general $h\in\mathcal{H}_{0,T}^{\otimes p}$ follows by approximating with products of simple functions.
We end this section by providing an extension of the Itô formulas of Alòs et al. [1]. They considered Volterra processes; that is, they assumed the representation
$$X_t = \int_0^t K(t,s)\,\mathrm{d}W_s,$$
where the kernel $K$ satisfies certain technical assumptions. In [1] it was proved that in the case of Volterra processes one has
$$f(X_t) = f(X_0) + \int_0^t f'(X_s)\,\delta X_s + \frac12\int_0^t f''(X_s)\,\mathrm{d}R(s,s) \tag{60}$$
for $f$ satisfying the growth condition
$$\max\big(|f(x)|,|f'(x)|,|f''(x)|\big) \le c\,e^{\lambda x^2} \tag{61}$$
for some $c>0$ and $\lambda < \frac14\big(\sup_{0\le s\le T}\mathrm{E}X_s^2\big)^{-1}$. In the following we consider a different approach which enables us to (i) prove that such a formula holds with minimal requirements, (ii) give a more instructive proof of such a result, (iii) extend the result from the Volterra context to more general Gaussian processes, and (iv) drop some technical assumptions posed in [1].
For simplicity, we assume that the variance of $X$ is of bounded variation, to guarantee the existence of the integral
$$\int_0^t f''(X_s)\,\mathrm{d}R(s,s). \tag{62}$$
If the variance is not of bounded variation, then integral (62) may be understood via integration by parts if $f''$ is smooth enough or, in the general case, via the inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}_T}$.
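The correction term (62) integrates against the variance function rather than against $\mathrm{d}s$. The following elementary sketch (a sanity check for standard Brownian motion with $f(x)=x^4$, not part of the paper's argument) verifies the expectation form of the Itô formula, where the Skorohod integral vanishes in expectation:

```python
import numpy as np

# Sanity check of the correction term ∫ f''(X_s) dR(s, s) in the Itô formula
# for standard Brownian motion, where R(s, s) = s. Taking expectations in (60)
# kills the Skorohod integral and leaves
#   E f(W_t) = f(0) + (1/2) ∫_0^t E f''(W_s) dR(s, s).
# Example f(x) = x^4:  E W_t^4 = 3 t^2,  E f''(W_s) = 12 E W_s^2 = 12 s.
t, n = 1.0, 2000
ds = t / n
s = (np.arange(n) + 0.5) * ds                # midpoint rule, exact for linear
lhs = 3.0 * t ** 2                           # E f(W_t) = E W_t^4
rhs = 0.5 * np.sum(12.0 * s) * ds            # (1/2) ∫_0^t 12 s ds
assert abs(lhs - rhs) < 1e-9
```

For a process with nonlinear variance function the same check would simply integrate $\mathrm{E}f''(X_s)$ against $\mathrm{d}R(s,s)$ instead of $\mathrm{d}s$.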
In Theorem 40 we also have to assume that the variance of $X$ is bounded.
The result for polynomials is straightforward, once we assume that the paths of polynomials of $X$ belong to $L^2(\Omega;\mathcal{H}_T)$.

Proposition 38 (Itô formula for polynomials). Let $X$ be a separable centered Gaussian process with covariance $R$, and assume that $p$ is a polynomial. Furthermore, assume that for each polynomial $p$ one has $p(X_\cdot)1_t\in L^2(\Omega;\mathcal{H}_T)$. Then for each $t\in[0,T]$ one has
$$p(X_t) = p(X_0) + \int_0^t p'(X_s)\,\delta X_s + \frac12\int_0^t p''(X_s)\,\mathrm{d}R(s,s);$$
in particular, $p'(X_\cdot)1_t$ belongs to $\mathrm{Dom}\,\delta_T$.

Remark 39. The message of the above result is that once the processes $p(X_\cdot)1_t$ belong to $L^2(\Omega;\mathcal{H}_T)$, then they automatically belong to the domain of $\delta_T$, which is a subspace of $L^2(\Omega;\mathcal{H}_T)$. However, in order to check $p(X_\cdot)1_t\in L^2(\Omega;\mathcal{H}_T)$ one needs more information on the kernel $K_T$. A sufficient condition is provided in Corollary 43, which covers many cases of interest.

Proof. By definition and by applying the transfer principle, we have to prove that $p'(X_\cdot)1_t$ belongs to the domain of $\delta_T$ and that
$$\mathrm{E}\bigg(F\Big(p(X_t)-p(X_0)-\frac12\int_0^t p''(X_s)\,\mathrm{d}R(s,s)\Big)\bigg) = \mathrm{E}\,\big\langle D_T F,\ p'(X_\cdot)1_t\big\rangle_{\mathcal{H}_T} \tag{69}$$
for every random variable $F$ from a total subset of $L^2(\Omega)$. In other words, it is sufficient to show that (69) is valid for random variables of the form $F=I_n^W(h^{\otimes n})$, where $h$ is a step function.
Note first that it suffices to prove the claim only for the Hermite polynomials $H_p$, $p=1,2,\dots$. Indeed, it is well-known that any polynomial can be expressed as a linear combination of Hermite polynomials and, consequently, the result for an arbitrary polynomial $p$ follows by linearity.
We proceed by induction. First, it is clear that the first two polynomials $H_0$ and $H_1$ satisfy (64). Furthermore, by assumption, $H_2'(X_\cdot)1_t$ belongs to $\mathrm{Dom}\,\delta_T$, from which (64) is easily deduced by [22, Proposition 1.3.3]. Assume next that the result is valid for the Hermite polynomials $H_p$, $p=0,1,\dots,n$. Then recall the well-known recursion formulas
$$(n+1)H_{n+1}(x) = x\,H_n(x) - H_{n-1}(x), \qquad H_n'(x) = H_{n-1}(x).$$
The induction step follows by straightforward calculations using the recursion formulas above and [22, Proposition 1.3.3]. We leave the details to the reader.
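The recursion used in the induction step can be checked numerically. The sketch below (an illustration only) uses NumPy's probabilists' Hermite polynomials $\mathrm{He}_n$; with the normalization $H_n=\mathrm{He}_n/n!$ used in this paper, the three-term recursion takes the form stated above:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e

# Check of the Hermite recursion (n+1) H_{n+1}(x) = x H_n(x) - H_{n-1}(x).
# NumPy's hermite_e module implements the unnormalized probabilists'
# polynomials He_n; the normalized polynomials here are H_n = He_n / n!.
x = np.linspace(-3.0, 3.0, 101)

def H(n):
    coef = np.zeros(n + 1)
    coef[n] = 1.0                            # select the basis polynomial He_n
    return hermite_e.hermeval(x, coef) / math.factorial(n)

for n in range(1, 8):
    assert np.allclose((n + 1) * H(n + 1), x * H(n) - H(n - 1))
```

The normalization $H_n=\mathrm{He}_n/n!$ is exactly what makes the isometry $I_p^X(h^{\otimes p})=p!\,H_p(X(h))$ hold with the factor $p!$.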
We will now illustrate how the result can be generalized to functions satisfying the growth condition (61) by using Proposition 38. First note that the growth condition (61) is indeed natural, since it guarantees that the left-hand side of (60) is square integrable. Consequently, since the operator $\delta_T$ maps $L^2(\Omega;\mathcal{H}_T)$ into $L^2(\Omega)$, functions satisfying (61) form the largest class of functions for which (60) can hold. However, it is not clear in general whether $f'(X_\cdot)1_t$ belongs to $\mathrm{Dom}\,\delta_T$. Indeed, for example, in [1] the authors posed additional conditions on the Volterra kernel $K$ to guarantee this. As our main result we show that $\mathrm{E}\|f'(X_\cdot)1_t\|_{\mathcal{H}_T}^2<\infty$ implies that (60) holds. In other words, not only is the Itô formula (60) natural, but it is also the only possibility.
Theorem 40 (Itô formula for Skorohod integrals). Let $X$ be a separable centered Gaussian process with covariance $R$ such that all the polynomials satisfy $p(X_\cdot)1_t\in L^2(\Omega;\mathcal{H}_T)$. Assume that $f\in C^2$ satisfies the growth condition (61) and that the variance of $X$ is bounded and of bounded variation. If
$$f'(X_\cdot)1_t \in L^2(\Omega;\mathcal{H}_T) \tag{66}$$
for any $t\in[0,T]$, then
$$f(X_t) = f(X_0) + \int_0^t f'(X_s)\,\delta X_s + \frac12\int_0^t f''(X_s)\,\mathrm{d}R(s,s).$$

Proof. In this proof we assume, for notational simplicity and with no loss of generality, that $\sup_{0\le s\le T} R(s,s)=1$.
First, it is clear that (66) implies that $f'(X_\cdot)1_t$ belongs to the domain of $\delta_T$. Hence we only have to prove (69) for every random variable $F=I_n^W(h^{\otimes n})$. Now, it is well-known that the Hermite polynomials, when properly scaled, form an orthogonal system in $L^2(\mathbb{R})$ equipped with the Gaussian measure. Hence each $f$ satisfying the growth condition (61) has a series representation
$$f(x) = \sum_{k=0}^\infty \alpha_k H_k(x).$$
Indeed, the growth condition (61) implies that the coefficients $\alpha_k$ are square summable with the appropriate Gaussian weights. Furthermore, we have
$$f(X_t) = \sum_{k=0}^\infty \alpha_k H_k(X_t),$$
where the series converges almost surely and in $L^2(\Omega)$, and a similar conclusion is valid for the derivatives $f'(X_t)$ and $f''(X_t)$.
Then, by applying (66) we obtain that for any $\varepsilon>0$ there exists $N=N_\varepsilon$ such that the tails of the expansions of $f$, $f'$, and $f''$ beyond order $N$ contribute less than $\varepsilon$ to each term of (69). Consequently, for random variables of the form $F=I_n^W(h^{\otimes n})$ we obtain, by choosing $N$ large enough and applying Proposition 38 to the truncated polynomial, that (69) holds up to an error of size $\varepsilon$. Now the left-hand side does not depend on $\varepsilon$, which concludes the proof.
Remark 41. Note that it is actually sufficient that the truncated approximations of $f'(X_\cdot)1_t$ converge in $L^2(\Omega;\mathcal{H}_T)$, from which the result follows by Proposition 38. Furthermore, taking into account the growth condition (61), this is actually a sufficient and necessary condition for formula (60) to hold. Consequently, our method can also be used to obtain Itô formulas by considering the extended domain of $\delta_T$ (see [9] or [11]). This is the topic of Section 4.4.
Example 42. It is known that if $X=B^H$ is a fractional Brownian motion with $H>1/4$, then $f'(X_\cdot)1_t$ satisfies condition (66), while for $H\le 1/4$ it does not (see [22, Chapter 5]). Consequently, a simple application of Theorem 40 covers fractional Brownian motion with $H>1/4$. For the case $H\le 1/4$ one has to consider the extended domain of $\delta_T$, which is done in [9]. Consequently, in this case (75) holds for any $F\in\mathcal{S}$.
We end this section by illustrating the power of our method with the following simple corollary, which is an extension of [1, Theorem 1].
Corollary 43. Let $X$ be a separable centered continuous Gaussian process with bounded covariance $R$, such that the Fredholm kernel $K_T$ is of bounded variation and satisfies
$$\int_0^T \bigg(\int_0^T |K_T|(\mathrm{d}t,s)\bigg)^2\,\mathrm{d}s < \infty.$$
Then, for any $t\in[0,T]$, the Itô formula (60) holds.

Proof. Note that the assumption is a Fredholm version of condition (K2) in [1], which implies condition (66). Hence the result follows by Theorem 40.

Extended Divergence Operator.
As shown in Section 4.3, the Itô formula (60) is the only possibility. However, the problem is that the space $L^2(\Omega;\mathcal{H}_T)$ may be too small to contain the elements $f'(X_\cdot)1_t$. In particular, it may happen that not even the process $X$ itself belongs to $L^2(\Omega;\mathcal{H}_T)$ (see, e.g., [7] for the case of fractional Brownian motion with $H\le 1/4$). This problem can be overcome by considering an extended domain of $\delta_T$. The idea of the extended domain is to extend the inner product $\langle u,v\rangle_{\mathcal{H}_T}$ from simple $v$ to more general processes $u$, and then define the extended domain by (47) with a restricted class of test variables $F$. This also gives another intuitive reason why the extended domain of $\delta_T$ can be useful; indeed, here we have proved that the Itô formula (60) is the only possibility, and what one essentially needs for such a result is the following: (i) the process $f'(X_\cdot)1_t$ belongs to the (extended) domain of $\delta_T$; (ii) equation (75) is valid for functions satisfying (61).
Consequently, one should look for extensions of the operator $\delta_T$ such that these two properties are satisfied.
To facilitate the extension of the domain, we make the following relatively moderate assumption: the function $t\mapsto R(t,s)$ is of bounded variation on $[0,T]$ and $\sup_{s\in[0,T]}\int_0^T |R|(\mathrm{d}t,s)<\infty$.

Remark 44. Note that we are making the assumption on the covariance $R$, not on the kernel $K_T$. Hence our case is different from that of [1]. Also, [11] assumed absolute continuity of $R$; we are satisfied with bounded variation.
We will follow the idea of Lei and Nualart [11] and extend the inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}_T}$ beyond $\mathcal{H}_T$.
Consider a step function $f$. Then, on the one hand, by the isometry property we have
$$\langle f, 1_t\rangle_{\mathcal{H}_T} = \int_0^T (K_T^* f)(s)\,(K_T^* 1_t)(s)\,\mathrm{d}s,$$
where $K_T^* 1_t = K_T(t,\cdot)\in L^2([0,T])$. On the other hand, by using the adjoint property (see Remark 20) and computing formally, we obtain that, for step functions $f$ and $g$,
$$\langle f, g\rangle_{\mathcal{H}_T} = \int_0^T\int_0^T f(t)\,g(s)\,R(\mathrm{d}t,\mathrm{d}s).$$
This gives motivation to the following definition, similar to that of [11, Definition 2.1]. We define the extended domain $\mathrm{Dom}_E\,\delta_T$ similarly as in [11].
Definition 46. A process $u$ belongs to the extended domain $\mathrm{Dom}_E\,\delta_T$ if the duality relation (92) holds for all test random variables of the restricted class.

Proof. Taking into account that we have no problems concerning the processes belonging to the required spaces, the formula follows by approximating with polynomials and following the proof of Theorem 40.

Applications
We illustrate how some results transfer easily from the Brownian case to the Gaussian Fredholm processes.

Equivalence in Law.
The transfer principle has already been used in connection with the equivalence of laws of Gaussian processes in, for example, [24] in the context of fractional Brownian motions and in [2] in the context of Gaussian Volterra processes satisfying certain nondegeneracy conditions. The following proposition uses the Fredholm representation (8) to give a sufficient condition for the equivalence of general Gaussian processes in terms of their Fredholm kernels.
Then $\tilde{X}$ is equivalent in law to $X$ if it admits, in law, the representation (96), where $\tilde{W}$ is connected to the Brownian motion $W$ of (95) by (94).
In order to show (96), see (98). Thus we have shown representation (96) and, consequently, the equivalence of $\tilde{X}$ and $X$.

Generalized Bridges.
We consider the conditioning, or bridging, of $X$ on $N$ linear functionals $G_T=[G_T^i]_{i=1}^N$ of its paths:
$$G_T(X) = \int_0^T \mathbf{g}(t)\,\mathrm{d}X_t.$$
We assume, without any loss of generality, that the functions $g_i$ are linearly independent. Also, without loss of generality, we assume that $X_0=0$ and that the conditioning is on the set $\{\int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \mathbf{0}\}$ instead of the apparently more general conditioning on the set $\{\int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \mathbf{y}\}$. Indeed, see [16] for how to obtain the more general conditioning from this one.
The rigorous definition of a bridge is as follows.
Definition 52. The generalized bridge measure $\mathbb{P}^{\mathbf{g}}$ is the regular conditional law
$$\mathbb{P}^{\mathbf{g}} = \mathbb{P}\bigg[X\in\cdot\ \bigg|\ \int_0^T \mathbf{g}(t)\,\mathrm{d}X_t = \mathbf{0}\bigg].$$
A representation of the generalized Gaussian bridge is any process $X^{\mathbf{g}}$ satisfying
$$\mathbb{P}\big[X^{\mathbf{g}}\in\cdot\,\big] = \mathbb{P}^{\mathbf{g}}.$$
We refer to [16] for more details on generalized Gaussian bridges.
There are many different representations for bridges. A very general representation is the so-called orthogonal representation, which projects $X$ orthogonally on the conditioning functionals; by the transfer principle, the Gram matrix of the conditioning functionals can be computed as
$$\langle\!\langle \mathbf{g},\mathbf{g}\rangle\!\rangle_{ij} = \int_0^T (K_T^* g_i)(s)\,(K_T^* g_j)(s)\,\mathrm{d}s.$$
A more interesting representation is the so-called canonical representation, where the filtration of the bridge and of the original process coincide. In [16] such representations were constructed for so-called prediction-invertible Gaussian processes. In this subsection we show how the transfer principle can be used to construct a canonical-type bridge representation for all Gaussian Fredholm processes. We start with an example that should make it clear how one uses the transfer principle.
Example 53. We construct a canonical-type representation for $X^1$, the bridge of $X$ conditioned on $X_T=0$. Assume $X_0=0$. Now, by the Fredholm representation of $X$ we can write the conditioning as
$$X_T = \int_0^T K_T(T,s)\,\mathrm{d}W_s = 0.$$
Let us then denote $g^* := K_T^* g$, so that the conditioning becomes a Brownian bridge type conditioning on $W$ with respect to the function $g^*$.

Series Expansions.
The Mercer square root (13) can be used to build the Karhunen-Loève expansion for the Gaussian process $X$. But the Mercer form (13) is seldom known. However, if one can find some kernel $K_T$ such that representation (8) holds, then one can construct a series expansion for $X$ by using the transfer principle of Theorem 22 as follows.
Proposition 55 (series expansion). Let $X$ be a separable Gaussian process with representation (8) and let $(\phi_k)_{k=1}^\infty$ be any orthonormal basis on $L^2([0,T])$. Then $X$ admits the series expansion
$$X_t = \sum_{k=1}^\infty \xi_k \int_0^T \phi_k(s)\,K_T(t,s)\,\mathrm{d}s, \quad t\in[0,T], \tag{109}$$
where $(\xi_k)_{k=1}^\infty$ is a sequence of independent standard normal random variables. The series (109) converges in $L^2(\Omega)$, and also almost surely uniformly if and only if $X$ is continuous.
The proof below uses the reproducing kernel Hilbert space technique. For more details, we refer to [25], where a series expansion is constructed for fractional Brownian motion by using the transfer principle.
Proof. The Fredholm representation (8) implies immediately that the reproducing kernel Hilbert space of $X$ is the image $K_T L^2([0,T])$ and that $K_T$ is actually an isometry from $L^2([0,T])$ onto the reproducing kernel Hilbert space of $X$. The $L^2$-expansion (109) follows from this due to [26, Theorem 3.7], and the equivalence of almost sure convergence of (109) and continuity of $X$ follows from [26, Theorem 3.8].
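The expansion (109) can be checked numerically for a known kernel. The following sketch (an illustration with the Brownian-motion kernel $K_T(t,s)=1_{[0,t)}(s)$ and a cosine basis, not part of the proof) verifies that the truncated sum of squared coefficients approaches the covariance $\min(t,s)$:

```python
import numpy as np

# Sketch of Proposition 55 for standard Brownian motion: K_T(t, s) = 1_{[0,t)}(s),
# so the expansion coefficients are f_k(t) = ∫_0^t φ_k(s) ds and the truncated
# covariance  Σ_k f_k(t) f_k(s)  should approach  R(t, s) = min(t, s).
T, n_terms = 1.0, 2000
t = np.linspace(0.05, 0.95, 19)

# orthonormal cosine basis on L^2([0, T]): φ_0 = 1/√T, φ_k = √(2/T) cos(kπs/T),
# with antiderivatives in closed form
F = [t / np.sqrt(T)]                                  # ∫_0^t φ_0(s) ds
for k in range(1, n_terms):
    F.append(np.sqrt(2 * T) / (k * np.pi) * np.sin(k * np.pi * t / T))
F = np.array(F)                                       # shape (n_terms, len(t))

R_approx = F.T @ F                                    # Σ_k f_k(t) f_k(s)
R_exact = np.minimum.outer(t, t)
assert np.max(np.abs(R_approx - R_exact)) < 1e-3
```

By Parseval's identity the limit is $\langle 1_t,1_s\rangle_{L^2}=\min(t,s)$ for any orthonormal basis, which is exactly the $L^2(\Omega)$-convergence statement of the proposition.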

Stochastic Differential Equations and Maximum Likelihood Estimators.

Let us briefly discuss the following generalized Langevin equation:
$$\mathrm{d}X_t = -\theta X_t\,\mathrm{d}t + \mathrm{d}N_t, \quad t\in[0,T], \tag{110}$$
with some Gaussian noise $N$, parameter $\theta>0$, and initial condition $X_0$. This can be written in the integral form
$$X_t = X_0 - \theta\int_0^t X_s\,\mathrm{d}s + \int_0^T 1_t(s)\,\mathrm{d}N_s. \tag{111}$$
Here the integral $\int_0^T 1_t(s)\,\mathrm{d}N_s$ can be understood in a pathwise sense or in a Skorohod sense, and both integrals coincide. Suppose now that the Gaussian noise $N$ has the Fredholm representation
$$N_t = \int_0^T K_T(t,s)\,\mathrm{d}W_s. \tag{112}$$
By applying the transfer principle we can write (111) as
$$X_t = X_0 - \theta\int_0^t X_s\,\mathrm{d}s + \int_0^T K_T(t,s)\,\mathrm{d}W_s. \tag{113}$$
This equation can be interpreted as a stochastic differential equation with an anticipating Gaussian perturbation term $\int_0^T K_T(t,s)\,\mathrm{d}W_s$. Now, the unique solution to (111) with initial condition $X_0=0$ is given by
$$X_t = e^{-\theta t}\int_0^t e^{\theta s}\,\mathrm{d}N_s. \tag{114}$$
By using integration by parts and by applying the Fredholm representation of $N$, this can be written as
$$X_t = \int_0^T K_T(t,s)\,\mathrm{d}W_s - \theta\int_0^t e^{-\theta(t-u)}\int_0^T K_T(u,s)\,\mathrm{d}W_s\,\mathrm{d}u, \tag{115}$$
which, thanks to the stochastic Fubini theorem, can be written as
$$X_t = \int_0^T \bigg(K_T(t,s) - \theta\int_0^t e^{-\theta(t-u)}K_T(u,s)\,\mathrm{d}u\bigg)\,\mathrm{d}W_s. \tag{116}$$
In other words, the solution $X$ is a Gaussian process with kernel
$$\Sigma_T(t,s) = K_T(t,s) - \theta\int_0^t e^{-\theta(t-u)}K_T(u,s)\,\mathrm{d}u. \tag{117}$$
Note that this is just an example of how the transfer principle can be applied in order to study stochastic differential equations. Indeed, for a more general equation
$$\mathrm{d}X_t = a(t,X_t)\,\mathrm{d}t + \mathrm{d}N_t \tag{118}$$
the existence or uniqueness result transfers immediately to the existence or uniqueness result of the equation
$$X_t = X_0 + \int_0^t a(s,X_s)\,\mathrm{d}s + \int_0^T K_T(t,s)\,\mathrm{d}W_s, \tag{119}$$
and vice versa.

Let us end this section by discussing briefly how the transfer principle can be used to build maximum likelihood estimators (MLEs) for the mean-reversion parameter $\theta$ in (111). For details on parameter estimation in such equations with general stationary-increment Gaussian noise we refer to [27] and references therein. Let us assume that the noise $N$ in (111) is invertible, in the sense that the Brownian motion $W$ in its Fredholm representation is a linear transformation of $N$.
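The integration-by-parts step connecting (114) and (115) can be checked numerically. The sketch below (using a smooth deterministic test path in place of a noise realization; an illustration only) verifies the pathwise identity behind the solution formula:

```python
import numpy as np

# Numerical sanity check of the integration-by-parts step behind (114)-(115):
# for a (here deterministic, smooth) path N, the Langevin solution
#   X_t = e^{-θt} ∫_0^t e^{θs} dN_s   equals   N_t - θ ∫_0^t e^{-θ(t-s)} N_s ds.
theta, t = 2.0, 1.0
n = 200_000
ds = t / n
s = (np.arange(n) + 0.5) * ds                 # midpoint grid on [0, t]
N = lambda u: np.sin(3.0 * u)                 # a smooth test path with N(0) = 0
dN = 3.0 * np.cos(3.0 * s) * ds               # dN_s = N'(s) ds

lhs = np.exp(-theta * t) * np.sum(np.exp(theta * s) * dN)
rhs = N(t) - theta * np.sum(np.exp(-theta * (t - s)) * N(s)) * ds
assert abs(lhs - rhs) < 1e-6
```

Replacing the deterministic path with a simulated realization of $N$ on the same grid gives a pathwise check of the kernel formula (117) as well.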
Remark 7. The Hilbert space $\mathcal{H}_T$ is separable if and only if $X$ is separable; indeed, $\mathcal{H}_T$ is the closure of the linear span of the indicators $\{1_t;\ t \in [0,T]\}$ in the inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}_T}$.
The Fredholm kernel is unique only up to a unitary operator $U$ on $L^2([0,T])$, in the sense that two kernels for the same covariance satisfy $\widetilde{K} = K U$. Moreover, one may assume that $K_T$ is symmetric.
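A finite-dimensional sketch makes this non-uniqueness concrete: in discrete time any matrix $K$ with $K K^\top = R$ plays the role of the Fredholm kernel, and two such factors differ by an orthogonal (discrete unitary) matrix. The example matrices below are made up for illustration.

```python
import numpy as np

# Hedged discrete analogue of kernel non-uniqueness: the Cholesky
# (lower-triangular, "Volterra") factor and the symmetric square root
# both factor the same covariance R, and differ by an orthogonal Q.

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
R = A @ A.T + 5.0 * np.eye(5)          # a generic covariance matrix

L = np.linalg.cholesky(R)              # lower-triangular factor
w, V = np.linalg.eigh(R)
S = V @ np.diag(np.sqrt(w)) @ V.T      # symmetric PSD square root

assert np.allclose(L @ L.T, R)         # both are square roots of R ...
assert np.allclose(S @ S.T, R)
Q = np.linalg.solve(L, S)              # ... and S = L Q with Q orthogonal
assert np.allclose(Q @ Q.T, np.eye(5), atol=1e-10)
```

The symmetric choice $S$ corresponds to the remark that one may take the kernel $K_T$ symmetric.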
The $p$th Wiener chaos is the closed linear subspace of $L^2(\Omega)$ generated by the random variables $\{H_p(X(\varphi));\ \varphi \in \mathcal{H}_T,\ \|\varphi\|_{\mathcal{H}_T} = 1\}$, where $H_p$ is the $p$th Hermite polynomial. It is well known that the mapping $I_p^X(\varphi^{\otimes p}) = p!\, H_p(X(\varphi))$ provides a linear isometry between the symmetric tensor product $\mathcal{H}_T^{\odot p}$ and the $p$th Wiener chaos. The random variables $I_p^X(\varphi^{\otimes p})$ are called multiple Wiener integrals of order $p$ with respect to the Gaussian process $X$.
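The orthogonality behind this isometry, $\mathrm{E}[\mathrm{He}_p(\xi)\,\mathrm{He}_q(\xi)] = p!\,\delta_{pq}$ for a standard Gaussian $\xi$ (here $\mathrm{He}_p = p!\,H_p$ denotes the probabilists' Hermite polynomial), can be checked numerically; the quadrature order and ranges in this sketch are arbitrary.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

# Hedged check of Hermite orthogonality, the L^2 geometry underlying
# the Wiener chaos decomposition: E[He_p(xi) He_q(xi)] = p! delta_{pq}.
# Gauss-Hermite_e quadrature integrates against the weight e^{-x^2/2}.

x, w = He.hermegauss(60)
w = w / np.sqrt(2.0 * np.pi)      # normalize weights to the N(0, 1) law

def herme(p, xs):
    """Evaluate the probabilists' Hermite polynomial He_p at xs."""
    c = np.zeros(p + 1)
    c[p] = 1.0
    return He.hermeval(xs, c)

for p in range(5):
    for q in range(5):
        moment = float(np.sum(w * herme(p, x) * herme(q, x)))
        target = factorial(p) if p == q else 0.0
        assert abs(moment - target) < 1e-8
```

With 60 nodes the quadrature is exact (up to rounding) for all polynomial products appearing here.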
Theorem 35 (transfer principle for Malliavin calculus). Let $X$ be a separable centered Gaussian process with Fredholm representation (8). Let $D^X$ and $\delta^X$ be the Malliavin derivative and the Skorohod integral with respect to $X$ on $[0,T]$. Similarly, let $D^W$ and $\delta^W$ be the Malliavin derivative and the Skorohod integral with respect to the Brownian motion $W$ of (8) restricted on $[0,T]$. Then $\delta^X = \delta^W K_T^*$. Furthermore, we have the relation
$$ \mathrm{E}\,\langle u, D^X F \rangle_{\mathcal{H}_T} = \mathrm{E}\,\langle K_T^* u, D^W F \rangle_{L^2([0,T])} \quad (53) $$
for any smooth random variable $F$ and any $u \in L^2(\Omega; \mathcal{H}_T)$. Now we are ready to show that the definition of the multiple Wiener integral $I_p^X$ in Section 4.2 is correct in the sense that it agrees with the iterated Skorohod integral. Let $h \in \mathcal{H}_T^{\otimes p}$ be of the form $h(t_1, \ldots, t_p) = \prod_{j=1}^p h_j(t_j)$. Then $h$ is iteratively $p$ times Skorohod integrable and the iterated Skorohod integral coincides with $I_p^X(h)$.
Proposition 38 (Itô formula for polynomials). Let $X$ be a separable centered Gaussian process with covariance $R$ and assume that $f$ is a polynomial. Furthermore, assume that for each polynomial $q$ one has $q(X_\cdot) 1_t \in L^2(\Omega; \mathcal{H}_T)$. Then for each $t \in [0,T]$ one has
$$ f(X_t) = f(X_0) + \int_0^t f'(X_s)\, \delta X_s + \frac{1}{2} \int_0^t f''(X_s)\, dR(s,s); $$
in particular, $f'(X_\cdot) 1_t$ belongs to $\mathrm{Dom}\,\delta^X$.
Remark 39. The message of the above result is that once the processes $f'(X_\cdot) 1_t \in L^2(\Omega; \mathcal{H}_T)$, then they automatically belong to the domain of $\delta^X$, which is a subspace of $L^2(\Omega; \mathcal{H}_T)$. However, in order to check $f'(X_\cdot) 1_t \in L^2(\Omega; \mathcal{H}_T)$ one needs more information on the kernel $K_T$. A sufficient condition is provided in Corollary 43, which covers many cases of interest.
Proof. By definition and by applying the transfer principle, we have to prove that $K_T^* f'(X_\cdot) 1_t$ belongs to the domain of $\delta^W$ and that the formula above holds [cf. Definition 2.1].
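Taking expectations in the Itô formula of Proposition 38 annihilates the Skorohod integral and leaves $\mathrm{E} f(X_t) = f(X_0) + \tfrac12 \int_0^t \mathrm{E} f''(X_s)\, dR(s,s)$, which can be verified numerically on a concrete variance function. The choices $f(x) = x^4$ and $R(s,s) = s^{2H}$ (a fractional-Brownian-type variance, $H = 0.3$) below are purely illustrative.

```python
import numpy as np

# Hedged sanity check of the expectation form of the Ito formula:
#   E f(X_t) = f(X_0) + (1/2) int_0^t E f''(X_s) dR(s, s).
# With f(x) = x^4 and variance R(s, s) = s^{2H}, the right-hand side
# must reproduce the Gaussian fourth moment E X_t^4 = 3 R(t, t)^2.

H, t, n = 0.3, 1.0, 1000
s = np.linspace(0.0, t, n + 1)
var = s ** (2.0 * H)                  # variance function R(s, s)

Ef2 = 12.0 * var                      # E f''(X_s) = 12 E X_s^2 = 12 R(s, s)
# trapezoidal Stieltjes sum for (1/2) int_0^t E f''(X_s) dR(s, s)
stieltjes = 0.5 * np.sum(0.5 * (Ef2[1:] + Ef2[:-1]) * np.diff(var))
moment = 3.0 * var[-1] ** 2           # E X_t^4 for a centered Gaussian
print(stieltjes, moment)
```

Here the trapezoidal Stieltjes sum telescopes exactly, so the agreement is to rounding precision.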
Let $\varphi$ be a step function of the form $\varphi = \sum_{j=1}^n a_j 1_{A_j}$. Then we extend $\langle\cdot,\cdot\rangle_{\mathcal{H}_T}$ to the space $\mathcal{T}_T$ of such functions in the natural way [11]. A process $u \in L^2(\Omega; \mathcal{T}_T)$ belongs to $\mathrm{Dom}\,\delta^X$ (in the extended sense) if
$$ \big| \mathrm{E}\,\langle u, D^X F \rangle_{\mathcal{H}_T} \big| \le c_u\, \| F \|_{L^2(\Omega)} $$
for all smooth random variables $F$.
Remark 47. Note that, in general, the domain and the extended domain of the Skorohod integral are not comparable. See [11] for discussion. Note now that if a function $f$ satisfies the growth condition (61), then $f'(X_\cdot) 1_t \in L^1(\Omega; \mathcal{T}_T)$, since (61) implies the required integrability.
Theorem 48 (Itô formula for extended Skorohod integrals). Let $X$ be a separable centered Gaussian process with covariance $R$ and assume that $f \in C^2$ satisfies the growth condition (61). Furthermore, assume that (H) holds and that the variance of $X$ is bounded and of bounded variation. Then for any $t \in [0,T]$ the process $f'(X_\cdot) 1_t$ belongs to $\mathrm{Dom}\,\delta^X$ (in the extended sense) and
$$ f(X_t) = f(X_0) + \int_0^t f'(X_s)\, \delta X_s + \frac{1}{2} \int_0^t f''(X_s)\, dR(s,s). $$
Similarly, for $f = f(t,x) \in C^{1,2}$ the process $\partial_x f(\cdot, X_\cdot) 1_t$ belongs to $\mathrm{Dom}\,\delta^X$ (in the extended sense) and
$$ f(t, X_t) = f(0, X_0) + \int_0^t \partial_x f(s, X_s)\, \delta X_s + \int_0^t \partial_s f(s, X_s)\, ds + \frac{1}{2} \int_0^t \partial_{xx} f(s, X_s)\, dR(s,s). $$
Proposition 51. Let $X$ and $\widetilde{X}$ be two Gaussian processes with Fredholm kernels $K_T$ and $\widetilde{K}_T$, respectively. If there exists a Volterra kernel $\ell \in L^2([0,T]^2)$ such that
$$ \widetilde{K}_T(t,s) = K_T(t,s) - \int_s^T K_T(t,u)\, \ell(u,s)\, du, $$
then $X$ and $\widetilde{X}$ are equivalent in law. Recall that by the Hitsuda representation theorem [5, Theorem 6.3] a centered Gaussian process $\widetilde{W}$ is equivalent in law to a Brownian motion on $[0,T]$ if and only if there exist a kernel $\ell \in L^2([0,T]^2)$ and a Brownian motion $W$ such that $\widetilde{W}$ admits the representation
$$ \widetilde{W}_t = W_t - \int_0^t \int_0^s \ell(s,u)\, dW_u\, ds. $$
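The kernel formula of Proposition 51 is just the composition of the Fredholm representation with the Hitsuda transform, which a discrete-time sketch makes transparent: if $d\widetilde{W} = (I - A)\, dW$ with a strictly lower-triangular (Volterra) $A$, then $\widetilde{X} = K\, d\widetilde{W}$ has kernel $\widetilde{K} = K(I - A)$. The matrices below are made up.

```python
import numpy as np

# Hedged discrete illustration of Proposition 51: composing the kernel
# K with the Hitsuda-type transform (I - A), where A discretizes the
# Volterra operator with kernel l(u, s), u > s, reproduces the formula
#   K~(t, s) = K(t, s) - int_s^T K(t, u) l(u, s) du.

rng = np.random.default_rng(2)
n, du = 6, 0.1
K = rng.normal(size=(n, n))                    # generic Fredholm kernel
ell = np.tril(rng.normal(size=(n, n)), -1)     # Volterra kernel, u > s only
A = ell * du                                   # discretized integral operator

K_tilde = K @ (np.eye(n) - A)                  # composed representation

# The same kernel, computed from the integral formula of Prop. 51:
K_formula = np.empty((n, n))
for t in range(n):
    for s_ in range(n):
        K_formula[t, s_] = K[t, s_] - np.sum(K[t, s_ + 1:] * ell[s_ + 1:, s_]) * du

assert np.allclose(K_tilde, K_formula)
```

The restriction of the sum to $u > s$ is exactly the Volterra support of $\ell$, mirroring the integration range $\int_s^T$ in the proposition.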
In the same way as in Example 53, by applying the transfer principle to [16, Theorem 4.12] we obtain the following canonical-type bridge representation for general Gaussian Fredholm processes. Let $X$ be a Gaussian process with Fredholm kernel $K_T$ such that $X_0 = 0$. Then the bridge $X^{T,g}$ admits the canonical-type representation (107), where the bridge kernel is constructed from the Fredholm kernel $K_T$ as in [16].
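A finite-dimensional sketch of the bridge construction: conditioning a centered Gaussian vector on its endpoint $X_T = g$ yields the conditional covariance $R_b(t,s) = R(t,s) - R(t,T)R(s,T)/R(T,T)$, which for Brownian motion must reproduce the Brownian bridge covariance $\min(t,s) - ts/T$. The grid below is an illustrative choice.

```python
import numpy as np

# Hedged finite-dimensional bridge sketch: Gaussian conditioning on the
# endpoint gives R_b(t, s) = R(t, s) - R(t, T) R(s, T) / R(T, T); with
# R(t, s) = min(t, s) and T = 1 this is the Brownian bridge covariance.

T, n = 1.0, 100
t = np.linspace(T / n, T, n)                  # grid avoiding t = 0
R = np.minimum.outer(t, t)                    # Brownian covariance min(t, s)

R_bridge = R - np.outer(R[:, -1], R[:, -1]) / R[-1, -1]
R_exact = np.minimum.outer(t, t) - np.outer(t, t) / T

assert np.allclose(R_bridge, R_exact)
assert abs(R_bridge[-1, -1]) < 1e-12          # the bridge is pinned at T
```

The vanishing variance at $t = T$ is the discrete counterpart of the bridge being pinned to the prescribed endpoint.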