A class of bridges of iterated integrals of Brownian motion related to various boundary value problems involving the one-dimensional polyharmonic operator

Let $(B(t))_{t\in [0,1]}$ be linear Brownian motion and $(X_n(t))_{t\in [0,1]}$ the $(n-1)$-fold integral of Brownian motion, $n$ being a positive integer: $$ X_n(t)=\int_0^t \frac{(t-s)^{n-1}}{(n-1)!} \,\mathrm{d} B(s) \quad\text{for any } t\in[0,1]. $$ In this paper we construct several bridges between times 0 and 1 of the process $(X_n(t))_{t\in [0,1]}$ involving conditions on the successive derivatives of $X_n$ at times 0 and 1. For this family of bridges, we establish a correspondence with certain boundary value problems related to the one-dimensional polyharmonic operator. We also study the classical problem of prediction. Our results involve various Hermite interpolation polynomials.
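By the Itô isometry, the covariance of $X_n$ has a closed form that underlies all the computations below (a standard fact, recalled here for reference):

```latex
% Covariance of the (n-1)-fold integrated Brownian motion, via the Ito isometry:
c_{X_n}(s,t) = \mathbb{E}[X_n(s)X_n(t)]
             = \int_0^{s\wedge t} \frac{(s-u)^{n-1}}{(n-1)!}\,
                                  \frac{(t-u)^{n-1}}{(n-1)!}\,\mathrm{d}u .
% For n = 1 this reduces to E[B(s)B(t)] = s ∧ t;
% for n = 2 and s ≤ t it equals s^2 (3t - s)/6.
```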

Observe that the differential equations and the boundary conditions at 0 are the same in all cases; only the boundary conditions at 1 differ. Other boundary value problems can be found in [4] and [5]. We refer the reader to [3] for pioneering work on the connections between general Gaussian processes and Green functions; see also [1]. We also refer to [2], [7], [8], [11], [12], [15] and the references therein for various properties, notably asymptotic studies, of the iterated integrals of Brownian motion, as well as to [4], [5], [13] and [14] for interesting applications of these processes to statistics.
The aim of this work is to examine all the possible conditioned processes of $(X_n(t))_{t\in[0,1]}$ involving different events at time 1:
$$\big(X_n(t)\,\big|\,X_{j_1}(1)=X_{j_2}(1)=\cdots=X_{j_m}(1)=0\big)_{t\in[0,1]}$$
for a certain number $m$ of events, $1\le m\le n$, and certain indices $j_1,j_2,\dots,j_m$ such that $1\le j_1<j_2<\cdots<j_m\le n$, and to make the connection with the boundary value problems
$$v^{(2n)}(t)=(-1)^n u(t),\qquad v^{(i)}(0)=0 \text{ for } i\in\{0,1,\dots,n-1\},\qquad v^{(i_1)}(1)=\cdots=v^{(i_n)}(1)=0,$$
for certain indices $i_1,i_2,\dots,i_n$ such that $0\le i_1<i_2<\cdots<i_n\le 2n-1$. Actually, we shall see that this connection does not recover all the possible boundary value problems, and we shall characterize those sets of indices for which such a connection exists.
The paper is organized as follows. In Section 2, we exhibit the relationships between general Gaussian processes and Green functions of certain boundary value problems. In Section 3, we consider the iterated integrals of Brownian motion. In Section 4, we construct several bridges associated with the foregoing processes and describe explicitly their connections with the polyharmonic operator together with various boundary conditions. One of the main results is Theorem 4.3. Moreover, we exhibit several interesting properties of the bridges (Theorems 4.1 and 4.2) and solve the prediction problem (Theorem 4.4). In Section 5, we illustrate the previous results in the case $n=2$, related to integrated Brownian motion. Finally, in Section 6, we give a characterization for the Green function of the boundary value problem (BVP) to be a covariance function. Another main result is Theorem 6.2.

Gaussian processes and Green functions
We consider an $n$-Markov Gaussian process $(X(t))_{t\in[0,1]}$ evolving on the real line $\mathbb{R}$. By "$n$-Markov", it is understood that the trajectory $t\mapsto X(t)$ is $n$ times differentiable and the $n$-dimensional process $(X(t),X'(t),\dots,X^{(n-1)}(t))_{t\in[0,1]}$ is a Markov process. Let us introduce the covariance function of $(X(t))_{t\in[0,1]}$: for $s,t\in[0,1]$, $c_X(s,t)=\mathbb{E}[X(s)X(t)]$. It is known (see [1]) that the function $c_X$ admits the following representation:
$$c_X(s,t)=\sum_{k=0}^{n-1}\varphi_k(s\wedge t)\,\psi_k(s\vee t), \tag{2.1}$$
where $\varphi_k,\psi_k$, $k\in\{0,1,\dots,n-1\}$, are certain functions. Let $D_0$, $D_1$ be linear differential operators of order less than $p$ and let $D$ be a linear differential operator of order $p$ defined, for any $p$ times differentiable function $f$ and any $t\in[0,1]$, by
$$(Df)(t)=\sum_{i=0}^{p}\alpha_i(t)f^{(i)}(t),$$
where $\alpha_0,\alpha_1,\dots,\alpha_p$ are continuous functions on $[0,1]$.

Theorem 2.1 Assume that the functions $\varphi_k,\psi_k$, $k\in\{0,1,\dots,n-1\}$, are $p$ times differentiable and satisfy the conditions (2.2) and (2.3), for a certain constant $\kappa$. Then, for any continuous function $u$ on $[0,1]$, the function
$$v(t)=\int_0^1 c_X(s,t)\,u(s)\,\mathrm{d}s$$
solves the boundary value problem (2.4).

Remark 2.1 If the problem (2.4) is determining, that is, if it has a unique solution, then the covariance function $c_X$ is exactly the Green function of the boundary value problem (2.4).
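As a point of comparison (a standard fact, not one of the paper's numbered examples), the case $n=1$, $p=2$ of Brownian motion already exhibits this pattern:

```latex
% Brownian motion: c_B(s,t) = s ∧ t = φ_0(s∧t) ψ_0(s∨t) with φ_0(r) = r, ψ_0(r) = 1.
% The function v(t) = ∫_0^1 (s ∧ t) u(s) ds then satisfies
v''(t) = -u(t), \qquad v(0)=0, \qquad v'(1)=0,
% so s ∧ t is the Green function of -d^2/dt^2 with a Dirichlet condition at 0
% and a Neumann condition at 1.
```

Indeed, $v(t)=\int_0^t s\,u(s)\,\mathrm{d}s+t\int_t^1 u(s)\,\mathrm{d}s$, so $v'(t)=\int_t^1 u(s)\,\mathrm{d}s$ and $v''(t)=-u(t)$.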
Proof In view of (2.1), the function $v$ can be written as
$$v(t)=\sum_{k=0}^{n-1}\Big(\psi_k(t)\int_0^t \varphi_k(s)u(s)\,\mathrm{d}s+\varphi_k(t)\int_t^1 \psi_k(s)u(s)\,\mathrm{d}s\Big).$$
Differentiating once, the boundary terms coming from the integrals cancel out; more generally, because of the assumptions (2.2), we easily see by induction that, for $i\in\{0,1,\dots,p-1\}$,
$$v^{(i)}(t)=\sum_{k=0}^{n-1}\Big(\psi_k^{(i)}(t)\int_0^t \varphi_k(s)u(s)\,\mathrm{d}s+\varphi_k^{(i)}(t)\int_t^1 \psi_k(s)u(s)\,\mathrm{d}s\Big).$$
Finally, due to (2.3), $Dv=u$. Concerning the boundary value conditions, referring to (2.3), we similarly obtain $(D_0v)(0)=(D_1v)(1)=0$. The proof of Theorem 2.1 is finished. ⊓⊔

In the next two sections, we construct processes connected to the equation $Dv=u$ subject to the boundary value conditions at 0: $(D_0^i v)(0)=0$ for $i\in\{0,1,\dots,n-1\}$, and others at 1 that will be discussed subsequently, where $D$ and the $D_0^i$ are the differential operators ($D$ being of order $p=2n$) defined by $D=(-1)^n\,\mathrm{d}^{2n}/\mathrm{d}t^{2n}$ and $D_0^i=\mathrm{d}^i/\mathrm{d}t^i$.
In this section, we exhibit several interesting properties of the various processes $(Y(t))_{t\in[0,1]}$. One of the main goals is to relate these bridges to additional boundary value conditions at 1. For this, given the set $J\subseteq\{1,\dots,n\}$ labeling the bridge, we introduce the following subset $I$ of $\{0,1,\dots,2n-1\}$:
$$I=(n-J)\cup\big((n-1)+(\{1,\dots,n\}\setminus J)\big).$$
The cardinality of $I$ is $n$. Actually, the set $I$ will be used further for enumerating the boundary value problems which can be related to the bridges labeled by $J$. Conversely, $I$ yields $J$ through
$$J=n-\big(I\cap\{0,1,\dots,n-1\}\big).$$
Below we give some examples of sets $I$ and $J$ in the case $n=2$: $J=\emptyset$ corresponds to $I=\{2,3\}$; $J=\{1\}$ to $I=\{1,3\}$; $J=\{2\}$ to $I=\{0,2\}$; $J=\{1,2\}$ to $I=\{0,1\}$.
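This correspondence can be sanity-checked mechanically. The sketch below assumes the formula $I=(n-J)\cup\big((n-1)+(\{1,\dots,n\}\setminus J)\big)$ reconstructed from the $n=2$ examples of Section 5; the helper names are hypothetical:

```python
# Check: for every bridge label J, the index set I has cardinality n and
# satisfies the admissibility relation {2n-1-i : i in I} = {0,...,2n-1} \ I.
from itertools import chain, combinations

def index_set(n, J):
    """Boundary indices at time 1 associated with the bridge labeled by J."""
    Jc = set(range(1, n + 1)) - set(J)
    return {n - j for j in J} | {n - 1 + j for j in Jc}

def is_admissible(n, I):
    """I is admissible iff {2n-1-i : i in I} is the complement of I."""
    full = set(range(2 * n))
    return {2 * n - 1 - i for i in I} == full - I

for n in (2, 3):
    for J in chain.from_iterable(combinations(range(1, n + 1), m) for m in range(n + 1)):
        I = index_set(n, J)
        assert len(I) == n and is_admissible(n, I)

# For n = 2 this reproduces the four cases of Section 5:
assert index_set(2, ()) == {2, 3}        # integrated Brownian motion
assert index_set(2, (1, 2)) == {0, 1}    # bridge pinned together with its derivative
```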

Remark 4.1
In the case where n = 2, we retrieve a result of [6]. Moreover, the conditions (4.1) characterize the polynomials P j , j ∈ J. We prove this fact in Lemma A.1 in the appendix.
Proof By invoking classical arguments from the theory of Gaussian processes, we obtain the announced distributional identity, and the system (4.2) can be written with matrix $\big(1/(j+k-1)\big)_{j,k\in J}$. This matrix is regular, as can be seen by introducing the related quadratic form, which is positive definite. Indeed, for any real numbers $x_j$, $j\in J$,
$$\sum_{j,k\in J}\frac{x_jx_k}{j+k-1}=\int_0^1\Big(\sum_{j\in J}x_j\,t^{j-1}\Big)^2\mathrm{d}t\ge 0,$$
with equality only if all the $x_j$ vanish. Thus, the system (4.2) has a unique solution. As a result, the $P_j$ are linear combinations of the functions $t\mapsto\int_0^t(t-u)^{n-1}(1-u)^{k-1}\,\mathrm{d}u$, which are polynomials of degree less than $n+k$. Hence, $P_j$ is a polynomial of degree at most $2n-1$.
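The regularity of this Cauchy/Hilbert-type matrix can be checked with exact rational arithmetic; a small sketch (the sample sets $J$ below are arbitrary choices, and `gram_det` is a hypothetical helper):

```python
# Exact check that the matrix (1/(j+k-1))_{j,k in J} is regular
# (nonzero determinant) for sample index sets J.
from fractions import Fraction

def gram_det(J):
    """Determinant of (1/(j+k-1))_{j,k in J} by Gaussian elimination over Q."""
    J = sorted(J)
    m = [[Fraction(1, j + k - 1) for k in J] for j in J]
    det = Fraction(1)
    for p in range(len(J)):
        det *= m[p][p]                      # pivots are nonzero: the matrix is PD
        for r in range(p + 1, len(J)):
            factor = m[r][p] / m[p][p]
            for c in range(len(J)):
                m[r][c] -= factor * m[p][c]
    return det

# For J = {1, 2}: matrix [[1, 1/2], [1/2, 1/3]], determinant 1/3 - 1/4 = 1/12.
assert gram_det({1, 2}) == Fraction(1, 12)
assert gram_det({1, 2, 3}) != 0
```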
We now compute the derivatives of $P_j$ at 0 and 1. From the explicit form of $P_j$, we see that $P_j^{(i)}(0)=0$ for $i\in\{0,1,\dots,n-1\}$. At time $t=1$, the derivatives of order $i\ge n$ can be computed explicitly; in particular, if $i\in(n-J)$, the defining relations of the $P_j$ can be rewritten in a form which, by identification, yields
$$P_j^{(i)}(1)=\delta_{j,n-i}.$$
In the next theorem, we supply a representation of $c_Y$ of the form (2.1).

Theorem 4.2
The covariance function of $(Y(t))_{t\in[0,1]}$ admits the following representation: for $s\le t$,
$$c_Y(s,t)=\sum_{k=0}^{n-1}\varphi_k(s)\,\tilde\psi_k(t),$$
with suitable functions $\tilde\psi_k$, $k\in\{0,1,\dots,n-1\}$. Moreover, the functions $\tilde\psi_k$, $k\in\{0,1,\dots,n-1\}$, are Hermite interpolation polynomials.

Proof By definition (4.2) of the $P_j$'s, and since the covariance functions $c_Y$ and $c_{X_n}$ are symmetric, we may introduce the symmetric polynomial appearing in the difference $c_{X_n}-c_Y$ and express it by means of the $\varphi_k,\psi_k$'s. We can then rewrite $c_Y(s,t)$ in the announced form and, for each $k$, we immediately see that $\tilde\psi_k$ is a polynomial of degree less than $2n$ satisfying the required interpolation conditions. The claimed representation follows.

Boundary value problem
In this section, we write out the natural boundary value problem which is associated with the process $(Y(t))_{t\in[0,1]}$. The following statement is the main connection between the different boundary value conditions associated with the operator $\mathrm{d}^{2n}/\mathrm{d}t^{2n}$ and the different bridges of the process $(X_n(t))_{t\in[0,1]}$ introduced in this work.
• Second step. We now check the uniqueness of the solution of (4.4). Let $v_1$ and $v_2$ be two solutions of $Dv=u$. Then the function $w=v_1-v_2$ satisfies $Dw=0$, $w^{(i)}(0)=0$ for $i\in\{0,1,\dots,n-1\}$ and $w^{(i)}(1)=0$ for $i\in I$. We compute the following "energy" integral: integrating $\int_0^1 w^{(2n)}(t)w(t)\,\mathrm{d}t=0$ by parts $n$ times produces the boundary products $w^{(i)}(1)\,w^{(2n-1-i)}(1)$, and we have constructed the set $I$ precisely in order to have $w^{(i)}(1)\,w^{(2n-1-i)}(1)=0$ for every $i$. Hence $\int_0^1 w^{(n)}(t)^2\,\mathrm{d}t=0$, so $w^{(n)}\equiv 0$; $w$ is then a polynomial of degree less than $n$ which vanishes together with its first $n-1$ derivatives at 0, whence $w=0$. In this manner, we get $2^n$ different boundary value problems which correspond to the $2^n$ different bridges we have constructed. We shall see in Section 6 that the above identity concerning the differentiating set $I$ characterizes the possibility for the Green function of the boundary value problem (4.4) to be the covariance function of a Gaussian process.
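For instance, in the case $n=2$ the energy computation reads (a routine integration by parts, written out for concreteness):

```latex
0=\int_0^1 w^{(4)}(t)\,w(t)\,\mathrm{d}t
 =\big[w'''(t)\,w(t)-w''(t)\,w'(t)\big]_0^1+\int_0^1 w''(t)^2\,\mathrm{d}t .
% With w(0) = w'(0) = 0 the bracket vanishes at 0; the choice of I makes the
% products w'''(1)w(1) and w''(1)w'(1) vanish, forcing w'' ≡ 0 and hence w = 0.
```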

The solution of the corresponding linear system can be written out explicitly,
and we see that $X_{n-k}(t_0)$, $k\in\{0,1,\dots,n-1\}$, is of the required form. Therefore, by plugging (4.7) into (4.5), we obtain the announced expression, and $Y(t+t_0)$ can be written accordingly. • Second step. We easily see that the functions $\tilde P_{j,t_0}$ and $Q_{i,t_0}$ are polynomials of degree less than $2n$. Let us now compute their derivatives at 0 and $t_0$. First, concerning $Q_{i,t_0}$: choosing $t=0$, recalling the definition of the $a_{\iota k}$ and the fact that the matrices $(a_{ik})_{0\le i,k\le n}$ and $(b_{ik})_{0\le i,k\le n}$ are inverse to each other, we obtain the interpolation conditions for $\iota\in\{0,1,\dots,n-1\}$. By Theorem 4.1, we know that $P_j^{(\iota)}(1)=\delta_{j,n-\iota}$ for $\iota\in I$. Observing that, if $\iota\le k\le n-1$, the conditions $\iota\in(n-J)$ and $\iota\in I$ are equivalent, this simplifies the sum over $j\in J$ with $j\ge n-k$, which immediately entails, by (4.8), the announced conditions. Next, concerning $\tilde P_{j,t_0}$: choosing $t=0$ gives, for $\iota\in\{0,1,\dots,n-1\}$, the vanishing of the derivatives at 0, while choosing $t=1-t_0$ gives the interpolation conditions at $t_0$ for $\iota\in I$. The polynomials $\tilde P_{j,t_0}$ (resp. $Q_{i,t_0}$) enjoy the same properties as the $P_j$'s (resp. the $\tilde\psi_i$'s) regarding the successive derivatives; they can be deduced from the latter by a rescaling. It is then easy to extract the announced identity in distribution by using the property of Gaussian conditioning.

We now have a look at the particular case $n=2$, for which the corresponding process $(X_n(t))_{t\in[0,1]}$ is nothing but integrated Brownian motion (the so-called Langevin process):
$$X_2(t)=\int_0^t(t-s)\,\mathrm{d}B(s)=\int_0^t B(s)\,\mathrm{d}s.$$
The underlying Markov process is the so-called Kolmogorov diffusion $(X_2(t),X_1(t))_{t\in[0,1]}$. All the associated conditioned processes that will be constructed are related to the equation $v^{(4)}(t)=u(t)$ with boundary value conditions at time 0: $v(0)=v'(0)=0$. There are four such processes:

– integrated Brownian motion ($J=\emptyset$);
– integrated Brownian bridge ($J=\{1\}$);
– the bridge of integrated Brownian motion ($J=\{2\}$);
– another bridge of integrated Brownian motion ($J=\{1,2\}$).
On the other hand, when adding two boundary value conditions at time 1 to the foregoing equation, we find six boundary value problems. Actually, only four of them can be related to some Gaussian processes (the above-listed processes) in the sense of our work, whereas the two others cannot be. For each process, we provide the covariance function, the representation by means of integrated Brownian motion subject to a random polynomial drift, the related boundary value conditions at 1, and the decomposition related to the prediction problem. Since the computations are straightforward, we omit them and only report the results.
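The count of six problems, four of which admit a Gaussian-process interpretation, can be verified directly from the admissibility condition $\{0,1,\dots,2n-1\}\setminus I=2n-1-I$ stated in Section 6 (a small sketch; `admissible` is a hypothetical helper):

```python
# Enumerate the two-element subsets I of {0,1,2,3} (boundary conditions at 1
# for the equation v'''' = u) and test the symmetry/admissibility condition.
from itertools import combinations

def admissible(I, n=2):
    """I is admissible iff {2n-1-i : i in I} equals the complement of I."""
    full = set(range(2 * n))
    return {2 * n - 1 - i for i in I} == full - set(I)

subsets = [set(I) for I in combinations(range(4), 2)]
good = [I for I in subsets if admissible(I)]
bad = [I for I in subsets if not admissible(I)]

assert len(subsets) == 6
assert len(good) == 4
# The two excluded sets are exactly the counterexamples discussed below:
assert {0, 3} in bad and {1, 2} in bad
```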
For an account on integrated Brownian motion in relation with the present work, we refer the reader to, e.g., [6] and references therein.

Integrated Brownian motion
The process corresponding to the set $J=\emptyset$ is nothing but integrated Brownian motion itself: $Y(t)=X_2(t)$. The covariance function is explicitly given, for $s\le t$, by
$$c_Y(s,t)=\int_0^s(s-u)(t-u)\,\mathrm{d}u=\frac{s^2(3t-s)}{6}.$$
This process is related to the boundary value conditions at 1 ($I=\{2,3\}$): $v''(1)=v'''(1)=0$. The prediction property can be stated as follows: for $t_0\in[0,1]$,
$$X_2(t_0+t)=X_2(t_0)+t\,X_1(t_0)+\tilde X_2(t),$$
where $(\tilde X_2(t))_t$ is an integrated Brownian motion independent of $(X_2(s))_{0\le s\le t_0}$.
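A quick numerical sanity check of this covariance formula (exact antiderivative against the closed form; the grid points are arbitrary):

```python
# Verify c(s,t) = ∫_0^{s∧t} (s-u)(t-u) du = s²(3t-s)/6 for s ≤ t,
# together with the symmetry c(s,t) = c(t,s).
def cov_integral(s, t):
    """Exact value of ∫_0^m (s-u)(t-u) du with m = min(s,t)."""
    m = min(s, t)
    return s * t * m - (s + t) * m**2 / 2 + m**3 / 3

def cov_closed(s, t):
    s, t = min(s, t), max(s, t)
    return s * s * (3 * t - s) / 6

grid = [i / 10 for i in range(11)]
for s in grid:
    for t in grid:
        assert abs(cov_integral(s, t) - cov_closed(s, t)) < 1e-12
        assert abs(cov_integral(s, t) - cov_integral(t, s)) < 1e-12
```

In particular $c(t,t)=t^3/3$, so $\operatorname{Var}X_2(1)=1/3$, a value used repeatedly below.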

Integrated Brownian bridge
The process corresponding to the set $J=\{1\}$ is the integrated Brownian bridge. This process can be represented as
$$Y(t)=X_2(t)-\frac{t^2}{2}\,X_1(1),\quad t\in[0,1].$$
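One can check directly that this is the right drift, by elementary Gaussian conditioning:

```latex
\operatorname{Cov}\big(Y(t),X_1(1)\big)
  =\operatorname{Cov}\big(X_2(t),X_1(1)\big)-\frac{t^2}{2}\operatorname{Var}X_1(1)
  =\frac{t^2}{2}-\frac{t^2}{2}=0 ,
% so Y is independent of X_1(1); moreover Y'(t) = X_1(t) - t X_1(1)
% vanishes at t = 1, as required for the integrated Brownian bridge.
```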

Bridge of integrated Brownian motion
The process corresponding to the set $J=\{2\}$ is the bridge of integrated Brownian motion. The bridge is understood as the process pinned at its extremities: $Y(0)=Y(1)=0$. This process can be represented as
$$Y(t)=X_2(t)-\frac{t^2(3-t)}{2}\,X_2(1),\quad t\in[0,1].$$
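The polynomial drift comes from one-dimensional Gaussian conditioning (a short derivation, using $\operatorname{Var}X_2(1)=1/3$):

```latex
\mathbb{E}\big[X_2(t)\,\big|\,X_2(1)\big]
  =\frac{\operatorname{Cov}(X_2(t),X_2(1))}{\operatorname{Var}X_2(1)}\,X_2(1)
  =\frac{t^2(3-t)/6}{1/3}\,X_2(1)
  =\frac{t^2(3-t)}{2}\,X_2(1).
% At t = 1 the coefficient equals 1, so Y(1) = 0 almost surely.
```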
The covariance function is explicitly given, for $s\le t$, by
$$c_Y(s,t)=\frac{s^2(3t-s)}{6}-\frac{s^2(3-s)\,t^2(3-t)}{12}.$$

Other bridge of integrated Brownian motion
The process corresponding to the set $J=\{1,2\}$ is another bridge of integrated Brownian motion (actually of the two-dimensional Kolmogorov diffusion). The bridge here is understood as the process pinned at its extremities together with its derivative: $Y(0)=Y'(0)=Y(1)=Y'(1)=0$. This process can be represented as
$$Y(t)=X_2(t)-t^2(t-1)\,X_1(1)-t^2(3-2t)\,X_2(1),\quad t\in[0,1].$$
The covariance function is explicitly given, for $s\le t$, by
$$c_Y(s,t)=\frac{s^2(1-t)^2(3t-s-2st)}{6}.$$
The process $(Y(t))_{t\in[0,1]}$ is related to the boundary value conditions at 1 ($I=\{0,1\}$): $v(1)=v'(1)=0$.
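This covariance can be cross-checked against the general conditioning formula $c_Y(s,t)=c(s,t)-r(s)^{\top}\Gamma^{-1}r(t)$, where $\Gamma$ is the covariance matrix of $(X_1(1),X_2(1))$ and $r(t)$ the vector of covariances of $X_2(t)$ with these variables. A numerical sketch (helper names hypothetical):

```python
# Cross-check: conditional covariance of integrated Brownian motion given
# X1(1) = X2(1) = 0 versus the closed form s²(1-t)²(3t-s-2st)/6 for s ≤ t.
def c(s, t):                       # covariance of integrated Brownian motion
    s, t = min(s, t), max(s, t)
    return s * s * (3 * t - s) / 6

def r(t):                          # (Cov(X2(t), X1(1)), Cov(X2(t), X2(1)))
    return (t * t / 2, c(t, 1))

def c_bridge(s, t):
    # Gamma = [[1, 1/2], [1/2, 1/3]]; its inverse is [[4, -6], [-6, 12]].
    g11, g12, g22 = 4.0, -6.0, 12.0
    rs, rt = r(s), r(t)
    quad = (g11 * rs[0] * rt[0] + g12 * (rs[0] * rt[1] + rs[1] * rt[0])
            + g22 * rs[1] * rt[1])
    return c(s, t) - quad

def c_closed(s, t):
    s, t = min(s, t), max(s, t)
    return s * s * (1 - t) ** 2 * (3 * t - s - 2 * s * t) / 6

grid = [i / 8 for i in range(9)]
for s in grid:
    for t in grid:
        assert abs(c_bridge(s, t) - c_closed(s, t)) < 1e-12
```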

Two counterexamples
• The first problem, associated with the boundary value conditions $v(1)=v'''(1)=0$ (which corresponds to the set $I_1=\{0,3\}$), has a Green function $G_1$. • The second problem, associated with the boundary value conditions $v'(1)=v''(1)=0$ (which corresponds to the set $I_2=\{1,2\}$), has a Green function $G_2$. We can observe the relationships $G_1(s,t)=G_2(t,s)$ and $I_2=\{0,1,2,3\}\setminus(3-I_1)$. The Green functions $G_1$ and $G_2$ are not symmetric, so they cannot be viewed as the covariance functions of any Gaussian process. In the next section, we give an explanation of these observations.

General boundary value conditions
In this last part, we address the problem of relating the general boundary value problem (6.1), for any indices $i_1,i_2,\dots,i_n$ such that $0\le i_1<i_2<\cdots<i_n\le 2n-1$, to some possible Gaussian process. Set $I=\{i_1,i_2,\dots,i_n\}$. We noticed in Theorem 4.3 and Remark 4.2 that, when $I$ satisfies the relationship $2n-1-I=\{0,1,\dots,2n-1\}\setminus I$, the system (6.1) admits a unique solution. We proved this fact by computing an energy integral. Actually, the uniqueness holds for any set of indices $I$; see Lemma A.1.
Our aim is to characterize the sets of indices $I$ for which the Green function of (6.1) can be viewed as the covariance function of a Gaussian process. A necessary condition for a function of two variables to be such a covariance function is that it be symmetric. So, we shall characterize the sets of indices $I$ for which the Green function of (6.1) is symmetric, and we shall see that in this case this function is indeed a covariance function.

Representation of the solution
We first write out a representation for the Green function of (6.1).
Theorem 6.1 The boundary value problem (6.1) has a unique solution. The corresponding Green function $G_I$ admits a representation, for $s,t\in[0,1]$, in terms of Hermite interpolation polynomials $R_{I,\iota}$, $\iota\in I$, satisfying the conditions (6.2).

Proof Let us introduce the functions $v_1$ and $v_2$ defined, for any $t\in[0,1]$, by the particular solution and the polynomial correction respectively, and set $v=v_1+v_2$. We plainly have $v_1^{(i)}(0)=0$ for $i\in\{0,1,\dots,n-1\}$. Therefore, the function $v$ solves the system (6.1) if and only if the function $v_2$ satisfies the conditions (6.3). Referring to Lemma A.1, the conditions (6.3) mean that $v_2$ is a Hermite interpolation polynomial which can be expressed as a linear combination of the $R_{I,\iota}$, $\iota\in I$, defined in Theorem 6.1. Consequently, the boundary value problem (6.1) admits a unique solution $v$, which writes
$$v(t)=\int_0^1 G_I(s,t)\,u(s)\,\mathrm{d}s,$$
where $G_I(s,t)$ is defined in Theorem 6.1. The proof is finished. ⊓⊔

We now state two intermediate results which will be used in the proof of Theorem 6.2.

Proposition 6.1 Let $I_1$ and $I_2$ be two distinct subsets of $\{0,1,\dots,2n-1\}$ with cardinality $n$. Then $G_{I_1}\neq G_{I_2}$.

Proof Since $I_1\neq I_2$ and $I_1$, $I_2$ have the same cardinality, there exists an index $i_0$ which belongs to $I_2\setminus I_1$. If we had $G_{I_1}=G_{I_2}$, then for this $i_0$ we would have $(\partial^{i_0}G_{I_1}/\partial t^{i_0})(s,1)=0$ for all $s\in(0,1)$. This is impossible, since the exponent $2n-1-i_0$ does not appear in the polynomial arising on the left-hand side of the corresponding equality. As a result, $G_{I_1}\neq G_{I_2}$. ⊓⊔

Proposition 6.2 Let $I_1$ and $I_2$ be two subsets of $\{0,1,\dots,2n-1\}$ with cardinality $n$. The relationship $G_{I_1}(s,t)=G_{I_2}(t,s)$ holds for any $s,t\in[0,1]$ (in other words, the integral operators with kernels $G_{I_1}$ and $G_{I_2}$ are dual) if and only if the sets $I_1$ and $I_2$ are linked by $I_2=\{0,1,\dots,2n-1\}\setminus(2n-1-I_1)$.
Otherwise, the polynomial $S$ cannot be null; that is, there exist $s,t\in[0,1]$ such that $G_{I_1}(s,t)\neq G_{I_2}(t,s)$.
The proof of Proposition 6.2 is finished. ⊓⊔

A necessary condition for $G_I$ to be the covariance function of a Gaussian process is that it must be symmetric: $G_I(s,t)=G_I(t,s)$ for any $s,t\in[0,1]$. The theorem below asserts that if the set of indices $I$ is not of the form displayed in the preamble of Section 4, that is, if $I\neq\{0,1,\dots,2n-1\}\setminus(2n-1-I)$, then the Green function $G_I$ is not symmetric; consequently this function cannot be viewed as a covariance function, that is, we cannot relate the boundary value problem (6.1) to any Gaussian process.

We now have a look at the particular case $n=3$, for which the corresponding process $(X_n(t))_{t\in[0,1]}$ is the twice integrated Brownian motion:
$$X_3(t)=\int_0^t\frac{(t-s)^2}{2}\,\mathrm{d}B(s)=\int_0^t\!\!\int_0^{s}B(r)\,\mathrm{d}r\,\mathrm{d}s.$$
All the associated conditioned processes that can be constructed are related to the equation $v^{(6)}(t)=-u(t)$ with boundary value conditions at time 0: $v(0)=v'(0)=v''(0)=0$. There are $2^3=8$ such processes. Since the computations are tedious and the explicit results are cumbersome, we only report the correspondence between bridges and boundary value conditions at time 1 through the sets of indices $I$ and $J$.

Appendix

Proof (of Lemma A.1) We look for polynomials $P$ in the form of a linear combination of the monomials $t^i/i!$, $i\in\{0,1,\dots,2n-1\}$, with the convention $1/[(j-i)!]=0$ for $i>j$. The conditions (A.1) yield a linear system (with the convention that $i$ and $j$ denote respectively the row and column indices) whose $(2n)\times(2n)$ matrix we call $A$. Proving the statement of Lemma A.1 is equivalent to proving that the matrix $A$ is regular. In view of the form of $A$ as a block matrix, since the north-west block is the unit matrix and the north-east block is the null matrix, this is equivalent to proving that the south-east block of $A$ is regular. Let us call this latter block $A_0$ and label its columns $C_0,C_1,\dots,C_{n-1}$. For proving that $A_0$ is regular, we factorize $A_0$ into the product of two regular triangular matrices.
The method consists in performing several transformations on the columns of $A_0$ which do not affect its rank. In this way we provide an algorithm leading to an LU-factorization of $A_0$, where $L$ is a lower triangular matrix and $U$ is an upper triangular matrix with no vanishing diagonal term.

We have written $A_1=A_0\,U_1$, where $U_1$ is a triangular matrix with a diagonal made of 1's. We now perform a second transformation on the columns $C_j^{(1)}$, $j\in\{2,3,\dots,n-1\}$; this transformation supplies a matrix $A_2$ with columns $C_j^{(2)}$, $j\in\{0,1,\dots,n-1\}$.

We have written $A_2=A_1\,U_2$, where $U_2$ is a triangular matrix with a diagonal made of 1's. In a recursive manner, we easily see that we can construct a sequence of matrices $A_k$, $U_k$, $k\in\{1,2,\dots,n-1\}$, such that $A_k=A_{k-1}U_k$, where each $U_k$ is a triangular matrix with a diagonal made of 1's. It is clear that the matrices $U$ and $L$ so obtained are triangular and regular, and then $A_0$ (and hence $A$) is also regular. Moreover, the inverse of $A_0$ can be computed from this factorization. The proof of Lemma A.1 is finished. ⊓⊔