Optimal Linear Filtering of General Multidimensional Gaussian Processes and Its Application to Laplace Transforms of Quadratic Functionals

The optimal filtering problem for multidimensional continuous, possibly non-Markovian, Gaussian processes, observed through a linear channel driven by a Brownian motion, is revisited. Explicit Volterra-type filtering equations involving the covariance function of the filtered process are derived, both for the conditional mean and for the covariance of the filtering error. The solution of the filtering problem is then applied to obtain a Cameron-Martin type formula for the Laplace transform of a quadratic functional of the process. Particular cases for which the results can be further elaborated are also investigated.


Introduction
The Kalman-Bucy theory of optimal filtering is well known for Gaussian linear systems driven by Brownian motions. Various extensions of this theory to possibly non-Gaussian Markov processes and semimartingales have received a great deal of interest over the last decades (see Davis [1], Liptser and Shiryaev [8,9], Kallianpur [3], Elliott [3], and Pardoux [10]). As far as we know, there are few contributions for systems generating non-Markovian processes or processes which are not semimartingales (e.g., [3]). Yet, for processes governed by Itô-Volterra type equations, Kleptsyna and Veretennikov [7] provide a technique to overcome many of the difficulties of non-Markovian and non-semimartingale processes. Recently, a similar approach has been applied to several specific one-dimensional non-Markovian continuous Gaussian filtering problems (see Kleptsyna et al. [4,5,6], and references therein).

(Research supported by RFBR Grants 00-01-00571 and 00-15-96116.)
In this paper, we deal with a signal process $X = (X_t, t \ge 0)$ which is an arbitrary $p$-dimensional continuous Gaussian process and an observation process $Y = (Y_t, t \ge 0)$ in $\mathbb{R}^q$ governed by the linear equation

$$Y_t = \int_0^t R(s) X_s \, ds + N_t, \qquad t \ge 0, \tag{1}$$

(see [3, Chap. 10] for a similar setting). The function $R = (R(s), s \ge 0)$ is continuous with values in the set of $q \times p$ matrices, and $N = (N_t, t \ge 0)$ denotes a $q$-dimensional Brownian motion, independent of $X$, with covariance function $\langle N\rangle = (\langle N\rangle_t, t \ge 0)$. Clearly, the pair $(X, Y)$ is Gaussian but, in general, is neither Markovian nor a semimartingale. If only $Y$ is observed and one wishes to know $X$, the above reduces to the classical problem of filtering the signal $X$ at time $t$ from the observation of $Y$ up to time $t$. The solution to this problem is the conditional distribution of $X$ given the $\sigma$-field $\mathscr{Y}_t = \sigma(\{Y_s, 0 \le s \le t\})$, which is called the optimal filter. Of course, here the optimal filter is a Gaussian distribution and it is completely determined by the conditional mean $\pi_t(X)$ of $X_t$ given $\mathscr{Y}_t$ and by the conditional covariance $\gamma_{xx}(t)$ of the filtering error, which is actually deterministic, i.e.,

$$\pi_t(X) = E\big[X_t \mid \mathscr{Y}_t\big], \qquad \gamma_{xx}(t) = E\big[(X_t - \pi_t(X))(X_t - \pi_t(X))'\big].$$
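To make the setting concrete, the observation model (1) can be discretized with a simple Euler scheme. The following sketch is not from the paper: it assumes the scalar case $p = q = 1$, a Brownian path as the signal $X$, and constant $R \equiv 1$ and $Q \equiv 1$; the function names are hypothetical.

```python
import math
import random

def simulate_observation(x_path, R, q_var, dt, rng):
    """Euler discretization of the observation equation (1):
    Y_t = integral_0^t R(s) X_s ds + N_t, in the scalar case p = q = 1,
    where the Brownian noise N has d<N>_s = q_var(s) ds."""
    y = [0.0]
    for k, x in enumerate(x_path[:-1]):
        s = k * dt
        dN = rng.gauss(0.0, math.sqrt(q_var(s) * dt))
        y.append(y[-1] + R(s) * x * dt + dN)
    return y

rng = random.Random(0)
dt, n = 0.01, 100
# hypothetical signal: a standard Brownian path for X
x = [0.0]
for _ in range(n):
    x.append(x[-1] + rng.gauss(0.0, math.sqrt(dt)))
y = simulate_observation(x, R=lambda s: 1.0, q_var=lambda s: 1.0, dt=dt, rng=rng)
```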
Our first aim is to show that the solution can be completely described. That is, the characteristics of the optimal filter are obtained as the solution of a closed-form system of Volterra-type equations, which can be reduced to the Kalman-Bucy equations when the signal process $X$ is a Gauss-Markov process. Our second aim is to extend the filtering approach for one-dimensional processes presented in [6] to obtain a Cameron-Martin type formula for the Laplace transform of a quadratic functional of the process. That is, for $q = p$,

$$\mathscr{L}(t) = E \exp\Big\{-\frac{1}{2}\int_0^t X_s' \, d\langle N\rangle_s \, X_s\Big\}. \tag{3}$$

This paper is organized as follows. In Section 2, we derive the solution of the filtering problem: explicit Volterra-type equations, involving the covariance function of the filtered process, are derived for the first- and second-order moments of the optimal filter. The application to quadratic functionals of the process is reported in Section 3, where a filtering problem is given and the Laplace transform is computed. Finally, in Section 4, we investigate some specific cases where the results can be further elaborated.
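For a sanity check of a formula of type (3), the simplest instance is the classical Cameron-Martin result for a standard one-dimensional Brownian motion $W$ with $Q \equiv 1$: $E\exp\{-\frac12\int_0^1 W_s^2\,ds\} = (\cosh 1)^{-1/2}$. A minimal Monte Carlo sketch (not from the paper; the function name is hypothetical):

```python
import math
import random

def laplace_mc(t=1.0, n_steps=200, n_paths=2000, seed=1):
    """Monte Carlo estimate of L(t) = E exp{-1/2 int_0^t X_s^2 ds}
    for X a standard scalar Brownian motion and Q = 1."""
    rng = random.Random(seed)
    dt = t / n_steps
    acc = 0.0
    for _ in range(n_paths):
        w, integral = 0.0, 0.0
        for _ in range(n_steps):
            integral += w * w * dt          # left-point Riemann sum
            w += rng.gauss(0.0, math.sqrt(dt))
        acc += math.exp(-0.5 * integral)
    return acc / n_paths

est = laplace_mc()
exact = 1.0 / math.sqrt(math.cosh(1.0))    # classical value (cosh 1)^{-1/2}
```

The estimate agrees with the closed-form value up to Monte Carlo and discretization error.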

Solution of the Filtering Problem
In what follows, all random variables and processes are defined on the stochastic basis $(\Omega, \mathscr{F}, (\mathscr{F}_t), P)$ where the usual conditions are satisfied and where processes are $(\mathscr{F}_t)$-adapted. We consider an $\mathbb{R}^p$-valued continuous Gaussian process $X = (X_t, t \ge 0)$ with mean function $m = (m_t, t \ge 0)$ and covariance function $K = (K(t,s), t \ge 0, s \ge 0)$.
Here, we set $Q(s) = d\langle N\rangle_s/ds$, where the derivative is understood in the sense of absolute continuity. Thus $Q(s)$ is a non-negative symmetric $q \times q$ matrix, assumed to be non-singular. Recall that the solution of the filtering problem of the signal $X$ from the observation $Y$ defined in (1) can be reduced to the equations for the conditional mean and the covariance of the filtering error. The following theorem provides these equations.

Remark 1: Theorem 1 provides further elaboration of the solution of the filtering problem given in [3, Chap. 10]. Theorem 1 can also be viewed as a partial extension to the non-Markovian setting of the filtering theorem for general linear systems driven by Gaussian martingales, as proved in Liptser and Shiryaev [9].
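For orientation, recall the Kalman-Bucy reduction mentioned in the introduction: when $X$ is Gauss-Markov with drift matrix $A$ and diffusion covariance $D$, the filtering-error covariance solves the Riccati equation $\dot\gamma = A\gamma + \gamma A' + D - \gamma R' Q^{-1} R \gamma$. A scalar numerical sketch with hypothetical coefficients $a = -1$, $d = r = q = 1$ (not from the paper):

```python
import math

def riccati_error_cov(a=-1.0, d=1.0, r=1.0, q=1.0, gamma0=1.0,
                      t_end=10.0, dt=1e-3):
    """Forward-Euler integration of the scalar Kalman-Bucy Riccati equation
    gamma' = 2*a*gamma + d - gamma**2 * r**2 / q  (hypothetical coefficients)."""
    gamma = gamma0
    for _ in range(int(round(t_end / dt))):
        gamma += dt * (2.0 * a * gamma + d - gamma * gamma * r * r / q)
    return gamma

gamma_inf = riccati_error_cov()
# positive root of -2*g + 1 - g**2 = 0, i.e. the steady-state error variance
steady = math.sqrt(2.0) - 1.0
```

The integration converges to the positive root of the algebraic Riccati equation, the steady-state filtering-error variance.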

The Cameron-Martin Type Formula
Here, we start with a $p$-dimensional Gaussian process $X$, as before, and a given arbitrary increasing, absolutely continuous, deterministic function $\langle N\rangle = (\langle N\rangle_t, t \ge 0)$ with values in the set of non-negative symmetric $p \times p$ matrices. We want to compute the Laplace transform $\mathscr{L}(t)$ defined by (3). Extending the filtering approach for one-dimensional processes given in [6], we can prove the following statement.
The key point in the proof of this theorem is to describe an appropriate filtering problem of the type studied above and to extend the analysis beyond Theorem 1.
We take $q = p$ and we choose $N = (N_t, t \ge 0)$, with $N_0 = 0$, as an $\mathbb{R}^p$-valued Brownian motion with covariance function $\langle N\rangle$ that is independent of the given process $X$. We also choose $R(s) = Q(s)$ where, again, the notation $d\langle N\rangle_s = Q(s)\,ds$ is used, and we define the $\mathbb{R}^p$-valued observation process $Y = (Y_t, t \ge 0)$ by the corresponding equation (1), i.e.,

$$Y_t = \int_0^t Q(s) X_s \, ds + N_t, \qquad t \ge 0.$$

Finally, we define the auxiliary process $\zeta = (\zeta_t, t \ge 0)$ by

$$\zeta_t = \int_0^t X_s' \, dY_s, \qquad t \ge 0, \tag{14}$$

and set (15). We now state the following key result.
Lemma 1: For any $t \ge 0$, the following equality holds:

$$\mathscr{L}(t) = \exp\Big\{-\frac{1}{2}\int_0^t \big(\pi_s(X) - \gamma_x(s)\big)' Q(s) \big(\pi_s(X) - \gamma_x(s)\big)\,ds\Big\} \times \exp\Big\{-\frac{1}{2}\int_0^t \operatorname{tr}\big[Q(s)\,\gamma_{xx}(s)\big]\,ds\Big\}. \tag{16}$$

Before presenting the proof of Lemma 1, it should be mentioned that equality (16) states, in particular, that the difference $\pi_s(X) - \gamma_x(s)$ is itself deterministic. Moreover, from a comparison of equations (12) and (16), it is clear from Lemma 1 that to prove Theorem 2 it is only necessary to show that the quantities $\gamma_{xx}(s)$ and $\pi_s(X) - \gamma_x(s)$ are just $\gamma(s,s)$ and $z_s$, where $\gamma(s,s)$ and $z_s$ are given by equations (6) and (13) respectively, with $R'(s)Q^{-1}(s)R(s)$ replaced by $Q(s)$. These steps are now used to prove Lemma 1.
Proof of Lemma 1: It is easy to check that the function $\mathscr{L}$ is absolutely continuous and that the corresponding derivative is $-L/2$, where

$$L(t) = E\big[X_t'\,Q(t)\,X_t\,e^{-I_t}\big], \qquad I_t = \frac{1}{2}\int_0^t X_s'\,Q(s)\,X_s\,ds.$$

Therefore, the following representation holds.

Particular Cases
In the one-dimensional case, specific instances of Markovian and non-Markovian Gaussian processes to which the above results about filtering and Cameron-Martin type formulas can be applied have been reported in [6] (see therein for further references on Laplace transforms of quadratic functionals). We now discuss some multidimensional examples where our results can be further elaborated.

Gauss-Markov Processes
First we discuss the standard Gauss-Markov case, where the $\mathbb{R}^p$-valued process $X$ is governed by the stochastic differential equation

$$dX_t = A(t) X_t \, dt + dW_t, \qquad t \ge 0,$$

with initial condition $X_0$, where $A = (A(t), t \ge 0)$ is a $p \times p$ matrix-valued continuous function, $W = (W_t, t \ge 0)$ is a Brownian motion in $\mathbb{R}^p$ such that $d\langle W\rangle_t = D(t)\,dt$, and $X_0$ is a Gaussian initial condition, independent of $W$, such that $E X_0 = m$ and $E(X_0 - m)(X_0 - m)' = \Lambda$.
Now, denote by $\Pi_s$ the solution of the differential equation $\dot\Pi_s = A(s)\Pi_s$, $s \ge 0$, with the initial condition $\Pi_0 = I_p$ (the $p \times p$ identity matrix). Then we have

$$m_s = \Pi_s m, \qquad K(t,s) = \Pi_t \Pi_s^{-1} K(s,s), \quad 0 \le s \le t,$$

where $K(s,s)$ is the solution to the Lyapunov differential equation

$$\frac{d}{ds} K(s,s) = A(s) K(s,s) + K(s,s) A'(s) + D(s), \qquad s \ge 0, \quad K(0,0) = \Lambda.$$
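In the scalar, constant-coefficient case ($A \equiv a$, $D \equiv d$), the Lyapunov equation has the closed form $K(s,s) = e^{2as}\Lambda + d\,(e^{2as} - 1)/(2a)$, which a forward-Euler integration reproduces. A sketch with hypothetical values (not from the paper):

```python
import math

def lyapunov_scalar(a=-0.5, d=1.0, lam=2.0, s_end=1.0, dt=1e-4):
    """Forward-Euler integration of the scalar Lyapunov ODE
    K'(s) = 2*a*K(s) + d,  K(0) = lam  (hypothetical coefficients)."""
    k = lam
    for _ in range(int(round(s_end / dt))):
        k += dt * (2.0 * a * k + d)
    return k

a, d, lam, s_end = -0.5, 1.0, 2.0, 1.0
k_num = lyapunov_scalar(a, d, lam, s_end)
k_exact = math.exp(2*a*s_end) * lam + d * (math.exp(2*a*s_end) - 1.0) / (2*a)
```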

4.2 Iterated Integrals of a Brownian Motion