Monte-Carlo Galerkin Approximation of Fractional Stochastic Integro-Differential Equation



Introduction
Recently, many applications in numerous fields, such as viscoelastic materials, signal processing, control, quantum mechanics, meteorology, finance, and the life sciences, have been remodeled in terms of fractional calculus, where derivatives and integrals of fractional order are introduced, so that differential equations of fractional order are involved in these models; see [1-5]. Generally, a fractional integro-differential equation with Caputo's definition of the fractional derivative takes the form

  D^α u(t) = f(t, u(t)) + ∫_{t_0}^{t} h(t, s, u(s)) ds,  u(t_0) = u_0,  t_0 ≤ t ≤ T,  0 ≤ α ≤ 1.  (1.1)
Also, in recent years, the development of adequate statistical techniques for stochastic systems has constantly been in the focus of scientific attention because of its outstanding importance for a number of physical applications, such as turbulence, heterogeneous flows and materials, random noises, and so forth; see [6]. The equation considered here carries the initial condition u(t_0; w) = u_0 with 0 ≤ α ≤ 1. This fractional stochastic differential equation (FSDE) is a generalization of the fractional Fokker-Planck equation, which describes the random walk of a particle; see [9]. The aim of this paper is threefold. First, we rewrite the above equation as the stochastic differential equation

  du(t) = a(t, u; w) dt + b(t, u; w) dW(t).  (1.4)

Secondly, we prove the existence and uniqueness of the solution of this equation. Thirdly, we present a numerical method that combines the finite element method with the Monte Carlo method.
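An equation of the form (1.4) can be simulated path by path with the Euler-Maruyama scheme. The following sketch is illustrative only: the drift `a` and diffusion `b` used in the demo line are hypothetical placeholders, not the coefficients constructed later in the paper.

```python
import numpy as np

def euler_maruyama(a, b, u0, t0=0.0, T=1.0, n=1000, rng=None):
    """Simulate one sample path of du(t) = a(t, u) dt + b(t, u) dW(t)
    on [t0, T] with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(rng)
    dt = (T - t0) / n
    t, u = t0, u0
    path = [u0]
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment, N(0, dt)
        u = u + a(t, u) * dt + b(t, u) * dW
        t += dt
        path.append(u)
    return np.array(path)

# Hypothetical coefficients, for illustration only: a = -u, b = 0.1.
path = euler_maruyama(a=lambda t, u: -u, b=lambda t, u: 0.1, u0=1.0, rng=42)
```

With `b = 0` the scheme reduces to the explicit Euler method for the deterministic part, which gives a quick sanity check of the implementation.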

Preliminaries
In this section we collect a few known results to which we refer frequently in the sequel.
(i) A family of random variables X(t, ω), t ∈ [0, ∞), is called a stochastic process. For each point ω ∈ Ω, the mapping t → X(t, ω) is a realization, sample path, or trajectory of the stochastic process.

(ii) Let L be the σ-algebra of Lebesgue subsets of R, and let L²_T be the class of functions f : [0, T] × Ω → R satisfying the following: f is jointly measurable, f is adapted to the underlying filtration, and E ∫_0^T f(t, ·)² dt < ∞.

Definition 2.2. A real-valued stochastic process that depends continuously on t is called a continuous stochastic process.

Definition 2.4 (Convergence). If u_h is the exact value of a random variable and u_h^n is its approximate value, then we have (1) strong convergence when E|u_h − u_h^n| → 0, (2) weak convergence when |E(u_h) − E(u_h^n)| → 0, and (3) mean-square convergence when E|u_h − u_h^n|² → 0.

Definition 2.5. Let y(x) ∈ L¹[a, b]. We define the fractional integral of order α, 1 ≥ α > 0, of the function y(x) over the interval [a, b] as

  J^α_a y(x) = (1/Γ(α)) ∫_a^x (x − t)^{α−1} y(t) dt.  (2.4)
For completeness, we define J^0 = I (the identity operator); that is, J^0 y(x) = y(x). Furthermore, by J^α_{a+} y(x) we mean the limit (if it exists) of J^α_c y(x) as c → a+. This is the Riemann-Liouville definition of the fractional integral of arbitrary order α. For simplicity, we write the fractional integral as J^α y(x), dropping the subscript a. The fractional derivative of y(x) of order α is then D^α y(x) = D^m J^{m−α} y(x), where m is the smallest integer with m ≥ α. For numerical computations, we usually use another definition of the fractional derivative, due to Caputo [2], given by

  ^C D^α y(x) = D^α (y(x) − y(a)).  (2.7)
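Definition 2.5 can be checked numerically against the closed form J^α t = t^{1+α}/Γ(2+α). The quadrature below is our own sketch, not part of the paper's method: it uses a product midpoint rule in which the singular weight (x − t)^{α−1} is integrated exactly on each subinterval.

```python
import math

def frac_integral(y, alpha, x, n=1000):
    """Riemann-Liouville fractional integral
    J^alpha y(x) = (1/Gamma(alpha)) * int_0^x (x - t)^(alpha - 1) y(t) dt,
    computed with a product midpoint rule: the singular weight
    (x - t)^(alpha - 1) is integrated exactly on each subinterval."""
    h = x / n
    total = 0.0
    for k in range(n):
        tk, tk1 = k * h, (k + 1) * h
        # exact integral of the weight over [tk, tk1]
        w = ((x - tk) ** alpha - (x - tk1) ** alpha) / alpha
        total += y(0.5 * (tk + tk1)) * w
    return total / math.gamma(alpha)

# Closed form to compare against: J^alpha t = t^(1 + alpha) / Gamma(2 + alpha).
approx = frac_integral(lambda t: t, alpha=0.5, x=1.0)
exact = 1.0 / math.gamma(2.5)
```

Handling the weight analytically avoids evaluating (x − t)^{α−1} at the singular endpoint t = x, which would otherwise dominate the quadrature error.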

Formulation of the Problem
In this paper, we present a numerical method to solve the FSIDE (3.1) with initial condition u(t_0; w) = u(t_0). For simplicity of notation, we drop the variable w, so that the equation takes the form (3.2), where ^C D^α is the Caputo fractional-derivative operator of order α with 0 < α ≤ 1 and t ∈ [t_0, T]. The integral term on the right-hand side is an Itô integral, W is a Brownian motion (BM), and dW/dt is a white noise, the derivative of a BM (the derivative in the sense of distributions). The functions A(t), B(t, s; w), f(t; w), u(t_0; w), and u(t; w) satisfy the following conditions:

(C1) (measurability) A = A(t), B = B(s, t), and u(t) are L²-measurable on [0, 1] × R.

Equation (3.2) may be rewritten in integral form: using the definition of the operator ^C D^{−α}, (2.7), we obtain a Fredholm integral equation, which we solve instead of (3.2). Without loss of generality, we let t_0 = 0 and T = 1 throughout; that is, we assume that the time parameter set of the processes considered is [0, 1]. The resulting equation can be written as a stochastic integral equation, (3.5), whose drift is given by (3.7).

The proofs of this lemma and the next lemma are straightforward.

Mathematical Problems in Engineering
With the help of these two lemmas, we can establish the existence and uniqueness theorem for (3.5); for the proof, see Theorem 4.5.3 of Kloeden and Platen [10].

Theorem 3.3. Under the assumptions (C1)-(C3) (or (A1)-(A4)), the stochastic integral equation (3.6), and hence (3.2), has a pathwise unique strong solution u(t) on [0, 1].
The existence of a unique solution of the SDE (3.2) ensures the existence of the integrals on the right-hand side of (3.5) at each point in the domain of definition.

The Monte Carlo Galerkin Finite Element Method
It is known that, to introduce a finite element method (FEM) approximating the solution of a differential equation (DE), we first need a weak formulation of the DE in the standard FEM sense, which is not possible in the presence of the white noise. In our method, we do not approximate the white noise using the FEM; instead, we follow the approach of Allen et al. [11], who suggested a smoother approximation of the white-noise process when computing approximate solutions of stochastic differential equations.
They suggested the following approximation for the one-dimensional white-noise process Ẇ(t), t ∈ [0, 1]. Consider the uniform time discretization of the interval [0, 1],

  0 = t_0 < t_1 < · · · < t_n = 1,  t_{j+1} − t_j = ρ = 1/n.

Then the following approximation is defined for the white-noise process Ẇ(t) on this discretization:

  dŴ(t)/dt = Σ_{j=0}^{n−1} η_j χ_{[t_j, t_{j+1})}(t),  (4.3)

where the coefficients η_j = (W(t_{j+1}) − W(t_j))/ρ are random variables.
As a direct result of this approximation of the white noise, it is easy to see that E[∫_0^1 g(t)(dW(t) − dŴ(t))] = 0, while E[∫_0^1 g(t)(dW(t) − dŴ(t))]² tends to zero with ρ, for any bounded function g. Now dŴ(t) can be substituted for dW(t) to obtain the smoothed version, (4.4), of the SDE (3.5). The solution of this equation, û(t), is smoother than u(t), and therefore standard numerical procedures can be applied to compute its approximate solution. Once the approximate solution û is obtained for M realizations, the Monte Carlo method uses these approximations to compute the corresponding sample averages over the M realizations. Now we show that û(t) is a good approximation of u(t). To show this, the following lemma is required.
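The smoothed noise is simple to generate from Brownian increments. A minimal sketch, assuming the piecewise-constant form above (the function name and parameters `n`, `rng` are our own):

```python
import numpy as np

def smoothed_white_noise(n, rng=None):
    """Piecewise-constant white-noise approximation on [0, 1]:
    with step rho = 1/n, set eta_j = (W(t_{j+1}) - W(t_j)) / rho,
    so that the smoothed derivative hat{W}'(t) equals eta_j on
    [t_j, t_{j+1})."""
    rng = np.random.default_rng(rng)
    rho = 1.0 / n
    dW = rng.normal(0.0, np.sqrt(rho), size=n)  # Brownian increments
    eta = dW / rho
    return eta, rho

eta, rho = smoothed_white_noise(1000, rng=0)
W1 = np.sum(eta * rho)  # integrating d-hat-W over [0, 1] recovers W(1)
```

By construction, each η_j has mean 0 and variance 1/ρ, and the pathwise integral of dŴ over [0, 1] reproduces W(1) exactly.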

Lemma 4.1. Given a nonrandom function H(t) bounded in terms of a constant λ > 0, and letting 0 = t_0 < t_1 < · · · < t_n = z be a partition of the interval [0, z], one has the estimate (4.8).
With the help of this lemma, we can show that û(t) → u(t) as ρ → 0.
Theorem 4.2. One has the bound (4.9), where λ_2 is the constant given by (4.10).

Proof. Let ε(t) = u(t) − û(t); then (3.5) and (4.4) lead to (4.11). Applying the inequality (a + b + c)² ≤ 3(a² + b² + c²) and Hölder's inequality to this equation, we obtain (4.12), where λ_2 is given by (4.10), together with (4.13). Taking expectations on both sides and letting e(t) = E[ε(t)²], we arrive at (4.15), as claimed.

In the rest of this section, we construct our numerical method, which enables us to solve (4.4) numerically. Let {ψ_j(t), j = 0, 1, 2, . . . , J} be a set of deterministic orthogonal functions with weight function ν(t) and interval of orthogonality [a, b]. Also, assume that

  ∫_a^b ν(t) ψ_k(t) ψ_j(t) dt = δ_{jk}.  (4.16)
Now, for t ∈ [0, 1], we assume that

  u(t) ≈ u_J(t) = u(0) + ν(t) Σ_{j=0}^{J} c_j ψ_j(t).  (4.17)

Substituting (4.17) into (4.4) gives (4.18). Multiplying this equation by ψ_k(t) and integrating the result over the interval [a, b], we obtain (4.19), with the coefficient integrals defined for j, k = 0, 1, 2, . . . , J. In the case of white noise, we may evaluate the function V_j(z) by regularizing the stochastic term, that is, by replacing the white noise with a smoother stochastic term; in other words, the last two equations may be replaced with their smoothed counterparts.
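The structure of the resulting linear system can be illustrated on a deterministic analogue. The sketch below is a simplification of the paper's scheme (no stochastic term, no weight ν in the expansion, and shifted Legendre polynomials standing in for the orthonormal set): it solves a Fredholm equation u(t) = f(t) + ∫_0^1 K(t, s) u(s) ds by Galerkin projection, producing a linear system of the form (I − P)C = Q.

```python
import numpy as np

def galerkin_fredholm(f, K, J=4, nq=32):
    """Galerkin solution of u(t) = f(t) + int_0^1 K(t, s) u(s) ds
    with orthonormal shifted Legendre polynomials on [0, 1];
    the coefficients solve the linear system (I - P) C = Q."""
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
    x, w = np.polynomial.legendre.leggauss(nq)
    t, wt = 0.5 * (x + 1.0), 0.5 * w

    def eval_basis(s):
        s = np.asarray(s)
        return np.array([np.sqrt(2 * j + 1) *
                         np.polynomial.legendre.Legendre.basis(j)(2 * s - 1)
                         for j in range(J + 1)])

    psi = eval_basis(t)                   # (J+1) x nq basis values at the nodes
    Kmat = K(t[:, None], t[None, :])      # kernel on the tensor grid
    P = (psi * wt) @ Kmat @ (psi * wt).T  # P[k, j] = <psi_k, K psi_j>
    Q = psi @ (wt * f(t))                 # Q[k]    = <psi_k, f>
    C = np.linalg.solve(np.eye(J + 1) - P, Q)
    return lambda s: C @ eval_basis(s)
```

For the manufactured kernel K(t, s) = ts/2 and f(t) = 5t/6, the exact solution u(t) = t lies in the polynomial trial space, so the Galerkin approximation recovers it to machine precision.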

The Solution of the Stochastic Linear System
The linear system (4.19) may be rewritten as (4.22), where the coefficients are given by (4.23).

Theorem 4.3. This system has a unique solution if condition (4.24) holds.
Proof. The above system can be written in matrix form as (I − P)C = Q, where C = (c_0, c_1, c_2, . . . , c_J)^T, and Cramer's rule gives the solution of this linear system, (4.27), in which det(I − P) is the determinant of (I − P), Q_j is the jth component of the vector Q, and det(P*_{ij}) agrees with det(I − P) except for the ith column, where all but the jth term are 0 and the jth term is 1, provided that det(I − P) ≠ 0; see [12]. It is known that the inverse of the matrix (I − λP) exists for all values of λ with the exception of at most J + 1 values; these are the roots of the characteristic determinantal equation det(I − λP) = 0. Therefore, the condition det(I − P) ≠ 0 holds if λ = 1 is not an eigenvalue of the matrix P. According to the Gershgorin theorem, all eigenvalues of the matrix P lie in the circles S_i = {z : |z − γ*_{ii}| ≤ Σ_{j≠i} |γ*_{ij}|}, i = 0, 1, 2, . . . , J. Hence the system (4.22) has a unique solution, given by (4.27), if (4.24) is satisfied. As a consequence of this theorem, our numerical method depends on the choice of the orthonormal set {ψ_j(t), j = 0, 1, 2, . . . , J}; this set should be chosen in such a way that the requirement (4.24) of the above theorem is satisfied.
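The Gershgorin condition of Theorem 4.3 is easy to verify in practice: if every Gershgorin disc of P excludes z = 1, then 1 is not an eigenvalue and (I − P)C = Q is uniquely solvable. In the sketch below, the matrix P is a made-up example, not one produced by the method.

```python
import numpy as np

def gershgorin_excludes_one(P):
    """Sufficient condition for det(I - P) != 0: every Gershgorin disc
    {z : |z - P_ii| <= sum_{j != i} |P_ij|} excludes z = 1, so that
    lambda = 1 cannot be an eigenvalue of P."""
    P = np.asarray(P, dtype=float)
    radii = np.abs(P).sum(axis=1) - np.abs(np.diag(P))
    return bool(np.all(np.abs(1.0 - np.diag(P)) > radii))

# Made-up example matrix, for illustration only.
P = np.array([[0.20, 0.10, 0.00],
              [0.00, 0.30, 0.10],
              [0.10, 0.00, 0.25]])
Q = np.ones(3)
C = np.linalg.solve(np.eye(3) - P, Q)  # safe here: the Gershgorin test passes
```

The test is only sufficient: a matrix may fail it and still have det(I − P) ≠ 0, in which case one falls back on computing the eigenvalues directly.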

Numerical Examples
In this section, we give two examples. The first example is a fractional differential equation in which there is no stochastic term. The second example is a stochastic differential equation in which the differentiation is ordinary, not of fractional order.
For each of these examples, we use the Jacobi polynomials G_k(p, q, x) of degree k (see [13]) with weight ν(x) = x^{q−1}(1 − x)^{p−q}, which satisfy the orthonormality condition

  ∫_0^1 ν(x) G_k(p, q, x) G_j(p, q, x) dx = δ_{jk}.  (5.2)

Example 5.1. With exact solution u(t) = t, Table 1 gives the error (the absolute value of the difference between the exact solution and the approximate solution) for different values of J, and Figure 1 compares the graph of the exact solution with those of the approximate solutions for J = 2, 4, and 8. As an application of this example, one may consider logistic population growth.
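Orthogonality with respect to the Jacobi-type weight ν(x) = x^{q−1}(1 − x)^{p−q} can be reproduced numerically. The sketch below uses an illustrative parameter choice p = 3, q = 2 (our own, not the paper's), and Gram-Schmidt in place of the classical G_k normalization, to build a set orthonormal on [0, 1] under this weight.

```python
import numpy as np

# Illustrative parameters (our choice): p = 3, q = 2, so the Jacobi-type
# weight is nu(x) = x^(q-1) (1 - x)^(p-q) = x (1 - x) on [0, 1].
p_par, q_par = 3, 2
xg, wg = np.polynomial.legendre.leggauss(40)
x, w = 0.5 * (xg + 1.0), 0.5 * wg        # Gauss nodes/weights on [0, 1]
nu = x ** (q_par - 1) * (1.0 - x) ** (p_par - q_par)

def weighted_dot(f, g):
    """<f, g> = int_0^1 nu(x) f(x) g(x) dx, by Gauss quadrature."""
    return float(np.sum(w * nu * f * g))

# Gram-Schmidt on the monomials 1, x, x^2, ... (values at the nodes)
# produces a set orthonormal with respect to the weight nu.
basis = []
for k in range(4):
    v = x ** k
    for b in basis:
        v = v - weighted_dot(v, b) * b
    basis.append(v / np.sqrt(weighted_dot(v, v)))
```

With this polynomial weight the Gauss quadrature is exact for all the inner products involved, so the computed Gram matrix is the identity up to round-off.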

Conclusion
Our numerical method can be applied to many different FSIDEs of the form (3.2). From our numerical computations, we observe the following.

(1) There is no restriction on the choice of the orthogonal polynomials. Moreover, the values of the parameters p and q of the chosen orthogonal polynomials (see (5.1)) do not play any role in the computations, although more information about the expected value of the exact solution would be helpful in determining appropriate values of these parameters. In all cases, the assumed approximation should agree with the initial condition and with the exact solution.


Table 1: The error term, Example 5.1, for different values of J.