This paper presents a computational study of a quasi-Galerkin projection-based method for a class of systems of random ordinary differential equations (r.o.d.e.’s) that depend on a finite number of random variables (r.v.’s). This class of systems appears in different areas, particularly in epidemiological modelling. In contrast with other available Galerkin-based techniques, such as generalized Polynomial Chaos, the proposed method expands the solution directly in terms of the random inputs rather than of auxiliary r.v.’s. Theoretically, Galerkin projection-based methods take advantage of orthogonality in order to simplify the computations involved in solving r.o.d.e.’s, that is, in computing both the solution and its main statistical functions, such as the expectation and the standard deviation. This approach requires the prior determination of an orthonormal basis which, in practice, can become computationally burdensome and, as a consequence, can ruin the method. Motivated by this fact, we present a technique for r.o.d.e.’s that avoids constructing an orthogonal basis and remains computationally competitive even under statistical dependence among the random input parameters. Through a wide range of examples, including a classical epidemiological model, we show the ability of the method to solve r.o.d.e.’s.
1. Introduction and Motivation
The nondeterministic nature of phenomena in areas such as engineering, physics, chemistry, and epidemiology often leads to continuous mathematical models formulated by random ordinary differential equations (r.o.d.e.’s). The uncertainty in such phenomena enters through the input model parameters (coefficients, source/forcing term, initial/boundary conditions), which are then modelled as random variables (r.v.’s) and stochastic processes (s.p.’s) rather than as constants and ordinary functions, respectively. The complexity of such stochastic continuous models becomes greater when the random inputs are assumed to be statistically dependent, a hypothesis often met in practice.
Solving a r.o.d.e. consists of not only computing its solution, which is a s.p., but also its main statistical characteristics such as the mean and standard deviation/variance functions. To achieve these objectives, a high number of methods have been proposed, although we underline that most of them rely on the statistical independence of the random input parameters. A good account of such methods can be found in [1].
Because it shares common foundations with the method that we will present later, here we highlight a family of powerful methods for dealing with r.o.d.e.’s, usually referred to as generalized Polynomial Chaos (gPC). These techniques are based on the Galerkin projection method [2–6]. In this context, all involved random magnitudes are assumed to be elements of the set L2=L2(Ω,𝔉,P), where (Ω,𝔉,P) is a probability space; this means that they have finite variance. The set L2, endowed with the inner product 〈ζ1,ζ2〉=E[ζ1ζ2], ζ1=ζ1(ω),ζ2=ζ2(ω)∈L2, where E[·] denotes the expectation operator, is a Hilbert space [7]. The gPC method takes advantage of the following key property: a finite set of orthogonal functions {Φi(ξ)}i=0P in L2, depending on a number of r.v.’s ξ=ξ(ω)=(ξ1=ξ1(ω),ξ2=ξ2(ω),…), ω∈Ω, spans a finite-dimensional subspace 𝒮 of L2. Then, given an element of L2, which can be a r.v., ζ, or a s.p., y(t), we can obtain an approximation of ζ or y(t) as its projection onto 𝒮; that is,
(1)ζ≈∑i=0Pζ^iΦi(ξ),y(t)≈∑i=0Py^i(t)Φi(ξ),
where the so-called Fourier coefficients ζ^i and y^i(t) are given by
(2)ζ^i=〈ζ,Φi(ξ)〉,y^i(t)=〈y(t),Φi(ξ)〉,0≤i≤P,
respectively.
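As an illustration of the projections (1)-(2), the sketch below numerically computes the Fourier coefficients of a simple r.v. ζ=ξ², where ξ is a standard Gaussian r.v. and the basis {Φi} is taken, for this example only, to be the probabilists' Hermite polynomials; since that basis is orthogonal but not orthonormal, each coefficient is normalized by 〈Φi,Φi〉:

```python
import numpy as np

# Gauss-Hermite quadrature for expectations under a standard Gaussian:
# hermegauss uses the weight e^{-x^2/2}, so we normalize the weights to sum to 1.
nodes, weights = np.polynomial.hermite_e.hermegauss(30)
weights = weights / weights.sum()

def inner(f, g):
    """<f, g> = E[f(xi) g(xi)], approximated by quadrature."""
    return float(np.sum(weights * f(nodes) * g(nodes)))

def He(i):
    """i-th probabilists' Hermite polynomial He_i (orthogonal, E[He_i^2] = i!)."""
    c = np.zeros(i + 1)
    c[i] = 1.0
    return lambda x: np.polynomial.hermite_e.hermeval(x, c)

# Fourier coefficients of zeta = xi^2 as in (2), normalized by <Phi_i, Phi_i>
zeta = lambda x: x**2
coeffs = [inner(zeta, He(i)) / inner(He(i), He(i)) for i in range(6)]
# since xi^2 = He_2(xi) + 1, the coefficients are approximately [1, 0, 1, 0, 0, 0]
```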
In the following, we summarize how the gPC technique deals with r.o.d.e.’s. Let us consider a r.o.d.e. of the form:
(3)𝔇(t,ζ;y)=f(t,ζ),
where 𝔇 denotes a differential operator, the vector ζ=ζ(ω)=(ζ1=ζ1(ω),ζ2=ζ2(ω),…,ζs=ζs(ω)) represents the random parameters, y=y(t) is the solution s.p. to be determined, and f(t,ζ(ω)) is a forcing term. In this context, s is usually referred to as the dimension of the chaos. In order to avoid cumbersome notation in the subsequent presentation, we will assume that s=1; that is, there is only one random parameter: ζ=ζ(ω). In a first step, one substitutes the approximate representations of ζ and y(t) given by (1) into the r.o.d.e. (3). Second, one multiplies the r.o.d.e. by every element of the expansion basis {Φi(ξ)}i=0P, and then the ensemble average defined by the inner product 〈ζ1,ζ2〉=E[ζ1ζ2] is taken. This leads to
(4)〈𝔇(t,∑i=0Pζ^iΦi(ξ);∑i=0Py^i(t)Φi(ξ)),Φj(ξ)〉=〈f(t,∑i=0Pζ^iΦi(ξ)),Φj(ξ)〉,0≤j≤P,
which corresponds to a deterministic system of P+1 coupled differential equations whose unknowns are the coefficient functions y^i(t). These unknowns can be computed by standard numerical techniques, such as a Runge-Kutta scheme. According to expression (1), this determines an approximation of the solution s.p. y(t). Approximations to the expectation and variance functions of y(t) are computed as follows:
(5)E[y(t)]≈y^0(t),Var[y(t)]≈∑i=1P(y^i(t))2E[(Φi(ξ))2],
respectively.
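The Galerkin procedure (3)-(5) can be sketched for a concrete linear r.o.d.e. The example below is our own illustrative choice, not taken from the paper: it solves ẏ(t)+ξy(t)=0, y(0)=1, with ξ a standard Gaussian r.v., expanding y(t) in probabilists' Hermite chaos and recovering the mean and variance through (5):

```python
import numpy as np
from scipy.integrate import solve_ivp

P = 10                                        # chaos truncation order
nodes, w = np.polynomial.hermite_e.hermegauss(40)
w = w / w.sum()                               # quadrature weights for E[.] under N(0,1)

def he(i, x):
    c = np.zeros(i + 1)
    c[i] = 1.0
    return np.polynomial.hermite_e.hermeval(x, c)

Phi = np.array([he(i, nodes) for i in range(P + 1)])   # Phi[i, k] = He_i(node_k)
norms = (w * Phi**2).sum(axis=1)                       # E[Phi_i^2] = i!
# triple products E[xi Phi_i Phi_j] appearing in the Galerkin system (4)
T = np.einsum('k,k,ik,jk->ij', w, nodes, Phi, Phi)

def rhs(t, yhat):
    # <ydot + xi y, Phi_j> = 0  =>  ydot_j = -sum_i yhat_i E[xi Phi_i Phi_j] / E[Phi_j^2]
    return -(T @ yhat) / norms

y0 = np.zeros(P + 1)
y0[0] = 1.0                                   # deterministic initial condition y(0) = 1
sol = solve_ivp(rhs, (0.0, 1.0), y0, rtol=1e-10, atol=1e-12)
yhat = sol.y[:, -1]

mean = yhat[0]                                # (5): E[y(t)] ~ yhat_0(t)
var = np.sum(yhat[1:]**2 * norms[1:])         # (5): Var[y(t)] ~ sum yhat_i^2 E[Phi_i^2]
# exact solution y(t) = e^{-xi t}: E[y(1)] = e^{1/2} and Var[y(1)] = e^2 - e
```

Note that the triple products are evaluated by quadrature rather than closed-form recurrences; for Hermite polynomials this is exact as long as enough quadrature nodes are used.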
Now, we point out some aspects of gPC that will help to better highlight the differences with the method presented in the next section. When dealing with r.o.d.e.’s that depend on a number of random parameters, (ζ1,ζ2,…,ζs), the gPC method in general represents these parameters, as well as the solution s.p. y(t), in terms of a family of r.v.’s ξ=(ξ1,ξ2,…) rather than directly in terms of the random inputs ζi, 1≤i≤s. The selection of the r.v.’s ξj, as well as of the orthogonal basis {Φk(ξ)}, must be made according to the probability distributions of the random inputs ζi. In [3], the authors provide a comprehensive criterion for this choice such that exponential convergence of the error measures for the average and variance of the approximations of the solution holds. In [3], the usefulness of this guideline is shown when there is just one random input parameter and it belongs to one of the standard distributions (Poisson, binomial, Gaussian, beta, exponential, etc.). In the usual case where two or more r.v.’s appear as input parameters, no optimal criteria have been established, even when the involved r.v.’s have standard probability distributions.
From a theoretical point of view, the orthogonality of the basis functions {Φi(ξ)}i=0P used in the previous development allows one to cancel some of the involved computations (inner products) when setting up the deterministic system of o.d.e.’s (4), although in practice the total number of cancellations depends strongly on the specific form of the r.o.d.e. (3). However, this theoretical advantage may not compensate for the high computational cost that the orthogonalization process usually entails. Moreover, orthogonalization is a critical part of the process, and it may return vectors with loss of orthogonality, which may ruin the computation of the involved inner products and the building and resolution of system (4) [8, pp. 230–232]. This leads us to present a method, based on the Galerkin projection, that overcomes this drawback. The study focuses on computational aspects.
The paper is organized as follows. In Section 2, we present the Galerkin projection-based method that allows us to solve a certain class of systems of r.o.d.e.’s whose uncertainty is considered through a finite number of dependent r.v.’s. Expressions for the expectation and the variance of the solution s.p. are also given. Section 3 presents the algorithm of the method proposed in Section 2 and discusses its most relevant computational aspects. Section 4 begins with a test example whose exact expressions for the mean and variance are available; hence, the quality of the approximations provided by the proposed method can be properly assessed. This comparative study is completed by showing the competitiveness of our approach against Monte-Carlo simulations. The rest of the section is devoted to a wide range of examples, for both r.o.d.e.’s and systems of r.o.d.e.’s, where the proposed method is satisfactorily applied. In all these examples, randomness is considered through different joint probability distributions to better illustrate the power of the proposed method. Conclusions are drawn in Section 5.
2. Method
Hereinafter, we will consider systems of r.o.d.e.’s of the form:
(6)P(t,ζ;y(t),y˙(t))=0,y(t0,ζ)=y0,
where t is the independent variable, ζ=(ζ1,…,ζs) denotes the vector of random input parameters which can appear in the coefficients and/or the forcing term of the r.o.d.e. as well as the initial condition. Components ζi=ζi(ω), 1≤i≤s, ω∈Ω of ζ are assumed to be elements of the Hilbert space L2=L2(Ω,𝔉,P). fζ(ζ) will denote the joint probability density function (p.d.f.) of ζ. Also, 0=(0,0,…,0)⊤ is the null vector of size q; y(t)=(y1(t),y2(t),…,yq(t))⊤ and y0=(y1(t0),y2(t0),…,yq(t0))⊤ are the vectors of unknown functions and initial conditions, respectively. Here, ⊤ denotes the transpose operator for vectors and matrices. P:ℝ2q+s+1→ℝq is a map which is assumed to be a polynomial transformation of ζi, 1≤i≤s, and yj(t), 1≤j≤q. At this point, notice that an important feature of our approach is that we do not assume statistical independence among input parameters.
Since in this paper we are concerned with constructive computational aspects, we will assume that the conditions for the existence and uniqueness of a solution stochastic process to the initial value problem (6) are satisfied. We point out that the available results mainly consist of a natural generalization of the classical Picard theorem based upon convergence in the space L2 of successive approximations [1, page 118].
In L2, we consider the linear subspace spanned by the canonical polynomials of degree to be equal to or less than m of the s random model parameters, ζi, 1≤i≤s; that is,
(7)ℬs,m={ζi1,…,is:i1+⋯+is=k, 0≤k≤m}, where ζi1,…,is=(ζ1)i1×⋯×(ζs)is.
The number of elements of the set ℬs,m is (s+m)!/s!m!.
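For instance, the basis (7) can be enumerated by listing all multi-indices of total degree at most m; a minimal sketch, with the count checked against (s+m)!/s!m!:

```python
from itertools import product
from math import comb

def canonical_basis(s, m):
    """Multi-indices (i1, ..., is) with i1 + ... + is <= m, ordered by total degree,
    each representing the monomial (zeta_1)^i1 * ... * (zeta_s)^is of (7)."""
    idxs = [idx for idx in product(range(m + 1), repeat=s) if sum(idx) <= m]
    return sorted(idxs, key=sum)

basis = canonical_basis(2, 3)
# the number of elements is (s+m)!/(s! m!) = C(s+m, s); here C(5, 2) = 10
```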
Next, we approximate the unknown s.p. y=y(t) of (6) in terms of elements of ℬs,m (thus, directly in terms of the random model parameters):
(8)y(t)≈∑i1+⋯+is=0myi1,…,is(t)ζi1,…,is.
Therefore,
(9)y˙(t)≈∑i1+⋯+is=0my˙i1,…,is(t)ζi1,…,is.
We substitute both expressions in the r.o.d.e. (6):
(10)P(t,ζ;∑i1+⋯+is=0myi1,…,is(t)ζi1,…,is,∑i1+⋯+is=0my˙i1,…,is(t)ζi1,…,is)=0.
In order to compute the (s+m)!/s!m! unknown functions yi1,…,is(t), we are going to establish a deterministic system of o.d.e.’s. This is achieved by multiplying the previous expressions by the elements ζj1,…,js of the set ℬs,m and then taking the ensemble average defined by the s-dimensional integral:
(11)〈g(ζ),h(ζ)〉=E[g(ζ)h(ζ)]=∫supp(ζ)g(ζ)h(ζ)fζ(ζ)dζ,
where supp(ζ) denotes the support of the random vector ζ of model parameters. This leads to
(12)〈P(t,ζ;∑i1+⋯+is=0myi1,…,is(t)ζi1,…,is,∑i1+⋯+is=0my˙i1,…,is(t)ζi1,…,is),ζj1,…,js〉=0,j1+⋯+js=k,0≤k≤m.
As we are developing the involved random magnitudes directly in terms of ζi, 1≤i≤s, in contrast with the development presented in Section 1 (see expression (4)), we do not need to re-express the components of the random vector ζ of model parameters in terms of auxiliary r.v.’s. Likewise, we can establish the corresponding deterministic initial conditions following a similar reasoning. First, we consider the truncated approximation of the initial condition:
(13)y0≈∑i1+⋯+is=0myi1,…,is(t0)ζi1,…,is,
and following an analogous reasoning, one gets
(14)〈y0,ζj1,…,js〉=∑i1+⋯+is=0myi1,…,is(t0)〈ζi1,…,is,ζj1,…,js〉,j1+⋯+js=k,0≤k≤m.
Expressions (12)–(14) constitute a deterministic initial value problem (i.v.p.), usually referred to as auxiliary system, whose (s+m)!/s!m! unknowns, yi1,…,is(t), i1+⋯+is=k,0≤k≤m, can be computed using some of the numerous numerical techniques available such as the Runge-Kutta scheme. In this manner, an approximation to the solution s.p. y(t) defined by (8) is calculated. In addition, the approximations of the expectation and the variance-covariance matrix of y(t) are given by
(15)E[y(t)]≈∑i1+⋯+is=0myi1,…,is(t)〈ζi1,…,is,1〉,
(16)Σy(t)≈∑i1+⋯+is=0myi1,…,is(t)(yi1,…,is(t))⊤×(〈ζi1,…,is,ζi1,…,is〉−(〈ζi1,…,is,1〉)2),
respectively. Notice that Σy(t) is a square matrix of size q×q whose (j,j) diagonal element represents the variance of yj(t), 1≤j≤q.
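All the inner products in (12)-(16) reduce to joint moments of ζ computed with the density fζ in (11). As a sketch, the snippet below evaluates such moments by numerical integration for a dependent bivariate density (the same one used later in Example 2); the integration limits are specific to that density's triangular support:

```python
from scipy.integrate import dblquad

def joint_pdf(z1, z2):
    """Dependent joint p.d.f.: 24*z1*z2 on the triangle z1, z2 >= 0, z1 + z2 <= 1."""
    return 24.0 * z1 * z2

def moment(i1, i2):
    """<zeta^(i1,i2), 1> = E[(zeta1)^i1 (zeta2)^i2], the building block of (12)-(16)."""
    val, _err = dblquad(
        lambda z2, z1: z1**i1 * z2**i2 * joint_pdf(z1, z2),   # inner variable first
        0.0, 1.0,                                             # z1 limits
        lambda z1: 0.0, lambda z1: 1.0 - z1,                  # z2 limits depend on z1
    )
    return val
# sanity checks: moment(0, 0) = 1 (total mass) and moment(1, 0) = E[zeta1] = 2/5
```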
Remark 1.
For the sake of clarity in the presentation, we have used the multi-index notation (i1,…,is) to express the polynomial basis elements ζi1,…,is, where ζi1,…,is=(ζ1)i1×⋯×(ζs)is with i1+⋯+is=k, 0≤k≤m. As a consequence, we have represented the solution s.p. y(t) in the form (8) rather than (1), where {Φj(ξ)}j=0P, with P=(s+m)!/s!m!−1, are the elements of the corresponding basis. However, notice that both expressions are equivalent except for orthogonality. In fact, writing ζ instead of ξ (since now we directly use the vector of random model parameters to represent the solution), this identification can be checked by considering the following simple tensor product:
(17)Φj(ζ)=(ζ1)i1×⋯×(ζs)is,
for some one-to-one mapping (i1,…,is)→j starting from (0,…,0), which corresponds to j=0. In this way, one gets an explicit identification between the multi-index and the single-index notations.
3. Algorithm
Based on the method previously developed, in this section, we will give an algorithm to compute the expectation and the standard deviation of the solution s.p. of a random i.v.p. of the form (6) whose analytical expressions are given by (15)-(16), respectively.
The inputs of the algorithm are as follows:
the r.o.d.e.’s (model): P(t,ζ;y(t),y˙(t))=0, with random initial condition: y(t0,ζ)=y0;
the random model parameters (this includes both the coefficients/forcing terms of the r.o.d.e. and the initial conditions): ζ=(ζ1,…,ζs);
the joint probability density function (p.d.f.) of ζ: fζ(ζ);
the truncation order m of the polynomial expansion of random model parameters (see expression (7)).
Now, we describe the algorithm.
Step 1. Define the scalar product 〈,〉 given by (11).
Step 2. Build the basis ℬs,m of canonical polynomials ζi1,…,is of degree ≤m of the s random model parameters, as given in (7).
Step 3. Consider the truncated expansions to the solution s.p., its derivative, and the random initial condition given by (8), (9), and (13), respectively.
Step 4. Substitute the above expansions into the model to obtain the random i.v.p. (10) and (13).
Step 5. Obtain the auxiliary system in two phases: first, multiplying each equation of the random i.v.p. (10) and (13) by the polynomials of the basis built in Step 2 and, second, taking the ensemble average inferred by the inner product 〈,〉 constructed in Step 1. In this manner, expressions (12) and (14) are obtained, respectively.
Step 6. Solve numerically the auxiliary system set in Step 5.
Step 7. Compute the expectation and the standard deviation (or equivalently, the variance) of the solution s.p. y(t) taking into account expressions in (15) and (16).
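The steps above can be sketched end to end for a minimal scalar case. The example below is our own illustrative choice: it applies the method to ẏ(t)+ζy(t)=0, y(0)=1, with a single parameter ζ~Uniform(0,1), so that the canonical basis is {1, ζ, …, ζ^m}, the moments E[ζ^k]=1/(k+1) are known in closed form, and the auxiliary system (12) becomes a small linear ODE system:

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 6                                             # truncation degree (Step 2)
mu = np.array([1.0 / (k + 1) for k in range(2 * m + 2)])   # E[zeta^k], zeta ~ U(0,1)

# Step 5: all inner products reduce to moments as in (19).
# G[j, i] = <zeta^i, zeta^j> = mu[i+j],  H[j, i] = <zeta^(i+1), zeta^j> = mu[i+j+1]
G = np.array([[mu[i + j] for i in range(m + 1)] for j in range(m + 1)])
H = np.array([[mu[i + j + 1] for i in range(m + 1)] for j in range(m + 1)])

# auxiliary system from (12) for ydot + zeta*y = 0:  G @ cdot = -H @ c
A = -np.linalg.solve(G, H)

c0 = np.zeros(m + 1)
c0[0] = 1.0                                       # y(0) = 1 is exactly 1 * zeta^0

sol = solve_ivp(lambda t, c: A @ c, (0.0, 1.0), c0, rtol=1e-10, atol=1e-12)  # Step 6
c = sol.y[:, -1]

mean = c @ mu[: m + 1]                            # Step 7, via (15)
# exact solution y(t) = e^{-zeta t}, so E[y(1)] = 1 - e^{-1}
```

For larger m, the Gram matrix G in this sketch is a Hilbert matrix, whose conditioning deteriorates quickly; this is one reason to keep m moderate when working with the plain canonical basis.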
3.1. Computational Aspects and Implementation
From a computational point of view, Step 5 of the above algorithm (the building of the auxiliary system) is the most demanding, because we have to evaluate many inner products that may involve unbounded integrals of transcendental functions. During this step, using the linearity of the inner product, we can express the equations of the auxiliary system in terms of the simplest inner products, of the form
(19)〈ζi1,…,is,ζj1,…,js〉=〈ζi1+j1,…,is+js,1〉,
which are the least difficult to compute, although still not straightforward. This fact has led us to consider directly the canonical basis defined in (7), as opposed to other possibilities such as an orthonormal basis, which is theoretically more appropriate. In any case, every expansion eventually reduces to inner products of the form (19) once the expressions are expanded. Moreover, specifically in the case of an orthonormal basis, its calculation uses the QR decomposition, which is stable in certain implementations but computationally expensive [8], all the more so because we deal with inner products that may be defined by unbounded integrals of transcendental functions.
The algorithm has been implemented using Mathematica [9], and it is available at http://gpcdep.imm.upv.es. We have used the symbolic manipulation features of this software to optimise the algorithm’s performance. Therefore, even though the natural implementation order seems to be the one shown in the presented algorithm, we have decided to implement a symbolic version of Steps 1, 3, 4, and 5 as the first stage, where all the inner products are decomposed into auxiliary inner products as in (19); these new inner products are not evaluated at this stage (they are manipulated symbolically). In doing so, we have found that many of the inner products are repeated. In fact, in Examples 2–8 of Section 4 only around 2%–20% of them are different, which implies an important computational saving in the most critical step. The more independent the r.v.’s are, the greater the saving, because the inner products corresponding to independent r.v.’s can be computed separately. Table 1 collects the percentages of computational saving related to the involved inner products in the examples shown in Section 4.
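The deduplication just described can be illustrated by counting, for the canonical basis (7), how many distinct moments actually remain once every inner product 〈ζi…,ζj…〉 is reduced through (19). A small sketch follows; the counts in Table 1 are larger because they refer to all inner products arising in each full auxiliary system, not just to pairs of basis elements:

```python
from itertools import product

s, m = 2, 3
basis = [idx for idx in product(range(m + 1), repeat=s) if sum(idx) <= m]

# every <zeta^i, zeta^j> reduces, via (19), to the single moment <zeta^(i+j), 1>,
# so the distinct moments are the distinct sums of two basis multi-indices
pairs = [(i, j) for i in basis for j in basis]
moments = {tuple(a + b for a, b in zip(i, j)) for i, j in pairs}

saving = 1.0 - len(moments) / len(pairs)
# 10 basis elements give 100 ordered pairs but only 28 distinct moments (72% saving)
```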
Table 1: Computational saving related to the involved inner products 〈,〉 in the examples presented in Section 4.

Example | No. theoretical 〈,〉 | No. computed 〈,〉 | % of computational saving
2 | 468 | 62 | 86.75%
3 | 177 | 34 | 80.79%
4 | 2484 | 255 | 89.73%
5 | 9905 | 827 | 91.65%
6 | 1615 | 260 | 83.90%
7 | 249 | 51 | 79.51%
8 | 5745 | 68 | 98.82%
Once the auxiliary system has been stated symbolically, we define the inner product and try to use the simplification commands provided by Mathematica to obtain the inner products (19) in a form that can be computed faster. This is usually easy when the inner product is defined over a bounded domain. Over unbounded domains, simplification is often not possible, and the computation of the inner products becomes difficult as the degree of the polynomial increases.
Then, we compute the nonrepeated inner products of the form (19) appearing into the auxiliary system. We must point out that this is the most demanding step.
Next, we substitute the obtained inner products into the auxiliary system, which is then solved numerically using Mathematica’s NDSolve command. Finally, the expectation and the standard deviation are computed.
4. Examples
In this section, we present a variety of examples where the method previously developed is applied. With the aim of better illustrating the ability of the method to deal with a number of dependent r.v.’s, in Examples 1–6 we will consider the following pattern model:
(20)y˙(t)+ζ2(y(t))2+ζ3y(t)+ζ4=0,y(0)=ζ1,
which is a Riccati-type r.o.d.e. that includes the linear model as a particular case when ζ2=0. In these examples, we will increase the number of parameters ζi, 1≤i≤4, that are assumed to be r.v.’s. Different joint probability distributions for the involved r.v.’s will also be considered.
The first two are test examples, since their exact solutions are available; hence, the quality of the approximations provided by our approach can be properly assessed. In the former, we compare the approximations provided by our approach with those obtained by Monte-Carlo simulations, which can be considered the most commonly used approach for dealing with r.o.d.e.’s. In the second test example, we detail all the involved computations in order to clarify the previous notation and methodology.
In Example 7, we illustrate the technique using another r.o.d.e. Finally, we complete the study examples showing the ability of the proposed technique to deal with a classical SIRS-epidemiological model.
The Mathematica source code, its explanation and application to solve Examples 1–7 can be found at http://gpcdep.imm.upv.es.
Example 1.
Let us consider the linear r.o.d.e. as follows:
(21)y˙(t)+ζ3y(t)=0,y(0)=ζ1,
where the input random vector ζ=(ζ1,ζ3) is assumed to have a bivariate gaussian distribution; that is, ζ:N(μζ;Σζ), where
(22)μζ=(0, 1)⊤, Σζ=(3 −1; −1 2).
Denoting by fζ(ζ1,ζ3) the joint p.d.f. of vector ζ=(ζ1,ζ3) and taking into account that the exact solution s.p. of (21) is y(t)=ζ1e-ζ3t, it is easy to check that its (exact) expectation and variance are given by
(23)E[y(t)]=∫−∞∞∫−∞∞ζ1e−ζ3tfζ(ζ1,ζ3)dζ1dζ3=t e(−1+t)t,
(24)Var[y(t)]=∫−∞∞∫−∞∞(ζ1e−ζ3t−t e(−1+t)t)2fζ(ζ1,ζ3)dζ1dζ3=e2(−1+t)t(−t2+e2t2(3+4t2)),
respectively.
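The closed form (23) can be cross-checked by plain Monte-Carlo sampling of the bivariate Gaussian input; a quick sketch, where the sample size, seed, and evaluation time t=0.5 are arbitrary choices for the check:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_vec = np.array([0.0, 1.0])                    # (mu_zeta1, mu_zeta3) from (22)
cov = np.array([[3.0, -1.0], [-1.0, 2.0]])         # Sigma_zeta from (22)
samples = rng.multivariate_normal(mean_vec, cov, size=2_000_000)
z1, z3 = samples[:, 0], samples[:, 1]

t = 0.5
mc_mean = np.mean(z1 * np.exp(-z3 * t))            # E[zeta1 e^{-zeta3 t}] by simulation
exact = t * np.exp((-1 + t) * t)                   # closed form (23)
```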
In order to illustrate the competitiveness of the proposed method against Monte-Carlo simulations, in Figures 1 and 2 we show their relative errors with respect to the exact expectation and standard deviation on the interval [0,7], respectively. To better highlight the differences, a logarithmic scale has been used on the vertical axis of both plots. In accordance with the notation introduced in the previous section, the approximations to both statistical moments have been carried out with q=1, s=2, and m=10 (i.e., polynomials of degree equal to or less than 10 have been used), whereas the Monte-Carlo approximations have been performed using 10^5 simulations. These plots show that the proposed method provides very good approximations, which improve on those generated by the Monte-Carlo technique.
Relative errors of the approximations to the expectation of the solution s.p. y(t) on the interval [0,7] in Example 1 by the proposed method and Monte-Carlo simulations. To highlight better the differences, a logarithmic scale has been used in the vertical axis.
Relative errors of the approximations to the standard deviation of the solution s.p. y(t) on the interval [0,7] in Example 1 by the proposed method and Monte-Carlo simulations. To highlight better the differences, a logarithmic scale has been used in the vertical axis.
Example 2.
Let us consider the r.o.d.e. as follows:
(25)y˙(t)+ζ2(y(t))2=0,y(0)=ζ1,
where ζ1, ζ2 are dependent r.v.’s whose joint p.d.f. is given by
(26)fζ1,ζ2(ζ1,ζ2)={24ζ1ζ2ifζ1,ζ2≥0,ζ1+ζ2≤1,0otherwise.
In Figure 3, we show the approximations of the expectation (solid line) and plus/minus the standard deviation (dashed lines) of the solution s.p. y(t) on the interval [0,10]. Both approximations have been computed using polynomials of degree equal to or less than 3. Then according to the previous notation: q=1, s=2, m=3, the total number of polynomials in the canonical basis is (2+3)!/(2!3!)=10, and these are
(27)ℬ2,3={ζi1,i2}, where ζi1,i2 = 1 if (i1,i2)=(0,0); ζ1 if (i1,i2)=(1,0); ζ2 if (i1,i2)=(0,1); (ζ1)2 if (i1,i2)=(2,0); ζ1ζ2 if (i1,i2)=(1,1); (ζ2)2 if (i1,i2)=(0,2); (ζ1)3 if (i1,i2)=(3,0); (ζ1)2ζ2 if (i1,i2)=(2,1); ζ1(ζ2)2 if (i1,i2)=(1,2); (ζ2)3 if (i1,i2)=(0,3).
Approximations of the expectation (solid line) and plus/minus standard deviation (dashed lines) to the solution s.p. y(t) on the interval [0,10] in Example 2.
The quality of the approximations to the expectation and the standard deviation can be assessed in this example since the corresponding exact expressions are available. Figures 4(a) and 4(b) show the absolute errors for the approximations to the expectation and the standard deviation of the solution s.p. y(t) on the interval [0,10], respectively. Both plots show that the approximations provided by the method presented in Section 2 are reliable.
Absolute errors of the approximation to the expectation (a) and standard deviation (b) of the solution s.p. y(t) on the interval [0,10] in Example 2. (a) Expectations. (b) Standard deviation.
Example 3.
Let us consider the following r.o.d.e. based on (20) where now uncertainty appears through the two coefficients ζ2 and ζ3:
(28)y˙(t)+ζ2(y(t))2+ζ3y(t)=0,y(0)=5.
We assume that the random vector ζ=(ζ2,ζ3) has a bivariate Gaussian distribution; that is, ζ:N(μζ;Σζ), where
(29)μζ=(1, 2)⊤, Σζ=(1/25 −1/150; −1/150 1/75).
Figure 5 shows the approximations of the expectation (solid line) and plus/minus the standard deviation (dashed lines) of the solution s.p. y(t) on the interval [0,1]. In this example, both approximations have been computed using (2+2)!/(2!2!)=6 polynomials of degree equal to or less than 2. Notice that according to the previous notation one has q=1, s=2, and m=2.
Approximations of the expectation (solid line) and plus/minus standard deviation (dashed lines) to the solution s.p. y(t) on the interval [0,1] in Example 3.
Example 4.
Now, we analyse the r.o.d.e. (20) where randomness is considered through three random parameters. Specifically, let the random i.v.p. be as follows:
(30)y˙(t)+ζ2(y(t))2+ζ3y(t)+ζ4=0,y(0)=3.
We will assume that ζ=(ζ2,ζ3,ζ4) is a random vector whose joint p.d.f. is given by
(31)fζ(ζ2,ζ3,ζ4)={(ζ2)2+(ζ3)2+(ζ4)2if0<ζ2,ζ3,ζ4<1,0otherwise.
Using the method presented in Section 2 with (3+3)!/(3!3!)=20 random polynomials of degree equal to or less than 3, that is, m=3 and s=3, we have obtained approximations to the expectation and plus/minus the standard deviation of the solution s.p. y(t) on the interval [0,3]. Notice that in this example q=1. Results are depicted in Figure 6.
Approximations of the expectation (solid line) and plus/minus standard deviation (dashed lines) to the solution s.p. y(t) on the interval [0,3] in Example 4.
Example 5.
In this example, we deal with r.o.d.e. (20) assuming that uncertainty appears through all the parameters ζi, 1≤i≤4. We will assume that the random vectors ζ1=(ζ4,ζ1) and ζ2=(ζ2,ζ3) have multivariate Gaussian distributions; that is, ζi~N(μζi;Σζi), i=1,2, where
(32)μζ1=(1, 2)⊤, Σζ1=(1/50)(2 −1/3; −1/3 2/3); μζ2=(1, 2)⊤, Σζ2=(1/45)(2 −1/3; −1/3 2/3).
Figure 7 shows the approximations of the expectation (solid line) and plus/minus the standard deviation (dashed lines) of the solution s.p. y(t) on the interval [0,2]. In this example, both approximations have been computed using (4+2)!/(4!2!)=15 polynomials of degree equal to or less than 2. Notice that according to the previous notation one has q=1, s=4, and m=2.
Approximations of the expectation (solid line) and plus/minus standard deviation (dashed lines) to the solution s.p. y(t) on the interval [0,2] in Example 5.
Example 6.
In this example, we deal with r.o.d.e. (20) assuming that uncertainty appears through all the parameters ζi, 1≤i≤4. We will assume that the random vector ζ=(ζ1,ζ2,ζ3,ζ4) has a multivariate Gaussian distribution; that is, ζ:N(μζ;Σζ), where
(33)μζ=(0, −1, 5, 3)⊤, Σζ=(1/115)(6 1 2 3; 1 4 4 −1; 2 4 10 2; 3 −1 2 10).
Figure 8 shows the approximations of the expectation (solid line) and plus/minus the standard deviation (dashed lines) of the solution s.p. y(t) on the interval [0,5]. In this example, both approximations have been computed using (4+2)!/(4!2!)=15 polynomials of degree equal to or less than 2. Notice that according to the previous notation one has q=1, s=4, and m=2.
Approximations of the expectation (solid line) and plus/minus standard deviation (dashed lines) to the solution s.p. y(t) on the interval [0,5] in Example 6.
Example 7.
In this example we will study the following nonlinear r.o.d.e.:
(34)ζ1y˙(t)(y(t))2+ζ2y(t)=0,y(0)=2,
where ζ=(ζ1,ζ2) is assumed to have a bivariate Gaussian distribution; that is, ζ~N(μζ;Σζ), where
(35)μζ=(1, 2)⊤, Σζ=(1/3)(6 −1; −1 2).
Figure 9 shows the approximations of the expectation (solid line) and plus/minus the standard deviation (dashed lines) of the solution s.p. y(t) on the interval [0,2]. In this example, both approximations have been computed using (2+2)!/(2!2!)=6 polynomials of degree equal to or less than 2. Notice that according to the previous notation one has q=1, s=2, and m=2.
Approximations of the expectation (solid line) and plus/minus standard deviation (dashed lines) to the solution s.p. y(t) on the interval [0,2] in Example 7.
Example 8.
In this example, we consider the SIRS (Susceptible-Infectious-Recovered-Susceptible) model for the transmission dynamics of the Respiratory Syncytial Virus (RSV) proposed by Weber et al. in [10]. This model is based on the following nonautonomous system of differential equations:
(36)S˙(t)=μ−μS(t)−β(t)S(t)I(t)+γR(t),
I˙(t)=β(t)S(t)I(t)−νI(t)−μI(t),
R˙(t)=νI(t)−μR(t)−γR(t),
S(t0)=S0, I(t0)=I0, R(t0)=R0,
where S(t), I(t), and R(t) are the proportions of susceptible, infectious, and recovered individuals, respectively; μ is the birth rate, assumed to be equal to the mortality rate; γ is the rate of loss of immunity; ν is the rate of loss of infectiousness; and β(t)=b0(1+b1cos(2πt+ϕ)) is the infection transmission rate, with b0 the average transmission rate, b1 the amplitude of the seasonal fluctuation, and ϕ the seasonal phase.
In [10], the authors study the spread of RSV in Finland, obtaining the following parameter values: μ=0.013; γ=360/200=1.8; ν=36; b0=44; b1=0.36; ϕ=0.6. We will assume that the random vectors ζ1=(γ,ν) and ζ2=(b0,b1) have bivariate Gaussian distributions; that is, ζi:N(μζi;Σζi), i=1,2, where
(37)μζ1=(360/200, 36)⊤, Σζ1=(4/605 −4/165; −4/165 45); μζ2=(44, 0.36)⊤, Σζ2=(45 1/100; 1/100 1/2000).
Notice that in this way, all the above r.v.’s have as average the deterministic values given in [10].
Now, we assume that 1% of total population is infectious, so initial conditions are given by S0=0.99, I0=0.01, and R0=0.
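For reference, the deterministic SIRS system (36) at the mean parameter values of [10] can be integrated directly; a minimal sketch that reproduces only the deterministic skeleton, not the random Galerkin computation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# deterministic parameter values from [10] (the means of the random inputs above)
mu, gamma, nu = 0.013, 1.8, 36.0
b0, b1, phi = 44.0, 0.36, 0.6

def beta(t):
    """Seasonal infection transmission rate beta(t) = b0 (1 + b1 cos(2 pi t + phi))."""
    return b0 * (1.0 + b1 * np.cos(2.0 * np.pi * t + phi))

def sirs(t, x):
    S, I, R = x
    return [mu - mu * S - beta(t) * S * I + gamma * R,
            beta(t) * S * I - nu * I - mu * I,
            nu * I - mu * R - gamma * R]

sol = solve_ivp(sirs, (0.0, 15.0), [0.99, 0.01, 0.0], rtol=1e-8, atol=1e-10)
S, I, R = sol.y[:, -1]
# since dS/dt + dI/dt + dR/dt = mu (1 - S - I - R), the total S + I + R stays at 1
```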
In Figures 10, 11, and 12 we have plotted the approximations of the expectation (solid line) and plus/minus the standard deviation (dashed lines) of the susceptible (S(t)), infected (I(t)), and recovered (R(t)), respectively, on the time-interval [0,15]. These approximations have been computed using (4+2)!/(4!2!)=15 polynomials of degree equal to or less than 2. Notice that according to the previous notation one has q=3, s=4, and m=2.
Approximations of the expectation (solid line) and plus/minus standard deviation (dashed lines) to susceptible (S(t)) on the interval [0,15] in Example 8.
Approximations of the expectation (solid line) and plus/minus standard deviation (dashed lines) to infected (I(t)) on the interval [0,15] in Example 8.
Approximations of the expectation (solid line) and plus/minus standard deviation (dashed lines) to recovered (R(t)) on the interval [0,15] in Example 8.
5. Conclusions
In this paper, we have presented a method to deal with dependent randomness in a class of continuous models (systems of random ordinary differential equations), focusing on computational aspects and on its applicability to a wide range of examples. The method shares its basics with generalized Polynomial Chaos (gPC), since it is based on a variation of the Galerkin projection technique. However, the first main difference with respect to the gPC approach is that it represents both the random model inputs and the solution stochastic process directly in terms of the random model parameters, while still providing accurate approximations of the expectation and standard deviation of the solution. Second, the method avoids constructing an orthogonal basis to set up such representations. In this manner, one of the computational bottlenecks of gPC, namely the construction of the so-called auxiliary system (see expressions (12)–(14)) from an orthogonal basis, is avoided. Finally, we have shown through a wide variety of examples the ability of the method to deal successfully with random continuous models whose uncertainty is given by statistically dependent random variables, which is the most complex situation.
Acknowledgments
This work has been partially supported by the Spanish M.C.Y.T. Grants: DPI2010-20891-C02-01 and FIS PI-10/01433; the Universitat Politècnica de València Grant: PAID06-11-2070, and the Universitat de València Grant: UV-INV-PRECOMP12-80708.
References
[1] T. T. Soong, Random Differential Equations in Science and Engineering, Academic Press, New York, 1973.
[2] R. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer, New York, 1991.
[3] D. Xiu and G. Karniadakis, “The Wiener–Askey polynomial chaos for stochastic differential equations,” SIAM Journal on Scientific Computing, vol. 24, no. 2, pp. 619–644, 2002.
[4] B. M. Chen-Charpentier and D. Stanescu, “Epidemic models with random coefficients.”
[5] D. Stanescu, B. Chen-Charpentier, B. J. Jensen, and P. J. S. Colberg, “Random coefficient differential models of growth of anaerobic photosynthetic bacteria.”
[6] M. M. R. Williams, “Polynomial chaos functions and stochastic differential equations.”
[7] M. Loève, Probability Theory, Springer, New York.
[8] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore.
[9] Wolfram Research, Mathematica, 2012, http://www.wolfram.com/products/mathematica.
[10] A. Weber, M. Weber, and P. Milligan, “Modeling epidemics caused by respiratory syncytial virus (RSV),” Mathematical Biosciences, vol. 172, no. 2, pp. 95–113, 2001.