Numerical Procedures for Random Differential Equations

1 Mathematical Modeling and Control, Department of Mathematics, Faculty of Sciences and Techniques of Tangier, Abdelmalek Essaadi University, Tangier, Morocco
2 Research Center STIS, Department of Applied Mathematics and Informatics, ENSET, Mohammed V University, Rabat, Morocco
3 Department of Mechanical Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
4 LTI, Ecole Nationale des Sciences Appliquées de Tanger, Abdelmalek Essaadi University, Tangier, Morocco


Introduction
Stochastic and random differential equations constitute a growing field of great scientific interest. There are mainly three categories of random differential equations. The first and simplest class is the one in which only the initial conditions are random. The second class is characterized by the presence of random nonhomogeneous or input terms, and the third is that of differential equations with random coefficients. To deal with errors and uncertainties, random coefficients have been used increasingly over the last few decades.
This paper focuses on the combined second and third classes because this type of equation offers a natural and rational approach to the mathematical modeling of many physical phenomena. The last decades have witnessed an enormous effort in the fields of parameter uncertainty and random or stochastic differential processes. This is due to the fact that any physical system contains uncertainties, so its real phenomena may be modeled by stochastic differential equations with random or stochastic process coefficients. Such equations account for the approximate knowledge of the numerical values of the physical parameters on which the system depends and have been a matter of intensive investigation.
A number of techniques are available for uncertainty sensitivity and propagation, such as the Monte-Carlo procedure [1, 2], sensitivity analysis methods [3], and polynomial chaos [4, 5], among others. The Monte-Carlo (MC) method has been the mainstream uncertainty quantification technique for decades. It is the most widely used method and is valid for a wide range of problems. However, it is computationally very expensive, since it requires a large number of simulations using the full model.
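As a point of reference, the following minimal sketch illustrates Monte-Carlo uncertainty propagation for a toy second-order equation with one random stiffness coefficient; the model, parameter values, and distribution are illustrative only and are not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def solve_once(k):
    # One deterministic solve of a toy oscillator u'' + k*u = sin(t)
    # for a sampled stiffness k.
    def rhs(t, y):
        return [y[1], np.sin(t) - k * y[0]]
    return solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0],
                     t_eval=np.linspace(0.0, 10.0, 201)).y[0]

# One full model solve per sample is exactly what makes plain MC expensive.
samples = np.array([solve_once(k) for k in rng.normal(4.0, 0.2, size=2000)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
print(mean[-1], std[-1])
```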
An alternative approach is based on the expansion of the response in a series of polynomials that are orthogonal with respect to mean value operations. Polynomial chaos was first introduced by Wiener [6], where Hermite polynomials were used to model stochastic processes with Gaussian random variables. A number of other expansions have been proposed in the literature for representing non-Gaussian processes [7, 8]. Recent review papers by Stefanou [9] and by Schuëller and Pradlwarter [10] summarized the assessment of the past and current status of procedures for stochastic structural analysis.
This polynomial representation provides a framework suitable for computational simulation and has therefore become widespread in the mathematical and numerical analysis of many engineering problems. Various problems have been solved with this approximation, such as the solution of stochastic differential equations [11], linear structural dynamics [4, 5], nonlinear random vibration [12, 13], soil-structure interaction [14], structural reliability [15], and identification [16, 17]. More recently, Trcala used polynomial chaos for nonlinear diffusion problems of moisture transfer in wood [18]. The accuracy of the PC approximation has been evaluated by Field and Grigoriu [19]. The convergence of the decomposition of the solution into polynomial chaos is studied by Dvurecenskij et al. [20], and the conditions on the distribution function of the random vector appearing in the solution that ensure convergence toward the solution are given by Ernst et al. [21].
Polynomial chaos has been used in many finite element problems [4]. Accurate discrete modeling of complex industrial structures leads to large finite element models. To reduce the CPU time, reducing the order of the models is very useful. Component mode synthesis (CMS) is a well-established method for efficiently constructing models to analyze the dynamics of large and complex structures that are often described by separate substructure (or component) models. Sarsri et al. [22] coupled CMS methods with first- and second-order polynomial chaos projection methods to compute the frequency transfer functions of stochastic structures. This coupling approach has been used by Sarsri and Azrar in the time domain [23], as well as in combination with the perturbation method [24].
Polynomial chaos methods are well suited for random differential equations (RDEs) whose main coefficients depend on only a few random variables. It is well known that, as the number of random variables increases, the number of unknowns to be determined for solving the random system grows very rapidly with the degree of the polynomials. Thus, for an accurate solution, the CPU time and memory required may be prohibitive. This greatly limits these methods to random differential equations with very few random parameters.
An alternative approach, called the internal random coefficients method (IRCM), is developed in this paper. A careful presentation is given in the framework of higher-order random differential equations. The method is based on generalized polynomial chaos and the superposition principle. It can be used to solve random differential equations with a large number of random variables and an input right-hand side decomposed into an arbitrary number of random coefficients. The considered random parameters may follow various distribution laws.
A procedure to build a new polynomial chaos basis and a connection between the one-dimensional and multidimensional polynomials are established. Different distribution laws can easily be considered. Based on the superposition principle, the random differential equation with an input depending on several random variables is decomposed into a sequence of RDEs with the same main random operator and reduced right-hand sides. A series of RDEs with a reduced number of random variables thus has to be solved based on the generalized polynomial chaos decomposition. The global system is then solved with a drastic reduction of CPU time and memory space. For the sake of comparison, the conditional expectation method is developed for the considered random differential equations, as well as the Monte-Carlo method. The applicability and effectiveness of the presented methodological approach are demonstrated by numerically solving various examples.

Mathematical Formulation
In this work, various methodological approaches are elaborated to solve higher-order initial value problems with linear and nonlinear random coefficients subjected to a random input right-hand side. For this aim, the following stochastic differential equation is considered:
$$L \cdot U(t, \xi) = f(t, \xi_2), \tag{1}$$
with deterministic initial conditions, where $U(t,\xi)$ is the stochastic process response and $L$ is a linear random operator of order $p$ defined by
$$L = \sum_{i=0}^{p} a_i(\xi_1)\, \frac{d^i}{dt^i}. \tag{2}$$
It is assumed that the random coefficients $a_i(\xi_1)$ depend on the random vector $\xi_1$, which is defined in a probability space $(\Omega_1, F_1, P_1)$. The input right-hand side, $f(t,\xi_2)$, is assumed to depend on the random vector $\xi_2$ that is defined in the probability space $(\Omega_2, F_2, P_2)$. An explicit expression of $f(t,\xi_2)$ is given later.
For the numerical solution of (1), a new procedure based on the generalized polynomial chaos (GPC) expansion is elaborated. Herein, the classical GPC procedure is first reviewed in a clear manner.
In the present work, the random variables composing the vector $(\xi_1, \xi_2)$ are assumed to be independent but may have general, distinct distributions $w_k$. If the $w_k$ are classical distributions, such as normal, uniform, gamma, and beta, the associated known polynomial chaos can be used. Otherwise, the procedure developed in this paper is used to build the needed polynomial basis. Explicit expressions of this basis for general cases are given. In addition, the number of random variables $(N+1)$ and the differential order $p$ are arbitrary.
The concept of internal random coefficients is introduced and combined with the superposition principle and the generalized polynomial chaos expansion. For the sake of comparison, conditional expectation and Monte-Carlo methods are also elaborated.

General Polynomial Chaos Formulation
2.1.1. General Formulation. For general purposes, let us consider random vectors $\xi_1$ and $\xi_2$, presented in the following forms:
$$\xi_1 = (\xi_0, \ldots, \xi_n), \qquad \xi_2 = (\xi_{n+1}, \ldots, \xi_N),$$
where the $\xi_k$, for $k = 0$ to $N$, are random variables defined from the probability space $(\Omega_k, F_k, P_k)$ to $\mathbb{R}$. The random vectors $\xi_1$ and $\xi_2$ are assumed to be independent and are gathered in the vector $\xi$:
$$\xi = (\xi_1, \xi_2) = (\xi_0, \ldots, \xi_N).$$
We assume that the random vector $\xi$ has a distribution function with respect to the Lebesgue measure denoted by $w$. $L^2_w(\mathbb{R}^{N+1})$ denotes the set of square-integrable functions with respect to the weight measure $w$:
$$L^2_w(\mathbb{R}^{N+1}) = \left\{ f : \int_{\mathbb{R}^{N+1}} f^2(x)\, w(x)\, dx < \infty \right\},$$
with the following associated inner product:
$$\langle f, g \rangle = \int_{\mathbb{R}^{N+1}} f(x)\, g(x)\, w(x)\, dx. \tag{6}$$
Let us note that the distribution $w$ may be Gaussian or non-Gaussian. In the present analysis, various types of distribution functions may be considered.
The general polynomial chaos associated with the random vector $\xi$ is denoted by $\{\Phi_j;\ j \in \mathbb{N}\}$. These polynomials coincide with the orthogonal polynomials associated with the inner product defined in (6) and verify
$$\langle \Phi_i, \Phi_j \rangle = h_i^2\, \delta_{ij},$$
where the normalization constants $h_i^2$ are given by $h_i^2 = \langle \Phi_i, \Phi_i \rangle$. The solution $U$ of the main equation (1) is a time-dependent stochastic process depending on the random vector and is decomposed in the polynomial chaos basis $\{\Phi_j;\ j \in \mathbb{N}\}$:
$$U(t, \xi) = \sum_{j=0}^{\infty} u_j(t)\, \Phi_j(\xi).$$
For general purposes, the random coefficients $a_i$ are assumed to depend linearly and nonlinearly on the random variables $\xi_k$, $0 \le k \le n$, and are written in the following general form:
$$a_i(\xi_1) = \sum_{|\alpha| \le d_i} a_{i,\alpha}\, \xi_0^{\alpha_0} \cdots \xi_n^{\alpha_n}, \tag{10}$$
in which $\alpha = (\alpha_0, \ldots, \alpha_n)$, $\alpha_k \in \mathbb{N}$, and $|\alpha| = \sum_{k=0}^{n} \alpha_k$.
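To make the projection onto such a basis concrete, the following sketch (an illustration, not the paper's algorithm) computes the chaos coefficients of a known function of a single Gaussian variable by Gauss-Hermite quadrature; `numpy.polynomial.hermite_e` provides the probabilists' Hermite polynomials, which are orthogonal under the standard normal weight.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Probabilists' Hermite polynomials He_j satisfy <He_i, He_j> = j! delta_ij
# under the standard normal density, so u_j = E[g(xi) He_j(xi)] / j!.
nodes, weights = He.hermegauss(40)          # quadrature for weight e^{-x^2/2}
weights = weights / np.sqrt(2.0 * np.pi)    # rescale to the N(0, 1) density

def pc_coeffs(g, degree):
    coeffs = []
    for j in range(degree + 1):
        Hj = He.hermeval(nodes, [0.0] * j + [1.0])   # evaluate He_j at nodes
        coeffs.append(np.sum(weights * g(nodes) * Hj) / math.factorial(j))
    return np.array(coeffs)

print(pc_coeffs(np.exp, 5))   # exact coefficients here are e^{1/2} / j!
```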
The right-hand side of (1), $f(t,\xi_2)$, is assumed to be a time-dependent random function that depends linearly on $\xi_2$ and is expressed by
$$f(t, \xi_2) = \sum_{k=n+1}^{N} (\lambda_0 + \lambda_1 \xi_k)\, g_k(t), \tag{11}$$
where the $g_k(t)$ are deterministic functions and $\lambda_0$, $\lambda_1$ are deterministic constants. Note that a more general right-hand side excitation can be decomposed in the form (11) using the Karhunen-Loève expansion [25].
Based on a reduced decomposition using the $(M+1)$ first terms, the stochastic process $U(t,\xi)$ can be approximated by
$$U(t, \xi) \simeq \sum_{j=0}^{M} u_j(t)\, \Phi_j(\xi), \tag{12}$$
where $\Phi_j(\xi)$ is a multidimensional general polynomial chaos depending on the random vector $\xi = (\xi_0, \xi_1, \ldots, \xi_N)$.
The insertion of these expressions into (1) leads to the following $p$th-order random differential equation:
$$\sum_{i=0}^{p} a_i(\xi_1) \sum_{j=0}^{M} \frac{d^i u_j}{dt^i}(t)\, \Phi_j(\xi) = \sum_{k=n+1}^{N} (\lambda_0 + \lambda_1 \xi_k)\, g_k(t). \tag{13}$$
Projecting this equation onto $\Phi_l$ for $l = 0$ to $M$, the following deterministic differential system is obtained:
$$\sum_{i=0}^{p} \sum_{j=0}^{M} \langle a_i(\xi_1)\, \Phi_j, \Phi_l \rangle\, \frac{d^i u_j}{dt^i}(t) = \sum_{k=n+1}^{N} \langle (\lambda_0 + \lambda_1 \xi_k), \Phi_l \rangle\, g_k(t), \quad 0 \le l \le M. \tag{14}$$
Note that first- and especially second-order differential equations of this kind ($p = 1$ or $2$) have been investigated by many authors when the number $n$ of random variables is small. When the random variables $\xi_k$ are Gaussian, Hermite-chaos polynomials on $]-\infty, +\infty[$ are used in [4]. This standard approach is very often used in structural dynamics. Various other works have been elaborated for $\xi_k$ uniform, gamma, or beta, for which Legendre chaos on $[a, b]$, Laguerre chaos on $[0, +\infty[$, and Jacobi chaos on $[a, b]$ are, respectively, used [8].
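The following minimal stochastic Galerkin sketch illustrates a projection of this type for a first-order equation $u' + (k_0 + k_1\xi)u = g(t)$ with a single Gaussian variable and Hermite chaos; the values of $k_0$, $k_1$ and the forcing are assumptions for illustration, not the paper's system.

```python
import math
import numpy as np
from scipy.integrate import solve_ivp

M = 7  # retain Hermite-chaos terms He_0 ... He_M

# Triple products <He_i He_j He_k> under N(0,1) have the closed form below;
# the term <xi He_j, He_l> follows from xi = He_1.
def triple(i, j, k):
    s = i + j + k
    if s % 2 or max(i, j, k) > s // 2:
        return 0.0
    h = s // 2
    return (math.factorial(i) * math.factorial(j) * math.factorial(k)
            / (math.factorial(h - i) * math.factorial(h - j) * math.factorial(h - k)))

k0, k1 = 4.0, 0.2                       # k(xi) = k0 + k1*xi, illustrative values
norms = np.array([math.factorial(j) for j in range(M + 1)], dtype=float)
K = np.array([[(k0 * (i == j) * norms[i] + k1 * triple(1, j, i)) / norms[i]
               for j in range(M + 1)] for i in range(M + 1)])

def rhs(t, u):
    g = np.zeros(M + 1); g[0] = np.sin(t)   # deterministic forcing projects on He_0
    return g - K @ u

sol = solve_ivp(rhs, (0.0, 10.0), np.eye(M + 1)[0], t_eval=np.linspace(0, 10, 201))
mean = sol.y[0]                              # E[u] is the first coefficient
var = (sol.y[1:] ** 2 * norms[1:, None]).sum(axis=0)
```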
The polynomial basis of the vector $\xi$ is related to the polynomial bases associated with each random variable $\xi_k$ in the case of independent variables. This relation is clarified and the correspondence is clearly established herein. The procedure clarifying the inner products used in the differential system (14) is established, and an explicit, simple procedure is given in the next subsection.
Firstly, this procedure is established in the next paragraphs for independent random variables, based on the relationships between the random variables $\xi_k$ and the random vector $\xi$. Secondly, new variables that are not necessarily independent are introduced, and the procedure is established for general cases.
2.1.2. Condensed Formulation. In order to formulate the considered problem in a condensed form, the following mathematical developments are used. As the variables $\xi_k$, $0 \le k \le N$, are assumed to be pairwise independent, the joint distribution function $w$ is given by
$$w(x_0, \ldots, x_N) = \prod_{k=0}^{N} w_k(x_k),$$
where the $w_k$, $k = 0$ to $N$, are the marginal distributions associated with each variable $\xi_k$.
The general polynomial chaos associated with each variable $\xi_k$ is denoted by $\{\phi_k^j,\ j \ge 0\}$. These polynomials coincide with the orthogonal polynomials associated with the inner product defined on $L^2_{w_k}(\mathbb{R})$ with respect to the weight function $w_k$:
$$L^2_{w_k}(\mathbb{R}) = \left\{ f : \int_{\mathbb{R}} f^2(x)\, w_k(x)\, dx < \infty \right\},$$
with the associated inner product given by
$$\langle f, g \rangle_k = \int_{\mathbb{R}} f(x)\, g(x)\, w_k(x)\, dx. \tag{17}$$
This set of orthogonal polynomials satisfies the orthogonality conditions
$$\langle \phi_k^i, \phi_k^j \rangle_k = (h_k^i)^2\, \delta_{ij}, \qquad (h_k^i)^2 = \langle \phi_k^i, \phi_k^i \rangle_k.$$
In order to make a correspondence between the sets of polynomial chaos associated with each variable $\xi_k$ and the one associated with the random vector $\xi$, a total order is introduced on the set $\mathbb{N}^{N+1}$: multi-indices $(j_0, \ldots, j_N)$ are ordered first by total degree and then lexicographically (20). This order allows considering a bijection $\sigma_N$ from $\mathbb{N}^{N+1}$ to $\mathbb{N}$, defined by
$$\sigma_N : \mathbb{N}^{N+1} \to \mathbb{N}, \qquad (j_0, \ldots, j_N) \mapsto \sigma_N(j_0, \ldots, j_N) = j, \tag{21}$$
which relates each element $(j_0, \ldots, j_N)$ of $\mathbb{N}^{N+1}$ to its rank $j$ in the order defined in (20). Let $d$ be a nonzero integer; the integer $M$ used in the decomposition (12) is taken as
$$M = \sigma_N(0, 0, \ldots, 0, d). \tag{22}$$
This choice of $M$ allows decomposing the solution in the set of polynomial chaos associated with the vector $\xi$ of degree less than or equal to $d$. So, for every integer $j$ between $0$ and $M$, there is a single $(j_0, \ldots, j_N) \in \mathbb{N}^{N+1}$ such that $j = \sigma_N(j_0, \ldots, j_N)$ (23). This one-to-one correspondence allows writing the multidimensional polynomial chaos associated with the random vector $\xi$ as a product of the one-dimensional polynomial chaos corresponding to each variable $\xi_k$:
$$\Phi_j(\xi) = \prod_{k=0}^{N} \phi_k^{j_k}(\xi_k),$$
where $(j_0, \ldots, j_N)$ is the multi-index associated with the integer $j$, introduced in (23). For all integers $i$ and $h$ between $0$ and $d$, the square matrix $C_{ih}$ of order $(d+1)$ is defined by (25). Let $\wp_d = \mathbb{R}_d[\xi_k]$ be the set of polynomials of degree less than or equal to $d$. Then the set of the classical polynomial chaos $B_k = \{\phi_k^j,\ j \in \{0, \ldots, d\}\}$ is an orthogonal basis of $\wp_d$ for the inner product defined by (17). Let $E_k = \{1, \xi_k, \ldots, \xi_k^d\}$ be the canonical basis of $\wp_d$ and $P_k$ be the change-of-basis matrix from the canonical basis $E_k$ to the chaos basis $B_k$. This matrix can be obtained in a standard way, using the Gram-Schmidt procedure or a recursive method.
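As an illustration of how such a change-of-basis matrix can be built numerically, the following toy sketch (an assumption for illustration, not the paper's code) orthogonalizes the canonical basis $\{1, x, \ldots, x^d\}$ against a given weight by Gram-Schmidt, with inner products evaluated by Gauss-Legendre quadrature on a bounded support.

```python
import numpy as np

def chaos_basis(weight, support, d, quad_pts=200):
    """Rows of P express each orthogonal polynomial in the canonical basis,
    i.e. phi_j(x) = sum_m P[j, m] * x**m, orthogonal under `weight`."""
    x, wq = np.polynomial.legendre.leggauss(quad_pts)
    a, b = support
    x = 0.5 * (b - a) * x + 0.5 * (b + a)        # map nodes to [a, b]
    wq = 0.5 * (b - a) * wq * weight(x)

    def inner(c1, c2):                            # <p1, p2> via quadrature
        return np.sum(wq * np.polyval(c1[::-1], x) * np.polyval(c2[::-1], x))

    P = np.zeros((d + 1, d + 1))
    for j in range(d + 1):
        c = np.zeros(d + 1); c[j] = 1.0           # start from x**j
        for i in range(j):                        # subtract projections
            c -= inner(c, P[i]) / inner(P[i], P[i]) * P[i]
        P[j] = c
    return P

# Uniform weight on [-1, 1] recovers the Legendre polynomials up to scaling.
print(chaos_basis(lambda x: 0.5 * np.ones_like(x), (-1.0, 1.0), 3))
```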
Let the vectors $X_k = (1, \xi_k, \ldots, \xi_k^d)^T$ and $\Phi^k = (\phi_k^0, \ldots, \phi_k^d)^T$. Then one has $\Phi^k = P_k X_k$. For $h \in \mathbb{N}$ and $k$ between $0$ and $N$, the moment $m_k^h$ of order $h$ of the random variable $\xi_k$ is defined by
$$m_k^h = \int_{\mathbb{R}} x^h\, w_k(x)\, dx.$$
The square matrices of order $(d+1)$, $M_k^h$, are defined from these moments (30), which gives the matrix $C_{ih}$ explicitly (31). Let $j$ and $l$ be two integers between $0$ and $M$. From (23) we have, respectively, unique elements $(j_0, \ldots, j_N)$ and $(l_0, \ldots, l_N)$ in $\mathbb{N}^{N+1}$ such that $j = \sigma_N(j_0, \ldots, j_N)$ and $l = \sigma_N(l_0, \ldots, l_N)$. Rewriting $\Phi_j$ using expression (26), the needed products are explicitly obtained (32), and the expression on the right-hand side of (14) is thus given by (33). For independent random variables, a relationship between the multidimensional and the associated one-dimensional generalized polynomials is thus established. This leads to explicit, closed forms of the used scalar products and of the terms to be numerically computed. Using these relationships, the following deterministic differential system results (34). For a compact formulation and using notation (26), the square matrices $D_i$ of order $(M+1)$ and the time-dependent vector $F(t)$ of dimension $(M+1)$ are introduced for all integers $j, l \in [0, M]$ by (35)-(36). Using these notations, differential system (34) is rewritten in the following closed-form system:
$$\sum_{i=0}^{p} D_i\, \frac{d^i \mathbf{u}}{dt^i}(t) = F(t). \tag{37}$$
Using the presented methodological approach, the truncated solution (12) can be numerically obtained. Its mean and variance are given by
$$E[U(t,\xi)] = u_0(t), \qquad \operatorname{Var}[U(t,\xi)] = \sum_{j=1}^{M} u_j^2(t)\, h_j^2.$$
Equation (14) is usually given for Hermite polynomials when the random variables are Gaussian. This kind of projection is classically done, and many authors follow this procedure.
In this paper, the random variables $\xi_k$ may follow various types of laws. The presented generalized formalism allows one to handle multiple laws by using the canonical basis and the orthogonalization principle with respect to a scalar product associated with the distribution function of the random vector. Explicit, closed forms of the used scalar products and of the terms to be numerically computed are given.
It should be noted that the major drawback of this classical polynomial chaos decomposition is that the number of unknowns to estimate increases very rapidly when the degree of the polynomial chaos and the number of random parameters increase. More precisely, for $n$ random variables, the number of unknown coefficients in the polynomial chaos of order less than or equal to $d$ is $\binom{n+d}{d}$.
Table 1 presents the numbers of needed unknown terms for various $n$ and $d$; the count is also reproduced in the sketch below. This very fast growth of dimensionality is the main limitation of this classical approach.
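For reference, the count $\binom{n+d}{d}$ can be tabulated directly (the values of $n$ and $d$ below are illustrative):

```python
from math import comb

# Number of PC unknowns of total degree <= d in n random variables.
for n in (2, 3, 5, 10):
    print(n, [comb(n + d, d) for d in range(1, 8)])
# For n = 10 and d = 7 this already gives comb(17, 7) = 19448 unknowns.
```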
To overcome this drawback, the concept of internal random coefficients is introduced herein. This allows a drastic reduction of the number of random variables, especially when the initially considered number of random variables is large.

Internal Random Coefficients Method (IRCM).
Let us recall that $p$ is the derivative order of the random operator $L$ and $n$ is the number of random variables defining $L$.
If $n \le p$, it is better to use the methodological approach presented in the previous sections.
If $p < n$, then the coefficients $a_i(\xi_1)$ are considered as the new random variables, called here internal random coefficients. In fact, the following procedure is of great interest when the number $n$ of random variables $\xi_k$ is large with respect to the differential order $p$. A large reduction results from the procedures developed herein.
Let us consider the worst case: $p < n$. The coefficients $(a_i)_{0 \le i \le p}$ and the random vector $\xi_2$ can be gathered in a new random vector
$$A = (a_0, \ldots, a_p, \xi_{n+1}, \ldots, \xi_N),$$
which depends on the initial random vector $\xi = (\xi_1, \xi_2)$. It has to be noted that the coefficients $a_i$ are not independent of each other but are independent of $\xi_2$.
In this section, it is assumed that the distribution function $w$ of the random vector $\xi$ is a continuous function on $\mathbb{R}^{N+1}$ and has a compact support denoted by $D_1$.
This hypothesis ensures the quadratic convergence of the series formed by the polynomial chaos to the solution [20, 21]. Further, we assume that there exists a diffeomorphism $h$ from $D_1$ to $D_2 = h(D_1)$ such that, for all $i$ between $0$ and $p$, one has
$$h_i(\xi_1) = a_i(\xi_1),$$
and, for all $i$ between $p+1$ and $n$, the components of $h$ are chosen from the components of $\xi_1$ to complete the construction of $h$.
Let us put $A = h(\xi)$. The random vector $A$, so defined, has a distribution function with respect to the Lebesgue measure, denoted by $w_A$. This function is expressed through the distribution function associated with the random vector $\xi$ by
$$w_A(y) = w(h^{-1}(y))\, \left| J_{h^{-1}}(y) \right|,$$
where $|J_{h^{-1}}(y)|$ is the determinant of the Jacobian matrix of $h^{-1}(y)$.
Let $w_1$ be the distribution function, with respect to the associated Lebesgue measure, of the random vector $A_1$ (44), where $y_1 = (y_0, \ldots, y_p, y_{p+1}, \ldots, y_n)$ is an observation of the random vector $A_1$. For normalized parameters, the following reduced variables are introduced:
$$\zeta_i = \frac{A_i - \mu_i}{\sigma_i},$$
where $\mu_i = E(A_i)$ and $\sigma_i$ is the corresponding standard deviation. Then, for all integers $i$ between $0$ and $p$, one has $A_i = \mu_i + \sigma_i \zeta_i$. Using these new expressions, the main random equation (1) is reduced to the following simplified random differential equation:
$$\sum_{i=0}^{p} (\mu_i + \sigma_i \zeta_i)\, \frac{d^i U}{dt^i}(t) = f(t, \xi_2). \tag{48}$$
It has to be noted here that the number of random variables is reduced and the resulting random differential equation is thus easier to handle.
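The following sketch illustrates the reduction numerically: many underlying random variables are mapped to a handful of internal coefficients, which are then normalized by sampled means and standard deviations. The coefficient maps below are hypothetical placeholders, not the paper's expressions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                   # many underlying random variables
xi1 = rng.uniform(-1, 1, size=(100_000, n))

# Hypothetical polynomial coefficient maps a_i(xi_1); the true forms come
# from the physical model (e.g. products of material parameters).
a0 = 1.0 + 0.05 * xi1.sum(axis=1)
a1 = 0.3 + 0.02 * (xi1[:, :10] ** 2).sum(axis=1)
A = np.stack([a0, a1], axis=1)           # internal coefficients, p + 1 = 2

mu, sigma = A.mean(axis=0), A.std(axis=0)
zeta = (A - mu) / sigma                  # reduced variables: zero mean, unit std
print(mu, sigma, np.corrcoef(zeta.T)[0, 1])  # zeta_0 and zeta_1 are dependent
```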
Recall that the coefficients and the random variables used in the right-hand side of (48) are independent of each other as well as of $\zeta_{11}$. This is due to the fact that the random vector $\zeta_{11}$ depends only on the random vector $\xi_1$; $\zeta_{11}$ and $\xi_2$ are then independent.
Let $w_{11}$ and $w_{12}$ be, respectively, the distribution functions of the random vectors $\zeta_{11}$ and $\xi_2$ with respect to the associated Lebesgue measures. Taking into account the independence of the random variables $(\xi_k)_{n+1 \le k \le N}$, one has (49). The decomposition of the solution $U(t, \zeta_1)$ of the random equation (48) according to the multidimensional polynomial chaos basis is given by
$$U(t, \zeta_1) = \sum_{j=0}^{\infty} u_j(t)\, \Psi_j^1(\zeta_1). \tag{50}$$
Using the $(M_1+1)$ first terms of the series (50), the process $U(t, \zeta_1)$ can be approximated by
$$U(t, \zeta_1) \simeq \sum_{j=0}^{M_1} u_j(t)\, \Psi_j^1(\zeta_1). \tag{51}$$
Inserting these expressions into (48), the following random differential equation results:
$$\sum_{i=0}^{p} (\mu_i + \sigma_i \zeta_i) \sum_{j=0}^{M_1} \frac{d^i u_j}{dt^i}(t)\, \Psi_j^1(\zeta_1) = f(t, \xi_2). \tag{52}$$
Projecting this equation onto $\Psi_l^1$ for $l = 0$ to $M_1$ leads to (53). Let $\{\Psi_j^{11},\ j \in \mathbb{N}\}$ be the polynomial chaos associated with the random vector $\zeta_{11}$.
Let $E$ be the canonical basis of $\mathbb{R}[\zeta_0, \ldots, \zeta_p]$. This basis can be ordered as in (58). The orthogonalization of the basis $E$ gives the polynomial chaos $\{\Psi_j^{11},\ j \in \mathbb{N}\}$ associated with the random vector $\zeta_{11}$. This orthogonalization is performed using a classical method, such as the Gram-Schmidt procedure in $\mathbb{R}[\zeta_0, \ldots, \zeta_p]$, and the following inner product:
$$\langle f, g \rangle_{11} = \int f(y)\, g(y)\, w_{11}(y)\, dy. \tag{59}$$
Let $\mathcal{M}_1$ be the tensor defined, for all $\alpha \in \mathbb{N}^{p+1}$, $\alpha = (\alpha_0, \ldots, \alpha_p)$, by
$$\mathcal{M}_1(\alpha) = E\left[ \zeta_0^{\alpha_0} \cdots \zeta_p^{\alpha_p} \right]. \tag{60}$$
This integral can be computed using the distribution function of the vector $\xi_1$ and the diffeomorphism $h$ (61). The integer $M_1$, considered in decomposition (51), is chosen such that the considered polynomials have degree less than or equal to $d$; it is expressed by $M_1 = \sigma_{(p+N-n)}(0, 0, \ldots, 0, d)$ (62). Let $E_d$ be the canonical basis of $\mathbb{R}_d[\zeta_0, \ldots, \zeta_p]$ (63). Let us denote by $\mathrm{P}_1$ the change-of-basis matrix from the canonical basis $E_d$ to the polynomial chaos basis associated with the vector $\zeta_1$ of degree less than or equal to $d$. Using the notations $j = \sigma_p(j_0, \ldots, j_p)$, where $(j_0, \ldots, j_p) = \sigma_p^{-1}(j)$ for $0 \le j \le M_1$, one gets (64)-(69). The matrices $\Upsilon$ and $\mathrm{T}_i$, defined for all integers $j$ and $l$ between $0$ and $M_1$, are then introduced (70)-(71).
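Since the moments in (60)-(61) are integrals against the distribution of $\xi_1$ pushed through $h$, they can be estimated by sampling when no closed form is available. The sketch below does this for a hypothetical two-coefficient map; it is an illustration under assumed coefficient expressions, not the paper's computation.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

def moment_tensor(sample_zeta, p, max_deg):
    """E[zeta_0^a0 * ... * zeta_p^ap] for all |alpha| <= max_deg,
    estimated from samples of the normalized internal coefficient vector."""
    moments = {}
    for alpha in product(range(max_deg + 1), repeat=p + 1):
        if sum(alpha) <= max_deg:
            moments[alpha] = np.mean(np.prod(sample_zeta ** np.array(alpha), axis=1))
    return moments

# Hypothetical map h: xi_1 -> (a_0, a_1), then normalization; illustrative only.
xi = rng.uniform(-1, 1, size=(200_000, 6))
A = np.stack([1 + 0.1 * xi.sum(1), 0.5 + 0.05 * (xi ** 2).sum(1)], axis=1)
zeta = (A - A.mean(0)) / A.std(0)
print(moment_tensor(zeta, p=1, max_deg=3))
```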
The matrices $D_i$ of order $(M_1+1)$, for $0 \le i \le p$, are defined for $0 \le j \le M_1$ and $0 \le l \le M_1$ by (72). This allows defining the matrices $\Upsilon$ and $\mathrm{T}_i$. The random differential system (52) is thus reduced to a deterministic one, where $j_1$ and $l_1$ are integers related to $j$ and $l$ and introduced in (54). Finally, the last differential system is rewritten in the following condensed matrix form:
$$\sum_{i=0}^{p} D_i\, \frac{d^i \mathbf{u}}{dt^i}(t) = F(t), \tag{73}$$
where $\mathbf{u}(t) = (u_0(t), \ldots, u_{M_1}(t))^T$. Based on the presented mathematical procedures, the truncated solution (51) can be numerically obtained.
An equivalent form of (73) was introduced in (37), based on the original random variables. These two equations give the approximate solution $U$. The coefficients of these two systems are given, respectively, by the inner products in (57) for (73) and in (32)-(33) for (37).
It should be noted, on the one hand, that these coefficients are hard to determine for (37) but easier for (73). On the other hand, the number of random variables used in (73) is largely reduced. This reduces the number of unknowns used to determine the solution $U$ and greatly simplifies the required numerical computation.
Note that when uncertainties come from the random parameters of the system, they can be efficiently handled by the previous mathematical procedures. When the right-hand side excitation depends on many random variables, the previous internal coefficients procedure can be largely improved by the so-called superposition method.

Superposition Method.
Thanks to the linear behavior of the considered system, the superposition principle can be applied to the standard random differential equation (13) and to the equation resulting from the internal random coefficients procedure (52). In both cases, the considered random right-hand side function $f(t,\xi_2)$ depends linearly on $(N-n)$ random variables $\xi_k$. The superposition method consists of solving $(N-n)$ differential equations with elementary right-hand sides given by
$$L \cdot U_k(t, \xi) = (\lambda_0 + \lambda_1 \xi_k)\, g_k(t),$$
where $L$ may be the left-hand side differential operator of (13) or (52). In fact, this superposition adds only one random variable $\xi_k$ to each system, and the resulting random vectors are $(\xi_1, \xi_k)$ for (13) and $(\zeta_{11}, \xi_k)$ for (52), for each $k$.
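A stripped-down sketch of the superposition idea is given below: the operator is taken deterministic ($L = d/dt + k_0$) purely to isolate the right-hand side splitting, and the $\xi_k$ are assumed i.i.d. with zero mean and unit variance; all values are illustrative. Each elementary problem $L v_k = g_k$ is solved once, and linearity then yields the statistics of $U = \sum_k U_k$.

```python
import numpy as np
from scipy.integrate import solve_ivp

k0, lam0, lam1 = 2.0, 1.0, 0.3
g_list = [lambda t, w=w: np.sin(w * t) for w in (1.0, 2.0, 3.0)]
t_grid = np.linspace(0.0, 5.0, 101)

def elementary(g):
    # Deterministic elementary problem: v' + k0*v = g(t), v(0) = 0.
    sol = solve_ivp(lambda t, v: g(t) - k0 * v, (0.0, 5.0), [0.0], t_eval=t_grid)
    return sol.y[0]

v = np.array([elementary(g) for g in g_list])
mean_U = lam0 * v.sum(axis=0)            # E[xi_k] = 0 kills the lam1 term
var_U = lam1 ** 2 * (v ** 2).sum(axis=0) # independence of the xi_k
print(mean_U[-1], var_U[-1])
```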
As the internal random coefficients procedure leads to a reduced number of random variables, the superposition principle is applied to (52). Our focus is then on the resulting random differential equation:
$$\sum_{i=0}^{p} (\mu_i + \sigma_i \zeta_i)\, \frac{d^i U_k}{dt^i}(t, \zeta_{11}, \xi_k) = (\lambda_0 + \lambda_1 \xi_k)\, g_k(t), \tag{77}$$
with the deterministic initial conditions given in (78). The polynomial chaos associated with the random vector $\xi_2$ is denoted by $\{\Phi_j^2;\ j \in \mathbb{N}\}$. The solution $U$ is in this case a process that depends on $t$, on the random vector $\zeta_{11}$, and on the random vector $\xi_2$; it is written in the form (79). Since the random vectors $\zeta_{11}$ and $\xi_2$ are independent, factorization by the polynomial chaos associated with the random vector $\xi_2$ is used in the decomposition (50). The solution can be rewritten as
$$U(t, \zeta_{11}, \xi_2) = \sum_{j=0}^{\infty} U_j(t, \zeta_{11})\, \Phi_j^2(\xi_2). \tag{80}$$
The insertion of this decomposition into the differential equation (77) and the projection of the equation and of the initial conditions onto each polynomial chaos $\Phi_j^2$ for $j \ge 0$ lead to the following simplified system of random differential equations:

For $j = 0$, the projected equation (81a) is obtained, and similar equations (81b)-(81c) hold for $j \ge 1$. It has to be noted that the case $j \ge N - n + 1$ leads to null initial conditions and a null right-hand side; in this case, the solution is null.
The solution given in (80) is then reduced to (82). For $0 \le j \le N-n$ and considering the decomposition of the solution $U_j(t, \zeta_{11})$ according to the polynomial chaos associated with the random vector $\zeta_{11}$, one gets (83). Based on this decomposition, the differential equations (81a)-(81c) reduce to (84). Projecting this expression onto each polynomial chaos $\Psi_l^{11}$ for $l = 0$ to $M_1$, one gets (85) and, for $1 \le j \le N-n$, (86). Using the matrices $\Upsilon$ and $\mathrm{T}_i$ introduced in (71), the last equations become the following.
For $l = 0$ to $M_1$, and for $1 \le j \le N-n$, the corresponding systems (87)-(88) are obtained, where the integers $j_1$ and $l_1$ are related to $j$ and $l$ by notations (54). These deterministic systems are then rewritten in the following compact matrix form.
For $j = 0$ to $N-n$, the compact systems (89)-(90) hold, with the associated matrices defined in (91). The final truncated solution is then given by (92), together with its mean and variance. This solution is the main approximate solution of the random initial value problem (1) elaborated herein. Following (89) and (90), one has to solve $(N - n + 1)$ simple initial value differential systems. These systems can be solved by standard methods such as the Runge-Kutta method.
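The sketch below shows how one such deterministic second-order matrix system, $D_2\ddot{\mathbf{u}} + D_1\dot{\mathbf{u}} + D_0\mathbf{u} = F(t)$, can be integrated with a standard Runge-Kutta solver in first-order form; the matrices and forcing are random placeholders standing in for the assembled Galerkin quantities, not the paper's values.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 6                                    # number of retained PC coefficients
rng = np.random.default_rng(3)
D2 = np.eye(m)                           # placeholder Galerkin matrices
D1 = 0.1 * np.eye(m)
D0 = np.diag(np.linspace(4.0, 9.0, m)) + 0.05 * rng.standard_normal((m, m))
F = lambda t: np.sin(3.0 * t) * np.eye(m)[0]   # forcing hits the mean mode

D2_inv = np.linalg.inv(D2)
def rhs(t, y):
    u, v = y[:m], y[m:]                  # first-order form: y = (u, u')
    return np.concatenate([v, D2_inv @ (F(t) - D1 @ v - D0 @ u)])

y0 = np.zeros(2 * m); y0[0] = 1.0        # deterministic initial conditions
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="RK45",
                t_eval=np.linspace(0.0, 10.0, 201))
mean_U = sol.y[0]                        # first PC coefficient is the mean
```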
The main advantages of this methodological approach are as follows:
(i) The deterministic differential systems have the same left-hand side, which is built only once and in a compact matrix form.
(ii) Simple right-hand sides are obtained for an arbitrary number of random variables defining the excitation.
(iii) A large reduction of the number of polynomial chaos unknowns can result.
(iv) The CPU time and memory needed can be largely reduced.
(v) The numerical solution can be carried out in parallel, particularly for a large number of uncertain parameters.

Conditional Expectation Method.
For the sake of comparison, a methodological approach based on the conditional expectation method is also elaborated here to solve (1). Recall that the solution $U$ of (1) is a stochastic process that depends on the random vectors $\xi_1$ and $\xi_2$ and on time $t$.
Let $\xi^0 = (\xi_1^0, \xi_2^0)$ be an observation of the random vector $\xi = (\xi_1, \xi_2)$. The conditional expectation of $U$ given $\xi = \xi^0$, denoted by $E(U \mid \xi = \xi^0)$, is the solution of the deterministic equation
$$L_0 \cdot E(U \mid \xi = \xi^0)(t) = f(t, \xi_2^0), \tag{93}$$
where $L_0$ is the deterministic operator defined by
$$L_0 = \sum_{i=0}^{p} a_i(\xi_1^0)\, \frac{d^i}{dt^i}. \tag{94}$$
As in the first section, the random vector $\xi$ is assumed to have a density function relative to the Lebesgue measure, denoted by $w$. The moments of order $r$ ($r \in \mathbb{N}^*$) of the solution of (1) are then given by the Bayes formula:
$$m_r(t) = \int_{\Omega} \left[ E(U \mid \xi = \xi^0)(t) \right]^r w(\xi^0)\, d\xi^0. \tag{95}$$
The numerical computation of the integral (95) requires the solution of (93) for every observation vector $\xi^0$. The analytical solution can be obtained in general cases. Assume that $\xi^0 \in \Omega$, where $\Omega$ is a bounded domain in $\mathbb{R}^{N+1}$.
The integral can be approximated by the Gauss-Legendre quadrature method. Let $\{x_q,\ 1 \le q \le Q\}$ and $\{\omega_q,\ 1 \le q \le Q\}$ be the Gauss points and the associated Gauss weights. The moments $m_r(t)$ can be approximated by
$$m_r(t) \simeq \sum_{q=1}^{Q} \omega_q \left[ E(U \mid \xi = x_q)(t) \right]^r w(x_q). \tag{96}$$
The main advantage of this method is that it provides a general solution that is considered here as a reference solution, as is the one obtained with the Monte-Carlo method. However, the drawback of these two methods is that a very large CPU time is needed for accurate numerical solutions.
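A minimal sketch of (95)-(96) for a single uniform random coefficient is given below: the deterministic problem is solved at each Gauss-Legendre node, and the moments are assembled by quadrature. The model and the distribution $k \sim U(3, 5)$ are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

nodes, weights = np.polynomial.legendre.leggauss(16)
k_vals = 4.0 + nodes                      # map [-1, 1] to [3, 5]
weights = weights / 2.0                   # uniform density 1/2 on [3, 5] included

def u_given_k(k, t_eval):
    # Conditional (deterministic) solution for a fixed observation k.
    rhs = lambda t, y: [y[1], np.sin(t) - k * y[0]]
    return solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_eval).y[0]

t_eval = np.linspace(0.0, 10.0, 201)
U = np.array([u_given_k(k, t_eval) for k in k_vals])
m1 = weights @ U                          # first moment  E[U(t)]
m2 = weights @ U ** 2                     # second moment E[U(t)^2]
std = np.sqrt(m2 - m1 ** 2)
```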

Applications
For the sake of clarity, the developed methodological approaches are explicitly presented for the most standard initial value problem. Let us consider the following second-order differential equation, modeling the dynamical forced behavior of Euler-Bernoulli beams with uncertain thickness $h$, mass density $\rho$, and Young modulus $E$:
$$A_0(\xi_1)\, \ddot{U}(t) + A_1(\xi_1)\, \dot{U}(t) + A_2(\xi_1)\, U(t) = f(t), \qquad U(0) = U_0, \quad \dot{U}(0) = \dot{U}_0, \tag{97}$$
in which $\lambda_0$, $\lambda_1$, $\rho_0$, $h_0$, $E_0$, $\omega_0$, $\dot{U}_0$, and $U_0$ are deterministic constants. The considered physical random variables $\rho$, $h$, and $E$ are expressed in terms of the random variables $(\xi_k,\ 0 \le k \le N)$, which have zero mean and unit variance, are pairwise independent, and possess distribution functions relative to the Lebesgue measure denoted by $w_k$, $0 \le k \le N$. It has to be noted that the internal random coefficients $A_0$, $A_1$, and $A_2$ depend nonlinearly on the random variables $\xi_0$, $\xi_1$, and $\xi_2$, and $N$ is an arbitrary integer. For a clear presentation, the three methodological approaches elaborated here are developed for the solution of (97).
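The nonlinear dependence of the internal coefficients on the physical variables can be seen in a sketch of the following type, where the proportionality of the inertia coefficient to $\rho h$ and of the bending stiffness to $E h^3$ follows the usual Euler-Bernoulli scalings; these maps and numerical values are assumptions for illustration, not the paper's exact expressions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples = 100_000

# Zero-mean, unit-variance inputs; the distributions are illustrative.
xi = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n_samples, 3))
rho = 2700.0 * (1 + 0.05 * xi[:, 0])     # mass density
h = 0.01 * (1 + 0.05 * xi[:, 1])         # thickness
E = 70e9 * (1 + 0.05 * xi[:, 2])         # Young modulus

# Internal coefficients: A0 ~ rho*h (inertia), A2 ~ E*h**3/12 (bending
# stiffness); both are nonlinear in xi and mutually dependent through h.
A0 = rho * h
A2 = E * h ** 3 / 12.0
print(np.corrcoef(A0, A2)[0, 1])         # nonzero: A0 and A2 are dependent
```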
The matrices $D_i$ for $i = 0$ to $2$, introduced in (35), are then given by the following.
For $j = 0$ to $M$, the entries are given by (100)-(102). The passage from the index $j$, for $0 \le j \le M$, to the multi-indices $(j_0, j_1, \ldots, j_N)$, used for the determination of the matrices $D_0$, $D_1$, and $D_2$, requires the bijection $\sigma_N$. This bijection is given by Algorithm 1.
The determination of $D_0$, $D_1$, and $D_2$ in this case requires the computation of the inner products $\langle \xi_k^m\, \Phi_j, \Phi_l \rangle$ for $m \le 3$, by (32).

Algorithm 1
If the random variables $\xi_k$ are not identically distributed, then the products $\langle \xi_k^m\, \Phi_j, \Phi_l \rangle$ have to be computed for each $k = 0$ to $N$. Otherwise, these scalar products have to be computed only once. This difficulty, as well as the number of unknowns, can be largely reduced by using the internal random coefficients method.

Internal Random Coefficients Method.
Based on this method, the internal coefficients of (97) are rewritten in the form (103). The random vector $\zeta_{11}$, introduced by this method, is then given by (104). The number of random variables used here is $(N-1)$: two variables for the coefficients $A_i$ and $(N-3)$ variables for the right-hand side.
For $j, l = 0, \ldots, M_1$, let $j_1$, $l_1$ be the integers associated with the integers $j$, $l$ introduced by notations (23)-(54); one then has (105). The matrices $D_i$, introduced in (74), are given in this case by (106), and the variable vector $\mathbf{u}$ of dimension $(M_1+1)$, introduced in (75), is given by (107). The number of random variables, $N$, is arbitrary. For more clarity about the needed operations, let us consider $N = 5$. Based on the classical polynomial chaos, if the decomposition of the solution into polynomial chaos of degree less than or equal to seven is considered, then the number of unknowns to be determined is $\binom{13}{7} = 1716$, while the internal random coefficients method requires only $\binom{12}{7} = 792$ unknowns. The number of unknowns is thus roughly halved. In addition, the determination of the tensors $D_0$, $D_1$, $D_2$ requires the computation of the scalar products $\langle \prod_{k=0}^{n} \xi_k^{\alpha_k}\, \Phi_j, \Phi_l \rangle$ for $|\alpha| \le d_i$, where $d_i$ is the degree of the random polynomial coefficient $A_i(\xi_1)$ given by relationship (10), while the tensors $F_0$, $F_1$, $F_2$ need only the calculation of the inner products $\langle (\zeta_k^1)^m\, \Psi_j^1, \Psi_l^1 \rangle$ for $m = 0$ or $1$. This demonstrates an extra reduction in the computational effort. For a further reduction in CPU time, the internal random coefficients method is coupled with the superposition method.

Superposition Method.
Here, (97) is replaced with an equivalent system of $(N-2)$ equations; each equation is a random equation depending on the random vector $\zeta_{11}$ introduced in (46). These equations are given by (108a) for $U_0(t, \zeta_{11})$, with the initial conditions $U_0(0, \zeta_{11}) = U_0$ and $(\partial U_0(t, \zeta_{11})/\partial t)|_{t=0} = \dot{U}_0$, and, for $1 \le k \le N-2$, by (108b) for $U_k(t, \zeta_{11})$, with the corresponding initial conditions. The associated matrices, introduced in (91), are given by (109). These matrices are substituted into systems (108a) and (108b). In this case, the determination of the tensors $F_0$, $F_1$, $F_2$ requires the determination of the inner products $\langle (\zeta_k^{11})^m\, \Psi_{j_1}^{11}, \Psi_{l_1}^{11} \rangle$ for $m = 0$ or $1$, where $j_1$ and $l_1$ are the integers associated with the integers $j$ and $l$ given by notation (54). In this case, the polynomial chaos $\Psi_{j_1}^{11}$ associated with the random vector $\zeta_{11}$ is simple. The solution of the deterministic system defined by (86)-(87) determines an approximate solution $U$ of the initial random equation (97).
This system largely reduces the number of unknowns to be determined for the approximate solution $U$ and makes the use of polynomial chaos possible even when the classical projection method is unable to determine the solution. The CPU time and memory required to solve this problem by classical projection methods are significantly reduced by this method.
It is well known that, for a stochastic excitation with a Karhunen-Loève expansion, the classical projection methods are limited to a small number of random variables in the decomposition. This affects the accuracy of the obtained solution. This kind of problem is completely solved by the presented method.

Numerical Results
The dynamical system defined in (97) is considered. To analyze the effect of the uncertain parameters, the free system ($f(t) = 0$) is first considered. For free systems, two variants of the internal random coefficients method are used. The first one uses two random coefficients, $A_0$ and $A_2$, and is called here IRCM2; the second one divides the equation by $A_0$ and then uses only one random coefficient ($A_2/A_0$). This variant is referred to here as IRCM1. The random variables $\xi_0$, $\xi_1$, $\xi_2$ are assumed to follow various laws, and $\delta_0$, $\delta_1$, and $\delta_2$ are the ratios of the standard deviation to the mean of the random variables $\rho$, $h$, and $E$.
Figures 1(a) and 1(b) present the standard deviation and mean of the random solution $U$, where the $\xi_k$ are random variables with normalized truncated exponential laws on $[0, 10]$, with 5% standard deviation for the parameters ($\delta_0 = \delta_1 = \delta_2 = 5\%$). The solution $U(t)$ is computed using the chaos method associated with the random variables $\xi_k$, IRCM1, IRCM2, the conditional expectation method, and the Monte-Carlo method with 1000 samples.
In the case of the classical chaos method, the solution $U$ is expanded in the polynomial chaos of degree less than or equal to 7, these polynomials depending on the three random variables $\xi_k$ for $k = 0, 1, 2$. For the first internal random coefficients method, IRCM1, the coefficients of the random equation (97) are divided by the random variable $A_0(\xi_1)$, so the random equation depends only on one new random variable $\zeta = A_2(\xi_1)/A_0(\xi_1)$. The solution $U$ is then expanded into the polynomial chaos associated with the random variable $\zeta$ of degree less than or equal to 12. For the second internal random coefficients method (IRCM2), the random equation depends on the random variables $A_0(\xi_1)$ and $A_2(\xi_1)$. The solution is here expanded into the polynomial chaos associated with these two random variables of degree less than or equal to 2 and also less than or equal to 7. The number of unknowns used by the classical chaos method is 120 for degrees ≤ 7. IRCM2 needs 6 unknowns for degrees ≤ 2 and 36 unknowns for degrees ≤ 7, while IRCM1 needs only 13 unknowns for degrees ≤ 12, as presented in Table 2. This simple case demonstrates clearly how the IRCM reduces the number of unknowns.
The mean and standard deviation of the solution $U$ obtained by IRCM1, IRCM2, the chaos, conditional expectation, and Monte-Carlo methods, presented in Figures 1(a) and 1(b), demonstrate good agreement between the predictions of these methods.
Figures 2(a) and 2(b) present the standard deviation and mean of the solution $U$ when the random variables $\xi_k$ are uniform on $[-4, 4]$ with 5% standard deviation for each parameter ($\delta_0 = \delta_1 = \delta_2 = 5\%$). The solution $U$ is computed using the previous methods. For the chaos method, the solution $U$ is expanded into polynomial chaos of degree less than or equal to seven, these polynomials depending on the three random variables $\xi_k$ for $k = 0, 1, 2$. For IRCM1, the solution $U$ is expanded into the polynomial chaos associated with the random variable $\zeta$ of degree less than or equal to 12. For IRCM2, the solution $U$ is expanded into the polynomial chaos associated with two random variables of degree less than or equal to 7.
These methods are also tested for large standard deviations. Figures 3(a) and 3(b) present the standard deviation and mean of the solution $U$ when the random variables $\xi_k$ are uniform on $[-4, 4]$ with 20% standard deviation ($\delta_0 = \delta_1 = \delta_2 = 20\%$). For the chaos method, the solution $U$ is expanded in polynomial chaos of degree less than or equal to 7, these polynomials depending on the three random variables $\xi_k$ for $k = 0, 1, 2$. IRCM1 and IRCM2 are also used with polynomials of degree ≤ 7. The number of coefficients to compute is presented in Table 3. Results obtained by the conditional expectation and Monte-Carlo methods are also presented. It is observed that, even for a large standard deviation of 20%, good agreement is obtained between the predictions of these methods.
Figures 4(a) and 4(b) present the standard deviation and mean of the solution $U$, where the random variables $\xi_k$ follow a truncated exponential law on $[0, 10]$ with 20% standard deviation ($\delta_0 = \delta_1 = \delta_2 = 20\%$). The degree of the used polynomials is ≤ 7 for chaos, ≤ 12 for IRCM1, and ≤ 2 and ≤ 7 for IRCM2, and the number of coefficients to compute is presented in Table 4. It is shown that, for the mean value, all the methods agree with each other, but there are some discrepancies in the standard deviation. IRCM2 performs better than IRCM1, particularly for large standard deviations. The division procedure used in IRCM1 reduces the number of random parameters by one but introduces an additional nonlinear random effect. This effect degrades the accuracy of IRCM1, as clearly shown in Figure 3.

Dynamical System with a Random Excitation.
To test the flexibility of the presented methodological approach in handling dynamical systems with random variables following different laws, (97) is considered with a random excitation given by $f(t) = (\lambda_0 + \lambda_1 \xi_3)\sin(\omega_0 t)$, where $\lambda_0$, $\lambda_1$, and $\omega_0$ are deterministic. The random coefficients associated with $\xi_0$, $\xi_1$, $\xi_2$, and $\xi_3$ are assumed here to follow different laws. The presented results are obtained by the chaos method, where the solution $U$ is expanded in the polynomial chaos of degree less than or equal to 2 and 5, these polynomials depending on the four random variables $\xi_k$ for $k = 0, 1, 2, 3$. For the internal random coefficients method, the solution $U$ is expanded in the polynomial chaos associated with the two random variables $A_0(\xi_1)$, $A_2(\xi_1)$ of degree less than or equal to 2 and 5. The results predicted by the IRCM are better than those obtained by the chaos method. In addition, the number of unknowns used by the chaos method is 15 for polynomial chaos of degree ≤ 2 and 126 for polynomial chaos of degree ≤ 5, while with the IRCM combined with the superposition method one has to solve a system of two differential equations, each with 6 unknowns for an expansion into polynomials of degree ≤ 2 and 21 unknowns for an expansion into polynomials of degree ≤ 5, as presented in Table 5.
To analyze the effect of the standard deviation on the obtained results, the parameters of the system and of the excitation are taken as random variables with 20% standard deviations. The obtained results are given in Figures 6(a) and 6(b) for the mean and standard deviation. It is shown that chaos of degree ≤ 2 does not predict good results in comparison with the reference results obtained by the conditional expectation and Monte-Carlo (2000 samples) methods. The IRCM with degree ≤ 2 predicts better results than chaos with degree ≤ 5, and the IRCM with degree ≤ 5 coincides perfectly with the reference results. The random coefficients associated with $\xi_0$, $\xi_1$, $\xi_2$, and $\xi_3$ are assumed here to follow the different laws given in Table 6. These results demonstrate clearly the efficiency and accuracy of the proposed IRCM. The errors between the used methods are clearly shown in Figures 7, 8, and 9. It is clearly seen that the error is lower for the IRCM than for PC, relative to the conditional expectation solution taken as the reference solution.

Conclusion
Methodological approaches based on generalized polynomial chaos for random differential equations with an arbitrary number of random parameters following different types of distribution laws have been developed. These random parameters can enter linearly or nonlinearly. The conditional expectation method for the considered random differential equations has been clarified. The results obtained by this method are considered exact and are used as reference results, together with those obtained by the Monte-Carlo method.
A procedure to build a new polynomial chaos basis and a connection between the one-dimensional and multidimensional polynomials have been established. Firstly, this procedure is given for independent variables and then generalized to variables that may be dependent. This procedure permits considering a basis given by the general polynomial chaos associated with the considered parameters, and the solution is expanded in this new basis. A new methodological approach, called the internal random coefficients method, has been developed. Compact matrix representations are elaborated, and the required matrices, vectors, and scalar products are explicitly given. These projection methods, combined with the superposition method, lead to large reductions in the CPU time and memory space required for the solution. It is demonstrated that the developed methods are efficient and accurate and may lead to spectacular memory space and CPU time reductions, particularly when a large number of random parameters is considered.

Figures 5(a) and 5(b) present the mean and standard deviation of $U(t)$ obtained by the presented methods. $\xi_0$ and $\xi_2$ are truncated normal variables on $[-4, 4]$, $\xi_1$ is a uniform variable on $[-1, 1]$, and $\xi_3$ is a truncated exponential variable on $[0, 10]$, with 5% standard deviations.

Table 1: Number of needed unknown terms in the polynomial chaos.

Table 2: Number of coefficients to compute for IRCM1, IRCM2, and PC at the considered degrees.

Table 3: Number of coefficients to compute for IRCM1, IRCM2, and PC at degree 7 for the considered number of variables.


Table 4: Number of coefficients to compute for IRCM1, IRCM2, and PC at different degrees.

Table 5: Number of coefficients to compute for IRCM and PC at degrees 2 and 5.