AVERAGING AND STABILITY OF QUASILINEAR FUNCTIONAL DIFFERENTIAL EQUATIONS WITH MARKOV PARAMETERS

Lambros Katafygiotis and Yevgeny Tsarkov
An asymptotic method for stability analysis of quasilinear functional differential equations, with small perturbations depending on the phase coordinates and on an ergodic Markov process, is presented. The proposed method is based on an averaging procedure with respect to: 1) time along the critical solutions of the linear equation; and 2) the invariant measure of the Markov process. For the asymptotic analysis of the initial random equation with delay, it is proved that one can approximate its solutions (which are stochastic processes) by the corresponding solutions of a specially constructed averaged, deterministic ordinary differential equation. Moreover, it is proved that exponential stability of the resulting deterministic equation is sufficient for exponential p-stability of the initial random system, for all positive numbers p and for sufficiently small perturbation terms.


Introduction
This paper deals with the $n$-dimensional functional differential equation in quasilinear form with a small parameter $\varepsilon \in [0,1)$:

$$\frac{du^\varepsilon(t)}{dt} = g(u_t^\varepsilon) + \varepsilon F(t, u_t^\varepsilon, y(t), \varepsilon), \tag{1}$$

where:
1) $u_t^\varepsilon$ is the segment of the solution defined by the equality $u_t^\varepsilon := \{u^\varepsilon(t+\theta),\ \theta \in [-h, 0]\}$, with some positive number $h$;
2) $g(\varphi)$ is a linear continuous mapping of the space $C_n := C_n([-h,0])$ of continuous $n$-dimensional vector-functions to $\mathbb{R}^n$, defined by the equality
$$g(\varphi) := \int_{-h}^{0} \{dG(\theta)\}\, \varphi(\theta),$$
with a matrix $G(\theta)$ consisting of functions of bounded variation;
3) $\{y(t),\ t \ge 0\}$ is a homogeneous ergodic Markov process on the probability space $(\Omega, \mathcal{F}, P)$, with values in the phase space $Y$, with infinitesimal operator $Q$, transition probability $P(t, y, dz)$, and unique invariant measure $\mu(dy)$ satisfying the condition of exponential ergodicity (see Blankenship and Papanicolaou [1]); thus, there exist positive constants $M$ and $\delta$ such that $\|P(t, y, \cdot) - \mu\| \le M \exp\{-\delta t\}$ for any $t \ge 0$;
4) the perturbing term $F(t, \varphi, y, \varepsilon)$ is a continuous mapping of the product space $\mathbb{R}^+ \times C_n([-h,0]) \times Y \times [0,1)$ to the space $\mathbb{R}^n$, satisfying $F(t, 0, y, \varepsilon) \equiv 0$ and the Lipschitz condition
$$|F(t, \varphi, y, \varepsilon) - F(t, \psi, y, \varepsilon)| \le \ell \int_{-h}^{0} |\varphi(s) - \psi(s)|\, d\nu(s), \tag{2}$$
for any $y \in Y$, $\varepsilon \in [0,1)$, $t \in \mathbb{R}^+$, and $\varphi, \psi \in C_n$, with some function $\nu(s)$ of unit variation.
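The exponential-ergodicity assumption in item 3) can be illustrated concretely when $Y$ is finite. The sketch below (the two-state generator and its rates are hypothetical, not from the paper) computes $P(t) = e^{tQ}$ and checks that each row approaches the invariant measure $\mu$ at the rate given by the spectral gap of $Q$:

```python
import numpy as np

# Hypothetical two-state continuous-time Markov chain illustrating the
# exponential-ergodicity condition ||P(t, y, .) - mu|| <= M exp(-delta t).
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])          # infinitesimal operator (generator)

# invariant measure mu solves mu Q = 0, sum(mu) = 1; for this Q, mu = (2/3, 1/3)
mu = np.array([2.0, 1.0]) / 3.0

def transition_matrix(t, n_terms=60):
    """P(t) = exp(tQ) via a truncated Taylor series."""
    P, term = np.eye(2), np.eye(2)
    for k in range(1, n_terms):
        term = term @ (t * Q) / k
        P = P + term
    return P

# The nonzero eigenvalue of Q is -3, so the total-variation distance of each
# row of P(t) from mu decays like exp(-3t) (here delta = 3).
for t in [0.5, 1.0, 2.0]:
    P = transition_matrix(t)
    dist = np.abs(P - mu).sum(axis=1).max()
    print(t, dist)
```

For this generator, $P(t) = \Pi + e^{-3t}(I - \Pi)$ with $\Pi$ the rank-one projection onto $\mu$, so the printed distances are exactly $(4/3)e^{-3t}$.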
Under these conditions, the random Equation (1) with the initial condition $u^\varepsilon(s+\theta) = \varphi(\theta)$, $-h \le \theta \le 0$, has (see Hale and Verduyn Lunel [4]) a unique solution $u^\varepsilon := \{u^\varepsilon(t),\ t \ge 0\}$ for any continuous function $\varphi$; this solution is a continuous stochastic process with probability one. We will refer to the linear equation

$$\frac{du(t)}{dt} = \int_{-h}^{0} \{dG(\theta)\}\, u(t+\theta) \tag{3}$$

as the generative equation corresponding to Equation (1). It is well known (see Hale and Verduyn Lunel [4]) that Equation (3) defines, in the space $C_n$, a strongly continuous semigroup $T(t)$ with infinitesimal operator $\mathcal{A}$ given, for a sufficiently smooth function $\varphi$, by:

$$(\mathcal{A}\varphi)(\theta) = \begin{cases} \dfrac{d\varphi(\theta)}{d\theta}, & \text{if } -h \le \theta < 0, \\ g(\varphi), & \text{if } \theta = 0. \end{cases}$$
The spectrum $\sigma(\mathcal{A})$ of this operator is given by $\sigma(\mathcal{A}) := \{z : \det U(z) = 0\}$, where

$$U(z) := Iz - \int_{-h}^{0} e^{z\theta}\, dG(\theta).$$

As in the deterministic case (see Halanay [3]), we will proceed in this paper under the assumption that the generative equation is on the border of stability, that is:

$$\sigma(\mathcal{A}) \cap \{z : \operatorname{Re} z > 0\} = \emptyset, \qquad \sigma_0 := \sigma(\mathcal{A}) \cap \{z : \operatorname{Re} z = 0\} \ne \emptyset.$$
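The border-of-stability assumption can be checked on a scalar example. For the hypothetical generative equation $u'(t) = -(\pi/2)\,u(t-1)$ (so $h = 1$ and $dG$ is a point mass at $\theta = -1$), the characteristic function is $U(z) = z + (\pi/2)e^{-z}$, and $z = \pm i\pi/2$ are roots on the imaginary axis, so $\sigma_0 \ne \emptyset$:

```python
import cmath
from math import pi

# Hypothetical scalar generative equation u'(t) = -(pi/2) u(t-1):
# its characteristic function, whose roots form sigma(A).
def U(z):
    return z + (pi / 2) * cmath.exp(-z)

# z = i*pi/2 lies on the imaginary axis, so the equation is critical.
print(abs(U(1j * pi / 2)))   # ~0

# Newton iteration starting nearby converges to this critical root,
# confirming that it is a simple zero of U (U'(z) != 0 there).
z = 0.1 + 1.5j
for _ in range(50):
    dU = 1 - (pi / 2) * cmath.exp(-z)   # U'(z)
    z = z - U(z) / dU
print(z)   # ~ 1.5708j, i.e. i*pi/2
```

Since $U'(i\pi/2) = 1 + i\pi/2 \ne 0$, this root is simple, which is the non-degeneracy property $(\det U(z_j))' \ne 0$ assumed below.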
We will refer to the spectral subspace of the operator $\mathcal{A}$ corresponding to $\sigma_0$ as the critical subspace, and to the solutions of Equation (3) lying in the critical subspace as the critical solutions.
Using the projection on the critical subspace, we will construct a finite-dimensional differential equation with Markov parameters and rapid switchings, which has the same stability properties of the trivial solution as Equation (1) for all sufficiently small $\varepsilon > 0$. It will be proven that, under some additional assumptions, for stability analysis one can perform averaging with respect to: 1) the invariant measure of the Markov process; and 2) time along the critical solutions of the generative equation, as one can do for deterministic delay equations (see Halanay [3]). Stability results can then be obtained by applying the second Lyapunov method, using a specially constructed (see Blankenship and Papanicolaou [1] and Korolyuk [7]) Lyapunov functional and recursive approximations of the solutions of Equation (1) given by the solutions of the corresponding averaged equation.

Results and Discussion
Some preliminary preparation is needed in order to obtain the resulting averaged equation. First, we rewrite Equation (1) in the operator form (see Hale and Verduyn Lunel [4]):

$$\frac{du_t^\varepsilon}{dt} = \mathcal{A} u_t^\varepsilon + \varepsilon\, \hat{1}\, F(t, u_t^\varepsilon, y(t), \varepsilon), \tag{4}$$

where the matrix-valued function $\{\hat{1}(\theta),\ -h \le \theta \le 0\}$ is defined by the equality

$$\hat{1}(\theta) := \begin{cases} 0, & \text{if } -h \le \theta < 0, \\ I, & \text{if } \theta = 0, \end{cases}$$

and $I$ is the $n \times n$ identity matrix. Next, we define the spectral projective operator $P_0$ corresponding to $\sigma_0 \subset \sigma(\mathcal{A})$. For this, we will use its integral representation (see Kato [5]) in the form

$$P_0 = \frac{1}{2\pi i} \oint_{\gamma_\delta} (zI - \mathcal{A})^{-1}\, dz, \tag{5}$$

where $\gamma_\delta := \bigcup_{j=1}^{m} \{z : |z - z_j| = \delta\}$ with sufficiently small $\delta > 0$. It can easily be seen that both the projective operator $P_0$ and $I - P_0$ are bounded. One can apply the projective operator $P_0$ not only to any continuous vector-function $\varphi(\theta)$, but also to any vector- or matrix-valued measurable function.
Inserting the above matrix-valued function $\hat{1}$ into the integral representation in Equation (5), one can define the $n \times n$ matrix-function $\hat{F}(\theta) := (P_0 \hat{1})(\theta)$. Let us denote the critical subspace as $X_0 := P_0 C_n$, the $n \times m$ matrix of a basis in this subspace as $V(\theta)$, the restriction of the operator $\mathcal{A}$ to $X_0$ as $\mathcal{A}_0$, and let $A_0$ be the matrix of this restriction, as defined by the equation $\mathcal{A}_0 V(\theta) = V(\theta) A_0$. Furthermore, one can define the $m \times m$ matrix $\Gamma$ by writing the identity $\hat{F}(\theta) = V(\theta)\Gamma$. Let us use the above notations, along with the notation $V := \{V(\theta),\ -h \le \theta \le 0\}$, and assume the existence of the $m$-dimensional vector function $\bar{F}(x)$ of the argument $x \in \mathbb{R}^m$, defined (Equation (6)) as the average of $\Gamma F(t, Vx, y, 0)$ with respect to the invariant measure $\mu(dy)$ of the Markov process $y(t)$, $t \ge 0$, and with respect to time along the critical solutions of the generative equation. Thus, we define the averaged differential equation (which is not random):

$$\frac{dx}{dt} = \bar{F}(x). \tag{7}$$

We say that the trivial solution of Equation (7) is exponentially stable in the large if there exist positive constants $a$ and $\delta_2$ such that

$$|x(t+s, s, \bar{x})| \le a\, e^{-\delta_2 t}\, |\bar{x}|$$

for any $s, t \ge 0$, $\bar{x} \in \mathbb{R}^m$. We say that the trivial solution of the random Equation (1) is exponentially $p$-stable in the large for all sufficiently small positive $\varepsilon$ if there exist positive constants $\varepsilon_0$, $a_1$, and, for any $\varepsilon \in (0, \varepsilon_0)$, a positive number $a_2(\varepsilon)$, such that

$$E^{(s)}_{y,\varphi}\, \| u_{t+s}^\varepsilon \|^p \le a_1 e^{-a_2(\varepsilon)\, t}\, \|\varphi\|^p$$

for any $s, t \ge 0$, $y \in Y$, and $\varphi \in C_n$. In this definition and throughout this paper, the above upper and lower indices of expectation (or probability) denote the conditions $y(s) = y$, $u_s = \varphi$. All subsequent relations involving random variables and processes are understood as such.
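The passage to the averaged deterministic equation can be illustrated on a toy fast-switching system. In the following sketch (all coefficients and switching rates are hypothetical), a scalar linear ODE whose growth rate is driven by a two-state Markov chain $y(t/\varepsilon)$ is integrated exactly between jumps; for small $\varepsilon$, its paths track the averaged equation $\dot{x} = \bar{a}x$ with $\bar{a} = \int_Y a(y)\,\mu(dy) = -1$, which is exponentially stable even though the state $y = 1$ is destabilizing.

```python
import numpy as np

# Hypothetical fast-switched linear ODE: dx/dt = a[y(t/eps)] * x.
rng = np.random.default_rng(0)
rates = {0: 1.0, 1: 2.0}        # holding rates of the two-state chain
mu = np.array([2/3, 1/3])       # invariant measure (matches these rates)
a = {0: -2.0, 1: 1.0}           # state-dependent drift coefficients

abar = mu[0] * a[0] + mu[1] * a[1]   # averaged drift: 2/3*(-2) + 1/3*1 = -1

def switched_solution(T=5.0, eps=0.01, x0=1.0):
    """Integrate dx/dt = a[y(t/eps)] x exactly between jumps of y."""
    t, x, y = 0.0, x0, 0
    while t < T:
        # the holding time of y(t/eps) in state y is Exp with mean eps/rates[y]
        dt = min(rng.exponential(eps / rates[y]), T - t)
        x *= np.exp(a[y] * dt)
        t += dt
        y = 1 - y               # two states: every jump flips the state
    return x

# with fast switching, the paths track the averaged equation x' = -x
xs = [switched_solution() for _ in range(200)]
print(np.mean(xs), np.exp(abar * 5.0))
```

The two printed numbers are close for $\varepsilon = 0.01$, and the agreement improves as $\varepsilon \to 0$, mirroring the approximation result stated in the abstract.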
The selection of the linear mapping $g(\varphi)$ on the right-hand side of Equation (1) is somewhat arbitrary: one may add any linear continuous mapping $\varepsilon g_1(\varphi)$ to the linear part of Equation (1) and subtract it from the second term.
Because the set $\sigma_0$ consists of a finite number of points (see Hale and Verduyn Lunel [4]), $\sigma_0 = \{z_j,\ j = 1, 2, \ldots, m\}$, it may be assumed that the selection of the terms on the right-hand side of Equation (1) has been made in such a manner that $(\det U(z_j))' \ne 0$, $j = 1, 2, \ldots, m$.
Lemma 1: Under the above assumptions, one can find a constant $c$ such that the solution of Equation (1) with the initial condition

$$u^\varepsilon(s+\theta) = \varphi(\theta), \quad -h \le \theta \le 0, \tag{9}$$

satisfies the inequality

$$\sup_{0 \le t \le T/\varepsilon} |u^\varepsilon(t+s, s, \varphi)| \le c\, e^{\ell c T}\, \|\varphi\|,$$

for any $T > 0$, $\varepsilon \in (0,1)$, $s \ge 0$, $y \in Y$, where $\ell$ is the Lipschitz constant from Equation (2).
Proof: Let $H(t)$ denote the matrix-solution of the generative Equation (3). Using this matrix-valued function, one can (see Hale and Verduyn Lunel [4]) rewrite Equation (1) in the integral form

$$u^\varepsilon(t+s, s, \varphi) = u(t, 0, \varphi) + \varepsilon \int_0^t H(t-\tau)\, F(s+\tau, u^\varepsilon_{s+\tau}, y(s+\tau), \varepsilon)\, d\tau,$$

where $u(t, 0, \varphi)$ is the solution of the generative equation with the same initial condition. Due to our assumptions regarding the spectral set $\sigma_0$, there exists (see Hale and Verduyn Lunel [4]) $c := \sup_{t \ge 0} \|T(t)\|$, whence $|u(t, 0, \varphi)| \le c\|\varphi\|$ for any $t \ge 0$ and $\varphi \in C_n$. Therefore, the proof follows from the integral inequality

$$\sup_{t_1 \le t} |u^\varepsilon(t_1+s, s, \varphi)| \le c\|\varphi\| + \varepsilon \ell c \int_0^t \sup_{t_1 \le \tau} |u^\varepsilon(t_1+s, s, \varphi)|\, d\tau$$

after applying Gronwall's lemma on the segment $0 \le t \le T/\varepsilon$.
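The Gronwall step closing this proof can be checked numerically. In this sketch (the constants $c$, $\ell$, and $\|\varphi\|$ are hypothetical), the equality case of the integral inequality $m(t) \le c\|\varphi\| + \ell c \int_0^t m(\tau)\,d\tau$ is iterated and compared with the Gronwall bound $c\|\varphi\|\,e^{\ell c t}$:

```python
import math

# Gronwall's lemma: m(t) <= c*phi + l*c * int_0^t m(tau) dtau
# implies m(t) <= c*phi*exp(l*c*t).  We iterate the equality case
# (the worst case of the inequality) with hypothetical constants.
c, l, phi = 2.0, 0.5, 1.0
dt, T = 1e-4, 3.0
m, integral, t = c * phi, 0.0, 0.0
while t < T:
    integral += m * dt               # accumulate int_0^t m(tau) dtau
    m = c * phi + l * c * integral   # equality case of the inequality
    t += dt
print(m, c * phi * math.exp(l * c * T))   # both ~ 2*e^3
```

The iterated worst case reproduces the exponential bound, which is exactly the mechanism giving the factor $e^{\ell c T}$ in Lemma 1.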
Due to Lemma 1, Equation (10), the Lipschitz condition, and the exponential decay of the semigroup $T_1(t)$, the above integral equality allows us to complete the proof, yielding a bound valid for any $T > 0$, $\varepsilon \in (0,1)$, $\varphi \in C_n$, with $\gamma_1$ being a positive constant.
Theorem 1: Let, in addition to the previous assumptions, the function $F(t, Vx, y, \varepsilon)$ be uniformly continuous at zero as a function of $\varepsilon$; that is, assume that the quantity

$$\alpha(\varepsilon) := \sup_{t \ge 0,\; y \in Y,\; x \in \mathbb{R}^m} |F(t, Vx, y, \varepsilon) - F(t, Vx, y, 0)|$$

is infinitesimal as $\varepsilon \to 0$, and that the limit function $F(t, Vx, y, 0)$:
1) has a uniformly bounded continuous $x$-derivative $DF(t, Vx, y, 0)$;
2) belongs to the domain $\mathcal{D}(Q)$ of the operator $Q$;
3) has a continuous bounded $t$-derivative $\frac{\partial}{\partial t} F(t, Vx, y, 0)$;
4) has the above-defined average $\bar{F}(x)$ along the solutions of the generative equation, with a constant $b$ bounding the rate of convergence to this average.

If the trivial solution of the averaged Equation (7) is exponentially stable in the large, then the trivial solution of the random Equation (1) is exponentially $p$-stable in the large for all sufficiently small positive $\varepsilon$.
Proof: Let $x^\varepsilon(t)$ be the solution of the random equation

$$\frac{dx^\varepsilon}{dt} = f(t/\varepsilon, x^\varepsilon, y(t/\varepsilon), 0). \tag{23}$$

It is easy to verify that $\tilde{x}^\varepsilon(t)$ satisfies the random differential equation

$$\frac{d\tilde{x}^\varepsilon}{dt} = f(t/\varepsilon, \tilde{x}^\varepsilon, y(t/\varepsilon), \varepsilon). \tag{24}$$
Consider this equation with the initial conditions $\tilde{x}^\varepsilon(s) = x^\varepsilon(s) = \bar{x}$, with the vector $\bar{x}$ from Equation (17). Due to the existence of the uniformly bounded $x$-derivative $DF(t, Vx, y, 0)$, the right-hand sides of Equations (23) and (24) satisfy the Lipschitz condition with some constant $L$. Furthermore, it follows from Equation (14) that the function $f(t, x, y, \varepsilon)$ is uniformly continuous at the point zero as a function of $\varepsilon$; that is, the quantity

$$\bar{\alpha}(\varepsilon) := \sup_{t \ge 0,\; y \in Y,\; x \in \mathbb{R}^m} |f(t, x, y, \varepsilon) - f(t, x, y, 0)|$$

is infinitesimal as $\varepsilon \to 0$. Using the latter property and the Lipschitz constant $L$, one can write the inequalities

$$|\tilde{x}^\varepsilon(t+s, s, \bar{x}) - x^\varepsilon(t+s, s, \bar{x})| \le L \int_0^t |\tilde{x}^\varepsilon(\tau+s, s, \bar{x}) - x^\varepsilon(\tau+s, s, \bar{x})|\, d\tau$$
$$\qquad + \int_0^t |f((\tau+s)/\varepsilon, x^\varepsilon(\tau+s, s, \bar{x}), y((\tau+s)/\varepsilon), \varepsilon) - f((\tau+s)/\varepsilon, x^\varepsilon(\tau+s, s, \bar{x}), y((\tau+s)/\varepsilon), 0)|\, d\tau. \tag{25}$$

It can easily be shown that the Lipschitz condition for the right-hand side of Equation (23) guarantees the existence of a constant $B$ such that

$$|x^\varepsilon(t+s, s, \bar{x})| \le B e^{Lt}\, |\bar{x}|,$$

for any $\bar{x} \in \mathbb{R}^m$, $s \ge 0$, $t \ge 0$, $\varepsilon \in [0,1)$. Thus, substituting this bound into the last term of Equation (25) and applying Gronwall's inequality, one can obtain the relation

$$\sup_{0 \le t \le T,\; s \ge 0,\; y \in Y} |\tilde{x}^\varepsilon(t+s, s, \bar{x}) - x^\varepsilon(t+s, s, \bar{x})| \le \bar{\alpha}(\varepsilon)\, B T e^{LT}\, |\bar{x}|$$

for any $T \ge 0$, $\bar{x} \in \mathbb{R}^m$, and sufficiently small $\varepsilon > 0$. For further analysis, it is convenient to rewrite this inequality for the time $t/\varepsilon$ and to use the norm of the initial condition in Equation (9):

$$\sup_{0 \le t \le T/\varepsilon,\; s \ge 0,\; y \in Y} |\tilde{x}^\varepsilon(t+s, s, \bar{x}) - x^\varepsilon(t+s, s, \bar{x})| \le \bar{\alpha}(\varepsilon)\, B T e^{LT}\, \|\varphi\|, \tag{26}$$

for any $T \ge 0$, $\varphi \in C_n$.
It is known (see Blankenship and Papanicolaou [1] and Skorokhod [9]) that, under the above assumptions, the solutions of Equation (24) tend to the corresponding solutions of Equation (7), and that the stability of the trivial solution of Equation (7) guarantees (see Korolyuk [7]) the stability with probability one of the trivial solution of Equation (24). However, in order to prove our theorem, we need a stronger estimate of the rate of convergence to zero of the $p$-moments of the solutions of Equation (24) as $t \to \infty$. For this purpose, we will apply the second Lyapunov method to a specially constructed functional $w(t, x, y, \varepsilon)$. Since, for any random variable $\xi$, the quantity $(E(|\xi|^p))^{1/p}$ is a monotonically nondecreasing function of $p > 0$, we can assume in our proof, without loss of generality, that $p \ge 2$.
One can consider the pair $\{\tilde{x}^\varepsilon(t), y(t/\varepsilon)\}$ as a Markov process in the phase space $\mathbb{R}^m \times Y$ (see Blankenship and Papanicolaou [1], Mohammed [8], and Skorokhod [9]), with weak infinitesimal operator defined on sufficiently smooth continuous functions $v(x, y)$ by the equality

$$(\mathcal{L}v)(x, y) := \langle (\nabla v)(x, y), f \rangle + \frac{1}{\varepsilon}(Qv)(x, y),$$

where $\langle \cdot, \cdot \rangle$ and $\nabla$ are the scalar product and the gradient operator in $\mathbb{R}^m$, respectively. Since the right-hand side of Equation (23) depends on the time coordinate $t$, one needs to extend the phase space by adding a new phase coordinate $t \in \mathbb{R}^+$, and to consider the above nonhomogeneous Markov process with infinitesimal operator

$$(\mathcal{L}^\varepsilon v)(t, x, y) = \frac{1}{\varepsilon}\left( \frac{\partial v}{\partial t}(t, x, y) + (Qv)(t, x, y) \right) + \langle (\nabla v)(t, x, y), f(t/\varepsilon, x, y, 0) \rangle. \tag{27}$$

For a given function $w(t, y)$, let $\bar{w}(t)$ denote the function obtained by averaging $w(t, y)$ with respect to the invariant measure $\mu(dy)$, that is,

$$\bar{w}(t) := \int_Y w(t, y)\, \mu(dy),$$

and let $\hat{w}(y)$ denote the function obtained by averaging $w(t, y)$ with respect to the time $t$, that is,

$$\hat{w}(y) := \lim_{T \to \infty} \frac{1}{T} \int_0^T w(t, y)\, dt.$$

The projection operator $P_\mu$ on the subspace $C_\mu := \{g \in C(Y) : \bar{g} = 0\}$ can be defined by the equality $P_\mu g(y) := g(y) - \bar{g}$.
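For a finite phase space $Y$, the centering projection $P_\mu$ and the potential of the chain (introduced below) reduce to linear algebra: for a centered function $g$ (that is, $\bar{g} = 0$), the potential $r(y) = \int_0^\infty E_y g(y(t))\,dt$ solves the Poisson equation $Qr = -g$, normalized by $\int_Y r\,d\mu = 0$. A minimal sketch, with generator and data hypothetical:

```python
import numpy as np

# Hypothetical two-state generator and its invariant measure.
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
mu = np.array([2/3, 1/3])

g = np.array([1.0, -2.0])            # centered: mu . g = 2/3 - 2/3 = 0
# Solve Q r = -g together with the normalization mu . r = 0.
A = np.vstack([Q, mu])
b = np.concatenate([-g, [0.0]])
r, *_ = np.linalg.lstsq(A, b, rcond=None)
print(r)            # the potential applied to g
print(Q @ r + g)    # ~ 0: the Poisson equation holds
```

Since $\frac{d}{dt} E_y g(y(t)) = E_y (Qg)(y(t))$, integrating over $t \in [0, \infty)$ and using $E_y g(y(t)) \to \bar{g} = 0$ gives exactly $Qr = -g$, which is what the code solves.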
Due to our assumption of exponential ergodicity, we can use the inequality

$$\sup_{y \in Y} |E_y (P_\mu g)(y(t))| \le M e^{-\delta t} \sup_{y \in Y} |g(y)|, \tag{28}$$

for any $t \ge 0$ and $g \in C(Y)$; therefore, the potential $\Pi$ of the Markov process $\{y(t)\}$ can be defined as the improper integral

$$(\Pi g)(y) := \int_0^\infty E_y g(y(t))\, dt,$$

which satisfies

$$\sup_{y \in Y} |(\Pi g)(y)| \le \frac{M}{\delta} \sup_{y \in Y} |g(y)|, \tag{29}$$

for any $g \in C_\mu$. According to the definition of the weak infinitesimal operator (see Dynkin [2]), one can write the equality

$$\frac{d}{dt} E_y h(s, y(s-t)) = -E_y (Qh)(s, y(s-t)),$$

for any $s \ge t \ge 0$ and continuous bounded function $h(s, y)$. If, in addition, $\bar{h}(s) \equiv 0$, then the inequality

$$|E_y h(s, y(s-t))| \le M e^{-\delta(s-t)} \sup_{y \in Y} |h(s, y)|$$

follows from Equation (28). Therefore, there exists the improper integral

$$(Gh)(t, y) := \int_t^\infty E_y h(s, y(s-t))\, ds,$$

for any $y \in Y$, and

$$\sup |(Gh)(t, y)| \le \frac{M}{\delta} \sup |h(s, y)|. \tag{30}$$

In view of Equation (29), one can easily verify that the function $r(t, y) := -(Gh)(t, y)$ satisfies the ordinary differential equation

$$\frac{\partial}{\partial t} r(t, y) + Q r(t, y) = h(t, y). \tag{31}$$

Using this result and the representation $h(t, y) = (h(t, y) - \bar{h}(t)) + \bar{h}(t)$, one can find a solution of Equation (31) for an arbitrary bounded function $h(t, y)$; we denote the resulting solution operator by $R$. (32)

To prove the exponential $p$-stability of the trivial solution of Equation (24), we will use the Lyapunov functional

$$w(t, x, y, \varepsilon) := v(x) + \varepsilon v_1(t/\varepsilon, x, y), \qquad v_1(t, x, y) := \langle \nabla v(x), R(f - \bar{F})(t, x, y) \rangle, \tag{34}$$

where the operator $R$ acts on the function $f(t, x, y, 0) - \bar{F}(x)$ with respect to the arguments $t$ and $y$, as defined in Equation (32). The inequalities in Equations (15) and (33) allow us to estimate the second factor in the latter scalar product as uniformly bounded. It is obvious that, under the assumption of exponential stability in the large of Equation (7), the $p$-th power of the absolute value of any solution of Equation (7) also decreases exponentially as $t \to \infty$ for any $p > 0$. Therefore, one can consider the function

$$v(x) := \int_0^S |x(t, 0, x)|^p\, dt,$$

with sufficiently large positive $S$, as the Lyapunov function for the averaged system.
Using the smoothness with respect to $x$ of the right-hand side of Equation (7), one can prove that $v(x)$ has a continuous derivative $\nabla v(x)$, and that the following inequalities are satisfied:

$$v_1 |x|^p \le v(x) \le v_2 |x|^p, \quad |\nabla v(x)| \le v_3 |x|^{p-1}, \quad \sup_{t \ge 0,\, y \in Y} |v_1(t, x, y)| \le v_4 |x|^p, \quad \sup_{t \ge 0,\, y \in Y} |\nabla v_1(t, x, y)| \le v_5 |x|^{p-1}, \tag{35}$$

for any $x \in \mathbb{R}^m$, with some positive numbers $v_1, v_2, v_3, v_4, v_5$. Therefore, for sufficiently small positive $\varepsilon_1$, one can write the inequalities

$$w_1 |x|^p \le w(t, x, y, \varepsilon) \le w_2 |x|^p \tag{36}$$

with some positive numbers $w_1, w_2$, for all $\varepsilon \in [0, \varepsilon_1)$ and arbitrary values of the remaining variables involved in Equation (36). Furthermore, using the definitions in Equations (27) and (32) of the operators $\mathcal{L}^\varepsilon$ and $R$, respectively, one can obtain for the quantity $\mathcal{L}^\varepsilon w$ the inequality

$$(\mathcal{L}^\varepsilon w)(t, x, y) = \langle \nabla v(x), \bar{F}(x) \rangle + \varepsilon \langle (\nabla v_1)(t/\varepsilon, x, y), f(t/\varepsilon, x, y, 0) \rangle \le -w_3 |x|^p \tag{37}$$

for sufficiently small values of $\varepsilon$. Let us assume that $\varepsilon_1$ has been chosen small enough so that both of the inequalities in Equations (36) and (37) are fulfilled simultaneously. Then, using the well-known Dynkin formula (see Dynkin [2]) and the inequalities in Equations (36) and (37), one can obtain the inequality

$$E\{ |\tilde{x}^\varepsilon(t)|^p \mid \tilde{x}^\varepsilon(s) = \bar{x},\, y(s/\varepsilon) = y \} \le w_1^{-1} E\{ w(t, \tilde{x}^\varepsilon(t), y(t/\varepsilon), \varepsilon) \mid \tilde{x}^\varepsilon(s) = \bar{x},\, y(s/\varepsilon) = y \}.$$

Therefore, the conditional $p$-moment of the solution of Equation (23) satisfies an analogous inequality, whence one can conclude that

$$E\{ |\tilde{x}^\varepsilon(t)|^p \mid \tilde{x}^\varepsilon(s) = \bar{x},\, y(s/\varepsilon) = y \} \le \beta_1 |\bar{x}|^p e^{-\beta_2 (t-s)},$$

for any $t \ge s \ge 0$, $\bar{x} \in \mathbb{R}^m$, and $\varepsilon \in (0, \varepsilon_1)$, with some positive constants $\beta_1, \beta_2$. Using this inequality, one can evaluate the rate of decay of the $p$-moment of the supremum of the solution of Equation (23) over the time interval $[t - h\varepsilon, t]$:

$$\sup_{s \ge 0,\; y \in Y} E\left\{ \sup_{t - h\varepsilon \le \tau \le t} |x^\varepsilon(\tau + s)|^p \,\Big|\, x^\varepsilon(s) = \bar{x},\, y(s/\varepsilon) = y \right\} \le \beta_3 |\bar{x}|^p e^{-\beta_4 t}, \tag{38}$$

for any $t \ge h\varepsilon$, $\bar{x} \in \mathbb{R}^m$, $y \in Y$, and $\varepsilon \in (0, \varepsilon_1)$, with some positive constants $\beta_3, \beta_4$.
Since the initial condition of Equation (23) is the projection of the initial condition in Equation (9), it follows from the above inequality that the same exponential bound holds with $\|\varphi\|$ in place of $|\bar{x}|$. By construction, the norm of the solution of Equations (1), (9) is bounded by the sum of the norms of its critical component and of the remainder $r_1(t+s)$. Taking into account the definition of the initial condition $\bar{x}$ given by Equation (17) and the formulas in Equations (13), (26), and (38), one can find $A(T)$, sufficiently large as $T \to \infty$, and $\alpha(\varepsilon)$, infinitesimal as $\varepsilon \to 0$, bounding these terms for any $\varphi \in C_n$ with some positive constant $\gamma$. Choosing the numbers $\varepsilon_0$ and $T$ appropriately, this inequality can be rewritten in the form (39). Next, we apply the Markov property for conditional expectation in the form

$$E^{(s)}_{y, \varphi}\, \| u^\varepsilon_{s + t_1 + t_2} \|^p = E^{(s)}_{y, \varphi} \left\{ E^{(s + t_1)}_{y(s + t_1),\, u_{s + t_1}} \| u^\varepsilon_{s + t_1 + t_2} \|^p \right\}.$$

This allows us to use the inequality in Equation (39) and to evaluate the $p$-moment of the norm of the solution of Equations (1), (9) in the recursive form: with $s_k := s + kT/\varepsilon$,

$$\sup_{y \in Y} E^{(s)}_{y, \varphi}\, \| u^\varepsilon_{s_{k+1}} \|^p \le \frac{1}{2} \sup_{y \in Y} E^{(s)}_{y, \varphi}\, \| u^\varepsilon_{s_k} \|^p,$$

for any given $s \ge 0$, $k \in \mathbb{N}$, and $\varphi \in C_n$. Therefore, for $t \in [s_k, s_{k+1})$, $k \in \mathbb{N}$, the reiteration of the above inequality allows us to write, for the $p$-moment of the solution of Equations (1), (9),

$$E^{(s)}_{y, \varphi}\, \| u^\varepsilon_{s+t} \|^p \le a_1 e^{-a_2(\varepsilon)\, t}\, \|\varphi\|^p,$$

with some positive constants $a_1, a_2$. This completes the proof of our theorem.
Using the exponential decay of $T(t)(I - P_0)$, the Lipschitz condition in Equation (2), and the assumptions regarding the spectrum $\sigma_0$ of the matrix $A_0$, the difference of the solutions in Equation (13) satisfies the inequality

$$\| r_1(\tau + s) \| \le \gamma (1 + \|P_0\|)\, \|\varphi\| + c_1 \ell c T\, \|\varphi\|.$$