Large Deviations for Unbounded Additive Functionals of a Markov Process with Discrete Time (Non-Compact Case)

We combine the Donsker-Varadhan large deviation principle (l.d.p.) for the occupation measure of a Markov process with certain results of Deuschel and Stroock to obtain the l.d.p. for unbounded functionals. Our approach relies on the concept of exponential tightness and on the Puhalskii theorem. Three illustrative examples are considered.


Introduction and Main Result
Consider an ergodic Markov process $(X_k)_{k \ge 0}$ having $\mathbb{R}$ as its state space, $\Lambda_0(dx)$ as the distribution of the initial point $X_0$, and $\Lambda(dx)$ as the invariant measure. The transition probability $\pi(x,dy)$ is assumed to satisfy the Feller condition.
Here we combine the results of Deuschel and Stroock [3] and Donsker and Varadhan [5], and we use the representation
$$\frac{1}{n}\sum_{k=0}^{n-1} g(X_k) = \int_{\mathbb{R}} g(x)\,\nu_n(dx)\ \bigl(= M_g(\nu_n)\bigr), \eqno(1.1)$$
where $\nu_n(dx)$ is the empirical distribution
$$\nu_n(A) = \frac{1}{n}\sum_{k=0}^{n-1} I(X_k \in A).$$
The measure $\nu_n(A)$ has been named the occupation measure by Donsker and Varadhan [4].
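As a quick numerical sanity check of the identity (1.1) (this sketch is ours, not the paper's), the following snippet simulates a simple ergodic chain, builds the occupation measure $\nu_n$ as a histogram, and compares $\int g\,d\nu_n$ with the time average of $g(X_k)$. The AR(1) recursion and the choice $g(x) = x^2$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ergodic chain (our assumption, not the paper's example):
# X_{k+1} = 0.5 * X_k + xi_{k+1}, with standard Gaussian noise.
n = 100_000
xi = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for k in range(1, n):
    x[k] = 0.5 * x[k - 1] + xi[k]

def g(t):
    return t ** 2

# Left-hand side of (1.1): the time average (1/n) sum_{k=0}^{n-1} g(X_k).
time_avg = g(x).mean()

# Right-hand side of (1.1): int g d(nu_n), with the occupation measure nu_n
# discretized as a histogram that puts mass (#visits)/n on each bin.
counts, edges = np.histogram(x, bins=400)
centers = 0.5 * (edges[:-1] + edges[1:])
M_g = np.sum(g(centers) * counts / n)

print(time_avg, M_g)
```

The two printed numbers differ only by the histogram's discretization error: integrating $g$ against the occupation measure is the same as averaging $g$ along the trajectory, which is all that (1.1) asserts.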

O.V. GULINSKII, R.S. LIPTSER, S.V. LOTOTSKII

Assume that the family $\{\nu_n,\ n \ge 1\}$ obeys the l.d.p. in the metric space $(S,\rho)$ ($S$ is the set of probability measures on $\mathbb{R}$ and $\rho$ is the Lévy-Prohorov metric) with a rate function $J(\mu)$, $\mu \in S$. If $g = g(x)$ is a bounded continuous function, then $M_g(\mu) = \int_{\mathbb{R}} g(x)\,\mu(dx)$ defines a mapping from $S$ to $\mathbb{R}$ that is continuous in the metric $\rho$, and the l.d.p. in $(\mathbb{R}, r)$ ($r$ is the Euclidean metric) is implied by Varadhan's contraction principle [13] with the rate function
$$I_g(y) = \inf\{J(\mu) : \mu \in S,\ M_g(\mu) = y\}, \qquad \inf\{\emptyset\} = \infty. \eqno(1.2)$$
Deuschel and Stroock [3] have shown that, under certain conditions, this result remains valid for an unbounded function $g = g(x)$.

In this paper we give sufficient conditions for the sequence $\{\frac{1}{n}\sum_{k=0}^{n-1} g(X_k),\ n \ge 1\}$ to obey the l.d.p. in terms of $\Lambda_0(dx)$, $\pi(x,dy)$, and $g(x)$. In view of (1.1), we need the l.d.p. for the family $\{\nu_n,\ n \ge 1\}$. In the noncompact case with a fixed initial point, $X_0 = x_0$, the l.d.p. has been proved by Donsker and Varadhan [5] under the following assumptions.

(H*) There is a nonnegative measurable function $v(x)$ such that $\sup_{|x| \le N} v(x) < \infty$ for all $N > 0$. Furthermore, the function
$$w(x) = \ln\frac{e^{v(x)}}{\int_{\mathbb{R}} e^{v(y)}\,\pi(x,dy)}$$
satisfies the following conditions:
$$\inf_x w(x) = w_* > -\infty \quad\text{and}\quad \lim_{N\to\infty}\ \inf_{|x| > N}\,[w(x) - w_*] = \infty.$$

(RM) There is a $\sigma$-finite reference measure $\ell = \ell(dx)$ such that $\pi(x,dy) = p(y\,|\,x)\,\ell(dy)$ and $p(y\,|\,x) > 0$ for all $x \in \mathbb{R}$, $\ell$-a.s.
The rate function $J = J(\mu)$, $\mu \in S$, is given by the formula
$$J(\mu) = \sup_{u}\ \int_{\mathbb{R}} \ln\frac{e^{u(x)}}{\int_{\mathbb{R}} e^{u(y)}\,\pi(x,dy)}\ \mu(dx), \eqno(1.3)$$
where the supremum is taken over bounded continuous functions $u$. Since in our setting the initial point $X_0$ has the distribution $\Lambda_0$, we add one more assumption.

(H$_0$) The function $v(x)$ from (H*) is such that $\int_{\mathbb{R}} e^{v(x)}\,\Lambda_0(dx) < \infty$.

We show that the Donsker-Varadhan l.d.p. for $\{\nu_n,\ n \ge 1\}$ remains valid under these three assumptions with the same rate function (see Theorem 2 in the Appendix). The lower-bound part of this theorem is a simple generalization of the Donsker-Varadhan l.d.p. obtained by averaging with respect to $\Lambda_0$. The proof of the upper-bound part is somewhat different: we show that (H*) and (H$_0$) imply the exponential tightness of the family $\{\nu_n,\ n \ge 1\}$ and then use the Puhalskii theorem [12]. The same method is used in the proof of our main theorem.

Theorem 1: Suppose assumptions (H*), (RM), and (H$_0$) hold. If a continuous function $g = g(x)$ is such that
$$|g(x)| \le L\bigl(1 + [w(x) - w_*]^{\beta}\bigr), \qquad \beta \in (0,1),\ L > 0,$$
where $w(x)$ is the function from assumption (H*), then the family $\{M_g(\nu_n),\ n \ge 1\}$ obeys the l.d.p. in $(\mathbb{R}, r)$, where the rate function is defined by (1.2) and (1.3).
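For intuition about the variational formula (1.3), here is a tiny computable instance on a two-point state space (our toy, not the paper's setting). On $\{1,2\}$, adding a constant to $u$ changes nothing, so after centering $u = (0, t)$ the supremum reduces to a one-dimensional maximization, and the rate function vanishes exactly at the invariant measure. The transition matrix below is an illustrative assumption.

```python
import numpy as np

# Donsker-Varadhan-type rate function on a two-state chain:
# J(mu) = sup_u sum_x mu(x) * ln( e^{u(x)} / sum_y e^{u(y)} P(x,y) ).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # an assumed transition matrix

def J(mu, t_grid=np.linspace(-5.0, 5.0, 20_001)):
    e_t = np.exp(t_grid)
    # With u = (0, t):
    # F(t) = mu1 * ln(1/(P11 + P12 e^t)) + mu2 * ln(e^t/(P21 + P22 e^t)).
    F = (-mu[0] * np.log(P[0, 0] + P[0, 1] * e_t)
         + mu[1] * (t_grid - np.log(P[1, 0] + P[1, 1] * e_t)))
    return F.max()

mu_inv = np.array([4/7, 3/7])       # invariant measure of P (solves mu P = mu)
j_inv = J(mu_inv)                   # vanishes at the invariant measure
j_off = J(np.array([0.5, 0.5]))    # strictly positive away from it
print(j_inv, j_off)
```

The grid search is crude but adequate here because the objective is concave in $t$; $j_{\mathrm{inv}}$ comes out as $0$ and $j_{\mathrm{off}}$ strictly positive, as the theory predicts.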
Remark: Assumption (RM) is used only in the lower-bound part of the l.d.p. for $\{\nu_n,\ n \ge 1\}$.
It has been weakened in Jain [10] and Wu [15]. Theorems 1 and 2 (see the Appendix) remain true if (RM) is replaced by any of the assumptions from [10] and [15].
The proof of Theorem 1 is given in Section 2. Elements of the proof of the theorem have been used in proving the l.d.p. for the family $\{\nu_n,\ n \ge 1\}$ (see the Appendix, Theorem 2). In Section 3, we consider three examples of Markov processes defined by nonlinear recursions to show how the assumptions of Theorem 1 can be checked.
To this end we use the following.
Keeping in mind that $|g(x)| \le L(1 + (w(x) - w_*)^{\beta})$ for $\beta < 1$, we get, for $k > L$, the bound (2.4). Due to Lemma 2.1, the last inequality implies (2.1). It remains to be shown that (2.2) holds. This time we make use of Lemma 2.2.

Lemma 2.2: For any measurable sets $A_n$, $n \ge 1$, and $B_{n,i}$, $n \ge 1$, $i \ge 1$, such that
$$\lim_{i\to\infty}\ \limsup_{n\to\infty}\ \frac{1}{n}\log P(B_{n,i}) = -\infty,$$
there holds the following equality:
$$\limsup_{n\to\infty}\ \frac{1}{n}\log P(A_n) = \lim_{i\to\infty}\ \limsup_{n\to\infty}\ \frac{1}{n}\log P(A_n \setminus B_{n,i}).$$
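The mechanism behind Lemma 2.2 is the standard "principle of the largest term" for exponential rates; a sketch of the upper bound, under the lemma's hypothesis on $B_{n,i}$:

```latex
\limsup_{n\to\infty}\frac{1}{n}\log P(A_n)
  \le \limsup_{n\to\infty}\frac{1}{n}\log\bigl(P(A_n\setminus B_{n,i}) + P(B_{n,i})\bigr)
  =   \max\!\Bigl(\limsup_{n\to\infty}\frac{1}{n}\log P(A_n\setminus B_{n,i}),\;
                  \limsup_{n\to\infty}\frac{1}{n}\log P(B_{n,i})\Bigr).
```

Letting $i \to \infty$ removes the second term in the maximum, while the reverse inequality is immediate from $A_n \setminus B_{n,i} \subset A_n$.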
Suppose that the distribution density w.r.t. the Lebesgue measure of $\xi_1$ is Laplacian, $p_{\xi}(y) = \frac{1}{2}\exp(-|y|)$, and $\Lambda_0 = \Lambda$. Then $(X_k)$ is a stationary process with $\pi(x,dy) = \frac{1}{2}\exp(-|y - h(x)|)\,dy$, and so assumption (RM) is met. (H*) is met with $v(x) = c|x|^{\alpha}$ for $\alpha < 1$.
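Assumption (H*) for this Laplacian example can also be probed numerically. In the sketch below, the recursion step $h(x) = x/2$, the constants $c = 1$ and $\alpha = 1/2$ in $v(x) = c|x|^{\alpha}$, and the truncation of the integral are all our illustrative assumptions; the point is that $w(x) = v(x) - \ln\int e^{v(y)}\pi(x,dy)$ visibly grows without bound.

```python
import numpy as np

# Illustrative check of (H*) for the Laplace-noise recursion X_{k+1} = h(X_k) + xi.
# The choices h(x) = x/2 and v(x) = |x|**0.5 are assumptions for this sketch;
# the example only requires v(x) = c|x|^alpha with a suitable alpha < 1.
def h(x):
    return x / 2.0

def v(x):
    return np.sqrt(np.abs(x))

def w(x):
    """w(x) = v(x) - ln E[e^{v(X_1)} | X_0 = x], transition density (1/2)e^{-|y - h(x)|}."""
    m = h(x)
    y = np.linspace(m - 60.0, m + 60.0, 200_001)   # Laplace tails beyond this are negligible
    dy = y[1] - y[0]
    integral = np.sum(np.exp(v(y)) * 0.5 * np.exp(-np.abs(y - m))) * dy
    return v(x) - np.log(integral)

# w(x) - inf w should grow without bound as |x| -> infinity, as (H*) demands.
vals = [w(x) for x in (0.0, 10.0, 100.0, 1000.0)]
print(vals)
```

The printed values increase steadily with $|x|$, which is the qualitative content of the second condition in (H*).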
This time, consider a nonstationary process given by (3.1), where the $\xi_k$, $k \ge 1$, are i.i.d. with the Cauchy distribution and $X_0 = x_0$ is a constant. If $f(x)$ and $h(x)$ satisfy (3.2) and if, for some $\gamma < 1/2$, $\lim_{|x|\to\infty} f(x)/|x|^{\gamma} = 0$, then conditions (H*) and (RM) are satisfied ((H$_0$) is obviously satisfied).
Indeed, $\pi(x,dy)$ possesses a positive density with respect to the Lebesgue measure $dy$, and so (RM) is met.
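To close, the exponential decay of deviation probabilities that the l.d.p. formalizes can be glimpsed by crude Monte Carlo. Everything in this sketch (the Gaussian AR(1) chain, $g(x) = x$, the threshold $a = 0.3$) is an illustrative assumption rather than one of the examples above.

```python
import numpy as np

rng = np.random.default_rng(1)

def deviation_prob(n, a=0.3, reps=20_000, rho=0.5):
    """Estimate P( (1/n) sum_{k=0}^{n-1} X_k > a ) for X_{k+1} = rho*X_k + N(0,1), X_0 = 0."""
    x = np.zeros(reps)          # reps independent copies of the chain
    s = np.zeros(reps)          # running sums of each trajectory
    for _ in range(n):
        s += x
        x = rho * x + rng.normal(size=reps)
    return np.mean(s / n > a)

# The deviation probability collapses as n grows -- the l.d.p. regime,
# where P(M_g(nu_n) > a) decays roughly like exp(-n * I(a)).
probs = {n: deviation_prob(n) for n in (25, 100, 400)}
print(probs)
```

On this toy chain the probabilities drop sharply with $n$, and $-\frac{1}{n}\log P$ approaches a finite limit, the rate function evaluated at the threshold.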