Generalisation of Hajek's stochastic comparison results to stochastic sums

Hajek's stochastic comparison result is generalised to multivariate stochastic sum processes: for processes without drift the comparison holds for univariate convex data functions, and for processes with drift it holds for univariate monotone nondecreasing convex data functions. The univariate result is recovered.


1 Statement of results
Applications of the extension of Hajek's results to stochastic sums were described in [2] and [3], but a full proof was not given in those notes. Here we give a short complete proof of related results; Hajek's results are recovered by the different proof method. In the following, C(R) denotes the function space of continuous functions on the field of real numbers R, W denotes a standard n-dimensional Brownian motion, and E^x denotes the expectation of a process starting at x ∈ R^n. Furthermore, for an R^n-valued process (X(t))_{0 ≤ t ≤ T} the ith component of this process is denoted by X_i(t).

For processes without drift we prove

Theorem 1.1. Let T > 0, let f ∈ C(R) be convex, and assume that f satisfies an exponential growth condition. Assume that c_i > 0 are positive real constants for 1 ≤ i ≤ n. Furthermore, let X, Y be semimartingales with x = X(0) = Y(0), where

dX(t) = σ(X(t)) dW(t),  dY(t) = ρ(Y(t)) dW(t),

with n × n-matrix-valued bounded Lipschitz-continuous functions x → σσ^T(x) and y → ρρ^T(y). If σσ^T ≤ ρρ^T, then for 0 ≤ t ≤ T we have

E^x f( Σ_{i=1}^n c_i X_i(t) ) ≤ E^x f( Σ_{i=1}^n c_i Y_i(t) ).

Here, we say that σσ^T ≤ ρρ^T if for all x ∈ R^n the matrix ρρ^T(x) − σσ^T(x) is positive semidefinite.
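As a quick numerical sanity check (an illustration of ours, not part of the proof), Theorem 1.1 can be checked in the simplest univariate case n = 1, c_1 = 1 with constant dispersions, where X(t) = x + σW(t) and Y(t) = x + ρW(t); the particular values of σ, ρ and the convex data function below are arbitrary choices.

```python
import math
import random

random.seed(0)
x0, t = 0.5, 1.0
sigma, rho = 0.7, 1.3          # sigma^2 <= rho^2, i.e. sigma sigma^T <= rho rho^T
f = lambda z: z * z             # a univariate convex data function

# Monte Carlo estimate of E^x f(X(t)) and E^x f(Y(t)), driven by a common W(t)
n_samples = 200_000
lhs = rhs = 0.0
for _ in range(n_samples):
    w = random.gauss(0.0, math.sqrt(t))   # W(t) ~ N(0, t)
    lhs += f(x0 + sigma * w)              # X(t) = x0 + sigma W(t)
    rhs += f(x0 + rho * w)                # Y(t) = x0 + rho W(t)
lhs /= n_samples
rhs /= n_samples
# Theorem 1.1 predicts lhs <= rhs; for f(z) = z^2 the exact expectations are
# x0^2 + sigma^2 t = 0.74 and x0^2 + rho^2 t = 1.94.
```

The same seed drives both processes, so the inequality is visible even with moderate sample sizes.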
For processes with drift we prove

Theorem 1.2. Let T > 0, let f ∈ C(R) be nondecreasing, convex, and assume that f satisfies an exponential growth condition. Assume that c_i > 0 are positive real constants for 1 ≤ i ≤ n. Furthermore, let X, Y be semimartingales with x = X(0) = Y(0), where

dX(t) = µ(X(t)) dt + σ(X(t)) dW(t),  dY(t) = ν(Y(t)) dt + ρ(Y(t)) dW(t),

with bounded Lipschitz-continuous drift functions µ, ν and n × n-matrix-valued bounded Lipschitz-continuous functions x → σσ^T(x) and y → ρρ^T(y). If µ ≤ ν and σσ^T ≤ ρρ^T, then for 0 ≤ t ≤ T we have

E^x f( Σ_{i=1}^n c_i X_i(t) ) ≤ E^x f( Σ_{i=1}^n c_i Y_i(t) ).

Here, we say that µ ≤ ν if for all x ∈ R^n the difference ν(x) − µ(x) is nonnegative in each component.

Remark 1.3. Bounded Lipschitz continuity, i.e., the condition that for some constant C

|µ(x) − µ(y)| + |σ(x) − σ(y)| ≤ C|x − y| and |µ(x)| + |σ(x)| ≤ C

holds for all x, y ∈ R^n, implies the existence of a t-continuous solution of X in the stochastic L² sense. Similarly for Y. Proofs are based on a generalisation of ODE proofs to infinite-dimensional function spaces and can be found in elementary standard textbooks such as [5].
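Similarly (again an illustration of ours), Theorem 1.2 can be checked in the constant-coefficient univariate case with µ ≤ ν and σ² ≤ ρ², using the nondecreasing convex data function f = exp, for which E^x f(X(t)) = exp(x + µt + σ²t/2) in closed form.

```python
import math
import random

random.seed(1)
x0, t = 0.0, 1.0
mu, nu = 0.0, 0.8              # drifts with mu <= nu
sigma, rho = 0.7, 1.3          # dispersions with sigma^2 <= rho^2
f = math.exp                    # nondecreasing, convex, of exponential growth

n_samples = 200_000
lhs = rhs = 0.0
for _ in range(n_samples):
    w = random.gauss(0.0, math.sqrt(t))   # common W(t) ~ N(0, t)
    lhs += f(x0 + mu * t + sigma * w)     # X(t) for constant coefficients
    rhs += f(x0 + nu * t + rho * w)       # Y(t) for constant coefficients
lhs /= n_samples
rhs /= n_samples
# Theorem 1.2 predicts lhs <= rhs; exactly, E^x f(X(t)) = exp(x0 + mu t + sigma^2 t / 2).
```

Note that monotonicity of f matters here: with a merely convex f the drift comparison µ ≤ ν alone would not give the inequality.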
2 Proof of Theorem 1.1

We first remark that the initial data function has to be univariate: for a general multivariate data function f the results do not hold, because simple examples show that convexity can be strongly violated in this general situation. Since classical representations of the value functions in terms of the probability density (fundamental solution) are not convolutions, we use the adjoint of the fundamental solution. For this and other technical reasons we need some more regularity of the data function and of the diffusion matrix σσ^T in order to treat the problem on an analytical level. We shall then observe that the pointwise result is preserved as we consider certain data and coefficient function limits which reduce the regularity assumptions.

First we need some regularity assumptions which ensure that the fundamental solution and the adjoint fundamental solution exist in a classical sense, i.e. have pointwise well-defined spatial derivatives up to second order and a pointwise well-defined partial time derivative of first order (in the domain where they are continuous). For the sake of possible generalisations in the next section we consider a more general operator L with second-order coefficients a_ij, drift coefficients b_i, and a potential term coefficient c. We include the potential term coefficient c because such a coefficient appears in the adjoint even if c = 0. Recall that the adjoint operator L* is of the same form with modified coefficients b*_i and c*. In this section we shall assume that b_i ≡ 0 and c ≡ 0. Note that even in this restrictive situation we have b*_i ≠ 0 and c* ≠ 0 in general. For our purposes it suffices to assume that the coefficients depend only on the spatial variables (the generalisation to additional time dependence is straightforward). In order that the adjoint exist in a strong sense the coefficients should have bounded continuous derivatives.
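Written out, the operator and its adjoint referred to above have the following standard form (a reconstruction consistent with the surrounding text; the factor 1/2 from Itô's formula is taken to be absorbed into the a_{ij}):

```latex
L u \;=\; \sum_{i,j=1}^{n} a_{ij}\,\frac{\partial^{2} u}{\partial x_{i}\partial x_{j}}
      \;+\; \sum_{i=1}^{n} b_{i}\,\frac{\partial u}{\partial x_{i}} \;+\; c\,u,
\qquad
L^{*} u \;=\; \sum_{i,j=1}^{n} \frac{\partial^{2}(a_{ij}u)}{\partial x_{i}\partial x_{j}}
      \;-\; \sum_{i=1}^{n} \frac{\partial (b_{i}u)}{\partial x_{i}} \;+\; c\,u .
% Expanding the product derivatives, L^* has the same second-order part as L with
b^{*}_{i} \;=\; 2\sum_{j=1}^{n}\frac{\partial a_{ij}}{\partial x_{j}} \;-\; b_{i},
\qquad
c^{*} \;=\; \sum_{i,j=1}^{n}\frac{\partial^{2} a_{ij}}{\partial x_{i}\partial x_{j}}
      \;-\; \sum_{i=1}^{n}\frac{\partial b_{i}}{\partial x_{i}} \;+\; c .
```

In particular, for b_i ≡ 0 and c ≡ 0 with nonconstant a_{ij} one still has b*_i ≠ 0 and c* ≠ 0 in general.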
We assume i) that a_ij ∈ C² ∩ H² for 1 ≤ i, j ≤ n, where C^m ≡ C^m(R^n) denotes the space of real-valued m-times continuously differentiable functions and H^m denotes the standard Sobolev space of order m ≥ 0. In the next section we assume in addition that b_i ∈ C¹ ∩ H¹ for all 1 ≤ i ≤ n. For the following considerations concerning the adjoint we assume that c ∈ C⁰ ∩ L² if a potential coefficient is considered.
ii) we have uniform ellipticity, i.e., there exist 0 < λ < Λ < ∞ such that

λ|ξ|² ≤ Σ_{i,j=1}^n a_ij(x) ξ_i ξ_j ≤ Λ|ξ|² for all x, ξ ∈ R^n.

We use one observation concerning the adjoint.

Lemma 2.1. Assume that the conditions i) and ii) above hold, let p be the fundamental solution of the parabolic equation associated with L, and let p* be the fundamental solution of the equation associated with the adjoint L*. Then for s < t and x, y ∈ R^n the functions p, p* have spatial derivatives up to order 2, and

p(t, x; s, y) = p*(s, y; t, x),  p_{,i,j}(t, x; s, y) = p*_{,i,j}(s, y; t, x), 1 ≤ i, j ≤ n,

where the subscripts denote derivatives with respect to the first spatial argument. Here, for x = (x_1, ..., x_n) and e_i = (e_i1, ..., e_in) with e_ij = δ_ij (Kronecker δ) and s < t, x, y ∈ R^n we denote

p_{,i}(s, y; t, x) = lim_{h→0} (p(s, y + h e_i; t, x) − p(s, y; t, x))/h,

and analogously for higher derivatives and for p*.

Proof. For q(τ, z) = p(τ, z; s, y) and r(τ, z) = p*(τ, z; t, x), s < τ < t, we show that for 1 ≤ i, j ≤ n the two integral identities below hold. Let B_R be the ball of radius R around zero. As s < t there exists δ > 0 such that s + δ < t − δ, and using Green's identity, Gaussian upper bounds for the fundamental solution and its first-order spatial derivatives, Lq = 0 and L*r = 0, we get that τ → ∫_{R^n} q(τ, z) r(τ, z) dz is constant on (s, t). This leads to the identities

∫_{R^n} q(t − δ, z) p*(t − δ, z; t, x) dz = ∫_{R^n} r(s + δ, z) p(s + δ, z; s, y) dz

and

∫_{R^n} q_{,i,j}(t − δ, z) p*(t − δ, z; t, x) dz = ∫_{R^n} r_{,i,j}(s + δ, z) p(s + δ, z; s, y) dz. (23)

In the limit δ ↓ 0 we get the relations stated.
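The Green's identity step in the proof of Lemma 2.1 amounts to the following standard computation (a reconstruction of ours; it assumes q solves the forward equation and r the backward adjoint equation in τ, and the boundary terms on ∂B_R produced by the integration by parts vanish as R ↑ ∞ by the Gaussian bounds):

```latex
\frac{d}{d\tau}\int_{\mathbb{R}^n} q(\tau,z)\,r(\tau,z)\,dz
 \;=\; \int_{\mathbb{R}^n}\Big( (Lq)(\tau,z)\,r(\tau,z) \;-\; q(\tau,z)\,(L^{*}r)(\tau,z) \Big)\,dz
 \;=\; 0, \qquad s+\delta \le \tau \le t-\delta .
```

Hence ∫ q r dz takes the same value at τ = s + δ and τ = t − δ; since p(s + δ, ·; s, y) and p*(t − δ, ·; t, x) concentrate at y and x respectively, letting δ ↓ 0 yields p(t, x; s, y) = p*(s, y; t, x).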
For technical reasons we need more approximations concerning the data. As we are aiming at a pointwise comparison result, and we have Gaussian upper bounds, it suffices to consider approximating data which are regular and convex in a core region and decay to zero at spatial infinity. We have

Proposition 2.2. Let f ∈ C(R) be a real-valued continuous convex function, and let B_R ⊂ R^n be the ball of finite radius R around the origin. Then for each ǫ > 0 there is a function f_{ǫ,R} such that i) f_{ǫ,R} approximates f up to ǫ on B_R and decays to zero at spatial infinity, and ii) the second (classically well-defined) derivative is strictly positive on B_R, i.e., f''_{ǫ,R}(z) > 0 for z ∈ B_R.

Proposition 2.2 can be proved by using regular polynomial interpolation as considered in [4] (for example). Here one can use that classical derivatives of second order exist almost everywhere for the convex continuous function f.
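One standard construction behind Proposition 2.2 (a sketch of ours; the paper itself refers to polynomial interpolation as in [4]) uses mollification of the univariate f, reading B_R as the corresponding interval in the univariate variable:

```latex
% Convexity is preserved under convolution with a nonnegative mollifier:
f_{\epsilon} \;:=\; f * \phi_{\epsilon}, \qquad
\phi_{\epsilon}(z) \;=\; \epsilon^{-1}\phi(z/\epsilon), \quad
\phi \in C^{\infty}_{c}(\mathbb{R}),\ \phi \ge 0,\ \int_{\mathbb{R}}\phi\,dz = 1 ,
% and a smooth cutoff enforces decay while keeping strict convexity on the core:
f_{\epsilon,R}(z) \;:=\; \chi_{R}(z)\,\big(f_{\epsilon}(z) + \epsilon z^{2}\big), \qquad
\chi_{R}\in C^{\infty}_{c}(\mathbb{R}),\ \ \chi_{R}\equiv 1 \ \text{on}\ B_{R} .
```

On B_R this gives f''_{ǫ,R} = f''_ǫ + 2ǫ ≥ 2ǫ > 0, local uniform approximation of f as ǫ ↓ 0, and decay to zero at spatial infinity.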
The function f_{ǫ,R} is not convex in general, of course, but it is convex in the core region B_R. For all ǫ > 0 and all R > 0, using Lemma 2.1 we get a representation of the second spatial derivatives of the value function v_{ǫ,R} in terms of the adjoint fundamental solution and f''_{ǫ,R}. Here, for a univariate function g ∈ C² the symbol g'' denotes its second derivative. Since f''_{ǫ,R}(z) > 0 for all z ∈ B_R, and p* ≥ 0, and by the standard Gaussian estimate

p*(s, y; t, x) ≤ C* (t − s)^{−n/2} exp(−λ*|x − y|²/(t − s))

for some finite constants C*, λ*, we get from (26) that the Hessian of v_{ǫ,R}(t, ·) is positive in a smaller core region B_r = {x : |x| ≤ r} for R large enough. Furthermore, classical regularity theory tells us that v_ǫ(t, ·) = lim_{R↑∞} v_{ǫ,R}(t, ·) exists together with its spatial derivatives up to second order. It follows that

∂v_{ǫR}/∂t (t, x) = Σ_{i,j} a_ij(x) ∂²v_{ǫR}/(∂x_i ∂x_j)(t, x),

where A(x) = (a_ij(x)) is the coefficient matrix and D²v_{ǫR}(t, x) is the Hessian of v_{ǫR}(t, ·) evaluated at x. Hence, and as the ǫ ↓ 0 limit of the Hessian is well defined for t ∈ (0, T], we get positivity of the Hessian of the limit value function for t ∈ (0, T].

Now consider matrices (a^{v1}_{ij}) and (a^{v2}_{ij}), where v_1 and v_2 solve the corresponding parabolic Cauchy problems with the same data, and where δv := v_2 − v_1 satisfies δv(0, x) = 0 for all x ∈ R^n. We have the classical representation of δv in terms of the source term Σ_{ij}(a^{v2}_{ij} − a^{v1}_{ij}) ∂²v_1/(∂x_i ∂x_j) and the fundamental solution p^{v2} of the equation with coefficients (a^{v2}_{ij}). As

Σ_{ij} (a^{v2}_{ij} − a^{v1}_{ij})(s, y) ∂²v_1/(∂x_i ∂x_j)(s, y) ≥ 0

and p^{v2}(t, x; s, y) ≥ 0, we conclude that δv ≥ 0. Now we have proved the main theorem for a_ij ∈ C² ∩ H². Next, for each ǫ > 0 and R > 0 there exists a matrix (a^{ǫ,R}_{ij}) with components in C² ∩ H² built from a regularised dispersion matrix σ^{ǫ,R} and its transpose σ^{ǫ,R,T}(x), where σ is the original dispersion matrix related to the process X of the main theorem (which is assumed to be bounded and Lipschitz continuous) and σ^{ǫ,R} approximates σ.
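The classical representation used for δv = v_2 − v_1 above is the Duhamel-type formula (our reconstruction, consistent with the two nonnegative factors compared in the text):

```latex
\delta v(t,x) \;=\; \int_{0}^{t}\!\!\int_{\mathbb{R}^{n}} p^{v_{2}}(t,x;s,y)\,
  \sum_{i,j=1}^{n}\big(a^{v_{2}}_{ij}(y) - a^{v_{1}}_{ij}(y)\big)\,
  \frac{\partial^{2} v_{1}}{\partial x_{i}\partial x_{j}}(s,y)\,dy\,ds ,
```

which follows since δv solves ∂_t δv = Σ_{ij} a^{v_2}_{ij} ∂²δv/∂x_i∂x_j + Σ_{ij} (a^{v_2}_{ij} − a^{v_1}_{ij}) ∂²v_1/∂x_i∂x_j with δv(0, ·) = 0.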
For a ρ^{ǫ,R}(x) which satisfies analogous conditions we define the approximating process Y^{ǫ,R} analogously. Then the preceding argument together with the Feynman-Kac formalism shows that if σ^{ǫ,R} σ^{ǫ,R,T} ≤ ρ^{ǫ,R} ρ^{ǫ,R,T}, then for 0 ≤ t ≤ T the comparison holds for the value functions of the approximating processes. This leads to

E^x f_R( Σ_{i=1}^n c_i X^R_i(t) ) ≤ E^x f_R( Σ_{i=1}^n c_i Y^R_i(t) ), (42)

where the X^R are processes with a bounded continuous dispersion σ^R which equals σ on B_R. The process Y^R is defined analogously. Similarly, f_R is a limit of functions f_{ǫ,R} ∈ C² ∩ H² which equals f on B_R. In (42), X^R can be replaced by X and Y^R by Y by the probability law of the processes, and a limit consideration for data which, for each R, equal the function f on B_R leads to the statement of the theorem by a uniform exponential bound of the data functions, the boundedness of the Lipschitz-continuous coefficients, and the Gaussian law of Brownian motion.
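The closing limit argument can be summarised as follows (a sketch of ours, with f_R, X^R, Y^R as above):

```latex
E^{x}\Big[f\Big(\sum_{i} c_{i} X_{i}(t)\Big)\Big]
 \;=\; \lim_{R\uparrow\infty} E^{x}\Big[f_{R}\Big(\sum_{i} c_{i} X^{R}_{i}(t)\Big)\Big]
 \;\le\; \lim_{R\uparrow\infty} E^{x}\Big[f_{R}\Big(\sum_{i} c_{i} Y^{R}_{i}(t)\Big)\Big]
 \;=\; E^{x}\Big[f\Big(\sum_{i} c_{i} Y_{i}(t)\Big)\Big] ,
% the exchange of limits and expectations being justified by the uniform
% exponential bound on the data and the Gaussian tail estimates.
```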
3 Proof of Theorem 1.2

For Theorem 1.2 the same argument applies to value functions w_1, w_2 associated with the processes with drift: the difference δw = w_2 − w_1 has the analogous representation with the additional source term Σ_i (b^{w2}_i − b^{w1}_i) ∂w_1/∂x_i, where p^{w2} is the fundamental solution of the equation with coefficients (a^{w2}_{ij}) and (b^{w2}_i). As Σ_{ij} (a^{w2}_{ij} − a^{w1}_{ij})(s, y) ∂²w_1/(∂x_i ∂x_j)(s, y) ≥ 0 and p^{w2}(t, x; s, y) ≥ 0, we conclude that δw ≥ 0 provided the drift source term is nonnegative as well; since b^{w2}_i − b^{w1}_i ≥ 0 for all x, this condition reduces to the monotonicity condition ∂w_1/∂x_i ≥ 0. The truth of the latter monotonicity condition for the value function w_1 can be proved via the adjoint, using the same trick as in the preceding section.

Originally the relevance of stochastic comparison results was pointed out to me by P. Laurence and V. Henderson. The article [4] was never submitted to a journal and is only available on arXiv. The main theorems proved here are stated essentially in the conference notes [2] and [3], but not strictly proved there. In those notes applications to American options and to passport options are considered. For example, explicit solutions for optimal strategies related to the optimal control problem of passport options, and the dependence of those strategies on correlations between assets, can be obtained. The proof given here applies in the univariate case as well and recovers the result of Hajek in [1].