First Hitting Problems for Markov Chains That Converge to a Geometric Brownian Motion

We consider a discrete-time Markov chain with state space {1, 1 + Δx, ..., 1 + kΔx = N}. We compute explicitly the probability p_j that the chain, starting from 1 + jΔx, will hit N before 1, as well as the expected number d_j of transitions needed to end the game. In the limit, when Δx and the time Δt between the transitions decrease to zero appropriately, the Markov chain tends to a geometric Brownian motion. We show that p_j and d_j Δt tend to the corresponding quantities for the geometric Brownian motion.


Introduction
Let {X_t, t ≥ 0} be a one-dimensional geometric Brownian motion defined by the stochastic differential equation

dX_t = μ X_t dt + σ X_t dB_t,  (1.1)

where μ ∈ R, σ > 0, and {B_t, t ≥ 0} is a standard Brownian motion. Assume that X_0 = x ∈ [1, N] (where N ∈ N for simplicity), and define the first passage time

τ_x = inf{t > 0 : X_t = 1 or N | X_0 = x}.  (1.2)
As is well known (see, e.g., Lefebvre [1], page 220), the probability

p(x) := P[X(τ_x) = N]  (1.3)

satisfies the ordinary differential equation

(1/2) σ² x² p″(x) + μ x p′(x) = 0,  (1.4)

subject to the boundary conditions

p(1) = 0,  p(N) = 1.  (1.5)

We easily find that, if c := μ/σ² ≠ 1/2, then

p(x) = (x^(1−2c) − 1) / (N^(1−2c) − 1).  (1.6)

When c = 1/2, the solution is

p(x) = ln x / ln N.  (1.7)

Moreover, the function

m(x) := E[τ_x]  (1.8)

satisfies the ordinary differential equation (see, again, Lefebvre [1])

(1/2) σ² x² m″(x) + μ x m′(x) = −1,  (1.9)

subject to the boundary conditions m(1) = m(N) = 0; solving this boundary-value problem yields the explicit expression (1.11) for m(x).

In the next section, we will compute the quantity p_j for j ∈ {1, ..., k − 1}. We will show that p_j converges to the function p(x) for the geometric Brownian motion as Δx decreases to zero and k tends to infinity in such a way that 1 + kΔx remains equal to N. In Section 3, we will compute the mean number of transitions needed to end the game, namely the quantity d_j defined in (1.18). By making a change of variable to transform the diffusion process {X_t, t ≥ 0} into a geometric Brownian motion with infinitesimal mean equal to zero, and by considering the corresponding discrete-time Markov chain, we will obtain an explicit and exact expression for d_j that, when multiplied by Δt, tends to m(x) if the time Δt between the transitions is chosen suitably.

The motivation for our work is the following. Lefebvre [3] computed the probability p(x) and the expected duration m(x) for asymmetric Wiener processes in the interval (−d, d), that is, for Wiener processes for which the infinitesimal means μ_+ and μ_−, and infinitesimal variances σ²_+ and σ²_−, are not necessarily the same when x > 0 or x < 0. To confirm his results, he considered a random walk that converges to the Wiener process. Lefebvre's results were extended by Abundo [4] to general one-dimensional diffusion processes. However, Abundo did not obtain the quantities p_j and d_j for the corresponding discrete-time Markov chains. Also, it is worth mentioning that asymmetric diffusion processes need not be defined in an interval that includes the origin. A process defined in the interval (a, b) can be asymmetric with respect to any point c with a < c < b.
Next, Lefebvre and Guilbault [5] and Guilbault and Lefebvre [6] computed p_j and d_j, respectively, for a discrete-time Markov chain that tends to the Ornstein–Uhlenbeck process. The authors also computed the quantity p_j in the case when the Markov chain is asymmetric, as in Lefebvre [3]. Asymmetric processes can be used in financial mathematics to model the price of a stock when, in particular, the infinitesimal variance (i.e., the volatility) tends to increase with the price of the stock. Indeed, it seems logical that the volatility is larger when the stock price X_t is very large than when it is close to zero. The prices of commodities, such as gold and oil, are also more volatile when they reach a certain level.
In order to check the validity of the expressions obtained by Abundo [4] for p(x) and m(x), it is important to obtain the corresponding quantities for the discrete-time Markov chains and then proceed by taking the limit as Δx and Δt decrease to zero appropriately. Moreover, the formulas that will be derived in the present paper are interesting in themselves, since in reality stock and commodity prices vary discretely rather than continuously.
First passage problems for Markov chains have many applications. For example, in neural networks, an important quantity is the interspike time, that is, the time between spikes of a firing neuron (a spike meaning that the neuron sends a signal to other neurons). Discrete-time Markov chains have been used as models in this context, and the interspike time is the number of steps it takes the chain to reach the threshold at which firing occurs.
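Before turning to the discrete chains, the closed-form hitting probability (1.6)–(1.7) can be sanity-checked by simulation. The sketch below is our own illustration (function names and parameters are not from the paper): it uses exact log-normal increments of the geometric Brownian motion over a small time step, and checks the barriers only on the time grid, which introduces a small discretisation bias.

```python
import math
import random

def gbm_hit_prob_exact(x, N, mu, sigma):
    """Closed-form probability that a GBM started at x in [1, N] hits N before 1."""
    c = mu / sigma**2
    if abs(c - 0.5) < 1e-12:
        return math.log(x) / math.log(N)          # the c = 1/2 case
    return (x**(1 - 2 * c) - 1) / (N**(1 - 2 * c) - 1)

def gbm_hit_prob_mc(x, N, mu, sigma, n_paths=2000, dt=1e-2, seed=1):
    """Monte Carlo estimate using exact GBM increments; the barrier is only
    checked on the grid, so the estimate carries a small discretisation bias."""
    rng = random.Random(seed)
    drift = (mu - 0.5 * sigma**2) * dt
    vol = sigma * math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        X = x
        while 1.0 < X < N:
            X *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
        if X >= N:
            hits += 1
    return hits / n_paths
```

For instance, with x = 2, N = 4, μ = 0.1, σ = 0.5 (so c = 0.4), the closed-form value is about 0.465, and the Monte Carlo estimate agrees to within sampling error.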

Computation of the Probability p_j
Assume first that Δx = 1, so that the state space is {1, 2, ..., N} and the transition probabilities are those given in (1.13) with Δx = 1. The probability defined in (1.17) satisfies the following difference equation:

(n + c) p_{n+1} − 2n p_n + (n − c) p_{n−1} = 0  for n = 2, ..., N − 1,  (2.3)

where c := μ/σ². The boundary conditions are

p_1 = 0,  p_N = 1.  (2.4)

In the special case when μ = 0, (2.3) reduces to the second-order difference equation with constant coefficients

p_{n+1} − 2 p_n + p_{n−1} = 0.  (2.5)

We easily find that the unique solution that satisfies the boundary conditions (2.4) is

p_n = (n − 1)/(N − 1).  (2.6)

Assume now that μ ≠ 0. Letting

w_n := p_{n+1} − p_n,  (2.7)

Equation (2.3) can be rewritten as

(n + c) w_n = (n − c) w_{n−1}.  (2.8)

Using the mathematical software program Maple, we find that the solution of this first-order difference equation that satisfies the boundary condition w_1 = p_2 is given by

w_n = p_2 [Γ(2 + c) Γ(n + 1 − c)] / [Γ(2 − c) Γ(n + 1 + c)],  (2.9)

where Γ is the gamma function. Next, we must solve the first-order difference equation

p_{n+1} − p_n = w_n,  (2.10)

subject to the boundary conditions (2.4). We find that, if c ≠ 1/2, the general solution can be expressed in terms of gamma and digamma functions, where κ is a constant. Applying the boundary conditions (2.4), we obtain the explicit formula (2.14) for p_n, in which γ is Euler's constant and Ψ is the digamma function defined by Ψ(z) := Γ′(z)/Γ(z). Notice that Ψ(1) = −γ, so that we indeed have p_1 = 0, and the solution (2.14) can be rewritten accordingly. Now, in the general case when Δx > 0, we must solve the difference equation (2.18),
which can be simplified to (2.19). The boundary conditions become

p_0 = 0,  p_k = 1.  (2.20)

When μ = 0 (which implies that c = 0), the difference equation above reduces to the same one as when Δx = 1, namely (2.5). The solution is

p_j = j/k.  (2.21)

Writing

n = 1 + jΔx  (2.22)

and using the fact that by hypothesis N = 1 + kΔx, we obtain that

p_n = (n − 1)/(N − 1).  (2.23)

Notice that this solution does not depend on the increment Δx. Hence, if we let Δx decrease to zero and k tend to infinity in such a way that 1 + kΔx remains equal to N, we have that

p_x = (x − 1)/(N − 1),  (2.24)

which is the same as the function p(x) in (1.6) when c = 0/σ² = 0. Next, proceeding as above, we obtain that, if c ≠ 1/2, the probability p_j is given by the explicit formula (2.25). In terms of n and N, this expression becomes (2.26), for n ∈ {1, 1 + Δx, ..., 1 + kΔx = N}. In the special case c = 1/2, the solution reduces to (2.27). We can now state the following proposition.

To complete this section, we will consider the case when Δx decreases to zero. We have already mentioned that when c = 0, the probability p_n does not depend on Δx, and it corresponds to the function p(x) in (1.6) with c = 0.
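The symmetric case can be confirmed numerically. A small sketch (our own illustration, not from the paper) solves the boundary-value problem p_j = (p_{j+1} + p_{j−1})/2, p_0 = 0, p_k = 1, by a linear shooting method, exploiting the fact that the equation is linear and homogeneous; the result matches p_j = j/k.

```python
def hit_prob_symmetric_chain(k):
    """Hitting probabilities p_j for the symmetric chain (c = 0):
    p_j = (p_{j+1} + p_{j-1}) / 2, with p_0 = 0 and p_k = 1.
    Run the recursion with the trial value p_1 = 1, then rescale so p_k = 1."""
    p = [0.0, 1.0]
    for j in range(1, k):
        p.append(2.0 * p[j] - p[j - 1])   # p_{j+1} = 2 p_j - p_{j-1}
    scale = p[k]
    return [v / scale for v in p]
```

The same shooting idea works for any homogeneous linear two-term recursion with one free boundary value.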
Next, when c = 1/2, we proceed similarly and retrieve the counterpart of (1.7). Making use of the asymptotic formula

Γ(z + a) / Γ(z + b) ∼ z^(a−b)  as |z| tends to infinity (if |Arg(z + a)| < π),

we can write, in the case when c ≠ 0, 1/2, that, as Δx decreases to zero,

p_n → (n^(1−2c) − 1) / (N^(1−2c) − 1)  for 1 ≤ n ≤ N.

Therefore, we retrieve the formula for p(x) in (1.6).
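The gamma-ratio asymptotics invoked here are easy to check numerically; the snippet below is our own illustration, computed via log-gamma to avoid overflow for large arguments.

```python
import math

def gamma_ratio(z, a, b):
    """Gamma(z + a) / Gamma(z + b), computed via log-gamma for stability."""
    return math.exp(math.lgamma(z + a) - math.lgamma(z + b))

def leading_asymptotic(z, a, b):
    """Leading-order approximation Gamma(z+a)/Gamma(z+b) ~ z**(a - b)."""
    return z ** (a - b)
```

The relative error of the leading term decays like 1/z, so the approximation improves rapidly as z grows.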
In the next section, we will derive the formulas that correspond to the function m(x) of Section 1.

Computation of the Mean Number of Transitions d_j Needed to End the Game
As in Section 2, we will first assume that Δx = 1. Then, with n = 1 + j for j = 0, 1, ..., k (and 1 + k = N), the function d_n satisfies the following second-order, linear, nonhomogeneous difference equation:

d_n = p_{n,n+1} d_{n+1} + p_{n,n−1} d_{n−1} + p_{n,n} d_n + 1  for n = 2, ..., N − 1.  (3.1)

The boundary conditions are

d_1 = d_N = 0.  (3.2)

We find that the difference equation can be rewritten as (3.3). Let us now assume that μ = 0, so that we must solve the second-order, linear, nonhomogeneous difference equation with constant coefficients (3.4). With the help of the mathematical software program Maple, we find that the unique solution that satisfies the boundary conditions (3.2) is given by (3.5), where Ψ′, the derivative of the digamma function Ψ, is the first polygamma function. Next, in the general case Δx > 0, we must solve (3.7) with c = 0, for j = 0, 1, ..., k. The solution that satisfies the boundary conditions

d_0 = d_k = 0  (3.8)

is given by the explicit expression (3.9), which involves the increment Δx.
In terms of n := 1 + jΔx and N = 1 + kΔx, this expression becomes (3.10).
Finally, the mean duration of the game is obtained by multiplying d_n by Δt. Making use of the fact that (see (1.14)) Δt = Δx²/A, we obtain the following proposition.

ISRN Discrete Mathematics
Proposition 3.1. When Δx > 0 and μ = 0, the mean duration D_n of the game is given by the expression in (3.11).

Next, using the asymptotic formulas in (3.12), we obtain that, as Δx decreases to zero and 1 + kΔx remains equal to N, D_n converges to the limit in (3.13). Notice that D_n indeed corresponds to the function m(x) given in (1.11) if c = 0.
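The limit asserted for the μ = 0 case can also be checked by simulation. In the sketch below (ours, not from the paper) we assume that (1.11) with c = 0 takes the classical form m(x) = (2/σ²)[ln x − p(x) ln N] with p(x) = (x − 1)/(N − 1), which is the standard mean-exit-time formula for a driftless geometric Brownian motion on [1, N]; the Monte Carlo estimate of the mean exit time should then agree with it up to discretisation and sampling error.

```python
import math
import random

def gbm_mean_exit_exact(x, N, sigma):
    """Assumed classical mean exit time of a GBM with mu = 0 from [1, N]:
    m(x) = (2 / sigma^2) * (ln x - p(x) ln N), with p(x) = (x - 1)/(N - 1)."""
    p = (x - 1.0) / (N - 1.0)
    return (2.0 / sigma**2) * (math.log(x) - p * math.log(N))

def gbm_mean_exit_mc(x, N, sigma, n_paths=2000, dt=1e-2, seed=2):
    """Monte Carlo mean exit time, driftless GBM, barriers checked on the grid
    (so the estimate is slightly biased upward)."""
    rng = random.Random(seed)
    drift = -0.5 * sigma**2 * dt          # mu = 0, so log-drift is -sigma^2/2
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        X, steps = x, 0
        while 1.0 < X < N:
            X *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            steps += 1
        total += steps * dt
    return total / n_paths
```

With x = 2, N = 4, σ = 0.5, the assumed formula gives about 1.85 time units, in line with the simulated value.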
To complete our work, we need to find the value of the mean number of transitions d_j in the case when μ ≠ 0 and Δx > 0. To do so, we must solve the nonhomogeneous difference equation with nonconstant coefficients (3.3). We can obtain the general solution of the corresponding homogeneous equation; however, we then need to find a particular solution of the nonhomogeneous equation, which entails evaluating a difficult sum. Instead, we will use the fact that we know how to compute d_j when μ = 0.
Let us go back to the geometric Brownian motion {X_t, t ≥ 0} defined in (1.1). When we make the transformation Y_t = X_t^(1−2c), the interval [1, N] becomes [1, N^(1−2c)] (respectively [N^(1−2c), 1]) if c < 1/2 (respectively c > 1/2). Assume first that c < 1/2. We have (see (1.1))

dY_t = (1 − 2c) σ Y_t dB_t,

so that {Y_t, t ≥ 0} is a geometric Brownian motion with infinitesimal mean equal to zero. Now, we consider the discrete-time Markov chain with state space {1, 1 + Δx, ..., 1 + kΔx = N^(1−2c)} and transition probabilities given by (1.13). Proceeding as above, we obtain the expression in (3.9) for the mean number of transitions d_j from state 1 + jΔx. This time, we replace 1 + jΔx by n^(1−2c) and 1 + kΔx by N^(1−2c), and each transition of the Markov chain now takes

Δt = Δx² / [(1 − 2c)² A]  (3.18)

time units. Taking the limit as Δx decreases to zero and k → ∞, we obtain (making use of the formulas in (3.12)) the limiting expression (3.19). This formula corresponds to the function m(x) in (1.11) when c < 1/2.

When c > 1/2, we consider the Markov chain having the analogous state space and transition probabilities given by (1.13). To obtain d_j, we must again solve the difference equation (3.7), subject to the boundary conditions d_0 = d_k = 0. However, once we have obtained the solution, we must now replace 1 + jΔx by (1 + jΔx)^(−1) and 1 + kΔx by (1 + kΔx)^(−1). Moreover, because the two boundaries are thereby interchanged, we must also replace j accordingly (and similarly for k).
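The claim that Y_t = X_t^(1−2c) has zero infinitesimal mean can be verified directly with Itô's formula; the short derivation below is our own check, using c = μ/σ² and (dX_t)² = σ²X_t² dt:

```latex
\begin{aligned}
dY_t &= (1-2c)\,X_t^{-2c}\,dX_t + \tfrac{1}{2}(1-2c)(-2c)\,X_t^{-2c-1}\,(dX_t)^2 \\
     &= (1-2c)\,Y_t\bigl[(\mu - c\sigma^2)\,dt + \sigma\,dB_t\bigr]
      = (1-2c)\,\sigma\,Y_t\,dB_t ,
\end{aligned}
```

since μ − cσ² = 0. Thus {Y_t} is indeed a geometric Brownian motion with zero drift and volatility |1 − 2c|σ, which is why the μ = 0 solution can be reused after the change of variable.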
Remark 3.3. The quantity d_j here actually represents the mean number of steps needed to end the game when the Markov chain starts from state 1/(1 + jΔx), with j ∈ {0, ..., k}.
We obtain the expression in (3.27). Finally, when c = 1/2, the transformation ln X_t yields a Wiener process with infinitesimal mean equal to zero. Then, we must solve the corresponding nonhomogeneous difference equation, subject to the boundary conditions d_0 = d_k = 0. We find the explicit solution and, with ln n := jΔx and ln N = kΔx, we get the expression that corresponds to the function m(x) in (1.11) when c = 1/2.
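On the log scale, the natural approximating chain for the driftless Wiener process is a simple symmetric random walk on {0, Δx, ..., kΔx = ln N} (our reading of the construction; the exact transition probabilities are those of (1.13)). For a simple symmetric walk, the mean absorption time is d_j = j(k − j), which the sketch below (our own illustration) confirms by solving the boundary-value problem d_j = (d_{j+1} + d_{j−1})/2 + 1, d_0 = d_k = 0, with a shooting method.

```python
def mean_steps_symmetric_walk(k):
    """Mean number of steps d_j to absorption at 0 or k for a simple symmetric
    random walk: d_j = (d_{j+1} + d_{j-1})/2 + 1, with d_0 = d_k = 0."""
    def forward(t):
        # Forward recursion d_{j+1} = 2 d_j - d_{j-1} - 2, starting from d_1 = t.
        d = [0.0, t]
        for j in range(1, k):
            d.append(2.0 * d[j] - d[j - 1] - 2.0)
        return d
    # d_k depends affinely on the trial slope t; solve for the t with d_k = 0.
    d0, d1 = forward(0.0), forward(1.0)
    t = -d0[k] / (d1[k] - d0[k])
    return forward(t)
```

Multiplying d_j = j(k − j) by a time step of order Δx² per transition and writing jΔx = ln x, kΔx = ln N gives, in the limit, the classical mean exit time ln x (ln N − ln x)/σ² of a driftless Wiener process with variance parameter σ², consistent with m(x) at c = 1/2.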

Concluding Remarks
We have obtained explicit and exact formulas for the quantities p_j and d_j, defined respectively in (1.17) and (1.18), for various discrete-time Markov chains that converge, at least in a finite interval, to a geometric Brownian motion. In the case of the probability p_j of hitting the boundary N before 1, because the appropriate difference equation is homogeneous, we were able to compute this probability for any value of c = μ/σ² by considering a Markov chain with state space {1, 1 + Δx, ..., 1 + kΔx = N}. However, to obtain d_j, we first solved the appropriate difference equation when c = 0. Then, making use of the formula that we obtained, we were able to deduce the solution for any c ∈ R by considering a Markov chain that converges to a transformation of the geometric Brownian motion. The transformed process was a geometric Brownian motion with μ = 0 (if c ≠ 1/2), or a Wiener process with μ = 0 (if c = 1/2). In each case, we showed that the expression that we derived tends to the corresponding quantity for the geometric Brownian motion. In the case of the mean duration of the game, the time increment Δt had to be chosen suitably.
As is well known, the geometric Brownian motion is a very important model, in financial mathematics in particular. In practice, stock and commodity prices vary discretely over time. Therefore, it is interesting to derive formulas for p_j and d_j for Markov chains that are as close as we want to the diffusion process. Now that we have computed explicitly the value of p_j and d_j for Markov chains having transition probabilities that involve parameters μ and σ² that are the same for all the states, we could consider asymmetric Markov chains. For example, at first the state space could be {1, ..., N_1, ..., N_2}, and the parameter μ could take one value below N_1 and another above N_1 (and similarly for σ²). When the Markov chain hits N_1, it goes to N_1 + 1 (respectively N_1 − 1) with probability p_0 (respectively 1 − p_0). By increasing the state space to {1, 1 + Δx, ..., 1 + k_1Δx = N_1, ..., 1 + k_2Δx = N_2} and taking the limit as Δx decreases to zero (with k_1 and k_2 going to infinity appropriately), we would obtain the quantities that correspond to p_j and d_j for an asymmetric geometric Brownian motion. The possibly different values of σ² depending on the state n of the Markov chain reflect the fact that volatility is likely to depend on the price of the stock or the commodity.
Finally, we could try to derive the formulas for p j and d j for other discrete-time Markov chains that converge to important one-dimensional diffusion processes.