First Passage Time of a Markov Chain That Converges to Bessel Process

Assumption 2. For all $\delta > 0$ and all $T > 0$,
$$\lim_{\Delta t \to 0}\; \sup_{|r| \le \delta,\; 0 \le t \le T} \left| R^{+}_{\Delta t}(r,t) - r(t) \right| = 0, \qquad \lim_{\Delta t \to 0}\; \sup_{|r| \le \delta,\; 0 \le t \le T} \left| R^{-}_{\Delta t}(r,t) - r(t) \right| = 0. \quad (19)$$

Assumption 3. For all $\delta > 0$ and all $T > 0$,
$$\lim_{\Delta t \to 0}\; \sup_{|r| \le \delta,\; 0 \le t \le T} \left| \mu_{\Delta t}(r,t) - \frac{a}{r(t)} \right| = 0, \qquad \lim_{\Delta t \to 0}\; \sup_{|r| \le \delta,\; 0 \le t \le T} \left| \sigma_{\Delta t}(r,t) - 1 \right| = 0. \quad (20)$$

Theorem 4. Under Assumptions 1–3, $R_{t_j} \Rightarrow R_t$, the solution of (1), where $\Rightarrow$ denotes weak convergence.

Next, we assume that $R_{t_j}$ is bounded for every $j = 0, 1, \dots, n$.

3. Discrete Value of the Probability and the Average Number of Transitions of the First Passage Time of the Bessel Process

Let
$$T_j = \inf\{m > 0 : R_{t_m} = 1 \text{ or } N \mid R_{t_0} = 1 + j\,\Delta r\}, \quad (21)$$
$$p_j = P[R_{T_j} = N], \qquad d_j = E[T_j]. \quad (22)$$

In this section, we will compute the quantity $p_j$ for $j \in \{1, \dots, k-1\}$. We will show that $p_j$ converges to the function $p(r)$ for the Bessel process as $\Delta r$ decreases to zero and $k$ tends to infinity in such a way that $1 + k\,\Delta r$ remains equal to $N$.

3.1. Computation of the Probability $p_j$

3.1.1. Assume first that $\Delta r = 1$. Then the state space is $\{1, 2, \dots, N\}$ and the transition probabilities become
$$p_{j,j+1} = \frac{1}{2A}\left(1 + \frac{a}{j}\right), \qquad p_{j,j-1} = \frac{1}{2A}\left(1 - \frac{a}{j}\right), \qquad p_{j,j} = 1 - \frac{1}{A}, \quad (23)$$
for $j \in \{2, \dots, N-1\}$. It is well known that the probability defined in (22) satisfies the following difference equation:
$$p_j = p_{j,j+1}\, p_{j+1} + p_{j,j-1}\, p_{j-1} + p_{j,j}\, p_j. \quad (24)$$
Equation (24) can be rewritten as
$$w_{j+1} - \frac{j-a}{j+a}\, w_j = 0, \quad \text{where } w_j = p_j - p_{j-1}. \quad (25)$$
We find that the solution of this first-order difference equation that satisfies the boundary condition $w_2 = p_2$ (recall that $p_1 = 0$) is given by
$$w_{j+1} = f(a)\, \frac{\Gamma(j+1-a)}{\Gamma(j+1+a)}, \quad \text{where } f(a) = -\frac{(1+a)\,\Gamma(a)}{(1-a)\,\Gamma(-a)}. \quad (26)$$
We have the following lemma.

Lemma 5. For $a \neq 1/2$, the unique solution of (24) subject to the boundary conditions (4) is given by
$$p_j = \frac{(j+a)\,\Gamma(j+1-a)/\Gamma(j+1+a) - (1+a)\,\Gamma(2-a)/\Gamma(2+a)}{(N+a)\,\Gamma(N+1-a)/\Gamma(N+1+a) - (1+a)\,\Gamma(2-a)/\Gamma(2+a)} \quad \text{for } j = 1, 2, \dots, N.$$
(27)

Proof. We have
$$p_{j+1} - p_j = f(a)\,\frac{\Gamma(j+1-a)}{\Gamma(j+1+a)} = \frac{f(a)}{\Gamma(2a)} \int_0^1 t^{j}\, t^{-a} (1-t)^{2a-1}\, dt,$$
$$p_j - p_1 = \sum_{k=1}^{j-1} (p_{k+1} - p_k) = \frac{f(a)}{\Gamma(2a)} \int_0^1 \frac{t - t^{j}}{1-t}\, t^{-a} (1-t)^{2a-1}\, dt. \quad (28)$$
Since $p_1 = 0$ and $B(z,w) = \int_0^1 t^{z-1}(1-t)^{w-1}\,dt = \int_0^\infty t^{z-1}/(1+t)^{z+w}\,dt = \Gamma(z)\Gamma(w)/\Gamma(z+w)$, we obtain
$$p_j = \frac{f(a)}{\Gamma(2a)} \int_0^1 t^{1-a}(1-t)^{2a-2}\,dt - \frac{f(a)}{\Gamma(2a)} \int_0^1 t^{j-a}(1-t)^{2a-2}\,dt = f(a)\,\frac{\Gamma(2a-1)}{\Gamma(2a)} \left[ (a+1)\,\frac{\Gamma(2-a)}{\Gamma(2+a)} - (j+a)\,\frac{\Gamma(j+1-a)}{\Gamma(j+a+1)} \right]. \quad (29)$$
By applying the boundary conditions (4), we obtain
$$1 = p_N = f(a)\,\frac{\Gamma(2a-1)}{\Gamma(2a)} \left[ (a+1)\,\frac{\Gamma(2-a)}{\Gamma(2+a)} - (N+a)\,\frac{\Gamma(N+1-a)}{\Gamma(N+a+1)} \right],$$
so that
$$f(a)\,\frac{\Gamma(2a-1)}{\Gamma(2a)} = \frac{1}{(a+1)\,\Gamma(2-a)/\Gamma(2+a) - (N+a)\,\Gamma(N+1-a)/\Gamma(N+a+1)}. \quad (30)$$
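The closed form in Lemma 5 can be checked numerically against a direct solution of the difference equation (24). The sketch below uses arbitrary test values ($a = 0.75$, $N = 20$; not from the paper), exploits the fact that the factor $1/(2A)$ in (23) cancels out of (24), and compares the forward-substituted solution with the gamma-function formula.

```python
import math

a, N = 0.75, 20  # arbitrary test values (a != 1/2); state space {1, ..., N}

def p_closed(j):
    """Closed-form hitting probability from Lemma 5 (Delta r = 1)."""
    G = lambda m: math.gamma(m + 1 - a) / math.gamma(m + 1 + a)
    num = (j + a) * G(j) - (1 + a) * G(1)
    den = (N + a) * G(N) - (1 + a) * G(1)
    return num / den

# With the probabilities (23), the factor 1/(2A) cancels and (24) becomes
#   2 p_j = (1 + a/j) p_{j+1} + (1 - a/j) p_{j-1},   j = 2, ..., N-1.
# Solve by forward substitution: p_1 = 0, trial value p_2 = 1,
# then rescale the whole solution so that p_N = 1.
p = [0.0, 0.0, 1.0]  # p[0] unused, p[1] = 0, trial p[2] = 1
for j in range(2, N):
    p.append((2 * p[j] - (1 - a / j) * p[j - 1]) / (1 + a / j))
p = [x / p[N] for x in p]

for j in range(1, N + 1):
    assert abs(p[j] - p_closed(j)) < 1e-9
```

Since the solutions of the homogeneous recursion with $p_1 = 0$ form a one-parameter family, the final rescaling to $p_N = 1$ pins down the unique solution of the boundary-value problem.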


1. Introduction
The study of hitting-time probabilities for stochastic differential equations is an active area of research, and such probabilities are of great interest in many applications: in finance, the study of path-dependent exotic options such as barrier options; in percolation theory [1]; in optimal control problems [2]; and in neuroscience [3]. In this paper we investigate the discrete version of the Bessel process defined by a stochastic differential equation. It is well known [4] that, given a diffusion process defined by a stochastic differential equation, we can construct a discrete Markov chain that converges weakly to the solution of this stochastic differential equation (by making use of a binomial approximation). In this paper we show that the first-passage-time probability and the number of transitions of this discrete Markov chain tend to the corresponding quantities for the continuous-time Bessel process.
The discrete versions of stochastic processes are interesting in themselves; for instance, in quantum mechanics the motion of a particle should be essentially discontinuous and random. Moreover, in [5] the authors present an application of the discrete version of the Cox-Ingersoll-Ross process in hydrology.
In this paper, we consider the so-called gambler's ruin problem for a discrete-time Markov chain that converges to a Bessel process. Phenomena governed by Bessel processes abound in the physical world, as in the case of growth phenomena governed by Stochastic Loewner Evolution (SLE) [1]. Other examples include first hitting times of Bessel processes in the study of systems at or near the point of phase transition in statistical physics. In finance, a typical example is the study of stock prices: it is well known that in a volatility-stabilized market stock prices can be represented in terms of Bessel processes. Since a stock price does not vary completely continuously, the discrete formulas derived in the present paper should be of interest.
The paper is organized as follows. In Section 2, we briefly describe the transition probabilities derived from the Bessel stochastic differential equation. Our main contribution is in the third section (Section 3.2): we find the explicit formula for the average number of transitions needed to end the game, which was not obtained in [6]. We also show that the sequence of first-passage-time probabilities and the average number of transitions needed to end the game converge (in the Euclidean metric) to the corresponding values in the continuous case.

2. Bessel Process Defined by a Stochastic Differential Equation and Simple Binomial Approximation
2.1. Preliminaries on Bessel Processes. We consider the Bessel process defined by the following stochastic differential equation:
$$dR_\nu(t) = \frac{\nu + 1/2}{R_\nu(t)}\, dt + dW(t), \quad (1)$$
where $W(t)$ is a standard Brownian motion. Next, let $a = \nu + 1/2$, and if there is no ambiguity we remove the index $\nu$ on $R$. Assume that $R(0) = r \in (1, N)$, where $N \in \mathbb{N}$ (for simplicity), and define
$$T = \inf\{t > 0 : R(t) = 1 \text{ or } N\}, \qquad p(r) = P[R(T) = N \mid R(0) = r]. \quad (2)$$
As is well known (see, e.g., [7], page 220), the probability $p(r)$ satisfies the ordinary differential equation
$$\frac{1}{2}\, p''(r) + \frac{a}{r}\, p'(r) = 0, \quad (3)$$
subject to the boundary conditions
$$p(1) = 0, \qquad p(N) = 1. \quad (4)$$
We easily find that, if $a \neq 1/2$, the solution is
$$p(r) = \frac{r^{1-2a} - 1}{N^{1-2a} - 1}, \quad (5)$$
and, when $a = 1/2$, the solution is
$$p(r) = \frac{\ln r}{\ln N}. \quad (6)$$
Let
$$d(r) = E[T \mid R(0) = r]. \quad (7)$$
In [7], we see that the function $d(r)$ satisfies the second-order ordinary differential equation
$$\frac{1}{2}\, d''(r) + \frac{a}{r}\, d'(r) = -1, \qquad d(1) = d(N) = 0. \quad (8)$$
The general solution of this equation is
$$d(r) = C_1 + C_2\, r^{1-2a} - \frac{r^2}{1+2a}, \quad (9)$$
where the constants $C_1$ and $C_2$ are determined by the boundary conditions $d(1) = d(N) = 0$.

2.2. Preliminaries: Binomial Approximation. In this section we recall the binomial approximation briefly; for more details please see [4]. We wish to find a sequence of stochastic processes that converges in distribution to the process (1) over the time interval $[0, T]$. Take the interval $[0, T]$ and chop it into $n$ equal pieces $\{0, t_1, t_2, \dots, t_n\}$ of length $\Delta t \equiv T/n$, $t_j = j\,\Delta t$. Define a sequence $R^{\Delta t}$ of binomial approximations of (1), with $R_{t_0} = 1 + j\,\Delta r$, which is constant between nodes, such that at any given node $R_{t_j}$ the process jumps up to $R^{+}_{t_j} = R_{t_j} + \Delta r$ (resp., down to $R^{-}_{t_j} = R_{t_j} - \Delta r$) with probability $p_{R_{t_j}, R_{t_j}+1}$ (resp., $p_{R_{t_j}, R_{t_j}-1}$), and stays at the node with probability $1 - p_{R_{t_j}, R_{t_j}+1} - p_{R_{t_j}, R_{t_j}-1}$.
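The two boundary-value problems of Section 2.1 can be checked numerically. The sketch below (arbitrary test values $a = 0.75$, $N = 10$, not from the paper) takes the standard solutions $p(r) = (r^{1-2a} - 1)/(N^{1-2a} - 1)$ and $d(r) = C_1 + C_2\,r^{1-2a} - r^2/(1+2a)$, computes $C_1, C_2$ from the boundary conditions, and verifies by central finite differences that $\tfrac{1}{2}u'' + (a/r)u'$ equals $0$ and $-1$, respectively.

```python
# Finite-difference check (a sketch) of the Section 2.1 closed forms:
#   (1/2) p'' + (a/r) p' = 0,   p(1) = 0, p(N) = 1
#   (1/2) d'' + (a/r) d' = -1,  d(1) = d(N) = 0
# a and N are arbitrary test values with a != 1/2.
a, N = 0.75, 10.0
b = 1 + 2 * a

def p(r):
    return (r**(1 - 2 * a) - 1) / (N**(1 - 2 * a) - 1)

# Constants of d(r) = C1 + C2 r^(1-2a) - r^2/(1+2a), from d(1) = d(N) = 0.
C2 = (1 - N**2) / (b * (1 - N**(1 - 2 * a)))
C1 = 1 / b - C2

def d(r):
    return C1 + C2 * r**(1 - 2 * a) - r**2 / b

h = 1e-4
for r in (2.0, 5.0, 8.0):
    dp = (p(r + h) - p(r - h)) / (2 * h)           # central p'(r)
    d2p = (p(r + h) - 2 * p(r) + p(r - h)) / h**2  # central p''(r)
    assert abs(0.5 * d2p + (a / r) * dp) < 1e-5
    dd = (d(r + h) - d(r - h)) / (2 * h)
    d2d = (d(r + h) - 2 * d(r) + d(r - h)) / h**2
    assert abs(0.5 * d2d + (a / r) * dd + 1) < 1e-3
```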

The local drift of $R^{\Delta t}$ is given by
$$E\left[\,R_{t_{j+1}} - R_{t_j} \mid R_{t_j}\,\right] = \Delta r\left(p_{R_{t_j},R_{t_j}+1} - p_{R_{t_j},R_{t_j}-1}\right) = \frac{a}{R_{t_j}}\,\Delta t, \quad (12)$$
and the local second moment of $R^{\Delta t}$ is given by
$$E\left[\,(R_{t_{j+1}} - R_{t_j})^2 \mid R_{t_j}\,\right] = \Delta r^2\left(p_{R_{t_j},R_{t_j}+1} + p_{R_{t_j},R_{t_j}-1}\right) = \Delta t + \left(\frac{a}{R_{t_j}}\,\Delta t\right)^2. \quad (13)$$
On the other hand,
$$\Delta r^2 = A\,\Delta t. \quad (14)$$
By solving (12), (13), and (14), we obtain
$$p_{R_{t_j},R_{t_j}+1} = \frac{1}{2A}\left(1 + \frac{a}{R_{t_j}}\,\Delta r + \frac{a^2}{R_{t_j}^2}\,\Delta t\right), \qquad p_{R_{t_j},R_{t_j}-1} = \frac{1}{2A}\left(1 - \frac{a}{R_{t_j}}\,\Delta r + \frac{a^2}{R_{t_j}^2}\,\Delta t\right).$$
Since $1/R_{t_j}$ is bounded, we can write $(a/R_{t_j})^2\,\Delta t = O(\Delta t)$; hence, neglecting this term, we obtain the following transition probabilities $p_{R_{t_j},R_{t_j}+1}$, $p_{R_{t_j},R_{t_j}-1}$, and $p_{R_{t_j},R_{t_j}}$, given by
$$p_{R_{t_j},R_{t_j}+1} = \frac{1}{2A}\left(1 + \frac{a}{R_{t_j}}\,\Delta r\right), \qquad p_{R_{t_j},R_{t_j}-1} = \frac{1}{2A}\left(1 - \frac{a}{R_{t_j}}\,\Delta r\right), \qquad p_{R_{t_j},R_{t_j}} = 1 - \frac{1}{A}. \quad (18)$$
We state the following assumptions, under which $R^{\Delta t}$ converges weakly to $R_t$ (see [8]).
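As a quick consistency sketch (test values $a$, $A$, $\Delta t$ are arbitrary, not from the paper): with $\Delta r^2 = A\,\Delta t$, the transition probabilities $p_{\pm} = \frac{1}{2A}(1 \pm \frac{a}{R}\Delta r)$ and $p_{\text{stay}} = 1 - \frac{1}{A}$ reproduce the local drift $(a/R)\,\Delta t$ and local second moment $\Delta t$ of the Bessel diffusion, up to the neglected $O(\Delta t^2)$ term.

```python
# Check that the binomial approximation's one-step moments match the
# drift and diffusion of the Bessel SDE.  Test values only.
a, A = 0.75, 4.0
dt = 1e-3
dr = (A * dt) ** 0.5  # Delta r^2 = A * Delta t

for R in (1.5, 2.0, 5.0):
    p_up = (1 + (a / R) * dr) / (2 * A)
    p_down = (1 - (a / R) * dr) / (2 * A)
    p_stay = 1 - 1 / A
    assert abs(p_up + p_down + p_stay - 1) < 1e-15  # valid distribution
    drift = dr * (p_up - p_down)        # one-step mean displacement
    second = dr**2 * (p_up + p_down)    # one-step second moment
    assert abs(drift - (a / R) * dt) < 1e-12
    assert abs(second - dt) < 1e-12
```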

This expression corresponds to the function $p(r)$ given in (6), obtained when $a = 1/2$. Finally, we have
$$\frac{\Gamma(z+\alpha)}{\Gamma(z+\beta)} \sim z^{\alpha-\beta}$$
as $|z|$ tends to infinity (if $|\arg(z+\alpha)| < \pi$; see [9]). Hence, in the case $a \neq 0, 1/2$, we can write that
$$\lim_{\Delta r \to 0} p_j = \frac{r^{1-2a} - 1}{N^{1-2a} - 1}.$$
Therefore, we retrieve the formula for $p(r)$ in (5). In the next section, we will derive the formulas that correspond to the function $d(r)$ in Section 2.1.
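The gamma-ratio asymptotic invoked here can be illustrated numerically. The sketch below (arbitrary test value $a = 0.75$, not from the paper) sets $\alpha = 1 - a$ and $\beta = 1 + a$, so that the ratio behaves like $z^{-2a}$, and checks the relative error is $O(1/z)$ using log-gamma for numerical stability.

```python
import math

# Numerical illustration of Gamma(z + alpha) / Gamma(z + beta) ~ z^(alpha - beta).
a = 0.75  # arbitrary test value
alpha, beta = 1 - a, 1 + a

for z in (1e3, 1e5):
    # lgamma avoids overflow for large arguments.
    ratio = math.exp(math.lgamma(z + alpha) - math.lgamma(z + beta))
    rel_err = abs(ratio / z**(alpha - beta) - 1)
    assert rel_err < 10 / z  # the first correction term is O(1/z)
```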

3.2. Computation of the Mean Number of Transitions $d_j$ Needed to End the Game

We now turn to the problem of computing the mean number $d_j$ of transitions that the Markov chain $\{R_{t_j},\, j = 0, 1, \dots\}$, starting from $R_{t_0} = 1 + j\,\Delta r$, takes to reach either $1$ or $1 + k\,\Delta r = N$. Unlike in [6], we find the explicit formula for the average number of transitions needed to end the game.
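The mean transition counts satisfy the linear system $d_j = 1 + p_{j,j+1}\,d_{j+1} + p_{j,j-1}\,d_{j-1} + p_{j,j}\,d_j$ with $d_1 = d_N = 0$, which with the probabilities (23) (so $\Delta r = 1$) rearranges to $d_j - \frac{1}{2}(1 + a/j)\,d_{j+1} - \frac{1}{2}(1 - a/j)\,d_{j-1} = A$. The sketch below (arbitrary test values $a$, $N$, $A$, not from the paper; $A \ge 1$ keeps $p_{j,j} \ge 0$) solves this tridiagonal system with the Thomas algorithm; it is a numerical stand-in for, not a statement of, the explicit formula derived in the paper.

```python
# Solve the tridiagonal system for the mean number of transitions d_j.
a, N, A = 0.75, 20, 2.0  # arbitrary test values
n = N - 2                # interior states 2, ..., N-1

lower = [-(1 - a / j) / 2 for j in range(2, N)]  # couples to d_{j-1}
upper = [-(1 + a / j) / 2 for j in range(2, N)]  # couples to d_{j+1}
diag = [1.0] * n
rhs = [float(A)] * n     # boundary terms vanish since d_1 = d_N = 0

# Thomas algorithm: forward elimination, then back substitution.
for i in range(1, n):
    m = lower[i] / diag[i - 1]
    diag[i] -= m * upper[i - 1]
    rhs[i] -= m * rhs[i - 1]
d = [0.0] * n
d[-1] = rhs[-1] / diag[-1]
for i in range(n - 2, -1, -1):
    d[i] = (rhs[i] - upper[i] * d[i + 1]) / diag[i]
# d[i] is the mean number of transitions starting from state j = i + 2.

assert all(x > 0 for x in d)  # mean exit counts must be positive
```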