THE INTEGRAL LIMIT THEOREM IN THE FIRST PASSAGE PROBLEM FOR SUMS OF INDEPENDENT NONNEGATIVE LATTICE VARIABLES
Y. P. Virchenko and M. I. Yastrubenko

The integral limit theorem for the probability distribution of the random number ν_m of summands in the sum ∑_{k=1}^{ν_m} ξ_k is proved. Here, ξ_1, ξ_2, … are nonnegative, mutually independent, identically distributed lattice random variables, and ν_m is defined by the condition that the sum exceeds the given level m ∈ ℕ for the first time when the number of terms equals ν_m.


Introduction
The following problem arises in some applications of the theory of random processes.
Let {ξ(t); t ∈ R_+ = [0,∞)} be a stationary ergodic random process having nonnegative trajectories with probability one. Consider the random process {J[t;ξ]; t ∈ [0,∞)} defined as a functional on the process {ξ(t); t ∈ [0,∞)},

J[t;ξ] = ∫_0^t ξ(s) ds. (1.1)

It is naturally implied that the process {ξ(t)} is measurable with probability one. Further, let each trajectory of the process {ξ(t)} with probability one have no temporal interval where it equals zero. It is required to calculate the probability distribution of the random time τ that is a solution of

J[τ;ξ] = E (1.2)

for a given value E > 0. This is a well-posed random variable. First, if there exists a solution of (1.2), then it is unique under the restrictions pointed out. Indeed, the function J[t;ξ] increases in t for each realization ξ(t) for which the integral (1.1) is defined at t ∈ [0,∞); hence its graph may cross the level E only once, and a constancy interval is absent in this situation. Second, due to the ergodicity of the stationary process {ξ(t); t ∈ [0,∞)}, the equality

lim_{t→∞} t^{−1} J[t;ξ] = Eξ(s) (1.3)

is fulfilled with probability one. Then, choosing an ε ∈ (0, Eξ(s)) (hereinafter, the symbol E denotes the mathematical expectation), there exists almost surely, for this ε and a given realization ξ(t), a random number θ such that t^{−1} J[t;ξ] > Eξ(s) − ε at t > θ. Then J[t;ξ] > t(Eξ(s) − ε) and, consequently, a solution of (1.2) exists with probability one.
The calculation problem of the probability distribution of the random variable τ defined by (1.2) arises, for example, in the control theory of stochastic systems and in reliability theory (Homenko [4]), in the statistical theory of material destruction (Virchenko [6,7]), and in statistical radiophysics (Mazmanishvili [5]). Note that we may expect the probability distribution pointed out to have a universal behaviour in some sense when E tends to infinity. This is due to the ergodicity of the process {ξ(t); t ∈ [0,∞)}, provided the convergence to the limit (1.3) is sufficiently fast. In this case, the integral in (1.2) may be considered as a sum of a large number of weakly dependent, identically distributed random variables, so that on an interval [0,T] it is approximately equal to T·Eξ(s) with overwhelming probability. This circumstance makes the study of the probability distribution of the random variable τ a very important problem from the viewpoint of the mentioned applications.
The problem described above admits some natural generalizations. This is important for its study since, within the framework of a more general problem setting, one may find particular cases that are simpler from the analytical viewpoint. The condition of the almost sure absence of an interval where ξ(t) = 0 for each process trajectory is not essential. If we define the variable τ by

τ = inf{t ≥ 0 : J[t;ξ] ≥ E},

then we may ignore this condition. The arguments guaranteeing the nonemptiness of the set where the inequality is fulfilled are the same as in the proof above of the existence of a solution of (1.2). Moreover, one may pose the analogous problem for random processes with discrete time, that is, for random stationary ergodic sequences {ξ_k; k ∈ N} with ξ_k ≥ 0. Such a problem arises in mathematical statistics, namely in the so-called sequential statistical analysis. In the case of sequences, it is necessary to introduce (Wald [9], Basharinov [1]) the process {J_n[ξ]; n ∈ N} with realizations

J_n[ξ] = ∑_{k=1}^{n} ξ_k, n ∈ N,

and the random variable ν_E defined as

ν_E = min{n ∈ N : J_n[ξ] ≥ E}.

In particular, such a problem makes sense in the case when the sequence {ξ_k; k ∈ N} is a collection of independent, identically distributed, nonnegative random variables. Just this case is investigated in our work, under an additional condition: namely, we assume that the random variables ξ_k are lattice.
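The discrete-time setting is easy to explore numerically. The following sketch (the lattice distribution of ξ and all function names are hypothetical, chosen only for illustration) simulates ν_E = min{n : ξ_1 + ··· + ξ_n ≥ E} and checks that its mean behaves like E/Eξ, in agreement with the ergodic argument above.

```python
import random

def first_passage_nu(sample_xi, E, rng):
    """Smallest n with xi_1 + ... + xi_n >= E (the variable nu_E)."""
    total, n = 0, 0
    while total < E:
        total += sample_xi(rng)
        n += 1
    return n

# Hypothetical lattice distribution: Pr{xi=0}=0.3, Pr{xi=1}=0.5, Pr{xi=2}=0.2.
def sample_xi(rng):
    u = rng.random()
    return 0 if u < 0.3 else (1 if u < 0.8 else 2)

rng = random.Random(1)
E = 200
mean_xi = 0.5 * 1 + 0.2 * 2                      # E xi = 0.9
runs = [first_passage_nu(sample_xi, E, rng) for _ in range(2000)]
avg = sum(runs) / len(runs)
print(avg, E / mean_xi)                          # both are close to ~222
```

The small systematic excess of the sample mean over E/Eξ is the overshoot of the sum above the level E at the passage moment.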

The problem setting
As mentioned above, the calculation problem of the probability distribution P_m(n) = Pr{ν_m = n} of the random variable ν_m was considered by A. Wald when he developed the sequential statistical analysis. The random variable ν_m is determined by the formula

ν_m = min{n ∈ N : η_n ≥ m},

where m ∈ N, n ∈ N, η_l = ∑_{k=1}^{l} ξ_k, l ∈ N, and ξ_1, ξ_2, ... is a sequence of independent values, ξ_k ∈ {0,1}, k = 1, 2, ..., with the success probability p = Pr{ξ = 1} > 0.
Probabilities P_m(n) are approximated in the following way: P_m(n) ≈ f(x) dx. Here, x = np/m, dx = p/m, and f(x) (2.2) is the density of the probability distribution of a suitable continuous random variable. Further, this result was extended (Virchenko [8]) to the general case of arbitrary sequences ξ_1, ξ_2, ... of statistically independent, identically distributed, nonnegative, lattice random variables under the following restrictions on the probability distribution p_k = Pr{ξ = k} of their typical representative ξ. First, the nontriviality condition 1 > p_0 > 0 for the distribution p_k must hold. Second, the fast decrease lim_{k→∞} (p_k)^{1/k} = ρ^{−1} < 1 of the probabilities must take place. If m, n → ∞ and m = nEξ + O(n^{1/2}), then the probability distribution of the variable ν_m is approximated by the analogous formula with the density (2.4). Equations (2.2) and (2.4) can be considered as local limit theorems for the probability distribution of the variable ν_m. They present asymptotic formulas valid as m → ∞ under the mentioned restrictions on the variation of n. But these formulas are defined only up to a factor ∼ 1 in this limit. In connection with this fact, a natural further step is to prove the corresponding integral theorem as m, n → ∞ for the distribution P_m(n). Such a theorem, unlike a local one, determines the limit probability distribution uniquely.
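In Wald's Bernoulli case the distribution P_m(n) is available in closed form: ν_m is the index of the trial at which the m-th success occurs, so Pr{ν_m = n} = C(n−1, m−1) p^m (1−p)^{n−m} (negative binomial), which gives a convenient reference point for the approximations above. A minimal sketch (the parameter values are arbitrary):

```python
from math import comb

def p_m(n, m, p):
    """Pr{nu_m = n} for Bernoulli summands: the m-th success occurs
    exactly at trial n (negative binomial distribution)."""
    if n < m:
        return 0.0
    return comb(n - 1, m - 1) * p ** m * (1 - p) ** (n - m)

m, p = 30, 0.4                                    # arbitrary illustrative values
total = sum(p_m(n, m, p) for n in range(m, 2000))
mean = sum(n * p_m(n, m, p) for n in range(m, 2000))
# total ~ 1; mean ~ m/p, since here eta_{nu_m} = m exactly (no overshoot)
# and Wald's identity gives E nu_m * E xi = m.
print(total, mean)
```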
Here, we will prove the integral theorem on the probability distribution of the random variable ζ_m obtained by centering and normalizing the variable ν_m.

The integral representation
In this section we obtain the integral representation for the probability P_m(n) ≡ Pr{ν_m = n}. It will be used further for the proof of the integral theorem.
At first, we make use of the following decomposition of the event {ν_m = n} into the events {η_{n−1} = k}, k = 0, ..., m − 1, that are pairwise disjoint:

{ν_m = n} = ⋃_{k=0}^{m−1} ({η_{n−1} = k} ∩ {ξ_n ≥ m − k}). (3.1)

Based on this decomposition and using the total probability formula together with the independence of the variables ξ_1, ξ_2, ..., ξ_n, we obtain the following formula for the probability P_m(n):

P_m(n) = ∑_{k=0}^{m−1} Pr{η_{n−1} = k} Pr{ξ ≥ m − k}. (3.2)

Now, we introduce the generating function F(z) of the probability distribution of the typical representative ξ of the sequence ξ_1, ξ_2, ...:

F(z) = ∑_{k=0}^{∞} p_k z^k. (3.3)

Taking into account the independence of the random variables ξ_1, ξ_2, ..., ξ_n, we have

E z^{η_n} = F^n(z). (3.4)

Using this fact, we express the generating function

G_n(z) = ∑_{m=1}^{∞} P_m(n) z^m (3.5)

via the function F(z). For this, we multiply (3.2) by z^m and sum all those equalities over m ∈ N. As a result, we get

G_n(z) = z (1 − F(z)) (1 − z)^{−1} F^{n−1}(z). (3.6)

The function G_n(z) is analytic in the unit disk {z : |z| < 1}. This follows from the fact that the function F(z) is always analytic in the closed unit disk, since the power series defining F(z) converges uniformly in it:

∑_{k=0}^{∞} p_k |z|^k ≤ ∑_{k=0}^{∞} p_k = 1, |z| ≤ 1. (3.7)

The probability P_m(n) is defined as the mth coefficient of the Taylor series of the function G_n(z). Therefore, the probability P_m(n) may be represented by the Cauchy formula

P_m(n) = (2πi)^{−1} ∮_C G_n(z) z^{−(m+1)} dz, (3.8)

where C = {z : |z| = r}, r < 1, is a closed contour with the positive (counterclockwise) orientation.
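Formula (3.2) can be evaluated directly, since the distribution of η_{n−1} is the (n−1)-fold convolution of {p_k}. The sketch below (with a hypothetical three-point lattice distribution) computes P_m(n) this way and checks that the probabilities sum to one, i.e., that the events {ν_m = n} exhaust the sample space:

```python
import numpy as np

# Hypothetical lattice distribution p_k = Pr{xi = k}, k = 0, 1, 2.
p = np.array([0.3, 0.5, 0.2])

def tail(j, p):
    """Pr{xi >= j}."""
    return float(p[j:].sum()) if j < len(p) else 0.0

def passage_probs(p, m, n_max):
    """P_m(n), n = 1..n_max, via P_m(n) = sum_k Pr{eta_{n-1}=k} Pr{xi >= m-k}."""
    eta = np.array([1.0])                  # distribution of eta_0 = 0
    out = []
    for _ in range(n_max):
        s = sum(eta[k] * tail(m - k, p) for k in range(min(len(eta), m)))
        out.append(s)
        eta = np.convolve(eta, p)          # eta_{n-1} -> eta_n
    return np.array(out)

m = 15
P = passage_probs(p, m, n_max=200)
print(P.sum())                             # ~ 1: the events {nu_m = n} are exhaustive
```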
We formulate the obtained result in the form of a separate statement.

The extremal property of the holomorphic function H(z) with nonnegative coefficients
In this section we prove a theorem on the modulus maximum of a holomorphic function H(z) having nonnegative coefficients in its power series expansion. This property will be used hereinafter in the proof of the limit theorem.
Theorem 4.1 (Fedoryuk [2]). Let H(z) = ∑_{k=0}^{∞} a_k z^k be holomorphic in the disk {z : |z| < R} and let (a) its coefficients satisfy a_k ≥ 0, k = 0, 1, 2, ..., with a_0 > 0; (b) there exist no integer j ≥ 2 such that

H(z) = ∑_{n=0}^{∞} a_{jn} z^{jn}, (4.2)

that is, H(z) is not a function of z^j. Then, for every r_* ∈ (0, R), the maximum of |H(z)| on the circle {z : |z| = r_*} is attained only at the point z = r_*.

Proof. Let z_* be a point of maximum of |H(z)| on the circle {z : |z| = r_*}. We prove that the point z_* coincides with r_* provided a_0 > 0. Assume that z_* ≠ |z_*| = r_*. We will show that there then exists an integer j ≥ 2 for which the representation (4.2) is valid. Thus, we will come to a contradiction with the condition (b) of the theorem formulation. Due to the nonnegativity of a_k, we have the following inequality for any z = r_* e^{iϕ}:

|H(z)| = |∑_{k=0}^{∞} a_k r_*^k e^{ikϕ}| ≤ ∑_{k=0}^{∞} a_k r_*^k = H(r_*).

On the other hand, since the maximum is attained at the point z_* = r_* e^{iϕ} on the circle {z : |z| = r_*}, the equality is realized in the obtained inequality. Because of the fact that a_k ≥ 0, this is possible only if all the summands have the same argument, that is, e^{ikϕ} = 1 for all k with a_k ≠ 0; that the common argument is zero follows from the condition a_0 > 0. Let k_1, k_2, ... be the integers corresponding to the nonzero coefficients of the series. Then there exist integers l_n ∈ N such that k_n ϕ = 2π l_n, l_n ≤ k_n, n = 1, 2, .... Therefore, we obtain k_n = (k_1/l_1) l_n for all n ∈ N. The rational number k_1/l_1 is represented as an irreducible fraction k_1/l_1 = j/l, where j ≥ 2 and l ≥ 1 are relatively prime. The latter is valid since at j = 1 we would have k_n = l_n and, consequently, ϕ = 2π, which is not true. Then k_n = j l_n / l, that is, l_n is divisible by l, l_n = l m_n, m_n ∈ N. Therefore, k_n = j m_n, that is, the function H(z) has the form (4.2).
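Theorem 4.1 is easy to check numerically for a particular H(z): with nonnegative coefficients whose supporting indices have greatest common divisor 1, the modulus maximum on a circle is attained only on the positive real axis. A sketch with arbitrary illustrative coefficients:

```python
import numpy as np

# Hypothetical coefficients a_k >= 0 with a_0 > 0; the nonzero coefficients
# sit at indices {0, 1, 2, 5}, which are not all multiples of any j >= 2,
# so condition (b) of Theorem 4.1 holds.
a = np.array([0.4, 0.3, 0.2, 0.0, 0.0, 0.1])

def H(z):
    return sum(ak * z ** k for k, ak in enumerate(a))

r_star = 0.9
phi = np.linspace(0.0, 2 * np.pi, 20001)
vals = np.abs(H(r_star * np.exp(1j * phi)))
# The modulus maximum over the circle |z| = r_star is attained only at z = r_star.
print(vals.max(), H(r_star))
```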

The integral limit theorem
Main Theorem 5.1. Let the probability distribution {p_k; k ∈ N_+} satisfy the following conditions:
(a) the nontriviality condition 1 > p_0 > 0 holds;
(b) an integer j ≥ 2 such that p_k = 0 for all k not divisible by j is absent (this condition indicates that the unit is the proper (minimal) step of the lattice probability distribution of the random variable ξ);
(c) the fast decrease of the probabilities takes place:
lim_{k→∞} (p_k)^{1/k} = ρ^{−1} < 1. (5.1)

Then, for the random variable

ζ_m = (ν_m − m/Eξ)(m Dξ/(Eξ)³)^{−1/2}

(here Dξ denotes the variance of ξ), obtained by centering and normalizing ν_m, the limit formula

lim_{m→∞} Pr{ζ_m < x} = (2π)^{−1/2} ∫_{−∞}^{x} e^{−u²/2} du

holds for the probability distribution.

Our proof of the theorem is effected in several steps.

(A) We find the expression of the characteristic function of the variable ζ_m. The characteristic function of the variable ν_m is expressed in terms of the integral representation (3.8) in the form (5.4). Then the formula

E e^{itζ_m} = (2πi)^{−1} ∮_C h(z) dz (5.5)

is valid, where the function h(z) is given by (5.6).

(B) We calculate the limit value of the integral (5.5) as m → ∞. For this, we introduce the auxiliary circular contour C′ = {z : |z| = r′} with the negative orientation. The value r′ meets the condition 1 < r′ < ρ; the latter is possible in view of the condition (c) of the theorem. For a given small real number ε > 0, we draw directed segments L_± connecting the contours C and C′; they are characterized by ordered pairs representing their initial and final points. Further, we cut out the small arcs of the contours C and C′ included between the intersection points of these contours with the segments L_±. As a result, we get contours C_ε and C′_ε, with the direction of going around on them corresponding to that on the contours C and C′.
We consider the closed contour L that consists of the sequential passage of the contours C_ε, L_+, C′_ε, and L_−. By the residue theorem,

∮_L h(z) dz = −2πi ∑_k Res_{z=z_k(m)} h(z), (5.7)

where the summation is done over the set of poles {z_k(m)} of the integrand lying between C and C′; they are solutions of equation (5.8) depending on the parameter m. Since the function F(z) is analytic in a small neighborhood of the point z = 1 and (dF(z)/dz)_{z=1} = Eξ ≠ 0 in view of p_0 < 1, the inverse analytic function z(y) exists when the variable varies in this neighborhood. It is defined by F(z(y)) = y and the condition z(1) = 1. (It is clear that it is impossible to guarantee the uniqueness of the solution in the general case without the condition pointed out. For instance, if p_{2k+1} = 0, k = 0, 1, 2, ..., then there is a solution satisfying the condition z(1) = −1 together with the mentioned one.) Further, in view of the condition (b) of the theorem formulation and Theorem 4.1, we may state that, on any circle centered at zero, the function F(z) attains its modulus maximum only on the positive part of the real axis. Therefore, the relevant solution of (5.8) lies near the positive real axis: |F(z)| ≤ F(|z|), and for |z| = 1 the equality |F(z)| = F(1) is reached only at z = 1, so that F(1) > |F(z)| for |z| = 1, z ≠ 1.
Thus, in a sufficiently small neighborhood of the point z = 1, there exists the unique inverse function z(y) of the function F(z). Therefore, since the right-hand side of (5.8) tends to 1 as m → ∞, for sufficiently large m there exists the unique solution z(m) of this equation. In this connection, formula (5.7) takes the form

∮_L h(z) dz = −2πi Res_{z=z(m)} h(z). (5.9)

The integral in the left-hand side of this equality is decomposed into the sum of four integrals (5.10). We pass to the limit ε → +0. Then the integrals over the segments L_+ and L_− compensate each other due to the integration in opposite directions. Furthermore, the contours C_ε and C′_ε turn into the contours C and C′ under such a passage to the limit. Further, the integral over the contour C′ tends to zero as m → ∞ (5.12). Now we proceed to the limit m → ∞ in formula (5.5), taking into account the limiting relationship (5.12). As a result, we get (5.13). (Note that the obtained formula makes sense only if p_0 < 1; otherwise, F(z) ≡ 1 and F′ = 0.)

(C) We calculate the limit in (5.13). For this, we represent z(m) as an expansion in half-integer powers of m^{−1}:

z(m) = 1 + w_1 m^{−1/2} + w_2 m^{−1} + o(m^{−1}). (5.14)

Correspondingly, we represent the generating function F(z) with the same accuracy in the form (5.15), taking into account that F(1) = 1 and F′(1) = Eξ. Using (5.8), we get (5.16). Equating the coefficients at the powers m^{−1/2} and m^{−1} in (5.15) and (5.16), we find the expression (5.17) for w_1 and the equation (5.18) for w_2.

The substitution of these expressions into (5.14) gives us the formula (5.23) for z(m). Using the theorem on the connection between the convergence of a sequence of characteristic functions and the convergence of the corresponding sequence of probability distributions (Gnedenko [3]), we obtain the statement of the theorem.
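The statement of Theorem 5.1 can be illustrated by simulation. The sketch below uses a hypothetical lattice distribution and the normalization ζ_m = (ν_m − m/Eξ)(m Dξ/(Eξ)³)^{−1/2} (the standard renewal-theoretic one, taken here as an assumption consistent with the limit exp(−t²/2)), and compares the empirical distribution function of ζ_m with the standard normal one at two points:

```python
import random
from math import sqrt, erf

rng = random.Random(7)
# Hypothetical lattice distribution: Pr{xi=0}=0.3, Pr{xi=1}=0.5, Pr{xi=2}=0.2.
vals, probs = (0, 1, 2), (0.3, 0.5, 0.2)
mu = sum(v * q for v, q in zip(vals, probs))                # E xi = 0.9
var = sum(v * v * q for v, q in zip(vals, probs)) - mu ** 2  # D xi = 0.49

def nu(m):
    """First n with xi_1 + ... + xi_n >= m."""
    total, n = 0, 0
    while total < m:
        total += rng.choices(vals, probs)[0]
        n += 1
    return n

m, N = 400, 4000
scale = sqrt(m * var / mu ** 3)
zeta = [(nu(m) - m / mu) / scale for _ in range(N)]

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

f0 = sum(z < 0.0 for z in zeta) / N
f1 = sum(z < 1.0 for z in zeta) / N
print(f0, Phi(0.0), f1, Phi(1.0))    # empirical CDF vs normal CDF at x = 0, 1
```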

The Wald representation
Consider the random variable ρ_m. The density f_ρ(x) asymptotically approximates the probability distribution density of the variable ρ_m; the formula (6.9) for it contains the characteristic Wald factor (x^{1/2} − x^{−1/2})² in the exponent.
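The factor (x^{1/2} − x^{−1/2})² in the exponent is the hallmark of the Wald (inverse Gaussian) density with unit mean, f(x) = (λ/2πx³)^{1/2} exp(−(λ/2)(x^{1/2} − x^{−1/2})²); the paper's exact normalizing constants are not reproduced here, so the value of λ below is purely illustrative. The sketch checks numerically that this density integrates to one and has unit mean:

```python
import numpy as np

def wald_density(x, lam):
    """Inverse Gaussian (Wald) density with unit mean:
    f(x) = sqrt(lam/(2 pi x^3)) * exp(-(lam/2) * (sqrt(x) - 1/sqrt(x))**2)."""
    return np.sqrt(lam / (2.0 * np.pi * x ** 3)) * np.exp(
        -(lam / 2.0) * (np.sqrt(x) - 1.0 / np.sqrt(x)) ** 2)

lam = 9.0                                  # hypothetical shape parameter
x = np.linspace(1e-6, 40.0, 400_001)
f = wald_density(x, lam)
dx = x[1] - x[0]
# Trapezoidal rule for the normalization and the first moment.
total = float(np.sum(0.5 * (f[1:] + f[:-1])) * dx)
mean = float(np.sum(0.5 * (x[1:] * f[1:] + x[:-1] * f[:-1])) * dx)
print(total, mean)                         # both ~ 1.0
```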
Passing to the limit in the expression (1 − F(z(m)))/(1 − z(m)), we may replace it by F′(z(m)) in accordance with L'Hospital's rule. Then the direct substitution of this expression and of the expression (5.22) into (5.13) gives the limit of the characteristic function lim_{m→∞} E e^{itζ_m} = exp(−t²/2).