A Weak Limit Theorem for Galton-Watson Processes in Varying Environments

There has been much interesting work on Markov chains in random environments, concentrated mainly on branching processes in random environments and on random walks in random environments (see [1]). The study of branching processes in random environments dates back to the late 1960s and early 1970s (see [2–5]). Our paper deals with a Galton-Watson branching process in a varying environment (GWVE), which is a special case of a branching process in a random environment. The main concern is weak convergence for a GWVE, which is an extension of Donsker's theorem (see [6, 7]). In what follows, $\{X_{n,i}, n \ge 0, i \ge 1\}$ is a double sequence of independent and nonnegative integer-valued random variables, where for fixed $n$ the variables $\{X_{n,i}, i \ge 1\}$ have the same distribution $\{p_{n,j}, j = 0, 1, 2, \ldots\}$ with mean $\mu_n > 0$ and variance $\sigma_n^2 > 0$.


Introduction
Let $\{Z_k, k \ge 0\}$ be the GWVE generated by $\{X_{k,i}\}$; that is, $Z_0 = 1$ and $Z_{k+1} = \sum_{i=1}^{Z_k} X_{k,i}$. Writing $m_k := E Z_k = \prod_{j=0}^{k-1} \mu_j$, it is known that $Z_k / m_k \to W$ almost surely as $k \to +\infty$ (see [8]). For any fixed $n$, let $Y_i := Z_{N,n}^{(i)}$ be the size of the $n$th generation of the GWVE starting with the $i$th particle at time $N$; then $\{Y_i, i \ge 1\}$ are i.i.d. with mean $\mu_{N,n}$ and variance $\sigma_{N,n}^2$ (see (4) and (5)). For each $N$, define
$$W_N(t) = \frac{1}{\sigma_{N,n} \sqrt{Z_N}} \sum_{i=1}^{[Z_N t]} (Y_i - \mu_{N,n}), \quad 0 \le t \le 1,$$
where $[x]$ is the largest integer that is not greater than $x$. Our main result is a weak limit theorem for the GWVE, which is an extension of Donsker's theorem. Let $D = D[0,1]$ be the space of functions defined on $[0,1]$ and having discontinuities of at most the first kind. For any $x \in \mathbb{R}$, define $D_x = \{f \in D : f(1) \le x\}$; it turns out that $\mathcal{W}(\partial D_x) = 0$, where $\mathcal{W}$ is the Wiener measure on $D$. Note that $Z_{N+n} = \sum_{i=1}^{Z_N} Y_i$.

Theorem 2. Suppose that $m_N \to \infty$ and $P(W = 0) = 0$; then for any fixed $n$, $W_N \Rightarrow B$ as $N \to \infty$, where $B$ is the standard Brownian motion.

Corollary 3 (CLT). Suppose that $m_N \to \infty$ and $P(W = 0) = 0$; then for any fixed $n$,
$$\frac{Z_{N+n} - \mu_{N,n} Z_N}{\sigma_{N,n} \sqrt{Z_N}} \Rightarrow N(0,1), \quad N \to \infty,$$
where $N(0,1)$ is the standard normal random variable.
Thus, Theorem 2 is an extension of the central limit theorem for the classical Galton-Watson process (see [9, 10]).
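As a quick numerical companion to the setup above, the following minimal sketch simulates a GWVE and checks the normalization $E(Z_n/m_n) = 1$ that underlies the almost sure convergence $Z_n/m_n \to W$. The concrete environment (offspring law $1 + \mathrm{Bernoulli}(p_k)$ in generation $k$) and all function names are illustrative assumptions of ours, not objects taken from the paper; this environment guarantees $\mu_k = 1 + p_k$ and $P(W = 0) = 0$, since every particle leaves at least one child.

```python
import random

# Illustrative varying environment (our choice, not from the paper):
# generation-k offspring law is 1 + Bernoulli(p_k), so mu_k = 1 + p_k.
def p(k):
    return 0.3 if k % 2 == 0 else 0.6

def step(z, k, rng):
    """One generation: each of z particles alive at time k reproduces independently."""
    return sum(1 + (rng.random() < p(k)) for _ in range(z))

def simulate_Z(n_gens, rng):
    z = 1  # Z_0 = 1
    for k in range(n_gens):
        z = step(z, k, rng)
    return z

rng = random.Random(7)
n = 10
m_n = 1.0
for k in range(n):
    m_n *= 1 + p(k)  # m_n = prod_{k<n} mu_k = E Z_n

# E(Z_n / m_n) = 1 for every n: the martingale property behind Z_n/m_n -> W
mean_ratio = sum(simulate_Z(n, rng) / m_n for _ in range(5000)) / 5000
print(mean_ratio)  # close to 1
```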

Auxiliary Results
Let us begin with a result on $Y_i$.

Proposition 4. $\{Y_i, i \ge 1\}$ are independent and identically distributed with
$$E Y_1 = \mu_{N,n} = \prod_{k=N}^{N+n-1} \mu_k, \quad (4)$$
$$\operatorname{Var} Y_1 = \sigma_{N,n}^2 = \sum_{k=N}^{N+n-1} \sigma_k^2 \Big( \prod_{j=N}^{k-1} \mu_j \Big) \Big( \prod_{j=k+1}^{N+n-1} \mu_j \Big)^2. \quad (5)$$

Proof. According to the definition of the GWVE, $\{Y_i, i \ge 1\}$ are independent and identically distributed. Denote the generating functions of $Y_1$ and $X_{k,1}$ by $F_{N,n}(s)$ and $f_k(s)$, respectively; then it can be proved that
$$F_{N,n}(s) = f_N(f_{N+1}(\cdots f_{N+n-1}(s) \cdots)), \quad 0 \le s \le 1.$$
Therefore,
$$E Y_1 = F_{N,n}'(1) = \prod_{k=N}^{N+n-1} f_k'(1) = \prod_{k=N}^{N+n-1} \mu_k,$$
so (4) is proved. In addition, differentiating the composition twice and using $f_k'(1) = \mu_k$, $f_k''(1) = \sigma_k^2 + \mu_k^2 - \mu_k$, and $\operatorname{Var} Y_1 = F_{N,n}''(1) + F_{N,n}'(1) - (F_{N,n}'(1))^2$, a direct computation completes the proof of (5).
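Formulas (4) and (5) can be sanity-checked by simulation. The sketch below uses an illustrative environment of our own choosing (offspring law $1 + \mathrm{Bernoulli}(p_k)$, so $\mu_k = 1 + p_k$ and $\sigma_k^2 = p_k(1-p_k)$); the helpers `mu` and `var` implement the right-hand sides of (4) and (5) and are compared against the sample mean and variance of simulated copies of $Y_1$.

```python
import random

# Illustrative environment (our assumption, not the paper's):
def p(k):
    return 0.3 if k % 2 == 0 else 0.6

def Y(N, n, rng):
    """Size of the n-th generation of descendants of one particle alive at time N."""
    z = 1
    for k in range(N, N + n):
        z = sum(1 + (rng.random() < p(k)) for _ in range(z))
    return z

def mu(N, n):
    out = 1.0
    for k in range(N, N + n):
        out *= 1 + p(k)  # formula (4): mu_{N,n} = prod mu_k
    return out

def var(N, n):
    out = 0.0
    for k in range(N, N + n):    # formula (5): variance injected in generation k,
        s2 = p(k) * (1 - p(k))   # scaled by mean growth before and (squared) after
        out += s2 * mu(N, k - N) * mu(k + 1, N + n - k - 1) ** 2
    return out

rng = random.Random(1)
N, n = 4, 2
sample = [Y(N, n, rng) for _ in range(20000)]
m = sum(sample) / len(sample)
v = sum((y - m) ** 2 for y in sample) / len(sample)
print(m, mu(N, n))   # sample mean vs. formula (4)
print(v, var(N, n))  # sample variance vs. formula (5)
```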
For any sequence of positive integers $a_N \to \infty$, define
$$X_N(t) = \frac{1}{\sqrt{a_N}} \sum_{i=1}^{[a_N t]} \xi_i, \quad 0 \le t \le 1, \quad \text{where } \xi_i = \frac{Y_i - \mu_{N,n}}{\sigma_{N,n}}.$$
The proof of Theorem 2 depends on the following proposition.

Proposition 5. If $a_N \to \infty$, then $X_N \Rightarrow B$ as $N \to \infty$.

Proof. We lose no generality if we assume that the $a_N$ are integers. The proof is divided into two steps.

We first show that the finite-dimensional distributions of the $X_N$ converge to those of $B$. Consider first a single time point $t$. We must prove that $X_N(t, \cdot) \Rightarrow B(t)$. Since $\{\xi_i, i \ge 1\}$ have the same distribution, with mean $0$ and variance $1$, for any fixed $t$ and $N$ large enough,
$$E \exp\{iu X_N(t)\} = \Big( E \exp\Big\{ \frac{iu}{\sqrt{a_N}} \xi_1 \Big\} \Big)^{[a_N t]} = \Big( 1 - \frac{u^2}{2 a_N} + o(a_N^{-1}) \Big)^{[a_N t]}.$$
Since $a_N \to \infty$, for $N$ large enough, we have
$$\Big( 1 - \frac{u^2}{2 a_N} + o(a_N^{-1}) \Big)^{[a_N t]} \to \exp\Big\{ -\frac{u^2 t}{2} \Big\}.$$
This means that the characteristic function of $X_N(t)$ converges to that of $B(t)$; by the Lévy continuity theorem, we complete the proof of the single-point case.

Consider now two time points $s$ and $t$ with $s < t$; we are to prove
$$(X_N(s), X_N(t)) \Rightarrow (B(s), B(t)). \quad (16)$$
Note that
$$(X_N(s), X_N(t) - X_N(s)) = \Big( \frac{1}{\sqrt{a_N}} \sum_{i=1}^{[a_N s]} \xi_i, \; \frac{1}{\sqrt{a_N}} \sum_{i=[a_N s]+1}^{[a_N t]} \xi_i \Big). \quad (17)$$
By Corollary 1 to Theorem 5.1 in [12], it is only needed to prove
$$(X_N(s), X_N(t) - X_N(s)) \Rightarrow (B(s), B(t) - B(s)). \quad (18)$$
Since the components on the left of (17) are independent, by the independence of the $\{\xi_i, i \ge 1\}$, (18) follows from the case of one time point and Theorem 3.2 of [12]; hence (16) holds. A set of three or more time points can be treated in the same way, and hence the finite-dimensional distributions converge properly.

In the next step, we will show that $\{X_N\}$ is tight. According to Theorem 15.6 of [12], it is enough to establish the inequality
$$E\big[ |X_N(t) - X_N(t_1)|^2 \, |X_N(t_2) - X_N(t)|^2 \big] \le C (t_2 - t_1)^2, \quad t_1 \le t \le t_2. \quad (19)$$
Since $\{\xi_i, i \ge 1\}$ are i.i.d. with $E(\xi_i) \equiv 0$ and $\operatorname{Var}(\xi_i) \equiv 1$, by the definition of $X_N$ and the independence of the two increments, we have
$$E\big[ |X_N(t) - X_N(t_1)|^2 \, |X_N(t_2) - X_N(t)|^2 \big] = \frac{([a_N t] - [a_N t_1])([a_N t_2] - [a_N t])}{a_N^2}. \quad (20)$$
If $t_2 - t_1 \ge 1/a_N$, then
$$[a_N t_2] - [a_N t_1] \le a_N (t_2 - t_1) + 1 \le 2 a_N (t_2 - t_1).$$
Hence,
$$E\big[ |X_N(t) - X_N(t_1)|^2 \, |X_N(t_2) - X_N(t)|^2 \big] \le \Big( \frac{[a_N t_2] - [a_N t_1]}{a_N} \Big)^2 \le 4 (t_2 - t_1)^2.$$
So (19) is true when $t_2 - t_1 \ge 1/a_N$. If $t_2 - t_1 < 1/a_N$, then either $t_1$ and $t$ lie in the same subinterval $[j/a_N, (j+1)/a_N)$ or else $t$ and $t_2$ do. In either of these cases the left side of (20) vanishes, since $X_N$ is constant on each such subinterval. This establishes (19) in general and proves the proposition.
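The partial-sum process used in this proposition can be visualized by simulation. The sketch below, under the same illustrative $1 + \mathrm{Bernoulli}(p_k)$ environment assumed in the earlier snippets (our choice, not the paper's), builds the step function $X_N(t) = a_N^{-1/2} \sum_{i \le [a_N t]} \xi_i$ from standardized copies of $Y_i$ and checks one finite-dimensional feature of the Brownian limit, namely $\operatorname{Var} X_N(1/2) = [a_N/2]/a_N = 1/2$.

```python
import random

# Same illustrative environment as before (our assumption):
def p(k):
    return 0.3 if k % 2 == 0 else 0.6

def Y(N, n, rng):
    z = 1
    for k in range(N, N + n):
        z = sum(1 + (rng.random() < p(k)) for _ in range(z))
    return z

def mu(N, n):
    out = 1.0
    for k in range(N, N + n):
        out *= 1 + p(k)
    return out

def sigma2(N, n):
    return sum(p(k) * (1 - p(k)) * mu(N, k - N) * mu(k + 1, N + n - k - 1) ** 2
               for k in range(N, N + n))

def X_path(a, N, n, rng):
    """Step function X_N(t) = a^{-1/2} * sum_{i <= [a t]} xi_i on [0, 1]."""
    mN = mu(N, n)
    sN = sigma2(N, n) ** 0.5
    partial = [0.0]
    for _ in range(a):
        partial.append(partial[-1] + (Y(N, n, rng) - mN) / sN)
    return lambda t: partial[int(a * t)] / a ** 0.5

rng = random.Random(5)
a, N, n = 200, 4, 2
half = [X_path(a, N, n, rng)(0.5) for _ in range(3000)]
v = sum(x * x for x in half) / len(half)
print(v)  # Var X_N(1/2) = [a/2]/a = 1/2
```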

The Proof of Theorem 2

We are now ready to prove Theorem 2.

Proof. Note that, for each $t$,
$$W_N(t, \omega) = \frac{1}{\sigma_{N,n} \sqrt{Z_N(\omega)}} \sum_{i=1}^{[Z_N(\omega) t]} (Y_i(\omega) - \mu_{N,n}),$$
so that, conditionally on $Z_N$, $W_N$ is the process $X_N$ of Proposition 5 with $a_N = Z_N$; by the branching property, $Z_N$ is independent of $\{Y_i, i \ge 1\}$, and $Z_N / m_N \to W$ a.s. with $P(W = 0) = 0$. We assume at first that $W$ is bounded, so that there exists a constant $c$ such that $0 < W \le c$ with probability 1.
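Finally, a Monte Carlo check of the central limit behavior discussed above. Under the illustrative $1 + \mathrm{Bernoulli}(p_k)$ environment used in the previous snippets (our assumption, not the paper's), $Z_N \ge 1$ always holds, and the standardized statistic $(Z_{N+n} - \mu_{N,n} Z_N)/(\sigma_{N,n} \sqrt{Z_N})$ has mean $0$ and variance exactly $1$ by conditioning on $Z_N$; the simulation verifies these two moments.

```python
import random

# Illustrative environment (our choice): offspring 1 + Bernoulli(p_k),
# so m_N -> infinity and P(W = 0) = 0 hold trivially.
def p(k):
    return 0.3 if k % 2 == 0 else 0.6

def grow(z, k0, gens, rng):
    """Run the GWVE forward 'gens' generations starting from z particles at time k0."""
    for k in range(k0, k0 + gens):
        z = sum(1 + (rng.random() < p(k)) for _ in range(z))
    return z

def mu(N, n):
    out = 1.0
    for k in range(N, N + n):
        out *= 1 + p(k)
    return out

def sigma2(N, n):
    return sum(p(k) * (1 - p(k)) * mu(N, k - N) * mu(k + 1, N + n - k - 1) ** 2
               for k in range(N, N + n))

rng = random.Random(3)
N, n = 12, 2
stats = []
for _ in range(4000):
    zN = grow(1, 0, N, rng)    # Z_N
    zNn = grow(zN, N, n, rng)  # Z_{N+n}: progeny of the Z_N particles at time N
    stats.append((zNn - mu(N, n) * zN) / (sigma2(N, n) * zN) ** 0.5)

mean = sum(stats) / len(stats)
var = sum((x - mean) ** 2 for x in stats) / len(stats)
print(mean, var)  # approximately 0 and 1
```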