A BAYESIAN MODEL FOR BINARY MARKOV CHAINS
© Hindawi Publishing Corp.

This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.

1. Introduction. Markov chain models have been useful for the analysis of longitudinal data in many areas of research. In ecology, the model was used to study the migration behavior of animal populations from capture-recapture data [2]; in pathology, to describe the evolution of certain viral and infectious diseases [3, 6]; in sociology, to model the behavior of smoking populations [5].
In most applications, these models do not take into account possible correlations between the different rows of the transition matrix. As the observations are dependent, it seems more reasonable to consider prior distributions that incorporate a certain type of dependence between the components of the parameter.
In this note, we explore the Bayesian model for binary Markov chains using Jeffreys' prior, which has two advantages: the model has no extra parameters, and it permits a correlation structure between the transition probabilities.
In the sequel, X = (X_0, ..., X_n) denotes a homogeneous and stationary Markov chain with transition probabilities

p_ij = P(X_{t+1} = j | X_t = i), i, j = 0, 1. (1.1)

The equilibrium probability of observing a 1, which we denote by p, represents the long-run proportion of time the Markov chain spends in state 1. From [1], this probability is given by p = p_01/(p_01 + p_10). Letting x = (x_0, ..., x_n) denote a fully observed realization of X, conditionally on X_0 = 1, the distribution of the observed sequence is then

f(x | x_0 = 1, θ) = p_01^{n_01} p_00^{n_00} p_10^{n_10} p_11^{n_11}, (1.2)

where θ = (p_01, p_10) and n_ij denotes the number of observed transitions from state i to state j. The MLE of the transition probabilities is

p̂_ij = n_ij / (n_i0 + n_i1), (1.3)

and the MLE of p is p̂ = p̂_01/(p̂_01 + p̂_10).
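As a minimal sketch in Python (the note itself uses a Pascal program), the transition counts n_ij and the row-wise MLE can be computed directly from an observed binary sequence:

```python
import numpy as np

def transition_counts(x):
    """n_ij = number of observed one-step transitions from state i to state j."""
    counts = np.zeros((2, 2), dtype=int)
    for a, b in zip(x[:-1], x[1:]):
        counts[a, b] += 1
    return counts

def mle(x):
    """Row-wise MLE p_ij = n_ij / (n_i0 + n_i1); a never-visited state i
    gives 0/0 and hence an undefined (NaN) row."""
    counts = transition_counts(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        return counts / counts.sum(axis=1, keepdims=True)

p_hat = mle([1, 0, 0, 1, 1, 1, 0, 1])
# long-run proportion of 1's implied by the fit: p = p01 / (p01 + p10)
p = p_hat[0, 1] / (p_hat[0, 1] + p_hat[1, 0])  # ≈ 0.571
```

The NaN convention for an unvisited row mirrors the situation discussed later for chain no. 8, where the MLE does not exist.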
The remainder of the note is organized as follows. In the next section, we calculate Jeffreys' prior and the corresponding posterior distribution. Next, we describe how to approximate the Bayesian estimator via the independent Metropolis-Hastings (IMH) algorithm. Finally, we develop a numerical study by simulation in order to compare the Bayesian estimates with the MLEs.

2. Jeffreys' prior.
The goal here is to determine Jeffreys' prior (see, e.g., [4]) and its corresponding posterior distribution. Jeffreys' prior is obtained by taking the square root of the determinant of the information matrix, defined according to Fisher as

I_n(θ) = -E[∂²l_n(θ | x_0 = 1)/∂θ∂θ^T], (2.1)

where l_n(θ | x_0 = 1) is the logarithm of (1.2). To obtain (2.1), we take the second derivatives of l_n(θ | x_0 = 1) and then take the expectation with negative sign, which yields the diagonal matrix

I_n(θ) = diag( E(n_00 + n_01)/(p_01 p_00), E(n_10 + n_11)/(p_10 p_11) ). (2.2)

Considering the expectation of the sufficient statistics (n_00, n_01, n_10, n_11) under stationarity, namely E(n_00 + n_01) = n p_10/(p_01 + p_10) and E(n_10 + n_11) = n p_01/(p_01 + p_10), we have

π(θ) ∝ 1/((p_01 + p_10)√(p_00 p_11)), (2.9)

and the posterior density is

π(θ | x) ∝ (p_01^{n_01} p_00^{n_00} p_10^{n_10} p_11^{n_11}) / ((p_01 + p_10)√(p_00 p_11)). (2.10)

The main advantage of the density (2.9) is that it provides a convenient analysis when the transition probabilities may be correlated. Moreover, this prior has no extra parameters, and it is a conjugate distribution for f(x | x_0 = 1, θ) given by (1.2). Also notice that in the particular case p_01 + p_10 = 1 (independent case), Jeffreys' prior (2.9) is just the beta distribution Be(1/2, 1/2).
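The reduction to Be(1/2, 1/2) can be checked numerically. The sketch below evaluates the unnormalized density 1/((p_01 + p_10)√(p_00 p_11)) of (2.9) and verifies that, on the line p_01 + p_10 = 1, it is proportional to the Be(1/2, 1/2) density in p_01:

```python
import math

def jeffreys_unnorm(p01, p10):
    """Unnormalized Jeffreys' prior (2.9): 1 / ((p01 + p10) * sqrt(p00 * p11))."""
    p00, p11 = 1.0 - p01, 1.0 - p10
    return 1.0 / ((p01 + p10) * math.sqrt(p00 * p11))

# On the line p01 + p10 = 1 we have p00 = p10 and p11 = p01, so (2.9) reduces
# to 1 / sqrt(p01 * (1 - p01)), i.e., a Be(1/2, 1/2) density in p01.
# The ratio prior / Be(1/2, 1/2)-kernel is therefore constant in p:
ratios = [jeffreys_unnorm(p, 1.0 - p) * math.sqrt(p * (1.0 - p))
          for p in (0.2, 0.5, 0.7)]
```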

3. Bayesian estimation of transition probabilities.
Under the squared error loss, we know that the Bayes estimator coincides with the posterior mean, that is,

θ̃ = E(θ | x) = ∫ θ π(θ | x) dθ. (3.1)

In the case of Jeffreys' prior, the above integral is difficult to calculate, so we propose to approximate it by means of a Markov chain Monte Carlo (MCMC) algorithm, namely the IMH algorithm (see [7]).
The fundamental idea behind these algorithms is to construct a homogeneous and ergodic Markov chain (θ^(l)) with stationary measure π(θ | x). For m_0 large enough, θ^(m_0) is roughly distributed from π(θ | x), and the sample θ^(m_0), θ^(m_0+1), ... can be used to derive the posterior means. For instance, the ergodic theorem (cf. [7]) justifies the approximation of the integral (3.1) by the empirical average

(1/m) Σ_{l = m_0+1}^{m_0+m} θ^(l), (3.2)

in the sense that (3.2) converges to the integral (3.1) for almost every realization of the chain (θ^(l)) under minimal conditions. Next, we give a description of the IMH algorithm. Given θ^(l) = (p_01^(l), p_10^(l)), the IMH algorithm at step l proceeds as follows: draw a candidate θ' from a fixed proposal density q, independently of θ^(l); then set θ^(l+1) = θ' with probability min{1, [π(θ' | x) q(θ^(l))]/[π(θ^(l) | x) q(θ')]}, and θ^(l+1) = θ^(l) otherwise.
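The IMH sampler for the posterior (2.10) can be sketched as follows. The note does not specify the proposal density q, so this sketch assumes a uniform proposal on the unit square, in which case q cancels and the acceptance ratio reduces to a ratio of posterior densities:

```python
import math
import random

def log_post(p01, p10, n):
    """Log of the unnormalized posterior (2.10): the likelihood (1.2) times
    Jeffreys' prior 1/((p01 + p10) * sqrt(p00 * p11))."""
    n00, n01, n10, n11 = n
    p00, p11 = 1.0 - p01, 1.0 - p10
    return (n01 * math.log(p01) + n00 * math.log(p00)
            + n10 * math.log(p10) + n11 * math.log(p11)
            - math.log(p01 + p10) - 0.5 * math.log(p00 * p11))

def imh(n, iters=10_000, burn=1_000, seed=None):
    """IMH targeting (2.10); returns the empirical average (3.2) of the draws."""
    rng = random.Random(seed)
    theta = (0.5, 0.5)
    lp = log_post(*theta, n)
    s01 = s10 = 0.0
    kept = 0
    for l in range(iters):
        # uniform proposal on (0,1)^2, kept away from the boundary for the logs
        cand = (rng.uniform(1e-9, 1 - 1e-9), rng.uniform(1e-9, 1 - 1e-9))
        lp_cand = log_post(*cand, n)
        # with a uniform q the Hastings ratio is pi(cand | x) / pi(theta | x)
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            theta, lp = cand, lp_cand
        if l >= burn:  # discard the first m_0 = burn draws, then average
            s01 += theta[0]
            s10 += theta[1]
            kept += 1
    return s01 / kept, s10 / kept
```

A uniform proposal is adequate here because the posterior lives on the unit square; a heavier-tailed or posterior-matched proposal would simply raise the acceptance rate.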
As a convergence assessment, we use the cumulated sums method (cf. [7]), in the sense that a necessary condition for convergence is the stabilization of the empirical average (3.2). This method of convergence control also gives the minimal number m of iterations needed for the empirical average (3.2) to approximate the integral (3.1).
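A minimal version of this stopping rule, monitoring the running average (3.2) and declaring stabilization once it stops moving, might look like this (the window size and tolerance are illustrative choices, not those of [7]):

```python
def running_means(draws):
    """Cumulative averages of the draws: the empirical average (3.2)
    as a function of the number of iterations kept."""
    out, s = [], 0.0
    for k, d in enumerate(draws, start=1):
        s += d
        out.append(s / k)
    return out

def stabilized(draws, window=100, tol=1e-3):
    """Declare convergence once the running mean has moved by less
    than `tol` over the last `window` iterations."""
    rm = running_means(draws)
    return len(rm) > window and abs(rm[-1] - rm[-1 - window]) < tol
```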
4. Numerical study. In this section, we illustrate the performance of the Bayesian estimation based on Jeffreys' prior by analyzing a small simulated data set.
On the one hand, the simulation study compares the proposed Bayesian estimators with the MLEs. On the other hand, it compares them with the estimators obtained for independently chosen transition probabilities, in both cases of the beta distribution Be(1/2, 1/2) and the uniform distribution.
We recall that under the beta prior distribution Be(1/2, 1/2), the Bayesian estimator is calculated explicitly as

p̃_ij = (n_ij + 1/2)/(n_i0 + n_i1 + 1). (4.2)

The Bayesian solution using the uniform prior distribution is given by

p̃_ij = (n_ij + 1)/(n_i0 + n_i1 + 2). (4.4)

4.1. A simulated data set. Table 4.1 displays a data set consisting of 20 independent Markov chains, each with 21 observations; in general, the chains may be of differing lengths. To generate this data set, transition probabilities for each chain are first drawn from Jeffreys' prior (2.9) by using the IMH algorithm (see Section 3). We assume, without loss of generality, that the first state X_0 in each chain is equal to 1. The remaining observations in each chain are drawn in succession from a Bernoulli distribution with the appropriate transition probabilities.
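The generation of a single chain can be sketched as follows (the seed parameter is only for reproducibility of the illustration):

```python
import random

def simulate_chain(p01, p10, length=21, x0=1, seed=None):
    """Simulate `length` observations of the binary chain started at x0,
    with transition probabilities P(0 -> 1) = p01 and P(1 -> 0) = p10."""
    rng = random.Random(seed)
    x = [x0]
    for _ in range(length - 1):
        if x[-1] == 0:
            x.append(1 if rng.random() < p01 else 0)
        else:
            x.append(0 if rng.random() < p10 else 1)
    return x
```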
To obtain the Bayesian estimator p̃_ij of the transition probabilities based on Jeffreys' prior, we apply, for each chain, the IMH algorithm to the posterior distribution (2.10). The MLE p̂_ij is calculated from (1.3). The Bayesian estimator founded on the beta distribution (resp., the uniform distribution) is obtained from (4.2) (resp., from (4.4)). The results of this experiment are provided in Table 4.1, which shows the actual transition probabilities p_ij from the simulation, the MLE p̂_ij, and the Bayesian estimates p̃_ij of the transition probabilities for each chain.

Table 4.1. Simulation results.
Notice that for many chains, the MLE takes the extreme values 0 or 1; this is explained by the small size of the simulated sample. In addition, for chain no. 8, p̂_01 does not exist because the chain never entered state 0. The Bayesian estimates do not suffer from these problems because a common prior distribution is assumed.
Also shown in Table 4.1 are the mean actual and estimated transition probabilities, as well as the mean square errors (MSE) of the estimates. The MSEs are calculated by averaging the squared differences between the estimated probabilities and the actual probabilities used in the simulation. Notice that the Bayes posterior means perform better than the MLEs. In particular, the MSE of the MLEs is clearly higher than that corresponding to the Bayes estimates.
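The MSE computation just described is straightforward; the sketch below skips chains whose MLE is undefined (as for chain no. 8), represented here by None:

```python
def mse(estimates, actuals):
    """Average squared difference between estimated and actual transition
    probabilities, skipping entries where the estimate is undefined (None)."""
    pairs = [(e, t) for e, t in zip(estimates, actuals) if e is not None]
    return sum((e - t) ** 2 for e, t in pairs) / len(pairs)
```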
This study also illustrates the usefulness of modeling the dependence among the transition probabilities. In particular, the posterior distributions obtained under the assumption that p_10 and p_01 are independent may not be accurate. Indeed, using the beta prior distribution (resp., the uniform prior distribution), the resulting changes in the posterior means range from −0.1594 to 0.1909 (resp., from −0.0844 to 0.2574) for p̃_10, and from −0.1588 to 0.1445 (resp., from −0.1588 to 0.1098) for p̃_01. Moreover, the MSEs corresponding to p̃_10 and p̃_01 become 0.0122 and 0.0096 (resp., 0.0131 and 0.0101). These are slightly higher than the results obtained by modeling the dependence. All these results favor the Bayesian solution based on Jeffreys' prior.
For the previous experiment, a Pascal program was written to compute the estimates. The Bayesian estimator p̃_ij founded on Jeffreys' prior is obtained from a single chain of 10^4 iterations.

5. Conclusion.
In this note, we studied Bayesian estimation of the transition probabilities of a binary Markov chain under Jeffreys' prior distribution. As shown, this prior has several advantages: it permits a certain type of dependence between the components of the parameter, and its absence of extra parameters is of great interest because no additional estimation is needed. A numerical study by simulation was also carried out to evaluate the performance of the Bayesian estimates compared with the MLEs. A natural next step for this work is to generalize the suggested method to the case of missing data.


Figures 4.1 and 4.2 give an example of the convergence evaluation (see Section 3).