RECURSIVE SMOOTHERS FOR HIDDEN DISCRETE-TIME MARKOV CHAINS

Hidden Markov chains have been the subject of extensive studies, see the books [1, 2] and the references therein. Of particular interest are the discret-time, finite-state hidden Markov models. In this paper, using the same techniques as in [3], we propose results that improve the finite-dimensional smoothers of functionals of a partially observed discrete-time Markov chain. The model itself extends models discussed in [2]. The proposed formulae for updating these quantities are recursive. Therefore, recalculation of all backward estimates is not required in the implementation of the EM algorithm. This paper is organized as follows. In Section 2, we introduce the model. In Section 3, a new probability measure under which all processes are independent is defined and a recursive filter for the state is derived. The main results of this paper are in Section 4 where recursive smoothers are derived.


Introduction
Hidden Markov chains have been the subject of extensive studies, see the books [1,2] and the references therein.Of particular interest are the discret-time, finite-state hidden Markov models.
In this paper, using the same techniques as in [3], we propose results that improve the finite-dimensional smoothers of functionals of a partially observed discrete-time Markov chain.The model itself extends models discussed in [2].The proposed formulae for updating these quantities are recursive.Therefore, recalculation of all backward estimates is not required in the implementation of the EM algorithm.
This paper is organized as follows.In Section 2, we introduce the model.In Section 3, a new probability measure under which all processes are independent is defined and a recursive filter for the state is derived.The main results of this paper are in Section 4 where recursive smoothers are derived.

Model dynamics
A system is considered, whose state is described by a finite-state, homogeneous, discretetime Markov chain X k , k ∈ N. We suppose that X 0 is given, or its distribution is known.If the state space of X k has N elements, it can be identified without loss of generality, with the set Write Ᏺ 0 k = σ{X 0 ,...,X k }, for the σ-field generated by X 0 ,...,X k , and {Ᏺ k } for the complete filtration generated by the Ᏺ 0 k ; this augments Ᏺ 0 k by including all subsets of events of probability zero.The Markov property implies here that P X k+1 = e j Ᏺ k = P X k+1 = e j X k . (2.2) Write a ji = P(X k+1 = e j | X k = e i ), A = (a ji ) ∈ R N×N .Define V k+1 := X k+1 − AX k so that (2.3) The state process X is not observed directly.We observe a second Markov chain Y on the same state space as X but with probability transitions perturbated by X.More precisely, suppose that where {Ᏻ k } is the complete filtration generated by X and Y .Write and We immediately have the following representation for Y : where W k , k ∈ N, is a sequence of martingale increments.
Let {ᐅ k } be the complete filtration generated by Y .
Our objective here is to seek recursive filters and smoothers for the states of the Markov chain X, the number of jumps from one state to another for the occupation time of a state, and for a process related to the observations.

An unnormalized finite-dimensional recursive filter for the state
What we wish to do now is starting with a probability measure P on (Ω, ∞ n=1 Ᏻ n ) such that (1) the process X is a finite-state Markov chain with transition matrix A; (2) {Y k }, k ∈ N, is a sequence of i.i.d.random variables and

Lakhdar Aggoun 347
We will now construct a new measure where b s,ri is the probability transition defined in (2.5).
With the above definitions, (The existence of P follows from Kolmogorov's extension theorem.) Recall that for a Ᏻ-adapted (3.4) Therefore, the normalized conditional probability distribution is given by

.7)
To simplify the notation, we write (3.8) Theorem 3.1.For k ∈ N, the recursive filter for the unnormalized estimates of the states is given by (3.9) Proof.In view of (3.1), (3.2), (2.3), and the notation in (3.8), ) and which finishes the proof.

Recursive smoothers
We emphasize again that these improved recursive formulae to update smoothed estimates are used to update the parameters of the model via the EM algorithm. (4.1) where Lakhdar Aggoun 349 Therefore, q m ,e i v m ,e i e i = diag q m v m . (4.4) The same argument shows that the following lemma holds.
Lemma 4.2.The process v satisfies the backward dynamics Here A * is the matrix transpose of A.

4.1.
Recursive smoother for the number of jumps.The number of jumps from state e r to state e s in time k is given by (4.7) Proof.
which finishes the proof.

Recursive smoother for the occupation time.
The number of occasions up to time k for which the Markov chain X has been in state e r , 1 ≤ r ≤ N, is q ,e r v ,e r , (4.15)