A Numerical Algorithm on the Computation of the Stationary Distribution of a Discrete Time Homogeneous Finite Markov Chain

The transition matrix, which characterizes a discrete-time homogeneous Markov chain, is a stochastic matrix: a special nonnegative matrix with each row summing to 1. In this paper, we study the computation of the stationary distribution of a transition matrix from the viewpoint of the Perron vector of a nonnegative matrix, and on this basis we propose an algorithm for the stationary distribution. The algorithm can also be used to compute the Perron root and the corresponding Perron vector of any nonnegative irreducible matrix. Furthermore, a numerical example is given to demonstrate the validity of the algorithm.


Introduction and Preliminaries
Throughout this paper, the following notation and definitions are used. A matrix A = (a_ij) ∈ R^{m×n} is called nonnegative (positive) if a_ij ≥ 0 (a_ij > 0) for all i = 1, 2, . . ., m, j = 1, 2, . . ., n, denoted by A ≥ 0 (A > 0). Similarly, a vector x = (x_1, . . ., x_n)^T is called nonnegative (positive) and denoted by x ≥ 0 (x > 0).
For a square matrix A ∈ R^{n×n} with eigenvalues λ_1, . . ., λ_n, ρ(A) = max_j |λ_j| is called the spectral radius of A. If A ≥ 0 is irreducible, there exists a unique eigenvector x = (x_1, . . ., x_n)^T > 0, normalized so that Σ_{i=1}^n x_i = 1, such that Ax = ρ(A)x. In this case, we say that ρ(A) is the Perron root of A and x is the Perron vector [1].
We consider a discrete-time Markov chain X = {X_k : k = 0, 1, . . .} with a finite state space S = {i_1, . . ., i_n}. Among ergodic processes, homogeneous Markov chains with finite state space are particularly interesting examples. Such processes satisfy the Markov property, which states that their future behavior, conditional on the past and present, depends only on the present. Precisely, for all k ≥ 0 and all states j_0, . . ., j_k, j ∈ S,

P(X_{k+1} = j | X_0 = j_0, . . ., X_k = j_k) = P(X_{k+1} = j | X_k = j_k).   (1.1)

The behavior of such a process is characterized by an n × n matrix M called the transition matrix [2].
Its stationary distribution π, which is also its asymptotic distribution, is a vector satisfying

π^T M = π^T,   π ≥ 0,   e^T π = 1,   (1.3)
where e is the column vector of all ones. It has been established that it is possible to represent all possible uses of a software system as a Markov chain [3–5]. This model is called a Markov chain usage model. In a usage model, states of use, such as the state "Document Loaded" in a model representing use of a word-processing system, are represented by states in the Markov chain. Transitions between states of use, such as moving from state "Document Loaded" to "No Document Loaded" when the user closes a document in a word-processing system, are represented by state transitions between the appropriate states in the Markov chain. Transitions between states of use have associated probabilities which represent the probability of making each transition. A usage model may be created based on information taken from functional specifications, usage specifications, and test objectives.
Considering the problem of software reliability, we represent a software system S_f with n states of use {s_1, . . ., s_n} by a homogeneous discrete Markov chain {X_k : k = 0, 1, . . .} whose transition matrix is M. We denote the initial state probability distribution by π^(0) = (π_1^(0), . . ., π_n^(0))^T, where π_i^(0) = P(X_0 = s_i). Then (π^(k))^T = (π^(k−1))^T M, where π^(k) stands for the state probability distribution at time k. Let μ_i (i = 1, . . ., n) be the probability that the software fails at state s_i. The reliability of S_f at time k can then be defined as

R_k = 1 − Σ_{i=1}^n μ_i π_i^(k).   (1.4)

After running for a long time, the state distribution of the system S_f tends to the stationary distribution π = (π_1, . . ., π_n)^T. Then the terminating reliability is R = 1 − Σ_{i=1}^n μ_i π_i, with which we can evaluate the quality of a software system. By decreasing the μ_i of the state s_i with the largest probability π_i in the stationary distribution π, we can also enhance the reliability of S_f efficiently with limited resources.
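For illustration, the terminating reliability R = 1 − Σ_{i=1}^n μ_i π_i is straightforward to evaluate once π and the failure probabilities μ_i are known. A minimal sketch in Python/NumPy follows (the two-state values of π and μ are hypothetical, chosen only for the example):

```python
import numpy as np

def terminating_reliability(pi, mu):
    # R = 1 - sum_i mu_i * pi_i, where pi is the stationary distribution
    # and mu_i is the failure probability at state s_i.
    pi = np.asarray(pi, dtype=float)
    mu = np.asarray(mu, dtype=float)
    return 1.0 - float(mu @ pi)

# Hypothetical two-state system: the chain spends 3/4 of its time in s1.
pi = np.array([0.75, 0.25])
mu = np.array([0.01, 0.08])  # failure probabilities at s1, s2
R = terminating_reliability(pi, mu)  # 1 - (0.0075 + 0.02) = 0.9725
```

Since π_1 is the largest stationary probability here, lowering μ_1 yields the fastest improvement of R, in line with the remark above.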
A nonnegative matrix A = (a_ij) ∈ R^{n×n} is called a row-stochastic matrix, or a stochastic matrix for short, if Σ_{j=1}^n a_ij = 1 for all i = 1, . . ., n, that is, Ae = e. From the well-known Perron–Frobenius theorem, it can easily be deduced that the Perron root of a stochastic matrix A equals 1.
Obviously, the transition matrix M of a discrete-time homogeneous Markov chain is a stochastic matrix. From (1.3), we have M^T π = π. That is to say, the stationary distribution π is an eigenvector of M^T associated with the eigenvalue 1. Since M^T and M have the same eigenvalues, 1 is the Perron root of M^T, and π is its Perron vector, that is, the solution to M^T π = π. As for the computational aspects of π, many approaches have been presented (e.g., see [6–10]) based on Gaussian elimination, direct projection, and so on. In this paper, from the viewpoint of the Perron root, which has not been discussed, we propose an algorithm for the stationary distribution π: noting that the stationary distribution is the Perron vector of the transition matrix (a special nonnegative matrix), the computation of π is equivalent to the computation of the Perron vector of M^T. The resulting algorithm not only computes the stationary distribution but can also be used to compute the Perron root and the corresponding Perron vector of any nonnegative irreducible matrix.
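Concretely, π can be obtained with any eigensolver by extracting an eigenvector of M^T for the eigenvalue 1 and normalizing it so its components sum to 1. A small NumPy sketch (the 3 × 3 matrix M is illustrative, not from the paper):

```python
import numpy as np

# An illustrative 3-state transition matrix (each row sums to 1).
M = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# The stationary distribution solves M^T pi = pi: pi is an eigenvector
# of M^T associated with the Perron root 1.
w, V = np.linalg.eig(M.T)
k = np.argmax(w.real)      # the Perron root 1 has the largest real part
pi = V[:, k].real
pi = pi / pi.sum()         # scale so the components sum to 1
```

Then π^T M = π^T holds up to round-off; for large sparse chains an iterative scheme of the kind proposed in this paper is preferable to a dense eigendecomposition.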
This paper is organized as follows. In the next section, we propose some lemmas and preliminary results. In Section 3, we prove the convergence theorem and give some facts. In Section 4, we propose an algorithm for the stationary distribution together with a demonstrating numerical example.

Some Lemmas
In this section, we present some lemmas which will be used in the proof of the main results. The following facts can be found in [1, 11, 12].
It is known that any primitive matrix must be irreducible [12]. We will use the following important facts, which can be found in [1, 12].
Theorem A (see [1]). Let A = A_{n×n} ≥ 0. Then A is irreducible if and only if (I + A)^{n−1} > 0, where I is the identity matrix.
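Theorem A yields a direct finite test for irreducibility. A minimal sketch (the two test matrices are illustrative):

```python
import numpy as np

def is_irreducible(A):
    # Theorem A: a nonnegative n-by-n matrix A is irreducible
    # if and only if (I + A)^(n-1) is entrywise positive.
    n = A.shape[0]
    P = np.linalg.matrix_power(np.eye(n) + A, n - 1)
    return bool((P > 0).all())

# A 3-cycle is irreducible; a block-diagonal matrix is not.
cycle = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [1., 0., 0.]])
blocks = np.array([[1., 1., 0.],
                   [1., 1., 0.],
                   [0., 0., 1.]])
```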
Theorem B (Perron–Frobenius; see [1]). Let A = A_{n×n} ≥ 0 be irreducible. Then ρ(A) > 0, ρ(A) is a simple eigenvalue of A, and A has a positive eigenvector associated with ρ(A). This theorem guarantees that the eigenspace of ρ(A) is one-dimensional; that is, Ay = ρ(A)y implies y = kx. Hence there exists a unique positive vector x > 0 whose components sum to 1 such that Ax = ρ(A)x. This x is called the Perron vector [1].
For the Perron root of nonnegative matrices, many algorithms and bound estimations have been proposed (see [13, 14]). In this paper, we will characterize the Perron root by using the following Collatz–Wielandt functions [11, 12].

Definition 2.2 (see [11]). Let A = (a_ij)_{n×n} be nonnegative. For any positive vector x = (x_1, . . ., x_n)^T > 0, define

f_A(x) = min_{1≤i≤n} (Ax)_i/x_i,    g_A(x) = max_{1≤i≤n} (Ax)_i/x_i.

Mathematical Problems in Engineering
f_A(x) and g_A(x) are both continuous at any x > 0.
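In code, the Collatz–Wielandt functions f_A(x) = min_i (Ax)_i/x_i and g_A(x) = max_i (Ax)_i/x_i are one-liners, and for any x > 0 they bracket the Perron root. A sketch (the 2 × 2 stochastic test matrix is illustrative):

```python
import numpy as np

def f(A, x):
    # f_A(x) = min_i (Ax)_i / x_i : a lower bound on the Perron root
    return float(np.min(A @ x / x))

def g(A, x):
    # g_A(x) = max_i (Ax)_i / x_i : an upper bound on the Perron root
    return float(np.max(A @ x / x))

A = np.array([[0.5, 0.5],
              [0.25, 0.75]])  # stochastic, so rho(A) = 1
x = np.array([1.0, 2.0])
lo, hi = f(A, x), g(A, x)     # lo <= rho(A) <= hi for any x > 0
```

At x = e (all ones) both functions return 1 for a stochastic matrix, since Ae = e.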

Lemma 2.3.
Let A = A_{n×n} be nonnegative and irreducible. Then, for any x > 0, f_A(x) and g_A(x) satisfy the following: Proof. (1)–(3) are clearly true (see [10]). For (4), let y = Bx; by Ax ≥ f_A(x)x, it follows that Ay = B(Ax) ≥ f_A(x)Bx = f_A(x)y, so that f_A(y) ≥ f_A(x).

Main Results
In this section, we will present the main results.

Theorem 3.1. Let A = A_{n×n} ≥ 0 be irreducible, let B ≥ 0 be primitive with BA = AB, and let x_0 > 0 be a positive vector. Define

x_k = Bx_{k−1}/‖Bx_{k−1}‖_1,  k = 1, 2, . . ..   (3.1)

Then,

Proof. By (3.1), we can write x_k = B^k x_0/b_k for some b_k > 0. This means, for k = 1, 2, . . .,

(3.4)
By putting x = Lx_0/‖Lx_0‖_1, it is clear that x > 0 with ‖x‖_1 = 1 and Bx = ρ(B)x. By Lemma 2.3 and (3.1), we have, for k = 1, 2, . . .,

(3.7)

So, {f_A(x_k)} and {g_A(x_k)} are both monotone convergent sequences. This proves (c), completing the proof.
Remark 3.2. From the proof, we know that (AB)x = ρ(A)ρ(B)x (x > 0) and ρ(AB) = ρ(A)ρ(B). For an n × n irreducible matrix A ≥ 0, since (bI + A)^{n−1} > 0 (b > 0), the matrices B = (bI + A)^m are primitive for m = 1, 2, . . .. Clearly, BA = AB, so we have the following.

3.8
Then, one has the same conclusions. For a positive matrix A > 0, all the matrices B = A^m (m = 1, 2, . . .) are primitive. The following is obvious.

Corollary 3.4.
If A = A_{n×n} > 0, let B = A^m (m ≥ 1) and let x_0 = (a_1, . . ., a_n)^T be a positive vector. Define x_k as in (3.1) with this B. Then, for k = 1, 2, . . ., one has the following.

3.10
If A ≥ 0 is irreducible, it is obvious that B = bI + A (b > 0) is primitive and ρ(A) = ρ(B) − b. So, we have the following.

(3.11)
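The shift property ρ(A) = ρ(bI + A) − b is easy to confirm numerically. A sketch (the 2 × 2 irreducible matrix is illustrative):

```python
import numpy as np

# For irreducible A >= 0 and b > 0, B = bI + A is primitive, and every
# eigenvalue of A is shifted by b, so rho(A) = rho(B) - b.
A = np.array([[0., 2.],
              [3., 0.]])      # irreducible; eigenvalues are +/- sqrt(6)
b = 1.0
B = b * np.eye(2) + A

rho_A = np.max(np.abs(np.linalg.eigvals(A)))
rho_B = np.max(np.abs(np.linalg.eigvals(B)))
```

Note that this A is irreducible but not primitive (its eigenvalues ±√6 have equal modulus); the shift bI makes B primitive, which is exactly why the shift is used in the iteration.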

An Algorithm and a Numerical Example
In this section, we propose a numerical algorithm to compute the stationary distribution of a discrete-time homogeneous finite Markov chain.

Algorithm 4.1 (to compute the stationary distribution π).

Step 1. Give a transition matrix M of a discrete-time homogeneous finite Markov chain and a calculation precision ε > 0. Choose parameters: a positive real number b > 0 and a positive integer m. Set the initial iterative vector π_0 = (1, 1, . . ., 1)^T, B = (bI + M^T)^m, and k = 1.

Step 2. Compute π_k from π_{k−1}:

π_k = Bπ_{k−1}/‖Bπ_{k−1}‖_1.   (4.2)

Step 3. Compute f_{M^T}(π_k) and g_{M^T}(π_k).

Step 4. If g_{M^T}(π_k) − f_{M^T}(π_k) < ε, go to Step 5. Otherwise, set k := k + 1 and go back to Step 2.

Step 5. Let λ = (1/2)(f_{M^T}(π_k) + g_{M^T}(π_k)). Then λ is the approximation of the Perron root of M^T, and the corresponding π_k is the approximation of the stationary distribution of M.

Example 4.3. For a given finite Markov chain with the corresponding transition matrix as in (4.4), compute the stationary distribution with calculation precision ε = 10^{−6}.

The iteration results are listed in Table 1.
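A compact NumPy sketch of Algorithm 4.1 follows. The 3 × 3 matrix M below is an illustrative stand-in, not the matrix of Example 4.3, and b = 1, m = 2 are likewise arbitrary parameter choices:

```python
import numpy as np

def f(A, x):  # Collatz-Wielandt lower bound: min_i (Ax)_i / x_i
    return float(np.min(A @ x / x))

def g(A, x):  # Collatz-Wielandt upper bound: max_i (Ax)_i / x_i
    return float(np.max(A @ x / x))

def stationary_distribution(M, eps=1e-6, b=1.0, m=2):
    """Algorithm 4.1 sketch: iterate with B = (bI + M^T)^m until the
    Collatz-Wielandt bounds for M^T pinch to within eps."""
    n = M.shape[0]
    B = np.linalg.matrix_power(b * np.eye(n) + M.T, m)  # Step 1
    pi = np.ones(n)
    while True:
        pi = B @ pi
        pi = pi / pi.sum()               # Step 2: 1-norm scaling
        lo, hi = f(M.T, pi), g(M.T, pi)  # Step 3
        if hi - lo < eps:                # Step 4: stopping test
            break
    lam = 0.5 * (lo + hi)                # Step 5: Perron root of M^T
    return lam, pi

# Illustrative transition matrix (rows sum to 1).
M = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
lam, pi = stationary_distribution(M)     # lam ~ 1, pi ~ stationary dist.
```

Since M is positive here, M^T is irreducible and B is primitive, so the iteration converges; for a stochastic M, λ converges to the Perron root 1.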

Table 1: Iteration results of Example 4.3 by Algorithm 4.1.