Estimation of Failure Probability and Its Applications in Lifetime Data Analysis



Introduction
In reliability engineering of industrial products, engineers often deal with truncated data from life testing, where the data may have small sample sizes or be censored, and the products of interest have high reliability. In the literature, Lindley and Smith [1] first introduced the idea of a hierarchical prior distribution, and Han [2] developed some methods to construct hierarchical prior distributions. Recently, hierarchical Bayesian methods have been applied to data analysis [3]. However, the complicated integrations required by hierarchical Bayesian methods make them hard to use in practical problems, even though computational tools such as Markov chain Monte Carlo (MCMC) methods are available [4, 5].
Han [6] introduced a new method, the E-Bayesian estimation method, to estimate the failure probability in the case of two hyperparameters: he proposed the definition of the E-Bayesian estimation of the failure probability and provided formulas for it under three different prior distributions of the hyperparameters. However, that work provided no formulas for the hierarchical Bayesian estimation of the failure probability and did not discuss the relations between the two estimations. In this paper, we introduce the definition of the E-Bayesian estimation of the failure probability, provide formulas for both the E-Bayesian and the hierarchical Bayesian estimations, and discuss the relations between the two estimations in the case of a single hyperparameter. We will see that the E-Bayesian estimation method is indeed simple.
Suppose type I censored life testing is conducted m times. Denote the censoring times by t_i (i = 1, 2, …, m), the corresponding sample sizes by n_i, and the corresponding numbers of failed samples observed in the testing process by r_i (r_i = 0, 1, 2, …, n_i).
In the situation where no information about the life distribution of the tested products is available, Mao and Luo [7] introduced a so-called curve-fitting distribution method to estimate the failure probability p_i = P(T < t_i) at time t_i, for i = 1, 2, …, m, where T is the lifetime of a product.
This paper introduces a new method, called the E-Bayesian estimation method, to estimate the failure probability. The definition and the formulas of the E-Bayesian estimation of the failure probability are given in Sections 2 and 3, respectively. In Section 4, formulas for the hierarchical Bayesian estimation of the failure probability are derived. In Section 5, the properties of the E-Bayesian estimation are discussed. In Section 6, an application example is given. Section 7 contains the conclusions.

Definition of E-Bayesian Estimation of p_i
Suppose that there are X failures among n samples; then X can be viewed as a random variable with binomial distribution Bin(n, p_i). We take the conjugate prior of p_i, Beta(a, b), with density function

    π(p_i | a, b) = p_i^{a−1} (1 − p_i)^{b−1} / B(a, b),  0 < p_i < 1,  (1)

where B(a, b) = ∫_0^1 t^{a−1} (1 − t)^{b−1} dt is the beta function, and the hyperparameters satisfy a > 0 and b > 0.
The derivative of π(p_i | a, b) with respect to p_i is

    dπ(p_i | a, b)/dp_i = p_i^{a−2} (1 − p_i)^{b−2} [(a − 1)(1 − p_i) − (b − 1)p_i] / B(a, b).  (2)

According to Han [2], a and b should be chosen so that π(p_i | a, b) is a decreasing function of p_i, which requires 0 < a < 1 and b > 1. Given 0 < a < 1, the larger the value of b, the thinner the tail of the density function. Berger [8] showed that a thinner-tailed prior distribution often reduces the robustness of the Bayesian estimate. Consequently, the hyperparameter b should be chosen under the restriction 1 < b < c, where c is a given upper bound; how to determine the constant c is described later in an example. Since a ∈ (0, 1) and 1/2 is the expectation of the uniform distribution on (0, 1), we take a = 1/2. When 1 < b < c and a = 1/2, π(p_i | a, b) is still a decreasing function of p_i.
In this paper we only consider the case a = 1/2. The density function π(p_i | a, b) then becomes

    π(p_i | b) = p_i^{−1/2} (1 − p_i)^{b−1} / B(1/2, b),  0 < p_i < 1.  (3)

The definition of E-Bayesian estimation was originally given by Han [6] in the case of two hyperparameters. In the case of one hyperparameter, the E-Bayesian estimation of the failure probability is given in Definition 1: the E-Bayesian estimation of p_i is the expectation of the Bayesian estimation of p_i with respect to the hyperparameter.
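That this prior is a decreasing function of p_i when a = 1/2 and b > 1 can also be checked numerically. A small sketch (Python; the helper name is ours, not from the paper):

```python
import math

def prior_density(p, b, a=0.5):
    """Density of Beta(a, b) at p: p**(a-1) * (1-p)**(b-1) / B(a, b),
    with the beta function computed via log-gamma for stability."""
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return p ** (a - 1) * (1 - p) ** (b - 1) / math.exp(log_beta)

# For a = 1/2 and b > 1 the density strictly decreases on (0, 1):
vals = [prior_density(p, b=2.0) for p in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

The sign of the derivative above, (a − 1)(1 − p_i) − (b − 1)p_i, is negative for every p_i in (0, 1) when a = 1/2 and b > 1, which is what the sampled values reflect.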

E-Bayesian Estimation of p_i
Theorem 2. For the testing data set {(n_i, r_i, t_i), i = 1, …, m} with type I censoring, where r_i = 0, 1, 2, …, n_i, let

    s_i = Σ_{j=i}^m n_j,  e_i = Σ_{j=1}^i r_j  (i = 1, 2, …, m).

If the prior density function π(p_i | b) of p_i is given by (3), then we have the following.
(i) With the quadratic loss function, the Bayesian estimation of p_i is

    p_iB(b) = (e_i + 1/2) / (s_i + b + 1/2).

(ii) With π(b) the uniform density on (1, c), the E-Bayesian estimation of p_i is

    p_iEB = (1/(c − 1)) ∫_1^c p_iB(b) db = ((e_i + 1/2)/(c − 1)) ln((s_i + c + 1/2)/(s_i + 3/2)).

Proof. (i) For the testing data set {(n_i, r_i, t_i), i = 1, …, m} with type I censoring, where r_i = 0, 1, 2, …, n_i, according to Han [6] the likelihood function of the samples is

    L(p_i) ∝ p_i^{e_i} (1 − p_i)^{s_i − e_i},

where s_i = Σ_{j=i}^m n_j and e_i = Σ_{j=1}^i r_j (i = 1, 2, …, m). Combined with the prior density function π(p_i | b) of p_i given by (3), Bayes' theorem leads to the posterior density function of p_i,

    h(p_i | s_i, e_i) = p_i^{e_i − 1/2} (1 − p_i)^{s_i − e_i + b − 1} / B(e_i + 1/2, s_i − e_i + b),  0 < p_i < 1.

Thus, with the quadratic loss function, the Bayesian estimation of p_i is the posterior mean,

    p_iB(b) = B(e_i + 3/2, s_i − e_i + b) / B(e_i + 1/2, s_i − e_i + b) = (e_i + 1/2) / (s_i + b + 1/2).

(ii) follows from Definition 1 with π(b) = 1/(c − 1) on (1, c). This concludes the proof of Theorem 2.
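Both estimates in Theorem 2 are closed-form and cheap to compute. A minimal sketch (Python; the function names are ours, not from the paper):

```python
import math

def p_bayes(e_i, s_i, b):
    """Bayesian estimate of p_i under quadratic loss with the Beta(1/2, b) prior:
    the posterior is Beta(e_i + 1/2, s_i - e_i + b), and this is its mean."""
    return (e_i + 0.5) / (s_i + b + 0.5)

def p_e_bayes(e_i, s_i, c):
    """E-Bayesian estimate: p_bayes averaged over b ~ Uniform(1, c).
    Integrating 1/(s_i + b + 1/2) over b in (1, c) gives the logarithm below."""
    return (e_i + 0.5) / (c - 1) * math.log((s_i + c + 0.5) / (s_i + 1.5))
```

For instance, with e_i = 1 failure among s_i = 100 items and c = 4, `p_e_bayes` is very close to `p_bayes` evaluated at the midpoint b = (1 + c)/2, as the mean value theorem suggests.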

Hierarchical Bayesian Estimation
If the prior density function π(p_i | b) of p_i is given by (3), how can the value of the hyperparameter b be determined? Lindley and Smith [1] proposed the idea of a hierarchical prior distribution: when a prior distribution contains a hyperparameter, one may in turn place a prior distribution on that hyperparameter. Suppose the prior of p_i, π(p_i | b), is given by (3) and the prior distribution of b is uniform on (1, c), with density function

    π(b) = 1/(c − 1),  1 < b < c.

Then the hierarchical prior density function of p_i is

    π(p_i) = ∫_1^c π(p_i | b) π(b) db = (1/(c − 1)) ∫_1^c (p_i^{−1/2} (1 − p_i)^{b−1} / B(1/2, b)) db,  0 < p_i < 1.  (12)

Theorem 3. For the testing data set {(n_i, r_i, t_i), i = 1, …, m} with type I censoring, where r_i = 0, 1, 2, …, n_i, let s_i = Σ_{j=i}^m n_j and e_i = Σ_{j=1}^i r_j (i = 1, 2, …, m). If the hierarchical prior density function π(p_i) of p_i is given by (12), then, under the quadratic loss function, the hierarchical Bayesian estimation of p_i is

    p_iHB = ∫_1^c (B(e_i + 3/2, s_i − e_i + b) / B(1/2, b)) db / ∫_1^c (B(e_i + 1/2, s_i − e_i + b) / B(1/2, b)) db.

Proof. As in the proof of Theorem 2, the likelihood function of the samples is

    L(p_i) ∝ p_i^{e_i} (1 − p_i)^{s_i − e_i},

where s_i = Σ_{j=i}^m n_j and e_i = Σ_{j=1}^i r_j (i = 1, 2, …, m). From the hierarchical prior density function of p_i given by (12), Bayes' theorem leads to the hierarchical posterior density function of p_i,

    h(p_i | s_i, e_i) = ∫_1^c (p_i^{e_i − 1/2} (1 − p_i)^{s_i − e_i + b − 1} / B(1/2, b)) db / ∫_1^c (B(e_i + 1/2, s_i − e_i + b) / B(1/2, b)) db,  0 < p_i < 1.

With the quadratic loss function, the hierarchical Bayesian estimation of p_i is the posterior mean, which gives the expression stated above. Thus, the proof is completed.
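The hierarchical estimate in Theorem 3 has no closed form, but the two one-dimensional integrals over b are easy to evaluate numerically. A sketch (Python; trapezoidal rule, with helper names of our own choosing):

```python
import math

def beta_fn(x, y):
    """Beta function B(x, y), computed via log-gamma for numerical stability."""
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

def p_hier_bayes(e_i, s_i, c, n=2000):
    """Hierarchical Bayesian estimate of p_i: the ratio of the two beta-function
    integrals in Theorem 3, each approximated by the composite trapezoidal
    rule with n panels on (1, c)."""
    h = (c - 1) / n
    num = den = 0.0
    for k in range(n + 1):
        b = 1 + k * h
        w = 0.5 if k in (0, n) else 1.0   # trapezoidal endpoint weights
        num += w * beta_fn(e_i + 1.5, s_i - e_i + b) / beta_fn(0.5, b)
        den += w * beta_fn(e_i + 0.5, s_i - e_i + b) / beta_fn(0.5, b)
    return num / den
```

By the generalized mean value theorem used later in Theorem 4, the result always lies between (e_i + 1/2)/(s_i + c + 1/2) and (e_i + 1/2)/(s_i + 1 + 1/2), which makes a convenient sanity check.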

Property of E-Bayesian Estimation of p_i
Now we discuss the relations between p_iEB and p_iHB obtained in Theorems 2 and 3.

Theorem 4. For p_iEB and p_iHB in Theorems 2 and 3,

    lim_{s_i → ∞} (p_iEB − p_iHB) = 0.
Proof. From the proof of Theorem 2, we have

    p_iEB = (1/(c − 1)) ∫_1^c ((e_i + 1/2)/(s_i + b + 1/2)) db.

Since (e_i + 1/2)/(s_i + b + 1/2) is continuous in b, by the mean value theorem for definite integrals there is at least one number b_1 ∈ (1, c) such that

    p_iEB = (e_i + 1/2)/(s_i + b_1 + 1/2).

By the relation between the beta function and the gamma function, B(x, y) = Γ(x)Γ(y)/Γ(x + y), where the gamma function is Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt and satisfies Γ(x + 1) = xΓ(x), we have

    B(e_i + 3/2, s_i − e_i + b) / B(e_i + 1/2, s_i − e_i + b) = (e_i + 1/2)/(s_i + b + 1/2).

Hence p_iHB in Theorem 3 can be written as

    p_iHB = ∫_1^c ((e_i + 1/2)/(s_i + b + 1/2)) g(b) db / ∫_1^c g(b) db,

where g(b) = B(e_i + 1/2, s_i − e_i + b)/B(1/2, b) > 0. Since (e_i + 1/2)/(s_i + b + 1/2) is continuous and g(b) is positive and continuous, by the generalized mean value theorem for definite integrals there is at least one number b_2 ∈ (1, c) such that

    p_iHB = (e_i + 1/2)/(s_i + b_2 + 1/2).

Therefore

    |p_iEB − p_iHB| = (e_i + 1/2)|b_1 − b_2| / [(s_i + b_1 + 1/2)(s_i + b_2 + 1/2)] ≤ (s_i + 1/2)(c − 1)/(s_i + 3/2)² → 0

as s_i → ∞, since e_i ≤ s_i and b_1, b_2 ∈ (1, c). Thus, the proof is completed.
Theorem 4 shows that p_iEB and p_iHB are asymptotically equivalent as s_i tends to infinity. In applications, p_iEB and p_iHB are close to each other when s_i is sufficiently large.
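The convergence in Theorem 4 can be checked numerically. A small sketch (Python; function names ours), scaling e_i and s_i together and watching the gap between the two estimates shrink:

```python
import math

def p_e_bayes(e_i, s_i, c):
    """E-Bayesian estimate (closed form from Theorem 2)."""
    return (e_i + 0.5) / (c - 1) * math.log((s_i + c + 0.5) / (s_i + 1.5))

def p_hier_bayes(e_i, s_i, c, n=2000):
    """Hierarchical Bayesian estimate (Theorem 3), via the trapezoidal rule."""
    B = lambda x, y: math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))
    h = (c - 1) / n
    num = den = 0.0
    for k in range(n + 1):
        b, w = 1 + k * h, (0.5 if k in (0, n) else 1.0)
        num += w * B(e_i + 1.5, s_i - e_i + b) / B(0.5, b)
        den += w * B(e_i + 0.5, s_i - e_i + b) / B(0.5, b)
    return num / den

# Gap |p_iEB - p_iHB| for growing s_i (with e_i = s_i / 10, c = 4):
gaps = [abs(p_e_bayes(s // 10, s, 4) - p_hier_bayes(s // 10, s, 4))
        for s in (10, 100, 1000)]
```

The bound from the proof, (s_i + 1/2)(c − 1)/(s_i + 3/2)², already forces the gap toward zero; the observed gaps are well inside it.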

Application Example
Han [6] provided data from a type I censored life test of a type of engine, listed in Table 1. The estimates p_iEB and p_iHB satisfy Theorem 4, and for different values of c (c = 2, 3, 4, 5, 6), p_iEB and p_iHB (i = 1, 2, …, 9) are all robust (only for i = 9 is there some difference). In applications, the author suggests selecting the value of c at the midpoint of the interval [2, 6], that is, c = 4. When c = 4, some numerical results are listed in Table 3 and Figure 1.
Note: in Figure 1, * marks the results of p_iEB and o marks the results of p_iHB (i = 1, 2, …, 9).
From Table 3 and Figure 1, we find that p_iEB and p_iHB are very close to each other, consistent with Theorem 4. From Table 3, we also find that the results of p_iEB and p_iHB are very close to the corresponding results of Han [6].
According to Han [6], we may assume that the lifetime of these products follows a Weibull distribution with distribution function

    F(t) = 1 − exp[−(t/η)^m],  t > 0.  (23)

According to Mao and Luo [7], the least squares estimates of η and m are, respectively,

    m = (Σ_{i=1}^9 x_i y_i − 9 x̄ ȳ) / (Σ_{i=1}^9 x_i² − 9 x̄²),  η = exp(x̄ − ȳ/m),  (24)

where x_i = ln t_i, y_i = ln[−ln(1 − p_i)], x̄ = (1/9) Σ_{i=1}^9 x_i, and ȳ = (1/9) Σ_{i=1}^9 y_i; this follows from the linearization ln[−ln(1 − F(t))] = m ln t − m ln η of (23). According to (23), the estimate of the reliability at moment t is

    R(t) = exp[−(t/η)^m],  (25)

where η and m are given by (24). From (24) and Table 3, we can obtain η and m; some numerical results are listed in Table 4. From Table 4, we find that the results of η and m are very close to the corresponding results of Han [6].
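The least squares step and the reliability estimate (25) can be sketched as follows (Python; the function names and the synthetic inputs are ours, not from the paper). With exact Weibull probabilities as input, the linearized fit recovers η and m exactly:

```python
import math

def fit_weibull(t, p):
    """Least squares estimates of the Weibull parameters (eta, m) from pairs
    (t_i, p_i), using the linearization
        ln(-ln(1 - F(t))) = m*ln(t) - m*ln(eta),
    i.e. ordinary least squares of y_i = ln(-ln(1 - p_i)) on x_i = ln(t_i)."""
    xs = [math.log(ti) for ti in t]
    ys = [math.log(-math.log(1.0 - pi)) for pi in p]
    k = len(xs)
    xbar, ybar = sum(xs) / k, sum(ys) / k
    m_hat = (sum(x * y for x, y in zip(xs, ys)) - k * xbar * ybar) / \
            (sum(x * x for x in xs) - k * xbar * xbar)
    eta_hat = math.exp(xbar - ybar / m_hat)   # from ln(eta) = x - y/m at the mean
    return eta_hat, m_hat

def reliability(t, eta, m):
    """Reliability estimate R(t) = exp(-(t/eta)**m)."""
    return math.exp(-((t / eta) ** m))
```

For example, failure probabilities generated from a Weibull distribution with η = 1000 and m = 1.5 lie exactly on the regression line, so `fit_weibull` returns those parameters back (up to floating point error).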
From (25) and Table 4, we can obtain the estimate of the reliability; some numerical results of R_EB(t) and R_HB(t) are listed in Table 5 and Figure 2.
Note that R_EB(t) is the estimate of the reliability at moment t based on p_iEB, R_HB(t) is the estimate based on p_iHB, and R_−B(t) = |R_EB(t) − R_HB(t)|.
Note: in Figure 2, * marks the results of R_EB(t) and o marks the results of R_HB(t).
From Table 5, we find that R_−B(t) < 0.0092 for t = 200, 600, 1000, 1400, 1800, 2000, and that the results of R_EB(t) and R_HB(t) are very close to those of Han [6].

Conclusions
This paper introduces a new method, called E-Bayesian estimation, to estimate the failure probability. For any new parameter estimation method, the author would like to put forward the following two questions: (1) how is the new method related to existing methods? (2) In which respects is the new method superior to the old ones?
For the E-Bayesian estimation method, Theorem 4 gives a good answer to question (1); in addition, the application example shows that p_iEB and p_iHB indeed satisfy Theorem 4.
As to question (2), Theorems 2 and 3 show that the expression for the E-Bayesian estimation is much simpler, whereas the expression for the hierarchical Bayesian estimation involves the beta function and complicated integrals, which are often not easy to evaluate.
Reviewing the application example, we find that the E-Bayesian estimation method is both efficient and easy to operate.

Definition 1.
With p_iB(b) continuous,

    p_iEB = ∫_D p_iB(b) π(b) db  (4)

is called the expected Bayesian estimation (briefly, E-Bayesian estimation) of p_i, which is assumed to be finite, where D is the domain of b, p_iB(b) is the Bayesian estimation of p_i with hyperparameter b, and π(b) is the density function of b over D.

Figure 2: Results of R_EB(t) and R_HB(t).

Table 1 (time unit: hour). By Theorems 2 and 3 and Table 1, we can obtain p_iEB, p_iHB, and p_i−B = |p_iEB − p_iHB|. Some numerical results are listed in Table 2.

Table 2.

Figure 1: Results of p_iEB and p_iHB.

Table 1: Test data of the engine.

Table 3: Results of p_iEB and p_iHB.

Table 4: Results of η and m.

Table 5: Results of R_EB(t) and R_HB(t).