On the Mean Residual Life Function and Stress and Strength Analysis under Different Loss Functions for the Lindley Distribution

Purpose. Mathematical properties of the Lindley distribution are derived under different loss functions. These properties include the mean residual life function, the Lorenz curve, the stress and strength characteristic, and their respective posterior risks via a simulation scheme. Methodology. A Bayesian approach is used for the reliability characteristics. Results are compared on the basis of posterior risk. Findings. Using prior information on the parameter of the Lindley distribution, Bayes estimates of the reliability characteristics are compared under different loss functions. Practical Implications. Since the Lindley distribution is a mixture of gamma and exponential distributions, the Bayesian estimation of its reliability characteristics has great implications in reliability theory. Originality. A real-life application to waiting-time data at a bank is also described for the developed procedures. This study is useful for researchers and practitioners in reliability theory.


Introduction
Although the exponential distribution is frequently used as a lifetime distribution in statistics and applied areas, the Lindley distribution had been ignored in the literature since 1958. The Lindley distribution was originally developed by Lindley [1], and some of its classical statistical properties were investigated by Ghitany et al. [2]. Sankaran [3] introduced a discrete version of the Lindley distribution known as the discrete Poisson-Lindley distribution, and Ghitany and Al-Mutairi [4] described some estimation methods for it. The zero-truncated Poisson-Lindley distribution was introduced by Ghitany et al. [5], who used the distribution for modeling count data in cases where the distribution has to be adjusted for the count of missing zeros. Zamani and Ismail [6] introduced the negative binomial distribution as an alternative to the zero-truncated Poisson-Lindley distribution. Recently, Ghitany et al. [7] introduced a two-parameter weighted Lindley distribution and pointed out that the Lindley distribution is particularly useful in modelling biological data from mortality studies.
The rest of the study is organized as follows. Section 2 deals with the derivation of the posterior distribution using different noninformative and informative priors. Using different loss functions, the Bayes estimators and their respective posterior risks are discussed in Section 3; the elicitation of the hyperparameters is also discussed there. A simulation study of the Bayes estimates of the mean residual life and their posterior risks is performed in Section 4. The Lorenz curve for the Lindley distribution is discussed in Section 5, while the stress and strength reliability characteristics and the corresponding simulation study under different loss functions are presented in Section 6. A real-life application is illustrated in Section 7. Finally, Section 8 offers a conclusion and some remarks on future work.

Likelihood Function and Posterior Distributions
The posterior distribution summarizes the available probabilistic information on the parameters, combining the prior distribution with the sample information contained in the likelihood function. The likelihood principle suggests that inference on the parameter should depend only on its posterior distribution; the Bayesian scientist's job is to assist the investigator in extracting features of interest from the posterior distribution. In this section, we use the Lindley model as the sampling distribution together with noninformative priors to derive the posterior distribution. A random variable $X$ is said to possess a Lindley distribution if its density has the following form:
\[ f(x; \theta) = \frac{\theta^2}{1+\theta}\,(1+x)\,e^{-\theta x}, \quad x > 0, \ \theta > 0. \]
It is obvious from Figure 1 that the behaviour of the Lindley distribution is close to that of the exponential or gamma distribution.
The likelihood function for a random sample $x_1, x_2, \ldots, x_n$ taken from the Lindley distribution is
\[ L(\theta \mid \mathbf{x}) = \frac{\theta^{2n}}{(1+\theta)^n} \prod_{i=1}^{n}(1+x_i)\, e^{-\theta \sum_{i=1}^{n} x_i}. \]
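Since the likelihood depends on the data only through $n$, $\sum x_i$, and $\sum \ln(1+x_i)$, the log-likelihood and the maximum likelihood estimate are straightforward to compute. The following Python sketch (a minimal illustration, not part of the original derivation) evaluates the Lindley density and log-likelihood and uses the standard closed-form MLE, the positive root of $\bar{x}\theta^2 + (\bar{x}-1)\theta - 2 = 0$:

```python
import math

def lindley_pdf(x, theta):
    """Lindley density f(x; theta) = theta^2/(1+theta) * (1+x) * exp(-theta*x)."""
    return theta**2 / (1 + theta) * (1 + x) * math.exp(-theta * x)

def lindley_loglik(theta, data):
    """Log-likelihood of a Lindley(theta) sample."""
    n = len(data)
    return (2 * n * math.log(theta) - n * math.log(1 + theta)
            + sum(math.log(1 + x) for x in data) - theta * sum(data))

def lindley_mle(data):
    """Closed-form MLE: positive root of xbar*theta^2 + (xbar-1)*theta - 2 = 0."""
    xbar = sum(data) / len(data)
    return (-(xbar - 1) + math.sqrt((xbar - 1) ** 2 + 8 * xbar)) / (2 * xbar)
```

The closed-form root follows from setting the score $2n/\theta - n/(1+\theta) - \sum x_i$ to zero, so no iterative optimization is needed.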

Uniform Prior
An argument in favor of the uniform prior is that when the data are sufficiently informative, so that the likelihood function is sharply peaked, it matters little which prior is used, since all reasonably smooth prior densities lead to approximately the same posterior density. The uniform density is in most cases convenient for simplifying posterior calculations. This argument supports the uniform prior only in those cases where it produces approximately the same conclusions as a highly imprecise prior constructed from a sufficiently large class of prior densities. If the data are highly informative, the uniform prior may produce reasonable inferences. The uniform prior for $\theta$ is defined as $p(\theta) \propto 1$, $\theta > 0$. The posterior distribution of the parameter $\theta$ for the given data, using (2) and (3), is
\[ p(\theta \mid \mathbf{x}) \propto \frac{\theta^{2n}}{(1+\theta)^n}\, e^{-\theta \sum_{i=1}^{n} x_i}. \]

Jeffreys Prior
Jeffreys was motivated by invariance requirements and suggested a solution for providing a noninformative prior, using methods of differential geometry. The requirements are invariance under one-to-one transformations and invariance under sufficient statistics. The one-dimensional version of the Jeffreys prior has been justified from many different viewpoints. Jeffreys [8] proposed a formal rule for obtaining a noninformative prior as follows: if $\theta$ is a $k$-vector valued parameter, then the Jeffreys prior (JP) of $\theta$ is
\[ g(\theta) \propto \sqrt{\det I(\theta)}, \]
where $I(\theta)$ is the $k \times k$ Fisher information matrix whose $(i,j)$th element is $-E[\partial^2 \ln L(\theta \mid \mathbf{x})/\partial\theta_i\,\partial\theta_j]$, $i, j = 1, 2, \ldots, k$. Fisher's information matrix is not directly related to the notion of lack of information; the connection comes from the role of Fisher's matrix in asymptotic theory. Jeffreys noninformative priors based on Fisher's information matrix often lead to a family of improper priors. The Jeffreys prior of the parameter $\theta$ is
\[ g(\theta) \propto \sqrt{\frac{2}{\theta^2} - \frac{1}{(1+\theta)^2}} = \frac{\sqrt{\theta^2 + 4\theta + 2}}{\theta(1+\theta)}. \]

Posterior Distribution Using Informative Prior.
In the case of an informative prior, the use of prior information is equivalent to adding a number of observations to a given sample and therefore leads to a reduction of the variance/posterior risk of the Bayes estimates. Bansal [9] discussed a method to evaluate the relevance of prior information in terms of the number of additional observations supposedly added to the given sample size. We used the gamma and conjugate informative priors for the analysis.

Likelihood Matching Prior (Conjugate Prior). The likelihood matching prior (LMP) for Lindley distribution is
and the posterior distribution using LMP is

Bayes Estimators and Posterior Risk under Different Loss Functions
This section focuses on the derivation of the Bayes estimators (BEs) under different loss functions and their respective posterior risks (PRs). The results are compared for noninformative as well as informative priors. If the decision is the choice of an estimator, then the Bayes decision is a Bayes estimator. The Bayes estimators are evaluated under the squared error loss function (SELF), weighted squared error loss function (WSELF), precautionary loss function (PLF), modified (quadratic) squared error loss function (M/Q SELF), squared-log error loss function (SLLF), entropy loss function (ELF), and K-loss function. The K-loss function (KLF), proposed by Wasan [10], is well suited as a measure of inaccuracy for an estimator of a scale parameter of a distribution defined on $\mathbb{R}^+ = (0, \infty)$. Kanefuji and Iwase [11] used the KLF for the estimation of a scale parameter with a known coefficient of variation. Table 1 (from [12]) shows the Bayes estimators and their posterior risks for the above-mentioned loss functions.
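Once posterior draws of $\theta$ are available, each Bayes estimator reduces to a simple posterior moment. The sketch below uses the standard closed forms (posterior mean under SELF, $[E(\theta^{-1})]^{-1}$ under WSELF, $\sqrt{E(\theta^2)}$ under PLF, $\exp(E[\ln\theta])$ under SLLF, and $\sqrt{E(\theta)/E(\theta^{-1})}$ under KLF); these are well-known results, but Table 1 should be consulted for the exact conventions used here:

```python
import math

def bayes_estimates(samples):
    """Bayes estimators under common loss functions, computed from
    Monte Carlo posterior draws of a positive parameter theta."""
    M = len(samples)
    m1 = sum(samples) / M                      # E(theta)
    m2 = sum(t * t for t in samples) / M       # E(theta^2)
    inv = sum(1 / t for t in samples) / M      # E(1/theta)
    logm = sum(math.log(t) for t in samples) / M
    return {
        "SELF":  m1,                  # posterior mean
        "WSELF": 1 / inv,             # [E(1/theta)]^{-1}
        "PLF":   math.sqrt(m2),       # sqrt(E(theta^2))
        "SLLF":  math.exp(logm),      # exp(E(log theta))
        "KLF":   math.sqrt(m1 / inv), # sqrt(E(theta)/E(1/theta))
    }
```

The same posterior draws therefore yield every estimator in one pass, which is convenient when comparing posterior risks across loss functions.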

Elicitation of Hyperparameter(s).
Even though many authors have pointed out the need for a formal and comprehensive process for the elicitation of hyperparameters, there is no standard method. For elicitation, mainly two points are considered: the functional form of the prior distribution and the hyperparameter(s). This is why a natural conjugate prior distribution has generally been recommended: its functional form is identical to that of the likelihood function, and the posterior distribution can be determined by way of conjugacy. To determine the hyperparameters, we adopted the method discussed by Ali et al. [12].

Mean Residual Life Function
For a continuous distribution with density $f(x)$ and cumulative distribution function $F(x)$, the mean residual life function is defined as
\[ m(t) = E(X - t \mid X > t) = \frac{1}{1-F(t)} \int_t^{\infty} (1-F(x))\,dx. \]
Bayramoglu and Gurler [13] studied the mean residual life function of a $k$-out-of-$n$ system with nonidentical components, while Govil and Aggarwal [14] and Abdous and Berred [15] made comparisons for different distributions such as the gamma, exponential, Pareto, uniform, truncated normal, and Maxwell. For the Lindley distribution, the mean residual life function is
\[ m(t) = \frac{2+\theta+\theta t}{\theta(1+\theta+\theta t)}. \]
Ghitany et al. [2, 5] point out the following remarks.
From the mean residual life function, one can easily observe that the mean residual life is a diminishing function of time, because the distribution belongs to the exponential family and, for larger parameter values, it is close to zero. To evaluate the Bayes estimates and their risks, since integrals appear in both the numerator and the denominator, a suitable approximation method is required to obtain the Bayes estimates and their respective posterior risks. The simplest method is Lindley's [16] approximation, which approaches the ratio of the integrals as a whole and produces a single numerical result. We therefore use the Lindley approximation (LA) given by Lindley [16] for obtaining the Bayes estimator and posterior risk (Mathematica can be used for the exact solution of the integrals but takes a long time compared to the LA). Many researchers have used this approximation for solving ratios of integrals for lifetime distributions with different numbers of parameters; see, among others, Howlader and Hossain [17], Singh et al. [18], and Preda et al. [19].
If $n$ is sufficiently large, the ratio of integrals, according to Lindley [16], can be computed as
\[ E[u(\theta) \mid \mathbf{x}] \approx u(\hat{\theta}) + \frac{1}{2}\left[u''(\hat{\theta}) + 2u'(\hat{\theta})\rho'(\hat{\theta})\right]\sigma^2 + \frac{1}{2}\,u'(\hat{\theta})\,l'''(\hat{\theta})\,\sigma^4, \]
where $u(\theta)$ is a function of $\theta$ only, $l(\theta, \mathbf{x})$ is the log-likelihood, $\rho(\theta)$ is the log of the prior of $\theta$, $\hat{\theta}$ is the maximum likelihood estimate, and $\sigma^2 = [-l''(\hat{\theta})]^{-1}$. The results are given in Tables 2, 3, 4, 5, 6, 7, and 8.
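As a cross-check on the Lindley approximation, the one-dimensional posterior can also be handled by brute-force numerical integration. The sketch below (an illustrative alternative, not the scheme used in the tables) normalizes the posterior under the uniform prior, $p(\theta \mid \mathbf{x}) \propto \theta^{2n}(1+\theta)^{-n}e^{-\theta\sum x_i}$, on a grid and averages the Lindley mean residual life $m(t) = (2+\theta+\theta t)/(\theta(1+\theta+\theta t))$:

```python
import math

def mrl(theta, t):
    """Mean residual life of Lindley(theta): (2+theta+theta*t)/(theta*(1+theta+theta*t))."""
    return (2 + theta + theta * t) / (theta * (1 + theta + theta * t))

def bayes_mrl_uniform_prior(data, t, grid_n=20000, upper=20.0):
    """Posterior mean of m(t) under the uniform prior, obtained by
    normalizing p(theta|x) on a midpoint grid over (0, upper)."""
    n, s = len(data), sum(data)
    thetas = [upper * (i + 0.5) / grid_n for i in range(grid_n)]
    # work with log-weights for numerical stability
    logw = [2 * n * math.log(th) - n * math.log(1 + th) - th * s for th in thetas]
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]
    z = sum(w)
    return sum(wi * mrl(th, t) for wi, th in zip(w, thetas)) / z
```

Because the parameter is scalar, this grid average is cheap and can be used to gauge the accuracy of the LA for any choice of $u(\theta)$.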
Using the SELF for θ = 7.142857, the Bayes estimates are overestimated, while for θ = 1.2 and 0.114114 the results are underestimated; with increasing sample size, these approach the true parameter values. This behaviour can be observed for all loss functions except the PLF. Comparing symmetric and asymmetric loss functions, one easily observes that the SELF has the smaller posterior risk. However, the symmetric loss function has a defect, namely that the SELF assigns equal weight to over- and underestimation, so we have to look for an alternative choice. Among the asymmetric loss functions, the WSELF, SLLF, and PLF can be alternative choices.

Lorenz Curve (See Figure 2)
For a positive random variable $X$, the Lorenz curve is defined by the graph of the ratio
\[ L(F(x)) = \frac{1}{E(X)} \int_0^x t f(t)\,dt \]
against $F(x)$, with the properties $L(p) \le p$, $L(0) = 0$, and $L(1) = 1$ for $0 \le p \le 1$. If $X$ represents annual income, then $L(p)$ is the proportion of total income that accrues to individuals having the $100p\%$ lowest incomes; see Gail and Gastwirth [20] for details of Lorenz curves. For the exponential distribution, it is well known that the Lorenz curve is given by
\[ L(p) = p + (1-p)\ln(1-p). \]
For the Lindley distribution, the Lorenz curve is
\[ L(F(x)) = 1 - \frac{\theta^2 x^2 + \theta(\theta+2)x + (\theta+2)}{\theta+2}\, e^{-\theta x}. \]
The comparison of the Lorenz curves for the exponential and Lindley distributions is given in Figures 2 and 3. The Lindley distribution performs better than the exponential distribution in terms of the Lorenz curve for different values of $\theta$.
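The curves in Figures 2 and 3 can be reproduced numerically. The sketch below (illustrative; it uses the closed forms stated above) evaluates the exponential Lorenz curve and the pairs $(F(x), L(F(x)))$ for the Lindley distribution, so both can be plotted on the same unit square:

```python
import math

def exp_lorenz(p):
    """Lorenz curve of the exponential distribution: L(p) = p + (1-p)ln(1-p), 0 <= p < 1."""
    return p + (1 - p) * math.log(1 - p)

def lindley_cdf(x, theta):
    """CDF of the Lindley distribution."""
    return 1 - (1 + theta + theta * x) / (1 + theta) * math.exp(-theta * x)

def lindley_lorenz_point(x, theta):
    """Return the pair (F(x), L(F(x))) for the Lindley distribution."""
    num = theta**2 * x**2 + theta * (theta + 2) * x + theta + 2
    L = 1 - math.exp(-theta * x) * num / (theta + 2)
    return lindley_cdf(x, theta), L
```

Sweeping $x$ over a grid of positive values traces the full Lindley Lorenz curve; the exponential curve is parameterized directly by $p$.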

Stress and Strength Analysis for Lindley Distribution
Let $X$ and $Y$ be two random variables such that $X$ represents "strength" and $Y$ represents "stress"; the reliability of the stress-strength model is then presented as
\[ R = P(Y < X) = \iint_{y < x} f(x, y)\,dy\,dx, \]
where $P(Y < X)$ represents the probability that the strength exceeds the stress and $f(x, y)$ is the joint p.d.f. of $X$ and $Y$. For example, a receptor in the human eye operates only if it is stimulated by a source whose magnitude is greater than a random lower threshold for the eye, so here $R$ is the probability that the receptor operates. In mechanical reliability, $1 - R$ gives the probability of a system failure, if the system fails whenever the strength is less than the applied stress.
This problem has a long history, starting with the pioneering work of Birnbaum [21] and Birnbaum and McCarty [22]. The term stress-strength was first introduced by Church and Harris [23]. A comprehensive treatment of the different stress-strength models can be found in the excellent monograph by Kotz et al. [24]. Some recent work on the stress-strength model can be found in Kundu and Gupta [25, 26], Raqab and Kundu [27], Kundu and Raqab [28], Krishnamoorthy et al. [29], Eryilmaz [30], and the references cited therein. Recently, Al-Mutairi et al. [31] considered the stress and strength analysis of the Lindley distribution using the SELF; we generalize their work by considering different types of loss functions.
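Because the stress-strength study here is simulation-based, a compact way to generate Lindley variates is the mixture representation: Lindley($\theta$) is an exponential($\theta$) with probability $\theta/(1+\theta)$ and a gamma($2,\theta$) otherwise. A minimal Monte Carlo sketch for $R = P(Y < X)$ (illustrative; the actual simulation design of the study may differ):

```python
import random

def sample_lindley(theta, rng):
    """Draw from Lindley(theta) via its mixture form:
    Exp(theta) w.p. theta/(1+theta), Gamma(2, theta) w.p. 1/(1+theta)."""
    if rng.random() < theta / (1 + theta):
        return rng.expovariate(theta)
    # Gamma(2, theta) as the sum of two independent Exp(theta) variates
    return rng.expovariate(theta) + rng.expovariate(theta)

def stress_strength_R(theta_x, theta_y, n=200_000, seed=1):
    """Monte Carlo estimate of R = P(Y < X), with strength X ~ Lindley(theta_x)
    and stress Y ~ Lindley(theta_y)."""
    rng = random.Random(seed)
    hits = sum(sample_lindley(theta_y, rng) < sample_lindley(theta_x, rng)
               for _ in range(n))
    return hits / n
```

With $n = 2\times10^5$ pairs the Monte Carlo standard error of $R$ is of order $10^{-3}$, small enough to compare Bayes estimates across loss functions.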
The Bayes estimates of stress and strength under the SELF, MSELF, and ELF are underestimated. Comparing symmetric and asymmetric loss functions, it is noted that the posterior risk of the SELF is smaller than those of the asymmetric loss functions. Among the asymmetric loss functions, the WSELF and PLF have smaller posterior risks than the other available loss functions.
Evaluating the performance of informative and noninformative priors, one can easily observe that informative priors have smaller posterior risks due to the availability of more compact information. The LMP and gamma priors both show approximately the same behaviour, depending upon the choice of hyperparameter values. More compact information will lead to correct hyperparameters, which will definitely lead to better results and smaller posterior risks than noninformative priors. There are, however, some instances where informative priors have posterior risks greater than noninformative priors, which is just due to random generation. Increasing the sample size in the case of the SLLF has an inverse effect.

Real Life Application
Ghitany et al. [2] provide a data set of waiting times (in minutes) before service for 100 bank customers, modeled with the Lindley distribution. They fitted both the Lindley and exponential distributions (both have the same number of parameters) by the method of maximum likelihood and found that the Lindley distribution provides the better fit. The data are given in Tables 16 and 17.
We applied the Kolmogorov-Smirnov test to both data sets and found that the Lindley distribution fits well. The values of the K-S test statistic along with the p values are given in Table 18.
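The K-S statistic in Table 18 compares the empirical distribution function of the waiting-time data with the fitted Lindley CDF, $F(x) = 1 - \frac{1+\theta+\theta x}{1+\theta}e^{-\theta x}$. A small sketch of this computation (illustrative; `waiting_times` and `theta_hat` in the usage comment are hypothetical placeholders for the data and the ML fit reported by Ghitany et al. [2]):

```python
import math

def lindley_cdf(x, theta):
    """CDF of the Lindley distribution."""
    return 1 - (1 + theta + theta * x) / (1 + theta) * math.exp(-theta * x)

def ks_statistic(data, cdf):
    """Two-sided Kolmogorov-Smirnov statistic D_n = sup_x |F_n(x) - F(x)|,
    computed at the jump points of the empirical CDF."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

# usage with a fitted theta_hat (hypothetical names for illustration):
# d = ks_statistic(waiting_times, lambda x: lindley_cdf(x, theta_hat))
```

The p value then follows from the Kolmogorov distribution of $\sqrt{n}\,D_n$, or from tabulated critical values.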
The Bayes estimates of stress and strength reliability under different priors and their posterior risk are evaluated in Tables 19 and 20.
Since the Lindley distribution belongs to the exponential family, the natural conjugate prior is the gamma distribution.
The posterior risks of the LMP and GP are approximately the same and smaller compared with the noninformative priors. There are some posterior risk values which are greater than those of the noninformative priors; these are just due to the effect of the hyperparameter values, that is, more accurate values will lead to smaller posterior risks. The PLF and WSELF have smaller posterior risks than the other loss functions.

Conclusion and Suggestions
We considered the Bayesian analysis of the Lindley model via informative and noninformative priors under different loss functions. Based on the posterior distribution and the different properties studied, we conclude that the informative priors (LMP, GP) perform approximately equally and have smaller posterior risks compared with the noninformative priors; also, the Jeffreys prior results are more precise than those of the uniform prior. In other words, we can summarize the result as GP (PR) ≤ LMP (PR) < JP (PR) < UP (PR).
As far as the choice of loss function is concerned, one can easily observe, based on the evidence (the different properties discussed above), that the PLF, SLLF, and WSELF are more suitable than the other asymmetric loss functions. One thing is common throughout: as we increase the sample size, the posterior risk comes down. In the future, this work can be extended to censored data.