Robust Assessment of the Lifetime Performance of Products with the Inverse Gaussian Distribution in Bayesian and Classical Setups

The inverse Gaussian (Wald) distribution belongs to the two-parameter family of continuous distributions with support on (0, ∞) and is considered a potential candidate for modelling diffusion processes and lifetime datasets. Bayesian analysis is a modern inferential technique in which we estimate the parameters of the posterior distribution obtained by formally combining a prior distribution with the observed data distribution. In this article, we perform Bayesian and classical analyses of the Wald distribution and compare the results. Jeffreys' and uniform priors are used as noninformative priors, while the exponential distribution is used as an informative prior. The analysis comprises finding the joint posterior distributions, posterior means, predictive distributions, and credible intervals. To illustrate the entire estimation procedure, we use real and simulated datasets, and the results obtained are discussed and compared. We use the specialized Bayesian software OpenBUGS to perform Markov Chain Monte Carlo (MCMC) simulations on a real dataset.


Introduction
In probability theory, the inverse Gaussian distribution (IGD), also known as the Wald distribution, belongs to the two-parameter family of continuous distributions with support on (0, ∞) [1]. The concept of Brownian motion is applicable in describing the inherent process of many phenomena, particularly in the natural and physical sciences.
The time at which a Brownian motion with positive drift first reaches a fixed level is distributed as an IGD. The probability density function, or density for short, of the IGD is given by

f(x; μ, λ) = √(λ/(2πx³)) exp(−λ(x − μ)²/(2μ²x)), x > 0, μ > 0, λ > 0. (1)

Here, μ is the mean and λ is the shape parameter. The IGD approaches the normal distribution as λ ⟶ ∞.
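For reference, density (1) can be evaluated directly; the minimal sketch below is ours (the function name `ig_pdf` is an illustrative choice, not from the article):

```python
import math

def ig_pdf(x, mu, lam):
    """Density (1) of the inverse Gaussian IG(mu, lam) at x > 0."""
    return math.sqrt(lam / (2.0 * math.pi * x ** 3)) * \
        math.exp(-lam * (x - mu) ** 2 / (2.0 * mu ** 2 * x))
```

At x = μ the exponential factor is 1, so the density reduces to √(λ/(2πμ³)).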
A brief retrospective on the IGD is now presented. To begin, the author in [2] studied the inverse Gaussian as a model for Brownian motion, first considering its basic statistical properties and noting certain similarities in its statistical analysis. The authors in [3] proposed it as a lifetime model to be applied in situations where the initial failure rate is high. The author in [4] considered estimating the inverse Gaussian (μ, λ) model in a Bayesian framework and also noted that estimation becomes very difficult when there is no natural conjugate prior. The authors in [5] considered the parameterization ψ = 1/μ with λ and evaluated Bayes estimates using uninformative reference and natural conjugate priors. The authors in [6] indicated that, under the approach in [5], the posterior mean of 1/ψ does not exist, so a Bayes estimate of the distribution mean is not available. The author in [7] estimated the reliability function based on the IG(μ, λ) parameterization. The authors in [8] evaluated estimates for λ, assuming Jeffreys' prior, and obtained its posterior density using the Gibbs sampling technique when μ is known.
Because of its shapes, the related density may also be considered a good competitor to the gamma, Weibull, and log-normal distributions. Various sampling-theory inferences with the IGD are studied in [9][10][11][12], among others. The authors in [13] treated some applications in marketing, while applications of the IG in life testing are considered in [2,3]. The authors in [14] investigated Bayesian estimation for the parameters of the IGD; they emphasized the MCMC technique and gave a complete implementation of the Gibbs sampler algorithm. The author in [5] obtained some Bayesian results for the inverse Gaussian family of distributions with a noninformative reference prior as well as the natural conjugate prior. The authors in [6] derived Bayesian results for the IGD by using a proper prior under a reparameterization in terms of the distribution mean and the inverse of the squared coefficient of variation, obtaining Bayes estimates of these quantities as well as of their inverses. The author in [9] reported on some statistical properties of the IGD when the parameters are confined to (0, ∞). A good review of the advantages of using Bayesian methods may be found in [15][16][17][18][19][20][21][22][23]. The posterior distributions often have complex multidimensional forms that require Markov Chain Monte Carlo (MCMC) methods to obtain results [16][24][25][26][27], and in recent years the use of MCMC methods has gained much popularity [28][29][30]. Most recently, the author in [31] has considered the q-Weibull distribution in detail for classical and Bayesian analyses, which also serves as a motivation for this study. Keeping in view the extensive literature on the importance of Bayesian analysis and the importance of the Wald distribution, we attempt to present a Bayesian analysis of the Wald distribution.
Jeffreys' and uniform priors are used as noninformative priors, while the exponential distribution is used as an informative prior. The analysis comprises finding joint posterior distributions, the posterior means, predictive distributions, and credible intervals. To illustrate the entire estimation procedure, we have used real and simulated datasets, and the results thus obtained are discussed and compared.
The literature reveals that many authors have studied classical distributions, including the Wald distribution, in a classical framework. We have observed a variety of applications of the Wald distribution and the capability of Bayes methods to incorporate prior information about the model parameters. To the best of our knowledge, the Wald distribution has not yet been studied in a Bayesian framework despite its potential applications. Therefore, to cover this gap in the literature, we attempt to perform a Bayesian analysis of the Wald distribution in this article.
The study is organized as follows. Section 2 considers the frequentist analysis of the inverse Gaussian distribution using the ML method and computes the standard errors associated with the classical estimates. A numerical example is presented in Section 3. Section 4 carries out the Bayesian analysis of the IGD assuming the uniform, Jeffreys', and subjective informative priors; it is supposed that the parameters of the IGD follow exponential distributions. The convergence diagnostics are given in Section 5. The predictive inference for the inverse Gaussian distribution is presented in Section 6. A comparison between the frequentist and Bayesian approaches is performed in Section 7, and a simulation study is carried out in Section 8 to justify the results.

Maximum Likelihood (ML) Estimation
Let x_1, x_2, . . . , x_n constitute a random sample of size n from the IGD. The likelihood function is

L(μ, λ | x) = ∏_{i=1}^{n} √(λ/(2πx_i³)) exp(−λ(x_i − μ)²/(2μ²x_i)).

The logarithmic form of the likelihood function is

l = (n/2) ln λ − (n/2) ln(2π) − (3/2) Σ_{i} ln x_i − (λ/(2μ²)) Σ_{i} (x_i − μ)²/x_i.

Differentiating this expression with respect to the unknown parameters μ and λ and equating the resulting equations to zero to maximize l, we get

μ̂ = x̄ = (1/n) Σ_{i} x_i,  λ̂ = n / Σ_{i} (1/x_i − 1/x̄).

These are the required ML estimates of the parameters μ and λ of the IGD.
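Since the score equations have closed-form solutions, the ML estimates can be computed directly. A minimal sketch (the function name `ig_mle` is our illustrative choice):

```python
def ig_mle(data):
    """Closed-form ML estimates for IG(mu, lam):
    mu_hat is the sample mean; 1/lam_hat is the mean of (1/x_i - 1/mu_hat)."""
    n = len(data)
    mu_hat = sum(data) / n
    lam_hat = n / sum(1.0 / x - 1.0 / mu_hat for x in data)
    return mu_hat, lam_hat
```

The denominator of λ̂ is strictly positive for non-degenerate positive data (arithmetic–harmonic mean inequality), so the estimate is always well defined.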

Standard Errors of the ML Estimates.
The main diagonal elements of the inverted Fisher information matrix (FIM) give the variances of the ML estimates. Hence, we can find the standard errors of the ML estimates by taking the square roots of these diagonal elements. The second derivatives of l with respect to μ, λ, and (μ, λ) are

∂²l/∂μ² = −(3λ/μ⁴) Σ_{i} x_i + 2nλ/μ³,  ∂²l/∂λ² = −n/(2λ²),  ∂²l/∂μ∂λ = (1/μ³)(Σ_{i} x_i − nμ).

Fisher's information matrix (F.I.M) I(μ, λ), defined as the negative expectation of the Hessian, is then

I(μ, λ) =
[ nλ/μ³      0
    0      n/(2λ²) ],

so that Var(μ̂) ≈ μ³/(nλ) and Var(λ̂) ≈ 2λ²/n. The off-diagonal elements are zero, showing that the parameters are asymptotically uncorrelated. The results are summarized in Table 1.
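Assuming the diagonal information matrix derived above, the asymptotic standard errors follow directly; a small sketch (the function name is ours):

```python
import math

def ig_standard_errors(mu_hat, lam_hat, n):
    """Asymptotic SEs from the inverse Fisher information:
    Var(mu_hat) ~ mu^3/(n*lam), Var(lam_hat) ~ 2*lam^2/n."""
    se_mu = math.sqrt(mu_hat ** 3 / (n * lam_hat))
    se_lam = math.sqrt(2.0 * lam_hat ** 2 / n)
    return se_mu, se_lam
```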

Numerical Example
A real dataset, analyzed by [14], is considered in this section. The dataset given in Table 2 denotes the active repair times (in hours) for an airborne communication transceiver.
Using these data, the ML estimates along with their variances are computed and given in Table 3.
Here, we observe that the estimates are stable with very small standard errors.

Uninformative Bayesian Analysis Using the Uniform Prior.
The uninformative uniform prior for both parameters μ and λ is defined as

p(μ) ∝ 1, μ > 0, and p(λ) ∝ 1, λ > 0.

As μ and λ are considered independent, their joint prior distribution is the product of their individual priors,

p(μ, λ) ∝ 1, μ > 0, λ > 0,

and the posterior distribution is given by

p(μ, λ | x) ∝ L(μ, λ | x) p(μ, λ).

Uninformative Bayesian Analysis Using Jeffreys' Prior.
The positive square root of the determinant of the FIM is known as Jeffreys' prior.
Mathematical Problems in Engineering
Here, p_J(μ, λ) denotes Jeffreys' prior. We proceed for Jeffreys' prior as follows. The determinant of the FIM of the IGD is

det I(μ, λ) = (nλ/μ³)(n/(2λ²)) = n²/(2μ³λ),

so Jeffreys' prior p_J(μ, λ) for the unknown parameters (μ, λ) is given by

p_J(μ, λ) ∝ √(det I(μ, λ)) ∝ 1/(μ^{3/2} √λ).

Hence, for the unknown parameters μ and λ, the joint posterior distribution is given by

p(μ, λ | x) = K⁻¹ L(μ, λ | x) p_J(μ, λ),

where K denotes the normalizing constant, defined as

K = ∫₀^∞ ∫₀^∞ L(μ, λ | x) p_J(μ, λ) dμ dλ.

Informative Bayesian Analysis Assuming the Exponential Prior.
It is known that expert opinion about a state of nature can be incorporated into the analysis through an informative prior on an unknown parameter. To achieve this, we suppose an exponential prior for both parameters of the IGD.

The Exponential Prior.
Here, we assume that the prior distributions of both parameters follow exponential distributions, defined as

p(μ) = h₁ e^{−h₁μ}, μ > 0, and p(λ) = h₂ e^{−h₂λ}, λ > 0.

Here, h₁ and h₂ are the hyperparameters. As μ and λ are supposed to be independent,

p(μ, λ) = p(μ) p(λ). (21)

So, the joint posterior distribution of the parameters μ and λ is

p(μ, λ | x) ∝ L(μ, λ | x) e^{−h₁μ − h₂λ}.

To make inferences about the two parameters of the IGD, we derive their marginal posterior distributions.
The marginal posterior distribution of μ may be defined as

p(μ | x) = ∫₀^∞ p(μ, λ | x) dλ,

and, similarly, the marginal posterior distribution of λ may be derived as

p(λ | x) = ∫₀^∞ p(μ, λ | x) dμ.

A loss function is required to obtain the Bayes estimates. We have considered the squared error loss function (SELF), under which the Bayes estimates are the posterior means, that is,

μ̂_B = E(μ | x) = ∫₀^∞ μ p(μ | x) dμ,  λ̂_B = E(λ | x) = ∫₀^∞ λ p(λ | x) dλ.

Such expressions generally have complex structures, so numerical methods are required to solve them. Therefore, to simulate data from the posterior distribution(s), we have used the MCMC technique through OpenBUGS, and the resulting Bayes estimates, posterior risks, and 95% highest posterior density regions are given in Table 4. The results show that the posterior estimates produced by the uniform, Jeffreys', and exponential priors largely coincide and have small standard errors. The posterior marginal densities of the model parameters are presented in Figure 1.
The posterior marginal densities show a slight positive skewness for both parameters of the model.
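The article fits these posteriors with OpenBUGS (Gibbs sampling). As an illustrative alternative, the sketch below draws from the joint posterior under the exponential priors with a simple random-walk Metropolis sampler; the function names and the hyperparameter values h1 = h2 = 0.01 are our own illustrative choices, not from the article:

```python
import math
import random

def log_post(mu, lam, data, h1=0.01, h2=0.01):
    """Log joint posterior of IG(mu, lam) under independent Exp(h1), Exp(h2)
    priors, up to an additive constant (terms free of mu and lam dropped)."""
    if mu <= 0 or lam <= 0:
        return -math.inf
    n = len(data)
    loglik = 0.5 * n * math.log(lam)
    loglik -= lam * sum((x - mu) ** 2 / (2.0 * mu ** 2 * x) for x in data)
    return loglik - h1 * mu - h2 * lam

def metropolis(data, steps=5000, scale=0.2, seed=1):
    """Random-walk Metropolis over (mu, lam), started near the MLE of mu."""
    rng = random.Random(seed)
    mu, lam = sum(data) / len(data), 1.0
    cur = log_post(mu, lam, data)
    draws = []
    for _ in range(steps):
        mu_p = mu + rng.gauss(0.0, scale)
        lam_p = lam + rng.gauss(0.0, scale)
        prop = log_post(mu_p, lam_p, data)
        # accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, prop - cur)):
            mu, lam, cur = mu_p, lam_p, prop
        draws.append((mu, lam))
    return draws
```

Posterior means under SELF are then simply the averages of the retained draws after a burn-in period.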

Predictive Inference
We evaluate the predictive distribution to study the future behaviour of the data. Using the posterior distribution p(μ, λ | x) based on the exponential priors defined in Section 4.3 and the Wald density p(y | μ, λ) as the data model, the predictive distribution can be expressed as

p(y | x) = ∫₀^∞ ∫₀^∞ p(y | μ, λ) p(μ, λ | x) dμ dλ.

Often, predictive distributions do not follow the baseline distributions and do not have closed forms. Here, we observe that the predictive distribution is not in closed form, so numerical methods are required to evaluate the multiple integrals in the predictive distribution defined above. This is accomplished using numerical procedures, and the resulting predicted and observed datasets are summarized in Table 5, with a graphical representation in Figure 3.
It shows that the predicted and observed datasets closely match each other.
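Given MCMC draws of (μ, λ), the double integral above can be approximated by composition sampling: pick a posterior draw, then generate y from the Wald model. The sketch below uses the Michael–Schucany–Haas transformation method to generate inverse Gaussian variates; the helper names are our illustrative choices:

```python
import math
import random

def ig_sample(mu, lam, rng):
    """One IG(mu, lam) variate via the Michael-Schucany-Haas method."""
    v = rng.gauss(0.0, 1.0) ** 2
    x = mu + (mu * mu * v) / (2.0 * lam) \
        - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * v + (mu * v) ** 2)
    # accept the smaller root with probability mu / (mu + x)
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x

def predictive_sample(posterior_draws, rng):
    """One draw from the posterior predictive p(y | x): mix the Wald
    model over the MCMC draws of (mu, lam)."""
    mu, lam = rng.choice(posterior_draws)
    return ig_sample(mu, lam, rng)
```

Repeating `predictive_sample` many times yields a sample from p(y | x) that can be compared with the observed data, as in Table 5.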

Comparison of the Frequentist and Bayesian Approaches
To compare the Bayesian estimation method with the frequentist maximum likelihood estimation method, several model selection criteria, i.e., the log-likelihood, Akaike information criterion (AIC), and Bayesian information criterion (BIC), are computed; the results are given in Table 6.
The results reveal that the AIC and BIC values computed from the Bayesian estimates are smaller than those produced by the frequentist estimates. It is noted that the exponential (informative) prior yields the minimum AIC and BIC values among the Bayes estimates, as expert opinion is also involved in it.

Simulation Study
Simulation studies help us understand the nature and behaviour of the underlying distributions. So, we generate data values for the parameters and estimate them based on the generated data. Here, the same parameter values obtained from the real dataset are used through R code, and the results are portrayed in Tables 7 and 8. The results show that the model will produce similar results if it continues to run in a similar pattern in the future. Moreover, the estimates become more stable as the number of simulations increases.
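A minimal version of such a generate-then-estimate loop can be sketched as follows (the article uses R; this Python sketch with illustrative parameter values shows the same idea, with function names of our own choosing):

```python
import math
import random

def ig_draw(mu, lam, rng):
    """Generate one IG(mu, lam) variate (Michael-Schucany-Haas method)."""
    v = rng.gauss(0.0, 1.0) ** 2
    x = mu + (mu * mu * v) / (2.0 * lam) \
        - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * v + (mu * v) ** 2)
    return x if rng.random() <= mu / (mu + x) else mu * mu / x

def simulate_and_estimate(mu, lam, n, reps, seed=7):
    """Average the closed-form ML estimates over `reps` simulated
    samples of size n from IG(mu, lam)."""
    rng = random.Random(seed)
    mu_hats, lam_hats = [], []
    for _ in range(reps):
        data = [ig_draw(mu, lam, rng) for _ in range(n)]
        m = sum(data) / n
        mu_hats.append(m)
        lam_hats.append(n / sum(1.0 / x - 1.0 / m for x in data))
    return sum(mu_hats) / reps, sum(lam_hats) / reps
```

Averaged over many replications, the estimates concentrate around the true parameter values, illustrating the stability noted above.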

Conclusion and Recommendations
Bayesian inference for the parameters of the Wald distribution has been performed in this study, and the results have been compared with their classical counterparts. We evaluated the maximum likelihood estimates for comparison purposes, used uniform and Jeffreys' priors as noninformative priors and the exponential distribution as an informative prior, and also discussed the predictive distribution. Simulation studies have been conducted to verify the results. It has been witnessed that the Bayesian technique produces better results by yielding smaller AIC and BIC values. We have also witnessed close agreement between the observed and predicted datasets, which indicates that Bayes methods are potential replacements for their classical counterparts.
Bayesian methods are well suited to evaluating lifetime data for any type of product. A future perspective of this research is to conduct such studies using other distributions that can model a variety of natural phenomena. We may also extend this study to generalized and multivariate distributions.

Data Availability
The data used to support this study are included within the article.