Classical and Bayesian Inference of a Mixture of Bivariate Exponentiated Exponential Model

The exponentiated exponential (EE) model has


Introduction
In the last two decades or so, a major point of interest for statisticians and practitioners has been the study of populations that exhibit similar behaviors with respect to some predetermined criteria.
The earliest evidence regarding the study of heterogeneous populations is mostly due to Newcomb [1] and Pearson [2], who developed an approach commonly known as finite mixture distributions. With the stellar advancement of modern data-related computational facilities, studies focusing on heterogeneous populations have become more popular; for some useful references, see the work of Titterington et al. [3], Everitt and Hand [4], McLachlan and Basford [5], and Al-Hussaini and Sultan [6] and the references cited therein. In recent times, there is a growing trend to study and explore applications of finite mixture models; for more details, see the work of Al-Hussaini and Sultan [6].
In several studies concerning heterogeneous populations, the EE probability model proves to be really useful. The two-parameter EE distribution is a right-skewed unimodal distribution. The behaviors of its probability density function and hazard function are quite close to those of the pdf and hazard function of the gamma or Weibull model. The two-parameter EE distribution has received an increasing amount of interest in recent times. The efficacy of the EE distribution in modeling lifetime data can be found in the works of Gupta and Kundu [7][8][9][10][11][12]. Several studies have demonstrated that, in specific real-life scenarios, the EE distribution provides a better fit (based on several well-known goodness-of-fit measures) than the gamma or the Weibull model. Kundu and Gupta [13] introduced a three-parameter bivariate generalized exponential (BGE) distribution. Some studies obtained mixtures of bivariate inverse Weibull and gamma models; for more details, see the work of Jones et al. [14], Sarhan and Balakrishnan [15], Chen and Tan [16], Khosravi [17], and Al-Moisheer et al. [18] and the references cited therein.
The main objective of this paper is to develop and study a new bivariate absolutely continuous distribution obtained via a mixture of two independent two-parameter EE distributions. We call this new bivariate distribution the bivariate mixture of exponentiated exponential distribution (henceforth, in short, the BMEE).
The proposed model is constructed under two mechanisms. In the first case, let X and Y be two random variables, each independently distributed as an EE distribution with parameters (λ_1, θ_1) and (λ_2, θ_2), respectively. In the second case, we construct the BMEE distribution via a copula approach using the well-known bivariate Gaussian copula (see, for details, the work of Nelsen [19]). Several useful mathematical properties of the proposed model are derived. Classical and Bayesian estimation methods are discussed. In addition, the performance of the suggested BMEE model in estimating the model parameters is examined using simulation and a real dataset. The rest of the paper is organized as follows. In Section 2, we introduce the BMEE (type I) distribution, discuss its construction via two mechanisms, and provide some contour plots. In Section 3, we provide some useful mathematical properties and obtain expressions for the bivariate survival function, hazard rate function, bivariate moment generating function, conditional moments, joint moments, stochastic ordering, etc., for the BMEE distribution constructed from two independent EE distributions. In Section 4, we discuss the estimation of the model parameters via the EM algorithm. In Section 5, we discuss the estimation strategy for the BMEE (type II) distribution constructed via the bivariate Gaussian copula. In Section 6, we study and explore the estimation of the model parameters under the Bayesian paradigm. Simulation results are presented in Section 7. In Section 8, a well-known motor dataset is reanalyzed to exhibit the efficacy of the proposed BMEE-type models. Finally, we conclude the paper with some final remarks in Section 9.

Mixture Bivariate Independent EE Model
Here, we begin our discussion with two independent univariate EE distributions with parameters (λ_1, θ_1) and (λ_2, θ_2), respectively. The central idea of compounding is to consider θ_1 and θ_2 as random variables rather than constants, so that the observed (marginal) distributions of X_in and Y_in can be obtained from the joint distribution of θ_1 and θ_2. Next, we construct a bivariate EE mixture distribution by considering two cases. In the first case, X_in and Y_in are independent EE random variables whose scale parameters follow a generalized bivariate Bernoulli distribution. In the second case, X_d and Y_d are dependent. A random variable with an EE distribution has cumulative distribution function (cdf) and probability density function (pdf), for x > 0, given by

F(x; θ, λ) = (1 − e^(−λx))^θ,
f(x; θ, λ) = θλ e^(−λx) (1 − e^(−λx))^(θ−1),

where θ > 0 and λ > 0 are the shape and scale parameters, respectively.
In the bivariate case, let X_in and Y_in be two random variables with parameters θ_1 and θ_2, respectively. For given fixed values of θ_1 and θ_2, X_in and Y_in are independent. The pdf of the BMEE distribution is defined as a mixture in which the p_i are the mixing proportions, satisfying Σ_{i=1}^{2} p_i = 1 and p_i ≥ 0, and all parameters are unknown. The pdf of the first EE component is given by (2), with fixed shape parameter θ > 0 and a random scale parameter λ > 0 that takes two distinct values λ_1 and λ_2. Likewise, for fixed shape parameter θ_2, let Y have an EE mixture density; the pdf of the second EE component has a random scale parameter β > 0 that takes two distinct values β_1 and β_2. For given values of (λ, β), we assume that X_in and Y_in are independent, but λ and β are correlated through their generalized bivariate distribution with probability matrix P = [[p_11, p_12], [p_21, p_22]], whose entries are the mixture component probabilities and sum to one.
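Since the BMEE components are built from the univariate EE law, the EE cdf, pdf, and inverse-transform sampler are easy to sketch. The paper's computations use R; the following is a minimal Python sketch with hypothetical helper names (`ee_cdf`, `ee_pdf`, `ee_sample`), not the authors' code:

```python
import numpy as np

def ee_cdf(x, theta, lam):
    """EE cdf: F(x) = (1 - exp(-lam*x))**theta for x > 0."""
    return (1.0 - np.exp(-lam * x)) ** theta

def ee_pdf(x, theta, lam):
    """EE pdf: f(x) = theta*lam*exp(-lam*x)*(1 - exp(-lam*x))**(theta - 1)."""
    return theta * lam * np.exp(-lam * x) * (1.0 - np.exp(-lam * x)) ** (theta - 1.0)

def ee_sample(n, theta, lam, rng=None):
    """Inverse-transform sampling: solving F(x) = u gives
    x = -log(1 - u**(1/theta)) / lam."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)
    return -np.log(1.0 - u ** (1.0 / theta)) / lam
```

For θ = 1, the EE law reduces to the ordinary exponential distribution, which gives a quick sanity check on the sampler.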

Journal of Mathematics
For simplicity, in the independent case we write x_in = x and y_in = y. For further simplification, let a = p_11, b = p_12, c = p_21, and d = p_22 = 1 − a − b − c. From Figures 1 and 2, it is evident that the joint pdf in equation (4) can produce various shapes corresponding to several parameter choices.
The joint pdf, a mixture of four univariate EE mixture components, involves a total of 9 parameters for its specification. In applications to real-life datasets, as one can imagine, not all four components might be necessary. Consequently, one may impose restrictions such as b = c = 0, a = d = 0, or a = b = c = 0. These restrictions result in correlation values (among the scale parameters) of +1, −1, and 0, respectively. The marginal densities of X and Y are given in terms of the mixing weights, with, for instance, π_2 = a + c. The joint cdf and the associated survival function of the BMEE distribution follow, where F_X(x) and F_Y(y) are the marginal distribution functions of X and Y, respectively. The hazard rate function (hrf) is expressed in terms of the quantity A given in (8) and the quantity B given in (12). The conditional pdf of X, given Y = y, and the conditional pdf of Y, given X = x, are obtained similarly.

Structural Properties
Here, we derive some properties of the BMEE distribution which are as follows.
where HarmonicNumber[·] denotes the sum of the reciprocals of the first n natural numbers. The expected values of X and Y are, respectively, given by the expressions above; Mathematica 11.2 is used to evaluate the integrals in (17)–(19).

Using the joint pdf of X and Y and/or the MGF expression given above, the different product moments E[X^m Y^n], m, n ≥ 1, can be obtained. Proposition 4 (shape of the distribution). A critical point of a function of two variables is a point where the first-order partial derivatives are equal to zero. The two most compelling reasons to study the critical points of a bivariate distribution are as follows: (1) A real-life dataset can have several different shapes.
The flexibility of any proposed model can be well determined from such a study.
(2) In dealing with bivariate distributions, quite often it is imperative to study the tails of the joint pdf as well as the points of inflection. Knowledge of the critical point(s) helps to better understand these properties.
Let us now consider the shape of the BMEE distribution. For the BMEE distribution, there may be several critical points; for specific choices of the model parameters, a numerical study can be carried out.

Estimation (Independence Case Scenario)
In this section, we discuss the estimation of the model parameters of the BMEE distribution, assuming independence of X and Y, via the EM algorithm.

EM Algorithm

McLachlan and Krishnan [20] introduced the expectation-maximization (EM) algorithm, an iterative method for finding the maximum likelihood estimator of a parameter Φ of a parametric probability distribution. To invoke the EM algorithm, we augment the data (x_k, y_k), k = 1, ..., n, with the group membership variables (a_k, b_k, c_k), where a_k is an indicator variable equal to one if the k-th observation is in f(x, λ_1, β_1) and zero otherwise.
Similarly, introducing b_k and c_k, we have four groups G_ij, i, j = 1, 2, with densities f_ij. The corresponding mixing proportions are P(G_11) = a, P(G_12) = b, P(G_21) = c, and P(G_22) = 1 − a − b − c. The log-likelihood function corresponding to the complete sample, with ℓ_ij(x, y) = log f_ij(x, y), follows. Each iteration of the EM algorithm involves two steps: the E-step (expectation) and the M-step (maximization). The complete log-likelihood is linear in the group membership variables, so in the E-step we replace them by their expected values, given the current estimates (θ, φ, λ_1, λ_2, β_1, β_2, a, b, c) of the parameters; the same procedure is followed for b_k and c_k.

Next, the M-step is completed by maximizing the expected complete log-likelihood. This algorithm requires an initial value of the model parameters, designated Φ^(0). A judicious choice of these initial values deserves special attention, since otherwise the rate of convergence of the EM algorithm may become quite slow. Another point of concern is that the maximum likelihood equations may have multiple solutions corresponding to local maxima; therefore, the selection of the starting values is indeed very important. A comparative study of various strategies for the choice of initial values can be found in Karlis and Xekalaki [21]. We use the copula R package to solve these equations numerically. After the maximum likelihood estimators for θ, λ_1, β_1, λ_2, β_2, δ, ρ_11, ρ_12, ρ_21, and ρ_22 are obtained, we substitute these estimates into (a_k, b_k, c_k). We complete the M-step by setting a = (1/n) Σ_{k=1}^{n} a_k, etc. Initial values for the mixing proportions are obtained by the method of moments applied to the marginal univariate EE parameters separately. Next, we take the resulting estimates of the BEE parameters as starting values for the EM algorithm. After that, we merge the moment estimators of the marginal mixing parameters to obtain initial values for the bivariate mixing parameters, assuming independence between the two variables X and Y. We apply this method in the application mentioned in Section 8; specifically, in Tables 1–4, we equate (25)–(30) to zero to obtain estimates of the parameters of the distribution.
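The E-step responsibilities and the M-step update of the mixing proportions a, b, c (and d = 1 − a − b − c) described in this section can be sketched as follows. This is a simplified Python illustration for the independent case with one shape parameter per marginal, not the authors' R implementation; function names are hypothetical:

```python
import numpy as np

def ee_pdf(x, theta, lam):
    """EE density f(x) = theta*lam*exp(-lam*x)*(1 - exp(-lam*x))**(theta - 1)."""
    return theta * lam * np.exp(-lam * x) * (1.0 - np.exp(-lam * x)) ** (theta - 1.0)

def em_mixing_step(x, y, theta1, theta2, lams, betas, p):
    """One E-step plus the M-step update for the 2x2 mixing-proportion matrix
    p = [[a, b], [c, d]] over the four groups G_ij of the independent-case BMEE.
    lams = (lam_1, lam_2) and betas = (beta_1, beta_2) are the component scales."""
    w = np.empty((len(x), 2, 2))
    for i in range(2):
        for j in range(2):
            # E-step: responsibility of group G_ij for observation k, proportional
            # to p[i, j] * f(x_k; theta1, lam_i) * f(y_k; theta2, beta_j)
            w[:, i, j] = p[i, j] * ee_pdf(x, theta1, lams[i]) * ee_pdf(y, theta2, betas[j])
    w /= w.sum(axis=(1, 2), keepdims=True)
    # M-step for the mixing proportions: average responsibility per group
    return w.mean(axis=0)
```

The scale and shape updates would additionally require numerically maximizing the weighted log-likelihood, which is omitted here.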

BMEE Model (Type II) Distribution Using Gaussian Copula

The BMEE model proposed in this paper involves EE marginals and offers great flexibility in its marginals as well as in the correlation structure. Moreover, the proposed distribution has several fields of applicability. In dependence studies, copulas play a vital role: they are a general tool to construct bivariate and multivariate distributions and to study the dependence structure between random variables. Several bivariate and multivariate lifetime distributions have been suggested using various methods of constructing bivariate and multivariate distributions, and copula functions have been proposed by Nelsen [22], Trivedi and Zimmer [23], Adham and Walker [24], Kundu et al. [25], Kundu and Gupta [26], Kundu [27], El-Morshedy et al. [28], and Alotaibi et al. [29].

BMEE Distribution Based on Gaussian Copula

The concept of a copula, suggested and derived by Sklar [30], states that any multivariate distribution can be decomposed into a copula and its continuous marginals. In a bivariate setup, copulas link two marginal distributions into a joint distribution: for every bivariate distribution function F(x, y) with continuous marginals F(x) and F(y), there exists a unique copula function C such that F(x, y) = C(F(x), F(y)). The associated density function of the bivariate distribution is f(x, y) = c(F(x), F(y)) f(x) f(y), where c(F(x), F(y)) is the density function of the copula; for further details, see the work of Nelsen [19, 22]. A plethora of choices is available to construct BMEE distributions via copulas using EE marginals as given in (1). Here, the Gaussian copula is utilized. The Gaussian copula has the form C(u, v) = Φ_Σ(Φ^(−1)(u), Φ^(−1)(v)), where Φ_Σ denotes the distribution function of a bivariate standard normal random vector with correlation matrix Σ and Φ^(−1) represents the inverse of the univariate standard normal cdf. The joint pdf of X_d and Y_d based on the Gaussian copula follows, where ρ ∈ [−1, 1] is a dependence parameter and f(x_d) and f(y_d) are the density functions of the EE distributions given in (1). Suppose that the marginals are EE distributions; then, in the bivariate exponentiated exponential (BEE) mixture pdf, the p_i are the mixing proportions, satisfying Σ_{i=1}^{2} p_i = 1 and p_i ≥ 0, and all of them are unknown. The pdf of the first EE component is given by (1), with fixed shape parameter θ > 0 and random scale parameter λ > 0 taking two distinct values λ_1 and λ_2. Similarly, for fixed shape parameter θ_2, let Y_d have an EE mixture density; the pdf of the second EE component has a random scale parameter β taking two distinct values β_1 and β_2.
For given values of (λ, β), we assume that X_d and Y_d are dependent, and λ and β are correlated through their generalized bivariate distribution with the probability matrix given above. Let f(x_d, y_d) be the joint pdf of (X_d, Y_d), given in (41). Like the joint pdf in (8), the joint pdf in (41) can assume several different shapes as well. Let x_d = x and y_d = y. Consequently, the associated BMEE distribution pdf follows, where ρ_ij ∈ [−1, 1] is the dependence parameter.
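The construction c(F(x), F(y); ρ) f(x) f(y) with EE marginals can be illustrated directly, since the Gaussian copula density has a standard closed form. A Python sketch with hypothetical function names (`gaussian_copula_density`, `bmee_component_pdf` denote one dependent-case component, not the full mixture):

```python
import numpy as np
from scipy.stats import norm

def ee_pdf(x, theta, lam):
    return theta * lam * np.exp(-lam * x) * (1.0 - np.exp(-lam * x)) ** (theta - 1.0)

def ee_cdf(x, theta, lam):
    return (1.0 - np.exp(-lam * x)) ** theta

def gaussian_copula_density(u, v, rho):
    """c(u, v; rho) = phi2(a, b; rho) / (phi(a) * phi(b)),
    with a = Phi^{-1}(u) and b = Phi^{-1}(v)."""
    a, b = norm.ppf(u), norm.ppf(v)
    det = 1.0 - rho ** 2
    return np.exp(-(rho ** 2 * (a ** 2 + b ** 2) - 2.0 * rho * a * b) / (2.0 * det)) / np.sqrt(det)

def bmee_component_pdf(x, y, theta1, lam, theta2, beta, rho):
    """One dependent-case component: c(F(x), F(y); rho) * f(x) * f(y)."""
    return (gaussian_copula_density(ee_cdf(x, theta1, lam), ee_cdf(y, theta2, beta), rho)
            * ee_pdf(x, theta1, lam) * ee_pdf(y, theta2, beta))
```

Setting ρ = 0 recovers the product of the two EE marginals, which is a convenient check.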

EM Algorithm under Gaussian Copula

The EM algorithm is again used as the method of estimation. To apply it, as before, we augment the data (x_k, y_k), k = 1, ..., n, with the group membership variables (a_k, b_k, c_k), k = 1, ..., n, where a_k is one if the k-th observation is in f_ij(x, y, θ, λ_1, β_1, δ, ρ_11) and zero otherwise. Similarly, for b_k and c_k, we have four groups G_ij, i, j = 1, 2, whose densities involve the dependence parameters ρ_ij ∈ [−1, 1]. The mixing proportions are P(G_11) = a, P(G_12) = b, P(G_21) = c, and P(G_22) = 1 − a − b − c. We define ℓ_ij(x, y) = log f_ij(x, y, θ, λ_i, β_j, δ, ρ_ij); then the EM algorithm proceeds by finding the complete log-likelihood ℓ. This is linear in the group membership variables (a_k, b_k, c_k); consequently, in the E-step, we substitute into (26) their expected values, given the current estimates (θ, λ_1, λ_2, β_1, β_2, δ, ρ_11, ρ_12, ρ_21, ρ_22, a, b, c) of the parameters; for b_k and c_k, we follow the same strategy. Note that algebraic simplification of the above might be necessary to avoid numerical problems. For the M-step, we need to maximize (27) over (θ, λ_1, β_1, δ, ρ_11, ρ_12, ρ_21, ρ_22), for fixed values of (a_k, b_k, c_k).
This is achieved through the conditional dependence structure of X and Y, given the group membership: we can essentially deal with the univariate marginals and the Gaussian copula parameter separately. Differentiating (25) gives ∂ℓ/∂δ and the remaining score equations. The approach involves a two-step procedure, estimating the marginals of X and Y and the copula function independently, which gives the maximum likelihood estimates of the marginal parameters. Then, the copula density is estimated, where F_i(x_k) and F_j(y_k) denote the fitted marginal distribution functions obtained from the first step. The solution of the nonlinear equation (34) gives the MLEs of ρ_11, ρ_12, ρ_21, and ρ_22, and the M-step is completed accordingly. We use the copula R package to solve these equations numerically. After the maximum likelihood estimators for θ, λ_1, β_1, λ_2, β_2, δ, ρ_11, ρ_12, ρ_21, and ρ_22 are obtained, we substitute these estimates into (a_k, b_k, c_k). We complete the M-step by setting a = (1/n) Σ_{k=1}^{n} a_k, etc. Initial values for the mixing proportions are obtained by the method of matching moments, applied to the marginal univariate EE mixtures and the Gaussian copula parameter separately. Then, we take the resulting estimates of the BEE parameters as starting values for the EM algorithm. Next, we merge the moment estimators of the marginal mixing parameters to obtain initial values for the bivariate mixing parameters, assuming dependence between the two variables X and Y. We apply this method in the application mentioned in Section 9, specifically in Tables 1–4. For more details, see the work of Kosmidis and Karlis [31].
Next, we provide the estimation procedure for the unknown parameters of the density in (44). In the copula-based estimation, we adopt two approaches, termed parametric and semiparametric.

Maximum Likelihood Estimation (MLE).
Here, we discuss the estimation of the unknown parameters of the BEE distributions by the maximum likelihood approach, using two-step estimation: we estimate the marginals and the copula function separately.
The log-likelihood function in (35) can be re-expressed so that the first step estimates the parameters of the marginal distributions F_1 and F_2 separately by MLE, and the second step estimates the copula parameters by maximizing the copula density. Considering the first step with EE marginals, the parameters of each marginal distribution are estimated by MLE. If x_1, ..., x_n is a random sample from EE(θ, λ) and y_1, ..., y_n is a random sample from EE(δ, β), then the log-likelihood functions are, respectively, given by

log L_1(x; θ, λ) = n log(θ) + n log(λ) − λ Σ_{i=1}^{n} x_i + (θ − 1) Σ_{i=1}^{n} log(1 − e^(−λ x_i)),
log L_2(y; δ, β) = n log(δ) + n log(β) − β Σ_{i=1}^{n} y_i + (δ − 1) Σ_{i=1}^{n} log(1 − e^(−β y_i)).

The solution of the system of nonlinear likelihood equations (60)–(63) gives the MLEs of θ, λ, β, and δ.
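The first step of the two-step procedure, maximizing the EE marginal log-likelihood above, might look as follows in Python. Here `ee_mle` is a hypothetical helper that uses a generic numerical optimizer instead of solving the score equations (60)–(63) directly:

```python
import numpy as np
from scipy.optimize import minimize

def ee_negloglik(params, x):
    """Negative of log L(x; theta, lam) = n log(theta) + n log(lam)
    - lam * sum(x) + (theta - 1) * sum(log(1 - exp(-lam * x)))."""
    theta, lam = params
    if theta <= 0.0 or lam <= 0.0:
        return np.inf  # keep the optimizer inside the parameter space
    return -(len(x) * (np.log(theta) + np.log(lam))
             - lam * x.sum()
             + (theta - 1.0) * np.log1p(-np.exp(-lam * x)).sum())

def ee_mle(x):
    """Numerical MLE of (theta, lam) for an EE sample x."""
    res = minimize(ee_negloglik, x0=[1.0, 1.0 / x.mean()], args=(x,),
                   method="Nelder-Mead")
    return res.x
```

For θ = 1 (exponential data), the fitted λ should come out near the reciprocal of the sample mean, which serves as a basic check.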
Then, the copula density is estimated, where F_1(x) and F_2(y) denote the marginal cdfs evaluated at the ML estimates from the first step. The solution of the nonlinear equation (64) gives the MLE of the copula parameter c.

5.4. Semiparametric Methods of Estimation

Two semiparametric methods are used to estimate the copula parameter in the copula models and are compared; these are the two method-of-moments approaches, the inversion of Kendall's τ and the inversion of Spearman's ρ, respectively.

Methods of Moments.
Following the method-of-moments inversion of Kendall's τ and inversion of Spearman's ρ described in Kojadinovic and Yan [32], we provide brief details as follows. Let (x_1, y_1), ..., (x_n, y_n) be a bivariate random sample from a cdf C_c[F_1(x), F_2(y)], where F_1 and F_2 are continuous cdfs and C_c is an absolutely continuous copula with parameter c ∈ O, where O is an open subset of R^2. Furthermore, let R_1, ..., R_n be the vectors of ranks associated with the observations, unless otherwise stated; in what follows, all vectors are row vectors. The moment approaches are based on the inversion of a consistent estimator of a moment of the copula C_c. The two best-known such moments, Spearman's rho and Kendall's tau, are, respectively, given by

ρ(c) = 12 ∫∫_{[0,1]^2} C_c(u, v) du dv − 3,
τ(c) = 4 ∫∫_{[0,1]^2} C_c(u, v) dC_c(u, v) − 1.

Consistent estimators ρ_n and τ_n of these two moments are given by their sample versions. If ρ and τ are one-to-one, consistent estimators of c are c_{n,ρ} = ρ^(−1)(ρ_n) and c_{n,τ} = τ^(−1)(τ_n), respectively; these are called the inversion of Spearman's ρ and the inversion of Kendall's τ. For more information, see the work of Kojadinovic and Yan [32] and the references cited therein.
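For the Gaussian copula, both inversions are available in closed form: ρ = sin(πτ/2) from Kendall's τ and ρ = 2 sin(πρ_S/6) from Spearman's ρ. A minimal Python sketch (function names are illustrative):

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

def rho_from_tau(x, y):
    """Inversion of Kendall's tau for the Gaussian copula: rho = sin(pi*tau/2)."""
    tau, _ = kendalltau(x, y)
    return np.sin(np.pi * tau / 2.0)

def rho_from_spearman(x, y):
    """Inversion of Spearman's rho for the Gaussian copula: rho = 2*sin(pi*rho_S/6)."""
    rs, _ = spearmanr(x, y)
    return 2.0 * np.sin(np.pi * rs / 6.0)
```

Because both estimators depend on the data only through ranks, they are invariant to the (continuous) marginal distributions, which is what makes them semiparametric.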
As explained above, the method-of-moments τ and ρ estimators for copulas may be considered under the umbrella of semiparametric estimation.

5.6. Goodness-of-Fit Tests for Copulas

We want to compare the empirical copula with the parametric estimator derived under the null hypothesis; for details, see the work of Fermanian [33]. The theory suggests testing whether C is well represented by a specific copula C_c, i.e., H_0: C = C_c. Several well-known approaches are available in the literature; for example, see the work of Genest and Rémillard [34], the fast multiplier approach, Genest et al. [35], and Kojadinovic et al. [36]. The goodness-of-fit test is based on the empirical process comparing the empirical copula C_n with the fitted parametric copula, where U_{i,n} and V_{i,n} are pseudo-observations from C calculated from the data as follows.
U_{i,n} = R_{1i}/(n + 1) and V_{i,n} = R_{2i}/(n + 1), where R_{1i} and R_{2i} are, respectively, the ranks of X_i and Y_i.
Here, C_n(u, v) is a consistent estimator of C, and θ_n is an estimator of c obtained using the pseudo-observations.
According to Genest et al. [35], an appropriate test statistic is the Cramér–von Mises statistic, defined in (71) as S_n = Σ_{i=1}^{n} [C_n(U_{i,n}, V_{i,n}) − C_{θ_n}(U_{i,n}, V_{i,n})]^2.
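A sketch of the Cramér–von Mises computation, comparing the empirical copula with a Gaussian copula fitted by inversion of Kendall's τ. This is a simplified Python illustration of the statistic (bootstrap or multiplier p values, as in Genest et al. [35], are omitted), and the function name is hypothetical:

```python
import numpy as np
from scipy.stats import rankdata, norm, multivariate_normal, kendalltau

def cvm_statistic(x, y):
    """Cramer-von Mises S_n comparing the empirical copula C_n with a
    Gaussian copula C_rho fitted by inversion of Kendall's tau, both
    evaluated at the pseudo-observations (U_{i,n}, V_{i,n})."""
    n = len(x)
    u = rankdata(x) / (n + 1.0)  # pseudo-observations U_{i,n} = R_{1i}/(n+1)
    v = rankdata(y) / (n + 1.0)  # pseudo-observations V_{i,n} = R_{2i}/(n+1)
    # Fit the Gaussian copula parameter: rho = sin(pi * tau / 2)
    tau, _ = kendalltau(x, y)
    rho = np.sin(np.pi * tau / 2.0)
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    # Empirical copula C_n(u_i, v_i) = (1/n) * #{j : u_j <= u_i and v_j <= v_i}
    c_n = np.array([np.mean((u <= ui) & (v <= vi)) for ui, vi in zip(u, v)])
    # Parametric copula C_rho(u_i, v_i) via the bivariate normal cdf
    c_rho = mvn.cdf(np.column_stack([norm.ppf(u), norm.ppf(v)]))
    return np.sum((c_n - c_rho) ** 2)
```

Small values of S_n indicate agreement between the empirical and fitted copulas; calibration of the rejection threshold requires a resampling scheme.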

Bayesian Estimation
In this section, the Bayes estimates of the model parameters of the joint pdf (51) are obtained under the assumption that the parameters Φ = (θ, λ_1, β_1, λ_2, β_2, δ) have independent gamma prior distributions with hyperparameters w_k and m_k, k = 1, 2, 3, 4, 5, 6, while ρ_11, ρ_12, ρ_21, and ρ_22 have noninformative priors. Multiplying (23) or (44) by (72) gives the joint posterior density of the vector Φ, given the data. Marginal posterior distributions of the components of Φ can be obtained by integrating out the remaining (nuisance) parameters. Thus, the Bayesian estimators of the parameters Φ under the squared error loss function are the posterior means. The integrals in (74) cannot be obtained in closed form, so the Markov chain Monte Carlo (MCMC) technique is used. In MCMC methods, the posterior distribution and the intractable integrals are approximated using simulated samples from the posterior distribution. Gibbs sampling and the Metropolis–Hastings (M-H) algorithm are used as MCMC techniques; for more details, see Metropolis et al. [37], Hastings [38], and Mohsin et al. [39]. At each iteration of the M-H algorithm, a candidate value is generated from a proposal distribution and accepted with an appropriate acceptance probability. This technique guarantees the convergence of the Markov chain to the target density. Finally, an advantage of the MCMC method over the MLE method is that we can always obtain a reasonable interval estimate of the parameters by constructing probability intervals based on the empirical posterior distribution; such intervals are often unavailable under MLE.
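A generic random-walk Metropolis–Hastings sampler of the kind described above can be sketched as follows. This is a Python illustration targeting an arbitrary unnormalized log posterior rather than the BMEE posterior itself, which the authors sample in R:

```python
import numpy as np

def metropolis_hastings(log_post, init, n_iter=5000, step=0.1, rng=None):
    """Random-walk M-H: propose cur + step*N(0, I), accept with probability
    min(1, post(prop)/post(cur)), computed on the log scale for stability."""
    rng = np.random.default_rng(rng)
    chain = np.empty((n_iter, len(init)))
    cur = np.asarray(init, dtype=float)
    cur_lp = log_post(cur)
    for t in range(n_iter):
        prop = cur + step * rng.standard_normal(len(cur))
        prop_lp = log_post(prop)
        # Accept with log-probability min(0, prop_lp - cur_lp)
        if np.log(rng.uniform()) < prop_lp - cur_lp:
            cur, cur_lp = prop, prop_lp
        chain[t] = cur
    return chain
```

In practice, `log_post` would be the sum of the BMEE log-likelihood and the gamma log priors, a burn-in portion of the chain is discarded, and the step size is tuned for a reasonable acceptance rate.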
6.1. Credible Intervals. In this section, a symmetric 100(1 − ε)% two-sided Bayes probability interval estimate of Φ, denoted by [L_Φ, U_Φ], is obtained by requiring P(L_Φ ≤ Φ ≤ U_Φ | data) = 1 − ε, with equal tail probabilities ε/2 on each side. Since it is difficult to find L_Φ and U_Φ analytically, we apply suitable numerical techniques to solve this nonlinear equation.
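Given MCMC draws, the equal-tailed credible interval can be read off directly from the empirical posterior quantiles instead of solving the defining equation; a minimal Python sketch:

```python
import numpy as np

def credible_interval(samples, eps=0.05):
    """Equal-tailed 100*(1 - eps)% credible interval from posterior draws:
    the eps/2 and 1 - eps/2 empirical quantiles."""
    return np.quantile(samples, [eps / 2.0, 1.0 - eps / 2.0])
```

Applied to the post-burn-in chain of each parameter, this yields the interval [L_Φ, U_Φ] described above.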

Simulation Study
Here, a simulation study is conducted to assess the efficacy of the proposed model in two cases: the independent case and the dependent case. Monte Carlo simulation is used to compare the maximum likelihood and Bayesian estimation methods for the parameters of the BMEE distribution, implemented in the R language using packages such as bbmle. The MLE computations are based on the Newton–Raphson algorithm via the "maxLik" package. The Bayesian estimation, using the Markov chain Monte Carlo (MCMC) approach with the Metropolis–Hastings (M-H) algorithm, is also carried out in R. The Monte Carlo simulations are based on data generated from a Gaussian copula using the copula package in R. We generate 10000 random samples of sizes n = 50, 100, and 200, for different cases of actual parameter values (e.g., Case I: θ = 2.8, λ_1 = 2, ...). We deem best the method that minimizes the mean squared error (MSE), the bias, and the length of the confidence interval (L.CI) of the estimator.
Two-sided confidence limits with confidence level 0.95 are constructed for the parameters as well.
From the reported bias, MSE, and L.CI in Tables 3–6, the efficiency of the estimation under the Bayesian paradigm is quite evident; in particular, the MSE values for all parametric combinations tried in this article support this statement. We have the following observations on the simulation study: (i) As the sample size (n) increases, for the same case (fixed actual parameters), the bias, MSE, and L.CI decrease. (iii) From the reported bias, MSE, and L.CI, the efficiency of the estimation under the Bayesian paradigm is quite evident for both the independent and dependent cases; in particular, the MSE values for all parametric combinations tried in this article support this statement (for reference, see Tables 1–6). (iv) The credible intervals constructed appear to be the best indicator of the quality of estimation of the model parameters, as expected.
(v) Bias measures whether, over many replications, the estimator yields results that are correct on average.
A statistic is positively biased if it tends to overestimate the parameter; a statistic is negatively biased if it tends to underestimate the parameter.
Negative bias means that the estimator is, on average, too small compared to the true value.

Real Dataset (Motor Data)
The data represent the failure times (in days) of a parallel system constituted by two identical motors. These data are reported in ReliaSoft (2003), where X = (102, 84, 88, 156, 148, 139, 245, 235, 220, 207, 250, 212, 213, 220, 243, 300, 257, 263) and Y = (65, 148, 202, 121, 123, 150, 156, 172, 192, 214, 212, 220, 265, 275, 300, 248, 330, 350). We first fit the marginals of X and Y separately on the motor data. The MLE and Bayesian estimates for the BMEE distribution using this dataset are shown in Table 7. The MLEs of the parameters, the Kolmogorov–Smirnov distance (KS-d), and its p value for the marginals are listed in Table 8.
The EE distribution is fitted to the real data using the Kolmogorov–Smirnov goodness-of-fit test. The estimated cdf with the empirical cdf, the histogram with the fitted pdf, the PP plot, and the QQ plot are displayed for the first variable in Figure 3 and for the second variable in Figure 4. These figures show that the two variables are well fitted by the marginal EE distributions. Table 8 confirms this conclusion via the Kolmogorov–Smirnov goodness-of-fit test, where the p values are greater than 0.05.
In the majority of cases, one may observe that the estimates under the Bayesian method are preferable to the MLE estimates, as the standard errors are smaller for all parameter estimates. History plots, approximate marginal posterior densities, and MCMC convergence diagnostics for θ, λ, δ, β, and ρ are presented in Figure 5.

Conclusions
In this paper, we have proposed and studied a new class of BEE distributions whose marginals are EE distributions. The proposed class is constructed via two different types of mixture: (a) type I, starting with two independent EE distributions, and (b) type II, using a bivariate Gaussian copula. Estimation of the model parameters for both types of BMEE distribution is conducted using classical methods (the method of moments and the method of maximum likelihood) and under the Bayesian paradigm using independent gamma priors. Since the joint distribution function and the joint density function are in closed form, this distribution can be used in practice for nonnegative and positively correlated random variables. Since the maximum likelihood estimators of the unknown parameters cannot be obtained in closed form, we consider the EM algorithm, which works quite well and can be effectively used to compute the MLEs. Since the choice of hyperparameters for a prior in the Bayesian paradigm is of paramount importance, as a continuation of this work we will focus on (including, but not limited to) the following: (i) Exploring various strategies (for example, matching conditional moments or conditional percentiles provided by an expert with the corresponding theoretical moments and percentiles and subsequently minimizing, say, a Euclidean distance) to estimate the best choice(s) of the hyperparameters for the priors. (ii) In the current work, our prior choices are mostly conjugate in nature. However, in a real-life scenario, we might not always have such information on the prior. In the presence of more concrete information, one might consider a more precise prior for the model parameter(s), possibly a partially informative improper prior. It would be interesting to see the effect on the overall efficiency of the MCMC and Gibbs sampling in such a setting. We are currently working on this, and it will be reported elsewhere.
Data Availability

The data used to support the findings of this study are included within the article.

Figure 1 :
Figure 1: The plot of the BMEE pdf for varying parameter choices in equation (4).

Figure 2 :
Figure 2: The contour plots of the BMEE distribution for varying parameter choices in equation (4).

Figure 3 :
Figure 3: The fitted pdf, cdf, PP plot, and QQ plot of the EE distribution for the first variable.

Figure 4 :
Figure 4: The fitted pdf, cdf, PP plot, and QQ plot of the EE distribution for the second variable.

Figure 5 :
Figure 5: The Markov chain Monte Carlo (MCMC) plots for the motor data using the BMEE model.

Table 1 :
MLE and Bayesian estimation with different sample sizes in the independent case.

Table 2 :
MLE and Bayesian estimation with different sample sizes in the independent case.

Table 3 :
MLE and Bayesian estimation with different sample sizes in the dependent case (Case I).

Table 7 :
MLE and Bayesian estimation for the BMEE distribution using the dataset.

Table 8 :
KS distance, its p value, and the MLEs.