FIU Digital Commons

Interval and Point Estimators for the Location Parameter of the Three-Parameter Lognormal Distribution

Abstract

The three-parameter lognormal distribution extends the two-parameter lognormal distribution to meet the needs of biology, sociology, and other fields. Numerous research papers have been published on parameter estimation problems for the lognormal distributions. The inclusion of the location parameter brings in technical difficulties for parameter estimation, especially for interval estimation. This paper proposes a method for constructing exact confidence intervals and exact upper confidence limits for the location parameter of the three-parameter lognormal distribution. The point estimation problem is discussed as well. The performance of the point estimator is compared with that of the maximum likelihood estimator, which is widely used in practice. Simulation results show that the proposed method is less biased in estimating the location parameter. The large sample size case is also discussed.


Introduction
The two-parameter lognormal distribution and the three-parameter lognormal distribution have been used in many areas such as reliability, economics, ecology, biology, and atmospheric sciences. In the past twenty years, many research papers have been published on parameter estimation problems for the lognormal distributions. See, for example, Kanefuji and Iwase [1], Sweet [2], and Crow and Shimizu [3]. The three-parameter lognormal distribution extends the two-parameter lognormal distribution to meet the needs of the biological and sociological sciences and other fields. Several papers in the literature address parameter estimation for this distribution. See, for example, Komori and Hirose [4], Singh et al. [5], Eastham et al. [6], Cohen et al. [7], Chieppa and Amato [8], Griffiths [9], and Cohen and Whitten [10]. Chen [11] analyzed an application data set containing 49 plastic laminate strength measurements using the locally maximum likelihood estimation method. When the locally maximum likelihood estimation method is used, one no longer searches for the value of the parameter being estimated that maximizes the likelihood function. This is particularly relevant when the location parameter of the three-parameter lognormal distribution is estimated, because the likelihood function goes to infinity as the value of the location parameter approaches the smallest order statistic. Point estimation is discussed in Section 3, where the same data set is analyzed using the method presented in this paper.
It should be noted that the inclusion of the location parameter introduces technical difficulties for parameter estimation. The probability density function of the three-parameter lognormal distribution is

f(x; γ, μ, σ) = (1 / ((x − γ) σ √(2π))) exp{ −[ln(x − γ) − μ]² / (2σ²) },  x > γ,

where the parameters γ ≥ 0, −∞ < μ < ∞, and σ > 0 are all assumed to be unknown in this paper. When γ = 0, the distribution reduces to the two-parameter lognormal distribution. Constructing confidence intervals for the parameters of the three-parameter lognormal distribution is a difficult problem because of the inclusion of the location parameter γ. So far, only approximation methods can be found in the literature. This paper proposes a method for constructing exact confidence intervals and exact upper confidence limits for the location parameter γ of the three-parameter lognormal distribution. The point estimation problem is discussed as well. Statistical simulation is conducted to compare the performance of the method proposed in this paper with that of the maximum likelihood estimator, which is a commonly used method for estimating parameters.

Confidence Interval and Statistical Test
Let X_1, X_2, . . . , X_n be a random sample from the three-parameter lognormal distribution, and let X_(1), X_(2), . . . , X_(n) be the corresponding order statistics. To find a 1 − α confidence interval for the parameter γ, define the pivotal quantity ξ(γ) based on three order statistics X_(i), X_(j), and X_(k). As a mathematical function, ξ(γ) is a function of γ only. On the other hand, the distribution of ξ(γ) does not depend on any parameter. This is due to the fact that ξ(γ) can be expressed in terms of (ln(X_(r) − γ) − μ)/σ, r = i, j, k, which are the corresponding order statistics of the standard normal random variables Z_r = (ln(X_r − γ) − μ)/σ, r = 1, 2, . . . , n. Therefore, for any fixed 0 < α < 1, there exists a number ξ_α such that P(ξ(γ) ≤ ξ_α) = α.

It can be shown that ξ(γ) is a strictly increasing function of γ. A confidence interval for γ can therefore be constructed by inverting ξ(γ): the lower and upper confidence limits are the solutions in γ of ξ(γ) = ξ_(α/2) and ξ(γ) = ξ_(1−α/2), respectively. The values of ξ_α can be obtained by Monte Carlo simulation.

The construction of the confidence interval of γ is based on the three order statistics X_(i), X_(j), and X_(k). Therefore, the performance of the confidence interval of γ depends on the selection of the triplet (i, j, k). For a complete sample X_1, . . . , X_n, it is natural to use the largest and smallest observations, that is, to choose i = 1 and k = n. The selection of the triplet (i, j, k) then reduces to selecting j. Monte Carlo simulation is used to select the "optimal" value of j, based on two criteria. The first is traditional: for a fixed confidence level, compare the average width of the confidence intervals. This criterion is adopted here for selecting j. To discuss the second criterion, note the limiting behavior of ξ(γ) as γ → −∞ and as γ approaches X_(i). It is possible that the event displayed in (11) occurs, in which case the equation defining the lower confidence limit has no solution and the lower confidence limit of γ cannot be found.
Fortunately, Monte Carlo simulation results have shown that, if the value of j is appropriately selected, this event is very unlikely at all commonly used confidence levels. The Monte Carlo results also indicate that when j is somewhere between 20% and 40% of the sample size, the average width of the confidence intervals for the location parameter is shortest and the probability of the occurrence of (11) is almost zero. Based on this result, it is recommended that j be set to about 30% of the sample size.
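The interval construction described above can be sketched in Python. The paper's equation defining ξ(γ) is not reproduced in this excerpt, so the ratio-of-log-spacings form used below is an assumption chosen to match the stated properties (distribution free of the parameters, strictly increasing in γ); the critical values are approximated by simulation.

```python
# Hedged sketch: invert an assumed pivotal quantity xi(gamma) to obtain a
# 1 - alpha confidence interval for the location parameter gamma.
import numpy as np

def xi(gamma, x, i, j, k):
    """Assumed form of the pivotal quantity (ratio of log-spacings of the
    order statistics x_(i) <= x_(j) <= x_(k); indices are 1-based)."""
    a, b, c = np.sort(x)[[i - 1, j - 1, k - 1]]
    return (np.log(b - gamma) - np.log(a - gamma)) / \
           (np.log(c - gamma) - np.log(b - gamma))

def solve_gamma(x, target, i, j, k, iters=200):
    """Solve xi(gamma) = target by bisection on gamma < x_(i);
    xi is increasing in gamma under the assumed form."""
    xs = np.sort(x)
    lo = xs[0] - 1e6 * (xs[-1] - xs[0])     # xi(lo) is near its infimum
    hi = xs[i - 1] - 1e-11                   # just below x_(i)
    if xi(lo, x, i, j, k) > target:          # the event (11): no solution
        raise ValueError("lower confidence limit does not exist")
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if xi(mid, x, i, j, k) < target else (lo, mid)
    return 0.5 * (lo + hi)

# Example with simulated data (true gamma = 10) and simulated critical values.
rng = np.random.default_rng(0)
x = 10 + rng.lognormal(0.0, 1.0, 20)         # n = 20, so j = 7 (about 30%)
z = np.sort(rng.standard_normal((20_000, 20)), axis=1)
piv = (z[:, 6] - z[:, 0]) / (z[:, 19] - z[:, 6])  # pivot: gamma=0, mu=0, sigma=1
xi_lo, xi_hi = np.quantile(piv, [0.025, 0.975])
lower = solve_gamma(x, xi_lo, 1, 7, 20)      # 95% lower confidence limit
upper = solve_gamma(x, xi_hi, 1, 7, 20)      # 95% upper confidence limit
```

Because the assumed ξ is increasing in γ, the smaller critical value yields the lower limit; both limits necessarily fall below the smallest observation.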
To obtain the values of ξ_α, Monte Carlo simulation was used. For each combination of the selected values of n and j, 250,000 pseudorandom samples were generated from the three-parameter lognormal distribution. Since the distribution of ξ(γ) does not depend on any parameters, the simplest case, γ = 0, μ = 0, and σ = 1, was used. The critical values of ξ(γ) were then obtained for selected values of α. The values of ξ_α are listed in Table 1. The middle column of Table 1 (labeled ξ_0.5) is used to obtain the point estimator of the location parameter γ.
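The Table 1 computation can be sketched as follows. Since γ = 0, μ = 0, σ = 1 is used, the quantities ln X_(r) are simply order statistics of standard normal variables, so no lognormal sampling is needed. As before, the functional form of ξ (a ratio of log-spacings) is an assumption of this sketch, not the paper's own equation.

```python
# Monte Carlo approximation of the critical values xi_alpha (Table 1),
# using the parameter-free case gamma = 0, mu = 0, sigma = 1.
import numpy as np

def xi_critical_values(n, j, alphas, reps=250_000, seed=1):
    """Approximate xi_alpha for i = 1, k = n and the chosen j (1-based)."""
    rng = np.random.default_rng(seed)
    z = np.sort(rng.standard_normal((reps, n)), axis=1)  # ln X_(r) when gamma=0
    piv = (z[:, j - 1] - z[:, 0]) / (z[:, n - 1] - z[:, j - 1])
    return np.quantile(piv, alphas)

# One row of a Table-1-style tabulation for n = 20, j = 6:
table_row = xi_critical_values(20, 6, [0.025, 0.05, 0.5, 0.95, 0.975])
```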
The quantity ξ(γ) can also be used to test hypotheses about the location parameter γ. In practice, one may need to choose between the two-parameter and the three-parameter lognormal distribution when fitting data. In that case, the test H_0: γ = 0 versus H_a: γ > 0 needs to be conducted. As mentioned previously, ξ(γ) is strictly increasing in γ. When the calculated value of ξ(γ) is greater than ξ_(1−α), it can be concluded, at significance level α, that the three-parameter lognormal distribution should be used instead of the two-parameter lognormal distribution.
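A sketch of this model-choice test: per the paper's stated rule, H_0: γ = 0 is rejected when the computed value of ξ(0) exceeds ξ_(1−α). The functional form of ξ used here (ratio of log-spacings of order statistics) is again an assumption of this sketch, since the paper's defining equation is not reproduced in this excerpt.

```python
# Hedged sketch of the test H0: gamma = 0 versus Ha: gamma > 0.
import numpy as np

def xi_at_zero(x, i, j, k):
    """Compute xi(0) from positive data, using 1-based order-statistic indices."""
    a, b, c = np.sort(x)[[i - 1, j - 1, k - 1]]
    return (np.log(b) - np.log(a)) / (np.log(c) - np.log(b))

def prefer_three_parameter(x, xi_upper, i, j, k):
    """True if H0 is rejected, i.e., the three-parameter model is preferred
    at the level corresponding to the simulated critical value xi_upper."""
    return xi_at_zero(x, i, j, k) > xi_upper
```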

Point Estimation
A widely used method for estimating the parameters of the lognormal distributions in the literature is maximum likelihood. Certain problems with maximum likelihood estimation have been mentioned by several authors. For the three-parameter lognormal distribution, note that the likelihood function of a random sample is

L(μ, σ, γ | x) = I_[γ,∞)(min{x_1, . . . , x_n}) ∏_{i=1}^{n} (1 / ((x_i − γ) σ √(2π))) exp{ −[ln(x_i − γ) − μ]² / (2σ²) },

where I_[γ,∞)(min{x_1, . . . , x_n}) is an indicator function that equals 1 when min{x_1, . . . , x_n} ≥ γ and 0 otherwise. It can be seen from this expression of L(μ, σ, γ | x) that the maximum likelihood estimator of γ is X_(1) = min{X_1, . . . , X_n}. Since the density function of the three-parameter lognormal distribution is nonzero only when x_(1) ≥ γ, and since the probability that X_(1) > γ is 1, one would expect the maximum likelihood estimator X_(1) to be a positively biased estimator of γ. This is verified by the Monte Carlo simulation results discussed below. Chen [11] used the locally maximum likelihood estimation method to estimate the parameters. As mentioned in that paper, the locally maximum likelihood estimation method has some problems: the locally maximum likelihood estimate may not exist, and in some cases there are multiple local maxima. Its biggest drawback is that it abandons the principle of maximizing the likelihood function globally. The point estimator of γ can instead be obtained by squeezing the confidence interval of γ described in the previous section. In fact, a point estimator of γ is the solution in γ of the equation ξ(γ) = ξ_0.5 (equation (14)). The value of ξ_0.5 can be found in Table 1, and the equation can be solved easily using a scientific calculator.
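The unboundedness of the likelihood in γ can be checked numerically. The sketch below profiles μ and σ out of the log-likelihood at γ = x_(1) − ε and lets ε shrink; the data and parameter values are illustrative, not the paper's data set.

```python
# Numerical check that the log-likelihood is unbounded as gamma -> x_(1).
import numpy as np

rng = np.random.default_rng(0)
x = 10 + rng.lognormal(4.0, 2.0, 30)      # simulated data, true gamma = 10
d = x - x.min()                            # distances from the smallest observation

def profile_loglik(eps):
    """Log-likelihood at gamma = x_(1) - eps, with mu and sigma replaced
    by their conditional MLEs (mean and 1/n-variance of ln(x - gamma))."""
    w = np.log(d + eps)                    # ln(x_i - gamma), computed stably
    n, sigma = len(w), w.std()
    return -w.sum() - n * np.log(sigma) - 0.5 * n * (np.log(2 * np.pi) + 1)

vals = [profile_loglik(10.0 ** (-e)) for e in (0, 20, 60, 100, 200)]
# the profile log-likelihood eventually grows without bound as eps -> 0
```

Working with the distances d rather than subtracting γ directly keeps ε representable in floating point even when it is far below machine epsilon relative to x_(1).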
To compare the performance of the point estimator obtained from (14) with the maximum likelihood estimator, Monte Carlo simulation was conducted with 250,000 pseudorandom samples from the three-parameter lognormal distribution with parameters γ = 10, μ = 4, and σ = 2. Simulation results are listed in Table 2. The column γ (new) gives the average of the point estimates obtained by the method presented in this paper, and the column γ (MLE) gives the average of the maximum likelihood estimates. It can be seen that the maximum likelihood estimator is clearly biased. The columns MSE (new) and MSE (MLE) provide the mean squared errors for the method in this paper and the maximum likelihood method, respectively. The method presented in this paper has the smaller mean squared error when the sample size is small. As the sample size grows, the maximum likelihood estimator attains the smaller mean squared error, although it remains biased.
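The positive bias of the maximum likelihood estimator X_(1) is easy to verify by simulation. The sketch below mirrors the paper's settings (γ = 10, μ = 4, σ = 2) with fewer replications for speed.

```python
# Monte Carlo check that the MLE gamma-hat = X_(1) overestimates gamma.
import numpy as np

rng = np.random.default_rng(42)
gamma, mu, sigma, n, reps = 10.0, 4.0, 2.0, 20, 10_000
samples = gamma + rng.lognormal(mu, sigma, (reps, n))
mle = samples.min(axis=1)                  # MLE of gamma is the sample minimum
bias = mle.mean() - gamma                  # positive, since X_(1) > gamma always
```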

Examples
The following data set, containing 20 observations, was used in Cohen and Whitten [10]:

Conclusions and Discussion
Compared with the two-parameter lognormal distribution, the three-parameter lognormal distribution is more flexible because of the inclusion of the location parameter. However, the location parameter introduces considerable technical difficulty into statistical inference. Only approximation methods can be found in the literature for constructing confidence intervals for the location parameter. The most commonly used method of point estimation is maximum likelihood. As discussed previously, the maximum likelihood estimator of the location parameter is positively biased.
A method for constructing exact confidence intervals and exact upper confidence limits for the location parameter is proposed in this paper. The method can also be used to conduct statistical tests about the location parameter of the three-parameter lognormal distribution. A point estimator is obtained as well, by squeezing the confidence interval of the location parameter.
While the discussion of the method introduced in this paper is for complete samples, the method can also be used for censored data. For example, suppose that only the first r order statistics X (1) , X (2) , . . . , X (r) are available for the statistical analysis. Then i = 1 and k = r. The selection of j is similar to the complete sample case.
The selection of the triplet (i, j, k) can also be discussed for the large sample case. The pivotal quantity ξ(γ) possesses some asymptotic properties when the sample size is sufficiently large. Some of the following discussion uses the results in Bahadur [12] and Embrechts et al. [13]. Let X_1, . . . , X_n be a random sample from the three-parameter lognormal distribution described in (1), and let X_(1), . . . , X_(n) be the corresponding order statistics. Let j = [np] + 1 (0 < p < 1). It can be shown that To show this, let Z_i = (ln(X_i − γ) − μ)/σ, i = 1, 2, . . . , n. Then
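The limit omitted above can plausibly be reconstructed from standard sample-quantile asymptotics: the p-th quantile of the distribution in (1) is γ + e^{μ + σ z_p}, so with j = [np] + 1 the strong consistency of sample quantiles (cf. Bahadur [12]) suggests

\[
X_{(j)} \;\xrightarrow{\text{a.s.}}\; \gamma + e^{\mu + \sigma z_p},
\qquad z_p = \Phi^{-1}(p),
\]

where Φ denotes the standard normal distribution function. This is a hedged reconstruction consistent with the substitution Z_i = (ln(X_i − γ) − μ)/σ made in the text, not the paper's own display.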