Improved Estimators of the Mean of a Normal Distribution with a Known Coefficient of Variation

This paper proposes estimators of the mean θ of a normal distribution with mean θ and variance aθ², a > 0, θ > 0, for the case in which the coefficient of variation is known. The mean square error (MSE) is the criterion used to evaluate the estimators. The results show that the proposed estimators are preferable in asymptotic comparisons. Moreover, simulation studies show that the estimator based on the jackknife technique is preferable to the other proposed estimators.


Introduction
For a population that is normally distributed with mean θ and variance σ², the sample mean X̄ is the unbiased, minimum-variance estimator. In the situation that the coefficient of variation β is known, where β² = a = σ²/θ² for a > 0 and θ > 0, Khan [1] proposed the unbiased estimator d*, whose asymptotic variance is aθ²/(n(1 + 2a)). This estimator is a linear combination of X̄ and the sample standard deviation S, and the asymptotic variance of d* attains the Cramer-Rao bound. This paper focuses on improving the estimators of θ when the coefficient of variation is known. MSE is the criterion for evaluating the estimators. The estimators are proposed by using the methods of Khan [1] and Arnholt and Hebert [2]. Also, the jackknife technique [3] is used to reduce the bias of an estimator. Moreover, a Bayesian estimator [4] is proposed based on a noninformative prior, the Jeffreys prior distribution.
The paper is organized as follows. The improved estimators are proposed in Section 2. Asymptotic comparison and simulation study results are presented in Section 3. Finally, Section 4 contains conclusions.

Improved Estimators
Let X_1, X_2, . . ., X_n be independent and normally distributed with mean θ and variance σ², where the coefficient of variation β is known and β² = a = σ²/θ² for a > 0 and θ > 0. Three estimators are proposed as follows.
(1) Let T*_1 be the proposed estimator of θ based on Khan [1] and Arnholt and Hebert [2].

(2) Let T_2 be the proposed estimator of θ based on the jackknife technique. The estimator T*_1 is used to construct the jackknife estimator T_2 as follows. Let T*_{1,-i} be the estimator T*_1 computed from the n − 1 observations obtained by deleting the ith observation, and denote the average of these leave-one-out estimates by T̄*_1 = (1/n) Σ_{i=1}^n T*_{1,-i}. The estimator T_2 is then given by

T_2 = nT*_1 − (n − 1)T̄*_1.

The MSE of T_2 is shown in the simulation study in Section 3.
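The jackknife construction above can be sketched numerically. In the minimal Python sketch below, the Arnholt-Hebert shrinkage estimator nX̄/(β² + n) is used as a stand-in for the base estimator (the exact expression for T*_1 is not reproduced here), and the parameter values θ = 10, a = 0.25, n = 10 are illustrative assumptions:

```python
import numpy as np

def shrinkage_estimator(x, b2):
    """Arnholt-Hebert shrinkage of the sample mean: n * xbar / (b2 + n)."""
    n = len(x)
    return n * np.mean(x) / (b2 + n)

def jackknife_estimate(x, estimator, **kw):
    """Jackknife bias reduction: T2 = n*T - (n - 1) * mean of leave-one-out estimates."""
    n = len(x)
    t_full = estimator(x, **kw)
    loo = np.array([estimator(np.delete(x, i), **kw) for i in range(n)])
    return n * t_full - (n - 1) * loo.mean()

# Illustrative settings (assumed, not from the paper's tables): theta = 10, beta^2 = a = 0.25
rng = np.random.default_rng(0)
theta, a, n, reps = 10.0, 0.25, 10, 5000
base = np.empty(reps)
jack = np.empty(reps)
for r in range(reps):
    x = rng.normal(theta, np.sqrt(a) * theta, n)   # X ~ N(theta, a*theta^2)
    base[r] = shrinkage_estimator(x, a)
    jack[r] = jackknife_estimate(x, shrinkage_estimator, b2=a)
bias_base = base.mean() - theta   # noticeably negative for the shrinkage estimator
bias_jack = jack.mean() - theta   # much closer to zero after the jackknife correction
```

The simulation illustrates the mechanism behind T_2: the base estimator is biased, and the jackknife correction removes the leading O(1/n) term of that bias.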
(3) The Bayes estimator T_3 is obtained as follows. The likelihood function of θ given the data is

L(θ | x) = (2πaθ²)^(−n/2) exp( −Σ_{i=1}^n (x_i − θ)² / (2aθ²) ).

The log-likelihood function is

log L(θ | x) = −(n/2) log(2πaθ²) − Σ_{i=1}^n (x_i − θ)² / (2aθ²).    (2.5)
The Jeffreys prior distribution is π(θ) ∝ √I(θ), where I(θ) is the Fisher information; here I(θ) = (1 + 2a)/(aθ²). Then, the prior distribution is

π(θ) ∝ 1/θ.    (2.7)
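Because the posterior under the prior π(θ) ∝ 1/θ is not a standard distribution, the Bayes estimate can be illustrated numerically. The sketch below evaluates the posterior mean on a grid of θ values; the grid bounds and the data-generating settings are illustrative assumptions, not the paper's derivation:

```python
import numpy as np

def bayes_posterior_mean(x, a, grid):
    """Grid approximation of the posterior mean of theta for X ~ N(theta, a*theta^2)
    under the Jeffreys prior pi(theta) proportional to 1/theta."""
    n = len(x)
    ss = np.array([np.sum((x - t) ** 2) for t in grid])
    # log-likelihood: -(n/2)log(2*pi*a*theta^2) - sum((x - theta)^2) / (2*a*theta^2)
    loglik = -0.5 * n * np.log(2 * np.pi * a * grid ** 2) - ss / (2 * a * grid ** 2)
    logpost = loglik - np.log(grid)        # add the log prior, up to a constant
    w = np.exp(logpost - logpost.max())    # subtract the max for numerical stability
    w /= w.sum()
    return float(np.sum(w * grid))

# Illustrative settings (assumed): theta = 10, a = 0.25, n = 30
rng = np.random.default_rng(1)
theta, a, n = 10.0, 0.25, 30
x = rng.normal(theta, np.sqrt(a) * theta, n)
grid = np.linspace(0.1, 30.0, 5000)
t3 = bayes_posterior_mean(x, a, grid)      # posterior mean, close to theta at this n
```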
The posterior distribution of θ given the data is proportional to the product of the likelihood and the prior, π(θ | x) ∝ θ^(−1) L(θ | x). Therefore, the Bayes estimator of θ, T_3, is taken as the posterior mean. The MSE of T_3 is shown in the simulation study in Section 3.

From Tables 1-3, the results show that, for small sample sizes n, the estimator T_2 has smaller MSEs than the estimator T_3. We also see that T_2 has smaller MSEs than T*_1, since T_2 is constructed by using the jackknife technique to reduce the bias of the biased estimator T*_1. Therefore, the estimator T_2 is better than the estimators T*_1 and T_3 over the ranges of θ and a considered.

Conclusions
The estimators T*_1, T_2, and T_3 are proposed. The estimator T*_1 is constructed from the methods of Khan [1] and Arnholt and Hebert [2]. The estimator T_2 is obtained by reducing the bias of T*_1 through the jackknife technique. The estimator T_3 is a Bayesian estimator under the noninformative Jeffreys prior distribution. The estimator T*_1 is better than the estimators d* and δ*_k in the asymptotic comparison. Moreover, the estimator T_2 is better than the estimators T*_1 and T_3 in the simulation studies.

Arnholt and Hebert [2] improved on an unbiased estimator T of θ by the shrinkage estimator δ*_k = kT, where k = (cβ² + 1)^(−1) and the constant c is known. They found that δ*_k has smaller mean square error (MSE) than the estimator T. They also gave the example T = X̄ and obtained the estimator δ*_k = n(β² + n)^(−1) X̄ with MSE(δ*_k) = aθ²/(a + n). Then δ*_k has smaller MSE than the estimator X̄.
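The claimed risk improvement can be checked by simulation. A minimal sketch, with illustrative parameter values, compares the empirical MSEs of X̄ and δ*_k = nX̄/(β² + n) against the formulas aθ²/n and aθ²/(a + n):

```python
import numpy as np

# Illustrative settings (assumed): theta = 10, beta^2 = a = 0.25 since Var = a*theta^2
rng = np.random.default_rng(2)
theta, a, n, reps = 10.0, 0.25, 10, 50000
x = rng.normal(theta, np.sqrt(a) * theta, (reps, n))
xbar = x.mean(axis=1)                      # sample mean; theoretical MSE a*theta^2/n
delta = n * xbar / (a + n)                 # shrinkage delta*_k; theoretical MSE a*theta^2/(a + n)
mse_xbar = np.mean((xbar - theta) ** 2)
mse_delta = np.mean((delta - theta) ** 2)
```

The empirical MSEs match the two formulas closely, and shrinking toward zero trades a small bias for a variance reduction large enough to lower the overall MSE.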
The simulation results compare the MSEs among the three proposed estimators T*_1, T_2, and T_3. The parameters are θ = 5, 10, and 15 and a = 0.01, 0.09, and 0.25, with small sample sizes n = 10, 20, and 30. The results are shown in Tables 1, 2, and 3.
(1) For the asymptotic comparison, the estimators are compared based on the relative efficiency (RE) of their MSEs, namely the RE of d* with respect to T*_1.

Table 1 :
MSEs of the proposed estimators T*_1, T_2, and T_3 when n = 10.

Table 2 :
MSEs of the proposed estimators T*_1, T_2, and T_3 when n = 20.

Table 3 :
MSEs of the proposed estimators T*_1, T_2, and T_3 when n = 30.