Two Kinds of Weighted Biased Estimators in Stochastic Restricted Regression Model

We consider two kinds of weighted mixed almost unbiased estimators in a linear regression model with stochastic restrictions when the prior information and the sample information are not equally important. The superiorities of the two new estimators are discussed with respect to the quadratic bias and variance matrix criteria. Under these criteria, we present a real data example and a Monte Carlo study to illustrate the theoretical results.


Introduction
In a linear regression model, the ordinary least squares estimator (LS) is unbiased, has minimum variance among all linear unbiased estimators, and has long been treated as the best estimator. However, the LS can be highly variable when the notorious multicollinearity is present, despite its minimum variance property in the class of linear unbiased estimators. Hence biased alternatives to the ordinary least squares estimator have been recommended in order to obtain a substantial reduction in variance, such as the ordinary ridge regression estimator (ORE) proposed by Hoerl and Kennard [1], the ordinary Liu regression estimator (OLE) proposed by Liu [2], and many modified methods. On the other hand, to reduce the bias of a biased estimator, Kadiyala [3] introduced a class of almost unbiased shrinkage estimators, Singh et al. [4] introduced the almost unbiased generalized ridge estimator by the jackknife procedure, and Akdeniz and Kaçıranlar [5] studied the almost unbiased generalized Liu estimator. Akdeniz and Erol [6] studied bias-corrected estimators of the ORE and OLE and discussed the almost unbiased ridge estimator (AURE) and the almost unbiased Liu estimator (AULE). Şiray et al. [7] discussed the r-d class estimator and Wu [8] developed a principal component Liu-type estimator in the linear regression model.
An alternative technique to combat the multicollinearity problem is to exploit prior information in addition to the sample information. When a set of stochastic linear restrictions on the unknown parameter vector β is assumed to hold, Theil [9] proposed the ordinary mixed estimator (OME). Hubert and Wijekoon [10] proposed the stochastic restricted Liu estimator (SRLE), and Li and Yang [11] introduced the stochastic restricted ridge estimator (SRRE) by grafting the ORE into the mixed estimation procedure. Wu [12] discussed the stochastic restricted r-k class estimator and the stochastic restricted r-d class estimator in the linear regression model. When the prior information and the sample information are not equally important, Schaffrin and Toutenburg [13] introduced the method of weighted mixed regression and developed the weighted mixed estimator (WME). Li and Yang [14] grafted the ORE into the weighted mixed estimation procedure and proposed the weighted mixed ridge estimator.
In this paper, when additional stochastic linear restrictions are supposed to hold, we propose the stochastic weighted mixed almost unbiased ridge estimator by combining the WME and the AURE, and the stochastic weighted mixed almost unbiased Liu estimator by combining the WME and the AULE, in a linear regression model. We discuss the performance of the new estimators relative to other competitive estimators with respect to the quadratic bias (QB) and variance matrix criteria. The results show that the proposed estimators have smaller quadratic biases than the SRRE and SRLE, respectively, and that their variance matrices are also competitive. The rest of the paper is organized as follows: we describe the statistical model and propose the two new estimators in Section 2. Section 3 compares the new estimators with competitive estimators according to the quadratic bias criterion. In Section 4, the superiorities of the proposed estimators over the related estimators are compared according to the variance matrix criterion. A real data example and a Monte Carlo simulation are studied to justify the superiorities of the new estimators in Section 5, and some discussions are given in Section 6.
Kadiyala [3] introduced an almost unbiased shrinkage estimator which can be more efficient than the LS estimator and has smaller bias than the corresponding biased estimator. Akdeniz and Erol [6] discussed the almost unbiased ridge estimator (AURE) and the almost unbiased Liu estimator (AULE).
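The display equations for these two estimators did not survive extraction; in the notation standard in [6], with S = X′X and the LS estimator β_LS, they read:

```latex
\beta_{\mathrm{AURE}}(k) = \bigl[I_p - k^{2}(S + kI_p)^{-2}\bigr]\,\beta_{\mathrm{LS}},
\qquad
\beta_{\mathrm{AULE}}(d) = \bigl[I_p - (1-d)^{2}(S + I_p)^{-2}\bigr]\,\beta_{\mathrm{LS}},
```

where k > 0 and 0 < d < 1 are the ridge and Liu biasing parameters, respectively.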
In addition to the model (1), suppose we are given some prior information about β in the form of a set of independent stochastic linear restrictions (6), where R is a known restriction matrix of full row rank, the disturbance vector e has expectation 0 and covariance matrix σ²Ω with Ω a known positive definite matrix, and the restriction vector r can be interpreted as a random variable with expectation E(r) = Rβ. Furthermore, it is also assumed that the random vector e is stochastically independent of the model error ε.
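The displayed equations for the model and the restrictions are missing from the extracted text; in the notation standard for this setting (Theil [9]), model (1) and restriction (6) are presumably of the form:

```latex
y = X\beta + \varepsilon, \qquad E(\varepsilon) = 0, \quad \operatorname{Cov}(\varepsilon) = \sigma^{2} I_n, \tag{1}
```

```latex
r = R\beta + e, \qquad E(e) = 0, \quad \operatorname{Cov}(e) = \sigma^{2}\Omega. \tag{6}
```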
For the restricted model specified by (1) and (6), Theil [9] defined the ordinary mixed estimator (OME). Hubert and Wijekoon [10] proposed the stochastic restricted Liu estimator (SRLE) by combining the ordinary mixed estimator and the Liu estimator, and Li and Yang [11] introduced the stochastic restricted ridge estimator (SRRE) by grafting the ORE into the mixed estimation procedure. In practice, the prior information and the sample information may not be equally important, which motivated the weighted mixed estimator (WME) of [13], where w (0 ≤ w ≤ 1) is a nonstochastic and nonnegative scalar weight. Now, we are ready to introduce two almost unbiased estimators in the stochastic restricted linear regression model. Combining the WME with the AURE and the AULE, respectively, we can propose the stochastic weighted mixed almost unbiased ridge estimator β_R(k, w) and the stochastic weighted mixed almost unbiased Liu estimator β_L(d, w), where β_WME(w) is the weighted mixed estimator.
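The defining display equations are also missing here. A plausible reconstruction, consistent with the descriptions above (the WME of Schaffrin and Toutenburg with the AURE/AULE shrinkage factors applied to it; the original equation numbering is not recoverable), is:

```latex
\beta_{\mathrm{WME}}(w) = \bigl(X'X + wR'\Omega^{-1}R\bigr)^{-1}\bigl(X'y + wR'\Omega^{-1}r\bigr),
```

```latex
\beta_R(k, w) = \bigl[I_p - k^{2}(X'X + kI_p)^{-2}\bigr]\,\beta_{\mathrm{WME}}(w),
\qquad
\beta_L(d, w) = \bigl[I_p - (1-d)^{2}(X'X + I_p)^{-2}\bigr]\,\beta_{\mathrm{WME}}(w).
```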
In fact, the WME (10) can be rewritten in an equivalent least squares form, so β_R(k, w) and β_L(d, w) can also be derived as the LS estimators of β in the framework of the augmented models (14), whose disturbances have expectation zero. It can be seen from the definitions of β_R(k, w) and β_L(d, w) that they are two general estimators that include the WME, AURE, AULE, OME, and LS as special cases. Now, by some straightforward calculations, we can compute the bias vectors and covariance matrices of the WME, AURE, AULE, SRRE, SRLE, β_R(k, w), and β_L(d, w). In the remaining sections, our primary aim is to study the performance of the new estimators relative to the related estimators under the quadratic bias (QB) and variance matrix criteria.
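Since the displayed formulas are unavailable, the construction can be illustrated numerically. The following sketch (in Python rather than the paper's R, with simulated data and a hypothetical restriction matrix R; the estimator forms are the assumed reconstruction of the WME with the AURE/AULE shrinkage factors) checks the special cases just mentioned: k = 0 (resp. d = 1) recovers the WME, and additionally w = 0 recovers the LS estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (illustrative only; not the Webster et al. data set).
n, p = 50, 4
X = rng.normal(size=(n, p))
beta_true = np.ones(p)
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Hypothetical stochastic restriction r = R beta + e, Cov(e) = sigma^2 * Omega.
R = np.array([[1.0, -1.0, 0.0, 0.0]])
Omega = np.eye(1)
r = R @ beta_true  # taking e = 0 for the illustration

S = X.T @ X
I_p = np.eye(p)
Omega_inv = np.linalg.inv(Omega)

def wme(w):
    """Weighted mixed estimator (Schaffrin-Toutenburg form)."""
    A = S + w * R.T @ Omega_inv @ R
    b = X.T @ y + w * R.T @ Omega_inv @ r
    return np.linalg.solve(A, b)

def beta_R(k, w):
    """AURE shrinkage factor applied to the WME (assumed construction)."""
    Tk = I_p - k**2 * np.linalg.matrix_power(np.linalg.inv(S + k * I_p), 2)
    return Tk @ wme(w)

def beta_L(d, w):
    """AULE shrinkage factor applied to the WME (assumed construction)."""
    Td = I_p - (1 - d)**2 * np.linalg.matrix_power(np.linalg.inv(S + I_p), 2)
    return Td @ wme(w)

beta_ls = np.linalg.solve(S, X.T @ y)

# Special cases: k = 0 (resp. d = 1) removes the shrinkage and recovers the
# WME; with w = 0 the prior information is ignored and the LS is recovered.
assert np.allclose(beta_R(0.0, 0.7), wme(0.7))
assert np.allclose(beta_L(1.0, 0.7), wme(0.7))
assert np.allclose(beta_R(0.0, 0.0), beta_ls)
```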

Quadratic Bias Comparison between the SRLE and β𝐿 (𝑑,𝑤).
In this subsection, the comparison between the quadratic bias of the SRLE and that of β_L(d, w) is discussed. We obtain the difference of the quadratic biases from (34) and (36).

Theorem 2. The β_L(d, w) is superior to the estimator β_SRLE(d) under the quadratic bias criterion; namely, Δ₂(β_SRLE(d), β_L(d, w)) ≥ 0. That is, the proposed β_L(d, w) can be regarded as a bias-corrected estimator of the SRLE.
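For reference, the quadratic bias used throughout these comparisons is the squared length of the bias vector (this is the standard definition; the specific difference expressions (34) and (36) are not recoverable from the extracted text):

```latex
\mathrm{QB}(\tilde{\beta}) = \operatorname{Bias}(\tilde{\beta})'\,\operatorname{Bias}(\tilde{\beta}),
\qquad
\operatorname{Bias}(\tilde{\beta}) = E(\tilde{\beta}) - \beta .
```

In these terms, the claim is that QB(β_SRLE(d)) − QB(β_L(d, w)) ≥ 0 for all admissible d and w.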

Variance Comparisons of Estimators
In this section, we focus on variance matrix comparisons among the WME, AURE, AULE, SRRE, SRLE, β_R(k, w), and β_L(d, w). For the sake of convenience, we first list a lemma needed in the following discussions.

Variance Comparison between the WME and β𝑅 (𝑘,𝑤).
In this subsection, the comparison of the variance matrices of β_R(k, w) and the WME is discussed.

Variance Comparison between the WME and β𝐿 (𝑑,𝑤).
In this subsection, the comparison of the variance matrices of β_L(d, w) and the WME is discussed.

Variance Comparison between the AURE and β𝑅 (𝑘, 𝑤).
In this subsection, the comparison of the variance matrices of β_R(k, w) and the AURE is discussed.

Variance Comparison between the SRRE and β𝑅 (𝑘,𝑤).
In this subsection, we compare β_R(k, w) with the SRRE in terms of the variance matrix.

Variance Comparison between the SRLE and β𝐿 (𝑑,𝑤).
In this subsection, we compare β_L(d, w) with the SRLE in terms of the variance matrix.
From (26) and (30), we can obtain the difference of the variance matrices of the SRLE and β_L(d, w).

Numerical Example and Monte Carlo Simulation
In order to illustrate our theoretical results, we first consider in this section a data set originally due to Webster et al. [16]. Since the comparison results depend on the unknown parameters β and σ², we replace them by their unbiased estimators, namely the LS estimates; the results here and below are computed with R 2.14.1. The condition number is approximately 208.5, which indicates a moderate multicollinearity among the regression vectors. The ordinary least squares estimate of β is β_LS = (X′X)⁻¹X′y = (1.0561, 0.8408, 0.8924, 0.8184, 1.086, 5.0995)′ (48) with σ̂²_LS = 1.3622. Consider the following stochastic linear restrictions. Note that the quadratic bias values of the AURE and β_R(k, w), and of the AULE and β_L(d, w), coincide by (31), (32), (35), and (36), respectively, and the quadratic bias values of the SRRE, β_R(k, w), SRLE, and β_L(d, w) do not change when w takes different values. Therefore, we just compare the quadratic biases of the SRRE and β_R(k, w), and of the SRLE and β_L(d, w), as k or d takes different values; their quadratic bias values are given in Table 1. We can see from Table 1 that β_R(k, w) has smaller quadratic bias values than the SRRE in every case, and likewise the quadratic bias of β_L(d, w) is smaller than that of the SRLE. From Table 2, we can find that β_R(k, w) has smaller MSE values than the AURE in every case. However, when the MSE values of the WME, SRRE, and β_R(k, w) are compared, no estimator is always superior to the others. In particular, as the biasing parameter increases from 0.2 to 0.95, the difference of the MSE values of the SRRE and β_R(k, w) is always positive and becomes larger and larger. A similar situation is found when we compare the WME, AULE, SRLE, and β_L(d, w) in terms of MSE.
From the simulation results shown in Tables 3-5, we can find that the quadratic biases and the MSE values of the estimators increase as the degree of multicollinearity increases. The β_R(k, w) and β_L(d, w) have smaller quadratic biases than the SRRE and SRLE, respectively, in every case. On the other hand, the value of w sets the relative weight given to the sample information and the prior information, and we can see from Tables 4 and 5 that the estimated MSE values of the WME, β_R(k, w), and β_L(d, w) become smaller and smaller as the value of w increases. The β_R(k, w) has smaller estimated MSE values than the AURE and SRRE in every case when w = 0.5, ρ = 0.99 and when w = 0.9, ρ = 0.999, and the β_L(d, w) has smaller estimated MSE values than the AULE in every case for the same settings. Also, the superiorities between β_R(k, w) and the WME or SRRE, and between β_L(d, w) and the WME or SRLE, with respect to the MSE depend on the choice of the parameters k, d, and w. More details can be found in Tables 4 and 5.
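The paper does not reproduce its design-generation scheme in the extracted text. A common way to generate regressors with a controllable degree of multicollinearity is the McDonald-Galarneau-style construction sketched below (the parameter name rho and the sizes are assumptions for illustration); it shows how the conditioning of X′X deteriorates as rho approaches 1, matching the levels 0.9, 0.99, 0.999 used above.

```python
import numpy as np

def make_collinear_design(n, p, rho, rng):
    """McDonald-Galarneau-style generator: all columns share a common
    latent factor, so the pairwise correlation between regressors is rho**2."""
    Z = rng.normal(size=(n, p + 1))
    return np.sqrt(1.0 - rho**2) * Z[:, :p] + rho * Z[:, [p]]

rng = np.random.default_rng(1)
for rho in (0.9, 0.99, 0.999):
    X = make_collinear_design(100, 4, rho, rng)
    # The condition number of X'X grows sharply as rho approaches 1.
    print(f"rho = {rho}: cond(X'X) = {np.linalg.cond(X.T @ X):.1f}")
```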
In the numerical example, the execution times to compute the quadratic bias of the AURE, AULE, β_R(k, w), and β_L(d, w) are 0.003625 s, 0.003415 s, 0.003764 s, and 0.003813 s, respectively. Moreover, the execution times to compute the MSE of the AURE, AULE, β_R(k, w), and β_L(d, w) are 0.00399 s, 0.00400 s, 0.00800 s, and 0.00500 s, respectively. On the other hand, in the Monte Carlo study with ρ = 0.999, the execution times to compute the quadratic bias of the AURE, AULE, β_R(k, w), and β_L(d, w) are 0.00799 s, 0.00700 s, 0.00999 s, and 0.00799 s, respectively, and the execution times to compute the MSE of the same four estimators are 0.01100 s, 0.00900 s, 0.01299 s, and 0.01167 s, respectively. All the experiments are implemented in R 2.14.1 on a personal computer (PC) with an AMD Sempron Processor 3100+, 1.81 GHz.