Empirical Likelihood for Partially Linear Single-Index Models under Negatively Associated Errors

In this paper, the authors consider the application of the blockwise empirical likelihood method to the partially linear single-index model when the errors are negatively associated, a dependence structure that often arises in sequentially collected economic data. The blockwise empirical likelihood ratio statistic for the parameters of interest is proved to be asymptotically chi-squared, so it can be used directly to construct confidence regions for the parameters of interest. A few simulation experiments illustrate the proposed method.


Introduction
The partially linear single-index model is as follows:

Y = g(Z^T β) + X^T θ + ε,  (1)

where Y is a response variable, (Z, X) ∈ R^p × R^q is the covariate, g(·) is an unknown univariate measurable function, ε is a random error, and (β, θ) is an unknown parameter vector in R^p × R^q with ‖β‖ = 1 (where ‖·‖ denotes the Euclidean norm). The restriction ‖β‖ = 1 ensures identifiability.
Model (1) is flexible enough to include many important statistical models, so it has attracted much attention and has been studied extensively in recent years. Relevant studies of Model (1) include [1][2][3][4][5], all of which assume an independent error sequence. In practice, the terms of the random error sequence are often associated with one another, as with negatively associated errors, m-dependent errors, and ARCH errors, so the abovementioned results cannot be used directly. Therefore, it is necessary to study Model (1) with associated errors. A finite random variable sequence (ξ_i, 1 ≤ i ≤ n) is negatively associated (NA) if, for any two disjoint subsets A, B ⊂ {1, 2, . . . , n} and any real-valued coordinate-wise nondecreasing functions f_1 and f_2,

Cov(f_1(ξ_i, i ∈ A), f_2(ξ_j, j ∈ B)) ≤ 0.  (2)

An infinite random variable sequence is negatively associated if every finite subsequence of it is negatively associated.
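As a concrete illustration of definition (2) (our own example, not from the paper), the cell counts of a multinomial vector form a classical NA sequence: a large count in one cell forces smaller counts elsewhere. The sketch below checks the covariance condition empirically for two coordinate-wise nondecreasing functions on disjoint index sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Many replications of a multinomial vector (xi_1, ..., xi_4).
reps = rng.multinomial(n=50, pvals=[0.25, 0.25, 0.25, 0.25], size=20000)

# Disjoint index sets A = {1} and B = {3}, with f1, f2 the (nondecreasing)
# identity maps on the selected coordinates.
f1 = reps[:, 0]
f2 = reps[:, 2]

# Definition (2) requires Cov(f1, f2) <= 0; here the theoretical value
# is -n * p_1 * p_3 = -3.125.
cov = np.cov(f1, f2)[0, 1]
print(cov)
```

The same check with any other pair of disjoint index sets and nondecreasing functions should again return a nonpositive covariance, up to sampling error.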
NA sequences have been introduced and studied by the authors of [6, 7] since the 1980s. Because the class of NA sequences includes independent sequences, it has been widely applied in multivariate statistical analysis, percolation analysis, and reliability theory; it has drawn much attention, and many results have been obtained. In the field of NA random variables, the author of [8] presented the asymptotic normality and central limit theorem, the authors of [9] proved the law of the iterated logarithm, the authors of [10] studied the exponential inequality, and so on.
However, there is little research on the partially linear single-index model under NA errors. This paper, inspired by [11, 12], focuses on estimating β and θ with blockwise empirical likelihood when the errors in Model (1) are NA. Throughout this paper, we assume that the data (X_i, Z_i, Y_i), i = 1, 2, . . . , n, are generated from Model (1), and that ε_i, i = 1, . . . , n, are NA errors with E(ε_i | Z_i, X_i) = 0 and Var(ε_i | Z_i, X_i) = σ_i² < ∞. The rest of this paper is organized as follows. In Section 2, the blockwise empirical likelihood method and the corresponding asymptotic result are presented. In Section 3, some simulations are conducted to illustrate the proposed approach. All proofs are given in Section 4.

Bias-Corrected Blockwise Empirical Likelihood

In this part, we use the bias-corrected blockwise empirical likelihood to construct the confidence region for (β, θ). To this end, we first introduce an auxiliary vector using the bias-corrected method of [2]. The details are as follows. The restriction ‖β‖ = 1 means that β is a boundary point of the unit sphere, so g(Z^T β) does not have a derivative at β within the parameter space. Nevertheless, the derivative of g(Z^T β) with respect to β must be used in building the empirical likelihood ratio statistic. We therefore adopt the "delete-one-component" method, which is widely used in semiparametric models. Let β = (β_1, β_2, . . . , β_p)^T and let β^(r) = (β_1, . . . , β_{r−1}, β_{r+1}, . . . , β_p)^T be the (p − 1)-dimensional parameter vector obtained by deleting the rth component of β.
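The delete-one-component reparameterization can be sketched numerically (this is our own illustration, not the paper's code): given β^(r) with ‖β^(r)‖ < 1 and β_r > 0, the full unit vector is recovered via β_r = (1 − ‖β^(r)‖²)^{1/2}.

```python
import numpy as np

def reconstruct_beta(beta_deleted, r):
    """Recover the full unit vector beta from beta^{(r)}, assuming beta_r > 0.

    beta_deleted : the (p-1)-vector with the r-th component removed
    r            : 0-based index of the deleted component
    """
    b = np.asarray(beta_deleted, dtype=float)
    norm_sq = np.sum(b**2)
    assert norm_sq < 1.0, "identifiability requires ||beta^{(r)}|| < 1"
    beta_r = np.sqrt(1.0 - norm_sq)   # positive root, since beta_r > 0
    return np.insert(b, r, beta_r)

# Round trip: delete the r-th component of a unit vector, then rebuild it.
beta = np.array([1.0, 2.0, 3.0]) / np.sqrt(14.0)
r = 0
rebuilt = reconstruct_beta(np.delete(beta, r), r)
print(np.allclose(rebuilt, beta))  # True
```

Because β^(r) ranges over an open ball, the reparameterized problem is an unconstrained one, which is exactly what makes the Jacobian calculation below possible.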
Without loss of generality, we assume β_r > 0. Then, we can write β_r = (1 − ‖β^(r)‖²)^{1/2}. Since β is determined by β^(r), only the confidence regions of (β^(r), θ) need to be considered. Moreover, ‖β^(r)‖ < 1, so β is infinitely differentiable in a neighborhood of β^(r). Thus, the Jacobian matrix ∂β/∂β^(r) is well defined, and the auxiliary vector η_i(β^(r), θ) is defined in terms of it, where g′(·) is the derivative of g(·). Note that E(η_i(β^(r), θ)) = 0 when (β^(r), θ) is the true parameter. Therefore, the empirical likelihood of [13] is used to construct the bias-corrected empirical log-likelihood. Since g(·), g′(·), μ_1(·), and μ_2(·) are unknown, formula (5) cannot be used directly to construct the confidence regions; as is usual, we replace them with their estimators. We apply the local linear smoothing method of [14] to obtain the estimators of g(·) and g′(·). For any fixed (β, θ), we seek a and b minimizing the weighted sum of squares in (7), where K_h(·) = K(·/h)/h, K(·) is a kernel function, and h = h(n) is a bandwidth. Let (â, b̂) be the minimizer of (7); through a simple calculation, â and b̂ have the closed forms in (8) and (9). We then define the estimators of μ_1(t) and μ_2(t) as in (11) and (12), respectively. When (8), (9), (11), and (12) are plugged into (5) and (6), an estimated auxiliary vector and an estimated bias-corrected empirical log-likelihood ratio are obtained. Under independent and identically distributed errors, the empirical likelihood ratio statistic was constructed by [2], and its asymptotic result was presented there. In this paper, η_i(β^(r), θ), i = 1, . . . , n, may be dependent when the errors are NA, so the method of [2] cannot be applied directly. We therefore apply the small-block and large-block arguments to construct the blockwise empirical likelihood ratio.
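The local linear step is a weighted least-squares fit at each target point t. The following minimal sketch is our own illustration under our own choices (Epanechnikov kernel, simulated data), not the authors' code; the variable `idx` stands in for the index values Z_i^T β.

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on |u| <= 1.
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def local_linear(t, index, y, h):
    """Local linear estimates (ghat, gphat) of g(t) and g'(t).

    Solves min_{a,b} sum_i K_h(index_i - t) * (y_i - a - b*(index_i - t))^2,
    with K_h(.) = K(./h)/h, so that a estimates g(t) and b estimates g'(t).
    """
    d = index - t
    w = epanechnikov(d / h) / h
    X = np.column_stack([np.ones_like(d), d])
    XtW = X.T * w                         # X^T W with diagonal weights w
    a, b = np.linalg.solve(XtW @ X, XtW @ y)
    return a, b

rng = np.random.default_rng(1)
idx = rng.uniform(0.0, 3.0, size=2000)    # stand-in for Z_i^T beta
y = np.sin(idx) + 0.1 * rng.standard_normal(2000)

ghat, gphat = local_linear(1.0, idx, y, h=0.2)
print(ghat, np.sin(1.0))                  # estimate vs. truth
```

In practice the bandwidth h = h(n) would be chosen by cross-validation or a rule of thumb; the fixed h here is only for illustration.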
where ω_{nj}(β^(r), θ), j = 1, . . . , 2k + 1, denote the blockwise sums of the estimated auxiliary vectors. By using the Lagrange multiplier method, the bias-corrected blockwise empirical likelihood ratio statistic is

l_n(β^(r), θ) = 2 Σ_{j=1}^{2k+1} log(1 + λ^T(β^(r), θ) ω_{nj}(β^(r), θ)),  (16)

where λ ∈ R^{p+q−1} is determined by the score equation Σ_{j=1}^{2k+1} ω_{nj}(β^(r), θ)/(1 + λ^T ω_{nj}(β^(r), θ)) = 0.
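The Lagrange-multiplier computation can be sketched in the scalar case (our own illustration with simulated block sums, not the paper's code): λ solves the monotone score equation Σ_j ω_j/(1 + λω_j) = 0 on the interval where all factors 1 + λω_j stay positive, after which the log-ratio is 2 Σ_j log(1 + λω_j).

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(omega):
    """Empirical log-likelihood ratio 2*sum(log(1 + lam*omega_j)), scalar case.

    The score sum(omega_j / (1 + lam*omega_j)) is strictly decreasing in lam
    on the feasible interval, so a bracketing root-finder suffices.
    """
    omega = np.asarray(omega, dtype=float)
    score = lambda lam: np.sum(omega / (1.0 + lam * omega))
    # Feasibility 1 + lam*omega_j > 0 for all j confines lam to (lo, hi);
    # this requires omega to contain both signs.
    eps = 1e-10
    lo = -1.0 / omega.max() + eps
    hi = -1.0 / omega.min() - eps
    lam = brentq(score, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * omega))

rng = np.random.default_rng(2)
omega = rng.standard_normal(50)   # stand-in for omega_{nj} at the true value
l = el_log_ratio(omega)
print(l)                          # nonnegative, roughly chi-squared(1) sized
```

For the vector-valued ω_{nj} of the paper, λ ∈ R^{p+q−1} would be found by a damped Newton iteration on the same concave dual problem instead of one-dimensional bracketing.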

Asymptotic Result.
In this subsection, the main result of this paper is stated. The following assumptions are used:

(i) (C1): the density function f(t) of Z^T β is bounded away from 0 on τ and satisfies a Lipschitz condition of order 1 on τ, where τ = {t = Z^T β : Z ∈ A} and A is a compact support set of Z.

(ii) (C2): g(·), μ_{1s}(·), and μ_{2l}(·) have two bounded and continuous derivatives on τ, where μ_{1s}(·) and μ_{2l}(·) are the sth and lth components of μ_1(·) and μ_2(·), 1 ≤ s ≤ p, 1 ≤ l ≤ q, respectively.

(iii) (C3): the kernel K(·) is a bounded symmetric density function and satisfies …

Remark 1. According to [2], (C1)-(C7) guarantee the asymptotic distribution theory. (C8) is a common assumption in the NA setting. (C9) constrains the block size so that the desired results can be obtained.

Theorem 1.
Assume that (C1)-(C9) are satisfied. If (β^(r), θ) is the true value of the parameter and β_r > 0, then, as n ⟶ ∞,

l_n(β^(r), θ) ⟶_L χ²_{p+q−1},

where ⟶_L stands for convergence in distribution.
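Theorem 1 reduces confidence-region construction to a chi-squared quantile lookup: a candidate (β^(r), θ) is retained when l_n does not exceed the (1 − α) quantile of χ²_{p+q−1}. A short sketch, using the Example 2 dimensions p = 3 and q = 2 (so p + q − 1 = 4):

```python
from scipy.stats import chi2

# Degrees of freedom p + q - 1 for the blockwise EL ratio (Theorem 1).
p, q = 3, 2
df = p + q - 1

# 95% cutoff: keep (beta^{(r)}, theta) in the region iff l_n <= cutoff.
cutoff = chi2.ppf(0.95, df)
print(round(cutoff, 4))

def in_confidence_region(l_n, alpha=0.05):
    """Membership test for the EL confidence region at level 1 - alpha."""
    return l_n <= chi2.ppf(1.0 - alpha, df)

print(in_confidence_region(3.0))   # True, since 3.0 is below the cutoff
```

No variance estimation is needed here, which is the usual practical advantage of the empirical likelihood calibration over the normal approximation.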

Simulation
In this section, we use two examples to conduct some simulation studies to compare the performance of the proposed empirical likelihood method (ELM) and the normal approximation method (NAM).
Since β_1 = β_2, we only consider the confidence region of the parameter (β_1, θ). The coverage probabilities of the empirical likelihood confidence regions and the normal approximation confidence regions, at the nominal level 0.95, are reported in Table 1. As expected, the results fit our theory fairly well: the larger the sample size, the closer the empirical coverage probability is to the nominal level, and the proposed empirical likelihood method outperforms the normal approximation method. Figure 1 plots the proposed empirical likelihood confidence region and the normal approximation confidence region for (β_1, θ) at the confidence level 0.95 when the sample size is 300.
Example 2. In this simulation, the coverage probabilities and average lengths of confidence intervals are calculated by the proposed empirical likelihood method and the normal approximation method. Consider Model (1) with p = 3 and q = 2, where β = (1/√14)(1, 2, 3)^T, θ = (0.2, 0.7)^T, and g(·) = 1.5 sin(·). The components of Z are independent draws from the uniform distribution U(0, 1), and the two components of X are from the bivariate standard normal distribution. The kernel function is taken as the Epanechnikov kernel K(u) = 0.75(1 − u²)I(|u| ≤ 1). Based on 500 simulation runs, the simulation results are reported in Table 2, from which the following conclusions can be drawn: the coverage probabilities of both the empirical likelihood method and the normal approximation method are in agreement with the nominal level of 0.90, and the empirical likelihood method yields slightly shorter interval lengths than the normal approximation method.
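The data-generating step of Example 2 can be sketched as follows. The paper does not specify how the NA errors are generated, so the construction below (independent negatively correlated Gaussian pairs; a Gaussian vector with nonpositive correlations is NA, and concatenating independent NA blocks preserves the NA property) is our own illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
beta = np.array([1.0, 2.0, 3.0]) / np.sqrt(14.0)   # Example 2 parameters
theta = np.array([0.2, 0.7])
g = lambda t: 1.5 * np.sin(t)

Z = rng.uniform(0.0, 1.0, size=(n, 3))             # components iid U(0, 1)
X = rng.standard_normal((n, 2))                     # bivariate standard normal

# NA errors: independent pairs (eps_{2k-1}, eps_{2k}) with correlation -0.5.
rho = -0.5
cov = np.array([[1.0, rho], [rho, 1.0]])
eps = rng.multivariate_normal([0.0, 0.0], cov, size=n // 2).ravel()

# Model (1): Y = g(Z^T beta) + X^T theta + eps.
Y = g(Z @ beta) + X @ theta + eps
print(Y.shape, np.corrcoef(eps[0::2], eps[1::2])[0, 1])
```

Each simulated data set of this form would then be passed through the local linear smoothing and blockwise empirical likelihood steps of Section 2 to produce one replication of the coverage study.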

Proofs
In order to prove Theorem 1, we first give some lemmas. Throughout this section, for concise and convenient notation, we use c (0 < c < ∞) to denote a generic constant that may take a different value at each appearance, use λ_min(A) and λ_max(A) to denote the smallest and largest eigenvalues of A, respectively, and write A^{⊗2} = A · A^T.

Lemma 1. Let a_i, i = 1, 2, . . . , n, and b_i, i = 1, 2, . . . , n, be any two sequences; then, where (j_1, j_2, . . . , j_n) is an arbitrary permutation of (1, 2, . . . , n).
The proof of Lemma 1. First, assume b_i ≥ 0, i = 1, 2, . . . , n. Via the Abel inequality, it follows that (23) holds. Second, assume b_i < 0, i = 1, 2, . . . , n; it then also follows that (24) holds. Combining (23) and (24), we obtain the result.

Lemma 2. Let a_1, a_2, . . . , a_n be any random variables with max_{1≤i≤n} E|a_i|^s ≤ c < ∞ for some constants s > 0 and c > 0. Then, the stated bound holds; refer to the proof of Lemma 1 of [11].

The proof of Lemma 3 can be completed with the work of [16].
The proof of Lemma 4 is similar to the proof of Lemma 3 of [12], so the details are omitted here.
The proof of Lemma 5. It is easy to show that