Journal of Probability and Statistics, Hindawi Publishing Corporation, Volume 2012, Article ID 138450, doi:10.1155/2012/138450

Research Article: New Bandwidth Selection for Kernel Quantile Estimators

Ali Al-Kenani, Keming Yu, and Junbin B. Gao
Department of Mathematical Sciences, Brunel University, Uxbridge, UK

Copyright © 2012 Ali Al-Kenani and Keming Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose a cross-validation method suitable for smoothing kernel quantile estimators. In particular, the proposed method selects the bandwidth parameter, which is known to play a crucial role in kernel smoothing, based on unbiased estimation of a mean integrated squared error curve whose minimising value determines the optimal bandwidth. This method is shown to lead to asymptotically optimal bandwidth choice, and we also provide some general theory on the performance of optimal, data-based methods of bandwidth choice. The numerical performance of the proposed method is compared in simulations, and the new bandwidth selection is demonstrated to work very well.

1. Introduction

The estimation of population quantiles is of great interest when one is not prepared to assume a parametric form for the underlying distribution. In addition, due to their robust nature, quantiles often arise as natural quantities to estimate when the underlying distribution is skewed. Similarly, quantiles often arise in statistical inference as the limits of confidence intervals for an unknown quantity.

Let $X_1, X_2, \ldots, X_n$ be an independent and identically distributed random sample drawn from an absolutely continuous distribution function $F$ with density $f$. Further, let $X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)}$ denote the corresponding order statistics. For $0 < p < 1$, the quantile function $Q(p)$ is defined as
$$Q(p) = \inf\{x : F(x) \ge p\}. \tag{1.1}$$
If $\hat{Q}(p)$ denotes the $p$th sample quantile, then $\hat{Q}(p) = X_{([np]+1)}$, where $[np]$ denotes the integer part of $np$. Because of the variability of individual order statistics, sample quantiles suffer from a lack of efficiency. In order to reduce this variability, different approaches to estimating sample quantiles through weighted order statistics have been proposed. A popular class of such estimators is the class of kernel quantile estimators. Parzen proposed a version of the kernel quantile estimator as follows:
$$\tilde{Q}_K(p) = \sum_{i=1}^{n} \left[ \int_{(i-1)/n}^{i/n} K_h(t - p)\,dt \right] X_{(i)}. \tag{1.2}$$
From (1.2) one can readily observe that $\tilde{Q}_K(p)$ puts most weight on the order statistics $X_{(i)}$ for which $i/n$ is close to $p$. In practice, the following approximation to $\tilde{Q}_K(p)$ is often used:
$$\tilde{Q}_{AK}(p) = \sum_{i=1}^{n} \left[ n^{-1} K_h\!\left(\frac{i}{n} - p\right) \right] X_{(i)}. \tag{1.3}$$
Yang proved that $\tilde{Q}_K(p)$ and $\tilde{Q}_{AK}(p)$ are asymptotically equivalent in terms of mean squared error. Similarly, Falk demonstrated that, from a relative deficiency perspective, the asymptotic performance of $\tilde{Q}_{AK}(p)$ is better than that of the empirical sample quantile.
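The estimator (1.3) is simply a kernel-weighted sum of the order statistics and is straightforward to implement. The following is a minimal sketch; the Gaussian kernel and the function names are illustrative choices, not mandated by the text at this point:

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel K(u) (an illustrative choice of K)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kernel_quantile(x, p, h):
    """Approximate kernel quantile estimator (1.3):
    Q~_AK(p) = sum_i n^-1 K_h(i/n - p) X_(i), with K_h(u) = K(u/h)/h."""
    x = np.sort(np.asarray(x, dtype=float))  # order statistics X_(1) <= ... <= X_(n)
    n = len(x)
    i = np.arange(1, n + 1)
    weights = gaussian_kernel((i / n - p) / h) / (n * h)  # n^-1 K_h(i/n - p)
    return float(np.sum(weights * x))
```

As the formula suggests, the result is a local average of the order statistics whose indices $i/n$ lie within a few bandwidths of $p$.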

In this paper, we propose a cross-validation method suitable for smoothing kernel quantile estimators. In particular, the proposed method selects the bandwidth parameter, which is known to play a crucial role in kernel smoothing, based on unbiased estimation of a mean integrated squared error curve whose minimising value determines the optimal bandwidth. The method is shown to lead to asymptotically optimal bandwidth choice, and we also provide some general theory on the performance of optimal, data-based methods of bandwidth choice. The numerical performance of the proposed method is compared in simulations, and the new bandwidth selection is demonstrated to work very well.

2. Data-Based Selection of the Bandwidth

The bandwidth plays a critical role in the implementation of practical estimation. Specifically, the choice of the smoothing parameter determines the trade-off between the amount of smoothness obtained and the closeness of the estimate to the true distribution.

Several data-based methods can be used to find the asymptotically optimal bandwidth $h$ in the kernel quantile estimator $\tilde{Q}_{AK}(p)$ given by (1.3). One of these methods uses derivatives of the quantile density for $\tilde{Q}_{AK}(p)$.

Building on Falk, Sheather and Marron gave the MSE of $\tilde{Q}_{AK}(p)$ as follows. If $f$ is not symmetric, or $f$ is symmetric but $p \ne 0.5$,
$$\mathrm{AMSE}(\tilde{Q}_{AK}(p)) = \frac{1}{4}\mu_2(K)^2 [Q''(p)]^2 h^4 + p(1-p)[Q'(p)]^2 n^{-1} - R(K)[Q'(p)]^2 n^{-1} h, \tag{2.1}$$
where $R(K) = 2\int u\,K(u)\,K^{(-1)}(u)\,du$, $\mu_2(K) = \int u^2 K(u)\,du$, and $K^{(-1)}$ is the antiderivative of $K$.

If $Q''(p) \ne 0$, then
$$h_{\mathrm{opt}} = \alpha(K)\,\beta(Q)\,n^{-1/3}, \tag{2.2}$$
where $\alpha(K) = [R(K)/\mu_2(K)^2]^{1/3}$ and $\beta(Q) = [Q'(p)/Q''(p)]^{2/3}$.

There is no single optimal bandwidth minimising $\mathrm{AMSE}(\tilde{Q}_{AK}(p))$ when $F$ is symmetric and $p = 0.5$. Also, if $q'(p) = 0$, higher-order terms are needed, and the AMSE can be shown to be
$$\mathrm{AMSE}(\tilde{Q}_{AK}(p)) = \left(\frac{1}{4} - \frac{1}{n}\right) h^4 [Q''(p)]^2 \mu_2(K)^2 + 2 n^{-1} h^2 [Q''(p)]^2 \int t\,K(t)\,j(t)\,dt, \tag{2.3}$$
where $j(t) = \int_{-\infty}^{t} x K(x)\,dx$; see Cheng and Sun.

In order to obtain $h_{\mathrm{opt}}$, we need to estimate $Q' = q$ and $Q'' = q'$. It follows from (1.3) that an estimator of $Q' = q$ can be constructed as
$$\tilde{q}_{AK}(p) = \tilde{Q}'_{AK}(p) = \sum_{i=1}^{n} X_{(i)} \left[ K_a\!\left(\frac{i-1}{n} - p\right) - K_a\!\left(\frac{i}{n} - p\right) \right]. \tag{2.4}$$
Jones derived the AMSE of $\tilde{q}_{AK}(p)$ as
$$\mathrm{AMSE}(\tilde{q}_{AK}(p)) = \frac{a^4}{4}\mu_2(K)^2 [q''(p)]^2 + \frac{1}{na} [q(p)]^2 \int K^2(y)\,dy. \tag{2.5}$$
By minimising (2.5), we obtain the asymptotically optimal bandwidth for $\tilde{q}_{AK}(p)$:
$$a^*_{\mathrm{opt}} = \left[ \frac{[Q'(p)]^2 \int K^2(y)\,dy}{n\, [Q'''(p)]^2 \mu_2(K)^2} \right]^{1/5}. \tag{2.6}$$
To estimate $Q'' = q'$ in (2.2), we employ the known result
$$\tilde{Q}''_{AK}(p) = \frac{d}{dp}\tilde{Q}'_{AK}(p) = \frac{1}{a^2} \sum_{i=1}^{n} X_{(i)} \left[ K'\!\left(\frac{(i-1)/n - p}{a}\right) - K'\!\left(\frac{i/n - p}{a}\right) \right], \tag{2.7}$$
and it readily follows that
$$a^{**}_{\mathrm{opt}} = \left[ \frac{3\,[Q'(p)]^2 \int K'^2(x)\,dx}{n\, [Q''''(p)]^2 \mu_2(K)^2} \right]^{1/7}, \tag{2.8}$$
which is the asymptotically optimal bandwidth for $\tilde{Q}''_{AK}(p)$. By substituting $a = a^*_{\mathrm{opt}}$ in (2.4) and $a = a^{**}_{\mathrm{opt}}$ in (2.7), we can compute $h_{\mathrm{opt}}$.
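The derivative estimator above is equally direct to code. A minimal sketch, again assuming a Gaussian kernel (so that $K_a(u) = K(u/a)/a$), with all names illustrative:

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel (an illustrative choice)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def quantile_density(x, p, a):
    """Estimator of q(p) = Q'(p) in the spirit of (2.4):
    q~_AK(p) = sum_i X_(i) [K_a((i-1)/n - p) - K_a(i/n - p)]."""
    x = np.sort(np.asarray(x, dtype=float))  # order statistics
    n = len(x)
    i = np.arange(1, n + 1)
    ka_left = gaussian_kernel(((i - 1) / n - p) / a) / a   # K_a((i-1)/n - p)
    ka_right = gaussian_kernel((i / n - p) / a) / a        # K_a(i/n - p)
    return float(np.sum(x * (ka_left - ka_right)))
```

By an Abel-summation argument, the sum behaves like a Riemann approximation of $\int K_a(t-p)\,dQ(t)$, so for data resembling a uniform sample the estimate at interior $p$ is close to $q(p) = 1$.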

3. Cross-Validation Bandwidth Selection

When measuring the closeness of an estimated and a true function, the mean integrated squared error (MISE), defined as
$$\mathrm{MISE}(h) = E \int_0^1 \{\tilde{Q}(p) - Q(p)\}^2\,dp, \tag{3.1}$$
is commonly used as a global measure of performance.

The value of $h$ which minimises $\mathrm{MISE}(h)$ is the optimal smoothing parameter, and it is unknown in practice. The following average squared error, $\mathrm{ASE}(h)$, is the discrete error criterion approximating $\mathrm{MISE}(h)$:
$$\mathrm{ASE}(h) = \frac{1}{n} \sum_{i=1}^{n} \left\{ \tilde{Q}\!\left(\frac{i}{n}\right) - Q\!\left(\frac{i}{n}\right) \right\}^2. \tag{3.2}$$

Replacing the unknown $Q(p)$ by $\hat{Q}(p)$ gives the cross-validatory criterion
$$\frac{1}{n} \sum_{i=1}^{n} \left\{ \tilde{Q}_{-i}\!\left(\frac{i}{n}\right) - \hat{Q}\!\left(\frac{i}{n}\right) \right\}^2, \tag{3.3}$$
where $\tilde{Q}_{-i}(i/n)$ denotes the kernel estimator evaluated at $p = i/n$ but constructed from the data with observation $x_i$ omitted.
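The leave-one-out criterion above can be computed by brute force: for each $i$, re-estimate the quantile at $i/n$ from the sample with $x_i$ removed and compare with the sample quantile. A minimal sketch, where the Gaussian kernel and the grid search are illustrative choices:

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel (an illustrative choice)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kernel_quantile(x_sorted, p, h):
    """Q~_AK(p) of (1.3) from an already-sorted sample."""
    n = len(x_sorted)
    i = np.arange(1, n + 1)
    w = gaussian_kernel((i / n - p) / h) / (n * h)
    return float(np.sum(w * x_sorted))

def cv_score(x, h):
    """(1/n) sum_i { Q~_{-i}(i/n) - Qhat(i/n) }^2, the leave-one-out criterion."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    total = 0.0
    for i in range(n):
        p = (i + 1) / n
        x_loo = np.delete(x, i)               # sample with observation i omitted
        q_loo = kernel_quantile(x_loo, p, h)  # kernel estimate from n - 1 points
        q_hat = x[min(i + 1, n - 1)]          # sample quantile X_([np]+1) at p = (i+1)/n
        total += (q_loo - q_hat) ** 2
    return total / n

def select_bandwidth(x, grid):
    """Choose the h on the grid that minimises the criterion."""
    scores = [cv_score(x, h) for h in grid]
    return grid[int(np.argmin(scores))]
```

The cost is $O(n^2)$ per candidate bandwidth, which is acceptable for the sample sizes considered later in the paper.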

The general approach of cross-validation is to compare each observation with a value predicted by the model based on the remainder of the data. A method for density estimation was proposed by Rudemo and Bowman. This method can be viewed as representing each observation by a Dirac delta function $\delta(x - x_i)$, whose expectation is $f(x)$, and contrasting it with a density estimate based on the remainder of the data. In the context of distribution functions, a natural characterisation of each observation is the indicator function $I(x - x_i)$, whose expectation is $F(x)$. The kernel method for density estimation can be expressed as
$$\tilde{f}(x) = \frac{1}{n} \sum_{i=1}^{n} K_h(x - x_i), \tag{3.4}$$

and, as $h \to 0$, $K_h(x - x_i) \to \delta(x - x_i)$.

The kernel method for the distribution function is
$$\tilde{F}(x) = \frac{1}{n} \sum_{i=1}^{n} W\!\left(\frac{x - x_i}{h}\right), \tag{3.5}$$

where $W$ is a distribution function and $h$ is the bandwidth controlling the degree of smoothing. As $h \to 0$, $W((x - x_i)/h) \to I(x - x_i)$,

where $I(x - x_i)$ is the indicator function
$$I(x - x_i) = \begin{cases} 1, & x - x_i \ge 0, \\ 0, & \text{otherwise.} \end{cases} \tag{3.6}$$
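The smoothed distribution function and its $h \to 0$ limit can be illustrated concretely. In this sketch, $W$ is taken to be the standard normal CDF, which is only one admissible choice of the distribution function $W$:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(u):
    """Standard normal CDF, used here as the smoothing distribution W (an assumption)."""
    return 0.5 * (1.0 + erf(u / sqrt(2.0)))

def kernel_cdf(data, x, h):
    """F~(x) = (1/n) sum_i W((x - x_i)/h)."""
    data = np.asarray(data, dtype=float)
    return float(np.mean([norm_cdf((x - xi) / h) for xi in data]))

def empirical_cdf(data, x):
    """The h -> 0 limit: (1/n) sum_i I(x - x_i), the empirical CDF."""
    data = np.asarray(data, dtype=float)
    return float(np.mean(data <= x))
```

For small $h$, `kernel_cdf` is numerically indistinguishable from `empirical_cdf` away from the data points, matching the limit stated above.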

Now, from (1.3), as $h \to 0$ each summand of $\tilde{Q}_{AK}(p)$ tends to $\delta(i/n - p) X_{(i)}$, and thus a cross-validation function can be written as
$$\mathrm{CV}(h) = \frac{1}{n} \sum_{i=1}^{n} \int_0^1 \left\{ \delta\!\left(\frac{i}{n} - p\right) X_{(i)} - \tilde{Q}_{-i}(p) \right\}^2 dp. \tag{3.7}$$
The smoothing parameter $h$ is then chosen to minimise this function. By subtracting a term that characterises the performance of the true $Q(p)$, we have
$$H(h) = \mathrm{CV}(h) - \frac{1}{n} \sum_{i=1}^{n} \int_0^1 \left\{ \delta\!\left(\frac{i}{n} - p\right) X_{(i)} - Q(p) \right\}^2 dp, \tag{3.8}$$
where the subtracted term does not involve $h$. By expanding the braces, we obtain
$$H(h) = \frac{1}{n} \sum_{i=1}^{n} \int_0^1 \left\{ \tilde{Q}_{-i}^2(p) - 2\,\delta\!\left(\frac{i}{n} - p\right) X_{(i)}\, \tilde{Q}_{-i}(p) + 2\,\delta\!\left(\frac{i}{n} - p\right) X_{(i)}\, Q(p) - Q^2(p) \right\} dp. \tag{3.9}$$
As $n \to \infty$, the $(np)$th order statistic $X_{(np)}$ is asymptotically normally distributed,
$$X_{(np)} \sim \mathrm{AN}\!\left( Q(p),\ \frac{p(1-p)}{n\,[f(Q(p))]^2} \right), \tag{3.10}$$
so that $E\{X_{(i)}\} \approx Q(i/n)$. Taking expectations, using the independence of $\tilde{Q}_{-i}$ and $X_{(i)}$,
$$E\{H(h)\} = \frac{1}{n} \sum_{i=1}^{n} \int_0^1 \left[ E\{\tilde{Q}_{-i}^2(p)\} - 2\,\delta\!\left(\frac{i}{n} - p\right) Q\!\left(\frac{i}{n}\right) E\{\tilde{Q}_{-i}(p)\} + 2\,\delta\!\left(\frac{i}{n} - p\right) Q^2\!\left(\frac{i}{n}\right) - Q^2(p) \right] dp, \tag{3.11}$$
and hence
$$E\{H(h)\} = E \int_0^1 \{\tilde{Q}_{n-1}(p) - Q(p)\}^2\,dp, \tag{3.12}$$
where the notation $\tilde{Q}_{n-1}$ denotes a kernel estimator based on a sample of size $n-1$. The preceding argument demonstrates that $\mathrm{CV}(h)$ provides an asymptotically unbiased estimator of the true $\mathrm{MISE}(h)$ curve for a sample of size $n-1$. The identity (3.12) strongly suggests that cross-validation should perform well.

4. Theoretical Properties

From (3.1), we can write
$$\mathrm{MISE}(h) = \int_0^1 \mathrm{bias}^2(\tilde{Q}_K(p))\,dp + \int_0^1 \mathrm{var}(\tilde{Q}_K(p))\,dp. \tag{4.1}$$

Sheather and Marron have shown that
$$\mathrm{bias}(\tilde{Q}_K(p)) = \frac{1}{2} h^2 \mu_2(K)\, Q''(p) + o(h^2), \tag{4.2}$$
while Falk [4, page 263] proved that
$$\mathrm{var}(\tilde{Q}_K(p)) = p(1-p)[Q'(p)]^2 n^{-1} - R(K)[Q'(p)]^2 n^{-1} h + o(n^{-1} h). \tag{4.3}$$

On combining the expressions for bias and variance, we can express the mean integrated squared error as
$$\mathrm{MISE}(h) = \frac{1}{4} h^4 \mu_2(K)^2 \int_0^1 [Q''(p)]^2\,dp + n^{-1} \int_0^1 p(1-p)[Q'(p)]^2\,dp - n^{-1} h\, R(K) \int_0^1 [Q'(p)]^2\,dp + o(h^4 + n^{-1}h), \tag{4.4}$$

and, for $C_1 = \int_0^1 p(1-p)[Q'(p)]^2\,dp$, $C_2 = R(K)\int_0^1 [Q'(p)]^2\,dp$, and $C_3 = \mu_2(K)^2 \int_0^1 [Q''(p)]^2\,dp$, the MISE can be expressed as
$$\mathrm{MISE}(h) = C_1 n^{-1} - C_2 n^{-1} h + \frac{1}{4} C_3 h^4 + o(h^4 + n^{-1}h). \tag{4.5}$$
Therefore, the asymptotically optimal bandwidth is $h_0 = C n^{-1/3}$, where $C = \{C_2/C_3\}^{1/3}$.
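The closed-form minimiser $h_0$ can be checked numerically against a grid search over the leading terms of the expansion. In this sketch the constants $C_1$, $C_2$, $C_3$ are arbitrary illustrative values, not derived from any particular distribution:

```python
import numpy as np

def mise_approx(h, n, C1, C2, C3):
    """Leading terms of the MISE expansion: C1/n - C2 h/n + C3 h^4 / 4."""
    return C1 / n - C2 * h / n + 0.25 * C3 * h**4

def h_opt_closed_form(n, C2, C3):
    """h0 = (C2/C3)^(1/3) n^(-1/3), the minimiser of the h-dependent part."""
    return (C2 / C3) ** (1.0 / 3.0) * n ** (-1.0 / 3.0)

# Grid search agrees with the closed form (C1, C2, C3 are illustrative values).
n, C1, C2, C3 = 400, 1.0, 0.5, 2.0
grid = np.linspace(1e-4, 1.0, 20000)
h_grid = grid[np.argmin(mise_approx(grid, n, C1, C2, C3))]
h0 = h_opt_closed_form(n, C2, C3)
```

Setting the derivative $-C_2/n + C_3 h^3$ to zero recovers exactly $h_0 = (C_2/C_3)^{1/3} n^{-1/3}$, which the grid search confirms.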

We can see from (3.12) that $H(h)$ may be a good approximation to $\mathrm{MISE}(h)$, or at least to that function evaluated for a sample of size $n-1$ rather than $n$. Additionally, this remains true if we adjust $H(h)$ by adding the quantity
$$J_n = \int_0^1 \left\{ (\hat{Q}(p) - Q(p))^2 - E(\hat{Q}(p) - Q(p))^2 \right\} dp. \tag{4.6}$$
This quantity has zero mean and does not depend on $h$, which makes it attractive for obtaining a particularly good approximation to $\mathrm{MISE}(h)$.

Theorem 4.1.

Suppose that $Q(p)$ is bounded on $[0,1]$ and right continuous at the point 0, and that $K$ is a compactly supported density, symmetric about 0. Then, for each $\delta, \varepsilon, C > 0$,
$$H(h) + J_n = \mathrm{MISE}(h) + O_2\{(n^{-3/2} + n^{-1}h^{3/2} + n^{-1/2}h^3)\, n^\delta\}$$
with probability 1, uniformly in $0 \le h \le C n^{-\varepsilon}$, as $n \to \infty$.

(An outline of the proof of the above theorem is given in the appendix.)

From the above theorem, we can conclude that minimisation of H(h) produces a bandwidth that is asymptotically equivalent to the bandwidth h0 that minimises MISE(h).

Corollary 4.2.

Suppose that the conditions of the previous theorem hold. If $\hat{h}$ denotes the bandwidth that minimises $\mathrm{CV}(h)$ in the range $0 \le h \le C n^{-\varepsilon}$, for any $C > 0$ and any $0 \le \varepsilon \le 1/3$, then $\hat{h}/h_0 \to 1$ with probability 1 as $n \to \infty$.

5. A Simulation Study

A numerical study was conducted to compare the performance of two bandwidth selection methods, namely the method presented by Sheather and Marron and our proposed method.

In order to account for different shapes in our simulation study, we consider the standard normal, Exp(1), Log-normal(0,1), and double exponential distributions, and we calculate 18 quantiles ranging from $p = 0.05$ to $p = 0.95$. Throughout the numerical study the Gaussian kernel was used as the kernel function. Sample sizes of 100, 200, and 500 were used, with 100 simulations in each case. The performance of the methods was assessed through the mean squared error criterion,
$$\mathrm{MSE}(h) = E\{\tilde{Q}(p) - Q(p)\}^2, \tag{5.1}$$
and the relative efficiency,
$$\mathrm{R.E.} = \frac{\mathrm{MISE}_{\text{Method 2}}(h_{\text{Method 2,opt}})}{\mathrm{MISE}_{\text{Method 1}}(h_{\text{Method 1,opt}})}. \tag{5.2}$$
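The MSE criterion above can be approximated by Monte Carlo. The sketch below does this for standard normal data with the Gaussian kernel used in the study; the sample size, bandwidth, and seed are illustrative, and the function names are not from the paper:

```python
import numpy as np
from statistics import NormalDist

def gaussian_kernel(u):
    """Standard Gaussian kernel, as used in the simulation study."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kernel_quantile(x, p, h):
    """Approximate kernel quantile estimator (1.3)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    w = gaussian_kernel((i / n - p) / h) / (n * h)
    return float(np.sum(w * x))

def mc_mse(p, h, sample_size=100, n_sims=100, seed=0):
    """Monte Carlo approximation of MSE(h) = E{Q~(p) - Q(p)}^2 for N(0,1) data."""
    rng = np.random.default_rng(seed)
    true_q = NormalDist().inv_cdf(p)  # exact N(0,1) quantile
    errs = [kernel_quantile(rng.standard_normal(sample_size), p, h) - true_q
            for _ in range(n_sims)]
    return float(np.mean(np.square(errs)))
```

Repeating this for each distribution, quantile level, sample size, and bandwidth selector reproduces the structure of the tables that follow.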

Further, for comparison purposes, we refer to our proposed method as method 1 and to that of Sheather and Marron as method 2.

(a) Standard normal distribution (see Table 1 and Figure 1).

Mean squared errors results for bandwidth selection methods for different sample sizes and for data from a normal distribution.

p     method      n=100        n=200        n=500
0.05  method 1    0.34841956   0.321073870  0.298936771
      method 2    0.29636758   0.164364738  0.090598082
0.10  method 1    0.07645956   0.065440575  0.054697205
      method 2    0.04947745   0.022846355  0.015566907
0.15  method 1    0.02291501   0.013920668  0.007384189
      method 2    0.02939708   0.013386234  0.005005849
0.20  method 1    0.01891919   0.009273746  0.003152866
      method 2    0.02228828   0.010094172  0.003812209
0.25  method 1    0.01596948   0.008581398  0.003000777
      method 2    0.01835912   0.008880639  0.003568772
0.30  method 1    0.01614981   0.008035667  0.003208531
      method 2    0.01639148   0.008299838  0.003375445
0.35  method 1    0.01461880   0.007677567  0.003534028
      method 2    0.01544790   0.007763629  0.003012045
0.40  method 1    0.01279474   0.007375428  0.002899081
      method 2    0.01494506   0.007248497  0.002661230
0.45  method 1    0.01224268   0.006128817  0.002183302
      method 2    0.01444153   0.006790490  0.002295830
0.55  method 1    0.01414050   0.006348893  0.001922013
      method 2    0.01373258   0.006702430  0.002099446
0.60  method 1    0.01375373   0.006392721  0.002007274
      method 2    0.01341763   0.006762798  0.002254869
0.65  method 1    0.01344773   0.006063502  0.002589679
      method 2    0.01290569   0.006801901  0.002507202
0.70  method 1    0.01320832   0.006394102  0.002456085
      method 2    0.01233948   0.007001064  0.002691678
0.75  method 1    0.01503264   0.007011867  0.002789939
      method 2    0.01219829   0.007216326  0.002679609
0.80  method 1    0.01604847   0.007246605  0.002715445
      method 2    0.01327836   0.007602346  0.002791240
0.85  method 1    0.01757171   0.009239589  0.004770755
      method 2    0.01740931   0.009522181  0.003848474
0.90  method 1    0.03192379   0.023292975  0.019942754
      method 2    0.03702774   0.018053976  0.012250413
0.95  method 1    0.15323893   0.147773963  0.150811561
      method 2    0.24825188   0.146840177  0.092517440

Left panel: plots of the quantile estimators for method 1 (solid line), method 2 (dotted line), and true quantile (dashed line) for different sample sizes and for data from a normal distribution. Right panel: box plots of mean squared errors for the quantile estimators for method 1 and method 2 for different sample sizes.

(b) Exponential distribution (see Table 2 and Figure 2).

Mean squared errors results for bandwidth selection methods for different sample sizes and for data from an exponential distribution.

p     method      n=100         n=200         n=500
0.05  method 1    0.001687025   0.0014699990  0.0014107454
      method 2    0.0006023236  0.0002476745  8.122873e-05
0.10  method 1    0.001306211   0.0009229338  0.0007744410
      method 2    0.0008225254  0.0004075822  1.749150e-04
0.15  method 1    0.001589646   0.0008940486  0.0006237375
      method 2    0.0012963576  0.0006938287  3.186597e-04
0.20  method 1    0.002187990   0.0011477063  0.0006801504
      method 2    0.0019188172  0.0010358272  4.746909e-04
0.25  method 1    0.002916417   0.0015805678  0.0008156225
      method 2    0.0026838659  0.0014096523  6.303538e-04
0.30  method 1    0.003827511   0.0019724207  0.0010289166
      method 2    0.0036542688  0.0018358956  7.948940e-04
0.35  method 1    0.004919618   0.0025540323  0.0012720751
      method 2    0.0048301657  0.0023318358  9.724792e-04
0.40  method 1    0.005868113   0.0031932355  0.0016253398
      method 2    0.0060092243  0.0028998751  1.170038e-03
0.45  method 1    0.007267783   0.0039962426  0.0021094081
      method 2    0.0072785641  0.0035363816  1.417269e-03
0.55  method 1    0.011776976   0.0065148222  0.0039208447
      method 2    0.0110599156  0.0055548552  2.154130e-03
0.60  method 1    0.012864521   0.0070366699  0.0026965785
      method 2    0.0138585365  0.0070359561  2.626137e-03
0.65  method 1    0.018173097   0.0086476349  0.0031472559
      method 2    0.0169709413  0.0088832263  3.255114e-03
0.70  method 1    0.021125532   0.0111607501  0.0041235720
      method 2    0.0201049720  0.0114703180  4.201740e-03
0.75  method 1    0.024025836   0.0150785289  0.0057215181
      method 2    0.0229763952  0.0149490250  5.812526e-03
0.80  method 1    0.037367344   0.0204676368  0.0081595071
      method 2    0.0407106885  0.0181647976  8.020787e-03
0.85  method 1    0.057785539   0.0317404871  0.0098128398
      method 2    0.0838657681  0.0300656149  1.134861e-02
0.90  method 1    0.078797379   0.0426418410  0.0152139697
      method 2    0.1878456852  0.1117820016  2.156987e-02
0.95  method 1    0.121239102   0.0810135450  0.0284524316
      method 2    0.6668323836  0.4923732684  1.478679e-01

Left panel: plots of the quantile estimators for method 1 (solid line), method 2 (dotted line) and true quantile (dashed line) for different sample sizes and for data from an exponential distribution. Right panel: box plots of mean squared errors for the quantile estimators for method 1 and method 2 for different sample sizes.

(c) Log-normal distribution (see Table 3 and Figure 3).

Mean squared errors results for bandwidth selection methods for different sample sizes and for data from a Log-normal distribution.

p     method      n=100        n=200         n=500
0.05  method 1    0.001663032  0.0010098573  0.0006568989
      method 2    0.002384136  0.0007270441  0.0003613541
0.10  method 1    0.001863141  0.0008438333  0.0002915013
      method 2    0.002601994  0.0008361475  0.0002981938
0.15  method 1    0.002633153  0.0013492870  0.0004451506
      method 2    0.002623552  0.0011943144  0.0003738508
0.20  method 1    0.003753458  0.0019922356  0.0006866399
      method 2    0.003107351  0.0014724525  0.0005685022
0.25  method 1    0.004956635  0.0027140878  0.0009886053
      method 2    0.004564382  0.0022952079  0.0008557756
0.30  method 1    0.006480195  0.0035603171  0.0015897314
      method 2    0.006436967  0.0031574264  0.0011938924
0.35  method 1    0.008858850  0.0047972372  0.0023446072
      method 2    0.008443129  0.0038626105  0.0015443970
0.40  method 1    0.010053969  0.0055989143  0.0022496198
      method 2    0.010893398  0.0051735721  0.0017579579
0.45  method 1    0.012998940  0.0069058362  0.0030102466
      method 2    0.013607931  0.0063606758  0.0019799551
0.55  method 1    0.019687850  0.0115431473  0.0051386226
      method 2    0.020581110  0.0100828810  0.0029554466
0.60  method 1    0.023881883  0.0129227902  0.0046644050
      method 2    0.025845419  0.0129081138  0.0040301844
0.65  method 1    0.032155537  0.0160476126  0.0056732073
      method 2    0.035737008  0.0167147469  0.0056528658
0.70  method 1    0.045027965  0.0249576836  0.0077709058
      method 2    0.042681315  0.0223936302  0.0077616346
0.75  method 1    0.060715676  0.0318891176  0.0121926243
      method 2    0.059276198  0.0323738749  0.0104119217
0.80  method 1    0.087694754  0.0450814911  0.0165993582
      method 2    0.090704630  0.0530374710  0.0168162426
0.85  method 1    0.140537374  0.0840290373  0.0311728395
      method 2    0.193857196  0.1131949907  0.0350218855
0.90  method 1    0.289944417  0.1642236062  0.0679038026
      method 2    0.552092689  0.2763301818  0.1112433633
0.95  method 1    1.119717137  0.4764026616  0.1984216218
      method 2    2.306672668  1.3159008668  0.2217620895

Left panel: plots of the quantile estimators for method 1 (solid line), method 2 (dotted line) and true quantile (dashed line) for different sample sizes and for data from a Log-normal distribution. Right panel: box plots of mean squared errors for the quantile estimators for method 1 and method 2 for different sample sizes.

(d) Double exponential distribution (see Table 4 and Figure 4).

Mean squared errors results for bandwidth selection methods for different sample sizes and for data from a double exponential distribution.

p     method      n=100       n=200        n=500
0.05  method 1    0.35372420  0.288207742  0.251339747
      method 2    0.45458819  0.315704320  0.051385372
0.10  method 1    0.07123072  0.043684160  0.029307307
      method 2    0.14868952  0.097871072  0.023601368
0.15  method 1    0.05081769  0.025358946  0.009241326
      method 2    0.09377244  0.035207151  0.010910214
0.20  method 1    0.02489079  0.015360242  0.007647199
      method 2    0.04997348  0.024864359  0.008013159
0.25  method 1    0.01863802  0.012204904  0.004401402
      method 2    0.03117942  0.019101033  0.006247279
0.30  method 1    0.01869611  0.012031162  0.004145965
      method 2    0.02516932  0.014680335  0.004847191
0.35  method 1    0.01562279  0.009560873  0.003235724
      method 2    0.02017404  0.011355808  0.003513386
0.40  method 1    0.01430068  0.007860775  0.002493813
      method 2    0.01669505  0.009165203  0.002621345
0.45  method 1    0.01386331  0.007587705  0.002485022
      method 2    0.01529664  0.008221501  0.002104265
0.55  method 1    0.01501458  0.007801051  0.002013993
      method 2    0.01280613  0.007796411  0.002227569
0.60  method 1    0.01712203  0.009076922  0.002233672
      method 2    0.01394454  0.009475605  0.002791236
0.65  method 1    0.01946241  0.011129870  0.003521070
      method 2    0.01840894  0.012558998  0.003628169
0.70  method 1    0.02098394  0.011997405  0.003255335
      method 2    0.02333092  0.015792466  0.004534060
0.75  method 1    0.02791943  0.016885471  0.004419826
      method 2    0.02937457  0.019852122  0.005469359
0.80  method 1    0.03532806  0.021319714  0.005471649
      method 2    0.04294634  0.024757804  0.007270187
0.85  method 1    0.05463890  0.030489951  0.011338629
      method 2    0.08441144  0.035306415  0.012054182
0.90  method 1    0.09188621  0.058587164  0.030485192
      method 2    0.14755444  0.083844232  0.024399440
0.95  method 1    0.28184945  0.224432372  0.180645893
      method 2    0.51462209  0.319147435  0.076406491

Left panel: plots of the quantile estimators for method 1 (solid line), method 2 (dotted line) and true quantile (dashed line) for different sample sizes and for data from a double exponential distribution. Right panel: box plots of mean squared error for the quantile estimators for method 1 and method 2 for different sample sizes.

We can compute and summarize the relative efficiency of hMethod  1,opt for the all previous distributions in Table 5.

The relative efficiency (R.E) of hMethod 1,opt.

n     Standard normal dist.  Exponential dist.  Log-normal dist.  Double exponential dist.
100   1.037276               2.636250           1.806082          1.520903
200   0.6986324              2.952808           2.096307          1.308667
500   0.4455828              2.324423           1.173547          0.4519134

From Tables 1, 2, 3, and 4, for all the distributions, it can be observed that our method produces lower mean squared errors in 52.3% of cases, slightly outperforming the Sheather-Marron method.

Also, from Table 5, which reports the relative efficiency of $h_{\text{Method 1,opt}}$, we can see that $h_{\text{Method 1,opt}}$ is more efficient than $h_{\text{Method 2,opt}}$ in all cases except the standard normal distribution with $n = 200, 500$ and the double exponential distribution with $n = 500$.

So, we may conclude that, in terms of MISE, our bandwidth selection method is more efficient than the Sheather-Marron method for skewed distributions but not for symmetric distributions.

6. Conclusion

In this paper we have proposed a cross-validation-based rule for selecting the bandwidth of quantile functions estimated by kernel procedures. The bandwidth selected by the proposed method is shown to be asymptotically unbiased, and, in order to assess the numerical performance, we conducted a simulation study comparing it with the bandwidth proposed by Sheather and Marron. Based on the four distributions considered, the proposed bandwidth selection appears to provide accurate estimates of quantiles, and thus we believe that the new bandwidth selection method is a practically useful way to obtain a bandwidth for the quantile estimator of the form (1.3).

Appendix

Step 1.

Let $nH = S_1 - 2S_2$, where
$$S_1 = \sum_i \int_0^1 (\tilde{Q}_{-i}(p) - Q(p))^2\,dp, \qquad S_2 = \sum_i \int_0^1 \left( \delta\!\left(\frac{i}{n} - p\right) X_{(i)} - Q(p) \right) \left( \tilde{Q}_{-i}(p) - Q(p) \right) dp.$$

Step 2.

With $D_i(p) = K_h(i/n - p) X_{(i)} - Q(p)$ and $D_i^0(p) = \delta(i/n - p) X_{(i)} - Q(p)$,
$$S_1 = (n-1)^{-2} n^2 (n-2) \int_0^1 (\tilde{Q}(p) - Q(p))^2\,dp + (n-1)^{-2} \sum_{i=1}^n \int_0^1 D_i^2(p)\,dp,$$
$$S_2 = (n-1)^{-1} n^2 \int_0^1 (\hat{Q}(p) - Q(p))(\tilde{Q}(p) - Q(p))\,dp + (n-1)^{-1} \sum_{i=1}^n \int_0^1 D_i(p) D_i^0(p)\,dp.$$

Step 3.

This step combines Steps 1 and 2 to prove that
$$H = \{1 - (n-1)^{-2}\} \int_0^1 (\tilde{Q}(p) - Q(p))^2\,dp + \frac{1}{n(n-1)^2} \sum_{i=1}^n \int_0^1 D_i^2(p)\,dp - 2\{1 + (n-1)^{-1}\} \int_0^1 (\hat{Q}(p) - Q(p))(\tilde{Q}(p) - Q(p))\,dp + \frac{2}{n(n-1)} \sum_{i=1}^n \int_0^1 D_i(p) D_i^0(p)\,dp.$$

Step 4.

This step establishes that
$$E\left\{ \int_0^1 (\tilde{Q}(p) - Q(p))^2\,dp \right\}^2 + E\left\{ \int_0^1 (\hat{Q}(p) - Q(p))(\tilde{Q}(p) - Q(p))\,dp \right\}^2 = O(n^{-2} + h^8),$$
$$E\left\{ n^{-3} \sum_{i=1}^n \int_0^1 D_i^2(p)\,dp \right\}^2 + \mathrm{var}\left( n^{-2} \sum_{i=1}^n \int_0^1 D_i(p) D_i^0(p)\,dp \right) = O(n^{-3}).$$

Step 5.

This step combines Steps 3 and 4, concluding that
$$H + \int_0^1 (\hat{Q}(p) - Q(p))^2\,dp = \int_0^1 (\tilde{Q}(p) - \hat{Q}(p))^2\,dp + 2(n-1)^{-1} \mu(h) + O_2(n^{-3/2} + n^{-1}h^4),$$
where $\mu(h) = \int_0^1 E\{D_i(p) D_i^0(p)\}\,dp$.

Here $U = O_2(\xi)$ means that, for a random variable $U = U(n)$ and a positive sequence $\xi = \xi(n)$, $E(U^2) = O(\xi^2)$.

Step 6.

This step notes that $\int_0^1 (\tilde{Q}(p) - \hat{Q}(p))^2\,dp = S + T$, where
$$S = n^{-2} \sum_{i \ne j} g(X_i, X_j), \qquad T = n^{-2} \sum_{i=1}^n g(X_i, X_i),$$
$$g(X_i, X_j) = \int_0^1 \left\{ K_h\!\left(\frac{i}{n} - p\right) X_{(i)} - \delta\!\left(\frac{i}{n} - p\right) X_{(i)} \right\} \left\{ K_h\!\left(\frac{j}{n} - p\right) X_{(j)} - \delta\!\left(\frac{j}{n} - p\right) X_{(j)} \right\} dp,$$
and that $S = S^{(1)} + S^{(2)} + (1 - n^{-1}) g_0$, where
$$S^{(1)} = n^{-2} \sum_{i \ne j} \{ g(X_i, X_j) - g_1(X_i) - g_1(X_j) + g_0 \}, \qquad S^{(2)} = 2 n^{-1} (1 - n^{-1}) \sum_{i=1}^n \{ g_1(X_i) - g_0 \},$$
with $g_1(x) = E\{g(x, X_1)\}$ and $g_0 = E\{g_1(X_1)\}$.

Step 7.

This step shows that $E\{g(X_1,X_1)^2\} = O(1)$, $E\{g(X_1,X_2)^2\} = O(h^3)$, $E\{g_1(X_1)^2\} = O(h^6)$, $\mathrm{var}\{T\} = O(n^{-3})$, $E(S^{(1)})^2 = O(n^{-2}h^3)$, and $E(S^{(2)})^2 = O(n^{-1}h^6)$.

Step 8.

This step combines the results of Steps 5, 6, and 7, obtaining
$$H + \int_0^1 (\hat{Q}(p) - Q(p))^2\,dp = E(T) + (1 - n^{-1}) g_0 + 2(n-1)^{-1}\mu(h) + O_2(n^{-3/2} + n^{-1}h^{3/2} + n^{-1/2}h^3)$$
$$= \int_0^1 E(\tilde{Q}(p) - \hat{Q}(p))^2\,dp + 2(n-1)^{-1}\mu(h) + O_2(n^{-3/2} + n^{-1}h^{3/2} + n^{-1/2}h^3).$$

Step 9.

This step notes that $\mu(h) = O(h)$ and
$$\int_0^1 E(\tilde{Q}(p) - \hat{Q}(p))^2\,dp = \int_0^1 E(\tilde{Q}(p) - Q(p))^2\,dp + \int_0^1 E(\hat{Q}(p) - Q(p))^2\,dp - 2 n^{-1}\mu(h).$$

Step 10.

This step combines Steps 8 and 9, establishing that
$$H + \int_0^1 (\hat{Q}(p) - Q(p))^2\,dp - \int_0^1 E(\hat{Q}(p) - Q(p))^2\,dp = \int_0^1 E(\tilde{Q}(p) - Q(p))^2\,dp + O_2(n^{-3/2} + n^{-1}h^{3/2} + n^{-1/2}h^3).$$
This means that
$$H + J_n - \mathrm{MISE}(h) = O_2(n^{-3/2} + n^{-1}h^{3/2} + n^{-1/2}h^3).$$

References

1. S. J. Sheather and J. S. Marron, "Kernel quantile estimators," Journal of the American Statistical Association, vol. 85, no. 410, pp. 410–416, 1990.
2. E. Parzen, "Nonparametric statistical data modeling," Journal of the American Statistical Association, vol. 74, pp. 105–131, 1979.
3. S.-S. Yang, "A smooth nonparametric estimator of a quantile function," Journal of the American Statistical Association, vol. 80, no. 392, pp. 1004–1011, 1985.
4. M. Falk, "Relative deficiency of kernel type estimators of quantiles," The Annals of Statistics, vol. 12, no. 1, pp. 261–268, 1984.
5. M. P. Wand and M. C. Jones, Kernel Smoothing, Chapman and Hall, London, UK, 1995.
6. M. Y. Cheng and S. Sun, "Bandwidth selection for kernel quantile estimation," Journal of the Chinese Statistical Association, vol. 44, no. 3, pp. 271–295, 2006.
7. M. C. Jones, "Estimating densities, quantiles, quantile densities and density quantiles," Annals of the Institute of Statistical Mathematics, vol. 44, no. 4, pp. 721–727, 1992.
8. M. Rudemo, "Empirical choice of histograms and kernel density estimators," Scandinavian Journal of Statistics, vol. 9, no. 2, pp. 65–78, 1982.
9. A. W. Bowman, "An alternative method of cross-validation for the smoothing of density estimates," Biometrika, vol. 71, no. 2, pp. 353–360, 1984.