ISRN Applied Mathematics, ISSN 2090-5572, Hindawi Publishing Corporation, Article ID 271303, doi:10.1155/2014/271303

Research Article

A Note on the Adaptive Estimation of a Multiplicative Separable Regression Function

Christophe Chesneau

Laboratoire de Mathématiques Nicolas Oresme, Université de Caen Basse-Normandie, Campus II, Science 3, 14032 Caen, France

Received 18 January 2014; Accepted 25 February 2014; Published 20 March 2014

Academic Editors: F. Ding, E. Skubalska-Rafajlowicz, and H. C. So

Copyright © 2014 Christophe Chesneau. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We investigate the estimation of a multiplicative separable regression function from a bidimensional nonparametric regression model with random design. We present a general estimator for this problem and study its mean integrated squared error (MISE) properties. A wavelet version of this estimator is developed. In some situations, we prove that it attains the standard unidimensional rate of convergence under the MISE over Besov balls.

1. Motivations

We consider the bidimensional nonparametric regression model with random design described as follows. Let $(Y_i, U_i, V_i)_{i \in \mathbb{Z}}$ be a stochastic process defined on a probability space $(\Omega, \mathcal{A}, \mathbb{P})$, where
(1) $Y_i = h(U_i, V_i) + \xi_i, \quad i \in \mathbb{Z}$,
$(\xi_i)_{i \in \mathbb{Z}}$ is a strictly stationary stochastic process, $((U_i, V_i))_{i \in \mathbb{Z}}$ is a strictly stationary stochastic process with support in $[0,1]^2$, and $h : [0,1]^2 \to \mathbb{R}$ is an unknown bivariate regression function. It is assumed that $\mathbb{E}(\xi_1) = 0$, $\mathbb{E}(\xi_1^2)$ exists, $((U_i, V_i))_{i \in \mathbb{Z}}$ are independent, $(\xi_i)_{i \in \mathbb{Z}}$ are independent, and, for any $i \in \mathbb{Z}$, $(U_i, V_i)$ and $\xi_i$ are independent. In this study, we focus our attention on the case where $h$ is a multiplicative separable regression function: there exist two functions $f : [0,1] \to \mathbb{R}$ and $g : [0,1] \to \mathbb{R}$ such that
(2) $h(x, y) = f(x) g(y)$.
We aim to estimate $h$ from the $n$ random variables $(Y_1, U_1, V_1), \ldots, (Y_n, U_n, V_n)$. This problem arises in many practical situations, as in utility, production, and cost function applications (see, e.g., Linton and Nielsen, Yatchew and Bos, Pinske, Lewbel and Linton, and Jacho-Chávez et al.).
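As a concrete illustration, model (1) with the separable form (2) can be simulated in a few lines. The factors $f$ and $g$ below are arbitrary choices made for illustration only; they are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical separable factors (illustration only; not from the paper).
f = lambda x: 1.0 + 0.5 * np.cos(2 * np.pi * x)
g = lambda y: np.exp(-y)
h = lambda x, y: f(x) * g(y)     # multiplicative separable regression function (2)

n = 10_000
U = rng.uniform(0.0, 1.0, n)     # design: (U_i, V_i) ~ U([0,1]^2), so q = 1
V = rng.uniform(0.0, 1.0, n)
xi = rng.normal(0.0, 0.1, n)     # centered noise with finite variance
Y = h(U, V) + xi                 # observations Y_i = h(U_i, V_i) + xi_i, as in (1)
```

The uniform design is the simplest case satisfying (H3) below, with $q \equiv 1$.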

In this note, we provide a theoretical contribution to the subject by introducing a new general estimation method for h . A sharp upper bound for its mean integrated squared error (MISE) is proved. Then we adapt our methodology to propose an efficient and adaptive procedure. It is based on two wavelet thresholding estimators following the construction studied in Chaubey et al. . It has the features to be adaptive for a wide class of unknown functions and enjoy nice MISE properties. Further details on wavelet estimators can be found in, for example, Antoniadis , Vidakovic , and Härdle et al. . Despite the so-called “curse of dimensionality” coming from the bidimensionality of (1), we prove that our wavelet estimator attains the standard unidimensional rate of convergence under the MISE over Besov balls (for both the homogeneous and inhomogeneous zones). It completes asymptotic results proved by Linton and Nielsen  via nonadaptive kernel methods for the structured nonparametric regression model.

The paper is organized as follows. Assumptions on (1) and some notations are introduced in Section 2. Section 3 presents our general MISE result. Section 4 is devoted to our wavelet estimator and its performances in terms of rate of convergence under the MISE over Besov balls. Technical proofs are collected in Section 5.

2. Assumptions and Notations

For any $p \geq 1$, we set
(3) $\mathbb{L}^p([0,1]) = \{ v : [0,1] \to \mathbb{R} ; \; \|v\|_p = ( \int_0^1 |v(x)|^p \, dx )^{1/p} < \infty \}$.
We set
(4) $e_o = \int_0^1 f(x) \, dx$, $\quad e_* = \int_0^1 g(x) \, dx$,
provided that they exist.

We formulate the following assumptions.

(H1) There exists a known constant $C_1 > 0$ such that
(5) $\sup_{x \in [0,1]} |f(x)| \leq C_1$.

(H2) There exists a known constant $C_2 > 0$ such that
(6) $\sup_{x \in [0,1]} |g(x)| \leq C_2$.

(H3) The density of $(U_1, V_1)$, denoted by $q$, is known, and there exists a constant $c_3 > 0$ such that
(7) $c_3 \leq \inf_{(x,y) \in [0,1]^2} q(x, y)$.

(H4) There exists a known constant $\omega > 0$ such that
(8) $|e_o e_*| \geq \omega$.

The assumptions (H1) and (H2), involving the boundedness of $h$, are standard in nonparametric regression models. The knowledge of $q$ required in (H3) is restrictive but plausible in some situations, the most common case being $(U_1, V_1) \sim \mathcal{U}([0,1]^2)$ (the uniform distribution on $[0,1]^2$). Finally, let us mention that (H4) is just a technical assumption, more realistic than requiring the knowledge of $e_o$ and $e_*$ (which depend on $f$ and $g$, resp.).

3. MISE Result

Theorem 1 presents an estimator for h and shows an upper bound for its MISE.

Theorem 1.

One considers (1) under (H1)–(H4). One introduces the following estimator for $h$ (2):
(9) $\hat{h}(x, y) = \dfrac{\tilde{f}(x)\tilde{g}(y)}{\tilde{e}} 1_{\{|\tilde{e}| \geq \omega/2\}}$,
where $\tilde{f}$ denotes an arbitrary estimator for $f e_*$ in $\mathbb{L}^2([0,1])$, $\tilde{g}$ denotes an arbitrary estimator for $g e_o$ in $\mathbb{L}^2([0,1])$, $1$ denotes the indicator function,
(10) $\tilde{e} = \dfrac{1}{n} \sum_{i=1}^{n} \dfrac{Y_i}{q(U_i, V_i)}$,
and $\omega$ refers to (H4).

Then there exists a constant $C > 0$ such that
(11) $\mathbb{E}\left( \int_0^1 \int_0^1 (\hat{h}(x, y) - h(x, y))^2 \, dx \, dy \right) \leq C \left( \mathbb{E}(\|\tilde{g} - g e_o\|_2^2) + \mathbb{E}(\|\tilde{f} - f e_*\|_2^2) + \mathbb{E}(\|\tilde{g} - g e_o\|_2^2 \, \|\tilde{f} - f e_*\|_2^2) + \dfrac{1}{n} \right)$.
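To make the construction (9)-(10) concrete, here is a minimal numerical sketch under the uniform design (so $q \equiv 1$). The factors $f$ and $g$, the noise level, and the bound $\omega$ are hypothetical values chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup with uniform design, so q(x, y) = 1 (assumed for illustration).
f = lambda x: 1.0 + x          # e_o = integral of f = 1.5
g = lambda y: 2.0 - y          # e_* = integral of g = 1.5
n = 50_000
U, V = rng.uniform(size=n), rng.uniform(size=n)
Y = f(U) * g(V) + rng.normal(0.0, 0.2, n)

# e_tilde from (10): unbiased for e_o * e_* = 2.25, since E[Y_1 / q] = e_o e_*.
e_tilde = np.mean(Y / 1.0)

# Ratio-normalized plug-in (9), guarded by the indicator 1{|e_tilde| >= omega/2}
# from (H4); here omega = 1 is an illustrative value.
omega = 1.0
def h_hat(f_tilde_x, g_tilde_y):
    if abs(e_tilde) < omega / 2:
        return 0.0
    return f_tilde_x * g_tilde_y / e_tilde
```

With plug-ins $\tilde{f} \approx f e_*$ and $\tilde{g} \approx g e_o$, the ratio $\tilde{f}(x)\tilde{g}(y)/\tilde{e}$ recovers $f(x)g(y)$, since $\tilde{e} \approx e_o e_*$.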

The form of $\hat{h}$ (9) is derived from the multiplicative separable structure of $h$ (2) and a ratio-type normalization. Other results on such ratio-type estimators in a general statistical context can be found in Vasiliev.

Based on Theorem 1, $\hat{h}$ is efficient for $h$ if and only if $\tilde{f}$ is efficient for $f e_*$ and $\tilde{g}$ is efficient for $g e_o$ in terms of MISE. Even if several methods are possible, we focus our attention on wavelet methods, which enjoy adaptivity for a wide class of unknown functions and have optimal properties under the MISE. For details on the advantages of wavelet methods in nonparametric statistics, we refer to Antoniadis, Vidakovic, and Härdle et al.

4. Adaptive Wavelet Estimator

Before introducing our wavelet estimators, let us present some basics on wavelets.

4.1. Wavelet Basis on [0, 1]

Let us briefly recall the construction of wavelet bases on the interval $[0,1]$ introduced by Cohen et al. Let $N$ be a positive integer, and let $\phi$ and $\psi$ be the initial wavelets of the Daubechies orthogonal wavelets db$2N$. We set
(12) $\phi_{j,k}(x) = 2^{j/2} \phi(2^j x - k)$, $\quad \psi_{j,k}(x) = 2^{j/2} \psi(2^j x - k)$.
With appropriate treatments at the boundaries, there exists an integer $\tau$ satisfying $2^\tau \geq 2N$ such that the collection
$\mathcal{S} = \{ \phi_{\tau,k}(\cdot), \; k \in \{0, \ldots, 2^\tau - 1\}; \; \psi_{j,k}(\cdot), \; j \in \mathbb{N} \setminus \{0, \ldots, \tau - 1\}, \; k \in \{0, \ldots, 2^j - 1\} \}$
is an orthonormal basis of $\mathbb{L}^2([0,1])$.

Any $v \in \mathbb{L}^2([0,1])$ can be expanded on $\mathcal{S}$ as
(13) $v(x) = \sum_{k=0}^{2^\tau - 1} \alpha_{\tau,k} \phi_{\tau,k}(x) + \sum_{j=\tau}^{\infty} \sum_{k=0}^{2^j - 1} \beta_{j,k} \psi_{j,k}(x)$, $\quad x \in [0,1]$,
where $\alpha_{j,k}$ and $\beta_{j,k}$ are the wavelet coefficients of $v$ defined by
(14) $\alpha_{j,k} = \int_0^1 v(x) \phi_{j,k}(x) \, dx$, $\quad \beta_{j,k} = \int_0^1 v(x) \psi_{j,k}(x) \, dx$.
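A minimal numerical sketch of (12)–(14), using the Haar wavelet for simplicity (the paper's db$2N$ basis with boundary corrections is more involved); the test function $v$ is an arbitrary example:

```python
import numpy as np

def haar_phi(j, k, x):
    # phi_{j,k}(x) = 2^{j/2} phi(2^j x - k), Haar father wavelet on [0, 1]
    t = 2**j * x - k
    return 2**(j / 2) * ((0 <= t) & (t < 1)).astype(float)

def haar_psi(j, k, x):
    # psi_{j,k}(x) = 2^{j/2} psi(2^j x - k), Haar mother wavelet
    t = 2**j * x - k
    return 2**(j / 2) * (((0 <= t) & (t < 0.5)).astype(float)
                         - ((0.5 <= t) & (t < 1)).astype(float))

# Wavelet coefficients (14) of v by numerical quadrature (midpoint rule).
x = (np.arange(4096) + 0.5) / 4096
v = np.sin(2 * np.pi * x)                             # example v in L^2([0,1])
alpha = lambda j, k: np.mean(v * haar_phi(j, k, x))   # integral of v * phi_{j,k}
beta = lambda j, k: np.mean(v * haar_psi(j, k, x))    # integral of v * psi_{j,k}
```

For this $v$, $\beta_{0,0} = \int_0^{1/2} \sin(2\pi x)\,dx - \int_{1/2}^1 \sin(2\pi x)\,dx = 2/\pi$, which the quadrature reproduces; the functions $\psi_{j,k}$ at a fixed level are orthonormal by construction.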

4.2. Besov Balls

For the sake of simplicity, we consider the sequential version of Besov balls, defined as follows. Let $M > 0$, $s \in (0, N)$, $p \geq 1$, and $r \geq 1$. A function $v$ belongs to $B^s_{p,r}(M)$ if and only if there exists a constant $M_* > 0$ (depending on $M$) such that the associated wavelet coefficients (14) satisfy
(15) $2^{\tau(1/2 - 1/p)} \left( \sum_{k=0}^{2^\tau - 1} |\alpha_{\tau,k}|^p \right)^{1/p} + \left( \sum_{j=\tau}^{\infty} \left( 2^{j(s + 1/2 - 1/p)} \left( \sum_{k=0}^{2^j - 1} |\beta_{j,k}|^p \right)^{1/p} \right)^r \right)^{1/r} \leq M_*$.
In this expression, $s$ is a smoothness parameter and $p$ and $r$ are norm parameters. For particular choices of $s$, $p$, and $r$, $B^s_{p,r}(M)$ contains the Hölder and Sobolev balls (see, e.g., DeVore and Popov, Meyer, and Härdle et al.).

4.3. Hard Thresholding Estimators

In the sequel, we consider (1) under (H1)–(H4).

We consider hard thresholding wavelet estimators for $\tilde{f}$ and $\tilde{g}$ in (9). They are based on a term-by-term selection of estimators of the wavelet coefficients of the unknown function: those greater than a threshold are kept; the others are removed. This selection is the key to the adaptivity and the good performance of hard thresholding wavelet estimators (see, e.g., Donoho et al., Delyon and Juditsky, and Härdle et al.).

To be more specific, we use the "double thresholding" wavelet technique introduced by Delyon and Juditsky and recently improved by Chaubey et al. The role of the second thresholding (appearing in the definition of the wavelet estimator of $\beta_{j,k}$) is to relax the assumptions on the model (see Remark 6).

Estimator $\tilde{f}$ for $f e_*$. We define the hard thresholding wavelet estimator $\tilde{f}$ by
(16) $\tilde{f}(x) = \sum_{k=0}^{2^\tau - 1} \hat{\alpha}_{\tau,k} \phi_{\tau,k}(x) + \sum_{j=\tau}^{j_1} \sum_{k=0}^{2^j - 1} \hat{\beta}_{j,k} 1_{\{|\hat{\beta}_{j,k}| \geq \kappa C_* \lambda_n\}} \psi_{j,k}(x)$,
where
(17) $\hat{\alpha}_{\tau,k} = \dfrac{1}{a_n} \sum_{i=1}^{a_n} \dfrac{Y_i}{q(U_i, V_i)} \phi_{\tau,k}(U_i)$,
$a_n$ is the integer part of $n/2$,
(18) $\hat{\beta}_{j,k} = \dfrac{1}{a_n} \sum_{i=1}^{a_n} W_{i,j,k} 1_{\{|W_{i,j,k}| \leq C_*/\lambda_n\}}$, $\quad W_{i,j,k} = \dfrac{Y_i}{q(U_i, V_i)} \psi_{j,k}(U_i)$,
$j_1$ is the integer satisfying $(1/2) a_n < 2^{j_1} \leq a_n$, $\kappa = 2 + 8/3 + 2\sqrt{4 + 16/9}$, $C_*^2 = (2/c_3)(C_1^2 C_2^2 + \mathbb{E}(\xi_1^2))$, and
(19) $\lambda_n = \sqrt{\dfrac{\ln a_n}{a_n}}$.

Estimator $\tilde{g}$ for $g e_o$. We define the hard thresholding wavelet estimator $\tilde{g}$ by
(20) $\tilde{g}(x) = \sum_{k=0}^{2^\tau - 1} \hat{\upsilon}_{\tau,k} \phi_{\tau,k}(x) + \sum_{j=\tau}^{j_2} \sum_{k=0}^{2^j - 1} \hat{\theta}_{j,k} 1_{\{|\hat{\theta}_{j,k}| \geq \kappa_* C_* \eta_n\}} \psi_{j,k}(x)$,
where
(21) $\hat{\upsilon}_{\tau,k} = \dfrac{1}{b_n} \sum_{i=1}^{b_n} \dfrac{Y_{a_n+i}}{q(U_{a_n+i}, V_{a_n+i})} \phi_{\tau,k}(V_{a_n+i})$,
$a_n$ is the integer part of $n/2$, $b_n = n - a_n$,
(22) $\hat{\theta}_{j,k} = \dfrac{1}{b_n} \sum_{i=1}^{b_n} Z_{a_n+i,j,k} 1_{\{|Z_{a_n+i,j,k}| \leq C_*/\eta_n\}}$, $\quad Z_{a_n+i,j,k} = \dfrac{Y_{a_n+i}}{q(U_{a_n+i}, V_{a_n+i})} \psi_{j,k}(V_{a_n+i})$,
$j_2$ is the integer satisfying $(1/2) b_n < 2^{j_2} \leq b_n$, $\kappa_* = 2 + 8/3 + 2\sqrt{4 + 16/9}$, $C_*^2 = (2/c_3)(C_1^2 C_2^2 + \mathbb{E}(\xi_1^2))$, and
(23) $\eta_n = \sqrt{\dfrac{\ln b_n}{b_n}}$.

Estimator for $h$. From $\tilde{f}$ (16) and $\tilde{g}$ (20), we consider the following estimator for $h$ (2):
(24) $\hat{h}(x, y) = \dfrac{\tilde{f}(x)\tilde{g}(y)}{\tilde{e}} 1_{\{|\tilde{e}| \geq \omega/2\}}$,
where
(25) $\tilde{e} = \dfrac{1}{n} \sum_{i=1}^{n} \dfrac{Y_i}{q(U_i, V_i)}$
and $\omega$ refers to (H4).
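The "double thresholding" scheme (16)–(19) can be sketched numerically. The Haar basis, the uniform design ($q \equiv 1$), the constant `C_star`, and the factors $f$ and $g$ below are simplifying assumptions made for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: uniform design (q = 1), separable h = f * g (assumed example).
f = lambda x: np.where(x < 0.5, 1.0, 2.0)   # piecewise-constant f (Haar-friendly)
g = lambda y: 1.0 + y                        # e_* = integral of g = 1.5
n = 2**14
U, V = rng.uniform(size=n), rng.uniform(size=n)
Y = f(U) * g(V) + rng.normal(0.0, 0.1, n)

a_n = n // 2                                 # first half of the sample
lam = np.sqrt(np.log(a_n) / a_n)             # lambda_n = sqrt(ln a_n / a_n), eq. (19)

def haar_psi(j, k, x):
    t = 2**j * x - k
    return 2**(j / 2) * (((0 <= t) & (t < 0.5)) * 1.0 - ((0.5 <= t) & (t < 1)) * 1.0)

C_star = 1.0                                 # illustrative value, not the paper's C_*
kappa = 2 + 8/3 + 2 * np.sqrt(4 + 16/9)

def beta_hat(j, k):
    # Empirical coefficient (18) with the first ("large value") thresholding.
    W = Y[:a_n] * haar_psi(j, k, U[:a_n])    # W_{i,j,k} = Y_i psi_{j,k}(U_i) / q
    W = W * (np.abs(W) <= C_star / lam)
    return W.mean()

def f_tilde(x, j1=6, tau=0):
    # Hard thresholding estimator (16) of f * e_* (Haar sketch).
    phi00 = np.ones_like(x)                  # phi_{0,0} = 1 on [0, 1]
    est = np.mean(Y[:a_n]) * phi00           # alpha_hat_{0,0} phi_{0,0}
    for j in range(tau, j1 + 1):
        for k in range(2**j):
            b = beta_hat(j, k)
            if abs(b) >= kappa * C_star * lam:   # second ("keep or kill") threshold
                est = est + b * haar_psi(j, k, x)
    return est
```

For this piecewise-constant $f$, the target $f e_*$ equals $1.5$ on $[0, 1/2)$ and $3.0$ on $[1/2, 1)$; only one detail coefficient survives the keep-or-kill threshold, and the estimator recovers both plateaus.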

Let us mention that $\hat{h}$ is adaptive in the sense that its construction does not depend on $f$ or $g$.

Remark 2.

Since f ~ is defined with ( Y 1 , U 1 , V 1 ) , , ( Y a n , U a n , V a n ) and g ~ is defined with ( Y a n + 1 , U a n + 1 , V a n + 1 ) , , ( Y n , U n , V n ) , thanks to the independence of ( Y 1 , U 1 , V 1 ) , , ( Y n , U n , V n ) , f ~ and g ~ are independent.

Remark 3.

The calibration of the parameters in $\tilde{f}$ and $\tilde{g}$ is based on theoretical considerations; thus defined, $\tilde{f}$ and $\tilde{g}$ can attain a fast rate of convergence under the MISE over Besov balls (see [6, Theorem 6.1]). Further details are given in the proof of Theorem 4.

4.4. Rate of Convergence

Theorem 4 investigates the rate of convergence attained by $\hat{h}$ under the MISE over Besov balls.

Theorem 4.

We consider (1) under (H1)–(H4). Let h ^ be (24) and let h be (2). Suppose that

$f \in B^{s_1}_{p_1,r_1}(M_1)$ with $M_1 > 0$, $r_1 \geq 1$, and either $\{p_1 \geq 2$ and $s_1 \in (0, N)\}$ or $\{p_1 \in [1,2)$ and $s_1 \in (1/p_1, N)\}$;

$g \in B^{s_2}_{p_2,r_2}(M_2)$ with $M_2 > 0$, $r_2 \geq 1$, and either $\{p_2 \geq 2$ and $s_2 \in (0, N)\}$ or $\{p_2 \in [1,2)$ and $s_2 \in (1/p_2, N)\}$.

Then there exists a constant $C > 0$ such that
(26) $\mathbb{E}\left( \int_0^1 \int_0^1 (\hat{h}(x, y) - h(x, y))^2 \, dx \, dy \right) \leq C \left( \dfrac{\ln n}{n} \right)^{2 s_* / (2 s_* + 1)}$,
where $s_* = \min(s_1, s_2)$.

The rate of convergence $(\ln n / n)^{2 s_* / (2 s_* + 1)}$ is the near optimal one in the minimax sense for the unidimensional regression model with random design under the MISE over Besov balls $B^{s_*}_{p,r}(M)$ (see, e.g., Tsybakov and Härdle et al.). Thus Theorem 4 proves that our estimator escapes the so-called "curse of dimensionality." Such a result is not possible with the standard bidimensional hard thresholding wavelet estimator, which attains the rate of convergence $(\ln n / n)^{2s/(2s + d)}$ with $d = 2$ under the MISE over bidimensional Besov balls defined with $s$ as smoothness parameter (see Delyon and Juditsky).

Theorem 4 completes asymptotic results proved by Linton and Nielsen  investigating this problem for the structured nonparametric regression model via another estimation method based on nonadaptive kernels.

Remark 5.

In Theorem 4, we take into account both the homogeneous zone of Besov balls, that is, $\{p_1 \geq 2$ and $s_1 \in (0, N)\}$, and the inhomogeneous zone, that is, $\{p_1 \in [1,2)$ and $s_1 \in (1/p_1, N)\}$, for the case $f \in B^{s_1}_{p_1,r_1}(M_1)$, and similarly for $g \in B^{s_2}_{p_2,r_2}(M_2)$. This has the advantage of covering a very rich class of unknown regression functions $h$.

Remark 6.

Note that Theorem 4 does not require the knowledge of the distribution of $\xi_1$; $\mathbb{E}(\xi_1) = 0$ and the existence of $\mathbb{E}(\xi_1^2)$ are enough.

Remark 7.

Let us mention that the phenomenon of the curse of dimensionality has also been studied via wavelet methods by Neumann, but for the multidimensional Gaussian white noise model and with different approaches based on anisotropic frameworks.

Remark 8.

Our study can be extended to the multidimensional case considered by Yatchew and Bos, that is, $f : [0,1]^{q_1} \to \mathbb{R}$ and $g : [0,1]^{q_2} \to \mathbb{R}$, with $q_1$ and $q_2$ denoting two positive integers. In this case, adapting our framework to the multidimensional setting ($q_1$-dimensional Besov balls, $q_1$-dimensional (tensorial) wavelet bases, $q_1$-dimensional hard thresholding wavelet estimators; see, e.g., Delyon and Juditsky), one can prove that (9) attains the rate of convergence $(\ln n / n)^{2 s_* / (2 s_* + q_*)}$, where $s_* = \min(s_1, s_2)$ and $q_* = \max(q_1, q_2)$.

5. Proofs

In this section, for the sake of simplicity, C denotes a generic constant; its value may change from one term to another.

Proof of Theorem 1.

Observe that
(27) $\hat{h}(x,y) - h(x,y) = \dfrac{\tilde{f}(x)\tilde{g}(y)}{\tilde{e}} 1_{\{|\tilde{e}| \geq \omega/2\}} - f(x)g(y) = \dfrac{1}{\tilde{e}} \left( \tilde{f}(x)\tilde{g}(y) - f(x)g(y)\tilde{e} \right) 1_{\{|\tilde{e}| \geq \omega/2\}} - f(x)g(y) 1_{\{|\tilde{e}| < \omega/2\}}$.
Therefore, using the triangular inequality, (H1), (H2), (H4), the inclusion $\{|\tilde{e}| < \omega/2\} \subseteq \{|\tilde{e} - e_* e_o| \geq \omega/2\}$ (which follows from $|e_o e_*| \geq \omega$), and the Markov inequality, we get
(28) $|\hat{h}(x,y) - h(x,y)| \leq \dfrac{2}{\omega} |\tilde{f}(x)\tilde{g}(y) - f(x)g(y)\tilde{e}| + |f(x)||g(y)| 1_{\{|\tilde{e}| < \omega/2\}} \leq C \left( |\tilde{f}(x)\tilde{g}(y) - f(x)g(y)\tilde{e}| + 1_{\{|\tilde{e} - e_* e_o| \geq \omega/2\}} \right) \leq C \left( |\tilde{f}(x)\tilde{g}(y) - f(x)g(y)\tilde{e}| + |\tilde{e} - e_* e_o| \right)$.
On the other hand, we have the decomposition
(29) $\tilde{f}(x)\tilde{g}(y) - f(x)g(y)\tilde{e} = f(x) e_* (\tilde{g}(y) - g(y) e_o) + g(y) e_o (\tilde{f}(x) - f(x) e_*) + (\tilde{g}(y) - g(y) e_o)(\tilde{f}(x) - f(x) e_*) + f(x)g(y)(e_* e_o - \tilde{e})$.
Owing to the triangular inequality, (H1), and (H2), we have
(30) $|\tilde{f}(x)\tilde{g}(y) - f(x)g(y)\tilde{e}| \leq C \left( |\tilde{g}(y) - g(y) e_o| + |\tilde{f}(x) - f(x) e_*| + |\tilde{g}(y) - g(y) e_o||\tilde{f}(x) - f(x) e_*| + |\tilde{e} - e_* e_o| \right)$.
Putting (28) and (30) together, we obtain
(31) $|\hat{h}(x,y) - h(x,y)| \leq C \left( |\tilde{g}(y) - g(y) e_o| + |\tilde{f}(x) - f(x) e_*| + |\tilde{g}(y) - g(y) e_o||\tilde{f}(x) - f(x) e_*| + |\tilde{e} - e_* e_o| \right)$.
Therefore, by the elementary inequality $(a+b+c+d)^2 \leq 8(a^2+b^2+c^2+d^2)$, $(a,b,c,d) \in \mathbb{R}^4$, an integration over $[0,1]^2$, and taking the expectation, it follows that
(32) $\mathbb{E}\left( \int_0^1 \int_0^1 (\hat{h}(x,y) - h(x,y))^2 \, dx \, dy \right) \leq C \left( \mathbb{E}(\|\tilde{g} - g e_o\|_2^2) + \mathbb{E}(\|\tilde{f} - f e_*\|_2^2) + \mathbb{E}(\|\tilde{g} - g e_o\|_2^2 \, \|\tilde{f} - f e_*\|_2^2) + \mathbb{E}((\tilde{e} - e_* e_o)^2) \right)$.
Now observe that, owing to the independence of $((U_i, V_i))_{i \in \mathbb{Z}}$, the independence between $(U_1, V_1)$ and $\xi_1$, and $\mathbb{E}(\xi_1) = 0$, we obtain
(33) $\mathbb{E}(\tilde{e}) = \mathbb{E}\left( \dfrac{Y_1}{q(U_1, V_1)} \right) = \mathbb{E}\left( \dfrac{h(U_1, V_1)}{q(U_1, V_1)} \right) + \mathbb{E}(\xi_1) \mathbb{E}\left( \dfrac{1}{q(U_1, V_1)} \right) = \int_0^1 \int_0^1 \dfrac{f(x)g(y)}{q(x,y)} q(x,y) \, dx \, dy = \left( \int_0^1 f(x) \, dx \right)\left( \int_0^1 g(y) \, dy \right) = e_* e_o$.
Then, using arguments similar to (33), $(a+b)^2 \leq 2(a^2+b^2)$, $(a,b) \in \mathbb{R}^2$, (H1), (H2), (H3), and $\mathbb{E}(\xi_1^2) < \infty$, we have
(34) $\mathbb{E}((\tilde{e} - e_* e_o)^2) = \mathbb{V}(\tilde{e}) = \dfrac{1}{n} \mathbb{V}\left( \dfrac{Y_1}{q(U_1, V_1)} \right) \leq \dfrac{1}{n} \mathbb{E}\left( \left( \dfrac{Y_1}{q(U_1, V_1)} \right)^2 \right) \leq \dfrac{2}{n} \mathbb{E}\left( \dfrac{(h(U_1, V_1))^2 + \xi_1^2}{(q(U_1, V_1))^2} \right) \leq \dfrac{2}{c_3^2} (C_1^2 C_2^2 + \mathbb{E}(\xi_1^2)) \dfrac{1}{n} = C \dfrac{1}{n}$.
Equations (32) and (34) yield the desired inequality:
(35) $\mathbb{E}\left( \int_0^1 \int_0^1 (\hat{h}(x,y) - h(x,y))^2 \, dx \, dy \right) \leq C \left( \mathbb{E}(\|\tilde{g} - g e_o\|_2^2) + \mathbb{E}(\|\tilde{f} - f e_*\|_2^2) + \mathbb{E}(\|\tilde{g} - g e_o\|_2^2 \, \|\tilde{f} - f e_*\|_2^2) + \dfrac{1}{n} \right)$.
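The unbiasedness (33) and the $O(1/n)$ variance bound (34) can be checked by a small Monte Carlo experiment under the uniform design ($q \equiv 1$), with illustrative factors $f$ and $g$:

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: 1.0 + x      # e_o = 1.5
g = lambda y: 2.0 - y      # e_* = 1.5

def e_tilde(n):
    U, V = rng.uniform(size=n), rng.uniform(size=n)
    Y = f(U) * g(V) + rng.normal(0.0, 0.2, n)
    return np.mean(Y)      # q = 1 under the uniform design

# Replicate e_tilde many times to estimate its mean and variance.
reps = np.array([e_tilde(1000) for _ in range(2000)])
bias = reps.mean() - 1.5 * 1.5   # should be ~ 0 (unbiasedness, eq. (33))
var = reps.var()                 # should be ~ V(Y_1) / 1000 (eq. (34))
```

Here $\mathbb{V}(Y_1) \approx 0.42$, so the empirical variance of $\tilde{e}$ at $n = 1000$ is close to $4 \cdot 10^{-4}$, consistent with the $C/n$ bound.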

Proof of Theorem 4.

We aim to apply Theorem 1 by investigating the rate of convergence attained by f ~ and g ~ under the MISE over Besov balls.

First of all, remark that, for $\gamma \in \{\phi, \psi\}$, any integer $j \geq \tau$, and any $k \in \{0, \ldots, 2^j - 1\}$, the following two properties hold.

Using arguments similar to (33), we obtain
(36) $\mathbb{E}\left( \dfrac{1}{a_n} \sum_{i=1}^{a_n} \dfrac{Y_i}{q(U_i, V_i)} \gamma_{j,k}(U_i) \right) = \mathbb{E}\left( \dfrac{Y_1}{q(U_1, V_1)} \gamma_{j,k}(U_1) \right) = \mathbb{E}\left( \dfrac{h(U_1, V_1)}{q(U_1, V_1)} \gamma_{j,k}(U_1) \right) + \mathbb{E}(\xi_1) \mathbb{E}\left( \dfrac{\gamma_{j,k}(U_1)}{q(U_1, V_1)} \right) = \int_0^1 \int_0^1 \dfrac{f(x)g(y)}{q(x,y)} \gamma_{j,k}(x) q(x,y) \, dx \, dy = \left( \int_0^1 f(x) \gamma_{j,k}(x) \, dx \right) \left( \int_0^1 g(y) \, dy \right) = \int_0^1 (f(x) e_*) \gamma_{j,k}(x) \, dx$.

Using arguments similar to (34) and $\|\gamma_{j,k}\|_2^2 = 1$, we have
(37) $\sum_{i=1}^{a_n} \mathbb{E}\left( \left( \dfrac{Y_i}{q(U_i, V_i)} \gamma_{j,k}(U_i) \right)^2 \right) = a_n \mathbb{E}\left( \left( \dfrac{Y_1}{q(U_1, V_1)} \gamma_{j,k}(U_1) \right)^2 \right) \leq 2 a_n \mathbb{E}\left( \dfrac{(h(U_1, V_1))^2 + \xi_1^2}{(q(U_1, V_1))^2} (\gamma_{j,k}(U_1))^2 \right) \leq \dfrac{2}{c_3} (C_1^2 C_2^2 + \mathbb{E}(\xi_1^2)) \, a_n \, \mathbb{E}\left( \dfrac{(\gamma_{j,k}(U_1))^2}{q(U_1, V_1)} \right) = \dfrac{2}{c_3} (C_1^2 C_2^2 + \mathbb{E}(\xi_1^2)) \, a_n \int_0^1 \int_0^1 \dfrac{(\gamma_{j,k}(x))^2}{q(x,y)} q(x,y) \, dx \, dy = \dfrac{2}{c_3} (C_1^2 C_2^2 + \mathbb{E}(\xi_1^2)) \, \|\gamma_{j,k}\|_2^2 \, a_n = C_*^2 a_n$,
with $C_*^2 = (2/c_3)(C_1^2 C_2^2 + \mathbb{E}(\xi_1^2))$.

Applying [6, Theorem 6.1] (see the Appendix) with $n = \mu_n = \upsilon_n = a_n$, $\delta = 0$, $\theta_\gamma = C_*$, $W_i = (Y_i, U_i, V_i)$,
(38) $q_i(\gamma, (y, x, w)) = \dfrac{y}{q(x, w)} \gamma(x)$,
and $f \in B^{s_1}_{p_1,r_1}(M_1)$ (so $f e_* \in B^{s_1}_{p_1,r_1}(M_1 e_*)$) with $M_1 > 0$, $r_1 \geq 1$, and either $\{p_1 \geq 2$ and $s_1 \in (0, N)\}$ or $\{p_1 \in [1,2)$ and $s_1 \in (1/p_1, N)\}$, we prove the existence of a constant $C > 0$ such that
(39) $\mathbb{E}(\|\tilde{f} - f e_*\|_2^2) \leq C \left( \dfrac{\ln a_n}{a_n} \right)^{2 s_1/(2 s_1 + 1)} \leq C \left( \dfrac{\ln n}{n} \right)^{2 s_1/(2 s_1 + 1)}$,
when $n$ is large enough.

The MISE of $\tilde{g}$ can be investigated in a similar way: for $\gamma \in \{\phi, \psi\}$, any integer $j \geq \tau$, and any $k \in \{0, \ldots, 2^j - 1\}$, the following two properties hold.

We show that
(40) $\mathbb{E}\left( \dfrac{1}{b_n} \sum_{i=1}^{b_n} \dfrac{Y_{a_n+i}}{q(U_{a_n+i}, V_{a_n+i})} \gamma_{j,k}(V_{a_n+i}) \right) = \int_0^1 (g(x) e_o) \gamma_{j,k}(x) \, dx$.

We show that
(41) $\sum_{i=1}^{b_n} \mathbb{E}\left( \left( \dfrac{Y_{a_n+i}}{q(U_{a_n+i}, V_{a_n+i})} \gamma_{j,k}(V_{a_n+i}) \right)^2 \right) \leq C_*^2 b_n$,

with, again, $C_*^2 = (2/c_3)(C_1^2 C_2^2 + \mathbb{E}(\xi_1^2))$.

Applying again [6, Theorem 6.1] (see the Appendix) with $n = \mu_n = \upsilon_n = b_n$, $\delta = 0$, $\theta_\gamma = C_*$, $W_i = (Y_i, U_i, V_i)$,
(42) $q_i(\gamma, (y, x, w)) = \dfrac{y}{q(x, w)} \gamma(w)$,
and $g \in B^{s_2}_{p_2,r_2}(M_2)$ with $M_2 > 0$, $r_2 \geq 1$, and either $\{p_2 \geq 2$ and $s_2 \in (0, N)\}$ or $\{p_2 \in [1,2)$ and $s_2 \in (1/p_2, N)\}$, we prove the existence of a constant $C > 0$ such that
(43) $\mathbb{E}(\|\tilde{g} - g e_o\|_2^2) \leq C \left( \dfrac{\ln b_n}{b_n} \right)^{2 s_2/(2 s_2 + 1)} \leq C \left( \dfrac{\ln n}{n} \right)^{2 s_2/(2 s_2 + 1)}$,
when $n$ is large enough.

Using the independence between $\tilde{f}$ and $\tilde{g}$ (see Remark 2), it follows from (39) and (43) that
(44) $\mathbb{E}(\|\tilde{g} - g e_o\|_2^2 \, \|\tilde{f} - f e_*\|_2^2) = \mathbb{E}(\|\tilde{g} - g e_o\|_2^2) \, \mathbb{E}(\|\tilde{f} - f e_*\|_2^2) \leq C \left( \dfrac{\ln n}{n} \right)^{4 s_1 s_2 / ((2 s_1 + 1)(2 s_2 + 1))}$.
Owing to Theorem 1, (39), (43), and (44), we get
(45) $\mathbb{E}\left( \int_0^1 \int_0^1 (\hat{h}(x,y) - h(x,y))^2 \, dx \, dy \right) \leq C \left( \mathbb{E}(\|\tilde{g} - g e_o\|_2^2) + \mathbb{E}(\|\tilde{f} - f e_*\|_2^2) + \mathbb{E}(\|\tilde{g} - g e_o\|_2^2 \, \|\tilde{f} - f e_*\|_2^2) + \dfrac{1}{n} \right) \leq C \left( \left( \dfrac{\ln n}{n} \right)^{2 s_2/(2 s_2 + 1)} + \left( \dfrac{\ln n}{n} \right)^{2 s_1/(2 s_1 + 1)} + \left( \dfrac{\ln n}{n} \right)^{4 s_1 s_2/((2 s_1 + 1)(2 s_2 + 1))} + \dfrac{1}{n} \right) \leq C \left( \dfrac{\ln n}{n} \right)^{2 s_*/(2 s_* + 1)}$,
with $s_* = \min(s_1, s_2)$.

Theorem 4 is proved.

Appendix

Let us now present in detail [6, Theorem 6.1], which is used twice in the proof of Theorem 4.

We consider a general form of the hard thresholding wavelet estimator, denoted by $\hat{f}_H$, for estimating an unknown function $f \in \mathbb{L}^2([0,1])$ from $n$ independent random variables $W_1, \ldots, W_n$:
(A.1) $\hat{f}_H(x) = \sum_{k=0}^{2^\tau - 1} \hat{\alpha}_{\tau,k} \phi_{\tau,k}(x) + \sum_{j=\tau}^{j_1} \sum_{k=0}^{2^j - 1} \hat{\beta}_{j,k} 1_{\{|\hat{\beta}_{j,k}| \geq \kappa \vartheta_j\}} \psi_{j,k}(x)$,
where
(A.2) $\hat{\alpha}_{j,k} = \dfrac{1}{\upsilon_n} \sum_{i=1}^{n} q_i(\phi_{j,k}, W_i)$, $\quad \hat{\beta}_{j,k} = \dfrac{1}{\upsilon_n} \sum_{i=1}^{n} q_i(\psi_{j,k}, W_i) 1_{\{|q_i(\psi_{j,k}, W_i)| \leq \varsigma_j\}}$,
$\varsigma_j = \theta_\psi 2^{\delta j} \dfrac{\upsilon_n}{\sqrt{\mu_n \ln \mu_n}}$, $\vartheta_j = \theta_\psi 2^{\delta j} \sqrt{\dfrac{\ln \mu_n}{\mu_n}}$, $\kappa \geq 2 + 8/3 + 2\sqrt{4 + 16/9}$, and $j_1$ is the integer satisfying
(A.3) $\dfrac{1}{2} \mu_n^{1/(2\delta + 1)} < 2^{j_1} \leq \mu_n^{1/(2\delta + 1)}$.
Here, we suppose that there exist

$n$ functions $q_1, \ldots, q_n$ with $q_i : \mathbb{L}^2([0,1]) \times W_i(\Omega) \to \mathbb{R}$ for any $i \in \{1, \ldots, n\}$,

two sequences of real numbers $(\upsilon_n)_{n \in \mathbb{N}}$ and $(\mu_n)_{n \in \mathbb{N}}$ satisfying $\lim_{n \to \infty} \upsilon_n = \infty$ and $\lim_{n \to \infty} \mu_n = \infty$,

such that, for $\gamma \in \{\phi, \psi\}$:

(A1) for any integer $j \geq \tau$ and any $k \in \{0, \ldots, 2^j - 1\}$,
(A.4) $\mathbb{E}\left( \dfrac{1}{\upsilon_n} \sum_{i=1}^{n} q_i(\gamma_{j,k}, W_i) \right) = \int_0^1 f(x) \gamma_{j,k}(x) \, dx$;

(A2) there exist two constants, $\theta_\gamma > 0$ and $\delta \geq 0$, such that, for any integer $j \geq \tau$ and any $k \in \{0, \ldots, 2^j - 1\}$,
(A.5) $\sum_{i=1}^{n} \mathbb{E}(|q_i(\gamma_{j,k}, W_i)|^2) \leq \theta_\gamma^2 2^{2\delta j} \dfrac{\upsilon_n^2}{\mu_n}$.

Let $\hat{f}_H$ be (A.1) under (A1) and (A2). Suppose that $f \in B^s_{p,r}(M)$ with $r \geq 1$, and either $\{p \geq 2$ and $s \in (0, N)\}$ or $\{p \in [1,2)$ and $s \in ((2\delta + 1)/p, N)\}$. Then there exists a constant $C > 0$ such that
(A.6) $\mathbb{E}(\|\hat{f}_H - f\|_2^2) \leq C \left( \dfrac{\ln \mu_n}{\mu_n} \right)^{2s/(2s + 2\delta + 1)}$.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

References

[1] O. B. Linton and J. P. Nielsen, "A kernel method of estimating structured nonparametric regression based on marginal integration," Biometrika, vol. 82, no. 1, pp. 93–100, 1995.
[2] A. Yatchew and L. Bos, "Nonparametric least squares estimation and testing of economic models," Journal of Quantitative Economics, vol. 13, pp. 81–131, 1997.
[3] J. Pinske, Feasible Multivariate Nonparametric Regression Estimation Using Weak Separability, University of British Columbia, Vancouver, Canada, 2000.
[4] A. Lewbel and O. Linton, "Nonparametric matching and efficient estimators of homothetically separable functions," Econometrica, vol. 75, no. 4, pp. 1209–1227, 2007.
[5] D. Jacho-Chávez, A. Lewbel, and O. Linton, "Identification and nonparametric estimation of a transformed additively separable model," Journal of Econometrics, vol. 156, no. 2, pp. 392–407, 2010.
[6] Y. P. Chaubey, C. Chesneau, and H. Doosti, "Adaptive wavelet estimation of a density from mixtures under multiplicative censoring," 2014, http://hal.archives-ouvertes.fr/hal-00918069.
[7] A. Antoniadis, "Wavelets in statistics: a review (with discussion)," Journal of the Italian Statistical Society B, vol. 6, no. 2, pp. 97–144, 1997.
[8] B. Vidakovic, Statistical Modeling by Wavelets, John Wiley & Sons, New York, NY, USA, 1999.
[9] W. Härdle, G. Kerkyacharian, D. Picard, and A. Tsybakov, Wavelets, Approximation and Statistical Applications, vol. 129 of Lecture Notes in Statistics, Springer, New York, NY, USA, 1998.
[10] V. A. Vasiliev, "One investigation method of a ratios type estimators," in Proceedings of the 16th IFAC Symposium on System Identification, pp. 1–6, Brussels, Belgium, July 2012.
[11] A. Cohen, I. Daubechies, and P. Vial, "Wavelets on the interval and fast wavelet transforms," Applied and Computational Harmonic Analysis, vol. 1, no. 1, pp. 54–81, 1993.
[12] R. DeVore and V. Popov, "Interpolation of Besov spaces," Transactions of the American Mathematical Society, vol. 305, pp. 397–414, 1988.
[13] Y. Meyer, Wavelets and Operators, Cambridge University Press, Cambridge, UK, 1992.
[14] D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard, "Density estimation by wavelet thresholding," Annals of Statistics, vol. 24, no. 2, pp. 508–539, 1996.
[15] B. Delyon and A. Juditsky, "On minimax wavelet estimators," Applied and Computational Harmonic Analysis, vol. 3, no. 3, pp. 215–228, 1996.
[16] A. B. Tsybakov, Introduction à l'estimation non-paramétrique, Springer, New York, NY, USA, 2004.
[17] M. H. Neumann, "Multivariate wavelet thresholding in anisotropic function spaces," Statistica Sinica, vol. 10, no. 2, pp. 399–431, 2000.