Wavelet Optimal Estimations for Density Functions under Severely Ill-Posed Noises

Theorem 3. Let $f_\varepsilon$ satisfy (C3) and let $\varphi$ be the Meyer scaling function. If $p \in [1,\infty)$, $q, r \in [1,\infty]$, and $\|x f_X(x)\|_\mu \le A$ ($\mu \ge 1$, $A > 0$), then, with $K_n \sim e^{(\ln n)^\theta}$ ($0 < \theta < 1$), $\frac{3}{8\pi}\left(\frac{\ln n}{4c}\right)^{1/\alpha} < 2^{j_n} \le \frac{3}{4\pi}\left(\frac{\ln n}{4c}\right)^{1/\alpha}$, $s' := s - (1/r - 1/p)_+$, and $x_+ := \max\{x, 0\}$,
$$\sup_{f_X \in B^s_{r,q}(A)} E\big\|\hat f_n - f_X\big\|_p^p \lesssim (\ln n)^{-s'p/\alpha}.$$
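The level $j_n$ in Theorem 3 is pinned down by the two-sided constraint on $2^{j_n}$: since the upper endpoint is exactly twice the lower one, the dyadic interval always contains exactly one admissible power of two. As an illustration only (the paper gives no code), the sketch below picks that integer level for a given sample size $n$, assuming the noise parameters $c$ and $\alpha$ from condition (C3) are known; the helper name `level_jn` is ours.

```python
import math

def level_jn(n, c=1.0, alpha=1.0):
    """Return the integer j with
    (3/(8*pi)) * ((ln n)/(4c))**(1/alpha) < 2**j <= (3/(4*pi)) * ((ln n)/(4c))**(1/alpha).
    The bracket has ratio 2, so exactly one power of two fits."""
    upper = (3.0 / (4.0 * math.pi)) * (math.log(n) / (4.0 * c)) ** (1.0 / alpha)
    # floor(log2(upper)) gives the unique j with upper/2 < 2**j <= upper
    return math.floor(math.log2(upper))
```

Note that because $2^{j_n} \sim (\ln n)^{1/\alpha}$, the level grows only like $\log \log n$; this extremely slow growth is characteristic of severely ill-posed deconvolution.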


Introduction and Preliminary
Wavelets have made great achievements in studying the statistical model $Y = X + \varepsilon$, where $X$ stands for a real-valued random variable with unknown probability density $f_X$, and $\varepsilon$ denotes an independent random noise (error) with density $f_\varepsilon$.
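To make the model concrete, here is a small simulation sketch (ours, not the paper's). Gaussian noise is the canonical severely ill-posed example ($\alpha = 2$ in condition (C3)), since its Fourier transform decays like $e^{-\sigma^2 t^2/2}$. Dividing the empirical characteristic function of $Y$ by the known characteristic function of $\varepsilon$ recovers that of $X$; this is the basic deconvolution step underlying the wavelet estimator. The choice of a Laplace density for $X$ and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
sigma = 0.3                    # noise scale; eps ~ N(0, sigma^2) is severely ill-posed
X = rng.laplace(size=n)        # "unknown" density: standard Laplace, cf 1/(1+t^2)
Y = X + rng.normal(scale=sigma, size=n)   # observed sample from the model Y = X + eps

t = np.linspace(-3.0, 3.0, 61)
emp_cf_Y = np.exp(1j * np.outer(t, Y)).mean(axis=1)   # empirical char. function of Y
cf_eps = np.exp(-0.5 * (sigma * t) ** 2)              # known char. function of eps
cf_X_hat = emp_cf_Y / cf_eps                          # deconvolution step
cf_X_true = 1.0 / (1.0 + t ** 2)                      # true char. function of X
err = np.max(np.abs(cf_X_hat - cf_X_true))
```

Because $|f_\varepsilon^{ft}(t)|$ decays exponentially, the division blows up the stochastic error at large $|t|$; this is precisely why the estimator must cut frequencies off at a level $2^{j_n} \sim (\ln n)^{1/\alpha}$, producing the logarithmic convergence rates of this paper.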
In 1999, Pensky and Vidakovic [1] investigated Meyer wavelet estimation over Sobolev spaces $H^s(\mathbb{R})$ under $L^2$ risk for both moderately and severely ill-posed noises. Three years later, Fan and Koo [2] extended those works from $H^s(\mathbb{R})$ to Besov spaces $B^s_{r,q}(\mathbb{R})$ ($1 \le r \le 2$). It should be pointed out that, using a different method, Lounici and Nickl [3] studied wavelet optimal estimation over Besov spaces $B^s_{\infty,\infty}(\mathbb{R})$ under $L^\infty$ risk for both noises. In [4], we provided a wavelet optimal estimation over $B^s_{r,q}(\mathbb{R})$ under $L^p$ risk ($1 \le p < \infty$, $r, q \in [1,\infty]$) for moderately ill-posed noise. The current paper deals with the same problem under severely ill-posed noises. It turns out that our result contains some theorems of [1, 2] as special cases. Our discussion also shows that nonlinear wavelet estimations are not needed for severely ill-posed noise, which is totally different from the moderately ill-posed case.
In the next section, we define a linear wavelet estimator and provide an upper bound estimation over Besov spaces $B^s_{r,q}(\mathbb{R})$ under $L^p$ risk and the condition (C3); the third part gives a lower bound estimation, which shows the result of Section 2 to be optimal; some concluding remarks are discussed in the last part.
Remark 4. Note that the choices of $j_n$ and $K_n$ do not depend on the unknown parameters $s$, $r$, and $q$. Hence our linear wavelet estimator $\hat f_n$ over the Besov space $B^s_{r,q}$ is adaptive (implementable). The same conclusion holds for the $L^\infty$ and $L^2$ estimations; see Theorem 2 in [3] and Corollary 1 in [1]. On the other hand, when $p = 2$ and $1 \le r \le 2$, our Theorem 3 reduces to Theorem 4 in [2]. Moreover, from the proof of Theorem 3 we find that, for $p > 1$, the assumption $\|x f_X(x)\|_\mu \le A$ can be replaced by $\|f_X(x)\|_\infty \le A$, which is the same as in [1]. Therefore, for $p = r = q = 2$, Theorem 3 of [1] follows directly from our Theorem 3.
The second classical theorem is taken from [8].
Lemma 8 (Fano). Let $(X, \mathcal{F}, P_d)$, $d = 0, 1, \ldots, m$, be probability measure spaces and let $A_d \in \mathcal{F}$ satisfy $A_i \cap A_j = \emptyset$ for $i \ne j$. Then
$$\sup_{0 \le d \le m} P_d(A_d^c) \ge \min\left\{\frac{1}{2},\; \sqrt{m}\, e^{-3e^{-1}} e^{-\kappa_m}\right\},$$
where $\kappa_m := \inf_{0 \le d \le m} m^{-1} \sum_{j \ne d} K(P_j, P_d)$ with $K$ standing for the Kullback–Leibler distance, and $A^c$ denotes the complement of a set $A$.

Now we are in a position to state the main theorem in this section.

Concluding Remarks
This paper provides an $L^p$ ($1 \le p < \infty$) risk upper bound for a linear wavelet estimator $\hat f_n$ (Theorem 3), which turns out to be optimal (Theorem 9). Therefore, nonlinear estimations are not needed under severely ill-posed noises. Although we assume $p < \infty$ in Theorem 9, the proof of that theorem shows that the corresponding lower bound remains valid for $p = \infty$. In particular, when $r = q = \infty$, this lower bound reduces to a partial result of Theorem 1 in [3].

Note that our model assumes the noise to be severely ill-posed; that is, the density $f_\varepsilon$ of the noise $\varepsilon$ satisfies $|f_\varepsilon^{ft}(t)| \sim (1 + |t|^2)^{-\beta/2} e^{-c|t|^\alpha}$ (a.e.). It is then reasonable to choose the Meyer scaling function as $\varphi$, because the compact support of $\varphi^{ft}$ makes $K\varphi_{jk}$ well defined, where $(K\varphi_{jk})^{ft}(t) := \varphi_{jk}^{ft}(t)/f_\varepsilon^{ft}(t)$.

Compared with the proof of Theorem 1 in [3], the arguments of Theorem 9 are more complicated in the sense that we use the Varshamov–Gilbert lemma (Lemma 7). This is reasonable, because we deal with the unmatched estimation $\sup_{f_X \in B^s_{r,q}(\mathbb{R})} E\|\hat f_n - f_X\|_p$ ($r$ and $p$ may not be equal), while they treat the matched case $\sup_{f_X \in B^s_{\infty,\infty}(\mathbb{R})} E\|\hat f_n - f_X\|_\infty$. Although the Shannon scaling function $\varphi_S(x) = \sin(\pi x)/(\pi x)$ is much simpler than Meyer's, it cannot be used in our discussion, because the Shannon function does not belong to $L^1(\mathbb{R})$, while our theorems cover the case $p = 1$.
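The failure of the Shannon function to lie in $L^1(\mathbb{R})$ can be checked numerically: the partial integrals $\int_0^T |\sin(\pi x)/(\pi x)|\,dx$ grow like $(2/\pi^2)\ln T$, so each decade of $T$ adds roughly the same mass and the integral diverges. A quick numerical sketch (ours, not from the paper), using NumPy's normalized `sinc`:

```python
import numpy as np

def l1_mass(T, pts_per_unit=400):
    """Midpoint-rule approximation of the integral of |sin(pi x)/(pi x)| over (0, T).

    np.sinc is the normalized sinc, sin(pi x)/(pi x)."""
    x = np.arange(0.5 / pts_per_unit, T, 1.0 / pts_per_unit)
    return np.abs(np.sinc(x)).sum() / pts_per_unit

# Each decade contributes about (2/pi^2) * ln(10) ~ 0.467 -- no decay, so no L^1 bound.
masses = [l1_mass(T) for T in (10, 100, 1000)]
```

The nearly equal per-decade increments confirm logarithmic growth; by contrast, the Meyer scaling function has rapid decay, so the same partial integrals would converge.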
Finally, it should be pointed out that we assume the independence of the observations $Y_1, Y_2, \ldots, Y_n$ in this paper. However, dependent data are more important (and, of course, more complicated) in practice. We will investigate that case in future work.