Uniform Bounds of Aliasing and Truncation Errors in Sampling Series of Functions from the Anisotropic Besov Class

Errors appear when the Shannon sampling series is applied to approximate a signal in practice: a signal may not be bandlimited, the sampling series may have to be truncated, and the sampled values may be inexact or quantized. In this paper, we truncate the multidimensional Shannon sampling series via localized sampling and obtain uniform bounds on the aliasing and truncation errors for functions from the anisotropic Besov class without any decay assumption. The bounds are optimal up to a logarithmic factor. Moreover, we derive the corresponding results for the case in which the sampled values are given by a linear functional and its integer translations. Finally, we give some applications.


Introduction
Since Shannon introduced the sampling series in the landmark paper [1], the Shannon sampling theorem has been a fundamental result in information theory, in particular in telecommunications and signal processing; see [2][3][4][5][6][7] and the references therein. The theorem states that a bandlimited signal can be exactly recovered from an infinite sequence of its samples if the bandlimit is no greater than half the sampling rate, and it leads to a formula for reconstructing the original function from its samples. When the function is not bandlimited, the reconstruction exhibits imperfections known as aliasing. Moreover, in practice the sampled values are not the exact functional values. So several types of errors, such as aliasing errors, truncation errors, jitter errors, and amplitude errors, appear when the Shannon sampling series is applied to approximate a signal in real life. These types of errors have been widely studied under the assumption that signals satisfy some decay condition at infinity; see [8][9][10][11][12][13]. On the other hand, one can avoid assumptions on the decay rate of the initial signals by using localized sampling; see [14][15][16][17][18][19].
Recently, uniform bounds for the truncated Shannon series based on localized sampling have been derived for nonbandlimited functions from Sobolev classes without decay assumptions; see [18,19]. In this paper we study errors in the truncated multivariate Shannon sampling series via localized sampling for nonbandlimited functions from anisotropic Besov classes.
It is well known that the sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. The multivariable sampling theorem can be used in the reconstruction of some types of images such as gray-scale images.
We begin our discussion with the definitions of some function spaces. Let $L_p(\mathbb{R}^d)$, $1 \le p \le \infty$, be the space of all $p$th power Lebesgue integrable functions on $\mathbb{R}^d$, equipped with the usual norm $\|f\|_p$ for $1 \le p < \infty$ and the essential supremum norm for $p = \infty$. Set $N_d = \{1, 2, \ldots, d\}$. For any vector $\mathbf{k} := (v_j : j \in N_d)$ with positive coordinates we say an entire function $h$ is of exponential type $\mathbf{k}$ provided that for every $\varepsilon > 0$ there exists a positive number $C_\varepsilon$ such that for all complex vectors $\mathbf{z} := (z_j : j \in N_d) \in \mathbb{C}^d$ we have the bound
$$|h(\mathbf{z})| \le C_\varepsilon \exp\Big(\sum_{j=1}^{d} (v_j + \varepsilon)\,|z_j|\Big).$$
Denote by $B_{\mathbf{k}}(\mathbb{C}^d)$ the space of all entire functions of exponential type $\mathbf{k}$. Let $B_{\mathbf{k}}(\mathbb{R}^d)$ be the subset of functions in $B_{\mathbf{k}}(\mathbb{C}^d)$ that are bounded on $\mathbb{R}^d$, and set $B^p_{\mathbf{k}}(\mathbb{R}^d) := B_{\mathbf{k}}(\mathbb{C}^d) \cap L_p(\mathbb{R}^d)$. Every vector $\mathbf{k} = (v_j : j \in N_d) \in \mathbb{R}^d_+$ determines the rectangle
$$I_{\mathbf{k}} := \{\mathbf{t} \in \mathbb{R}^d : |t_j| \le v_j,\ j \in N_d\}.$$
According to the Schwartz theorem [20],
$$B^p_{\mathbf{k}}(\mathbb{R}^d) = \{f \in L_p(\mathbb{R}^d) : \operatorname{supp} \hat f \subseteq I_{\mathbf{k}}\},$$
where $\hat f$ is the Fourier transform of $f$ in the sense of distributions. For the case $p = 2$, this is the classical Paley-Wiener theorem. Now we define the anisotropic Besov space. Suppose that $l \in \mathbb{N}$ and $\mathbf{t} \in \mathbb{R}^d$. For $f \in L_p(\mathbb{R}^d)$, we define the $l$th partial difference of $f$ in the $j$th coordinate direction $\mathbf{e}_j$ at the point $\mathbf{t} \in \mathbb{R}^d$ with step $s \in \mathbb{R}$ by the formula
$$\Delta^{l}_{s,j} f(\mathbf{t}) := \sum_{i=0}^{l} (-1)^{l-i} \binom{l}{i} f(\mathbf{t} + i s\, \mathbf{e}_j).$$
Let $\mathbf{l} = (l_1, \ldots, l_d) \in \mathbb{N}^d$, $\mathbf{r} = (r_1, \ldots, r_d) \in \mathbb{R}^d_+$, with $l_j > r_j$ for $j \in N_d$, and $1 \le p, \theta \le \infty$. We say $f \in B^{\mathbf{r}}_{p\theta}(\mathbb{R}^d)$ if $f \in L_p(\mathbb{R}^d)$ and the following seminorm is finite:
$$|f|_{B^{\mathbf{r}}_{p\theta}} := \sum_{j=1}^{d} \Big(\int_0^\infty \big(s^{-r_j}\,\|\Delta^{l_j}_{s,j} f\|_p\big)^{\theta}\, \frac{ds}{s}\Big)^{1/\theta},$$
with the usual modification for $\theta = \infty$. The linear space $B^{\mathbf{r}}_{p\theta}(\mathbb{R}^d)$ is a Banach space with the norm $\|f\|_{B^{\mathbf{r}}_{p\theta}} := \|f\|_p + |f|_{B^{\mathbf{r}}_{p\theta}}$ and is called an anisotropic Besov space. We introduce the quantity
$$g(\mathbf{r}) := \Big(\sum_{j=1}^{d} \frac{1}{r_j}\Big)^{-1},$$
which plays an important role in our error estimates. In this paper, we assume $g(\mathbf{r}) > 1/p$, which ensures that $B^{\mathbf{r}}_{p\theta}(\mathbb{R}^d)$ is embedded into the space of continuous functions by a Sobolev-type embedding theorem, and therefore function values are well defined; see [20]. Now we explain why we choose Besov spaces as the hypothesis function spaces, that is, why we assume the signals come from Besov spaces. First, in studying the aliasing errors for nonbandlimited functions, one often replaces the strong bandlimitedness assumption with Lipschitz or Sobolev regularity. In this way one can derive reasonable convergence rates as the spacing between sampling points tends to zero.
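The partial differences entering the Besov seminorm above can be evaluated directly from the binomial expansion of the difference operator. The following sketch (the function name and the test functions are our own illustrative choices) computes the $l$th partial difference of a function of several variables in one coordinate direction:

```python
import math
import numpy as np

def partial_difference(f, t, step, j, order):
    """order-th partial difference of f in coordinate direction j at point t
    with the given step, via the binomial expansion
    Delta^l f(t) = sum_{i=0}^{l} (-1)^(l-i) C(l, i) f(t + i*step*e_j)."""
    t = np.asarray(t, dtype=float)
    e_j = np.zeros_like(t)
    e_j[j] = 1.0  # unit vector in the j-th coordinate direction
    return sum(
        (-1) ** (order - i) * math.comb(order, i) * f(t + i * step * e_j)
        for i in range(order + 1)
    )
```

For instance, for $f(\mathbf{t}) = t_1^2$ the second difference in the first direction with step $h$ equals exactly $2h^2$, as the second difference of a quadratic reproduces $h^2 f''$.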
However, the aliasing and truncation errors under localized sampling for these nonbandlimited functions have not been thoroughly studied. In particular, errors in localized sampling approximation with measured sampled values for these spaces of functions have never been considered. In this paper, using tools from the study of mean $n$-dimension widths for Besov classes and the related embedding theorems, we can treat anisotropic Besov spaces, which include Lipschitz and Sobolev spaces as special cases. Thus our results immediately yield results for these two types of hypothesis function spaces; the results for these two spaces are also new. On the other hand, from the viewpoint of approximation theory it is worthwhile to study the Besov class, since the best possible orders of approximation by bandlimited functions are known for Besov classes from the corresponding results of mean dimension width theory. Note that a convergent Shannon series is a bandlimited function, so it is natural to ask whether the Shannon interpolation formula can realize the best approximation for these spaces. In what follows we give an affirmative answer to this question.
For later use we also recall the classical Sobolev space $W^m_p(\mathbb{R}^d)$, which consists of functions $f \in L_p(\mathbb{R}^d)$ such that for every multi-index $\mathbf{l} = (l_1, \ldots, l_d) \in \mathbb{N}^d$ with $|\mathbf{l}| = \sum_{j=1}^{d} l_j \le m$, the distributional partial derivative $D^{\mathbf{l}} f$ belongs to $L_p(\mathbb{R}^d)$. The remainder of this paper is organized as follows. In Section 2, we consider errors in the truncated Shannon sampling series with exact functional values based on localized sampling. In Section 3, we first generalize part of the results of Section 2 to sampling series with measured sampled values and then give some applications.
In what follows, let $\mathbf{k}$, $\mathbf{t}$, and so forth denote vector variables in $\mathbb{R}^d$, and write $\mathbf{t}/\mathbf{k} := (t_1/v_1, \ldots, t_d/v_d)$ and $\mathbf{k} \cdot \mathbf{t} := (v_1 t_1, \ldots, v_d t_d)$. We use the same symbol $C$ for possibly different positive constants; these constants are independent of $N \in \mathbb{N}$ and $\mathbf{k} \in \mathbb{R}^d_+$. Denote by $[x]$ the largest integer not exceeding $x$.

The Exact Functional Values Case
The famous Shannon sampling theorem states that every function $f \in B^2_{v}(\mathbb{R})$ can be completely reconstructed from its sampled values taken at the instances $\{n/v\}_{n \in \mathbb{Z}}$ (cf. [1]). In this case the representation of $f$ is given by
$$f(t) = \sum_{n \in \mathbb{Z}} f\Big(\frac{n}{v}\Big)\,\operatorname{sinc}(v t - n), \qquad (11)$$
where $\operatorname{sinc}(t) = \sin(\pi t)/(\pi t)$ for $t \ne 0$ and $\operatorname{sinc}(0) = 1$. Series (11) converges absolutely and uniformly on $\mathbb{R}$.
The theorem extends to several variables: every $f \in B^2_{\mathbf{k}}(\mathbb{R}^d)$ admits the expansion
$$f(\mathbf{t}) = \sum_{\mathbf{k} \in \mathbb{Z}^d} f(\mathbf{k}/\mathbf{k})\,\operatorname{sinc}(\mathbf{k} \cdot \mathbf{t} - \mathbf{k}), \qquad (12)$$
where $\operatorname{sinc}(\mathbf{t}) = \prod_{j=1}^{d} \operatorname{sinc}(t_j)$. The series on the right-hand side of (12) converges absolutely and uniformly on $\mathbb{R}^d$.
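As a quick numerical illustration of expansion (11) in one dimension, the following sketch (function names are our own; `np.sinc` implements exactly the normalized sinc used above) reconstructs a bandlimited function from its samples with a symmetric truncation of the series:

```python
import numpy as np

def shannon_reconstruct(f, t, v, N):
    """Approximate f(t) by the symmetrically truncated Shannon series
    sum_{|n| <= N} f(n/v) sinc(v*t - n), where np.sinc is the normalized
    sinc(x) = sin(pi x)/(pi x)."""
    n = np.arange(-N, N + 1)
    return float(np.sum(f(n / v) * np.sinc(v * t - n)))

# np.sinc(t)**2 is bandlimited (its Fourier transform is a triangle
# supported on [-2*pi, 2*pi]), so sampling rate v = 2 suffices.
f = lambda t: np.sinc(t) ** 2
approx = shannon_reconstruct(f, 0.3, 2.0, 2000)
```

Because this test function decays like $1/t^2$, the symmetric truncation converges; the point of the later sections is precisely to drop such decay assumptions via localized sampling.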
Shannon's expansion requires knowledge of the exact values of a signal at infinitely many points and the summation of an infinite series. In practice, only finitely many samples are available, and hence the symmetric truncation error has been widely studied under the assumption that $f$ satisfies some decay condition. Among others, in [11] uniform truncation error bounds are determined for $f \in B^2_{v}(\mathbb{R})$ under a decay condition. In [12] uniform bounds on the truncation and aliasing errors are derived for functions belonging to the Besov class $B^{\mathbf{r}}_{\infty\theta}(\mathbb{R}^d)$ under the same decay condition as in [11]. Since these results motivated our work, we restate them as follows. Throughout the paper we denote the unit ball of the space $B^{\mathbf{r}}_{p\theta}(\mathbb{R}^d)$ by $U(B^{\mathbf{r}}_{p\theta}(\mathbb{R}^d))$.
Theorem B (see [12]). Let $f \in U(B^{\mathbf{r}}_{\infty\theta}(\mathbb{R}^d))$, $1 \le \theta \le \infty$, and $\mathbf{r} \in \mathbb{R}^d_+$, and suppose $f$ satisfies the decay condition
$$|f(\mathbf{t})| \le \frac{A}{(1 + |\mathbf{t}|_2)^{\delta}}, \qquad (14)$$
where $A > 0$ and $0 < \delta \le 1$ are constants and $|\mathbf{t}|_2 = (t_1^2 + \cdots + t_d^2)^{1/2}$. Then the uniform bound of the aliasing error established in [12] holds.

Theorem C (see [12]). Let $f \in U(B^{\mathbf{r}}_{\infty\theta}(\mathbb{R}^d))$, $1 \le \theta \le \infty$, satisfy the decay condition (14). Then the uniform bound of the truncation error established in [12] holds for any $N \in \mathbb{N}$.

Now we truncate the series on the right-hand side of (12) using localized sampling. That is, to estimate $f(\mathbf{t})$ we sum only over the values of $f$ on the part of $\mathbb{Z}^d/\mathbf{k}$ near $\mathbf{t}$. Thus for any $N \in \mathbb{N}$ we consider the finite sum
$$(S_{\mathbf{k},N} f)(\mathbf{t}) := \sum_{|k_j - [v_j t_j]| \le N,\; j \in N_d} f(\mathbf{k}/\mathbf{k})\,\operatorname{sinc}(\mathbf{k} \cdot \mathbf{t} - \mathbf{k}) \qquad (17)$$
as an approximation to $f(\mathbf{t})$. In this way we can derive uniform bounds for the associated truncation and aliasing errors without any assumption about the decay of $f \in U(B^{\mathbf{r}}_{p\theta}(\mathbb{R}^d))$.
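The localized truncation just described keeps only the samples nearest the evaluation point. A minimal one-dimensional sketch (our own illustrative code, using the index window $|n - [vt]| \le N$ as in the finite sum above):

```python
import numpy as np

def localized_shannon(f, t, v, N):
    """Localized truncation of the Shannon series: sum only over the
    2N+1 indices n with |n - [v*t]| <= N, i.e. the samples nearest t."""
    center = int(np.floor(v * t))  # [v*t], the integer part
    n = np.arange(center - N, center + N + 1)
    return float(np.sum(f(n / v) * np.sinc(v * t - n)))
```

Enlarging the window $N$ shrinks the error, and the choice of window uses no information about the decay of $f$ at infinity, which is the point of localized sampling.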
Our main result of this section is the following uniform bound on the aliasing error.

Theorem 1. Let $f \in U(B^{\mathbf{r}}_{p\theta}(\mathbb{R}^d))$ with $1 < p < \infty$, $1 \le \theta \le \infty$, and $l_j > r_j$ for $j \in N_d$. For $N > 1$, define $\mathbf{k}$ in the same manner as in Theorem B; then one has
$$\|f - S_{\mathbf{k},N} f\|_{\infty} \le C\, N^{-g(\mathbf{r}) + 1/p} \ln N.$$

We first note that, thanks to the localized sampling, the function $f$ in Theorem 1 need not satisfy any decay assumption at infinity. Next we comment on the bound $N^{-g(\mathbf{r})+1/p} \ln N$. It is known from the results on mean $n$-dimension Kolmogorov widths of the Besov class $U(B^{\mathbf{r}}_{p\theta}(\mathbb{R}^d))$ that the order $N^{-g(\mathbf{r})+1/p}$ cannot be improved; thus the bound in Theorem 1 is optimal up to the logarithmic factor $\ln N$; see [21]. As a consequence of Theorem 1, we show that, using the truncated sampling series (17), we can still achieve this near-optimal bound.
To prove Theorem 1 we choose an intermediate function that is a good approximation of both $f$ and $S_{\mathbf{k},N} f$. We now describe how to choose this function; for more details, see [21, 22].
For any positive real number $\lambda > 0$, we define a kernel function whose normalizing constant is chosen so that the kernel integrates to one over $\mathbb{R}$.
For a real parameter and each coordinate direction $j \in N_d$, we define the corresponding univariate kernels, observe the relations in formulas (25) and (26), and introduce the associated smoothing operator. The resulting intermediate function $f_{\mathbf{u}}$ is bandlimited: it is known from [20] that $f_{\mathbf{u}} \in B^2_{\mathbf{u}}(\mathbb{R}^d)$. We will exploit the following properties of $f_{\mathbf{u}}$ in the proof of Theorem 1.
We also need the following bound for the localized sinc sum $\sum_{\mathbf{k}} |\operatorname{sinc}(\mathbf{k} \cdot \mathbf{t} - \mathbf{k})|$ taken over the truncated index set.
Proof of Theorem 2. By the triangle inequality, we decompose the error. By arguments similar to those used in the proof of Theorem 1, we obtain the required estimate, where we use the parameter choice $[g(\mathbf{r})] + 1$ in the last inequality. Combining Theorem 1 and (58), we complete the proof of Theorem 2.

The Measured Sampled Values Case
In practice, the sampled values of a signal may not be the exact functional values and may have to be quantized. Typical errors arising from these facts are jitter errors and amplitude errors. Following the key idea of quasi-interpolation, which uses integer translations of a basic function together with integer translations of a linear functional to approximate functions (see [8, 25] and the references therein), we consider sampled values that are the results of a linear functional and its integer translations acting on the underlying signal [4, 25]. Such sampled values are called measured sampled values because they are closer to the true measurements taken from a signal. The sampling series with measured sampled values is defined to be
$$(S^{\lambda}_{\mathbf{k}} f)(\mathbf{t}) := \sum_{\mathbf{k} \in \mathbb{Z}^d} \lambda_{\mathbf{k}}(f)\,\operatorname{sinc}(\mathbf{k} \cdot \mathbf{t} - \mathbf{k}),$$
where $\lambda = \{\lambda_{\mathbf{k}}\}_{\mathbf{k} \in \mathbb{Z}^d}$ is any sequence of continuous linear functionals from $C_0(\mathbb{R}^d)$ to $\mathbb{C}$, with $C_0(\mathbb{R}^d)$ being the set of all continuous functions defined on $\mathbb{R}^d$ that tend to zero at infinity.
Analogously to the definition of $(S_{\mathbf{k},N} f)(\mathbf{t})$, we form the finite sum $(S^{\lambda}_{\mathbf{k},N} f)(\mathbf{t})$ and the corresponding truncation error. To establish our theorems we need the error modulus
$$\Omega_{\mathbf{k}}(\lambda, f) := \sup_{\mathbf{k} \in \mathbb{Z}^d} |\lambda_{\mathbf{k}}(f) - f(\mathbf{k}/\mathbf{k})|.$$
We write $\Omega(\lambda, f)$ for $\Omega_{\mathbf{k}}(\lambda, f)$ if no confusion arises. The error modulus $\Omega(\lambda, f)$ quantifies the quality of the signal's measured sampled values. When the functionals in $\lambda$ are concrete, we may obtain reasonable estimates for $\Omega(\lambda, f)$.
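A concrete way to compute the error modulus empirically is to pick a specific family of functionals, for example local averages around the sampling points, and take the supremum of the deviation from the exact values over a finite index range. The following sketch (the function names, the averaging window, and the quadrature are our own illustrative choices) does exactly that:

```python
import numpy as np

def average_sample(f, k, v, delta, m=2001):
    """A concrete measured-value functional lambda_k: the average of f
    over the window [k/v - delta, k/v + delta], via the trapezoid rule."""
    x = np.linspace(k / v - delta, k / v + delta, m)
    y = f(x)
    dx = x[1] - x[0]
    return (np.sum(y) - 0.5 * (y[0] + y[-1])) * dx / (2 * delta)

def error_modulus(f, v, delta, indices):
    """Empirical error modulus: sup over the given indices k of
    |lambda_k(f) - f(k/v)|."""
    return max(abs(average_sample(f, k, v, delta) - f(k / v)) for k in indices)
```

For a smooth signal the averages differ from the exact values by an amount controlled by the window width, which is the kind of estimate for $\Omega(\lambda, f)$ used in the applications below.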
Sampling series with measured sampled values have been studied in [8] for bandlimited functions, but without truncation. Truncation errors are considered in [13] for functions from the Lipschitz class under a decay condition. We now recall a typical result from [13].
In [9] the author obtains a uniform bound on the symmetric truncation error for functions from an isotropic Besov space under a similar decay condition. We now estimate the truncation error without any assumption about the decay of $f \in B^{\mathbf{r}}_{p\theta}(\mathbb{R}^d)$. Proof. By the triangle inequality, we decompose the error. Similar to (52), we bound the first term. Using Hölder's inequality we obtain the next bound, where $1/p_0 + 1/q_0 = 1$. Now we select the same parameters as in the proof of Theorem 1. Similar to (55), we obtain the estimate of order $N^{-g(\mathbf{r})}$. A simple computation handles the factor $\big(\prod_{j=1}^{d}(2N+1)\big)^{1/p_0}$, together with the logarithmic term bounded by $C \ln N$. Notice that $\Omega(\lambda, f) \le c_0 N^{-g(\mathbf{r})+1/p}$. Collecting these results, we obtain the stated bound. It follows from Theorem 1, (72), and (73) that the theorem holds, which completes the proof.
Finally, we apply Theorem 10 to some practical examples. The first is the case in which the measured sampled values are given by local averages of a function. For $f \in C_0(\mathbb{R}^d)$ we define the modulus of continuity
$$\omega(f, \delta) := \sup_{|\mathbf{t}|_2 \le \delta} \|f(\cdot + \mathbf{t}) - f\|_{\infty},$$
where $\delta$ may be any positive number.
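The modulus of continuity just defined can be approximated numerically by taking the supremum over a finite grid of points and shifts. The following sketch (our own illustrative code; grid and shift resolutions are arbitrary choices) does this in one dimension:

```python
import numpy as np

def modulus_of_continuity(f, delta, grid, n_shifts=41):
    """Empirical modulus of continuity: approximate
    sup_{|s| <= delta} sup_x |f(x + s) - f(x)|
    by maximizing over a finite grid of points x and shifts s."""
    shifts = np.linspace(-delta, delta, n_shifts)
    return max(float(np.max(np.abs(f(grid + s) - f(grid)))) for s in shifts)
```

For $f = \sin$ the exact value is $\omega(f, \delta) = 2\sin(\delta/2)$, and the grid approximation comes out slightly below it, which illustrates how such moduli control the quality of averaged sampled values.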
The second example is an estimate for the combination of all four errors arising in sampling series: the amplitude error, the time-jitter error, the truncation error, and the aliasing error. We first explain the amplitude error and the time-jitter error.
We assume the amplitude error results from quantization, meaning that the functional value $f(t)$ of a function $f$ at the moment $t$ is replaced by the nearest discrete value or machine number $\tilde f(t)$. The quantization size is often known beforehand or can be chosen arbitrarily. We may assume that the local error at any moment is bounded by a constant $\varepsilon > 0$; that is, $|f(t) - \tilde f(t)| \le \varepsilon$. The time-jitter error arises if the sampling instances are not met exactly but differ from the exact ones by $\tau_k$, $k \in \mathbb{Z}$; we assume $|\tau_k| \le \tau$ for all $k$ and some constant $\tau > 0$. The combined error is defined in (80).

Corollary 12. Let $f \in U(B^{r}_{p\theta}(\mathbb{R}))$, $1 < p < \infty$, $1 \le \theta \le \infty$, $r > 1$, and $v > 1$. Then the combined error bound holds.

Proof. We define $\lambda_k$ by pairing $\tilde f$ with a shifted Dirac distribution, where $\delta$ is the Dirac distribution. Then $\lambda = \{\lambda_k\}_{k \in \mathbb{Z}}$ is a sequence of linear functionals on $C_0(\mathbb{R})$. It is clear that $\lambda_k(f) = \tilde f(k/v + \tau_k)$. Thus $\Omega(\lambda, f) \le c_0 v^{-r+1/p}$. By Theorem 10 we get the desired result.
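The amplitude and time-jitter errors described above are easy to simulate. The following sketch (our own illustrative code; the quantization step, jitter bound, and test signal are assumptions, not the paper's data) builds the localized Shannon sum from samples corrupted by both quantization and jitter, so its deviation from the true value combines all four error sources:

```python
import numpy as np

def quantize(values, eps):
    """Replace each sample by the nearest multiple of the quantization
    step eps, so each amplitude error is at most eps/2."""
    return np.round(values / eps) * eps

def corrupted_localized_sum(f, t, v, N, eps, tau_max, rng):
    """Localized Shannon sum from corrupted samples: each sample is taken
    at a jittered instant n/v + tau_n with |tau_n| <= tau_max and then
    quantized with step eps; the truncation keeps |n - [v*t]| <= N."""
    center = int(np.floor(v * t))
    n = np.arange(center - N, center + N + 1)
    tau = rng.uniform(-tau_max, tau_max, size=n.shape)
    samples = quantize(f(n / v + tau), eps)
    return float(np.sum(samples * np.sinc(v * t - n)))
```

With small quantization step and jitter bound, the combined deviation stays of the order of the per-sample corruption multiplied by the (logarithmically growing) sum of sinc magnitudes, in line with the combined error estimate of Corollary 12.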