1. Introduction
Since Shannon introduced the sampling series in the landmark paper [1], the Shannon sampling theorem has been a fundamental result in information theory, in particular in telecommunications and signal processing; see [2–7] and the references therein. The theorem states that a bandlimited signal can be exactly recovered from an infinite sequence of its samples if the bandlimit is no greater than half the sampling rate. The theorem also leads to a formula for reconstructing the original function from its samples. When the function is not bandlimited, the reconstruction exhibits imperfections known as aliasing. Moreover, in practice the available sampled values are not the exact functional values. Hence several types of errors, such as aliasing errors, truncation errors, jitter errors, and amplitude errors, appear when the Shannon sampling series is applied to approximate a real-life signal. These types of errors have been widely studied under the assumption that signals satisfy some decay conditions at infinity; see [8–13]. On the other hand, one can avoid assumptions on the decay rate of the initial signals by using localized sampling; see [14–19]. Recently, uniform bounds for truncated Shannon series based on local sampling were derived for nonbandlimited functions from Sobolev classes without decay assumptions; see [18, 19]. In this paper we study errors in truncated multivariable Shannon sampling series via localized sampling by considering nonbandlimited functions from anisotropic Besov classes.
It is well known that the sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. The multivariable sampling theorem can be used in the reconstruction of some types of images such as gray-scale images.
We begin our discussion with the definitions of some function spaces. Let Lp(ℝd), 1≤p≤∞, be the space of all pth power Lebesgue integrable functions on ℝd equipped with the usual norm
(1)∥f∥Lp:=(∫ℝd|f(t)|pdt)1/p
for 1≤p<∞ and
(2)∥f∥L∞:=ess supt∈ℝd|f(t)|.
Set Zd={1,2,…,d}. For any vector v:=(vj:j∈Zd) with positive coordinates we say an entire function h is of exponential type v provided that for every ɛ>0 there exists a positive number c such that for all complex vectors z:=(zj:j∈Zd)∈ℂd we have the bound
(3)|h(z)|≤cexp(∑j∈Zd(vj+ɛ)|zj|).
Denote by Ev(ℂd) the space of all entire functions of exponential type v. Let Bv(ℝd) be the subset of functions in Ev(ℂd) that are bounded on ℝd. Set
(4)Bvp(ℝd):=Bv(ℝd)∩Lp(ℝd), 1≤p≤∞.
Every vector v=(vj:j∈Zd)∈ℝ+d determines the rectangle
(5)Ivd:=∏j∈Zd[-vj,vj].
According to the Schwartz theorem [20],
(6)Bvp(ℝd)={f∈Lp(ℝd):supp f^⊆Ivd},
where f^ is the Fourier transform of f in the sense of distributions. For p=2 this is the classical Paley-Wiener theorem.
Now we define anisotropic Besov space. Suppose that k∈N and t∈ℝd. For f∈Lp(ℝd), we define the kth partial difference of f in the lth coordinate direction el at the point t∈ℝd with step xl∈ℝ by the formula
(7)Δxlkf(t)=∑i=0k(-1)i+k(ki)f(t+ixlel).
Let l=(l1,…,ld)∈ℕd, r=(r1,…,rd)∈ℝ+d with li>ri for i∈Zd, and let 1≤p,θ≤∞. We say f∈Bpθr(ℝd) if f∈Lp(ℝd) and the following seminorms are finite:
(8)∥f∥bpθri:={{∫ℝ(∥Δxilif∥Lp/|xi|ri+1/θ)θdxi}1/θ, 1≤θ<∞, sup|xi|≠0∥Δxilif∥Lp/|xi|ri, θ=∞,  i=1,…,d.
The linear space Bpθr(ℝd) is a Banach space with the norm
(9)∥f∥Bpθr:=∥f∥Lp+∑i∈Zd∥f∥bpθri
and is called an anisotropic Besov space. We introduce the notation g(r)=(∑i=1d(1/ri))-1 which plays an important role in our error estimates. In this paper, we assume g(r)>1/p, which ensures that Bpθr(ℝd) is embedded into C(ℝd) by a Sobolev-type embedding theorem, and therefore function values are well defined; see [20].
Now we explain why we choose Besov spaces as the hypothesis function spaces, that is, why we assume the signals come from Besov spaces. First, in studying aliasing errors for nonbandlimited functions, one often replaces the strong bandlimitedness assumption by Lipschitz or Sobolev regularity. In this way one can derive reasonable convergence rates as the sampling period tends to zero. However, the aliasing and truncation errors by local sampling for such nonbandlimited functions have not been thoroughly studied. In particular, errors of localized sampling approximation with measured sampled values have never been considered for these spaces. In this paper, using tools from the study of mean σ-dimension widths for Besov classes and the related embedding theorems, we consider anisotropic Besov spaces, which include Lipschitz and Sobolev spaces as special cases. Thus our results immediately yield results for these two types of hypothesis function spaces, and the results for these two spaces are also novel. On the other hand, from the viewpoint of approximation theory it is worthwhile to study Besov classes, since the best possible orders of approximation by bandlimited functions are known for them from the corresponding results of mean σ-dimension width theory. Note that a convergent Shannon series is a bandlimited function, so it is natural to ask whether the Shannon interpolation formula can realize the best approximation for these spaces. In what follows we give an affirmative answer to this question.
By the way, for later use we recall the classical Sobolev space Wpr(ℝd), which consists of functions f∈Lp(ℝd) such that for every multi-index vector l=(l1,…,ld)∈ℕd with |l|=∑j=1dlj≤r, the distributional partial derivative
(10)∂lf:=∂|l|f/∂x1l1⋯∂xdld
belongs to Lp(ℝd).
The remaining part of this paper is organized as follows. In Section 2, we consider errors in truncated Shannon sampling series with exactly functional values based on localized sampling. In Section 3 we firstly generalize part of the results in Section 2 to the sampling series with measured sampled values and then give some applications.
In what follows, let k, t, and so forth denote vector variables living in ℝd, and write k/v:=(k1/v1,…,kd/vd) and v·t:=(v1t1,…,vdtd). We use the same symbol C for possibly different positive constants. These constants are independent of N∈ℕd and v∈ℝ+d. Denote by [x] the largest integer not exceeding x.
2. The Exactly Functional Values Case
The famous Shannon sampling theorem states that every function f∈Bv2(ℝ) can be completely reconstructed from its sampled values taken at the instances {k/v}k∈ℤ (cf. [1]). In this case the representation of f is given by
(11)f(t)=(Svf)(t)=∑k=-∞+∞f(kv)sinc(vt-k),
where sinc(t)=sin(πt)/(πt) for t≠0 and sinc(0)=1. Series (11) converges absolutely and uniformly on ℝ.
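As a quick numerical illustration of formula (11) (not part of the original analysis), the following sketch reconstructs a bandlimited test signal from its samples. NumPy's np.sinc is the normalized sinc(t)=sin(πt)/(πt) used here; the test signal f(t)=sinc(t)² and the rate v=4 are illustrative assumptions.

```python
import numpy as np

def shannon_series(f, v, t, K):
    # Truncated version of (11): sum over |k| <= K of f(k/v) * sinc(v*t - k).
    ks = np.arange(-K, K + 1)
    return float(np.sum(f(ks / v) * np.sinc(v * t - ks)))

# Illustrative signal: f(t) = sinc(t)^2 is bandlimited, so sampling at
# rate v = 4 is above Nyquist and the full series (11) holds exactly.
f = lambda t: np.sinc(t) ** 2
v, t = 4.0, 0.3
approx = shannon_series(f, v, t, K=200)
err = abs(approx - f(t))
assert err < 1e-3
```

Because f here is bandlimited, the only error is the truncation at |k|≤K, which shrinks as more terms are kept.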
In [10], the authors established a multidimensional Shannon sampling theorem by extending (11) to the case f∈Bvp(ℝd), 1<p<∞, and d>1. They obtained the following theorem.
Theorem A.
Let f∈Bvp(ℝd), 1≤p<∞. Then for any t∈ℝd
(12)f(t)=(Svf)(t):=∑k∈ℤdf(kv)sinc(v·t-k),
where sinc(t)=∏i=1dsinc(ti). The series on the right-hand side of (12) converges absolutely and uniformly on ℝd.
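The multivariate series (12) is the tensor product of one-dimensional sinc series. Below is a small sketch in d=2 with an illustrative separable bandlimited signal; all concrete values are assumptions made for the demo, not the paper's construction.

```python
import numpy as np

def shannon_2d(f, v, t, N):
    # Truncated version of (12) in d = 2: sum over k in [-N, N]^2 of
    # f(k1/v1, k2/v2) * sinc(v1*t1 - k1) * sinc(v2*t2 - k2).
    total = 0.0
    for k1 in range(-N, N + 1):
        for k2 in range(-N, N + 1):
            total += (f(k1 / v[0], k2 / v[1])
                      * np.sinc(v[0] * t[0] - k1) * np.sinc(v[1] * t[1] - k2))
    return total

f = lambda x, y: np.sinc(x) ** 2 * np.sinc(y) ** 2   # separable bandlimited test signal
v = (3.0, 3.0)
t = (0.25, -0.4)
approx2 = shannon_2d(f, v, t, N=60)
err2 = abs(approx2 - f(*t))
assert err2 < 1e-2
```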
Shannon's expansion requires us to know the exact values of a signal f at infinitely many points and to sum an infinite series. In practice, only finitely many samples are available, and hence the symmetric truncation error
(13)|f(t)-∑|kd|≤Nd⋯∑|k1|≤N1f(kv)sinc(v·t-k)|
has been widely studied under the assumption that f satisfies some decay condition. Among others, in [11] uniform truncation error bounds were determined for f∈Bv2(ℝ) under a decay condition. In [12] uniform bounds on the truncation error and the aliasing error were derived for functions belonging to the Besov class B∞θr(ℝd) under the same decay condition as in [11]. Since these results motivate our work, we restate them as follows. Throughout the paper we denote the unit ball of the space Bpθr(ℝd) by 𝒰(Bpθr(ℝd)).
Theorem B (see [12]).
Let f∈𝒰(B∞θr(ℝd)), 1≤θ≤∞, and r∈ℝ+d satisfy the decay condition
(14)|f(t)|≤A(1+|t|2)δ,
where A>0 and 0<δ≤1 are constants and |t|2=(t12 + ⋯ + td2)1/2. For σ>0 define the associated v=(v1,…,vd) by setting vj=σg(r)/rj for j∈Zd. If vi>(1/2)e2/δ for i∈Zd, then
(15)|f(t)-(Svf)(t)|≤C·σ-g(r)lndσ.
Theorem C (see [12]).
Let f∈𝒰(B∞θr(ℝd)), 1≤θ≤∞, satisfy the decay condition (14). Then for any N=(N1,…,Nd)∈ℕd with Ni>(1/2)e2/δ, i=1,…,d, one has
(16)|f(t)-∑|kd|≤Nd⋯∑|k1|≤N1f(kv)sinc(v·t-k)| ≤C(∑i=1dlnNi)d∑i=1dNi-ri/(1+(n/δ)ri).
Now we truncate the series on the right-hand side of (12) based on localized sampling. That is, to estimate f(t) we sum only over the values of f taken on a part of ℤd/v near t. Thus for any N∈ℕd we consider the finite sum
(17)(Sv,Nf)(t):=∑k-v·t∈INdf(kv)sinc(v·t-k)
as an approximation to f(t). In this way we can derive the uniform bounds for the associated truncation error
(18)(Ev,Nf)(t):=|f(t)-(Sv,Nf)(t)|
and aliasing error
(19)|f(t)-(Svf)(t)|
without any assumption about the decay of f∈𝒰(Bpθr(ℝd)).
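To make the localized truncation (17) concrete, here is a small sketch in d=1: the summation window follows v·t, so the test signal needs no decay assumption tied to a fixed origin. The Gaussian signal (smooth but not bandlimited) and the parameters are illustrative assumptions.

```python
import numpy as np

def local_shannon(f, v, t, N):
    # (17) in d = 1: sum over the samples k with |v*t - k| <= N,
    # i.e. a window of roughly 2N + 1 samples centred at the point t.
    c = int(np.floor(v * t))
    ks = np.arange(c - N, c + N + 1)
    return float(np.sum(f(ks / v) * np.sinc(v * t - ks)))

f = lambda t: np.exp(-t * t)    # smooth but NOT bandlimited
v, N = 8.0, 200
errors = [abs(local_shannon(f, v, t, N) - f(t)) for t in (0.1, 5.7, -3.2)]
assert max(errors) < 1e-6
```

The error here is tiny because the Gaussian's spectrum above the rate v is negligible (aliasing) and the moving window captures essentially all significant samples (truncation).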
Our main result of this section is the following uniform bound of the aliasing error
(20)|f(t)-(Svf)(t)|.
Theorem 1.
Let f∈𝒰(Bpθr(ℝd)) with 1<p<∞, 1≤θ≤∞, and ri>d for i∈Zd. For σ>e, define v in the same manner as in Theorem B; then one has
(21)|f(t)-(Svf)(t)|≤C·σ-g(r)+1/plndσ.
We first note that, thanks to the localized sampling, the function in Theorem 1 need not satisfy any decay assumption at infinity. Next we comment on the bound σ-g(r)+1/plndσ. It is known from the results on mean σ-dimension Kolmogorov widths for the Besov class 𝒰(Bpθr(ℝd)) that
(22)infg∈Bvp(ℝd) supf∈𝒰(Bpθr(ℝd))∥f-g∥L∞≥Cσ-g(r)+1/p.
Thus the bound in Theorem 1 is optimal up to the logarithmic factor lndσ; see [21]. As a consequence of Theorem 1, we show that the truncated sampling series (17) still achieves this near-optimal bound.
Theorem 2.
Let f∈𝒰(Bpθr(ℝd)), with the same p, θ, and r as in Theorem 1. For σ>e, define v as in Theorem B. Then for N=(N1,…,Nd)∈ℕd with Ni=[(σg(r))p+1], one has
(23)(Ev,Nf)(t)≤C·σ-g(r)+1/plndσ.
To prove Theorem 1 we will choose an intermediate function which is a good approximation for both f and Svf. Now we describe how to choose this function. For more details, one can see [21, 22].
For any positive real number u>0, we define the function
(24)gu(x):=As(sinc(π-1ux))2s, x∈ℝ, 2s>1,
where the constant As is taken such that ∫ℝgu(x)dx=1.
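For s=1 the normalization in (24) can be computed in closed form: since ∫ℝ(sin(ux)/(ux))2dx=π/u, one gets A1=u/π. A quick numerical sanity check (the value u=2 is an arbitrary choice):

```python
import numpy as np

u = 2.0
# Riemann sum of g_u(x)/A_1 = sinc(u*x/pi)^2 = (sin(u*x)/(u*x))^2 over a
# large interval; the exact value of the integral is pi/u.
x = np.linspace(-500.0, 500.0, 2_000_001)
dx = x[1] - x[0]
integral = float(np.sum(np.sinc(u * x / np.pi) ** 2) * dx)
assert abs(integral - np.pi / u) < 5e-3
A1 = 1.0 / integral          # normalizing constant making g_u integrate to 1
assert abs(A1 - u / np.pi) < 5e-3
```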
Suppose uj>0, j∈Zd. For any f∈Bp,θr(ℝd), set
(25)(Tujsf)(t) ≔∫ℝguj(xj)((-1)lj+1(Δxjljf)(t)+f(t))dxj =∫ℝguj(xj)∑i=1ljdif(t1,…,tj-1,tj+ixj,tj+1,…,td)dxj,
where di:=(-1)i+1(lji), so that ∑i=1ljdi=1.
For x∈ℝ and j∈Zd, we let
(26)Guj(x):=∑i=1lj(di/i)guj(x/i)
and observe from formulas (25) and (26), after substituting x=ixj term by term, that Tujs has the alternative representation
(27)(Tujsf)(t):=∫ℝGuj(xj)f(t+xjej)dxj, t∈ℝd.
We define the value of a kernel Gu at x:=(x1,x2,…,xd) by
(28)Gu(x):=∏i∈ZdGui(xi)
and introduce the operator
(29)Tus:=Tu1s∘⋯∘Tuds.
Consequently, Tus is given by
(30)(Tusf)(t)=∫ℝdGu(x)f(t+x)dx, t∈ℝd.
It is known from [20] that Tusf∈B2sup(ℝd). We will exploit the following properties of Tusf in the proof of Theorem 1.
Lemma 3.
Let f∈𝒰(Bpθr(ℝd)), 1≤p,θ≤∞. For σ>0, define u∈ℝd with uj=σg(r)/rj for j∈Zd; then one has
(31)∥f-Tusf∥L∞≤C·σ-g(r)+1/p.
Proof.
When p=∞, the inequality was proved in [12]. By the embedding relationship
(32)𝒰(Bpθr(ℝd))⊂C·𝒰(B∞θr′(ℝd)),
where r′=(1-(1/p)(∑j=1d(1/rj)))r (see [20] for more details), we can derive the corresponding inequality for 1≤p<∞ from the case p=∞.
Lemma 4 (see [22]).
If f∈Lp(ℝd), 1≤p≤∞, and u∈ℝ+d, then
(33)∥Tusf∥Lp≤2α∥f∥Lp,
where α:=∑i∈Zdli.
For 1≤p<∞, let lp(ℤd) be the Banach space of all p-summable sequences y={yk}k∈ℤd such that the norm
(34)∥y∥lp:=(∑k∈ℤd|yk|p)1/p
is finite.
Lemma 5 (see [10]).
Let y={yk}∈lp(ℤd), 1<p<∞. Then the series
(35)Lv(y,t):=∑k∈ℤdyk sinc(v·t-k)
converges uniformly on ℝd to a function in Bvp(ℝd).
We also need the following bound for the sinc series ∑k∈ℤd|sinc(v·t-k)|q.
Lemma 6 (see [11]).
Let d≥1, q>1, v=(v1,…,vd), and vi>1, i=1,…,d. Then for any t∈ℝd,
(36)(∑k∈ℤd|sinc(v·t-k)|q)1/q≤(qq-1)d.
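The bound (36) can be checked numerically in d=1 (with x=v·t). The test points and the choice q=2 below are arbitrary; for q=2 the sum is in fact the classical identity ∑k sinc(x-k)2=1, comfortably below the bound (q/(q-1))q=4.

```python
import numpy as np

def sinc_power_sum(x, q, K=100_000):
    # Left-hand side of (36) in d = 1 (x = v*t), truncated at |k| <= K.
    k = np.arange(-K, K + 1)
    return float(np.sum(np.abs(np.sinc(x - k)) ** q))

q = 2.0
rhs = (q / (q - 1.0)) ** q        # bound of (36) raised to the q-th power, d = 1
for x in (0.0, 0.25, 0.5, 2.7):
    assert sinc_power_sum(x, q) <= rhs + 1e-9
# for q = 2 the sum is identically 1 (up to the truncation tail)
assert abs(sinc_power_sum(0.3, 2.0) - 1.0) < 1e-4
```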
For f∈Bvp(ℝd), one has the following Marcinkiewicz-type inequality.
Lemma 7 (see [20, 23]).
Let f∈Bvp(ℝd), 1≤p<∞. Then one has
(37)(∏i=1dvi-1∑k∈ℤd|f(kv)|p)1/p≤C∥f∥Lp.
The next lemma presents a Marcinkiewicz-type inequality for functions from Sobolev spaces.
Lemma 8 (see [10]).
Let f∈Wpl(ℝd), 1≤p<∞, and l≥d. Then
(38)(∏i=1d1vi∑k∈ℤd|f(kv)|p)1/p ≤C(∥f∥Lp+∑i=1d1vi∥∂f∂xi∥Lp +∑1≤i≤j≤d1vi1vj∥∂2f∂xi∂xj∥Lp +⋯+∏i=1d1vi∥∂df∂x1⋯∂xd∥Lp).
Lemma 9 (see [20, 24]).
Let l=(l1,…,ld)∈ℕd, r=(r1,…,rd)∈ℝ+d, 1≤p, θ≤∞, 1-∑i=1dli/ri>0, and r′:=(1-∑i=1dli/ri)r. For f∈Bpθr(ℝd), it follows that there exists a constant C depending on r,r′,p, and θ but independent of f, such that
(39)∥∂|l|f∥Bpθr′≤C∥f∥Bpθr.
Proof of Theorem 1.
It is known from Lemma 9 (taking li=1 for i∈Zd) that f∈𝒰(Bpθr(ℝd)) with ri>d, i∈Zd, implies f∈Wpd(ℝd); therefore, by Lemma 8,
(40)∥{f(kv)}∥lp≤Cσ1/p.
Hence by Lemma 5,
(41)(Svf)(t)=∑k∈ℤdf(kv)sinc(v·t-k)
converges uniformly on ℝd.
Set u=v/(2s) with 2s>d+max{ri:i=1,…,d}. Then Tusf∈Bvp(ℝd), as mentioned above. By Theorem A we have Tusf=Sv(Tusf). Thus
(42)(Tusf)(t)-(Svf)(t) =∑k∈ℤd((Tusf)(kv)-f(kv))sinc(v·t-k).
Using the triangle inequality we obtain
(43)|f(t)-(Svf)(t)| ≤|f(t)-(Tusf)(t)|+|(Tusf)(t)-(Svf)(t)| ≤|f(t)-(Tusf)(t)|+|Sv,N(Tusf-f)(t)| +|Sv(Tusf-f)(t)-Sv,N(Tusf-f)(t)|.
By Lemma 3,
(44)|f(t)-(Tusf)(t)|≤Cσ-g(r)+1/p.
It is clear that
(45)|Sv,N(Tusf-f)(t)| =|∑v·t-k∈INd(Tusf-f)(kv)sinc(v·t-k)|.
Applying Hölder's inequality with exponent p0, we get
(46)|Sv,N(Tusf-f)(t)| ≤(∑v·t-k∈INd|(Tusf-f)(kv)|p0)1/p0 ·(∑v·t-k∈INd|sinc(v·t-k)|q0)1/q0 ≤C(∏i=1d2Ni+1)1/p0σ-g(r)+1/p·p0d,
where 1/p0+1/q0=1, and the second inequality follows from (44) and Lemma 6.
Next we estimate |Sv(Tusf-f)(t)-Sv,N(Tusf-f)(t)|. By Hölder's inequality,
(47)|Sv(Tusf-f)(t)-Sv,N(Tusf-f)(t)| ≤(∑v·t-k∉INd|(Tusf-f)(kv)|p)1/p ·(∑v·t-k∉INd|sinc(v·t-k)|q)1/q,
where 1/p+1/q=1.
By Lemma 7 and Lemma 4, we obtain
(48)∥{(Tusf)(kv)}∥lp≤Cσ1/p∥Tusf∥Lp≤Cσ1/p∥f∥Lp≤Cσ1/p.
It follows from (40), (48), and the Minkowski inequality that
(49)∥{(Tusf-f)(kv)}∥lp≤Cσ1/p.
Set h(t):=(∑v·t-k∉INd|sinc(v·t-k)|q)1/q. Note that h(t+m/v)=h(t), for all t∈ℝd and m∈ℤd. Thus to give an upper estimate for h(t) on ℝd, we only need to bound it on ∏i=1d[0,1/vi]. Note that
(50){k:k∉INd}⊂⋃i=1d{k:ki∉[-Ni,Ni]}.
A straightforward computation shows that for ti∈[0,1/vi],
(51)(∑ki∉(-Ni,Ni]|sinc(viti-ki)|q)1/q ≤(C∑ki∉(-Ni,Ni]1/|ki|q)1/q ≤(C∫Ni∞t-qdt)1/q ≤CNi-1/p.
Therefore
(52)|Sv(Tusf-f)(t)-Sv,N(Tusf-f)(t)|≤Cσ1/p(∑i=1dNi-1/p).
It follows from (46) and (52) that
(53)|(Tusf-Svf)(t)| ≤C((∏i=1d(2Ni+1))1/p0σ-g(r)+1/pp0d +σ1/p(∑i=1dNi-1/p)).
We choose Ni=[(σg(r))p+1] and p0=∑i=1dln(2Ni+1). It is easy to see that p0>1 and (∏i=1d(2Ni+1))1/p0=e. A simple computation gives
(54)p0d≤C(ln∏i=1dNi)d≤C lndσ.
Note that Ni≥σg(r)p. Thus we have
(55)∑i=1dNi-1/p≤Cσ-g(r).
Collecting the above results we obtain
(56)|(Tusf)(t)-(Svf)(t)|≤C·σ-g(r)+1/plndσ.
Combining (44) and (56), we prove the theorem.
Proof of Theorem 2.
By the triangle inequality, we have
(57)(Ev,Nf)(t)≤|f(t)-(Svf)(t)|+|(Svf)(t)-(Sv,Nf)(t)|.
By the arguments similar to those used in the proof of Theorem 1, we obtain
(58)|(Svf)(t)-(Sv,Nf)(t)| ≤(∑v·t-k∉INd|f(kv)|p)1/p ·(∑v·t-k∉INd|sinc (v·t-k)|q)1/q ≤Cσ1/p(∑i=1dNi-1/p) ≤Cσ-g(r)+1/plndσ,
where we use Ni=[(σg(r))p+1] in the last inequality.
Combining Theorem 1 and (58), we complete the proof of Theorem 2.
3. The Measured Sampled Values Case
In practice, the sampled values of a signal may not be the exact functional values and may have to be quantized. Typical errors arising from these facts are jitter errors and amplitude errors. Using the key idea of quasi-interpolation, which adopts integer translations of a basic function and integer translations of a linear functional to approximate functions (see [8, 25] and the references therein), we may consider sampled values that are the results of a linear functional and its integer translations acting on the underlying signal [4, 25]. Such sampled values are called measured sampled values because they are closer to the true measurements taken from a signal. The sampling series with measured sampled values is defined to be
(59)(Svλf)(t):=∑k∈ℤdλkf(·+kv)sinc(v·t-k),
where λ={λk}k∈ℤd is any sequence of continuous linear functionals C0(ℝd)→ℂ, with C0(ℝd) being the set of all continuous functions on ℝd tending to zero at infinity.
Similar to the definition of (Sv,Nf)(t) and (Ev,Nf)(t), we have the finite sum
(60)(Sv,Nλf)(t):=∑k-v·t∈INdλkf(·+kv)sinc(v·t-k),
and the truncation error
(61)(Ev,Nλf)(t):=|f(t)-(Sv,Nλf)(t)|.
To establish our theorems we need the error modulus
(62)Ωv(f,λ):=supk∈ℤd|λkf(·+kv)-f(kv)|, v∈ℝ+d.
We write Ω(f,λ) for Ωv(f,λ) when no confusion arises. The error modulus Ω(f,λ) quantifies the quality of a signal's measured sampled values. When the functionals in λ are concrete, one may obtain reasonable estimates for Ω(f,λ).
Sampling series with measured sampled values were studied in [8] for bandlimited functions, but without truncation. Truncation errors were considered in [13] for functions from the Lipschitz class under a decay condition. We now recall a typical result from [13].
Denote by LipL(1;C(ℝd)) the set of all continuous functions f satisfying
(63)|f(x)-f(y)|≤L(|x1-y1|+⋯+|xd-yd|)
for all x=(x1,…,xd) and y=(y1,…,yd)∈ℝd. Set k/v:=(k1/v,…,kd/v), v·t:=(vt1,…,vtd), and ∥t∥∞:=max{t1,…,td}.
Theorem D.
Let f∈LipL(1;C(ℝd)) satisfy the decay condition
(64)|f(t)|≤Mf∥t∥∞-γ, ∥t∥∞≥1,
for some 0<γ≤1. Let λ={λk} be any sequence of continuous linear functionals. For each v≥e2 one has, for the truncation error at t∈ℝd,
(65)|f(t)-∑k∈ℤd,∥k∥≤Nλkf(kv)sinc(v·t-k)| ≤Cv-1lndv,
provided N:=⌈(v1+1/γ-1)/2⌉, where ⌈x⌉ is the smallest integer greater than or equal to a given x∈ℝ, and Ω(f,λ)≤c0v-1.
In [9] the author obtained uniform bounds on the symmetric truncation error for functions from an isotropic Besov space under a similar decay condition. We now provide an estimate of the truncation error
(66)(Ev,Nλf)(t):=|f(t)-(Sv,Nλf)(t)|
without any assumption about the decay of f∈Bpθr(ℝd).
Theorem 10.
Let f∈𝒰(Bpθr(ℝd)), with the same p, θ, and r as in Theorem 1. For σ>e, define v as in Theorem B. Let λ={λk} be any sequence of continuous linear functionals. Then one has
(67)(Ev,Nλf)(t)≤C·σ-g(r)+1/plndσ,
provided Ni=[(σg(r))p+1] and Ω(f,λ)≤C0σ-g(r)+1/p for some constant C0>0.
Proof.
By the triangle inequality, we have
(68)(Ev,Nλf)(t) ≤|f(t)-(Svf)(t)|+|(Svf)(t)-(Sv,Nλf)(t)| ≤|f(t)-(Svf)(t)|+I1+I2,
where
(69)I1≔|∑v·t-k∉INdf(kv)sinc(v·t-k)|,I2:=|∑v·t-k∈INd(f(kv)-λkf(·+kv))sinc(v·t-k)|.
Similar to (52), we have
(70)I1≤Cσ1/p(∑i=1dNi-1/p).
Using Hölder's inequality we obtain
(71)I2≤(∑v·t-k∈INd|f(kv)-λkf(·+kv)|p0)1/p0·(∑v·t-k∈INd|sinc(v·t-k)|q0)1/q0≤C(∏i=1d(2Ni+1))1/p0Ω(f,λ)·p0d,
where 1/p0+1/q0=1. Now we select the same Ni and p0 as in the proof of Theorem 1. Similar to (55), we have ∑i=1dNi-1/p≤Cσ-g(r). Thus
(72)I1≤Cσ-g(r)+1/p.
A simple computation gives (∏i=1d(2Ni+1))1/p0=e and p0≤Cln∏i=1dNi≤Clnσ. Notice that Ω(f,λ)≤C0σ-g(r)+1/p. Collecting these results, we obtain
(73)I2≤Cσ-g(r)+1/plndσ.
It follows from Theorem 1, (72), and (73) that
(74)(Ev,Nλf)(t)≤Cσ-g(r)+1/plndσ,
which completes the proof.
Finally we apply Theorem 10 to some practical examples. In the first, the measured sampled values are given by local averages of the function. For f∈C0(ℝd) we define the modulus of continuity
(75)ω(f,τ):=sup∥h∥≤τ∥f(·+h)-f(·)∥∞,
where τ may be any positive number.
Corollary 11.
Let f∈𝒰(Bpθr(ℝd)), with the same p, θ, and r as in Theorem 1. For σ>e, define v,N as in Theorem 10. Suppose the sampled values fk of f are obtained by the rule
(76)fk:=12d∏j=1d1hj∫Ihdf(t+kv)dt,
where the hj are numbers satisfying 0<hj≤τ for j∈Zd and τ>0. If τ<1/2 and ω(f,τ)≤min{ed/(-g(r)+1/p),C0σ-g(r)+1/p}, then
(77)(Ev,Nλf)(t)≤C·ω(f,τ)g(r)-1/plnd1ω(f,τ).
Proof.
Let λ={λk}k∈ℤd be the sequence of linear functionals on C0(ℝd) defined, for a continuous function g, by
(78)λkg:=12d∏j=1d1hj∫Ihdg(t+kv)dt.
Then fk=λkf(·+k/v). Clearly, |f(t+k/v)-f(k/v)|≤ω(f,τ) for t∈Ihd. Therefore
(79)Ωv(f,λ) =supk∈ℤd|λkf(t+kv)-f(kv)| =supk∈ℤd|12d∏j=1d1hj∫Ihd(f(t+kv)-f(kv))dt| ≤supk∈ℤd12d∏j=1d1hj∫Ihdω(f,τ)dt =ω(f,τ).
Note that the function x↦xg(r)-1/plnd(1/x) is monotonically increasing on (0,ed/(-g(r)+1/p)). The corollary now follows from Theorem 10.
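A sketch of the averaging rule (76) in d=1: each measured value is a local mean of the signal, and the error modulus (62) is controlled by the modulus of continuity, as in the proof above. The Gaussian signal and all parameters are illustrative assumptions.

```python
import numpy as np

def averaged_sample(f, center, h, m=4001):
    # Rule (76) in d = 1: the mean of f over [center - h, center + h],
    # approximated by averaging f on a fine uniform grid.
    x = np.linspace(center - h, center + h, m)
    return float(np.mean(f(x)))

f = lambda t: np.exp(-t * t)
v, h = 8.0, 0.01
# Error modulus (62): deviation of the measured values from the exact samples
omega = max(abs(averaged_sample(f, k / v, h) - f(k / v)) for k in range(-50, 51))
# f is Lipschitz with constant sqrt(2/e) < 1, so the deviation is below h
assert omega <= h
```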
The second example is an estimate for the combination of all four errors existing in sampling series: the amplitude error, the time-jitter error, the truncation errors, and the aliasing errors. We give some explanation for the amplitude error and the time-jitter error.
We assume the amplitude error results from quantization, which means that the functional value f(t) of a function f at moment t is replaced by the nearest discrete value or machine number f-(t). The quantization size is often known beforehand or can be chosen arbitrarily. We may assume that the local error at any moment t is bounded by a constant ɛ>0; that is, |f-(t)-f(t)|≤ɛ. The time-jitter error arises if the sampling instances are not met exactly but differ from the exact ones by τk′, k∈ℤ; we assume |τk′|≤τ′ for all k and some constant τ′>0. The combined error is defined to be
(80)(ENf)(t):=f(t)-∑v·t-k∈[-N,N]f-(kv+τk′)sinc(vt-k).
Corollary 12.
Let f∈𝒰(Bpθr(ℝ)), 1<p<∞, 1≤θ≤∞, r>1, and v>e. Then
(81)∥ENf∥∞≤Cv-r+1/plnv,
provided N=[vrp+1], |f(t)-f-(t)|≤c1v-r+1/p, and ω(f,τ′)≤c2v-r+1/p, where c1, c2 are positive constants, and c1+c2≤C0.
Proof.
We define
(82)λk=f-(k/v+τk′)f(k/v+τk′)δ(·-τk′),
where δ is the Dirac distribution. Then λ={λk}k∈ℤ is a sequence of linear functionals on C0(ℝ). It is clear that λkf(·+k/v)=f-(k/v+τk′). Then
(83)|λkf(·+kv)-f(kv)| ≤|f-(kv+τk′)-f(kv+τk′)| +|f(kv+τk′)-f(kv)| ≤(c1+c2)v-r+1/p.
Thus Ω(f,λ)≤C0v-r+1/p. By Theorem 10 we get the desired result.
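To close, here is a sketch of the combined-error setting of Corollary 12: each sample is jittered by at most τ′, then quantized to a grid of size ɛ, and the truncated localized series is evaluated. All concrete values below are illustrative assumptions, not the corollary's sharp constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def combined_recon(f, v, t, N, eps, tau):
    # Truncated series of (80): quantized, jittered samples in a window
    # of half-width N around v*t.
    c = int(np.floor(v * t))
    total = 0.0
    for k in range(c - N, c + N + 1):
        jit = rng.uniform(-tau, tau)                   # time-jitter, |jit| <= tau
        sample = eps * np.round(f(k / v + jit) / eps)  # amplitude error (quantization)
        total += sample * np.sinc(v * t - k)
    return total

f = lambda t: np.exp(-t * t)
v, N, eps, tau = 16.0, 400, 1e-4, 1e-4
combined = abs(f(0.3) - combined_recon(f, v, 0.3, N, eps, tau))
assert combined < 0.01
```

Shrinking ɛ and τ′ together with growing N drives the combined error down, in the spirit of the bound (81).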