ON BANDLIMITED SIGNALS WITH MINIMAL PRODUCT OF EFFECTIVE SPATIAL AND SPECTRAL WIDTHS

It is known that signals (which could be functions of space or time) belonging to 𝕃²-space cannot be localized simultaneously in the space/time and frequency domains. Equivalently, signals have a positive lower bound on the product of their effective spatial and effective spectral widths, hereafter called, for simplicity, the effective space-bandwidth product (ESBP). This is the classical uncertainty inequality (UI), attributed to many but, from a signal processing perspective, to Gabor, who, in his seminal paper, established the uncertainty relation and proposed a joint time-frequency representation whose basis functions have minimal ESBP. The Gaussian function is the only signal that attains the lowest ESBP. Since the Gaussian function is not bandlimited, no bandlimited signal can have the lowest ESBP. We deal with the problem of determining finite-energy, bandlimited signals that have the lowest ESBP. The main result is as follows. By taking the convolution product of a Gaussian signal (with variance parameter σ) and a bandlimited filter with a continuous spectrum, we demonstrate that there exists a finite-energy, bandlimited signal whose ESBP can be made arbitrarily close (depending on the choice of σ) to the optimal value specified by the UI.


Introduction
We deal with real signals f which are treated as functions of the real variable x ∈ ℝ: { f(x) : f(x) ∈ L² }, that is, the class of square-integrable functions, centered at the origin. The independent variable x can denote either time (for dealing with time-dependent phenomena) or space (for describing space-dependent functions like images). In what follows, we use the terms "space" and "time" interchangeably. Let F(ω) denote the Fourier transform of f(x). Note that F(ω) ∈ L².
The energy E and the effective spatial and spectral widths (∆x, ∆ω) of f(x) are defined as follows:

E = ∫ f²(x) dx,   (1.1)

∆x² = (1/E) ∫ x² f²(x) dx,   ∆ω² = (1/(2πE)) ∫ ω² |F(ω)|² dω.   (1.2)

Note that the formulas in (1.2) are the second-order moments of the energy distribution of the signal in the spatial and spectral domains, respectively. In physical terms, ∆x represents the effective spread (or, more simply, concentration) of the signal in the spatial domain, and ∆ω that in the spectral domain. The term "bandwidth" used by (electrical and other) engineers is quite different from the effective spectral width of a signal (and also from, e.g., the "Q" factor in the analysis of LCR circuits). See Bracewell [2] for a comparison of the various measures of spread of signals.
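As a quick numerical illustration of these definitions (a sketch, not part of the paper's development): by Parseval, (1/(2πE)) ∫ ω² |F(ω)|² dω = (1/E) ∫ (f′(x))² dx, so both widths of a Gaussian can be estimated on a single spatial grid. The grid limits and the value σ = 1 below are arbitrary choices.

```python
import numpy as np

# Numerical sketch of the effective-width definitions (1.2).
# The spectral width is evaluated in the spatial domain via Parseval:
# (1/2piE) * int w^2 |F|^2 dw  =  (1/E) * int (f')^2 dx.
sigma = 1.0
x = np.linspace(-20.0, 20.0, 400_001)
h = x[1] - x[0]
f = np.exp(-x**2 / (2.0 * sigma**2))          # Gaussian g(x)

E = np.sum(f**2) * h                          # energy
dx = np.sqrt(np.sum(x**2 * f**2) * h / E)     # effective spatial width
fp = np.gradient(f, x)                        # f'(x), central differences
dw = np.sqrt(np.sum(fp**2) * h / E)           # effective spectral width

print(dx, dw, dx * dw)                        # for sigma=1: ~0.707, ~0.707, ~0.5
```

For the Gaussian, ∆x = σ/√2 and ∆ω = 1/(σ√2), so the product is exactly 1/2 regardless of σ.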
It follows by the Parseval identity that

∫ f²(x) dx = (1/2π) ∫ |F(ω)|² dω.   (1.3)

(In the latter case, we assume that F(ω) is continuous.) Further, it is well known that f(x) and F(ω) cannot both be of short duration. This is made explicit (a) qualitatively by the scaling theorem, and (b) quantitatively by the uncertainty inequality (UI), which places a lower bound on the product of effective spatial and spectral widths (ESBP) of continuous signals [10]:

∆x · ∆ω ≥ 1/2.   (1.5)

Note that the equality sign holds only if the signal f(x) is the Gaussian function, g(x) = exp(−x²/2σ²), where σ is the variance parameter governing the spread of g(x) in both the spatial and spectral domains. In practical terms, the interpretation of inequality (1.5) is that the optimal localization in both the spectral and spatial domains can be achieved only by the Gaussian function. It is important to note here that Gabor [6] invoked the Schwarz inequality in order to establish (1.5) (and this seems to be the only method of proof in the literature).
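For completeness, a sketch of the Schwarz-inequality route to (1.5) (the standard textbook argument; Gabor's own derivation is not reproduced in this extract) runs as follows, for a real f with energy E vanishing at ±∞:

```latex
% Integration by parts (f -> 0 at infinity):
%   \int x f f' \,dx = -\tfrac12 \int f^2 \,dx = -\tfrac{E}{2}.
% The Schwarz inequality then gives
\frac{E^2}{4} \;=\; \Bigl(\int x f f'\,dx\Bigr)^{2}
\;\le\; \int x^2 f^2\,dx \cdot \int \bigl(f'\bigr)^2\,dx
\;=\; \bigl(E\,\Delta_x^2\bigr)\Bigl(\tfrac{1}{2\pi}\int \omega^2 |F(\omega)|^2\,d\omega\Bigr)
\;=\; E^2\,\Delta_x^2\,\Delta_\omega^2 ,
% hence \Delta_x \Delta_\omega \ge 1/2, with equality iff f' \propto x f,
% i.e., iff f is a Gaussian.
```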
We now restrict ourselves to the class Ꮾ of continuous, finite-energy signals f(x) which are bandlimited (this assumption is crucial when we want to sample signals, using the Shannon-Whittaker theorem [10], for processing on a computer), that is,

F(ω) = 0 for |ω| > W.   (1.6)

Since the Gaussian function is not bandlimited, there is obviously no f(x) ∈ Ꮾ that satisfies the equality in (1.5). In fact, the ESBP of any f(x) ∈ Ꮾ is strictly greater than the lower bound given in (1.5). This motivates us to formulate the following problem. What is the greatest lower bound (infimum) of the ESBP of signals in Ꮾ, and how close is this infimum to the optimal value in (1.5)?
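To see why the problem is delicate, note that the most obvious member of Ꮾ fails badly: the ideal-lowpass signal sinc(x) is bandlimited, but x²·sinc²(x) behaves like sin²(πx)/π² at infinity and is not integrable, so its effective spatial width is infinite. A small numerical sketch (illustrative, not from the paper) shows the truncated second moment growing roughly linearly with the window size:

```python
import numpy as np

# The ideal lowpass signal sinc(x) = sin(pi x)/(pi x) is bandlimited, but
# x^2 * sinc^2(x) ~ sin^2(pi x)/pi^2 for large |x| is not integrable: the
# truncated second moment grows roughly linearly with the window size L.
def second_moment(L, pts_per_unit=1000):
    x = np.linspace(-L, L, int(2 * L * pts_per_unit) + 1)
    h = x[1] - x[0]
    f = np.sinc(x)                      # numpy's sinc is sin(pi x)/(pi x)
    return np.sum(x**2 * f**2) * h

m100, m200 = second_moment(100.0), second_moment(200.0)
print(m100, m200, m200 / m100)          # ratio close to 2: linear growth
```

Hence candidates in Ꮾ need spectra smoother than the rectangular one, which is what motivates the construction below.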
Ishii and Furukawa [7] and Calvez and Vilbe [4] obtain a strict uncertainty inequality for signals in Ꮾ using their discrete samples. However, they do not answer the question raised above. Doroslovacki [5] proposes a generalized uncertainty principle that holds for both continuous- and discrete-time signals under appropriate conditions. More specifically, if a certain convolution-invariance condition is satisfied by the optimal signals, the generalized UI of Doroslovacki [5] coincides with (1.5). But the question of the lower limit for the ESBP of functions in Ꮾ is not examined in [5]. Optimal functions for the discrete case are presented in [5]; these are different from the optimal bandlimited functions we are looking for.
The rest of the paper is organized as follows. In Section 2, we provide a mathematical formulation of the above problem and derive inequalities for the effective spatial and spectral widths. Using these inequalities, we derive an upper bound for the ESBP in Section 2.3, and show that this upper bound can be brought arbitrarily close to the limit specified in (1.5). We conclude the paper in Section 3.

Proposed approach and main results
The main result of this paper is to prove the existence of bandlimited signals whose ESBP is arbitrarily close to the optimal value. To this end, we employ the following notation. The normalized effective spatial and spectral widths of the signal f(x) are defined by

S_x² = ∫ x² f²(x) dx / ∫ f²(x) dx,   (2.1)

B_ω² = ∫ ω² |F(ω)|² dω / ∫ |F(ω)|² dω.   (2.2)

Using the above notation, the UI can be written as

S_x · B_ω ≥ 1/2.   (2.3)

It is to be noted here that inequality (2.3) applies to real signals in L² whose Fourier transform magnitude is an even function. The effective spatial and spectral widths correspond to second-order moments centered at the origin in the spatial and frequency domains, respectively. The spectral width is therefore two-sided.
We now generate a bandlimited signal f(x) by convolving a Gaussian function g(x) (with Fourier transform G(ω)) with a bandlimited function h(x) whose Fourier transform H(ω) is assumed to be given by (2.4). Therefore, the Fourier transform of f(x) is given by F(ω) = G(ω)H(ω); it belongs to L², and is real and differentiable. As a consequence, S_x and B_ω are finite. In order to prove our main result, the strategy is to derive upper bounds for S_x and B_ω, and to show that, for a suitable choice of σ, the ESBP can be made arbitrarily close to the optimal value in (1.5). Let ξ = S_x², λ = B_ω², and P = ξλ. Note that P is a function of σ, W, and . In order to make this dependence explicit, we write P(σ; W, ). (The semicolon separates σ from the other two parameters, W and , which are assumed to be prescribed in the specification of the optimal f(x).) Therefore, our goal is to determine σ, for a given δ₀ > 0, such that (2.5) holds. In what follows, we find upper bounds for ξ and λ (and hence for P(σ; W, )) for the above choice of f(x).
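Since the exact form of H(ω) in (2.4) has not survived in this extract, the sketch below substitutes a simple continuous cosine taper on [−W, W] as a stand-in bandlimited spectrum; W = 1 and the σ values are illustrative choices. By Parseval duality, S_x² = ∫ |F′(ω)|² dω / ∫ |F(ω)|² dω, so the ESBP can be evaluated entirely in the frequency domain.

```python
import numpy as np

# Sketch of the construction F(w) = G(w) * H(w). The exact rolloff H(w)
# of (2.4) is not reproduced here; a continuous cosine taper on [-W, W]
# is used as a stand-in. S_x^2 is computed via Parseval duality:
#   S_x^2 = int |F'(w)|^2 dw / int |F(w)|^2 dw.
def esbp(sigma, W=1.0, n=400_001, span=2.0):
    w = np.linspace(-span, span, n)
    h = w[1] - w[0]
    G = np.exp(-sigma**2 * w**2 / 2.0)              # Gaussian spectrum (up to a constant)
    H = np.where(np.abs(w) <= W, np.cos(np.pi * w / (2.0 * W)), 0.0)
    F = G * H                                       # bandlimited spectrum
    E = np.sum(F**2) * h
    Bw2 = np.sum(w**2 * F**2) * h / E               # lambda = B_w^2
    Sx2 = np.sum(np.gradient(F, w)**2) * h / E      # xi = S_x^2
    return np.sqrt(Sx2 * Bw2)

for s in (1.0, 2.0, 4.0):
    print(s, esbp(s))      # stays above 0.5, approaching it as sigma grows
```

As σ grows (with W fixed), G(ω) concentrates well inside the band, F(ω) becomes nearly Gaussian, and the computed ESBP approaches 1/2 from above, which is exactly the behavior the paper establishes analytically.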

Effective spectral width.
In order to facilitate the computation of λ = B_ω², we split the (square of the) numerator in (2.2) into three parts (2.6). It is shown in Appendix A that the right-hand side (RHS) of (2.6) obeys an inequality from which we obtain the upper bound (2.12) for λ (defined above).

Product of effective spatial and spectral widths (ESBP).
An inequality for the square of the ESBP is obtained by multiplying (2.9) and (2.12), giving the upper bound (2.13). This bound on the square of the ESBP has been obtained without reference to the Schwarz inequality. In Appendix C, it is shown that the RHS of (2.13) can be simplified to obtain the inequality (2.14), where C (which is a function of σ, W, and ) is given by (2.15).

Note that C → 0 as W → ∞, independently of the choice of σ. It is clear from (2.14) and (2.15) that, for a given δ₀ > 0, there exists a σ₀ such that (2.16) holds. Note that as σ becomes larger, the spread of G(ω) becomes smaller. Let the second term on the RHS of (2.14) be denoted by δ(σ; W, ). For an illustration of the dependence of δ(σ; W, ) on σ, see Figure 2.1 for fixed values of W and (W = 1.0, = 0.1). It should be noted that, as W → ∞, δ → 0. This is the special case of the equality sign in (2.17) for the Gaussian function. It is interesting to note that this result has been obtained without invoking the Schwarz inequality.

Conclusions
It is known that only the Gaussian function (with variance parameter σ) attains the lowest ESBP value of 1/2. In an attempt to establish the existence of bandlimited functions whose ESBP is arbitrarily close to this number, we synthesize a bandlimited signal f(x) by filtering the Gaussian function with a bandlimited function h(x) having a continuous spectrum. We have shown that, by appropriately choosing σ, the ESBP of f(x) can be made arbitrarily close to 1/2 (but without reaching it); that is, the infimum of the ESBP of signals in Ꮾ is 1/2.

A. Effective spectral width
In order to facilitate the computation of λ = B_ω², we split the (square of the) numerator in (2.2) into three parts: The sum of the integrals B_1 and B_3 can be written as The above simplification is achieved by substituting ω + W = −ω and ω − W = ω in B_1 and B_3, respectively. Since cos²(ωπ/2 ) ≤ 1, we have Using the method of integration by parts, the integral in the above expression can be simplified as Simplifying further, we get Since the second term on the RHS of the above equation is strictly positive, we obtain the inequality Combining the above inequality with (A.3), we have Finally, using the fact that we obtain an upper bound for B as

B. Effective spatial width
Let the first, second, and third integrals on the RHS of (2.10) be denoted by S_a, S_b, and S_c, respectively. Consider the term S_a. We observe that Therefore, The term S_b can be written as Substituting ω + W = ω and ω − W = ω in the first and second integrals on the RHS of the above expression, respectively, we get Using (B.1) and the derivative of H(ω), the term S_c can be written as The above simplification is achieved by substituting ω + W = −ω and ω − W = ω in the first and second integrals, respectively. Now, using integration by parts, the above integral can be further simplified as 0 elsewhere.