This paper provides an overview of the use of the Fourier transform and related methods, and focuses on the subtleties to which users must pay attention. Typical questions that are often asked of the data will be discussed. Such a problem can be the origin of frequency or band limitation of the signal or the source of artifacts when a Fourier transform is carried out. Another topic is the processing of fragmented data. Here, the Lomb-Scargle method will be explained with an illustrative example to deal with this special type of signal. Furthermore, the time-dependent spectral analysis, with which one can evaluate the point in time when a certain frequency appears in the signal, is of interest. The goal of this paper is to collect the important information about the common methods and to give the reader a guide on how to apply them to one-dimensional data. The introduced methods are supported by the spectral package, which has been published for the statistical environment R prior to this article.
1. Introduction
Because the field of interest is as wide as the amount of information about spectral methods, it is necessary to compose this article out of the most important methods. It is split into three main parts, which address different levels of abstraction and complexity. Section 2 gives a brief overview of the mathematical principles and ideas behind the common methods of spectral analysis. For further reading, the Appendix contains a selection of derivations. Remember, the intention of this work is not the completeness of mathematical proofs. Its focus lies on the application of methods to discrete data and the mistakes to be avoided. Section 3 focuses on simple one-dimensional methods, their properties, advantages, and drawbacks.
The individual sections are supported by examples programmed in the statistical language R [1] with the help of Seilmayer's spectral [2] package, which has been published prior to this article. The main goal of the provided R code examples and the introduced package is an easy-to-use interface which computes scaled output data according to the input. This means that if the input time vector is in units of seconds, the output of the spec.fft(x,y) function is in the natural frequency unit Hz. Besides the analysis of data sets, it will be shown how spectral methods can be utilized to carry out (or even accelerate) several calculations on discrete data, like differentiation, integration, or convolution (folding). Here, source code snippets of the spectral package highlight the implementation and underline the methodology.
2. Mathematical Concepts
This chapter introduces the basic ideas and principles which lead to the methods of spectral analysis for discrete data. Thereby, the focus lies on measurements and time series of physical processes. For now, utilizing time as the variable makes it easier to understand the explained methods. Of course, the time variable t might be replaced by the space variable x if a distribution of spatial events is of interest. So, in this sense, space and time are meant to be the location where the signal is defined.
The following definitions and explanations are short and sweet. For further reading, refer to "The Scientist and Engineer's Guide to Digital Signal Processing" by Smith [3] or to "Time-Frequency Analysis" by Cohen [4]. These two comprehensive works elaborate the underlying mathematical framework in detail. Both books are highly recommended if theoretical approaches and mathematical subtleties are of interest. From now on, however, the practical application to measured data and the understanding of the complex connections are of interest.
2.1. Signal Definition
In the context of this article, a signal s(t) corresponds to a physical measure p(t) and is therefore real-valued and causal. This means that, with the measurement of the process p(t), the signal s(t) starts to exist at a certain point in time and ends later when the measurement is finished. With that in mind, the signal function s(t) represents a slice of length T for times t>0. This can be properly defined as follows:

$$s(t) = \begin{cases} p(t), & 0 < t \le T, \\ 0, & \text{elsewhere}. \end{cases} \tag{1}$$

The simple noisy signal in Figure 1 illustrates that. For this example, the underlying physical process p(t) could be the temperature, which is measured in the time range of 0<t≤1. Evidently, the temperature of an object exists before and after the measurement takes place, so the content of s(t) only maps to a time interval of this process.
The arbitrary signal with overlapped normal distributed noise p(t)=0.4cos(2π·1.5t-π/2)+N(0,0.02)+0.5 is shown here. It is only within the rectangular window that the measurement takes place and the signal s(t) can exist. Outside this window the physical process might go on but is not recognized anymore. The points and arrows mark the sampling of that signal.
With respect to a digital representation of information, the measurement of p(t) takes place by acquiring data in the form of sampling the continuous signal at N different instances in time. The resulting sampled signal series

$$\hat{s}_n = \sum_{n=0}^{N-1} s(n \cdot T_s)\,\vec{e}_n \tag{2}$$

is the data vector ŝ_n. Here, the index n addresses the nth element in this N-dimensional data vector ŝ. In (2), the basis vector \(\vec{e}_n\) ensures that ŝ becomes a vector in a mathematical sense, which simply represents an ordered list of elements. Remember that the absolute value \(|\vec{e}_n| = 1\) is always equal to one, so the sum only helps to iterate along the continuous signal s(t).
2.2. The Fundamental Idea of Frequency Analysis
To explain the idea of frequency analysis right from the beginning, it is necessary to redefine the signal function s(t). From now on, the signal represents a sum of sinusoidal functions in the form

$$s(t) = \sum_i A_i \cdot \cos\bigl(\underbrace{2\pi f_i}_{\omega_i}\, t + \varphi_i\bigr), \tag{3}$$

which, in the final consequence, leads to the expansion into orthogonal functions.
The aim of frequency analysis is to calculate the amplitude and phase spectrum of a signal. The result of this procedure is a mapping of amplitudes to their corresponding frequencies. If a measured time series of a physical process is the sum of many individual sinusoidal functions, it could be of interest to estimate each of the amplitudes Ai and phases φi for a given frequency fi.
A first approach would be to estimate each A and φ by a least squares fit against a sinusoidal model for each frequency of interest. This is a legitimate attempt with certain drawbacks and some outstanding advantages, which are discussed in Section 3.8.
The roots of a more general method to determine the spectral components of a signal are the two common addition theorems,

$$\begin{aligned} \cos\alpha\cdot\cos\beta &= \tfrac{1}{2}\bigl[\cos(\alpha-\beta)+\cos(\alpha+\beta)\bigr], \\ \cos\alpha\cdot\sin\beta &= \tfrac{1}{2}\bigl[\sin(\beta-\alpha)+\sin(\alpha+\beta)\bigr], \end{aligned} \tag{4}$$

which mark the starting point of the following explanations. A collection of addition theorems can be found in almost any book of formulae and tables like Bronštejn and Semendjajew [5].
Given a signal in the continuous domain

$$s(t) = A\cdot\cos(\omega t + \varphi), \tag{5}$$

then multiplying this with cos(ω0t) as well as sin(ω0t) and finally integrating over all times yields, for instance, for the cos-term

$$\int_{-\infty}^{\infty} s(t)\cos(\omega_0 t)\,\mathrm{d}t = \frac{A}{2}\left[\int_{-\infty}^{\infty}\cos\bigl((\omega-\omega_0)t+\varphi\bigr)\,\mathrm{d}t + \underbrace{\int_{-\infty}^{\infty}\cos\bigl((\omega+\omega_0)t+\varphi\bigr)\,\mathrm{d}t}_{=0}\right] \tag{6}$$

$$= \begin{cases} \dfrac{A}{2}\cos\varphi, & \omega = \omega_0, \\ \longrightarrow 0, & \omega \ne \omega_0, \end{cases} \tag{7}$$

$$= R, \tag{8}$$

the real part or in-phase component, which is going to be named R. Herein, the variable ω0 denotes the frequency of interest. The sin-term is then very similar,

$$\int_{-\infty}^{\infty} s(t)\sin(\omega_0 t)\,\mathrm{d}t = \begin{cases} \dfrac{A}{2}\sin\varphi, & \omega = \omega_0, \\ \longrightarrow 0, & \omega \ne \omega_0, \end{cases} = I, \tag{9}$$

the imaginary part or quadrature component, which is called I. In both equations, the integral containing ω+ω0 on the right-hand side vanishes to zero, because each positive area element of a sin-function can be mapped to a negative area element. Only in the case of ω0=ω is a phase-dependent part left.
With the identity sin²α + cos²α = 1, the amplitude can be calculated as

$$A = 2\sqrt{R^2 + I^2}. \tag{10}$$

The phase information is then gained by

$$\varphi = \arctan\frac{I}{R}. \tag{11}$$

The introduced procedure is called the quadrature demodulation technique (QDT). Finally, by evaluating the amplitude A at many different frequencies ω0, a periodogram emerges. Note that R and I can also be interpreted as the real and imaginary part of a signal. The complex exponential formulation then corresponds to

$$s(t) = \operatorname{Re}\bigl(A\cdot e^{i(\omega t + \varphi)}\bigr) \tag{12}$$

with the identity e^{iφ} = cos φ + i sin φ.
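As a concrete illustration of (6)-(11), the following base-R sketch recovers amplitude and phase by numerical quadrature over an integer number of periods. The values A = 0.7, φ = π/3, and f0 = 4 are example choices made here for demonstration and do not appear in the text.

```r
# QDT sketch in base R; A, phi, f0 are hypothetical example values.
A   <- 0.7; phi <- pi/3
f0  <- 4                          # frequency of interest in Hz
n   <- 1000
t   <- (0:(n - 1)) / n            # one-second window, 4 full periods
dt  <- 1 / n
s   <- A * cos(2*pi*f0*t + phi)

# in-phase component R ~ (A/2)*cos(phi), compare (6)-(8)
R <- sum(s * cos(2*pi*f0*t)) * dt
# quadrature component I ~ (A/2)*sin(phi); the sign is chosen here so
# that the convention of (9) holds for this numerical quadrature
I <- -sum(s * sin(2*pi*f0*t)) * dt

A.est   <- 2 * sqrt(R^2 + I^2)    # (10), recovers A = 0.7
phi.est <- atan2(I, R)            # (11), recovers phi = pi/3
```

Evaluating A.est over a whole grid of frequencies f0 yields the periodogram mentioned above.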
One of the main disadvantages arises for finite time ranges 0<t≤T. In that case, this method produces a truncation error ε, which depends on the remainder after the modulus division

$$n = 2\pi f\cdot T \bmod 2\pi \tag{13}$$

with respect to the total time interval T. In other words, if the period 2π/ω of the signal does not fit completely into the integration range, the integrals containing ω+ω0 will not vanish completely. However, remember the right term of (6). Here the integral can be split into an n·2π-periodic part and a residue. The periodic integral of cos((ω+ω0)t+φ) vanishes,

$$\begin{aligned} R &= \frac{A}{2}\left[\int_0^T \cos\bigl((\omega-\omega_0)t+\varphi\bigr)\,\mathrm{d}t + \int_0^T \cos\bigl((\omega+\omega_0)t+\varphi\bigr)\,\mathrm{d}t\right]_{\omega=\omega_0} \\ &= \frac{A}{2}\cos\varphi + \underbrace{\int_0^{(n\cdot 2\pi-\varphi)/(\omega+\omega_0)} \cos\bigl((\omega+\omega_0)t+\varphi\bigr)\,\mathrm{d}t}_{\to 0} + \underbrace{\int_{(n\cdot 2\pi-\varphi)/(\omega+\omega_0)}^{T} \cos\bigl((\omega+\omega_0)t+\varphi\bigr)\,\mathrm{d}t}_{=\varepsilon} \\ &= \frac{A}{2}\cos\varphi + \varepsilon, \end{aligned} \tag{14}$$

whereas a residue ε remains in the second integral from the last n·2π period up to the full length T. Obviously, a minimum of ε is reached if T = n·2π. All this holds if ω=ω0. If this is not the case, the integral ∫cos((ω−ω0)t+φ)dt also splits into two parts, from 0 to n·2π, and a residue, which leads to a second error ε2. Finally, with ω≈ω0, this leakage effect will produce artificial amplitudes in the nearest neighborhood of ω0. Anyway, note that inserting the result (14) into (10) or (11) yields mixed products, which makes it difficult or even impossible to reject the error ε. All this is far from being mathematically complete, but it points towards an error discussion, which is presented in more detail by Jerri [6, Chap. 6].
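The truncation error can be made visible with a short base-R experiment (the frequencies are example choices made here): a sinusoid whose period fits the window exactly produces a single clean spectral line, while one that does not fit leaks into the neighboring bins.

```r
# Leakage sketch: 4.0 Hz fits the 1 s window, 4.5 Hz does not.
n  <- 100; Ts <- 0.01
t  <- (0:(n - 1)) * Ts                 # T = 1 s, frequency grid df = 1 Hz
Y1 <- abs(fft(cos(2*pi*4.0*t)) / n)    # periods fit: clean line at 4 Hz
Y2 <- abs(fft(cos(2*pi*4.5*t)) / n)    # periods do not fit: leakage
# Y1 vanishes away from the 4 Hz bin, while Y2 spreads over many bins
```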
At this point, the statistical approach of fitting a sinusoidal model to the data set becomes attractive. The nature of a fitting algorithm enables the minimization of the phase error, which leads to an optimal result. A brief discussion of this concept is given in "Studies in Astronomical Time Series Analysis. II" by Scargle [7]. Section 3.8 provides an introduction to the Lomb-Scargle method, which gives results equal to the already mentioned least squares fit.
2.3. Integral Transforms
In contrast to the previously described QDT, which is more or less a statistical approach, the integral transforms are the mathematical backbone of spectral analysis. An integral transform represents the bijective mapping s(t)↔S(ω) between the time and frequency domains, which states that all information present in the first will be transformed into the second domain. In the following, capital letters always denote the frequency domain, whereas lowercase letters represent the time domain of the corresponding variable.
2.3.1. Fourier Transform
First of all, the Fourier transform should be introduced. Thereby, the definitions and some of the properties in the continuous and discrete time domain are different. A comparison is given in Table 1. Nevertheless, the signal s(t) forms a Fourier pair with S(ω). The following notation describes a transform of the signal s(t) into the frequency domain,

$$\mathcal{F}\{s(t)\} = S(\omega). \tag{15}$$

This uses the Fourier operator F, which expresses the calculus according to Table 1. The inverse back transform is denoted by F⁻¹. The Fourier transform has different properties like the frequency shifting

$$\mathcal{F}\{e^{ibt}f(t)\} = F(\omega - b), \quad b \in \mathbb{R}, \tag{16}$$

which becomes important when amplitude modulation is discussed in Section 3.4. All necessary properties are listed in Appendix B.
Table 1: Continuous and discrete Fourier transform.

                   Continuous                                   Discrete
  Transform        S(ω) = ∫_{-∞}^{∞} s(t)·e^{-iωt} dt           Ŝ(m) = (1/N) Σ_{n=0}^{N-1} ŝ_n·e^{-i(2π/N)mn}
  Back transform   s(t) = (1/2π) ∫_{-∞}^{∞} S(ω)·e^{iωt} dω     ŝ(n) = Σ_{m=0}^{N-1} Ŝ_m·e^{i(2π/N)mn}
In addition to the characteristics of the continuous Fourier transform, the discrete Fourier transform (DFT) has a 2π-periodic spectrum. As shown in Appendix C, the result of the DFT is a complex-valued data vector, in which the first element Ŝ_1 always contains the mean value of the data series. Subsequent bins hold the amplitude values up to Ŝ_{N/2}. From this position on, the spectrum is mirrored and repeats up to the sampling frequency. In Section 3, the symmetry of the DFT and its consequences are discussed in more detail, and it will be shown how to take advantage of this to recover the "negative" frequencies. In the case of multidimensional data, it turns out that the result of the DFT contains additional information, where the "negative" frequency components also play a role.
2.3.2. Hilbert Transform
The Hilbert transform (HT) is defined by the real-valued folding operation on the real-valued signal s(t),

$$\mathcal{H}\{s(t)\} = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{s(t')}{t-t'}\,\mathrm{d}t'. \tag{17}$$

Additionally, the linear operator H is introduced to execute the Hilbert transform on a function or data set. From the representation in Fourier space, the constant phase shifting feature becomes clear,

$$\mathcal{F}\{\mathcal{H}\{s(t)\}\} = -i\cdot\operatorname{sign}(\omega)\cdot\mathcal{F}\{s(t)\}. \tag{18}$$

Hence, (18) can be used to easily calculate Hs(t) in terms of a Fourier transform. Remember, here the sign(ω) function is defined on -∞<ω<∞. This must be taken into account when performing H(ŝ(n)) on discrete data. In this case, the discrete Hilbert transform is calculated by

$$\mathcal{F}\{\mathcal{H}\{\hat{s}(n)\}\} = \begin{cases} -i\cdot\mathcal{F}\{\hat{s}(n)\}, & 0 \le n < \dfrac{N}{2}, \\ +i\cdot\mathcal{F}\{\hat{s}(n)\}, & \dfrac{N}{2} \le n < N, \end{cases} \tag{19}$$

in accordance with the sign of ω in (18). This formalism considers the fact that "negative" frequencies are mirrored into the upper part of the DFT data vector. However, the result of Hs(t) is real-valued, with each frequency component phase-shifted by ±π/2. In conjunction with a real-valued and causal signal s(t), the HT helps to represent the real physical character of measured data, which should only contain positive frequencies, due to real-world processes. From this, the analytic signal

$$a(t) = s(t) + i\,\mathcal{H}\{s(t)\} \tag{20}$$

follows, which is a complex representation of s(t) containing a one-sided Fourier spectrum.
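A minimal base-R check of the phase shifting property (18): applying the sign-dependent factor to the DFT of a cosine should return the corresponding sine. The frequency of 5 Hz is an example choice made here.

```r
# Discrete Hilbert transform of cos -> sin via the FFT,
# following the sign convention -i*sign(omega) of (18).
n  <- 100
t  <- (0:(n - 1)) / n
y  <- cos(2*pi*5*t)
Y  <- fft(y)
# -i for the lower ("positive" frequency) half, +i for the upper half
h  <- c(rep(-1i, n/2), rep(1i, n/2))
hy <- Re(fft(h * Y, inverse = TRUE) / n)
# hy now equals sin(2*pi*5*t), i.e. a -pi/2 phase shift of y
```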
One derivation of the Hilbert transform in conjunction with the analytical signal is given in Appendix D and its application in terms of data filtering is briefly discussed in Section 3.3 and the following.
In addition to that, the concept of instantaneous frequency can be explained with the help of the HT. This topic is discussed in detail in “The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non-Stationary Time Series Analysis” by Huang et al. [8]. For further reading on that “Amplitude, Phase, Frequency—Fundamental Concepts of Oscillation Theory” by Vakman and Vaĭnshteĭn [9] should be mentioned.
3. Spectral Methods in One Dimension
The application of the introduced Fourier and Hilbert transform will be discussed in detail in this chapter. Each of the following sections includes a theoretical part, which is finally supported by examples. The focus lies on the application and fundamental understanding of the corresponding methods.
First of all, let the signal function

$$s(t) = \sum_{i=0}^{m} A_i\cdot\cos(\omega_i t + \varphi_i) + N(0,\sigma) \tag{21}$$

be a set of sinusoidal functions plus an optional noise term. This signal is sampled with an equidistant spacing Ts, so the discrete data vector

$$\hat{s}_n = \sum_{n=0}^{N-1} s(n\cdot T_s)\,\vec{e}_n \tag{22}$$

of length N represents the sampling series. Using the discrete Fourier transform (see Table 1), the spectrum

$$\hat{S}_m = \mathcal{F}\{\hat{s}_n\} \tag{23}$$

of the data vector ŝ can be calculated.
For further reading, a brief introduction can be found in “The Shannon Sampling Theorem—Its Various Extensions and Applications: A Tutorial Review” by Jerri [6]. A detailed comprehensive mathematical work is given in the book “Fourier Analysis” by Duoandikoetxea Zuazo [10].
3.1. The Essence of Band Limitation and the Nyquist Condition
The question of band limitation is a question of the unique representation of a function s(t) in the frequency space S(ω). In the simplest case, a band-limited signal has a spectrum

$$S(\omega) = \begin{cases} \ne 0, & |\omega| \le \omega_{\max}, \\ 0, & \text{elsewhere}, \end{cases} \tag{24}$$

which is only nonzero within the range |ω| ≤ ω_max. Outside this interval, the spectrum is equal to zero. The sampling theorem by Shannon [11] and the proof of Whittaker [12] state that, in such a situation, even a time-limited continuous function is completely defined by a discrete and finite set of samples, if all samples outside the time range are exactly zero (see the exemplary function in Figure 1). Closely related to that is the minimal required sampling frequency (twice the "Nyquist frequency") 1/Ts = fs ≥ 2·fmax, which is necessary to obtain all information about the signal.
The consequence is that there should not be any frequency present in the data set higher than fs/2 to achieve a unique sampling in application. Theoretically, this means a function s(t) is completely determined by its discrete spectrum S(n·2πfs) only if the signal is bandlimited to f<fs/2. Therefore the discrete spectrum of the discrete sampled signal s^n provides a unique spectral representation of its continuous counterpart s(t).
3.1.1. Application
The artificial function

$$y(t) = \sin(2\pi\cdot 4t) + 0.5\cdot\cos(2\pi\cdot 2t) + 1.5 \tag{25}$$

is given in Figure 2(a). It consists of two sinusoidal functions and an offset value. The sampling took place at a rate of Ts=0.05, so that there are theoretically N=21 possible samples. Note that y(t) is perfectly periodic, so the first sample would equal the last sample. Because the latter marks the first point of the next sampling period, it must be removed from the data set, which then has N*=20 elements. Ignoring this fact would lead to an error (leakage effect), with the result that the elements to the left and to the right of the frequency peaks are not zero anymore. The root of this behavior is discussed in Section 2.2 and becomes visible in the subsequent figures in this section. The resulting frequency resolution in this example is

$$\delta f = \frac{1}{T_s N^*} = 1, \tag{26}$$

so the signal's frequencies fit perfectly into this grid. The absolute amplitude spectrum |Ŷ_n| is given in Figure 2(b). Here, one can see that the mean, stored in the first bin, has its original value of A0=1.5, whereas the magnitude of the amplitudes at f=2 and f=4 is split into the upper and lower frequency domain. To retrieve the correct signal amplitudes, the individual values in both half-planes must be added. Remember that the DFT is mirror-symmetric with respect to the Nyquist frequency fs/2=10, so the first amplitude A1 becomes

$$A_1 = |\hat{Y}_{n=4}| + |\hat{Y}_{n=16}|. \tag{27}$$
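Relation (27) can be checked with base R alone (without the spectral package), sampling function (25) exactly as described above:

```r
# Recovering A0 and A1 from the two half-planes of the DFT, see (27).
Ts <- 0.05
t  <- seq(0, 1, by = Ts)
t  <- t[-length(t)]                   # drop the duplicate sample, N* = 20
y  <- sin(2*pi*4*t) + 0.5*cos(2*pi*2*t) + 1.5
Y  <- fft(y) / length(y)              # normalized DFT
A0 <- Mod(Y[1])                       # mean value, 1.5
A1 <- Mod(Y[1 + 4]) + Mod(Y[1 + 16])  # bins n = 4 and n = 16 add up to 1
```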
Oversampled signal y(t). Band limitation is achieved by the definition of the signal. Note that, because of the periodicity of the signal, the first sample and the sample at t=1 would be the same.
Figures 2 and 3(a) display an oversampled signal, where the required Nyquist condition is fulfilled. The result is a unique mapping of the time domain into frequency domain. In other words, the corresponding frequency vector f^n is only valid for 0≤n<N/2, because it repeats in the inverse order for N/2≤n<N.
(a, b) The signal y(t) is sampled marginally. In (c, d), it is undersampled so the upper and lower parts of the spectrum infiltrate each other. Compare the filled symbols, which correspond to the points in the lower half plane of (b). The black line in (c) indicates the alternative reconstruction.
However, as Figure 3(c) indicates, if the requirement of band limitation (fs≥2fmax) is violated, the mapping is not unique anymore. Remember the frequency resolution in all given examples is δf=1. In comparison to Figure 3(b), where the condition of band limitation is complied with, Figure 3(d) illustrates that the upper and lower parts of the frequency bands now infiltrate each other. This makes it even more complicated to distinguish between certain frequencies and their physical correspondence. In such a scenario, it becomes completely unclear which of the two functions belong to the sampling points.
3.1.2. Example: Undersampling
The conditions described above can be extended to the requirement

$$f_{\max} - f_{\min} = \Delta f \le f_s, \tag{28}$$

which means that a unique mapping is still possible, yet only if the bandwidth Δf is not larger than the sampling frequency. In addition to this, the frequency range must be known. This is automatically fulfilled in the most common case, whenever the frequency ranges within 0≤f≤fs.
The example given in Figure 4 shows the artificial function

$$y(t) = \cos(2\pi\cdot 25\cdot t) + 0.5\cdot\sin(2\pi\cdot 27\cdot t) + 1.5, \tag{29}$$

which is sampled with a sampling frequency of fs=20. The amplitude spectrum is given in Figure 4(b). Obviously, y(t) is undersampled. Remember that the discrete Fourier transform calculates a periodic spectrum, which projects the high-frequency parts into the lower frequency range between 0 and fs. Without any additional information, a mapping from the spectral domain to the time domain will fail. In the present case, we assume that the signal frequency is in the range 25≤f≤27, so the whole spectrum can be shifted by an offset of fo=fs. The second f-axis indicates this. The upper and lower frequency bands can be clearly distinguished, so the mapping is unique. In contrast to that, the interpenetration of the upper and lower bands prevents such a mapping in the example in Figure 3(d).
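The aliasing described here can be reproduced with a few lines of base R: sampling (29) at fs = 20 projects the 25 Hz and 27 Hz components onto the bins at 25 − fs = 5 and 27 − fs = 7.

```r
# Undersampling sketch: the DFT spectrum repeats with period fs.
fs <- 20
t  <- (0:(fs - 1)) / fs                        # 1 s of data, df = 1 Hz
y  <- cos(2*pi*25*t) + 0.5*sin(2*pi*27*t) + 1.5
Y  <- Mod(fft(y) / length(y))
# peaks appear at the aliased frequencies 5 Hz and 7 Hz
```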
A signal with a limited bandwidth is sampled at a rate slower than the signal frequency. The black line in (a) again displays the second possible curve, which would map to the spectrum in (b).
3.1.3. Example: The Derivative of a Function
One important property of the Fourier transform is the change of mathematical operations in the spectral domain. According to Appendix B, taking the derivative of a function reduces to a multiplication with iω,

$$\mathcal{F}\Bigl\{\frac{\mathrm{d}}{\mathrm{d}x}y(x)\Bigr\} = i\omega\cdot\underbrace{\mathcal{F}\{y(x)\}}_{Y(\omega)}, \tag{30}$$

in the spectral domain. In this manner, differential equations can be reduced to simpler linear algebraic representations.
In contrast to the continuous case, discrete sampled functions must be periodic and of finite bandwidth to fulfill the requirements of the DFT. In case the function is nonperiodic within the sampled window, the application of a proper window function is necessary to minimize leakage. The following code explains how to estimate the derivative of a non-band-limited function like

$$y(x) = -x^3 + 3\cdot x, \quad \text{with} \quad \frac{\mathrm{d}}{\mathrm{d}x}y(x) = -3\cdot x^2 + 3, \tag{31}$$

which is going to be sampled with N=40 discrete points.
require(spectral)
# 40 sample positions
x <- seq(-2.5, 2.5, length.out = 40)
# window the function and generate
# the sampling points
y <- win.tukey(x, 0.2) * (-x^3 + 3*x)
# doing the DFT
Y <- spec.fft(y, x, center = T)
# calc. derivative
Y$A <- 1i * 2*pi * Y$fx * Y$A
# the real part of the back transform
# contains the signal of interest
dy <- Re(spec.fft(Y, inverse = T)$y)
It must be mentioned that the function y(x) has to be windowed in advance to achieve the periodic property. In this example, this is done via the "Tukey" window, which rises and falls with a quarter cycle of a cosine function (compare the dashed curve). Figure 5 shows the output of the code listed above. Pay special attention to the windowed version of y(x) (black curve) and its almost perfect derivative. Here, the effect of the windowing process is to achieve Y(ω>0.5·ωs)=0, which relaxes the function at the right and left side. In consequence, the resulting derivative equals the analytic solution in the mid-range but deviates at the boundaries. In comparison to that, the unweighted function in Figure 5(a) (gray line) produces an unusable, faulty derivative in Figure 5(b) (crosses). The reason for that is the discussed missing band limitation.
Calculating the derivative of y(x). (a) displays the original function (gray), the windowed function (black), and the sample points. The dashed line defines the window function. In (b), the analytic (gray line) and estimated numeric (points) derivative dy(t)/dx is shown. The crosses indicate the result of the calculation without utilizing the window function. Note that some crosses are vertically outside the window for x<-2 and x>2.
With a look at the call to spec.fft() in the code above, notice that the additional parameter center = T is provided. The effect is a zero-symmetric spectrum with the corresponding frequencies ranging within -fs/2≤f<fs/2, which enables the direct computation of iω·Y(ω) utilizing the elements of the list variable Y. Remember that a pure DFT would map to frequencies in the range of 0≤f<fs, so that the negative frequencies -fs/2≤f<0 are projected into the upper half of the frequency domain fs/2≤f<fs. In the case of center = F, the latter frequency range would be generated and used in further calculations, which must be considered to prevent wrong results.
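For readers without the spectral package, the same spectral derivative can be sketched in base R for a signal that is already periodic and band-limited, so that no window is needed. The signal and grid are example choices made here.

```r
# Spectral derivative with an explicit centered frequency vector.
n  <- 64; L <- 1
x  <- (0:(n - 1)) * L / n
y  <- sin(2*pi*3*x)
# DFT frequencies: the upper half of the vector carries the negative ones
f  <- c(0:(n/2 - 1), -(n/2):-1) / L
dy <- Re(fft(1i * 2*pi*f * fft(y), inverse = TRUE) / n)
# dy matches the analytic derivative 2*pi*3*cos(2*pi*3*x)
```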
3.2. Center a Discrete Spectrum
As initially explained in Section 2.3, the spectrum of a continuous function s(t) is defined in the range of -∞<ω<∞. In contrast to that, its counterpart, the discrete Fourier transform, produces a 2π-periodic spectrum, which is defined for a positive number of samples 0≤n<N and frequencies 0≤m<N. The negative frequency components are projected into the range of fs/2≤f<fs, so the spectrum becomes mirror-symmetric with respect to the Nyquist frequency fs/2.
Figure 6(a) shows the result of the continuous Fourier transform applied to the artificial example function (25):

$$y(t) = \sin(2\pi\cdot 4t) + 0.5\cdot\cos(2\pi\cdot 2t) + 1.5. \tag{32}$$

It is clear that the mean value ȳ(t) is located at f=0 and the sinusoidal components are located symmetrically around zero. In contrast to that, the discrete Fourier transform produces a data vector Ŷ_m, to which only positive discrete frequencies can be assigned. To convert the data vector into a zero-symmetric representation, the function y(t) must be modulated by fs/2:

$$y^*(t) = y(t)\cdot e^{i\pi f_s t}. \tag{33}$$

This method utilizes the shifting or damping property of the Fourier transform as stated in Appendix B (B.4), assuming a damping factor of a=1. By doing so, the complete spectrum (Figure 6(b)), including the mean value, shifts by half the sampling frequency, right into the middle, as indicated in Figure 6(a).
Two spectra of function (25). The difference between the natural spectrum in (a) and the normal discrete Fourier transform in (b) is the symmetry. In (a), there is a symmetric spectrum around zero, whereas in (b) the point of symmetry is fs/2. The dots in panel (a) indicate the result of the DFT of the modulated sampling series of (25). Note the exceptional first bin in (b), where the mean value is stored.
(a) Natural spectrum (arrows) and DFT spectrum (circles). (b) Discrete spectrum.
3.2.1. Application
In an equally spaced sampling series, the discrete time vector t = n·Ts changes (33) to

$$y^*(n\cdot T_s) = \hat{y}_n\cdot e^{i\pi f_s n T_s} = \hat{y}_n\cdot \underbrace{e^{i\pi n}}_{1,\,-1,\,1,\,-1,\,\dots} = \hat{y}_n\cdot(-1)^n \tag{34}$$

and simplifies it. From (34), it follows that the modulation of the data vector can be computed easily by a multiplication with the alternating series (-1)^n. Subsequently, all this can be extended into n dimensions, so even 2D spectra can be centered [13, p. 154].
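The effect of (34) can be verified directly in base R, independent of the package code. Note that an exponent of (1:n) instead of (0:(n-1)) only flips the overall sign, which does not affect the magnitudes.

```r
# Modulation with (-1)^n shifts the spectrum by fs/2:
# the mean of a constant signal moves from bin 1 to bin n/2 + 1.
n  <- 16
y  <- rep(2, n)                          # constant signal, energy at f = 0
Yc <- Mod(fft(y * (-1)^(0:(n - 1))) / n)
# which.max(Yc) returns n/2 + 1: the mean now sits in the middle bin
```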
The following R-code which is part of the spec.fft() function explains the method. First, assume a vector x for the spatial location where the measurements y were taken.
nx <- length(x)
ny <- length(y) # is equal to nx
# calculate the sampling period
Ts <- min(diff(x))
# modulate the data vector
y <- y * (-1)^(1:ny)
# determine the corresponding
# frequency vector
fx <- seq(-(nx)/(2*Ts*nx)
         ,(nx - 2)/(2*Ts*nx)
         ,length.out = nx)
# calculate and normalize the spectrum
A <- fft(y) * 1/ny
In the calculation of the frequency vector fx, the maximum frequency is (nx - 2)/(2*Ts*nx), in which (nx - 2) accounts for the position of f=0 and the intrinsic periodicity of the signal y. Remember the examples in Section 3.1, where the last possible sampling point matches the first point of the next signal period.
The back transform of the centered spectrum is the simple reverse of the described procedure. The function spec.fft() supports both methods to decompose a data vector into its centered or noncentered spectral representation. The usage of the code is explained below.
# defining the sampling
Ts <- 0.05
x <- seq(0, 1, by = Ts)
# remove the last sample to avoid
# an error with periodicity
x <- x[-length(x)]
y <- function(x) sin(2*pi * 4*x) +
  0.5 * cos(2*pi * 2*x) + 1.5
# the normal fft()
Y <- spec.fft(y(x), x, center = F)
# prior modulation
Yc <- spec.fft(y(x), x, center = T)
# spec.fft() returns a list object
# containing the following:
# Y$fx, (Y$fy), Y$A, Y$x, Y$y, (Y$z)
3.3. The Analytic Signal
The concept of the analytic signal is based on the idea that a real-valued signal, for example, some measured data, can only contain positive frequencies. At least in the 1D case, the interpretation of negative frequencies is almost impossible. Next to that, a real-world signal usually exists only for positive times. This makes the signal causal, which also must be taken into account; see (D.2) in Appendix D. The foundation of the analytic signal is the Hilbert transform. For a short introduction to the Hilbert transform itself and its properties, refer to Section 2.3.2. Besides that, the attribute "analytic" originates from the Cauchy-Riemann conditions for differentiability, which are fulfilled for analytic functions in this sense [4].
Now, let us introduce the analytic signal

$$a(t) = s(t) + i\,\mathcal{H}\{s(t)\}. \tag{35}$$

It is formally defined as the sum of a signal s(t) and its Hilbert transform Hs(t) multiplied by the imaginary unit. So the real-valued signal s(t) transforms into a(t), which now has a one-sided spectrum. Frequencies above the Nyquist frequency fs/2 are forced to zero and frequencies below are amplified by a factor of two, so that energy is conserved. This can be seen in Figure 7, which shows the result of a DFT applied to the analytic representation of the sampled signal from Figure 2. Starting from that, several applications can be derived.
Single sided spectrum of the analytic signal of y(t).
First, all calculations which utilize the signal's frequencies, for example, the estimation of the derivative introduced in Section 3.1.3, are simplified with respect to discrete data, because the distinction between frequencies above and below fs/2 can be omitted. The example of the derivative in Section 3.1.3 instead utilized the discrete centered spectrum, which provides a frequency vector with positive and negative parts that can be multiplied directly with the spectrum.
Second, the imaginary part of a(t) equals the Hilbert transform of its real part. The Hilbert transform's phase shifting properties form the basis to calculate, for instance, the envelope function, which is discussed in the next section. The analytic signal also provides a way to estimate the instantaneous frequency of a signal. The latter leads to the empirical mode decomposition, which gives a time-dependent spectral decomposition according to [8] but is out of the scope of this guide. In Section 3.7, it will be shown how to utilize the Fourier transform and how to overcome its drawbacks in conjunction with the analytic signal of data from a nonstationary process.
3.3.1. Implementation
According to Appendix D, the function analyticFunction(y) provided by the spectral package calculates the analytic signal. An example is given in Figure 7, which displays the analytic representation of the sampled artificial example function (25) from Section 3.1. Here the spectrum is single sided and all amplitudes have their correct values. Note that all components above fs/2 are zero, because the upper half of the spectrum is projected into the lower half by this method. In consequence, the illustration in Figure 7 is in contrast to the examples given above (e.g., Figure 6), where individual amplitudes only contain half of the true value.
The program code
# normalized DFT-spectrum
X <- fft(x) / length(x)
# synthetic frequency vector
f <- 0:(length(X) - 1)
# shifted by half the length
# so 0 is in the middle
f <- f - mean(f)
# Hilbert transform
X <- X * (1 - sign(f - 0.5))
# correct mean value
X[1] <- 0.5 * X[1]
ht <- fft(X, inverse = T)
explains how the analytic signal is calculated. Here the term (1 - sign(f - 0.5)) solves three things. First, this statement shifts the phase with respect to the sign of the virtual frequency vector f, whereby the "negative" frequencies are located in the upper half of the data set. This provides a better solution than (19), because the evenness of the data set's length does not matter anymore. Second, since f is an integer vector, the subtraction f - 0.5 circumvents the problem sign(0)=0, which would cause an error with even-length data sets at f=0. And third, the code above calculates the Hilbert transform and the analytic signal in one step in the spectral domain by combining (18) and (33).
The code below produces the single sided spectrum Y given in Figure 7. Note that the amplitudes correspond to the real input values of the function (25) and all spectral parts above fs/2 are zero.
x <- seq(0, 1, by = 0.05)
x <- x[ -length(x) ]
y <- function(x) sin(2*pi * 4*x) +
                 0.5 * cos(2*pi * 2*x) + 1.5
Y <- spec.fft( analyticFunction(y(x)), x, center = F )
3.4. Calculating the Envelope
Sometimes it becomes necessary to calculate the envelope function of data. Different approaches are possible, for instance, finding all maxima and then fitting a spline function to these points. However, in the following, the spectral way to do that is introduced.
First, remember the trigonometric identity
$$\sin^2 x + \cos^2 x = 1, \quad (36)$$
which becomes the key component of the calculation later. Next to that, assume a function
$$y(t) = A(t) \cdot \cos(\omega_0 t) \quad (37)$$
with A(t) as the envelope function, which is modulated onto the carrier cos(ω0t). It is clear now that calculating the envelope is equivalent to an amplitude demodulation.
Now, the amplitude function is gained by
$$A(t) = \sqrt{y(t)^2 + y^*(t)^2} = A(t) \cdot \underbrace{\sqrt{\cos^2(\omega_0 t) + \sin^2(\omega_0 t)}}_{=1}, \quad (38)$$
where y*(t) denotes the signal y(t) phase-shifted by π/2, so that y*(t) equals the Hilbert transform
$$y^*(t) = A(t) \cdot \sin(\omega_0 t) \quad (39)$$
$$= \mathcal{H}\{y(t)\}. \quad (40)$$
Under the condition of sufficient band limitation, the statement above can be expressed as
$$A(t) = \sqrt{y(t)^2 + \mathcal{H}\{y(t)\}^2}, \quad (41)$$
which works because (40) acts like an ideal phase shifter, as discussed in Section 2.3 and Appendix D.
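As a minimal base-R sketch of (41), the envelope of a Gaussian-modulated carrier can be recovered via the standard frequency-domain construction of the analytic signal; the test signal and all names here are illustrative assumptions, not part of the spectral package:

```r
# sketch: envelope via the analytic signal, eq. (41); base R only
N <- 512
t <- (0:(N - 1)) / N
A <- exp(-(t - 0.5)^2 / 0.01)    # assumed true envelope
y <- A * cos(2*pi * 50 * t)      # modulated carrier

# analytic signal: keep DC and Nyquist, double the positive frequencies
H <- rep(0, N)
H[1] <- 1
H[N/2 + 1] <- 1
H[2:(N/2)] <- 2
a <- fft(fft(y) * H, inverse = TRUE) / N

env <- Mod(a)   # |a(t)| = sqrt(y(t)^2 + H{y(t)}^2), cf. (41)
```

Because the carrier (50 cycles) is well separated from both zero and the Nyquist frequency, env closely reproduces the Gaussian A(t), up to small ripple.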
3.4.1. Application
The calculation is straightforward and follows (41). The spectral package includes the function envelope() to perform the calculation of the 1D envelope. The only difficulty is the band limitation, which is necessary to achieve reasonable results. In Figure 8(b), the bandwidth required to demodulate the envelope becomes clearly visible in the negative and positive half planes.
The envelope function of a signal. The spectrum (b) clearly indicates the bandwidth which is needed to reconstruct the envelope. In case of a noisy signal, the band pass filter must have at least this width to produce reasonable results.
Panels: (a) time domain; (b) spectral domain.
3.5. Convolution
The Fourier transform not only maps a spatial function into the spectral domain, but also converts several mathematical operations. One example is the derivative
$$\frac{\mathrm{d}f(t)}{\mathrm{d}t} \longleftrightarrow i\omega F(\omega), \quad (42)$$
which equals a simple multiplication with the complex variable iω in the spectral domain.
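A short base-R sketch of (42), differentiating a sampled sine by multiplying its spectrum with iω; the frequency layout and the test signal are assumptions made for illustration:

```r
# sketch: derivative via multiplication with i*omega in the spectrum, eq. (42)
N <- 128
t <- (0:(N - 1)) / N
y <- sin(2*pi * t)

# angular frequencies in FFT bin order; the Nyquist bin is zeroed
k <- c(0:(N/2 - 1), 0, -(N/2 - 1):-1)
dy <- Re(fft(1i * 2*pi * k * fft(y), inverse = TRUE) / N)
# dy approximates d/dt sin(2*pi*t) = 2*pi*cos(2*pi*t)
```

For a band-limited, periodic signal like this one the spectral derivative is exact up to floating point roundoff.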
In the following, the convolution ∫_{−∞}^{∞} f(τ) g(t − τ) dτ will be introduced by the identities
$$\mathcal{F}\Big\{\underbrace{\int_{-\infty}^{\infty} f(\tau)\, g(t-\tau)\, \mathrm{d}\tau}_{=\, f(t) * g(t)}\Big\} = \mathcal{F}\{f(t)\} \cdot \mathcal{F}\{g(t)\}, \quad (43)$$
$$\mathcal{F}\{f(t) \cdot g(t)\} = \int_{-\infty}^{\infty} F(\tau)\, G(\omega - \tau)\, \mathrm{d}\tau, \quad (44)$$
which are valid only if the integrals ∫_{−∞}^{∞} |f(t)|² dt and ∫_{−∞}^{∞} |g(t)|² dt exist. Equation (43) describes the folding operation (convolution) in the spatial domain, which corresponds to a simple multiplication in the spectral domain.
The working principle and the implementation of (43) and (44) via the fast Fourier transform (FFT) form one of the most powerful tools in the field of numeric computation.
3.5.1. Example: Polynomial Multiplication
Suppose two polynomials of degree n = 2. Then the multiplication of these two would be
$$\begin{aligned}
(a_0 + a_1 x + a_2 x^2) \cdot (b_0 + b_1 x + b_2 x^2) &= a_0 b_0 + a_0 b_1 x + a_0 b_2 x^2 + a_1 b_0 x + a_1 b_1 x^2 \\
&\quad + a_1 b_2 x^3 + a_2 b_0 x^2 + a_2 b_1 x^3 + a_2 b_2 x^4 \\
&= \sum_{i=0}^{2n} c_i x^i, \quad (45)
\end{aligned}$$
which finally gives an expression of degree 2n = 4. Performing the expansion ends in a convolution of the two coefficient vectors, which must be 2n + 1 elements long so that the result fits into the output vector. Doing all that spends (n + 1)² = 9 floating point multiplications on a computer. The statement
$$\begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ 0 \\ 0 \end{pmatrix} * \begin{pmatrix} b_0 \\ b_1 \\ b_2 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} a_0 b_0 \\ a_0 b_1 + a_1 b_0 \\ a_0 b_2 + a_1 b_1 + a_2 b_0 \\ a_1 b_2 + a_2 b_1 \\ a_2 b_2 \end{pmatrix} \quad (46)$$
expresses the convolution and its result, where the symbol * denotes the folding operator. Alternatively, this operation can be done with the help of the Fourier transform. According to (43),
$$\vec{c} = \mathcal{F}^{-1}\{\mathcal{F}\{\vec{a}\} \cdot \mathcal{F}\{\vec{b}\}\} \quad (47)$$
requires the transform to be calculated three times.
The great advantage of using the Fourier transform, instead of the straightforward expansion, is the n·log₂(n) scaling of the FFT implementation [14, Chap. 30]. As shown in Table 2, the FFT accelerates the calculation from n > 32 on in this example.
Computation costs for the polynomial multiplication. The calculation amount for the FFT is estimated by 3·(2n+1)log2(2n+1).
Degree    Expansion    FFT
   8           64       209
  16          256       500
  32         1024      1175
  64         4096      2714
The standard algorithm of the discrete FFT is limited to vector sizes of N = 2^n. This restriction can be overcome with the DFT implementation described by Frigo and Johnson [15], which even allows prime-numbered lengths without a loss of performance.
As a simple example, the following code illustrates the method described above.
# defining the coefficients
a <- c(1:3, 0, 0)
b <- c(5:7, 0, 0)
# calculating the FFT
A <- fft(a)
B <- fft(b)
# convolve via multiplication
# of A and B
c <- Re( fft(A * B, inverse=T)
         / length(a) )
# would give the same result
c2 <- convolve(a, b, conj = F)
print(c)
####### OUTPUT ########
> 5 16 34 32 21
The result is given in the last line of the program listing. The R-command convolve(x, y, conj = F) uses the same mechanism and produces exactly the same results.
3.6. Spectral Filtering in Conjunction with the Sampling Theorem
Every sampled signal has its starting point t = 0, followed by a point in time T where it ends. The sampling procedure itself and the nature of the signal finally define the underlying band limitation. The intention of the first part of this section is to show the relation between the measurement and the consequences which arise if a causal signal is sampled. It turns out that, in conjunction with the Fourier transform, several convolutions take place in the time and frequency domain when the maximum time T and the maximum frequency are selected.
Regarding that, it is important to understand that the filter process already begins with the sampling of the signal. Figure 9 illustrates this for the function
$$y(t) = 0.8 \cdot \cos(2\pi \cdot 2t), \quad (48)$$
which fits perfectly into the period of the sampling window. In principle, this function is defined on the interval −∞ < t < ∞. The data acquisition process then defines a starting point t = 0 where the measurement begins and an end point t = T where it stops. This is illustrated by the bold rectangular window
$$w(t) = \begin{cases} 1, & 0 \le t < T \\ 0, & \text{elsewhere} \end{cases} \quad (49)$$
in Figure 9(a). This window function picks out the range 0 ≤ t < T of y(t), so that the signal
$$s(t) = w(t) \cdot y(t) \quad (50)$$
is now defined by y(t) multiplied by w(t). According to the convolution properties of the Fourier transform (see Section 3.5), this operation corresponds to the convolution
$$S(\omega) = \int_{-\infty}^{\infty} W(\omega - \tau) \cdot Y(\tau)\, \mathrm{d}\tau = W(\omega) * Y(\omega) \quad (51)$$
of the ideal spectrum Y(ω) = F{y(t)} with the spectral representation
$$W(\omega) = \frac{1}{i\omega}\left(1 - e^{-i\omega T}\right) \quad (52)$$
of the window function w(t), which is in fact Shannon’s sampling function [11]. The consequence for the signal’s spectrum S(ω) is that the ideal Dirac spectrum Y(ω) smears out into sin(x)/x-like functions, because each arrow is weighted with W(ω). Note that the zeros of the resulting spectrum now have an equidistant spacing of δf = 1/T. At this moment, the original function y(t) is only windowed, not yet sampled.
In (a), the function y(t) from (48) is sampled in the time range 0≤t<1. The finite time interval corresponds to a rectangular weight function. The continuous spectrum (arrows in (b)) is then convoluted with the spectral representation of that rectangular window. The black line indicates the convolution step for one left spectral part. Note that only the Dirac impulse generates a value, because all other components remain zero. The dashed line in (b) indicates the limitation to a maximum frequency, which corresponds to a convolution in the time domain (a). Open symbols show the sampling (a) and the result of the DFT (b).
Panels: (a) sampled signal with Ts = 0.1; (b) spectrum and folding kernel.
The next step is to limit the spectrum in Figure 9(b) to a maximum frequency, for example, fmax = 1/(2Ts), by applying an additional rectangular window in the frequency domain
$$V(\omega) = \begin{cases} 1, & |\omega| \le (2 T_s)^{-1} \\ 0, & \text{elsewhere.} \end{cases} \quad (53)$$
The parameter Ts becomes the period of the sampling frequency. The important link to the Nyquist frequency will be clarified later. However, the corresponding representation of V(ω) in the time domain
$$v(t) = \frac{\sin\left((2 T_s)^{-1} \cdot t\right)}{t} \quad (54)$$
is displayed by the dashed line in Figure 9(a). In the context of the convolution, which now takes place in the time domain, the windowed signal
$$s(t) = \left(w(t) \cdot y(t)\right) * v(t) \quad (55)$$
is finally convoluted with v(t). But still, the result (55) is not yet sampled, which is done in the next step.
Sampling is a procedure in which individual values of the function s(t) are selected and stored. With regard to the nature of the sinc-like functions in Figure 9, the sampling should take place at each possible zero of v(t), as well as at v(t = 0), which is the only sampling point with a nonzero value of v(t). This guarantees that the convolution (55) rejects everything but the points of sampling. At the same time, the distance between the samples is set to δt = Ts, because the function v(t) provides two zeros in an interval of (2Ts)^{−1}·t ≤ 2π, which correspond to the next nonzero samples of the neighboring points. In other words, when choosing the next sample, v(t) must be shifted by exactly Ts to reject the former sample and to take the current value. Note that violating this restriction, for example, because the location of the sampling points jitters, will lead to an error in the discrete spectral representation Ŝ_m.
Finally, this conceptual description gives a second answer to the question why the sampling frequency should be at least twice the maximum signal frequency. A detailed mathematical derivation of the discrete Fourier transform and the properties of sampling is given in the book of Debnath and Shah [16, Chap. 3]. However, the discrete sampling given in Figure 9(a) and the finite length of the sampling series cause the convolution (51) to produce the only possible peaks in the spectrum at f = −2, 2, because all other 2π-periodic values multiplied with the sampling kernel (sinc function) evaluate to exactly zero.
If the signal period does not fit perfectly into the window, which is usually the case in reality, the kernel function W(ω) never matches the true Dirac; instead it sums values into the left and right bins of the corresponding discrete Fourier vector Ŝ_m. To reduce these side effects, other window functions can be applied before the DFT takes place. The intent of these window functions (Hamming, Blackman, or Tukey window) is to convolve the spectrum with a function whose amplitude decreases more steeply than W(ω), to minimize the effect of the period’s misfit. Moreover, w(t) has an infinite spectrum, which is truncated in finite and discrete data processing; this leads to additional side effects if the period does not fit the sampling window. However, window functions can help to reduce such interferences, which are generated when local nonperiodic events or infinite spectra are present.
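The leakage effect described above can be made visible with a few lines of base R; the signal frequencies 2 and 2.5 are assumptions chosen so that one period fits the window exactly and the other does not:

```r
# sketch: spectral leakage when the period does not fit the window
N <- 100
t <- (0:(N - 1)) / N
fit   <- Mod(fft(cos(2*pi * 2   * t))) / N   # 2 cycles fit exactly
nofit <- Mod(fft(cos(2*pi * 2.5 * t))) / N   # 2.5 cycles do not

# 'fit' concentrates on two bins of height 0.5 (positive and
# negative frequency); 'nofit' smears into many neighboring bins
```

The fitting signal occupies exactly two bins, while the misfitting one spreads sinc-like energy over a broad range of bins, which is what a window function is meant to suppress.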
3.6.1. Application
Convolution filters in the time domain, such as the moving average filter, are easy to calculate with the help of the FFT. Given the signal
$$s(t) = \cos(4\pi \cdot t) + \sin(20\pi \cdot t) + \mathcal{N}(0, 0.5), \quad 0 \le t < 1, \quad (56)$$
which is defined in the range 0 ≤ t < 1, the moving average of the sampled signal ŝ_n with a filter kernel of length N_K = 5,
$$\bar{\hat{s}}_n = \sum_{i=1}^{N_K} \hat{k}_i \cdot \hat{s}_{n-i}, \quad (57)$$
is then calculated for the nth element by taking the weighted sum of the N_K elements before that element. The filter kernel k̂ for the example in Figure 10 is represented by the weighting coefficients
$$\hat{k} = \frac{1}{5}\,\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \end{pmatrix}. \quad (58)$$
Calculating the moving average to reduce the noise and to reject the high frequency. Panel (a) shows the original function (gray line) and the noisy sampling points (open symbols). The result of the moving average in the time domain is given in (a) for the convolution (dashed) and the shifted FFT (dash-dotted). (b) displays the spectrum of the original function (gray), the filtered result (black), and the kernel function (dashed).
Panels: (a) time domain; (b) spectral domain.
With respect to the spectrum of k̂, it becomes clear that the length of k̂ is chosen such that the high-frequency part sin(20π·t) is rejected completely. But it is also evident that other frequency components still remain in the result or are falsely damped, like the low-frequency part cos(4π·t).
Figure 10(a) illustrates that applying the moving average filter in the time domain leads to a phase shift in the signal. Working in the spectral domain, one gains the possibility of shifting the signal back and reconstructing the whole definition range as follows:
$$\mathcal{F}\{\bar{s}(t)\} = e^{i\omega N_K/2} \cdot S(\omega) \cdot K(\omega),$$
$$\mathcal{F}\{\bar{\hat{s}}_n\} = e^{i 2\pi (0 \cdots N-1) \cdot (N_K/2)/N} \cdot \hat{S}_m \cdot \hat{K}_m. \quad (59)$$
The dash-dotted line in Figure 10(a) illustrates the result of this operation. The code which calculates this is presented below.
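A minimal, self-contained sketch of this phase-corrected moving average follows; the concrete signal is an assumption, the noise term of (56) is omitted so the result can be checked exactly, and the integer group delay (N_K − 1)/2 is used for the back shift, which for an odd kernel length avoids the half-sample ambiguity of N_K/2:

```r
# sketch of the phase-corrected moving average via the FFT, cf. (59)
N <- 50
t <- (0:(N - 1)) / N
y <- cos(4*pi * t) + sin(20*pi * t)   # eq. (56) without the noise term
k <- rep(1/5, 5)                      # moving-average kernel, N_K = 5

# zero-pad the kernel to the data length before transforming
K <- fft(c(k, rep(0, length(y) - length(k))))
S <- fft(y)

d  <- (length(k) - 1) / 2             # integer group delay of the kernel
ph <- exp(1i * 2*pi * (0:(N - 1)) * d / N)   # shift-back phase factor
ys <- Re(fft(ph * S * K, inverse = TRUE) / N)
# ys is now the centered moving average: the 10 Hz part vanishes
# and the 2 Hz part is damped by a known, real-valued gain
```

With N = 50 and N_K = 5, the window spans exactly one period of sin(20π·t), so that component is rejected exactly, and the low-frequency part is only damped.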
Pay attention to the statement fft(c(k, rep(0, length(y) − length(k)))) in the first line. It fills k with zeros up to the length of the whole data set before the FFT is calculated, because the multiplication in frequency space can only take place between vectors of equal length.
With respect to the spectrum of the kernel K^, it becomes evident that the moving average filter is not able to suppress higher frequencies completely.
3.6.2. Example: The Ideal Low Pass Filter
Since the moving average has some drawbacks (because of its infinite frequency response), one should think about an ideal (low pass) filter with a finite frequency response. Such a filter has the advantage that unwanted spectral components can be rejected completely. On the other hand, this results in an infinite long kernel in the time domain, which is nevertheless calculated in the spectral domain.
The basic principle of such a filter is to set every component above a certain frequency ωmax to zero:
$$S(\omega > \omega_{\max}) = 0. \quad (60)$$
Figure 11 illustrates the procedure. Compared to the previous moving average example in Figure 10, the output signal’s shape becomes smoother and more accurate. However, a difference to the ideal low-frequency part (dashed line in Figure 11(a)) remains, which is the result of the remnant spectral components around the signal frequency f = 2.
Calculating the ideal low pass filter. Panel (a) shows the original function (gray line) and the noisy sampling points (open symbols). The result of the ideal low pass filter in the time domain is given in (a) by the solid line. The dashed line in (a) illustrates the low frequency part of the signal with f=2. Panel (b) displays the spectrum of the original function (gray), the filtered result (black), and the spectral kernel function (dashed).
Panels: (a) time domain; (b) spectral domain.
A code snippet to calculate the ideal low pass is presented below.
# assuming x and y to hold the time
# vector and the sampling points
Y <- spec.fft(y, x)
# doing the filtering
Y$A[abs(Y$fx) > 3] <- 0
# reconstruct the filtered signal
yf <- spec.fft(Y, inverse = T)$y
The spectral package provides the filter.fft(y, x, BW, fc = 0, n = 3) function to do that in one step. It utilizes analyticFunction(y) to filter the one-sided spectrum and enables the user to apply a band pass filter with an arbitrary polynomial weight function
$$w \propto \omega^n. \quad (61)$$
The parameter n sets the polynomial degree, where 3 is close to the moving average. Passing higher values, say n = 10, successively converges to the ideal band pass solution. All this is because sometimes it is desirable to have a smooth weighting of frequencies and sometimes not.
The used code then changes to the following example.
yf <- filter.fft(y, x, fc=0, BP=3, n=10)
As seen in Figure 11, the bandwidth BP is symmetric around the center frequency fc = 0, so the values must be set in accordance with the desired window which should remain in the spectrum. Utilizing the analytic signal representation avoids mistakes when programming the weight function and keeps the code clean.
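To illustrate how a polynomial weight of increasing degree approaches the ideal band pass, a Butterworth-style stand-in can be used; note this is only a plausible assumption for demonstration and does not reproduce the exact weight shape implemented in filter.fft():

```r
# hypothetical polynomial band-pass weight around fc; illustrative only,
# not the exact formula used by spectral::filter.fft
bp_weight <- function(f, fc = 0, BW = 1, n = 3) {
  1 / (1 + abs((f - fc) / (BW / 2))^(2 * n))
}
```

At the half-bandwidth |f − fc| = BW/2 the weight is exactly 0.5; for n = 10 it falls below 1e−6 one full bandwidth away from fc, which is already close to the ideal rectangular filter.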
3.6.3. Example: Noise Reduction via Autocorrelation
By choosing an appropriate kernel function, it becomes possible to extract certain features of the signal. The previous example illustrated how a low pass filter can filter out a noisy signal by zeroing all upper frequency components in the spectrum. In the time domain, this corresponds to a folding operation of the input data with an infinite “sinc-” like kernel function.
Now the opposite approach is to ask what the most significant periodic components within the signal are. First of all, the continuous autocorrelation function
$$\mathrm{acf}(\tau) = \int_{-\infty}^{\infty} s(t)\, s^*(t - \tau)\, \mathrm{d}t \quad (62)$$
$$= \mathcal{F}^{-1}\{S(\omega) \cdot S^*(\omega)\} \quad (63)$$
provides a mechanism to examine a data set or function with respect to its self-similarity. The equations above convolute the signal s(t) with its complex conjugate s*(t). According to the convolution properties, (62) can be expressed as the inverse Fourier transform of the product of the signal’s spectra. Note that for a real-valued signal s(t) = s*(t). Suppose the signal consists of a stationary sinusoidal component. It is quite evident that the acf rejects the noise which overlays s(t): the underlying noise equals a nonstationary process and therefore cancels out in the acf. A generalization of this statement is the Wiener-Khinchin theorem. Its application is discussed in detail by Cohen [4, Chap. 1.9], Marks [17, Chap. 4], and Wiener [18].
Assuming the function
$$s(t) = \cos(2\pi \cdot 2 \cdot t) + \sin(2\pi \cdot 10 \cdot t) + \mathcal{N}(0, 0.5) \quad (64)$$
with some normally distributed noise overlaid, the signal ŝ_n is the sampled function s(n·Ts) with Ts = 0.01, so it fits perfectly into one sampling period of T = 1. This example is very artificial; real-world signals often cause aliasing effects, so attention must be paid.
The discrete spectrum of acf_n can now be calculated by
$$\mathrm{ACF}_m = \hat{S}_m \cdot \hat{S}^*_m. \quad (65)$$
Note that the resulting spectrum is real-valued, so the phase information is lost. A proper noise reduction can be achieved by defining a weighting vector
$$W_m = \begin{cases} 0, & \mathrm{ACF}_m < \mathrm{sd}(\mathrm{ACF}_m) \\ 1, & \text{else,} \end{cases} \quad (66)$$
which sets every spectral component smaller than the standard deviation of ACF_m to zero. The resulting filtered signal
$$\hat{s}_{f,n} = \mathcal{F}^{-1}\{W_m \cdot \hat{S}_m\} \quad (67)$$
is given in Figure 12. For the given example, it is important to calculate everything with the analytic function representation to preserve the energy content of the data vectors.
x <- seq(0, 1, by = Ts)
# for perfect periodicity
x <- x[ -length(x) ]
y <- cos(2*pi * 2*x) + sin(2*pi * 10*x) +
     rnorm(length(x), sd = 1)
# calculating the autocorrelation
Y <- fft( analyticFunction(y) ) / length(y)
ACF <- Y * Conj(Y)
# calculating the weight vector
w <- ACF
w[abs(w) < sd( abs(w) )] <- 0
w[w != 0] <- 1
# backtransform of the filtered signal
yf <- Re(fft( Y * w, inverse = T) )
Noise reduction with autocorrelation. In the left panel (a) the sampled function (gray line) is overlaid with noise (circles). The bold line is the result of the weighted filtering procedure. In the frequency domain (b) the single sided spectra of the analytic functions are displayed. The input function (gray symbols) contains much noise which is reduced due to the autocorrelation (dotted black). The horizontal line indicates the threshold below which all spectral components are ignored.
3.7. Time-Dependent Spectral Analysis
Many signals in reality do not fulfill the requirement of stationarity. It is very often the case that the measured signal is overlaid by a trend or a “temporally local” event that takes place only once. All these nonstationary and nonperiodic processes spread out into all frequencies of the corresponding Fourier decomposition of the data. If a proper band limitation cannot be achieved, the mapping between physical frequencies and the spectral representation might fail too, as stated in Section 3.1.
The equation
$$y(t) = \underbrace{\frac{1}{\sqrt{2\pi} \cdot 0.05}\, e^{-(t-0.2)^2/(2 \cdot 0.05^2)}}_{A_1(t)} \cdot \sin(2\pi \cdot 20 \cdot t) + \underbrace{\frac{1}{\sqrt{2\pi} \cdot 0.1}\, e^{-(t-0.7)^2/(2 \cdot 0.1^2)}}_{A_2(t)} \cdot \sin(2\pi \cdot 40 \cdot t) \quad (68)$$
describes a very artificial example of a nonstationary signal, which is illustrated in Figure 13(a). Here two different events occur. The first one has a low frequency of f1 = 20 Hz, whereas the second one oscillates faster with f2 = 40 Hz. Both parts are modulated with a Gaussian envelope to switch each of them on and off. Finally, Figure 13(b) shows the resulting time-variant decomposition of the signal y(t). The waterfall() function from the spectral package can be used to calculate such waterfall diagrams.
Basic example of a nonstationary signal (a) with its time-dependent spectral decomposition in (b). The different line types of the envelope correspond to the first and second events.
Panels: (a) time domain; (b) waterfall diagram.
Note that this type of analysis produces a two-dimensional time-frequency map, like a shifting FFT, which selects only a window of the data before the spectral decomposition of one time step is carried out. The amplitude functions A1(t) and A2(t) play the role of arbitrary window functions which mask the time series. The interpretation is that A_i(t) contains the information in time where the finite process is located. All this relies on the assumption that the signal of interest can be modeled as
$$y(t) = \sum_i A_i(t) \cdot \cos(\omega_i t + \varphi_i). \quad (69)$$
In contrast to the shifting FFT, the introduced approach is to do an amplitude demodulation by calculating the signal’s envelope. The result is the amplitude function A(t) for a given center frequency fc, which corresponds to the frequency of the physical process. Finally, a sufficient bandwidth around fc guarantees that A(t) will be reconstructed correctly.
The first advantage of this method is that for each single frequency of interest the whole data set is taken into account. Note that the shifting FFT selects only a certain range of the data, so information about low frequencies can be lost if the selected window becomes too small. Moreover, the overall frequency resolution is limited by the window size of the shifting FFT; in the end, two signals that are very close in frequency space might not be distinguishable anymore. The second point is the necessary band pass filter at fc, which is changed according to the actual frequency. This draws attention to the uncertainty principle, which states that with increasing frequency the localization of an event becomes sharper while the exact frequency becomes more uncertain.
3.7.1. Implementation
As stated above, the key component is a band pass filter BP, which must be applied twice to calculate the two envelopes in Figure 13. The following code shows how this can be done in R with the appropriate functions provided by the spectral package.
A1 <- Re( filter.fft(y,x,fc=20, BW=10, n=10) )
A1 <- Re( envelope( A1 ) )
A2 <- Re( filter.fft(y, x, fc=40, BW=10, n=10) )
A2 <- Re( envelope( A2 ) )
The next step is to do this calculation for many frequencies. A reasonable range is to start from f=0 up to fs/2. To accelerate the code the waterfall() function uses a fast version of the envelope() function, which combines the filtering and the Hilbert transform as follows.
# Defining the frequency vector
Y.f <- seq(0, (n-1)*df, length.out = n)
# calculating the sign for the HT
sY.f <- (1 - sign(Y.f - mean(Y.f)))
# correct first bin
sY.f[1] <- 1
fast_envelope <- function(y, x, fc, BW, nf) {
  Y  <- BP(Y.f, fc, BW, nf) * fft(y)/n * sY.f
  hk <- base::Mod( fft(Y + 1i*Y, inverse = T) / sqrt(2) )
  return(hk)
}
Here the band pass filter is implemented as a weighting vector multiplied by the amplitudes. The second line in the fast_envelope() function performs the back transform. Compared to the direct calculation of A1 and A2 above, this method saves two back transforms and one real-part extraction.
Finally, the bandwidth is determined by an empirical approach like
$$BW(f_c) = \begin{cases} 4\,\delta f, & f_c < 16\,\delta f \\ f_c/4, & 16\,\delta f \le f_c \le w_d \cdot \delta f \\ w_d \cdot \delta f, & \text{else.} \end{cases} \quad (70)$$
Here δf = 1/Δx denotes the frequency step and w_d is the normalized width of the resulting band pass. The task of BW(fc) is to widen the frequency band for higher frequencies, whereas low frequencies get a very small bandwidth. This pays attention to the uncertainty principle, which states that “a narrow waveform yields a wide spectrum and a wide waveform yields a narrow spectrum and both the time waveform and frequency spectrum cannot be made arbitrarily small simultaneously” [19].
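The piecewise rule (70) translates directly into a small vectorized function; df and wd are assumed to be given, and the concrete values below are only illustrative:

```r
# sketch of the empirical bandwidth rule (70); df is the frequency step,
# wd the normalized band-pass width (both assumed given)
BW <- function(fc, df, wd) {
  ifelse(fc < 16 * df, 4 * df,
         ifelse(fc <= wd * df, fc / 4, wd * df))
}
```

For example, with df = 1 and wd = 64 the bandwidth stays at 4 for low frequencies, grows as fc/4 in the middle range, and saturates at 64 for high frequencies.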
3.7.2. Application
The two following examples illustrate how the waterfall() function can be used for intricate signals. First, the function
$$y(t) = (2t - 1) \cdot \sin(2\pi \cdot 10 \cdot t) + \begin{cases} 0, & t \le 0.5 \\ \sin(2\pi \cdot 20 \cdot t^2), & \text{else} \end{cases} \quad (71)$$
is analyzed. Figure 14 illustrates the results. The waterfall diagram in Figure 14(b) displays all the features of (71). Note that even the linear ramp and the sin(x²) term can be distinguished, whereas the normal time-invariant Fourier spectrum in Figure 14(c) hides this property completely. There, the 10 Hz carrier is the only “correctly” visible signal component.
Complex example for the temporal-dependent spectral decomposition. (a) shows the time series and (b) its decomposition. Panel (c) demonstrates that the simple 1D spectrum would produce misleading results. Here one can only identify the 10 Hz carrier, but the frequency drift is hidden behind the wide band spectrum at around 30 Hz.
Panels: (a) time domain; (b) waterfall diagram; (c) 1D spectrum.
A second example is given in Figure 15(a). The data represents the outflow temperature of a cooling system with about 300 kW of power. It consists of a heat exchanger outside the building and a secondary water loop inside, from which the heat is drawn. A sprinkler system improves the performance of the outside heat exchanger, so that outflow temperatures below the ambient temperature become possible. Figure 15(b) illustrates the ambient temperature near the external air heat exchanger. The impact of the sun’s radiation becomes visible as a signal amplitude with a very long period of about Tsun ≈ 80 min. In terms of period lengths, the high eigenfrequencies of the hydraulic system become visible at the bottom, approximately in the range of 10 min to 20 min, when the control loop reaches its limit.
Outflow temperature of a 300 kW cooling system. Panel (a) shows the performance of the outflow temperature. At about 4 p.m., the system reaches its control limits, because the ambient temperature (b) exceeds significantly the set-point (20°C) of the outflow temperature. The high eigenfrequencies of the hydraulic system become visible from about 2:30 p.m. on.
Panels: (a) outflow temperature; (b) ambient temperature; (c) spectrum of the outflow temperature.
Finally, the following code shows how to invert the frequencies to periods. Note that a resampling of the matrix must be performed, because the unequally spaced vector 1/f now maps to the corresponding spectrum. This task is done by the plot.fft() function, which also plots objects with the attribute mode = "waterfall".
# calculating the waterfall diagram
wf <- waterfall(temp,time,nf = 3)
# change to periods = 1/f
wf$fx <- 1/wf$fx
# avoid "infinite" values
c <- !is.infinite(wf$fx)
wf$fx[1] <- 2 * max( wf$fx[c], na.rm = T )
# plot graph with hour-scale on the x-axis
wf$x <- wf$x/60
plot(wf,xlim=c(11,17),ylim=c(3,80)
,zlim=c(0,0.5) )
3.8. Fragmented and Irregularly Sampled Data
Sometimes the nature of a measurement prevents equally spaced sampling. Observations of astrophysical bodies like the sun or the moon are examples of objects only visible at certain times of day. Concerning a definite property of these bodies, the corresponding time series would be fragmented, which makes a spectral analysis with a standard Fourier transform almost impossible. A nice application example can be found in “The Lick Planet Search: Detectability and Mass Thresholds” by Cumming et al. [20]. Another issue would be measurements with randomly sampled data, for instance, the occurrence of an event like the appearance of a malfunctioning product in a production line, or a measurement with a high jitter.
The time instances t_n at which a sample is taken now depend on the sample number n. This means the time interval
$$\Delta t = t_n - t_{n-1} \ne \text{const.} \quad (72)$$
is not constant anymore, so the sampling time becomes an additional data vector. This provides one advantage in case of randomly sampled data. Given a stationary process, it becomes possible to estimate signal amplitudes for frequencies which are on the order of f_max ≈ O(min(Δt)^{-1}), with min(Δt) the minimal distance between two points. In fact, this is related to an average Nyquist frequency f̄_c, where
$$f_{\max} > \frac{1}{2} \frac{N}{\max(t) - \min(t)} = \bar{f}_c, \quad (73)$$
which is defined by the time range of the time series and the total number of samples N.
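As a quick numeric check of (73), the average Nyquist frequency of an irregular time vector can be computed directly; the random sampling below is purely an illustrative assumption:

```r
# sketch: average Nyquist frequency (73) of an irregularly sampled series
set.seed(1)
t <- sort(runif(200, min = 0, max = 10))   # 200 random sampling instants
f_c_bar <- 0.5 * length(t) / (max(t) - min(t))
# about 10: half of the mean sampling rate of 20 samples per time unit
```

Even though individual gaps vary widely, the quantity f̄_c depends only on the total number of samples and the covered time range.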
Let us introduce the Lomb-Scargle periodogram as a consistent estimator of spectral properties of discrete data. The estimation in that sense is not a mathematical transform like the Fourier transform. The frequencies of interest can be selected freely, so the method analyzes a data set with respect to amplitude and phase. The associated mathematical details of this statistical approach can be found in the papers by Mathias et al. [21], Hocke and Kämpfer [22], Scargle [7], and Lomb [23], who originally developed the method.
In principle, the concept of the procedure is a least squares optimization of the parameters A_i and B_i of the model function
$$y(t) = \sum_i A_i \cdot \cos(\omega_i (t - \tau_i)) + B_i \cdot \sin(\omega_i (t - \tau_i)), \quad (74)$$
which is fitted to the data. Remember that the introductory Section 2.2 shows a slightly different approach in conjunction with the trigonometric identities (4), which also become the key component here. While the traditional calculation with a Newton algorithm takes several iteration steps until the result converges, the optimal parameters can alternatively be estimated by one single matrix inversion; see [21]. This is possible only if the orthogonality property of the trigonometric model function is taken into account. The mathematical proof can be found in “A Generalized Inverse for Matrices” by Penrose and Todd [24]. As a necessary condition to obtain an optimal result for A_i and B_i, the relation
$$\sum_{n=1}^{N} \cos(\omega_i (t_n - \tau_i)) \cdot \sin(\omega_i (t_n - \tau_i)) = 0 \quad (75)$$
must be fulfilled. Given this, it can be shown that the error of the resulting fit against the normal quadrature approach from Section 2.2 is minimized. From (75), the calculation of τ_i can be derived:
$$\tan(2\omega_i \tau_i) = \frac{\sum_n \sin(2\omega_i t_n)}{\sum_n \cos(2\omega_i t_n)}. \quad (76)$$
Next to that, the parameters R, I, C, and S are defined as follows:
$$R(\omega_i) \equiv \sum_n y_n \cos(\omega_i (t_n - \tau_i)), \quad I(\omega_i) \equiv \sum_n y_n \sin(\omega_i (t_n - \tau_i)),$$
$$C(\omega_i) \equiv \sum_n \cos^2(\omega_i (t_n - \tau_i)), \quad S(\omega_i) \equiv \sum_n \sin^2(\omega_i (t_n - \tau_i)). \quad (77)$$
Finally, the power spectral density P, the absolute amplitude A, and the phase φ
$$P(\omega_i) = \frac{1}{2\sigma^2}\left(\frac{R(\omega_i)^2}{C(\omega_i)} + \frac{I(\omega_i)^2}{S(\omega_i)}\right),$$
$$A(\omega_i) = \sqrt{\frac{2}{N}\left(\frac{R(\omega_i)^2}{C(\omega_i)} + \frac{I(\omega_i)^2}{S(\omega_i)}\right)},$$
$$\varphi(\omega_i) = -\tan^{-1}\left(\frac{I}{R}\right) + \omega_i \tau_i \quad (78)$$
can be calculated from these coefficients. Hereby σ describes the standard deviation of the discrete data vector y_n. In comparison to (14) from Section 2.2, the equations above work quite similarly. But now the modified function argument, extended by τ_i and the correction terms C(ω_i) and S(ω_i), leads to the optimal least squares fit.
In the limit of an infinite number of equally spaced samples, the Lomb-Scargle estimator converges to the Fourier transform, so it is a consistent estimator [21]. This statement is very important, because with an increasing number of samples the error decreases until the result converges to the true amplitude and phase.
To assess the significance of an estimated amplitude, the “false alarm probability” (FAP)
$$p(P < P_0) = 1 - \left(1 - e^{-P_0}\right)^M \quad (79)$$
is defined. Here the probability p that there is no other amplitude larger than P_0 is expressed in terms of the exponential function above. For small values of p, the approximation
$$p(P < P_0) \approx M \cdot e^{-P_0}, \quad p \ll 1 \quad (80)$$
can be used. The free parameter M counts the independent frequencies in the data. These are difficult to determine a priori, but it turns out that M = N/2 yields sufficient results. A brief discussion of this issue can be found in the work of Townsend [25, Chap. 6.2], Zechmeister and Kürster [26], and Cumming et al. [20].
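Both (79) and its small-p approximation (80) are one-liners in R; the function names and the example values P0 = 10, M = 50 are assumptions for illustration:

```r
# sketch of the false alarm probability (79) and its
# small-p approximation (80)
fap        <- function(P0, M) 1 - (1 - exp(-P0))^M
fap_approx <- function(P0, M) M * exp(-P0)
```

For N = 100 samples one would take M = N/2 = 50 independent frequencies; a peak with P0 = 10 then has a false alarm probability of roughly 0.2%, and the approximation (80) agrees with (79) to several digits.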
3.8.1. Implementation
The calculation of the phase is tricky; invoking the arctan2() function is mandatory. To prevent errors, the phase φ is calculated by
$$\varphi = -\arctan2\left(I_N, R_N\right) + \omega_i \tau_i, \quad (81)$$
which takes the components I and R normalized by the length √(R² + I²).
Next to that, if the suggestions made by Townsend [25] are taken into account, the above algorithm can be shortened. The problem is that the straightforward implementation runs two times over the whole data set, calculating τ prior to the rest of the parameters. A keen refactoring of the equations above minimizes the computational cost. For a certain frequency ω_i, the amplitude
$$A(\omega_i) = \sqrt{\frac{2}{N}\left(\frac{(c_\tau \cdot XC + s_\tau \cdot XS)^2}{c_\tau^2 \cdot CC + 2 c_\tau s_\tau \cdot CS + s_\tau^2 \cdot SS} + \frac{(c_\tau \cdot XS - s_\tau \cdot XC)^2}{c_\tau^2 \cdot SS - 2 c_\tau s_\tau \cdot CS + s_\tau^2 \cdot CC}\right)} \quad (82)$$
can be calculated out of the parameters
$$XC = \sum_j y_j \cdot \cos(\omega_i t_j), \quad XS = \sum_j y_j \cdot \sin(\omega_i t_j),$$
$$CC = \sum_j \cos^2(\omega_i t_j), \quad SS = \sum_j \sin^2(\omega_i t_j), \quad CS = \sum_j \cos(\omega_i t_j) \cdot \sin(\omega_i t_j),$$
$$\tau_L = \frac{1}{2\omega_i}\tan^{-1}\left(\frac{2 \cdot CS}{2 \cdot CC - 1}\right), \quad c_\tau = \cos(\omega_i \tau_L), \quad s_\tau = \sin(\omega_i \tau_L) \quad (83)$$
in one single loop. A vectorized R code example is given below. Here the data and the corresponding frequencies form the matrix omega.x, which is processed successively.
# put everything in a matrix,
# one frequency per column.
# x corresponds to the time vector.
# y_ corresponds to the mean-free
# data vector, nt to its length.
omega.x <- x %*% t(omega)
co <- cos(omega.x); si <- sin(omega.x)
# use the trigonometric identity
# sin^2(x) = 1 - cos^2(x)
co2 <- co^2; si2 <- 1 - co2
CC <- colSums(co2)
SS <- colSums(si2)
YCS <- colSums(y_ * co)
YSS <- colSums(y_ * si)
CS <- colSums(si * co)
tauL <- atan(2 * CS / (CC - SS)) / (2 * omega)
ct <- cos(omega * tauL)
st <- sin(omega * tauL)
ct2 <- ct^2; st2 <- 1 - ct2
ctstCS <- 2 * ct * st * CS
R <- ct * YCS + st * YSS
I <- ct * YSS - st * YCS
# a trick to reduce the numeric error
# in atan2()
l <- sqrt(R^2 + I^2)
A <- sqrt(2 / nt * (R^2 / (ct2 * CC + ctstCS + st2 * SS) +
                    I^2 / (ct2 * SS - ctstCS + st2 * CC)))
phi <- -omega * tauL - atan2(I / l, R / l)
The R-code illustrates how to exploit the special features of the language: instead of programming an explicit for loop, the faster vector and matrix operations of R are invoked.
3.8.2. Application
To show the power of the method, let us first assume a simple example. The function

$y(x) = \sin(2\pi \cdot 7 \cdot x)$ (84)

is going to be sampled with N = 101 equally spaced samples. Remember that the last point of the data vector equals the first point of the next period of the signal. As discussed in the application section of Section 3.1, a violation of the periodicity would lead to an error in the result of the Fourier transform. In addition, approximately 30% of the data were deleted.
Subsequently, Figure 16 shows the fragmented signal and the corresponding periodogram, which is calculated with the spec.lomb(x, y, f) function.
Simple Lomb-Scargle periodogram. The open symbols in panel (a) display the input signal. The solid line shows the reconstruction out of the spectrum. Panel (b) focuses on the periodogram. Here, the solid line corresponds to the spectral amplitude and the black points indicate the results of a DFT, as if there was no gap in the data. The dashed line represents the false alarm probability p. Note the value of 1 is located at the bottom, whereas smaller values are displayed above.
(a) Input data and signal reconstruction. (b) Periodogram.
The code above shows how Figure 16 can be created. Note that the frequency vector f contains 250 different frequencies to analyze the data. Compared to the single sided spectrum of the unfragmented data’s analytic signal representation (black dots in Figure 16(b)), the Lomb-Scargle periodogram produces large side band amplitudes to the left and right of the main amplitude. This is quite typical if the data is nonuniformly sampled or even patchy. The dashed line represents the false alarm probability, which tends to zero if the corresponding amplitude is significant.
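For readers who want to trace the computation without the package, the following base-R sketch (the helper lomb() and all variable names are ours; spec.lomb() from the spectral package is the supported route) evaluates the amplitude estimate (82) and (83) on the gapped 7 Hz example:

```r
# Minimal base-R Lomb-Scargle amplitude estimate, following (82)-(83).
# lomb() is an illustrative helper, not the spectral package implementation.
lomb <- function(x, y, f) {
  y_ <- y - mean(y)                    # mean-free data
  N  <- length(x)
  sapply(f, function(fi) {
    w  <- 2 * pi * fi
    XC <- sum(y_ * cos(w * x)); XS <- sum(y_ * sin(w * x))
    CC <- sum(cos(w * x)^2);    SS <- sum(sin(w * x)^2)
    CS <- sum(cos(w * x) * sin(w * x))
    tau <- atan2(2 * CS, CC - SS) / (2 * w)
    ct <- cos(w * tau); st <- sin(w * tau)
    sqrt(2 / N * ((ct * XC + st * XS)^2 /
                    (ct^2 * CC + 2 * ct * st * CS + st^2 * SS) +
                  (ct * XS - st * XC)^2 /
                    (ct^2 * SS - 2 * ct * st * CS + st^2 * CC)))
  })
}

x <- seq(0, 1, length.out = 101)       # one full period of sin(2*pi*7*x)
y <- sin(2 * pi * 7 * x)
keep <- -(30:60)                       # delete roughly 30% of the samples
f <- seq(1, 15, by = 0.25)
A <- lomb(x[keep], y[keep], f)
f[which.max(A)]                        # should peak near 7 Hz
```

Despite the gap, the estimate peaks at the true frequency with an amplitude close to 1, while the side bands stay visibly lower.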
The spectral package also provides a filter.lomb() function, with which the most significant amplitudes can be extracted for reconstruction. Provided the continuous sampling vector x.new, the result is a new data set in which the remaining gaps are filled.
A more complex example is given in Figure 17. Here, the function

$y(x) = \sin\left(2\pi \cdot \left(x + \mathcal{N}(0, \tfrac{\Delta x}{4})\right)\right) + \sin\left(2\pi \cdot 20 \cdot \left(x + \mathcal{N}(0, \tfrac{\Delta x}{4})\right)\right) + \mathcal{N}(0, 1)$ (85)

consists of two sine terms, in which both the sampling locations and the amplitude are subject to noise. In this simple test case, normally distributed noise is assumed, and the resulting data vectors xn and yn are N = 101 elements long. For the reconstruction, the most significant amplitude values are selected again, so that the dashed line in Figure 17(a) fills the gaps correspondingly.
Randomly sampled data with noise and gaps. The bold gray line in (a) is the discontinuous signal. The circles represent the sampling points, which are overlaid with some strong noise and jitter. The dashed line represents the reconstruction. In (b) the corresponding Lomb-Scargle periodogram is given. The black line is the amplitude spectrum and the dashed line indicates p. The black points show the amplitude maxima, which the filter.lomb() function chose for reconstruction.
(a) Noisy input data and reconstruction. (b) Periodogram.
For further reading about nonuniformly sampled data, the adaptive approach introduced by Stoica et al. [27] is recommended. This method achieves a better signal-to-noise ratio, but at a much higher computational cost.
4. Conclusion
Today, spectral analysis is one of the key methods in data processing. It helps to filter signals, to accelerate calculations, or to identify processes. While the underlying mathematics is already documented in the given references and the common textbooks, the present work focuses on the application of the basic spectral methods in one dimension. Here, the working principles of the introduced methods and their possible pitfalls are of interest. Small examples support the explanations of the mode of operation and show the "dos and don'ts." The idea of the present work is to show how the complex underlying theory of spectral methods can be applied to solve real-world tasks. A dedicated tool supporting this intent is the spectral package for the statistical environment R, which was developed contemporaneously with this paper. Further work will cover the application of spectral methods in two and more dimensions.
Appendix A. The Origin of $f_\mathrm{s}/2$
To derive the band limitation, we assume a function y(t) for which the Fourier transform Y(ω) exists. There might be a natural maximum frequency $\omega_{\max} \le \omega_B$ in the signal, so the integration boundaries of the back transform

$y(t) = \frac{1}{2\pi}\int_{-\omega_B}^{\omega_B} Y(\omega) \cdot e^{i\omega t}\,d\omega$ (A.1)

can be limited to $\omega_B$. In the next step, this function is sampled at the discrete time instances $t = n \cdot T_s$, with the sampling frequency $f_s = 1/T_s$. It follows that

$y(n T_s) = \frac{1}{2\pi}\int_{-\omega_B}^{\omega_B} Y(\omega) \cdot e^{i\omega n/f_s}\,d\omega = \frac{1}{2\pi}\int_{-2\pi f_B}^{2\pi f_B} Y(\omega) \cdot e^{i 2\pi (f/f_s) n}\,d\omega.$ (A.2)

By substituting $d\omega \leftrightarrow 2\pi\,df$, the modified (A.2) can be written as

$y(n T_s) = \int_{-f_B}^{f_B} Y(2\pi f) \cdot e^{i 2\pi (f/f_s) n}\,df.$ (A.3)

The phasor $e^{i\varphi}$ is periodic with respect to 2π, so the span $\Delta f/f_s$ must not exceed unity to reject duplication. It follows that

$\frac{\Delta f}{f_s} \le 1,$ (A.4)

so that the maximum integration boundaries

$f_B \le 0.5 \cdot f_s$ (A.5)

can be concluded in the first instance.
Due to the requirement that the frequency span must be less than the sampling frequency, the undersampling of functions becomes possible. This changes the meaning of the typical Nyquist limit into a limitation of bandwidth, which is necessary to obtain uniqueness.
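This can be demonstrated numerically with base R alone (a demo of ours, independent of the spectral package): a 9 Hz tone sampled with fs = 10 Hz is indistinguishable from a 1 Hz tone.

```r
# Aliasing demo for the band limit fs/2: a 9 Hz tone sampled at
# fs = 10 Hz appears at the alias frequency |9 - 10| = 1 Hz.
fs <- 10; Ts <- 1 / fs
t  <- seq(0, 5 - Ts, by = Ts)          # N = 50 samples, df = 0.2 Hz
y  <- sin(2 * pi * 9 * t)
A  <- Mod(fft(y)) / length(y)          # two-sided amplitude spectrum
f  <- (seq_along(y) - 1) / (length(y) * Ts)
f[which.max(A[1:25])]                  # peak in the lower half: 1 Hz, not 9 Hz
```

Only additional knowledge about the signal's true band allows such a peak to be remapped to its physical frequency.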
B. Properties of the Fourier Operator
In the following, the properties of the Fourier operator [5] are listed.
nth derivative: the function f(t) must be absolutely integrable and $\lim_{t\to\pm\infty} f(t) = 0$:

$\mathcal{F}\left\{f^{(n)}(t)\right\} = (i\omega)^n\,\mathcal{F}\left\{f(t)\right\}.$ (B.5)
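A discrete analogue of this property can be checked with base R's fft(): for a band-limited periodic signal, multiplying the spectrum by iω and transforming back yields the derivative (a sketch of ours; the bin-to-frequency mapping w is the assumption to get right).

```r
# Discrete check of (B.5): differentiation of sin(t) on [0, 2*pi)
# via multiplication by i*omega in the frequency domain.
N <- 64; L <- 2 * pi
t <- (0:(N - 1)) * L / N
f <- sin(t)
# angular frequencies on fft() bins: 0..N/2-1, Nyquist set to 0, then negative
w <- 2 * pi / L * c(0:(N/2 - 1), 0, -(N/2 - 1):-1)
df <- Re(fft(1i * w * fft(f), inverse = TRUE) / N)
max(abs(df - cos(t)))                  # agrees with cos(t) to machine precision
```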
Integration: only if $\int_{-\infty}^{\infty} f(t)\,dt = 0$, then

$\mathcal{F}\left\{\int_{-\infty}^{t} f(t')\,dt'\right\} = \frac{1}{i\omega} F(\omega).$ (B.6)
Folding (convolution): only if the integrals $\int_{-\infty}^{\infty} |f(t)|^2\,dt$ and $\int_{-\infty}^{\infty} |g(t)|^2\,dt$ exist, the two-sided folding of f(t) and g(t),

$f(t) * g(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\,d\tau,$ (B.7)

can be calculated with the help of the Fourier transform. In the frequency domain, the folding operation reduces to a multiplication:

$\mathcal{F}\left\{f(t) * g(t)\right\} = \mathcal{F}\left\{f(t)\right\} \cdot \mathcal{F}\left\{g(t)\right\}.$ (B.8)

C. Derivation of the Discrete Fourier Transform
The derivation of the discrete Fourier transform (DFT) is described below in detail. The properties of the DFT already introduced and sketched above will become clear from the discrete formulation of the Fourier transform. We will see that the spectral information becomes periodic and, in the 1D case, redundant.
First, let y be a real-valued function of time, with

$y = x(t)\colon\; t \in \mathbb{R} \longrightarrow x(t) \in \mathbb{R},$ (C.1)

which is going to be sampled later on. Figure 18 illustrates the continuous function and its sampling points. To perform the sampling, it becomes necessary to define the Dirac function as a distribution:

$\delta(t) = \begin{cases}\infty, & t = 0 \\ 0, & t \neq 0\end{cases}$ (C.2)

with

$1 = \int_{-\infty}^{\infty} \delta(t)\,dt.$ (C.3)

The Dirac pulse is described by δ(t ≠ 0) = 0 for all values of t except t = 0. At this point, the Dirac function returns an infinite value, δ(t = 0) → ∞. However, the normalization (C.3) turns the Dirac function into a distribution. The finite value of the integral enables subsequent calculations to sample a point from the continuous function y = x(t). This is called the "sifting" property and can be written as

$\int_{-\infty}^{\infty} \delta(t)\,\phi(t)\,dt = \phi(0).$ (C.4)

In conjunction with the Fourier transform, (C.4) leads to a necessary simplification for the derivation of the DFT.
A common band-limited and real-valued function. The reference sampling points are equally spaced with the distance Ts.
Next, the continuous function x(t) is sampled with the help of the sampling function $\sum_{n=-\infty}^{\infty} \delta(t - nT_s)$. The sampling series

$A(t) = x(t) \sum_{n=-\infty}^{\infty} \delta(t - nT_s) = \sum_{n=-\infty}^{\infty} x(nT_s)\,\delta(t - nT_s)$ (C.5)

evaluates x(t) at the n reference points in time by multiplying the sampling function with x(t). Because of the properties of the Dirac pulse, the series (C.5) takes infinite values, A(t) → ∞, at the sampling instants.
In the next step, the sampling series A(t) is inserted into the Fourier integral

$X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-i\omega t}\,dt.$ (C.6)

This transforms the function x(t) from the time or spatial domain into the frequency domain, expressed by the complex-valued spectrum X(ω). The capital letter X(ω) emphasizes the result, which is a bijective mapping x(t) ↔ X(ω). At this point, both the function x(t) and its spectrum X(ω) are still continuous. For real signals, x(t) is causal, real-valued, and finite in time, with

$0 < t \le T.$ (C.7)

This means the signal is defined in a certain range of positive t and stops existing after a maximum time T. With this behavior, the boundaries of integration in (C.6) change. The result is the discrete spectrum of the Fourier series

$X(m) = \frac{1}{T}\int_0^T x(t)\, e^{-i m \omega_0 t}\,dt$ (C.8)

with m ∈ ℕ. Inserting the sampling series A(t) into (C.8), discrete points now enter the discrete spectrum:

$X(m) = \frac{1}{T}\int_0^T \underbrace{\sum_{n=-\infty}^{\infty} x(nT_s)\,\delta(t - nT_s)}_{A(t)}\, e^{-i m \omega_0 t}\,dt.$ (C.9)
The distribution character (C.3) and the sifting property of the Dirac function help to rearrange the equation into the DFT form [5, 28],

$X(m) = \frac{1}{T}\sum_{n=-\infty}^{\infty} x(nT_s)\, e^{-i m \omega_0 n T_s}.$ (C.10)

For a finite data set of N equidistant data points, $\omega_0 = 2\pi/T$, and a total length of $T = N \cdot T_s$, this can be written as

$X(m) = \frac{1}{N T_s}\sum_{n=0}^{N-1} x(nT_s)\, e^{-i m \cdot 2\pi n T_s/(N T_s)} \cdot T_s = \frac{1}{N}\sum_{n=0}^{N-1} x(n)\, e^{-i m \cdot 2\pi n/N}.$ (C.11)

Here, $T_s$ denotes the inverse sampling frequency, $T_s = f_s^{-1}$, and m ∈ ℕ counts the normalized discrete frequencies at which the spectrum is evaluated.
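Equation (C.11) can be verified directly against R's built-in fft(), which computes the same sum without the 1/N normalization (a small check of ours, with an arbitrary test signal):

```r
# Direct evaluation of the DFT sum (C.11), compared against fft().
# Note the 1/N scaling used in the text; fft() omits it.
N <- 8
n <- 0:(N - 1)
x <- sin(2 * pi * n / N) + 0.5          # one sine period plus an offset
Xm <- sapply(n, function(m)
  sum(x * exp(-1i * 2 * pi * m * n / N)) / N)
max(Mod(Xm - fft(x) / N))               # zero up to machine precision
```

As expected from the text, X(0) equals the mean value of the data (here 0.5).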
This discrete form of the Fourier transform provides several properties which define the spectrum X(m).
The spectrum is periodic with respect to 2π, as indicated in Figure 19. It follows that at least m=N frequencies are necessary to describe a unique mapping of x(n)↔X(m).
X(ω) can be interpreted as a line spectrum, because it is only evaluated at the discrete frequencies $\omega = \frac{2\pi}{N \cdot T_s} m$.
The main feature arising from (C.11) is the 2π-periodic spectrum. Regarding the two-sided infinite integration boundaries of (C.6) and the following steps, it becomes clear that $X(0 \cdots N-1)$ represents a finite unique pattern of the spectrum. In the first instance, the repetition of this pattern also enables negative frequencies, which become important in multidimensional spectral analysis. As a consequence of the symmetry around ω = 0 (see Appendix A) and the periodicity, frequency components larger than half of the sampling frequency are projected into the lower half (0⋯π) of the spectrum. For a signal which is not band limited, an explicit mapping between a certain spectral component and the underlying physical process becomes almost impossible, unless additional information resolves this "undersampling" condition and enables a remapping. An illustrative example is given in Section 3.1.
DFT with 2π periodic spectrum. The dimensionless frequency axis ranges from 0 to 2π whereas its resolution ω0 is determined by the length of the total data set T=N·Ts. It follows that the Nyquist frequency can be found at π. In case that m≥N, which is equal to ω≥2π/Ts, the spectrum repeats. At m=0, the plot indicates the mean value of the analyzed data. An arbitrary higher frequency component is indicated at m=3.
D. Derivation of the Analytic Signal and the Hilbert Transform
Given a real-valued, time-dependent signal s(t), its Fourier transform can be calculated by

$S(\omega) = \int_{-\infty}^{\infty} s(t) \cdot e^{-i\omega t}\,dt.$ (D.1)

The question is: what would happen if a function a(t) mapped to a one-sided spectrum of s(t), consisting only of the positive frequencies in S(ω)? In terms of a back transform, this can be written in the form

$a(t) = \frac{2}{2\pi}\int_0^{\infty} A(\omega)\, e^{i\omega t}\,d\omega.$ (D.2)

Supplying the previous spectrum S(ω) = A(ω) to (D.2) gives

$a(t) = 2 \cdot \frac{1}{2\pi}\int_0^{\infty}\left[\int_{-\infty}^{\infty} s(t')\, e^{-i\omega t'}\,dt'\right] e^{i\omega t}\,d\omega = \frac{1}{\pi}\int_{-\infty}^{\infty} s(t') \int_0^{\infty} e^{i\omega (t - t')}\,d\omega\,dt'.$ (D.3)

With the identity $\int_0^{\infty} e^{i\omega t}\,d\omega = \pi\,\delta(t) + i/t$, the analytic signal a(t) can be determined as follows:

$a(t) = \frac{1}{\pi}\int_{-\infty}^{\infty} s(t')\left[\pi\,\delta(t - t') + \frac{i}{t - t'}\right]dt' = s(t) + i\,\underbrace{\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{s(t')}{t - t'}\,dt'}_{\mathcal{H}\{s(t)\}}.$ (D.4)
Equation (D.4) shows the definition of the analytic signal a(t). It consists of the real-valued signal s(t) and its complex extension in the form of the Hilbert transform $i\mathcal{H}\{s(t)\}$.
The convolution

$\mathcal{H}\{s(t)\} = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{s(t')}{t - t'}\,dt'$ (D.5)

can be calculated in the frequency domain by applying the Fourier transform to the Hilbert transform [10, Chap. 3]:

$\mathcal{F}\left\{\mathcal{H}\{s(t)\}\right\} = -i \cdot \operatorname{sign}(\omega) \cdot \mathcal{F}\left\{s(t)\right\}.$ (D.6)

Now the reader sees that $\mathcal{H}\{s(t)\}$ shifts the phase by 90 degrees. In that sense, the Hilbert transform represents an ideal phase shifter.
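A discrete sketch of (D.6) in base R (our own demo; the vector sgn encodes sign(ω) on the fft() bin layout, with the DC and Nyquist bins set to zero) confirms the 90-degree shift by turning a cosine into a sine:

```r
# Hilbert transform via -i * sign(omega) in the frequency domain (D.6).
N <- 64
n <- 0:(N - 1)
s <- cos(2 * pi * 4 * n / N)                       # test tone, 4 cycles
sgn <- c(0, rep(1, N/2 - 1), 0, rep(-1, N/2 - 1))  # sign(omega) per fft() bin
H <- Re(fft(-1i * sgn * fft(s), inverse = TRUE) / N)
max(abs(H - sin(2 * pi * 4 * n / N)))              # H{cos} = sin, up to rounding
```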
The analytic signal and the Hilbert transform have certain properties, which are discussed in detail in Section 3.3 and in [4, 10].
Note. The introduced identity

$\int_0^{\infty} e^{i\omega t}\,d\omega = \pi\,\delta(t) + \frac{i}{t}$ (D.7)

is not obvious to resolve. The following derivation explains it in detail. First, the exponent on the left-hand side is extended by $-\varepsilon\omega$, which is then taken to the limit ε → 0:

$\int_0^{\infty} e^{i\omega t}\,d\omega = \lim_{\varepsilon\to 0}\int_0^{\infty} e^{i\omega t - \varepsilon\omega}\,d\omega.$ (D.8)

This is now successively reduced:

$\lim_{\varepsilon\to 0}\int_0^{\infty} e^{i\omega t - \varepsilon\omega}\,d\omega = \lim_{\varepsilon\to 0}\int_0^{\infty} e^{-\omega(-it + \varepsilon)}\,d\omega = \lim_{\varepsilon\to 0}\left[-\frac{1}{-it + \varepsilon}\, e^{-\omega(-it + \varepsilon)}\right]_0^{\infty} = \lim_{\varepsilon\to 0}\left(-\frac{1}{-it + \varepsilon}\right)(0 - 1) = \lim_{\varepsilon\to 0}\frac{1}{-it + \varepsilon} = \lim_{\varepsilon\to 0}\frac{i}{t + i\varepsilon}.$ (D.9)
The last term is now expanded by the conjugate denominator t − iε. It follows that

$\lim_{\varepsilon\to 0}\frac{i}{t + i\varepsilon} = \lim_{\varepsilon\to 0}\left[\underbrace{\frac{\varepsilon}{t^2 + \varepsilon^2}}_{\to\,\pi\delta(t)} + \frac{i t}{t^2 + \varepsilon^2}\right] = \pi\,\delta(t) + \frac{i}{t}.$ (D.10)

The underbraced term expresses the Poisson kernel, which is the solution of the Laplace equation in the upper half plane [29]. The expression

$\lim_{\varepsilon\to 0}\frac{1}{\pi}\frac{\varepsilon}{t^2 + \varepsilon^2} = \delta(t)$ (D.11)

also equals the Lorentz distribution, which converges to the Dirac function in the limit ε → 0.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
References
[1] R Core Team.
[2] M. Seilmayer, spectral: Common Methods of Spectral Data Analysis, 2016, R package version 1.0, http://CRAN.R-project.org/package=spectral.
[3] S. W. Smith.
[4] L. Cohen.
[5] I. N. Bronštejn and K. A. Semendjajew.
[6] A. J. Jerri, "The Shannon sampling theorem—its various extensions and applications: a tutorial review."
[7] J. D. Scargle, "Studies in astronomical time series analysis. II—Statistical aspects of spectral analysis of unevenly spaced data."
[8] N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N.-C. Yen, C. C. Tung, and H. H. Liu, "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis."
[9] D. E. Vakman and L. A. Vaĭnshteĭn, "Amplitude, phase, frequency—fundamental concepts of oscillation theory."
[10] J. Duoandikoetxea Zuazo.
[11] C. E. Shannon, "Communication in the Presence of Noise."
[12] J. M. Whittaker.
[13] R. C. Gonzalez and R. E. Woods.
[14] T. H. Cormen, C. Stein, C. E. Leiserson, and R. L. Rivest.
[15] M. Frigo and S. G. Johnson, "The design and implementation of FFTW3."
[16] L. Debnath and F. A. Shah.
[17] R. J. Marks.
[18] N. Wiener.
[19] M. I. Skolnik.
[20] A. Cumming, G. W. Marcy, and R. P. Butler, "The Lick planet search: detectability and mass thresholds."
[21] A. Mathias, F. Grond, R. Guardans, D. Seese, M. Canela, H. H. Diebner, and G. Baiocchi, "Algorithms for spectral analysis of irregularly sampled time series."
[22] K. Hocke and N. Kämpfer, "Gap filling and noise reduction of unevenly sampled data by means of the Lomb-Scargle periodogram."
[23] N. R. Lomb, "Least-squares frequency analysis of unequally spaced data."
[24] R. Penrose and J. A. Todd, "A generalized inverse for matrices."
[25] R. H. D. Townsend, "Fast calculation of the Lomb-Scargle periodogram using graphics processing units."
[26] M. Zechmeister and M. Kürster, "The generalised Lomb-Scargle periodogram—a new formalism for the floating-mean and Keplerian periodograms."
[27] P. Stoica, J. Li, and H. He, "Spectral analysis of nonuniformly sampled data: a new approach versus the periodogram."
[28] R. Hoffmann.
[29] E. M. Stein and G. L. Weiss.