The variance of energy estimates for the product model

A product model, in which a realization x(t) of the random process {x(t)} is the product of a slowly varying random window, {w(t)}, and a stationary random process, {g(t)}, is defined. This differs slightly from the usual definition of the product model, in which the window is typically deterministic. An estimate of the energy of the random process {x(t)} (the zero order temporal moment, which is physical energy only in special cases) is defined as \hat{m}_0 = \int x^2(t) \, dt. Relationships for the mean and variance of the energy estimates, m_0, are then developed. It is shown that in many cases the uncertainty (4\pi times the product of the rms duration, D_t, and the rms bandwidth, D_f) is approximately the inverse of the normalized variance of the energy. The uncertainty is thus a quantitative measure of the expected error in the energy estimate. If a transient has a significant random component, a small uncertainty parameter implies a large error in the energy estimate. Attempts to resolve a time/frequency spectrum near the uncertainty limits of a transient with a significant random component will result in large errors in the spectral estimates.


Introduction
Temporal moments are often used to study the properties of nonstationary random events [3]. The energy, centroid, rms duration, and skewness are low order statistics describing the event. For example, these statistics can be used to characterize a mechanical shock or an earthquake. The temporal information can be used independently, or can supplement other information, such as the shock response spectrum, the Fourier energy spectrum, and frequency domain moments. The rms duration and the rms bandwidth together enter the uncertainty theorem, which places fundamental limits on our ability to resolve an event in a time/frequency framework. When a nonstationary random event is resolved into a time/frequency framework, the variance of the description is always of interest. If a product model is assumed, the variance of estimates of the moments can be estimated [3]. All these estimates depend on the variance of the lowest order statistic, the energy. The variance of energy estimates and its relationship to the uncertainty principle is explored in this paper.
The product model is used to study the normalized error of energy (the zero order temporal moment) estimates of nonstationary random events. Two measures are discussed: the statistical bandwidth and duration, and the rms bandwidth and duration. Comparisons of the two methods are then made. The statistical bandwidth and duration are fundamentally more correct, but are harder to estimate. The rms bandwidth and duration are easier to estimate, but only approximately predict the variance in energy. The uncertainty is defined as the product of the rms duration and bandwidth. It is well known that the uncertainty must be equal to or greater than a known constant [2].
Estimating the energy in a band of frequencies is the typical method of estimating a spectrum. For waveforms with a random component, the uncertainty parameter has broader implications than the time-frequency resolution problem: it limits the accuracy of temporal and frequency domain moment estimates (and of functions of these moments). A small uncertainty parameter (a small multiple of the theoretical limit) implies large errors in the moment estimates.

Statistical bandwidth and duration of a stationary process
The normalized error in spectral estimates of a stationary random process has been studied extensively [1]. Let g(t) be a single-sample time-history record from a zero mean stationary (ergodic) random process, {g(t)}. The mean square value can be estimated over a finite time interval, T, as

    \hat{\psi}_g^2 = \frac{1}{T} \int_0^T g^2(t) \, dt.    (1)

The normalized error is given approximately by

    \varepsilon \approx \frac{1}{\sqrt{B_s T}},    (2)

where the statistical bandwidth, B_s, is given by

    B_s = \left[ \int_0^\infty G_{gg}(f) \, df \right]^2 \Big/ \int_0^\infty G_{gg}^2(f) \, df,    (3)

and G_{gg}(f), f > 0, is the single sided auto (power) spectral density of {g(t)}. Details of the derivation of Eqs (2) and (3) are given in [1].
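As a numerical illustration of Eqs (2) and (3), the sketch below computes the statistical bandwidth and the normalized error for an assumed flat one-sided spectrum; the 1-2 kHz band and 0.2 s averaging time are illustrative values, not taken from the paper.

```python
import numpy as np

# Statistical bandwidth B_s and normalized error of a mean square estimate
# for an assumed flat (ideal band-pass) one-sided spectral density.
f = np.linspace(0.0, 3000.0, 30001)                    # frequency grid, Hz
G = np.where((f >= 1000.0) & (f <= 2000.0), 1.0, 0.0)  # one-sided PSD G_gg(f)
df = f[1] - f[0]

# B_s = (integral of G)^2 / (integral of G^2)
Bs = (G.sum() * df) ** 2 / ((G ** 2).sum() * df)

T = 0.2                           # averaging time, s (illustrative)
eps = 1.0 / np.sqrt(Bs * T)       # normalized random error, Eq. (2)

print(Bs, eps)                    # Bs near 1000 Hz, eps near 0.07
```

For a flat band the statistical bandwidth reduces to the band's width, so a 1 kHz band averaged over 0.2 s gives a normalized error of about 7 percent.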

The product model
This concept can be generalized when {g(t)} is multiplied by a nonnegative window, {w(t)}, that varies slowly with respect to {g(t)}.
This is generally referred to as a product model. The typical product model assumes {w(t)} is a deterministic function. In this paper {w(t)} is assumed to be nonnegative and slowly varying with respect to the duration of the autocorrelation of {g(t)}, the underlying random process. {w(t)} and {g(t)} are assumed to be independent. {g(t)} is assumed to be a normal stationary random process. With no loss of generality let the mean square of {g(t)} be unity. The integral over all time of the expected value, E[·], of the fourth power of {w(t)} is constrained to be finite.
A realization of the random process is

    x(t) = w(t) \, g(t).

Because the mean square value of {g(t)} is unity, the expected value of the energy is

    E[\hat{m}_0] = \int_{-\infty}^{\infty} E[w^2(t)] \, dt.

The square of the expected value of the energy is

    (E[\hat{m}_0])^2 = \int\!\!\int E[w^2(t)] \, E[w^2(s)] \, dt \, ds.

To provide a more convenient definition of rms bandwidth, the analytic function, z(t), is introduced. For a realization of the random process the associated analytic function is

    z(t) = x(t) + j H[x(t)],

where H[·] denotes the Hilbert transform. The Fourier transform of the analytic signal is twice the Fourier transform of x(t) for positive frequencies and zero for negative frequencies:
    Z(f) = 2 X(f), f > 0;  Z(f) = 0, f < 0,

where X(f) is the Fourier transform of x(t) and Z(f) is the Fourier transform of z(t). An unbiased estimate for a single realization of the zero order temporal moment, m_0, sometimes called the energy, is given by

    \hat{m}_0 = \int_{-\infty}^{\infty} x^2(t) \, dt.

The variance in the estimate of the zero order temporal moment is given by

    Var[\hat{m}_0] = E[\hat{m}_0^2] - (E[\hat{m}_0])^2.

For the product model this gives

    E[\hat{m}_0^2] = \int\!\!\int E[w^2(t) w^2(s)] \, E[g^2(t) g^2(s)] \, dt \, ds.

The expected value E[g^2(t) g^2(s)] is given by Bendat and Piersol [1]:
    E[g^2(t) g^2(s)] = R_{gg}^2(0) + 2 R_{gg}^2(t-s),

where R_{gg}(t-s) is the autocorrelation function of g(t) and R_{gg}(0) is the mean square value, which for our case is unity. Combining these results gives

    Var[\hat{m}_0] = 2 \int\!\!\int E[w^2(t) w^2(s)] R_{gg}^2(t-s) \, dt \, ds + \int\!\!\int \left\{ E[w^2(t) w^2(s)] - E[w^2(t)] E[w^2(s)] \right\} dt \, ds.

Because w(t) varies slowly with respect to the duration of the autocorrelation of {g(t)},

    \int\!\!\int E[w^2(t) w^2(s)] R_{gg}^2(t-s) \, dt \, ds \approx \int E[w^4(t)] \, dt \int R_{gg}^2(\tau) \, d\tau.    (18)

Let a normalized variance error be given by

    \varepsilon^2 = Var[\hat{m}_0] / (E[\hat{m}_0])^2.    (19)

Separating the terms in Eq. (19) that involve the window from those involving the underlying random process leads to the following definitions. Define the statistical bandwidth, B_s, as

    B_s = 1 \Big/ \left( 2 \int_{-\infty}^{\infty} R_{gg}^2(\tau) \, d\tau \right).    (20)

Define the statistical duration, T_s, as

    T_s = \left( \int E[w^2(t)] \, dt \right)^2 \Big/ \int E[w^4(t)] \, dt.    (21)

Then substituting Eqs (20) and (21) into Eq. (19) gives

    \varepsilon^2 \approx \frac{1}{B_s T_s} + \int\!\!\int \left\{ E[w^2(t) w^2(s)] - E[w^2(t)] E[w^2(s)] \right\} dt \, ds \Big/ \left( \int E[w^2(t)] \, dt \right)^2.    (25)

If the window is deterministic, E[w^2(t) w^2(s)] = E[w^2(t)] E[w^2(s)], the second term on the right side is zero, and the error reduces to the form in the previous paper on the subject [4]. If the standard deviation of w^2(t) is small when compared with its mean, the second term on the right side will be small. The second term will also be small if the autocorrelation of the random window, {w(t)}, dies out rapidly, i.e., if the correlation time of w^2(t) is much shorter than the duration of the window. Write

    w^2(t) = \mu_{w^2}(t) + \varepsilon_{w^2}(t),

where \mu_{w^2}(t) is the deterministic part and \varepsilon_{w^2}(t) is the random (zero mean) part of the window random process. Equation (25) becomes

    \varepsilon^2 \approx \frac{1}{B_s T_s} + \int\!\!\int E[\varepsilon_{w^2}(t) \varepsilon_{w^2}(s)] \, dt \, ds \Big/ \left( \int \mu_{w^2}(t) \, dt \right)^2.    (26)

The second term on the right is small if E[\varepsilon_{w^2}^2(t)] \ll \mu_{w^2}^2(t), or if \varepsilon_{w^2}(t) becomes uncorrelated for \tau small relative to the total duration of the transient. However, \varepsilon_{w^2}(t) cannot vary too rapidly with respect to g(t) or the conditions for Eq. (18) will be violated.
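The main result, that the normalized error of the energy estimate is approximately 1/sqrt(B_s T_s) when the window term is negligible, can be spot-checked by Monte Carlo. The sketch below uses a deterministic Hann window (so the second term vanishes) and band-limited Gaussian noise; the 1-2 kHz band and 10 kHz sample rate are assumed values echoing the appendix example.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n, reps = 10000.0, 2048, 4000
dt = 1.0 / fs
w = np.hanning(n)                    # deterministic Hann window

# Band-limit white Gaussian noise to 1-2 kHz with an FFT mask (assumed band).
freqs = np.fft.rfftfreq(n, dt)
mask = (freqs >= 1000.0) & (freqs <= 2000.0)
g = np.fft.irfft(np.fft.rfft(rng.standard_normal((reps, n))) * mask, n)

m0 = ((w * g) ** 2).sum(axis=1) * dt  # energy estimate per realization
eps_mc = m0.std() / m0.mean()         # observed normalized error

Bs = 1000.0                           # statistical bandwidth of the flat band
Ts = (w ** 2).sum() ** 2 / (w ** 4).sum() * dt  # statistical duration of w
eps_pred = 1.0 / np.sqrt(Bs * Ts)

print(eps_mc, eps_pred)               # should agree within a few percent
```

With these values B_s T_s is about 105, so both the simulated and predicted normalized errors come out near 10 percent, consistent with the appendix example.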
The autocorrelation and the autospectral density of {g(t)} form a Fourier transform pair.
The one sided spectrum is

    G_{gg}(f) = 2 S_{gg}(f), f \ge 0;  G_{gg}(f) = 0, f < 0,

where S_{gg}(f) is the two sided spectral density. Parseval's formula gives

    \int_{-\infty}^{\infty} R_{gg}^2(\tau) \, d\tau = \int_{-\infty}^{\infty} S_{gg}^2(f) \, df = \frac{1}{2} \int_0^\infty G_{gg}^2(f) \, df.

This yields an alternate form of the statistical bandwidth,

    B_s = 1 \Big/ \int_0^\infty G_{gg}^2(f) \, df.

Noting that the mean square of {g(t)} is unity, so that \int_0^\infty G_{gg}(f) \, df = 1, together with Parseval's formula gives

    B_s = \left[ \int_0^\infty G_{gg}(f) \, df \right]^2 \Big/ \int_0^\infty G_{gg}^2(f) \, df.

This is the same as the definition of statistical bandwidth (Eqs (3) and (20)) given earlier for a stationary random process.
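The Parseval relationship between the time and frequency domain forms of the statistical bandwidth can be checked numerically. The flat 1-2 kHz band below is an assumed example spectrum, for which the closed-form autocorrelation is known.

```python
import numpy as np

# Numeric spot-check of B_s = 1 / (2 * integral of R_gg^2): for an ideal
# flat band [f1, f2] with unit mean square, the autocorrelation is
# R(tau) = [f2*sinc(2*f2*tau) - f1*sinc(2*f1*tau)] / (f2 - f1),
# and the statistical bandwidth should come out as B_s = f2 - f1.
f1, f2 = 1000.0, 2000.0
tau = np.linspace(-0.5, 0.5, 200001)
dtau = tau[1] - tau[0]
R = (f2 * np.sinc(2 * f2 * tau) - f1 * np.sinc(2 * f1 * tau)) / (f2 - f1)

Bs = 1.0 / (2.0 * (R ** 2).sum() * dtau)
print(Bs)    # close to 1000 Hz
```

The small residual comes from truncating the slowly decaying tails of R(tau) at half a second.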
The statistical uncertainty, U_s, is defined as the inverse of the normalized variance,

    \frac{1}{U_s} = \varepsilon^2 = \frac{1}{B_s T_s} + \int\!\!\int E[\varepsilon_{w^2}(t) \varepsilon_{w^2}(s)] \, dt \, ds \Big/ \left( \int \mu_{w^2}(t) \, dt \right)^2.    (34)

Neglecting the second term on the right hand side, an estimate is given by

    \hat{U}_s = \hat{B}_s \hat{T}_s.    (35)

The second term in Eq. (34) is always positive; therefore Eq. (35) will be an upper bound on the statistical uncertainty.

Estimating the statistical bandwidth and duration
An estimate of the statistical bandwidth is given by

    \hat{B}_s = 1 \Big/ \left( 2 \int \hat{R}_{gg}^2(\tau) \, d\tau \right).

The autocorrelation function can be estimated from the hard clipped signal [1]. The motivation comes from the observation that a hard clipped signal has the same zero crossings as the original signal and hence should retain the frequency information.
To estimate the statistical duration an estimate of the window is needed. For a single realization, smoothing the square of x(t) (a running average with either equal or unequal weighting can be used) and taking the square root results in an estimate of the window. The smoothing reduces the effects of the underlying random process, {g(t)}, which varies rapidly with respect to the window, {w(t)}.
A simple running average has been satisfactory in most cases. An estimate of the statistical duration is then

    \hat{T}_s = \left( \int \hat{w}^2(t) \, dt \right)^2 \Big/ \int \hat{w}^4(t) \, dt.

Using Eq. (35), an estimate of the error becomes

    \hat{\varepsilon} = 1 \Big/ \sqrt{ \hat{B}_s \hat{T}_s }.
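The smoothing procedure can be sketched for a single synthetic realization. The Hann window, 1-2 kHz band, and 205-point running average mirror the appendix example; taking the nominal bandwidth of the simulated band for B_s is an assumption made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n, span = 10000.0, 2048, 205
dt = 1.0 / fs

# One realization of the product model: Hann window times band-limited noise.
freqs = np.fft.rfftfreq(n, dt)
mask = (freqs >= 1000.0) & (freqs <= 2000.0)
g = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * mask, n)
g /= np.sqrt((g ** 2).mean())              # unit mean square
x = np.hanning(n) * g

# Running average of x^2 estimates w^2(t); then form the statistical duration.
w2_hat = np.convolve(x ** 2, np.ones(span) / span, mode="same")
Ts_hat = w2_hat.sum() ** 2 / (w2_hat ** 2).sum() * dt

Bs_nom = 1000.0                            # assumed nominal bandwidth, Hz
eps_hat = 1.0 / np.sqrt(Bs_nom * Ts_hat)   # estimated normalized error

w2 = np.hanning(n) ** 2                    # true window for comparison
Ts_true = w2.sum() ** 2 / (w2 ** 2).sum() * dt
print(Ts_hat, Ts_true, eps_hat)
```

The estimated duration should land near the true Hann value of about 0.105 s, with scatter from the underlying random process, much as in the appendix example.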

Uncertainty and the rms duration and bandwidth
In the classical discussion [2] of the uncertainty theorem, the rms duration, D_t, and rms bandwidth, D_f, are defined. The rms duration is the normalized second moment of |z(t)|^2 about the centroid, t_c. An estimate is given by

    \hat{D}_t^2 = \int (t - \hat{t}_c)^2 |z(t)|^2 \, dt \Big/ \int |z(t)|^2 \, dt.

An estimate of the centroid, t_c, is defined as the time for which the first moment is zero,

    \int (t - \hat{t}_c) |z(t)|^2 \, dt = 0.

In an analogous fashion an estimate of the center frequency and the rms bandwidth can be found,

    \hat{D}_f^2 = \int (f - \hat{f}_c)^2 |Z(f)|^2 \, df \Big/ \int |Z(f)|^2 \, df,

where Z(f) is the Fourier transform of z(t) and f_c is the center frequency defined analogously to the temporal centroid. Using the analytic function to define the rms duration and bandwidth has an advantage when the waveforms are oscillatory. The spectrum of the analytic waveform is zero for the negative frequencies, which gives a more reasonable estimate of the center frequency and bandwidth. The uncertainty theorem states that the product of the rms duration and the rms bandwidth must be greater than a constant,

    D_t D_f \ge \frac{1}{4\pi},

where D_t is in seconds and D_f is in Hertz. Call 4\pi times this product the uncertainty parameter, U, or simply the uncertainty; an estimate is given by

    \hat{U} = 4\pi \hat{D}_t \hat{D}_f.

A normalized error can now be defined as

    \hat{e}_r = 1 \Big/ \sqrt{\hat{U}}.

An advantage of using the rms duration and bandwidth over the statistical duration and bandwidth is that no knowledge of the window or the underlying stationary process is required. The product model is not required for their estimation.
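These definitions can be sketched numerically. The block below forms the analytic signal by zeroing the negative frequencies, then computes D_t, D_f, and the uncertainty for an assumed Gaussian pulse on a 1.5 kHz carrier; a Gaussian envelope should sit essentially at the bound U = 1.

```python
import numpy as np

fs, n = 10000.0, 4096
t = np.arange(n) / fs
# Assumed test waveform: Gaussian envelope (sigma = 20 ms) on a 1.5 kHz tone.
x = np.exp(-0.5 * ((t - 0.2) / 0.02) ** 2) * np.cos(2 * np.pi * 1500.0 * t)

# Analytic signal: double the positive frequencies, zero the negative ones.
X = np.fft.fft(x)
f = np.fft.fftfreq(n, 1.0 / fs)
Z = np.where(f > 0, 2.0, 0.0) * X
Z[0] = X[0]                                  # keep the dc term unchanged
z = np.fft.ifft(Z)

p = np.abs(z) ** 2
p /= p.sum()
tc = (t * p).sum()                           # temporal centroid
Dt = np.sqrt(((t - tc) ** 2 * p).sum())      # rms duration

q = np.abs(Z) ** 2
q /= q.sum()
fc = (f * q).sum()                           # center frequency
Df = np.sqrt(((f - fc) ** 2 * q).sum())      # rms bandwidth

U = 4 * np.pi * Dt * Df                      # uncertainty parameter
print(Dt, Df, U)                             # U should be close to 1
```

A Gaussian envelope attains the lower bound of the uncertainty theorem, so U returned here is very nearly unity; any other envelope shape gives U > 1.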

Relationship between the normalized error using the statistical duration/bandwidth and the rms duration/bandwidth
Consider the special case where the window and the spectrum of the underlying random process have a Gaussian shape. Consider a window with a unity variance,

    w(t) = e^{-t^2/2}.

The rms duration is D_t = 1/\sqrt{2}. The statistical duration is T_s = \sqrt{2\pi}. Let the spectral density of the underlying random process {g(t)} also have a unity variance; then D_f = 1 and B_s = 2\sqrt{\pi}, so that

    B_s T_s = 4\pi D_t D_f = 2\sqrt{2}\,\pi.

For the Gaussian case, then, the inverse of the uncertainty factor is exactly the normalized variance of the energy estimate.
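The Gaussian special case can be verified by direct numerical integration. The unit-variance window and spectral shape are those assumed above; centering the spectrum at zero rather than at a carrier frequency is a convenience that does not affect the second moments.

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 20001)
dt = t[1] - t[0]
w2 = np.exp(-t ** 2)                              # w^2(t) for w(t) = exp(-t^2/2)

Ts = (w2.sum() * dt) ** 2 / ((w2 ** 2).sum() * dt)   # -> sqrt(2*pi)
Dt = np.sqrt((t ** 2 * w2).sum() / w2.sum())         # -> 1/sqrt(2)

f = t                                              # reuse the grid for frequency
G = np.exp(-f ** 2 / 2)                            # unit-variance Gaussian spectrum
Bs = (G.sum() * dt) ** 2 / ((G ** 2).sum() * dt)     # -> 2*sqrt(pi)
Df = np.sqrt((f ** 2 * G).sum() / G.sum())           # -> 1

print(Bs * Ts, 4 * np.pi * Dt * Df)                # both near 2*sqrt(2)*pi
```

Both products evaluate to 2*sqrt(2)*pi, about 8.886, confirming that the statistical and rms descriptions coincide exactly for Gaussian shapes.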

Other windows
Consider the cases where the second term on the right side of Eq. (26) can be neglected. Just as a duality exists between the rms duration and the rms bandwidth, a duality exists between the statistical duration and the statistical bandwidth. The relationship needs only to be studied in one domain (time or frequency). The relationships for several other common windows are listed in Table 1.
In summary, any time T_s B_s \approx 4\pi D_t D_f, the inverse of the uncertainty parameter will approximate the normalized squared error (the normalized variance) of the energy estimates, as it does exactly in the Gaussian case.
For a rectangular window of duration T with a rectangular spectrum of bandwidth B,

    T_s = T, \quad B_s = B, \quad D_t = T/\sqrt{12}, \quad D_f = B/\sqrt{12},

so that

    4\pi D_t D_f = \frac{\pi}{3} B T \approx 1.047\, B_s T_s.

Thus for a rectangular window with a rectangular spectrum the squared error in the energy estimate is within a few percent of the inverse of the uncertainty. This is also true for several other windows. Thus, the uncertainty parameter and the variance of the energy estimates are related in an important manner.
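The rectangular-case arithmetic can be checked directly; T and B below are arbitrary illustrative values.

```python
import numpy as np

# Rectangular window of length T and flat spectrum of width B:
# T_s = T, B_s = B, D_t = T/sqrt(12), D_f = B/sqrt(12), so that
# 4*pi*D_t*D_f = (pi/3)*B*T, about 4.7 percent above B_s*T_s = B*T.
T, B = 0.2, 1000.0
BsTs = B * T
U = 4.0 * np.pi * (T / np.sqrt(12.0)) * (B / np.sqrt(12.0))
print(U / BsTs)    # pi/3, about 1.047
```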
The uncertainty is a quantitative measure of the error in the energy estimates for many random realizations involving the product model. The measure also provides guidance as to the resolution we can expect when performing a time/frequency characterization of a deterministic transient. If we are close to the uncertainty criteria (U and e r near unity), our estimates can be in error because either time and/or frequency will be dilated. If the waveform is also a realization of a random process, we will be in error because of poor energy estimates. As e r gets small (U larger), we are on safer ground both from the viewpoint of distortion of the waveform and the viewpoint of errors in the energy estimate.

Ensemble averaging
If multiple realizations of the process are available, ensemble averaging can reduce the errors. The errors will be reduced roughly by the factor 1/ √ n, where n is the number of realizations available [1,Section 8.5.3].
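A quick synthetic demonstration of the 1/sqrt(n) reduction; the 10 percent single-record error and Gaussian scatter are assumed illustrative values, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
eps1, n, reps = 0.10, 16, 20000        # single-record error, ensemble size
# Draw reps batches of n independent energy estimates and average each batch.
m0 = rng.normal(1.0, eps1, (reps, n)).mean(axis=1)
eps_n = m0.std() / m0.mean()           # error after ensemble averaging
print(eps_n, eps1 / np.sqrt(n))        # both near 0.025
```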

Conclusions
It is desirable to represent transient waveforms in a time/frequency framework. An optimum way to perform this decomposition, keeping in mind acceptable error bounds, is still not clear. It is clear that the uncertainty theorem and the estimation errors in the moments (energy, centroid, center frequency, duration, bandwidth, etc.) are closely related, and both are tied to a bandwidth-time (BT) product. For example, we can talk about the energy in a short interval of time, but it is meaningful only for a broadband process. We can talk about the energy in a narrow band of frequencies, but it is meaningful only for long duration waveforms. If we have a band-limited, time-limited waveform, we are limited in the time/frequency resolution we can achieve. Moreover, if the waveform has a random component, the estimation errors will become unacceptable as we approach the theoretical BT limits.

Acknowledgment
Thanks are given to Tom Paez for his review and extremely useful comments during the many revisions.
Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.

Appendix A: Example
For this example a deterministic Hanning window will be used. This allows the estimates of the window to be compared with the actual window. The window was 2048 samples long. A sample rate of 10 000 samples/second was chosen. The random signal was white noise band-limited from 1 to 2 kHz. The time history is plotted as the solid line in Fig. A1. The magnitude of the analytic function is shown as the dotted line. The Hanning window is shown as the dash-dot line. The smoothed estimate of the window is shown as the dashed line. The energy is estimated at 0.15 g^2 s (E = 0.146). The uncertainty is over 100 times the requirement of the uncertainty theorem (U = 114).

To estimate the statistical duration, the waveform was smoothed with a running average of 205 points (S = 205). The statistical duration is estimated as 0.108 s (Dse = 0.108). The true statistical duration calculated from the window is 0.105 s (Ds = 0.105). The statistical bandwidth is estimated as 1150 Hz (Bs = 1150). The normalized rms duration is estimated as 0.104 s (2\sqrt{\pi} D_t = 0.104). The normalized rms bandwidth is estimated as 1090 Hz (2\sqrt{\pi} D_f = 1090); the factor 2\sqrt{\pi} makes the rms values directly comparable to the statistical values, the two being equal for a Gaussian shape. The estimated error in the energy from the statistical duration and bandwidth is 9.0% (es = 0.0899). The estimated error in the energy from the rms duration and bandwidth is 9.4% (er = 0.094). As can be seen, the two estimates of error are in close agreement. This implies that the energy estimate has an rms error of about 9% for this transient with a total duration of 200 ms, a bandwidth of about 1 kHz, and an uncertainty 114 times the minimum possible. If we attempted to resolve time better than 0.2 s or divide the signal into bandwidths of less than 1 kHz, reducing the uncertainty, the error in the energy estimate would increase accordingly. Appendix B lists the code used to generate this example.