Vertical resolution is an essential performance indicator of a digital storage oscilloscope (DSO), and the key to improving it is to increase the number of digitizing bits and lower the noise. Averaging is a typical method to improve the signal-to-noise ratio (SNR) and the effective number of bits (ENOB). Existing averaging algorithms are restricted by the repetitiveness of the signal and influenced by gross errors in quantization, so their effect on suppressing noise and improving resolution is limited. This paper proposes an information entropy-based data fusion and average-based decimation filtering algorithm, built on the averaging algorithm and combined with the relevant theory of information entropy, to improve the resolution of the oscilloscope. For a signal acquired once under oversampling, gross errors in quantization are eliminated by using the maximum entropy of the sample data, the remaining effective samples are fused, and noise is further filtered by average-based decimation. No subjective assumptions or constraints are imposed on the signal under test in the whole process, and the analog bandwidth of the oscilloscope at the actual sampling rate is unaffected.
1. Introduction
Bandwidth, sampling rate, and storage depth are the three core indicators used to evaluate the performance of a digital storage oscilloscope (DSO). There is another indicator of great significance that is often ignored: vertical resolution [1] (hereinafter referred to as resolution). Higher resolution means a more refined waveform display and more precise signal measurement. The resolution of a DSO depends on the number of digitizing bits of its analog-to-digital converter (ADC) and on the oscilloscope's own noise and distortion level. Common oscilloscopes generally adopt an 8-bit or 12-bit ADC, so the key to achieving higher resolution is to lower noise for a given number of ADC bits.
One method to reduce the effect of ADC-related system noise is to combine multiple ADCs in a parallel array. In such a system, the same analog signal is applied to all M ADCs and the output is digitally summed. The maximum signal to noise ratio (SNR) increases because the signal is correlated from channel to channel while the noise is not. In a parallel array, the SNR, therefore, increases by a factor of M assuming the noise is uncorrelated from channel to channel [2, 3]. Furthermore, decorrelation techniques are proposed in [4] to reduce the effect of correlated sampling noise introduced by clock jitter in all parallel ADCs.
In [5, 6], another method, named the stacked ADC, is proposed to enhance the resolution of an ADC system. It uses multiple ADCs, each connected to the same radar IF through amplifier chains with different gain factors. After digital amplitude and phase equalization, the obtained SNR is much greater than that of an individual ADC. Both of the aforementioned methods consume more ADCs and result in higher cost.
Another method to enhance SNR is averaging, which can increase the resolution of a measurement without resorting to the cost and complexity of multiple expensive ADCs [7–10]. There are two common averaging modes for a DSO: successive capture averaging and successive sample averaging. The former averages, point by point, the corresponding sampling points of multiple waveforms acquired repeatedly; the latter averages multiple adjacent sampling points within a single acquired waveform. Both have limitations in suppressing noise and improving resolution. Since successive capture averaging is based on repetitive acquisition of signals, it is limited by the repetitiveness of the signal under test. Successive sample averaging is based on a single acquisition of the signal under test, but it sacrifices analog bandwidth while filtering noise. In addition, gross errors in quantization, such as irrelevant noise and quantization error caused by data mismatch, are always present in oscilloscope sample data owing to environmental disturbance, clock jitter, transmission delay, and so forth. If these are not handled first, directly averaging sample data containing gross errors yields results that deviate drastically from the actual signal.
Data fusion (also called information fusion) is a widely applied sample data processing method. In previous studies, a data fusion algorithm based on batch estimation theory was proposed in [11] and further improved in [12]. However, both algorithms make the subjective assumption that the sample data follow a normal distribution. The concept of entropy originates from physics, where it describes the disordered state of a thermodynamic system. Entropy reflects the statistical properties of a system and has since been introduced into numerous research fields. In 1948, the American mathematician Shannon carried the entropy of thermodynamics into information theory and proposed information entropy [13] as a measure of the uncertainty of information. Information entropy provides a new approach to data fusion [14].
To improve the resolution of the oscilloscope effectively, this paper proposes an information entropy-based data fusion and average-based decimation filtering algorithm, built on the averaging algorithm and combined with the relevant theory of entropy. Additional horizontal sampling information is used to achieve higher vertical resolution under the premise of oversampling. Firstly, compared with traditional averaging algorithms, this algorithm operates on a signal sample acquired once, so it is subject to no restriction on the repetitiveness of the signal. Secondly, the resolution of the oscilloscope is improved by eliminating gross errors in quantization, caused by noise and quantization error, using the maximum entropy of the sample data, fusing the remaining effective samples, and then filtering noise further via average-based decimation. No subjective assumptions or constraints are imposed on the signal under test in the whole process, and the analog bandwidth of the oscilloscope at the actual sampling rate is unaffected.
2. Common Averaging Theory

2.1. Successive Capture Averaging
Successive capture averaging is a basic denoising signal processing technology in the acquisition systems of most DSOs, and it depends on repetitive triggering and acquisition of repetitive signals. It averages the corresponding sampling points of the repeatedly acquired waveforms one by one to form a single averaged capture, that is, a single output waveform.
Figure 1 shows the schematic diagram of averaging of N successive acquisitions.
Averaging of N successive acquisitions.
The direct calculation method of successive capture averaging is to sum the corresponding sampling points in all acquisitions and then divide them by the number of acquisitions. The expression is given by
(1)AN(n+i)=(1/N)∑j=1Nxj(n+i),
where i=0,±1,±2,…, AN(n+i) is the averaging result, N is the number of acquisitions to average, and xj(n+i) represents the corresponding sampling point at moment n+i in the jth acquisition. Obviously, the average cannot be obtained in this algorithm until all N acquisitions are completed. If N is too large, the throughput rate of the system is affected remarkably: for users, the delay caused by averaging is unacceptable, and for the oscilloscope, the huge amount of sample data rapidly exhausts the memory capacity.
Consequently, an improved exponential averaging algorithm is widely applied in successive capture averaging. The exponential averaging algorithm, shown in (2), creates a new averaging result Aj(n+i) from a new sampling point xj(n+i) at moment n+i and the previous averaging result Aj-1(n+i):
(2)Aj(n+i)=[xj(n+i)+(p-1)Aj-1(n+i)]/p=xj(n+i)/p+[(p-1)/p]Aj-1(n+i),
where i=0,±1,±2,…. In (2), j is the current acquisition number, Aj(n+i) is the new averaging result, Aj-1(n+i) is the previous averaging result, xj(n+i) is the new sampling point, and p is the weighting coefficient. Assuming that N is the total number of acquisitions to average, p=j if j<N; otherwise, p=N. Exponential averaging is clearly more efficient in calculating and storing the acquired and averaged waveforms: it updates the averaged result immediately after each acquisition, obtains the same final waveform as the direct averaging algorithm, and lowers the memory capacity requirement significantly.
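As an illustrative sketch (not the oscilloscope's actual firmware), the exponential averaging update in (2) can be written as follows; the function name and data layout are assumptions for the example:

```python
def exponential_average(acquisitions, N):
    """Running exponential average over successive acquisitions, per Eq. (2).

    acquisitions: iterable of equal-length waveforms (one per trigger event).
    N: total number of acquisitions to average; the weighting coefficient p
       grows with the acquisition index j while j < N, then saturates at N.
    """
    avg = None
    for j, x in enumerate(acquisitions, start=1):
        if avg is None:
            avg = list(x)  # the first acquisition seeds the average
            continue
        p = j if j < N else N
        # A_j = x_j / p + ((p - 1) / p) * A_{j-1}
        avg = [xi / p + (p - 1) / p * ai for xi, ai in zip(x, avg)]
    return avg
```

While j ≤ N this reproduces the direct average of (1) exactly; afterwards, older acquisitions decay with weight (N−1)/N per update, which is what keeps the memory requirement constant.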
No matter which algorithm is adopted, successive capture averaging can improve the vertical resolution of signal. This improvement is measured in bits, which is a function of N (number of acquisitions to average) [1]:
(3)RC=0.5log2N.
In (3), RC is the resolution improvement in bits. Since the averaging algorithm is implemented with fixed-point mathematics in many oscilloscopes, and the maximum number of acquisitions to average generally does not exceed 8192 once real-time performance and memory capacity are taken into account, the total resolution is limited to at most 14.5 bits. In practice, fixed-point mathematics, noise, and dithering error lower the maximum resolution to a certain extent.
Successive capture averaging can improve SNR, eliminate noise unrelated to triggering, and improve vertical resolution. Meanwhile, under ideal circumstances it does not limit the waveform bandwidth, an obvious advantage over other signal processing technologies. However, because it relies on numerous triggers and repetitive quantization of the signal, successive capture averaging is limited by the repetitiveness of the signal under test and is consequently applicable only to observing repetitive signals.
2.2. Successive Sample Averaging
Successive sample averaging, also known as boxcar filtering or moving average filtering, is another averaging algorithm widely applied in DSO acquisition systems. Since it is based on a single acquisition of the signal under test, successive sample averaging is not influenced by the repetitiveness of the signal itself. In this averaging process, each output sampling point represents the average value of N successive input sampling points [15], as follows:
(4)AN(n+i)=(1/N)∑j=0N-1xN(n+i+j),
where i=0,±1,±2,… and N is the number of sampling points to average.
Figure 2 shows the averaging principle of 3 successive sampling points.
Averaging principle of 3 successive sampling points.
For successive sample averaging, the sampling rate before and after averaging is equal. It eliminates noise and improves the vertical resolution of signal by reducing the bandwidth of DSO. This improvement is measured in bits, which is a function of N (number of sampling points to average) [1]:
(5)RS=0.5log2N,
where RS is the improved resolution. In essence, successive sample averaging is a low-pass filter whose 3 dB bandwidth is derived in [9]:
(6)BS=0.44SR/N,
where BS is the bandwidth and SR is the sampling rate. This filter has an extremely sharp cut-off frequency and is well matched to signals whose period is an integral multiple of N/SR. Noise suppression is almost directly proportional to the square root of the number of sampling points averaged: for instance, averaging 25 sampling points reduces the magnitude of high-frequency noise to 1/5 of its original value. In DSOs, successive sample averaging is often used to implement a variable-bandwidth function.
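The √N noise-reduction behavior quoted above can be checked empirically with a short sketch; the seed and sample count below are arbitrary choices for illustration:

```python
import random
import statistics

def boxcar(x, N):
    """Moving (boxcar) average, Eq. (4): each output point is the mean of
    N successive input points (output length is len(x) - N + 1)."""
    return [sum(x[i:i + N]) / N for i in range(len(x) - N + 1)]

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(100_000)]
filtered = boxcar(noise, 25)
# For white noise, a 25-point average should shrink the standard
# deviation to roughly 1/5 of its input value (here: about 0.2).
print(statistics.pstdev(filtered))
```

Note that the output remains at the input sampling rate; only the bandwidth is reduced, which is exactly the drawback discussed next.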
It can be easily seen from (6) that even though successive sample averaging is based on a single acquisition of the signal under test, it lowers the analog bandwidth at the actual sampling rate while filtering noise, which makes it impractical.
3. Resolution Improving Algorithm Based on Information Entropy and Average-Based Decimation

3.1. Data Fusion Based on Information Entropy
The information entropy-based data fusion researched in this paper aims to eliminate gross error in quantization and then obtain precise measuring results under the condition of oversampling. Firstly, the acquisition system of oscilloscope utilizes the maximum entropy method (MEM) to estimate the probability distribution of discrete sample data acquired under oversampling and then calculates the measuring uncertainty of sample according to its probability distribution to determine a confidence interval. Then the acquisition system discriminates gross error based on confidence interval and finally determines weight coefficient of fusion according to information entropy to achieve data fusion and obtain a precise measured value of the signal under test without any subjective assumptions and restrictions being added.
3.1.1. Distribution Estimation of Maximum Entropy
Entropy is an essential concept in thermodynamics. For isolated systems, entropy is growing constantly. The maximum entropy can determine the steady state of system. Similar conclusions can be discovered in information theory. In 1957, Jaynes proposed the maximum entropy theory proceeding from the maximum information entropy; that is, when we are deducing an unknown distribution pattern with only part of known information, we should select the probability distribution with the maximum entropy and in conformity with restriction conditions, and any other selection may mean the addition of other restrictions or changes to the original assumption conditions [16]. In other words, for circumstances with only the sample under test but lack of sufficient reasons to select some analysis distribution function, we can determine the form of the least tendentious measurand distribution through the maximum entropy [14].
Assume the acquisition system of the DSO oversamples at a high sampling rate N times the actual sampling rate, obtaining N discrete samples x1,x2,…xN at the high rate. After eliminating repetitive sample data, the sample sequence is x1,x2,…xN′, with corresponding probabilities of occurrence p(x1),p(x2),…p(xN′), and the probability distribution p(xi) of the sample can be estimated through the maximum discrete entropy. Based on the information entropy defined by Shannon [13], the information entropy of the discrete random variable x is as follows [17]:
(7)H(x)=-k∑i=1N′p(xi)lnp(xi),
where p(xi) denotes the probability distribution to be estimated of sample xi and meets the restriction conditions below:
(8)s.t.∑i=1N′p(xi)=1,p(xi)≥0,(i=1,2,…,N′),∑i=1N′p(xi)gj(xi)=E(gj),(j=1,2,…,M).
In (8), gj(xi)(j=1,2,…,M) is the statistical moment function with order j and E(gj) is the desired value of gj(xi). Lagrange multiplier methods can be used to solve this problem. Since k is a positive constant, take k=1 for convenience to constitute the Lagrangian function L(x,λ), as is shown in
(9)L(x,λ)=-∑i=1N′p(xi)lnp(xi)+(λ0+1)[∑i=1N′p(xi)-1]+∑j=1Mλj[∑i=1N′p(xi)gj(xi)-E(gj)],
where λj(j=0,1,…,M) is the Lagrangian coefficient. Partial derivative should be obtained for p(xi) and λj, respectively, and the equation set is
(10)∂L(xi,λj)/∂p(xi)=0,i=1,2,…N′,∂L(xi,λj)/∂λj=0,j=0,1,2,…M,
and solving this system yields the probability distribution function
(11)p(xi)=exp[λ0+∑j=1Mλjgj(xi)],
and the corresponding maximum entropy can be given by
(12)H(x)max=-λ0-∑j=1MλjE(gj).
For given sample data, the expectation and variance of the sample sequence can be chosen as the moment functions. The maximum entropy distribution estimation therefore estimates the probability distribution from the entropy of the discrete random variable, adjusting the probability distribution model p(xi) to achieve the maximum entropy H(x)max while preserving the statistical properties of the sample [16].
3.1.2. Gross Error Discrimination
Traditional criteria for gross error discrimination (e.g., the 3σ criterion, Grubbs criterion, and Dixon criterion) are based on mathematical statistics, and the probability distribution of the sample data must be known before they can be applied. However, the probability distribution is rarely known in advance in actual measurement, and when only a few sets of sample data are obtained, the statistical assumptions are not satisfied, so the precision of gross error handling suffers [18]. A new gross error discrimination algorithm is proposed in this paper: it calculates the measurement uncertainty of the sample sequence from the maximum-discrete-entropy probability distribution and then determines a confidence interval based on that uncertainty to discriminate gross errors.
In [14], for a continuous random variable x with expectation x̄ and probability density function f(x) estimated by MEM, the measurement uncertainty is expressed by
(13)u=√[∫ab(x-x̄)2f(x)dx].
The measurement uncertainty of a discrete random variable x can be deduced analogously. If x̄ is the expectation and p(xi) is the probability distribution estimated by MEM, then the uncertainty of the sample, calculated after eliminating repetitive sample data, is given by
(14)u=√[∑i=1N′(xi-x̄)2p(xi)].
The confidence interval is [x̄-u,x̄+u]; whether the sample contains gross errors is then judged against this interval. Data outside the confidence interval are considered gross errors and are eliminated from the sample sequence, and a new sample sequence is constituted to carry out data fusion.
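A sketch of this discrimination step, assuming the sample probabilities p(xi) have already been estimated (taken uniform below purely for illustration; function name and data are hypothetical):

```python
import math

def discriminate(samples, probs):
    """Flag gross errors per Eq. (14): compute the uncertainty
    u = sqrt(sum((x_i - mean)^2 * p_i)) and reject samples outside
    the confidence interval [mean - u, mean + u]."""
    mean = sum(x * p for x, p in zip(samples, probs))
    u = math.sqrt(sum((x - mean) ** 2 * p for x, p in zip(samples, probs)))
    lo, hi = mean - u, mean + u
    kept = [x for x in samples if lo <= x <= hi]
    rejected = [x for x in samples if not (lo <= x <= hi)]
    return kept, rejected, (lo, hi)

# Hypothetical data: three consistent samples and one gross outlier.
kept, rejected, ci = discriminate([1.0, 2.0, 3.0, 100.0], [0.25] * 4)
```

The outlier inflates u, but it still lands outside the resulting interval and is rejected, while the consistent samples are kept.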
3.1.3. Effective Data Fusion
For DSO, sampling aims at obtaining the information related to the signal under test. As the measurement of information quantity, information entropy is used to determine the level of uncertainty, and therefore it can be used to fuse the sample data acquired. To reduce the uncertainty of fusion results, small weight coefficient should be distributed to the sample with large uncertainty, while large weight coefficient should be distributed to the sample with small uncertainty.
As mentioned above, the acquisition system of the DSO oversamples at a high sampling rate N times the actual sampling rate; N′ discrete samples x1,x2,…xN′ remain after eliminating repetitive sample data, and N′′ samples x1,x2,…xN′′ remain after eliminating gross errors. The information quantity provided by each sample xi is denoted by its self-information quantity I(xi). Information entropy is the average uncertainty of the samples, so the ratio of self-information quantity to information entropy can be used to measure the uncertainty of each sample relative to all samples. The weight coefficient of fusion is inversely proportional to the self-information quantity. The detailed steps are as follows.
Utilize MEM to estimate the maximum entropy distribution and the maximum entropy of the sample data, and then compute the ratio of each sample's self-information quantity to the maximum entropy, defined by
(15)ωi=I(xi)/H(x)max,i=1,2,…N′′,
where
(16)I(xi)=-lnp(xi).
Define the weight coefficient of fusion with normalization processing:
(17)qi=(1/ωi)/∑i=1N′′(1/ωi),i=1,2,…N′′.
Fuse data:
(18)xf=∑i=1N′′qixi,
where the number of data used in data fusion, that is, N′′, equals the number of samples remained after eliminating repetitive data and gross errors.
Replace the gross errors with the data fusion result xf to constitute a new sample sequence: if k gross errors were eliminated from the N data, append k copies of xf to the sample sequence, thus obtaining N new samples x1′,x2′,…xN′ free of gross errors at the high sampling rate.
3.2. Average-Based Decimation Filtering
The maximum sampling rate of the ADC in a DSO is generally much higher than the sampling rate actually required by the spectrum of the measured signal. Oversampling therefore allows digital filtering that improves the effective resolution of displayed waveforms and reduces undesired noise. Under the premise of oversampling, the vertical resolution at the actual sampling rate can thus be increased by adopting an average-based decimation filtering algorithm. Specifically, the DSO oversamples at a high rate N times the actual sampling rate corresponding to the user-selected time base, applies the information entropy-based data fusion algorithm of the previous section to the N high-rate sampling points to exclude gross errors and fuse the effective samples, averages the new sample sequence, and finally decimates to sampling points at the actual sampling rate. Average-based decimation under N-times oversampling is given by
(19)AN(n+i)=(1/N)∑j=0N-1xN(n+iN+j),
where i=0,±1,±2,….
Figure 3 shows the average-based decimation principle when N=3.
Average-based decimation principle of 3 times oversampling.
The resolution improved by average-based decimation filtering is measured in bits, which is the function of N (the number of samples to average or oversampling factor):
(20)RH=0.5log2N=0.5log2(SM/SR).
In (20), RH is the improved resolution, SM is the high sampling rate, and SR is the actual sampling rate. The −3 dB bandwidth after average is
(21)BH=0.44SR,
where BH denotes the bandwidth and SR represents the actual sampling rate. It can be seen that improved vertical resolution and analog bandwidth vary with the maximum sampling rate and actual sampling rate of oscilloscopes.
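Equation (19) amounts to non-overlapping block averaging; a minimal sketch (the function name is an assumption for illustration):

```python
def average_decimate(x, N):
    """Average-based decimation, Eq. (19): each output point at the
    actual sampling rate is the mean of one non-overlapping block of
    N points taken at the N-times-higher oversampling rate."""
    return [sum(x[i * N:(i + 1) * N]) / N for i in range(len(x) // N)]
```

For example, average_decimate(samples, 10) turns 1 GSa/s data into 100 MSa/s data while averaging 10 points per output, matching the N = 10 row of Table 1.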
Table 1 lists the ideal values of improved resolutions and analog bandwidth of the oscilloscope with maximum sampling rate of 1 GSa/s and 8-bit ADC adopting average-based decimation algorithm under oversampling.
Ideal values of improved vertical resolution and bandwidth based on average-based decimation under oversampling.
Sampling rate    Oversampling factor    Total resolution    −3 dB bandwidth
1 GSa/s          1                      8.0 bits            440 MHz
500 MSa/s        2                      8.5 bits            220 MHz
250 MSa/s        4                      9.0 bits            110 MHz
100 MSa/s        10                     9.7 bits            44 MHz
50 MSa/s         20                     10.2 bits           22 MHz
25 MSa/s         40                     10.7 bits           11 MHz
10 MSa/s         100                    11.3 bits           4.4 MHz
1 MSa/s          1000                   13.0 bits           440 kHz
100 kSa/s        10000                  14.6 bits           44 kHz
Values in Columns 3 and 4 of Table 1 are ideal, and the resolution improvement grows with log2N: each fourfold increase in N improves the resolution by 1 bit. In reality, the maximum N is on the order of 10,000, limited by real-time performance and memory capacity. Moreover, fixed-point mathematics and noise lower the highest achievable resolution to some extent, so one should not expect a resolution improvement of more than 4 to 6 bits. Note also that the improvement depends on the signal's dynamics: for signals whose conversion results keep toggling between different codes, resolution can always be improved, whereas for steady-state signals the improvement is obvious only when the noise amplitude exceeds 1 or 2 ADC LSBs. Fortunately, real-world signals are usually of this kind.
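The entries of Table 1 follow directly from (20) and (21); a small helper (a hypothetical convenience function, not from the paper) regenerates any row:

```python
import math

def ideal_row(actual_rate_hz, max_rate_hz=1e9, adc_bits=8):
    """Ideal figures behind Table 1 for an 8-bit, 1 GSa/s oscilloscope:
    oversampling factor N = SM/SR, total resolution = bits + 0.5*log2(N)
    per Eq. (20), and -3 dB bandwidth = 0.44*SR per Eq. (21)."""
    N = max_rate_hz / actual_rate_hz
    total_bits = adc_bits + 0.5 * math.log2(N)
    bandwidth_hz = 0.44 * actual_rate_hz
    return N, total_bits, bandwidth_hz
```

For instance, ideal_row(100e6) gives N = 10, about 9.7 total bits, and a 44 MHz bandwidth, reproducing the fourth row of the table.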
Generally speaking, when the measured signal is single-shot or repeats at a low rate, conventional successive capture averaging cannot be adopted, and average-based decimation under oversampling can be used as an alternative. Specifically, average-based decimation under oversampling is especially applicable in the following two situations.
Firstly, if noise in signals is obviously high (what is more, it is not required to measure noise), average-based decimation under oversampling can be adopted to “clear” noise.
Secondly, average-based decimation can be adopted to improve measurement resolution when high-precision waveform measurement is required, even if the noise in the signal is not high.
Comparing (6) and (20), it can easily be seen that in conventional successive sample averaging, the bandwidth is directly proportional to the actual sampling rate and inversely proportional to the number of sample points to average; for a given actual sampling rate, the bandwidth decreases dramatically as the number of averaged points increases. With average-based decimation under oversampling, however, the bandwidth is proportional only to the actual sampling rate and independent of the number of averaged points (the oversampling factor); once the actual sampling rate is given, the bandwidth is determined accordingly with no additional loss. Besides, N-times oversampling raises the Nyquist frequency by a factor of N, so another advantage of average-based decimation is reduced aliasing.
3.3. Processing Example
A group of sample data acquired by a DSO is used as an example to illustrate the processing procedure of the algorithm proposed in the paper. The oscilloscope works at a time base of 500 ns/div, corresponding to an actual sampling rate of 100 MSa/s. It carries out 10-times oversampling at 1 GSa/s to obtain 10 original discrete samples x1,x2,…x10 at the high sampling rate, shown in Table 2. The samples contain obvious glitches, that is, gross errors caused by ADC quantization errors.
10 sets of original sample data obtained by oversampling.
Number        x1    x2    x3    x4    x5    x6    x7    x8    x9    x10
Sample data   120   121   121   123   124   125   63    79    129   131
The expectation and variance of the 10 sample data are
(22)μ=x̄=(1/10)∑i=110xi=113.6,σ2=(1/9)∑i=110(xi-x̄)2=530.4889.
After excluding one repeated datum (121), the constraint conditions satisfied by the sample probability distribution are
(23)∑i=1N′p(xi)=∑i=19p(xi)=1,∑i=1N′xip(xi)=∑i=19xip(xi)=μ=113.6,∑i=1N′(xi-μ)2p(xi)=∑i=19(xi-μ)2p(xi)=σ2=530.4889.
According to MEM and Lagrangian function, the calculated Lagrangian coefficients are λ0=-4.3009, λ1=0.01648, and λ2=0.000452, respectively. The expressions of estimated maximum entropy probability distribution, the maximum entropy, and self-information quantity are given by
(24)p(xi)=exp[λ0+∑j=1mλjgj(xi)]=exp[-4.3009+0.01648xi+0.000452(xi-113.6)2],H(x)max=-λ0-∑j=1mλjE(gj)=2.1893,I(xi)=-lnp(xi)=4.3009-0.01648xi-0.000452(xi-113.6)2.
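Using the Lagrangian coefficients quoted above, the probability and self-information columns of Table 3 can be reproduced numerically; the check below assumes only the coefficients reported in the text:

```python
import math

# Lagrangian coefficients reported for this example
L0, L1, L2 = -4.3009, 0.01648, 0.000452
MU = 113.6  # sample expectation

def p(x):
    """Maximum-entropy probability estimate, per Eq. (24)."""
    return math.exp(L0 + L1 * x + L2 * (x - MU) ** 2)

def self_info(x):
    """Self-information I(x) = -ln p(x), per Eq. (16)."""
    return -math.log(p(x))

# e.g. p(120) ≈ 0.0998 and I(63) ≈ 2.1054, matching Table 3
```

Note that p(x) here increases away from the mean because the fitted quadratic coefficient is positive for this particular sample; the gross-error test of the next step therefore relies on the uncertainty u, not on p(x) alone.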
Corresponding probability and self-information quantity of each sample data are shown in Table 3.
Probability and self-information quantity of each set of sample data.
Number    Sample data    Probability    Self-information quantity
x1        120            0.0998         2.3048
x2        121            0.1021         2.2821
x3        123            0.1071         2.2339
x4        124            0.1099         2.2085
x5        125            0.1128         2.1822
x6        63             0.1217         2.1054
x7        79             0.0856         2.4579
x8        129            0.1265         2.0678
x9        131            0.1346         2.0052
According to the distribution of the maximum entropy, the uncertainty of measurement is
(25)u=√[∑i=1N′(xi-x̄)2p(xi)]=√[∑i=19(xi-113.6)2p(xi)]=23.033,
and then the confidence interval is
(26)[x̄-u,x̄+u]=[90.567,136.633].
It can thus be judged that x7 and x8 in Table 2 are gross errors in quantization and should be excluded from the sample sequence. The repeated sample x3 in Table 2 is also excluded, so the remaining 7 effective sample data form a new sample sequence (i.e., x1,x2,…x7) for data fusion; the calculation results are shown in Table 4.
Fusion weight coefficient of 7 sets of effective sample data.
Number    Sample data    Self-information quantity    Ratio of information quantity    Weight coefficient of fusion
x1        120            2.3048                       1.0528                           0.1350
x2        121            2.2821                       1.0424                           0.1364
x3        123            2.2339                       1.0204                           0.1393
x4        124            2.2085                       1.0088                           0.1409
x5        125            2.1822                       0.9968                           0.1426
x6        129            2.0678                       0.9445                           0.1505
x7        131            2.0051                       0.9159                           0.1552
According to (15)–(18), the result of data fusion is
(27)xf=124.893.
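The fusion steps (15)–(18) can be checked numerically from the seven effective samples and their self-information values in Table 3, with H(x)max = 2.1893 from the text:

```python
H_MAX = 2.1893  # maximum entropy reported in the text
samples = [120, 121, 123, 124, 125, 129, 131]
self_info = [2.3048, 2.2821, 2.2339, 2.2085, 2.1822, 2.0678, 2.0052]

# Eq. (15): ratio of each sample's self-information to the maximum entropy
omega = [I / H_MAX for I in self_info]
# Eq. (17): normalized inverse ratios serve as fusion weight coefficients
total = sum(1 / w for w in omega)
q = [(1 / w) / total for w in omega]
# Eq. (18): fused value
xf = sum(qi * xi for qi, xi in zip(q, samples))
print(round(xf, 3))  # ≈ 124.893, matching (27) up to rounding
```

The weights q reproduce the last column of Table 4, and the fused value agrees with (27).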
Replace the sample data x7 and x8 in Table 2 with data fusion result to form a new sample sequence containing 10 sets of new sample data without gross errors at high sampling rate, that is, x1′,x2′,…x10′, as shown in Table 5.
10 new sample data containing the data fusion result.
Number        x1′    x2′    x3′    x4′    x5′    x6′    x7′        x8′        x9′    x10′
Sample data   120    121    121    123    124    125    124.893    124.893    129    131
Finally, average the 10 new sample data at the high sampling rate (1 GSa/s) to obtain one sampling point at the actual sampling rate (100 MSa/s); that is,
(28)A=(1/10)∑i=110xi′=124.3786.
When the 10 original sample data (x1,x2,…x10) obtained through oversampling are averaged directly, the average value is
(29)A′=(1/10)∑i=110xi=113.6.
Alternatively, when the remaining 8 sample data are averaged after excluding the gross errors x7 and x8 from the 10 original sample data (x1,x2,…x10), the average value is
(30)A′′=(1/8)(x1+x2+x3+x4+x5+x6+x9+x10)=124.25.
According to the maximum entropy theory, the result 124.3786 obtained by the algorithm proposed in the paper is the precise measurement of unknown signal obtained from sample data without any subjective hypotheses and constraints.
Similarly, based on information entropy theory, each group of original samples (x1,x2,…x10) obtained through oversampling at 1 GSa/s can undergo gross error exclusion, data fusion, and average-based decimation to obtain precise measurement data for a complete waveform at the actual sampling rate of 100 MSa/s.
4. Experiment and Result Analysis
In order to verify the effectiveness and superiority of the vertical resolution improvement achieved by the proposed algorithm, we use an 8-bit ADC model provided by Analog Devices to establish the acquisition system of an oscilloscope, conduct simulation experiments with the conventional averaging algorithms, the direct decimation algorithm, and the algorithm proposed in the paper, and finally evaluate and compare the performance of all algorithms.
The oscilloscope works at a time base of 500 ns/div, corresponding to an actual sampling rate of 100 MSa/s. The frequency of the input sine wave is fi = 1 MHz. To simulate quantization errors caused by noise interference and clock jitter, data-mismatched samples are randomly added to the ideal ADC sampling model, so the acquired sample sequence includes gross errors in quantization.
Experiment 1.
Sampling rate fs=100MSa/s. Time-domain waveform and signal spectrum obtained from sampling results without any processing are shown in Figures 4 and 5, respectively.
After calculation in Figure 5, SNR = 28.0776 dB and ENOB = 4.3717 bits.
Time-domain waveform of originally acquired signals at 100 MSa/s.
Spectrum of originally acquired signals at 100 MSa/s.
Experiment 2.
Sampling rate fs=100MSa/s. Signal spectrum obtained by applying successive capture averaging to sampling results with N=10 is shown in Figure 6.
After calculation in Figure 6, SNR = 43.0485 dB and ENOB = 6.8585 bits.
Signal spectrum of successive capture averaging (N=10).
Experiment 3.
Sampling rate fs=100MSa/s. Signal spectrum obtained by adopting successive sample averaging to sampling results with N=10 is shown in Figure 7.
After calculation in Figure 7, SNR = 44.2538 dB and ENOB = 7.0588 bits.
Signal spectrum of successive sample averaging (N=10).
Experiment 4.
Oversampling is adopted with the sampling rate fs=1GSa/s. Signal spectrum obtained by directly applying 10 times decimation to sampling results is shown in Figure 8.
After calculation in Figure 8, SNR = 28.8353 dB and ENOB = 4.4976 bits.
Signal spectrum of 10 times direct decimation.
Experiment 5.
Oversampling is adopted with the sampling rate fs=1GSa/s. Signal spectrum obtained by conducting 10 times average-based decimation on sampling results is shown in Figure 9.
After calculation in Figure 9, SNR = 46.6242 dB and ENOB = 7.4525 bits.
Signal spectrum of 10 times average-based decimation.
Experiment 6.
Oversampling is adopted with the sampling rate fs=1GSa/s. The time-domain waveform and signal spectrum obtained by applying the information entropy-based algorithm proposed in the paper to the sampling results, that is, excluding gross errors, fusing data, and creating a new sample sequence, followed by 10 times average-based decimation, are shown in Figures 10 and 11, respectively.
After calculation in Figure 11, SNR = 59.6071 dB and ENOB = 9.6092 bits.
Time-domain waveform obtained with data fusion and 10 times average-based decimation.
Signal spectrum obtained with data Fusion and 10 times average-based decimation.
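The structure of the Experiment 6 pipeline is: per block of oversampled values, reject gross quantization errors, fuse the surviving efficient samples, and let the fused values form the decimated record. The paper derives its rejection rule from the maximum entropy of the sample data; the sketch below substitutes a simple median-absolute-deviation test (with an assumed threshold k) as a hedged stand-in for that criterion, so it illustrates the shape of the algorithm rather than its exact thresholds.

```python
import numpy as np

def fuse_and_decimate(x, m, k=3.0):
    """Sketch of gross-error rejection + data fusion + decimation.

    Per block of m oversampled values: flag gross errors, fuse (average)
    the surviving samples, and emit one fused value per block. NOTE: the
    paper's rejection rule comes from the maximum entropy of the sample
    data; the median-absolute-deviation test below is only a stand-in.
    """
    n = (len(x) // m) * m
    blocks = np.asarray(x[:n], dtype=float).reshape(-1, m)
    med = np.median(blocks, axis=1, keepdims=True)
    mad = np.median(np.abs(blocks - med), axis=1, keepdims=True)
    keep = np.abs(blocks - med) <= k * np.maximum(mad, 1e-12)
    # Fuse only the samples that survive the gross-error test.
    return np.where(keep, blocks, 0.0).sum(axis=1) / keep.sum(axis=1)
```

Because the outliers are removed before fusion, a single gross error no longer drags the block mean, which is where the extra ENOB beyond plain average-based decimation comes from.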
Table 6 compares the experiment results of the above-mentioned 6 methods.
Table 6: Comparison of experiment results.

Experiment | Method | SNR (dB) | ENOB (bits) | −3 dB bandwidth (MHz)
1 | Non-oversampling, original signal | 28.0776 | 4.3717 | 43
2 | Non-oversampling, successive capture averaging (N = 10) | 43.0485 | 6.8585 | 43
3 | Non-oversampling, successive sample averaging (N = 10) | 44.2538 | 7.0588 | 4.3
4 | Oversampling, 10 times direct decimation | 28.8353 | 4.4976 | 43
5 | Oversampling, 10 times average-based decimation | 46.6242 | 7.4525 | 43
6 | Oversampling, information entropy-based data fusion and 10 times average-based decimation | 59.6071 | 9.6092 | 43
According to Table 6, at the sampling rate of 100 MSa/s the conventional successive capture averaging and successive sample averaging algorithms increase the ENOB of the sinusoidal samples (which include quantization errors) by about 2.49 and 2.69 bits, respectively. However, successive sample averaging also severely reduces the bandwidth, from 43 MHz to 4.3 MHz. At the sampling rate of 1 GSa/s, the average-based decimation algorithm provides an ENOB about 2.95 bits higher than direct decimation. On this basis, the information entropy-based data fusion and average-based decimation algorithm proposed in this paper further increases the ENOB by about 2.16 bits, to a total of 9.61 bits. Compared with the nominal 8 digitalizing bits of the ADC, the actual ENOB (resolution) is therefore improved by about 1.61 bits, very close to the theoretical improvement RH = 0.5 log2 10 ≈ 1.66 bits of (20), and at the same time no analog bandwidth is lost at the actual sampling rate.
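The theoretical figure RH quoted above follows directly from the averaging gain; a short derivation consistent with (20):

```latex
% Averaging m samples with uncorrelated noise divides the noise power by m:
\Delta \mathrm{SNR} = 10\log_{10} m \ \text{dB}.
% Each 6.02 dB of SNR corresponds to one effective bit, hence
R_H = \frac{\Delta \mathrm{SNR}}{6.02}
    = \frac{10\log_{10} m}{20\log_{10} 2}
    = \tfrac{1}{2}\log_2 m .
% For the decimation factor m = 10 used in the experiments:
R_H = \tfrac{1}{2}\log_2 10 \approx 1.66\ \text{bits}.
```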
5. Conclusion
This paper proposes an information entropy-based data fusion and average-based decimation filtering algorithm to raise the vertical resolution of DSOs. For a single oversampled acquisition, the maximum entropy of the sample data is used to eliminate gross errors in quantization, the remaining efficient samples are fused, and average-based decimation then filters the residual noise, thereby improving the DSO resolution. Comparison experiments with different algorithms verify the effectiveness and superiority of the proposed algorithm: the measured resolution improvement agrees closely with the theoretical derivation. Moreover, no subjective hypotheses or constraints are imposed on the signal under test during processing, and the analog bandwidth of the DSO at the actual sampling rate is unaffected.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (nos. 61301263 and 61301264), the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20120185130002), and the Fundamental Research Fund for the Central Universities of China (nos. A03007023801217 and A03008023801080).
References
[1] Tektronix, "Improve the vertical resolution of digital phosphor oscilloscopes," application note.
[2] R. Reeder, M. Looney, and J. Hand, "Pushing the state of the art with multichannel A/D converters."
[3] E. Seifert and A. Nauda, "Enhancing the dynamic range of analog-to-digital converters by reducing excess noise," in Proceedings of the IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Victoria, Canada, June 1989, pp. 574–576, doi:10.1109/PACRIM.1989.48429.
[4] K. C. Lauritzen, S. H. Talisa, and M. Peckerar, "Impact of decorrelation techniques on sampling noise in radio-frequency applications."
[5] V. Gregers-Hansen, S. M. Brockett, and P. E. Cahill, "A stacked A-to-D converter for increased radar signal processor dynamic range," in Proceedings of the IEEE International Radar Conference, May 2001, pp. 169–174.
[6] S. R. Duncan, V. Gregers-Hansen, and J. P. McConnell, "A stacked analog-to-digital converter providing 100 dB of dynamic range," in Proceedings of the IEEE International Radar Conference, 2005, pp. 31–36.
[7] Silicon Laboratories, "Improving ADC resolution by oversampling and averaging," Application Note AN118, 2013, http://www.silabs.com/Support%20Documents/TechnicalDocs/an118.pdf.
[8] Y. Lembeye, J. Pierre Keradec, and G. Cauffet, "Improvement in the linearity of fast digital oscilloscopes used in averaging mode."
[9] C. Bishop and C. Kung, "Effects of averaging to reject unwanted signals in digital sampling oscilloscopes," in Proceedings of AUTOTESTCON 2010, Orlando, Fla, USA, September 2010, pp. 1–4, doi:10.1109/AUTEST.2010.5613545.
[10] C. Fager and K. Andersson, "Improvement of oscilloscope based RF measurements by statistical averaging techniques," in IEEE MTT-S International Microwave Symposium Digest, San Francisco, Calif, USA, June 2006, pp. 1460–1463, doi:10.1109/MWSYM.2006.249565.
[11] Z. H. L. Luo, Q. J. Zhang, and Q. C. H. Fange, "Method of data fusion applied in intelligent instruments."
[12] F.-N. Cai and Q.-X. Liu, "Single sensor data fusion and analysis of effectiveness."
[13] C. E. Shannon, "A mathematical theory of communication."
[14] M. J. Zhu and B. J. Guo, "Study on evaluation of measurement result and uncertainty based on maximum entropy method."
[15] Y. Tan, A. Chu, M. Lu, and B. T. Cunningham, "Distributed feedback laser biosensor noise reduction."
[16] E. T. Jaynes, "Information theory and statistical mechanics."
[17] M. C. Thomas and J. A. Thomas.
[18] D. H. Li and Z. H. Li, "Processing of gross error in small samples based on measurement information theory."