A Nonlinear Calibration Method Based on Sinusoidal Excitation and DFT Transformation for High-Precision Power Analyzers

For most high-precision power analyzers, measurement accuracy may be affected by the nonlinear relationship between the input and output signals. Calibration before measurement is therefore essential to ensure accuracy. However, traditional calibration methods usually involve complicated structures, a cumbersome calibration process, and difficult selection of calibration points, which makes them unsuitable for situations with many measurement points. To solve these issues, a nonlinear calibration method based on sinusoidal excitation and DFT transformation is proposed in this paper. By obtaining the effective value of the current sinusoidal excitation from the calibration source, an accurate calibration can be performed and the calibration efficiency can be improved effectively. Firstly, the phase at the initial moment of the fundamental frequency is calculated through the Fourier transform. Then, the mapping relationship between the sampled values and the theoretical values is established according to the derived theoretical discrete expression, and cubic spline interpolation is used to further reduce the calibration error. Simulations and experiments show that the calibration method presented in this paper achieves high calibration accuracy, with the deviation of the compensated values after calibration within ±3 × 10⁻⁴.


Introduction
The calibration of a high-precision power analyzer is a key function in the signal measurement process. The calibration accuracy directly affects the accuracy and reliability of the subsequent measurement of voltage, current, power, harmonics, and other parameters [1]. An ideal instrument has a strictly linear relationship between input and output, with no time lag or distortion. However, in practical power-measurement scenarios, the relationship between input and output is always nonlinear due to the inherent and unchangeable characteristics of the analog channel and the sensor probe. In order to compensate for or eliminate the nonlinearity of the instrument, the entire system needs to be calibrated nonlinearly; thus, the correctness of subsequent parameter calculations can be ensured [2][3][4]. The key to calibration is establishing a mapping relationship between the theoretical values and the actual values at each sampling point. This mapping relationship is essentially a mathematical expression that needs to be designed and adjusted according to the actual situation [5].
Among the common calibration methods, the hardware compensation method uses both digital and analog circuits for compensation. Li [6] proposed an accurate online calibration system for current transformers whose accuracy reaches the 0.05 class. Luo [7] designed an improved calibration system based on direct current (DC) negative feedback for the calibration of current transformers; its calibration uncertainty reaches 0.038% within the measuring range. However, the hardware compensation circuit [8,9] is usually complicated, and the circuit design process is costly. Besides, the calibration range is small, and the accuracy cannot be guaranteed over the entire range. In addition, the design of the hardware circuit and the zero drift of the electronic devices also reduce the calibration accuracy [10,11]. Therefore, this method is generally not used in scenarios that require precise calibration.
The principle of the mathematical-model calibration algorithm is to utilize a limited amount of sample information to establish a mathematical model of the measurement signal according to the minimum-error principle. In article [12], Jin put forward a calibration method based on OC-SVM. This method can detect the change points in a time series and obtain better accuracy using less training data. However, it is difficult to establish a corresponding mathematical model for nonlinear systems. Wang, Kong [13], and others classified the errors of the acoustic vector sensor array and designed an optimization model and an error self-calibration algorithm for it. This algorithm performs quite well in parameter estimation, but once the mathematical model is established, the iterative calculation of coefficients still requires much work. Therefore, this method is generally not used in actual projects that require a large amount of data computation [14].
The nonlinear segmented calibration method divides the uncalibrated data into segments and then linearizes each section. Wang, Peng [15], and Chengxian [16] both chose this method for calibration work because it can achieve high accuracy, though the accuracy often depends on the experience of the calibrator. Moreover, if the calibration results fail to meet the standard, the segmented calibration must be performed again. In most cases, the calibration efficiency is not high enough, and the calibration workload is relatively large. To address this problem, a calibration algorithm based on the discrete Fourier transform (DFT) is proposed in [17,18]. This method applies the Fourier transform directly to the sampled data sequence, which has the advantages of fast operation speed and low computational cost. However, the algorithm is mainly suitable for harmonic measurement, and due to spectrum leakage under frequency-deviation conditions, the algorithm error is large.
In this paper, a nonlinear calibration method based on sinusoidal excitation and DFT transform is presented. This method uses the initial phase calculated by the DFT to establish the original dense set and then establishes the mapping relationship between the actual sampled values and the theoretical calibration values in the selected calibration interval. After that, by interpolating the data, an ideal calibration curve is obtained.

Fundamental Knowledge of the Proposed Method
2.1. Algorithm Analysis. To implement this algorithm, it is necessary to determine the mapping relationship between the original sampled values and the theoretical values. For this purpose, the signal is sampled at a fixed sampling rate f_s. Then, the Fourier transform is performed on the obtained discrete sequence of sampling points to calculate the phase φ_0 at the initial moment of the fundamental frequency. Next, the initial phase is used to obtain the theoretical discrete expression of the original signal. The theoretical value corresponding to each sampled value is calculated through this expression, and the mapping relationship between the original sampled values and the theoretical values is established. Then, the minimum calibration interval is determined. In order for the calibration interval to cover the maximum range of the signal amplitude, the interval is calibrated from the trough of the signal to the peak, which is half a period. According to the initial phase φ_0 and the effective value A_m set on the standard source, the mapping relationship between the theoretical values and the actual sampled values in the calibration interval is established, and denoising is performed. Finally, a smooth calibration curve is obtained by spline interpolation of the calibration points mapped into a two-dimensional coordinate system, and the theoretical values of the other sampling points can be obtained from the calibration curve. The overall flow of the algorithm is shown in Figure 1.

Take a power signal as an example for a specific description. Fixed-frequency sampling is performed on the measured original signal: the sampling rate f_s is 25600 Hz, the number of sampling periods N is 10, and the number of sampling points per period M is 512. The fundamental frequency of the original signal is then f_b = 1/(T_s × M) = f_s/M = 50 Hz, where T_s = 1/f_s is the sampling interval.
Performing the DFT on the sampled sequence {x_k} at the fundamental frequency f_b gives

X(f_b) = Σ_{k=0}^{NM−1} x_k · e^{−i·2π·f_b·k·T_s}. (1)

The calculation result X(f_b) is a complex number, which can be expressed as X(f_b) = X_R(f_b) + X_I(f_b)·i. The expressions for the real and imaginary parts are

X_R(f_b) = Σ_{k=0}^{NM−1} x_k · cos(2π·f_b·k·T_s),
X_I(f_b) = −Σ_{k=0}^{NM−1} x_k · sin(2π·f_b·k·T_s). (2)

According to the real and imaginary parts of the complex number, the initial phase φ_0 = arctan(X_I(f_b)/X_R(f_b)) of the signal at the fundamental frequency f_b can be calculated, and the continuous expression of the original signal is

y(t) = A_m · cos(2π·f_b·t + φ_0). (3)

Since the sampling points are discrete, the continuous expression needs to be converted into a discrete expression. The discrete sequence {x_k} is sampled at time t, so the relational expression between time t and subscript k is

t = k · T_s. (4)

According to formula (4), the continuous expression of the signal is converted into the theoretical discrete expression

y_k = A_m · cos(2π·f_b·T_s·k + φ_0), (5)

where A_m is the amplitude of the waveform output by the calibration source. Since both y_k and x_k are mapped to the subscript k in the theoretical discrete expression, there is also a one-to-one correspondence between y_k and x_k. The {x_k, y_k} mapping relationship of the N × M sampling points can be obtained over the N sampling periods, and the original dense set is established.
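As a concrete illustration, the single-bin DFT at f_b and the initial-phase calculation can be sketched as follows. The parameter values are the example values from the text (f_s = 25600 Hz, M = 512, N = 10), and an ideal sinusoid with an assumed phase of 60° stands in for the real sampled sequence {x_k}:

```python
import numpy as np

# Assumed example values from the text: fs = 25600 Hz, M = 512 samples per
# period, N = 10 periods, so fb = fs/M = 50 Hz.
fs, M, N = 25600.0, 512, 10
fb = fs / M
Ts = 1.0 / fs

phi0_true = np.pi / 3                      # assumed initial phase (60 degrees)
k = np.arange(N * M)
x = 100.0 * np.cos(2 * np.pi * fb * Ts * k + phi0_true)  # ideal sampled sequence

# Single-bin DFT at the fundamental: X(fb) = sum_k x_k * exp(-i*2*pi*fb*k*Ts)
X = np.sum(x * np.exp(-1j * 2 * np.pi * fb * k * Ts))
phi0 = np.arctan2(X.imag, X.real)          # initial phase of the fundamental
print(round(np.degrees(phi0), 6))          # → 60.0
```

Note that `arctan2` is used instead of the plain arctangent of the text so that the recovered phase lands in the correct quadrant.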

2.2. Calibration of the Calibration Curve.
In order to ensure that the maximum range of the measured signal is covered, the calibration interval is determined to be [A_m·cos(π), A_m·cos(2π)] according to the theoretical discrete expression of the original signal, which is the maximum range. Next, take the left end point A_m·cos(π) of the calibration interval as the starting point and calculate the subscript k_0 of the corresponding sampling point. Since the phase at the starting point satisfies 2π·f_b·T_s·k_0 + φ_0 = π, the calculation formula of k_0 is

k_0 = (π − φ_0)/(2π·f_b·T_s) = (π − φ_0)·M/2π. (6)

The k_0 calculated by the above formula is not necessarily a positive integer. If k_0 is a positive integer, the sampled value x_{k_0} of that sampling point is recorded as x′_0, and the theoretical value y_{k_0} calculated by the theoretical discrete expression y_{k_0} = A_m·cos(2π·f_b·T_s·k_0 + φ_0) is recorded as y′_0. If k_0 is not an integer, the sampled value of the sampling point nearest to the starting point is recorded as x′_0, and its theoretical value is recorded as y′_0. According to formula (6) and the relationship f_b = f_s/M between the signal fundamental frequency f_b and the fixed sampling frequency f_s, as the frequency f_b increases, the number of sampling points per period M decreases, so the subscript k_0 of the starting sampling point also decreases. According to formula (5), when the amplitude A_m and the frequency f_b change, the corresponding theoretical calibration values y_k change, which means that the calibration coefficients change. The initial phase φ_0 is obtained by the DFT calculation and does not affect the calibration process.
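With the example values from the text (M = 512, φ_0 = 60°), the starting index k_0 from formula (6) can be computed directly; rounding to the nearest sample handles the non-integer case:

```python
import math

# Assumed example values: M = 512 samples per period, phi0 = 60 degrees.
M = 512
phi0 = math.pi / 3

# k0 = (pi - phi0) / (2*pi*fb*Ts) = (pi - phi0) * M / (2*pi),
# since fb * Ts = 1/M for coherent sampling.
k0 = (math.pi - phi0) * M / (2 * math.pi)
k0_nearest = round(k0)        # nearest sampling point when k0 is not an integer
print(k0, k0_nearest)         # k0 = 512/3 ≈ 170.67, nearest sample 171
```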
Taking A_m·cos(π) as the starting point, calculate the subscripts of the subsequent sampling points in the calibration interval. Because x′_0 corresponds to the point x_{k_0} with subscript k_0 in the original sequence {x_k}, the original sequence is traversed from subscript k_0, taking one sampling point every ΔM points to form the new sequence {x′_k}. The points in {x′_k} are denoted x′_0, x′_1, x′_2, …, x′_{M/ΔM−1}, and the original sequence {x_k} and the new sequence {x′_k} have the following mapping relationship:

x′_k = x_{k_0 + k·ΔM}. (7)

Substituting the expression of x′_k in formula (7) into the theoretical discrete expression, the corresponding theoretical value is

y′_k = A_m·cos(2π·f_b·T_s·(k_0 + k·ΔM) + φ_0). (8)

M/ΔM calibration points can be taken in each cycle, so N × M/ΔM sampling points are taken as calibration points over the N sampling cycles, and the mapping relationship {x′_k, y′_k} is established. In the N sampling cycles, the sampling points in each cycle repeat periodically, and the sampling points in the two half-cycles of each cycle are mirror-symmetric. Therefore, the 2N repeated sampling points corresponding to each calibration point need to be averaged. The averaging process can be regarded as smoothing and noise removal. The calculation formula is

x″_k = (1/2N) · Σ_{j=1}^{2N} x′_{k,j}, (9)

where x′_{k,j} denotes the j-th repetition of the k-th calibration point. After averaging, the mapping relationship {x″_k, y″_k} with subscript k from 0 to M/2ΔM − 1, totaling M/2ΔM sampling points, is obtained. These M/2ΔM sampling points are the required calibration points.
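The selection-and-averaging step can be sketched as follows, under assumed toy values (ΔM = 8, giving M/2ΔM = 32 calibration points, with a hypothetical quadratic sensor nonlinearity standing in for the real channel):

```python
import numpy as np

fs, M, N, dM = 25600.0, 512, 10, 8        # assumed: rate, samples/period, periods, stride
fb, Am, phi0 = fs / M, 100.0, np.pi / 3
k = np.arange(N * M)
y = Am * np.cos(2 * np.pi * fb * k / fs + phi0)   # theoretical values y_k
x = y + 0.01 * y**2 / Am                          # hypothetical nonlinear sampled values x_k

k0 = int(round((np.pi - phi0) * M / (2 * np.pi)))  # sample nearest the trough
P = M // dM                                        # calibration points per full period
idx = (k0 + dM * np.arange(N * P)) % (N * M)       # every dM-th sample from k0 (periodic)
xs = x[idx].reshape(N, P)                          # N repeats of the P points
# Mirror-fold each period: point j pairs with point (P - j) % P, so each
# calibration point has 2*N repeats; averaging them smooths out noise.
j = np.arange(P // 2)
x_avg = (xs[:, j] + xs[:, (P - j) % P]).mean(axis=0) / 2
y_avg = Am * np.cos(2 * np.pi * fb * (k0 + dM * j) / fs + phi0)  # matching theory
print(len(x_avg))   # M/(2*dM) = 32 calibration points
```

The modulo in `idx` uses the periodicity of the signal to wrap the index stream, which is an implementation convenience rather than part of the paper's derivation.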
There are many ways to establish the calibration relationship for the {x″_k, y″_k} of the M/2ΔM sampling points. The available methods include straight-line fitting, polynomial fitting, and interpolation. Straight-line fitting can only guarantee continuity within each interval but cannot guarantee the smoothness of the calibration curve. In polynomial curve fitting, if some data points deviate significantly, the fitting accuracy decreases as the order increases. Therefore, the spline interpolation method is used in this article.

2.3. Cubic Spline Interpolation. The spline interpolation method draws a curve through all points in the form of variable splines [19][20][21]. Every two adjacent points determine the polynomial of one segment, so the spline interpolant is composed of a series of polynomials. Cubic spline interpolation is a widely used spline interpolation method in which each segment is a cubic polynomial. This method has several advantages: the piecewise low-order interpolation polynomials are easier to solve, the smoothness of the interpolation function is as good as that of high-order spline interpolation, and the compensation effect at adjacent frequency points is better than that of straight-line fitting.
The calculation method of cubic spline interpolation used in this paper is explained below. The mapping between the original values and the spline values is given by a cubic spline function S(x), a piecewise cubic equation defined on n intervals with n + 1 data points (x_0, y_0), …, (x_n, y_n). The cubic equation for each interval obeys the following conditions:

(1) In each interval [x_i, x_{i+1}], S(x) = S_i(x) is a cubic polynomial

(2) The first derivative S′(x) and the second derivative S″(x) of the cubic spline function S(x) are continuous in [a, b], so S(x) is smooth and continuous

Therefore, the cubic polynomial created for each interval can be written as

S_i(x) = a_i + b_i(x − x_i) + c_i(x − x_i)² + d_i(x − x_i)³, i = 0, 1, …, n − 1.

The unknown coefficients a_i, b_i, c_i, and d_i are derived as follows. Interpolation at the knots gives a_i = y_i, and, with the step length h_i = x_{i+1} − x_i, continuity of S(x) at the knots gives

y_{i+1} = y_i + b_i·h_i + c_i·h_i² + d_i·h_i³.

Let m_i = S″(x_i) = 2c_i. Then b_i, c_i, and d_i can all be expressed in terms of the m_i:

c_i = m_i/2, d_i = (m_{i+1} − m_i)/(6h_i), b_i = (y_{i+1} − y_i)/h_i − h_i(2m_i + m_{i+1})/6.

Substituting b_i, c_i, and d_i into the continuity condition of the first derivative yields

h_{i−1}·m_{i−1} + 2(h_{i−1} + h_i)·m_i + h_i·m_{i+1} = 6[(y_{i+1} − y_i)/h_i − (y_i − y_{i−1})/h_{i−1}], i = 1, …, n − 1.

These are n − 1 equations in n + 1 unknown m values, so two additional conditions are needed. Therefore, the boundary conditions are used to constrain the second derivatives at the two endpoints x_0 and x_n [22]: the natural boundary condition S″ = 0, expressed as m_0 = 0 and m_n = 0. The resulting tridiagonal system is then solved for the m_i, the coefficients b_i, c_i, and d_i are calculated from the m_i, and the expression of the spline curve S_i(x) is finally obtained.
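A minimal numpy sketch of this construction follows: it solves the tridiagonal system for the m_i under the natural boundary condition m_0 = m_n = 0 and evaluates the piecewise cubic. The function name and structure are ours, for illustration only:

```python
import numpy as np

def natural_cubic_spline(xk, yk):
    """Natural cubic spline (S'' = 0 at both ends, i.e. m_0 = m_n = 0).
    Returns a callable evaluating S(x) on the knots (xk, yk)."""
    xk, yk = np.asarray(xk, float), np.asarray(yk, float)
    n = len(xk) - 1
    h = np.diff(xk)                            # step lengths h_i
    # Tridiagonal system for the interior second derivatives m_1 .. m_{n-1}:
    # h_{i-1} m_{i-1} + 2(h_{i-1}+h_i) m_i + h_i m_{i+1} = rhs_i
    A = np.zeros((n - 1, n - 1))
    rhs = 6 * (np.diff(yk[1:]) / h[1:] - np.diff(yk[:-1]) / h[:-1])
    for i in range(n - 1):
        A[i, i] = 2 * (h[i] + h[i + 1])
        if i > 0:
            A[i, i - 1] = h[i]
        if i < n - 2:
            A[i, i + 1] = h[i + 1]
    m = np.zeros(n + 1)                        # m_0 = m_n = 0 (natural boundary)
    m[1:n] = np.linalg.solve(A, rhs)

    def S(x):
        x = np.asarray(x, float)
        i = np.clip(np.searchsorted(xk, x) - 1, 0, n - 1)  # interval index
        dx = x - xk[i]
        b = np.diff(yk) / h - h * (2 * m[:-1] + m[1:]) / 6
        return (yk[i] + b[i] * dx + m[i] / 2 * dx**2
                + (m[i + 1] - m[i]) / (6 * h[i]) * dx**3)
    return S

# Usage: spline through four knots, evaluated between them.
S = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 0.8, 0.9, 0.1])
print(float(S(1.5)))
```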
According to the interpolation method, a spline curve is drawn for the mapping relationship {x″_k, y″_k} of the M/2ΔM points obtained after averaging. The sequence {x″_k} covers the maximum range of the sampled values, which means the abscissa range of the spline curve is maximized as well. The remaining sampling points to be calibrated are then processed as follows: first, determine which interval [x″_i, x″_{i+1}] of the spline curve the sampled value falls within, and then substitute the sampled value into the corresponding piecewise function S_i(x) to calculate the corresponding theoretical value.

3.1. Simulation.
The key point of this calibration method is to accurately establish the mapping relationship between the measured signal sampling value and the theoretical value.
The realization process of this method has been theoretically deduced above. Now, a simulation experiment is carried out: the final calibration curve obtained by the proposed calibration algorithm is compared with the assumed true calibration curve between y_k and x_k, and the errors between the two curves are calculated. If the error meets the accuracy requirements, the calibration curve obtained by the algorithm is close enough to the real calibration curve, and the calibration algorithm can be considered feasible.
Assume that the frequency of the given original signal is f = 50 Hz, the amplitude A_m of the signal is 100, the number of sampling points per period M is 512, and the initial phase φ_0 is given as 60°; the theoretical discrete expression of the original signal is then y_k = 100·cos(πk/256 + π/3). According to the characteristics of the sensor, a nonlinear calibration relationship y = f(x) between the sampled value and the theoretical value is assumed; the corresponding calibration curve is shown in Figure 2. Knowing the relationship between the theoretical discrete expression y_k and k, and also the actual nonlinear relationship between the sampled value x_k and the theoretical value y_k, x_k can be deduced inversely according to x_k = f⁻¹(y_k). The simulated waveform of the original sequence x_k is shown in Figure 3, where the abscissa represents time and the ordinate represents amplitude. Using the method described above, the mapping relationship between the sampled values x_k and the theoretical values y_k is established through the theoretical discrete expression, and the 32 calibration points obtained are shown in Figure 4. Cubic spline interpolation is then used to construct a continuous smooth calibration curve S(x) from the 32 calibration points; Figure 5 shows the resulting calibration curve.
Compared with the calibration curve y = f ðxÞ given in Figure 2, the calibration curve made by spline interpolation is very close to the given calibration curve.
Substitute all the sampled values in one period into the fitted calibration curve and the actual calibration curve, respectively, and calculate the relative error between them. The abscissa of Figure 6 is the sampled value, and the ordinate is the error of the true value minus the compensated value after calibration; as can be seen from the figure, the relative error is less than ±3 × 10⁻⁴. Therefore, the calibration algorithm proposed in this paper is feasible.
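The end-to-end simulation can be reproduced in a few lines. Here SciPy's CubicSpline with the natural boundary condition stands in for the spline construction, and the quadratic sensor nonlinearity f is our own assumption, not the curve used in the paper's Figure 2:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Assumed toy sensor nonlinearity f(y) = y + 0.01*y**2/Am standing in for
# the real channel (any smooth monotone distortion would do).
M, Am, phi0 = 512, 100.0, np.pi / 3
k = np.arange(M)
y = Am * np.cos(np.pi * k / 256 + phi0)       # theoretical values y_k
f = lambda v: v + 0.01 * v**2 / Am            # assumed nonlinearity
x = f(y)                                      # "sampled" values x_k = f(y_k)

# 32 calibration points spanning trough to peak (half a period)
yc = Am * np.cos(np.pi + np.pi * np.arange(32) / 32)
xc = f(yc)
S = CubicSpline(xc, yc, bc_type='natural')    # calibration curve y = S(x)

inside = (x >= xc[0]) & (x <= xc[-1])         # stay within the spline's range
err = np.abs(S(x[inside]) - y[inside]) / Am   # relative calibration error
print(err.max() < 3e-4)                       # → True
```

With this smooth toy nonlinearity the maximum relative error is far below the ±3 × 10⁻⁴ bound reported in the text; a rougher channel would sit closer to it.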
3.2. Software Verification. The principle and simulation of the calibration algorithm based on sinusoidal excitation and DFT transform are described above. The calibration method first sets the standard source to output a sinusoidal signal with frequency f_b and amplitude A_m, collects the original signal sequence {x_k}, calculates the initial phase φ_0 of {x_k}, and obtains the discrete expression of the original signal according to φ_0. Then, the calibration interval of the original signal is determined, and the mapping relationship between the sampled values and the theoretical values in the calibration interval is established. Finally, for the processed calibration points, cubic spline interpolation is used to construct the calibration curve, and all the sampled values to be calibrated are substituted into the calibration curve to calculate the theoretical values. Figure 7 is a specific flow chart of the algorithm.

Experiment Result and Analysis
To determine whether the accuracy of the calibration algorithm on the high-precision power analyzer meets the design requirements, an experimental test platform is built. By comparing the measurement data of the power analyzer equipped with our algorithm with those of other high-precision testing equipment, the analysis results are obtained. The specific experimental platform of this project is shown in Figure 8. On the right is the Fluke 6003A standard source used as the standard input, on the left is the power analyzer equipped with this calibration method, and on the lower left is Yokogawa's WT1800 high-precision power analyzer. The actual product is shown below. Connect the signal from the standard source to the power analyzer to be tested and the Yokogawa power meter, respectively, and compare the measurement data of the two.
Before the measurement experiment, the high-precision power analyzer needs to be calibrated. The frequency measurement range of the power meter is 10 Hz-1 kHz, the voltage measurement range is 0.1 V-1000 V, and the current range is 0.1 A-80 A. The voltage measurement accuracy is 0.2% of range, and the current measurement accuracy is 0.1% of range plus the current sensor accuracy. With the Fluke standard source as input, the instrument is calibrated separately with two methods: the traditional segmented calibration method and the calibration method proposed in this article. After calibration, the measured signals of the two methods are shown and compared in Tables 1-4.
As can be seen from Table 1, the effect of the two calibration methods on voltage measurement is basically the same, because the linearity of the voltage sensor in the actual project is good. The nonlinearity of current sensors is usually worse, so the calibration method proposed in this article is mainly applied in the current calibration process to achieve high accuracy. From Tables 2-4, the proposed calibration method is clearly better than the traditional segmented calibration method when measuring large-current and high-frequency signals. When using this method to measure voltage, the error of the measured value is largest at 950 V: the error is (951.52 − 950)/950 = 0.16% < 0.2%, so the voltage accuracy meets the requirements. When the current is measured at 55 A, the error of the current sensor alone already exceeds 2%, but the maximum error shown in Tables 2-4 is (55 − 54.24)/55 = 1.38%, which meets the current accuracy requirements. The comparison of experimental results verifies that the calibration method proposed in this paper is effective in its application to nonlinear systems and achieves high calibration accuracy.
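The two relative-error figures quoted above can be checked directly:

```python
# Relative errors quoted in the text (reading 951.52 V at 950 V input,
# reading 54.24 A at 55 A input).
v_err = (951.52 - 950) / 950       # voltage error at 950 V
i_err = (55 - 54.24) / 55          # current error at 55 A
print(f"{v_err:.2%} {i_err:.2%}")  # → 0.16% 1.38%
```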

Conclusion
In this paper, a nonlinear calibration algorithm based on sinusoidal excitation and DFT transformation is proposed. This algorithm overcomes the shortcomings of traditional methods, in which it is difficult to determine the segment turning points and segment ranges, multiple manual calibrations are cumbersome, and the calibration accuracy can decline within the overall measurement range. In addition, this method only needs to obtain the effective value data of the current calibration source. Even if the number of segment turning points is increased, the calibration of the instrument can be completed accurately by obtaining the effective value data only once, which not only improves the calibration accuracy but also avoids repeated operation of the calibration source and thus greatly improves the calibration efficiency. The simulation experiment verifies the feasibility and accuracy of the algorithm, and the voltage and current parameters are measured by a high-precision power analyzer equipped with the algorithm. The experimental results show that the measured values of voltage and current after calibration meet the accuracy requirements within the range.

Data Availability
The sampled data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.