Research on Amplifier Performance Evaluation Based on δ-Support Vector Regression

Table 1: Result of comparative experiment.

| TRSN | TESN | FN | Method | Parameter (σ, ξ, C) | SVN | TRMSE | TEMSE |
|------|------|----|--------|---------------------|-----|-------|-------|
| 259 × 100 | 59 × 100 | 8 | δ-SVR | (0.1, 0.01, 6) | 1095 | 2.4291e-026 | 2.0009e-031 |
| 259 × 100 | 59 × 100 | 8 | LSSVR | (0.1, 0.01, 6) | 1689 | 3.3619e-019 | 3.9276e-023 |
| 259 × 100 | 59 × 100 | 8 | ε-SVR | (0.1, 0.01, 6) | 1407 | 2.4967e-018 | 2.0138e-015 |

Note 2. SVN denotes the number of support vectors, TESN the number of testing support vectors, TRSN the number of training support vectors, FN the number of data features, TEMSE the testing-data mean square error, and TRMSE the training-data mean square error.


1. Introduction
With the popularization and growing complexity of electronic equipment, many analog electronic functions have been replaced by digital equivalents; however, there is still a need for amplifiers [1]. In fact, virtually all electronic circuits, such as those for voice-signal conversion and for sensor-signal microprocessing and conversion, cannot do without amplifiers [2]. At the same time, circuit nonlinearities, component tolerances, noise, and the lack of training data make the performance detection or diagnosis of amplifiers very complex [3][4][5]. Performance evaluation and detection of amplifiers have therefore become increasingly important in a world full of electronic products. Many factors, such as physical damage, manufacturing technique, aging, radiation, temperature changes, and power surges, can alter amplifier performance. With such a performance evaluation or detection system, the future status of electronic products can be forecasted, some disastrous faults can be avoided, and electronic systems can be kept in good condition at the right time. On this issue, some researchers have paid attention to fault diagnosis and performance evaluation of amplifiers [6]. These techniques are no longer in an early stage of development, but they have still developed slowly because of the increasing complexity of electronic equipment. Some researchers focus on data-driven methods, and many works [7][8][9][10] have attempted to use them. The same applies to robust control [11][12][13][14][15][16][17].
With the development of control strategies, many techniques such as neural networks, fuzzy logic, and genetic algorithms offer ample room for developing amplifier performance evaluation [18][19][20]. Among them, the support vector machine (SVM) has been extensively applied and researched. Zhang and Yu [21] focused on the portability and low cost required of an amplifier performance evaluation method. A support vector regression (SVR) evaluation strategy was first proposed there, and this evaluation scheme also inherited the evaluation precision of SVR. However, the large number of support vectors it requires is its greatest defect and has been the major obstacle to its promotion and application. Focusing on this issue, some works have provided in-depth discussion [22,23], especially concerning the number of support vectors required in the evaluation system. δ-SVR has drawn the attention of many researchers for its ability to generalize, to realize structural risk minimization (SRM), and to generate sparse solutions [24].
This work, building on [24][25][26], proposes an amplifier evaluation strategy based on δ-SVR and presents the superiority of δ-SVR in reducing the number of support vectors. Moreover, a modified RBF kernel function is adopted, constructed from an original kernel by removing the last coordinate and adding a linear term in that coordinate. To demonstrate the effect, a typical circuit, the Sallen-Key low-pass filter, is employed as the testing object, and testing is carried out on the eight performance indexes of amplifiers.

2. Least Squares Support Vector Regression
2.1. Normal LSSVR. The support vector machine (SVM) was originally developed by Vapnik [27] for solving nonlinear classification problems, and it has also been widely used in regression problems [28]. Suppose the training data are $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is the input with $d$ dimensions and $y_i \in \mathbb{R}$ is its corresponding target. The normal LSSVR soft-case optimization problem is

$$\min_{\mathbf{w}, b, \mathbf{e}} \; \frac{1}{2}\mathbf{w}^{\top}\mathbf{w} + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^2 \quad \text{s.t.} \quad y_i = \mathbf{w}^{\top}\varphi(x_i) + b + e_i, \quad i = 1, \ldots, N, \tag{1}$$

where $\mathbf{w}$ is the normal vector of the hyperplane, $b$ is the offset, $\mathbf{e} = [e_1, \ldots, e_N]^{\top}$ represents the prediction residual vector, $\gamma \in \mathbb{R}^{+}$ is the regularization parameter, and $\varphi(\cdot)$ is the mapping from the input space to the feature space.

In practice, (1) is solved by optimizing the Lagrangian

$$L(\mathbf{w}, b, \mathbf{e}; \boldsymbol{\alpha}) = \frac{1}{2}\mathbf{w}^{\top}\mathbf{w} + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^2 - \sum_{i=1}^{N} \alpha_i \bigl(\mathbf{w}^{\top}\varphi(x_i) + b + e_i - y_i\bigr), \tag{2}$$

where $\boldsymbol{\alpha}$ is the Lagrangian multiplier vector. The conditions for optimality are

$$\frac{\partial L}{\partial \mathbf{w}} = 0 \Rightarrow \mathbf{w} = \sum_{i=1}^{N} \alpha_i \varphi(x_i), \quad \frac{\partial L}{\partial b} = 0 \Rightarrow \sum_{i=1}^{N} \alpha_i = 0, \quad \frac{\partial L}{\partial e_i} = 0 \Rightarrow \alpha_i = \gamma e_i, \quad \frac{\partial L}{\partial \alpha_i} = 0 \Rightarrow y_i = \mathbf{w}^{\top}\varphi(x_i) + b + e_i. \tag{3}$$

Eliminating the vectors $\mathbf{w}$ and $\mathbf{e}$, the following linear equation set is obtained:

$$\begin{bmatrix} 0 & \mathbf{1}^{\top} \\ \mathbf{1} & \boldsymbol{\Omega} + \gamma^{-1}\mathbf{I} \end{bmatrix} \begin{bmatrix} b \\ \boldsymbol{\alpha} \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{y} \end{bmatrix}, \tag{4}$$

where $\Omega_{ij} = K(x_i, x_j)$ is the kernel function on the paired input vectors $\{(x_i, x_j), \; i, j = 1, \ldots, N\}$. The commonly used kernel function is the RBF, defined by

$$K(x_i, x_j) = \exp\!\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right). \tag{5}$$

After obtaining the solution $(b, \boldsymbol{\alpha})$ via (4), for any new testing sample $x$ the predicted value is

$$\hat{y}(x) = \sum_{i=1}^{N} \alpha_i K(x, x_i) + b. \tag{6}$$

2.2. δ-SVR. For the same set $\{(x_i, y_i)\}_{i=1}^{N}$, the $i$th training datum is similarly mapped to $\varphi(x_i) \in F$, and in this paper we employ the δ-SVR scheme proposed in [24]. Note 1. The definition of SVC, which can be found in [24], is omitted here.
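The LSSVR training step reduces to solving the linear system (4). A minimal sketch in Python with numpy, assuming an RBF kernel and synthetic 1-D data (helper names such as `lssvr_fit` are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    # Gram matrix of the RBF kernel (5): K(a, b) = exp(-||a - b||^2 / (2 sigma^2)).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma, sigma):
    # Solve the (N+1) x (N+1) linear system (4) for the offset b
    # and the Lagrangian multiplier vector alpha.
    N = len(y)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # b, alpha

def lssvr_predict(X_new, X, b, alpha, sigma):
    # Prediction (6): y_hat(x) = sum_i alpha_i K(x, x_i) + b.
    return rbf_kernel(X_new, X, sigma) @ alpha + b

# Toy usage: fit a smooth 1-D function.
X = np.linspace(0, 1, 40).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
b, alpha = lssvr_fit(X, y, gamma=100.0, sigma=0.2)
y_hat = lssvr_predict(X, X, b, alpha, sigma=0.2)
```

Note that the residuals satisfy $e_i = \alpha_i / \gamma$, so a larger $\gamma$ forces a tighter fit to the training data, matching the role of the regularization parameter in (1).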
Generally speaking, a simple linear kernel function cannot solve the problem of the long time required to test new examples. To overcome this issue, many works have offered discussion. One of them presented a novel scheme employing a new kernel type in which the last coordinate appears only inside a linear term [29]. Based on this idea, [24] proposed a new kernel constructed from an original kernel by removing the last coordinate and adding a linear term in the last coordinate. Here the most popular kernel, the RBF, is employed, and the modified kernel is defined by

$$\tilde{K}(x, y) = \exp\!\left(-\frac{\|\bar{x} - \bar{y}\|^2}{2\sigma^2}\right) + x_{n+1} y_{n+1}, \tag{7}$$

where $x$ and $y$ are here $(n+1)$-dimensional vectors and $\bar{x} = (x_1, \ldots, x_n)$. The proposed method of constructing new kernels always generates a function fulfilling Mercer's condition. The explicit regression form for δ-SVR is defined by

$$f(\bar{x}) = \sum_{i} \beta_i K(\bar{x}_i, \bar{x}) + b, \tag{8}$$

where $\bar{x} = (x_1, \ldots, x_n)$ and $K(\cdot)$ is the original kernel from which the new one (7) was constructed.
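As a sketch, the modified kernel (7) can be written directly in terms of the original RBF; `modified_rbf` is an illustrative name, and the last coordinate is assumed to carry the linear term:

```python
import numpy as np

def rbf(u, v, sigma=0.1):
    # Original RBF kernel (5) on n-dimensional vectors.
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

def modified_rbf(x, y, sigma=0.1):
    # Modified kernel (7): drop the last coordinate from the RBF part
    # and add it back as the linear term x_{n+1} * y_{n+1}.
    return rbf(x[:-1], y[:-1], sigma) + x[-1] * y[-1]

x = np.array([0.2, 0.4, 1.0])   # (n+1)-dimensional input, n = 2
y = np.array([0.2, 0.4, 2.0])
# The RBF part is exp(0) = 1 because the first n coordinates coincide,
# so the value is 1 + 1.0 * 2.0 = 3.0.
print(modified_rbf(x, y))       # 3.0
```

The sum of a Mercer kernel and the linear kernel $x_{n+1} y_{n+1}$ is again a Mercer kernel, which is why this construction is admissible.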

3. Simulation
3.1. Data Processing. Before building the evaluation system, the data should first be processed. In this experiment, the data were obtained from the college analog electronics technique experiments: eight indexes, namely gain, transmission band, upper cut-off frequency, lower cut-off frequency, maximum undistorted output amplitude, maximum undistorted output power, input sensitivity, and noise voltage, were obtained by precise-instrument evaluation over two years.

For the following experiments, a good deal of preprocessing is needed. We set the number of data samples to 259 × 100 and record this as data set R. A data normalization scheme, given by (9), is employed to handle outlying values in the data set:

$$\bar{x}_i = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}}, \tag{9}$$

where $x_i$ and $\bar{x}_i$ are the $i$th components of the input vector before and after normalization, respectively, and $x_{\max}$ and $x_{\min}$ are the maximum and minimum values of all the components of the input vector before normalization. Completing data preprocessing via this 0-1 normalization method reduces the noise markedly.
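A minimal sketch of the 0-1 normalization (9), assuming each row of `R` is one input vector and the min/max are taken over that vector's components:

```python
import numpy as np

def normalize_01(R):
    # Map every component of each input vector into [0, 1] using the
    # minimum and maximum over that vector's components, as in (9).
    x_min = R.min(axis=1, keepdims=True)
    x_max = R.max(axis=1, keepdims=True)
    return (R - x_min) / (x_max - x_min)

R = np.array([[ 2.0,  4.0,  6.0],
              [10.0, 30.0, 20.0]])
print(normalize_01(R))
# first row -> [0.0, 0.5, 1.0], second row -> [0.0, 1.0, 0.5]
```

If the min/max were instead meant to be taken per feature across all samples, `axis=0` would be used; the paper's wording suggests the per-vector form shown here.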
After the above data selection and normalization, 200 × 100 samples are selected randomly as the training samples, and the remaining part serves as the test samples. During testing, in order to enable performance comparison and analysis, two further evaluation schemes, LSSVR and ε-SVR, are also carried out alongside the amplifier performance evaluation with the modified δ-SVR method. Several parameters need to be introduced first, namely the error-insensitive zone (ξ), the penalty factor C, and the kernel-specific parameter σ. Parameter selection is another key issue, and several researchers have discussed the choice of σ, ξ, and C [30,31]. The penalty factor C controls the smoothness or flatness of the approximation function; setting C either too large or too small gives unsatisfactory results. If C is set large, the objective is only to minimize the empirical risk, which makes the learning machine more complex. On the contrary, if C is set small, errors are excessively tolerated, yielding a learning machine with poor approximation [32]. In this experiment, the LSSVR models were constructed with C and ξ varied starting from C = 6 and ξ = 0.01, which are the empirical values given by [32]. Through testing, the parameters C and ξ were varied over a specific range in order to obtain a better correlation coefficient, denoted by R and determined by (10). The kernel-specific parameter σ is restricted because the value shown in Table 1 gives the better prediction for these models. The other necessary parameters of the three evaluation schemes are also shown in Table 1. Only the proposed evaluation scheme adopts the modified RBF (7); the other two evaluation methods employ the popular RBF kernel function. The adopted σ, ξ, and C values for the models are shown in Table 1.

The correlation coefficient is

$$R = \frac{\sum_{i=1}^{n} (y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})}{\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2 \sum_{i=1}^{n} (\hat{y}_i - \bar{\hat{y}})^2}}, \tag{10}$$

where $y_i$ and $\hat{y}_i$ are the actual and predicted values, respectively, and $\bar{y}$ and $\bar{\hat{y}}$ are the means of the actual and predicted values over the $n$ patterns. The mean square error is denoted as follows:

$$\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2, \tag{11}$$

where $y_i$ is the real value, $\hat{y}_i$ is the predicted value, and $n$ is the number of testing samples.
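The two criteria (10)-(11) can be sketched as follows (helper names are illustrative):

```python
import numpy as np

def correlation_coefficient(y, y_hat):
    # Correlation between actual and predicted values, as in (10).
    dy = y - y.mean()
    dp = y_hat - y_hat.mean()
    return (dy * dp).sum() / np.sqrt((dy ** 2).sum() * (dp ** 2).sum())

def mse(y, y_hat):
    # Mean square error over the n testing samples, as in (11).
    return ((y - y_hat) ** 2).mean()

y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8])
print(correlation_coefficient(y, y_hat))  # close to 1 for a good fit
print(mse(y, y_hat))                      # 0.025
```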

3.2. Preparing before Simulation.
To validate the proposed amplifier performance evaluation, a typical circuit, the Sallen-Key low-pass filter shown in Figure 1 [33], is employed as the testing object. Aiming at the eight indexes of the amplifiers, the training data set is confirmed. Thus the sample points $(x, y)$ and the corresponding training set $T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$ can be defined.
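The random train/test split described in Section 3.1 and the construction of the training set can be sketched as follows (a minimal illustration; `R` here is random placeholder data standing in for the normalized amplifier measurements, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder: 259 normalized samples with 8 features (the eight indexes).
R = rng.random((259, 8))
targets = rng.random(259)

# Randomly pick 200 samples for training; the remainder is the test set.
idx = rng.permutation(len(R))
train_idx, test_idx = idx[:200], idx[200:]
T = list(zip(R[train_idx], targets[train_idx]))   # training set T = {(x_i, y_i)}
X_test, y_test = R[test_idx], targets[test_idx]
print(len(T), len(X_test))                        # 200 59
```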

3.3. Simulation Experiment.
After the above data preprocessing, the simulation experiments are carried out. To validate that δ-SVR significantly improves on the number of support vectors while retaining the best evaluation performance, the other two evaluation schemes, LSSVR and ε-SVR, are also employed here.
The sharp contrast in evaluation performance and in the number of support vectors among the three methods is presented in Figures 2, 3, 4, and 5. We take a 6.2-second testing time as the comparison period. From this comparison we can see clearly that all three methods have good performance evaluation ability, but the proposed δ-SVR scheme is better at reducing the number of support vectors. For further explanation, Tables 1 and 2 give the same results, proving both the evaluation precision and the reduction in the number of support vectors. Moreover, the precise-instrument method is utilized in this experiment to verify the evaluation performance.

4. Conclusion
Considering the demand for lower computational cost and complexity, a novel amplifier performance evaluation strategy based on δ-SVR is presented. A modified RBF kernel function is employed, constructed from an original kernel by removing the last coordinate and adding a linear term in the last coordinate. Experiments reveal the superiority of δ-SVR, which needs a small number of support vectors compared with the other two methods, LSSVR and ε-SVR. The performance evaluation precision of the three schemes is also verified by the experiment.
The δ-SVR scheme of [24] converts regression into classification in four steps:

(i) Every training example $x_i$ is duplicated; the output value $y_i$ is translated by a parameter $\delta \ge 0$ for the original training example and by $-\delta$ for the duplicated training example.
(ii) Every training example is converted to a classification example by incorporating the output as an additional feature and setting class $+1$ for original training examples and class $-1$ for duplicated training examples.
(iii) Support vector classification (SVC) is run on the resulting classification examples.
(iv) The solution of SVC is converted to a regression form.
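Steps (i)-(ii) above amount to a simple data transformation, sketched below (a minimal illustration with made-up data; the SVC training and back-conversion of steps (iii)-(iv) are left to any standard SVC solver):

```python
import numpy as np

def regression_to_classification(X, y, delta):
    # Step (i): duplicate each example, shifting the output by +delta for
    # the original copy and by -delta for the duplicate.
    # Step (ii): append the shifted output as an extra feature and label
    # the original copies +1 and the duplicates -1.
    up = np.hstack([X, (y + delta).reshape(-1, 1)])    # originals, class +1
    down = np.hstack([X, (y - delta).reshape(-1, 1)])  # duplicates, class -1
    Z = np.vstack([up, down])
    labels = np.concatenate([np.ones(len(y)), -np.ones(len(y))])
    return Z, labels

X = np.array([[0.1], [0.5], [0.9]])
y = np.array([1.0, 2.0, 3.0])
Z, labels = regression_to_classification(X, y, delta=0.1)
print(Z.shape)   # (6, 2)
```

The separating surface found by SVC in the augmented space then plays the role of the regression function: solving the decision function for the output coordinate recovers the predicted value.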

Figure 2: Local regression curve of gain 20 lg|A_u| with the three methods.

Figure 3: Local regression curve of output amplitude with the three methods.

Figure 4: Local regression curve of noise voltage with the three methods.

Figure 5: Local regression curve of input sensitivity with the three methods.

The normal LSSVR soft-case optimization problem: given the training data $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is the input with $d$ dimensions and $y_i \in \mathbb{R}$ is its corresponding target,

$$\min_{\mathbf{w}, b, \mathbf{e}} \; \frac{1}{2}\mathbf{w}^{\top}\mathbf{w} + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^2 \quad \text{s.t.} \quad y_i = \mathbf{w}^{\top}\varphi(x_i) + b + e_i, \quad i = 1, \ldots, N.$$

Table 1: Result of comparative experiment. Note 2. SVN denotes the number of support vectors, TESN the number of testing support vectors, TRSN the number of training support vectors, FN the number of data features, TEMSE the testing-data mean square error, and TRMSE the training-data mean square error.

Table 2 :
Result of comparative assessment.