Mathematical Problems in Engineering, Volume 2014, Article ID 539430. doi:10.1155/2014/539430

Research Article

Selecting the Optimal Combination Model of FSSVM for Imbalanced Datasets

Chuandong Qin (School of Mathematics and Information Science, North National University, Yinchuan 750021, China) and Huixia Zhao (Business School, North National University, Yinchuan 750021, China)

Academic Editor: Cheng Shao

Received 20 October 2013; Revised 27 January 2014; Accepted 9 February 2014; Published 16 March 2014

Copyright © 2014 Chuandong Qin and Huixia Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Imbalanced data learning is one of the most active and important fields in machine learning research. Although existing class-imbalance learning methods can make Support Vector Machines (SVMs) less sensitive to class imbalance, they still suffer from the disturbance of outliers and noise present in the datasets. A kind of Fuzzy Smooth Support Vector Machine (FSSVM) is proposed based on the Smooth Support Vector Machine (SSVM) of O. L. Mangasarian. The SSVM can easily be solved by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm or the Newton-Armijo algorithm. Two kinds of fuzzy memberships and three smooth functions can be chosen in the algorithms. The fuzzy memberships reflect the contribution of each sample to the optimal separating hyperplane, and the polynomial smooth functions make the optimization problem more accurate at the inflection point. These changes have a positive effect in the trials. The experimental results show that the FSSVMs achieve better accuracy and shorter running times than the SSVMs and several other methods.

1. Introduction

Learning from imbalanced datasets is an important and ongoing issue in machine learning research. The classification problem with imbalanced training data arises in domains where one class is represented by a large number of instances while the other is represented by only a few; many such problems exist in the real world. Conventional classifiers trained on an imbalanced dataset tend to produce a model that is biased toward the majority class and performs poorly on the minority class. Methods for handling class imbalance can be broadly divided into two categories: external methods and internal methods. External methods preprocess the training datasets to make them balanced, for example by random undersampling, random oversampling, or the Synthetic Minority Oversampling Technique (SMOTE), while internal methods modify the learning algorithms themselves to reduce their sensitivity to class imbalance. In addition, a genetic-algorithm-based sampling method and the Z-SVM have been proposed.

The standard SVM treats all training examples uniformly and is therefore sensitive to the outliers and noise that exist in most real-world datasets. A fuzzy membership technique has been introduced into the SVM to assign different fuzzy membership values (weights) to different examples; the membership reflects the importance of each sample in the algorithm and reduces the effect of outliers and noise. However, the Fuzzy Support Vector Machine (FSVM) can still be affected by class imbalance. Considering these factors, we define an imbalance adjustment factor and two kinds of fuzzy membership functions in our models. In addition, three smooth functions are applied to the SSVM models, which changes the differentiability of the objective and allows the models to be solved easily by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm or the Newton-Armijo algorithm.

The rest of this paper is organized as follows. Section 2 briefly reviews SSVM learning theory and its smooth functions, and Section 3 defines the two fuzzy membership functions. In Section 4 we present the FSSVM models, and in Section 5 we describe the two algorithms. Section 6 reports the experimental results. Finally, Section 7 concludes the paper.

2. SSVM and Its Smooth Function

Given an independent and identically distributed training set $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$ with $x_i \in \mathbb{R}^n$ and $y_i \in \{-1, 1\}$, the SSVM is a variant of the SVM learning algorithm originally proposed by O. L. Mangasarian. The reformulated SVM can be expressed as
$$\min_{(w,b,\xi)\in\mathbb{R}^{n+1+m}} \; \frac{1}{2}\left(w^{T}w + b^{2}\right) + \frac{C}{2}\|\xi\|^{2} \quad \text{subject to} \quad D(Aw - eb) + \xi \ge e. \tag{1}$$

Here $\xi$ is given by $\xi = (e - D(Aw - eb))_+$, where the plus function $(\cdot)_+$ replaces the negative components of a vector by zeros. Thus we can substitute $\xi = (e - D(Aw - eb))_+$ into (1) and convert the SVM problem (1) into an equivalent unconstrained optimization problem:
$$\min_{(w,b)\in\mathbb{R}^{n+1}} \; \frac{\nu}{2}\big\|(e - D(Aw - eb))_+\big\|_{2}^{2} + \frac{1}{2}\left(w^{T}w + b^{2}\right). \tag{2}$$
This is a strongly convex minimization problem without constraints and it has a unique solution. However, the objective in (2) is not twice differentiable, which precludes the use of the fast Newton method. We therefore apply smoothing techniques and approximate $x_+$ by the sigmoid smooth function
$$p_s(x, k) = x + \frac{1}{k}\log\left(1 + e^{-kx}\right), \quad k > 0. \tag{3}$$
To obtain more accurate smooth functions, several polynomial smooth functions have been proposed in the literature:
$$p_2(x,k) = \begin{cases} x, & x \ge \frac{1}{k}, \\[2pt] \frac{k}{4}x^{2} + \frac{1}{2}x + \frac{1}{4k}, & -\frac{1}{k} < x < \frac{1}{k}, \\[2pt] 0, & x \le -\frac{1}{k}, \end{cases} \qquad p_4(x,k) = \begin{cases} x, & x \ge \frac{1}{k}, \\[2pt] -\frac{1}{16k}(kx+1)^{3}(kx-3), & -\frac{1}{k} < x < \frac{1}{k}, \\[2pt] 0, & x \le -\frac{1}{k}, \end{cases} \quad k > 0. \tag{4}$$
The function $p_2(x,k)$ is piecewise and first-order continuously differentiable, while $p_4(x,k)$ is piecewise and twice continuously differentiable; the sigmoid function is differentiable to arbitrary order. The smoothness of these functions therefore differs.

Lemma 1.

For given $x$ and $k$, the smooth functions satisfy
$$p_s(x,k) \;\ge\; p_2(x,k) \;\ge\; p_4(x,k) \;\ge\; x_+. \tag{5}$$

Lemma 2.

For given $x$ and $k$, the smooth functions and $x_+$ satisfy the following inequalities:
$$p_s(x,k)^{2} - x_+^{2} \;\le\; \big((\log 2)^{2} + 2\log 2\big)\frac{1}{k^{2}} \approx \frac{0.6927}{k^{2}},$$
$$p_2(x,k)^{2} - x_+^{2} \;\le\; \frac{1}{11k^{2}} \approx \frac{0.0909}{k^{2}},$$
$$p_4(x,k)^{2} - x_+^{2} \;\le\; \frac{1}{19k^{2}} \approx \frac{0.0526}{k^{2}}. \tag{6}$$

The proofs of Lemmas 1 and 2 can be found in the cited paper. From these properties we can see the advantages and disadvantages of each smooth function: the sigmoid function is differentiable to arbitrary order but is the least accurate at the inflection point; $p_2(x,k)$ is first-order differentiable and more accurate than the sigmoid function, but not as accurate as $p_4(x,k)$; $p_4(x,k)$ is twice differentiable and is the most accurate at the inflection point among the three. Owing to its quadratic convergence, $p_4(x,k)$ iterates quickly, but each iteration may be computationally expensive. The convergence of all three functions depends on the parameter $k$. To obtain better performance, we apply these smooth functions together with the fuzzy memberships to the SSVM. A comparison of the smooth functions is shown in Figure 1, where $e$ in the sigmoid function is the base of the natural logarithm and $k$ is the smooth factor. The SSVM with a generic smooth function $p$ can be written as
$$\min_{(w,b)} \; \frac{1}{2}\left(w^{T}w + b^{2}\right) + \frac{C}{2}\big\|p\big(e - D(Aw - eb),\, k\big)\big\|_{2}^{2}. \tag{7}$$
Formula (7) is an unconstrained optimization problem that can be solved by gradient-based algorithms, but it does not account for the disturbance of noise and outliers in the datasets. To improve the accuracy of the SSVMs without increasing the complexity of the algorithms too much, we introduce a fuzzy membership into the SSVMs.

The graph of the three smooth functions and the sign function.
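As an illustration, the three smooth functions of Section 2 can be implemented in a few lines of NumPy. The function names and the test grid below are ours, not the paper's; the assertions check numerically that the functions dominate the plus function in the order $p_s \ge p_2 \ge p_4 \ge x_+$.

```python
import numpy as np

def plus(x):
    """Plus function x_+ : negative components replaced by zero."""
    return np.maximum(x, 0.0)

def p_s(x, k):
    """Sigmoid smooth function: x + (1/k) log(1 + e^{-kx})."""
    # np.logaddexp(0, -k*x) evaluates log(1 + e^{-kx}) without overflow
    return x + np.logaddexp(0.0, -k * x) / k

def p_2(x, k):
    """First-order differentiable piecewise-polynomial approximation of x_+."""
    mid = (k / 4.0) * x**2 + 0.5 * x + 1.0 / (4.0 * k)
    return np.where(x >= 1.0 / k, x, np.where(x <= -1.0 / k, 0.0, mid))

def p_4(x, k):
    """Twice differentiable piecewise-polynomial approximation of x_+."""
    mid = -(1.0 / (16.0 * k)) * (k * x + 1.0)**3 * (k * x - 3.0)
    return np.where(x >= 1.0 / k, x, np.where(x <= -1.0 / k, 0.0, mid))

# numerical check of the ordering p_s >= p_2 >= p_4 >= x_+
xs, k = np.linspace(-2.0, 2.0, 401), 5.0
assert np.all(p_s(xs, k) >= p_2(xs, k) - 1e-12)
assert np.all(p_2(xs, k) >= p_4(xs, k) - 1e-12)
assert np.all(p_4(xs, k) >= plus(xs) - 1e-12)
```

All three functions agree with $x_+$ outside the interval $(-1/k, 1/k)$ (up to the exponentially small sigmoid tail), so increasing the smooth factor $k$ tightens the approximation.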

3. Fuzzy Membership for the Imbalanced Dataset

To deal with outliers and noise, we introduce a fuzzy membership technique that accounts for their effect in imbalanced datasets. The values $r^{+}$ and $r^{-}$ are assigned to reflect the class imbalance: a positive-class example receives a membership value in the interval $[0, r^{+}]$, while a negative-class example receives a membership value in $[0, r^{-}]$. We set $r^{+} = 1$ and $r^{-} = r$, where $r$ is the minority-to-majority class ratio. Under this assignment, a positive-class example takes a membership value in $[0, 1]$ and a negative-class example takes a value in $[0, r]$, where $r < 1$.

At the same time, we define a function $f(x_i)$ based on the distance from $x_i$ to the actual separating hyperplane, which is found by training a normal SVM model on the imbalanced dataset. Examples closer to the separating hyperplane are treated as more informative and are assigned higher membership values. The membership function is defined as follows.

Train a normal SVM model with the original imbalanced dataset.

Find the functional margin $d_i^{h}$ of each example $x_i$. The functional margin is proportional to the geometric margin of a training example with respect to the hyperplane:
$$d_i^{h} = y_i\big(w \cdot \varphi(x_i) + b\big). \tag{8}$$

Define the linear-decaying and exponential-decaying functions as
$$f_{\mathrm{lin}}^{h}(x_i) = 1 - \frac{d_i^{h}}{\max_j\big(d_j^{h}\big) + \Delta}, \qquad f_{\exp}^{h}(x_i) = \frac{2}{1 + \exp\big(\beta d_i^{h}\big)}, \quad \beta \in [0, 1]. \tag{9}$$

Here $\Delta$ is a small positive number. For an imbalanced dataset, we define the fuzzy membership of an example as $s_i = f(x_i)\, r^{+}$ for positive-class examples and $s_i = f(x_i)\, r^{-}$ for negative-class examples, and we apply the two resulting fuzzy memberships to the SSVM.
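The membership computation above can be sketched as follows. The function name, the default $\beta$, and the toy margins in the usage are our illustrative choices (assuming, as in the paper, that the positive class is the minority):

```python
import numpy as np

def fuzzy_memberships(d, y, beta=0.5, delta=1e-6, decay="lin"):
    """Fuzzy membership s_i from functional margins d and labels y in {-1,+1}.

    Positive class is assumed to be the minority; r is the
    minority-to-majority ratio used as the imbalance adjustment factor.
    """
    if decay == "lin":
        f = 1.0 - d / (d.max() + delta)        # linear-decaying f_lin
    else:
        f = 2.0 / (1.0 + np.exp(beta * d))     # exponential-decaying f_exp
    r = np.sum(y == 1) / np.sum(y == -1)       # r^- = r, r^+ = 1
    return np.where(y == 1, f, r * f)

# toy example: two minority positives, four majority negatives
d = np.array([0.5, 1.0, 0.5, 1.0, 1.0, 1.0])   # functional margins
y = np.array([1, 1, -1, -1, -1, -1])
s = fuzzy_memberships(d, y, decay="lin")
```

Examples closer to the hyperplane (smaller margin) receive larger memberships, and every negative-class membership is further scaled by the imbalance factor $r$.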

4. SSVM with the Fuzzy Membership (FSSVM)

After preprocessing, we obtain the dataset with fuzzy memberships:
$$T = \{(x_1, y_1, s_1), (x_2, y_2, s_2), \ldots, (x_m, y_m, s_m)\}. \tag{10}$$

Based on the SSVM classifier (7), the optimization problem of the FSSVM in the higher-dimensional feature space $F$ is
$$\min_{(w,b,\xi)\in\mathbb{R}^{n+1+m}} \; \frac{1}{2}\left(w^{T}w + b^{2}\right) + \frac{C}{2}\|S\xi\|^{2} \quad \text{subject to} \quad D(w\varphi(A) - eb) + \xi \ge e, \tag{11}$$
where $A$ is the data matrix, $\xi$ is the slack variable, and $S$ is the fuzzy membership matrix. At a solution of problem (11), $\xi$ is given by the plus function
$$\xi = \big(e - D(w\varphi(A) - eb)\big)_+. \tag{12}$$
According to the required smoothness and differentiability, we replace the plus function with each of the smooth functions above. At the same time, to find a better separation of classes, the data are first transformed into a higher-dimensional feature space by a mapping $\varphi$. An important property of SVMs is that the mapping $\varphi(x)$ need not be known explicitly. Defining the kernel function $K := k(A, A^{T}) = \varphi(A)\cdot\varphi(A^{T})$ in the feature space $F: x_i \mapsto \varphi(x_i)$, we obtain the FSSVM models
$$\min_{(u,b)} \; \frac{1}{2}\left(u^{T}u + b^{2}\right) + \frac{C}{2}\big\|S\, p_s\big(e - D(K(A,A^{T})Du - eb),\, k\big)\big\|_{2}^{2}, \tag{13}$$
$$\min_{(u,b)} \; \frac{1}{2}\left(u^{T}u + b^{2}\right) + \frac{C}{2}\big\|S\, p_2\big(e - D(K(A,A^{T})Du - eb),\, k\big)\big\|_{2}^{2}, \tag{14}$$
$$\min_{(u,b)} \; \frac{1}{2}\left(u^{T}u + b^{2}\right) + \frac{C}{2}\big\|S\, p_4\big(e - D(K(A,A^{T})Du - eb),\, k\big)\big\|_{2}^{2}, \tag{15}$$
where $K(A, A^{T})$ is a kernel map from $\mathbb{R}^{m\times n} \times \mathbb{R}^{n\times m}$ to $\mathbb{R}^{m\times m}$. These problems, which are capable of generating highly nonlinear separating surfaces, still retain strong convexity and differentiability for any arbitrary kernel. Hence we can apply the BFGS algorithm to solve problems (13)-(15) and the Newton-Armijo algorithm to solve (13) and (15). We call the resulting models $FS_sSVM$, $FS_2SVM$, and $FS_4SVM$ in the following experiments. We now turn to the algorithms.
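A hedged sketch of model (13) with an RBF kernel follows. The kernel choice, its width $\gamma$, the toy data, and the uniform memberships are our assumptions, and SciPy's general-purpose BFGS routine stands in for the dedicated algorithms of Section 5:

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, Z, gamma=1.0):
    """K(A, A^T) with an RBF kernel (our choice for this sketch)."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fssvm_objective(z, K, y, s, C=10.0, k=10.0):
    """Objective (13): 0.5(u'u + b^2) + 0.5 C ||S p_s(e - D(K D u - e b), k)||^2."""
    m = len(y)
    u, b = z[:m], z[m]
    r = 1.0 - y * (K @ (y * u) - b)              # e - D(K D u - e b)
    smooth = r + np.logaddexp(0.0, -k * r) / k   # sigmoid smooth p_s
    w = s * smooth                               # S p_s(...)
    return 0.5 * (u @ u + b * b) + 0.5 * C * (w @ w)

# toy two-class data; uniform memberships keep the sketch simple
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.4, (30, 2)), rng.normal(1, 0.4, (30, 2))])
y = np.hstack([-np.ones(30), np.ones(30)])
s = np.ones(60)
K = rbf_kernel(X, X, gamma=0.5)
res = minimize(fssvm_objective, np.zeros(61), args=(K, y, s), method="BFGS")
u, b = res.x[:60], res.x[60]
acc = np.mean(np.sign(K @ (y * u) - b) == y)
```

Because the strongly convex regularizer $\frac{1}{2}(u^{T}u + b^{2})$ is kept, the smoothed problem has a unique minimizer for any kernel, which is what makes a generic quasi-Newton solver applicable here.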

5. Algorithms

In this section, we introduce the BFGS algorithm and the Newton-Armijo algorithm for the unconstrained optimization problems (13)-(15).

5.1. BFGS Algorithm

If the objective function is first-order differentiable, the unconstrained optimization problem can be solved by the following BFGS algorithm.

(i) Set $H_0 = I$, choose an initial point $x^{(0)}$ and $\varepsilon_1 > 0$, and let $k \leftarrow 0$. Choose constants $s > 0$, $\sigma \in (0, 0.5)$, and $\beta \in (0, 1)$.

(ii) Compute $g^{(k)} = \nabla f(x^{(k)})$. If $\|g^{(k)}\| \le \varepsilon_1$, stop and take $x^{*} = x^{(k)}$; otherwise compute the descent direction $d^{(k)} = -H_k g^{(k)}$.

(iii) Compute the step size $\alpha_k$ by line search: let $\alpha_k = \beta^{m_k} s$, where $m_k$ is the smallest nonnegative integer $m$ satisfying
$$f\big(x^{(k)} + \beta^{m} s\, d^{(k)}\big) \le f\big(x^{(k)}\big) + \sigma \beta^{m} s\, g^{(k)T} d^{(k)}.$$
Let $x^{(k+1)} = x^{(k)} + \alpha_k d^{(k)}$, $s^{(k)} = x^{(k+1)} - x^{(k)} = \alpha_k d^{(k)}$, and $y^{(k)} = g^{(k+1)} - g^{(k)}$.

(iv) Update $H_k$ to $H_{k+1}$: if $y^{(k)T} s^{(k)} \le 0$, let $H_{k+1} = H_k$; otherwise
$$H_{k+1} = H_k - \frac{H_k y^{(k)} y^{(k)T} H_k}{y^{(k)T} H_k y^{(k)}} + \frac{s^{(k)} s^{(k)T}}{s^{(k)T} y^{(k)}}.$$

(v) Let $k \leftarrow k + 1$ and go to step (ii).
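The steps above can be sketched as a minimal NumPy routine. The inverse-Hessian update mirrors the formula stated in the update step, and the quadratic test problem in the usage is our own choice:

```python
import numpy as np

def bfgs(f, grad, x0, eps=1e-6, s0=1.0, sigma=0.1, beta=0.5, max_iter=200):
    """Quasi-Newton method with backtracking line search, following steps (i)-(v)."""
    x = x0.astype(float)
    H = np.eye(len(x))                        # step (i): H_0 = I
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:          # step (ii): stopping test
            break
        d = -H @ g                            # descent direction d_k = -H_k g_k
        alpha = s0                            # step (iii): backtracking search
        while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d) and alpha > 1e-12:
            alpha *= beta
        x_new = x + alpha * d
        s_k, y_k = x_new - x, grad(x_new) - g
        if y_k @ s_k > 0:                     # step (iv): curvature-guarded update
            Hy = H @ y_k
            H = H - np.outer(Hy, Hy) / (y_k @ Hy) + np.outer(s_k, s_k) / (y_k @ s_k)
        x = x_new                             # step (v): next iteration
    return x

# usage on a small strongly convex quadratic: minimize 0.5 x'Qx - b'x
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bfgs(lambda z: 0.5 * z @ Q @ z - b @ z, lambda z: Q @ z - b, np.zeros(2))
```

The curvature guard $y^{(k)T}s^{(k)} > 0$ keeps $H_k$ positive definite, so $d^{(k)}$ stays a descent direction and the backtracking loop always terminates.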

5.2. Newton-Armijo Algorithm

If the objective function is twice differentiable, the unconstrained optimization problem can be solved by the following Newton-Armijo algorithm.

(i) Set the initial point $x^{(0)}$ and $\varepsilon > 0$; let $i \leftarrow 0$.

(ii) Compute $g^{(i)} = \nabla f(x^{(i)})$. If $\|g^{(i)}\| \le \varepsilon$, stop and take $x^{*} = x^{(i)}$; otherwise go to step (iii).

(iii) Compute the Hessian $G_i = G(x^{(i)})$ and the descent direction $d^{(i)}$ from $G_i d^{(i)} = -g^{(i)}$.

(iv) Armijo step size: choose a step size $\lambda_i \in \{1, 1/2, 1/4, \ldots\}$ such that
$$f\big(x^{(i)}\big) - f\big(x^{(i)} + \lambda_i d^{(i)}\big) \ge -\delta \lambda_i\, g^{(i)T} d^{(i)}, \quad \delta \in (0, 0.5).$$

(v) Set $x^{(i+1)} = x^{(i)} + \lambda_i d^{(i)}$, let $i \leftarrow i + 1$, and go to step (ii).
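The Newton-Armijo steps can be sketched similarly; the strongly convex test function in the usage is our own choice:

```python
import numpy as np

def newton_armijo(f, grad, hess, x0, eps=1e-8, delta=0.1, max_iter=50):
    """Newton's method with Armijo step sizes lambda in {1, 1/2, 1/4, ...}."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:              # step (ii): stopping test
            break
        d = np.linalg.solve(hess(x), -g)          # step (iii): G_i d_i = -g_i
        lam = 1.0                                 # step (iv): halve until accepted
        while f(x) - f(x + lam * d) < -delta * lam * (g @ d):
            lam *= 0.5
            if lam < 1e-8:
                break
        x = x + lam * d                           # step (v)
    return x

# usage: separable strongly convex function, minimized coordinate-wise
f = lambda z: np.sum(z**4) + np.sum(z**2) - np.sum(z)
grad = lambda z: 4 * z**3 + 2 * z - 1
hess = lambda z: np.diag(12 * z**2 + 2)
x_star = newton_armijo(f, grad, hess, np.zeros(2))
```

Near the solution the full Newton step $\lambda_i = 1$ is accepted and the iteration converges quadratically, which is the advantage the twice differentiable smooth function $p_4(x,k)$ buys.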

6. Numerical Experiments

6.1. Parameter Selection

Before the experiments, three important parameters must be selected. The first is the smooth factor $k$. Upper bounds for $k$ in each optimization problem were given in the literature as
$$k_{p_s}(m,\varepsilon) \le \frac{0.6927\, m^{2}}{\varepsilon}, \qquad k_{p_2}(m,\varepsilon) \le \frac{0.0909\, m^{2}}{\varepsilon}, \qquad k_{p_4}(m,\varepsilon) \le \frac{0.0526\, m^{2}}{\varepsilon}, \tag{16}$$
where $m$ is the number of samples and $\varepsilon$ is the required accuracy. After obtaining these upper values, we select the weight parameter $C$ and the kernel parameter $v$ by 5-fold cross validation. The whole training and testing procedure is repeated 5 times with different training and testing partitions; the results on the testing partitions are then averaged and reported. The 5-fold cross validation scheme is illustrated in Figure 2. The optimal value of $\beta$ is chosen from the range $\{0.1, \ldots, 1\}$, and $\Delta$ in the linear-decaying function is set to $10^{-6}$.

The 5-fold cross validation.
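The repeated 5-fold cross-validation grid search described above can be sketched as follows. The `train_eval` callback, the grids, and the synthetic score in the usage are placeholders for the paper's actual training routine, not part of it:

```python
import numpy as np

def five_fold_indices(m, seed=0):
    """Shuffle m indices and split them into 5 folds."""
    idx = np.random.default_rng(seed).permutation(m)
    return np.array_split(idx, 5)

def cv_select(X, y, train_eval, C_grid, v_grid, repeats=5):
    """Pick (C, v) maximizing the mean validation score over 5-fold CV
    repeated `repeats` times with different partitions.
    `train_eval(X_tr, y_tr, X_va, y_va, C, v)` is assumed to return a score."""
    best, best_score = None, -np.inf
    for C in C_grid:
        for v in v_grid:
            scores = []
            for rep in range(repeats):
                folds = five_fold_indices(len(y), seed=rep)
                for i in range(5):
                    va = folds[i]
                    tr = np.hstack([folds[j] for j in range(5) if j != i])
                    scores.append(train_eval(X[tr], y[tr], X[va], y[va], C, v))
            if np.mean(scores) > best_score:
                best, best_score = (C, v), np.mean(scores)
    return best, best_score

# usage with a synthetic score peaked at C = 1, v = 0.5 (placeholder only)
X, y = np.zeros((20, 2)), np.ones(20)
score = lambda Xtr, ytr, Xva, yva, C, v: -(C - 1.0)**2 - (v - 0.5)**2
best, _ = cv_select(X, y, score, [0.1, 1.0, 10.0], [0.1, 0.5, 1.0])
```

In the paper's setting `train_eval` would train an FSSVM on the four training folds and return the G-means on the held-out fold.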

6.2. Evaluation Criterion and Datasets

For the proposed FSSVMs, plain accuracy is not a suitable measure on highly imbalanced datasets. We instead use the geometric mean of $acc^{+}$ and $acc^{-}$ to evaluate the algorithms in our experiments:
$$G\text{-means} = \sqrt{acc^{+} \times acc^{-}}, \tag{17}$$
where $acc^{+} = TP/n^{+}$ and $acc^{-} = TN/n^{-}$ denote the accuracy on the positive and negative classes, respectively. The impact of a change in $acc^{+}$ or $acc^{-}$ on the $G$ value depends on its magnitude: the smaller the value, the larger the resulting change in $G$. In other words, the more minority-class samples are misclassified, the greater the misclassification cost.
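The evaluation criterion can be computed directly from the class-wise accuracies; the toy labels in the example are ours. Note how a classifier that predicts only the majority class reaches 80% plain accuracy but a G-means of zero:

```python
import numpy as np

def g_means(y_true, y_pred):
    """Geometric mean of positive-class and negative-class accuracy."""
    pos, neg = y_true == 1, y_true == -1
    acc_pos = np.mean(y_pred[pos] == 1)    # TP / n+
    acc_neg = np.mean(y_pred[neg] == -1)   # TN / n-
    return np.sqrt(acc_pos * acc_neg)

# 2 minority positives, 8 majority negatives
y_true = np.array([1, 1, -1, -1, -1, -1, -1, -1, -1, -1])
majority = -np.ones(10)                    # always predict the majority class
plain_acc = np.mean(majority == y_true)    # 0.8, yet G-means is 0
g = g_means(y_true, majority)
```

This is precisely why G-means, unlike plain accuracy, penalizes the trivial majority-class classifier on imbalanced data.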

We demonstrate the effectiveness of the selected FSSVMs on five benchmark real-world imbalanced datasets from the UCI machine learning repository. These datasets contain some outliers and noisy examples; their characteristics are listed in Table 1.

Details of the imbalanced datasets selected for our study.

Dataset   Pos  Neg   Total  Imb. ratio  Total classes  Pos. class
Abalone   103  4074  4177   49          29             15
Yeast     51   1433  1484   32.3        10             5
Satimage  626  5089  6435   9           29             4
Pima      268  500   768    1.8         2              1
Haberman  81   225   306    2.5         2              2
6.3. Experimental Results

Because the differentiability of the smooth functions differs, the first-order differentiable FSSVMs and SSVMs are tested under the BFGS algorithm, and the twice differentiable ones are tested under the Newton-Armijo algorithm. Among the methods, three smooth functions and two decaying fuzzy membership functions can be chosen. For brevity, we use the subscripts $l$, $e$, $s$, $p_2$, $p_4$ to stand for the linear-decaying function, the exponential-decaying function, and $p_s(x,k)$, $p_2(x,k)$, $p_4(x,k)$, respectively. The experimental results are reported in Tables 2 and 3. From the tables, it is clear that the G-means of the FSSVMs are better than those of the SSVMs and the normal SVM on all five datasets, showing that the two fuzzy memberships and the imbalance factor play an active role in the FSSVMs. Moreover, the smooth functions turn the constrained optimization into an unconstrained one, which allows the BFGS algorithm and the Newton-Armijo algorithm to be used in the computation. Because the features and imbalance ratios of the datasets differ, the best configuration also differs: for datasets with a higher imbalance ratio, it is better to select the linear-decaying $FS_{p_4}SVM$ with the BFGS algorithm, whose fuzzy membership function effectively weakens the effect of outliers and noise; for datasets with a lower imbalance ratio, it is better to choose the exponential-decaying $FS_{p_4}SVM$ with the Newton-Armijo algorithm. Although the smooth function $p_4(x,k)$ appears to increase the complexity of the optimization, it in fact speeds up convergence in the Newton-Armijo algorithm. The running time (excluding the time of the preprocessing normal SVM) of the Newton-based methods is clearly much smaller than that of the other SVMs.

FSSVMs compared with the SSVMs and the normal SVM on the five datasets, reporting G-means (%) and time (s) (excluding the time of the preprocessing normal SVM). All SSVMs use the BFGS algorithm; two fuzzy membership functions and three smooth functions are used in the FSSVMs. For each dataset, the first row gives the G-means and the second row the time.

          SSVM                                      FSSVM (linear decaying)                  FSSVM (exponential decaying)
Data      N-SVM  S_s-SVM  S_p2-SVM  S_p4-SVM  |  FS_s-SVM  FS_p2-SVM  FS_p4-SVM  FSVM  |  FS_s-SVM  FS_p2-SVM  FS_p4-SVM  FSVM
Abalone 19.39 17.92 18.73 20.45 72.72 71.43 73.26 29.82 65.25 64.08 70.16 28.99
14.32 1.03 1.16 1.29 1.03 1.78 1.94 4.06 1.18 2.02 2.82 4.56
Yeast 67.68 58.62 59.46 61.36 83.23 84.67 86.60 71.80 83.54 82.70 84.44 70.80
56.23 11.56 12.24 9.56 11.24 13.46 10.20 20.48 7.88 9.08 6.26 23.48
Satimage 81.05 82.66 81.36 81.84 90.10 89.26 91.28 89.28 90.56 93.06 91.64 89.68
25.91 14.16 20.24 21.62 22.90 23.02 23.03 26.24 16.86 21.40 21.71 28.24
Pima 68.86 69.08 68.92 70.04 72.64 73.20 71.46 69.88 72.80 71.58 72.47 69.64
3.54 2.12 2.28 2.36 2.46 3.12 2.46 2.86 2.46 3.24 3.24 3.88
Haberman 42.46 48.66 49.24 45.88 61.56 62.21 62.46 62.47 64.28 63.48 63.48 62.86
1.20 0.26 0.28 0.36 0.46 0.32 0.46 1.34 0.46 0.28 0.28 1.56

FSSVMs compared with the SSVMs and the normal SVM on the five datasets, reporting G-means (%) and time (s) (excluding the time of the preprocessing normal SVM). All SSVMs use the Newton-Armijo algorithm; two fuzzy membership functions and two smooth functions are used in the FSSVMs. For each dataset, the first row gives the G-means and the second row the time.

          SSVM                        FSSVM (linear decaying)          FSSVM (exponential decaying)
Data      N-SVM  S_s-SVM  S_p4-SVM  |  FS_s-SVM  FS_p4-SVM  FSVM  |  FS_s-SVM  FS_p4-SVM  FSVM
Abalone 19.39 17.80 19.84 71.86 72.96 29.92 70.45 70.16 31.24
14.32 0.89 0.98 0.56 0.28 1.56 0.28 0.24 1.26
Yeast 67.68 58.46 62.06 80.40 85.54 72.06 82.54 84.20 71.46
56.23 8.06 4.56 8.24 5.20 14.48 3.88 3.26 12.48
Satimage 81.05 81.86 82.54 90.32 91.46 90.28 93.72 94.40 91.28
25.91 12.24 16.45 15.56 17.28 28.24 11.86 11.71 26.42
Pima 68.86 69.24 70.56 71.88 72.28 70.64 72.43 73.68 71.04
3.54 1.12 1.36 0.86 0.46 1.88 0.46 0.24 1.68
Haberman 42.46 52.20 50.56 61.24 62.60 60.26 64.96 66.48 61.48
1.20 0.12 0.36 0.16 0.11 0.56 0.16 0.08 0.24

Table 4 compares some of the best FSSVMs with other methods on the five datasets. From the results, we can observe that most of the FSSVMs obtain better classification results than undersampling, oversampling, SMOTE, ADASYN, Z-SVM, and normal SVM training. However, the optimal method differs across datasets, and the best combination of fuzzy membership function and smooth function must be selected accordingly.

The best G-means (%) of the proposed FSSVMs compared with other class-imbalance methods.

Methods Abalone Yeast Satimage Haberman Pima
NSVM 19.39 58.80 81.05 42.46 68.86
Under. 73.19 83.31 89.13 62.06 72.51
Over. 72.73 83.77 88.75 64.49 72.77
SMOTE 72.69 83.23 87.64 62.33 73.08
ADASYN 62.91 82.46 88.21 63.76 66.25
Z-SVM 56.26 78.89 86.43 62.89 74.94
F l S p 4 SVM 73.26 73.68
F e S s SVM 85.54
F e S p 4 SVM 94.40 66.48
7. Conclusions

Choosing a proper fuzzy membership function and a proper smooth function is quite important for solving classification problems with the FSSVM. In this paper, a kind of FSSVM with two fuzzy membership functions and three smooth functions for nonlinear classification has been proposed, which can be solved by the BFGS algorithm or the Newton-Armijo algorithm. The experimental results confirm that it is effective to choose the fuzzy membership function and the smooth function according to the characteristics of the dataset, and that the proposed method achieves better performance in reducing the disturbance of outliers and noise on imbalanced datasets than some existing methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The project is supported by the Ningxia Natural Science Foundation (NZ13095).

References

1. G. Wu and E. Y. Chang, "KBA: kernel boundary alignment considering imbalanced data distribution," IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 6, pp. 786-795, 2005.
2. I. Maglogiannis, E. Zafiropoulos, and I. Anagnostopoulos, "An intelligent system for automated breast cancer diagnosis and prognosis using SVM based classifiers," Applied Intelligence, vol. 30, no. 1, pp. 24-36, 2009.
3. S. Cai, R. Zhang, L. Liu, and D. Zhou, "A method of salt-affected soil information extraction based on a support vector machine with texture features," Mathematical and Computer Modelling, vol. 51, no. 11-12, pp. 1319-1325, 2010.
4. H. He and E. A. Garcia, "Learning from imbalanced data," IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 9, pp. 1263-1284, 2009.
5. J. M. Matías, J. Taboada, C. Ordóñez, and W. González-Manteiga, "Partially linear support vector machines applied to the prediction of mine slope movements," Mathematical and Computer Modelling, vol. 51, no. 3-4, pp. 206-215, 2010.
6. G. Weiss, "Mining with rarity: a unifying framework," ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 7-19, 2004.
7. N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: synthetic minority over-sampling technique," Journal of Artificial Intelligence Research, vol. 16, pp. 321-357, 2002.
8. S. Zou, Y. Huang, Y. Wang, J. Wang, and C. Zhou, "SVM learning from imbalanced data by GA sampling for protein domain prediction," in Proceedings of the 9th International Conference for Young Computer Scientists (ICYCS '08), pp. 982-987, November 2008.
9. N. Chawla, N. Japkowicz, and A. Kolcz, "Editorial: special issue on learning from imbalanced data sets," ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 1-6, 2004.
10. H.-P. Huang and Y.-H. Liu, "Fuzzy support vector machines for pattern recognition and data mining," International Journal of Fuzzy Systems, vol. 4, no. 3, pp. 826-835, 2002.
11. Y. X. Yuan, "A modified BFGS algorithm for unconstrained optimization," IMA Journal of Numerical Analysis, vol. 11, no. 3, pp. 325-332, 1991.
12. Y. X. Yuan and R. H. Byrd, "Non-quasi-Newton updates for unconstrained optimization," Journal of Computational Mathematics, vol. 13, no. 2, pp. 95-107, 1995.
13. O. L. Mangasarian and D. R. Musicant, "Lagrangian support vector machines," Journal of Machine Learning Research, vol. 1, no. 3, pp. 161-177, 2001.
14. Y.-B. Yuan, J. Yan, and C.-X. Xu, "Polynomial smooth support vector machine (PSSVM)," Chinese Journal of Computers, vol. 28, no. 1, pp. 9-17, 2005.
15. H. He, Y. Bai, E. A. Garcia, and S. Li, "ADASYN: adaptive synthetic sampling approach for imbalanced learning," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '08), pp. 1322-1328, June 2008.
16. UCI Machine Learning Repository, http://archive.ics.uci.edu/ml/