Imbalanced data learning is one of the most active and important fields in machine learning research. Although existing class imbalance learning methods can make Support Vector Machines (SVMs) less sensitive to class imbalance, they still suffer from the disturbance of outliers and noise present in the datasets. This paper proposes a family of Fuzzy Smooth Support Vector Machines (FSSVMs) based on the Smooth Support Vector Machine (SSVM) of O. L. Mangasarian. The SSVM can be solved easily by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm or the Newton-Armijo algorithm. Two kinds of fuzzy memberships and three smooth functions can be chosen in the algorithms. The fuzzy memberships reflect the contribution of each sample to the optimal separating hyperplane, and the polynomial smooth functions make the optimization problem more accurate near the inflection point. These changes have a positive effect in the experiments: the results show that the FSSVMs achieve better accuracy in less time than the SSVMs and several other methods.
1. Introduction
Learning from imbalanced datasets is an important and ongoing issue in machine learning research. Classification with imbalanced training data arises in domains where one class is represented by a large number of instances while the other is represented by only a few; many real-world problems have this character [1–3]. A conventional classifier trained on an imbalanced dataset tends to produce a model that is biased toward the majority class and therefore performs poorly on the minority class. Methods for handling class imbalance can be broadly divided into two categories, namely, external methods and internal methods. External methods preprocess the training data to make it balanced, for example by random undersampling [4], random oversampling [5], or the Synthetic Minority Oversampling Technique (SMOTE) [6], while internal methods modify the learning algorithm itself to reduce its sensitivity to class imbalance. In addition, a genetic-algorithm-based sampling method has been proposed in [7], and Z-SVM has been proposed in [8].
The standard SVM treats all training examples uniformly, which makes it sensitive to the outliers and noise present in most real-world datasets [9]. A fuzzy membership technique [10] addresses this by assigning a different membership value (weight) to each example, reflecting the importance of each sample in the algorithm and reducing the effect of outliers and noise. However, the Fuzzy Support Vector Machine (FSVM) is still affected by class imbalance. Considering these factors, we introduce an imbalance adjustment factor and two kinds of fuzzy membership functions into the models. In addition, three smooth functions are applied to the SSVM models; they make the objective differentiable, so that the model can be solved easily by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [11] or the Newton-Armijo algorithm [12].
The rest of this paper is organized as follows. Section 2 briefly reviews the SSVM learning theory and its smooth functions, and Section 3 defines the two fuzzy membership functions. In Section 4, we present the FSSVM models, and in Section 5 we describe the two algorithms. Section 6 reports the experimental results. Finally, Section 7 concludes the paper.
2. SSVM and Its Smooth Function
Given a training set S = {(x_1, y_1), …, (x_m, y_m)} with x_i ∈ R^n and y_i ∈ {−1, 1}, drawn independently and identically distributed from an unknown distribution, SSVM is a variant of the SVM learning algorithm originally proposed in [13]. The SVM problem can be reformulated as follows:
(1) \min_{(w,b,\xi)\in\mathbb{R}^{n+1+m}} \frac{1}{2}\left(w^T w + b^2\right) + \frac{C}{2}\|\xi\|^2 \quad \text{subject to} \quad D(Aw - eb) + \xi \ge e.
Here ξ is given by ξ = (e − D(Aw − eb))_+, where the plus function (·)_+ replaces negative components of a vector by zeros. Thus we can substitute ξ = (e − D(Aw − eb))_+ into (1) and convert the SVM problem (1) into an equivalent unconstrained optimization problem:
(2) \min_{(w,b)\in\mathbb{R}^{n+1}} \frac{\nu}{2}\left\|\left(e - D(Aw - eb)\right)_+\right\|_2^2 + \frac{1}{2}\left(w^T w + b^2\right).
This is a strongly convex minimization problem without any constraints, and it has a unique solution. However, the objective function in (2) is not twice differentiable, which precludes the use of a fast Newton method. We therefore apply a smoothing technique and approximate x_+ by the sigmoid-based smooth function:
(3) p_s(x,k) = x + \frac{1}{k}\log\left(1 + \varepsilon^{-kx}\right), \quad k > 0.
In order to obtain more accurate smooth functions, the authors of [14] proposed several polynomial smooth functions:
(4) p_2(x,k) = \begin{cases} x, & x \ge \frac{1}{k}, \\ \frac{k}{4}x^2 + \frac{1}{2}x + \frac{1}{4k}, & -\frac{1}{k} < x < \frac{1}{k}, \\ 0, & x \le -\frac{1}{k}, \end{cases} \qquad p_4(x,k) = \begin{cases} x, & x \ge \frac{1}{k}, \\ -\frac{1}{16k}(kx+1)^3(kx-3), & -\frac{1}{k} < x < \frac{1}{k}, \\ 0, & x \le -\frac{1}{k}, \end{cases} \quad k > 0.
Clearly, p_2(x,k) is a piecewise continuous function that is first-order differentiable, and p_4(x,k) is a piecewise continuous function that is twice differentiable, while the sigmoid function is differentiable to arbitrary order. The smoothness of the three functions therefore differs.
Lemma 1.
For given x and k, the smooth functions satisfy the following ordering:
(5) p_s(x,k) \ge p_2(x,k) \ge p_4(x,k) \ge x_+.
Lemma 2.
For given x and k, the smooth functions and x_+ satisfy the following inequalities:
(6) p_s(x,k)^2 - x_+^2 \le \left((\log 2)^2 + 2\log 2\right)\frac{1}{k^2} \approx \frac{0.6927}{k^2}, \qquad p_2(x,k)^2 - x_+^2 \le \frac{1}{11k^2} \approx \frac{0.0909}{k^2}, \qquad p_4(x,k)^2 - x_+^2 \le \frac{1}{19k^2} \approx \frac{0.0526}{k^2}.
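As a concrete check, the smooth functions and the claims of Lemmas 1 and 2 can be verified numerically. The sketch below (in Python with NumPy; the function names are ours, not from the paper) implements the three smooth functions; for the quartic piece we take −(kx+1)³(kx−3)/(16k), the sign that makes p_4 continuous at x = 1/k.

```python
import numpy as np

def plus(x):
    # the plus function (x)_+ = max(x, 0)
    return np.maximum(x, 0.0)

def p_s(x, k):
    # sigmoid-based smooth function: x + (1/k) log(1 + e^{-kx})
    return x + np.log1p(np.exp(-k * x)) / k

def p_2(x, k):
    # piecewise quadratic smooth function, first-order differentiable
    return np.where(x >= 1.0 / k, x,
           np.where(x <= -1.0 / k, 0.0,
                    (k / 4.0) * x**2 + 0.5 * x + 1.0 / (4.0 * k)))

def p_4(x, k):
    # piecewise quartic smooth function, twice differentiable
    return np.where(x >= 1.0 / k, x,
           np.where(x <= -1.0 / k, 0.0,
                    -(k * x + 1.0)**3 * (k * x - 3.0) / (16.0 * k)))
```

On a grid one can confirm the ordering p_s ≥ p_2 ≥ p_4 ≥ x_+ and, on the interval |x| ≤ 1/k where the approximations differ from x_+, the quadratic error bounds.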
The proofs of Lemmas 1 and 2 can be found in [14]. From these properties we can see the advantages and disadvantages of each smooth function. The sigmoid function is differentiable to arbitrary order but is the least accurate at the inflection point. The function p_2(x,k) is first-order differentiable and more accurate than the sigmoid function, though not as accurate as p_4(x,k). The function p_4(x,k) is twice differentiable and the most accurate at the inflection point; its quadratic convergence makes the iterations fast, but each iteration may be computationally expensive. The convergence of all three functions depends on the parameter k. To obtain better performance, we apply these smooth functions together with the fuzzy memberships to the SSVM. A comparison of the smooth functions is shown in Figure 1, where ε in the sigmoid function is the base of the natural logarithm and k is the smooth factor. The SSVM with a generic smooth function p can be written as follows:
(7) \min_{(w,b)} \frac{1}{2}\left(w^T w + b^2\right) + \frac{C}{2}\left\|p\left(e - D(Aw - eb), k\right)\right\|_2^2.
Problem (7) is an unconstrained optimization problem that can be solved by gradient-based algorithms, but it does not account for the disturbance caused by noise and outliers in the data. To improve the accuracy of the SSVMs without increasing the complexity of the algorithms too much, we introduce a fuzzy membership into the SSVMs.
The graph of the three smooth functions and the sign function.
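To make the formulation concrete, the unconstrained objective (7) can be evaluated directly. The following minimal sketch is our own illustration (all names are hypothetical), using a linear kernel and the quadratic smooth function:

```python
import numpy as np

def p_2(x, k):
    # quadratic smooth approximation of the plus function (x)_+
    return np.where(x >= 1.0 / k, x,
           np.where(x <= -1.0 / k, 0.0,
                    (k / 4.0) * x**2 + 0.5 * x + 1.0 / (4.0 * k)))

def ssvm_objective(w, b, A, y, C=1.0, k=10.0):
    # (1/2)(w'w + b^2) + (C/2) * || p_2(e - D(Aw - eb), k) ||^2,
    # where D = diag(y) and e is the all-ones vector
    margins = 1.0 - y * (A @ w - b)
    return 0.5 * (w @ w + b * b) + 0.5 * C * np.sum(p_2(margins, k) ** 2)
```

Because this objective is smooth and strongly convex, minimizing it with any gradient-based method recovers the SSVM solution.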
3. Fuzzy Membership for the Imbalanced Dataset
To deal with the problem of outliers and noise, we introduce a fuzzy membership technique that accounts for their effect in imbalanced datasets. Two factors r+ and r− are assigned to reflect the class imbalance: a positive-class example is given a membership value in the interval [0, r+], while a negative-class example is given a membership value in the interval [0, r−]. We set r+ = 1 and r− = r, where r is the minority-to-majority class ratio. With this assignment, a positive-class example takes a membership value in [0, 1], and a negative-class example takes a value in [0, r], where r < 1.
At the same time, we define a function f(x_i) based on the distance from x_i to the actual separating hyperplane, which is found by training a normal SVM model on the imbalanced dataset. Examples closer to the separating hyperplane are treated as more informative and assigned higher membership values. The membership function is defined as follows.
Train a normal SVM model with the original imbalanced dataset.
Find the functional margin d_i^h of each example x_i. The functional margin is proportional to the geometric margin of a training example with respect to the hyperplane:
(8) d_i^h = y_i\left(w \cdot \phi(x_i) + b\right).
Define the linear-decaying and exponential-decaying functions as follows:
(9) f_{\mathrm{lin}}^h(x_i) = 1 - \frac{d_i^h}{\max_j\left(d_j^h\right) + \Delta}, \qquad f_{\mathrm{exp}}^h(x_i) = \frac{2}{1 + \exp\left(\beta d_i^h\right)}, \quad \beta \in [0,1].
Here Δ is a small positive number. For the imbalanced dataset, we define the fuzzy membership of an example as s_i = f(x_i)·r+ for the positive class and s_i = f(x_i)·r− for the negative class, and we apply the two fuzzy memberships to the SSVM.
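The membership assignment above can be sketched as follows. This is our own Python illustration (the function name and defaults are hypothetical); the functional margins d would come from the preprocessing SVM of step (8):

```python
import numpy as np

def fuzzy_memberships(d, y, beta=0.5, delta=1e-6, decay="linear"):
    """Fuzzy membership s_i for each example, following (9).

    d : functional margins d_i^h = y_i (w . phi(x_i) + b) from a normal SVM
    y : class labels in {+1, -1}; +1 is assumed to be the minority class
    """
    r = np.sum(y == 1) / np.sum(y == -1)         # minority-to-majority ratio
    if decay == "linear":
        f = 1.0 - d / (np.max(d) + delta)        # linear-decaying function
    else:
        f = 2.0 / (1.0 + np.exp(beta * d))       # exponential-decaying function
    # r+ = 1 for the positive class, r- = r for the negative class
    return np.where(y == 1, f, r * f)
```

Note how an example close to the hyperplane (small d_i^h) receives a higher weight than one far from it, while every negative-class weight is scaled down by the imbalance factor r.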
4. SSVM with the Fuzzy Membership (FSSVM)
After preprocessing the dataset, we obtain a dataset with fuzzy memberships as follows:
(10) T = \left\{(x_1, y_1, s_1), (x_2, y_2, s_2), \ldots, (x_m, y_m, s_m)\right\}.
Based on the SSVM classifier (7), the optimization problem of the FSSVM in the high-dimensional feature space F is given by the following model:
(11) \min_{(w,b,\xi)\in\mathbb{R}^{n+1+m}} \frac{1}{2}\left(w^T w + b^2\right) + \frac{C}{2}\|S\xi\|_2^2 \quad \text{subject to} \quad D\left(\varphi(A)w - eb\right) + \xi \ge e,
where A is the data matrix, ξ is the slack variable, and S is the diagonal fuzzy membership matrix. At a solution of problem (11), ξ is given by the plus function:
(12) \xi = \left(e - D\left(\varphi(A)w - eb\right)\right)_+.
According to the required smoothness and differentiability, we replace the plus function with one of the smooth functions above. At the same time, in order to find a better separation of classes, the data are first transformed into a higher-dimensional feature space by a mapping φ. As an important property of SVMs, the mapping φ(x) need not be known explicitly: by defining the kernel function K(A, A^T) = φ(A)·φ(A^T) in the feature space F: x_i → φ(x_i), we obtain the FSSVM models:
(13) \min_{(u,b)} \frac{1}{2}\left(u^T u + b^2\right) + \frac{C}{2}\left\|S\,p_s\left(e - D\left(K(A,A^T)Du - eb\right), k\right)\right\|_2^2,
(14) \min_{(u,b)} \frac{1}{2}\left(u^T u + b^2\right) + \frac{C}{2}\left\|S\,p_2\left(e - D\left(K(A,A^T)Du - eb\right), k\right)\right\|_2^2,
(15) \min_{(u,b)} \frac{1}{2}\left(u^T u + b^2\right) + \frac{C}{2}\left\|S\,p_4\left(e - D\left(K(A,A^T)Du - eb\right), k\right)\right\|_2^2,
where K(A, A^T) is a kernel map from R^{m×n} × R^{n×m} to R^{m×m}. We note that this problem, which is capable of generating highly nonlinear separating surfaces, retains the strong convexity and differentiability properties for an arbitrary kernel. Hence we can apply the BFGS algorithm to problems (13)–(15) and the Newton-Armijo algorithm to problems (13) and (15). We call the resulting methods FSsSVM, FS2SVM, and FS4SVM in the experiments below. We now turn to the algorithms.
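For illustration, one of these models, say (15) with the quartic smooth function, can be written down directly. The following sketch is our own (the Gaussian kernel and all names are our choices, not prescribed by the paper) and evaluates the membership-weighted objective:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian kernel matrix K(A, B^T)
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

def p_4(x, k):
    # quartic smooth approximation of the plus function
    return np.where(x >= 1.0 / k, x,
           np.where(x <= -1.0 / k, 0.0,
                    -(k * x + 1.0)**3 * (k * x - 3.0) / (16.0 * k)))

def fssvm_objective(u, b, K, y, s, C=1.0, k=10.0):
    # (1/2)(u'u + b^2) + (C/2) || S p_4(e - D(K D u - e b), k) ||^2,
    # with D = diag(y) and S = diag(s) the fuzzy membership matrix
    margins = 1.0 - y * (K @ (y * u) - b)
    return 0.5 * (u @ u + b * b) + 0.5 * C * np.sum((s * p_4(margins, k)) ** 2)
```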
5. Algorithms
In this section, we present the BFGS algorithm and the Newton-Armijo algorithm for the unconstrained optimization problems (13)–(15).
5.1. BFGS Algorithm
If the objective function is first-order differentiable, the unconstrained optimization problem can be solved with the BFGS algorithm as follows.
(i) Set H_0 = I, choose an initial point x^(0) and ε_1 > 0, and let k ⇐ 0. Choose constants s > 0, σ ∈ (0, 0.5), and β ∈ (0, 1).
(ii) Compute g^(k) = ∇f(x^(k)). If ‖g^(k)‖ ≤ ε_1, stop and take x* = x^(k); otherwise compute the descent direction d^(k) = −H_k g^(k).
(iii) Compute the step size α_k = β^{m_k} s by line search, where m_k is the smallest nonnegative integer m satisfying f(x^(k) + β^m s d^(k)) ≤ f(x^(k)) + σ β^m s (g^(k))^T d^(k). Let x^(k+1) = x^(k) + α_k d^(k), s^(k) = x^(k+1) − x^(k), and y^(k) = g^(k+1) − g^(k).
(iv) Update H_k to H_{k+1}: if (y^(k))^T s^(k) ≤ 0, let H_{k+1} = H_k; otherwise H_{k+1} = H_k − H_k y^(k) (y^(k))^T H_k / ((y^(k))^T H_k y^(k)) + s^(k) (s^(k))^T / ((s^(k))^T y^(k)).
(v) Let k ⇐ k + 1 and go to step (ii).
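The procedure above can be sketched as a short routine. The following Python illustration is our own code: it uses the rank-two inverse-approximation update exactly as written above and a backtracking (Armijo) line search, and it minimizes any differentiable function supplied as callables.

```python
import numpy as np

def bfgs(f, grad, x0, eps=1e-6, s0=1.0, sigma=0.1, beta=0.5, max_iter=200):
    # Quasi-Newton minimization with an inverse-Hessian approximation H
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:
            break                                # stopping test
        d = -H @ g                               # descent direction
        a = s0                                   # backtracking line search
        while f(x + a * d) > f(x) + sigma * a * (g @ d) and a > 1e-12:
            a *= beta
        x_new = x + a * d
        g_new = grad(x_new)
        s, yv = x_new - x, g_new - g
        if yv @ s > 0:                           # curvature check y's > 0
            Hy = H @ yv                          # rank-two update of H
            H = H - np.outer(Hy, Hy) / (yv @ Hy) + np.outer(s, s) / (s @ yv)
        x, g = x_new, g_new
    return x
```

Applied to the objectives (13)–(15), f and grad would be the smoothed FSSVM objective and its gradient in (u, b).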
5.2. Newton-Armijo Algorithm
If the objective function is twice differentiable, the unconstrained optimization problem can be solved with the Newton-Armijo algorithm as follows.
(i) Choose an initial point x^(0) and ε_1 > 0; let i ⇐ 0.
(ii) Compute g^(i) = ∇f(x^(i)). If ‖g^(i)‖ ≤ ε_1, stop and take x* = x^(i); otherwise go to step (iii).
(iii) Compute G_i = ∇²f(x^(i)) and the descent direction d^(i) from G_i d^(i) = −g^(i).
(iv) Armijo step size: choose λ_i ∈ {1, 1/2, 1/4, …} such that f(x^(i)) − f(x^(i) + λ_i d^(i)) ≥ −δ λ_i (g^(i))^T d^(i), where δ ∈ (0, 0.5).
(v) Set x^(i+1) = x^(i) + λ_i d^(i), i ⇐ i + 1, and go to step (ii).
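A minimal sketch of the Newton-Armijo iteration (our own Python illustration; f, grad, and hess are user-supplied callables for the objective, its gradient, and its Hessian):

```python
import numpy as np

def newton_armijo(f, grad, hess, x0, eps=1e-8, delta=0.1, max_iter=50):
    # Newton direction from G d = -g, with Armijo step halving
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            break                               # stopping test
        d = np.linalg.solve(hess(x), -g)        # solve G d = -g
        lam = 1.0                               # try 1, 1/2, 1/4, ...
        while f(x) - f(x + lam * d) < -delta * lam * (g @ d) and lam > 1e-12:
            lam *= 0.5
        x = x + lam * d
    return x
```

On a strongly convex quadratic the full Newton step is always accepted, so the method converges in a single iteration; on the smoothed FSSVM objectives it converges quadratically near the solution.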
6. Numerical Experiments
6.1. Parameter Selection
Before the experiments, three important parameters must be selected. The first is the smooth factor k, for which the authors of [14] gave an upper bound in each optimization problem:
(16) k_{p_s}(m,\varepsilon) \le \frac{0.0927\,m^2}{\varepsilon}, \qquad k_{p_2}(m,\varepsilon) \le \frac{0.0909\,m^2}{\varepsilon}, \qquad k_{p_4}(m,\varepsilon) \le \frac{0.0526\,m^2}{\varepsilon},
where m is the number of samples and ε is the desired accuracy. Given this upper bound, we select the penalty parameter C and the kernel parameter v by 5-fold cross-validation. The whole training and testing procedure is repeated 5 times with different training and testing partitions, and the results on the testing partitions are averaged and reported. The 5-fold cross-validation scheme is illustrated in Figure 2. The optimal value of β is chosen from the range {0.1, …, 1}, and Δ in the linear-decaying function is set to 10^{-6}.
The 5-fold cross validation.
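The parameter-selection procedure can be sketched as follows. This Python fragment is our own: the fold splitter is generic, and the bound formula transcribes (16) under our reading of it, so the exact constants and the m²/ε form are assumptions.

```python
import numpy as np

def kfold_indices(n, n_splits=5, seed=0):
    # shuffle the sample indices and split them into n_splits folds
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), n_splits)

def smooth_factor_upper_bound(m, eps, kind="p4"):
    # upper bound on the smooth factor k, as transcribed from (16)
    coef = {"ps": 0.0927, "p2": 0.0909, "p4": 0.0526}[kind]
    return coef * m**2 / eps
```

Each of the 5 folds serves once as the test partition while C and v are selected on the remaining folds.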
6.2. Evaluation Criterion and Datasets
For the proposed FSSVMs, plain accuracy is not a suitable measure on highly imbalanced datasets. We therefore use the geometric mean of acc+ and acc− [15] to evaluate the performance of the algorithms in our experiments:
(17) G\text{-means} = \sqrt{acc^+ \times acc^-},
where acc+ = TP/n+ and acc− = TN/n− denote the accuracy on the positive and negative classes, respectively. The impact of a change in acc+ or acc− on the G value depends on its magnitude: the smaller the value, the larger the resulting change in G. In other words, the more minority-class samples are misclassified, the greater the misclassification cost.
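In code, the measure is straightforward (a small Python helper of our own):

```python
import numpy as np

def g_means(y_true, y_pred):
    # geometric mean of per-class accuracies, labels in {+1, -1}
    pos = y_true == 1
    acc_pos = np.mean(y_pred[pos] == 1)      # acc+ = TP / n+
    acc_neg = np.mean(y_pred[~pos] == -1)    # acc- = TN / n-
    return np.sqrt(acc_pos * acc_neg)
```

Unlike plain accuracy, this score collapses toward zero whenever either class is largely misclassified, which is exactly the failure mode of a majority-biased classifier.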
We demonstrate the effectiveness of the selected FSSVMs on five benchmark real-world imbalanced datasets from the UCI machine learning repository [16]. These real-world datasets contain outliers and noisy examples; their characteristics are given in Table 1.
Table 1. Details of the imbalanced datasets selected for our study.

Dataset    Pos   Neg    Total   Imb. ratio   Total class.   Pos class.
Abalone    103   4074   4177    49           29             15
Yeast      51    1433   1484    32.3         10             5
Satimage   626   5089   6435    9            29             4
Pima       268   500    768     1.8          2              1
Haberman   81    225    306     2.5          2              2
6.3. Experimental Results
Because the differentiability of the smooth functions differs, the first-order differentiable FSSVMs and SSVMs are tested under the BFGS algorithm, and the twice differentiable ones under the Newton-Armijo algorithm. Among the methods, three smooth functions and two decaying fuzzy membership functions can be chosen. For brevity, we use the subscripts l, e, s, p2, and p4 to denote the linear-decaying function, the exponential-decaying function, and the smooth functions p_s(x,k), p_2(x,k), and p_4(x,k), respectively. The experimental results are reported in Tables 2 and 3. From the tables it is clear that the G-means of the FSSVMs are better than those of the SSVMs and the normal SVM on all five datasets, which shows that the two fuzzy memberships and the imbalance factor play an active role in the FSSVMs. On the other hand, the smooth functions turn the constrained optimization into an unconstrained one, allowing the BFGS and Newton-Armijo algorithms to be used in the computation. Because the characteristics and imbalance ratios of the datasets differ, the best choices differ as well: for datasets with a higher imbalance ratio, it is better to select the linear-decaying FSp4SVM with the BFGS algorithm, whose fuzzy membership function effectively weakens the influence of outliers and noise; for datasets with a lower imbalance ratio, it is better to choose the exponential-decaying FSp4SVM with the Newton-Armijo algorithm. Although the smooth function p_4(x,k) appears to increase the complexity of the optimization, it in fact speeds up convergence in the Newton-Armijo algorithm. The running time of the Newton-Armijo algorithm (excluding the time of the preprocessing normal SVM) is clearly much smaller than that of the other SVMs.
Table 2. Comparison of the FSSVMs with the SSVMs and the normal SVM on five datasets: G-means (%) and time (s), excluding the time of the preprocessing normal SVM. All methods use the BFGS algorithm; two fuzzy membership functions and three smooth functions are used in the FSSVMs. For each dataset, the first row gives the G-means (%) and the second row the running time (s).

                  SSVM                              FSSVM, linear decaying              FSSVM, exponential decaying
Data         N-SVM  Ss-SVM Sp2-SVM Sp4-SVM   FSs-SVM FSp2-SVM FSp4-SVM FSVM     FSs-SVM FSp2-SVM FSp4-SVM FSVM
Abalone   G  19.39  17.92  18.73   20.45     72.72   71.43    73.26    29.82    65.25   64.08    70.16    28.99
          t  14.32  1.03   1.16    1.29      1.03    1.78     1.94     4.06     1.18    2.02     2.82     4.56
Yeast     G  67.68  58.62  59.46   61.36     83.23   84.67    86.60    71.80    83.54   82.70    84.44    70.80
          t  56.23  11.56  12.24   9.56      11.24   13.46    10.20    20.48    7.88    9.08     6.26     23.48
Satimage  G  81.05  82.66  81.36   81.84     90.10   89.26    91.28    89.28    90.56   93.06    91.64    89.68
          t  25.91  14.16  20.24   21.62     22.90   23.02    23.03    26.24    16.86   21.40    21.71    28.24
Pima      G  68.86  69.08  68.92   70.04     72.64   73.20    71.46    69.88    72.80   71.58    72.47    69.64
          t  3.54   2.12   2.28    2.36      2.46    3.12     2.46     2.86     2.46    3.24     3.24     3.88
Haberman  G  42.46  48.66  49.24   45.88     61.56   62.21    62.46    62.47    64.28   63.48    63.48    62.86
          t  1.20   0.26   0.28    0.36      0.46    0.32     0.46     1.34     0.46    0.28     0.28     1.56
Table 3. Comparison of the FSSVMs with the SSVMs and the normal SVM on five datasets: G-means (%) and time (s), excluding the time of the preprocessing normal SVM. All methods use the Newton-Armijo algorithm; two fuzzy membership functions and two smooth functions are used in the FSSVMs. For each dataset, the first row gives the G-means (%) and the second row the running time (s).

                  SSVM                  FSSVM, linear decaying    FSSVM, exponential decaying
Data         N-SVM  Ss-SVM Sp4-SVM   FSs-SVM FSp4-SVM FSVM     FSs-SVM FSp4-SVM FSVM
Abalone   G  19.39  17.80  19.84     71.86   72.96    29.92    70.45   70.16    31.24
          t  14.32  0.89   0.98      0.56    0.28     1.56     0.28    0.24     1.26
Yeast     G  67.68  58.46  62.06     80.40   85.54    72.06    82.54   84.20    71.46
          t  56.23  8.06   4.56      8.24    5.20     14.48    3.88    3.26     12.48
Satimage  G  81.05  81.86  82.54     90.32   91.46    90.28    93.72   94.40    91.28
          t  25.91  12.24  16.45     15.56   17.28    28.24    11.86   11.71    26.42
Pima      G  68.86  69.24  70.56     71.88   72.28    70.64    72.43   73.68    71.04
          t  3.54   1.12   1.36      0.86    0.46     1.88     0.46    0.24     1.68
Haberman  G  42.46  52.20  50.56     61.24   62.60    60.26    64.96   66.48    61.48
          t  1.20   0.12   0.36      0.16    0.11     0.56     0.16    0.08     0.24
Table 4 compares the best FSSVMs with several other methods on the five datasets. From the results we observe that most of the FSSVM methods obtain better classification results than undersampling, oversampling, SMOTE, ADASYN, Z-SVM, and normal SVM training. For different datasets, however, the optimal method differs, and the best combination of fuzzy membership function and smooth function must be selected.
Table 4. The best G-means (%) of the proposed FSSVMs compared with other class-imbalance methods.

Methods     Abalone  Yeast   Satimage  Haberman  Pima
NSVM        19.39    58.80   81.05     42.46     68.86
Under.      73.19    83.31   89.13     62.06     72.51
Over.       72.73    83.77   88.75     64.49     72.77
SMOTE       72.69    83.23   87.64     62.33     73.08
ADASYN      62.91    82.46   88.21     63.76     66.25
Z-SVM       56.26    78.89   86.43     62.89     74.94
FlSp4SVM    73.26    —       —         —         73.68
FeSsSVM     —        85.54   —         —         —
FeSp4SVM    —        —       94.40     66.48     —
7. Conclusions
Choosing a proper fuzzy membership function and a proper smooth function is quite important for solving classification problems with the FSSVM. This paper has proposed a kind of FSSVM with two fuzzy membership functions and three smooth functions for nonlinear classification, which can be computed with the BFGS algorithm or the Newton-Armijo algorithm. The experimental results confirm that choosing the fuzzy membership function and the smooth function according to the characteristics of the dataset is effective: the method reduces the disturbance of outliers and noise in imbalanced datasets better than some existing methods.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The project is supported by the Ningxia Natural Science Foundation (NZ13095).
References
[1] Wu, G. and Chang, E. Y., "KBA: kernel boundary alignment considering imbalanced data distribution."
[2] Maglogiannis, I., Zafiropoulos, E., and Anagnostopoulos, I., "An intelligent system for automated breast cancer diagnosis and prognosis using SVM based classifiers."
[3] Cai, S., Zhang, R., Liu, L., and Zhou, D., "A method of salt-affected soil information extraction based on a support vector machine with texture features."
[4] He, H. and Garcia, E. A., "Learning from imbalanced data."
[5] Matías, J. M., Taboada, J., Ordóñez, C., and González-Manteiga, W., "Partially linear support vector machines applied to the prediction of mine slope movements."
[6] Weiss, G., "Mining with rarity: a unifying framework."
[7] Chawla, N. V., Bowyer, K. W., Hall, L. O., and Kegelmeyer, W. P., "SMOTE: synthetic minority over-sampling technique."
[8] Zou, S., Huang, Y., Wang, Y., Wang, J., and Zhou, C., "SVM learning from imbalanced data by GA sampling for protein domain prediction," in Proceedings of the 9th International Conference for Young Computer Scientists (ICYCS '08), November 2008, pp. 982–987, doi: 10.1109/ICYCS.2008.72.
[9] Chawla, N., Japkowicz, N., and Kolcz, A., "Editorial: special issue on learning from imbalanced data sets."
[10] Huang, H.-P. and Liu, Y.-H., "Fuzzy support vector machines for pattern recognition and data mining."
[11] Yuan, Y. X., "A modified BFGS algorithm for unconstrained optimization."
[12] Yuan, Y. X. and Byrd, R. H., "Non-quasi-Newton updates for unconstrained optimization."
[13] Mangasarian, O. L. and Musicant, D. R., "Lagrangian support vector machines."
[14] Yuan, Y.-B., Yan, J., and Xu, C.-X., "Polynomial smooth support vector machine (PSSVM)."
[15] He, H., Bai, Y., Garcia, E. A., and Li, S., "ADASYN: adaptive synthetic sampling approach for imbalanced learning," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '08), June 2008, pp. 1322–1328, doi: 10.1109/IJCNN.2008.4633969.
[16] UCI Machine Learning Repository, http://archive.ics.uci.edu/ml/.