
SMOTE

We propose an over-sampling approach in which the minority class is over-sampled by creating ``synthetic'' examples rather than by over-sampling with replacement. This approach is inspired by a technique that proved successful in handwritten character recognition [30], where extra training data were created by performing certain operations on real data. In that case, operations like rotation and skew were natural ways to perturb the training data. We generate synthetic examples in a less application-specific manner, by operating in ``feature space'' rather than ``data space''. The minority class is over-sampled by taking each minority class sample and introducing synthetic examples along the line segments joining any/all of the k minority class nearest neighbors. Depending upon the amount of over-sampling required, neighbors from the k nearest neighbors are randomly chosen. Our implementation currently uses five nearest neighbors. For instance, if the amount of over-sampling needed is 200%, only two neighbors from the five nearest neighbors are chosen and one synthetic sample is generated in the direction of each.

Synthetic samples are generated in the following way: take the difference between the feature vector (sample) under consideration and its nearest neighbor, multiply this difference by a random number between 0 and 1, and add it to the feature vector under consideration. This selects a random point along the line segment between the two specific feature vectors. This approach effectively forces the decision region of the minority class to become more general.
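In symbols (the notation here is ours, introduced only for exposition), the generation step for a minority class sample $x$ and one of its chosen nearest neighbors $x^{(nn)}$ can be written as
\begin{displaymath}
x_{new} = x + \delta \cdot (x^{(nn)} - x), \qquad \delta \sim U(0,1),
\end{displaymath}
so that $x_{new}$ lies at a random point on the line segment joining $x$ and $x^{(nn)}$.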

Algorithm SMOTE, given below, is the pseudo-code for SMOTE. Table 1 shows an example of the calculation of random synthetic samples. The amount of over-sampling is a parameter of the system, and a series of ROC curves can be generated for the different over-sampled populations and ROC analysis performed.

The synthetic examples cause the classifier to create larger and less specific decision regions, as shown by the dashed lines in Figure 3(c), rather than smaller and more specific regions. More general regions are now learned for the minority class samples rather than being subsumed by the majority class samples around them. The effect is that decision trees generalize better. Figures 4 and 5 compare minority over-sampling with replacement and SMOTE. The experiments were conducted on the mammography dataset, which originally contained 10923 examples in the majority class and 260 examples in the minority class. With 10-fold cross-validation, each training set contains approximately 9831 majority class examples and 233 minority class examples. The minority class was over-sampled at 100%, 200%, 300%, 400% and 500% of its original size. The graphs show that the tree sizes for minority over-sampling with replacement at higher degrees of replication are much greater than those for SMOTE, and that the minority class recognition of minority over-sampling with replacement at higher degrees of replication is not as good as that of SMOTE.


[Algorithm SMOTE($T$, $N$, $k$). Pseudo-code for SMOTE: the inputs are $T$, the number of minority class samples, $N$, the amount of SMOTE in percent, and $k$, the number of nearest neighbors; the synthetic samples are created by the Populate subroutine. The full pseudo-code is not reproduced here.]
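Since the pseudo-code itself is not reproduced here, the following is a minimal Python sketch of the procedure as described above. The function name smote, its signature, and the use of scikit-learn's NearestNeighbors are illustrative choices rather than part of the original algorithm; the sketch assumes $N$ is a multiple of 100 with $N \geq 100$.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(minority, N, k=5, random_state=None):
    """Return (N/100) * len(minority) synthetic minority class samples.

    minority     : array of shape (T, num_attrs) of minority class samples
    N            : amount of SMOTE in percent (a multiple of 100, N >= 100)
    k            : number of nearest neighbors (five in our implementation)
    random_state : optional seed for reproducibility
    """
    rng = np.random.default_rng(random_state)
    minority = np.asarray(minority, dtype=float)
    n_new_per_sample = N // 100

    # k+1 neighbors are requested because each sample is its own nearest neighbor.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(minority)
    _, neighbor_idx = nn.kneighbors(minority)

    synthetic = []
    for i, sample in enumerate(minority):
        # Randomly choose n_new_per_sample of the k nearest neighbors.
        chosen = rng.choice(neighbor_idx[i][1:], size=n_new_per_sample, replace=False)
        for j in chosen:
            diff = minority[j] - sample            # difference vector
            gap = rng.random()                     # random number between 0 and 1
            synthetic.append(sample + gap * diff)  # random point on the segment
    return np.array(synthetic)

For example, smote(minority_examples, N=200) would return two synthetic examples for each original minority class example, which are then added to the training set alongside the original data.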


 
Table 1: Example of generation of synthetic examples (SMOTE).
Consider a sample (6,4) and let (4,3) be its nearest neighbor.
(6,4) is the sample for which the k-nearest neighbors are being identified; (4,3) is one of those k-nearest neighbors.
Let:
    f1_1 = 6,  f2_1 = 4,  f2_1 - f1_1 = -2
    f1_2 = 4,  f2_2 = 3,  f2_2 - f1_2 = -1
The new samples will be generated as
    (f1', f2') = (6,4) + rand(0-1) * (-2,-1)
where rand(0-1) generates a random number between 0 and 1.
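As a quick arithmetic check of Table 1, the fragment below evaluates the expression above for one illustrative outcome of rand(0-1); the value 0.5 is assumed purely for demonstration and is not from the original text.

import numpy as np

sample   = np.array([6.0, 4.0])   # sample under consideration
neighbor = np.array([4.0, 3.0])   # its nearest neighbor
gap = 0.5                         # one possible outcome of rand(0-1)

synthetic = sample + gap * (neighbor - sample)
print(synthetic)                  # [5.  3.5], midway along the segment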
 



  
Figure 4: Comparison of decision tree sizes for replicated over-sampling and SMOTE for the Mammography dataset
[Figure 4 image: ismsize.eps]


  
Figure 5: Comparison of % Minority correct for replicated over-sampling and SMOTE for the Mammography dataset
[Figure 5 image: ismperf.eps]

