Classification of class-imbalanced data


1 Classification of class-imbalanced data
Advisor: Dr. Vahidipour
Zahra Salimian, Shaghayegh Jalali
Dec 2017

2 Class Imbalance

3 Class Imbalance Problem

4 What is the Class Imbalance Problem?
It is the situation where the total number of examples of one class (the positive class) is far smaller than the total number of examples of the other class (the negative class). This problem is extremely common in practice and can be observed in various disciplines, including fraud detection, anomaly detection, medical diagnosis, oil spill detection, and facial recognition.

5 Why is it a problem? Most machine learning algorithms work best when the number of instances of each class is roughly equal. When the number of instances of one class far exceeds that of the other, problems arise.

6 Solutions to imbalanced learning
Sampling methods Cost-sensitive methods Kernel and Active Learning methods

7 Sampling methods
Oversampling: adding more of the minority class so it has more effect on the machine learning algorithm.
Under-sampling: removing some of the majority class so it has less effect on the machine learning algorithm.
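As a rough illustration of these two strategies, here is a minimal NumPy sketch of random oversampling (duplicating minority examples) and random undersampling (dropping majority examples). The function names and the binary-label setup are assumptions for illustration, not part of the slides.

```python
import numpy as np

def random_oversample(X, y, minority_label, rng=np.random.default_rng(0)):
    """Duplicate randomly chosen minority examples until both classes have equal counts."""
    minority = np.where(y == minority_label)[0]
    majority = np.where(y != minority_label)[0]
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    idx = np.concatenate([majority, minority, extra])
    return X[idx], y[idx]

def random_undersample(X, y, minority_label, rng=np.random.default_rng(0)):
    """Drop randomly chosen majority examples until both classes have equal counts."""
    minority = np.where(y == minority_label)[0]
    majority = np.where(y != minority_label)[0]
    kept = rng.choice(majority, size=len(minority), replace=False)
    idx = np.concatenate([kept, minority])
    return X[idx], y[idx]
```

Both routines only change how often existing examples appear; the next two slides show the side effects this can have.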

8 Oversampling With oversampling, simply duplicating minority-class examples can lead the classifier to overfit to a few examples, as illustrated below:

9 The left-hand side shows the data before oversampling, whereas the right-hand side shows the data after oversampling has been applied. On the right, the thick positive signs indicate that there are multiple repeated copies of that data instance. The machine learning algorithm sees these cases many times and therefore tends to overfit to these examples specifically, resulting in the blue decision boundary shown above.

10 Under-sampling With undersampling, we risk removing majority-class instances that are more representative, thus discarding useful information. This can be illustrated as follows:

11 Here the green line is the ideal decision boundary we would like to have, and the blue line is the actual result. The left side shows the result of applying a general machine learning algorithm without undersampling. On the right, we undersampled the negative class but removed some informative negative examples, which caused the blue decision boundary to be slanted, so some negative examples are wrongly classified as positive.


14 These methods require a performance measure to be specified a priori, before learning.
An alternative is a so-called threshold-moving method, which changes the decision threshold of a model a posteriori to counteract the imbalance, and thus has the potential to adapt to the performance measure of interest. Surprisingly, little attention has been paid to the potential of combining a bagging ensemble with threshold-moving. Our method preserves the natural class distribution of the data, resulting in well-calibrated posterior probabilities.
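To make the idea concrete, here is a minimal scikit-learn sketch of a bagging ensemble combined with a posteriori threshold-moving. The choice of decision trees as base learners, the 0/1 labels, the F1 criterion, and the separate validation set are assumptions for illustration, not details from the slides.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

def fit_bagging_with_threshold_moving(X_train, y_train, X_val, y_val):
    # Bagging on the unmodified (naturally imbalanced) training data,
    # so the estimated posterior probabilities reflect the natural class distribution.
    ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)
    ensemble.fit(X_train, y_train)

    # Threshold-moving: pick the decision threshold a posteriori,
    # here by maximizing F1 on a validation set (any measure of interest could be used).
    probs = ensemble.predict_proba(X_val)[:, 1]
    thresholds = np.linspace(0.01, 0.99, 99)
    scores = [f1_score(y_val, (probs >= t).astype(int)) for t in thresholds]
    best_threshold = thresholds[int(np.argmax(scores))]
    return ensemble, best_threshold
```

At prediction time, an example is labeled positive whenever its predicted probability meets the chosen threshold, so a different performance measure only requires re-running the threshold search, not retraining.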

15 Cost-Sensitive Methods
Utilize cost-sensitive methods for imbalanced learning
Consider the cost of misclassification, instead of modifying the data
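As a rough sketch of this idea, a misclassification-cost matrix can be combined with a classifier's posterior probabilities so that the prediction minimizes the expected cost. The 10:1 cost ratio and the function name below are hypothetical, chosen only to illustrate making false negatives on the minority class much more expensive.

```python
import numpy as np

# Hypothetical cost matrix: cost[i][j] = cost of predicting class j when the true class is i.
cost = np.array([[0.0, 1.0],    # true negative: correct = 0, false positive = 1
                 [10.0, 0.0]])  # true positive: false negative = 10, correct = 0

def min_expected_cost_prediction(class_probs, cost):
    """Pick, for each example, the class with the smallest expected misclassification cost."""
    # class_probs: (n_samples, n_classes) posterior probabilities P(i | x)
    expected_cost = class_probs @ cost   # entry j = sum_i P(i | x) * cost[i, j]
    return expected_cost.argmin(axis=1)
```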

16 Cost-Sensitive Learning Framework

17 Cost-Sensitive Dataspace Weighting with Adaptive Boosting

18 Cost-Sensitive Dataspace Weighting with Adaptive Boosting
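Below is a minimal sketch of one published variant of this idea (an AdaC2-style update), in which each example's cost enters the multiplicative weight update, so costly minority examples gain weight faster after mistakes. The use of decision stumps, the {-1, +1} label encoding, and the number of rounds are assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cost_sensitive_boost(X, y, costs, n_rounds=50):
    """AdaC2-style cost-sensitive dataspace weighting with boosting (sketch).
    y is expected in {-1, +1}; costs[i] > 0 is the misclassification cost of example i."""
    n = len(y)
    D = np.full(n, 1.0 / n)              # example weights (the "data space")
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=D)
        pred = stump.predict(X)
        correct = pred == y
        # Learner weight compares the cost-weighted mass of correct vs. incorrect examples.
        num = np.sum(costs[correct] * D[correct])
        den = np.sum(costs[~correct] * D[~correct]) + 1e-12
        alpha = 0.5 * np.log(num / den)
        # Cost-sensitive update: the per-example cost scales the usual AdaBoost factor.
        D = costs * D * np.exp(-alpha * y * pred)
        D /= D.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas
```

The final prediction would be the sign of the alpha-weighted vote of the learned stumps, as in standard AdaBoost.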

19 Cost-Sensitive Decision Trees
Cost-sensitive adjustments to the decision threshold: the final decision threshold should yield the most dominant point on the ROC curve.
Cost-sensitive considerations for split criteria: the impurity function should be insensitive to unequal costs.
Cost-sensitive pruning schemes: the probability estimate at each node needs improvement to reduce the removal of leaves describing the minority concept, e.g., via the Laplace smoothing method and Laplace pruning techniques.
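As a small worked illustration of the Laplace smoothing mentioned above, the leaf probability for class c can be estimated as (k_c + 1) / (n + C), where k_c is the number of class-c examples at the leaf, n the total number of examples at the leaf, and C the number of classes. The helper name and the example counts are hypothetical.

```python
def laplace_leaf_estimate(class_counts):
    """Laplace-smoothed class probabilities at a leaf: (k_c + 1) / (n + C).
    Keeps small leaves (often those describing the minority concept) from
    getting extreme 0/1 estimates and being pruned away."""
    n = sum(class_counts)
    C = len(class_counts)
    return [(k + 1) / (n + C) for k in class_counts]

# A tiny leaf with 2 minority and 0 majority examples:
# the raw estimate is [1.0, 0.0]; Laplace smoothing gives [0.75, 0.25].
print(laplace_leaf_estimate([2, 0]))
```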

20 Cost-Sensitive Neural Network
Four ways of applying cost sensitivity in neural networks:
Modifying the probability estimate of the outputs: applied only at the testing stage; the original neural network is maintained.
Altering the outputs directly: bias the neural network during training to focus on the expensive class.
Modifying the learning rate: set η higher for costly examples and lower for low-cost examples.
Replacing the error-minimizing function: use an expected-cost minimization function instead.
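As one concrete instance of the last option (replacing the error-minimizing function), the per-class loss can be weighted by misclassification cost so that expensive minority-class errors dominate training. The tiny network, the 10:1 cost ratio, and the input size below are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical small network for 20-dimensional inputs and 2 classes.
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))

class_costs = torch.tensor([1.0, 10.0])              # assumed cost of misclassifying class 0 vs. class 1
criterion = nn.CrossEntropyLoss(weight=class_costs)  # cost-weighted loss instead of plain error minimization
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(x_batch, y_batch):
    # One gradient step on the cost-weighted objective.
    optimizer.zero_grad()
    loss = criterion(model(x_batch), y_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```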

21 Kernel-based learning framework
Based on statistical learning theory and Vapnik-Chervonenkis (VC) dimensions.
Problems with kernel-based support vector machines (SVMs) under class imbalance:
Support vectors from the minority concept may contribute less to the final hypothesis.
The optimal hyperplane is also biased toward the majority class, because minimizing the total error favors the majority.
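One common remedy for this majority-class bias is to penalize minority-class errors more heavily in the SVM objective, which pushes the hyperplane back toward the majority class. A minimal scikit-learn sketch is below; the RBF kernel and the 'balanced' weighting (inverse class frequency) are assumptions, not recommendations from the slides.

```python
from sklearn.svm import SVC

# Class-weighted SVM: errors on the minority class receive a larger penalty,
# counteracting the hyperplane's bias toward the majority class.
svm = SVC(kernel="rbf", class_weight="balanced")
# svm.fit(X_train, y_train) would then weight minority support vectors more heavily.
```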

22 Thank you

