
1 On Sample Selection Bias and Its Efficient Correction via Model Averaging and Unlabeled Examples. Wei Fan and Ian Davidson.

2 A Toy Example. Two classes: red and green. Red: f2 > f1; green: f2 <= f1.
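As an illustration, a minimal sketch of this toy dataset (the feature names f1, f2 and the red/green rule are from the slide; the uniform sampling range and sample size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example: the label depends only on whether f2 > f1.
# The uniform [0, 1) range and 1000 points are assumptions for illustration.
X = rng.uniform(0.0, 1.0, size=(1000, 2))        # columns: f1, f2
y = np.where(X[:, 1] > X[:, 0], "red", "green")  # red: f2 > f1, green: f2 <= f1
```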

3 Unbiased and Biased Samples. (Figure with two panels: "not-so-biased sampling" and "biased sampling".)

4 Effect on Learning. (Figure: three pairs of results, unbiased vs. biased: 97.1% vs. 92.1%, 96.9% vs. 95.9%, 96.405% vs. 92.7%.) Some techniques are more sensitive to bias than others. One important question: how to reduce the effect of sample selection bias?

5 Ubiquitous. Loan approval, drug screening, weather forecasting, ad campaigns, fraud detection, user profiling, biomedical informatics, intrusion detection, insurance, etc. 1. Normally, banks only have data on their own customers. 2. Late-payment and default models are computed using their own data. 3. New customers may not follow exactly the same distribution. 4. Is the New Century sub-prime mortgage bankruptcy due to bad modeling?

6 Bias as Distribution. Think of sampling an example (x,y) into the training data as an event denoted by a random variable s. s=1: example (x,y) is sampled into the training data; s=0: example (x,y) is not sampled. Think of bias as a conditional probability of s=1 that depends on x and y. P(s=1|x,y) is the probability for (x,y) to be sampled into the training data, conditional on the example's feature vector x and class label y.
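A hedged sketch of this view: sampling into the training set is a Bernoulli event whose probability may depend on x and y. The particular form of P(s=1|x,y) below is an assumption for illustration only, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data as on slide 2 (uniform range is an assumption).
X = rng.uniform(0.0, 1.0, size=(1000, 2))
y = np.where(X[:, 1] > X[:, 0], "red", "green")

def select_prob(x, label):
    """Illustrative P(s=1|x,y): prefers examples with small f1 (a feature bias).
    This particular form is assumed for the sketch, not taken from the paper."""
    return 0.9 if x[0] < 0.5 else 0.1

# s = 1 with probability P(s=1|x,y); the selected rows form the biased training set.
s = np.array([rng.random() < select_prob(x, lbl) for x, lbl in zip(X, y)])
X_train, y_train = X[s], y[s]
```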

7 Categorization (from Zadrozny, 2004). No sample selection bias: P(s=1|x,y) = P(s=1). Feature bias: P(s=1|x,y) = P(s=1|x). Class bias: P(s=1|x,y) = P(s=1|y). Complete bias: P(s=1|x,y) admits no further reduction.

8 Bias for a Training Set: how P(s=1|x,y) is computed. Practically, for a given training set D: P(s=1|x,y) = 1 if (x,y) is sampled into D, and P(s=1|x,y) = 0 otherwise. Alternatively, consider all the datasets of the same size that could be sampled from the universe of examples.

9 Are realistic datasets biased? Most datasets are biased: it is unlikely to sample each and every feature vector. For most problems there is at least feature bias: P(s=1|x,y) = P(s=1|x).

10 Effect on Learning. Learning algorithms estimate the true conditional probability. True probability: P(y|x), such as P(fraud|x). Estimated probability: P(y|x,M), where M is the model built. Conditional probability in the biased data: P(y|x,s=1). Key issue: is P(y|x,s=1) = P(y|x), at least for the sampled examples?

11 Appropriate Assumptions. Feature bias gives more good training examples than either class bias or complete bias, where good means P(y|x,s=1) = P(y|x). Beware: it is incorrect to conclude that P(y|x,s=1) = P(y|x) everywhere, except under some restricted situations that rarely happen. For class bias and complete bias, it is hard to derive anything. It is hard to make more detailed claims without knowing more about both the sampling process and the true function.

12 Categorizing a dataset into the exact type of bias is difficult: you don't know what you don't know. This is not that bad, since the key issue is the number of examples with a bad conditional probability, which is either small or large.

13 Solutions when this number is small: posterior weighting, i.e., class probability integration over the model space. Average the estimated class probabilities weighted by each model's posterior; this removes model uncertainty by averaging, as sketched below.
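A minimal sketch of posterior-weighted class-probability averaging. The choice of base classifiers and the uniform weights standing in for the model posterior are assumptions for illustration; the paper's exact model-generation scheme may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def averaged_conditional(models, weights, X):
    """P(y|x) ~= sum_k w_k * P(y|x, M_k), with w_k playing the role of
    the model posterior. All models must be fit on the same label set."""
    probs = np.stack([m.predict_proba(X) for m in models])  # (K, n, C)
    w = np.asarray(weights).reshape(-1, 1, 1)
    return (w * probs).sum(axis=0)                           # (n, C)

# Usage sketch (uniform weights as a stand-in for the posterior):
# models = [LogisticRegression(max_iter=1000),
#           DecisionTreeClassifier(max_depth=5),
#           GaussianNB()]
# for m in models: m.fit(X_train, y_train)
# P = averaged_conditional(models, [1/3, 1/3, 1/3], X_test)
```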

14 The paper proves that the expected error of model averaging is less than that of the single models it combines. What this says: compute many models in different ways; don't hang everything on one tree.
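The paper's own proof is not reproduced in the slides; a standard convexity argument in the same spirit, for squared loss and posterior weights w_k summing to one, is the following sketch:

```latex
\[
\Bigl(\textstyle\sum_k w_k\, P(y\,|\,x,M_k) - P(y\,|\,x)\Bigr)^{2}
\;\le\;
\sum_k w_k \bigl(P(y\,|\,x,M_k) - P(y\,|\,x)\bigr)^{2}
\qquad \text{(Jensen's inequality, } \textstyle\sum_k w_k = 1\text{)}.
\]
% The squared error of the posterior-weighted average never exceeds the
% posterior-weighted average of the individual models' squared errors.
```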

15 Solutions when this number is large: when too many base models' estimates are off track, the power of model averaging is limited. In this case we need to make smart use of unlabeled examples, which are unbiased. Reasonable assumption: unlabeled examples are usually plentiful and easier to get.

16 How to Use Them. Estimate the joint probability P(x,y) instead of just the conditional probability, i.e., P(x,y) = P(y|x)P(x). This makes no difference with a single model, but it does with multiple models, as the sketch below shows.
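A hedged sketch of joint-probability averaging: each P(y|x) model is paired with a P(x) model estimated from unlabeled data, and class scores are combined as P(x,y) = P(y|x)P(x). The kernel-density estimator for P(x) and the one-to-one pairing of models are assumptions for illustration, not necessarily the paper's construction.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def joint_probability_scores(cond_models, density_models, X):
    """Average P(x,y) = P(y|x) * P(x) over model pairs; since P(x) is fit on
    unlabeled data, it effectively re-weights the conditional models."""
    scores = None
    for clf, kde in zip(cond_models, density_models):
        p_y_given_x = clf.predict_proba(X)                  # (n, C)
        p_x = np.exp(kde.score_samples(X)).reshape(-1, 1)   # (n, 1)
        contrib = p_y_given_x * p_x
        scores = contrib if scores is None else scores + contrib
    return scores / len(cond_models)

# Usage sketch: fit each KDE on (bootstraps of) the unlabeled examples.
# kdes = [KernelDensity(bandwidth=0.2).fit(X_unlabeled) for _ in cond_models]
# y_pred = joint_probability_scores(clfs, kdes, X_test).argmax(axis=1)
```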

17 Example of How This Works. P1(+|x) = 0.8 and P2(+|x) = 0.4; P1(-|x) = 0.2 and P2(-|x) = 0.6. With model averaging, P(+|x) = (0.8 + 0.4)/2 = 0.6 and P(-|x) = (0.2 + 0.6)/2 = 0.4, so the prediction will be +.

18 But if there are two P(x) models giving P1(x) = 0.05 and P2(x) = 0.4, then P(+,x) = 0.05 * 0.8 + 0.4 * 0.4 = 0.2 and P(-,x) = 0.05 * 0.2 + 0.4 * 0.6 = 0.25. Recall that with model averaging P(+|x) = 0.6 and P(-|x) = 0.4, so the prediction is +; but now the prediction will be - instead of +. Key idea: unlabeled examples can be used to estimate P(x) and thereby re-weight the models.
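The arithmetic on these two slides can be checked directly; a tiny sketch using only the numbers above:

```python
# Conditional-probability averaging (slide 17): predicts +
p_plus_avg = (0.8 + 0.4) / 2    # 0.6
p_minus_avg = (0.2 + 0.6) / 2   # 0.4

# Joint-probability averaging with two P(x) models (slide 18): predicts -
p_plus_joint = 0.05 * 0.8 + 0.4 * 0.4    # 0.20
p_minus_joint = 0.05 * 0.2 + 0.4 * 0.6   # 0.25

assert p_plus_avg > p_minus_avg          # conditional averaging says +
assert p_minus_joint > p_plus_joint      # joint averaging says -
```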

19 Improve P(y|x). Use a semi-supervised discriminant learning procedure (Vittaut et al., 2002). Basic procedure: use the learned models to predict the unlabeled examples; combine a random sample of the predicted unlabeled examples with the labeled training data; re-train the model; repeat until the predictions on the unlabeled examples remain stable.
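A minimal self-training sketch of this procedure. The base classifier, the sampling fraction, and the exact stability test are assumptions for illustration; the slide attributes the original procedure to Vittaut et al. (2002).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def improve_conditional(X_lab, y_lab, X_unlab, frac=0.2, max_iter=10, seed=0):
    """Iteratively augment the labeled data with a random sample of
    self-labeled unlabeled examples until predictions stop changing."""
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    prev = model.predict(X_unlab)
    for _ in range(max_iter):
        idx = rng.choice(len(X_unlab), size=int(frac * len(X_unlab)), replace=False)
        X_aug = np.vstack([X_lab, X_unlab[idx]])
        y_aug = np.concatenate([y_lab, prev[idx]])   # predicted labels join the training data
        model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
        pred = model.predict(X_unlab)
        if np.array_equal(pred, prev):               # predictions have stabilized
            break
        prev = pred
    return model
```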

20 Experiments. Feature bias generation: sort the examples according to their feature values and chop off the top, as in the sketch below.
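A hedged sketch of this feature-bias generator. Which feature to sort on and how large a fraction to chop off are assumptions; the slide only says to sort by feature values and chop off the top.

```python
import numpy as np

def feature_bias_sample(X, y, feature=0, chop_frac=0.3):
    """Sort examples by one feature's value and drop the top fraction,
    so that P(s=1|x,y) depends on x only."""
    order = np.argsort(X[:, feature])
    keep = order[: int(len(order) * (1.0 - chop_frac))]
    return X[keep], y[keep]
```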

21 Class bias generation: put the examples into class bins. Randomly generate a prior class probability distribution P(y), i.e., just the numbers, such as P(+) = 0.1 and P(-) = 0.9. Sample without replacement from the class bins according to P(y).
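A hedged sketch of the class-bias generator. The Dirichlet draw for the random prior P(y) and the total sample size are assumptions for illustration.

```python
import numpy as np

def class_bias_sample(X, y, n_samples, seed=0):
    """Draw a random class prior P(y), then sample without replacement from
    each class bin according to that prior: P(s=1|x,y) depends on y only."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    prior = rng.dirichlet(np.ones(len(classes)))     # random P(y), e.g. (0.1, 0.9)
    chosen = []
    for c, p in zip(classes, prior):
        bin_idx = np.flatnonzero(y == c)             # the class bin for class c
        k = min(len(bin_idx), int(round(p * n_samples)))
        chosen.append(rng.choice(bin_idx, size=k, replace=False))
    keep = np.concatenate(chosen)
    return X[keep], y[keep]
```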

22 Complete bias generation. Recall: the probability of sampling an example (x,y) depends on both x and y. Easiest simulation: sample (x,y) without replacement from the training data.
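A hedged sketch of this "easiest simulation": draw (x,y) pairs without replacement from the training data. The sample size is a free parameter here, an assumption for illustration.

```python
import numpy as np

def complete_bias_sample(X, y, n_samples, seed=0):
    """Sample (x, y) pairs without replacement from the training data,
    so which pairs remain available depends on both x and y."""
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(X), size=min(n_samples, len(X)), replace=False)
    return X[keep], y[keep]
```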

23 Feature Bias

24 Datasets: Adult (2 classes); SJ (3 classes); SS (3 classes); Pendig (10 classes); ArtiChar (10 classes); Query (4 classes); Donation (2 classes, cost-sensitive); Credit Card (2 classes, cost-sensitive).

25 Winners and Losers. A single model *never wins*. Under feature bias, the winners are: model averaging, *with or without* improved conditional probabilities from unlabeled examples; and joint probability averaging with *uncorrelated* P(y|x) and P(x) models (details in the paper). Under class bias, the winner is joint probability averaging with *correlated* P(y|x) and *improved* P(x) models. Under complete bias, the winner is model averaging with improved P(y|x).

26 Summary. According to our definition, sample selection bias is ubiquitous. Categorization of sample selection bias into 4 types is useful for analysis, but hard to use in practice. In practice, the key question is the relative number of examples with inaccurate P(y|x). Small: use model averaging of the conditional probabilities of several models. Medium: use model averaging of improved conditional probabilities. Large: use joint probability averaging of uncorrelated conditional probability and feature probability models.


28 When the number is small: the paper proves that the expected error of model averaging is less than that of the single models it combines. What this says: compute models in different ways; don't hang yourself on one tree.

