1 Using Asymmetric Distributions to Improve Text Classifier Probability Estimates Paul N. Bennett Computer Science Dept. Carnegie Mellon University SIGIR 2003

2 Abstract  Text classifiers that give probability estimates are more readily applicable in a variety of scenarios.  The quality of those estimates is crucial.  Review: a variety of standard approaches to converting the scores (and poor probability estimates) output by a text classifier into high-quality estimates

3 Cont’d  New models: motivated by the intuition that the empirical score distributions for the “extremely irrelevant”, “hard to discriminate”, and “obviously relevant” documents often differ significantly.

4 Problem Definition & Approach  Differences from earlier approaches –Asymmetric parametric models suitable for use when little training data is available –Explicitly analyze the quality of probability estimates and provide significance tests –Target text classifier outputs, whereas the majority of the previous literature targeted the output of search engines

5 Problem Definition

6 Cont’d  There are two general types of parametric approaches: –Fit the posterior function directly, i.e., there is one function estimator that performs a direct mapping of the score s to the probability P(+|s(d)) –Break the problem down as shown in the gray box. An estimator for each of the class-conditional densities (p(s|+) and p(s|-)) is produced, then Bayes’ rule and the class priors are used to obtain the estimate for P(+|s(d))
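
A minimal sketch of the second approach, with Gaussian class-conditional densities chosen purely for illustration; the function names and the fitted parameters below are hypothetical, not taken from the slides.

```python
import math

def gaussian_pdf(s, mu, sigma):
    """Illustrative class-conditional density p(s | class), modeled as a Gaussian."""
    return math.exp(-(s - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def posterior_positive(s, pos_params, neg_params, prior_pos):
    """P(+ | s(d)) via Bayes' rule from p(s|+), p(s|-) and the class priors."""
    p_pos = gaussian_pdf(s, *pos_params) * prior_pos
    p_neg = gaussian_pdf(s, *neg_params) * (1.0 - prior_pos)
    return p_pos / (p_pos + p_neg)

# Hypothetical fitted (mean, std) parameters for each class's score distribution.
print(posterior_positive(0.3, pos_params=(1.0, 0.8), neg_params=(-1.0, 0.8), prior_pos=0.2))
```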

7 Motivation for Asymmetric Distributions  Using standard Gaussians fails to capitalize on a basic characteristic commonly seen  Intuitively, the area between the modes corresponds to the hard examples, which are difficult for this classifier to distinguish, while areas outside the modes are the extreme examples that are usually easily distinguished

8 Cont’d

9  Ideally, there will exist scores θ− and θ+ such that all examples with scores greater than θ+ are relevant and all examples with scores less than θ− are irrelevant.  The distance |θ+ − θ−| corresponds to the margin in some classifiers, and an attempt is often made to maximize this quantity.  Because text classifiers have training data with which to separate the classes, the final behavior of the score distributions is primarily a function of the amount of training data and the consequent separation between the classes achieved.

10 Cont’d  Practically, some examples will fail between θ - and θ +, and it is often important to estimate the probabilities of these examples well (since they correspond to the “hard” examples)  Justifications can be given for both why you may find more and less examples between θ - and θ + than outside of them, but there are few empirical reasons to believe that the distributions should be symmetric.  A natural first candidate for an asymmetric distribution is to generalize a common symmetric distribution, e.g. the Laplace or the Gaussian

11 Asymmetric Laplace
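
Slide 11 presumably shows the density itself; as a stand-in, here is a minimal Python sketch of an asymmetric Laplace with mode θ and separate inverse scales β (left of the mode) and γ (right of it). The exact parameterization is an assumption, not copied from the slide.

```python
import math

def asymmetric_laplace_pdf(x, theta, beta, gamma):
    """Asymmetric Laplace: exponential decay with rate beta left of the mode theta
    and rate gamma right of it; beta*gamma/(beta+gamma) normalizes the density."""
    norm = beta * gamma / (beta + gamma)
    if x <= theta:
        return norm * math.exp(-beta * (theta - x))
    return norm * math.exp(-gamma * (x - theta))
```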

12 Asymmetric Gaussian
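
Analogously for slide 12, a sketch of an asymmetric Gaussian with mode θ and separate standard deviations σ_l (left of the mode) and σ_r (right of it); again the parameterization is an assumption rather than a transcription of the slide.

```python
import math

def asymmetric_gaussian_pdf(x, theta, sigma_l, sigma_r):
    """Asymmetric Gaussian: a half-Gaussian with std sigma_l left of the mode theta
    and std sigma_r right of it, sharing the normalizer 2/(sqrt(2*pi)*(sigma_l+sigma_r))."""
    norm = 2.0 / (math.sqrt(2.0 * math.pi) * (sigma_l + sigma_r))
    sigma = sigma_l if x <= theta else sigma_r
    return norm * math.exp(-(x - theta) ** 2 / (2.0 * sigma ** 2))
```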

13 Gaussian vs. Asymmetric Gaussian

14 Parameter Estimation  Two choices: –(1) Use numerical estimation to estimate all three parameters at once. –(2) Fix the value of θ, estimate the other two given that choice of θ, and then consider alternate values of θ.  Because the analysis is simpler for the latter alternative, we choose that method.
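
A minimal sketch of the chosen alternative: fix θ, compute the remaining parameters' MLEs conditional on that θ, and keep the θ with the highest resulting likelihood. The helper mle_given_theta (returning the conditional MLEs and the resulting log-likelihood) and the use of observed scores as candidate modes are assumptions made for illustration.

```python
def fit_by_theta_sweep(scores, mle_given_theta):
    """Fix theta, compute the remaining parameters' MLEs given that theta,
    then keep the theta whose conditional fit has the highest log-likelihood."""
    best = None
    for theta in sorted(set(scores)):  # candidate modes: the observed scores
        params, log_lik = mle_given_theta(scores, theta)
        if best is None or log_lik > best[0]:
            best = (log_lik, theta, params)
    return best  # (log-likelihood, chosen theta, remaining parameters)
```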

15 Asymmetric Laplace MLEs
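
The slide's own expressions are not reproduced in the transcript; the closed forms below follow from maximizing the fixed-θ log-likelihood of the asymmetric Laplace sketched earlier, so treat them as a reconstruction rather than a transcription.

```python
import math

def asymmetric_laplace_mles(scores, theta):
    """Closed-form MLEs for beta and gamma with theta held fixed.
    D_l and D_r are the summed absolute deviations left and right of theta.
    Assumes scores fall on both sides of theta (otherwise the estimates degenerate)."""
    d_l = sum(theta - x for x in scores if x <= theta)
    d_r = sum(x - theta for x in scores if x > theta)
    n = len(scores)
    root = math.sqrt(d_l * d_r)
    beta = n / (d_l + root)
    gamma = n / (d_r + root)
    return beta, gamma
```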

16 Asymmetric Gaussian MLEs
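
Likewise for the asymmetric Gaussian: the estimates below come from setting the derivatives of the fixed-θ log-likelihood of the density sketched earlier to zero, and are a reconstruction rather than the slide's own formulas.

```python
def asymmetric_gaussian_mles(scores, theta):
    """Closed-form MLEs for sigma_l and sigma_r with theta held fixed.
    S_l and S_r are the summed squared deviations left and right of theta.
    Assumes scores fall on both sides of theta (otherwise the estimates degenerate)."""
    s_l = sum((x - theta) ** 2 for x in scores if x <= theta)
    s_r = sum((x - theta) ** 2 for x in scores if x > theta)
    n = len(scores)
    sigma_l = ((s_l + (s_l ** 2 * s_r) ** (1.0 / 3.0)) / n) ** 0.5
    sigma_r = ((s_r + (s_r ** 2 * s_l) ** (1.0 / 3.0)) / n) ** 0.5
    return sigma_l, sigma_r
```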

17 Methods Compared  Gaussians  Asymmetric Gaussians  Laplace Distributions  Asymmetric Laplace Distributions  Logistic Regression  Logistic Regression with Noisy Class Labels

18 Data  MSN Web Directory –A large collection of heterogeneous web pages that have been hierarchically classified. –13 categories used, train/test = 50078/10024  Reuters –The Reuters-21578 corpus. –135 classes, train/test = 9603/3299  TREC-AP –A collection of AP news stories from 1988 to 1990. –20 categories, train/test = 142791/66992

19 Performance Measures  Log-loss –For a document d with true class c(d) ∈ {+, −}, log-loss is defined as δ(c(d),+) log P(+|d) + δ(c(d),−) log P(−|d), where δ(a,b) = 1 if a = b and 0 otherwise.  Squared error  Error –How the methods would perform if a false positive were penalized the same as a false negative.
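
A sketch of how the three measures could be computed from estimated probabilities P(+|d) and true labels; the sign convention (reporting mean negative log-likelihood), the per-document averaging, and the 0.5 decision threshold are assumptions, not taken from the slides.

```python
import math

def evaluate(probs, labels, eps=1e-12):
    """probs: estimated P(+|d) per document; labels: 1 for relevant (+), 0 otherwise."""
    n = len(probs)
    log_loss = -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                    for p, y in zip(probs, labels)) / n
    squared_error = sum((y - p) ** 2 for p, y in zip(probs, labels)) / n
    error = sum((p >= 0.5) != bool(y) for p, y in zip(probs, labels)) / n  # threshold at 0.5
    return log_loss, squared_error, error
```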

20 Results & Discussion

21 Cont’d  A. Laplace, LR+Noise, and LogReg quite clearly outperform the other methods.  LR+Noise and LogReg tend to perform slightly better than A. Laplace on some tasks with respect to log-loss and squared error.  However, A. Laplace always produces the fewest errors across all the tasks

22 Goodness of Fit – naive Bayes

23 Cont’d – SVM

24 LogOdds vs. s(d) – naive Bayes

25 Cont’d – SVM

26 Gaussian vs. Laplace  The asymmetric Gaussian tends to place the mode more accurately than a symmetric Gaussian.  However, the asymmetric Gaussian distributes too much mass to the outside tails while failing to fit around the mode accurately enough.  A. Gaussian is penalized quite heavily when outliers are present.

27 Cont’d  The asymmetric Laplace places much more emphasis around the mode.  Even in cases where the test distribution differs from the training distribution, A. Laplace still yields a solution that gives a better fit than LogReg.

