
1 WEB BAR 2004 Advanced Retrieval and Web Mining Lecture 17

2 Today’s Topics: text classification, logistic regression, support vector machines

3 Intro to Logistic Regression. Naïve Bayes and logistic regression are both probabilistic models. Naïve Bayes is a generative model. Logistic regression is a discriminative model: it more directly optimizes classification accuracy.

4 Intro to Logistic Regression. A generative model predicts the probability that document d will be generated by a source c. Naïve Bayes is such a model: its parameters, i.e. the P(w|c)'s, are fit to optimally predict the generation of d.

5 Classifying Text with a Generative Model. One source model for each class c. Choose the class c with the largest value of the generative model's score. Our criterion is: how likely is it that this model generated the document? Classification accuracy is optimized only indirectly and imperfectly. Accuracy and MAP are different goals, and will in general lead to different predictions.
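The decision rule itself was an image on the slide; for a multinomial Naïve Bayes source model, the standard form it corresponds to is:

```latex
% Standard generative (multinomial Naive Bayes) decision rule
% (reconstruction; the slide's own formula was an image)
\hat{c} \;=\; \arg\max_{c}\ P(c) \prod_{w \in d} P(w \mid c)^{\,tf_{w,d}}
```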

6 Naïve Bayes. For binary classification (class C vs. not C), we have:
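The formula on this slide was an image; the standard binary log-odds form of Naïve Bayes that it most likely showed is:

```latex
% Binary Naive Bayes written as a log-odds score
% (reconstruction; the slide's own formula was an image)
\log\frac{P(C \mid d)}{P(\bar{C} \mid d)}
\;=\; \log\frac{P(C)}{P(\bar{C})}
\;+\; \sum_{w \in d} tf_{w,d}\,\log\frac{P(w \mid C)}{P(w \mid \bar{C})}
```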

7 The Discriminative Alternative: Logistic Regression. Naïve Bayes: a generative model, as above. Logistic regression, the discriminative alternative: directly model the probability of the class conditional on the words w.

8 Logistic Regression. Tune the parameters β_w to optimize the conditional likelihood (the class probability predictions). This is what a statistician would probably tell you to use if you said you had a categorical decision problem (like text categorization).
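Spelled out, the conditional log-likelihood being maximized over training documents d_i with labels c_i is the following (the indexing over training pairs is my addition; the notation otherwise follows slides 9 and 12):

```latex
% Conditional log-likelihood maximized by logistic regression
% (training-pair indexing i is an assumed notation, not from the slides)
\ell(\alpha,\beta) \;=\; \sum_{i} \log P(c_i \mid d_i;\, \alpha, \beta),
\qquad
P(C \mid d;\, \alpha, \beta) \;=\; \frac{1}{1 + e^{-(\alpha + \beta \cdot X_d)}}
```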

9 The Logit-Based Model. The simplest model for optimizing predictive accuracy is linear regression. Why don't we use linear regression? p = α + βX + e: the normality assumption does not work for probabilities. We need to transform the input variables and the predicted variable to be able to apply regression. The transformation is the logit: logit(p) = ln[p/(1−p)] = α + βX + e. Apply it also to the input variables: logit(p) = a + b1*logit(F1) + b2*logit(F2) + ... + bn*logit(Fn) (where p ≈ class probability and Fi ≈ word i). Model: the logit of the predicted probability is the weighted sum of the logits of the probabilities associated with each of the features. This is an instance of a generalized linear model, where one response is conditioned on all features.

10 Logit and Logistic. Logit: ln[p/(1−p)] = α + βX. The slope coefficient (β) is interpreted as the rate of change in the log odds as X changes. A more intuitive interpretation of the logit is the “odds ratio”: since p/(1−p) = exp(α + βX), exp(β) gives the effect of the independent variable on the odds of having a certain classification. Logistic(X) = 1/(1 + exp(−α − βX)).
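The two transforms are inverses of each other; the algebraic step from the logit model back to a class probability, implicit between these slides, is:

```latex
% Inverting the logit: from log-odds back to a probability
\ln\frac{p}{1-p} = \alpha + \beta X
\;\Longleftrightarrow\;
\frac{p}{1-p} = e^{\alpha + \beta X}
\;\Longleftrightarrow\;
p = \frac{1}{1 + e^{-(\alpha + \beta X)}} = \operatorname{logistic}(\alpha + \beta X)
```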

11 Logit and Logistic Transforms. logit(p) = ln(p/[1−p]); logistic(x) = 1/(1+e^(−x)). [Plot of the logit and logistic curves.]

12 Classification. Compute the vector representation X of the document. Compute z = α + β·X, the dot product of the weight vector β with the vector representation X; β defines a hyperplane, as before. P(C) = logistic(z) = 1/(1+e^(−z)). P(C) is the probability that the document is in the class. If we use a good method to estimate β, then this will be a “good” probability (as opposed to Naïve Bayes).
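A minimal sketch of this scoring step in Python (NumPy), assuming a bag-of-words vector x and already-estimated weights; the variable names and toy values are mine, not from the slides:

```python
import numpy as np

def predict_proba(x, beta, alpha):
    """Probability that a document with feature vector x belongs to class C."""
    z = alpha + np.dot(beta, x)        # z = alpha + beta . X
    return 1.0 / (1.0 + np.exp(-z))    # logistic(z)

# Toy example with made-up weights for a 3-term vocabulary.
beta = np.array([1.2, -0.7, 0.3])
alpha = -0.5
x = np.array([2.0, 0.0, 1.0])          # raw term frequencies
print(predict_proba(x, beta, alpha))   # ~0.90 -> assign to class C if > 0.5
```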

13 Training a Logistic Regression Model. Training consists of computing the parameters α and β. MLE – maximum likelihood estimation. MLE is a statistical method for estimating the coefficients of a model by maximizing a likelihood function. Here the likelihood function L measures the probability of observing the particular set of class labels (C, not C) that occur in the training data. LogR MLE is normally done by some form of iterative fitting algorithm, or a gradient descent procedure such as CG (conjugate gradient). This is expensive for large models with many features.
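As a concrete, hypothetical illustration of fitting regularized logistic regression on text, here is a sketch using scikit-learn; the corpus, labels, and parameter values are made up, and scikit-learn is my choice of tool, not something the lecture specifies:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny made-up training set: documents and binary class labels (C / not C).
docs   = ["cheap pills online", "meeting agenda attached",
          "win money now", "project status report"]
labels = [1, 0, 1, 0]

# Binary (or raw TF) weighting, as on slide 14.
X = CountVectorizer(binary=True).fit_transform(docs)

# C is the inverse regularization strength; regularization ("smoothing")
# is what makes LogR work well on sparse text data (slide 15).
clf = LogisticRegression(C=1.0, max_iter=1000).fit(X, labels)

print(clf.predict_proba(X[:1]))  # [P(not C), P(C)] for the first document
```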

14 LR & NB: Same Parameters! - Binary or raw TF weighting - Optimized differently

15 Performance. Early results with LogR were disappointing, because people didn't understand how to regularize (smooth) LogR to cope with sparse data. Done right, LogR outperforms NB in text classification: NB optimizes its parameters to predict words, while LogR optimizes them to predict the class. LogR seems as good as SVMs (or any known text categorization method – Zhang & Oles 2001), though less studied and less trendy than SVMs.

16 Support Vector Machines

17 Recall: Which Hyperplane? In general, there are many possible solutions. The Support Vector Machine (SVM) finds an optimal one.

18 Support Vector Machine (SVM). SVMs maximize the margin around the separating hyperplane. The decision function is fully specified by a subset of the training samples, the support vectors. Finding the hyperplane is a quadratic programming problem. Seen by many as the most successful current text classification method. [Figure: separating hyperplane with maximized margin and support vectors.]

19 Maximum Margin: Formalization. w: hyperplane normal; x_i: data point i; y_i: class of data point i (+1 or −1). Constrained optimization formalization: maximize the margin 2/||w||, subject to constraints (1) and (2) requiring every training point to lie on the correct side of the margin.
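The constraint formulas (1) and (2) were images on the slide; a standard way to write the hard-margin formulation that yields the 2/||w|| margin is (my reconstruction):

```latex
% Standard hard-margin SVM formulation
% (reconstruction; the slide's constraints (1)-(2) were images)
\min_{w,\,b}\ \tfrac{1}{2}\lVert w \rVert^{2}
\qquad\text{subject to}\qquad
y_i\,(w \cdot x_i + b) \;\ge\; 1 \quad \text{for all data points } i
```

Minimizing ||w||²/2 under these constraints is equivalent to maximizing the margin 2/||w||.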

20 Support Vectors. The key differentiator of SVMs is their reliance on support vectors. Conceptually: only what is close to the decision boundary should matter. Why is the margin determined by the support vectors only?

21 Quadratic Programming. The quadratic programming setup explains two properties of SVMs: 1. classification is determined by the support vectors only; 2. the notion of a kernel.

22 Quadratic Programming. α_i: Lagrange multipliers; x_i: data point i; y_i: class of data point i (+1 or −1). One can show that the hyperplane normal w with maximum margin is a weighted combination of the data points, where the α_i are the solution to maximizing a dual objective (both formulas were shown on the slide). Most α_i will be zero; the non-zero α_i are the support vectors.
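In standard notation (a reconstruction; the slide's own formulas were images), the solution and the dual objective are:

```latex
% Standard SVM dual (reconstruction; the slide's formulas were images)
w \;=\; \sum_i \alpha_i\, y_i\, x_i,
\qquad
\max_{\alpha}\ \sum_i \alpha_i \;-\; \tfrac{1}{2}\sum_{i}\sum_{j} \alpha_i \alpha_j\, y_i y_j\, (x_i \cdot x_j)
\quad\text{s.t.}\quad \alpha_i \ge 0,\ \ \sum_i \alpha_i y_i = 0
```

Note that the training data enter only through the dot products x_i · x_j, which is exactly what the kernel trick on slide 32 exploits.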

23 Non-Separable Case. Now we know how to build a separator for two linearly separable classes. What about classes whose example documents are not linearly separable?

24 Not Linearly Separable Find a line that penalizes points on “the wrong side”.

25 Penalizing Bad Points. Define a signed distance for each point with respect to the separator ax + by = c: (ax + by) − c for red points, c − (ax + by) for green points. The value is negative for bad points (points on the wrong side).
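A tiny sketch of this bookkeeping in Python; the points and line coefficients are made up for illustration:

```python
# Signed score of each point w.r.t. the separator a*x + b*y = c.
# A negative score means the point is on the wrong side of the line.
a, b, c = 1.0, 2.0, 3.0   # hypothetical separator

red   = [(4.0, 1.0), (0.0, 0.5)]   # class "red": want a*x + b*y - c >= 0
green = [(0.0, 0.0), (3.0, 2.0)]   # class "green": want c - (a*x + b*y) >= 0

red_scores   = [a*x + b*y - c for (x, y) in red]      # [3.0, -2.0]
green_scores = [c - (a*x + b*y) for (x, y) in green]  # [3.0, -4.0]

print(red_scores, green_scores)    # the -2.0 and -4.0 points are the "bad" ones
```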

26 Classification with SVMs. Given a new point (x1, x2), we can score its projection onto the hyperplane normal: compute score = w·x + b; in 2 dimensions, score = w1·x1 + w2·x2 + b. Set a confidence threshold t. Score > t: yes; score < −t: no; else: don't know.
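A minimal sketch of this thresholded decision rule; the weights, bias, and threshold are made-up values:

```python
import numpy as np

def svm_decision(x, w, b, t):
    """Return 'yes', 'no', or 'don't know' based on the SVM score and threshold t."""
    score = np.dot(w, x) + b
    if score > t:
        return "yes"
    if score < -t:
        return "no"
    return "don't know"

w, b, t = np.array([2.0, -1.0]), 0.5, 1.0           # hypothetical model and threshold
print(svm_decision(np.array([3.0, 1.0]), w, b, t))  # score 5.5 -> "yes"
```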

27 SVMs: Predicting Generalization. We want the classifier with the best generalization (best accuracy on new data). What are the clues to good generalization? A large training set; low error on the training set; capacity/variance (number of parameters in the model, expressive power of the model). SVMs give you an explicit bound on the error on new data based on these.

28 Capacity/Variance: VC Dimension. Theoretical risk bound (the formula was shown on the slide): Risk = mean error rate; α – the model (defined by its parameters); R_emp – empirical risk; l – number of observations; h – VC dimension. The bound holds with probability (1 − η). VC dimension/capacity: the maximum number of points that can be shattered. A set can be shattered if the classifier can learn every possible labeling of it. VC = Vapnik–Chervonenkis dimension.
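The bound itself was an image on the slide; the standard Vapnik bound matching this notation (see the Burges tutorial in the resources) is:

```latex
% Standard VC risk bound, holding with probability 1 - eta
% (reconstruction; the slide's formula was an image)
R(\alpha) \;\le\; R_{emp}(\alpha) \;+\;
\sqrt{\frac{h\left(\ln\frac{2l}{h} + 1\right) - \ln\frac{\eta}{4}}{l}}
```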

29 Capacity of Hyperplanes?

30 Exercise. Suppose you have n points in d dimensions, labeled red or green. How big does n need to be (as a function of d) in order to create an example with the red and green points not linearly separable? E.g., for d=2, n ≥ 4.

31 Capacity/Variance: VC Dimension. Theoretical risk bound (as above): R_emp – empirical risk; l – number of observations; h – VC dimension; the bound holds with probability (1 − η). This is an important theoretical property, but not very often used in practice.

32 SVM Kernels. Recall that we are maximizing the dual objective, in which the data occur only in dot products. We can therefore map the data into a very high-dimensional space (even an infinite-dimensional one!) as long as the kernel is computable: for a mapping function Ф, compute the kernel K(i,j) = Ф(x_i)·Ф(x_j). Example:
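The example on the slide was an image; a classic example of such a mapping (my reconstruction, not necessarily the one shown) is the quadratic kernel in two dimensions:

```latex
% Classic quadratic-kernel example (reconstruction, not necessarily the slide's own example)
K(u, v) \;=\; (u \cdot v)^2
\quad\text{corresponds to}\quad
\Phi(x) \;=\; \bigl(x_1^2,\ \sqrt{2}\,x_1 x_2,\ x_2^2\bigr),
\qquad \Phi(u) \cdot \Phi(v) \;=\; (u \cdot v)^2 .
```

Computing K(u, v) directly costs one dot product, while the explicit feature map Ф already needs three dimensions here and grows rapidly with the degree and the input dimension.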

33 Kernels. Why use kernels? To make a non-separable problem separable, and to map the data into a better representational space. Common kernels: linear; polynomial; radial basis function (infinite-dimensional space). A sketch of these three kernels follows below.
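As a sketch, the three common kernels written as plain functions of two vectors; parameter names such as degree, c, and gamma are mine, not from the slides:

```python
import numpy as np

# The three common kernels from the slide, as functions of two input vectors.
def linear_kernel(u, v):
    return np.dot(u, v)

def polynomial_kernel(u, v, degree=2, c=1.0):
    return (np.dot(u, v) + c) ** degree

def rbf_kernel(u, v, gamma=0.5):
    # Corresponds to an infinite-dimensional feature space.
    return np.exp(-gamma * np.sum((u - v) ** 2))

u, v = np.array([1.0, 2.0]), np.array([0.0, 1.0])
print(linear_kernel(u, v), polynomial_kernel(u, v), rbf_kernel(u, v))
```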

34 Results for Kernels (Joachims)

35 Performance of SVMs. SVMs are seen as the best-performing method by many, but the statistical significance of most results is not clear. There are many methods that perform about as well as SVMs; an example is regularized logistic regression (Zhang & Oles). An example of a comparison study: Yang & Liu.

36 Yang&Liu: SVM vs Other Methods

37 Yang&Liu: Statistical Significance

38 Summary. Support vector machines (SVMs): choose the hyperplane based on support vectors; a support vector is a “critical” point close to the decision boundary. Kernels are a powerful and elegant way to define a similarity metric. SVMs come with a bound on the “risk” (expected error on the test set). Best-performing text classifier? Partly popular due to the availability of svmlight, which is accurate and fast – and free (for research). Logistic regression (LR): a traditional statistical technique for classification. It does not work “out of the box” due to the high dimensionality of text; robust/regularized versions perform as well as SVMs? No equivalent to svmlight is available. (Degree-1) SVMs and LR are linear classifiers.

39 Resources
C. Manning and H. Schuetze. Foundations of Statistical Natural Language Processing. Chapter 16. MIT Press.
T. Hastie, R. Tibshirani and J. Friedman. Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer-Verlag, New York.
C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. 1998.
R. M. Tong, L. A. Appelbaum, V. N. Askman, J. F. Cunningham. Conceptual Information Retrieval using RUBRIC. Proc. ACM SIGIR, pp. 247-253, 1987.
S. T. Dumais. Using SVMs for text categorization. IEEE Intelligent Systems, 13(4), Jul/Aug 1998.
S. T. Dumais, J. Platt, D. Heckerman and M. Sahami. Inductive learning algorithms and representations for text categorization. Proceedings of CIKM '98, pp. 148-155, 1998.
Y. Yang and X. Liu. A re-examination of text categorization methods. Proceedings of the 22nd Annual International SIGIR, 1999.
T. Zhang and F. J. Oles. Text Categorization Based on Regularized Linear Classification Methods. Information Retrieval 4(1): 5-31, 2001.
'Classic' Reuters data set: http://www.daviddlewis.com/resources/testcollections/reuters21578/

