Introduction to Information Retrieval
Lecture 15: Text Classification & Naive Bayes

A text classification task: spam filtering

From: ‘‘’’
Subject: real estate is the only way... gem oalvgkay

Anyone can buy real estate with no money down
Stop paying rent TODAY!
There is no need to spend hundreds or even thousands for similar courses
I am 22 years old and I have already purchased 6 properties using the methods outlined in this truly INCREDIBLE ebook.
Change your life NOW!
=================================================
Click Below to order:
=================================================

How would you write a program that would automatically detect and delete this type of message?

Formal definition of TC: Training

Given:
- A document space X. Documents are typically represented in some type of high-dimensional space.
- A fixed set of classes C = {c_1, c_2, ..., c_J}. The classes are human-defined for the needs of an application (e.g., relevant vs. nonrelevant).
- A training set D of labeled documents, where each labeled document ⟨d, c⟩ ∈ X × C.

Using a learning method or learning algorithm, we then wish to learn a classifier γ that maps documents to classes:

    γ : X → C

Formal definition of TC: Application/Testing

Given: a description d ∈ X of a document
Determine: γ(d) ∈ C, that is, the class that is most appropriate for d
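To make this contract concrete, here is a small illustrative sketch in Python (not from the lecture; the names learn, Document, and Class are mine). The learning method shown is a deliberately trivial majority-class baseline, just to show the shape of γ : X → C:

    from collections import Counter
    from typing import Callable, List, Tuple

    Document = str   # a document description d in the document space X
    Class = str      # a class label c in C

    def learn(D: List[Tuple[Document, Class]]) -> Callable[[Document], Class]:
        # A deliberately trivial learning method: always predict the most
        # frequent class in the training set D. Naive Bayes, below, is a
        # real instance of the same contract.
        majority = Counter(c for _, c in D).most_common(1)[0][0]
        return lambda d: majority

    gamma = learn([("Beijing joins WTO", "China"),
                   ("Tokyo trade talks", "Japan"),
                   ("Chinese exports rise", "China")])
    print(gamma("some new document"))   # -> 'China'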

Topic classification

Examples of how search engines use classification

- Language identification (classes: English vs. French, etc.)
- Automatic detection of spam pages (spam vs. nonspam)
- Topic-specific or vertical search: restrict search to a "vertical" like "related to health" (relevant to vertical vs. not)

Classification methods: Statistical/Probabilistic

- This was our definition of the classification problem: text classification as a learning problem.
- (i) Supervised learning of the classification function γ and (ii) its application to classifying new documents.
- We will look at doing this using Naive Bayes.
- Naive Bayes requires hand-classified training data, but this manual classification can be done by non-experts.

Derivation of Naive Bayes rule

We want to find the class that is most likely given the document:

    c_map = argmax_{c ∈ C} P(c|d)

Apply Bayes' rule:

    c_map = argmax_{c ∈ C} P(d|c) P(c) / P(d)

Drop the denominator since P(d) is the same for all classes:

    c_map = argmax_{c ∈ C} P(d|c) P(c)

Too many parameters / sparseness

- There are too many parameters, one for each unique combination of a class and a sequence of words.
- We would need a very, very large number of training examples to estimate that many parameters.
- This is the problem of data sparseness.

Naive Bayes conditional independence assumption

To reduce the number of parameters to a manageable size, we make the Naive Bayes conditional independence assumption:

    P(d|c) = P(⟨t_1, ..., t_{n_d}⟩|c) = ∏_{1 ≤ k ≤ n_d} P(X_k = t_k|c)

We assume that the probability of observing the conjunction of attributes is equal to the product of the individual probabilities P(X_k = t_k|c).

The Naive Bayes classifier

- The Naive Bayes classifier is a probabilistic classifier.
- We compute the probability of a document d being in a class c as follows:

    P(c|d) ∝ P(c) ∏_{1 ≤ k ≤ n_d} P(t_k|c)

- n_d is the length of the document (number of tokens).
- P(t_k|c) is the conditional probability of term t_k occurring in a document of class c.
- We interpret P(t_k|c) as a measure of how much evidence t_k contributes that c is the correct class.
- P(c) is the prior probability of c.
- If a document's terms do not provide clear evidence for one class vs. another, we choose the class c with the highest P(c).

Maximum a posteriori class

- Our goal in Naive Bayes classification is to find the "best" class.
- The best class is the most likely or maximum a posteriori (MAP) class c_map:

    c_map = argmax_{c ∈ C} P̂(c|d) = argmax_{c ∈ C} P̂(c) ∏_{1 ≤ k ≤ n_d} P̂(t_k|c)

  (We write P̂ because these are estimates of the true probabilities.)

Taking the log

- Multiplying lots of small probabilities can result in floating point underflow.
- Since log(xy) = log(x) + log(y), we can sum log probabilities instead of multiplying probabilities.
- Since log is a monotonic function, the class with the highest score does not change.
- So what we usually compute in practice is:

    c_map = argmax_{c ∈ C} [ log P̂(c) + ∑_{1 ≤ k ≤ n_d} log P̂(t_k|c) ]
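As a small illustration (my sketch, not from the slides) of why this matters in practice: multiplying even a hundred small term probabilities underflows to 0.0 in double precision, while the equivalent sum of logs stays perfectly representable:

    import math

    probs = [1e-5] * 100                         # 100 small term probabilities

    product = 1.0
    for p in probs:
        product *= p                             # 1e-500 underflows double precision
    print(product)                               # -> 0.0

    log_score = sum(math.log(p) for p in probs)  # sum of logs instead
    print(log_score)                             # -> about -1151.3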

Naive Bayes classifier

- Classification rule:

    c_map = argmax_{c ∈ C} [ log P̂(c) + ∑_{1 ≤ k ≤ n_d} log P̂(t_k|c) ]

- Simple interpretation:
  - Each conditional parameter log P̂(t_k|c) is a weight that indicates how good an indicator t_k is for c.
  - The prior log P̂(c) is a weight that indicates the relative frequency of c.
  - The sum of the log prior and the term weights is then a measure of how much evidence there is for the document being in the class.
  - We select the class with the most evidence.

Parameter estimation take 1: Maximum likelihood

- Estimate the parameters P̂(c) and P̂(t|c) from the training data. How?
- Prior:

    P̂(c) = N_c / N

  N_c: number of docs in class c; N: total number of docs.
- Conditional probabilities:

    P̂(t|c) = T_ct / ∑_{t' ∈ V} T_ct'

  T_ct is the number of tokens of t in training documents from class c (counting multiple occurrences).
- We've made a Naive Bayes independence assumption here: positional independence, i.e., P̂(X_k1 = t|c) = P̂(X_k2 = t|c) for any positions k1, k2.

The problem with maximum likelihood estimates: Zeros

    P(China|d) ∝ P(China) · P(BEIJING|China) · P(AND|China) · P(TAIPEI|China) · P(JOIN|China) · P(WTO|China)

- If WTO never occurs in class China in the training set:

    P̂(WTO|China) = 0

The problem with maximum likelihood estimates: Zeros (cont.)

- If there were no occurrences of WTO in documents in class China, we'd get a zero estimate:

    P̂(WTO|China) = T_China,WTO / ∑_{t' ∈ V} T_China,t' = 0

- → We will get P(China|d) = 0 for any document that contains WTO!
- Zero probabilities cannot be conditioned away.

To avoid zeros: Add-one smoothing

- Before:

    P̂(t|c) = T_ct / ∑_{t' ∈ V} T_ct'

- Now: add one to each count to avoid zeros:

    P̂(t|c) = (T_ct + 1) / ∑_{t' ∈ V} (T_ct' + 1) = (T_ct + 1) / (∑_{t' ∈ V} T_ct' + B)

- B is the number of different words (in this case the size of the vocabulary: |V| = M).
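A minimal sketch of add-one smoothing in Python (the function name is mine; T is assumed to be a per-class token-count table built during training):

    def smoothed_cond_prob(t, c, T, V):
        # P(t|c) = (T_ct + 1) / (sum over t' of T_ct' + B), with B = |V|.
        # T[c] is assumed to be a dict mapping term -> token count in class c.
        B = len(V)
        return (T[c].get(t, 0) + 1) / (sum(T[c].values()) + B)

Even a term with zero count in class c now gets probability 1 / (∑ T_ct' + B) rather than 0.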

To avoid zeros: Add-one smoothing

- Estimate parameters from the training corpus using add-one smoothing.
- For a new document, for each class, compute the sum of (i) the log of the prior and (ii) the logs of the conditional probabilities of the terms.
- Assign the document to the class with the largest score.

Naive Bayes: Training
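The original slide presents the training algorithm as a figure. A minimal Python sketch of the same scheme (multinomial model with add-one smoothing; function and variable names are mine, not the book's pseudocode):

    from collections import Counter, defaultdict

    def train_multinomial_nb(D):
        # D: list of (tokens, class) pairs, where tokens is a list of terms.
        V = sorted({t for tokens, _ in D for t in tokens})   # vocabulary
        N = len(D)                                           # total number of docs
        prior, condprob = {}, defaultdict(dict)
        for c in {label for _, label in D}:
            docs_c = [tokens for tokens, label in D if label == c]
            prior[c] = len(docs_c) / N                       # P(c) = N_c / N
            T_c = Counter(t for tokens in docs_c for t in tokens)
            denom = sum(T_c.values()) + len(V)               # add-one smoothing
            for t in V:
                condprob[c][t] = (T_c[t] + 1) / denom        # P(t|c)
        return V, prior, condprob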

Naive Bayes: Testing
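Again, the slide shows the algorithm as a figure; a matching Python sketch of the testing step (log-space scoring, as discussed above; terms outside the vocabulary are simply skipped):

    import math

    def apply_multinomial_nb(V, prior, condprob, tokens):
        # Return the MAP class for a tokenized test document.
        vocab = set(V)
        scores = {}
        for c in prior:
            score = math.log(prior[c])                  # log P(c)
            for t in tokens:
                if t in vocab:
                    score += math.log(condprob[c][t])   # + log P(t_k|c)
            scores[c] = score
        return max(scores, key=scores.get)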

Exercise

- Estimate the parameters of the Naive Bayes classifier.
- Classify the test document.

    docID   words in document                       in c = China?
    1       Chinese Beijing Chinese                 yes   (training)
    2       Chinese Chinese Shanghai                yes   (training)
    3       Chinese Macao                           yes   (training)
    4       Tokyo Japan Chinese                     no    (training)
    5       Chinese Chinese Chinese Tokyo Japan     ?     (test)

Example: Parameter estimates

Priors: P̂(c) = 3/4 and P̂(c̄) = 1/4

Conditional probabilities:

    P̂(CHINESE|c) = (5 + 1) / (8 + 6) = 6/14 = 3/7
    P̂(TOKYO|c) = P̂(JAPAN|c) = (0 + 1) / (8 + 6) = 1/14
    P̂(CHINESE|c̄) = (1 + 1) / (3 + 6) = 2/9
    P̂(TOKYO|c̄) = P̂(JAPAN|c̄) = (1 + 1) / (3 + 6) = 2/9

The denominators are (8 + 6) and (3 + 6) because the lengths of text_c and text_c̄ are 8 and 3, respectively, and because the constant B is 6, as the vocabulary consists of six terms.

Example: Classification

    P̂(c|d5) ∝ 3/4 · (3/7)^3 · 1/14 · 1/14 ≈ 0.0003
    P̂(c̄|d5) ∝ 1/4 · (2/9)^3 · 2/9 · 2/9 ≈ 0.0001

Thus, the classifier assigns the test document to c = China. The reason for this classification decision is that the three occurrences of the positive indicator CHINESE in d5 outweigh the occurrences of the two negative indicators JAPAN and TOKYO.
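Assuming the two sketches above, the worked example can be reproduced end to end (the label 'not-China' stands in for c̄):

    D = [("Chinese Beijing Chinese".split(), "China"),
         ("Chinese Chinese Shanghai".split(), "China"),
         ("Chinese Macao".split(), "China"),
         ("Tokyo Japan Chinese".split(), "not-China")]
    V, prior, condprob = train_multinomial_nb(D)

    d5 = "Chinese Chinese Chinese Tokyo Japan".split()
    print(apply_multinomial_nb(V, prior, condprob, d5))   # -> 'China'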

Generative model

- Generate a class with probability P(c).
- Generate each of the words (in their respective positions), conditional on the class but independent of each other, with probability P(t_k|c).
- To classify docs, we "reengineer" this process and find the class that is most likely to have generated the doc.
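For illustration only (my sketch, not from the slides), the generative story can be simulated directly from the estimated parameters:

    import random

    def generate_doc(prior, condprob, length):
        # Step 1: generate a class with probability P(c).
        classes = list(prior)
        c = random.choices(classes, weights=[prior[x] for x in classes])[0]
        # Step 2: generate each word independently, with probability P(t|c).
        terms = list(condprob[c])
        weights = [condprob[c][t] for t in terms]
        return c, [random.choices(terms, weights=weights)[0] for _ in range(length)]

    print(generate_doc(prior, condprob, 5))   # e.g. ('China', ['Chinese', ...])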

Evaluating classification

- Evaluation must be done on test data that are independent of the training data (usually a disjoint set of instances).
- It's easy to get good performance on a test set that was available to the learner during training (e.g., just memorize the test set).
- Measures: precision, recall, F1, classification accuracy.

Constructing the confusion matrix for a class c

                          in the class          not in the class
    predicted in c        true positives (TP)   false positives (FP)
    predicted not in c    false negatives (FN)  true negatives (TN)

Precision P and recall R

    P = TP / (TP + FP)
    R = TP / (TP + FN)
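In code (hypothetical helper names), guarding against empty denominators:

    def precision(tp, fp):
        return tp / (tp + fp) if tp + fp else 0.0

    def recall(tp, fn):
        return tp / (tp + fn) if tp + fn else 0.0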

A combined measure: F

- F1 allows us to trade off precision against recall.
- It is the harmonic mean of P and R:

    1/F1 = 1/2 · (1/P + 1/R),   i.e.,   F1 = 2PR / (P + R)

Averaging: Micro vs. Macro

- We now have an evaluation measure (F1) for one class.
- But we also want a single number that measures the aggregate performance over all classes in the collection.
- Macroaveraging:
  - Compute F1 for each of the C classes.
  - Average these C numbers.
- Microaveraging:
  - Compute TP, FP, FN for each of the C classes.
  - Sum these C numbers (e.g., all TP, to get the aggregate TP).
  - Compute F1 for the aggregate TP, FP, FN.
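A sketch contrasting the two schemes (names are mine), given per-class (TP, FP, FN) counts:

    def f1(tp, fp, fn):
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0

    def macro_f1(per_class):        # per_class: list of (tp, fp, fn) tuples
        return sum(f1(*counts) for counts in per_class) / len(per_class)

    def micro_f1(per_class):
        tp, fp, fn = (sum(c[i] for c in per_class) for i in range(3))
        return f1(tp, fp, fn)

Microaveraging is dominated by the large classes; macroaveraging gives every class equal weight.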

Micro- vs. Macro-average: Example