Text Classification – Naive Bayes (presentation transcript)

1 Text Classification – Naive Bayes
September 11, 2014 Credits for slides: Allan, Arms, Manning, Lund, Noble, Page.

2 Generative and Discriminative Models: An analogy
The task is to determine the language that someone is speaking.
Generative approach: learn each language and then determine which language the speech belongs to.
Discriminative approach: determine the linguistic differences without learning any language – a much easier task!

3 Taxonomy of ML Models
Generative methods:
Model class-conditional pdfs and prior probabilities.
"Generative" since sampling from the model can generate synthetic data points.
Popular models: Gaussians, Naïve Bayes, mixtures of multinomials, mixtures of Gaussians, mixtures of experts, Hidden Markov Models (HMM).
Discriminative methods:
Directly estimate posterior probabilities.
No attempt to model the underlying probability distributions.
Generally better performance.
Popular models: logistic regression, SVMs (kernel methods), traditional neural networks, nearest neighbor, Conditional Random Fields (CRF).

4 Summary of Basic Probability Formulas
Product rule: probability of a conjunction of two events A and B:
P(A ∧ B) = P(A | B) P(B) = P(B | A) P(A)
Sum rule: probability of a disjunction of two events A and B:
P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
Bayes theorem: the posterior probability of A given B:
P(A | B) = P(B | A) P(A) / P(B)
Theorem of total probability: if events A1, …, An are mutually exclusive with P(A1) + … + P(An) = 1, then
P(B) = Σi P(B | Ai) P(Ai)

5 Generative Probabilistic Models
Assume a simple (usually unrealistic) probabilistic method by which the data was generated.
For categorization, each category has a different parameterized generative model that characterizes that category.
Training: use the data for each category to estimate the parameters of the generative model for that category.
Testing: use Bayesian analysis to determine the category model that most likely generated a specific test instance.

6 Bayesian Methods
Learning and classification methods based on probability theory.
Bayes theorem plays a critical role in probabilistic learning and classification.
Build a generative model that approximates how data is produced.
Use the prior probability of each category given no information about an item.
Categorization produces a posterior probability distribution over the possible categories given a description of an item.

7 Bayes Theorem
P(A | B) = P(B | A) P(A) / P(B)

8 Bayes Classifiers for Categorical Data
Task: classify a new instance x, described by a tuple of attribute values <x1, x2, …, xn>, into one of the classes cj ∈ C – the most probable class given the attribute values:
cMAP = argmax cj∈C P(cj | x1, x2, …, xn)

Example (attributes and their values):
Example   Color   Shape    Class
1         red     circle   positive
2
3                 square   negative
4         blue

9 Joint Distribution
The joint probability distribution for a set of random variables X1, …, Xn gives the probability of every combination of values: P(X1, …, Xn).

positive:
         circle   square
  red    0.20     0.02
  blue   0.02     0.01

negative:
         circle   square
  red    0.05     0.30
  blue   0.20     0.20

The probability of all possible conjunctions can be calculated by summing the appropriate subset of values from the joint distribution.
Therefore, all conditional probabilities can also be calculated.
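A worked example using the table above: P(red ∧ circle) = 0.20 + 0.05 = 0.25 and P(positive ∧ red ∧ circle) = 0.20, so P(positive | red ∧ circle) = 0.20 / 0.25 = 0.80; likewise P(negative | red ∧ circle) = 0.05 / 0.25 = 0.20.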

14 Bayes Classifiers
cMAP = argmax cj∈C P(cj | x1, x2, …, xn)
     = argmax cj∈C P(x1, x2, …, xn | cj) P(cj) / P(x1, x2, …, xn)
     = argmax cj∈C P(x1, x2, …, xn | cj) P(cj)

15 Bayes Classifiers
P(cj): can be estimated from the frequency of classes in the training examples.
P(x1, x2, …, xn | cj): has O(|X|^n · |C|) parameters and could only be estimated if a very, very large number of training examples were available.
We need to make some independence assumptions about the features to make learning tractable.
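To make the parameter explosion concrete (an illustrative count, not from the slides): with n = 10 attributes that each take |X| = 3 values and |C| = 2 classes, the full table P(x1, …, x10 | cj) has 3^10 · 2 = 118,098 entries to estimate, and every additional 3-valued attribute triples that number – far more than any realistic training set can support.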

16 The Naïve Bayes Classifier
[Figure: class node Flu with attribute nodes X1 … X5 = fever, sinus, cough, runny nose, muscle ache]
Conditional Independence Assumption: attributes are independent of each other given the class:
P(X1, …, X5 | C) = P(X1 | C) · P(X2 | C) · … · P(X5 | C)
Multi-valued variables: multivariate model.
Binary variables: multivariate Bernoulli model.

17 Learning the Model
[Figure: class node C with attribute nodes X1 … X6]
First attempt: maximum likelihood estimates – simply use the frequencies in the data:
P̂(cj) = N(C = cj) / N
P̂(xi | cj) = N(Xi = xi, C = cj) / N(C = cj)

18 Problem with Max Likelihood
[Figure: class node Flu with attribute nodes X1 … X5 = fever, sinus, cough, runny nose, muscle ache]
What if we have seen no training cases where the patient had no flu but did have muscle aches? Then the maximum likelihood estimate is P̂(X5 = t | C = no flu) = 0.
Zero probabilities cannot be conditioned away, no matter the other evidence!

19 Smoothing to Improve Generalization on Test Data
Laplace (add-one) smoothing, where k is the number of values of Xi:
P̂(xi | cj) = (N(Xi = xi, C = cj) + 1) / (N(C = cj) + k)
Somewhat more subtle version, where p_i,k is the overall fraction of the data in which Xi = xi,k and m controls the extent of "smoothing":
P̂(xi,k | cj) = (N(Xi = xi,k, C = cj) + m · p_i,k) / (N(C = cj) + m)
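To make the counting and smoothing formulas concrete, here is a minimal Python sketch (not from the slides; names such as train_naive_bayes are illustrative) that computes class priors and Laplace-smoothed conditional estimates for categorical attributes:

from collections import Counter, defaultdict

def train_naive_bayes(examples, num_values):
    """Estimate P(cj) and Laplace-smoothed P(Xi = xi | cj).
    examples:   list of (attribute_tuple, class_label) pairs
    num_values: dict mapping attribute index i to k, the number of values Xi can take
    """
    class_counts = Counter(label for _, label in examples)
    value_counts = defaultdict(int)            # value_counts[(i, v, c)] = N(Xi = v, C = c)
    for attrs, label in examples:
        for i, v in enumerate(attrs):
            value_counts[(i, v, label)] += 1

    n = len(examples)
    priors = {c: class_counts[c] / n for c in class_counts}      # P^(cj)
    def cond_prob(i, v, c):
        # Laplace smoothing: (N(Xi = v, C = c) + 1) / (N(C = c) + k)
        return (value_counts[(i, v, c)] + 1) / (class_counts[c] + num_values[i])
    return priors, cond_prob

# Example use (the four training examples from slide 21, three values per attribute):
# priors, cond_prob = train_naive_bayes(
#     [(("small", "red", "circle"), "positive"), (("large", "red", "circle"), "positive"),
#      (("small", "red", "triangle"), "negative"), (("large", "blue", "circle"), "negative")],
#     {0: 3, 1: 3, 2: 3})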

20 Underflow Prevention
Multiplying lots of probabilities, which are between 0 and 1 by definition, can result in floating-point underflow.
Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying the probabilities themselves.
The class with the highest final un-normalized log probability score is still the most probable.
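A matching sketch of the log-space decision rule described above (illustrative names; priors is a dict of class priors and cond_prob a function like the one returned by the training sketch after slide 19):

import math

def classify(attrs, priors, cond_prob):
    """Return the class with the highest un-normalized log probability score."""
    best_class, best_score = None, float("-inf")
    for c in priors:
        # sum of log probabilities instead of a product, to avoid underflow
        score = math.log(priors[c]) + sum(
            math.log(cond_prob(i, v, c)) for i, v in enumerate(attrs))
        if score > best_score:
            best_class, best_score = c, score
    return best_class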

21 Probability Estimation Example
Estimate the probabilities from the training examples:

Probability        positive   negative
P(Y)
P(small | Y)
P(medium | Y)
P(large | Y)
P(red | Y)
P(blue | Y)
P(green | Y)
P(square | Y)
P(triangle | Y)
P(circle | Y)

Training examples:
Ex   Size    Color   Shape      Class
1    small   red     circle     positive
2    large   red     circle     positive
3    small   red     triangle   negative
4    large   blue    circle     negative

22 Probability Estimation Example
Maximum likelihood estimates from the four training examples:

Probability        positive   negative
P(Y)               0.5        0.5
P(small | Y)       0.5        0.5
P(medium | Y)      0.0        0.0
P(large | Y)       0.5        0.5
P(red | Y)         1.0        0.5
P(blue | Y)        0.0        0.5
P(green | Y)       0.0        0.0
P(square | Y)      0.0        0.0
P(triangle | Y)    0.0        0.5
P(circle | Y)      1.0        0.5

Training examples:
Ex   Size    Color   Shape      Class
1    small   red     circle     positive
2    large   red     circle     positive
3    small   red     triangle   negative
4    large   blue    circle     negative

23 Naïve Bayes Example
Test instance: <medium, red, circle>

Probability        positive   negative
P(Y)               0.5        0.5
P(small | Y)       0.4        0.4
P(medium | Y)      0.1        0.2
P(large | Y)       0.5        0.4
P(red | Y)         0.9        0.3
P(blue | Y)        0.05       0.3
P(green | Y)       0.05       0.4
P(square | Y)      0.05       0.4
P(triangle | Y)    0.05       0.3
P(circle | Y)      0.9        0.3

24 Naïve Bayes Example
Test instance: <medium, red, circle>

Probability        positive   negative
P(Y)               0.5        0.5
P(medium | Y)      0.1        0.2
P(red | Y)         0.9        0.3
P(circle | Y)      0.9        0.3

P(positive | X) = ?
P(negative | X) = ?

25 Naïve Bayes Example
Test instance: <medium, red, circle>

Probability        positive   negative
P(Y)               0.5        0.5
P(medium | Y)      0.1        0.2
P(red | Y)         0.9        0.3
P(circle | Y)      0.9        0.3

P(positive | X) = P(positive) * P(medium | positive) * P(red | positive) * P(circle | positive) / P(X)
                = 0.5 * 0.1 * 0.9 * 0.9 / P(X) = 0.0405 / P(X) = 0.0405 / 0.0495 ≈ 0.8181
P(negative | X) = P(negative) * P(medium | negative) * P(red | negative) * P(circle | negative) / P(X)
                = 0.5 * 0.2 * 0.3 * 0.3 / P(X) = 0.009 / P(X) = 0.009 / 0.0495 ≈ 0.1818
Since P(positive | X) + P(negative | X) = 0.0405 / P(X) + 0.009 / P(X) = 1,
P(X) = 0.0405 + 0.009 = 0.0495.
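A few lines of Python that reproduce the arithmetic above (the dictionaries simply transcribe the table; names are illustrative):

priors = {"positive": 0.5, "negative": 0.5}
cond = {
    "positive": {"medium": 0.1, "red": 0.9, "circle": 0.9},
    "negative": {"medium": 0.2, "red": 0.3, "circle": 0.3},
}

x = ["medium", "red", "circle"]
unnormalized = {}
for c in priors:
    score = priors[c]
    for value in x:
        score *= cond[c][value]
    unnormalized[c] = score                      # 0.0405 for positive, 0.009 for negative

p_x = sum(unnormalized.values())                 # 0.0495
posteriors = {c: s / p_x for c, s in unnormalized.items()}
print(posteriors)                                # {'positive': 0.818..., 'negative': 0.181...}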

26 Question
How can we see the multivariate Naïve Bayes model as a generative model?
A generative model produces the observed data by means of a probabilistic generation process:
First generate a class cj according to the probability P(C).
Then, for each attribute, generate xi according to P(xi | C = cj).

27 Naïve Bayes Generative Model
[Figure: generative model – first draw a category (pos or neg), then draw Size, Color, and Shape values (e.g. lg/med/sm, red/blue/grn, circ/tri/sqr) from that category's urns]

28 Naïve Bayes Inference Problem
[Figure: inference problem – given an observed instance <lg, red, circ>, which category (positive or negative) most likely generated it?]

29 Naïve Bayes for Text Classification
Two models:
Multivariate Bernoulli model
Multinomial model

30 Model 1: Multivariate Bernoulli
One feature Xw for each word in the dictionary.
Xw = true (1) for document d if w appears in d.
Naive Bayes assumption: given the document's topic, the appearance of one word in the document tells us nothing about the chances that another word appears.
Parameter estimation:
P̂(Xw = true | cj) = fraction of documents of topic cj in which word w appears.
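A minimal sketch of multivariate Bernoulli estimation in Python (illustrative names; add-one smoothing as on slide 19 is assumed, with two outcomes per word feature):

from collections import Counter, defaultdict

def train_bernoulli_nb(docs):
    """docs: list of (set_of_words_in_document, topic) pairs.
    Returns P^(c), a smoothed estimate of P^(Xw = true | c), and the vocabulary."""
    topic_counts = Counter(topic for _, topic in docs)
    doc_freq = defaultdict(int)        # doc_freq[(w, c)] = # documents of topic c containing w
    vocabulary = set()
    for words, topic in docs:
        vocabulary |= words
        for w in words:
            doc_freq[(w, topic)] += 1

    n = len(docs)
    priors = {c: topic_counts[c] / n for c in topic_counts}
    def p_word_given_topic(w, c):
        # fraction of documents of topic c containing w, with add-one smoothing
        # (denominator + 2 because the feature Xw has two values: present / absent)
        return (doc_freq[(w, c)] + 1) / (topic_counts[c] + 2)
    return priors, p_word_given_topic, vocabulary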

32 Naïve Bayes Generative Model
[Figure: generative view of the Bernoulli model – first draw a category (positive or negative), then for each word w1, w2, w3 draw yes/no (present or absent) from that category's urn]

33 Model 2: Multinomial
[Figure: Bayes net with class node Cat and word-position nodes w1 … w6]

34 Multinomial Distribution
"The binomial distribution is the probability distribution of the number of "successes" in n independent Bernoulli trials, with the same probability of "success" on each trial. In a multinomial distribution, each trial results in exactly one of some fixed finite number k of possible outcomes, with probabilities p1, ..., pk (so that pi ≥ 0 for i = 1, ..., k and their sum is 1), and there are n independent trials. Then let the random variables Xi indicate the number of times outcome number i was observed over the n trials. X = (X1, …, Xk) follows a multinomial distribution with parameters n and p, where p = (p1, ..., pk)." (Wikipedia)
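A quick illustration of multinomial sampling with NumPy (the trial count and probabilities below are made up for illustration):

import numpy as np

# One multinomial trial block: n = 10 draws over k = 3 outcomes with probabilities p1, p2, p3.
# The result is a count vector (X1, X2, X3) that sums to 10.
counts = np.random.multinomial(10, [0.5, 0.3, 0.2])
print(counts)          # e.g. array([5, 3, 2]); Xi = number of times outcome i was observed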

35 Multinomial Naïve Bayes
A class-conditional unigram language model.
Attributes are text positions, values are words.
One feature Xi for each word position in the document; the feature's values are all the words in the dictionary.
The value of Xi is the word in position i.
Naïve Bayes assumption: given the document's topic, the word in one position of the document tells us nothing about the words in other positions.
Too many possibilities!

36 Multinomial Naive Bayes Classifiers
Second assumption: classification is independent of the positions of the words (word appearance does not depend on position):
P(Xi = w | c) = P(Xj = w | c) for all positions i, j, every word w, and every class c.
Use the same parameters for each position.
The result is a bag-of-words model (over tokens).

37 Multinomial Naïve Bayes for Text
Modeled as generating a bag of words for a document in a given category by repeatedly sampling with replacement from a vocabulary V = {w1, w2, …, wm} based on the probabilities P(wj | ci).
Smooth the probability estimates with Laplace m-estimates, assuming a uniform prior over all words (p = 1/|V|) and m = |V|:
P̂(wj | ci) = (count(wj, ci) + 1) / (Σw count(w, ci) + |V|)
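A minimal sketch of multinomial Naïve Bayes for text with Laplace smoothing and log-space scoring (illustrative names, not the slides' notation):

from collections import Counter, defaultdict
import math

def train_multinomial_nb(docs):
    """docs: list of (list_of_tokens, category) pairs."""
    cat_counts = Counter(cat for _, cat in docs)
    word_counts = defaultdict(Counter)       # word_counts[cat][w] = count of w in docs of cat
    vocab = set()
    for tokens, cat in docs:
        word_counts[cat].update(tokens)
        vocab.update(tokens)

    n = len(docs)
    priors = {c: cat_counts[c] / n for c in cat_counts}
    totals = {c: sum(word_counts[c].values()) for c in cat_counts}
    def log_p(w, c):
        # Laplace-smoothed P(w | c) = (count(w, c) + 1) / (total tokens in c + |V|)
        return math.log((word_counts[c][w] + 1) / (totals[c] + len(vocab)))
    return priors, log_p

def classify(tokens, priors, log_p):
    # highest sum of log probabilities wins (avoids underflow)
    return max(priors, key=lambda c: math.log(priors[c]) + sum(log_p(w, c) for w in tokens))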

38 Multinomial Naïve Bayes as a Generative Model for Text
[Figure: generative model for text – first draw a category (spam or legit), then repeatedly draw words (e.g. bank, science, pamper, PM, feast, computer, Friday, king, printable, test, homework, coupons, dog, March, score, Food, May, exam, brand) from that category's bag of words]

39 Naïve Bayes Inference Problem
[Figure: inference problem – given a document containing the words "Food" and "feast", which category (spam or legit) most likely generated it?]

40 Naïve Bayes Classification
cNB = argmax cj∈C P(cj) ∏i P(xi | cj)

41 Parameter Estimation
Multivariate Bernoulli model:
P̂(Xw = true | cj) = fraction of documents of topic cj in which word w appears.
Multinomial model:
P̂(Xi = w | cj) = fraction of times in which word w appears across all documents of topic cj.
One can create a mega-document for topic j by concatenating all documents on this topic and then use the frequency of w in the mega-document.

42 Classification: Multinomial vs. Multivariate Bernoulli?
The multinomial model is almost always more effective in text applications!
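For experimentation, both models are available in scikit-learn as MultinomialNB and BernoulliNB; a small sketch with made-up toy documents:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

docs = ["win money now", "meeting schedule for Friday", "cheap money offer", "homework due Friday"]
labels = ["spam", "legit", "spam", "legit"]

vec = CountVectorizer()
X = vec.fit_transform(docs)                           # token counts per document
multinomial = MultinomialNB().fit(X, labels)          # uses term frequencies
bernoulli = BernoulliNB(binarize=0.0).fit(X, labels)  # uses term presence/absence

test = vec.transform(["free money this Friday"])
print(multinomial.predict(test), bernoulli.predict(test))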

43 WebKB Experiment (1998)
Classify webpages from CS departments into: student, faculty, course, project, etc.
Train on ~5,000 hand-labeled web pages from Cornell, Washington, U.Texas, and Wisconsin.
Crawl and classify a new site (CMU).
Results: (accuracy table shown on the original slide)

44 Naïve Bayes - Spam Assassin
Naïve Bayes has found a home in spam filtering:
Paul Graham’s "A Plan for Spam" – and a mutant with more mutant offspring...
Widely used in spam filters.
Classic Naive Bayes is superior when appropriately used (according to David D. Lewis).
But spam filters also use many other things: black-hole lists, etc.
Many topic filters also use NB classifiers.

45 Naïve Bayes on Spam Email
This graph is from Naive-Bayes vs. Rule-Learning in Classification of Email, by Jefferson Provost, UT Austin.

46 Naive Bayes is Not So Naive
Naïve Bayes took first and second place in the KDD-CUP 97 competition, among 16 (then) state-of-the-art algorithms. Goal: financial services industry direct mail response prediction – predict whether the recipient of the mail will actually respond to the advertisement (750,000 records).
Robust to irrelevant features: irrelevant features cancel each other out without affecting results.
Very good in domains with many equally important features.
A good baseline for text classification!
Very fast: learning requires one pass of counting over the data; testing is linear in the number of attributes and the document collection size.
Low storage requirements.

