Text Classification : The Naïve Bayes algorithm


1 Text Classification : The Naïve Bayes algorithm
Adapted from lectures by Prabhakar Raghavan (Google) and Christopher Manning (Stanford). Topics: relationship to relevance feedback; ad hoc retrieval (vs. filtering); practical applications. What is the classification application, and what is the classification method (manual vs. rule-based vs. classifier learning)? These are useful for a small dataset, a stream, and a large dataset, respectively. Probabilistic approach based on Bayes' rule, with independence assumptions for tractability: conditional independence of terms and positional independence of a term. Smoothing is necessary to avoid overfitting (incompleteness of the training set). Estimation details: multinomial (bag-of-words model) vs. Bernoulli (set-of-words model). Feature selection (to reduce the term vocabulary): information-gain approach, chi-squared approach. A practical technique with a sound theoretical basis: efficient and effective in practice, and robust. Prasad L13NaiveBayesClassify

2 Relevance feedback revisited
In relevance feedback, the user marks a number of documents as relevant/nonrelevant, and these judgments can be used to improve search results. Suppose we just tried to learn a filter for nonrelevant documents. This is an instance of a text classification problem: two "classes" (relevant, nonrelevant), and for each document we decide whether it is relevant or nonrelevant. The notion of classification is very general and has many applications within and beyond IR. "Query ops" addressed how to better capture the information need; this is binary classification. Application: medical images (EKG, ECG, ultrasound, MRI, CAT scans): cancerous vs. benign, calcification or not, etc. Subsequent summarization: tag cloud; selecting labels. Prasad L13NaiveBayesClassify

3 Standing queries The path from information retrieval to text classification: You have an information need, say: unrest in the Niger delta region. You want to rerun an appropriate query periodically to find new news items on this topic, i.e., it is classification, not ranking. Such queries are called standing queries, long used by "information professionals"; a modern mass instantiation is Google Alerts. A changing dataset with a fixed query, vs. a fixed dataset with changing queries. Prasad L13NaiveBayesClassify 13.0

4 Spam filtering: Another text classification task
From: "" Subject: real estate is the only way... gem oalvgkay Anyone can buy real estate with no money down Stop paying rent TODAY ! There is no need to spend hundreds or even thousands for similar courses I am 22 years old and I have already purchased 6 properties using the methods outlined in this truly INCREDIBLE ebook. Change your life NOW ! ================================================= Click Below to order: 13.0


6 Text classification: Naïve Bayes Text Classification
Today: introduction to text classification (also widely known as "text categorization"); probabilistic language models; naïve Bayes text classification (multinomial and Bernoulli); feature selection. Based on a solid foundation with engineering approximations; very effective in practice; a robust and scalable approach. Something you might use in practice: my doctoral advisee built a blog classifier (into categories such as entertainment, politics, sports, science and technology, music and arts) to be run on a Hadoop cluster for Technorati. Prasad L13NaiveBayesClassify

7 Categorization/Classification
Given: a description of an instance, x ∈ X, where X is the instance language or instance space (issue: how to represent text documents), and a fixed set of classes C = {c1, c2, …, cJ}. Determine: the category of x, c(x) ∈ C, where c(x) is a classification function whose domain is X and whose range is C. We want to know how to build classification functions ("classifiers"). Distinguish the classification application from the classifier learning problem. Machine learning methodology: learn an interpolation/extrapolation function automatically from a hand-annotated training dataset. Prasad L13NaiveBayesClassify 13.1

8 Formal definition of Classification: Training/Testing
Given: a document space X (documents are represented in this space, typically some type of high-dimensional space); a fixed set of classes C = {c1, c2, …, cJ} (the classes are human-defined for the needs of an application, e.g., relevant vs. nonrelevant); and a training set D of labeled documents, with each labeled document <d, c> ∈ X × C. 8

9 (cont’d) Using a learning method or learning algorithm, we wish to learn a classifier ϒ that maps documents to classes: ϒ : X → C Given: a description d ∈ X of a document Determine: ϒ (d) ∈ C, that is, the class that is most appropriate for d 9

10 Topic classification 10

11 Document Classification
Test document: "planning language proof intelligence". Classes, grouped by area: (AI) ML, Planning; (Programming) Semantics, Garb.Coll.; (HCI) Multimedia, GUI. Training data, sample terms per class: ML: learning, intelligence, algorithm, reinforcement, network, …; Planning: planning, temporal, reasoning, plan, language, …; Semantics: programming, semantics, language, proof, …; Garb.Coll.: garbage, collection, memory, optimization, region, … From the training data, learn significant/discriminating words to enable classification. Multiple overlapping categories are beyond the scope here. (Note: in real life there is often a hierarchy, not present in the above problem statement; and you also get papers on ML approaches to Garb. Coll.) Prasad 13.1

12 More Text Classification Examples: Many search engine functionalities use classification
Assign labels to each document or web page. Labels are most often topics, such as Yahoo categories, e.g., "finance," "sports," "news > world > asia > business". Labels may be genres, e.g., "editorials", "movie-reviews", "news". Labels may be opinions on a person/product, e.g., "like", "hate", "neutral". Labels may be domain-specific, e.g., "interesting-to-me" vs. "not-interesting-to-me"; "contains adult language" vs. "doesn't"; language identification (English, French, Chinese, …); search vertical (about Linux vs. not); "link spam" vs. "not link spam". Further examples: spam detection, Technorati blog subject classification, sentiment/emotion analysis. Prasad L13NaiveBayesClassify

13 Examples of how search engines use classification
Language identification (classes: English vs. French etc.) The automatic detection of spam pages (spam vs. nonspam) Topic-specific or vertical search – restrict search to a “vertical” like “related to health” (relevant to vertical vs. not) Standing queries (e.g., Google Alerts) Sentiment detection: is a movie or product review positive or negative (positive vs. negative) The automatic detection of sexually explicit content (sexually explicit vs. not) Exploited Big Data for emotion detection 13

14 Classification Methods (1)
Manual classification: used by Yahoo! (originally; now downplayed), Looksmart, about.com, ODP, PubMed. Very accurate when the job is done by experts, and consistent when the problem size and team are small, but difficult and expensive to scale. This means we need automatic classification methods for big problems. Open Directory Project: the largest, most comprehensive human-edited directory of the Web, constructed and maintained by a vast, global community of volunteer editors. Prasad L13NaiveBayesClassify 13.0

15 Classification Methods (2)
Automatic document classification Hand-coded rule-based systems Used by CS dept’s spam filter, Reuters, CIA, etc. Companies (Verity) provide “IDE” for writing such rules E.g., assign category if document contains a given boolean combination of words Standing queries: Commercial systems have complex query languages (everything in IR query languages + accumulators) Accuracy is often very high if a rule has been carefully refined over time by a subject expert Building and maintaining these rules is expensive Prasad L13NaiveBayesClassify 13.0

16 A Verity topic (a complex classification rule)
Note the maintenance issues (author, etc.) and the hand-weighting of terms. We can define a similar spam filter in an email client, such as the one at wright.edu.

17 Classification Methods (3)
Supervised learning of a document-label assignment function Many systems partly rely on machine learning (Autonomy, MSN, Verity, Enkata, Yahoo!, …) k-Nearest Neighbors (simple, powerful) Naive Bayes (simple, common method) Support-vector machines (new, more powerful) … plus many other methods No free lunch: requires hand-classified training data But data can be built up (and refined) by amateurs Note that many commercial systems use a mixture of methods Prasad L13NaiveBayesClassify

18 Probabilistic relevance feedback
Rather than re-weighting in a vector space: if the user has told us about some relevant and some irrelevant documents, then we can proceed to build a probabilistic classifier, such as the Naive Bayes model we will look at today: P(tk|R) = |Drk| / |Dr|, P(tk|NR) = |Dnrk| / |Dnr|, where tk is a term, Dr is the set of known relevant documents, Drk is the subset that contains tk, Dnr is the set of known irrelevant documents, and Dnrk is the subset that contains tk. Note that this is an instance of the multivariate Bernoulli approach, as opposed to the multinomial approach. Prasad L13NaiveBayesClassify 9.1.2

19 Recall a few probability basics
For events a and b: Bayes' rule relates the prior and the posterior, and the odds of an event can be derived from its probability. Based on a Venn diagram – set-based semantics. Prasad 13.2
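In symbols, for events a and b (standard forms of Bayes' rule, total probability, and odds):

```latex
P(a \mid b) = \frac{P(b \mid a)\,P(a)}{P(b)}, \qquad
P(b) = P(b \mid a)P(a) + P(b \mid \bar{a})P(\bar{a}), \qquad
O(a) = \frac{P(a)}{P(\bar{a})} = \frac{P(a)}{1 - P(a)}
```

Here P(a) is the prior, P(a | b) the posterior, and O(a) the odds.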

20 p(disease | symptom) = p(symptom | disease) * p(disease) / p(symptom)
Example of diagnosis: p(disease | symptom) = p(symptom | disease) × p(disease) / p(symptom). A specific diagnosis based on population statistics – set-based semantics (which Venn-diagram circle is it contained in?). Prasad L13NaiveBayesClassify
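A small worked instance with hypothetical numbers (the 1% prevalence, 90% sensitivity, and 10% overall symptom rate are assumptions for illustration, not values from the slide):

```latex
P(\text{disease} \mid \text{symptom})
  = \frac{P(\text{symptom} \mid \text{disease})\,P(\text{disease})}{P(\text{symptom})}
  = \frac{0.9 \times 0.01}{0.1} = 0.09
```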

21 Quick Summary: Conditioning illustrated using Venn diagram
P(red) > P(green) P(green | blue) > P(red | blue) Prasad L13NaiveBayesClassify

22 Bayesian Methods Learning and classification methods based on probability theory. Bayes' theorem plays a critical role in probabilistic learning and classification. Build a generative model that approximates how the data are produced. Uses the prior probability of each category given no information about an item. Categorization produces a posterior probability distribution over the possible categories, given a description of an item (and the prior probabilities). The theoretical foundation is clear; the difficulty is in estimating the relevant parameters from the training set, which can be sparse, not representative, or computationally difficult to handle in general. Key point: Venn-diagram-based (set-based) reasoning. Prasad L13NaiveBayesClassify 13.2

23 The bag of words representation
Introduction to Information Retrieval The bag of words representation I love this movie! It's sweet, but with satirical humor. The dialogue is great and the adventure scenes are fun… It manages to be whimsical and romantic while laughing at the conventions of the fairy tale genre. I would recommend it to just about anyone. I've seen it several times, and I'm always happy to see it again whenever I have a friend who hasn't seen it yet. γ( )=c

24 The bag of words representation
Introduction to Information Retrieval The bag of words representation: the same document reduced to term counts (great: 2, recommend: 1, love, laugh, happy, …). γ(d) = c
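A minimal Python sketch of the same idea: reduce a document to term counts and discard word order (the example text and the whitespace tokenizer are simplifications for illustration, not code from the slides):

```python
# Bag-of-words sketch: a document becomes a multiset of tokens.
from collections import Counter

doc = "I love this movie. It's sweet, and the adventure scenes are fun. I recommend it."
tokens = doc.lower().split()        # naive whitespace tokenization, for illustration only
bag = Counter(tokens)               # term -> frequency; word order is discarded
print(bag.most_common(5))
```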

25 Overall Approach Summary
Bayesian (robustness due to qualitative nature) multivariate Bernoulli (counts of documents) multinomial (counts of terms) Simplifying assumption for computational tractability conditional independence positional independence Prasad L13NaiveBayesClassify

26 Overall Approach Summary
Dealing with idiosyncrasies of training data: smoothing; avoiding underflow using logs. Improving efficiency and generalizability (noise reduction): feature selection. Prasad L13NaiveBayesClassify

27 Bayes’ Rule Prasad L13NaiveBayesClassify OVERALL APPROACH:
General idea – Bayesian (robustness due to its qualitative approach); multinomial (counts of terms) vs. multivariate Bernoulli (counts of documents). Simplifying assumptions for computational tractability: conditional independence; positional independence. Dealing with idiosyncrasies of training data: smoothing; avoiding underflow using logs. Improving efficiency and generalizability (noise reduction): feature selection. Prasad L13NaiveBayesClassify

28 Naive Bayes Classifiers
Task: classify a new instance D, described by a tuple of attribute values, into one of the classes cj ∈ C. MAP = maximum a posteriori probability; argmax returns the argument value for which the probability expression takes its maximum value. The document is viewed as a sequence of tokens. (The probability of the document itself is ignored, which may be an approximation, because the occurrence of a document can be a function of its length: short documents are common, while longer documents are rare.) Robust because we need only a relative measure. Prasad L13NaiveBayesClassify
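The MAP decision rule referred to above, in symbols:

```latex
c_{MAP} = \underset{c_j \in C}{\arg\max}\; P(c_j \mid x_1, \ldots, x_n)
        = \underset{c_j \in C}{\arg\max}\; \frac{P(x_1, \ldots, x_n \mid c_j)\,P(c_j)}{P(x_1, \ldots, x_n)}
        = \underset{c_j \in C}{\arg\max}\; P(x_1, \ldots, x_n \mid c_j)\,P(c_j)
```

The denominator P(x1, …, xn) is dropped because it is the same for every class, which is why only relative scores matter.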

29 Naïve Bayes Classifier: Naïve Bayes Assumption
P(cj) can be estimated from the frequency of classes in the training examples. P(x1, x2, …, xn | cj) has O(|X|^n · |C|) parameters and could only be estimated if a very, very large number of training examples were available. Naïve Bayes conditional independence assumption: assume that the probability of observing the conjunction of attributes is equal to the product of the individual probabilities P(xi | cj). From theory to practice: not all co-occurring word collections show up in the training set, so estimate using the conditional independence assumption. Downside (dependence) counterexample: "Barack Obama". Prasad L13NaiveBayesClassify
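The conditional independence assumption and the resulting classifier, in symbols:

```latex
P(x_1, x_2, \ldots, x_n \mid c_j) \;\approx\; \prod_{i=1}^{n} P(x_i \mid c_j)
\qquad\Rightarrow\qquad
c_{NB} = \underset{c_j \in C}{\arg\max}\; P(c_j) \prod_{i=1}^{n} P(x_i \mid c_j)
```

This reduces the parameter count roughly from O(|X|^n · |C|) joint parameters to O(n · |X| · |C|) per-attribute parameters.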

30 The Naïve Bayes Classifier
Flu X1 X2 X5 X3 X4 fever sinus cough runnynose muscle-ache Conditional Independence Assumption: Features (term presence) are independent of each other given the class: This model is appropriate for binary variables Multivariate Bernoulli model Prasad 13.3

31 Learning the Model First attempt: maximum likelihood estimates
C X1 X2 X5 X3 X4 X6 First attempt: maximum likelihood estimates simply use the frequencies in the data Estimating the parameters from the training set … Prasad 13.3

32 Problem with Max Likelihood
Flu X1 X2 X3 X4 X5 (fever, sinus, cough, runny nose, muscle-ache). What if we have seen no training cases where the patient had muscle aches but no flu? Zero probabilities cannot be conditioned away, no matter what the other evidence says. In the absence of a complete training set, there will be real situations that have no counterpart in the training set, so a "zero" probability may not be valid. Furthermore, even many pieces of strong positive evidence cannot overcome the absence of a relatively insignificant term. How do we deal with coincidental "incompleteness" that has such a disastrous effect on the outcome? We ran into similar issues when generating and evaluating cluster labels for LexisNexis news documents (NLDB-2008 paper). Prasad 13.3

33 The problem with maximum likelihood estimates: Zeros
P(China|d) ∝ P(China) ・ P(BEIJING|China) ・ P(AND|China) ・ P(TAIPEI|China) ・ P(JOIN|China) ・ P(WTO|China). If WTO never occurs in class China in the training set, the maximum likelihood estimate gives P(WTO|China) = 0. This is the case where a word in the test set does not appear in the training set (for that class). 33

34 Zero probabilities cannot be conditioned away.
The problem with maximum likelihood estimates: Zeros (cont) If there were no occurrences of WTO in documents in class China, we’d get a zero estimate: → We will get P(China|d) = 0 for any document that contains WTO! Zero probabilities cannot be conditioned away. 34

35 Smoothing to Avoid Overfitting
Laplace smoothing for the Bernoulli model: add 1 for every vocabulary term; k is the number of values of Xi (the size of the vocabulary in the denominator). Equivalent to adding one document [per keyword and per class] to the dataset, so that every term–class co-occurrence becomes possible (non-zero). Prasad L13NaiveBayesClassify

36 To avoid zeros: Add-one smoothing
Before vs. now: add one to each count to avoid zeros. B is the number of different words (in this case the size of the vocabulary: |V| = M). Probability of an unseen word in class C = 1 − Σ_t P(t | C). 36
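The add-one (Laplace) estimate described above, in symbols, with T_ct the count of term t in the concatenated training documents of class c and B = |V|:

```latex
\hat{P}(t \mid c) \;=\; \frac{T_{ct}}{\sum_{t' \in V} T_{ct'}}
\quad\longrightarrow\quad
\hat{P}(t \mid c) \;=\; \frac{T_{ct} + 1}{\sum_{t' \in V} (T_{ct'} + 1)}
\;=\; \frac{T_{ct} + 1}{\left(\sum_{t' \in V} T_{ct'}\right) + B}
```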

37 Smoothing to Avoid Overfitting
A somewhat more subtle version interpolates with the overall fraction of the data where Xi = xi,k, with a parameter controlling the extent of smoothing. Laplace smoothing for the Bernoulli model adds 1 for every vocabulary term; k is the size of the vocabulary. Prasad L13NaiveBayesClassify

38 Stochastic Language Models
Models the probability of generating strings (each word in turn) in the language (commonly, all strings over Σ). E.g., a unigram model M: the 0.2, a 0.1, man 0.01, woman 0.01, said 0.03, likes 0.02. For s = "the man likes the woman": P(s | M) = 0.2 × 0.01 × 0.02 × 0.2 × 0.01 = 8 × 10⁻⁸ (multiply the per-word probabilities). Modelling word dependencies for a class of documents is where the rubber meets the road: making the assumptions/approximations explicit (expressive power vs. efficiency vs. real-world effectiveness, given the data and the classification goal). Prasad L13NaiveBayesClassify 13.2.1
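A minimal Python sketch of scoring a string under the unigram model M on this slide (the function name is illustrative):

```python
# Unigram language model: P(s|M) is the product of per-word probabilities.
model_m = {"the": 0.2, "a": 0.1, "man": 0.01, "woman": 0.01, "said": 0.03, "likes": 0.02}

def unigram_prob(sentence, model):
    p = 1.0
    for word in sentence.split():
        p *= model.get(word, 0.0)   # a word outside the model gets probability 0 here
    return p

print(unigram_prob("the man likes the woman", model_m))  # 0.2*0.01*0.02*0.2*0.01 = 8e-08
```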

39 Stochastic Language Models
Model the probability of generating any string. Model M1: the 0.2, class 0.01, sayst 0.0001, pleaseth 0.0001, yon 0.0001, maiden 0.0005, woman 0.01. Model M2: the 0.2, class 0.0001, sayst 0.03, pleaseth 0.02, yon 0.1, maiden 0.01, woman 0.0001. For s = "the class pleaseth yon maiden", the per-word probabilities are 0.2, 0.01, 0.0001, 0.0001, 0.0005 under M1 and 0.2, 0.0001, 0.02, 0.1, 0.01 under M2, so P(s|M2) > P(s|M1). Cf. the Microsoft Web n-gram service. 13.2.1

40 Unigram and higher-order models
Unigram language models: P(w1 w2 w3 w4) = P(w1) P(w2) P(w3) P(w4). Easy. Effective! Bigram (generally, n-gram) language models: P(w1 w2 w3 w4) = P(w1) P(w2 | w1) P(w3 | w2) P(w4 | w3). Other language models: grammar-based models (PCFGs), etc. – probably not the first thing to try in IR. Prasad L13NaiveBayesClassify 13.2.1

41 Using Multinomial Naive Bayes Classifiers to Classify Text: Basic method
Attributes are text positions; values are words. Still too many possibilities. Assume that classification is independent of the positions of the words, i.e., use the same parameters for each position. The result is the bag-of-words model (over tokens, not types). Instead of estimating the probability from the presence of words in documents, glue all the documents of a class together and use the word-occurrence counts to estimate probabilities. Prasad L13NaiveBayesClassify

42 Underflow Prevention: log space
Multiplying lots of probabilities, which are between 0 and 1, can result in floating-point underflow. Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying probabilities. The class with the highest final un-normalized log-probability score is still the most probable. Note that the model is now just a max over sums of weights. Addressing a practical computational problem: 10⁻³ × 10⁻⁴ = 10⁻⁷, vs. log₁₀(10⁻³ × 10⁻⁴) = −7. Prasad L13NaiveBayesClassify
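A small Python illustration of the underflow problem and the log-space fix (the probability values are made up for the example):

```python
import math

probs = [1e-5] * 100                      # e.g., 100 term likelihoods of 1e-5 each

product = 1.0
for p in probs:
    product *= p
print(product)                            # 0.0 -- the true value 1e-500 underflows

log_score = sum(math.log(p) for p in probs)
print(log_score)                          # about -1151.3; the ordering across classes is preserved
```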

43 Note: Two Models Model 1: Multivariate Bernoulli
One feature Xw for each word in the dictionary; Xw = true in document d if w appears in d. Naive Bayes assumption: given the document's topic, the appearance of one word in the document tells us nothing about the chances that another word appears. This is the model used in the binary independence model, in classic probabilistic relevance feedback on hand-classified data. Prasad L13NaiveBayesClassify

44 Two Models Model 2: Multinomial = Class conditional unigram
One feature Xi for each word position in the document; the feature's values are all the words in the dictionary; the value of Xi is the word in position i. Naïve Bayes assumption: given the document's topic, the word in one position in the document tells us nothing about words in other positions. Second assumption: word appearance does not depend on position, i.e., P(Xi = w | c) = P(Xj = w | c) for all positions i, j, words w, and classes c. Together, the two independence assumptions amount to the bag-of-words model. Prasad

45 Parameter estimation ( 2 approaches)
Multivariate Bernoulli model: estimate P(Xw = true | cj) as the fraction of documents of topic cj in which word w appears. Multinomial model: estimate P(w | cj) as the fraction of times word w appears across all documents of topic cj; equivalently, create a mega-document for topic cj by concatenating all documents on the topic and use the frequency of w in the mega-document. Prasad L13NaiveBayesClassify
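The two estimators, in symbols (add-one smoothing, as on the preceding slides, is applied on top of these in practice):

```latex
\text{Bernoulli:}\quad
\hat{P}(X_w = 1 \mid c_j) = \frac{\#\{\text{documents of class } c_j \text{ containing } w\}}{\#\{\text{documents of class } c_j\}}
\qquad
\text{Multinomial:}\quad
\hat{P}(w \mid c_j) = \frac{\#\{\text{occurrences of } w \text{ in the class-}c_j \text{ mega-document}\}}{\#\{\text{tokens in the class-}c_j \text{ mega-document}\}}
```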

46 Classification Multinomial vs Multivariate Bernoulli?
Multinomial model is almost always more effective in text applications! Casual occurrence distinguished from emphatic occurrence in a long document See IIR sections 13.2 and 13.3 for worked examples with each model Prasad L13NaiveBayesClassify

47 Quick Summary: Conditioning illustrated using Venn diagram
P(red) > P(green) P(green | blue) > P(red | blue) Prasad L13NaiveBayesClassify

48 Quick Summary: Learning Problem
Learn a (one-of, multi-class) classifier by generalizing (interpolating and extrapolating) from a labeled training dataset to be used on a test set with similar distribution (representativeness) Prasad L13NaiveBayesClassify

49 Quick Summary: Naïve Bayes Approach
FOR TRACTABILITY use the "bag of words" model. Conditional independence: no mutual dependence among words (cf. phrases such as San Francisco and Tahrir Square). Positional independence: no dependence on the position of occurrence in a sentence or paragraph (cf. articles and prepositions such as "The …", "… of …", "… and", and "… at"; you may have learned that ending a sentence with a preposition is a serious breach of grammatical etiquette). To extrapolate from individual probabilities: for multivariate Bernoulli, set semantics => conditional independence assumption; for multinomial, bag semantics => positional independence assumption. Prasad L13NaiveBayesClassify

50 Quick Summary: Naïve Bayes Approach
FOR PRACTICALITY Use (Laplace) Smoothing: To overcome problems with sparse/incomplete training set E.g., P(“Britain is a member of WTO” | UK) >> 0 in spite of the fact that P(“WTO”|UK) ~ 0 Prasad L13NaiveBayesClassify

51 Quick Summary: Naïve Bayes Approach
FOR PRACTICALITY use logarithms: to overcome underflow problems caused by small numbers. E.g., loglikelihood(hello) = −4.27, loglikelihood(dear) = −4.57, loglikelihood(dolly) = −5.49, loglikelihood(stem) = −4.76; loglikelihood(hello dear) = −7.16, loglikelihood(hello dolly) = −6.83, loglikelihood(hello stem) = … (Microsoft Web N-gram Service). The combination "hello dolly" is more likely than "hello dear" but less likely than "hello stem". Positive correlation for "hello dear": the joint value (−7.16) exceeds the sum of the individual values (−4.27 + −4.57 = −8.84). Negative correlation for "hello stem": the joint value falls below the sum (−4.27 + −4.76 = −9.03). Prasad L13NaiveBayesClassify

52 Quick Summary: Naïve Bayes Approach
ESTIMATING PROBABILITY. Multinomial naïve Bayes model: one count per term for each term occurrence (cf. term frequency). Multivariate Bernoulli model: one count per term for each document it occurs in (cf. document frequency); also takes into account non-occurrence. FEATURE SELECTION: χ² statistic approach; information-theoretic approach. Note that a query term that is in the vocabulary but does not occur in the documents that characterize a class is given probability 1 / (size of the vocabulary). What's unclear? The treatment of a "query" term not in the dataset vocabulary at all (multiplied by 1, i.e., ignored, rather than given the low smoothing weight 1/|V|). Prasad L13NaiveBayesClassify

53 Estimate parameters from the training corpus using add-one smoothing
Overall Approach Estimate parameters from the training corpus using add-one smoothing For a new document, for each class, compute sum of (i) log of prior and (ii) logs of conditional probabilities of the terms Assign the document to the class with the largest score 53

54 Naive Bayes: Training. The denominator contains the total number of tokens in the class (the "tuning/smoothing" factor can be set to some alpha other than 1). Multinomial conditional-independence naïve Bayes: learning from the training set. 54

55 Naive Bayes: Testing. Multinomial conditional-independence naïve Bayes: classifying a test document. Real-world setting: very robust, because we are seeking only relative information. 55

56 Exercise Estimate parameters of Naive Bayes classifier
Classify test document 56

57 Example: Parameter estimates
The denominators are (8 + 6) and (3 + 6) because the lengths of the class mega-documents text_c and text_c̄ are 8 and 3, respectively, and because the constant B is 6, as the vocabulary consists of six terms. The term frequency (multiplicity) of 'Chinese' is critical in determining the class of the test document as 'China'. 57

58 Example: Classification
Thus, the classifier assigns the test document to c = China. The reason for this classification decision is that the three occurrences of the positive indicator CHINESE in d5 outweigh the occurrences of the two negative indicators JAPAN and TOKYO. The term frequency (multiplicity) of 'Chinese' is critical in determining the class of the test document as 'China'. 58
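Assuming this is the standard worked example from IIR Chapter 13 (three training documents in class c = China with 8 tokens in total, one training document in c̄ with 3 tokens, a six-term vocabulary, and test document d5 = "Chinese Chinese Chinese Tokyo Japan"), the numbers work out as:

```latex
\hat{P}(\text{Chinese} \mid c) = \tfrac{5+1}{8+6} = \tfrac{3}{7}, \qquad
\hat{P}(\text{Tokyo} \mid c) = \hat{P}(\text{Japan} \mid c) = \tfrac{0+1}{8+6} = \tfrac{1}{14}, \qquad
\hat{P}(\text{Chinese} \mid \bar{c}) = \hat{P}(\text{Tokyo} \mid \bar{c}) = \hat{P}(\text{Japan} \mid \bar{c}) = \tfrac{1+1}{3+6} = \tfrac{2}{9}

P(c \mid d_5) \propto \tfrac{3}{4} \cdot \left(\tfrac{3}{7}\right)^3 \cdot \tfrac{1}{14} \cdot \tfrac{1}{14} \approx 0.0003, \qquad
P(\bar{c} \mid d_5) \propto \tfrac{1}{4} \cdot \left(\tfrac{2}{9}\right)^3 \cdot \tfrac{2}{9} \cdot \tfrac{2}{9} \approx 0.0001
```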

59 Naïve Bayes: Learning Algorithm
From the training corpus, extract the Vocabulary. Calculate the required P(cj) and P(xk | cj) terms: for each cj in C do: docsj ← the subset of documents for which the target class is cj; P(cj) ← |docsj| / (total number of documents); Textj ← a single document containing all of docsj; for each word xk in Vocabulary: nk ← the number of occurrences of xk in Textj; n ← the total number of tokens in Textj; P(xk | cj) ← (nk + α) / (n + α·|Vocabulary|), where α is a "tuning/smoothing" factor that can be set to 1. Multinomial conditional-independence naïve Bayes: learning from the training set. Prasad L13NaiveBayesClassify

60 Naïve Bayes: Classifying
positions  all word positions in current document which contain tokens found in Vocabulary Return cNB, where Multinomial Conditional Independence Naïve Bayes: Classifying Test document Real-world setting : very robust because we are seeking only relative information Prasad L13NaiveBayesClassify

61 Naive Bayes: Time Complexity
Training time: O(|D|Ld + |C||V|), where Ld is the average length of a document in D. Assumes V and all the Di, ni, and nij are pre-computed in O(|D|Ld) time during one pass through all of the data. Generally just O(|D|Ld), since usually |C||V| < |D|Ld. Test time: O(|C| Lt), where Lt is the average length of a test document. Very efficient overall, linearly proportional to the time needed just to read in all the data; plus, robust in practice. Why? |D|Ld = size of the training set (parsing requires one pass through the documents); |C||V| = complexity of computing all the probability values (a loop over terms and classes). Testing requires checking the test document against each class for membership. Prasad L13NaiveBayesClassify

62 Time complexity of Naive Bayes
Lave: average length of a training document; La: length of the test document; Ma: number of distinct terms in the test document; D: training set; V: vocabulary; C: set of classes. Θ(|D| Lave) is the time it takes to compute all counts; Θ(|C||V|) is the time it takes to compute the parameters from the counts; so training is Θ(|D| Lave + |C||V|), and generally the first term dominates. Test time is also linear (in the length of the test document). Thus, Naive Bayes is linear in the size of the training set (training) and the test document (testing). This is optimal. |D| Ld = size of the training set (parsing requires one pass through the documents); |C||V| = complexity of computing all the probability values (a loop over terms and classes). Testing requires checking the test document against each class for membership. 62

63 Bernoulli: Training. The "tuning/smoothing" factor yields probability ½ for a term about which the training set provides no information (e.g., a term outside the dataset vocabulary). 63

64 Bernoulli: Testing. All vocabulary terms are used to compute the score (not just the terms in the document); that is, terms not present in the document are also treated as significant indicators of relevance. 64
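A minimal Python sketch of Bernoulli-model scoring, to contrast with the multinomial sketch above: the loop runs over the whole vocabulary, so terms absent from the test document also contribute via 1 − P(t | c). Training (per-class document counts with add-one smoothing) is assumed to have produced priors and cond_prob; all names are illustrative:

```python
import math

def apply_bernoulli_nb(vocab, priors, cond_prob, doc_tokens):
    doc_terms = set(doc_tokens)                          # set-of-words: presence only
    scores = {}
    for c in priors:
        score = math.log(priors[c])
        for t in vocab:
            if t in doc_terms:
                score += math.log(cond_prob[c][t])       # term present in the document
            else:
                score += math.log(1.0 - cond_prob[c][t]) # absence counts as evidence too
        scores[c] = score
    return max(scores, key=scores.get)
```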

65 Exercise Estimate parameters of Bernoulli classifier
Classify test document 65

66 Example: Parameter estimates
The denominators are (3 + 2) and (1 + 2) because there are three documents in c and one document in c̄, and because the constant B is 2, as there are two cases to consider for each term: occurrence and nonoccurrence. (No information corresponds to probability 0.5.) 66

67 Example: Classification
Thus, the classifier assigns the test document to c̄ = not-China. When looking only at binary occurrence and not at term frequency, JAPAN and TOKYO are indicators for c̄ (2/3 > 1/5), and the conditional probabilities of CHINESE for c and c̄ are not different enough (4/5 vs. 2/3) to affect the classification decision. The presence of 'Japan' and 'Tokyo' in class not-China, and their absence in class China, is critical to this classification. The term 'Chinese' is not as decisive, because it is present in both classes. 67
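Assuming the same IIR worked example as before (three documents in c = China, one in c̄, test document d5 = "Chinese Chinese Chinese Tokyo Japan"), the Bernoulli computation behind this decision is roughly:

```latex
\hat{P}(\text{Chinese} \mid c) = \tfrac{3+1}{3+2} = \tfrac{4}{5}, \quad
\hat{P}(\text{Japan} \mid c) = \hat{P}(\text{Tokyo} \mid c) = \tfrac{0+1}{3+2} = \tfrac{1}{5}, \quad
\hat{P}(\text{Chinese} \mid \bar{c}) = \hat{P}(\text{Japan} \mid \bar{c}) = \hat{P}(\text{Tokyo} \mid \bar{c}) = \tfrac{1+1}{1+2} = \tfrac{2}{3}

P(c \mid d_5) \propto \tfrac{3}{4} \cdot \tfrac{4}{5} \cdot \tfrac{1}{5} \cdot \tfrac{1}{5} \cdot \left(1 - \tfrac{2}{5}\right)^3 \approx 0.005, \qquad
P(\bar{c} \mid d_5) \propto \tfrac{1}{4} \cdot \tfrac{2}{3} \cdot \tfrac{2}{3} \cdot \tfrac{2}{3} \cdot \left(1 - \tfrac{1}{3}\right)^3 \approx 0.022
```

The (1 − ·) factors correspond to the vocabulary terms Beijing, Macao, and Shanghai, which do not occur in d5.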

68 Violation of Naive Bayes independence assumption
Exercise Examples for why conditional independence assumption is not really true? Examples for why positional independence assumption is not really true? 68

69 Violation of Naive Bayes independence assumption
The independence assumptions do not really hold of documents written in natural language. Conditional independence: Positional independence: How can Naive Bayes work if it makes such inappropriate assumptions? 69

70 Why does Naive Bayes work?
Naive Bayes can work well even though the conditional independence assumptions are badly violated. The key point: classification is about predicting the correct class, not about accurately estimating probabilities. Correct estimation ⇒ accurate prediction, but not vice versa! Double counting of evidence pushes the estimated posteriors toward the extremes (e.g., 0.01 and 0.99), i.e., it causes underestimation and overestimation. 70

71 Naive Bayes is not so naive
Naive Bayes has won some bakeoffs (e.g., KDD-CUP 97). More robust to nonrelevant features than some more complex learning methods. More robust to concept drift (the definition of a class changing over time) than some more complex learning methods. Better than methods like decision trees when we have many equally important features. A good dependable baseline for text classification. Optimal if the independence assumptions hold (never true for text, but true for some domains). Very fast, with low storage requirements. 71

72 Feature Selection: Why?
Text collections have a large number of features: 10,000 – 1,000,000 unique words, and more. Feature selection makes using a particular classifier feasible (some classifiers can't deal with hundreds of thousands of features), reduces training time (training time for some methods is quadratic or worse in the number of features), and can improve generalization (performance) by eliminating noise features and avoiding overfitting (the bias-variance tradeoff). The Bernoulli model is particularly sensitive to noise and requires feature selection for acceptable accuracy. Prasad L13NaiveBayesClassify 13.5

73 Feature selection: how?
Two ideas: Hypothesis-testing statistics: are we confident that the value of one categorical variable is associated with the value of another? The chi-square test (χ²). Information theory: how much information does the value of one categorical variable give you about the value of another? Mutual information (MI). They're similar, but χ² measures the confidence in an association (based on the available statistics), while MI measures the extent of the association (assuming perfect knowledge of the probabilities). Prasad L13NaiveBayesClassify 13.5

74 2 statistic (CHI) 2 test is used to test the independence of two events, which for us is occurrence of a term and occurrence of a class. N.. = Observed Frequency E.. = Expected frequency This basically checks for independence of two events (terms and classes) by measuring deviation (of the observed counts) from the average (conforming to independence assumption (‘Null hypothesis’)). Larger Chi-squared values implies NOT INDEPENDENT (rejecting the ‘Null hypothesis’). Note that in the example shown: Jaguar and auto are correlated (2 vs 0.25) Jaguar and not auto a bit less (3 vs 4.75), while other entries point to independence. ==================================================== Independent means joint probability is product of the individual probabilities Positive correlation => joint prob > prod. indv. prob. Negative correlation => joint prob < prod indv. prob. 13.5.2

75 Example from Reuters-RCV1 (term export class poultry)
Prasad L13NaiveBayesClassify

76 Computing expected frequency
where N = N11 + N10 + N01 + N00 = 49 + 27,652 + 141 + 774,106 = 801,948. Prasad L13NaiveBayesClassify

77 Observed frequencies and computed expected frequencies
where N = N11 + N10 + N01 + N00 = 49 + 27,652 + 141 + 774,106 = 801,948. A high value of X² implies dependence – whether positive or negative (notice the squaring). N11 ≫ E11 ⇒ export and poultry are correlated; the other cells are not as significant. Prasad L13NaiveBayesClassify
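Filling in the arithmetic for the export/poultry cell, following the example in IIR §13.5.2 which this slide appears to use:

```latex
E_{11} \;=\; N \cdot \frac{N_{11} + N_{10}}{N} \cdot \frac{N_{11} + N_{01}}{N}
      \;=\; \frac{(49 + 27{,}652)\,(49 + 141)}{801{,}948}
      \;=\; \frac{27{,}701 \times 190}{801{,}948} \;\approx\; 6.6
```

With all four cells treated the same way, the statistic comes out at X² ≈ 284, the value interpreted on the next slide.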

78 Interpretation of CHI-squared results
An X² value of 284 ≫ 10.83 implies that poultry and export are dependent with 99.9% certainty, as the possibility of their being independent is as remote as < 0.1%. Prasad L13NaiveBayesClassify

79 2 statistic (CHI) 2 is interested in (fo – fe)2/fe summed over all table entries: is the observed number what you’d expect given the marginals? The null hypothesis is rejected with confidence .999, since 12.9 > (the value for .999 confidence). 9500 500 (4.75) (0.25) (9498) 3 Class  auto (502) 2 Class = auto Term  jaguar Term = jaguar This basically checks for independence of two events (terms and classes) by measuring deviation (of the observed counts) from the average (conforming to independence assumption (‘Null hypothesis’)). Larger Chi-squared values implies NOT INDEPENDENT (rejecting the ‘Null hypothesis’). Note that in the example shown: Jaguar and auto are correlated (2 vs 0.25) Jaguar and not auto a bit less (3 vs 4.75), while other entries point to independence. ==================================================== Independent means joint probability is product of the individual probabilities Positive correlation => joint prob > prod. indv. prob. Negative correlation => joint prob < prod indv. prob. expected: fe observed: fo 13.5.2

80 χ² statistic (CHI) There is a simpler formula for the 2×2 χ²: χ² = N (AD − BC)² / ((A + C)(B + D)(A + B)(C + D)), where A = #(t, c), B = #(t, ¬c), C = #(¬t, c), D = #(¬t, ¬c), and N = A + B + C + D. For complete independence of term and category, the cell counts factor as A = tc·N, B = t(1−c)·N, C = (1−t)c·N, D = (1−t)(1−c)·N (writing t and c for the marginal fractions), so AD − BC = 0 and χ² = 0. With the jaguar/auto counts A = 2, B = 3, C = 500, D = 9500 and N = 10005, the formula gives χ² = 10005 × (2×9500 − 500×3)² / ((2+500)(3+9500)(2+3)(500+9500)) ≈ 12.9, consistent with the value on the previous slide.

81 Feature selection via Mutual Information
In the training set, choose the k words which best discriminate (give the most information about) the categories. The mutual information between a word and a class is computed for each word w and each category c. Note that if the term and the class are independent, then p(ew, ec) = p(ew) · p(ec), and thus p(ew, ec) log [ p(ew, ec) / (p(ew) p(ec)) ] = 0. Otherwise, if the term and the class are completely dependent, then p(ew, ec) = p(ew) = p(ec), and the term p(ew, ec) log [ p(ew, ec) / (p(ew) p(ec)) ] > 0 and takes its highest value. The weighting of the log term by p(ew, ec) lets us pick the most promising features (the most bang for the buck among class-discriminating features). Prasad L13NaiveBayesClassify 13.5.1
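The mutual information between a term and a class, in symbols (U indicates term occurrence, C class membership):

```latex
I(U; C) \;=\; \sum_{e_t \in \{0,1\}} \sum_{e_c \in \{0,1\}}
P(U = e_t, C = e_c)\,\log_2 \frac{P(U = e_t, C = e_c)}{P(U = e_t)\,P(C = e_c)}
```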

82 Formula for computation (Equation 13.17)
N01 is the number of documents that contain the term t (et = 1) and are not in class c (ec = 0). N1. is the number of documents that contain the term t (et = 1). P(1,1) = N11 / N, P(1,.) = N1. / N, P(.,1) = N.1 / N, with N1. = N10 + N11 and N.1 = N01 + N11. Intuition: if the counts are spread evenly over the 2×2 table (e.g., all ones), term and class are independent and the MI is 0; if all the mass sits in a single cell (one cell equal to 4 and the rest 0), they are completely dependent. Prasad L13NaiveBayesClassify
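Written out in terms of the document counts (this is the form referred to as Equation 13.17):

```latex
I(U; C) \;=\; \frac{N_{11}}{N}\log_2\frac{N\,N_{11}}{N_{1.}N_{.1}}
        \;+\; \frac{N_{01}}{N}\log_2\frac{N\,N_{01}}{N_{0.}N_{.1}}
        \;+\; \frac{N_{10}}{N}\log_2\frac{N\,N_{10}}{N_{1.}N_{.0}}
        \;+\; \frac{N_{00}}{N}\log_2\frac{N\,N_{00}}{N_{0.}N_{.0}}
```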

83 Example Prasad L13NaiveBayesClassify

84 Feature selection via MI (contd.)
For each category we build a list of k most discriminating terms. For example (on 20 Newsgroups): sci.electronics: circuit, voltage, amp, ground, copy, battery, electronics, cooling, … rec.autos: car, cars, engine, ford, dealer, mustang, oil, collision, autos, tires, toyota, … Greedy: does not account for correlations between terms Prasad L13NaiveBayesClassify

85 Features with High MI scores for 6 Reuters-RCV1 classes
Prasad L13NaiveBayesClassify

86 Feature Selection Mutual Information Chi-square
Mutual information: clear information-theoretic interpretation, but may select rare uninformative terms. Chi-square: statistical foundation, but may select very slightly informative frequent terms that are not very useful for classification. Just use the commonest terms? No particular foundation, but in practice this is often 90% as good. A term that occurs in a small subset of a class's documents is soundly indicative but has low coverage; frequent terms are problematic (stop words, for example). Prasad L13NaiveBayesClassify

87 Feature selection for NB : Overall Significance
In general, feature selection is necessary for multivariate Bernoulli NB; otherwise you suffer from noise and multi-counting. "Feature selection" really means something different for multinomial NB: it means dictionary truncation, and this "feature selection" normally isn't needed for multinomial NB. (The multinomial NB model effectively has only one feature.) Prasad L13NaiveBayesClassify

88 Evaluating Categorization
Evaluation must be done on test data that are independent of the training data (usually a disjoint set of instances). Classification accuracy: c/n, where n is the total number of test instances and c is the number of test instances correctly classified by the system. Adequate if there is one class per document; otherwise use the F measure for each class. Results can vary based on sampling error due to different training and test sets; average results over multiple training and test sets (splits of the overall data) for the best estimates. See IIR 13.6 for evaluation on Reuters-21578. Prasad L13NaiveBayesClassify 13.6
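The evaluation measures mentioned above, in symbols (TP, FP, FN counted per class for the F measure):

```latex
\text{accuracy} = \frac{c}{n}, \qquad
P = \frac{TP}{TP + FP}, \quad
R = \frac{TP}{TP + FN}, \quad
F_1 = \frac{2PR}{P + R}
```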

89 WebKB Experiment (1998) Classify webpages from CS departments into:
student, faculty, course, project. Train on ~5,000 hand-labeled web pages from Cornell, Washington, U. Texas, and Wisconsin. Crawl and classify a new site (CMU). Results:

90 NB Model Comparison: WebKB
Prasad

91 Prasad L13NaiveBayesClassify

92 Naïve Bayes on spam email
This graph is from "Naive-Bayes vs. Rule-Learning in Classification of Email" by Jefferson Provost, UT Austin. Prasad L13NaiveBayesClassify 13.6

93 SpamAssassin Naïve Bayes has found a home in spam filtering
Paul Graham's "A Plan for Spam": a mutant with more mutant offspring, i.e., a Naive Bayes-like classifier with weird parameter estimation. Widely used in spam filters; classic Naive Bayes is superior when appropriately used (according to David D. Lewis). But spam filters also use many other things: black-hole lists, etc. Many topic filters also use NB classifiers. Prasad L13NaiveBayesClassify

94 Example: Sensors NB FACTORS: P(s) = 1/2 P(+|s) = 1/4 P(+|r) = 3/4
Reality (Raining vs. Sunny): P(+,+,r) = 3/8, P(−,−,r) = 1/8, P(+,+,s) = 1/8, P(−,−,s) = 3/8. NB model factors: P(s) = 1/2, P(+|s) = 1/4, P(+|r) = 3/4. NB predictions: P(r,+,+) = (½)(¾)(¾), P(s,+,+) = (½)(¼)(¼), giving P(r|+,+) = 9/10 and P(s|+,+) = 1/10. Raining? Two sensors M1, M2. Prasad L13NaiveBayesClassify
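Completing the arithmetic for the sensor example:

```latex
P(r, +, +) = \tfrac{1}{2} \cdot \tfrac{3}{4} \cdot \tfrac{3}{4} = \tfrac{9}{32}, \qquad
P(s, +, +) = \tfrac{1}{2} \cdot \tfrac{1}{4} \cdot \tfrac{1}{4} = \tfrac{1}{32}, \qquad
P(r \mid +, +) = \frac{9/32}{9/32 + 1/32} = \tfrac{9}{10}, \qquad
P(s \mid +, +) = \tfrac{1}{10}
```

Under the "Reality" distribution on this slide, P(r | +, +) = (3/8)/(3/8 + 1/8) = 3/4, so naive Bayes overstates the posterior (9/10 vs. 3/4) because it double-counts the two correlated sensors, which is the overconfidence discussed on the next slide.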

95 Naïve Bayes Posterior Probabilities
Classification results of naïve Bayes (choosing the class with maximum posterior probability) are usually fairly accurate. However, due to the inadequacy of the conditional independence assumption, the actual posterior-probability numerical estimates are not; output probabilities are commonly very close to 0 or 1. Correct estimation ⇒ accurate prediction, but correct probability estimation is NOT necessary for accurate prediction (we just need the right ordering of the probabilities). Prasad L13NaiveBayesClassify

96 Naive Bayes is Not So Naive
Naïve Bayes took first and second place in the KDD-CUP 97 competition, among 16 (then) state-of-the-art algorithms. Goal: a direct-mail response prediction model for the financial services industry: predict whether the recipient of mail will actually respond to the advertisement (750,000 records). Robust to irrelevant features: irrelevant features cancel each other out without affecting results, whereas decision trees can suffer heavily from this. Very good in domains with many equally important features (decision trees suffer from fragmentation in such cases, especially with little data). A good dependable baseline for text classification (but not the best!). Optimal if the independence assumptions hold: if the assumed independence is correct, it is the Bayes optimal classifier for the problem. Very fast: learning requires one counting pass over the data; testing is linear in the number of attributes and the document-collection size. Low storage requirements. References: IIR Ch. 13; Fabrizio Sebastiani, Machine Learning in Automated Text Categorization, ACM Computing Surveys, 34(1):1-47, 2002; Yiming Yang and Xin Liu, A re-examination of text categorization methods, Proceedings of SIGIR, 1999; Andrew McCallum and Kamal Nigam, A Comparison of Event Models for Naive Bayes Text Classification, AAAI/ICML-98 Workshop on Learning for Text Categorization; Tom Mitchell, Machine Learning, McGraw-Hill, 1997 (a clear, simple explanation of Naïve Bayes). Open Calais: automatic semantic tagging, free (but they can keep your data), provided by Thomson Reuters. Weka: a data-mining software package that includes an implementation of Naive Bayes. Reuters-21578: the most famous text classification evaluation set and still widely used by lazy people (but now it's too small for realistic experiments – you should use Reuters RCV1). Prasad L13NaiveBayesClassify

