4 Decoding
- Finding the words given the acoustics is an HMM inference problem
- We want to know which state sequence x1:T is most likely given the evidence e1:T:
  x*1:T = argmax over x1:T of p(x1:T | e1:T)
- From the sequence x, we can simply read off the words
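A minimal sketch of this decoding step, via the standard Viterbi algorithm (the table layout and function name are illustrative; a real speech model has far more states):

```python
def viterbi(states, start_p, trans_p, emit_p, evidence):
    # best[t][s] = probability of the best state sequence ending in s at time t
    best = [{s: start_p[s] * emit_p[s][evidence[0]] for s in states}]
    back = [{}]
    for t in range(1, len(evidence)):
        best.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda r: best[t - 1][r] * trans_p[r][s])
            best[t][s] = best[t - 1][prev] * trans_p[prev][s] * emit_p[s][evidence[t]]
            back[t][s] = prev
    # Follow back-pointers from the best final state to recover x*1:T.
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(evidence) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))
```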
5 Parameter Estimation
- Estimating the distribution of a random variable
- Elicitation: ask a human (why is this hard?)
- Empirically: use training data (learning!)
- E.g.: for each outcome x, look at the empirical rate of that value:
  p_ML(x) = count(x) / (total number of samples)
- This is the estimate that maximizes the likelihood of the data
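A concrete sketch of this maximum-likelihood estimate as a normalized count (the coin-flip data is made up):

```python
from collections import Counter

def ml_estimate(samples):
    # Empirical rate of each outcome: count(x) / total number of samples.
    counts = Counter(samples)
    total = len(samples)
    return {x: c / total for x, c in counts.items()}

# e.g. three observed flips: p(heads) = 2/3, p(tails) = 1/3
print(ml_estimate(["heads", "heads", "tails"]))
```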
6 Example: Spam Filter
- Input: an email
- Output: spam/ham
- Setup:
  - Get a large collection of example emails, each labeled "spam" or "ham"
  - Note: someone has to hand-label all this data!
  - Want to learn to predict labels of new, future emails
- Features: the attributes used to make the ham / spam decision
  - Words: FREE!
  - Text patterns: $dd, CAPS
  - Non-text: SenderInContacts
  - …
7 Example: Digit Recognition
- Input: images / pixel grids
- Output: a digit 0-9
- Setup:
  - Get a large collection of example images, each labeled with a digit
  - Note: someone has to hand-label all this data!
  - Want to learn to predict labels of new, future digit images
- Features: the attributes used to make the digit decision
  - Pixels: (6,8) = ON
  - Shape patterns: NumComponents, AspectRatio, NumLoops
  - …
8 A Digit Recognizer
- Input: pixel grids
- Output: a digit 0-9
10 Important Concepts
- Data: labeled instances, e.g. emails marked spam/ham
  - Training set
  - Held-out set (we will give examples today)
  - Test set
  (see the split sketch below)
- Features: attribute-value pairs that characterize each x
- Experimentation cycle
  - Learn parameters (e.g. model probabilities) on the training set
  - (Tune hyperparameters on the held-out set)
  - Compute accuracy on the test set
  - Very important: never "peek" at the test set!
- Evaluation
  - Accuracy: fraction of instances predicted correctly
- Overfitting and generalization
  - Want a classifier which does well on test data
  - Overfitting: fitting the training data very closely, but not generalizing well
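A sketch of that three-way split (the 80/10/10 proportions are an assumption, not from the slide):

```python
import random

def split_data(instances, seed=0):
    # Shuffle, then carve off training / held-out / test portions.
    data = instances[:]
    random.Random(seed).shuffle(data)
    n = len(data)
    train = data[: int(0.8 * n)]
    held_out = data[int(0.8 * n): int(0.9 * n)]
    test = data[int(0.9 * n):]  # never "peek" at this until the final run!
    return train, held_out, test
```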
11 General Naive Bayes
- A general naive Bayes model:
  p(Y, F1 … Fn) = p(Y) ∏i p(Fi|Y)
- We only specify how each feature depends on the class
- Total number of parameters is linear in n
12 General Naive Bayes
- What do we need in order to use naive Bayes?
- Inference (you know this part)
  - Start with a bunch of conditionals, p(Y) and the p(Fi|Y) tables
  - Use standard inference to compute p(Y|F1…Fn)
  - Nothing new here
- Learning: estimates of local conditional probability tables
  - p(Y), the prior over labels
  - p(Fi|Y) for each feature (evidence variable)
  - These probabilities are collectively called the parameters of the model and denoted by θ
13 Inference for Naive Bayes
- Goal: compute the posterior over causes, p(Y|f1…fn)
- Step 1: get the joint probability of causes and evidence
  p(y, f1…fn) = p(y) ∏i p(fi|y) for each value y
- Step 2: get the probability of the evidence
  p(f1…fn) = Σy p(y, f1…fn)
- Step 3: renormalize
  p(y|f1…fn) = p(y, f1…fn) / p(f1…fn)
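A minimal sketch of these three steps (the table layout is an assumption about how the conditionals are stored):

```python
def nb_posterior(prior, cond, evidence):
    # prior: {y: p(Y=y)}; cond: {y: {i: {f: p(Fi=f | Y=y)}}}
    # evidence: {i: observed value of feature Fi}
    # Step 1: joint probability of each cause with the evidence.
    joint = {}
    for y in prior:
        p = prior[y]
        for i, f in evidence.items():
            p *= cond[y][i][f]
        joint[y] = p
    # Step 2: probability of the evidence (sum over causes).
    z = sum(joint.values())
    # Step 3: renormalize to get the posterior.
    return {y: p / z for y, p in joint.items()}
```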
14 Naive Bayes for Digits
- Simple version:
  - One feature Fij for each grid position <i,j>
  - Possible feature values are on / off, based on whether intensity is more or less than 0.5 in the underlying image
  - Each input image maps to a feature vector of on/off values, one per pixel
- Here: lots of features, each is binary-valued
- Naive Bayes model:
  p(Y, F0,0 … F15,15) = p(Y) ∏i,j p(Fi,j|Y)
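A sketch of this feature-extraction step (assuming intensities in [0,1]; the 0.5 threshold is from the slide):

```python
def pixel_features(image):
    # image: 2D list of intensities in [0, 1].
    # Feature F(i,j) is "on" iff the intensity at (i,j) exceeds 0.5.
    return {(i, j): intensity > 0.5
            for i, row in enumerate(image)
            for j, intensity in enumerate(row)}
```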
15 Learning in NB (Without smoothing)
- p(Y=y) approximated by the frequency of each y in the training data:
  p(Y=y) = count(Y=y) / (number of training examples)
- p(F=f|Y=y) approximated by the frequency of the pair (y,f):
  p(F=f|Y=y) = count(F=f, Y=y) / count(Y=y)
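A sketch of these relative-frequency estimates (unsmoothed, so unseen (feature, label) pairs get zero probability; the data layout is an assumption):

```python
from collections import Counter, defaultdict

def learn_nb(examples):
    # examples: list of (features, label) pairs; features is {name: value}.
    label_counts = Counter(label for _, label in examples)
    pair_counts = defaultdict(Counter)  # pair_counts[(name, label)][value]
    for features, label in examples:
        for name, value in features.items():
            pair_counts[(name, label)][value] += 1
    n = len(examples)
    prior = {y: c / n for y, c in label_counts.items()}            # p(Y=y)
    cond = {key: {v: c / label_counts[key[1]] for v, c in counts.items()}
            for key, counts in pair_counts.items()}                # p(F=f|Y=y)
    return prior, cond
```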
17 Example: Spam Filter
- Naive Bayes spam filter
- Data:
  - Collection of emails labeled spam or ham
  - Note: someone has to hand-label all this data!
  - Split into training, held-out, and test sets
- Classifiers
  - Learn on the training set
  - (Tune it on a held-out set)
  - Test it on new emails
18 Naive Bayes for Text
- Bag-of-words Naive Bayes:
  - Features: Wi is the word at position i
  - Predict unknown class label (spam vs. ham)
  - Each Wi is identically distributed
- Generative model:
  p(C, W1 … Wn) = p(C) ∏i p(Wi|C)
- Tied distributions and bag-of-words
  - Usually, each variable gets its own conditional probability distribution p(F|Y)
  - In a bag-of-words model:
    - Each position is identically distributed
    - All positions share the same conditional probabilities p(W|C)
  - Why make this assumption?
- Note: Wi is the word at position i, not the ith word in the dictionary!
19 Example: Spam Filtering
- Model:
  p(C, W1 … Wn) = p(C) ∏i p(Wi|C)
- What are the parameters?

  p(C):
    ham   0.66
    spam  0.33

  p(W|ham):
    the   0.0156
    to    0.0153
    and   0.0115
    of    0.0095
    you   0.0093
    a     0.0086
    with  0.0080
    from  0.0075
    …

  p(W|spam):
    the   0.0210
    to    0.0133
    of    0.0119
    2002  0.0110
    with  0.0108
    from  0.0107
    and   0.0105
    a     0.0100
    …

- Where do these tables come from?
20 Spam example

  Word      Σ log p(w|spam)   Σ log p(w|ham)
  (prior)        -1.1              -0.4
  Gary          -11.8              -8.9
  would         -19.1             -16.0
  you           -23.8             -21.8
  like          -30.9             -28.9
  to            -35.1             -33.2
  lose          -44.5             -44.0
  weight        -53.3             -55.0
  while         -61.5             -63.2
  you           -66.2             -69.0
  sleep         -76.0             -80.5

Running log-probability totals for "Gary would you like to lose weight while you sleep": the final spam score (-76.0) beats the ham score (-80.5), so the message is classified as spam.
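A sketch of this running-total computation in log space (the probability tables are assumed inputs; summing log probs avoids numeric underflow on long emails):

```python
import math

def score(words, prior, cond):
    # Returns log p(class) + sum_i log p(word_i | class) for one class.
    total = math.log(prior)
    for w in words:
        total += math.log(cond[w])
    return total

# Classify by comparing scores, e.g.:
# spam_score = score(words, p_spam, p_w_given_spam)
# ham_score  = score(words, p_ham,  p_w_given_ham)
# label = "spam" if spam_score > ham_score else "ham"
```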
21 Problem with this approach

[Figure: pixel grids comparing p(feature, C=2) and p(feature, C=3) for a test image, with p(C=2) = 0.1 and p(C=3) = 0.1. Most pixels have comparable conditionals, e.g. p(on|C=2) = 0.8 vs p(on|C=3) = 0.8, p(on|C=2) = 0.1 vs p(on|C=3) = 0.9, p(on|C=2) = 0.1 vs p(on|C=3) = 0.7. But one pixel has p(on|C=2) = 0.01 vs p(on|C=3) = 0.0, so the entire joint for C=3 is zero: 2 wins!!]
22 Another example
- Posteriors determined by relative probabilities (odds ratios):

  Highest odds ratios for ham (all inf):
    south-west
    nation
    morally
    nicely
    extent
    seriously
    …

  Highest odds ratios for spam (all inf):
    screens
    minute
    guaranteed
    $205.00
    delivery
    signature
    …

- What went wrong here?
23 Generalization and Overfitting
- Relative-frequency parameters will overfit the training data!
  - Just because we never saw a 3 with pixel (15,15) on during training doesn't mean we won't see it at test time
  - Unlikely that every occurrence of "minute" is 100% spam
  - Unlikely that every occurrence of "seriously" is 100% ham
  - What about all the words that don't occur in the training set at all?
  - In general, we can't go around giving unseen events zero probability
- As an extreme case, imagine using the entire email as the only feature
  - Would get the training data perfect (if the labeling is deterministic)
  - Wouldn't generalize at all
- Just making the bag-of-words assumption gives us some generalization, but isn't enough
- To generalize better: we need to smooth or regularize the estimates
24 Estimation: Smoothing
- Maximum likelihood estimates:
  p_ML(x) = count(x) / (total number of samples)
- Problems with maximum likelihood estimates:
  - If I flip a coin once, and it's heads, what's the estimate for p(heads)?
  - What if I flip 10 times with 8 heads?
  - What if I flip 10M times with 8M heads?
- Basic idea:
  - We have some prior expectation about parameters (here, the probability of heads)
  - Given little evidence, we should skew towards our prior
  - Given a lot of evidence, we should listen to the data
25 Estimation: Laplace Smoothing
- Laplace's estimate (extended): pretend you saw every outcome k extra times
  p_LAP,k(x) = (count(x) + k) / (N + k|X|)
- What's Laplace with k = 0?
- k is the strength of the prior
- Laplace for conditionals: smooth each condition independently:
  p_LAP,k(x|y) = (count(x,y) + k) / (count(y) + k|X|)
  (see the sketch below)
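A minimal sketch of the unconditional estimate (the function name and example data are mine):

```python
from collections import Counter

def laplace(samples, outcomes, k=1):
    # Pretend every outcome was seen k extra times:
    # (count(x) + k) / (N + k * |X|).
    counts = Counter(samples)
    n = len(samples)
    return {x: (counts[x] + k) / (n + k * len(outcomes)) for x in outcomes}

# k = 0 recovers the unsmoothed maximum-likelihood estimate:
print(laplace(["heads"], ["heads", "tails"], k=0))  # {'heads': 1.0, 'tails': 0.0}
print(laplace(["heads"], ["heads", "tails"], k=1))  # ≈ {'heads': 0.67, 'tails': 0.33}
```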
26 Estimation: Linear Smoothing
- In practice, Laplace often performs poorly for p(X|Y):
  - When |X| is very large
  - When |Y| is very large
- Another option: linear interpolation
  - Also get p(X) from the data
  - Make sure the estimate of p(X|Y) isn't too different from p(X):
    p_LIN(x|y) = α · p_ML(x|y) + (1 - α) · p_ML(x)
- What if α is 0? 1?
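A sketch of the interpolated estimate (assuming the two maximum-likelihood tables have already been computed, e.g. with the counting code above):

```python
def interpolate(p_x_given_y, p_x, alpha):
    # alpha = 1 keeps the pure conditional estimate; alpha = 0 ignores the
    # class entirely and backs off to the marginal p(X).
    return {x: alpha * p_x_given_y.get(x, 0.0) + (1 - alpha) * p_x[x]
            for x in p_x}
```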
27 Real NB: Smoothing
- For real classification problems, smoothing is critical
- New odds ratios:

  Highest ham odds ratios:
    helvetica  11.4
    seems      10.8
    group      10.2
    ago         8.4
    area        8.3
    …

  Highest spam odds ratios:
    verdana    28.8
    Credit     28.4
    ORDER      27.2
    <FONT>     26.9
    money      26.5
    …

- Do these make more sense?
28 Tuning on Held-Out Data
- Now we've got two kinds of unknowns
  - Parameters: the probabilities p(X|Y), p(Y)
  - Hyperparameters, like the amount of smoothing to do: k, α
- Where to learn?
  - Learn parameters from the training data
  - Must tune hyperparameters on different data. Why?
  - For each value of the hyperparameters, train on the training set and evaluate on the held-out data
  - Choose the best value and do a final test on the test data
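A sketch of that loop for the smoothing strength k (the candidate values and the helpers `learn` and `accuracy` are assumptions, standing in for the training and evaluation code above):

```python
def tune_k(train, held_out, candidates=(0.001, 0.01, 0.1, 1, 10)):
    # Train once per hyperparameter value; score each on held-out data.
    best_k, best_acc = None, -1.0
    for k in candidates:
        model = learn(train, k=k)          # fit parameters on the training set
        acc = accuracy(model, held_out)    # evaluate here, never on the test set!
        if acc > best_acc:
            best_k, best_acc = k, acc
    return best_k
```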
30 What to Do About Errors?
- Need more features: words aren't enough!
  - Have you emailed the sender before?
  - Have 1K other people just gotten the same email?
  - Is the sending information consistent?
  - Is the email in ALL CAPS?
  - Do inline URLs point where they say they point?
  - Does the email address you by (your) name?
- Can add these information sources as new variables in the NB model
- Next class we'll talk about classifiers that let you add arbitrary features more easily