Machine Learning II, Pusan National University, Department of Electrical, Electronics and Computer Engineering, Artificial Intelligence Lab, Minho Kim


1 Machine Learning II, Pusan National University, Department of Electrical, Electronics and Computer Engineering, Artificial Intelligence Lab, Minho Kim (karma@pusan.ac.kr)

2 Bayes’ Rule Please answer the following question on probability. Suppose one is interested in a rare syntactic construction, perhaps parasitic gaps, which occurs on average once in 100,000 sentences. Joe Linguist has developed a complicated pattern matcher that attempts to identify sentences with parasitic gaps. It’s pretty good, but it’s not perfect: if a sentence has a parasitic gap, it will say so with probability 0.95; if it doesn’t, it will wrongly say it does with probability 0.005. Suppose the test says that a sentence contains a parasitic gap. What is the probability that this is true? Sol) G: the event of the sentence having a parasitic gap; T: the event of the test being positive. By Bayes’ rule, P(G|T) = P(T|G)P(G) / [P(T|G)P(G) + P(T|¬G)P(¬G)] = (0.95 × 0.00001) / (0.95 × 0.00001 + 0.005 × 0.99999) ≈ 0.002. So even after a positive test, the sentence is still very unlikely to actually contain a parasitic gap.
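A quick numeric check of the calculation above, as a minimal sketch in Python (the probabilities are exactly those given in the problem; the variable names are mine):

```python
# Bayes' rule for the parasitic-gap example above.
p_g = 1 / 100_000        # P(G): prior probability of a parasitic gap
p_t_given_g = 0.95       # P(T|G): test fires given a gap
p_t_given_not_g = 0.005  # P(T|~G): false-positive rate

# P(T) by the law of total probability
p_t = p_t_given_g * p_g + p_t_given_not_g * (1 - p_g)

# P(G|T) by Bayes' rule
p_g_given_t = p_t_given_g * p_g / p_t
print(f"P(G|T) = {p_g_given_t:.4f}")  # ~0.0019
```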

3 Naïve Bayes - Introduction A simple probabilistic classifier based on applying Bayes' theorem, with strong (naive) independence assumptions between the features.
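Concretely, the independence assumption lets the posterior factor into per-feature terms; this is the standard Naive Bayes decision rule (the notation below is mine, not from the slide):

```latex
\hat{c} = \arg\max_{c} P(c \mid x_1, \dots, x_n)
        = \arg\max_{c} \; P(c) \prod_{i=1}^{n} P(x_i \mid c)
```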

4 Naïve Bayes – Train & Test (Classification) [figure: the train and test phases of the classifier]
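A minimal sketch of the train/test split the figure describes: training estimates the class priors and per-class feature likelihoods from counts, and testing scores a new instance with them. The toy data and helper names here are illustrative, not from the slides.

```python
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (feature_tuple, label). Returns priors and likelihood counts."""
    priors = Counter(label for _, label in examples)
    likelihoods = defaultdict(Counter)  # likelihoods[label][(i, value)] = count
    for features, label in examples:
        for i, value in enumerate(features):
            likelihoods[label][(i, value)] += 1
    return priors, likelihoods

def classify(features, priors, likelihoods):
    """Pick the label maximizing P(label) * prod_i P(feature_i | label)."""
    total = sum(priors.values())
    best_label, best_score = None, -1.0
    for label, count in priors.items():
        score = count / total  # prior
        for i, value in enumerate(features):
            score *= likelihoods[label][(i, value)] / count  # likelihood
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy usage: (outlook, temperature) -> play
data = [(("sunny", "hot"), "no"), (("rainy", "mild"), "yes"),
        (("sunny", "mild"), "yes"), (("rainy", "hot"), "no")]
priors, likelihoods = train(data)
print(classify(("sunny", "mild"), priors, likelihoods))  # -> "yes"
```

Note that a feature value never seen with a class zeroes out that class's entire product, which is exactly the problem the smoothing slide below addresses.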

5 [image-only slide, not transcribed]

6 [image-only slide, not transcribed]

7 Naïve Bayes Examples

8 [image-only slide, not transcribed]

9 [image-only slide, not transcribed]

10 [image-only slide, not transcribed]

11 Smoothing A zero probability for any single feature zeroes out the probability of the entire instance. So how do we estimate the likelihood of unseen data? Laplace smoothing: add 1 to every type count to get an adjusted count c*.
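Spelled out, add-one (Laplace) smoothing replaces the maximum-likelihood estimate with one based on the adjusted count, where N is the total number of observations and V the number of distinct types (standard formulation; these symbols are not defined on the slide):

```latex
P_{\mathrm{Laplace}}(w) = \frac{c(w) + 1}{N + V}
```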

12 Laplace Smoothing Examples Add 1 to every type count to get an adjusted count c*.

Raw counts (Pat vs. Wait):
          Wait=True   Wait=False
  Some        4           0
  Full        1           4
  None        0           1

Smoothed counts:
          Wait=True   Wait=False
  Some    4+1 = 5     0+1 = 1
  Full    1+1 = 2     4+1 = 5
  None    0+1 = 1     1+1 = 2
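A small sketch computing the smoothed conditional estimates from the table above (the counts are from the slide; the function and variable names are mine):

```python
# Raw counts of Pat value given Wait outcome, from the slide's table.
counts = {
    "True":  {"Some": 4, "Full": 1, "None": 0},
    "False": {"Some": 0, "Full": 4, "None": 1},
}

def laplace(column):
    """P(pat | wait) with add-one smoothing: (c + 1) / (N + V)."""
    n = sum(column.values())  # total count N in this column
    v = len(column)           # number of types V (Some/Full/None)
    return {pat: (c + 1) / (n + v) for pat, c in column.items()}

for wait, column in counts.items():
    print(wait, laplace(column))
# The zero cells (e.g. Pat=None with Wait=True) now get a small
# nonzero probability instead of zeroing out the whole product.
```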

13 Decision Tree A flowchart-like structure: an internal node represents a test on an attribute, a branch represents an outcome of the test, a leaf node represents a class label, and a path from root to leaf represents a classification rule.
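A minimal sketch of that structure in Python. The attribute names and values are borrowed from the restaurant example used on the following slides, and the tuple-based node encoding is an illustrative choice, not the slides' notation:

```python
# A node is either a class label (leaf) or (attribute, {value: subtree}).
tree = ("Patrons", {
    "None": "No",   # leaf: class label
    "Some": "Yes",
    "Full": ("Hungry", {"Yes": "Yes", "No": "No"}),  # nested attribute test
})

def classify(tree, example):
    """Follow the path from root to leaf given an example's attribute values."""
    while isinstance(tree, tuple):           # internal node: test an attribute
        attribute, branches = tree
        tree = branches[example[attribute]]  # take the matching branch
    return tree                              # leaf: the class label

print(classify(tree, {"Patrons": "Full", "Hungry": "Yes"}))  # -> "Yes"
```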

14 Information Gain
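The figure on this slide is not transcribed; for reference, the standard entropy and information-gain definitions that the next slide relies on are (H is entropy over class proportions, S the example set, A an attribute):

```latex
H(S) = -\sum_{c} p_c \log_2 p_c
\qquad
IG(S, A) = H(S) - \sum_{v \in \mathrm{values}(A)} \frac{|S_v|}{|S|}\, H(S_v)
```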

15 Root Node Example For the training set of 6 positives and 6 negatives, H(6/12, 6/12) = 1 bit. Consider the attributes Patrons and Type: Patrons has the highest IG of all attributes and so is chosen by the learning algorithm as the root. Information gain is then repeatedly applied at internal nodes until all leaves contain only examples from one class or the other.
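A sketch of the root-node computation. The per-branch (positive, negative) counts below follow the standard restaurant data set (Russell & Norvig) that this example appears to be drawn from; they are not printed in the transcript:

```python
from math import log2

def entropy(pos, neg):
    """H over a binary class split; 0 * log2(0) is taken as 0."""
    total = pos + neg
    h = 0.0
    for count in (pos, neg):
        if count:
            p = count / total
            h -= p * log2(p)
    return h

def info_gain(splits, total=12):
    """IG = H(parent) - weighted sum of child entropies."""
    remainder = sum((p + n) / total * entropy(p, n) for p, n in splits)
    return entropy(6, 6) - remainder  # parent: 6 positives, 6 negatives

# (positives, negatives) per branch, assumed from the restaurant data set.
patrons = [(0, 2), (4, 0), (2, 4)]        # None, Some, Full
type_ = [(1, 1), (1, 1), (2, 2), (2, 2)]  # French, Italian, Thai, Burger
print(f"IG(Patrons) = {info_gain(patrons):.3f} bits")  # ~0.541
print(f"IG(Type)    = {info_gain(type_):.3f} bits")    # 0.000
```

Type splits every branch 50/50, so it gains nothing, while Patrons produces two pure branches, which is why it wins the root.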

