
1 Naïve Bayes based Model Billy Doran 09130985

2 “If the model does what people do, do people do what the model does?”

3 Bayesian Learning Determines the probability of a hypothesis H given a set of data D: P(H|D) = P(D|H)P(H) / P(D)

4 P(H) is the prior probability of H: the probability of observing H over the whole data set. P(H|D) is the posterior probability of H: given the data D, what is the probability of the hypothesis H? P(D) is the prior probability of observing D; it is the same for every hypothesis and can be ignored when comparing them. P(D|H) is the likelihood of observing the data given the hypothesis: does the hypothesis reproduce the data?
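
A minimal sketch of this relationship in Python (the priors and likelihoods below are illustrative placeholders, not values from the model):

```python
# Bayes' rule: P(H|D) = P(D|H) * P(H) / P(D)
# P(D) is obtained by summing P(D|H) * P(H) over all hypotheses.

priors = {"A": 0.375, "B": 0.375, "C": 0.25}        # P(H), illustrative values
likelihoods = {"A": 0.020, "B": 0.010, "C": 0.005}  # P(D|H), illustrative values

evidence = sum(likelihoods[h] * priors[h] for h in priors)               # P(D)
posteriors = {h: likelihoods[h] * priors[h] / evidence for h in priors}  # P(H|D)

print(posteriors)  # the posterior values sum to 1
```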

5 Maximum a Posteriori Probability To classify an example as belonging to one category or another, we look for the category with the maximal value of P(H|D). For example, taking a training pattern with dimension values A, X and C, the posterior probability that it belongs to Category A is: P(Category A|A,X,C)
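
A sketch of the classification step: because P(D) is shared by all categories, the maximum a posteriori category can be read off the unnormalised scores (the numbers reuse the worked example on slide 8):

```python
# MAP classification: choose the category with the largest posterior score.
scores = {
    "Category A": 0.00688,
    "Category B": 0.0031125,
    "Category C": 0.00312375,
}
map_category = max(scores, key=scores.get)
print(map_category)  # -> Category A
```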

6 Naïve Bayes The Naïve Bayes algorithm allows us to assume conditional independence of the dimensions. This means that we consider each dimension only in terms of its probability given the category: P(A,B|Cat A) = P(A|Cat A)P(B|Cat A). Using this assumption we are able to build a table of the conditional probabilities for each dimension.

7 Conditional Probability Table

Probabilities for Category A

              A        B        C
 Dimension1   0.6666   0        0.1
 Dimension2   0.5      0.1666   0.1
 Dimension3   0        0.1666

P(Dimension1=A|Category A) is 4/6, which is 0.6666.
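
A sketch of how such a table could be estimated by counting dimension values within the Category A training patterns (the patterns listed here are hypothetical, chosen only so that Dimension1 = A occurs 4 times out of 6):

```python
from collections import Counter

# Hypothetical Category A training patterns: one value per dimension.
patterns_cat_a = [
    ("A", "A", "B"),
    ("A", "B", "C"),
    ("A", "A", "B"),
    ("A", "B", "C"),
    ("B", "A", "A"),
    ("C", "A", "B"),
]

n = len(patterns_cat_a)
cpt = []  # cpt[d][value] = P(Dimension d+1 = value | Category A)
for d in range(3):
    counts = Counter(pattern[d] for pattern in patterns_cat_a)
    cpt.append({value: counts.get(value, 0) / n for value in ("A", "B", "C")})

for d, row in enumerate(cpt, start=1):
    print(f"Dimension{d}:", row)  # Dimension1 gives P(A|Category A) = 4/6
```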

8 Calculation In order to get the scores for the pattern (A, B, C) we first find:
P(Cat A|A,B,C) = P(A|Cat A)P(B|Cat A)P(C|Cat A)P(Cat A) = 0.666*0.1666*0.1666*0.375 = 0.00688
P(Cat B|A,B,C) = 0.166*0.5*0.1*0.375 = 0.0031125
P(Cat C|A,B,C) = 0.1*0.1*0.833*0.375 = 0.00312375
Next we normalise the scores to get values in the range [0, 1]:
A = 0.00688/(0.00688+0.0031125+0.00312375) = 0.52
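
The same calculation as a short Python sketch, using the probabilities quoted above:

```python
# Unnormalised score per category: product of the per-dimension conditional
# probabilities for the pattern (A, B, C) times the prior 0.375 used on the slide.
score_a = 0.666 * 0.1666 * 0.1666 * 0.375  # ~0.0069 (quoted as 0.00688 on the slide)
score_b = 0.166 * 0.5 * 0.1 * 0.375        # 0.0031125
score_c = 0.1 * 0.1 * 0.833 * 0.375        # 0.00312375

total = score_a + score_b + score_c
print(score_a / total)  # ~0.53 with these rounded inputs (the slide quotes 0.52)
```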

9 Conjunctions In order to calculate the conjunction of two categories we take their joint probability: P(A&B) = P(A)P(B). This is similar to the Prototype Theory treatment of conjunctions.
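
A sketch of this conjunction rule, using the single-category probabilities of training pattern #6 from the table that follows (variable names are mine):

```python
# Conjunction score: product of the two single-category probabilities.
p_a, p_b, p_c = 0.543442811, 0.407622871, 0.048934318  # training pattern #6

p_a_and_b = p_a * p_b
print(p_a_and_b)  # ~0.2215, matching the A&B column for that pattern
```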

10 Training Data

 #    A            B            C            A&B          A&C          B&C          Single  Joint
 1    0.571327651  0.085707718  0.342964631  0.048967189  0.195945177  0.029394716  A
 2    0.714316331  0.178525504  0.107158165  0.127523683  0.076544828  0.019130465  A
 3    0.88264362   0.07323744   0.04411894   0.064642559  0.038941301  0.003231158  A
 4    0.600528465  0.19937545   0.200096085  0.119730633  0.120163395  0.039894247  A
 5    0.33415783   0.55490177   0.1109404    0.185424771  0.037071603  0.061561024  B       AB
 6    0.543442811  0.407622871  0.048934318  0.221519719  0.026593003  0.019946747  A       AB
 7    0.060206562  0.903785195  0.036008243  0.0544138    0.002167932  0.032543716  B
 8    0.060206562  0.903785195  0.036008243  0.0544138    0.002167932  0.032543716  B
 9    0.142893902  0.71472682   0.142379278  0.102130104  0.020345131  0.101762289  B
 10   0.142893902  0.71472682   0.142379278  0.102130104  0.020345131  0.101762289  B
 11   0.243388618  0.080805021  0.67580636   0.019667022  0.164483576  0.054608547  C
 12   0.069905193  0.349651846  0.580442961  0.02444248   0.040575977  0.202952953  C
 13   0.028615214  0.017175999  0.954208788  0.000491495  0.027304888  0.016389489  C
 14   0.081233216  0.016188132  0.902578652  0.001315014  0.073319367  0.014611062  C
 15   0.028615214  0.017175999  0.954208788  0.000491495  0.027304888  0.016389489  C
 16   0.178514026  0.107151276  0.714334698  0.019128006  0.127518763  0.076541874  C

11 Training Data The model is an almost perfectly consistent learner: it reproduces the original training data with 100% accuracy. It classifies the conjunction examples #5 and #6 as B and A respectively; they obtain a significantly higher score in the AB conjunction than in the AC or BC conjunctions. This seems to suggest that these two examples are more representative of one member of the conjunction than the other.

12 Test Data

 #   A            B            C            A&B          A&C          B&C          Single  Joint
 1   0.543442811  0.407622871  0.048934318  0.221519719  0.026593003  0.019946747  A>B>C   AB>AC>BC
 2   0.069905193  0.349651846  0.580442961  0.02444248   0.040575977  0.202952953  C>B>A   BC>AC>AB
 3   0.394851186  0.078685831  0.526462983  0.031069194  0.207874533  0.041425177  C>A>B   AC>BC>AB
 4   0.192184325  0.346208696  0.461606979  0.066535885  0.088713626  0.15981235   C>B>A   BC>AC>AB
 5   0.142893902  0.71472682   0.142379278  0.102130104  0.020345131  0.101762289  C>B=A   AB=AC>AC

13 Graphs: Comparing Experimental results to Model results

14 Test Data The results are generally consistent with the experimental data, except for #3 and #4: For #3 the experiment predicts AC>AB>BC, while the model generates AC>BC>AB. For #4 the experimental data predicts C>B>A, while the model gives B>C>A.

15 Statistical Analysis The average correlation between the model and the experimental data was R = 0.88. At alpha = 0.05 and df = n-2 this was significant. Individual correlations: #1 0.82, #2 0.93, #3 0.85, #4 0.84, #5 0.88, #6 0.92.
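
A sketch of how one such correlation could be checked (the arrays here are placeholders, not the experiment's data; scipy's pearsonr returns the correlation and a two-tailed p-value):

```python
from scipy import stats

# Placeholder data: model scores vs. mean experimental ratings for one test pattern.
model_scores = [0.54, 0.41, 0.05, 0.22, 0.03, 0.02]
experiment   = [0.60, 0.35, 0.08, 0.25, 0.05, 0.04]

r, p_value = stats.pearsonr(model_scores, experiment)
print(r, p_value < 0.05)  # correlation and whether it is significant at alpha = 0.05
```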

16 Unusual Predictions How would the model handle ? Output: A>B>C, AC>AB>BC. Is it possible to ask the model about a triple conjunction? Example: the model predicts C>B=A, AB=AC>ABC>BC.
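
One way to read the triple conjunction is to extend the same independence rule to three categories, P(A&B&C) = P(A)P(B)P(C); a sketch using the single-category scores of test pattern #5 (this extension is my reading, not spelled out on the slide):

```python
# Pairwise and triple conjunctions under the independence assumption.
p_a, p_b, p_c = 0.142893902, 0.71472682, 0.142379278  # test pattern #5

print(p_a * p_b, p_a * p_c, p_b * p_c)  # AB, AC, BC
print(p_a * p_b * p_c)                  # ABC
```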

17 Conclusion Naïve Bayes produces a good hypothesis of how people learn category classification. The use of probabilities matches well with the underlying logic of the correlations between the dimensions and categories. Creating a Causal Network might be an informative way to investigate further the interactions between the individual dimensions.

18 Limitations As the model uses a version of prototype theory to calculate its conjunctions, it is not able to capture overextension. To rectify this, a formula in which C is the category and KC is the set of non-C categories can be used to approximate overextension.

19 Limitations The model also does not take into account negative evidence. While it captures the general trend of the categories, it does not, for example, represent the strength of negativity for Category C in test pattern #5. This pattern is very similar to the conjunction patterns given in the training data, and the strong negative reaction seems to be caused by the association between these conjunctions and Categories A and B.

20 The End

