# Review: P(h_i | d) and P(d | h_i) – MAP and ML hypothesis selection


Review

- P(h_i | d) – probability that the hypothesis is true, given the data (effect → cause). Used by MAP: select the hypothesis that is most likely given the observed data.
- P(d | h_i) – probability of observing the data, given the hypothesis (cause → effect). Used by ML: select the hypothesis most likely to have produced the observed data.
- Bayes' rule connects the two: P(h_i | d) = P(d | h_i) P(h_i) / P(d)
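The MAP rule above can be sketched numerically with the standard five-bag candy example (the cherry fractions and priors below are the textbook illustration, assumed here for concreteness):

```python
# Posterior over candy-bag hypotheses after observing lime candies in a row.
# Each bag h_i is characterized by its cherry fraction; the priors P(h_i)
# are the illustrative values from the standard five-bag example.
cherry_frac = [1.0, 0.75, 0.5, 0.25, 0.0]
prior       = [0.1, 0.2, 0.4, 0.2, 0.1]

def posterior(n_lime):
    """P(h_i | d) after n_lime lime draws, via Bayes' rule."""
    # Likelihood of the data: P(d | h_i) = (1 - cherry_frac_i)^n_lime
    unnorm = [p * (1 - f) ** n_lime for p, f in zip(prior, cherry_frac)]
    z = sum(unnorm)                        # P(d), the normalizer
    return [u / z for u in unnorm]

# MAP selects the hypothesis with the highest posterior.
post = posterior(5)
h_map = max(range(len(post)), key=lambda i: post[i])
print(h_map)  # -> 4: after 5 limes, the all-lime bag dominates
```

With no data the posterior equals the prior; each lime observation shifts mass toward the lime-heavy bags.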

ML Learning

- θ = c/N: the maximum-likelihood prediction is that the actual proportion of candies in the bag equals the proportion in the observed data.
- Problem: if some event has not yet been observed, it is assumed to have zero probability.
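A minimal sketch of the θ = c/N estimate and its zero-count problem (the Laplace-smoothing fix at the end is a standard remedy, added here for illustration, not taken from the slides):

```python
# Maximum-likelihood estimate of the cherry proportion: theta = c / N.
def ml_estimate(observations):
    """observations: list of 'cherry' / 'lime' draws."""
    c = observations.count('cherry')
    return c / len(observations)

draws = ['lime', 'lime', 'lime']
print(ml_estimate(draws))  # -> 0.0: cherry is declared impossible after 3 limes

# Common fix (Laplace smoothing): add one pseudo-count per outcome,
# so no event ever gets probability exactly zero.
def smoothed_estimate(observations, k=2):
    """k is the number of possible outcomes (cherry and lime here)."""
    c = observations.count('cherry')
    return (c + 1) / (len(observations) + k)

print(smoothed_estimate(draws))  # -> 0.2
```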

Figures (omitted): MAP prediction for the 50/50 bag; ML prediction for the 50/50 bag; MAP prediction for the 25/75 bag; ML prediction for the 25/75 bag.

Multiple Parameters

A new manufacturer wraps candies in different colors, with the wrapper color chosen probabilistically based on flavor.

Observed data counts: c = cherry candies, l = lime candies; r_c = cherry with red wrapper, g_c = cherry with green wrapper; r_l = lime with red wrapper, g_l = lime with green wrapper.

With complete (fully observable) data, the maximum-likelihood parameter problem decomposes into separate problems, one for each variable.
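The decomposition means each parameter is just a ratio of the counts for its own variable; a sketch with made-up counts:

```python
# With fully observed data, the ML problem decomposes: each parameter
# is estimated independently from the counts for its own variable.
# The counts below are hypothetical, for illustration only.
c, l   = 60, 40          # cherry / lime flavor counts
rc, gc = 45, 15          # red / green wrappers among cherry candies
rl, gl = 10, 30          # red / green wrappers among lime candies

theta  = c  / (c + l)    # P(Flavor = cherry)
theta1 = rc / (rc + gc)  # P(Wrapper = red | cherry)
theta2 = rl / (rl + gl)  # P(Wrapper = red | lime)
print(theta, theta1, theta2)  # -> 0.6 0.75 0.25
```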

Naïve Bayes Models

We can expand to additional attributes, assuming the attributes are conditionally independent given the class:

- θ = P(C = true)
- θ_i1 = P(X_i = true | C = true)
- θ_i2 = P(X_i = true | C = false)

There are 2n + 1 parameters, and they can be computed deterministically from counts (no search).
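A small sketch of estimating all 2n + 1 parameters by counting (the dataset and variable layout are made up for illustration):

```python
# Naive Bayes parameter estimation: one parameter for the class plus two
# per attribute (2n + 1 total), each a closed-form ratio of counts.
# Rows are (C, X1, X2); the data is hypothetical.
data = [
    (True,  True,  True),
    (True,  True,  False),
    (True,  False, True),
    (False, False, False),
    (False, True,  False),
    (False, False, False),
]

n_true = sum(1 for row in data if row[0])
theta = n_true / len(data)                 # P(C = true)

def attr_params(i):
    """Return (P(Xi = true | C = true), P(Xi = true | C = false))."""
    t = sum(1 for row in data if row[0] and row[1 + i])
    f = sum(1 for row in data if not row[0] and row[1 + i])
    return t / n_true, f / (len(data) - n_true)

params = [attr_params(i) for i in range(2)]
print(theta, params)  # theta = 0.5; each pair is (theta_i1, theta_i2)
```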

Learning Bayes Nets

If we don't know the structure of the Bayes net, we can resort to search:

- Start with a model with no links
- Add parents, fitting parameters as above
- Measure accuracy (the goal is best accuracy)

Alternatively:

- Start with an initial guess of the structure
- Make modifications by hill-climbing or simulated annealing: reverse, add, or delete arcs
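The greedy "add an arc if it improves the score" step can be sketched on two binary variables; training log-likelihood is used as the fit score here purely for illustration (real structure learners penalize complexity, e.g. with BIC):

```python
import math

# Toy structure search: start with no links between binary variables X
# and Y, then keep the arc X -> Y only if it improves the fit score.
# The synthetic data below makes X and Y strongly correlated.
data = [(0, 0)] * 40 + [(0, 1)] * 10 + [(1, 0)] * 10 + [(1, 1)] * 40

def ll_independent(data):
    """Log-likelihood under the no-links model: P(X) P(Y)."""
    n = len(data)
    px = sum(x for x, _ in data) / n
    py = sum(y for _, y in data) / n
    return sum(math.log(px if x else 1 - px) +
               math.log(py if y else 1 - py) for x, y in data)

def ll_with_arc(data):
    """Log-likelihood under the X -> Y model: P(X) P(Y | X)."""
    n = len(data)
    px = sum(x for x, _ in data) / n
    py_given = {}                      # P(Y = 1 | X = x), fit per x value
    for xv in (0, 1):
        rows = [y for x, y in data if x == xv]
        py_given[xv] = sum(rows) / len(rows)
    return sum(math.log(px if x else 1 - px) +
               math.log(py_given[x] if y else 1 - py_given[x])
               for x, y in data)

# Greedy hill-climbing step: adopt the arc only if the score rises.
best = ('X->Y',) if ll_with_arc(data) > ll_independent(data) else ()
print(best)  # -> ('X->Y',): the arc captures the X-Y correlation
```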

Agent architecture diagrams (omitted): simple reflex agents; model-based reflex agents; goal-based agents; utility-based agents; learning agents.
