
1 Bayesian networks practice

2 Semantics
e.g., P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e) = …
Suppose we have the variables X1,…,Xn. The probability for them to have the values x1,…,xn respectively is P(xn,…,x1). P(xn,…,x1) is short for P(Xn = xn, …, X1 = x1). We order the variables according to the topological order of the given Bayesian network, so that
P(xn,…,x1) = ∏i P(xi | parents(Xi))
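As a quick numerical check of these semantics, the product above can be evaluated directly. A minimal sketch, assuming the standard textbook CPT values for this burglary network (the deck shows its tables only as images, so the numbers are not transcribed from the slides):

```python
# One entry of the full joint distribution, as a product of CPT entries
# read off along the topological order. The CPT values are the standard
# textbook ones for the burglary network (assumed, not from the deck):
# P(j|a)=0.90, P(m|a)=0.70, P(a|¬b,¬e)=0.001, P(¬b)=0.999, P(¬e)=0.998.
p = 0.90 * 0.70 * 0.001 * 0.999 * 0.998
print(p)  # about 0.000628
```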

3 Inference in Bayesian Networks
The basic task is to compute the posterior probability of a query variable, given some observed event, that is, some assignment of values to a set of evidence variables.
Notation:
–X denotes the query variable.
–E denotes the set of evidence variables E1,…,Em, and e is a particular event, i.e. an assignment of values to the variables in E.
–Y denotes the set of the remaining variables (hidden variables).
A typical query asks for the posterior probability P(X | e1,…,em). E.g., we could ask: what's the probability of a burglary if both Mary and John call, P(burglary | johncalls, marycalls)?

4 Classification
For classification we compute and compare the posterior probabilities P(class = y | e) for each possible class value y. However, how do we compute such a posterior? What about the hidden variables Y1,…,Yk? They have to be summed out:
P(X | e) = α Σy P(X, e, y)

5 Inference by enumeration
Example: P(burglary | johncalls, marycalls)? (Abbrev. P(b | j, m).) Rewriting the query in terms of the full joint distribution and summing out the hidden variables Earthquake and Alarm:
P(b | j, m) = α Σe Σa P(b) P(e) P(a | b, e) P(j | a) P(m | a)
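A minimal sketch of this enumeration, again assuming the standard textbook CPT values for the burglary network (the deck shows its tables only as images): the hidden variables are summed out and the result is normalized.

```python
# CPTs for the burglary network; the numbers are the standard textbook
# values, assumed here rather than transcribed from the slides.
P_B = {True: 0.001, False: 0.999}                    # P(Burglary)
P_E = {True: 0.002, False: 0.998}                    # P(Earthquake)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(Alarm=true | B, E)
P_J = {True: 0.90, False: 0.05}                      # P(JohnCalls=true | Alarm)
P_M = {True: 0.70, False: 0.01}                      # P(MaryCalls=true | Alarm)

def joint(b, e, a, j, m):
    """One entry of the full joint, via the chain-rule semantics of slide 2."""
    pa = P_A[(b, e)]
    return (P_B[b] * P_E[e]
            * (pa if a else 1 - pa)
            * (P_J[a] if j else 1 - P_J[a])
            * (P_M[a] if m else 1 - P_M[a]))

# P(Burglary | johncalls, marycalls): sum out Earthquake and Alarm,
# then normalize over the two values of Burglary.
unnorm = {b: sum(joint(b, e, a, True, True)
                 for e in (True, False) for a in (True, False))
          for b in (True, False)}
z = sum(unnorm.values())
print({b: p / z for b, p in unnorm.items()})  # P(b | j, m) is about 0.284
```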

6 Another example
Once the right topology has been found, the probability table associated with each node is determined. Estimating such probabilities is fairly straightforward and is similar to the approach used by naïve Bayes classifiers.
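A minimal sketch of that estimation step, with hypothetical field names (`data` as a list of dicts holding values for a node and its parents), using the same relative-frequency counting a naïve Bayes learner would use:

```python
from collections import Counter

def estimate_cpt(data, child, parents):
    """P(child | parents) as relative frequencies of joint counts."""
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    parent_totals = Counter(tuple(row[p] for p in parents) for row in data)
    return {(pv, cv): n / parent_totals[pv] for (pv, cv), n in joint.items()}

# Tiny illustrative dataset (hypothetical rows, not from the slides).
data = [{"Exercise": "yes", "Diet": "healthy", "HeartDisease": "no"},
        {"Exercise": "no", "Diet": "unhealthy", "HeartDisease": "yes"},
        {"Exercise": "yes", "Diet": "unhealthy", "HeartDisease": "yes"}]
print(estimate_cpt(data, "HeartDisease", ["Exercise", "Diet"]))
```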

7 High Blood Pressure
Suppose we learn that the new patient has high blood pressure. What's the probability that he has heart disease given this condition?

8 High Blood Pressure (Cont’d)

9 High Blood Pressure (Cont'd)

10 High Blood Pressure, Healthy Diet, and Regular Exercise

11 High Blood Pressure, Healthy Diet, and Regular Exercise (Cont’d)

12 The model therefore suggests that eating healthily and exercising regularly may reduce a person's risk of getting heart disease.
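The two queries behind this conclusion can be reproduced end to end. A sketch assuming CPT values for the exercise/diet/heart-disease/blood-pressure network of slides 6-11 (the deck shows the tables and the arithmetic only as images, so these numbers are an assumption): Exercise (E) and Diet (D) are parents of HeartDisease (HD), which is a parent of BloodPressure (BP).

```python
# Assumed CPTs for the heart-disease network (not transcribed from the deck).
P_E = {"yes": 0.7, "no": 0.3}                                 # P(Exercise)
P_D = {"healthy": 0.25, "unhealthy": 0.75}                    # P(Diet)
P_HD = {("yes", "healthy"): 0.25, ("yes", "unhealthy"): 0.45,
        ("no", "healthy"): 0.55, ("no", "unhealthy"): 0.75}   # P(HD=yes | E, D)
P_BP = {"yes": 0.85, "no": 0.2}                               # P(BP=high | HD)

def posterior(e_vals, d_vals):
    """P(HD=yes | BP=high), restricting E and D to the given values
    (summing them out when both values are allowed; the constant
    priors of fixed evidence cancel in the normalization)."""
    unnorm = {}
    for hd in ("yes", "no"):
        total = 0.0
        for e in e_vals:
            for d in d_vals:
                p_hd = P_HD[(e, d)] if hd == "yes" else 1 - P_HD[(e, d)]
                total += P_E[e] * P_D[d] * p_hd * P_BP[hd]
        unnorm[hd] = total
    return unnorm["yes"] / (unnorm["yes"] + unnorm["no"])

print(posterior(("yes", "no"), ("healthy", "unhealthy")))  # ~0.803 given BP only
print(posterior(("yes",), ("healthy",)))                   # ~0.586: risk drops
```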

13 Weather data What is the Bayesian Network corresponding to Naïve Bayes?

14 "Effects" and "Causes" vs. "Evidence" and "Class"
Why does Naïve Bayes have this graph? Because when we compute, in Naïve Bayes,
P(play=yes | E) = P(Outlook=Sunny | play=yes) * P(Temp=Cool | play=yes) * P(Humidity=High | play=yes) * P(Windy=True | play=yes) * P(play=yes) / P(E)
we are interested in computing P(… | play=yes), the probabilities of our evidence "observations" given the class. Of course, "play" isn't a cause of "outlook", "temperature", "humidity", or "windy". However, "play" is the class, and knowing that it has a certain value influences the observational evidence probabilities. For example, if play=yes, and we know that the playing happens indoors, then it is more probable (than without this class information) that the outlook is observed to be "rainy."
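The computation above can be carried out with relative frequencies from the classic 14-instance weather dataset; the counts below are assumed from that dataset, since the deck shows the data table only as an image.

```python
# Naive Bayes posterior for E = (Outlook=Sunny, Temp=Cool, Humidity=High,
# Windy=True). Relative frequencies are assumed from the classic
# 14-instance weather dataset (9 'yes' days, 5 'no' days).
like = {
    "yes": (2/9) * (3/9) * (3/9) * (3/9) * (9/14),
    "no":  (3/5) * (1/5) * (4/5) * (3/5) * (5/14),
}
z = like["yes"] + like["no"]                          # the normalizer P(E)
print({c: round(p / z, 3) for c, p in like.items()})  # yes ~0.205, no ~0.795
```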

15 Right or Wrong Topology?
In general, there is no right or wrong graph topology.
–Of course, the probabilities calculated from the data will be different for different graphs.
–Some graphs will induce better classifiers than others.
–If you reverse the arrows in the previous figure, you get a purely causal graph, whose induced classifier might have a better or worse estimated error (through cross-validation) than the Naïve Bayes one, depending on the data.
If the topology is constructed manually, we (humans) tend to prefer the causal direction.
–In domains such as medicine, the graphs are usually less complex in the causal direction.

16 Weka suggestion
How does Weka find the shape of the graph? It fixes an order of the attributes (variables) and then adds and removes arcs until it gets the smallest estimated error (through cross-validation). By default it starts with a Naïve Bayes network. It also maintains a score of graph complexity, trying to keep the complexity low.
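A schematic sketch of that add/remove-arc search, as an illustration of the idea rather than Weka's actual implementation: a penalized log-likelihood stands in for Weka's cross-validated error, the fixed variable order keeps the graph acyclic, and arcs are toggled greedily while the score improves.

```python
import itertools
import math
import random

random.seed(0)

# Toy binary data over three variables; C depends on A and B.
VARS = ["A", "B", "C"]
def sample():
    a, b = random.random() < 0.5, random.random() < 0.5
    c = random.random() < (0.9 if (a and b) else 0.2)
    return {"A": int(a), "B": int(b), "C": int(c)}
data = [sample() for _ in range(500)]

def score(arcs):
    """Log-likelihood under Laplace-smoothed CPT estimates, minus a
    BIC-style penalty that keeps the graph complexity low."""
    total, nparams = 0.0, 0
    for v in VARS:
        ps = sorted(p for p, c in arcs if c == v)
        counts = {}
        for row in data:
            key = tuple(row[p] for p in ps)
            counts.setdefault(key, [1, 1])[row[v]] += 1
        nparams += len(counts)
        for row in data:
            c0, c1 = counts[tuple(row[p] for p in ps)]
            total += math.log((c1 if row[v] else c0) / (c0 + c1))
    return total - 0.5 * nparams * math.log(len(data))

def acyclic(arcs):
    # With a fixed variable order (as the slide describes), the graph
    # stays acyclic as long as arcs point from earlier to later variables.
    order = {v: i for i, v in enumerate(VARS)}
    return all(order[p] < order[c] for p, c in arcs)

# Greedy search: toggle (add or remove) one arc at a time while it helps.
current = set()
best = score(current)
improved = True
while improved:
    improved = False
    for arc in itertools.permutations(VARS, 2):
        trial = current ^ {arc}  # add the arc if absent, remove it if present
        if acyclic(trial) and score(trial) > best + 1e-9:
            current, best, improved = trial, score(trial), True
print("learned arcs:", sorted(current), "score:", round(best, 1))
```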

