
1 Bayesian networks: Chapter 14, Sections 1–2

2 Bayesian networks
A simple, graphical notation for conditional independence assertions, and hence for compact specification of full joint distributions.
Syntax:
– a set of nodes, one per variable
– a directed, acyclic graph (link ≈ "directly influences")
– a conditional distribution for each node given its parents: P(X_i | Parents(X_i))

3 Example
Topology of the network encodes conditional independence assertions:
– Weather is independent of the other variables
– Toothache and Catch are conditionally independent given Cavity
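As a minimal sketch (not from the slides), the syntax of slide 2 can be written down directly for these variables: each node stores its parent list and a table giving P(node | parents). The variable names come from this slide; every probability number below is made up for illustration.

```python
# Sketch of the Bayesian-network syntax from slide 2, applied to the
# variables on this slide. All probability values are hypothetical;
# Weather is treated as Boolean here for simplicity.

network = {
    "Weather":   {"parents": [], "cpt": {(): 0.7}},       # hypothetical value
    "Cavity":    {"parents": [], "cpt": {(): 0.2}},       # hypothetical value
    "Toothache": {"parents": ["Cavity"],                  # parent: Cavity only
                  "cpt": {(True,): 0.6, (False,): 0.1}},
    "Catch":     {"parents": ["Cavity"],                  # parent: Cavity only
                  "cpt": {(True,): 0.9, (False,): 0.2}},
}

# Each CPT maps a tuple of parent values to P(node = True | parents),
# e.g. P(Toothache = True | Cavity = True):
print(network["Toothache"]["cpt"][(True,)])  # 0.6
```

The conditional independences are visible in the structure alone: Weather links to nothing, and the tables for Toothache and Catch mention only Cavity, never each other.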

4 Example
I'm at work; neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes the alarm is set off by minor earthquakes. Is there a burglar?
Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls
Network topology reflects "causal" knowledge:
– A burglar can set the alarm off
– An earthquake can set the alarm off
– The alarm can cause Mary to call
– The alarm can cause John to call

5 Example contd.
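The slide above is figure-only in this transcript; it refers to the burglary network with its conditional probability tables. As a sketch, here are the textbook's standard numbers for that figure (Russell & Norvig, Fig. 14.2), in the same dictionary form as the earlier sketch:

```python
# Burglary network with the textbook CPT values (Russell & Norvig,
# Fig. 14.2). Parent tuples are ordered as listed in "parents", and
# each table entry is P(node = True | parents).

burglary_net = {
    "Burglary":   {"parents": [], "cpt": {(): 0.001}},
    "Earthquake": {"parents": [], "cpt": {(): 0.002}},
    "Alarm":      {"parents": ["Burglary", "Earthquake"],
                   "cpt": {(True, True): 0.95, (True, False): 0.94,
                           (False, True): 0.29, (False, False): 0.001}},
    "JohnCalls":  {"parents": ["Alarm"],
                   "cpt": {(True,): 0.90, (False,): 0.05}},
    "MaryCalls":  {"parents": ["Alarm"],
                   "cpt": {(True,): 0.70, (False,): 0.01}},
}
```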

6 Compactness
If each variable has no more than k parents, the complete network requires O(n · 2^k) numbers.
I.e., it grows linearly with n, vs. O(2^n) for the full joint distribution.
For the burglary net: 1 + 1 + 4 + 2 + 2 = 10 numbers (vs. 2^5 − 1 = 31).
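This count can be checked mechanically against the burglary_net sketch above: a Boolean node with k parents contributes 2^k numbers, one P(True | …) entry per parent assignment.

```python
# Parameter count for a network in the dictionary form used above.

def parameter_count(net):
    return sum(2 ** len(node["parents"]) for node in net.values())

print(parameter_count(burglary_net))  # 1 + 1 + 4 + 2 + 2 = 10
```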

7 Semantics

8 Constructing Bayesian networks
1. Choose an ordering of variables X_1, …, X_n
2. For i = 1 to n:
– add X_i to the network
– select parents from X_1, …, X_{i-1} such that P(X_i | Parents(X_i)) = P(X_i | X_1, …, X_{i-1})
This choice of parents guarantees:
P(X_1, …, X_n) = Π_{i=1}^{n} P(X_i | X_1, …, X_{i-1})   (chain rule)
               = Π_{i=1}^{n} P(X_i | Parents(X_i))        (by construction)
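The product identity above is directly computable. A sketch, reusing burglary_net from the earlier block; as a sanity check, it reproduces the textbook's worked joint entry P(j, m, a, ¬b, ¬e) = 0.90 × 0.70 × 0.001 × 0.999 × 0.998 ≈ 0.000628:

```python
# Full joint probability as the product of each node's CPT entry,
# i.e. P(X1, ..., Xn) = prod_i P(Xi | Parents(Xi)).

def joint(net, assignment, order):
    p = 1.0
    for name in order:
        parent_vals = tuple(assignment[q] for q in net[name]["parents"])
        p_true = net[name]["cpt"][parent_vals]
        p *= p_true if assignment[name] else 1.0 - p_true
    return p

order = ["Burglary", "Earthquake", "Alarm", "JohnCalls", "MaryCalls"]
event = {"Burglary": False, "Earthquake": False, "Alarm": True,
         "JohnCalls": True, "MaryCalls": True}
print(joint(burglary_net, event, order))  # ≈ 0.000628
```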

9 Example
Suppose we choose the ordering M, J, A, B, E
P(J | M) = P(J)?

10 Example
Suppose we choose the ordering M, J, A, B, E
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)?

11 Example
Suppose we choose the ordering M, J, A, B, E
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? P(B | A, J, M) = P(B)?

12 Example
Suppose we choose the ordering M, J, A, B, E
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? Yes
P(B | A, J, M) = P(B)? No
P(E | B, A, J, M) = P(E | A)? P(E | B, A, J, M) = P(E | A, B)?

13 Example
Suppose we choose the ordering M, J, A, B, E
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? Yes
P(B | A, J, M) = P(B)? No
P(E | B, A, J, M) = P(E | A)? No
P(E | B, A, J, M) = P(E | A, B)? Yes

14 Example contd.
Deciding conditional independence is hard in non-causal directions.
(Causal models and conditional independence seem hardwired for humans!)
Network is less compact.

15 Example contd.
Deciding conditional independence is hard in non-causal directions.
(Causal models and conditional independence seem hardwired for humans!)
Network is less compact: 1 + 2 + 4 + 2 + 4 = 13 numbers needed.
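The 13 follows from the parent sets this ordering forces (derived on slides 9–13): M has no parents, J depends on M, A on J and M, B on A, and E on A and B. A quick check in the same style as the earlier count:

```python
# Parameter count for the M, J, A, B, E ordering's parent sets.
mjabe_parents = {"MaryCalls": [], "JohnCalls": ["MaryCalls"],
                 "Alarm": ["JohnCalls", "MaryCalls"],
                 "Burglary": ["Alarm"],
                 "Earthquake": ["Alarm", "Burglary"]}
print(sum(2 ** len(p) for p in mjabe_parents.values()))  # 1+2+4+2+4 = 13
```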

16 Another example


19 Example from Medical Diagnostics
The network represents a knowledge structure that models the relationships between medical difficulties, their causes and effects, patient information, and diagnostic tests.
(Figure: network over the nodes Visit to Asia, Smoking, Tuberculosis or Cancer, Lung Cancer, Bronchitis, XRay Result, and Dyspnea, grouped into Patient Information, Medical Difficulties, and Diagnostic Tests.)

20 Example from Medical Diagnostics
Relationship knowledge is modeled by deterministic functions, logic, and conditional probability distributions.
(Figure: the network with its node tables. "Tub or Can" is a deterministic function of Tuberculosis and Lung Cancer: True whenever either is Present. The Dyspnea table:)

P(Dyspnea | Tub or Can, Bronchitis)
  Tub or Can:      True             False
  Bronchitis:   Present  Absent  Present  Absent
  Present         0.90    0.70     0.80    0.10
  Absent          0.10    0.30     0.20    0.90
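For reference, the Dyspnea table on this slide fits the same dictionary form used in the earlier sketches (node and parent names follow the slide; each entry is P(Dyspnea = Present | parents)):

```python
# Dyspnea CPT from the slide; parent tuples ordered (Tub or Can, Bronchitis).
dyspnea = {
    "parents": ["TubOrCan", "Bronchitis"],
    "cpt": {(True, True): 0.90, (True, False): 0.70,
            (False, True): 0.80, (False, False): 0.10},
}
```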

21 Example from Medical Diagnostics
A propagation algorithm processes the relationship information to provide an unconditional, or marginal, probability distribution for each node.
This unconditional or marginal distribution is frequently called the belief function of that node.
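As an illustration of what such an algorithm computes (though not of how it computes it efficiently), the belief of a node can be obtained by brute-force enumeration over the factored joint, reusing joint(), order, and burglary_net from the earlier sketches:

```python
# Marginal ("belief") of one Boolean node, computed by summing the
# joint over every full assignment. This is exponential in the number
# of variables; real propagation algorithms exploit the network
# structure instead, but they produce the same distribution.

from itertools import product

def belief(net, order, query):
    total = 0.0
    for values in product([True, False], repeat=len(order)):
        assignment = dict(zip(order, values))
        if assignment[query]:
            total += joint(net, assignment, order)
    return total

print(belief(burglary_net, order, "Alarm"))  # ≈ 0.0025
```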

22 As a finding is entered, the propagation algorithm updates the beliefs attached to each relevant node in the network.
Interviewing the patient produces the finding that "Visit to Asia" is "Visit".
This finding propagates through the network, and the belief functions of several nodes are updated.

23 Further interviewing of the patient produces the finding that "Smoking" is "Smoker".
This information propagates through the network.

24 Finished with interviewing the patient, the physician begins the examination.
The physician now moves to specific diagnostic tests, such as an X-Ray, which results in a "Normal" finding that propagates through the network.
Note that the information from this finding propagates both backward and forward through the arcs.

25 The physician also determines that the patient is having difficulty breathing; the finding "Present" is entered for "Dyspnea" and propagates through the network.
The doctor might now conclude that the patient has bronchitis and does not have tuberculosis or lung cancer.

26 Summary
Bayesian networks provide a natural representation for (causally induced) conditional independence.
Topology + conditional probabilities = a compact representation of the joint distribution.
Generally easy for domain experts to construct.

