
1 Bayesian Networks

2 Significance of Conditional Independence
Consider Grade(CS101), Intelligence, and SAT. Ostensibly, the grade in a course doesn't have a direct relationship with SAT scores, but good students are more likely to get good SAT scores, so they are not independent. It is reasonable to believe that Grade(CS101) and SAT are conditionally independent given Intelligence.

3 Bayesian Network
Explicitly represent independence among propositions. Notice that Intelligence is the "cause" of both Grade and SAT, and the causality is represented explicitly.

Network: Intelligence -> Grade, Intelligence -> SAT

P(I,G,S) = P(G,S|I) P(I) = P(G|I) P(S|I) P(I)

P(I):     high 0.3, low 0.7

P(G|I):   I \ G   'A'    'B'    'C'
          low     0.2    0.34   0.46
          high    0.74   0.17   0.09

P(S|I):   I \ S   low    high
          low     0.95   0.05
          high    0.2    0.8
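As a side note not on the slides, here is a minimal Python sketch of this factorization (the dictionary encoding is hypothetical); it checks that the three small CPTs really define a full joint distribution:

```python
# CPTs from the slide, encoded as plain dictionaries (hypothetical encoding).
P_I = {'high': 0.3, 'low': 0.7}
P_G_given_I = {'low':  {'A': 0.2,  'B': 0.34, 'C': 0.46},
               'high': {'A': 0.74, 'B': 0.17, 'C': 0.09}}
P_S_given_I = {'low':  {'low': 0.95, 'high': 0.05},
               'high': {'low': 0.2,  'high': 0.8}}

def joint(i, g, s):
    """P(I=i, G=g, S=s) = P(G=g|I=i) P(S=s|I=i) P(I=i)."""
    return P_G_given_I[i][g] * P_S_given_I[i][s] * P_I[i]

# Sanity check: the factored joint sums to 1 over all 12 assignments.
total = sum(joint(i, g, s) for i in ('low', 'high')
            for g in 'ABC' for s in ('low', 'high'))
assert abs(total - 1.0) < 1e-9
```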

4 A More Complex BN
Network: Burglary -> Alarm, Earthquake -> Alarm (causes); Alarm -> JohnCalls, Alarm -> MaryCalls (effects)
Directed acyclic graph
Intuitive meaning of arc from x to y: "x has direct influence on y"

5 A More Complex BN
[Burglary network as on slide 4]

P(B) = 0.001        P(E) = 0.002

P(A|B,E):   B  E    P(A|B,E)
            T  T    0.95
            T  F    0.94
            F  T    0.29
            F  F    0.001

P(J|A):     A=T: 0.90,  A=F: 0.05
P(M|A):     A=T: 0.70,  A=F: 0.01

Size of the CPT for a node with k parents: 2^k
10 probabilities, instead of 31
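A sketch of how these 10 numbers might be stored in Python (the dictionary layout is an assumption, not from the slides); the computations on later slides reuse these names:

```python
# The 10 probabilities of the burglary network, as plain dictionaries.
P_B = 0.001                      # P(Burglary)
P_E = 0.002                      # P(Earthquake)
P_A = {(True, True):  0.95,      # P(Alarm | Burglary, Earthquake)
       (True, False): 0.94,
       (False, True): 0.29,
       (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}  # P(JohnCalls | Alarm)
P_M = {True: 0.70, False: 0.01}  # P(MaryCalls | Alarm)
```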

6 Significance of Bayesian Networks
If we know that some variables are conditionally independent, we should be able to decompose the joint distribution to take advantage of it.
Bayesian networks are a way of efficiently factoring the joint distribution into conditional probabilities, and also of building complex joint distributions from smaller models of probabilistic relationships.
But...
- What knowledge does the BN encode about the distribution?
- How do we use a BN to compute probabilities of variables that we are interested in?

7 What Does the BN Encode?
Each of the beliefs JohnCalls and MaryCalls is independent of Burglary and Earthquake given Alarm or ¬Alarm.
For example, John does not observe any burglaries directly:
P(B ∧ J) ≠ P(B) P(J)
P(B ∧ J | A) = P(B|A) P(J|A)

8 What Does the BN Encode?
The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm.
For instance, the reasons why John and Mary may not call if there is an alarm are unrelated:
P(B ∧ J | A) = P(B|A) P(J|A)
P(J ∧ M | A) = P(J|A) P(M|A)
A node is independent of its non-descendants given its parents.

9 What Does the BN Encode?
Burglary and Earthquake are independent.
The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm.
For instance, the reasons why John and Mary may not call if there is an alarm are unrelated.
A node is independent of its non-descendants given its parents.

10 Locally Structured World
A world is locally structured (or sparse) if each of its components interacts directly with relatively few other components.
In a sparse world, the CPTs are small and the BN contains far fewer probabilities than the full joint distribution.
If the # of entries in each CPT is bounded by a constant, i.e., O(1), then the # of probabilities in a BN is linear in n (the # of propositions) instead of 2^n for the joint distribution.
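To make the counting concrete, a small sketch (hypothetical helper functions, not from the slides) comparing the two sizes:

```python
def bn_size(n, k):
    """Upper bound on the # of probabilities in a BN over n boolean
    variables when every node has at most k parents (2^k CPT rows each)."""
    return n * 2 ** k

def joint_size(n):
    """# of independent entries in the full joint distribution table."""
    return 2 ** n - 1

# Burglary network: n = 5 nodes, at most 2 parents per node.
print(bn_size(5, 2), joint_size(5))  # 20 vs. 31; the actual BN needs only 10
                                     # because most nodes have fewer than 2 parents
```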

11 But Does a BN Represent a Belief State?
In other words, can we compute the full joint distribution of the propositions from it?

12 Calculation of Joint Probability
[Burglary network with the CPTs of slide 5]
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = ??

13 P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
= P(J ∧ M | A, ¬B, ¬E) × P(A ∧ ¬B ∧ ¬E)
= P(J | A, ¬B, ¬E) × P(M | A, ¬B, ¬E) × P(A ∧ ¬B ∧ ¬E)   (J and M are independent given A)
P(J | A, ¬B, ¬E) = P(J|A)   (J and B, and J and E, are independent given A)
P(M | A, ¬B, ¬E) = P(M|A)
P(A ∧ ¬B ∧ ¬E) = P(A | ¬B, ¬E) × P(¬B | ¬E) × P(¬E) = P(A | ¬B, ¬E) × P(¬B) × P(¬E)   (B and E are independent)
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)

14 Calculation of Joint Probability
[Burglary network with the CPTs of slide 5]
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
= 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.00062

15 Calculation of Joint Probability
[Burglary network with the CPTs of slide 5]
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
= 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.00062
P(x_1 ∧ x_2 ∧ … ∧ x_n) = Π_{i=1,…,n} P(x_i | parents(X_i))
→ full joint distribution table

16 Calculation of Joint Probability
[Burglary network with the CPTs of slide 5]
P(x_1 ∧ x_2 ∧ … ∧ x_n) = Π_{i=1,…,n} P(x_i | parents(X_i))
→ full joint distribution table
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
= 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.00062
Since a BN defines the full joint distribution of a set of propositions, it represents a belief state.
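A sketch of this chain-rule product in Python, reusing the hypothetical CPT dictionaries from the slide-5 sketch:

```python
def joint_burglary(j, m, a, b, e):
    """P(J=j ∧ M=m ∧ A=a ∧ B=b ∧ E=e) as the product of each node's
    CPT entry given its parents (booleans; False means the negation)."""
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return pj * pm * pa * pb * pe

# P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) from the slide:
print(joint_burglary(True, True, True, False, False))  # ≈ 0.00062
```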

17 What Does the BN Encode?
Burglary ⊥ Earthquake
JohnCalls ⊥ MaryCalls | Alarm
JohnCalls ⊥ Burglary | Alarm
JohnCalls ⊥ Earthquake | Alarm
MaryCalls ⊥ Burglary | Alarm
MaryCalls ⊥ Earthquake | Alarm
A node is independent of its non-descendants, given its parents.

18 Reading Off Independence Relationships
How about Burglary ⊥ Earthquake | Alarm?
No! Why?

19 Reading Off Independence Relationships
How about Burglary ⊥ Earthquake | Alarm?
No! Why?
P(B ∧ E | A) = P(A|B,E) P(B ∧ E) / P(A) ≈ 0.00075
P(B|A) P(E|A) ≈ 0.086
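These two numbers can be reproduced by brute-force enumeration (a sketch; joint_burglary() is from the slide-16 sketch):

```python
from itertools import product

bools = (True, False)
# Marginals involving A, obtained by summing out the other variables.
p_a   = sum(joint_burglary(j, m, True, b, e) for j, m, b, e in product(bools, repeat=4))
p_abe = sum(joint_burglary(j, m, True, True, True) for j, m in product(bools, repeat=2))
p_ab  = sum(joint_burglary(j, m, True, True, e) for j, m, e in product(bools, repeat=3))
p_ae  = sum(joint_burglary(j, m, True, b, True) for j, m, b in product(bools, repeat=3))

print(p_abe / p_a)                  # P(B ∧ E | A) ≈ 0.00075
print((p_ab / p_a) * (p_ae / p_a))  # P(B|A) P(E|A) ≈ 0.086, not equal!
```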

20 Reading Off Independence Relationships
How about Burglary ⊥ Earthquake | JohnCalls?
No! Why?
Knowing JohnCalls affects the probability of Alarm, which makes Burglary and Earthquake dependent.

21 Independence Relationships
Rough intuition (this holds for tree-like graphs, i.e., polytrees):
- Evidence on the (directed) road between two variables makes them independent.
- Evidence on an "A" node (common ancestor) makes its descendants independent.
- Evidence on a "V" node (common descendant), or below the V, makes the ancestors of the variables dependent (otherwise they are independent).
Formal property in the general case: d-separation ⇒ independence (see R&N).

22 Benefits of Sparse Models
Modeling:
- Fewer relationships need to be encoded (either through understanding or statistics)
- Large networks can be built up from smaller ones
Intuition:
- Dependencies/independencies between variables can be inferred through network structures
Tractable probabilistic inference

23 Probabilistic Inference in Bayes Nets

24 Probabilistic Inference
Probabilistic inference is the following problem:
Given:
- A belief state P(X_1,…,X_n) in some form (e.g., a Bayes net or a joint probability table)
- A query variable indexed by q
- Some subset of evidence variables indexed by e_1,…,e_k
Find: P(X_q | X_e1,…,X_ek)

25 Probabilistic Inference Without Evidence
For the moment we'll assume no evidence variables.
Find P(X_q).

26 Probabilistic Inference Without Evidence
In a joint probability table, we can find P(X_q) through marginalization.
Computational complexity: 2^(n-1) terms to sum (assuming boolean random variables).
In Bayesian networks, we find P(X_q) using top-down inference.
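As a sketch of the joint-table approach (the tuple-keyed table layout is an assumption), marginalization just sums the 2^(n-1) matching entries:

```python
def marginal(joint_table, q_index, q_value):
    """P(X_q = q_value) from a full joint table stored as
    {(x_1, ..., x_n): probability} with boolean x_i."""
    return sum(p for assignment, p in joint_table.items()
               if assignment[q_index] == q_value)
```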

27 Top-Down Inference
[Burglary network with the CPTs of slide 5]
Suppose we want to compute P(Alarm).

28 Top-Down Inference
Suppose we want to compute P(Alarm):
1. P(Alarm) = Σ_{b,e} P(A, b, e)
2. P(Alarm) = Σ_{b,e} P(A | b, e) P(b) P(e)

29 Top-Down Inference
Suppose we want to compute P(Alarm):
1. P(Alarm) = Σ_{b,e} P(A, b, e)
2. P(Alarm) = Σ_{b,e} P(A | b, e) P(b) P(e)
3. P(Alarm) = P(A|B,E)P(B)P(E) + P(A|B,¬E)P(B)P(¬E) + P(A|¬B,E)P(¬B)P(E) + P(A|¬B,¬E)P(¬B)P(¬E)

30 Top-Down Inference
Suppose we want to compute P(Alarm):
1. P(A) = Σ_{b,e} P(A, b, e)
2. P(A) = Σ_{b,e} P(A | b, e) P(b) P(e)
3. P(A) = P(A|B,E)P(B)P(E) + P(A|B,¬E)P(B)P(¬E) + P(A|¬B,E)P(¬B)P(E) + P(A|¬B,¬E)P(¬B)P(¬E)
4. P(A) = 0.95×0.001×0.002 + 0.94×0.001×0.998 + 0.29×0.999×0.002 + 0.001×0.999×0.998 ≈ 0.00252
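Steps 2-4 as a sketch, reusing the hypothetical CPT dictionaries from the slide-5 sketch:

```python
# P(Alarm) = Σ_{b,e} P(A | b, e) P(b) P(e)
p_alarm = sum(P_A[(b, e)]
              * (P_B if b else 1 - P_B)
              * (P_E if e else 1 - P_E)
              for b in (True, False) for e in (True, False))
print(p_alarm)  # ≈ 0.00252
```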

31 Top-Down Inference
[Burglary network with the CPTs of slide 5]
Now, suppose we want to compute P(MaryCalls).

32 Top-Down Inference
Now, suppose we want to compute P(MaryCalls):
1. P(M) = P(M|A) P(A) + P(M|¬A) P(¬A)

33 Top-Down Inference
Now, suppose we want to compute P(MaryCalls):
1. P(M) = P(M|A) P(A) + P(M|¬A) P(¬A)
2. P(M) = 0.70×0.00252 + 0.01×(1 - 0.00252) ≈ 0.0117
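Continuing the sketch (p_alarm is from the slide-30 sketch):

```python
# P(M) = P(M|A) P(A) + P(M|¬A) P(¬A)
p_m = P_M[True] * p_alarm + P_M[False] * (1 - p_alarm)
print(p_m)  # ≈ 0.0117
```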

34 Top-Down Inference with Evidence
[Burglary network with the CPTs of slide 5]
Suppose we want to compute P(Alarm | Earthquake).

35 Top-Down Inference with Evidence
Suppose we want to compute P(A|e):
1. P(A|e) = Σ_b P(A, b | e)
2. P(A|e) = Σ_b P(A | b, e) P(b)

36 Top-Down Inference with Evidence
Suppose we want to compute P(A|e):
1. P(A|e) = Σ_b P(A, b | e)
2. P(A|e) = Σ_b P(A | b, e) P(b)
3. P(A|e) = 0.95×0.001 + 0.29×0.999 = 0.29066
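With Earthquake fixed as evidence, the same sketch sums over Burglary only:

```python
# P(A | E=true) = Σ_b P(A | b, e=true) P(b)
p_a_given_e = sum(P_A[(b, True)] * (P_B if b else 1 - P_B)
                  for b in (True, False))
print(p_a_given_e)  # = 0.29066
```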

37 Top-Down Inference
Only works if the graph of ancestors of the query variable is a polytree.
Evidence is given on ancestor(s) of the query variable.
Efficient: O(d·2^k) time, where d is the number of ancestors of a variable and k is a bound on the # of parents.
Evidence on an ancestor cuts off the influence of the portion of the graph above the evidence node.

38 Querying the BN
Network: Cavity -> Toothache
P(C) = 0.1
P(T|C):   C=T: 0.4,  C=F: 0.01111
The BN gives P(T|C). What about P(C|T)?

39 Bayes' Rule
P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)
So… P(A|B) = P(B|A) P(A) / P(B)

40 Applying Bayes' Rule
Let A be a cause and B be an effect, and let's say we know P(B|A) and P(A) (conditional probability tables).
What's P(B)?

41 Applying Bayes' Rule
Let A be a cause and B be an effect, and let's say we know P(B|A) and P(A) (conditional probability tables).
What's P(B)?
P(B) = Σ_a P(B, A=a)           [marginalization]
P(B, A=a) = P(B|A=a) P(A=a)    [conditional probability]
So, P(B) = Σ_a P(B | A=a) P(A=a)

42 Applying Bayes' Rule
Let A be a cause and B be an effect, and let's say we know P(B|A) and P(A) (conditional probability tables).
What's P(A|B)?

43 Applying Bayes' Rule
Let A be a cause and B be an effect, and let's say we know P(B|A) and P(A) (conditional probability tables).
What's P(A|B)?
P(A|B) = P(B|A) P(A) / P(B)             [Bayes' rule]
P(B) = Σ_a P(B | A=a) P(A=a)            [last slide]
So, P(A|B) = P(B|A) P(A) / [Σ_a P(B | A=a) P(A=a)]

44 How Do We Read This?
P(A|B) = P(B|A) P(A) / [Σ_a P(B | A=a) P(A=a)]
[An equation that holds for all values A can take on, and all values B can take on]
P(A=a|B=b) = ?

45 How Do We Read This?
P(A|B) = P(B|A) P(A) / [Σ_a P(B | A=a) P(A=a)]
[An equation that holds for all values A can take on, and all values B can take on]
P(A=a|B=b) = P(B=b|A=a) P(A=a) / [Σ_a P(B=b | A=a) P(A=a)]
Are these the same a?

46 How Do We Read This?
P(A|B) = P(B|A) P(A) / [Σ_a P(B | A=a) P(A=a)]
[An equation that holds for all values A can take on, and all values B can take on]
P(A=a|B=b) = P(B=b|A=a) P(A=a) / [Σ_a P(B=b | A=a) P(A=a)]
Are these the same a? NO!

47 How Do We Read This?
P(A|B) = P(B|A) P(A) / [Σ_a P(B | A=a) P(A=a)]
[An equation that holds for all values A can take on, and all values B can take on]
P(A=a|B=b) = P(B=b|A=a) P(A=a) / [Σ_a' P(B=b | A=a') P(A=a')]
Be careful about indices!

48 Querying the BN
[Cavity network of slide 38]
The BN gives P(T|C). What about P(C|T)?
P(Cavity|Toothache) = P(Toothache|Cavity) P(Cavity) / P(Toothache)   [Bayes' rule]
The denominator is computed by summing out the numerator over Cavity and ¬Cavity.
Querying a BN is just applying Bayes' rule on a larger scale…
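A sketch of this query with the slide-38 numbers (variable names hypothetical):

```python
P_C = 0.1                          # P(Cavity)
P_T = {True: 0.4, False: 0.01111}  # P(Toothache | Cavity)

# Denominator: sum out the numerator over Cavity and ¬Cavity.
p_t = P_T[True] * P_C + P_T[False] * (1 - P_C)  # P(Toothache) ≈ 0.05
p_c_given_t = P_T[True] * P_C / p_t             # Bayes' rule
print(p_c_given_t)  # ≈ 0.8
```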

49 Naïve Bayes Models
Network: Cause -> Effect_1, Cause -> Effect_2, …, Cause -> Effect_n
P(Cause, Effect_1, …, Effect_n) = P(Cause) Π_i P(Effect_i | Cause)

50 Naïve Bayes Classifier
Network: Class -> Feature_1, Class -> Feature_2, …, Class -> Feature_n
P(Class, Feature_1, …, Feature_n) = P(Class) Π_i P(Feature_i | Class)
P(C | F_1, …, F_k) = P(C, F_1, …, F_k) / P(F_1, …, F_k) = (1/Z) P(C) Π_i P(F_i | C)
Given features, what class? Spam / Not Spam; English / French / Latin; … (features = word occurrences)
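A minimal naïve Bayes spam-classifier sketch; the two-word vocabulary, all probabilities, and names are made up purely to illustrate the (1/Z) P(C) Π_i P(F_i|C) computation:

```python
import math

P_CLASS = {'spam': 0.3, 'ham': 0.7}                  # P(Class), hypothetical
P_WORD = {'spam': {'free': 0.30, 'meeting': 0.02},   # P(word occurs | Class)
          'ham':  {'free': 0.01, 'meeting': 0.20}}

def posterior(features):
    """P(Class | F_1,...,F_k) = (1/Z) P(Class) Π_i P(F_i | Class)."""
    scores = {}
    for c in P_CLASS:
        # Work in log space so products over many features don't underflow.
        s = math.log(P_CLASS[c])
        for word, present in features.items():
            p = P_WORD[c][word]
            s += math.log(p if present else 1 - p)
        scores[c] = math.exp(s)
    z = sum(scores.values())        # the normalization constant Z
    return {c: v / z for c, v in scores.items()}

print(posterior({'free': True, 'meeting': False}))  # spam ≈ 0.94
```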

51 More Complex Networks
Two limitations of the current discussion:
- "Tree-like" networks
- Evidence at specific locations
Next time:
- Tractable exact inference without "loops"
- With loops: NP-hard (compare to CSPs)
- Approximate inference for general networks

52 Some Applications of BNs
- Medical diagnosis
- Troubleshooting of hardware/software systems
- Fraud/uncollectible debt detection
- Data mining
- Analysis of genetic sequences
- Data interpretation, computer vision, image understanding

53 More Complicated Singly-Connected Belief Net
[Figure: car-diagnosis network over Battery, Radio, SparkPlugs, Gas, Starts, Moves]

54 [Figure: image-understanding example; regions R1-R4 related by "Above", each Region ∈ {Sky, Tree, Grass, Rock}]

56 Homework
Read R&N 14.1-3

