
1 PROBABILISTIC INFERENCE

2 AGENDA: Random variables, Bayes rule, Intro to Bayesian Networks

3 CHEAT SHEET: PROBABILITY DISTRIBUTIONS
Event: a boolean proposition that may or may not be true in a state of the world.
For any event x:
  P(x) ≥ 0 (axiom)
  P(x) = 1 - P(¬x)
For any events x, y:
  P(x ∧ y) = P(y ∧ x) (symmetry)
  P(x ∨ y) = P(y ∨ x) (symmetry)
  P(x ∨ y) = P(x) + P(y) - P(x ∧ y) (axiom)
  P(x) = P(x ∧ y) + P(x ∧ ¬y) (marginalization)
x and y are independent iff P(x ∧ y) = P(x) P(y). This is an explicit assumption.

4 CHEAT SHEET: CONDITIONAL DISTRIBUTIONS
P(x|y) reads "probability of x given y"
P(x) is equivalent to P(x|true)
P(x|y) ≠ P(y|x)
P(x|y) = P(x ∧ y)/P(y) (definition)
P(x|y ∧ z) = P(x ∧ y|z)/P(y|z) (definition)
P(x|y) = 1 - P(¬x|y), i.e., P(·|y) is a probability distribution
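These identities are easy to sanity-check numerically. Below is a minimal Python sketch (made-up numbers, not from the slides) that verifies the marginalization, negation, union, and conditioning rules on a tiny joint distribution over two boolean events x and y.

```python
# Toy joint distribution over two boolean events x and y (made-up numbers).
joint = {
    (True, True): 0.2,    # P(x ∧ y)
    (True, False): 0.3,   # P(x ∧ ¬y)
    (False, True): 0.1,   # P(¬x ∧ y)
    (False, False): 0.4,  # P(¬x ∧ ¬y)
}

P_x = joint[(True, True)] + joint[(True, False)]    # marginalization: P(x) = P(x∧y) + P(x∧¬y)
P_y = joint[(True, True)] + joint[(False, True)]
P_not_x = joint[(False, True)] + joint[(False, False)]

assert abs(P_x - (1 - P_not_x)) < 1e-12             # P(x) = 1 - P(¬x)

P_x_or_y = P_x + P_y - joint[(True, True)]          # P(x ∨ y) = P(x) + P(y) - P(x ∧ y)
P_x_given_y = joint[(True, True)] / P_y             # P(x|y) = P(x ∧ y)/P(y)  (definition)

print(P_x, P_x_or_y, P_x_given_y)                   # 0.5, 0.6, ≈0.667
```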

5 RANDOM VARIABLES

6 In a possible world, a random variable X can take on one of a set of values Val(X) = {x1, …, xn}. The event that X takes the value x is written 'X=x'. Capital: random variable; lowercase: assignment of the variable to a value. Truth assignments to boolean random variables may also be expressed as 'X' or '¬X'.

7 NOTATION WITH RANDOM VARIABLES Capital letters A, B, C denote random variables. Each random variable X can take one of a set of possible values x ∈ Val(X). A boolean random variable has Val(X) = {True, False}. Although the most unambiguous way of writing a probabilistic belief is over an event… P(X=x) = a number, P(X=x ∧ Y=y) = a number …it is tedious to list a large number of statements that hold for multiple values x and y. Random variables allow using a shorthand notation. (Unfortunately this is a source of a lot of initial confusion!)

8 DECODING PROBABILITY NOTATION Mental rule #1: lowercase => assignments are often left implicit when unambiguous. P(a) = P(A=a) = a number

9 DECODING PROBABILITY NOTATION (BOOLEAN VARIABLES) P(X=True) is written P(X). P(X=False) is written P(¬X). [Since P(¬X) = 1 - P(X), knowing P(X) is enough to specify the whole distribution over X=True and X=False]

10 DECODING PROBABILITY NOTATION Mental rule #2: drop the AND, use commas. P(a,b) = P(a ∧ b) = P(A=a ∧ B=b) = a number

11 DECODING PROBABILITY NOTATION Mental rule #3: uppercase => values left implicit. Suppose Val(X) = {1,2,3}. When I write P(X), it denotes "the distribution defined over all of P(X=1), P(X=2), P(X=3)". It is not a single number, but rather a set of numbers. P(X) = [a probability table]

12 DECODING PROBABILITY NOTATION P(A,B) = [P(A=a ∧ B=b) for all combinations of a ∈ Val(A), b ∈ Val(B)]: a probability table with |Val(A)| × |Val(B)| entries

13 DECODING PROBABILITY NOTATION Mental rule #3: uppercase => values left implicit. So when you see f(A,B) = g(A,B), this means "f(a,b) = g(a,b) for all values of a ∈ Val(A) and b ∈ Val(B)". f(A,B) = g(A) means "f(a,b) = g(a) for all values of a ∈ Val(A) and b ∈ Val(B)". f(A,b) = g(A,b) means "f(a,b) = g(a,b) for all values of a ∈ Val(A)". Order doesn't matter: P(A,B) is equivalent to P(B,A).

14 ANOTHER MNEMONIC: FUNCTIONAL EQUALITIES P(X) is treated as a function over the variable X. Operations and relations are on "function objects". If you say f(x) = g(x) without a value of x, then you can infer that f(x) = g(x) holds for all x. Likewise, if you say f(x,y) = g(x) without stating a value of x or y, then you can infer that f(x,y) = g(x) holds for all x, y.

15 QUIZ: WHAT DOES THIS MEAN? P(A ∨ B) = P(A) + P(B) - P(A ∧ B) means P(A=a ∨ B=b) = P(A=a) + P(B=b) - P(A=a ∧ B=b) for all a ∈ Val(A) and b ∈ Val(B)

16 MARGINALIZATION

17 MARGINALIZATION

18 DECODING PROBABILITY NOTATION (MARGINALIZATION) Mental rule #4: domains are usually implicit. Suppose a belief state P(X,Y,Z) is defined over X, Y, and Z. If I write P(X), I am implicitly marginalizing over Y and Z: P(X) = Σy Σz P(X,y,z), which should be interpreted as P(X) = Σy Σz P(X ∧ Y=y ∧ Z=z), i.e., P(X=x) = Σy Σz P(X=x ∧ Y=y ∧ Z=z) for all x. By convention, y and z are summed over Val(Y) and Val(Z).
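As a concrete illustration of mental rule #4, here is a short Python sketch (a hypothetical uniform joint table, not taken from the slides) that marginalizes P(X,Y,Z) down to P(X):

```python
from itertools import product

vals_X, vals_Y, vals_Z = [0, 1], [0, 1], [0, 1]
# Hypothetical joint table P(X=x ∧ Y=y ∧ Z=z); the entries must sum to 1.
P_XYZ = {(x, y, z): 0.125 for x, y, z in product(vals_X, vals_Y, vals_Z)}

# P(X=x) = Σy Σz P(X=x ∧ Y=y ∧ Z=z), computed for every x in Val(X)
P_X = {x: sum(P_XYZ[(x, y, z)] for y in vals_Y for z in vals_Z) for x in vals_X}
print(P_X)  # {0: 0.5, 1: 0.5} for the uniform table above
```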

19 CONDITIONAL PROBABILITY FOR RANDOM VARIABLES P(A|B) is the posterior probability of A given knowledge of B: "for each b ∈ Val(B): given that I know B=b, what would I believe is the distribution over A?" If a new piece of information C arrives, the agent's new belief (if it obeys the rules of probability) should be P(A|B,C)

20 CONDITIONAL PROBABILITY FOR RANDOM VARIABLES P(A,B) = P(A|B) P(B) = P(B|A) P(A). P(A|B) is the posterior probability of A given knowledge of B. Axiomatic definition: P(A|B) = P(A,B)/P(B)

21 CONDITIONAL PROBABILITY P(A,B) = P(A|B) P(B) = P(B|A) P(A). P(A,B,C) = P(A|B,C) P(B,C) = P(A|B,C) P(B|C) P(C). P(Cavity) = Σt Σp P(Cavity,t,p) = Σt Σp P(Cavity|t,p) P(t,p) = Σt Σp P(Cavity|t,p) P(t|p) P(p)

22 INDEPENDENCE Two random variables A and B are independent if P(A,B) = P(A) P(B), hence P(A|B) = P(A). Knowing B doesn't give you any information about A. [This equality has to hold for all combinations of values that A and B can take on]

23 REMEMBER: PROBABILITY NOTATION LEAVES VALUES IMPLICIT P(A ∨ B) = P(A) + P(B) - P(A ∧ B) means P(A=a ∨ B=b) = P(A=a) + P(B=b) - P(A=a ∧ B=b) for all a ∈ Val(A) and b ∈ Val(B). A and B are random variables; A=a and B=b are events. Random variables indicate many possible combinations of events.

24 CONDITIONAL PROBABILITY P(A,B) = P(A|B) P(B) = P(B|A) P(A). P(A|B) is the posterior probability of A given knowledge of B. Axiomatic definition: P(A|B) = P(A,B)/P(B)

25 PROBABILISTIC INFERENCE

26 CONDITIONAL DISTRIBUTIONS
State         P(state)
 C,  T,  P     0.108
 C,  T, ¬P     0.012
 C, ¬T,  P     0.072
 C, ¬T, ¬P     0.008
¬C,  T,  P     0.016
¬C,  T, ¬P     0.064
¬C, ¬T,  P     0.144
¬C, ¬T, ¬P     0.576
P(Cavity|Toothache) = P(Cavity ∧ Toothache)/P(Toothache) = (0.108 + 0.012)/(0.108 + 0.012 + 0.016 + 0.064) = 0.6
Interpretation: after observing Toothache, the patient is no longer an "average" one, and the prior probability (0.2) of Cavity is no longer valid.
P(Cavity|Toothache) is calculated by keeping the ratios of the probabilities of the 4 cases of Toothache unchanged, and normalizing their sum to 1.
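The same computation can be written as a few lines of Python over the joint table (same numbers as on the slide; the variable names are mine):

```python
# Joint table keyed by (Cavity, Toothache, PCatch) truth assignments.
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

p_toothache = sum(p for (c, t, pc), p in joint.items() if t)                    # P(Toothache) = 0.2
p_cavity_and_toothache = sum(p for (c, t, pc), p in joint.items() if c and t)   # 0.12
print(p_cavity_and_toothache / p_toothache)   # 0.6 (up to floating-point rounding)
```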

27 UPDATING THE BELIEF STATE
The patient walks in the dentist's door.
Let D now observe evidence E: Toothache holds with probability 0.8 (e.g., "the patient says so").
How should D update its belief state?
State         P(state)
 C,  T,  P     0.108
 C,  T, ¬P     0.012
 C, ¬T,  P     0.072
 C, ¬T, ¬P     0.008
¬C,  T,  P     0.016
¬C,  T, ¬P     0.064
¬C, ¬T,  P     0.144
¬C, ¬T, ¬P     0.576

28 UPDATING THE BELIEF STATE
P(Toothache|E) = 0.8
We want to compute P(C,T,P|E) = P(C,P|T,E) P(T|E)
Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence: P(C,P|T,E) = P(C,P|T)
P(C,T,P|E) = P(C,P,T) P(T|E)/P(T)
(Joint table as on slide 27.)

29 UPDATING THE BELIEF STATE
P(Toothache|E) = 0.8
We want to compute P(C ∧ T ∧ P|E) = P(C,P|T,E) P(T|E)
Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence: P(C,P|T,E) = P(C,P|T)
P(C ∧ T ∧ P|E) = P(C,P,T) P(T|E)/P(T)
In the joint table of slide 27, the rows with T should be scaled to sum to 0.8; the rows with ¬T should be scaled to sum to 0.2.

30 UPDATING THE BELIEF STATE
P(Toothache|E) = 0.8
We want to compute P(C ∧ T ∧ P|E) = P(C,P|T,E) P(T|E)
Since E is not directly related to the cavity or the probe catch, we consider that C and P are independent of E given T, hence: P(C,P|T,E) = P(C,P|T)
P(C ∧ T ∧ P|E) = P(C,P,T) P(T|E)/P(T)
State         P(state)   P(state|E)
 C,  T,  P     0.108      0.432
 C,  T, ¬P     0.012      0.048
 C, ¬T,  P     0.072      0.018
 C, ¬T, ¬P     0.008      0.002
¬C,  T,  P     0.016      0.064
¬C,  T, ¬P     0.064      0.256
¬C, ¬T,  P     0.144      0.036
¬C, ¬T, ¬P     0.576      0.144
(Rows with T are scaled to sum to 0.8; rows with ¬T are scaled to sum to 0.2.)
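The scaling described above can be written directly. A minimal Python sketch using the same joint table (the rescaling rule is exactly the one stated on the slide):

```python
joint = {   # (Cavity, Toothache, PCatch) -> probability, same table as slide 26
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}
p_T = sum(p for (c, t, pc), p in joint.items() if t)   # prior P(Toothache) = 0.2
p_T_given_E = 0.8                                      # soft evidence from E

# Scale Toothache rows by 0.8/0.2 and ¬Toothache rows by 0.2/0.8.
posterior = {
    state: p * (p_T_given_E / p_T if state[1] else (1 - p_T_given_E) / (1 - p_T))
    for state, p in joint.items()
}
print(posterior[(True, True, True)])   # 0.432, the first entry of the updated table
```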

31 ISSUES If a state is described by n propositions, then a belief state contains 2^n states (possibly, some have probability 0). Modeling difficulty: many numbers must be entered in the first place. Computational issue: memory size and time.

32 INDEPENDENCE OF EVENTS Two events A=a and B=b are independent if P(A=a ∧ B=b) = P(A=a) P(B=b), hence P(A=a|B=b) = P(A=a). Knowing B=b doesn't give you any information about whether A=a is true.

33 INDEPENDENCE OF RANDOM VARIABLES Two random variables A and B are independent if P(A,B) = P(A) P(B), hence P(A|B) = P(A). Knowing B doesn't give you any information about A. [This equality has to hold for all combinations of values that A and B can take on, i.e., all events A=a and B=b are independent]

34 SIGNIFICANCE OF INDEPENDENCE If A and B are independent, then P(A,B) = P(A) P(B) => the joint distribution over A and B can be defined as a product of the distribution of A and the distribution of B. Rather than storing a big probability table over all combinations of A and B, store two much smaller probability tables! To compute P(A=a ∧ B=b), just look up P(A=a) and P(B=b) in the individual tables and multiply them together.
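A tiny Python sketch of this idea (hypothetical values, not from the slides): two small tables replace the |Val(A)| × |Val(B)| joint table.

```python
# Marginal tables (made-up numbers); valid as a joint model only under independence.
P_A = {'a1': 0.3, 'a2': 0.7}
P_B = {'b1': 0.6, 'b2': 0.4}

def joint(a, b):
    # P(A=a ∧ B=b) = P(A=a) P(B=b), assuming A and B are independent
    return P_A[a] * P_B[b]

print(joint('a1', 'b2'))  # 0.3 * 0.4 = 0.12
```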

35 CONDITIONAL INDEPENDENCE Two random variables A and B are conditionally independent given C if P(A,B|C) = P(A|C) P(B|C), hence P(A|B,C) = P(A|C). Once you know C, learning B doesn't give you any information about A. [Again, this has to hold for all combinations of values that A, B, C can take on]

36 SIGNIFICANCE OF CONDITIONAL INDEPENDENCE Consider Rainy, Thunder, and RoadsSlippery. Ostensibly, thunder doesn't have anything directly to do with slippery roads… but they happen together more often when it rains, so they are not independent… So it is reasonable to believe that Thunder and RoadsSlippery are conditionally independent given Rainy. So if I want to estimate whether or not I will hear thunder, I don't need to think about the state of the roads, just whether or not it's raining!

37 Toothache and PCatch are independent given Cavity, but this relation is hidden in the numbers of the joint table (slide 26)! [Quiz]
Bayesian networks explicitly represent independence among propositions to reduce the number of probabilities defining a belief state.

38 BAYESIAN NETWORK
Notice that Cavity is the "cause" of both Toothache and PCatch, and represent the causality links explicitly: give the prior probability distribution of Cavity, and give the conditional probability tables of Toothache and PCatch.
Network structure: Cavity → Toothache, Cavity → PCatch
P(C,T,P) = P(T,P|C) P(C) = P(T|C) P(P|C) P(C)
P(Cavity) = 0.2
            Cavity   ¬Cavity
  P(T|C)     0.6      0.1
  P(P|C)     0.9      0.02
5 probabilities, instead of 7

39 CONDITIONAL PROBABILITY TABLES
Network structure: Cavity → Toothache, Cavity → PCatch
P(C,T,P) = P(T,P|C) P(C) = P(T|C) P(P|C) P(C)
P(Cavity) = 0.2
            Cavity   ¬Cavity
  P(T|C)     0.6      0.1
  P(¬T|C)    0.4      0.9
            Cavity   ¬Cavity
  P(P|C)     0.9      0.02
Columns sum to 1. If X takes n values, just store n-1 entries.
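As a check that these five numbers really determine the whole belief state, here is a short Python sketch of the factorization (the probabilities are the slides' CPT entries; the function and variable names are mine):

```python
# P(C,T,P) = P(T|C) P(P|C) P(C), using the CPTs from the slides.
P_cavity = 0.2
P_toothache_given_cavity = {True: 0.6, False: 0.1}   # P(T | C=c)
P_pcatch_given_cavity    = {True: 0.9, False: 0.02}  # P(P | C=c)

def joint(c, t, p):
    pc = P_cavity if c else 1 - P_cavity
    pt = P_toothache_given_cavity[c] if t else 1 - P_toothache_given_cavity[c]
    pp = P_pcatch_given_cavity[c] if p else 1 - P_pcatch_given_cavity[c]
    return pt * pp * pc

print(joint(True, True, True))   # 0.6 * 0.9 * 0.2 = 0.108, matching the joint table of slide 26
```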

40 SIGNIFICANCE OF CONDITIONAL INDEPENDENCE Consider Grade(CS101), Intelligence, and SAT. Ostensibly, the grade in a course doesn't have a direct relationship with SAT scores, but good students are more likely to get good SAT scores, so they are not independent… It is reasonable to believe that Grade(CS101) and SAT are conditionally independent given Intelligence.

41 BAYESIAN NETWORK
Explicitly represent independence among propositions. Notice that Intelligence is the "cause" of both Grade and SAT, and the causality is represented explicitly.
Network structure: Intelligence → Grade, Intelligence → SAT
P(I,G,S) = P(G,S|I) P(I) = P(G|I) P(S|I) P(I)
P(I=x):  high 0.3, low 0.7
P(G=x|I):      I=low   I=high
  'A'           0.2     0.74
  'B'           0.34    0.17
  'C'           0.46    0.09
P(S=x|I):      I=low   I=high
  low           0.95    0.2
  high          0.05    0.8
7 probabilities, instead of 11

42 SIGNIFICANCE OF BAYESIAN NETWORKS If we know that variables are conditionally independent, we should be able to decompose the joint distribution to take advantage of it. Bayesian networks are a way of efficiently factoring the joint distribution into conditional probabilities, and also of building complex joint distributions from smaller models of probabilistic relationships. But… what knowledge does the BN encode about the distribution? How do we use a BN to compute probabilities of variables that we are interested in?

43 A MORE COMPLEX BN
Directed acyclic graph: Burglary → Alarm, Earthquake → Alarm, Alarm → JohnCalls, Alarm → MaryCalls
Causes (Burglary, Earthquake) at the top, effects (JohnCalls, MaryCalls) at the bottom.
Intuitive meaning of an arc from x to y: "x has direct influence on y"

44 A MORE COMPLEX BN
P(B) = 0.001     P(E) = 0.002
P(A|B,E):   B=T,E=T: 0.95   B=T,E=F: 0.94   B=F,E=T: 0.29   B=F,E=F: 0.001
P(J|A):     A=T: 0.90   A=F: 0.05
P(M|A):     A=T: 0.70   A=F: 0.01
Size of the CPT for a node with k (boolean) parents: 2^k
10 probabilities, instead of 31

45 WHAT DOES THE BN ENCODE?
Each of the beliefs JohnCalls and MaryCalls is independent of Burglary and Earthquake given Alarm or ¬Alarm.
For example, John does not observe any burglaries directly.
P(B ∧ J) ≠ P(B) P(J), but P(B ∧ J|A) = P(B|A) P(J|A)

46 WHAT DOES THE BN ENCODE?
The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm.
For instance, the reasons why John and Mary may not call if there is an alarm are unrelated.
P(B ∧ J|A) = P(B|A) P(J|A)
P(J ∧ M|A) = P(J|A) P(M|A)
A node is independent of its non-descendants given its parents.

47 WHAT DOES THE BN ENCODE?
A node is independent of its non-descendants given its parents.
Burglary and Earthquake are independent.
The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm.
For instance, the reasons why John and Mary may not call if there is an alarm are unrelated.

48 LOCALLY STRUCTURED WORLD A world is locally structured (or sparse) if each of its components interacts directly with relatively few other components. In a sparse world, the CPTs are small and the BN contains many fewer probabilities than the full joint distribution. If the number of entries in each CPT is bounded by a constant, i.e., O(1), then the number of probabilities in a BN is linear in n – the number of propositions – instead of 2^n for the joint distribution.

49 EQUATIONS INVOLVING RANDOM VARIABLES GIVE RISE TO CAUSALITY RELATIONSHIPS
C = A ∨ B, or C = max(A,B): such an equation constrains the joint probability P(A,B,C).
Nicely encoded as a causality relationship A → C ← B.
The conditional probability of C is given by the equation rather than a CPT.

50 NAÏVE BAYES MODELS
P(Cause, Effect1, …, Effectn) = P(Cause) Πi P(Effecti | Cause)
Network structure: Cause → Effect1, Cause → Effect2, …, Cause → Effectn

51 BAYES' RULE AND OTHER PROBABILITY MANIPULATIONS
P(A,B) = P(A|B) P(B) = P(B|A) P(A)
P(A|B) = P(B|A) P(A) / P(B)
Gives us a way to manipulate distributions, e.g. P(B) = Σa P(B|A=a) P(A=a)
Can derive P(A|B) and P(B) using only P(B|A) and P(A)
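Bayes' rule plus the total-probability step above is all you need to turn P(B|A) and P(A) into P(A|B). A minimal Python sketch with made-up numbers (not from the slides):

```python
# Start from a prior P(A) and a conditional P(B=true | A=a).
P_A = {'a1': 0.4, 'a2': 0.6}
P_B_given_A = {'a1': 0.9, 'a2': 0.2}

P_B = sum(P_B_given_A[a] * P_A[a] for a in P_A)      # P(B) = Σa P(B|A=a) P(A=a) = 0.48
P_a1_given_B = P_B_given_A['a1'] * P_A['a1'] / P_B   # Bayes' rule: 0.36 / 0.48 = 0.75
print(P_B, P_a1_given_B)
```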

52 NAÏVE BAYES CLASSIFIER
P(Class, Feature1, …, Featuren) = P(Class) Πi P(Featurei | Class)
Network structure: Class → Feature1, Class → Feature2, …, Class → Featuren
P(C|F1,…,Fk) = P(C,F1,…,Fk)/P(F1,…,Fk) = 1/Z P(C) Πi P(Fi|C)
Given the features, what class? E.g., Class = Spam / Not Spam, or English / French / Latin …, with word occurrences as features.
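For concreteness, here is a minimal naive Bayes classifier in Python. It is a hypothetical spam example: the class prior and word probabilities are made up, only the formula above comes from the slide.

```python
P_class = {'spam': 0.3, 'not_spam': 0.7}
P_word_given_class = {                     # P(word occurs | class), one feature per word
    'spam':     {'free': 0.8, 'meeting': 0.1},
    'not_spam': {'free': 0.2, 'meeting': 0.6},
}

def posterior(words):
    # P(C | F1,...,Fk) = (1/Z) P(C) Πi P(Fi | C)
    scores = {}
    for c in P_class:
        score = P_class[c]
        for w in words:
            score *= P_word_given_class[c][w]
        scores[c] = score
    z = sum(scores.values())               # normalization constant Z
    return {c: s / z for c, s in scores.items()}

print(posterior(['free']))   # {'spam': ≈0.63, 'not_spam': ≈0.37}
```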

53 BUT DOES A BN REPRESENT A BELIEF STATE? IN OTHER WORDS, CAN WE COMPUTE THE FULL JOINT DISTRIBUTION OF THE PROPOSITIONS FROM IT?

54 CALCULATION OF JOINT PROBABILITY
(Burglary network with the CPTs of slide 44: P(B) = 0.001, P(E) = 0.002, P(A|B,E), P(J|A), P(M|A))
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = ??

55
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
= P(J ∧ M | A, ¬B, ¬E) × P(A ∧ ¬B ∧ ¬E)
= P(J | A, ¬B, ¬E) × P(M | A, ¬B, ¬E) × P(A ∧ ¬B ∧ ¬E)   (J and M are independent given A)
P(J | A, ¬B, ¬E) = P(J | A)   (J and B, and J and E, are independent given A)
P(M | A, ¬B, ¬E) = P(M | A)
P(A ∧ ¬B ∧ ¬E) = P(A | ¬B, ¬E) × P(¬B | ¬E) × P(¬E) = P(A | ¬B, ¬E) × P(¬B) × P(¬E)   (B and E are independent)
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)

56 CALCULATION OF JOINT PROBABILITY
(CPTs as on slide 44)
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
                       = 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.00062

57 CALCULATION OF JOINT PROBABILITY
(CPTs as on slide 44)
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
                       = 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.00062
P(x1 ∧ x2 ∧ … ∧ xn) = Πi=1,…,n P(xi | parents(Xi))  →  full joint distribution table

58 CALCULATION OF JOINT PROBABILITY
(CPTs as on slide 44)
P(x1 ∧ x2 ∧ … ∧ xn) = Πi=1,…,n P(xi | parents(Xi))  →  full joint distribution table
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
                       = 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.00062
Since a BN defines the full joint distribution of a set of propositions, it represents a belief state.
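The same product of CPT entries can be computed directly. A short Python sketch (the probabilities are the CPTs of slide 44; the variable names are mine):

```python
# CPTs of the burglary network from slide 44.
P_B, P_E = 0.001, 0.002
P_A_given_BE = {(True, True): 0.95, (True, False): 0.94,
                (False, True): 0.29, (False, False): 0.001}   # P(A | B, E)
P_J_given_A = {True: 0.90, False: 0.05}
P_M_given_A = {True: 0.70, False: 0.01}

# P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
p = (P_J_given_A[True] * P_M_given_A[True] *
     P_A_given_BE[(False, False)] * (1 - P_B) * (1 - P_E))
print(p)   # ≈ 0.000628, reported on the slide as 0.00062
```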

59 HOMEWORK Read R&N 14.1–3

