PGM 2002/3 – Tirgul 6: Approximate Inference: Sampling (presentation transcript)

1 PGM 2002/3 – Tirgul 6: Approximate Inference: Sampling

2 Approximation
- Until now, we examined exact computation.
- In many applications, approximations are sufficient.
  Example: P(X = x | e) = 0.3183098861838. Perhaps P(X = x | e) ≈ 0.3 is a good enough approximation, e.g., if we take action only when P(X = x | e) > 0.5.
- Can we find good approximation algorithms?

3 Types of Approximations: Absolute error
An estimate q of P(X = x | e) has absolute error ε if
  P(X = x | e) - ε ≤ q ≤ P(X = x | e) + ε,
equivalently
  q - ε ≤ P(X = x | e) ≤ q + ε.
- Absolute error is not always what we want:
  - If P(X = x | e) = 0.0001, then an absolute error of 0.001 is unacceptable.
  - If P(X = x | e) = 0.3, then an absolute error of 0.001 is overly precise.
[Figure: the interval of width 2ε around q on the number line from 0 to 1.]

4 Types of Approximations: Relative error
An estimate q of P(X = x | e) has relative error ε if
  P(X = x | e)(1 - ε) ≤ q ≤ P(X = x | e)(1 + ε),
equivalently
  q / (1 + ε) ≤ P(X = x | e) ≤ q / (1 - ε).
- The sensitivity of the approximation depends on the actual value of the desired result; a small sketch of both notions follows.
[Figure: the interval [q/(1 + ε), q/(1 - ε)] on the number line from 0 to 1.]
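A tiny Python sketch of the two definitions (the function names and the numeric example are my own illustration, not from the slides):

    def has_absolute_error(q, p, eps):
        """True if q is within absolute error eps of the true value p."""
        return p - eps <= q <= p + eps

    def has_relative_error(q, p, eps):
        """True if q is within relative error eps of the true value p."""
        return p * (1 - eps) <= q <= p * (1 + eps)

    # For p = 0.0001, the estimate q = 0.001 passes an absolute-error test
    # with eps = 0.001 yet is off by a factor of 10: it fails every
    # relative-error test with eps < 9.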

5 Complexity
- Recall that exact inference is NP-hard.
- Is approximate inference any easier?
- Construction for exact inference:
  Input: a 3-SAT formula φ
  Output: a BN such that P(X = t) > 0 iff φ is satisfiable

6 Complexity: Relative Error
Suppose that q is an ε-relative-error estimate of P(X = t). If φ is not satisfiable, then P(X = t) = 0, so
  0 = P(X = t)(1 - ε) ≤ q ≤ P(X = t)(1 + ε) = 0.
Thus, if q > 0, then φ is satisfiable. An immediate consequence:
Thm: Given ε, finding an ε-relative-error approximation is NP-hard.

7 Complexity: Absolute error
- We can find absolute-error approximations to P(X = x); we will see such algorithms shortly.
- However, once we have evidence, the problem is harder.
Thm: If ε < 0.5, then finding an estimate of P(X = x | e) with absolute error ε is NP-hard.

8 Proof
- Recall our construction...
[Figure: the 3-SAT construction: variable nodes Q1, …, Qn feed clause nodes φ1, …, φk, which are conjoined through a chain of AND nodes A1, A2, … into the output node X.]

9 Proof (cont.)
Suppose we can estimate P(X = x | e) with absolute error ε.
- Let p1 be our estimate of P(Q1 = t | X = t). Assign q1 = t if p1 > 0.5, else q1 = f.
- Let p2 be our estimate of P(Q2 = t | X = t, Q1 = q1). Assign q2 = t if p2 > 0.5, else q2 = f.
- …
- Let pn be our estimate of P(Qn = t | X = t, Q1 = q1, …, Q_{n-1} = q_{n-1}). Assign qn = t if pn > 0.5, else qn = f.

10 Proof (cont.)
Claim: if φ is satisfiable, then q1, …, qn is a satisfying assignment.
- Suppose φ is satisfiable.
- By induction on i: there is a satisfying assignment with Q1 = q1, …, Qi = qi.
Base case:
- If Q1 = t in all satisfying assignments, then P(Q1 = t | X = t) = 1, so p1 ≥ 1 - ε > 0.5, hence q1 = t.
- If Q1 = f in all satisfying assignments, then symmetrically q1 = f.
- Otherwise, the statement holds for any choice of q1.

11 Proof (cont.)
Induction step:
- If Q_{i+1} = t in all satisfying assignments with Q1 = q1, …, Qi = qi, then P(Q_{i+1} = t | X = t, Q1 = q1, …, Qi = qi) = 1, so p_{i+1} ≥ 1 - ε > 0.5, hence q_{i+1} = t.
- If Q_{i+1} = f in all satisfying assignments with Q1 = q1, …, Qi = qi, then symmetrically q_{i+1} = f.
- Otherwise, the statement holds for any choice of q_{i+1}.

12 Proof (cont.)
- We can efficiently check whether q1, …, qn is a satisfying assignment (linear time).
  If it is, then φ is satisfiable; if it is not, then φ is not satisfiable.
- So, given an approximation procedure with absolute error ε < 0.5, we can decide 3-SAT with n procedure calls.
- Hence ε-absolute-error approximation of P(X = x | e) is NP-hard.

13 Search Algorithms
Idea: search for high-probability instances.
- Suppose x[1], …, x[N] are instances with high mass. We can approximate:
  P(e) ≈ Σ_i P(e | x[i]) P(x[i])
- If x[i] is a complete instantiation, then P(e | x[i]) is 0 or 1.

14 Search Algorithms (cont.)
- Instances that do not satisfy e play no role in the approximation.
- We need to focus the search on instances that do satisfy e.
- Clearly, in some cases this is hard (e.g., the construction from our NP-hardness result).

15 Stochastic Simulation
- Suppose we can sample instances according to P(X1, …, Xn).
- What is the probability that a random sample satisfies e? Exactly P(e).
- We can view each sample as tossing a biased coin with probability P(e) of "Heads".

16 Stochastic Sampling
- Intuition: given a sufficient number of samples x[1], …, x[N], we can estimate
  P(e) ≈ (number of samples satisfying e) / N
- The law of large numbers implies that as N grows, this estimate converges to P(e) with high probability.
- How many samples do we need to get a reliable estimate? Use Chernoff's bound for binomial distributions; see the sketch below.
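As a concrete illustration, the additive Chernoff/Hoeffding bound P(|p̂ - p| ≥ ε) ≤ 2·exp(-2Nε²) yields a sufficient sample size. A minimal sketch, assuming this form of the bound (the function name is mine):

    import math

    def samples_needed(eps, delta):
        """Smallest N with 2 * exp(-2 * N * eps**2) <= delta, so the
        empirical frequency is within eps of P(e) with prob >= 1 - delta."""
        return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

    # Absolute error 0.01 with 95% confidence:
    # samples_needed(0.01, 0.05)  ->  18445 samples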

17 Sampling a Bayesian Network
- If P(X1, …, Xn) is represented by a Bayesian network, can we efficiently sample from it?
- Idea: sample according to the structure of the network: write the distribution using the chain rule, and then sample each variable given its parents.

18–22 Logic sampling (worked example)
[Figure, repeated across five slides: the alarm network, Burglary → Alarm ← Earthquake, Earthquake → Radio, Alarm → Call, with CPTs P(b) = 0.03, P(e) = 0.001, P(a | B, E) = 0.98 / 0.4 / 0.7 / 0.01, P(c | A) = 0.8 / 0.05, P(r | E) = 0.3 / 0.001. The slides build one sample (B, E, A, C, R) a variable at a time in topological order, flipping a coin with the probability read off the relevant CPT row: 0.03 for B, 0.001 for E, then 0.4, 0.8, and 0.3 for A, C, and R given the values sampled so far.]

23 Logic Sampling
- Let X1, …, Xn be an ordering of the variables consistent with arc direction.
- For i = 1, …, n: sample xi from P(Xi | pa_i).
  (Note: since Pa_i ⊆ {X1, …, X_{i-1}}, we have already assigned values to them.)
- Return x1, …, xn.
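A minimal Python sketch of this procedure on the alarm network from the preceding slides. The dict-based encoding, variable names, and helpers are my own illustration, not the lecture's notation; all variables are binary.

    import random

    # CPTs from the example slides: each variable maps to (parents, function
    # from the parents' values to P(variable = True)). Listing the variables
    # in topological order lets us sample them left to right.
    NETWORK = {
        "B": ([], lambda pa: 0.03),
        "E": ([], lambda pa: 0.001),
        "A": (["B", "E"], lambda pa: {(True, True): 0.98, (True, False): 0.4,
                                      (False, True): 0.7, (False, False): 0.01}[pa]),
        "C": (["A"], lambda pa: 0.8 if pa[0] else 0.05),
        "R": (["E"], lambda pa: 0.3 if pa[0] else 0.001),
    }

    def logic_sample(net):
        """Sample one complete instantiation, each variable given its parents."""
        x = {}
        for var, (parents, p_true) in net.items():  # topological order
            p = p_true(tuple(x[pa] for pa in parents))
            x[var] = random.random() < p
        return x

    # Estimate P(e) as the fraction of samples satisfying e, e.g. e = {C = true}:
    samples = [logic_sample(NETWORK) for _ in range(100_000)]
    p_c = sum(s["C"] for s in samples) / len(samples)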

24 Logic Sampling
- Sampling a complete instance is linear in the number of variables, regardless of the structure of the network.
- However, if P(e) is small, we need many samples to get a decent estimate.

25 Can we sample from P(X1, …, Xn | e)?
- If the evidence is at the roots of the network: easily.
- If the evidence is at the leaves, we have a problem: our sampling method proceeds according to the order of nodes in the graph.
- Note: we can use arc reversal to make the evidence nodes roots. In some networks, however, this will create exponentially large tables...

26 Likelihood Weighting
- Can we ensure that all of our samples satisfy e?
- One simple solution: when we need to sample a variable that is assigned a value by e, use the specified value.
- For example, in the two-node network X → Y, suppose we know Y = 1: sample X from P(X), then take Y = 1.
- Is this a sample from P(X, Y | Y = 1)?

27 Likelihood Weighting
- Problem: these samples of X are from P(X), not from the conditional distribution.
- Solution: penalize samples in which P(Y = 1 | X) is small.
- We now sample as follows: let x[i] be a sample from P(X), and let w[i] = P(Y = 1 | X = x[i]).

28 Likelihood Weighting
- Why does this make sense? When N is large, we expect about N·P(X = x) samples with x[i] = x.
- Thus the total weight of the samples with x[i] = x is approximately
  Σ_i w[i]·1{x[i] = x} ≈ N · P(X = x) · P(Y = 1 | X = x) = N · P(X = x, Y = 1)
- When we normalize (divide by the total weight Σ_i w[i] ≈ N·P(Y = 1)), we get an approximation of the conditional probability P(X = x | Y = 1).

29–33 Likelihood Weighting (worked example)
[Figure, repeated across five slides: the same alarm network and CPTs as before, now with evidence on Alarm and Radio. The slides build one weighted sample: B and E are sampled from their priors (0.03 and 0.001), the evidence variables are clamped to their observed values rather than sampled, and the weight accumulates the probability of each observed evidence value given its sampled parents, reaching Weight = 0.6 × 0.3 on the final slide, while C is sampled given the clamped value of A.]

34 Likelihood Weighting
- Let X1, …, Xn be an ordering of the variables consistent with arc direction.
- w ← 1
- For i = 1, …, n:
  - if Xi = xi has been observed: w ← w · P(Xi = xi | pa_i)
  - else: sample xi from P(Xi | pa_i)
- Return x1, …, xn and w.
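A sketch of this procedure in the same toy encoding (NETWORK as defined after slide 23; the normalized estimator at the end is my own addition):

    def weighted_sample(net, evidence):
        """One likelihood-weighted sample: clamp evidence variables to their
        observed values and multiply their likelihoods into the weight."""
        x, w = {}, 1.0
        for var, (parents, p_true) in net.items():
            p = p_true(tuple(x[pa] for pa in parents))
            if var in evidence:
                x[var] = evidence[var]
                w *= p if evidence[var] else 1.0 - p
            else:
                x[var] = random.random() < p
        return x, w

    def lw_estimate(net, query, evidence, M=100_000):
        """Normalized estimate of P(query = True | evidence)."""
        num = den = 0.0
        for _ in range(M):
            x, w = weighted_sample(net, evidence)
            den += w
            num += w * x[query]
        return num / den

    # e.g. P(B = true | A = true, R = true):
    # lw_estimate(NETWORK, "B", {"A": True, "R": True})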

35 Importance Sampling
A general method for estimating expectations under P(X) when we cannot sample from P(X).
Idea: choose an approximating (proposal) distribution Q(X), sample from it, and correct with the weight
  w(X) = P(X) / Q(X)
If we could generate samples from P(X), we would estimate
  E_P[f(X)] ≈ (1/M) Σ_m f(x[m])
Now that we generate the samples from Q(X), we instead use
  E_P[f(X)] = E_Q[f(X) w(X)] ≈ (1/M) Σ_m f(x[m]) w(x[m])

36 (Unnormalized) Importance Sampling
1. For m = 1 : M
   - Sample x[m] from Q(X)
   - Calculate w[m] = P(x[m]) / Q(x[m])
2. Estimate the expectation of f(X) by
   E_P[f(X)] ≈ (1/M) Σ_m f(x[m]) w[m]
Requirements:
- P(X) > 0 implies Q(X) > 0 (do not ignore possible scenarios)
- It is possible to evaluate P(x) and Q(x) for a specific X = x
- It is possible to sample from Q(X)
A minimal sketch follows.
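A minimal Python sketch of the unnormalized estimator; the Gaussian sanity check at the end is my own illustration, not from the lecture:

    import math
    import random

    def unnormalized_is(f, p_pdf, q_pdf, q_sample, M=100_000):
        """Estimate E_P[f(X)] from samples of Q, weighting each by P/Q.
        Requires q_pdf(x) > 0 wherever p_pdf(x) > 0."""
        total = 0.0
        for _ in range(M):
            x = q_sample()
            total += f(x) * p_pdf(x) / q_pdf(x)
        return total / M

    # Toy check: E_P[X^2] = 1 under P = N(0,1), estimated via Q = N(1,1).
    normal = lambda x, mu: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)
    est = unnormalized_is(lambda x: x * x,
                          lambda x: normal(x, 0.0),
                          lambda x: normal(x, 1.0),
                          lambda: random.gauss(1.0, 1.0))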

37 Normalized Importance Sampling
Assume that we cannot even evaluate P(X = x), but we can evaluate P'(X = x) = α·P(X = x) for some unknown constant α (for example, in a Bayesian network we can evaluate P(X) but not P(X | e)).
We define w'(X) = P'(X) / Q(X). We can then evaluate α:
  α = Σ_x P'(x) = Σ_x Q(x) w'(x) = E_Q[w'(X)]
and then:
  E_P[f(X)] = E_Q[f(X) w'(X)] / α = E_Q[f(X) w'(X)] / E_Q[w'(X)]
where in the last step we simply replace α with the expression above.

38 Normalized Importance Sampling
We can now estimate the expectation of f(X), as in unnormalized importance sampling, by sampling x[1], …, x[M] from Q(X) and computing
  E_P[f(X)] ≈ Σ_m f(x[m]) w'(x[m]) / Σ_m w'(x[m])
(hence the name "normalized": we divide by the sum of the weights). A sketch follows.
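The same sketch with the normalization step, assuming only an unnormalized P'(x) can be evaluated:

    def normalized_is(f, p_unnorm, q_pdf, q_sample, M=100_000):
        """Estimate E_P[f(X)] when only P'(x) = alpha * P(x) is computable:
        divide the weighted sum by the sum of the weights w'(x) = P'(x)/Q(x)."""
        num = den = 0.0
        for _ in range(M):
            x = q_sample()
            w = p_unnorm(x) / q_pdf(x)
            num += w * f(x)
            den += w          # (1/M) * den estimates alpha = E_Q[w'(X)]
        return num / den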

39 Importance Sampling to LW
We want to compute P(Y = y | e). (X is the set of random variables in the network, and Y is some subset we are interested in.)
1) Define a mutilated Bayesian network B_{Z=z} to be a network where:
   - all variables in Z are disconnected from their parents and are deterministically set to z
   - all other variables remain unchanged
2) Choose Q to be B_{E=e}. Convince yourself that P'(X)/Q(X), with P'(x) = P(x, e), is exactly the likelihood-weighting weight ∏_{E_i ∈ E} P(e_i | pa_i).
3) Choose f(x) to be 1{x_Y = y}.
4) Plug into the normalized-importance-sampling formula and you get exactly Likelihood Weighting.
⇒ Likelihood weighting is correct!
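A sketch of step 1 in the toy encoding from earlier: building the mutilated network by cutting evidence variables off from their parents (the function name is mine):

    def mutilate(net, evidence):
        """Mutilated network B_{E=e}: evidence variables lose their parents
        and take their observed values with probability 1; the rest of the
        network is unchanged."""
        out = {}
        for var, (parents, p_true) in net.items():
            if var in evidence:
                fixed = 1.0 if evidence[var] else 0.0
                out[var] = ([], lambda pa, f=fixed: f)  # default arg binds f now
            else:
                out[var] = (parents, p_true)
        return out

    # Sampling x from mutilate(NETWORK, {"A": True, "R": True}) with
    # logic_sample and weighting by P(x, e)/Q(x), which here reduces to
    # P(A = a | pa) * P(R = r | pa), reproduces weighted_sample exactly.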

