
1
A. Darwiche
Sensitivity Analysis in Bayesian Networks
Adnan Darwiche
Computer Science Department
http://www.cs.ucla.edu/~darwiche


4
Ex: How will this parameter change affect the query Pr(BP = Low | HR = Low)?
How will this parameter change impact the value of an arbitrary query Pr(y|e)?

5
Now Pr(BP = Low | HR = Low) = 0.69, but experts believe it should be >= 0.75. How do we efficiently identify minimal parameter changes that can help us satisfy this query constraint?


7
How do we choose among these candidate parameter changes? How do we measure and quantify change? Absolute? Relative? Or something else?

8
Naïve Bayes classifier
Class variable C, attributes E

P: Pr(p = yes) = 0.87, Pr(p = no) = 0.13
U: Pr(u = -ve | p = yes) = 0.27, Pr(u = +ve | p = no) = 0.107
B: Pr(b = -ve | p = yes) = 0.36, Pr(b = +ve | p = no) = 0.106
S: Pr(s = -ve | p = yes) = 0.10, Pr(s = +ve | p = no) = 0.01
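As a sketch (the function and variable names are mine, not from the slides), the classifier's posterior can be computed directly from these CPTs by enumerating the two class values:

```python
# Naive Bayes posterior for the pregnancy network on this slide.
# CPT values are taken from the slide; everything else is illustrative.

prior = {"yes": 0.87, "no": 0.13}  # Pr(P)

# The slide gives Pr(-ve | p = yes) (false negative) and
# Pr(+ve | p = no) (false positive) for each test.
false_neg = {"U": 0.27, "B": 0.36, "S": 0.10}
false_pos = {"U": 0.107, "B": 0.106, "S": 0.01}

def likelihood(test, result, p):
    """Pr(result | P = p) for a single test."""
    if p == "yes":
        return false_neg[test] if result == "-ve" else 1 - false_neg[test]
    return false_pos[test] if result == "+ve" else 1 - false_pos[test]

def posterior_yes(evidence):
    """Pr(P = yes | evidence), enumerating the two class values."""
    joint = {}
    for p in ("yes", "no"):
        joint[p] = prior[p]
        for test, result in evidence.items():
            joint[p] *= likelihood(test, result, p)
    return joint["yes"] / (joint["yes"] + joint["no"])

print(round(posterior_yes({"U": "+ve", "B": "-ve", "S": "-ve"}), 3))  # 0.65
```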

9
System Schematic
P: a power supply for the whole system
T: transmitter; generates data to the bus (generates no data when faulty)
R: receiver; receives data if available on the bus
S: sensor; reflects the status of data on its receiver (fails low)
The reliability of each component is shown next to it in the schematic (components P, T, receivers R1-R3, sensors S1-S3; reliabilities such as .99, .999, .995).


13
Now Pr(BP = Low | HR = Low) = 0.69, but experts believe it should be >= 0.75. How do we efficiently identify minimal parameter changes that can help us satisfy this query constraint?

14
From global to local belief change
Goal: for each network parameter θ(x|u), compute the minimal amount of change that can enforce query constraints such as:
1. Pr(y|e) >= ε
2. Pr(y|e) - Pr(z|e) >= ε
3. Pr(y|e) / Pr(z|e) >= ε

15
Computing the partial derivative
For each network parameter θ(x|u), we introduce a meta parameter τ(x|u), such that θ(x|u) varies with τ(x|u) while the remaining parameters of the CPT column co-vary proportionally. Our procedure requires us to first compute the partial derivative ∂Pr(e)/∂τ(x|u).

16
Parameter changes
Result: to ensure the constraint Pr'(y|e) >= ε, we need to change the meta parameter τ(x|u) by an amount δ satisfying an inequality whose solution takes the form δ >= q1 or δ <= q2, for computable quantities q1 and q2. If no valid change exists (none keeping the parameter within [0, 1]), the parameter is irrelevant to the constraint. We have to compute this solution for every parameter in the Bayesian network.
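As a sketch of this kind of computation (not the authors' implementation): under proportional co-variation, both Pr(y, e) and Pr(e) are linear functions of the meta parameter, so their coefficients can be recovered from two evaluations and the constraint Pr'(y|e) >= ε solved in closed form. The toy functions `pr_ye` and `pr_e` below stand in for exact inference on a real network:

```python
# Solve for the valid value of tau closest to the current one that
# enforces Pr'(y|e) >= eps, assuming Pr(y, e) = a*tau + b and
# Pr(e) = c*tau + d (linearity under proportional co-variation).

def linear_coeffs(f, t0=0.0, t1=1.0):
    """Recover (slope, intercept) of a linear function from two evaluations."""
    y0, y1 = f(t0), f(t1)
    slope = (y1 - y0) / (t1 - t0)
    return slope, y0 - slope * t0

def minimal_change(pr_ye, pr_e, tau, eps):
    """Value tau' in [0, 1] closest to tau with Pr'(y|e) >= eps, or None."""
    a, b = linear_coeffs(pr_ye)
    c, d = linear_coeffs(pr_e)
    coef, rhs = a - eps * c, eps * d - b   # (a - eps*c) * tau' >= eps*d - b
    candidates = [t for t in (0.0, 1.0, tau, rhs / coef if coef else None)
                  if t is not None and 0.0 <= t <= 1.0
                  and a * t + b >= eps * (c * t + d) - 1e-12]
    return min(candidates, key=lambda t: abs(t - tau)) if candidates else None

# Toy stand-ins: Pr(y, e) and Pr(e) as linear functions of tau.
pr_ye = lambda t: 0.08 + 0.64 * t
pr_e = lambda t: 0.35 + 0.40 * t

# Smallest shift of tau = 0.2 that enforces Pr'(y|e) >= 0.5.
print(minimal_change(pr_ye, pr_e, tau=0.2, eps=0.5))
```

A real implementation would obtain the coefficients from two network evaluations rather than from closed-form toy functions.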

17
Complexity
Time complexity: O(n·2^w), where n is the number of variables and w is the treewidth. This is the same complexity as computing Pr(e), the simplest query possible. The procedure is therefore very efficient: we pay no extra penalty for computing the partial derivatives.

18
Reasoning about Information
Understand the impact of new information on situation assessment and decision making.
What new evidence would it take, and how reliable should it be, to confirm a particular hypothesis?

19
False positive: 10%
False negative: 30%


22
Need the probability of pregnancy to be <= 5% given that all three tests are negative.
Solution: get a better scanning test, reducing its false negative rate from 10% to 4.6%. The other tests are irrelevant!

23
Current Work…
Multiple parameters:
– same CPT
– multiple CPTs
Multiple constraints

24
How will this parameter change impact the value of an arbitrary query Pr(y|e)?

25
Bounding the partial derivative
Key result: a network-independent bound on the partial derivative:
|∂Pr(y|e)/∂τ(x|u)| <= Pr(y|e)(1 - Pr(y|e)) / (τ(x|u)(1 - τ(x|u)))

26
[Figure: plot of the bound]

27
Bounding query change due to an infinitesimal parameter change
We apply an infinitesimal change Δτ(x|u) to the meta parameter τ(x|u), leading to a change ΔPr(y|e) in the query (we assume τ(x|u) <= 0.5). Result: the relative change in the query Pr(y|e) is at most double the relative change in the meta parameter:
|ΔPr(y|e)| / Pr(y|e) <= 2 |Δτ(x|u)| / τ(x|u)

28
Bounding query change due to an arbitrary parameter change
Separate bounds hold depending on whether the change in τ(x|u) is positive or negative. Combining both results, with O(x|u) = τ(x|u)/(1 - τ(x|u)) the odds of the parameter and O(y|e) the odds of the query:
|ln O'(y|e) - ln O(y|e)| <= |ln O'(x|u) - ln O(x|u)|

29
Bounding query change: example
The prior on Tampering changes from (true: 0.02, false: 0.98) to (true: 0.036, false: 0.964). Given evidence e, Pr(fire | e) = 0.029. What is Pr'(fire | e)?
Bound: Pr'(fire | e) in [0.016, 0.053]. Exact value: 0.021.
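The slide's interval can be reproduced (up to rounding of the stated 0.029) in a few lines, assuming the combined result that the query's odds change is bounded by the parameter's odds change:

```python
# Bound on a query after a single parameter change, assuming
# |ln O'(y|e) - ln O(y|e)| <= |ln O'(x|u) - ln O(x|u)|.
import math

def odds(p):
    return p / (1.0 - p)

def query_bounds(query_p, old_param, new_param):
    """Interval guaranteed to contain the new query value Pr'(y|e)."""
    d = abs(math.log(odds(new_param)) - math.log(odds(old_param)))
    o = odds(query_p)
    lo, hi = o * math.exp(-d), o * math.exp(d)
    return lo / (1 + lo), hi / (1 + hi)

lo, hi = query_bounds(query_p=0.029, old_param=0.02, new_param=0.036)
print(round(lo, 3), round(hi, 3))  # roughly the slide's interval [0.016, 0.053]
```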

30
Permissible changes for Pr(x|u) to ensure robustness
The plots (for Pr(y|e) = 0.9 and Pr(y|e) = 0.6) show analytically that the permissible changes are smaller for non-extreme queries. Moreover, we can afford larger absolute changes for non-extreme parameters.

31
A probabilistic distance measure
Define D(Pr, Pr') = ln max_w Pr'(w)/Pr(w) - ln min_w Pr'(w)/Pr(w), taken over worlds w. D(Pr, Pr') satisfies the three properties of distance: positiveness, symmetry, and the triangle inequality.

32
Worlds w over variables A, B, C, with Pr, Pr', and the ratio Pr'(w)/Pr(w):

A   B   C   | Pr(A,B,C) | Pr'(A,B,C) | Pr'(w)/Pr(w)
a   b   c   | 0.10      | 0.20       | 2.00
a   b   ~c  | 0.20      | 0.30       | 1.50
a   ~b  c   | 0.25      | 0.10       | 0.40
a   ~b  ~c  | 0.05      | 0.05       | 1.00
~a  b   c   | 0.05      | 0.10       | 2.00
~a  b   ~c  | 0.10      | 0.05       | 0.50
~a  ~b  c   | 0.10      | 0.10       | 1.00
~a  ~b  ~c  | 0.15      | 0.10       | 0.67

max Pr'(w)/Pr(w) = 2.00; min Pr'(w)/Pr(w) = 0.40
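Taking D(Pr, Pr') = ln max_w Pr'(w)/Pr(w) - ln min_w Pr'(w)/Pr(w), the distance for this table works out as follows (the two joint distributions below are one consistent reading of the table; the eight ratios match those listed on the slide):

```python
# Distance measure on this slide's example.
import math

pr  = [0.10, 0.20, 0.25, 0.05, 0.05, 0.10, 0.10, 0.15]  # Pr(A,B,C)
pr2 = [0.20, 0.30, 0.10, 0.05, 0.10, 0.05, 0.10, 0.10]  # Pr'(A,B,C)

ratios = [q / p for p, q in zip(pr, pr2)]
D = math.log(max(ratios)) - math.log(min(ratios))
print(round(max(ratios), 2), round(min(ratios), 2), round(D, 2))  # 2.0 0.4 1.61
```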

33
Distance Properties
The measure satisfies the properties of distance:
1. Positiveness: D(Pr, Pr') >= 0, and D(Pr, Pr') = 0 iff Pr = Pr'
2. Symmetry: D(Pr, Pr') = D(Pr', Pr)
3. Triangle inequality: D(Pr, Pr') + D(Pr', Pr'') >= D(Pr, Pr'')
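Assuming D is the max/min log-ratio measure computed on the neighboring slide, the three properties can be spot-checked numerically on random distributions:

```python
# Numeric spot check of the three distance properties, taking
# D(Pr, Pr') = ln max_w Pr'(w)/Pr(w) - ln min_w Pr'(w)/Pr(w).
import math
import random

def dist(p, q):
    ratios = [qi / pi for pi, qi in zip(p, q)]
    return math.log(max(ratios)) - math.log(min(ratios))

def rand_dist(n, rng):
    """A random strictly positive distribution over n worlds."""
    xs = [rng.random() + 0.01 for _ in range(n)]
    s = sum(xs)
    return [x / s for x in xs]

rng = random.Random(0)
for _ in range(1000):
    p, q, r = (rand_dist(4, rng) for _ in range(3))
    assert dist(p, q) >= 0                                # positiveness
    assert abs(dist(p, q) - dist(q, p)) < 1e-9            # symmetry
    assert dist(p, q) + dist(q, r) >= dist(p, r) - 1e-9   # triangle inequality
print("ok")
```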

34
Significance of the distance measure
Let:
– Pr and Pr' be two distributions,
– α and β be any two events,
– O(α|β) be the odds of α given β under Pr,
– O'(α|β) be the odds of α given β under Pr'.
Key result: we have the following tight bound:
e^(-D(Pr,Pr')) <= O'(α|β) / O(α|β) <= e^(D(Pr,Pr'))

35
Bounding belief change
Given Pr and Pr':
– p = Pr(α|β)
– d = D(Pr, Pr')
What can we say about Pr'(α|β)?

36
Bounding belief change
[Plot: bounds on Pr'(α|β) as a function of p, for d = 0.1 and d = 1]

37
Comparison with KL-divergence
1. KL-divergence is incomparable with our distance measure.
2. We cannot use KL-divergence to guarantee a bound similar to the one provided by our distance measure.
3. Our recent work suggests that KL-divergence can be viewed as an average-case bound, and our distance measure as a worst-case bound.

38
Applications to Bayesian networks
What is the global impact of this local parameter change?

39
Distance between networks
N' is obtained from N by changing the CPT of X from Θ(X|u) to Θ'(X|u). N and N' induce distributions Pr and Pr', respectively.

40
Bounding query change: example
The prior on Tampering changes from (true: 0.02, false: 0.98) to (true: 0.036, false: 0.964). Given evidence e, Pr(fire | e) = 0.029. What is Pr'(fire | e)?
Bound: Pr'(fire | e) in [0.016, 0.053].

41
Jeffrey's Rule
Given a distribution Pr, and soft evidence on mutually exclusive and exhaustive events γ1, …, γn, which should now have probabilities q1, …, qn.

42
Jeffrey's Rule
If w is a world that satisfies γi:
Pr'(w) = Pr(w) · qi / Pr(γi)

43
Example of Jeffrey's Rule
A piece of cloth: its color can be one of green, blue, violet. It may be sold or unsold the next day.

Color    Sold   Not sold   Total
green    0.12   0.18       0.3
blue     0.12   0.18       0.3
violet   0.32   0.08       0.4

44
Example of Jeffrey's Rule
Given soft evidence green: 0.7, blue: 0.25, violet: 0.05, scale each column by qi / Pr(color): 0.7/0.3, 0.25/0.3, 0.05/0.4.

Color    Sold   Not sold   Total
green    0.28   0.42       0.7
blue     0.10   0.15       0.25
violet   0.04   0.01       0.05
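The update on this slide can be reproduced mechanically: scale each world's probability by q_i / Pr(γ_i) for the soft-evidence event the world satisfies (here, its color):

```python
# Jeffrey's rule on the cloth example from the slides.

# Pr(color, sale)
pr = {
    ("green",  "sold"): 0.12, ("green",  "not"): 0.18,
    ("blue",   "sold"): 0.12, ("blue",   "not"): 0.18,
    ("violet", "sold"): 0.32, ("violet", "not"): 0.08,
}
soft = {"green": 0.7, "blue": 0.25, "violet": 0.05}  # new color probabilities

# Marginal Pr(color) for each soft-evidence event.
marg = {c: sum(v for (cc, _), v in pr.items() if cc == c) for c in soft}

# Scale each world by q_i / Pr(color_i).
pr_new = {(c, s): v * soft[c] / marg[c] for (c, s), v in pr.items()}
print(round(pr_new[("green", "sold")], 2))  # 0.28, as on the slide
```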

45
Bound on Jeffrey's Rule
Distance between Pr and Pr':
D(Pr, Pr') = max_i ln(qi / Pr(γi)) - min_i ln(qi / Pr(γi))
Bound on the amount of belief change:
e^(-D(Pr,Pr')) <= O'(α|β) / O(α|β) <= e^(D(Pr,Pr'))
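Assuming the distance induced by Jeffrey's rule is the max/min spread of the log ratios ln(q_i / Pr(γ_i)), the cloth example (color probabilities (0.3, 0.3, 0.4) revised to (0.7, 0.25, 0.05)) gives:

```python
# Distance induced by Jeffrey's rule on the cloth example.
import math

p = [0.3, 0.3, 0.4]    # old probabilities of the soft-evidence events
q = [0.7, 0.25, 0.05]  # new probabilities

logs = [math.log(qi / pi) for pi, qi in zip(p, q)]
D = max(logs) - min(logs)
print(round(D, 2))  # 2.93
```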

46
From Numbers to Decisions
Probabilistic inference + decision function
Network: Pregnant? (P) with tests Urine (U), Blood (B), Scanning (S); CPTs as on the earlier Naïve Bayes slide.
A decision function maps test results (U, B, S) to Yes or No.

47
From Numbers to Decisions
Ordered decision diagram + probabilistic inference
The decision function is represented as an ordered decision diagram branching on U, then B, then S, with Yes/No leaves (same network and CPTs as the previous slide).
Example situation: U = +ve, B = -ve, S = -ve.

48
Binary Decision Diagram
Diagram over variables X1, X2, X3 with terminal nodes 1 and 0.
Test-once property: each variable is tested at most once along any path.

49
Improving Reliability of Sensors
Currently: false negative 27.0%, false positive 10.7%.
Decision: Yes if Pr(pregnant | tests) > 90%.
Same decisions (in all situations) if the new test has: false negative 10%, false positive 5%.
Different decisions (in some situations) if the new test has: false negative 5%, false positive 2.5%.
We can characterize these situations, compute their likelihood, and analyze their properties.
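This equivalence check can be sketched by enumerating all eight test-result situations under the old and new error rates; I assume here that the new rates replace the urine test U (whose current rates are 27%/10.7%), the CPTs are those of the earlier Naïve Bayes slide, and the decision is Yes iff the posterior exceeds 90%:

```python
# Sketch: does changing a test's error rates change any decision?
from itertools import product

PRIOR = {"yes": 0.87, "no": 0.13}  # Pr(P)

def posterior_yes(evidence, rates):
    """Pr(P = yes | U, B, S) given per-test (false_neg, false_pos) rates."""
    joint = {}
    for p in ("yes", "no"):
        joint[p] = PRIOR[p]
        for test, result in evidence.items():
            fn, fp = rates[test]
            if p == "yes":
                joint[p] *= fn if result == "-ve" else 1 - fn
            else:
                joint[p] *= fp if result == "+ve" else 1 - fp
    return joint["yes"] / (joint["yes"] + joint["no"])

def decisions(rates, threshold=0.9):
    """Yes/No decision for each of the eight test-result situations."""
    return {(u, b, s): posterior_yes({"U": u, "B": b, "S": s}, rates) > threshold
            for u, b, s in product(("+ve", "-ve"), repeat=3)}

current = {"U": (0.27, 0.107), "B": (0.36, 0.106), "S": (0.10, 0.01)}
base = decisions(current)

same = decisions(dict(current, U=(0.10, 0.05)))   # FN 10%, FP 5%
diff = decisions(dict(current, U=(0.05, 0.025)))  # FN 5%, FP 2.5%
print([k for k in base if base[k] != same[k]])  # [] : same decisions everywhere
print([k for k in base if base[k] != diff[k]])  # one situation flips
```

Under these assumptions the first candidate leaves every decision unchanged while the second flips the decision in one situation, matching the slide's same/different split.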

50
Adding New Sensors
Add a new test (N) to the network: Pregnant? (P) with tests U, B, S, and N.
Decision: Yes if Pr(pregnant | tests) > 90%.
Same decisions (in all situations) if the new test has: false negative 40%, false positive 20%.
Different decisions (in some situations) if the new test has: false negative 20%, false positive 10%.
We can characterize these situations, compute their likelihood, and analyze their properties.

51
Equivalence of NB classifiers
Change the prior of P: the new classifier N' is equivalent to the original iff the prior of P in N' lies in [0.684, 0.970).

52
Equivalence of NB classifiers

53
[Figure: two paths, Path 1 and Path 2, ending in sub-ODDs D1 and D2]

54
[Figure: the two paths lead to the same sub-ODD, D1 = D2]

55
Theoretical results of the algorithm
Space complexity: total number of nodes in the ODD is O(b^(n/2)).
Time complexity: O(n · b^(n/2)).
This improves greatly over the brute-force approach, which enumerates all O(b^n) instances.

56
Experimental results of the algorithm

Network          #Attributes  #Instances  #Nodes bound  #Nodes
Tic-tac-toe      9            19683       247           58
Votes            16           65536       774           396
Spect            22           4×10^6      6153          609
Breast-cancer-w  9            1×10^9      21117         4405
Hepatitis        19           2×10^10     46794         9644
Kr-vs-kp         36           1×10^11     917488        59905
Mushroom         22           1×10^14     1×10^8        43638

57
http://reasoning.cs.ucla.edu/samiam/

