1 Influence Diagrams
Zachary Blomme, University of Nebraska, Lincoln, Pattern Recognition

2 Outline
Decision Trees: mathematically equivalent to influence diagrams, and simpler when the problem instance is small
Influence Diagrams: a Bayesian network that has been slightly modified to make a decision
Dynamic Networks: model dynamic aspects of a Bayesian network (usually time)

3 Decision Trees
Decision Node: represents decisions to be made
Chance Node: represents random variables
Utility: the value of an outcome to the decision maker
Expected Utility: the expected utility of a chance node is the expected value of the node's outcomes

4 Example 1
You are considering purchasing 100 shares of NASDIP (n), currently valued at $10 per share. Your model for the value of the stock at the end of the month is:
p(n = $5) = .25
p(n = $10) = .25
p(n = $20) = .5

5 Example 1 (decision tree)
Decision D:
  Buy NASDIP → chance node NASDIP:
    n = $5 (p = .25): $500
    n = $10 (p = .25): $1000
    n = $20 (p = .5): $2000
  Money in the bank (.5% interest): $1005

6 Example 1 Solved
Decision D (value $1375):
  Buy NASDIP → chance node NASDIP, expected value $1375:
    n = $5 (p = .25): $500
    n = $10 (p = .25): $1000
    n = $20 (p = .5): $2000
  Money in the bank (.5% interest): $1005
The expected value of buying is .25($500) + .25($1000) + .5($2000) = $1375, which exceeds $1005, so the decision is to buy NASDIP.
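This rollup is easy to verify; a minimal sketch in Python using the probabilities and payoffs from the slide (the dictionary representation is just for illustration):

```python
# Expected-utility rollup for Example 1: buy 100 shares of NASDIP vs. leave the money in the bank.
nasdip = {500: 0.25, 1000: 0.25, 2000: 0.50}   # payoff -> probability if we buy
bank = 1005                                     # certain payoff at .5% interest

eu_buy = sum(payoff * p for payoff, p in nasdip.items())   # .25*500 + .25*1000 + .5*2000 = 1375
best = max(("Buy NASDIP", eu_buy), ("Money in the bank", bank), key=lambda x: x[1])

print(f"EU(buy) = {eu_buy}, EU(bank) = {bank}, decision: {best[0]}")
# EU(buy) = 1375.0, EU(bank) = 1005, decision: Buy NASDIP
```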

7 Example 2
As in Example 1, except that you now also have the chance to buy an option on NASDIP for $. This option would allow you to buy 500 shares of NASDIP for $11 per share in one month.

8 Example 2 (decision tree)
Decision D:
  Buy NASDIP → chance node NASDIP1:
    n = $5 (p = .25): $500
    n = $10 (p = .25): $1000
    n = $20 (p = .5): $2000
  Buy option → chance node NASDIP2:
    n = $5 (p = .25): $0
    n = $10 (p = .25): $0
    n = $20 (p = .5): $4500

9 Example 2 Solved
Decision D (value $2250):
  Buy NASDIP → chance node NASDIP1, expected value $1375:
    n = $5 (p = .25): $500
    n = $10 (p = .25): $1000
    n = $20 (p = .5): $2000
  Buy option → chance node NASDIP2, expected value $2250:
    n = $5 (p = .25): $0
    n = $10 (p = .25): $0
    n = $20 (p = .5): $4500
The expected value of the option, .5($4500) = $2250, exceeds $1375, so the decision is to buy the option.

10 Utility Function
We don't have to use expected monetary value as the utility. One alternative is the exponential utility function:
U_R(x) = 1 - e^(-x/R)
The parameter R is the risk tolerance. As R becomes smaller, the function becomes more risk averse.

11 Example 3 (decision tree)
The same tree as Example 1, now to be solved with the exponential utility function:
Decision D:
  Buy NASDIP → chance node NASDIP:
    n = $5 (p = .25): $500
    n = $10 (p = .25): $1000
    n = $20 (p = .5): $2000
  Money in the bank (.5% interest): $1005
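A sketch of how Example 3 might be solved with the exponential utility function; the risk tolerance R = 500 is an assumed value for illustration (the slide does not fix one):

```python
import math

def u(x, r):
    """Exponential utility U_R(x) = 1 - e^(-x/R); smaller R means more risk aversion."""
    return 1 - math.exp(-x / r)

R = 500  # assumed risk tolerance; not specified on the slide
nasdip = {500: 0.25, 1000: 0.25, 2000: 0.50}

eu_buy = sum(p * u(payoff, R) for payoff, p in nasdip.items())
eu_bank = u(1005, R)  # the bank payoff is certain, so no expectation is needed

print(f"EU(buy) = {eu_buy:.4f}, EU(bank) = {eu_bank:.4f}")
print("Decision:", "Buy NASDIP" if eu_buy > eu_bank else "Money in the bank")
```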

12 Solving Decision Trees
Start at the right and proceed to the left:
Pass expected utilities up to chance nodes
Pass maxima up to decision nodes
Continue until the root is reached
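A minimal sketch of this rollback procedure, using nested tuples as a hypothetical tree encoding (not the author's notation):

```python
# Roll back a decision tree: expected values at chance nodes, maxima at decision nodes.
# A leaf is a number; a chance node is ("chance", [(prob, subtree), ...]);
# a decision node is ("decision", {label: subtree, ...}).

def rollback(node):
    if isinstance(node, (int, float)):          # leaf: utility of the outcome
        return node, None
    kind, branches = node
    if kind == "chance":                        # pass the expected utility up
        return sum(p * rollback(sub)[0] for p, sub in branches), None
    if kind == "decision":                      # pass the maximum up, remember the choice
        values = {label: rollback(sub)[0] for label, sub in branches.items()}
        best = max(values, key=values.get)
        return values[best], best
    raise ValueError(f"unknown node kind: {kind}")

# Example 1 as such a tree:
tree = ("decision", {
    "Buy NASDIP": ("chance", [(0.25, 500), (0.25, 1000), (0.5, 2000)]),
    "Money in the bank": 1005,
})
value, choice = rollback(tree)
print(value, choice)   # 1375.0 Buy NASDIP
```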

13 Example 6: Buying and Selling a Car
Know someone who'll sell for $10000
Know someone who'll buy for $11000
Problem: 20% of these cars are known to have a bad transmission
There exists a test that can identify bad transmissions, with a 10% false negative rate and a 30% false positive rate
The test costs $200

14 Example 6 (decision tree)
Decision D3:
  Run Test → chance node Test:
    Positive, i.e. bad indicated (p = .42) → decision D1:
      Buy Car → Tran1: Good (.571): $10800, Bad (.428): $7800
      Keep Money: $9800
    Negative, i.e. good indicated (p = .58) → decision D2:
      Buy Car → Tran2: Good (.965): $10800, Bad (.034): $7800
      Keep Money: $9800
  Buy Car → Tran3: Good (.8): $11000, Bad (.2): $8000
  Keep Money: $10000
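The probabilities on the Run Test branch follow from Bayes' theorem applied to the numbers given for this example (the prior .8/.2 and the likelihoods listed later on the Example 14/20 slide); a quick check in Python:

```python
# Where the .42/.58 and .571/.428/.965/.034 figures come from.
p_good, p_bad = 0.8, 0.2
p_pos_given_good, p_pos_given_bad = 0.3, 0.9      # P(Test = positive | tran), from the slides

p_pos = p_pos_given_good * p_good + p_pos_given_bad * p_bad        # 0.42
p_neg = 1 - p_pos                                                  # 0.58

p_good_given_pos = p_pos_given_good * p_good / p_pos               # 0.571...
p_bad_given_pos = p_pos_given_bad * p_bad / p_pos                  # 0.428...
p_good_given_neg = (1 - p_pos_given_good) * p_good / p_neg         # 0.965...
p_bad_given_neg = (1 - p_pos_given_bad) * p_bad / p_neg            # 0.034...

print(p_pos, p_good_given_pos, p_bad_given_pos, p_good_given_neg, p_bad_given_neg)
```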

15 Example 7
Should Leonardo take an umbrella when he goes out?
Leonardo finds carrying an umbrella to be inconvenient
There is a 40% chance of rain
The possible outcomes, in increasing order of preference, are:
  His new suit is ruined
  Suit not ruined, but he is inconvenienced
  Suit not ruined

16 Example 7 (decision tree)
Decision D:
  Don't take umbrella → chance node Rain:
    Rain (.4): suit ruined
    No rain (.6): suit not ruined
  Take umbrella: suit not ruined, inconvenience

17 Assign numeric values to outcomes
Assign a utility of 0 to the worst outcome
Assign a utility of 1 to the best outcome
To determine the utility of an intermediate outcome, consider a lottery between the two known, flanking outcomes
The utility of the intermediate outcome is defined as the expected utility of that lottery at the probability p' at which you would be indifferent between the lottery and receiving the intermediate outcome for certain

18 Example 7
U(suit not ruined) = 1
U(suit ruined) = 0
U(inconvenience) = EU(Lottery_p') = p' U(suit not ruined) + (1 - p') U(suit ruined) = p'(1) + (1 - p')(0) = p'
For this example we'll choose p' = .8, so U(inconvenience) = .8

19 Example 7 Solved
Decision D:
  Don't take umbrella → chance node Rain, expected utility .6:
    Rain (.4): suit ruined, utility 0
    No rain (.6): suit not ruined, utility 1
  Take umbrella: suit not ruined, inconvenience, utility .8
Since .8 > .6, Leonardo should take the umbrella.

20 Example 8
Amit is 15 years old and has a streptococcal infection (sore throat)
A treatment is available that reduces the number of days with a sore throat from 4 to 3
The treatment carries a small chance of death from anaphylaxis (probability .000003)
Amit's remaining life expectancy is 60 years

21 Example 8 (decision tree)
Decision A:
  Treat → chance node:
    No death (.999997): 3 sore throat days
    Death (.000003): dead
  Don't treat: 4 sore throat days

22 How do we quantify Amit's life?
We can use quality-adjusted life expectancy (QALE)
Amit feels that 1 year of life with a sore throat is worth .9 years of life without one
So Amit's QALE is:
QALE(L, t) = (L - t) + .9t
where L is Amit's remaining life expectancy and t is his time with the disease

23 Example 8
3 days = .008219 years; 4 days = .010959 years
QALE(60, .008219) = 60 - .1(.008219) = 59.999178 years
QALE(60, .010959) = 60 - .1(.010959) = 59.998904 years

24 Example 8 Solved
Decision D:
  Treat → chance node, expected QALE 59.998998 yrs:
    No death (.999997): 59.999178 yrs
    Death (.000003): 0 yrs (dead)
  Don't treat: 59.998904 yrs
Since 59.998998 > 59.998904, the expected QALE favors treating.
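A sketch of the QALE arithmetic behind these numbers (the .000003 death probability is the complement of the .999997 shown on the tree):

```python
# Quality-adjusted life expectancy for Example 8.
def qale(life_expectancy_years, sick_years, weight=0.9):
    """QALE(L, t) = (L - t) + weight * t, with weight = .9 for a sore-throat year."""
    return (life_expectancy_years - sick_years) + weight * sick_years

L = 60.0
q_treat_alive = qale(L, 3 / 365)      # about 59.999178 years
q_no_treat = qale(L, 4 / 365)         # about 59.998904 years

p_survive = 0.999997
eu_treat = p_survive * q_treat_alive + (1 - p_survive) * 0.0   # about 59.998998 years
print(eu_treat, q_no_treat)
print("Treat" if eu_treat > q_no_treat else "Don't treat")
```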

25 Influence Diagrams
Decision Node: represents decisions to be made
Chance Node: represents random variables
Utility Node: a random variable whose values are the utilities of the outcomes

26 Edges
Edge into a Decision Node: the value of the parent is known at the time of the decision, so the edge represents a sequence
Edge into a Chance Node: the value of the node is probabilistically dependent upon the parent
Edge into a Utility Node: the value of the node is deterministically dependent upon the parent

27 Influence Diagrams
The chance nodes in an influence diagram satisfy the Markov condition
Because of this, an influence diagram is a Bayesian network augmented with decision nodes and a utility node

28 Example 10/17
You are considering purchasing 100 shares of NASDIP (n), currently valued at $10 per share. Your model for the value of the stock at the end of the month is:
p(n = $5) = .25
p(n = $10) = .25
p(n = $20) = .5

29 Example 1 Solved
The same solved tree as slide 6:
Decision D (value $1375):
  Buy NASDIP → chance node NASDIP, expected value $1375:
    n = $5 (p = .25): $500
    n = $10 (p = .25): $1000
    n = $20 (p = .5): $2000
  Money in the bank (.5% interest): $1005

30 Example 10/17 (influence diagram)
Nodes: chance node NASDIP, decision node D, utility node U
Chance node NASDIP: P(NASDIP = $5) = .25, P(NASDIP = $10) = .25, P(NASDIP = $20) = .5
Decision node D: d1 = Buy NASDIP, d2 = Money in the bank
Utility node U:
  U(d1, $5) = $500
  U(d1, $10) = $1000
  U(d1, $20) = $2000
  U(d2, n) = $1005 for all n
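A minimal sketch of solving this influence diagram by enumerating the decision alternatives; the probabilities and utility table are from the slide, and the dictionary representation is just for illustration:

```python
# Influence diagram for Example 10/17: chance node NASDIP, decision D, utility node U.
p_nasdip = {5: 0.25, 10: 0.25, 20: 0.50}          # P(NASDIP = n)
utility = {                                        # U(d, n)
    ("d1", 5): 500, ("d1", 10): 1000, ("d1", 20): 2000,   # d1 = Buy NASDIP
    ("d2", 5): 1005, ("d2", 10): 1005, ("d2", 20): 1005,  # d2 = Money in the bank
}

def expected_utility(d):
    return sum(p * utility[(d, n)] for n, p in p_nasdip.items())

eus = {d: expected_utility(d) for d in ("d1", "d2")}
best = max(eus, key=eus.get)
print(eus, "->", best)    # {'d1': 1375.0, 'd2': 1005.0} -> d1
```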

31 Example 12
This problem illustrates how chance nodes can depend on decision nodes: in this example, buying a great deal of stock can influence the market price
The influence diagram contains a decision node D, chance nodes Dow and ICK, and a Utility node

32 Example 14: Buying and Selling a Car
Know someone who'll sell for $10000
Know someone who'll buy for $11000
Problem: 20% of these cars are known to have a bad transmission
There exists a test that can identify bad transmissions, with a 10% false negative rate and a 30% false positive rate
The test costs $200

33 Example 6 (decision tree)
The same tree as slide 14:
Decision D3:
  Run Test → chance node Test:
    Positive, i.e. bad indicated (p = .42) → decision D1:
      Buy Car → Tran1: Good (.571): $10800, Bad (.428): $7800
      Keep Money: $9800
    Negative, i.e. good indicated (p = .58) → decision D2:
      Buy Car → Tran2: Good (.965): $10800, Bad (.034): $7800
      Keep Money: $9800
  Buy Car → Tran3: Good (.8): $11000, Bad (.2): $8000
  Keep Money: $10000

34 Example 14/20 (influence diagram)
Nodes: chance nodes Tran and Test, decision nodes Run Test and D, utility node U
Chance nodes:
  Tran: P(tran = good) = .8, P(tran = bad) = .2
  Test: P(Test = positive | tran = good) = .3, P(Test = positive | tran = bad) = .9
Decision nodes:
  Run Test: r1 = run test, r2 = buy car, r3 = don't buy
  D: d1 = Buy, d2 = Don't Buy
Utility node U:
  U(r1, d1, good) = $10800
  U(r1, d1, bad) = $7800
  U(r1, d2, t) = $9800 for any t
  U(r2, d, good) = $11000 for any d
  U(r2, d, bad) = $8000 for any d
  U(r3, d, t) = $10000 for any d, t
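A sketch of evaluating this influence diagram by rolling the Bayes update and the utility table together; it mirrors the decision tree on the earlier slides, and the variable names are just for illustration:

```python
# Example 14/20: decide whether to test, buy outright, or keep the money.
p_good = 0.8
p_pos = {"good": 0.3, "bad": 0.9}                  # P(Test = positive | tran)
prior = {"good": p_good, "bad": 1 - p_good}

# Utilities from the slide: r1 = run test, r2 = buy car, r3 = don't buy; d1 = buy, d2 = don't buy.
u_test_buy = {"good": 10800, "bad": 7800}
u_test_keep = 9800
u_buy = {"good": 11000, "bad": 8000}
u_keep = 10000

def eu_after_test(result):
    """Best expected utility at D once the test result is known."""
    p_result = sum(prior[t] * (p_pos[t] if result == "positive" else 1 - p_pos[t]) for t in prior)
    posterior = {t: prior[t] * (p_pos[t] if result == "positive" else 1 - p_pos[t]) / p_result
                 for t in prior}
    eu_buy_now = sum(posterior[t] * u_test_buy[t] for t in prior)
    return p_result, max(eu_buy_now, u_test_keep)

eu_r1 = sum(p * eu for p, eu in (eu_after_test("positive"), eu_after_test("negative")))
eu_r2 = sum(prior[t] * u_buy[t] for t in prior)
eu_r3 = u_keep
print({"run test": round(eu_r1, 2), "buy car": eu_r2, "keep money": eu_r3})
```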

35 Dynamic Bayesian Networks
A dynamic Bayesian network uses a Bayesian network to model the dynamic aspects of a problem, usually time
To do this, we must define a few terms and make a few assumptions

36 Definitions
Random Vector: a column vector whose components are random variables
Random Matrix: like a random vector, only a matrix
Independence: a random vector is independent if the set of its variables is independent

37 Definitions
X denotes both a random vector and the set of variables that constitute X
x denotes both a value of X and the corresponding set of values of the variables that constitute X
P(x) denotes the joint probability distribution P(x1, …, xn)

38 Dynamic Bayesian Networks
Assume changes occur between discrete time points
Assume there is a finite number T of these time points
A simple dynamic Bayesian network contains the variables that constitute the T random vectors X[t], and has the following specification

39 Initial Bayesian Network
A DAG containing the variables in X[0], together with a probability distribution for these variables
(Diagram: nodes X1[0], X2[0], X3[0])

40 Transition Bayesian Network
Specifies the conditional distribution P(x[t+1] | x[t]), i.e., how the variables at time t+1 depend on those at time t
(Diagram: nodes X1[t], X2[t], X3[t] in one slice and X1[t+1], X2[t+1], X3[t+1] in the next, with edges from the time-t slice into the time-(t+1) slice)

41 Dynamic Bayesian Network
The full network is obtained by unrolling: the initial network at time 0, followed by a copy of the transition network between each pair of consecutive time points
(Diagram: nodes X1, X2, X3 at time slices 0, 1, and 2)
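For the unrolled network, the joint distribution factorizes slice by slice (a standard property of dynamic Bayesian networks, stated here in the notation of the next slide):
P(x[0], x[1], …, x[T]) = P(x[0]) · P(x[1] | x[0]) · P(x[2] | x[1]) ⋯ P(x[T] | x[T-1])
where P(x[0]) comes from the initial Bayesian network and each P(x[t+1] | x[t]) comes from the transition Bayesian network.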

42 Further Assumptions
All you need to predict the world state at time t is the world state at time t-1, so the process satisfies the Markov property
The process is stationary: P(x[t+1] | x[t]) is the same for all t
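A minimal sketch of what stationarity buys us: one transition model reused at every step to predict forward. A single binary state variable and its numbers are assumed here for illustration; the slides' X[t] is in general a vector.

```python
# Forward prediction in a stationary dynamic Bayesian network with one binary variable X[t].
# Because P(x[t+1] | x[t]) is the same for all t, a single transition table suffices.
p_x0 = {"on": 0.6, "off": 0.4}                          # initial network: P(X[0]) (assumed numbers)
transition = {                                          # transition network: P(X[t+1] | X[t])
    "on": {"on": 0.7, "off": 0.3},
    "off": {"on": 0.2, "off": 0.8},
}

def predict(prior, steps):
    """Push P(X[t]) forward 'steps' time points using the shared transition table."""
    belief = dict(prior)
    for _ in range(steps):
        belief = {x1: sum(belief[x0] * transition[x0][x1] for x0 in belief)
                  for x1 in transition["on"]}
    return belief

print(predict(p_x0, 3))   # P(X[3]) given only the model (no evidence)
```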

43 (Diagram: an unrolled dynamic network with decision nodes D[t], D[t+1], D[t+2] and chance nodes LR, ER, EA, and LA at time slices t through t+3)

