Civil Systems Planning: Benefit/Cost Analysis
Scott Matthews
Courses: 12-706 / 19-702
12-706 and 73-359

Announcements
- Recitation Friday
- HW 3 due today (now)
Risk Profiles ("pmf")
- A risk profile shows the distribution of possible payoffs associated with a particular strategy: the chances associated with each possible consequence.
- A strategy is what you plan to do going into the decision. It holds your plans constant while allowing chance events to occur.
- Eliminate only branches YOU wouldn't choose, not branches "they" might not choose (you can't control them).
- Risk profiles are centered on the decision nodes (not the chance nodes) in the tree.
Risk Profiles (cont.)
There are only three "decision strategies" in the base Texaco case:
- Accept the $2 billion offer (topmost branch of the first decision node)
- Counteroffer $5 billion, but plan to refuse Texaco's counteroffer (lower branch of the first node, upper branch of the second)
- Counteroffer $5 billion, but plan to accept Texaco's counteroffer (lower branch of both decision nodes)
Risk Profiles (cont.)
- Key concept: you do not have complete control over the outcome of "the game" or "the lottery" represented by the tree. BUT considering the risk profile for each strategy cuts out part of the original tree.
- You can plan a strategy (i.e., which branches to choose), but the other side may make choices such that you do not end up where in the tree you intended.
- The risk profile for "Accept $2 Billion" is obvious: you get $2B with 100% chance.
Profile for "Counteroffer $5B, Refuse Counteroffer"
Below is just the part of the original tree to consider when calculating this risk profile:
Solving the Risk Profile
- Solve for the discrete probabilities of the outcomes.
- Make the risk profile, e.g., a 25% chance of $0.
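The step above can be sketched in code. This is a minimal sketch only: the branch probabilities below are the commonly cited Texaco-Pennzoil numbers from Clemen's textbook, not values given on this slide, so treat them as assumptions.

```python
# Sketch: risk profile (discrete pmf) for the strategy
# "counteroffer $5B, plan to refuse any Texaco counteroffer".
# ASSUMED probabilities (Clemen's Texaco-Pennzoil numbers, not from the slides):
p_accept_5 = 0.17   # Texaco accepts the $5B counteroffer -> payoff $5B
p_counter_3 = 0.50  # Texaco counters $3B -> we planned to refuse -> court
p_refuse = 0.33     # Texaco refuses outright -> court

# Assumed court-award distribution (payoffs in $ billions)
court = {10.3: 0.2, 5.0: 0.5, 0.0: 0.3}

pmf = {5.0: p_accept_5}
p_court = p_counter_3 + p_refuse  # both remaining paths end up in court
for award, p in court.items():
    pmf[award] = pmf.get(award, 0.0) + p_court * p

for payoff in sorted(pmf):
    print(f"${payoff}B: {pmf[payoff]:.3f}")
```

With these assumed numbers the chance of $0 is 0.83 x 0.3, about 25%, matching the figure quoted on the slide.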
Cumulative Risk Profiles
- A CRP plots the percent chance that "payoff is less than x."
- The RP and CRP for "Accept $2B" are below (easy): the CRP jumps from 0 to 1 at $2B (0% chance the payoff is below $2B, 100% chance it is below anything greater than $2B).
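The CRP is just the cumulative distribution of the risk profile. A minimal sketch (the second pmf below is illustrative, not taken from the slides):

```python
# Sketch: build a cumulative risk profile (a CDF) from a discrete risk profile.

def cumulative_risk_profile(pmf):
    """Return sorted (payoff, P(payoff <= x)) pairs for a discrete pmf."""
    total = 0.0
    crp = []
    for payoff in sorted(pmf):
        total += pmf[payoff]
        crp.append((payoff, total))
    return crp

# "Accept $2B": all probability mass at $2B, so the CRP jumps 0 -> 1 at $2B.
print(cumulative_risk_profile({2.0: 1.0}))   # [(2.0, 1.0)]

# An illustrative three-outcome pmf (hypothetical numbers):
print(cumulative_risk_profile({0.0: 0.25, 5.0: 0.45, 10.3: 0.30}))
```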
CRPs for the Other Two Strategies
Dominance
- To pick between strategies, it is useful to have rules for eliminating options.
- Let's construct an example: assume the minimum expected "court award" is $2.5B (instead of $0). Now there are no "zero endpoints" in the decision tree.
Stochastic Dominance: Example #1
The CRPs below for the two strategies show that "Accept $2 Billion" is dominated by the other strategy.
Stochastic Dominance "Defined"
- A is better than B if: Pr(Profit > $z | A) ≥ Pr(Profit > $z | B) for all possible values of $z.
- Or (by complementarity): Pr(Profit ≤ $z | A) ≤ Pr(Profit ≤ $z | B) for all possible values of $z.
- In CDF terms: A first-order stochastically dominates (FOSD) B iff F_A(z) ≤ F_B(z) for all z.
Example
- L1 = (0, 1/6; 1, 1/3; 2, 1/2)
- L2 = (0, 1/3; 1, 1/3; 2, 1/3)
Given these two lotteries, does one first-order stochastically dominate the other?
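One way to check the definition numerically is to compare the two CDFs at every outcome. A sketch (using exact fractions to avoid floating-point ties; the `fosd` helper is my own, not from the course):

```python
# Sketch: check first-order stochastic dominance by comparing CDFs.
# A FOSD B iff F_A(z) <= F_B(z) for all z (strictly less somewhere).
from fractions import Fraction as F

def cdf(lottery, z):
    """P(payoff <= z) for a lottery given as {outcome: probability}."""
    return sum(p for x, p in lottery.items() if x <= z)

def fosd(a, b):
    """True if lottery a first-order stochastically dominates lottery b."""
    zs = sorted(set(a) | set(b))
    return (all(cdf(a, z) <= cdf(b, z) for z in zs)
            and any(cdf(a, z) < cdf(b, z) for z in zs))

L1 = {0: F(1, 6), 1: F(1, 3), 2: F(1, 2)}
L2 = {0: F(1, 3), 1: F(1, 3), 2: F(1, 3)}
print(fosd(L1, L2), fosd(L2, L1))   # → True False
```

Here F_L1 = (1/6, 1/2, 1) lies at or below F_L2 = (1/3, 2/3, 1) everywhere, so L1 dominates L2.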
Value of Information
- So far we have done decision analysis with best guesses of the probabilities: building trees with chance and decision nodes, and finding expected values.
- It is relevant and interesting to determine how important information might be in our decision problems. It could come in the form of paying an expert, a fortune teller, etc.
- The goal is to reduce or eliminate uncertainty in the decision problem.
Willingness to Pay = EVPI
- We are interested in our willingness to pay (WTP) for (perfect) information about our decision.
- The book derives this with Bayesian probabilities, but think of it this way: we consider the advice of "an expert who is always right." If she says it will happen, it will; if she says it will not happen, it will not. She is never wrong.
- Bottom line: receiving her advice means we have eliminated the uncertainty about the event.
Notes on EVPI
- The key is understanding what the relevant information is and how it affects the tree. Quotes from pp. 501 and 509 of Clemen:
- "Redraw the tree so that the uncertainty nodes for which perfect information is (now) available come before the decision node(s)."
- (When multiple uncertain nodes exist:) "Move those chance nodes for which information is to be obtained so that they (all) precede the decision node."
- Note: by "before" or "precede" we mean in the tree, from left to right (as opposed to in the tree-solving process).
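Redrawing the tree this way amounts to swapping a max and an expectation: without information you take the best expected value over actions; with perfect information you learn the state first and take the expected value of the best action per state. A sketch on a hypothetical two-action investment example (all payoffs and probabilities below are made up for illustration):

```python
# Sketch: EVPI = (EV with perfect information) - (EV without information).
# Hypothetical states, actions, and payoffs -- not from the lecture's trees.
states = {"good": 0.4, "bad": 0.6}          # P(state)
payoff = {
    ("invest", "good"): 1500, ("invest", "bad"): -100,
    ("savings", "good"): 500, ("savings", "bad"): 500,
}
actions = ["invest", "savings"]

# Decision node first: pick the action with the best expected payoff.
ev_no_info = max(
    sum(p * payoff[(a, s)] for s, p in states.items()) for a in actions
)

# Chance node first (perfect information): best action for each known state.
ev_perfect = sum(
    p * max(payoff[(a, s)] for a in actions) for s, p in states.items()
)

evpi = ev_perfect - ev_no_info
print(ev_no_info, ev_perfect, evpi)
```

With these made-up numbers, EV without information is 540, EV with perfect information is 900, so EVPI is 360: the most you should pay the always-right expert.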
Discussion
- The difference between the two trees (decision scenarios) is the EVPI: $1000 - $580 = $420. That is the amount up to which you would be willing to pay for advice on how to invest.
- If you pay less than $420, you expect to come out ahead net of the cost of the information. If you pay $425 for the information, you expect to lose $5 overall!
- Finding EVPI is simple to do with the @RISK / PrecisionTree plug-in.
Is EVPI Additive? (Pair Group Exercise)
Look at the handout for a simple "two-part uncertainty" problem: choosing where to go on a date, where utility depends on whether the date is fun or not and whether the weather is good or not.
- What is the expected value in this case?
- What is the EVPI for "fun"? The EVPI for "weather"?
- What do the revised decision trees look like?
- What is the EVPI for "fun and weather"?
- Is EVPI(fun) + EVPI(weather) = EVPI(fun and weather)?
Additivity (cont.)
- Now look at the p, q labels on the handout for the decision problem (top values in the tree).
- Is EVPI additive if instead p = 0.3 and q = 0.8? What if p = 0.2 and q = 0.2?
- This should make us think about sensitivity analysis, i.e., how much answers/outcomes change when we change the inputs.
EVPI: Why Care?
- For information to "have value," it has to affect our decision.
- Just as tornado diagrams showed us the most sensitive variables, EVPI analysis shows us which of our uncertainties is most important, and thus which to focus further effort on.
- If we can spend time or money to further understand or reduce an uncertainty, doing so is worthwhile when the EVPI is relatively high.
Final Thoughts on Plug-ins
- You can combine (in @RISK) the decision trees and the sensitivity plug-ins: do "sensitivity of expected values" by varying the probabilities (see the end of Chapter 5).
- You can also compute EVPI with @RISK.
- You don't need to do everything by hand! But it helps to be able to.
Visualizing Decision Tree Results
- EMV(Outdoors) = 100p + 70q
- EMV(Indoors) = 40p + 50q + 60(1 - p - q)
- EMV(Outdoors) > EMV(Indoors) when p > -(2/3)q + 1/2
(Figure: the "Outdoors" and "Indoors" regions plotted over the unit square in p and q.)
Similar: EVII
- Imperfect, rather than perfect, information (because information is rarely perfect).
- Example: our expert acknowledges she is not always right, so we use conditional probabilities (rather than assuming she is correct 100% of the time) to solve the trees.
- Ideally she is "almost always right" and "almost never wrong," e.g., P(Up Predicted | Up) is less than but close to 1, and P(Up Predicted | Down) is greater than but close to 0.
Assessing the Expert
Expert Side of the EVII Tree
This is more complicated than EVPI because we do not know whether the expert is right or not; we have to decide whether to believe her.
Use Bayes' Theorem
- "Flip" the probabilities: we know P("Up" | Up) but instead need P(Up | "Up").
- P(Up | "Up") = P("Up" | Up) P(Up) / [sum over all market states s of P("Up" | s) P(s)]
- = (0.8)(0.5) / [(0.8)(0.5) + (0.15)(0.3) + (0.2)(0.2)] = 0.4 / 0.485 ≈ 0.8247
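The same flip, computed directly. Note an assumption: the slide's denominator has three terms, so I treat the 0.15 and 0.3 as the prediction likelihood and prior for the unnamed middle market state.

```python
# Sketch: Bayes' theorem flip from the slide.
# States ordered: Up, middle state (assumed), Down.
prior = [0.5, 0.3, 0.2]        # P(state)
like_up = [0.8, 0.15, 0.2]     # P("Up" predicted | state)

numerator = like_up[0] * prior[0]
denominator = sum(l * p for l, p in zip(like_up, prior))
posterior_up = numerator / denominator
print(round(posterior_up, 4))   # → 0.8247
```

So hearing the expert predict "Up" raises P(Up) from the prior 0.5 to about 0.82.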
EVII Tree Excerpt
Rolling Back to the Top
Transition
Speaking of information: facility case study for Monday.