1 Slides by John Loucks, St. Edward’s University

2 Chapter 21 Decision Analysis
Problem Formulation
Decision Making with Probabilities
Decision Analysis with Sample Information
Computing Branch Probabilities Using Bayes’ Theorem

3 Problem Formulation The first step in the decision analysis process is problem formulation. We begin with a verbal statement of the problem. Then we identify: the decision alternatives the states of nature (uncertain future events) the payoffs (consequences) associated with each specific combination of: decision alternative state of nature

4 Problem Formulation A decision problem is characterized by decision alternatives, states of nature, and resulting payoffs. The decision alternatives are the different possible strategies the decision maker can employ. The states of nature refer to future events, not under the control of the decision maker, which may occur. States of nature should be defined so that they are mutually exclusive and collectively exhaustive.

5 Payoff Tables The consequence resulting from a specific combination of a decision alternative and a state of nature is a payoff. A table showing payoffs for all combinations of decision alternatives and states of nature is a payoff table. Payoffs can be expressed in terms of profit, cost, time, distance or any other appropriate measure.
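As a concrete illustration (not part of the original slides), a payoff table can be represented in code as a nested dictionary keyed by decision alternative and state of nature; the model names and dollar values below are the Burger Prince figures used later in this chapter.

```python
# Payoff table: payoff[decision alternative][state of nature] -> payoff (profit in $).
payoff = {
    "Model A": {80: 10_000, 100: 15_000, 120: 14_000},
    "Model B": {80:  8_000, 100: 18_000, 120: 12_000},
    "Model C": {80:  6_000, 100: 16_000, 120: 21_000},
}

decision_alternatives = list(payoff)          # d1, d2, d3
states_of_nature = list(payoff["Model A"])    # s1 = 80, s2 = 100, s3 = 120 customers per hour
print(payoff["Model B"][100])                 # 18000
```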

6 Decision Trees A decision tree provides a graphical representation showing the sequential nature of the decision-making process. Each decision tree has two types of nodes: round nodes correspond to the states of nature square nodes correspond to the decision alternatives

7 Decision Trees The branches leaving each round node represent the different states of nature while the branches leaving each square node represent the different decision alternatives. At the end of each limb of a tree are the payoffs attained from the series of branches making up that limb.

8 Decision Making with Probabilities
Once we have defined the decision alternatives and states of nature for the chance events, we focus on determining probabilities for the states of nature. The classical method, relative frequency method, or subjective method of assigning probabilities may be used. Because one and only one of the N states of nature can occur, the probabilities must satisfy two conditions:
P(sj) ≥ 0 for all states of nature
P(s1) + P(s2) + . . . + P(sN) = 1

9 Decision Making with Probabilities
Then we use the expected value approach to identify the best or recommended decision alternative. The expected value of each decision alternative is calculated (explained on the next slide). The decision alternative yielding the best expected value is chosen.

10 Expected Value Approach
The expected value of a decision alternative is the sum of weighted payoffs for the decision alternative. The expected value (EV) of decision alternative di is defined as
EV(di) = P(s1)Vi1 + P(s2)Vi2 + . . . + P(sN)ViN
where:
N = the number of states of nature
P(sj) = the probability of state of nature sj
Vij = the payoff corresponding to decision alternative di and state of nature sj
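A minimal sketch of this formula in Python (the function and argument names are illustrative, not from the slides): it weights each payoff Vij by its state probability P(sj) and sums the results.

```python
def expected_value(probs, payoffs):
    """EV(di): sum over j of P(sj) * Vij."""
    assert abs(sum(probs) - 1.0) < 1e-9, "state probabilities must sum to 1"
    return sum(p * v for p, v in zip(probs, payoffs))

# Example with three states of nature (probabilities .4, .2, .4):
print(expected_value([0.4, 0.2, 0.4], [10_000, 15_000, 14_000]))   # 12600.0
```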

11 Expected Value Approach
Example: Burger Prince
Burger Prince Restaurant is considering opening a new restaurant on Main Street. It has three different restaurant layout models (A, B, and C), each with a different seating capacity. Burger Prince estimates that the average number of customers served per hour will be 80, 100, or 120. The payoff table for the three models is on the next slide.

12 Expected Value Approach
Payoff Table (payoffs in dollars; states of nature are the average number of customers per hour)

            s1 = 80    s2 = 100    s3 = 120
Model A     $10,000    $15,000     $14,000
Model B     $ 8,000    $18,000     $12,000
Model C     $ 6,000    $16,000     $21,000

13 Expected Value Approach
Calculate the expected value for each decision. The decision tree on the next slide can assist in this calculation. Here d1, d2, d3 represent the decision alternatives of models A, B, and C. And s1, s2, s3 represent the states of nature of 80, 100, and 120 customers per hour.

14 Expected Value Approach
Decision Tree (payoffs at the end of each limb)
Decision node 1 branches to d1, d2, and d3, leading to chance nodes 2, 3, and 4.
Each chance node branches on s1 (.4), s2 (.2), and s3 (.4):
Node 2 (d1, Model A): 10,000; 15,000; 14,000
Node 3 (d2, Model B): 8,000; 18,000; 12,000
Node 4 (d3, Model C): 6,000; 16,000; 21,000

15 Expected Value Approach
Model A (d1), node 2: EMV = .4(10,000) + .2(15,000) + .4(14,000) = $12,600
Model B (d2), node 3: EMV = .4(8,000) + .2(18,000) + .4(12,000) = $11,600
Model C (d3), node 4: EMV = .4(6,000) + .2(16,000) + .4(21,000) = $14,000
Choose the model with the largest EV: Model C.
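The three calculations above can be reproduced with a short loop; this is a sketch that restates the payoff table and the prior probabilities .4, .2, .4 rather than anything new.

```python
priors = [0.4, 0.2, 0.4]                       # P(s1), P(s2), P(s3)
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [ 8_000, 18_000, 12_000],
    "Model C": [ 6_000, 16_000, 21_000],
}

# EMV of each decision alternative: sum of probability-weighted payoffs.
emv = {d: sum(p * v for p, v in zip(priors, vals)) for d, vals in payoffs.items()}
print({d: round(v) for d, v in emv.items()})   # {'Model A': 12600, 'Model B': 11600, 'Model C': 14000}
print(max(emv, key=emv.get))                   # Model C
```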

16 Expected Value of Perfect Information
Frequently, information is available that can improve the probability estimates for the states of nature. The expected value of perfect information (EVPI) is the increase in the expected profit that would result if one knew with certainty which state of nature would occur. The EVPI provides an upper bound on the expected value of any sample or survey information.

17 Expected Value of Perfect Information
The expected value of perfect information is defined as
EVPI = |EVwPI – EVwoPI|
where:
EVPI = expected value of perfect information
EVwPI = expected value with perfect information about the states of nature
EVwoPI = expected value without perfect information

18 Expected Value of Perfect Information
EVPI Calculation
Step 1: Determine the optimal return corresponding to each state of nature.
Step 2: Compute the expected value of these optimal returns.
Step 3: Subtract the EV of the optimal decision from the amount determined in Step 2.

19 Expected Value of Perfect Information
Calculate the expected value for the optimum payoff for each state of nature and subtract the EV of the optimal decision. EVPI = .4(10,000) + .2(18,000) + .4(21,000) - 14,000 = $2,000
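The EVPI figure can be checked in a few lines: take the best payoff under each state of nature, weight those by the prior probabilities, and subtract the expected value of the best decision made without the extra information (a sketch; the variable names are illustrative).

```python
priors = [0.4, 0.2, 0.4]
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [ 8_000, 18_000, 12_000],
    "Model C": [ 6_000, 16_000, 21_000],
}

# Step 1: optimal return for each state of nature (best entry in each column).
best_per_state = [max(vals[j] for vals in payoffs.values()) for j in range(3)]   # [10000, 18000, 21000]

# Step 2: expected value with perfect information.
ev_wpi = sum(p * v for p, v in zip(priors, best_per_state))                      # $16,000

# Step 3: subtract the EV of the optimal decision without perfect information (Model C).
ev_wopi = 14_000
print(round(ev_wpi - ev_wopi))   # 2000
```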

20 Decision Analysis With Sample Information
Knowledge of sample (survey) information can be used to revise the probability estimates for the states of nature. Prior to obtaining this information, the probability estimates for the states of nature are called prior probabilities. With knowledge of conditional probabilities for the outcomes or indicators of the sample or survey information, these prior probabilities can be revised by employing Bayes' Theorem. The outcomes of this analysis are called posterior probabilities or branch probabilities for decision trees.

21 Decision Analysis With Sample Information
Decision Strategy
A decision strategy is a sequence of decisions and chance outcomes. The decisions chosen depend on the yet-to-be-determined outcomes of chance events. The approach used to determine the optimal decision strategy is based on a backward pass through the decision tree.

22 Decision Analysis With Sample Information
Backward Pass Through the Decision Tree
At chance nodes: Compute the expected value by multiplying the payoff at the end of each branch by the corresponding branch probability.
At decision nodes: Select the decision branch that leads to the best expected value. This expected value becomes the expected value at the decision node.
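A small recursive evaluator makes the backward pass concrete. The nested-tuple node format below is an assumption made for this sketch, not notation from the slides; terminal payoffs are plain numbers, chance nodes carry (probability, child) branches, and decision nodes carry labeled children.

```python
def rollback(node):
    """Backward pass: return the expected value at a node."""
    if isinstance(node, (int, float)):       # terminal payoff at the end of a limb
        return node
    kind, branches = node
    if kind == "chance":                     # weight each branch by its probability
        return sum(p * rollback(child) for p, child in branches)
    if kind == "decision":                   # choose the branch with the best expected value
        return max(rollback(child) for child in branches.values())
    raise ValueError(f"unknown node type: {kind}")

# Burger Prince tree with the prior probabilities (decision node 1 from the earlier slides):
tree = ("decision", {
    "Model A": ("chance", [(0.4, 10_000), (0.2, 15_000), (0.4, 14_000)]),
    "Model B": ("chance", [(0.4,  8_000), (0.2, 18_000), (0.4, 12_000)]),
    "Model C": ("chance", [(0.4,  6_000), (0.2, 16_000), (0.4, 21_000)]),
})
print(round(rollback(tree)))   # 14000
```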

23 Decision Analysis With Sample Information
Example: Burger Prince
Burger Prince must decide whether to purchase a marketing survey from Stanton Marketing for $1,000. The results of the survey are "favorable" or "unfavorable". The conditional probabilities are:
P(favorable | 80 customers per hour) = .2
P(favorable | 100 customers per hour) = .5
P(favorable | 120 customers per hour) = .9

24 Decision Analysis With Sample Information
Decision Tree (top half: favorable survey result, I1, probability .54)
Decision node 2 branches to d1, d2, and d3, leading to chance nodes 4, 5, and 6.
Each chance node branches on s1 (.148), s2 (.185), and s3 (.667):
Node 4 (d1): $10,000; $15,000; $14,000
Node 5 (d2): $8,000; $18,000; $12,000
Node 6 (d3): $6,000; $16,000; $21,000

25 Decision Analysis With Sample Information
Decision Tree (bottom half: unfavorable survey result, I2, probability .46)
Decision node 3 branches to d1, d2, and d3, leading to chance nodes 7, 8, and 9.
Each chance node branches on s1 (.696), s2 (.217), and s3 (.087):
Node 7 (d1): $10,000; $15,000; $14,000
Node 8 (d2): $8,000; $18,000; $12,000
Node 9 (d3): $6,000; $16,000; $21,000

26 Decision Analysis With Sample Information
If the survey result is favorable (I1, .54):
Node 4 (d1): EMV = .148(10,000) + .185(15,000) + .667(14,000) = $13,593
Node 5 (d2): EMV = .148(8,000) + .185(18,000) + .667(12,000) = $12,518
Node 6 (d3): EMV = .148(6,000) + .185(16,000) + .667(21,000) = $17,855
Best decision d3, so the value at decision node 2 is $17,855.
If the survey result is unfavorable (I2, .46):
Node 7 (d1): EMV = .696(10,000) + .217(15,000) + .087(14,000) = $11,433
Node 8 (d2): EMV = .696(8,000) + .217(18,000) + .087(12,000) = $10,554
Node 9 (d3): EMV = .696(6,000) + .217(16,000) + .087(21,000) = $9,475
Best decision d1, so the value at decision node 3 is $11,433.

27 Expected Value of Sample Information
The expected value of sample information (EVSI) is the additional expected profit possible through knowledge of the sample or survey information.
EVSI = |EVwSI – EVwoSI|
where:
EVSI = expected value of sample information
EVwSI = expected value with sample information about the states of nature
EVwoSI = expected value without sample information

28 Expected Value of Sample Information
EVwSI Calculation
Step 1: Determine the optimal decision and its expected return for the possible outcomes of the sample, using the posterior probabilities for the states of nature.
Step 2: Compute the expected value of these optimal returns.

29 Decision Analysis With Sample Information
Decision Tree (summary of expected values)
Favorable result (I1, .54): node 4 = $13,593, node 5 = $12,518, node 6 = $17,855; best decision d3, so node 2 = $17,855.
Unfavorable result (I2, .46): node 7 = $11,433, node 8 = $10,554, node 9 = $9,475; best decision d1, so node 3 = $11,433.
EVwSI = .54(17,855) + .46(11,433) = $14,900.88

30 Expected Value of Sample Information
If the outcome of the survey is "favorable", choose Model C. If the outcome of the survey is "unfavorable", choose Model A.
EVwSI = .54($17,855) + .46($11,433) = $14,900.88

31 Expected Value of Sample Information
EVSI Calculation
Subtract the EVwoSI (the value of the optimal decision obtained without using the sample information) from the EVwSI.
EVSI = .54($17,855) + .46($11,433) - $14,000 = $900.88
Conclusion
Because the EVSI of $900.88 is less than the $1,000 cost of the survey, the survey should not be purchased.
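The EVSI arithmetic can be verified directly from the numbers on the tree: weight the best expected value under each survey outcome by the outcome probability, then subtract the no-survey optimum (a sketch restating the slide's figures).

```python
p_favorable, p_unfavorable = 0.54, 0.46
best_if_favorable = 17_855      # Model C (d3), decision node 2
best_if_unfavorable = 11_433    # Model A (d1), decision node 3

ev_wsi = p_favorable * best_if_favorable + p_unfavorable * best_if_unfavorable
ev_wosi = 14_000                # optimal EV without the survey (Model C)

print(round(ev_wsi, 2))             # 14900.88
print(round(ev_wsi - ev_wosi, 2))   # 900.88 -> less than the $1,000 survey cost
```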

32 Computing Branch Probabilities Using Bayes’ Theorem
Bayes’ Theorem can be used to compute branch probabilities for decision trees. For the computations we need to know: the initial (prior) probabilities for the states of nature, the conditional probabilities for the outcomes or indicators of the sample information given each state of nature. A tabular approach is a convenient method for carrying out the computations.

33 Computing Branch Probabilities Using Bayes’ Theorem
Step 1: For each state of nature, multiply the prior probability by its conditional probability for the indicator. This gives the joint probabilities for the states and indicator.
Step 2: Sum these joint probabilities over all states. This gives the marginal probability for the indicator.
Step 3: For each state, divide its joint probability by the marginal probability for the indicator. This gives the posterior probability distribution.
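A minimal sketch of this three-step tabular procedure (the function and variable names are illustrative); it is applied here to the Burger Prince priors and the conditional probabilities of a favorable survey result.

```python
def posterior_probabilities(priors, conditionals):
    """Tabular Bayes' theorem: priors and P(indicator | state) -> P(state | indicator)."""
    joints = [p * c for p, c in zip(priors, conditionals)]   # Step 1: joint probabilities
    p_indicator = sum(joints)                                # Step 2: marginal probability of the indicator
    posteriors = [j / p_indicator for j in joints]           # Step 3: posterior distribution
    return posteriors, p_indicator

priors = [0.4, 0.2, 0.4]             # P(80), P(100), P(120) customers per hour
cond_favorable = [0.2, 0.5, 0.9]     # P(favorable | state of nature)
post, p_fav = posterior_probabilities(priors, cond_favorable)
print(round(p_fav, 2))               # 0.54
print([round(p, 3) for p in post])   # [0.148, 0.185, 0.667]
```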

34 Decision Analysis With Sample Information
Example: Burger Prince
Recall that Burger Prince is considering purchasing a marketing survey from Stanton Marketing. The results of the survey are "favorable" or "unfavorable". The conditional probabilities are:
P(favorable | 80 customers per hour) = .2
P(favorable | 100 customers per hour) = .5
P(favorable | 120 customers per hour) = .9
Compute the branch (posterior) probability distribution.

35 Posterior Probabilities
Favorable Survey Result

State    Prior    Conditional    Joint    Posterior
 80       .4          .2          .08      .148  (= .08/.54)
100       .2          .5          .10      .185
120       .4          .9          .36      .667
Total                             .54     1.000
P(favorable) = .54

36 Posterior Probabilities
Unfavorable Survey Result

State    Prior    Conditional    Joint    Posterior
 80       .4          .8          .32      .696  (= .32/.46)
100       .2          .5          .10      .217
120       .4          .1          .04      .087
Total                             .46     1.000
P(unfavorable) = .46

37 End of Chapter 21

