MGS3100_06.ppt / Nov 3, 2014 / Georgia State University - Confidential

Slide 1: MGS 3100 Business Analysis - Decision Analysis

Slide 2: Agenda
– Problems
– Decision Analysis

Slide 3: Decision Analysis
Open MGS3100_06Decision_Making.xls
Decision Alternatives
– Your options: factors that you have control over
– A set of alternative actions; we may choose whichever we please
States of Nature
– Possible outcomes, not affected by the decision
– Probabilities are assigned to each state of nature
Certainty
– Only one possible state of nature; the Decision Maker (DM) knows with certainty what the state of nature will be
Ignorance
– Several possible states of nature; the DM knows all possible states of nature but does not know their probabilities of occurrence
Risk
– Several possible states of nature with an estimate of the probability of each; the DM knows all possible states of nature and can assign a probability of occurrence to each

Slide 4: Decision Making Under Ignorance
LaPlace-Bayes
– All states of nature are equally likely to occur
– Select the alternative with the best average payoff
Maximax
– Evaluates each decision by the maximum possible return associated with that decision
– The decision that yields the maximum of these maximum returns (maximax) is selected
Maximin
– Evaluates each decision by the minimum possible return associated with that decision
– The decision that yields the maximum of these minimum returns (maximin) is selected
Minimax Regret
– The decision is made on the least regret for making that choice
– Select the alternative that minimizes the maximum regret
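The four criteria above can be sketched in a few lines of Python on a small payoff table. The alternatives, states, and numbers below are invented for illustration; they are not taken from the course workbook.

```python
# Hypothetical payoff table (invented for illustration): rows are decision
# alternatives, columns are three states of nature.
payoffs = {
    "Expand":     [300, 100, -50],
    "Status quo": [200, 150,  20],
    "Sell":       [100, 100, 100],
}
n_states = 3

# LaPlace-Bayes: treat all states as equally likely; best average payoff wins.
laplace = max(payoffs, key=lambda d: sum(payoffs[d]) / n_states)

# Maximax: optimistic; best of the row maxima.
maximax = max(payoffs, key=lambda d: max(payoffs[d]))

# Maximin: pessimistic; best of the row minima.
maximin = max(payoffs, key=lambda d: min(payoffs[d]))

# Minimax regret: regret = best payoff in the column minus this payoff;
# pick the alternative whose worst regret is smallest.
col_best = [max(payoffs[d][j] for d in payoffs) for j in range(n_states)]
regret = {d: [col_best[j] - payoffs[d][j] for j in range(n_states)] for d in payoffs}
minimax_regret = min(regret, key=lambda d: max(regret[d]))

print(laplace, maximax, maximin, minimax_regret)
```

Note how the four criteria can disagree on the same table: the optimistic maximax picks the alternative with the single largest payoff, while maximin picks the safest row.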

Slide 5: LaPlace-Bayes

Slide 6: Maximax

Slide 7: Maximin

Slide 8: Minimax Regret Table
Regret table: regret = highest payoff for the state of nature − payoff for this decision

Slide 9: Decision Making Under Risk
Expected Return (ER), also called Expected Value (EV) or Expected Monetary Value (EMV):
– S_j: the j-th state of nature
– D_i: the i-th decision
– P(S_j): the probability that S_j will occur
– R_ij: the return if D_i is chosen and S_j occurs
– ER_i: the long-term average return of decision D_i
ER_i = Σ_j R_ij P(S_j)
Variance_i = Σ_j (ER_i − R_ij)² P(S_j)
The EMV criterion chooses the decision alternative with the highest EMV. We'll call this EMV the Expected Value Under Initial Information (EVUII), to distinguish it from what the EMV might become if we later get more information. Do not make the common student error of believing that the EMV is the payoff the decision maker will get: the actual payoff will be R_ij for the chosen alternative (i) and for the state of nature (j) that actually occurs.
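The ER_i formula can be sketched as follows; the probabilities and payoff table below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Hypothetical inputs: p[j] = P(S_j); R[i][j] = return if decision i is taken
# and state j occurs.
p = [0.3, 0.5, 0.2]
R = [
    [300, 100, -50],   # D1
    [200, 150,  20],   # D2
]

# ER_i = sum over j of R_ij * P(S_j)
ER = [sum(r * pj for r, pj in zip(row, p)) for row in R]

# The EMV criterion (EVUII) picks the alternative with the highest ER.
best = max(range(len(R)), key=lambda i: ER[i])
print(ER, "best decision index:", best)
```

Here D2 wins on EMV even though D1 holds the single largest payoff, which is exactly the distinction the slide warns about: the EMV is a long-run average, not the payoff you will actually receive.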

Slide 10: Decision Making Under Risk
One way to evaluate the risk associated with an alternative action is to calculate the variance of its payoffs:
Variance_i = Σ_j (ER_i − R_ij)² P(S_j)
Depending on your willingness to accept risk, an alternative with only a moderate EMV and a small variance may be superior to one with a large EMV and also a large variance. Most of the time we want the EMV to be as large as possible and the variance as small as possible. Unfortunately, the maximum-EMV alternative and the minimum-variance alternative are usually not the same, so in the end it comes down to an educated judgment call.
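A minimal sketch of this EMV-versus-variance trade-off, with invented numbers deliberately chosen so that the higher-EMV alternative also carries the far larger variance:

```python
# Hypothetical probabilities and payoffs (not from the course workbook).
p = [0.3, 0.5, 0.2]
R = {"risky": [300, 100, -50], "safe": [140, 130, 100]}

def emv(row):
    # ER_i = sum_j R_ij * P(S_j)
    return sum(r * pj for r, pj in zip(row, p))

def variance(row):
    # Variance_i = sum_j (ER_i - R_ij)^2 * P(S_j)
    m = emv(row)
    return sum((m - r) ** 2 * pj for r, pj in zip(row, p))

e_risky, v_risky = emv(R["risky"]), variance(R["risky"])
e_safe, v_safe = emv(R["safe"]), variance(R["safe"])
print(e_risky, v_risky)   # slightly higher EMV, much larger variance
print(e_safe, v_safe)
```

With these numbers the "risky" row edges out the "safe" row on EMV by only a few units while its variance is two orders of magnitude larger, so a risk-averse decision maker could reasonably prefer "safe": the judgment call the slide describes.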

Slide 11: Expected Return

Slide 12: Expected Value of Perfect Information
The Expected Value of Perfect Information (EVPI) measures how much better you could do on this decision, averaged over many repetitions of the decision situation, if you could always know which state of nature would occur just in time to make the best decision for it. It is the difference between the Expected Value Under Perfect Information (EVUPI) and the EMV of the best action (EVUII), and it provides an absolute upper limit on the value of additional information (ignoring the value of reduced risk). Remember that perfect information implies perfect prediction of the states of nature, not control over them, and that EVPI is a long-run average.

Slide 13: Expected Value of Perfect Information
– EVUPI (Expected Value Under Perfect Information) = Σ_j P(S_j) max_i R_ij
– EVUII (EMV of the best action) = max_i ER_i
– EVPI = EVUPI − EVUII
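These three quantities can be computed in a few lines; the payoffs and probabilities below are hypothetical.

```python
# Hypothetical inputs: p[j] = P(S_j); R[i][j] = payoff for decision i, state j.
p = [0.3, 0.5, 0.2]
R = [[300, 100, -50],
     [200, 150,  20]]

# EVUII: the EMV of the best action under initial information.
EVUII = max(sum(r * pj for r, pj in zip(row, p)) for row in R)

# EVUPI: with a perfect predictor we take, for each state, the best payoff
# available in that state, weighted by the state's probability.
EVUPI = sum(pj * max(R[i][j] for i in range(len(R))) for j, pj in enumerate(p))

EVPI = EVUPI - EVUII
print(EVUII, EVUPI, EVPI)
```

EVPI here is the most you should ever pay for any forecast, however good, in this decision situation.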

Slide 14: Expected Value of Perfect Information

Slide 15: Expected Value of Sample Information
– EVSI = expected value of sample information
– EVwSI = expected value with sample information
– EVwoSI = expected value without sample information
EVSI = EVwSI − EVwoSI
Efficiency Index = (EVSI / EVPI) × 100
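A minimal sketch of these two formulas; EVwSI, EVwoSI, and EVPI are all assumed inputs here, not values from the course example.

```python
# Assumed values for illustration only.
EVwSI = 152.0   # expected value with sample information
EVwoSI = 139.0  # expected value without sample information (best EMV)
EVPI = 30.0     # expected value of perfect information

EVSI = EVwSI - EVwoSI                 # value added by the sample information
efficiency = EVSI / EVPI * 100        # % of the perfect-information ceiling
print(EVSI, efficiency)
```

The efficiency index can never exceed 100, since sample information cannot be worth more than perfect information.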

Slide 16: Agenda
– Decision Analysis
– Problems

Slide 17: What kinds of problems?
– Alternatives are known
– States of Nature and their probabilities are known
– Payoffs are computable under different possible scenarios

Slide 18: Basic Terms
– Decision Alternatives
– States of Nature (e.g., condition of the economy)
– Payoffs ($ outcome of a choice, assuming a state of nature)
– Criteria (e.g., Expected Value)

Slide 19: Example Problem 1 - Expected Value & Decision Tree

Slide 20: Expected Value

Slide 21: Decision Tree

Slide 22: Example Problem 2 - Sequential Decisions
– Would you hire a consultant (or a psychic) to get more information about the states of nature?
– How would additional information cause you to revise your probabilities of the states of nature occurring?
– Draw a new tree depicting the complete problem.
Consultant's Track Record

Slide 23: Example Problem 2 - Sequential Decisions (Ans)
Open MGS3100_06Joint_Probabilities_Table.xls
1. First, get the information (the track record) from the consultant in order to make a decision.
2. The track record can be converted into conditional probabilities:
P(F/S1) = 0.2    P(U/S1) = 0.8
P(F/S2) = 0.6    P(U/S2) = 0.4
P(F/S3) = 0.7    P(U/S3) = 0.3
F = Favorable, U = Unfavorable
3. Next, apply the prior probabilities to this information to build the Joint Probability Table (Bayes' Theorem).

Slide 24: Example Problem 2 - Sequential Decisions (Ans)
Open MGS3100_06Joint_Probabilities_Table.xls
4. Next, compute the posterior probabilities (you will need them to compute the expected values):
P(S1/F) = 0.06/0.49 = 0.12
P(S2/F) = 0.36/0.49 = 0.73
P(S3/F) = 0.07/0.49 = 0.14
P(S1/U) = 0.24/0.51 = 0.47
P(S2/U) = 0.24/0.51 = 0.47
P(S3/U) = 0.03/0.51 = 0.06
5. Solve the decision tree using the posterior probabilities just computed.
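The joint and posterior tables on these two slides can be reproduced as follows. The priors P(S1), P(S2), P(S3) = 0.3, 0.6, 0.1 are not stated on the slide but are implied by the joint probabilities shown (e.g. 0.06 = 0.2 × 0.3):

```python
# Priors implied by the slide's joint probabilities (assumption, see lead-in).
prior = [0.3, 0.6, 0.1]
likelihood_F = [0.2, 0.6, 0.7]   # P(F/S_i): consultant says Favorable
likelihood_U = [0.8, 0.4, 0.3]   # P(U/S_i): consultant says Unfavorable

# Joint probabilities: P(F and S_i) = P(F/S_i) * P(S_i), and likewise for U.
joint_F = [l * p for l, p in zip(likelihood_F, prior)]   # ≈ 0.06, 0.36, 0.07
joint_U = [l * p for l, p in zip(likelihood_U, prior)]   # ≈ 0.24, 0.24, 0.03

# Marginals: P(F) and P(U) are the column sums of the joint table.
P_F, P_U = sum(joint_F), sum(joint_U)                    # ≈ 0.49, 0.51

# Posteriors by Bayes' Theorem: P(S_i/F) = P(F and S_i) / P(F).
posterior_F = [j / P_F for j in joint_F]
posterior_U = [j / P_U for j in joint_U]
print([round(x, 2) for x in posterior_F])
print([round(x, 2) for x in posterior_U])
```

These posteriors replace the priors on the branches of the new decision tree that follow the consultant's report.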
