1
**Decision Analysis (Decision Tables, Utility)**

Y. İlker TOPCU, Ph.D. twitter.com/yitopcu

2
**Introduction**

- One-dimensional (single-criterion) decision making
- Single-stage vs. multi-stage decision making
- Decision analysis is an analytical and systematic way to tackle problems
- A good decision is based on logic: the rational decision maker (DM)

3
**Components of Decision Analysis**

- A state of nature is an event that may occur in the future.
- A payoff matrix (decision table) is a means of organizing a decision situation, presenting the payoffs of the different decisions/alternatives under the various states of nature.

4
**Basic Steps in Decision Analysis**

1. Clearly define the problem at hand
2. List the possible alternatives
3. Identify the possible states of nature
4. List the payoffs (profit/cost) of the alternatives with respect to the states of nature
5. Select one of the mathematical decision analysis methods (models)
6. Apply the method and make your decision

5
**Types of Decision-Making Environments**

- Type 1: Decision making under certainty. The DM knows with certainty the payoff of every alternative.
- Type 2: Decision making under uncertainty. The DM does not know the probabilities of the various states of nature; in fact, s/he knows nothing about them.
- Type 3: Decision making under risk. The DM does know the probabilities of the various states of nature.

6
**Decision Making Under Certainty**

Instead of several states of nature, a single true state is known to the decision maker before s/he has to make the decision. The optimal choice is the alternative with the highest payoff.

7
**Decision Making Under Uncertainty**

- Maximax
- Maximin
- Criterion of Realism (Hurwicz)
- Equal likelihood (Laplace)
- Minimax regret (Savage)

8
**Decision Table / Payoff Matrix**

9
**Maximax**

Choose the alternative with the maximum optimistic level:

o_k = max_i {o_i} = max_i { max_j {v_ij} }
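Using the example payoffs (in $1000s), maximax is a row-wise max followed by an argmax, a quick sketch assuming NumPy:

```python
import numpy as np

payoffs = np.array([[200, -180], [100, -20], [0, 0]])  # $1000s
alternatives = ["Construct large plant", "Construct small plant", "Do nothing"]

optimistic = payoffs.max(axis=1)  # o_i = max_j v_ij
best = int(optimistic.argmax())   # k = argmax_i o_i
print(alternatives[best], optimistic[best])  # → Construct large plant 200
```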

10
**Maximin**

Choose the alternative with the maximum security level:

s_k = max_i {s_i} = max_i { min_j {v_ij} }
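Maximin swaps the inner max for a min: each alternative is judged by its worst payoff (sketch on the example figures, NumPy assumed):

```python
import numpy as np

payoffs = np.array([[200, -180], [100, -20], [0, 0]])  # $1000s
alternatives = ["Construct large plant", "Construct small plant", "Do nothing"]

security = payoffs.min(axis=1)  # s_i = min_j v_ij, the security levels
best = int(security.argmax())   # k = argmax_i s_i
print(alternatives[best])       # → Do nothing
```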

11
**Criterion of Realism**

Hurwicz suggested using an optimism-pessimism index a. Choose the alternative with the maximum weighted average of the optimistic and security levels:

max_i { a·o_i + (1 – a)·s_i }, where 0 ≤ a ≤ 1

12
**Criterion of Realism: Sensitivity to a**

With the example payoffs (in $1000s), the realism values are 380a – 180 (large plant), 120a – 20 (small plant), and 0 (do nothing). Setting 120a – 20 = 0 gives a = 0.1667; setting 380a – 180 = 120a – 20 gives a = 0.6154.

- 0 ≤ a ≤ 0.1667: “Do nothing”
- 0.1667 ≤ a ≤ 0.6154: “Construct small plant”
- 0.6154 ≤ a ≤ 1: “Construct large plant”
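A short sketch (NumPy assumed) that evaluates the Hurwicz value for a few index values and recovers the three regimes:

```python
import numpy as np

# Example payoffs in $1000s (rows: large plant, small plant, do nothing).
payoffs = np.array([[200, -180], [100, -20], [0, 0]])
alternatives = ["Construct large plant", "Construct small plant", "Do nothing"]

def realism(a):
    """Hurwicz value a*o_i + (1 - a)*s_i for every alternative."""
    o = payoffs.max(axis=1)  # optimistic levels
    s = payoffs.min(axis=1)  # security levels
    return a * o + (1 - a) * s

for a in (0.1, 0.4, 0.8):
    print(a, alternatives[int(realism(a).argmax())])
# → 0.1 Do nothing
# → 0.4 Construct small plant
# → 0.8 Construct large plant
```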

13
**Equal Likelihood**

Laplace argued that “knowing nothing at all about the true state of nature” is equivalent to “all states having equal probability.” Choose the alternative with the maximum row average (expected value).
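The Laplace rule is a plain row mean over the example payoffs (sketch, NumPy assumed):

```python
import numpy as np

payoffs = np.array([[200, -180], [100, -20], [0, 0]])  # $1000s
alternatives = ["Construct large plant", "Construct small plant", "Do nothing"]

row_avg = payoffs.mean(axis=1)  # equal weight on every state of nature
best = int(row_avg.argmax())
print(alternatives[best], row_avg[best])  # → Construct small plant 40.0
```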

14
**Minimax Regret**

Savage defined the regret (opportunity loss) r_ij as the difference between the value resulting from the best alternative given that state of nature j is the true state and the value resulting from alternative i under state of nature j. Choose the alternative with the minimum worst (maximum) regret.
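The regret matrix subtracts each payoff from the best payoff in its column; minimax regret then minimizes the row-wise worst case (sketch on the example figures, NumPy assumed):

```python
import numpy as np

payoffs = np.array([[200, -180], [100, -20], [0, 0]])  # $1000s
alternatives = ["Construct large plant", "Construct small plant", "Do nothing"]

regret = payoffs.max(axis=0) - payoffs  # r_ij: best in column j minus v_ij
worst = regret.max(axis=1)              # worst regret of each alternative
best = int(worst.argmin())
print(alternatives[best], worst[best])  # → Construct small plant 100
```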

15
**Summary of Example Results**

| METHOD | DECISION |
|---|---|
| Maximax | “Construct large plant” |
| Maximin | “Do nothing” |
| Criterion of Realism | depends on a |
| Equal likelihood | “Construct small plant” |
| Minimax regret | “Construct small plant” |

The appropriate method depends on the personality and philosophy of the DM.

16
**Solutions with QM for Windows**

17
**Decision Making Under Risk**

- Probability: objective vs. subjective
- Expected (Monetary) Value
- Expected Value of Perfect Information
- Expected Opportunity Loss
- Utility Theory: Certainty Equivalence, Risk Premium

18
**Probability**

A probability is a numerical statement about the likelihood that an event will occur. The probability P of any event satisfies 0 ≤ P(event) ≤ 1, and the sum of the simple probabilities for all possible outcomes of an activity must equal 1.

19
**Objective Probability**

Determined by experiment or observation, e.g. the probability of heads on a coin flip, or of drawing a spade from a deck:

P(A) = n(A) / n

where P(A) is the probability of occurrence of event A, n(A) is the number of times event A occurs, and n is the number of independent and identical repetitions of the experiment or observation.
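The relative-frequency definition can be sketched with simulated coin flips (the 0.5 threshold stands in for a fair coin):

```python
import random

# Estimate P(heads) by relative frequency: P(A) ≈ n(A) / n.
random.seed(42)
n = 100_000
n_heads = sum(random.random() < 0.5 for _ in range(n))
print(n_heads / n)  # close to 0.5 for large n
```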

20
**Subjective Probability**

Determined by an estimate based on the expert’s personal belief, judgment, experience, and existing knowledge of a situation

21
**Payoff Matrix with Probabilities**

22
**Expected (Monetary) Value**

Choose the alternative with the maximum weighted row average:

EV(a_i) = Σ_j v_ij · P(q_j)
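On the example matrix with P = (0.6, 0.4), the EVs are a matrix-vector product (sketch, NumPy assumed):

```python
import numpy as np

payoffs = np.array([[200, -180], [100, -20], [0, 0]])  # $1000s
probs = np.array([0.6, 0.4])  # P(favorable), P(unfavorable)
alternatives = ["Construct large plant", "Construct small plant", "Do nothing"]

ev = payoffs @ probs                 # EV(a_i) = sum_j v_ij * P(q_j)
best = int(ev.argmax())
print(alternatives[best], ev[best])  # → Construct small plant 52.0
```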

23
**Sensitivity Analysis**

EV(Large plant) = $200,000·P – $180,000·(1 – P)
EV(Small plant) = $100,000·P – $20,000·(1 – P)
EV(Do nothing) = $0·P + $0·(1 – P)

24
**Sensitivity Analysis**

[Chart: EV ($) versus P for the three alternatives. Point 1 (P ≈ 0.167) is where “Small plant” overtakes “Do nothing”; Point 2 (P ≈ 0.615) is where “Large plant” overtakes “Small plant”.]
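The two crossover probabilities follow directly from the EV lines above (here rescaled to $1000s):

```python
# EV(large) = 380P - 180, EV(small) = 120P - 20, EV(do nothing) = 0 (in $1000s)
p1 = 20 / 120    # EV(small) = 0:        Point 1
p2 = 160 / 260   # EV(large) = EV(small): Point 2
print(round(p1, 3), round(p2, 3))  # → 0.167 0.615
```

Below P ≈ 0.167 the best EV is “Do nothing”, between the two points it is “Small plant”, and above P ≈ 0.615 it is “Large plant”.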

25
**Expected Value of Perfect Information**

A consultant or further analysis can aid the decision maker by giving exact (perfect) information about the true state: the decision problem is then no longer under risk but under certainty. Is obtaining perfectly reliable information worthwhile, i.e., is EVPI greater than the consultant’s fee (the cost of the analysis)? EVPI is the maximum amount a decision maker would pay for additional information.

26
**Expected Value of Perfect Information**

EVPI = EV with perfect information – maximum EV under risk

EV with perfect information: 200 × 0.6 + 0 × 0.4 = 120 (in $1000s)
Maximum EV under risk: 52
EVPI = 120 – 52 = 68
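The same calculation as a sketch (NumPy assumed): with perfect information the DM always picks the column-wise best payoff, so EVPI is that expectation minus the best EV under risk:

```python
import numpy as np

payoffs = np.array([[200, -180], [100, -20], [0, 0]])  # $1000s
probs = np.array([0.6, 0.4])

ev_perfect = (payoffs.max(axis=0) * probs).sum()  # best payoff per state: 120
max_ev = (payoffs @ probs).max()                  # best EV under risk: 52
evpi = ev_perfect - max_ev
print(evpi)  # → 68.0
```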

27
**Expected Opportunity Loss**

Choose the alternative with the minimum weighted row average of the regret matrix:

EOL(a_i) = Σ_j r_ij · P(q_j)
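A sketch on the example (NumPy assumed); note that the minimum EOL equals the EVPI computed above:

```python
import numpy as np

payoffs = np.array([[200, -180], [100, -20], [0, 0]])  # $1000s
probs = np.array([0.6, 0.4])
alternatives = ["Construct large plant", "Construct small plant", "Do nothing"]

regret = payoffs.max(axis=0) - payoffs  # r_ij, column-wise opportunity loss
eol = regret @ probs                    # weighted row averages
best = int(eol.argmin())
print(alternatives[best], eol[best])    # minimum EOL = 68 = EVPI
```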

28
**Utility Theory**

Utility assessment may assign the worst payoff a utility of 0 and the best payoff a utility of 1. A standard gamble is used to determine utility values: when the DM is indifferent between two alternatives, their utility values are equal. Choose the alternative with the maximum expected utility:

EU(a_i) = Σ_j u(v_ij) · P(q_j)

29
**Standard Gamble for Utility Assessment**

The DM compares a lottery ticket paying the best payoff v* (with u(v*) = 1) with probability p and the worst payoff v– (with u(v–) = 0) with probability 1 – p, against a certain payoff v. At indifference, u(v) = 1·p + 0·(1 – p) = p.

30
**Utility Assessment (1st approach)**

Three successive 50/50 standard gambles pin down intermediate utilities:

- Gamble I: lottery between v* (u = 1) and v– (u = 0), each with probability 0.5, vs. certain payoff x1 ⇒ u(x1) = 0.5
- Gamble II: lottery between v* and x1 vs. certain payoff x2 ⇒ u(x2) = 0.75
- Gamble III: lottery between x1 and v– vs. certain payoff x3 ⇒ u(x3) = 0.25

In the example: u(–180) = 0 and u(200) = 1; x1 = 100, so u(100) = 0.5; x2 = 175, so u(175) = 0.75; x3 = 5, so u(5) = 0.25.

31
**Expected Utility (Example 1)**

32
**Utility Assessment (2nd approach)**

Here the lottery probability p is varied until the DM is indifferent between the lottery (best payoff v*, u(v*) = 1, with probability p; worst payoff v–, u(v–) = 0, with probability 1 – p) and the certain payoff v_ij, giving u(v_ij) = p directly. In the example: u(–180) = 0 and u(200) = 1; for v_ij = –20, p = 70%, so u(–20) = 0.7; for v_ij = 0, p = 75%, so u(0) = 0.75; for v_ij = 100, p = 90%, so u(100) = 0.9.
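With the second-approach utilities in place, expected utility is the same matrix-vector product as EV, applied to u(v_ij) (sketch, NumPy assumed):

```python
import numpy as np

# Utilities from the second-approach assessment: u(200)=1, u(-180)=0,
# u(100)=0.9, u(-20)=0.7, u(0)=0.75; state probabilities 0.6 / 0.4.
utilities = np.array([[1.0, 0.0], [0.9, 0.7], [0.75, 0.75]])
probs = np.array([0.6, 0.4])
alternatives = ["Construct large plant", "Construct small plant", "Do nothing"]

eu = utilities @ probs  # EU(a_i) = sum_j u(v_ij) * P(q_j)
print(alternatives[int(eu.argmax())])  # → Construct small plant
```

Under these utilities the EU-maximizing choice (small plant, EU = 0.82) coincides with the EV-maximizing one.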

33
**Expected Utility (Example 2)**

34
**Preferences for Risk**

[Chart: utility versus monetary outcome for risk aversion (avoidance), risk neutrality (indifference), and risk proneness (seeking).]

35
**Certainty Equivalence**

If a DM is indifferent between accepting a lottery ticket (paying Y with probability p and X with probability 1 – p) and accepting a sum of certain money Z, then Z is the Certainty Equivalent (CE) of the lottery, with Y > Z > X.

36
**Risk Premium**

The Risk Premium (RP) of a lottery is the difference between the EV of the lottery and its CE.

- If the DM is risk averse (avoids risk), RP > 0: s/he prefers to receive a sum of money equal to the lottery’s expected value rather than enter the lottery itself.
- If the DM is risk prone (seeks risk), RP < 0: s/he prefers to enter a lottery rather than receive a sum of money equal to its expected value.
- If the DM is risk neutral, RP = 0: s/he is indifferent between entering any lottery and receiving a sum of money equal to its expected value.
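An illustrative sketch (not from the slides): a risk-averse DM with the concave utility u(x) = √x, facing a 50/50 lottery between 0 and 100, has CE below EV and hence a positive risk premium:

```python
import math

# Hypothetical concave utility u(x) = sqrt(x): a risk-averse DM.
p, x_low, x_high = 0.5, 0.0, 100.0
ev = p * x_high + (1 - p) * x_low                       # EV = 50
eu = p * math.sqrt(x_high) + (1 - p) * math.sqrt(x_low) # EU = 5
ce = eu ** 2                                            # CE = u^-1(EU) = 25
rp = ev - ce                                            # RP = 25 > 0
print(ev, ce, rp)  # → 50.0 25.0 25.0
```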
