
Slide 1: Multi-Attribute Decision Making: Eliciting Weights
Scott Matthews
Courses: 12-706 / 19-702

Slide 2: Admin Issues
- HW 4 due today
- No Friday class this week

Slide 3: Multi-objective Methods
- Multiobjective programming
- Multiple criteria decision making (MCDM)
- MCDM is both an analytical philosophy and a set of specific analytical techniques
- Deals explicitly with multi-criteria decision making
- Provides a mechanism for incorporating values
- Promotes inclusive decision-making processes
- Encourages interdisciplinary approaches

Slide 4: 2:1 Tradeoff Example
- Pick any existing alternative and consider a hypothetical alternative you would trade it for.
- You would be indifferent in this trade.
- E.g., Volvo V(30, 9) -> hypothetical H(31, 7)
- H would get U_F = 6/10 and U_C = 4/7
- Since we are indifferent, U(V) must equal U(H):
  k_C(6/7) + k_F(5/10) = k_C(4/7) + k_F(6/10)
  k_C(2/7) = k_F(1/10), so k_F = k_C(20/7)
- But k_F + k_C = 1, so k_C(20/7) + k_C = 1
  k_C(27/7) = 1; k_C = 7/27 = 0.26 (so k_F = 0.74)
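A minimal sketch (mine, not the course's code) of the algebra above: it builds proportional-scoring utilities over the attribute ranges from the car table on slide 10 (fuel efficiency 25-35 mpg, comfort index 3-10) and solves the indifference condition together with k_F + k_C = 1.

```python
# Single-attribute utilities: proportional scoring over each attribute's observed range
# (fuel efficiency 25-35 mpg, comfort index 3-10, from the car table on slide 10).
def u_fuel(mpg, worst=25.0, best=35.0):
    return (mpg - worst) / (best - worst)

def u_comfort(c, worst=3.0, best=10.0):
    return (c - worst) / (best - worst)

# Indifference between Volvo V(30 mpg, comfort 9) and hypothetical H(31 mpg, comfort 7):
#   k_F*u_F(30) + k_C*u_C(9) = k_F*u_F(31) + k_C*u_C(7),  with k_F + k_C = 1.
# Rearranging gives k_C / k_F = [u_F(31) - u_F(30)] / [u_C(9) - u_C(7)].
ratio = (u_fuel(31) - u_fuel(30)) / (u_comfort(9) - u_comfort(7))  # = k_C / k_F
k_F = 1.0 / (1.0 + ratio)
k_C = 1.0 - k_F
print(f"k_C = {k_C:.3f}, k_F = {k_F:.3f}")  # ~0.259 and ~0.741, i.e. 7/27 and 20/27
```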

Slide 5: With these weights...
- U(M) = 0.26*1 + 0.74*0 = 0.26
- U(V) = 0.26*(6/7) + 0.74*0.5 = 0.593
- U(T) = 0.26*(3/7) + 0.74*1 = 0.851
- U(H) = 0.26*(4/7) + 0.74*0.6 = 0.593
- Note H isn't really an option; we are just checking that it gets the same utility as the Volvo (as expected).
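A short continuation of the sketch above (again mine, not the course's code): scoring each alternative with the exact weights 7/27 and 20/27 and confirming that the hypothetical H ties with the Volvo.

```python
k_C, k_F = 7 / 27, 20 / 27          # weights elicited from the 2:1 tradeoff

def u_fuel(mpg):
    return (mpg - 25) / (35 - 25)   # 25 mpg -> 0, 35 mpg -> 1

def u_comfort(c):
    return (c - 3) / (10 - 3)       # comfort 3 -> 0, comfort 10 -> 1

alternatives = {
    "Mercedes": (25, 10),
    "Volvo":    (30, 9),
    "Toyota":   (35, 6),
    "H (hypothetical)": (31, 7),
}

for name, (mpg, comfort) in alternatives.items():
    u = k_F * u_fuel(mpg) + k_C * u_comfort(comfort)
    print(f"{name:18s} U = {u:.3f}")
# Mercedes ~0.259, Volvo ~0.593, Toyota ~0.852, H ~0.593 -- H ties with the Volvo;
# tiny differences from the slide come from rounding the weights to 0.26 / 0.74.
```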

Slide 6: Marginal Rate of Substitution
- For our example, MRS = (k_C / 7) / (k_F / 10) = (0.26 / 7) / (0.74 / 10) ≈ 1/2
- Which is what we said it should be (1 unit of fuel efficiency per 2 units of comfort)

Slide 7: Eliciting Weights for MCDM
- The 2:1 tradeoff ("pricing out") is one example of eliciting weights
- The method was direct, and was based on an easy quantitative 0-1 scale
- What other options are there to help us?

Slide 8: Ratios
- Helpful when attributes are not quantitative
- Car example: color (how much more do we like red?)
- First ask sets of pairwise comparison questions
- Then set up quantitative scores
- Then put them on a 0-1 scale
- This is what MCDM software does (a series of pairwise comparisons)
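A hedged illustration of the ratio approach; the color names and ratio numbers are invented for this example, not taken from the slides. The idea is to ask "how many times better is each color than the least-liked color?" and then rescale onto 0-1.

```python
# Hypothetical ratio judgments for car color, relative to the least-liked color
# (e.g., "red is 4x as good as silver").
ratio_score = {"red": 4.0, "blue": 2.0, "silver": 1.0}

worst, best = min(ratio_score.values()), max(ratio_score.values())

# Rescale so the worst color scores 0 and the best scores 1.
u_color = {c: (s - worst) / (best - worst) for c, s in ratio_score.items()}
print(u_color)  # {'red': 1.0, 'blue': 0.33..., 'silver': 0.0}
```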

Slide 9: MCDM - Swing Weights
- Use hypothetical extreme combinations to determine weights
- Base option = worst on all attributes
- Other options: "swing" one of the attributes from worst to best
- Determine rank preference, find weights

Slide 10: Choosing a Car

  Car         Fuel Eff (mpg)   Comfort Index
  Mercedes         25               10
  Chevrolet        28                3
  Toyota           35                6
  Volvo            30                9

- Which alternatives are dominated, which are non-dominated?
- Dominated alternatives can be removed from the decision,
- BUT we'll need to maintain their values for ranking
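A minimal dominance check for the table above, as a sketch (my code, not the course's): alternative A dominates B if A is at least as good on every attribute and strictly better on at least one.

```python
cars = {                  # (fuel efficiency in mpg, comfort index); higher is better on both
    "Mercedes":  (25, 10),
    "Chevrolet": (28, 3),
    "Toyota":    (35, 6),
    "Volvo":     (30, 9),
}

def dominates(a, b):
    """True if a is at least as good as b on every attribute and better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

for name_b, attrs_b in cars.items():
    better = [name_a for name_a, attrs_a in cars.items() if dominates(attrs_a, attrs_b)]
    if better:
        print(f"{name_b} is dominated by {', '.join(better)}")
# Only the Chevrolet is dominated (by both the Toyota and the Volvo); the rest are non-dominated.
```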

Slide 11: Swing Weights Table
- Combinations formed by starting from the all-worst base and swinging each attribute, one at a time, to its best value
- How would we rank / rate the options below?

  Combo                   Rank   Rate   Weight
  Base (25 F, 3 C)         3      0
  Fuel (35 F, 3 C)
  Comfort (25 F, 10 C)

Slide 12: Example
- Worst and best get ratings of 0 and 100 by default
- Suppose we assessed the "Fuel" option highest, and judged that the "Comfort" option deserves a rating of 20 (compared to 100):

  Combo                      Rank   Rate   Weight
  Benchmark (25 F, 3 C)       3       0       0
  Fuel (35 F, 3 C)            1     100    100/120
  Comfort (25 F, 10 C)        2      20     20/120
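A small sketch (my own) of the arithmetic behind the Weight column: each attribute's weight is its swing rating divided by the sum of all the ratings.

```python
# Swing ratings assessed above: swinging fuel from 25 to 35 mpg is rated 100,
# swinging comfort from 3 to 10 is rated 20; the benchmark is 0 by definition.
swing_rating = {"fuel": 100, "comfort": 20}

total = sum(swing_rating.values())
weights = {attr: rate / total for attr, rate in swing_rating.items()}
print(weights)  # {'fuel': 0.833..., 'comfort': 0.166...}, i.e. 100/120 and 20/120
```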

Slide 13: Outcome of Swing Weights
- Each row combines the worst-case utility on the unchanged attribute with the best-case utility on the swung attribute
- E.g., U("Fuel" option) = k_F * U_F(35) + k_C * U_C(3)
- So U(Fuel) = k_F * 1 + k_C * 0 = k_F
- Same for the "Comfort" option => k_C
- We assessed the swing weights as utilities
- The utility of swinging each attribute from worst to best gives us our (elicited) weights

Slide 14: So how to assess?
- Proportional scoring ~ risk neutral
- Ratios: good for qualitative attributes
  - First do qualitative comparisons (e.g., colors)
  - Then derive a 0-1 scale
- Incorporate risk attitudes (not neutral)
  - We have used mostly linear utility
  - For a risk-averse decision maker, a risky prospect has lower utility
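To make the risk-attitude point concrete, a hedged sketch (mine) contrasting the linear, risk-neutral utility used so far with a concave exponential utility; the risk-tolerance value R = 5 is purely illustrative.

```python
import math

def u_linear(x, worst, best):
    # Proportional scoring: risk neutral.
    return (x - worst) / (best - worst)

def u_exponential(x, worst, best, R=5.0):
    # Concave (risk-averse) utility, rescaled to 0 at worst and 1 at best.
    # R is an illustrative risk-tolerance parameter.
    f = lambda t: 1 - math.exp(-(t - worst) / R)
    return f(x) / f(best)

# A 50/50 gamble between the worst and best fuel efficiency (25 or 35 mpg) has the same
# expected value as a sure 30 mpg, but a risk-averse utility scores the gamble lower.
gamble_linear = 0.5 * u_linear(25, 25, 35) + 0.5 * u_linear(35, 25, 35)
gamble_exp    = 0.5 * u_exponential(25, 25, 35) + 0.5 * u_exponential(35, 25, 35)
print(gamble_linear, u_linear(30, 25, 35))        # 0.5 vs 0.5   (indifferent)
print(gamble_exp, u_exponential(30, 25, 35))      # 0.5 vs ~0.73 (gamble is worth less)
```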

Slide 15: MCDM with Decision Trees
- Incorporate uncertainties as event nodes with branches across the possibilities
- See the "summer job" example in Chapter 4.

Slide 16 (figure only; no transcript text)

Slide 17
- Still need special (external) scales
- And we need to value/normalize them
- Give 100 to the best, 0 to the worst, and find the scale for everything in between (e.g., job fun)
- Get both criteria on 0-100 scales!

Slide 18 (figure only; no transcript text)

Slide 19 (figure only; no transcript text)

Slide 20: Next Step: Weights
- We need weights between the 2 criteria
- Don't forget they are based on the whole scale
- E.g., you value "improving salary over its 0-100 scale at 3x what you value fun going from 0 to 100" - not just "salary vs. fun"
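A tiny worked version of the 3x statement (my arithmetic, not from the slides): if the full salary swing is worth three times the full fun swing, the normalized weights come out to 0.75 and 0.25.

```python
# "Improving salary over its 0-100 scale is worth 3x improving fun over its 0-100 scale"
#  => k_salary = 3 * k_fun, with k_salary + k_fun = 1.
k_fun = 1 / (1 + 3)
k_salary = 3 * k_fun
print(k_salary, k_fun)  # 0.75 0.25
```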

Slide 21: Proportional Scoring for Salary, Subjective Rankings for Fun (figure only)

Slide 22 (figure only; no transcript text)

Slide 23 (figure only; no transcript text)

Slide 24 (figure only; no transcript text)

Slide 25: Notes
- While the forest job dominates the in-town job, recall that this comes with caveats:
  - these estimates, these tradeoffs, these weights, etc.
  - it might not be a general result.
- Make sure you look at the tutorial at the end of Chapter 4 on how to simplify with plugins
- Read the Chapter 15 Eugene library example!

Slide 26: How to Solve MCDM Problems
- All methods (AHP, SMART, ...) return some sort of weighting factor set
- Use these weighting factors in conjunction with the data values (mpg, price, ...) to build value functions
- In multilevel/hierarchical trees, deal with each set of weights at each level of the tree
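A hedged sketch of the multilevel point; the hierarchy, criteria names, and numbers below are invented for illustration only. Each bottom-level weight is multiplied by its parent's weight so that the global weights still sum to 1.

```python
# Hypothetical two-level hierarchy of criteria with weights at each level.
top_weights = {"cost": 0.6, "quality": 0.4}
local_weights = {
    "cost":    {"price": 0.7, "maintenance": 0.3},
    "quality": {"comfort": 0.5, "safety": 0.5},
}

# Global weight of each bottom-level criterion = top-level weight * local weight.
global_weights = {
    attr: top_weights[group] * w
    for group, locals_ in local_weights.items()
    for attr, w in locals_.items()
}
print(global_weights)                # {'price': 0.42, 'maintenance': 0.18, 'comfort': 0.2, 'safety': 0.2}
print(sum(global_weights.values()))  # 1.0 (up to floating point)
```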

Slide 27: Stochastic Dominance "Defined"
- A is better than B if:
  Pr(Profit > $z | A) ≥ Pr(Profit > $z | B), for all possible values of $z
- Or equivalently (by complementarity):
  Pr(Profit ≤ $z | A) ≤ Pr(Profit ≤ $z | B), for all possible values of $z
- A FOSD B iff F_A(z) ≤ F_B(z) for all z

Slide 28: Stochastic Dominance: Example #1
- The cumulative risk profiles (CRPs) for the 2 strategies show that "Accept $2 Billion" is dominated by the other strategy. (figure not included in the transcript)

Slide 29: Stochastic Dominance (again)
- Chapter 4 (Risk Profiles) introduced deterministic and stochastic dominance
- We looked at discrete distributions, but the idea is similar for continuous ones
- How do we compare payoff distributions?
- Two concepts:
  - A is better than B because A provides unambiguously higher returns than B
  - A is better than B because A is unambiguously less risky than B
- If an option stochastically dominates another, it must have a higher expected value

Slide 30: First-Order Stochastic Dominance (FOSD)
- Case 1: A is better than B because A provides unambiguously higher returns than B
  - Every expected-utility maximizer prefers A to B (prefers more to less)
  - For every x, the probability of getting at least x is higher under A than under B
- Say A "first-order stochastically dominates" B if:
  - Notation: F_A(x) is the CDF of A, F_B(x) is the CDF of B
  - F_B(x) ≥ F_A(x) for all x, with at least one strict inequality
  - or, for any non-decreasing U(x): ∫ U(x) dF_A(x) ≥ ∫ U(x) dF_B(x)
  - The expected value of A is higher than that of B

Slide 31: FOSD (figure only)
Source: http://www.nes.ru/~agoriaev/IT05notes.pdf

Slide 32: FOSD Example

  Option A
  Profit ($M)     Prob.
  0 ≤ x < 5       0.2
  5 ≤ x < 10      0.3
  10 ≤ x < 15     0.4
  15 ≤ x < 20     0.1

  Option B
  Profit ($M)     Prob.
  0 ≤ x < 5       0
  5 ≤ x < 10      0.1
  10 ≤ x < 15     0.5
  15 ≤ x < 20     0.3
  20 ≤ x < 25     0.1
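A hedged sketch (my code) that checks the FOSD condition numerically for the two payoff tables above, taking the first table as Option A and the second as Option B exactly as they appear in this transcript: it builds each cumulative distribution at the interval upper edges and tests whether one CDF lies everywhere at or below the other.

```python
# Interval probabilities keyed by each interval's upper edge in $M.
# First table above taken as Option A, second as Option B.
prob_A = {5: 0.2, 10: 0.3, 15: 0.4, 20: 0.1, 25: 0.0}
prob_B = {5: 0.0, 10: 0.1, 15: 0.5, 20: 0.3, 25: 0.1}

edges = sorted(prob_A)

def cdf(probs):
    """Cumulative probability of profit falling at or below each upper edge."""
    total, out = 0.0, {}
    for z in edges:
        total += probs[z]
        out[z] = total
    return out

F_A, F_B = cdf(prob_A), cdf(prob_B)
print("z:  ", edges)
print("F_A:", [round(F_A[z], 2) for z in edges])   # 0.2, 0.5, 0.9, 1.0, 1.0
print("F_B:", [round(F_B[z], 2) for z in edges])   # 0.0, 0.1, 0.6, 0.9, 1.0

# FOSD: one option dominates the other if its CDF is <= the other's everywhere,
# with at least one strict inequality.
b_fosd_a = all(F_B[z] <= F_A[z] for z in edges) and any(F_B[z] < F_A[z] for z in edges)
print("Second option FOSD first option:", b_fosd_a)   # True with this labelling
```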

Slide 33 (figure only; no transcript text)

Slide 34: Second-Order Stochastic Dominance (SOSD)
- How to compare 2 lotteries based on risk
- Given lotteries/distributions with the same mean
- So we're looking for a rule by which we can say "B is riskier than A because every risk-averse person prefers A to B"
- A "SOSD" B if ∫_{-∞}^{z} F_A(t) dt ≤ ∫_{-∞}^{z} F_B(t) dt for all z
- Equivalently, for every non-decreasing (concave) U(x): ∫ U(x) dF_A(x) ≥ ∫ U(x) dF_B(x)

Slide 35: SOSD Example

  Option A
  Profit ($M)     Prob.
  0 ≤ x < 5       0.1
  5 ≤ x < 10      0.3
  10 ≤ x < 15     0.4
  15 ≤ x < 20     0.2

  Option B
  Profit ($M)     Prob.
  0 ≤ x < 5       0.3
  5 ≤ x < 10      0.3
  10 ≤ x < 15     0.2
  15 ≤ x < 20     0.1
  20 ≤ x < 25     0.1
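A hedged numeric check of SOSD for the two tables above (my code; the first table is taken as Option A, the second as Option B, and each interval's probability is placed at its midpoint, which is my own simplification): for equally spaced outcomes, A second-order stochastically dominates B when the running sum of F_B - F_A is never negative.

```python
# Interval probabilities with all mass placed at the interval midpoints ($M).
prob_A = {2.5: 0.1, 7.5: 0.3, 12.5: 0.4, 17.5: 0.2, 22.5: 0.0}
prob_B = {2.5: 0.3, 7.5: 0.3, 12.5: 0.2, 17.5: 0.1, 22.5: 0.1}

points = sorted(prob_A)

def cdf_values(probs):
    total, out = 0.0, []
    for x in points:
        total += probs[x]
        out.append(total)
    return out

F_A, F_B = cdf_values(prob_A), cdf_values(prob_B)

# Discrete SOSD check (equally spaced outcomes): the cumulative sum of F_B - F_A
# must be >= 0 at every point for A to second-order stochastically dominate B.
running, a_sosd_b = 0.0, True
for fa, fb in zip(F_A, F_B):
    running += fb - fa
    a_sosd_b = a_sosd_b and running >= -1e-12
print("A SOSD B:", a_sosd_b)   # True with this labelling
print("E[A] =", sum(x * p for x, p in prob_A.items()),
      "E[B] =", sum(x * p for x, p in prob_B.items()))   # 11.0 and 9.5
```

Note that under this midpoint placement the two means are not exactly equal (11.0 vs 9.5); SOSD as checked here only requires E[A] ≥ E[B], and, unlike the slide 32 example, neither option first-order dominates the other.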

Slide 36 (figure only; regions labeled "Area 1" and "Area 2")

Slide 37: SOSD (figure only)

Slide 38: SD and MCDM
- As long as the criteria are independent (e.g., fun and salary), then:
- If one alternative stochastically dominates another on each individual attribute, it will also stochastically dominate the other when the weights/attribute scores are combined
- (e.g., marginal and joint probability distributions)

