
1
Comparing Sequential Sampling Models With Standard Random Utility Models
Jörg Rieskamp, Center for Economic Psychology, University of Basel, Switzerland
4/16/2012, Warwick

2
Decision Making Under Risk
French mathematicians (1654): Blaise Pascal and Pierre Fermat
Rational decision making: principles of expected value

3
Decision Making Under Risk
St. Petersburg paradox (Nicholas Bernoulli)
Expected utility theory (Daniel Bernoulli, 1738): replacing the value of money by its subjective value

4
Expected Utility Theory
Axiomatic expected utility theory: von Neumann & Morgenstern, 1947

5
Frederick Mosteller 1916 - 2006

6
Probabilistic Nature of Preferential Choice
Mosteller and Nogee argued that when first offering a bet with a certain probability of winning, and then increasing that probability, "there is not a sudden jump from no acceptances to all acceptances at a particular offer, just as in a hearing experiment there is not a critical loudness below which nothing is heard and above which all loudnesses are heard"; instead, "the bet is taken occasionally, then more and more often, until, finally, the bet is taken nearly all the time" (Mosteller & Nogee, 1951, Journal of Political Economy, p. 374).

7
Mosteller and Nogee's Study
– Experiment conducted over 10 weeks with 3 sessions each week
– Participants (N = 30) repeatedly accepted or rejected gambles
Example: participants had to accept or reject a simple binary gamble with a probability of 2/3 to lose 5 cents and a probability of 1/3 to win a particular amount; the winning amount varied between 5 and 16 cents.

8
Results: "Subject B-I"

9
Experimental Study (Rieskamp, 2008, JEP:LMC)
– Participants decided between 180 pairs of gambles
– They received 15 Euros as a show-up fee
– One gamble was selected and played at the end of the experiment, and the winning amounts were paid to the participants

10
Task

11
Expected values of the selected gambles

12
Results: Expected values – Choice proportions

13
How Can We Explain the Probabilistic Character of Choice?
Consumer products

14
Explaining the Probabilistic Character of Choice: Logit Model
Random utility theories: utilities carry errors that are independently and identically extreme value distributed
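Under the extreme-value assumption, the choice probabilities have the familiar closed form (the multinomial logit, i.e. a softmax over the deterministic utilities). A minimal sketch with hypothetical utility values:

```python
import numpy as np

# Hypothetical deterministic utilities for three options (illustrative values).
v = np.array([1.0, 0.5, 0.2])

def logit_choice_probs(v, scale=1.0):
    """Closed-form choice probabilities when the random utility errors are
    i.i.d. extreme-value (Gumbel) distributed: the multinomial logit."""
    z = v / scale
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

p = logit_choice_probs(v)      # probabilities sum to 1; higher utility, higher p
```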

15
Probit Model
Random utility theories: utilities carry errors that are independently and identically normally distributed
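With normal errors there is no closed form for more than two options, so probit choice probabilities are usually approximated numerically. A Monte Carlo sketch with the same hypothetical utilities as above:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([1.0, 0.5, 0.2])   # hypothetical deterministic utilities

def probit_choice_probs(v, n_draws=200_000):
    """Monte Carlo approximation of multinomial probit choice probabilities:
    add i.i.d. standard-normal errors to each utility and count how often
    each option has the highest realized utility."""
    u = v + rng.standard_normal((n_draws, v.size))
    winners = u.argmax(axis=1)
    return np.bincount(winners, minlength=v.size) / n_draws

p = probit_choice_probs(v)
```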

16
Cognitive Approach to Decision Making
Considering the information-processing steps leading to a decision
Sequential sampling models: Vickers, 1970; Ratcliff, 1978; Busemeyer & Townsend, 1993; Usher & McClelland, 2004

17
Sequential Sampling Models
People evaluate options by continuously sampling information about the options' attributes. Which attribute receives the attention of the decision maker fluctuates; the probability that an attribute receives attention is a function of the attribute's importance. When the overall evaluation crosses a decision threshold, a decision is made.
Rieskamp, Busemeyer, & Mellers (2006), Journal of Economic Literature
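The sampling process described above can be sketched as a simple random walk: at each step one attribute is attended with probability proportional to its importance, and the momentary advantage of one option over the other is accumulated until a threshold is crossed. All numbers (attribute values, attention weights, noise level, threshold) are illustrative assumptions, not parameters from the studies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-option example: rows = options, columns = attributes.
values = np.array([[1.0, 0.2],
                   [0.4, 0.9]])
attention = np.array([0.7, 0.3])   # attribute importance -> attention probability

def sample_until_threshold(values, attention, threshold=3.0, max_steps=10_000):
    """Accumulate the momentary advantage of option 0 over option 1 and
    decide once the preference state crosses +threshold or -threshold."""
    pref = 0.0
    for t in range(max_steps):
        a = rng.choice(len(attention), p=attention)       # attended attribute
        pref += values[0, a] - values[1, a] + rng.normal(0.0, 0.5)
        if abs(pref) >= threshold:
            return (0 if pref > 0 else 1), t + 1
    return (0 if pref > 0 else 1), max_steps

choice, steps = sample_until_threshold(values, attention)
```

With these numbers option 0 has the larger attention-weighted value, so it wins most simulated trials, but not all of them: the choice stays probabilistic, as the slide emphasizes.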

18
Dynamic Development of Preference (adapted from Busemeyer & Johnson, 2004)
Threshold Bound (internally controlled stopping rule)

19
Dynamic Development of Preference (adapted from Busemeyer & Johnson, 2004)
Time Limit (externally controlled stopping rule)

20
Decision Making Under Risk
– DFT vs. cumulative prospect theory: Rieskamp (2008), JEP:LMC
– DFT vs. proportional difference model: Scheibehenne, Rieskamp, & Gonzalez-Vallejo (2009), Cognitive Science
– Hierarchical Bayesian approach examining the limitations of cumulative prospect theory: Nilsson, Rieskamp, & Wagenmakers (2011), JMP

21
Consumer Behavior
How well do sequential sampling models predict consumer behavior?
– Multi-attribute decision field theory (Roe, Busemeyer, & Townsend, 2001)
versus
– Logit and probit models (standard random utility models)

22
Multi-attribute Decision Field Theory
– Decay: the preference state decays over time
– Interrelated evaluations of options: options are compared with each other; similar alternatives compete against each other and have a negative influence on each other
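A minimal sketch of the MDFT preference dynamics just described: the preference state is updated as P(t+1) = S P(t) + V(t), where the diagonal of the feedback matrix S (< 1) implements decay and the negative off-diagonal entries implement distance-dependent competition, stronger between similar options. The matrix and valence values below are illustrative assumptions; in the full model the valence V(t) is derived from attention-weighted attribute comparisons:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3-option example: options 1 and 2 are similar (stronger mutual
# inhibition, -0.05) while option 3 is dissimilar (weaker inhibition, -0.01).
S = np.array([[ 0.90, -0.05, -0.01],
              [-0.05,  0.90, -0.01],
              [-0.01, -0.01,  0.90]])
mean_valence = np.array([0.10, 0.08, 0.05])   # illustrative mean inputs

def mdft_preferences(S, mean_valence, steps=200, noise=0.1):
    """One sampled trajectory of the MDFT preference state:
    P(t+1) = S @ P(t) + V(t), with V(t) a noisy valence input."""
    p = np.zeros(len(mean_valence))
    for _ in range(steps):
        v = mean_valence + noise * rng.standard_normal(len(mean_valence))
        p = S @ p + v
    return p

p = mdft_preferences(S, mean_valence)
```

Because all eigenvalues of S lie below 1, the state decays rather than exploding, which is what makes the "memory of the previous preference state" parameter meaningful.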

23
Study 1
1. Calibration Experiment
– Participants (N = 30) repeatedly decided between three digital cameras (72 choices)
– Each camera was described by five attributes with two or three attribute values (e.g., megapixels, monitor size)
– The models' parameters were estimated following a maximum likelihood approach
2. Generalization Test Experiment

24
Task

25
Models' Parameters
– Logit (standard random utility): weights given to the attributes; errors extreme value distributed
– Probit (standard random utility): weights given to the attributes; errors normally distributed
– MDFT (sequential sampling): attention weights allocated to the attributes, normally distributed; a parameter determining the rate at which similarity declines with distance; a parameter determining the memory of the previous preference state

26
Attribute Weights
Logit - Probit: r = .99
MDFT - Logit: r = .94
MDFT - Probit: r = .94

27
Model Comparison Results: Likelihood Differences

28
Results: Bayes Factor

29
Study 1 – Generalization
Experiment 2: Generalization Test
– Generating a new set of options on the basis of the parameter values estimated in Experiment 1
– Comparing the models' predictions without fitting

30
Results

Model     Distance   Log likelihood
Baseline  0.19       -702
Logit     0.18       -863
Probit    0.11       -468
MDFT      0.12       -490

31
Conclusion
Calibration design:
– LL: MDFT > Logit > Probit
– Bayes factor: Logit > Probit > MDFT
Generalization design:
– Probit ≈ MDFT > Logit

32
Study 2: Qualitative Predictions – Interrelated Evaluations of Options
Decision field theory: interrelated evaluations of options
1. attention-specific evaluations
2. competition between similar options
Logit / Probit: evaluations of options are independent of each other

33
Interrelated Evaluation of Options

35
Tversky, 1972

36
Interrelated Evaluation of Options

39
Interrelated evaluation of options (Huber, Payne, & Puto, 1982)

40
Interrelated Evaluation of Options

42
Research Questions
– Is it possible to show the interrelated evaluations of options for all three situations in a within-subject design?
– Does MDFT have a substantial advantage over the logit and probit models in predicting people's decisions?
– Do the choice effects really matter?

43
Method: Matching Task
Before the main study, participants had to choose one attribute value to make both options equally attractive.

          TARGET (A)   COMPETITOR (B)
Weight    6.5 kg       8.0 kg
Price     ???          3'000 CHF

44
Method: Matching Task

          TARGET (A)            COMPETITOR (B)
Weight    6.5 kg                8.0 kg
Price     4'000 CHF (matched)   3'000 CHF

45
Main Study: Choice Task
To the former two options (target + competitor), individually specified decoys were added; choices were always between three options.

          TARGET (A)            COMPETITOR (B)   DECOY (C)
Weight    6.5 kg                8.0 kg           6.6 kg
Price     4'000 CHF (matched)   3'000 CHF        4'100 CHF

46
Peculiarity: Decoy Position
The decoy was added either in relation to option A or in relation to option B.

47
Interrelated Evaluation of Options

50
If the third option had no effect on the preferences for A and B, the average choice proportion for the target option should be 50%.

51
Main Study
Consumer products: bicycles, washing machines, notebooks, vacuum cleaners, color printers, digital cameras
Choices: 6 products × 3 effects × 3 situations × 2 decoy positions = 108 choice situations (triples)
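The size of the design follows directly from fully crossing the four factors. A quick sketch (the situation labels are placeholders; only the product list and factor counts come from the slide):

```python
from itertools import product

products = ["bicycle", "washing machine", "notebook",
            "vacuum cleaner", "color printer", "digital camera"]
effects = ["attraction", "compromise", "similarity"]
situations = [1, 2, 3]               # placeholder labels for the three situations
decoy_positions = ["near A", "near B"]

# Full factorial crossing: 6 x 3 x 3 x 2 = 108 choice triples.
triples = list(product(products, effects, situations, decoy_positions))
```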

52
Results

53
Target Attraction Effect

54
Results: Target Attraction Effect, Compromise Effect

55
Results: Target Attraction Effect, Compromise Effect, Target Similarity Effect

56
Attribute Weights
Logit - Probit: r = .72
MDFT - Logit: r = .57
MDFT - Probit: r = .61

57
Results

60
Conclusions
– Sequential sampling models provide a way to describe the probabilistic character of choices
– In standard choice situations, probit and MDFT do equally well at predicting people's preferences
– In situations in which the interrelated evaluations of options play a major role, MDFT has a substantial advantage over standard random utility models

61
Thanks!
Nicolas Berkowitsch, Maximilian Matthaeus, Benjamin Scheibehenne

63
Interrelated Evaluation of Options (Huber, Payne, & Puto, 1982)

64
Expected values of the selected gambles

65
Results: Expected values – Choice proportions

66
Estimating the Models' Parameters
– Each model's parameters were estimated separately for each individual
– Goodness of fit: maximum likelihood
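A sketch of the maximum likelihood fitting step for one individual, using a logit-style choice rule and simulated data in place of the real choice data. The dimensions loosely follow the camera study (72 trials, 3 options, 5 attributes); the attribute values, "true" weights, and choice of scipy's BFGS optimizer are all assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Hypothetical data: 72 trials, 3 options, 5 attributes.
X = rng.random((72, 3, 5))                  # attribute values per option
true_w = np.array([2.0, 1.0, 0.5, 0.2, 0.1])

def choice_probs(w, X):
    """Softmax choice probabilities from linear attribute utilities."""
    u = X @ w                               # utilities: trials x options
    u = u - u.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(u)
    return e / e.sum(axis=1, keepdims=True)

# Simulate choices from the "true" weights, then recover them by ML.
p_true = choice_probs(true_w, X)
choices = np.array([rng.choice(3, p=pi) for pi in p_true])

def neg_log_lik(w):
    """Negative log likelihood of the observed choices under weights w."""
    p = choice_probs(w, X)
    return -np.log(p[np.arange(len(choices)), choices]).sum()

fit = minimize(neg_log_lik, x0=np.zeros(5), method="BFGS")
```

The fitted weights maximize the likelihood of the observed choice sequence; the same scheme, with different choice rules, applies to probit and MDFT.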

67
Results: Sequential Sampling Model, r = .83

68
Results: Cumulative Prospect Theory, r = .88

69
Results
– For 18 participants prospect theory had the better AIC value, compared to 12 participants for whom DFT was the better model (p = .36, sign test)
– When fitting the models to the data, there is a slight advantage for prospect theory in describing the data
– No strong evidence in favor of one model
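The reported sign test can be reproduced as a two-sided binomial test on the 18-vs-12 split against a chance level of 0.5:

```python
from scipy.stats import binomtest

# Sign test: 18 of 30 participants better fit by CPT, 12 by DFT.
result = binomtest(18, n=30, p=0.5, alternative="two-sided")
# result.pvalue ≈ .36, matching the value on the slide
```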

70
CPT - DFT: r = .88

71
Conclusions
– A good fit of a model does not tell us very much!
– Both cumulative prospect theory and the sequential sampling model are able to describe the observed choices

72
Study 2: Rigorous Model Comparison Test
– Goal: conducting a study to test the models rigorously against each other
– Generalization test: constructing decision problems for which the two models make different predictions

73
Independent Test of DFT and CPT in Study 2: Bootstrapping Method
– Generating 10,000 pairs of gambles
– For each pair of gambles, an experiment was simulated with 30 synthetic participants
– For each synthetic participant, DFT's (or CPT's) parameter values were drawn with replacement from the distribution of parameter values of Study 1, and the model's predictions were determined
– Each simulated experiment was repeated 100 times, and the average choice probabilities were determined for DFT and CPT
– Finally, 180 gambles with different predictions of the two models were selected
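The resampling scheme above can be sketched as follows: draw parameter values with replacement from the Study 1 estimates, compute the model's predicted choice probability for each synthetic participant, and average over repeated simulated experiments. Here `predict_choice_prob` is a deliberately simple stand-in (a logistic function), not DFT or CPT, and the parameter distribution is fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for the empirical distribution of Study 1 parameter estimates
# (one value per participant; illustrative, not real data).
study1_params = rng.random(30)

def predict_choice_prob(param, gamble_pair):
    """Placeholder for a model's predicted probability of choosing gamble A;
    the real models here would be DFT or CPT."""
    return 1.0 / (1.0 + np.exp(-(param - gamble_pair)))

def bootstrap_choice_prob(gamble_pair, n_participants=30, n_repeats=100):
    """Resample parameter values with replacement (the deck's bootstrap) and
    average the predicted choice probability over simulated experiments."""
    probs = []
    for _ in range(n_repeats):
        params = rng.choice(study1_params, size=n_participants, replace=True)
        probs.append(predict_choice_prob(params, gamble_pair).mean())
    return float(np.mean(probs))

prob_A = bootstrap_choice_prob(gamble_pair=0.5)   # 0.5 is a dummy stimulus value
```

Running this for DFT and CPT on each candidate pair, and keeping the pairs where the two averages disagree most, yields the diagnostic gamble set.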

74
Study 2: Experiment
– Thirty participants decided between 180 pairs of gambles
– One gamble was selected and played at the end of the experiment, and the winning amounts were paid to the participants

75
Expected Values of Selected Gambles

76
Predictions: CPT - DFT r = -.87

77
Results: Expected values – Choice proportions, r = .71

78
Results: Sequential Sampling Model, r = .77

79
Results: Cumulative Prospect Theory r = -.67

80
Results: Study 2
– For all 30 participants, DFT reached a better goodness of fit than CPT
– The gambles predicted as most likely by DFT were chosen in 66% of all cases, whereas the gambles predicted as most likely by CPT were chosen in only 34% of all cases

81
Limitations
– The results depend on the estimation process for CPT's parameters in Study 1
– With six free parameters, fitting the parameters individually might not lead to reliable estimates

82
Hierarchical Bayesian Approach
– Estimating the posterior distribution of prospect theory's parameters
– The median Bayesian estimates did not differ from the maximum likelihood estimates for most parameters of CPT
Nilsson, Rieskamp, & Wagenmakers (in press), Journal of Mathematical Psychology

83
Hierarchical Bayesian Approach

84
Hierarchical Bayesian Approach
– Estimating the posterior distribution of prospect theory's parameters
– However, it is in general difficult to obtain reliable estimates for the loss aversion parameter of CPT
Nilsson, Rieskamp, & Wagenmakers (in press), Journal of Mathematical Psychology

85
Alternative Models
– Heuristic model of decision making, the priority heuristic: Rieskamp (2008), JEP:LMC
– Proportional difference model: Scheibehenne, Rieskamp, & Gonzalez-Vallejo (2009), Cognitive Science

86
Conclusions
– Sequential sampling models appear to be valid alternatives to conventional expected utility and nonexpected utility approaches such as CPT for explaining decision making under risk
– Sequential sampling models provide a description of the cognitive process underlying decision making
