Naïve, Resolute or Sophisticated? A Study of Dynamic Decision Making John D Hey LUISS, Rome, Italy and University of York, UK Gianna Lotito Università del Piemonte Orientale ESA 2007 World Meeting, 28th June-1st July 2007

The issue An important issue in the analysis of dynamic decision making concerns the behaviour of dynamically inconsistent people. Do they know that they are dynamically inconsistent, and, if so, what do they do about it? One prior issue to be discussed before proceeding: Do the preferences of the decision maker satisfy Expected Utility theory or not? If they do, then the decision maker is not dynamically inconsistent. If they don’t, then the possibility of dynamically inconsistent behaviour arises.

Two points should be made in this latter case:
1) A non-Expected Utility decision maker may plan to make a particular decision when he or she reaches some decision node, but may prefer to implement another decision when that decision node is actually reached.
2) For a non-Expected Utility decision maker, the way that he or she processes the decision tree may make a difference to the solution adopted: choosing the best ex ante strategy might lead to one solution, while using backward induction might lead to another.

Economic theory has identified three possible responses to the dynamic inconsistency question: such decision makers act naïvely (ignoring their inconsistency); they act resolutely (imposing their first-period preferences and not letting their inconsistency affect their behaviour); or they act sophisticatedly (anticipating their inconsistency and optimizing by taking it into account, thus letting their inconsistency affect their behaviour).

We report on an experiment that lets us infer which of these responses best describes behaviour. The experiment allows us not only to observe choices in dynamic decision problems but also to obtain subjects’ evaluations of such problems. Combining these two types of data, we can not only estimate the preferences of the decision makers but also infer whether they are naïve, resolute or sophisticated.

Background The present study builds on two previous studies, both concerned with dynamic decision problems. The first is that of Cubitt, Starmer and Sugden (1998), who considered behaviour in dynamic decision problems; the second is that of Hey and Paradiso (2006), who considered subjects’ evaluations of such problems. Both studies used the same three decision trees, which represent a dynamic decision problem that is identical from a strategic point of view (the set of available ex ante strategies is the same for all trees) but different in its temporal frame.

Cubitt et al. find that subjects behave differently in the different trees – which violates the hypothesis that subjects’ preferences are EU and suggests that the temporal frame influences decisions. Hey and Paradiso find that the temporal frame also has some effect on the evaluations the subjects make of the problems.

Our experiment differs from both these previous studies. We add an explicit pre-commitment option to the decision problem. We also add a fourth tree which helps us in our task of distinguishing between naïve, resolute and sophisticated subjects. In addition, we observe not only behaviour but also preferences.

The design The experiment used four decision problems, T1, T2, T3 and T4. Each tree implies a choice between two or more of the following prospects, defined on the set X = (a, b, c, d, e) of consequences, where a > b > c > d > e = 0, b = c+1 and d = e+1.
K = c (that is, the certainty of c)
L = (a, e; q, 1−q)
M = (c, e; r, 1−r)
N = (b, d; r, 1−r)
O = (a, e; rq, 1−rq)
Note that N first-order stochastically dominates M.
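The dominance claim can be checked mechanically. The sketch below (an illustration, not the authors' code) uses the Set 1 payoffs reported later in the talk and an arbitrary illustrative value of r, since the experiment's probability values are not reproduced here.

```python
# Check that N = (b, d; r, 1-r) first-order stochastically dominates
# M = (c, e; r, 1-r): F_N(x) <= F_M(x) everywhere, strictly somewhere.

def cdf(prospect, x):
    """P(outcome <= x) for a prospect given as [(payoff, prob), ...]."""
    return sum(p for payoff, p in prospect if payoff <= x)

a, b, c, d, e = 50, 31, 30, 1, 0   # Set 1 payoffs (in pounds)
r = 0.5                            # illustrative probability only

M = [(c, r), (e, 1 - r)]
N = [(b, r), (d, 1 - r)]

xs = sorted({e, d, c, b, a})
assert all(cdf(N, x) <= cdf(M, x) for x in xs)   # weak dominance everywhere
assert any(cdf(N, x) < cdf(M, x) for x in xs)    # strict somewhere
```

The dominance holds for any r, because b > c and d > e pointwise in each branch.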

[Figure: the four decision trees T1–T4, with the prospects K, L, M, N and O at their choice and chance nodes]

T1 and T2 provide a standard test of EU theory: if a subject’s preferences satisfy EU then the subject should choose M (O) if and only if he or she chooses K (L). T3 is the same as tree T2 with the addition of a prior choice: the subject can either proceed to tree T2 or accept gamble N. T4 is the static version of T3. In T4 the agent faces a choice among the same prospects M, N and O as in T3, but the temporal order of the decision is reversed: the agent has to choose at the beginning of the decision problem, before any uncertainty is resolved.

Predictions - behaviour

Expected utility                                     T1        T2        T3           T4
K preferred to L, hence M to O (N dominates M)       Up (M)    Up (K)    Down (N)     Down (N)
L preferred to K, hence O to M                       Down (O)  Down (L)  Up, Up (O)   Up (O)

Non-Expected utility                                 T1        T2        T3                        T4
K preferred to L, but O to M (common violation)      Down (O)  Up (K)    Naive: Up, Down           Up (O)
                                                                         Resolute: Up, Up (O)
                                                                         Sophisticated: Down (N)
L preferred to K, but M to O                         Up (M)    Down (L)  Naive: Down (N)           Down (N)
                                                                         Resolute: Down
                                                                         Sophisticated: Down

Predictions - bids

Expected utility
K preferred to L, hence M to O:  T1 = T2 < T3 = T4 (N dominates M)
L preferred to K, hence O to M:  T1 = T2 = T3 = T4

Predictions - bids

Non-Expected utility
K preferred to L, but O to M (common violation):
  Naive:          T1 = T2 = T3 = T4
  Resolute:       T1 = T2 = T3 = T4
  Sophisticated:  T1 = T4 > T3 > T2
L preferred to K, but M to O:  T1 = T2 < T3 = T4 (N dominates M)

The implementation In the experiment we used three sets, each with four trees, with the following values for the payoffs and probabilities. Note that the possible payoff varies from a minimum of £0 to a maximum of £150 (set 2).

Parameter   Set 1   Set 2   Set 3
a           £50     £150    £60
b           £31     £51     £41
c           £30     £50     £40
d           £1      £1      £1
e           £0      £0      £0
q
r

The experiment was conducted at EXEC, University of York. A total of 50 students, both graduate and undergraduate, took part. They were given written instructions. When all participants had finished reading the instructions, a PowerPoint presentation was played at a predetermined speed on their individual screens. After this, they could ask questions. The experiment then started.

In order to elicit the subjects’ evaluations of the three sets of four trees, we used the second-price sealed-bid auction method, which was implemented as follows. Subjects performed the experiment in groups of five. They sat at individual computer terminals and were not allowed to communicate with each other. They individually made bids for each of the three sets of four trees (twelve trees overall) and were given 15 minutes to bid for each set.

During the bidding period the subjects were allowed to practice playing out the decision trees as much as they wanted in the time allowed. It was made clear that the outcomes of the practice did not affect the payments in any way. The number of seconds left for the practice and bidding was shown in the box at the top left-hand side of each decision tree. When the bidding time was over, the subjects played out all the twelve problems for real.

We displayed on each subject’s screen the results of his or her playing out, plus the bids of all five subjects in the group for each of the twelve trees. Then we invited one of the five subjects to select a ball at random from a bag containing 12 balls numbered from 1 to 12. This determined the problem on which the auction was held. The subject with the highest bid for that problem paid us the bid of the second-highest bidder. As all subjects were given a £20 participation fee, four of the five members earned £20, while the fifth earned £20 minus the bid of the second-highest bidder plus the outcome of the decision problem.

Both in the instructions and in the presentation it was emphasised, through different examples, that the bid for each problem should equal the willingness to pay for that decision problem. It was also emphasised that if the subject’s bid was the highest for the chosen problem, he or she would end up with the participation fee, minus the second-highest bid, plus the outcome. This could be less than the participation fee, so the subject could end up losing money.
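The payment rule just described can be sketched in a few lines. Function and variable names here are illustrative, not part of the experiment's software.

```python
# Second-price settlement for a group of five: the highest bidder pays
# the second-highest bid and receives the outcome of the drawn problem;
# everyone starts from a £20 participation fee.

def settle(bids, outcome_for_winner, fee=20.0):
    """Return final earnings for each group member."""
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    earnings = [fee] * len(bids)
    earnings[winner] = fee - bids[runner_up] + outcome_for_winner
    return earnings

# Example: bids for the drawn problem; the winner's tree pays out £30.
print(settle([12.0, 8.5, 5.0, 3.0, 1.0], outcome_for_winner=30.0))
# → [41.5, 20.0, 20.0, 20.0, 20.0]
```

Note how the winner can end up below the £20 fee whenever the outcome falls short of the second-highest bid, exactly the possibility the instructions warned about.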

Descriptive results - behaviour

                                                    Set 1   Set 2   Set 3
Expected utility
  K preferred to L, and M to O
  L preferred to K, and O to M
  Total number
Non-Expected utility
  K preferred to L, but O to M (common violation)     15      18
  L preferred to K, but M to O
  Total number

Consistency with theory - behaviour

                                       Set 1          Set 2          Set 3
                                       Cons   Total   Cons   Total   Cons   Total
Expected utility
  K and M
  L and O
Non-Expected utility
  K but O (common violation)
    Naive
    Resolute
    Sophisticated
  L but M

Consistency with theory - bids

                                  Predicted bids    Set 1        Set 2        Set 3
                                                    Con   Tot    Con   Tot    Con   Tot
Expected utility
  K and M                         T1=T2 < T3=T4
  L and O                         T1=T2 = T3=T4
Non-Expected utility
  K but O (common violation)
    Naive or Resolute             T1=T2 = T3=T4
    Sophisticated                 T1=T4 > T3 > T2
  L but M                         T1=T2 < T3=T4

A formal analysis of the data There are two problems with the above description of the data. First, it is partial and does not use all the data from each subject in a systematic fashion. Second, it ignores the existence of noise in the subjects’ responses: some of the information we have obtained from the subjects may be just noise or error. To remedy these two problems we provide a systematic analysis which uses all the data from each subject and explicitly incorporates noise. Our objective is the same, however – to try to infer whether subjects are naïve, resolute or sophisticated.

We assume that each subject has a well-defined preference function and is either naïve, resolute or sophisticated. For each subject we find the best-fitting preference function and the best-fitting specification (naïve, resolute or sophisticated) to represent that subject’s responses in the experiment. The responses are the bids (for the 4 trees in each of the 3 sets) and the decisions (in the 4 trees of each of the 3 sets) – a total of 24 observations per subject. We note that the nature of the data differs: for the bids we have a number, while for the decisions we have a choice. The former is essentially a continuous variable while the latter is discrete (taking 1 of 2 or 1 of 3 values). The analysis of the two kinds of data therefore has to be different.

We need to assume a preference functional which could be a non-Expected Utility functional but which could also be Expected Utility. We therefore assume that the preference functional is that of Rank Dependent Expected Utility, and we denote the utility function by u(.) and the probability weighting function by w(.). The Rank Dependent Expected Utility of a gamble G = (x_1, x_2, …, x_I; p_1, p_2, …, p_I), where the outcomes are indexed in order from the worst x_1 to the best x_I, is given by

V(G) = Σ_{i=1}^{I} [w(p_i + p_{i+1} + … + p_I) − w(p_{i+1} + … + p_I)] u(x_i)

where the second weight is taken to be w(0) = 0 when i = I.

We note that Rank Dependent Expected Utility preferences reduce to Expected Utility preferences when the weighting function is given by w(p)=p.
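The Rank Dependent Expected Utility calculation, and its reduction to Expected Utility when w(p) = p, can be sketched as follows. This is an illustration, not the authors' GAUSS implementation.

```python
# Rank Dependent Expected Utility of a gamble, with outcomes indexed
# from worst to best as in the formula above. Decision weight for x_i
# is w(P(outcome >= x_i)) - w(P(outcome > x_i)).

def rdeu(outcomes, probs, u, w):
    pairs = sorted(zip(outcomes, probs))   # ensure worst -> best order
    value, tail = 0.0, 1.0                 # tail = P(outcome >= current x)
    for x, p in pairs:
        value += (w(tail) - w(tail - p)) * u(x)
        tail -= p
    return value

u = lambda x: x            # linear utility, for illustration only
identity = lambda p: p     # w(p) = p  =>  reduces to Expected Utility

# With w(p) = p this is just the expected value:
print(rdeu([0, 50], [0.5, 0.5], u, identity))   # → 25.0
```

With a nonlinear w the same gamble gets a different value, which is exactly what opens the door to dynamically inconsistent behaviour.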

To fully characterise the preferences of a subject obeying the Rank Dependent Expected Utility model, we need to know the utility function u(.) and the weighting function w(.). We assume particular functional forms for these two functions and estimate their parameters. We assume the CARA and CRRA specifications for the utility function and the Quiggin (1982) and Power specifications for the weighting function. The Quiggin specification allows for an S-shaped weighting function while the Power specification does not.
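Common parameterisations of these four functional forms look like the following. The normalisations used in the paper may differ, so treat these as illustrative sketches rather than the estimated specifications.

```python
import math

def crra(x, rho):      # CRRA utility (log at rho = 1)
    return math.log(x) if rho == 1 else x ** (1 - rho) / (1 - rho)

def cara(x, alpha):    # CARA utility (linear at alpha = 0)
    return x if alpha == 0 else (1 - math.exp(-alpha * x)) / alpha

def power_w(p, g):     # Power weighting: monotone, no S-shape
    return p ** g

def quiggin_w(p, g):   # Quiggin-style weighting: allows an S-shape
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

# Any admissible weighting function must fix the endpoints:
for w in (power_w, quiggin_w):
    assert abs(w(0.0, 0.7)) < 1e-12 and abs(w(1.0, 0.7) - 1.0) < 1e-12
```

With g = 1 both weighting functions collapse to w(p) = p, recovering Expected Utility, which is how EU sits nested inside the estimation.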

Moreover, we need to specify the stochastic structure of the data. We assume that, when evaluating any gamble (whether static or dynamic), the subject makes a measurement error. More specifically, if u(.) is the utility function of the individual and u⁻¹(.) its inverse, then we assume that the certainty equivalent of any gamble G is given by u⁻¹(V(G)) + e, where V(G) is the Rank Dependent Expected Utility of the gamble (using the equation above) and e is an error – a measurement error.

To complete the specification, we need the distribution of e. We assume that e has an Extreme Value distribution with location parameter 0 and scale parameter 1/s.

Thus the cumulative distribution function F(.) of e is given by

F(e) = exp(−exp(−se))

and the probability density function f(.) by

f(e) = s exp(−se) exp(−exp(−se))

The mean and variance of e are γ/s and π²/(6s²) respectively, where γ is Euler’s constant. The parameter s is inversely proportional to the standard deviation and indicates the precision of the distribution: the larger is s, the smaller is the standard deviation and the more precise the valuation made by the subject.
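The stated mean and variance of the Extreme Value error can be checked numerically. This sketch (not part of the study) draws Gumbel(0, 1/s) errors by inverse-transform sampling and verifies that the sample moments match γ/s and π²/(6s²).

```python
import math, random

random.seed(0)
s = 2.0   # illustrative precision parameter

# Invert F(e) = exp(-exp(-s*e)):  e = -ln(-ln U)/s for U ~ Uniform(0,1)
draws = [-math.log(-math.log(random.random())) / s for _ in range(200_000)]

gamma = 0.5772156649  # Euler's constant
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)

assert abs(mean - gamma / s) < 0.01                   # mean = γ/s
assert abs(var - math.pi ** 2 / (6 * s * s)) < 0.02   # variance = π²/(6s²)
```

Doubling s halves the mean error and quarters the variance, matching the interpretation of s as a precision parameter.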

We estimate the parameters of the utility function, the parameters of the weighting function and the precision parameter s using maximum likelihood (implemented in GAUSS). To do this we need to specify the likelihood of the observations, which is different for the decisions and for the bids. In essence, the likelihood of a decision is the probability that that decision is taken given the parameters and the stochastic specification; the likelihood of a bid is the probability density at the bid given the parameters and the stochastic specification.
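The two likelihood components can be sketched as follows. The bid contribution is the Extreme Value density evaluated at the gap between the bid and the model's certainty equivalent; for the choice contribution a logit form is used here, which follows from independent extreme-value errors on the two options' certainty equivalents. This is an illustrative sketch, not necessarily the paper's exact estimation code.

```python
import math

def log_density_bid(bid, ce, s):
    """log f(bid - ce) for a Gumbel error with location 0, scale 1/s."""
    z = s * (bid - ce)
    return math.log(s) - z - math.exp(-z)

def log_prob_choice(ce_chosen, ce_other, s):
    """log P(chosen) under a binary logit with precision s."""
    return -math.log(1.0 + math.exp(-s * (ce_chosen - ce_other)))

# A subject's log-likelihood sums bid terms and choice terms:
ll = log_density_bid(12.0, 11.5, 2.0) + log_prob_choice(11.5, 10.0, 2.0)
```

As s grows, the bid density concentrates around the certainty equivalent and the choice probabilities approach deterministic choice of the better option.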

In order to ensure the robustness of our results we use all four possible combinations: CRRA with Power; CRRA with Quiggin; CARA with Power; and CARA with Quiggin. As will be seen, there are some variations in the results across the different combinations, but they are relatively minor.

We could simply place each subject in a particular category according to the highest value of the maximised log-likelihood, though we feel that this might give a distorted picture, particularly as there are subjects for whom the maximised log-likelihoods of the three specifications are very close. Instead we chose the following methodology. For each subject (for any one utility function/weighting function combination) we have three maximised log-likelihoods; denote these by ll(n), ll(r) and ll(s) for the naïve, resolute and sophisticated specifications respectively.

If we adopt a Bayesian interpretation of the results, and if we start with equal priors on the three specifications, then the posterior probability of specification k being the correct one is P(k) = exp(ll(k)) / [exp(ll(n)) + exp(ll(r)) + exp(ll(s))] for k = n, r, s. Hence, for example, for Subject 1 on the ‘CRRA plus Power’ combination we have ll(s) = , ll(n) = and ll(r) = ; applying this formula, the posterior probabilities for the three specifications are P(s) = , P(n) = and P(r) = . The resolute specification is almost certainly the correct one, though the other two specifications retain some residual claim.
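This posterior computation is a softmax over the maximised log-likelihoods. The sketch below uses made-up illustrative log-likelihood values, not the slide's own numbers.

```python
import math

def posteriors(ll):
    """Equal-prior Bayesian posteriors from a dict of log-likelihoods."""
    m = max(ll.values())                        # subtract max for stability
    weights = {k: math.exp(v - m) for k, v in ll.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

# Illustrative values for the naive (n), resolute (r), sophisticated (s)
# specifications of one hypothetical subject:
post = posteriors({'n': -45.2, 'r': -41.0, 's': -44.1})
assert abs(sum(post.values()) - 1.0) < 1e-9
assert post['r'] > post['s'] > post['n']       # resolute most probable here
```

Subtracting the maximum before exponentiating leaves the ratios unchanged but avoids numerical underflow when the log-likelihoods are large and negative.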

We have applied this analysis to all the subjects for each of the four utility function/weighting function combinations. Graphically, we use triangles, representing the probability that the sophisticated specification is correct on the horizontal axis and the probability that the resolute specification is correct on the vertical axis. The probability that the naïve specification is correct is the residual, represented by the perpendicular distance from the hypotenuse. In each triangle subjects are indicated by a number. The triangles are divided into three areas: the one at the top is where the resolute specification is most probable, the one to the right is where the sophisticated specification is most probable, and the one nearest the origin is where the naïve specification is most probable.

CRRA and Power weighting function results

CRRA and Quiggin weighting function results

CARA and Power weighting function results

CARA and Quiggin weighting function results

The following results are apparent, depending upon the particular combination:

Combination         Naive most probable   Resolute most probable   Sophisticated most probable
CRRA and Power
CRRA and Quiggin
CARA and Power              23                                                 4
CARA and Quiggin            16                     30                          4

It is interesting to note that the sophisticated specification performs consistently worse than the other two, and that the resolute specification appears to be better for the majority of the subjects, though there is some movement between the naïve and the resolute. This could be a consequence of the fact that there is very little data that allows us to discriminate between these latter two specifications. It is clear from this and the earlier analysis that the resolute and naïve specifications perform particularly well.

Conclusions In conducting experiments to try to determine whether subjects are naïve, resolute or sophisticated, experimenters face a serious difficulty: designing the experiment so as to observe plans and hence to see whether they are implemented. Our experiment tries to surmount these difficulties with a design in which not only behaviour but also preferences over different representations of the dynamic decision problem are observed.

Moreover, we have constructed the decision problems in such a way that we can distinguish between naïve decision makers (those who ignore any possible future inconsistencies), resolute decision makers (those who are resolute in implementing their a priori plans) and sophisticated decision makers (who anticipate their future inconsistencies but who are not sufficiently resolute to overcome them). We have used the data to infer the type of each subject.

The picture is somewhat clouded as we do not know ex ante the preference functional of each individual, but we have explored the robustness of our categorisation. While the final picture is not totally clear, it seems to be the case that around 58% of our 50 subjects are resolute, 36.5% naïve and just 5.5% sophisticated. The large number of resolute subjects and the small number of sophisticated subjects in our experiment surprised us, as we thought ex ante that it would be difficult for subjects to be resolute.

The implications for economic theory are significant. If we look at models which incorporate dynamically inconsistent behaviour (such as the literature on quasi-hyperbolic discounting in the context of a life-cycle saving model), it will be seen that most of these models assume sophisticated behaviour. Our results suggest that this might be descriptively implausible.