
1 Agents and Causes: Reconciling Competing Theories of Causal Reasoning
Michael R. Waldmann
Cognitive and Decision Sciences, Department of Psychology, University of Göttingen
With: Ralf Mayrhofer

2 Overview
1. Causal Reasoning: Two Frameworks
   - Causal Bayes nets as psychological models
     - Overview of empirical evidence
     - Markov violations in causal reasoning
   - Dispositional theories
     - Force dynamics
     - Agents and patients
2. How Dispositional Intuitions Guide the Structuring of Causal Bayes Nets
   - Experiments: Markov violation
   - Error attribution in an extended causal Bayes net

3 Causal Reasoning: Two Frameworks

4 Causal Models: Psychological Evidence
- People are sensitive to the directionality of the causal arrow (Waldmann & Holyoak, 1992; Waldmann, 2000, 2001)
- People estimate causal power based on covariation information and control for co-factors (Waldmann & Hagmayer, 2001)
- Causal Bayes nets as models of causal learning (Waldmann & Martignon, 1998)
- People (and rats) differentiate between observational and interventional predictions (Waldmann & Hagmayer, 2005; Blaisdell, Sawa, Leising, & Waldmann, 2006)
- Counterfactual causal reasoning (Meder, Hagmayer, & Waldmann, 2008, 2009)
- Categories and concepts: the neglected direction (Waldmann & Hagmayer, 2006)
- A computational Bayesian model of diagnostic reasoning (Meder, Mayrhofer, & Waldmann, 2009)
- Abstract knowledge about mechanisms influences the parameterization of causal models (Waldmann, 2007)

5 Causal Bayes Net Research: Summary
"The Bayesian Probabilistic Causal Networks framework has stimulated a productive research program on human inferences on causal networks. Such inferences have clear analogues in everyday judgments about social attributions, medical diagnosis and treatment, legal reasoning, and in many other domains involving causal cognition. So far, research suggests two persistent deviations from the normative model. People's inferences about one event are often inappropriately influenced by other events that are normatively irrelevant; they are unconditionally independent or are 'screened off' by intervening nodes. At the same time, people's inferences tend to be weaker than are warranted by the normative framework."
Rottman, B., & Hastie, R. (2013). Reasoning about causal relationships: Inferences on causal networks. Psychological Bulletin.

6 Markov Violations in Causal Reasoning

7 The Causal Markov Condition
Definition: Conditional on its parents ("direct causes"), each variable X is independent of all other variables that are not causal descendants of X (i.e., a cause "screens off" each of its effects from the rest of the network).
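To make the screening-off idea concrete, here is a minimal numerical sketch in Python (not part of the original slides; all parameter values are invented for illustration). It enumerates the joint distribution of a small common-cause net C → E1, C → E2 and shows that conditioning on E2 does not change P(E1 | C):

```python
# Minimal sketch of screening off in a common-cause net C -> E1, C -> E2.
# All parameter values are illustrative assumptions, not data from the talk.

BASE_RATE = 0.7      # P(C = 1)
W1, W2 = 0.8, 0.8    # P(Ei = 1 | C = 1); here the effects never occur without C

def joint(c, e1, e2):
    """Joint probability P(C=c, E1=e1, E2=e2) under the common-cause factorization."""
    p_c = BASE_RATE if c else 1 - BASE_RATE
    p_e1 = (W1 if e1 else 1 - W1) if c else (0.0 if e1 else 1.0)
    p_e2 = (W2 if e2 else 1 - W2) if c else (0.0 if e2 else 1.0)
    return p_c * p_e1 * p_e2

def p_e1_given(c, e2=None):
    """P(E1 = 1 | C = c [, E2 = e2]), computed by enumerating the joint."""
    states = [(cc, ee1, ee2) for cc in (0, 1) for ee1 in (0, 1) for ee2 in (0, 1)
              if cc == c and (e2 is None or ee2 == e2)]
    num = sum(joint(*s) for s in states if s[1] == 1)
    den = sum(joint(*s) for s in states)
    return num / den

# The Markov condition: given C, E1 is independent of E2.
print(p_e1_given(c=1))        # 0.8 (up to float rounding)
print(p_e1_given(c=1, e2=0))  # still 0.8: C screens E1 off from E2
```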

8 But recent research shows that human reasoners do consider the states of other effects of a target effect's cause when inferring from the cause to a single effect (see Rehder & Burnett, 2005; Walsh & Sloman, 2007).

9 An Augmented Causal Bayes Net? Rehder & Burnett, 2005

10 The Causal Markov Condition: Psychological Evidence
- Subjects typically translate causal model instructions into representations that on the surface violate the Markov condition.
- Humans seem to add assumptions about hidden mechanisms that lead to violations of screening off, even when the cover stories are abstract.
- It is unclear where the assumptions about hidden structure come from. People typically have only sparse knowledge about mechanisms (Rozenblit & Keil, 2002).

11 Dispositional Theories

12 Abstract Dispositions, Force Dynamics, and the Distinction between Agents and Patients
- Causation is the product of an interaction between causal participants (agents, patients) that are endowed with dispositions, powers, or capacities.
  - e.g., Aspirin has the capacity to relieve headaches; brains have the capacity to be influenced by aspirin.
- Agents (who do not have to be human) are the active entities emitting forces. Patients are the entities acted upon by the agents. Patients more or less resist the influence of the agents.
- Intuitions about abstract properties of agents and patients may guide causal reasoning in the absence of further mechanism knowledge.

13 Wolff's Theory of Force Dynamics (Wolff, 2007)

                  Patient tendency   Affector (i.e., agent)-   Endstate
                  for endstate       patient concordance       approached
Cause             No                 No                        Yes
Allow (enable)    Yes                Yes                       Yes
Prevent           Yes                No                        No

Examples:
"Winds caused the boat to heel" (cause)
"Vitamin B allowed the body to digest" (allow)
"Winds prevented the boat from reaching the harbor" (prevent)
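As a compact restatement of the table, the three concepts can be encoded as triples of the features (patient tendency, affector-patient concordance, endstate approached). This is a minimal sketch; the dictionary and function names are illustrative, not part of Wolff's formalism:

```python
# Illustrative encoding of the force-dynamic categories as feature triples:
# (patient tendency for endstate, affector-patient concordance, endstate approached).
FORCE_DYNAMICS = {
    ("no",  "no",  "yes"): "cause",    # "Winds caused the boat to heel"
    ("yes", "yes", "yes"): "allow",    # "Vitamin B allowed the body to digest"
    ("yes", "no",  "no"):  "prevent",  # "Winds prevented the boat from reaching the harbor"
}

def classify(tendency, concordance, endstate):
    """Map a configuration of the three features to cause / allow / prevent."""
    return FORCE_DYNAMICS.get((tendency, concordance, endstate), "not covered by the table")

print(classify("no", "no", "yes"))   # cause
```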

14 Problems
- Where does the knowledge about tendencies come from if covariation information is excluded?
- How can predictive and diagnostic inferences within complex causal models be explained?
- How do we know whether a causal participant plays the role of an agent or a patient?

15 How Dispositional Intuitions Guide the Structuring of Causal Bayes Nets

16 Hypotheses
1. Both agents and patients are represented as capacity placeholders for hidden internal mechanisms.
2. There is a tendency to blame the agent to a large extent for both successful and unsuccessful causal transmissions.
3. These intuitions can be represented by elaborating or re-parameterizing the causal Bayes net.

17 Experiments: Markov Violation

18 An Unfamiliar Domain: Mind-Reading Aliens (see also Steyvers et al., 2003)
POR = "food" (in the alien language)

19 Dissociating Causes and Agents
I. Cause (Agent) → Effect (Patient)
II. Cause (Patient) → Effect (Agent)

20 Manipulating the Agent Role
1. Sender condition (cause object as agent): "Gonz is capable of sending out his thoughts, and hence transmitting them into the heads of Murks, Brrrx, and Zoohng."
2. Reader condition (effect objects as agents): "Murks, Brrrx, and Zoohng are capable of reading the thoughts of Gonz."
(Causal structure: Gonz → Murks, Brrrx, Zoohng)

21 Experiment 1a: Which Alien Is the Cause? (Intervention Question)
Imagine "POR" was implanted in the head of the cause/effect alien. How probable is it that the other alien thinks of "POR"?

22 Experiment 1b: Blame Attributions
Who is more responsible if the cause is present and the effect is absent: the cause alien or the effect alien?

23 Markov Violations: Experiment 2
Instruction: Four aliens either think of POR or not; the thoughts of the pink top alien (cause) covary with the thoughts of the bottom aliens (effects); aliens think of POR 70% of the time.
Test question: "Imagine 10 situations with this configuration. In how many instances does the right alien think of POR?"

24 Predictions
Sender condition: The pattern seems to indicate that something is wrong with Gonz's capacity to send. Hence, the probability of Murks having Gonz's thought should be low (i.e., a strong Markov violation).
Reader condition: The pattern seems to indicate that something is wrong with Brrrx's and Zoohng's capacity to read. Hence, the probability of Murks having Gonz's thought should remain relatively intact (i.e., a weak Markov violation).

25 Results: Experiment 2

26 Error attribution in an extended causal Bayes net

27 Error attribution in causal Bayes nets
Standard model: C → E with causal strength w_C.

28 Distinguishing between two types of error sources
Differentiating between cause-based (F_C) and effect-based (F_E) preventers:
[Diagram: C → E with a cause-based preventer F_C acting on the cause and an effect-based preventer F_E acting on the effect]
Simplified version: [Diagram: C → E with causal strength w_C and only the cause-based preventer F_C represented explicitly]
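One way such a parameterization might look (a hedged sketch under noisy-OR-style assumptions; the names w_c, p_fc, and p_fe are illustrative, not necessarily the authors' implementation): transmission from C to an effect succeeds with probability w_C unless it is blocked either by the shared cause-based preventer F_C or by that effect's own preventer F_E.

```python
import random

# Hedged sketch of the extended parameterization (noisy-OR style; names are
# illustrative): an effect occurs when C is present, its link works (prob. w_c),
# the shared cause-based preventer F_C is inactive, and the effect's own
# effect-based preventer F_E is inactive.

def sample_effects(c, n_effects, w_c, p_fc, p_fe, rng=random):
    """Sample the states of C's effects under the two kinds of preventers."""
    f_c = rng.random() < p_fc                 # one shared cause-based preventer
    effects = []
    for _ in range(n_effects):
        f_e = rng.random() < p_fe             # one effect-based preventer per effect
        link_works = rng.random() < w_c       # the causal link itself
        effects.append(int(bool(c) and link_works and not f_c and not f_e))
    return effects

# e.g., the sender condition maps onto a strong shared preventer and weak
# individual preventers (values invented for illustration):
print(sample_effects(c=1, n_effects=3, w_c=0.8, p_fc=0.3, p_fe=0.1))
```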

29 Error Attribution in a Common-Cause Model
[Diagram: C → E1, E2, …, En with causal strength w_C and an unobserved common preventer F_C]
F_C is an unobserved common preventer and must be inferred from the states of C and its effects. When C involves the agent, the strength of F_C is high (i.e., errors are mainly attributed to C); when the effects involve the agents, the strength of F_C is low (hence errors are primarily attributed to the individual effects, i.e., to F_E, which is reflected in w_C).
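Under the same illustrative assumptions as in the sketch above, the predicted Markov violation can be computed in closed form: observing that C's other effects are absent raises the posterior probability that the shared preventer F_C is active, which lowers the prediction for the remaining effect. With a strong F_C (sender condition) the drop is large; with a weak F_C (reader condition) it is small. All numbers are made up for illustration.

```python
# Closed-form prediction under the same illustrative model as above:
# P(E_target = 1 | C = 1 and n_other_absent sibling effects are absent),
# marginalizing over the shared cause-based preventer F_C.

def p_target_effect(w_c, p_fc, p_fe, n_other_absent):
    q = w_c * (1 - p_fe)                                 # per-effect success prob. if F_C is off
    num = (1 - p_fc) * q * (1 - q) ** n_other_absent     # F_C off, target on, siblings off
    den = (1 - p_fc) * (1 - q) ** n_other_absent + p_fc  # siblings off (whether F_C is off or on)
    return num / den

# Sender condition (strong shared preventer F_C): large Markov violation
print(p_target_effect(w_c=0.8, p_fc=0.3, p_fe=0.1, n_other_absent=0))  # ~0.50 (baseline)
print(p_target_effect(w_c=0.8, p_fc=0.3, p_fe=0.1, n_other_absent=2))  # ~0.11 (strong drop)

# Reader condition (weak shared preventer, strong individual preventers): weak violation
print(p_target_effect(w_c=0.8, p_fc=0.1, p_fe=0.3, n_other_absent=0))  # ~0.50 (baseline)
print(p_target_effect(w_c=0.8, p_fc=0.1, p_fe=0.3, n_other_absent=2))  # ~0.36 (small drop)
```

The size of this drop tracks the assumed strength of F_C, which is the qualitative pattern summarized on the next slide as the slope of the Markov violation.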

30 Model Predictions
[Diagram: C → E1, E2, E3 with a cause-based preventer F_C]
The strength of F_C (the red, green, and blue curves in the figure, ordered by w_F_C) influences the size of the Markov violation (i.e., the slope).

31 Further Predictions (1): A/B Case
- In the basic experiments, an asymmetry between the two states of the cause was found.
- In the absent case the cause is not active, so mechanism assumptions cannot have an influence.
- Prediction: When both states of the cause are described as active, the differential assumptions about error attribution should matter in both cases.

32 Markov Experiment A/B: Results (N = 56)

33 Further Predictions (2): Causal Chains
If each cause comes with its own F_C, the difference between the reading and sending conditions should completely disappear in a causal chain situation.
[Diagram: a causal chain over the nodes IC1, DC, IC2, and E, with a separate preventer F_C attached to each cause]

34 Chain Experiment: Results (N = 50)

35 Summary
- Causal model instructions are typically augmented with hidden structure.
- In the absence of specific mechanism knowledge, intuitions about abstract dispositional properties of causal participants guide the structuring of the models.

