
1
**THE MATHEMATICS OF CAUSAL INFERENCE IN STATISTICS**

Judea Pearl
Department of Computer Science, UCLA

2
**OUTLINE**

- Statistical vs. causal modeling: distinction and mental barriers
- N-R vs. structural model: strengths and weaknesses
- Formal semantics for counterfactuals: definition, axioms, graphical representations
- Graphs and algebra: symbiosis, translation, and accomplishments

3
**TRADITIONAL STATISTICAL INFERENCE PARADIGM**

Data → Inference → Q(P) (aspects of the joint distribution P). E.g., infer whether customers who bought product A would also buy product B: Q = P(B | A).

4
**FROM STATISTICAL TO CAUSAL ANALYSIS: 1. THE DIFFERENCES**

Probability and statistics deal with static relations: data → inference → Q(P), aspects of a single, fixed joint distribution P. But what happens when P changes? E.g., infer whether customers who bought product A would still buy A if we were to double the price.

5
**FROM STATISTICAL TO CAUSAL ANALYSIS: 1. THE DIFFERENCES (CONT)**

What remains invariant when P changes, say, to satisfy P(price = 2) = 1? Note: P(v) ≠ P(v | price = 2), and P does not tell us how it ought to change. E.g., curing symptoms vs. curing diseases; analogy: mechanical deformation.

6
**FROM STATISTICAL TO CAUSAL ANALYSIS: 1. THE DIFFERENCES (CONT)**

CAUSAL: spurious correlation, randomization, confounding / effect, instrument, holding constant, explanatory variables.
STATISTICAL: regression, association / independence, "controlling for" / conditioning, odds and risk ratios, collapsibility.

Causal and statistical concepts do not mix.

7
**FROM STATISTICAL TO CAUSAL ANALYSIS: 1. THE DIFFERENCES (CONT)**

"No causes in, no causes out" (Cartwright, 1989): causal conclusions require causal assumptions on top of statistical assumptions + data. Causal assumptions cannot be expressed in the mathematical language of standard statistics.

8
**FROM STATISTICAL TO CAUSAL ANALYSIS: 1. THE DIFFERENCES (CONT)**

Expressing causal assumptions requires non-standard mathematics:
- Structural equation models (Wright, 1920; Simon, 1960)
- Counterfactuals (Neyman-Rubin: Yx; Lewis: x □→ Y)

9
**TWO PARADIGMS FOR CAUSAL INFERENCE**

Observed: P(X, Y, Z, ...). Conclusions needed: P(Yx = y), P(Xy = x | Z = z), ... How do we connect the observables X, Y, Z, ... to the counterfactuals Yx, Xz, Zy, ...?

N-R model: counterfactuals are primitives, new variables in a "super-distribution" P*(X, Y, ..., Yx, Xz, ...); the observables X, Y, Z constrain Yx, Zy, ...

Structural model: counterfactuals are derived quantities; subscripts modify the model, hence the distribution: P(Yx = y) = Px(Y = y).

10
**"SUPER" DISTRIBUTION IN THE N-R MODEL**

[Table: four unit rows u1, ..., u4 of a super-distribution over the observables X, Y, Z and the counterfactuals Yx=0, Yx=1, Xz=0, Xz=1, Xy=0; the 0/1 entries are not recoverable from the transcript, and one row is flagged as an inconsistency.]

Consistency constraints: x = 0 ⇒ Yx=0 = Y, and in general Y = x·Y1 + (1 − x)·Y0.

11
**TYPICAL INFERENCE IN THE N-R MODEL**

Find P*(Yx = y) given a covariate Z:
- Assume ignorability: Yx ⫫ X | Z
- Assume consistency: X = x ⇒ Yx = Y

Problems:
1. The ignorability condition Yx ⫫ X | Z is judgmental and opaque. Try it for concrete X, Y, Z.
2. Is consistency the only connection between X, Y, and Yx?

12
**THE STRUCTURAL MODEL PARADIGM**

Data → Inference → Q(M) (aspects of the data-generating model M and its induced joint distribution). M is an oracle for computing answers to queries Q. E.g., infer whether customers who bought product A would still buy A if we were to double the price.

13
**FAMILIAR CAUSAL MODEL: ORACLE FOR MANIPULATION**

Here is a causal model we all remember from high school: a circuit diagram (inputs X, Y, Z; an output). There are four interesting points to notice in this example:

1. It qualifies as a causal model because it contains the information to confirm or refute all action, counterfactual, and explanatory sentences concerning the operation of the circuit. For example, anyone can figure out what the output would be if we set Y to zero, if we changed this OR gate to a NOR gate, or if we performed any of the billions of combinations of such actions.
2. The logical function (the Boolean input-output relation) is insufficient for answering such queries.
3. These actions were not specified in advance; they have no special names and do not show up in the diagram. In fact, the great majority of the action queries this circuit can answer were never considered by its designer.
4. So how does the circuit encode this extra information? Through two encoding tricks: (a) the symbolic units correspond to stable physical mechanisms (the logical gates); (b) each variable has precisely one mechanism that determines its value.
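The encoding tricks above can be made concrete in a few lines. A hypothetical sketch (the slide's actual gate layout is not recoverable from the transcript, so the two-gate circuit below is invented for illustration): representing each gate as a replaceable mechanism is exactly what lets the model answer action queries the designer never anticipated.

```python
# Circuit-as-causal-model: each internal wire has exactly one gate (mechanism)
# that sets it, and an intervention replaces a mechanism. The two-gate layout
# (w = OR(x, y); out = AND(w, z)) is illustrative, not the slide's diagram.

def circuit(x, y, z, gates=None):
    """Evaluate the circuit; `gates` overrides mechanisms (an intervention)."""
    g = {
        "w":   lambda a, b: a or b,    # an OR gate feeding the output stage
        "out": lambda a, b: a and b,   # an AND gate producing the output
    }
    if gates:
        g.update(gates)
    w = g["w"](x, y)
    return g["out"](w, z)

# Normal operation: the Boolean input-output relation.
print(circuit(False, False, True))    # False: OR(0,0) = 0, AND(0,1) = 0

# An action query never named in the design: "what if this OR were a NOR?"
nor = {"w": lambda a, b: not (a or b)}
print(circuit(False, False, True, gates=nor))   # True: NOR(0,0) = 1, AND(1,1) = 1
```

The point of the sketch: the input-output table of `circuit` alone cannot answer the NOR query; the answer lives in the mechanism structure, which is why a causal model is more than its Boolean behavior.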

14
**STRUCTURAL CAUSAL MODELS**

Definition: a structural causal model is a 4-tuple ⟨V, U, F, P(u)⟩, where
- V = {V1, ..., Vn} are observable variables,
- U = {U1, ..., Um} are background variables,
- F = {f1, ..., fn} are functions determining V: vi = fi(v, u),
- P(u) is a distribution over U.

P(u) and F induce a distribution P(v) over the observable variables.
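The 4-tuple can be sketched directly in code. A minimal illustration (the three-variable chain and all probabilities below are invented, not from the slides): sampling u ~ P(u) and solving the equations F is exactly how P(u) and F induce the observational distribution P(v).

```python
import random

# A minimal structural causal model <V, U, F, P(u)>: the variable names
# (z, x, y) and probabilities are illustrative assumptions, not the slides'.

def sample_u():
    """P(u): two independent background variables."""
    return {"u_z": random.random() < 0.5, "u_y": random.random() < 0.1}

# F: each observable V_i has exactly one function f_i(pa_i, u_i).
F = {
    "z": lambda v, u: u["u_z"],            # z = f_z(u_z)
    "x": lambda v, u: v["z"],              # x = f_x(z)
    "y": lambda v, u: v["x"] or u["u_y"],  # y = f_y(x, u_y)
}

def solve(F, u):
    """Evaluate the (acyclic) equations to get one realization v."""
    v = {}
    for name, f in F.items():   # dict order doubles as topological order here
        v[name] = f(v, u)
    return v

# P(v) induced by P(u) and F, estimated by Monte Carlo:
random.seed(0)
n = 100_000
p_y = sum(solve(F, sample_u())["y"] for _ in range(n)) / n
print(round(p_y, 2))   # ~ 0.55 = P(z) + (1 - P(z)) * P(u_y) = 0.5 + 0.5 * 0.1
```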

15
**STRUCTURAL MODELS AND CAUSAL DIAGRAMS**

The arguments of the functions vi = fi(v, u) define a graph: vi = fi(pai, ui), with PAi ⊆ V \ {Vi} and Ui ⊆ U. Example: price-quantity equations in economics, where quantity Q and price P each depend on the other plus exogenous variables (I, W) and background variables (U1, U2); PAQ denotes the parents of Q in the diagram.

16
**STRUCTURAL MODELS AND INTERVENTION**

Let X be a set of variables in V. The action do(x) sets X to constants x regardless of the factors that previously determined X: do(x) replaces all functions fi determining X with the constant functions X = x, creating a mutilated model Mx.

17
**STRUCTURAL MODELS AND INTERVENTION (CONT)**

In the price-quantity example, do(P = p0) deletes the equation for P and replaces it with the constant P = p0, yielding the mutilated model Mp, in which the factors that previously determined P (W, U2) no longer influence it.
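In code, do(x) is literally equation replacement. A sketch under the same illustrative assumptions as before (the chain z → x → y is invented, not the slides' economics example):

```python
import random

# do(x): replace the equation of each intervened variable with a constant,
# leaving every other mechanism intact (the mutilated model M_x).
# Variable names and probabilities are illustrative assumptions.

def sample_u():
    return {"u_z": random.random() < 0.5, "u_y": random.random() < 0.1}

F = {
    "z": lambda v, u: u["u_z"],
    "x": lambda v, u: v["z"],
    "y": lambda v, u: v["x"] or u["u_y"],
}

def do(F, intervention):
    """Return the mutilated equation set: f_i for i in X becomes X = x."""
    Fx = dict(F)
    for name, value in intervention.items():
        Fx[name] = lambda v, u, value=value: value   # constant function
    return Fx

def solve(F, u):
    v = {}
    for name, f in F.items():
        v[name] = f(v, u)
    return v

random.seed(0)
n = 100_000
Fx = do(F, {"x": True})
# P(y | do(x = True)): z no longer influences x in the mutilated model.
p = sum(solve(Fx, sample_u())["y"] for _ in range(n)) / n
print(p)   # 1.0, since y = x or u_y and x is forced to True
```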

18
**CAUSAL MODELS AND COUNTERFACTUALS**

Definition: the sentence "Y would be y (in situation u), had X been x," denoted Yx(u) = y, means: the solution for Y in the mutilated model Mx (i.e., the equations for X replaced by X = x), with input U = u, is equal to y.

Joint probabilities of counterfactuals follow by summing P(u) over the appropriate situations u; the super-distribution P* is thus derived from M: parsimonious, consistent, and transparent.
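The definition is directly executable: fix u, mutilate, solve. A sketch (the three-variable model is an invented illustration, not the slides'):

```python
# Y_x(u): the solution for y in the mutilated model M_x at a fixed u.
# Model (illustrative only): z = u_z; x = z XOR u_x; y = x AND u_y.

F = {
    "z": lambda v, u: u["u_z"],
    "x": lambda v, u: v["z"] != u["u_x"],
    "y": lambda v, u: v["x"] and u["u_y"],
}

def solve(F, u):
    v = {}
    for name, f in F.items():
        v[name] = f(v, u)
    return v

def counterfactual(F, u, intervention):
    """Y_x(u): solve the mutilated model at the same background state u."""
    Fx = dict(F)
    for name, value in intervention.items():
        Fx[name] = lambda v, u, value=value: value
    return solve(Fx, u)

u = {"u_z": True, "u_x": False, "u_y": True}
factual = solve(F, u)                      # in situation u: x = True, y = True
cf = counterfactual(F, u, {"x": False})    # "had X been False"
print(factual["y"], cf["y"])               # True False
```

Averaging such counterfactual solutions over u ~ P(u) yields the joint counterfactual probabilities, i.e., the super-distribution P* the slide says is derived from M.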

19
**AXIOMS OF CAUSAL COUNTERFACTUALS**

Yx(u) = y: "Y would be y, had X been x (in state U = u)." The axioms:
- Definiteness
- Uniqueness
- Effectiveness: Xxw(u) = x
- Composition: Wx(u) = w ⇒ Yxw(u) = Yx(u)
- Reversibility

20
**GRAPHICAL - COUNTERFACTUALS SYMBIOSIS**

Every causal graph expresses counterfactual assumptions, e.g., for the chain X → Y → Z:

1. Missing arrows (e.g., no arrow X → Z) encode exclusion restrictions (e.g., Zxy = Zy).
2. Missing arcs (no latent common cause, e.g., of Y and Z) encode counterfactual independencies (e.g., Yx ⫫ Zy).

These assumptions are consistent and readable from the graph. Every theorem in SEM is a theorem in N-R, and conversely.

21
**STRUCTURAL ANALYSIS: SOME USEFUL RESULTS**

- Complete formal semantics of counterfactuals
- Transparent language for expressing assumptions
- Complete solution to causal-effect identification
- Legal responsibility (bounds)
- Non-compliance (universal bounds)
- Integration of data from diverse sources
- Direct and indirect effects
- Complete criterion for counterfactual testability

22
**REGRESSION VS. STRUCTURAL EQUATIONS (THE CONFUSION OF THE CENTURY)**

Regression (claimless, nonfalsifiable):
Y = ax + a1z1 + a2z2 + ... + akzk + εy
a = ∂E[Y | x, z] / ∂x = Ryx·z (the partial regression coefficient), with εy uncorrelated with X, Z by construction.

Structural (empirical, falsifiable):
Y = bx + b1z1 + b2z2 + ... + bkzk + uy
b = ∂E[Y | do(x, z)] / ∂x = ∂E[Yx,z] / ∂x
Assumptions: Yx,z = Yx,z,w for any W; cov(ui, uj) = 0 for some i, j.

23
**REGRESSION VS. STRUCTURAL EQUATIONS (CONT)**

The mother of all questions: "When would b equal a?" or, equivalently, "What kind of regressors should Z include?" Answer: when Z satisfies the back-door criterion. Question: when is b estimable by regression methods? Answer: graphical criteria are available.
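The gap between a and b is easy to exhibit by simulation (all coefficients below are invented for illustration): with a confounder Z of X and Y, the unadjusted regression slope a differs from the structural coefficient b, and adjusting for the back-door set {Z} recovers b.

```python
import random

# Structural model with made-up coefficients (illustrative only):
#   z = u_z;  x = 0.8*z + u_x;  y = 1.0*x + 2.0*z + u_y
# The structural coefficient is b = 1.0, but Z is a back-door confounder.
random.seed(0)
n = 200_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [0.8 * zi + random.gauss(0, 1) for zi in z]
y = [1.0 * xi + 2.0 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

def slope(xs, ys):
    """Simple-regression slope: cov(xs, ys) / var(xs)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
    var = sum((a - mx) ** 2 for a in xs) / len(xs)
    return cov / var

# Regressing Y on X alone: biased by the back-door path X <- Z -> Y.
a_unadjusted = slope(x, y)
# Partial coefficient R_yx.z via Frisch-Waugh: residualize both on Z.
byz, bxz = slope(z, y), slope(z, x)
ry = [yi - byz * zi for yi, zi in zip(y, z)]
rx = [xi - bxz * zi for xi, zi in zip(x, z)]
a_adjusted = slope(rx, ry)
print(round(a_unadjusted, 1), round(a_adjusted, 1))   # 2.0 1.0
```

Here a ≈ 2.0 ≠ b = 1.0 without adjustment, while the Z-adjusted partial coefficient equals b because {Z} satisfies the back-door criterion in this graph.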

24
**THE BACK-DOOR CRITERION**

Graphical test of identification: P(y | do(x)) is identifiable in G if there is a set Z of variables such that Z d-separates X from Y in Gx, the subgraph of G with all arrows emanating from X removed.

[Figure: a graph G over X, Y, Z1, ..., Z6 and its mutilated version Gx; the edge structure is not recoverable from the transcript.]

Moreover, P(y | do(x)) = Σz P(y | x, z) P(z) ("adjusting" for Z).
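The adjustment formula is a one-liner over empirical frequencies. A sketch with a single binary back-door variable (the data-generating probabilities are invented for illustration):

```python
import random

# Estimating P(y | do(x)) by the adjustment formula sum_z P(y | x, z) P(z).
# Synthetic ground truth (illustrative): Z confounds X and Y.
random.seed(0)
data = []
for _ in range(200_000):
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)                        # Z -> X
    y = random.random() < (0.7 if x else 0.2) + (0.1 if z else 0.0)  # X -> Y <- Z
    data.append((x, y, z))

def p_y_do_x(data, x_val):
    """Back-door adjustment: sum_z P(y=1 | x=x_val, z) P(z)."""
    n = len(data)
    total = 0.0
    for z_val in (False, True):
        p_z = sum(1 for _, _, z in data if z == z_val) / n
        ys = [y for x, y, z in data if x == x_val and z == z_val]
        total += (sum(ys) / len(ys)) * p_z
    return total

effect = p_y_do_x(data, True) - p_y_do_x(data, False)
print(round(effect, 2))   # ~ 0.5, the true interventional contrast 0.7 - 0.2
```

The naive conditional contrast P(y | x=1) − P(y | x=0) would overstate this effect, because conditioning mixes in the back-door path through Z.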

25
**RECENT RESULTS ON IDENTIFICATION**

- The do-calculus is complete.
- Complete graphical criterion for identifying causal effects (Shpitser and Pearl, 2006).
- Complete graphical criterion for empirical testability of counterfactuals (Shpitser and Pearl, 2007).

26
**DETERMINING THE CAUSES OF EFFECTS (The Attribution Problem)**

Your Honor! My client (Mr. A) died BECAUSE he used that drug.

27
**DETERMINING THE CAUSES OF EFFECTS (The Attribution Problem, CONT)**

Court to decide if it is MORE PROBABLE THAN NOT that A would be alive BUT FOR the drug:

PN = P(A would be alive had he not taken the drug | A is dead, took the drug) > 0.50?

28
**THE PROBLEM**

Semantical problem: what is the meaning of PN(x, y), "the probability that event y would not have occurred if it were not for event x, given that x and y did in fact occur"?

29
**THE PROBLEM (CONT)**

Answer to the semantical problem: PN(x, y) is computable from any fully specified model M.

30
**THE PROBLEM (CONT)**

Analytical problem: under what conditions can PN(x, y) be learned from statistical data, i.e., observational, experimental, or combined?

31
**TYPICAL THEOREMS (Tian and Pearl, 2000)**

- Bounds given combined nonexperimental and experimental data:
  max{0, [P(y) − P(y | do(x′))] / P(x, y)} ≤ PN ≤ min{1, [P(y′ | do(x′)) − P(x′, y′)] / P(x, y)}
- Identifiability under monotonicity (combined data): PN equals a corrected excess-risk-ratio,
  PN = [P(y | x) − P(y | x′)] / P(y | x) + [P(y | x′) − P(y | do(x′))] / P(x, y)
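The bounds are direct arithmetic on the two data sources. A sketch of the combined-data bounds as stated in Tian and Pearl (2000); the input numbers below are invented for illustration, not the slides' data:

```python
def pn_bounds(p_y, p_y_do_x0, p_xy, p_x0y0):
    """
    Tian-Pearl (2000) bounds on the probability of necessity PN,
    combining experimental (do) and nonexperimental quantities:
      max{0, [P(y) - P(y|do(x'))] / P(x, y)}
        <= PN <=
      min{1, [P(y'|do(x')) - P(x', y')] / P(x, y)}
    """
    lower = max(0.0, (p_y - p_y_do_x0) / p_xy)
    upper = min(1.0, ((1.0 - p_y_do_x0) - p_x0y0) / p_xy)
    return lower, upper

# Invented numbers for illustration only:
p_y = 0.30        # P(y): overall death rate (nonexperimental)
p_y_do_x0 = 0.10  # P(y | do(x')): death rate with the drug withheld (experimental)
p_xy = 0.25       # P(x, y): took the drug and died
p_x0y0 = 0.60     # P(x', y'): no drug and survived
lo, hi = pn_bounds(p_y, p_y_do_x0, p_xy, p_x0y0)
print(round(lo, 2), round(hi, 2))   # 0.8 1.0: "more probable than not" settled
```

With these (made-up) inputs the lower bound already exceeds 0.5, so the legal threshold is met without point-identifying PN.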

32
**CAN FREQUENCY DATA DECIDE LEGAL RESPONSIBILITY?**

[Table: deaths (y) and survivals (y′) out of 1,000 subjects in each of four groups (experimental do(x) and do(x′); nonexperimental x and x′); the individual counts are not recoverable from the transcript.]

- Nonexperimental data: drug usage predicts longer life.
- Experimental data: the drug has a negligible effect on survival.
- Plaintiff: Mr. A is special: he actually died, and he used the drug by choice.
- Court to decide (given both data sources): is it more probable than not that A would be alive but for the drug?

33
**SOLUTION TO THE ATTRIBUTION PROBLEM**

Combining the two data sources, the bounds collapse: 1 ≤ PN ≤ 1, i.e., Mr. A would be alive but for the drug WITH PROBABILITY ONE. Combined data tell more than each study alone.

34
**CONCLUSIONS**

Structural-model semantics, enriched with logic and graphs, provides:
- a complete formal basis for the N-R model;
- unification of the graphical, potential-outcome, and structural-equation approaches;
- a powerful and friendly causal calculus (the best features of each approach).
