
From Association Analysis to Causal Discovery Prof Jiuyong Li University of South Australia.


1 From Association Analysis to Causal Discovery Prof Jiuyong Li University of South Australia

2 Association analysis Diapers → Beer; Bread & Butter → Milk

3 Positive correlation of birth rate with stork population. Would increasing the stork population increase the birth rate?

4 Further evidence for Causality ≠ Association: Simpson's paradox

All patients:
          Recovered  Not recovered  Sum  Recovery rate
Drug      20         20             40   50%
No drug   16         24             40   40%
Sum       36         44             80

Female:
          Recovered  Not recovered  Sum  Recovery rate
Drug      2          8              10   20%
No drug   9          21             30   30%
Sum       11         29             40

Male:
          Recovered  Not recovered  Sum  Recovery rate
Drug      18         12             30   60%
No drug   7          3              10   70%
Sum       25         15             40

Pooled over all patients the drug looks beneficial, yet within both the female and the male subgroups it looks harmful.
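The paradox in the tables above can be checked directly. The sketch below replays the slide's numbers: the drug has a lower recovery rate in each subgroup but a higher rate overall.

```python
# Recovery counts from the slide, as (recovered, not recovered).
drug = {"female": (2, 8), "male": (18, 12)}
no_drug = {"female": (9, 21), "male": (7, 3)}

def rate(recovered, not_recovered):
    return recovered / (recovered + not_recovered)

# Within each subgroup, the drug looks worse...
for sex in ("female", "male"):
    assert rate(*drug[sex]) < rate(*no_drug[sex])

# ...yet pooled over both subgroups, the drug looks better.
drug_total = tuple(sum(v) for v in zip(*drug.values()))       # (20, 20)
no_drug_total = tuple(sum(v) for v in zip(*no_drug.values())) # (16, 24)
print(rate(*drug_total), rate(*no_drug_total))  # 0.5 0.4
```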

5 Association and causal relationship Two variables X and Y. If Prob(Y | X) ≠ Prob(Y), X is associated with Y (association rules). Causation concerns Prob(Y | do X): how does Y vary when X is changed by intervention? The key question: how to estimate Prob(Y | do X)? In association analysis, the relationship of X and Y is analysed in isolation; however, the relationship between X and Y is affected by other variables.
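The gap between Prob(Y | X) and Prob(Y | do X) can be illustrated with a small simulation. In this hypothetical model (my construction, not from the slides) a confounder Z drives both X and Y, while X has no effect on Y at all: the observed conditional probability is inflated, but forcing X leaves Y unchanged.

```python
import random

random.seed(0)

# Hypothetical confounded model: Z -> X and Z -> Y; X does NOT cause Y.
def sample(do_x=None):
    z = random.random() < 0.5
    # Observationally X follows Z; under do(X) it is forced.
    x = (random.random() < (0.9 if z else 0.1)) if do_x is None else do_x
    y = random.random() < (0.8 if z else 0.2)   # Y depends only on Z
    return x, y

obs = [sample() for _ in range(100_000)]
p_y_given_x1 = sum(y for x, y in obs if x) / sum(x for x, y in obs)

interv = [sample(do_x=True) for _ in range(100_000)]
p_y_do_x1 = sum(y for _, y in interv) / len(interv)

# Analytically: P(Y=1 | X=1) = 0.74 but P(Y=1 | do X=1) = 0.5.
print(round(p_y_given_x1, 2), round(p_y_do_x1, 2))
```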

6 Causal discovery 1 Randomised controlled trials – The gold standard method – Expensive – Sometimes infeasible In a randomised controlled trial, association = causation.

7 Causal discovery 2 Bayesian network based causal inference – Do-calculus (Pearl 2000) – IDA (Maathuis et al. 2009) – Infers causal effects in a Bayesian network. – However: – Constructing a Bayesian network is NP-hard – Low scalability to a large number of variables

8 Learning causal structures PC algorithm (Spirtes, Glymour and Scheines) – If not (A ╨ B | Z), there is an edge between A and B. – The search space increases exponentially with the number of variables. Constraint based search – CCC (G. F. Cooper, 1997) – CCU (C. Silverstein et al., 2000) – Efficiently removes non-causal relationships. [Figure: example structures for the CCC pattern (A, B, C pairwise associated, admitting several causal orderings such as A → B → C) and the CCU pattern (A and B each associated with C but not with each other, implying A → C ← B).]

9 Association rules Many efficient algorithms. Hundreds of thousands to millions of rules – many are spurious. Interpretability – association rules do not indicate causal effects.

10 Causal rules Discover causal relationships using partial association and a simulated cohort study. Do not rely on Bayesian network structure learning. The discovery of causal rules also has strong theoretical support. Discover both single causes and combined causes. Can be discovered efficiently. Z. Jin, J. Li, L. Liu, T. D. Le, B. Sun, and R. Wang. Discovery of causal rules using partial association. ICDM, 2012. J. Li, T. D. Le, L. Liu, J. Liu, Z. Jin, and B. Sun. Mining causal association rules. In Proceedings of ICDM Workshop on Causal Discovery (CD), 2013.

11 Problem Discover causal rules from large databases of binary variables.

A B C D E F Y  #repeats
1 1 1 1 1 1 1  14
1 0 1 1 1 1 1  8
1 1 0 1 0 1 1  15
0 1 1 1 1 1 1  8
0 1 0 0 0 0 0  5
0 0 0 0 1 0 1  6
1 0 0 0 0 1 0  4
1 0 1 1 1 0 0  3
0 1 0 1 1 0 0  3
0 1 0 0 1 0 0  5

Discovered causal rules: A → Y, C → Y, BF → Y, DE → Y

12 Partial association test (M. W. Birch, 1964) Stratify the data on the other variables K and test whether the association between I and J persists within every stratum. If the association summed over all strata is significant, I and J have a nonzero partial association. [Figure: the I–J association is tested within each partition of the data defined by K.]
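One common way to combine the per-stratum 2×2 tables into a single test is the Mantel–Haenszel chi-square statistic; I use it here as an illustrative stand-in for Birch's partial association test, since the two address the same question (does the I–J association persist after stratifying on K?).

```python
def mantel_haenszel(tables):
    """Continuity-corrected Mantel-Haenszel chi-square over 2x2 strata.

    tables: list of strata, each [[a, b], [c, d]] counting I x J outcomes.
    """
    num, var = 0.0, 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        # Deviation of the observed a-cell from its expectation under
        # independence, accumulated across strata.
        num += a - (a + b) * (a + c) / n
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n * n * (n - 1))
    return (abs(num) - 0.5) ** 2 / var

# Two strata in which I and J are associated in the same direction:
stat = mantel_haenszel([[[30, 10], [10, 30]], [[20, 5], [5, 20]]])
print(stat)  # well above the 3.84 cutoff for alpha = 0.05, 1 df
```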

13 Partial association test – an example (Step 4: partial association test.)

A B C D E F Y G  #repeats
1 1 1 1 1 1 1 0  14
1 0 1 1 1 1 1 0  8
1 1 0 1 0 1 1 0  15
0 1 1 1 1 1 1 0  8
0 1 0 0 0 0 0 0  5
0 0 0 0 1 0 1 0  6
1 0 0 0 0 1 0 0  4
1 0 1 1 1 0 0 0  3
1 1 1 1 0 1 1 1  3
0 1 0 0 1 0 0 0  5

14 Fast partial association test K ranges over all possible variable combinations, and their number is very large. Counting the frequencies of the combinations is also time consuming. Our solution: – Sort the data and count the frequencies of the equivalence classes. – Only use the combinations that exist in the data set.
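The counting idea above can be sketched in a few lines: group identical records into equivalence classes, so only value combinations that actually occur in the data are ever enumerated. The toy records here are my own illustration, not the slide's data set.

```python
from collections import Counter

# Toy binary records; rows of a real data set would be loaded the same way.
records = [
    (1, 1, 0, 1), (1, 1, 0, 1), (0, 1, 1, 0),
    (1, 1, 0, 1), (0, 1, 1, 0), (0, 0, 0, 1),
]

# Grouping hashable rows collapses identical records into equivalence
# classes; combinations absent from the data never appear at all.
classes = Counter(records)

# Sorting the distinct classes mirrors the slide's "sort data" step.
for row, freq in sorted(classes.items(), reverse=True):
    print(row, freq)
```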

15 Pruning strategies Definition (Redundant causal rules): Assume that X ⊂ W. If X → Y is a causal rule, rule W → Y is redundant as it does not provide new information. Definition (Condition for testing causal rules): We only test a combined causal rule XV → Y if X and Y have a zero association and V and Y have a zero association (cannot pass the chi-square test in step 3).

16 Algorithm

A B C D E F G Y  #repeats
1 1 1 1 1 1 0 1  14
1 0 1 1 1 1 0 1  8
1 1 0 1 0 1 0 1  15
0 1 1 1 1 1 0 1  8
0 1 0 0 0 0 0 0  5
0 0 0 0 1 0 0 1  6
1 0 0 0 0 1 0 0  4
1 0 1 1 1 0 0 0  3
1 1 1 1 1 1 1 0  3
0 1 0 0 1 0 0  5

1. Prune the variable set (support).
2. Create the contingency table for each variable X:

        Y=1   Y=0   Total
X=1     n11   n12   n1.
X=0     n21   n22   n2.
Total   n.1   n.2   n

3. Calculate the chi-square statistic. If X has a positive association with Y, go to the next step; if the association is zero, move X to a set N.
4. Partial association test: if PA(X, Y, K) is nonzero, then X → Y is a causal rule.
5. Repeat 1–4 for each variable that is a combination of variables in set N.
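Step 3 above is a standard Pearson chi-square test on a 2×2 contingency table; a minimal sketch (counts are illustrative, not from the slide's data set):

```python
def chi_square_2x2(n11, n12, n21, n22):
    """Pearson chi-square statistic for a 2x2 X-vs-Y contingency table."""
    n = n11 + n12 + n21 + n22
    r1, r2 = n11 + n12, n21 + n22   # row totals n1., n2.
    c1, c2 = n11 + n21, n12 + n22   # column totals n.1, n.2
    observed = [n11, n12, n21, n22]
    expected = [r1 * c1 / n, r1 * c2 / n, r2 * c1 / n, r2 * c2 / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A clearly associated X and Y passes the 3.84 cutoff (alpha = 0.05, 1 df),
# so the algorithm would proceed to the partial association test.
stat = chi_square_2x2(30, 10, 10, 30)
print(stat)  # 20.0
```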

17 Experimental evaluations We use the Arrhythmia data set from the UCI machine learning repository. – The task is to classify the presence and absence of cardiac arrhythmia. The data set contains 452 records; each record contains 279 data attributes and one class attribute. Our results are quite consistent with the results of the CCC method. Some rules found by CCC are removed by our method as they cannot pass the partial association test. Our method can discover combined rules; the CCC and CCU methods are not designed to discover these rules.

18 Comparison with CCC and CCU

19 Experimental evaluations Figure 1: Extraction Time Comparison (20K Records). Figure 2: Extraction Time Comparison (100K Records).

20 Summary 1 Simpson's paradox – associations might be inconsistent in subsets. Partial association test – tests the persistence of associations in all possible partitions – statistically sound – efficient on sparse data. What else?

21 Cohort study 1 A defined population is split into exposed and not exposed groups; each group is then divided into those who have the disease and those who do not. Prospective: follow up. Retrospective: look back (historic study).

22 Cohort study 2 Cohorts share common characteristics but differ in exposure. Determine how the exposure causes an outcome. Measure: odds ratio = (a/b) / (c/d)

              Diseased  Healthy
Exposed       a         b
Not exposed   c         d
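The odds ratio from the table above is a one-liner; the cohort counts below are hypothetical, chosen only to show the computation.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for an exposure-vs-disease 2x2 table:
    (a/b) / (c/d), which simplifies to a*d / (b*c)."""
    return (a * d) / (b * c)

# Hypothetical cohort: 40 of 100 exposed diseased, 10 of 100 unexposed.
print(odds_ratio(40, 60, 10, 90))  # (40/60) / (10/90) = 6.0
```

An odds ratio well above 1 suggests the exposure raises the odds of disease; an odds ratio near 1 suggests no effect.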

23 Limitations of cohort study Need to know a hypothesis beforehand. Domain experts determine the control variables. Collect data and test the hypothesis. Not for data exploration. We need – a data set given without any hypotheses – an automatic method to find and validate hypotheses – suitability for data exploration.

24 Control variables If we do not control covariates (especially those correlated with the outcome), we cannot determine the true cause. Too many control variables result in too few matched cases in the data. – How many people share the same race, gender, blood type, hair colour, eye colour, education level, ….? Irrelevant variables should not be controlled. – Eye colour may not be relevant to the study. [Figure: other factors influence both the cause and the outcome.]

25 Matches Exact matching – exact matches on all covariates; infeasible. Limited exact matching – exact matches on a few key covariates. Nearest neighbour matching – find the closest neighbours. Propensity score matching – based on the predicted probability of treatment given the covariates.
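Nearest neighbour matching on binary covariates can be sketched with Hamming distance. This is a greedy illustration of the idea, not the matching procedure used in the papers; the records are hypothetical.

```python
def hamming(u, v):
    """Number of covariate positions on which two records differ."""
    return sum(a != b for a, b in zip(u, v))

def nearest_neighbour_match(exposed, unexposed):
    """Greedily pair each exposed record with its closest unexposed
    record on the covariates, using each control at most once."""
    pool = list(unexposed)
    pairs = []
    for rec in exposed:
        best = min(pool, key=lambda c: hamming(rec, c))
        pool.remove(best)
        pairs.append((rec, best))
    return pairs

exposed = [(1, 0, 1), (0, 1, 1)]
unexposed = [(1, 0, 0), (0, 1, 1), (1, 1, 1)]
pairs = nearest_neighbour_match(exposed, unexposed)
print(pairs)
```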

26 Method 1 Discover causal association rules, e.g. A → Y, from large databases of binary variables.

Full data set:
A B C D E F Y
1 1 1 1 1 1 1
1 0 1 1 1 1 1
1 1 0 1 0 1 1
0 1 1 1 1 1 1
0 1 0 0 0 0 0
0 0 0 0 1 0 1
1 0 0 0 0 1 0
1 0 1 1 1 0 0
0 1 0 1 1 0 0
0 1 0 0 1 0 0

Fair data set:
A B C D E F Y
1 1 1 1 1 1 1
1 0 1 0 1 1 1
1 1 0 1 0 1 0
1 0 1 0 1 0 0
0 1 1 1 1 1 0
0 0 1 0 1 1 0
0 1 0 1 0 1 1
0 0 1 0 1 0 1

27 Methods Fair data set, with A the exposure variable and {B, C, D, E, F} the controlled variable set; rows that agree on the controlled variable set form matched record pairs:

A B C D E F Y
1 1 1 1 1 1 1
1 0 1 0 1 1 1
1 1 0 1 0 1 0
1 0 1 0 1 0 0
0 1 1 1 1 1 0
0 0 1 0 1 1 0
0 1 0 1 0 1 1
0 0 1 0 1 0 1

Counts over the matched pairs:
             A=0, Y=1   A=0, Y=0
A=1, Y=1     n11        n12
A=1, Y=0     n21        n22

The association rule A → Y is a causal association rule if the odds ratio computed from the matched record pairs indicates a significant effect.

28 Algorithm For each association rule (e.g. A → Y):
1. Remove irrelevant variables (support, local support, association).
2. Find the exclusive variables of the exposure variable (support, association), e.g. G, F. The controlled variable set = {B, C, D, E}.
3. Find the fair data set: search for all matched record pairs.
4. Calculate the odds ratio to identify whether the tested rule is causal.
5. Repeat 2–4 for each variable that is a combination of variables; only consider combinations of non-causal factors.
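For step 4, a standard estimator of the odds ratio from matched pairs uses only the discordant pairs (the off-diagonal cells n12 and n21 of the matched-pair table). This is a common McNemar-style sketch of that computation; the pair outcomes below are illustrative, not from the slides.

```python
def matched_odds_ratio(pairs):
    """Odds ratio estimate from matched record pairs.

    pairs: list of (y_exposed, y_control) outcomes per matched pair.
    Only discordant pairs contribute: estimate = n12 / n21.
    """
    n12 = sum(1 for ye, yc in pairs if ye == 1 and yc == 0)
    n21 = sum(1 for ye, yc in pairs if ye == 0 and yc == 1)
    return n12 / n21

# Hypothetical outcomes for six matched pairs: 3 discordant pairs favour
# the exposed record, 1 favours the control.
pairs = [(1, 0), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0)]
print(matched_odds_ratio(pairs))  # 3 / 1 = 3.0
```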

29 Experimental evaluations

30 Figure 1: Extraction Time Comparison (20K Records) [chart comparing the CAR, CCC and CCU methods]

31 Experimental evaluations

32 Causality – Judea Pearl Judea Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000. [Figure: a table of observational data for continuous variables X1, X2, …, Xn-1, Xn, illustrating that an intervention of +1 on one variable produces a change of +0.8 in another.]

33 Methods IDA – Maathuis, M. H., Colombo, D., Kalisch, M., and Bühlmann, P. (2010). Predicting causal effects in large-scale systems from observational data. Nature Methods, 7(4), 247–249.

34 Conclusions Association analysis has been widely used in data mining, but associations do not indicate causal relationships. Association rule mining can be adapted for causal relationship discovery by combining it with statistical methods: – Partial association test – Cohort study These are efficient alternatives to causal Bayesian network based methods, and they are capable of finding combined causal factors.

35 Discussions Causality and classification – estimate Prob(Y | do X) instead of Prob(Y | X). Feature selection versus controlled variable selection. Evaluation of causes – not classification accuracy – Bayesian networks?

36 Research Collaborators Jixue Liu Lin Liu Thuc Le Jin Zhou Bin-yu Sun

37 Thank you for listening. Questions please?

