Introduction to Categorical Data Analysis July 22, 2004


1 Introduction to Categorical Data Analysis July 22, 2004

2 Categorical data The t-test, ANOVA, and linear regression all assumed outcome variables that were continuous (normally distributed). Even their non-parametric equivalents assumed at least many levels of the outcome (discrete quantitative or ordinal). We haven’t discussed the case where the outcome variable is categorical.

3 Types of Variables: a taxonomy
Categorical: binary (2 categories), nominal (+ more categories), ordinal (+ order matters).
Quantitative: discrete (+ numerical), continuous (+ uninterrupted).

4 Overview of statistical tests
Independent variable = predictor; dependent variable = outcome. E.g., BMD (a continuous outcome) predicted by pounds and age (continuous predictors) and amenorrheic (1/0, a binary predictor).

5 Types of variables to be analyzed
Predictor (independent) variable/s   Outcome (dependent) variable   Statistical procedure or measure of association
Categorical                          Continuous                     ANOVA
Dichotomous                          Continuous                     t-test
Continuous                           Continuous                     Simple linear regression
Multivariate                         Continuous                     Multiple linear regression
Categorical                          Categorical                    Chi-square test
Dichotomous                          Dichotomous                    Odds ratio, Mantel-Haenszel OR, relative risk, difference in proportions
Multivariate                         Dichotomous                    Logistic regression
Categorical                          Time-to-event                  Kaplan-Meier curve / log-rank test
Multivariate                         Time-to-event                  Cox proportional hazards model

6 Types of variables to be analyzed, by course schedule
Done: ANOVA (categorical predictor, continuous outcome); t-test (dichotomous, continuous); simple linear regression (continuous, continuous); multiple linear regression (multivariate, continuous).
Today and next week: chi-square test (categorical, categorical); odds ratio, Mantel-Haenszel OR, relative risk, difference in proportions (dichotomous, dichotomous); logistic regression (multivariate, dichotomous).
Last part of course: Kaplan-Meier curve / log-rank test (categorical, time-to-event); Cox proportional hazards model (multivariate, time-to-event).

7 Difference in proportions
Example: You poll 50 people from random districts in Florida as they exit the polls on election day. You also poll 50 people from random districts in Massachusetts. 49% of pollees in Florida say they voted for Kerry, and 53% of pollees in Massachusetts say they voted for Kerry. Is there enough evidence to reject the null hypothesis that the two states voted for Kerry in equal proportions?

8 Null distribution of a difference in proportions
Standard error of a proportion = sqrt[p(1-p)/n]. The standard error can be estimated by sqrt[p-hat(1-p-hat)/n] (the sample proportion is still approximately normally distributed). Standard error of the difference of two proportions = sqrt[p1(1-p1)/n1 + p2(1-p2)/n2]: the variance of a difference is the sum of the variances (as with a difference in means). Under the null hypothesis p1 = p2, so a single pooled estimate p-hat is used, analogous to the pooled variance in the t-test: SE = sqrt[p-hat(1-p-hat)(1/n1 + 1/n2)].

9 Null distribution of a difference in proportions
The difference of proportions is approximately normal. For our example, the null distribution is normal with mean 0 and standard error sqrt[p-hat(1-p-hat)(1/50 + 1/50)], where the pooled p-hat = (.49 + .53)/2 = .51, giving SE ≈ .10.

10 Answer to Example We saw a difference of 4% between Florida and Massachusetts. The null distribution predicts chance variation between the two states of about 10% (one standard error). P(data | null) = P(Z > .04/.10) = P(Z > 0.4) ≈ .34 > .05. Not enough evidence to reject the null.
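The z-test above can be sketched in a few lines. This is not from the original deck (whose code is SAS); it is a minimal Python illustration, standard library only, using the slide's sample sizes and proportions:

```python
import math

# Two-proportion z-test for the Florida vs. Massachusetts example.
n1, p1 = 50, 0.49   # Florida
n2, p2 = 50, 0.53   # Massachusetts

# Pooled proportion under the null hypothesis of equal proportions
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)

# Standard error of the difference under the null (pooled estimate)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

z = (p2 - p1) / se                                     # observed difference / SE
p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))   # one-sided P(Z > z)

print(f"SE = {se:.3f}, Z = {z:.2f}, one-sided p = {p_value:.2f}")
```

Running it reproduces the slide's numbers: SE ≈ .10, Z ≈ 0.4, and a p-value well above .05.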

11 Chi-square test for comparing proportions (of a categorical variable) between groups
I. Chi-Square Test of Independence When both your predictor and outcome variables are categorical, they may be cross-classified in a contingency table and compared using a chi-square test of independence. A contingency table with R rows and C columns is an R x C contingency table.

12 Example: Asch, S.E. (1955). Opinions and social pressure. Scientific American, 193.

13 The Experiment A Subject volunteers to participate in a “visual perception study.” Everyone else in the room is actually a conspirator in the study (unbeknownst to the Subject). The “experimenter” reveals a pair of cards…

14 The Task: the cards show a standard line and comparison lines A, B, and C.

15 The Experiment Everyone goes around the room and says which comparison line (A, B, or C) is correct; the true Subject always answers last – after hearing all the others’ answers. The first few times, the 7 “conspirators” give the correct answer. Then, they start purposely giving the (obviously) wrong answer. 75% of Subjects tested went along with the group’s consensus at least once.

16 Further Results In a further experiment, the group size (number of conspirators) was varied from 2 to 10. Does the group size alter the proportion of subjects who conform?

17 The Chi-Square test
Conformed?     Number of group members
                2    4    6    8   10
Yes            20   50   75   60   30
No             80   50   25   40   70
Apparently, conformity is less likely with fewer or with more group members…

18 In total, 235 conformed out of 500 experiments. Overall likelihood of conforming = 235/500 = .47

19 Expected frequencies if no association between group size and conformity…
Conformed?     Number of group members
                2    4    6    8   10
Yes            47   47   47   47   47
No             53   53   53   53   53

20 Do observed and expected differ more than expected due to chance?

21 Chi-Square test: chi-square = sum over all cells of (observed - expected)^2 / expected. Degrees of freedom = (rows-1)*(columns-1) = (2-1)*(5-1) = 4. Rule of thumb: a chi-square statistic much greater than its degrees of freedom indicates statistical significance. Here 85 >> 4.
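The expected-count and chi-square computation can be sketched in Python (not part of the original deck). One assumption: the "No" count of 50 for group size 4 is inferred from the fact that each column on slide 17 totals 100 subjects; with these reconstructed counts the statistic comes out near 79.5, the same order as the slide's quoted 85 and far above the 4 degrees of freedom either way:

```python
# Observed conformity counts from slide 17, by group size (2, 4, 6, 8, 10).
# ASSUMPTION: the "No" count of 50 for group size 4 is inferred from the
# column totals of 100; it was missing from the transcript.
observed = [
    [20, 50, 75, 60, 30],   # conformed
    [80, 50, 25, 40, 70],   # did not conform
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Expected counts under independence: (row total * column total) / grand total
expected = [[r * c / grand for c in col_totals] for r in row_totals]

# Chi-square statistic: sum of (O - E)^2 / E over all cells
chi2 = sum((o - e) ** 2 / e
           for o_row, e_row in zip(observed, expected)
           for o, e in zip(o_row, e_row))
df = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi-square = {chi2:.1f} on {df} df")
```

The expected counts work out to 47 and 53 per column, matching slide 19.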

22 The Chi-Square distribution is the sum of squared standard normal deviates.
The expected value and variance of a chi-square distribution: E(X) = df; Var(X) = 2(df).

23 Chi-Square test: chi-square = sum over all cells of (observed - expected)^2 / expected. Degrees of freedom = (rows-1)*(columns-1) = (2-1)*(5-1) = 4. Rule of thumb: a chi-square statistic much greater than its degrees of freedom indicates statistical significance. Here 85 >> 4.

24 Caveat **When the sample size is very small in any cell (expected count < 5), Fisher's exact test is used as an alternative to the chi-square test.

25 Example of Fisher’s Exact Test

26 Fisher’s “Tea-tasting experiment”
Claim: Fisher’s colleague (call her “Cathy”) claimed that, when drinking tea, she could distinguish whether milk or tea was added to the cup first. To test her claim, Fisher designed an experiment in which she tasted 8 cups of tea (4 cups had milk poured first, 4 had tea poured first). Null hypothesis: Cathy’s guessing abilities are no better than chance. Alternative hypotheses: Right-tail: she guesses right more than expected by chance. Left-tail: she guesses wrong more than expected by chance.

27 Fisher’s “Tea-tasting experiment”
Experimental results:
                      Poured first: Milk   Tea
Guessed milk first                   3      1     4
Guessed tea first                    1      3     4
                                     4      4     8

28 Fisher’s Exact Test Step 1: Identify tables that are as extreme or more extreme than what actually happened. Here she identified 3 out of 4 of the milk-poured-first teas correctly. Is that good luck or real talent? The only way she could have done better is if she had identified 4 of 4 correctly:
Observed table (3 correct):      3  1 / 1  3
More extreme table (4 correct):  4  0 / 0  4

29 Fisher’s Exact Test Step 2: Calculate the probability of each such table, assuming fixed marginals (the hypergeometric distribution): P(3 correct) = C(4,3)C(4,1)/C(8,4) = 16/70 ≈ .229; P(4 correct) = C(4,4)C(4,0)/C(8,4) = 1/70 ≈ .014.

Step 3: To get the left-tail and right-tail p-values, consider the probability mass function of X, where X = the number of correct identifications of the cups with milk poured first.
“Right-hand tail probability” (she guesses right more than chance): p = P(X >= 3) = .229 + .014 = .243
“Left-hand tail probability” (testing the null hypothesis that she’s systematically wrong): p = P(X <= 3) = 1 - 1/70 ≈ .986
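The hypergeometric tail probabilities above can be checked with a short Python sketch (not from the deck, which uses SAS; standard library only):

```python
from math import comb

# Hypergeometric pmf for the tea-tasting table with all margins fixed at 4:
# X = number of milk-first cups correctly identified.
N, K, n = 8, 4, 4   # 8 cups; 4 truly milk-first; 4 guessed milk-first

def pmf(x):
    # P(X = x) = C(K, x) * C(N-K, n-x) / C(N, n)
    return comb(K, x) * comb(N - K, n - x) / comb(N, n)

right_tail = pmf(3) + pmf(4)                   # as or more extreme than 3 correct
left_tail = sum(pmf(x) for x in range(0, 4))   # P(X <= 3)

print(f"P(X=3) = {pmf(3):.3f}, right tail = {right_tail:.3f}, left tail = {left_tail:.3f}")
```

This reproduces the slide's p = .243 (right tail) and p = .986 (left tail).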

31 SAS code and output for generating Fisher’s Exact statistics for 2x2 table
                      Poured first: Milk   Tea
Guessed milk first                   3      1     4
Guessed tea first                    1      3     4

data tea;
input MilkFirst GuessedMilk Freq;
datalines;
1 1 3
1 0 1
0 1 1
0 0 3
;
run;

data tea; *Fix quirky reversal of SAS 2x2 tables;
set tea;
MilkFirst=1-MilkFirst;
GuessedMilk=1-GuessedMilk;
run;

proc freq data=tea;
tables MilkFirst*GuessedMilk / exact;
weight freq;
run;

33 SAS output Statistics for Table of MilkFirst by GuessedMilk
Statistic                      DF      Value     Prob
------------------------------------------------------
Chi-Square                      1     2.0000   0.1573
Likelihood Ratio Chi-Square     1     2.0930   0.1480
Continuity Adj. Chi-Square      1     0.5000   0.4795
Mantel-Haenszel Chi-Square      1     1.7500   0.1859
Phi Coefficient                       0.5000
Contingency Coefficient               0.4472
Cramer's V                            0.5000

WARNING: 100% of the cells have expected counts less than 5. Chi-Square may not be a valid test.

Fisher's Exact Test
----------------------------------
Cell (1,1) Frequency (F)         3
Left-sided Pr <= F          0.9857
Right-sided Pr >= F         0.2429
Table Probability (P)       0.2286
Two-sided Pr <= P           0.4857

Sample Size = 8

34 Introduction to the 2x2 Table

35 Introduction to the 2x2 Table
                 Exposure (E)   No Exposure (~E)
Disease (D)           a               b            a+b
No Disease (~D)       c               d            c+d
                     a+c             b+d            N
The row margin gives the marginal probability of disease, P(D) = (a+b)/N; the column margin gives the marginal probability of exposure, P(E) = (a+c)/N.

36 Cohort Studies
From the target population, a disease-free cohort is sampled and divided into Exposed and Not Exposed groups; both are then followed over TIME to see who develops disease and who remains disease-free.

37 The Risk Ratio, or Relative Risk (RR)
                 Exposure (E)   No Exposure (~E)
Disease (D)           a               b
No Disease (~D)       c               d
                     a+c             b+d
RR = risk to the exposed / risk to the unexposed = [a/(a+c)] / [b/(b+d)]

38 Hypothetical Data

                    CHF    No CHF
High Systolic BP    400      1100    1500
Normal BP           400      2600    3000

RR = (400/1500) / (400/3000) = .27/.13 = 2.0
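As a check on the arithmetic, a minimal Python sketch of the risk-ratio calculation (not from the deck; one cell count of 400 for the normal-BP/CHF cell is reconstructed from the row totals, which the transcript had collapsed):

```python
# 2x2 counts from slide 38.
# ASSUMPTION: the 400 in the normal-BP/CHF cell is reconstructed from the
# row totals (1500 and 3000), which pin it down.
a, b = 400, 400     # CHF among high systolic BP, CHF among normal BP
c, d = 1100, 2600   # no CHF in each group

risk_exposed = a / (a + c)      # 400/1500, risk with high systolic BP
risk_unexposed = b / (b + d)    # 400/3000, risk with normal BP
rr = risk_exposed / risk_unexposed
print(f"RR = {rr:.1f}")
```

High systolic BP carries twice the risk of congestive heart failure in these hypothetical data.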

39 Case-Control Studies
Sample on disease status and ask retrospectively about exposures (for rare diseases).
Marginal probabilities of exposure for cases and controls are valid.
Doesn't require knowledge of the absolute risks of disease.
For rare diseases, can approximate the relative risk.

40 Case-Control Studies
From the target population, sample those with disease (cases) and those without disease (controls); then classify each group, retrospectively, as exposed in the past or not exposed.

41 The Odds Ratio (OR)

                 Exposure (E)     No Exposure (~E)
Disease (D)      a = P(D&E)       b = P(D&~E)
No Disease (~D)  c = P(~D&E)      d = P(~D&~E)

OR = odds of exposure among the diseased / odds of exposure among the non-diseased = (a/b)/(c/d) = ad/bc

42 The Odds Ratio Via Bayes’ Rule
OR = [P(E|D)/P(~E|D)] / [P(E|~D)/P(~E|~D)]. Rewriting each conditional probability with Bayes’ rule gives OR = [P(D|E)/P(D|~E)] x [P(~D|~E)/P(~D|E)]. When disease is rare, P(~D|E) ≈ P(~D|~E) ≈ 1 (“The Rare Disease Assumption”), so the second factor ≈ 1 and OR ≈ P(D|E)/P(D|~E) = RR.

43 Properties of the OR (simulation)

44 Properties of the lnOR The lnOR is approximately normally distributed, with standard deviation = sqrt(1/a + 1/b + 1/c + 1/d).

45 Hypothetical Data

                  Smoker   Non-smoker
Lung Cancer         20         10        30
No lung cancer       6         24        30

Note how the size of the smallest 2x2 cell determines the magnitude of the variance: the 1/6 term dominates sqrt(1/20 + 1/10 + 1/6 + 1/24).
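The OR and its log-scale confidence interval for the smoking table can be sketched in Python (not from the deck; the 95% interval is an illustration of the lnOR formula on slide 44, not a number the slides state):

```python
import math

# Smoking / lung cancer table from slide 45
a, b = 20, 10   # lung cancer: smokers, non-smokers
c, d = 6, 24    # no lung cancer: smokers, non-smokers

odds_ratio = (a * d) / (b * c)                  # cross-product ratio ad/bc
se_ln_or = math.sqrt(1/a + 1/b + 1/c + 1/d)     # dominated by the smallest cell (1/6)

# Approximate 95% CI on the log scale, then exponentiate back
lo = math.exp(math.log(odds_ratio) - 1.96 * se_ln_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_ln_or)
print(f"OR = {odds_ratio:.1f}, SE(lnOR) = {se_ln_or:.3f}, 95% CI ({lo:.1f}, {hi:.1f})")
```

The wide interval, roughly 2.5 to 26, reflects the small cells: shrink the cell of 6 and the interval widens further.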

46 Example: Cell phones and brain tumors (cross-sectional data)
                          Brain tumor   No brain tumor
Own a cell phone               5             347          352
Don't own a cell phone         3              88           91
                               8             435          453

47 Same data, but use a chi-square test or Fisher's exact test

                          Brain tumor   No brain tumor
Own a cell phone               5             347          352
Don't own a cell phone         3              88           91
                               8             435          453

48 Same data, but use the odds ratio

                          Brain tumor   No brain tumor
Own a cell phone               5             347          352
Don't own a cell phone         3              88           91
                               8             435          453

OR = ad/bc = (5 x 88) / (347 x 3) = 440/1041 ≈ .42
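The cross-product odds ratio for the cell-phone table is a one-liner; a hedged Python sketch (not from the deck):

```python
# Cell phone / brain tumor table from slide 46 (cross-sectional data)
a, b = 5, 347    # own a cell phone: tumor, no tumor
c, d = 3, 88     # don't own:        tumor, no tumor

odds_ratio = (a * d) / (b * c)   # (5*88)/(347*3) = 440/1041
print(f"OR = {odds_ratio:.2f}")
```

An OR below 1 here says cell-phone owners had lower odds of brain tumor in this sample; with only 8 tumors total, the small cells make the estimate very imprecise.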

