HETEROGENEITY Gerald Dyer, Jr., MPH October 20, 2016

OVERVIEW P&R Chapter 7 (up to page 217), Chapter 15, Chapter 16, and Chapter 18 (brief quiz game)

HETEROGENEITY??

YOUR FIRST THOUGHTS?

DEFINITION The text defines the term as follows: “we use heterogeneity to mean heterogeneity in true effects only” (p. 106). Simply complex. Includes several measures. Maybe more than just vertical and horizontal lines…

WHAT IS HETEROGENEITY?? -_-

2 TYPES OF HETEROGENEITY (P&R Chapter 7) There are two types of heterogeneity: (1) differences between studies in methods, participants, and other unknown sources, and (2) differences between studies in their quantitative findings, known as statistical heterogeneity. Systematic reviews try to limit heterogeneity in methods and participants by using very precise inclusion criteria; however, some reviews use broad inclusion criteria, so heterogeneity is expected. Statistical heterogeneity can be due to differences in the baseline characteristics of the populations and to methodological differences.

EXPECT CONSIDERABLE HETEROGENEITY (P&R Chapter 7) Primary study complexities: social interventions are complex in the content of the intervention, the study population, the outcomes, the effectiveness of the intervention, and the variation in the intervention as it is implemented.

SOCIAL HETEROGENEITY Variability in study populations, interventions, and settings; in outcomes; and in study designs. “Social heterogeneity may incorporate not only socio-demographic and individual differences, but also, historical, cultural, spatial, and other differences, which may affect both the delivery and impact of interventions being reviewed.” (p. 216)

A META-ANALYST’S INTEREST A meta-analyst is interested in whether statistical heterogeneity is greater than chance. If it is, the results suggest that the studies may not be similar enough to permit comparisons. P&R describe the Q statistic and the I² statistic. The Q statistic can test for heterogeneity, but the test has low power. Even when there is no evidence of statistical heterogeneity, a meta-analysis may not be appropriate, because similar effect sizes may be obtained from studies that are conceptually different. The I² statistic was developed to quantify the degree of inconsistency between studies. Note that the quantification of heterogeneity is only one component of the wider variability across studies.

CHAPTER 15 Central theme: “The goal of a synthesis is not simply to compute a summary effect, but rather to make sense of a pattern of effects.” Problem addressed: “The observed variation in the estimated effect sizes is partly spurious.” Array of dispersion measures: the Q statistic, its p-value, T², T, and I².

CHAPTER 16

ISOLATING THE VARIATION IN TRUE EFFECTS “When we speak about the heterogeneity in effect sizes, we mean the variation in the true effect sizes.” (TEXT p. 108) Mechanism for extracting the true between-studies variation from the observed variation: (1) compute the total amount of study-to-study variation actually observed; (2) estimate how much the observed effects would be expected to vary from each other if the true effect were actually the same in all studies; (3) the excess variation (if any) is assumed to reflect real differences in effect size (heterogeneity).
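
A compact way to express this logic, in standard meta-analysis notation (not quoted from the text): the observed study-to-study dispersion is summarized by Q, its expected value when every study shares one true effect is the degrees of freedom, and anything left over is attributed to heterogeneity,

\text{excess variation} = Q - df, \qquad df = k - 1,

where k is the number of studies. The excess is then rescaled into T^2 (an absolute amount of true variation) and I^2 (a relative proportion), as the later slides describe.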

Q (Part 1) “Statistic that is sensitive to the ratio of the observed variation to the within-study error.” (TEXT p. 109) Q is a standardized measure, computed under the assumption that all studies share a common effect size. Its expected value under that assumption is the degrees of freedom, df = k - 1, where k is the number of studies.
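
For reference, the conventional computation behind this slide (standard meta-analysis notation, with Y_i the observed effect in study i, V_i its within-study variance, and M the fixed-effect summary) is:

Q = \sum_{i=1}^{k} W_i (Y_i - M)^2, \qquad W_i = 1/V_i, \qquad M = \left(\sum_i W_i Y_i\right) / \left(\sum_i W_i\right).

Because each squared deviation is weighted by the inverse of its variance, Q is on a standardized scale and does not depend on the metric of the effect size.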

Q (Part 2) Q is not an intuitive measure: it is a sum, and it depends strongly on the number of studies. Q is on a standardized scale and is used to test the assumption of homogeneity. Q can also be used to calculate T, T², and I². See Figure 16.3 on TEXT p. 111.
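
As a quick illustration of how these quantities fit together, here is a minimal, self-contained Python sketch using purely hypothetical effect sizes and within-study variances. It applies the standard formulas (method-of-moments for T²); it is not the text's worked example and does not reproduce Figure 16.3.

import math
from scipy.stats import chi2

effects = [0.45, 0.30, 0.60, 0.10, 0.52]    # hypothetical study effect sizes (Y_i)
variances = [0.04, 0.03, 0.05, 0.02, 0.06]  # hypothetical within-study variances (V_i)

weights = [1.0 / v for v in variances]      # fixed-effect weights W_i = 1/V_i
M = sum(w * y for w, y in zip(weights, effects)) / sum(weights)  # weighted mean effect

Q = sum(w * (y - M) ** 2 for w, y in zip(weights, effects))      # weighted squared deviations
df = len(effects) - 1                       # degrees of freedom, k - 1
p_value = chi2.sf(Q, df)                    # test of the homogeneity assumption

# Method-of-moments (DerSimonian-Laird) estimate of the between-study variance
C = sum(weights) - sum(w ** 2 for w in weights) / sum(weights)
tau_squared = max(0.0, (Q - df) / C)        # T^2, truncated at zero if Q < df
tau = math.sqrt(tau_squared)                # T, in the metric of the effect size
I_squared = max(0.0, (Q - df) / Q) * 100    # percent of observed variance attributed to real differences

print(f"Q = {Q:.2f}, df = {df}, p = {p_value:.3f}")
print(f"T^2 = {tau_squared:.3f}, T = {tau:.3f}, I^2 = {I_squared:.1f}%")

Note that when Q is less than df, both T² and I² are truncated at zero: the observed dispersion is no more than would be expected from within-study error alone.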

Q - Conclusion A significant p-value provides evidence that the true effects vary. A non-significant p-value should not be taken as evidence that the effect sizes are consistent. The test assesses the viability of the null hypothesis (that all studies share a common effect size); it does not estimate the magnitude of the true dispersion.

TAU-SQUARED (T²) Defined as the estimate of the variance of the true effect sizes. T² reflects the absolute amount of variation and is in the same metric (squared) as the effect size itself. T² is used to assign weights under the random-effects model.
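
The usual method-of-moments (DerSimonian-Laird) estimator, consistent with the approach the slides describe but stated here in generic notation, converts the excess dispersion into the squared metric of the effect size:

T^2 = (Q - df) / C, \qquad C = \sum_i W_i - \left(\sum_i W_i^2\right) / \left(\sum_i W_i\right),

with T^2 set to zero when Q < df (i.e., when the observed dispersion is less than expected by chance).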

T (Tau) The estimate of the standard deviation of the true effect sizes. T can be used to describe the distribution of effect sizes about the mean effect. Note that if we wanted to make predictions about the distribution of true effects, we would need to take account of the error in estimating both the effect size and T (see p. 117).
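
As a rough illustration only, assuming the true effects are approximately normally distributed and ignoring (as the slide cautions) the error in estimating both the mean effect M and T itself, about 95% of true effects would fall in the range

M - 1.96\,T \ \text{to}\ M + 1.96\,T.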

I² I² expresses the amount of variance on a relative scale: what proportion of the observed variance reflects real differences in effect size. Example: after determining that proportion from I², we can ask whether there is any explanation for the variance. If I² is near 0, there is nothing to explain. Values of 25%, 50%, and 75% might serve as benchmarks for low, moderate, and high heterogeneity, respectively.
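
The standard computation (Higgins' I², in generic notation) expresses the excess dispersion as a proportion of the total observed dispersion:

I^2 = \left( (Q - df) / Q \right) \times 100\%,

truncated at zero when Q < df.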

CHAPTER 18 Let’s Play a Brief, Little Game! =)

NAME!!!

THAT!!!

HETEROGENEITY!!!

NAME THAT HETEROGENEITY? See Figure 16.4 on TEXT p. 113. Focus on plots A and C: the impact of Q and the number of studies.
What are the trends in the impact of Q? Answer: the additional precision in plot C moves the p-value away from zero compared with plot A, and the p-value moved toward 1.0 as studies were added.
What do the degrees of freedom mean in this example? Answer: since Q is less than the degrees of freedom, the additional evidence strengthens the case that the excess dispersion is zero.
What assumptions should we be cautious of when interpreting Q and its p-value? Answer: a significant p-value provides evidence that the true effects vary; a non-significant p-value should not be taken as evidence that the effect sizes are consistent; and the test assesses the viability of the null hypothesis (that all studies share a common effect size) rather than estimating the magnitude of the true dispersion.

THANK YOU!!