
Evaluation Research and Meta-analysis



Presentation on theme: "Evaluation Research and Meta-analysis"— Presentation transcript:

1 Evaluation Research and Meta-analysis

2 Significance and effect sizes
What is the problem with just using p-levels to determine whether one variable has an effect on another?
Be careful with comparisons. Sample results:
For boys, r(87) = .31, p = .03
For girls, r(98) = .24, p = .14
How does sample size affect effect size? Significance?
Why are effect sizes important?
What is the difference between statistical, practical, and clinical significance?
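One way to see the problem with p-levels alone: the same correlation can look "significant" or not depending only on sample size. A minimal sketch of the standard r-to-t conversion (the sample values below are made up for illustration):

```python
import math

def t_from_r(r, n):
    """t statistic for testing a Pearson r against zero (df = n - 2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Identical effect size, very different test statistics:
print(round(t_from_r(0.30, 20), 2))   # small sample: nonsignificant
print(round(t_from_r(0.30, 200), 2))  # large sample: highly significant
```

The effect size (r = .30) is identical in both calls; only n changes, which is why reporting the effect size matters.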

3 What should you report?
2-group comparison: treatment vs. control on anxiety symptoms
3-group comparison: positive prime vs. negative prime vs. no prime on number of problems solved
2 continuous variables: relationship between neuroticism and goal directedness
3 continuous variables: anxiety as a function of self-esteem and authoritarian parenting
2 categorical variables: relationship between answers to 2 multiple-choice questions

4 Narrative vs. quantitative reviews
When was the first meta-analysis? When was the term first used? What are the advantages of quantitative reviews? What are particular critiques of them?

5 Steps to meta-analysis

6 1. Define your variables/question
1-df contrasts. What is a contrast?

7 2. Decide on inclusion criteria
What factors do you want to consider here?

8 3. Collect studies systematically
Where do you find studies? File drawer problem

9 4. Check for publication bias
Rosenthal’s fail-safe N
Number of additional null studies needed to bring the result back to p = .05: N = (K/2.706)(K(mean Z)^2 - 2.706), equivalently (sum of Z)^2/2.706 - K
Z = Z for that level of p
K = number of studies in meta-analysis
(2.706 = 1.645 squared, the one-tailed critical Z for p = .05, squared)
Funnel plot (and recent controversy)
Rank correlation test for pub bias
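Rosenthal's fail-safe N can be computed directly: it is the number of unpublished null (Z = 0) studies that would pull the combined one-tailed result back to p = .05. A sketch with made-up per-study Z values:

```python
def fail_safe_n(zs):
    """Rosenthal's fail-safe N: (sum Z)^2 / 2.706 - K,
    algebraically equal to (K/2.706) * (K * (mean Z)^2 - 2.706)."""
    k = len(zs)
    return (sum(zs) ** 2) / 2.706 - k

zs = [2.0, 2.3, 1.8, 2.5]   # hypothetical per-study Z scores
print(round(fail_safe_n(zs), 1))
```

If the result is small relative to K (a common rule of thumb is 5K + 10), the file drawer problem is a real threat to the conclusion.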

10 Fig. 3. Funnel plots of 11 (subsets of) meta-analyses from 2011 and Greenwald, Poehlman, Uhlmann, and Banaji (2009). I-Chi(1) represents Ioannidis and Trikalinos’ (2007) test for an excess of significant results and BIAS Z represents Sterne and Egger’s (2005) test for funnel plot asymmetry. Marjan Bakker et al., Perspectives on Psychological Science, 2012;7: Copyright © by Association for Psychological Science

11 What can you do if publication bias is a problem?
Trim and fill
Sensitivity analysis
Weight studies
PET-PEESE (Figure 1)
Bayesian approaches
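As a sketch of the PET idea (the first step of PET-PEESE): regress effect sizes on their standard errors with inverse-variance weights; the intercept estimates the effect a hypothetically infinitely precise study would show. The data below are fabricated so that effects rise exactly with standard error, mimicking small-study bias:

```python
def pet_intercept(effects, ses):
    """Weighted least squares of effect size on standard error
    (weights 1/se^2); returns the intercept, the PET estimate."""
    w = [1.0 / s ** 2 for s in ses]
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, ses))
    swy = sum(wi * y for wi, y in zip(w, effects))
    swxx = sum(wi * x * x for wi, x in zip(w, ses))
    swxy = sum(wi * x * y for wi, x, y in zip(w, ses, effects))
    slope = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    return (swy - slope * swx) / sw

ses = [0.05, 0.10, 0.20, 0.30]
effects = [0.20 + 1.5 * s for s in ses]   # biased: bigger se, bigger effect
print(round(pet_intercept(effects, ses), 3))  # recovers the "true" 0.2 here
```

PEESE replaces the standard error with its square as the predictor; real implementations also choose between the two based on a test of the PET intercept.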

12 5. Calculate effect sizes
If there is more than 1 effect per study, what do you do? What does the sign mean on an effect size? What are small, medium, and large effects? How can you convert from one to another? r or d?
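The r/d conversions asked about above can be sketched in a few lines; these are the standard formulas, which assume roughly equal group sizes for d:

```python
import math

def d_from_r(r):
    """Convert correlation r to Cohen's d (equal-n assumption)."""
    return 2 * r / math.sqrt(1 - r ** 2)

def r_from_d(d):
    """Convert Cohen's d back to r (equal-n assumption)."""
    return d / math.sqrt(d ** 2 + 4)

print(round(d_from_r(0.5), 3))   # a large r maps to d > 1
print(round(r_from_d(0.2), 3))   # a small d maps to roughly r = .1
```

Note the sign survives the conversion, so a negative d yields a negative r and vice versa.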

13 Families of effect sizes
2-group comparisons (difference between the means):
Cohen’s d
Hedges’ g
Glass’s delta (Δ)
Continuous or multi-group (proportion of variability):
Eta-squared (η²)
Partial eta-squared (ηp²)
Generalized eta-squared (ηG²)
r, Fisher’s z, R², adjusted R²
ω² and its variants
Difference between the η² and R² families
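For the two-group family, here is a minimal sketch of Cohen's d with a pooled standard deviation, plus Hedges' small-sample correction g (the data are invented for illustration):

```python
import math

def cohens_d(a, b):
    """Cohen's d: mean difference over the pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

def hedges_g(a, b):
    """Hedges' g: d times the correction factor 1 - 3/(4*df - 1)."""
    df = len(a) + len(b) - 2
    return cohens_d(a, b) * (1 - 3 / (4 * df - 1))

treatment = [5, 6, 7, 8, 9]
control = [3, 4, 5, 6, 7]
print(round(cohens_d(treatment, control), 3))
print(round(hedges_g(treatment, control), 3))
```

With samples this small, g is noticeably smaller than d; the correction matters little once groups have a few dozen cases each.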

14 Nonparametric effect sizes
Nonnormal data: convert z to r or d
Categorical data:
Rho
Cramér’s V
Goodman-Kruskal’s lambda
How can you increase your effect sizes?
How can you calculate confidence intervals around your effect sizes?
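One standard answer to the last question, for r: transform to Fisher's z (whose standard error is 1/sqrt(n - 3)), build a normal-theory interval, and transform back. A sketch with a hypothetical r and n:

```python
import math

def r_confidence_interval(r, n, z_crit=1.96):
    """95% CI for a correlation via the Fisher z transform."""
    z = math.atanh(r)              # Fisher's z
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = r_confidence_interval(0.30, 103)
print(round(lo, 3), round(hi, 3))
```

Because tanh is nonlinear, the interval is asymmetric around r, which is exactly the behavior the transform is meant to produce near the |r| = 1 boundaries.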

15 Interpretation of effect sizes
Recommended for at least the most important findings:
PS
U
Binomial effect size display (p. 14)
Relative risk
Odds ratio
Risk difference
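The three risk-based measures at the end of the list all come from the same 2x2 table. A sketch with invented counts (10/100 events in the treatment group, 20/100 in control):

```python
def risk_measures(events_t, n_t, events_c, n_c):
    """Relative risk, odds ratio, and risk difference from a 2x2 table."""
    p_t, p_c = events_t / n_t, events_c / n_c
    rr = p_t / p_c                                        # relative risk
    odds_ratio = (events_t / (n_t - events_t)) / (events_c / (n_c - events_c))
    rd = p_t - p_c                                        # risk difference
    return rr, odds_ratio, rd

rr, orr, rd = risk_measures(10, 100, 20, 100)
print(round(rr, 3), round(orr, 3), round(rd, 3))
```

Note the odds ratio (0.444 here) is further from 1 than the relative risk (0.5), a gap that widens as the outcome becomes more common, which is one reason the choice of measure affects interpretation.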

16 6. Look at heterogeneity of effect sizes
Chi-square test
I² (a measure based on chi-square)
Cochran’s Q
Standard deviations of effect sizes
Stem-and-leaf plot (p. 671)
Box plot
Forest plot
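Cochran's Q and I² can be sketched directly from the effect sizes and their variances (the values below are invented): Q is a weighted sum of squared deviations from the pooled mean, and I² expresses the share of Q in excess of its degrees of freedom:

```python
def heterogeneity(effects, variances):
    """Cochran's Q and I-squared (%) for a set of study effect sizes."""
    w = [1 / v for v in variances]
    pooled = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity([0.2, 0.5, 0.8], [0.04, 0.04, 0.04])
print(round(q, 2), round(i2, 1))
```

Q is compared to a chi-square distribution with k - 1 df; I² values around 25%, 50%, and 75% are conventionally read as low, moderate, and high heterogeneity.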

17 Forest plot

18 7. Combine effect sizes
When should you do fixed vs. random effects?
Should you weight effect sizes, and if so, on what?
How can you deal with dependent effect sizes?
Hunter and Schmidt method vs. Hedges et al. method
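A sketch of both pooling strategies: a fixed-effect estimate with inverse-variance weights, then a DerSimonian-Laird between-study variance (tau²) feeding random-effects weights. The study values are invented; real analyses would use a dedicated package:

```python
import math

def fixed_effect(effects, variances):
    """Inverse-variance weighted mean and its standard error."""
    w = [1 / v for v in variances]
    est = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    return est, math.sqrt(1 / sum(w))

def dersimonian_laird_tau2(effects, variances):
    """Method-of-moments estimate of between-study variance."""
    w = [1 / v for v in variances]
    est, _ = fixed_effect(effects, variances)
    q = sum(wi * (y - est) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - df) / c)

effects, variances = [0.2, 0.5, 0.8], [0.04, 0.04, 0.04]
tau2 = dersimonian_laird_tau2(effects, variances)
# Random effects = fixed-effect machinery with tau^2 added to each variance:
re_est, re_se = fixed_effect(effects, [v + tau2 for v in variances])
print(round(tau2, 3), round(re_est, 3), round(re_se, 3))
```

The random-effects standard error is larger than the fixed-effect one whenever tau² > 0, which is why the two models can lead to different significance conclusions from the same studies.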

19 8. Calculate confidence intervals
Credibility intervals vs. confidence intervals

20 9. Look for moderators
What are common moderators you might test?
How do you compare moderators?

21 “little ‘m’ meta-analysis”
Comparing and combining effect sizes on a smaller level. When might you want to do this? How would you do it?
Average within-cell r’s with Fisher z transforms
To compare independent r’s: Z = (z1 - z2) / sqrt(1/(n1 - 3) + 1/(n2 - 3))
To combine independent r’s: z = (z1 + z2) / 2
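The two formulas above, applied to the earlier boys/girls example (r(87) = .31, so n = 89; r(98) = .24, so n = 100), as a sketch; the combination shown is the slide's simple unweighted average of Fisher z's:

```python
import math

def compare_rs(r1, n1, r2, n2):
    """Z test for the difference between two independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

def combine_rs(r1, r2):
    """Unweighted combination: average the Fisher z's, transform back."""
    return math.tanh((math.atanh(r1) + math.atanh(r2)) / 2)

print(round(compare_rs(0.31, 89, 0.24, 100), 2))  # well under 1.96
print(round(combine_rs(0.31, 0.24), 3))
```

The comparison Z is far from significance, illustrating the slide-2 point: one correlation being significant and the other not does not mean the two correlations differ.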

22 Write-up
Inclusion criteria, search strategy, which effect size
Which meta-analytic technique and why
Stem-and-leaf plots of effect sizes (and maybe moderators)
Forest plots
Stats on variability of effect sizes, estimate of population effect size and confidence intervals
Publication bias analyses

23 Terms
Evolutionary epistemology
Evidence-based practice
Systems thinking
Dynamical systems approaches
Evaluation research

24 Issues with evaluation research
What questions are asked? What methods are used? What unique issues emerge?

25 Types of evaluation
Formative:
Needs assessment
Evaluability assessment
Structured conceptualization
Implementation evaluation
Process evaluation
Summative:
Outcome evaluation
Impact evaluation
Cost-benefit analysis
Secondary analysis
Meta-analysis

26 Methods used for different questions
What is the scope of the problem? How big is the problem?
How should we deliver the program? How well did we deliver it?
What type of evaluation can we do?
Was the program effective? What parts of the program work?
Should we continue the program?

27 Evidence-based medicine (Sackett et al.)
Convert the problem into a question
Find evidence
Evaluate validity, impact, applicability
Integrate patient experience and clinical judgment
Review the evaluation

28 What does the book author mean by an “evaluation culture”? Is it a good thing?

29 Next week
Rough draft to me and your partner on Monday
Comment on it and meet with your partner on Wednesday
Send me the commented-on rough draft and summary on Wednesday
At the end of the paper, write a short summary of your feedback, including 2-3 strengths of the paper as well as the 2-3 things you think the author most needs to work on. Write clearly.

