
1 2 June 2008 1 ESRC Workshop Researcher Development Initiative Prof. Herb Marsh, Ms. Alison O’Mara, Dr. Lars-Erik Malmberg, Department of Education, University of Oxford

2 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)
 What is meta-analysis
 When and why we use meta-analysis
 Examples of meta-analyses
 Benefits and pitfalls of using meta-analysis
 Defining a population of studies and finding publications
 Coding materials
 Inter-rater reliability
 Computing effect sizes
 Structuring a database
 A conceptual introduction to analysis and interpretation of results based on fixed effects, random effects, and multilevel models
 Supplementary analyses
2

3 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) Traditionally, education researchers collect and analyse their own data (referred to as primary data). Secondary data analysis is based on data collected by someone else (or, perhaps, reanalysis of your own published data). There are at least four logical perspectives on this issue:
1. Meta-analysis -- systematic, quantitative review of published research in a particular field (the focus of this presentation).
2. Systematic review -- systematic, qualitative review of published research in a particular field.
3. Secondary data analyses -- using large (typically public) databases.
4. Reanalyses of published studies -- often in ways critical of the original study.
3

4 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) Wilson & Lipsey (2001) synthesised 319 meta-analyses of intervention studies. Across the studies, roughly equal amounts of variance were due to:  substantive features of the intervention (true differences),  method effects (idiosyncratic study features and potential biases – particularly research design and operationalisation of outcome measures), and  sampling error. They concluded: These results underscore the difficulty of detecting treatment outcomes, the importance of cautiously interpreting findings from a single study, and the importance of meta-analysis in summarizing results across studies (p.413). 4

5 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Meta-analysis is an increasingly popular tool for summarising research findings  Cited extensively in research literature  Relied upon by policymakers  Important that we understand the method, whether we conduct or simply consume meta-analytic research  Should be one of the topics covered in all introductory research methodology courses 5

6  What is meta-analysis?  When and why we use meta-analysis? 6

7 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Systematic synthesis of various studies on a particular research question Do boys or girls have higher self-concepts?  Collect all studies relevant to a topic Find all published journal articles on the topic  An effect size is calculated for each outcome Determine the size/direction of gender difference for each study  “Content analysis” Code characteristics of the study: age, setting, ethnicity, self-concept domain (math, physical, social), etc.  Effect sizes with similar features are grouped together and compared; tests moderator variables Do gender differences vary with age, setting, ethnicity, self-concept domain, etc.? 7

8 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Coding: the process of extracting the information from the literature included in the meta-analysis. Involves noting the characteristics of the studies in relation to a priori variables of interest (qualitative)  Effect size: the numerical outcome to be analysed in a meta-analysis; a summary statistic of the data in each study included in the meta-analysis (quantitative)  Summarise effect sizes: central tendency, variability, relations to study characteristics (quantitative) 8

9 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)
1904: quantitative literature review by Pearson
1977: first modern meta-analysis published by Smith & Glass (1977)
Mid-1980s: methods develop (e.g., Hedges & Olkin; Hunter & Schmidt)
1990s: explosion in popularity, esp. in medical research
9

10 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 10 Karl Pearson conducted what is reputed to be the first meta-analysis (although not called this), comparing effects of inoculation in different settings. 10

11 Gene Glass coined the phrase meta-analysis in a classic study of the effects of psychotherapy. Because most individual studies had small sample sizes, the effects typically were not statistically significant.  Results of 375 controlled evaluations of psychotherapy and counselling were coded and integrated statistically. The findings provide convincing evidence of the efficacy of psychotherapy.  On the average, the typical therapy client is better off than 75% of untreated individuals.  Few important differences in effectiveness could be established among many quite different types of psychotherapy (e.g., behavioral and non-behavioral). 11 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 11

12 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) The essence of good science is replicable and generalisable results. Do we get the same answer to important research questions when we run the study again? The primary aim of meta-analysis is to test the generalisability of results across a set of studies designed to answer the same research question. Are the results consistent? If not, what are the differences in the studies that explain the lack of consistency? 12

13 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  A primary aim is to reach a conclusion to a research question from a sample of studies that is generalisable to the population of all such studies.  Meta-analysis tests whether study-to-study variation in outcomes is more than can be explained by random chance.  When there is systematic variation in outcomes from different studies, meta-analysis tries to explain these differences in terms of study characteristics: e.g. measures used; study design; participant characteristics; controls for potential bias. 13

14 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  There exists a critical mass of comparable studies designed to address a common research question.  Data are presented in a form that allows the meta-analyst to compute an effect size for each study.  Characteristics of each study are described in sufficient detail to allow meta-analysts to compare characteristics of different studies and to judge the quality of each study. 14

15 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 15 The number of meta-analyses is increasing at a rapid rate.

16 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 16 Where are meta-analyses done? All over the world. 16

17 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 17 All disciplines do meta-analyses, but they are very popular in medicine

18 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 18 The number & frequency of citations are increasing in Education

19 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 19 The number & frequency of citations are increasing in Psychology

20 20

21 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Amato, P. R., & Keith, B. (1991). Parental divorce and the well-being of children: A meta-analysis. Psychological Bulletin, 110, 26-46. Times Cited: 471  Linn, M. C., & Petersen, A. C. (1985). Emergence and characterization of sex differences in spatial ability: A meta-analysis. Child Development, 56, 1479-1498. Times Cited: 570  Johnson, D. W., et al. (1981). Effects of cooperative, competitive, and individualistic goal structures on achievement: A meta-analysis. Psychological Bulletin, 89, 47-62. Times Cited: 426  Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742. Times Cited: 387  Hyde, J. S., & Linn, M. C. (1988). Gender differences in verbal ability: A meta-analysis. Psychological Bulletin, 104, 53-69. Times Cited: 316  Iaffaldano, M. T., & Muchinsky, P. M. (1985). Job satisfaction and job performance: A meta-analysis. Psychological Bulletin, 97, 251-273. Times Cited: 263. 21

22 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  De Wolff, M., & van IJzendoorn, M. H. (1997). Sensitivity and attachment: A meta-analysis on parental antecedents of infant attachment. Child Development, 68, 571-591. Times Cited: 340  Wellman, H. M., Cross, D., & Watson, J. (2001). Meta-analysis of theory- of-mind development: The truth about false belief. Child Development, 72, 655-684. Times Cited: 276  Cohen, E. G. (1994). Restructuring the classroom: Conditions for productive small groups. Review of Educational Research, 64, 1-35. Times Cited: 235  Hansen, W. B. (1992). School-based substance abuse prevention: A review of the state of the art in curriculum, 1980-1990. Health Education Research, 7, 403-430. Times Cited: 207  Kulik, J. A., Kulik, C-L., Cohen, P. A. (1980). Effectiveness of Computer- Based College Teaching: A Meta-Analysis of Findings. Review of Educational Research, 50, 525-544. Times Cited: 198. 22

23 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Sheppard, B. H., Hartwick, J., & Warshaw, P. R. (1988). The theory of reasoned action: A meta-analysis of past research with recommendations for modifications and future research. Journal of Consumer Research, 15, 325-343. Times Cited: 515  Jackson, S. E., & Schuler, R. S. (1985). A meta-analysis and conceptual critique of research on role ambiguity and role conflict in work settings. Organizational Behavior and Human Decision Processes, 36, 16-78. Times Cited: 401  Tornatzky, L. G., & Klein, K. J. (1994). Innovation characteristics and innovation adoption-implementation - A meta-analysis of findings. IEEE Transactions On Engineering Management, 29, 28-4. Times Cited: 269.  Lowe, K. B., Kroeck, K. G., & Sivasubramaniam, N. (1996). Effectiveness correlates of transformational and transactional leadership: A meta-analytic review of the MLQ literature. Leadership Quarterly, 7, 385-425. Times Cited: 203.  Churchill, G. A., Ford, N. M., Hartley, S. W., et al. (1985). The determinants of salesperson performance - A meta-analysis. Journal Of Marketing Research, 22, 103-118. Times Cited: 189. 23

24 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Jadad, A. R., Moore, R. A., Carroll, D., et al. (1996). Assessing the quality of reports of randomized clinical trials: Is blinding necessary? Controlled Clinical Trials, 17, 1-12. Times Cited: 2,008  Boushey, C. J., Beresford, S. A. A., Omenn, G. S., et al. (1995). A quantitative assessment of plasma homocysteine as a risk factor for vascular-disease - Probable benefits of increasing folic-acid intakes. JAMA-Journal Of The American Medical Assoc, 274, 1049-1057. Times Cited: 2,128  Alberti, W., Anderson, G., Bartolucci, A., et al. (1995). Chemotherapy in non-small-cell lung-cancer - A metaanalysis using updated data on individual patients from 52 randomized clinical-trials. British Medical Journal, 311, 899-909. Times Cited: 1,591  Block, G., Patterson, B., & Subar, A. (1992). Fruit, vegetables, and cancer prevention - A review of the epidemiologic evidence. Nutrition And Cancer-An International Journal, 18, 1-29. Times Cited: 1,422 24

25 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Question: Does feedback from university students’ evaluations of teaching lead to improved teaching?  Teachers are randomly assigned to experimental (feedback) and control (no feedback) groups  Feedback group gets ratings, augmented, perhaps, with personal consultation  Groups are compared on subsequent ratings and, perhaps, other variables  Feedback teachers improved their teaching effectiveness by .3 standard deviations compared to control teachers on the Overall Rating item; even larger differences for ratings of Instructor Skill, Attitude Toward Subject, Student Feedback  Studies that augmented feedback with consultation produced substantially larger differences, but other methodological variations had little effect. 25

26  Question: What is the correlation between university teaching effectiveness and research productivity?  Based on 58 studies and 498 correlations:  The mean correlation between teaching effectiveness (mostly based on students’ evaluations of teaching) and research productivity was almost exactly zero;  This near-zero correlation was consistent across different disciplines, types of university, indicators of research, and components of teaching effectiveness.  This meta-analysis was followed by a primary data study (Marsh & Hattie, 2002) to more fully evaluate the theoretical model 26 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)

27  Contention about global self-esteem versus multidimensional, domain-specific self-concept  Traditional reviews and previous meta-analyses of self-concept interventions have underestimated effect sizes by using an implicitly unidimensional perspective that emphasizes global self-concept.  We used meta-analysis and a multidimensional construct validation approach to evaluate the impact of self-concept interventions for children in 145 primary studies (200 interventions).  Overall, interventions were significantly effective (d = .51, 460 effect sizes).  However, in support of the multidimensional perspective, interventions targeting a specific self-concept domain and subsequently measuring that domain were much more effective (d = 1.16).  This supports a multidimensional perspective of self-concept 27 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)

28  Examined predictors of sexual, nonsexual violent, and general (any) recidivism  82 recidivism studies  Identified deviant sexual preferences and antisocial orientation as the major predictors of sexual recidivism for both adult and adolescent sexual offenders. Antisocial orientation was the major predictor of violent recidivism and general (any) recidivism  Concluded that many of the variables commonly addressed in sex offender treatment programs (e.g., psychological distress, denial of sex crime, victim empathy, stated motivation for treatment) had little or no relationship with sexual or violent recidivism 28

29 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  “Epidemiologic studies have suggested that folate intake decreases risk of cardiovascular diseases. However, the results of randomized controlled trials on dietary supplementation with folic acid to date have been inconsistent”.  Included 12 studies with randomised control trials.  The overall relative risks of outcomes for patients treated with folic acid supplementation compared with controls were non-significant for cardiovascular diseases, coronary heart disease, stroke, and for all-cause mortality.  Concluded folic acid supplementation does not reduce risk of cardiovascular diseases or all-cause mortality among participants with prior history of vascular disease. 29

30 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  In lekking species (those that gather for competitive mating), a male's mating success can be estimated as the number of females that he copulates with.  Aim of the study was to find predictors of lekking species’ mating success through analysis of 48 studies.  Behavioural traits such as male display activity, aggression rate, and lek attendance were positively correlated with male mating success. The size of "extravagant" traits, such as birds’ tails and ungulate antlers, and age were also positively correlated with male mating success.  Territory position was negatively correlated with male mating success, such that males with territories close to the geometric centre of the leks had higher mating success than other males.  Male morphology (measure of body size) and territory size showed small effects on male mating success. 30

31 31

32 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Compared to traditional literature reviews:  (1) there is a definite methodology employed in the research analysis (more like that used in primary research); and  (2) the results of the included studies are quantified to a standard metric, thus allowing statistical techniques to be used for further analysis.  Therefore the process of reviewing the research literature is more objective, transparent, and replicable; less biased and idiosyncratic to the whims of a particular researcher 32

33  Cameron, J., & Pierce, W. D (1994). Reinforcement, reward, and intrinsic motivation: A meta-analysis. Review of Educational Research, 64, 363- 423.  Ryan, R., & Deci, E. L. (1996). When paradigms clash: Comments on Cameron and Pierce's claim that rewards do not undermine intrinsic motivation. Review of Educational Research, 66, 33-38  Cameron, J., & Pierce, W. D (1996). The debate about rewards and intrinsic motivation: Protests and accusations do not alter the results. Review of Educational Research, 66, 39-51.  Deci, E. L., Koestner, R., & Ryan, R. (2001). Extrinsic rewards and intrinsic motivation in education: reconsidered once again. Review of Educational Research, 71, 1-27.  Cameron, J. (2001). Negative effects of reward on intrinsic motivation: a limited phenomenon: comment on Deci, Koestner, and Ryan. Review of Educational Research, 71, 29-42. 33 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 33

34 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Increased power: by combining information from many individual studies, the meta-analyst is able to detect systematic trends not obvious in the individual studies.  Conclusions based on the set of studies are likely to be more accurate than any one study.  Improved precision: based on information from many studies, the meta-analyst can provide a more precise estimate of the population effect size (and a confidence interval).  Provides corrections for potential biases, measurement error and other possible artefacts  Identifies directions for further primary studies to address unresolved issues. 34

35 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Able to establish generalisability across many studies (and study characteristics).  Typically there is study-to-study variation in results. When this is the case, the meta-analyst can explore what characteristics of the studies explain these differences (e.g., study design) in ways not easy to do in individual studies.  Easy to interpret summary statistics (useful if communicating findings to a non-academic audience). 35

36 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Studies that are published are more likely to report statistically significant findings. This is a source of potential bias.  The debate about using only published studies:  peer-reviewed studies are presumably of a higher quality VERSUS  significant findings are more likely to be published than non-significant findings  There is no agreed upon solution. However, one should retrieve all studies that meet the eligibility criteria, and be explicit about how publication bias was dealt with. Some methods for dealing with publication bias have been developed (e.g., Fail-safe N, Trim and Fill method). 36

37  Meta-analyses are mostly limited to studies published in English.  Juni et al. (2002) evaluated the implications of excluding non-English publications in 50 meta-analyses of randomised clinical trials:  treatment effects were modestly larger in non-English publications (16%).  However, study quality was also lower in non-English publications.  Effects were sufficiently small not to have much influence on treatment effect estimates, but may make a difference in some reviews. 37 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 37

38 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Increasingly, meta-analysts evaluate the quality of each study included in a meta-analysis.  Sometimes this is a global holistic (subjective) rating. In this case it is important to have multiple raters to establish inter-rater agreement (more on this later).  Sometimes study quality is quantified in relation to objective criteria of a good study, e.g.  larger sample sizes;  more representative samples;  better measures;  use of random assignment;  appropriate control for potential bias;  double blinding, and  low attrition rates (particularly for longitudinal studies) 38

39  In a meta-analysis of Social Science meta-analyses, Wilson & Lipsey (1993) found an effect size of .50. They evaluated how this was related to study quality:  For meta-analyses providing a global (subjective) rating of the quality of each study, there was no significant difference between high and low quality studies; the average correlation between effect size and quality was almost exactly zero.  Almost no difference between effect sizes based on random- and non-random assignment (effect sizes slightly larger for random assignment).  The only study quality characteristic to make a difference was the positively biased effects due to a one-group pre/post design with no control group at all 39 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 39

40  Goldring (1990) evaluated the effects of gifted education programs on achievement. She found a positive effect, but emphasised that findings were questionable because of weak studies;  21 of the 24 studies were unpublished and only one used random assignment.  Effects varied with matching procedures:  the largest effects for achievement outcomes were for studies in which non-equivalent group differences were controlled by only one pretest variable.  Effect sizes reduced as the number of control variables increased and  disappeared altogether with random assignment.  Goldring (1990, p. 324) concluded that policy makers need to be aware of the limitations of the GAT literature. 40

41  Schulz (1995) evaluated study quality in 250 randomized clinical trials (RCTs) from 33 meta-analyses. Poor quality studies led to positively biased estimates:  lack of concealment (30-41%),  lack of double-blinding (17%),  participants excluded after randomization (NS).  Moher et al. (1998) reanalysed 127 RCTs from 11 meta-analyses for study quality.  Low quality trials resulted in significantly larger effect sizes, a 30-50% exaggeration in estimates of treatment efficacy.  Wood et al. (2008) evaluated study quality (1,346 RCTs from 146 meta-analyses).  subjective outcomes: inadequate/unclear concealment & lack of blinding resulted in substantial biases.  objective outcomes: no significant effects.  conclusion: Systematic reviewers should assess risk of bias. 41

42  Meta-analyses should always include subjective and/or objective indicators of study quality.  In the Social Sciences there is some evidence that studies with highly inadequate control for pre-existing differences lead to inflated effect sizes. However, it is surprising that other indicators of study quality make so little difference.  In medical research, studies are largely limited to RCTs, where there is MUCH more control than in social science research. Here there is evidence that inadequate concealment of assignment and lack of double-blinding inflate effect sizes, but perhaps only for subjective outcomes.  These issues are likely to be idiosyncratic to individual discipline areas and research questions. 42

43  Defining a population of studies and finding publications  Coding materials  Inter-rater reliability  Computing effect sizes  Structuring a database 43

44 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) Establish research question → Define relevant studies → Develop code materials → Locate and collate studies → Pilot coding; coding → Data entry and effect size calculation → Main analyses → Supplementary analyses 44

45 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Comparison of treatment & control groups? What is the effectiveness of a reading skills program for treatment group compared to an inactive control group?  Pretest-posttest differences? Is there a change in motivation over time?  What is the correlation between two variables? What is the relation between teaching effectiveness and research productivity?  Moderators of an outcome? Does gender moderate the effect of a peer-tutoring program on academic achievement? 45

46 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Do you wish to generalise your findings to other studies not in the sample?  Do you have multiple outcomes per study? e.g.:  achievement in different school subjects  5 different personality scales  multiple criteria of success  Such questions determine the choice of meta- analytic model  fixed effects  random effects  multilevel 46

47 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Need to have explicit inclusion and exclusion criteria  The broader the research domain, the more detailed they tend to become  Refine criteria as you interact with the literature  Components of detailed search criteria:  distinguishing features  research respondents  key variables  research methods  cultural and linguistic range  time frame  publication types 47

48 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Search electronic databases (e.g., ISI, Psychological Abstracts, Expanded Academic ASAP, Social Sciences Index, PsycINFO, and ERIC)  Examine the reference lists of included studies to find other relevant studies  If including unpublished data, email researchers in your discipline, take advantage of Listservs, and search Dissertation Abstracts International 48

49 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) The following is one possible way to write up the search procedure (see LeBlanc & Ritchie, 2001) 1. Electronic search strategy (e.g., PsycINFO & Dissertation Abstracts). Provide years included in database 2. Keywords and limitations of the search (e.g., language) 3. Additional search methods (e.g., mailing lists) 4. Exclusion criteria (e.g., must contain control group) 5. Yield of the search—number of studies found. Ideally should also mention how many were excluded from the meta-analysis and why 49

50 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) [Annotated example of a search-procedure write-up, with the five elements listed above marked 1-5] 50

51  Inclusion process usually requires several steps to cull inappropriate studies  Example from Bazzano, L. A., Reynolds, K., Holder, K. N., & He, J. (2006). Effect of Folic Acid Supplementation on Risk of Cardiovascular Diseases: A Meta-analysis of Randomized Controlled Trials. JAMA, 296, 2720-2726 51

52 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) You can report the inclusion/exclusion process using text rather than a flow chart, but it is not as easy to follow if the process is elaborate. Should report the original sample and final yield as a minimum (in this case, original = 139, final = 22) 52

53 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)
Code sheet (example entries): Study ID = 1; Year of publication = 99; Publication type (1-5) = 2; Geographical region (1-7) = 1; Total sample size = 87; Total number of males = 41; Total number of females = 46
Code book/manual: Publication type (1-5): 1. Journal article; 2. Book/book chapter; 3. Thesis or doctoral dissertation; 4. Technical report; 5. Conference paper
53

54 Coding characteristics should be mentioned in the paper. If the editor allows, a copy of the actual coding materials can be included as an appendix. Examples of coded characteristics: mode of therapy, duration of therapy, participant characteristics, publication characteristics, design characteristics 54

55 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Random selection of papers coded by both coders (e.g., 30% of publications are double- coded)  Meet to compare code sheets  Where there is discrepancy, discuss to reach agreement  Amend code materials/definitions in code book if necessary  May need to do several rounds of piloting, each time using different papers 55

56 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Percent agreement: Common but not recommended  Cohen’s kappa coefficient  Kappa is the proportion of the optimum improvement over chance attained by the coders, where a value of 1 indicates perfect agreement and a value of 0 indicates that agreement is no better than that expected by chance  Kappas over .40 are considered to be a moderate level of agreement (but there is no clear basis for this “guideline”)  Correlation between different raters  Intraclass correlation: agreement among multiple raters, corrected for the number of raters using the Spearman-Brown formula 56
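As a rough illustration (not part of the original workshop materials), a minimal Python sketch of how percent agreement and Cohen’s kappa could be computed for two coders on a categorical moderator; the coder data below are hypothetical.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Percent agreement and Cohen's kappa for two coders rating the same items."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n  # percent agreement
    # Chance agreement from each coder's marginal category proportions
    count_a, count_b = Counter(codes_a), Counter(codes_b)
    expected = sum((count_a[c] / n) * (count_b[c] / n) for c in set(codes_a) | set(codes_b))
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Hypothetical double-coded "publication type" codes for ten studies
coder1 = [1, 1, 2, 3, 1, 4, 2, 1, 5, 3]
coder2 = [1, 1, 2, 3, 2, 4, 2, 1, 5, 1]
agreement, kappa = cohens_kappa(coder1, coder2)
print(f"percent agreement = {agreement:.2f}, kappa = {kappa:.2f}")  # 0.80, 0.73
```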

57 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  The purpose of this exercise is to explore various issues of meta-analytic methodology  Discuss in groups of 3-4 people the following issues in relation to the gender differences in smiling study (LaFrance et al., 2003) 1. Did the aims of the study justify conducting a meta-analysis? 2. Were the selection criteria and the search process explicit? 3. How did they deal with interrater (coder) reliability? 57

58 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 1. Extend previous meta-analyses, include previously untested moderators based on theory/empirical observations 2. Search process: detailed databases and 5 other sources of studies, search terms. Selection criteria: justification provided (e.g., for excluding under the age of 13). However, not clear how many studies were retrieved and then eventually included (compare with flow chart on slide 51) 3. Multiple coders (group of coders consisted of four people with two raters of each sex coding each moderator). Interrater reliability was calculated by taking the aggregate reliability of the four coders at each time using the Spearman–Brown formula 58

59 59

60 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  The effect size makes meta-analysis possible  It is based on the “dependent variable” (i.e., the outcome)  It standardizes findings across studies such that they can be directly compared  Any standardized index can be an “effect size” (e.g., standardized mean difference, correlation coefficient, odds-ratio), but must  be comparable across studies (standardization)  represent magnitude & direction of the relation  be independent of sample size  Different studies in same meta-analysis can be based on different statistics, but have to transform each to a standardized effect size that is comparable across different studies 60

61 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 61

62 62 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 62

63 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  O’Mara (2004) 63

64  Within a single meta-analysis, you can include studies based on any combination of statistical analyses (e.g., t-tests, ANOVA, multiple regression, correlation, odds-ratio, chi-square, etc.). However, you have to convert each of these to a common “effect size” metric.  Lipsey & Wilson (2001) present many formulae for calculating effect sizes from different information. The “art” of meta-analysis is how to compute effect sizes based on non-standard designs and studies that do not supply complete data.  However, you need to convert all effect sizes into a common metric, typically the “natural” metric given research in the area, e.g. standardized mean difference; odds-ratio; correlation. 64 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 64

65 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Standardized mean difference  Group contrast research  Treatment groups  Naturally occurring groups  Inherently continuous construct  Odds-ratio  Group contrast research  Treatment groups  Naturally occurring groups  Inherently dichotomous construct  Correlation coefficient  Association between variables research 65

66  Represents a standardized group contrast on an inherently continuous measure  Uses the pooled standard deviation (some situations use the control group standard deviation)  Commonly called “d”  In a gender difference study, the effect size might be the standardized difference between male and female means; in an intervention study with experimental and control groups, the standardized difference between the group means 66 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)
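A minimal Python sketch of the pooled-standard-deviation version of d described above; the group means, SDs, and sample sizes are invented for illustration.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Illustrative intervention study: experimental vs. control group
d = cohens_d(mean1=52.3, sd1=9.8, n1=40, mean2=48.1, sd2=10.4, n2=38)
print(round(d, 2))  # about 0.42; positive d favours the experimental group
```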

67 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) Almost all test statistics can be transformed into a standardized effect size “d”: means and standard deviations, correlations, p-values, F-statistics, t-statistics, and “other” test statistics 67
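For reference, the conversions most often used for this purpose (standard formulas of the kind tabled in Lipsey & Wilson, 2001; the slide itself does not reproduce them) are:

\[ d = t\sqrt{\frac{n_1 + n_2}{n_1 n_2}}, \qquad d = \sqrt{\frac{F\,(n_1 + n_2)}{n_1 n_2}} \;\;(\text{one-df } F = t^2), \qquad d = \frac{2r}{\sqrt{1 - r^2}} \]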

68 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 68

69 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Represents the strength of association between two inherently continuous measures  Generally reported directly as r (the Pearson product moment coefficient) 69

70 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  The odds-ratio is based on a 2 by 2 contingency table  The odds-ratio is the odds of success in the treatment group relative to the odds of success in the control group 70
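As a reminder of the arithmetic (the 2 × 2 table itself is not reproduced in this transcript), with a successes and b failures in the treatment group and c successes and d failures in the control group, the odds-ratio and the standard error of its log (the scale usually analysed) are:

\[ OR = \frac{a/b}{c/d} = \frac{ad}{bc}, \qquad SE(\ln OR) = \sqrt{\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d}} \]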

71 71 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)

72 72 Alternatively: transform rs into Fisher’s Z_r-transformed values, which are more normally distributed ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 72
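The transformation referred to here, with its approximate sampling variance (a standard result, not shown on the slide), is:

\[ Z_r = \tfrac{1}{2}\ln\!\left(\frac{1 + r}{1 - r}\right), \qquad v_{Z_r} = \frac{1}{n - 3} \]

with results back-transformed via \( r = \tanh(Z_r) \) for reporting.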

73 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Hedges proposed a correction for small sample size bias (n < 20)  Must be applied before analysis 73
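A common form of the small-sample correction (Hedges’ g), given here because the slide’s formula image is not part of this transcript, is approximately:

\[ g = d\left(1 - \frac{3}{4(n_1 + n_2) - 9}\right) \]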

74  The effect sizes are weighted by the inverse of the variance to give more weight to effects based on large sample sizes  The variance is calculated from the sample sizes and the effect size itself (see the formula below)  The standard error of each effect size is given by the square root of the sampling variance: SE_i = \sqrt{v_i} 74 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 74
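The usual large-sample variance for a standardized mean difference (again, the slide’s formula image is not reproduced here) is:

\[ v_i = \frac{n_1 + n_2}{n_1 n_2} + \frac{d_i^2}{2(n_1 + n_2)}, \qquad SE_i = \sqrt{v_i}, \qquad w_i = \frac{1}{v_i} \]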

75 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) Sample: n (‘size’), m (‘mean’), d = effect size. Population: N (‘size’), M (‘mean’), d = effect size. The “likely” population parameter is the sample parameter ± uncertainty: interval estimates via standard errors (s.e.) and confidence intervals (C.I.) 75

76 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 76

77 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 77 Each study is one line in the database, with columns such as: effect size, duration, sample sizes, reliability of the instrument, and the variance of the effect size

78  Fixed effects model  Random effects model  Multilevel model 78

79 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Includes the entire population of studies to be considered; we do not want to generalise to other studies not included (e.g., future studies).  All of the variability between effect sizes is due to sampling error alone. Thus, the effect sizes are only weighted by the within-study variance.  Effect sizes are independent. 79

80 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  There are 2 general ways of conducting a fixed effects meta-analysis: ANOVA & multiple regression  The analogue to the ANOVA homogeneity analysis is appropriate for categorical variables  Looks for systematic differences between groups of responses within a variable  Multiple regression homogeneity analysis is more appropriate for continuous variables and/or when there are multiple variables to be analysed  Tests the ability of groups within each variable to predict the effect size  Can include categorical variables in multiple regression as dummy variables. (ANOVA is a special case of multiple regression) 80

81 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) The homogeneity (Q) test asks whether the different effect sizes are likely to have all come from the same population (an assumption of the fixed effects model). Are the differences among the effect sizes no bigger than might be expected by chance?
\[ Q = \sum_{i=1}^{k} w_i (d_i - \bar{d})^2 \]
where d_i = effect size for each study (i = 1 to k), \bar{d} = mean effect size, and w_i = a weight for each study based on the sample size. However, this (chi-square, df = k - 1) test is heavily dependent on sample size. It is almost always significant unless the numbers (studies and people in each study) are VERY small. This means that the fixed effect model will almost always be rejected in favour of a random effects model. 81
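A minimal Python sketch of the fixed-effects computations reported in the macro output on the next slide (inverse-variance weighted mean, its standard error and 95% CI, and Q); the effect sizes and variances below are invented for illustration.

```python
import math

def fixed_effects(effect_sizes, variances):
    """Inverse-variance fixed-effects pooling with the Q homogeneity statistic (df = k - 1)."""
    weights = [1.0 / v for v in variances]
    mean_es = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    ci = (mean_es - 1.96 * se, mean_es + 1.96 * se)
    q = sum(w * (d - mean_es) ** 2 for w, d in zip(weights, effect_sizes))
    return mean_es, se, ci, q

# Illustrative effect sizes (d) and their sampling variances
d = [0.05, 0.30, 0.45, 0.60, 1.20]
v = [0.04, 0.02, 0.05, 0.03, 0.08]
mean_es, se, ci, q = fixed_effects(d, v)
print(f"mean ES = {mean_es:.3f}, SE = {se:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), Q = {q:.2f}")
```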

82 Run MATRIX procedure:
***** Meta-Analytic Results *****
------- Distribution Description ---------------------------------
       N     Min ES    Max ES   Wghtd SD
  15.000       .050     1.200       .315
------- Fixed & Random Effects Model -----------------------------
         Mean ES   -95%CI   +95%CI      SE        Z        P
Fixed      .4312    .3383    .5241   .0474   9.0980    .0000
Random     .3963    .2218    .5709   .0890   4.4506    .0000
------- Random Effects Variance Component ------------------------
v = .074895
------- Homogeneity Analysis -------------------------------------
        Q        df         p
  44.1469   14.0000     .0001
Random effects v estimated via noniterative method of moments.
------ END MATRIX -----
Significant heterogeneity in the effect sizes, therefore random effects is more appropriate and/or moderators need to be modelled 82 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)

83  Model moderators by grouping effect sizes that are similar on a specific characteristic  For example, group all effect size outcomes that come from studies using a placebo control group design and compare with effect sizes from studies using a waitlist control group design  So in this example, ‘Design’ is a dichotomous variable with the values 0 = placebo control and 1 = waitlist control (database columns: experimental condition, ES, design) 83

84 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  On the next slide, we will look at the outcomes of a study to show the importance of various moderator variables  Do Psychosocial and Study Skill Factors Predict College Outcomes? A Meta-Analysis  Robbins, Lauver, Le, Davis, Langley, & Carlstrom (2004). Psychological Bulletin, 130, 261–288  Aim:  To examine the relationship between psychosocial and study skill factors (PSFs) and college retention by meta- analyzing 109 studies 84

85 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 85
N = sample size for that variable; k = number of correlation coefficients on which each distribution was based; r = mean observed correlation; CI r 10% = lower bound of the confidence interval for observed r; CI r 90% = upper bound of the confidence interval for observed r.
Academic-related skills: largest effect size. Institutional size: smallest effect size. Statistically significant when the CI does not contain zero; not statistically significant when the CI contains zero.

86 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) Regression coefficients and their standard errors (from O’Mara, Marsh, Craven, & Debus, 2006):
                  B      SE    Sig?
Target         .4892   .0552   yes
Target-related .1097   .0587   no
Non-target     .0805   .0489   no
86 Target self-concept domains are those that are directly relevant to the intervention. Target-related domains are those that are logically relevant to the intervention, but not focal. Non-target domains are those that are not expected to be enhanced by the intervention

87 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Includes only a sample of studies from the entire population of studies to be considered. As a result, we do want to generalise to other studies not included in the sample (e.g., future studies).  Variability between effect sizes is due to sampling error plus variability in the population of effects.  Effect sizes are independent. 87

88 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  If the homogeneity test is rejected (it almost always will be), it suggests that there are larger differences than can be explained by chance variation (at the individual participant level). There is more than one “population” in the set of different studies.  Now we turn to the random effects model to determine how much of this between-study variation can be explained by study characteristics that we have coded.  The total variance associated with the effect sizes has two components, one associated with differences within each study (participant level variation, v_i) and one with between-study variance (v_θ): v_total = v_i + v_θ 88

89 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  The random error variance component is added to the variance calculated earlier  This means that the weighting for each effect size consists of the within-study variance ( v_i ) and the between-study variance ( v_θ )  The new weighting for the random effects model ( w_iRE ) is given by the formula: w_iRE = 1 / ( v_i + v_θ ) 89
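The between-study variance v_θ is not derived on the slides; the noniterative method-of-moments (DerSimonian–Laird) estimator that the macro output on the earlier slide refers to is:

\[ \hat{v}_\theta = \max\!\left(0,\; \frac{Q - (k - 1)}{\sum_i w_i - \sum_i w_i^2 \big/ \sum_i w_i}\right), \qquad w_{iRE} = \frac{1}{v_i + \hat{v}_\theta} \]

where Q and the fixed-effects weights w_i = 1/v_i come from the homogeneity analysis above.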

90 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Do Self-Concept Interventions Make a Difference? A Synergistic Blend of Construct Validation and Meta- Analysis  O’Mara, Marsh, Craven, & Debus. (2006). Educational Psychologist, 41, 181–206  Aim:  To examine what factors moderate the effectiveness of self-concept interventions by meta-analyzing 200 interventions 90

91 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Q B = between group homogeneity. If the Q B value is significant, then the groups (categories) are significantly different from each other  Q W = within group homogeneity. If Q W is significant, then the effect sizes within a group (category) differ significantly from each other 91 Only 2 variables had significant QB in the random effects model. ‘Treatment characteristics’ also had significant QW. Note that the fixed effects are more significant than random effects

92 Run MATRIX procedure:
***** Meta-Analytic Results *****
------- Distribution Description ---------------------------------
       N     Min ES    Max ES   Wghtd SD
  15.000       .050     1.200       .315
------- Fixed & Random Effects Model -----------------------------
         Mean ES   -95%CI   +95%CI      SE        Z        P
Fixed      .4312    .3383    .5241   .0474   9.0980    .0000
Random     .3963    .2218    .5709   .0890   4.4506    .0000
------- Random Effects Variance Component ------------------------
v = .074895
------- Homogeneity Analysis -------------------------------------
        Q        df         p
  44.1469   14.0000     .0001
Random effects v estimated via noniterative method of moments.
------ END MATRIX -----
92 Significant heterogeneity in the effect sizes, therefore moderators need to be modelled

93 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Meta-analytic data is inherently hierarchical (i.e., effect sizes nested within studies) and has random error that must be accounted for  Effect sizes are not necessarily independent  Allows for multiple effect sizes per study 93

94 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  New technique that is still being developed  Provides more precise and less biased estimates of between-study variance than traditional techniques 94

95 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Level 1: outcome-level component  Effect sizes  Level 2: study component  Publications 95

96 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Intercept-only model, which incorporates both the outcome-level and the study-level components (similar to a random effects model)  Expand model to include predictor variables, to explain systematic variance between the study effect sizes 96
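A minimal two-level specification consistent with these slides (an intercept-only model plus one study-level predictor; the notation is ours, not taken from the workshop materials):

\[ \text{Level 1 (effect sizes): } d_{ij} = \theta_j + e_{ij}, \quad e_{ij} \sim N(0, v_{ij}) \]
\[ \text{Level 2 (studies): } \theta_j = \gamma_0 + \gamma_1 x_j + u_j, \quad u_j \sim N(0, \tau^2) \]

Setting \( \gamma_1 = 0 \) gives the intercept-only model (similar to a random effects model); x_j is a coded study characteristic added to explain systematic between-study variance.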

97 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Acute Stressors and Cortisol Responses: A Theoretical Integration and Synthesis of Laboratory Research  Dickerson & Kemeny (2004). Psychological Bulletin, 130, 355–391.  Aim:  To examine methodological predictors of cortisol responses in a meta-analysis of 208 laboratory studies of acute psychological stressors 97

98 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Only 2 variables were significant (quadratic time between stress onset & assessment; time of day). The quadratic component is difficult to interpret as an unstandardized regression coefficient, but the graph suggests it is meaningfully large 98 [Figure: cortisol response as a quadratic function of time since onset]

99 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Fixed, random, or multilevel?  Generally, if more than one effect size per study is included in sample, multilevel should be used  However, if there is little variation at study level, the results of multilevel modelling meta-analyses are similar to random effects models 99

100 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Do you wish to generalise your findings to other studies not in the sample? No – fixed effects; Yes – random effects or multilevel  Do you have multiple outcomes per study? Yes – multilevel; No – random effects or fixed effects 100

101 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  The purpose of this exercise is to consider choice of meta-analytic method  Discuss in groups of 3-4 people the question in relation to the gender differences in smiling study (LaFrance et al., 2003)  Is there independence of effect sizes? What are the implications for model choice (fixed, random, multilevel)? 101

102 102 Fail-safe N Power analysis Trim-and-fill method

103 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  The fail-safe N (Rosenthal, 1991) determines the number of studies with an effect size of zero needed to lower the observed effect size to a specified (criterion) level.  For example, assume that you want to test the assumption that an effect size is at least .20.  If the observed effect size was .26 and the fail-safe N was found to be 44, this means that 44 unpublished studies with a mean effect size of zero would need to be included in the sample to reduce the observed effect size of .26 to .20. 103
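The criterion-level variant described here (lowering the mean effect to a chosen criterion rather than to non-significance) is often written, following Orwin’s adaptation of Rosenthal’s idea, as:

\[ N_{fs} = \frac{k\,(\bar{d} - d_c)}{d_c} \]

where k is the number of effect sizes in the meta-analysis, \( \bar{d} \) the observed mean effect size, and d_c the criterion level.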

104 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Power is the probability that a statistical test will detect an effect that really exists; it equals 1 minus the probability of a Type II error (failing to reject the null hypothesis when there is a real effect).  Power, sample size, significance level, and effect size are inter-related.  A lower powered study has to exhibit a much larger effect size to produce a significant finding. This has ramifications for publication bias.  Muncer, Craigie, & Holmes (2003) recommend conducting a power analysis on all studies included in the meta-analysis  Compare the observed value ( d ) against a theoretical value (includes information about sample size) 104

105 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  The trim and fill procedure (Duval & Tweedie, 2000a, 2000b) calculates the effect of potential data censoring (including publication bias) on the outcome of the meta-analyses.  This nonparametric, iterative technique examines the symmetry of effect sizes plotted against the inverse of the standard error. Ideally, the effect sizes should be mirrored on either side of the mean. 105

106 Examining the methods and output of published meta-analysis 106

107 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Discuss in groups of 3-4 people the following question in relation to the gender differences in smiling study (LaFrance et al., 2003)  How did they deal with publication bias? Does this seem appropriate? 107

108 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  The purpose of this exercise is to practice reading meta-analytic results tables.  This study, by Reger et al. (2004), examines the relationship between neuropsychological functioning and driving ability in dementia. 1. In Table 3, which variables are homogeneous for the “on-road tests” driving measure in the “All Studies” column? What does this tell you about those variables? 2. In Table 4, look at the variables that were homogeneous in question (1) for the “on-road tests” using “All Studies”. Which variables have a significant mean ES? Which variable has the largest mean ES? 108

109 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg) 1. Homogeneous variables (non-significant Q - values): Mental status–general cognition, Visuospatial skills, Memory, Executive functions, Language 2. All of the relevant mean effect sizes are significant. Memory and language are tied as the largest mean ESs for homogeneous variables ( r =.44) 109

110 110

111 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  We established what meta-analysis is, when and why we use meta-analysis, and the benefits and pitfalls of using meta-analysis  Summarised how to conduct a meta-analysis  Provided a conceptual introduction to analysis and interpretation of results based on fixed effects, random effects, and multilevel models  Applied this information to examining the methods of a published meta-analysis 111

112 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Comparing apples and oranges  Quality of the studies included in the meta-analysis  What to do when studies don’t report sufficient information (e.g., “non-significant” findings)?  Including multiple outcomes in the analysis (e.g., different achievement scores)  Publication bias 112

113 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  With meta-analysis now one of the most popularly published research methods, it is an exciting time to be involved in meta-analytic research  The hottest topics in meta-analysis are:  Multilevel modelling to address the issue of independence of effect sizes  New methods in publication bias assessment (Trim-and- fill method, post hoc power analysis)  Also receiving attention:  Establishing guidelines for conducting meta-analysis (best practice)  Meta-analyses of meta-analyses 113

114 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Purpose-built  Comprehensive Meta-analysis (commercial)  Schwarzer (free, http://userpage.fu-berlin.de/~health/meta_e.htm)  Extensions to standard statistics packages  SPSS, Stata and SAS macros, downloadable from http://mason.gmu.edu/~dwilsonb/ma.html  Stata add-ons, downloadable from http://www.stata.com/support/faqs/stat/meta.html  HLM – V-known routine  MLwiN  Mplus  Please note that we do not advocate any one programme over another, and cannot guarantee the quality of all of the products downloadable from the internet. This list is not exhaustive. 114

115 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Cooper, H., & Hedges, L. V. (Eds.) (1994). The handbook of research synthesis (pp. 521–529). New York: Russell Sage Foundation.  Hox, J. (2003). Applied multilevel analysis. Amsterdam: TT Publishers.  Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park: Sage Publications.  Lipsey, M. W., & Wilson, D. B. (2001). Practical meta- analysis. Thousand Oaks, CA: Sage Publications. 115

116 ESRC RDI One Day Meta-analysis workshop (Marsh, O’Mara, Malmberg)  Pick up a brochure about our intermediate and advanced meta-analysis courses  Visit our website http://www.education.ox.ac.uk/research/resgroup/self/training.php 116

