
Reporting results of systematic reviews




1 Reporting results of systematic reviews
Karin Hannes Centre for Methodology of Educational Research

2 Overview
- Anatomy of a Systematic Review
- Dissemination Channels
- Inclusion of Process and Implementation Aspects in a Systematic Review

3 Anatomy of a Systematic Review
Background/Introduction
- Establish need
- Distinguish from previous review efforts
- State objectives and review questions
Methods
- Criteria for inclusion and exclusion: type of population, type of studies, type of intervention (+ comparison), type of outcomes

4 Anatomy of a Systematic Review (cont.)
Methods
- Locating studies: consulted data sources (databases, grey literature, reference searches, expert consulting, etc.)
- Search strategy (final)
This is one example of a review that explicitly mentions a revised search, based on what was found in a first round.

5 Anatomy of a Systematic Review (cont.)
Methods: process of selecting studies (include the screening instrument in an annex).
Possible screening criteria: timespan, language restrictions, discipline/scientific field.
Example: inclusion and exclusion criteria for a review of qualitative evidence synthesis (QES) in the literature (an update of Dixon-Woods):
1. Published between January 2005 and December 2008 (already filtered out during the search).
2. Conducted within health care or a health care context.
   - Include: syntheses of qualitative (with quantitative) research by synthesis methods other than informal review.
   - Exclude: papers commenting on methodological issues without details of the outcomes of the synthesis; papers that do not explicitly describe or name a method for synthesis; reviews on concepts/definitions used within health care or on research issues.
3. Published in the English language.
4. Published in a peer-reviewed journal (this criterion would not be acceptable in a Cochrane or Campbell review).

6 Anatomy of a Systematic Review (cont.)
Methods: Data Extraction
- Introduce the coding form
- Describe and define the coding categories
- Describe the process of data extraction ('at least two independent reviewers')
Example of data-extraction materials (based on EPOC review group documents): screening form, critical appraisal checklist, data-extraction sheet. 'We used the EPOC guidance on data extraction (reference).' A hypothetical sketch of such an extraction record follows below.
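By way of illustration only, the sketch below shows how a structured extraction record could be represented in Python so that two independent extractions can be compared mechanically. All field names are hypothetical examples; they are not the actual EPOC coding categories.

```python
# A minimal, illustrative sketch of a structured data-extraction record.
# Field names are hypothetical examples, not the actual EPOC categories.
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    study_id: str
    reviewer: str                  # each study is coded by >= 2 independent reviewers
    population: str                # e.g. "primary school children, grades 3-6"
    intervention: str
    comparison: str
    outcomes: list = field(default_factory=list)
    risk_of_bias: str = "unclear"  # e.g. "low" / "high" / "unclear"

# Two independent extractions of the same study can later be compared
# to detect and resolve coding disagreements.
r1 = ExtractionRecord("Smith2006", "reviewer_A", "adolescents",
                      "peer tutoring", "usual teaching",
                      outcomes=["reading score"])
r2 = ExtractionRecord("Smith2006", "reviewer_B", "adolescents",
                      "peer tutoring", "usual teaching",
                      outcomes=["reading score"])
print(r1 == r2)  # True only if the two coders agree on every field
```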

9 Descriptive part (figure slide; image not available in this transcript)

11 Statistical part (figure slide; image not available in this transcript)

12 Anatomy of a Systematic Review (cont.)
Results
- Descriptive results
- Inferential results (if applicable)
Discussion
Conclusions
- Implications for practice
- Implications for research
References
Appendix: search strings, critical appraisal checklist, list of excluded studies (usually a flow chart), coding/extraction sheets, outcomes of the meta-synthesis exercise, etc. The Campbell Collaboration would also ask for a user sheet (a short summary avoiding scientific jargon).

Descriptive and Inferential Statistics
When analysing data, for example the marks achieved by 100 students for a piece of coursework, it is possible to use both descriptive and inferential statistics. Typically, in most research conducted on groups of people, you will use both to analyse your results and draw conclusions. So what are descriptive and inferential statistics, and how do they differ?

Descriptive Statistics
Descriptive statistics is the term given to the analysis of data that helps describe, show or summarize data in a meaningful way, such that, for example, patterns might emerge from the data. Descriptive statistics do not, however, allow us to draw conclusions beyond the data we have analysed, or to reach conclusions regarding any hypotheses we might have made; they are simply a way to describe our data. They are nevertheless very important: if we simply presented raw data, it would be hard to visualize what the data were showing, especially if there were a lot of them. Descriptive statistics therefore present the data in a more meaningful way, allowing simpler interpretation. For example, if we had the results of 100 pieces of students' coursework, we might be interested in those students' overall performance and in the distribution or spread of their marks. How to properly describe data through statistics and graphs is an important topic, discussed in other Laerd Statistics guides. Typically, two general types of statistic are used to describe data:
- Measures of central tendency: ways of describing the central position of a frequency distribution for a group of data (here, the distribution of marks scored by the 100 students from lowest to highest). This central position can be described using a number of statistics, including the mode, median, and mean.
- Measures of spread: ways of summarizing a group of data by describing how spread out the scores are. For example, the mean score of our 100 students may be 65 out of 100, but not all students will have scored 65; some will be lower and others higher. A number of statistics describe this spread, including the range, quartiles, absolute deviation, variance, and standard deviation.
When we use descriptive statistics, it is useful to summarize our group of data with a combination of tabulated description (tables), graphical description (graphs and charts), and statistical commentary (a discussion of the results).
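To make the two groups of measures concrete, here is a minimal Python sketch (standard library only) that computes common descriptive statistics for 100 coursework marks; the marks are simulated purely for illustration.

```python
# Sketch: descriptive statistics for 100 coursework marks (simulated data).
import random
import statistics

random.seed(42)
marks = [min(100, max(0, round(random.gauss(65, 12)))) for _ in range(100)]

# Measures of central tendency
print("mean  :", statistics.mean(marks))
print("median:", statistics.median(marks))
print("mode  :", statistics.mode(marks))

# Measures of spread
print("range :", max(marks) - min(marks))
print("stdev :", statistics.stdev(marks))      # sample standard deviation
print("var   :", statistics.variance(marks))   # sample variance
q1, q2, q3 = statistics.quantiles(marks, n=4)  # quartiles
print("quartiles:", q1, q2, q3)
```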
Inferential Statistics
Whilst descriptive statistics examine our immediate group of data (for example, the 100 students' marks), inferential statistics aim to draw conclusions that go beyond these data. In other words, inferential statistics are used to make inferences about a population from a sample, in order to generalize (make assumptions about the wider population) and/or make predictions about the future. For example, a Board of Examiners may want to compare the performance of 1000 students that completed an examination, of whom 500 are girls and 500 are boys. The 1000 students represent our "population". Whilst we are interested in the performance of all 1000 students, it may be impractical to examine all of their marks because of the time and cost required to collate them. Instead, we can examine a "sample" of these students and use the results to make generalizations about the performance of all 1000. For our example, we may choose a sample size of 200; since we are looking to compare boys and girls, we may randomly select 100 girls and 100 boys. We could then test, for example, whether there is a statistically significant difference in the mean mark between boys and girls, even though we have not measured all 1000 students.
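A minimal sketch of this sampling-and-testing logic is shown below, assuming SciPy is available for the significance test; the population marks are simulated and purely illustrative. Welch's t-test is used here because it does not assume the two groups have equal variances.

```python
# Sketch: inferring a boy/girl difference in mean mark from a sample of
# 100 + 100 students drawn from a 1000-student population (simulated data).
import random
import statistics
from scipy import stats  # assumes SciPy is installed

random.seed(1)
girls_pop = [random.gauss(66, 12) for _ in range(500)]
boys_pop  = [random.gauss(63, 12) for _ in range(500)]

# Randomly sample 100 girls and 100 boys instead of marking all 1000.
girls_sample = random.sample(girls_pop, 100)
boys_sample  = random.sample(boys_pop, 100)
print("sample means:",
      statistics.mean(girls_sample), statistics.mean(boys_sample))

# Welch's two-sample t-test: is the observed difference in sample means
# larger than chance alone would plausibly produce?
t, p = stats.ttest_ind(girls_sample, boys_sample, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")  # a small p suggests a real population difference
```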

13 Disseminating Systematic Reviews: Organizations
- Campbell Collaboration
- Cochrane Collaboration
- Joanna Briggs Institute
- EPPI-Centre
...and many more organizations that produce and publish their own reviews.

14 Disseminating Systematic Reviews: Journals
- Most journals welcome article versions of full Cochrane or Campbell reviews.
- Some do not wish to publish them if they are publicly available in a database.
- Some are sensitive to the argument that Cochrane- and Campbell-type reviews require too much time and effort, and favour: rapid reviews, narrow inclusion criteria, Best Practice Sheets, or critical appraisals of reviews.
- Check potential copyright issues!

15 Inclusion of Process and Implementation Aspects
Is there a need? It is important to know what works, but it is equally important to know what sort of programmes to put limited resources into. Two examples: the school feeding programme review, and the 'Scared Straight' programmes (Petrosino review), which have proven to cause more harm than benefit. There are some good theories that could potentially explain this, but too little empirical data in the trials to test them.

A Cochrane review on school feeding programmes concluded that these programmes significantly improved the growth and cognitive performance of disadvantaged children (Kristjansson et al., 2007). The highly heterogeneous trials in that review were further explored in a separate study to evaluate what works, for whom, and in what circumstances. The resulting realist synthesis reported that the included trials had many different designs and were implemented in varying social contexts and educational systems; by staff with different backgrounds, skills, and cultural beliefs; and with huge variation in the prevailing social, economic, and political context (Greenhalgh et al., 2007). Process data from some trials suggested that in situations of absolute poverty, even severely malnourished children may not benefit from school feeding programmes because of substitution at home. The findings of the realist synthesis complement those of the Cochrane review by illustrating that feeding programmes may be more effective for some participants than others, and that effectiveness is influenced by how programmes are implemented. The authors further stated that policymakers need to know not merely whether school feeding programmes work, but what sort of programme to put resources into. There needs to be a clear link to the complementary function of the two reviews, as noted in that last sentence.

Another example of a review acknowledging the potential impact of process-related aspects is the one by Petrosino and colleagues (2003) on the effect of 'Scared Straight' and other juvenile awareness programs for preventing juvenile delinquency. This review concluded that juvenile awareness programs failed to deter crime; moreover, they seemed to lead to more offending behaviour and caused harm to the very citizens they pledged to protect. In their attempt to discuss why, the authors pointed out that although there were many good post-hoc theories, the evaluations were not structured to provide the kind of mediating variables or 'causal models' necessary for an empirical response to this question in a systematic review (Petrosino et al., 2000). One factor believed to contribute to the negative effect was the degree of harshness in the inmate presentation; however, the one trial involving a tour of a reformatory with no presentation at all reported one of the largest negative effects (Michigan Department of Correction, 1967). Petrosino and colleagues abandoned their plans to compare results across the different designs included, stating that others may wish to take this up in the future. The same has been found for alcohol prevention and substance misuse prevention programs. Shocking, isn't it?

In short: reviewers evaluating complex interventions often experience difficulties in determining what exactly the intervention entailed, whether it was implemented fully or adhered to good-practice guidelines, and whether there were confounding factors in the wider social context that would affect the outcome of the intervention.

Roen and colleagues (2006) explored evidence on implementation in reviews evaluating interventions to reduce unintentional injuries among children and young people. Some studies described the interventions conducted, identified strengths and weaknesses of the intervention, considered the broader context or the relationship between implementation and outcomes, or explored reasons for anomalous findings. However, the research teams concluded that only a minority of the original studies described how implementation of the intervention may have influenced outcomes.

16 Inclusion of Process and Implementation Aspects
What is the problem? We do already address variation in the effects of interventions:
- factors related to patient or client groups
- timing and intensity of programs
- the potential impact of co-interventions
- ... (Glasziou, 2002; Higgins et al., 2002)
typically through meta-regression, sensitivity analysis, or the use of individual patient/client data (a minimal sensitivity-analysis sketch follows below).
Yet reviewers evaluating complex interventions often experience difficulties in determining:
- what exactly the intervention entailed
- whether it was implemented as intended
- whether there were confounding factors in the wider social context that would affect the outcomes
- ... (Egan et al., 2009)
These are issues beyond those related to program design/logic.
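As a concrete illustration of one such technique, here is a minimal leave-one-out sensitivity analysis under a fixed-effect, inverse-variance model; the effect sizes and standard errors are invented purely for illustration.

```python
# Sketch: leave-one-out sensitivity analysis, fixed-effect inverse-variance model.
# Effect sizes and standard errors below are invented for illustration only.
effects = [0.30, 0.25, 0.90, 0.20, 0.35]   # per-study effect estimates
ses     = [0.10, 0.12, 0.15, 0.08, 0.11]   # per-study standard errors

def pooled(e, s):
    """Fixed-effect pooled estimate: each study is weighted by 1 / variance."""
    w = [1 / se**2 for se in s]
    return sum(wi * ei for wi, ei in zip(w, e)) / sum(w)

print(f"all studies: {pooled(effects, ses):.3f}")
for i in range(len(effects)):
    e = effects[:i] + effects[i + 1:]
    s = ses[:i] + ses[i + 1:]
    # A large shift when study i is dropped flags it as influential.
    print(f"without study {i}: {pooled(e, s):.3f}")
```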

17 Inclusion of Process and Implementation Aspects
Cargo et al. (2011) developed a new instrument, the 'Extraction tool for Campbell Collaboration Review of childhood and youth reviews', to assist with the extraction of process and implementation variables in systematic reviews. We explored to what degree process and implementation variables are present in published educational reviews (N=10), which aspects are articulated most, and whether consideration of these items in reviews is possible, given the data provided by their primary studies.

18 Process and Implementation Aspects
Process and implementation aspects:
- Theory or change models shaping the intervention
- Characteristics of: the implementing organisation, partnering organisations, implementers, participants/clients/patients
- Protocols for the intervention
- Context: ecological and external
- Process and implementation factors
Design and methodological issues:
- Sensitivity analysis
- Quality assessment and risk of bias

19 Presence of Process and Implementation Variables in Systematic Reviews by the Campbell Collaboration
Could be extracted in most reviews: age, gender, grade or grade level, and ethnicity of the participants; who the implementers were; implementer training; the intervention protocol; the intervention setting; attrition; dose delivered.
Could be extracted in some reviews: information regarding the organisation providing the intervention or service; the presence or absence of partnering organizations; role of the implementers; SES of the participants; the engagement of the implementer; the presence and content of co-interventions. Many reviews performed a sensitivity analysis.
Could not be extracted (or only to a limited extent): information on the service delivery protocol; ethnicity, SES, age and gender of the implementer; minimum dose; reach of the intervention.
From the process and implementation section, recruitment, minimum attrition, minimum dose received, minimum fidelity, and participant engagement were not mentioned in any of the reviews.

20 Presence of Process and Implementation Variables in the Primary Studies Used by Systematic Reviews
Could be extracted in most studies: the use of available resources such as staff, buildings or materials; the implementer's occupation, previous training or experience; the intervention protocol as well as the service delivery protocol; the characteristics of the participants or students enrolled in an intervention; the place and/or setting; the country and its degree of urbanization; the length of the program; frequency and type as aspects of dose delivered.
Could be extracted in some studies: the quality of the intervention materials (e.g. curricula); the funding sources; the use of the joined forces and expertise of other agencies; a clear explication of the change process envisioned.
Could not be extracted (or only to a limited extent): a diagram of the change model; leadership and technical support; alliances between the intervention program and other agencies involved.

21 To what extent were process and implementation variables from the primary studies included in the systematic review?
Goerlich et al. (2006): Some aspects of the implementing organization, such as adequacy of resources (e.g. staff) and quality of intervention materials, were present in all three primary studies but not in the systematic review. Attrition, reach, and minimum dose delivered, from the process and implementation section, were not considered in the review although they were mentioned in all studies.
Zwi et al. (2007): Reporting of items from the process and implementation section was mostly in accordance with the presence of these items in the primary studies. Only dose delivered was not considered in the systematic review, although it was present in all primary studies. Twelve out of thirteen studies reported a change model; this was, however, not considered in the review. The implementer was discussed, but no clear provider type was specified (considered in ten studies out of thirteen). Age, gender, and grade or grade level of the participant were considered most in the original studies and were also present in the review. Ethnicity and SES were reported in nine studies but not mentioned in the review.
