Reporting results of systematic reviews


Reporting results of systematic reviews Karin Hannes Centre for Methodology of Educational Research

Overview
- Anatomy of a Systematic Review
- Dissemination Channels
- Inclusion of Process and Implementation Aspects in a Systematic Review

Anatomy of a Systematic Review
Background/Introduction: establish the need; distinguish the review from previous review efforts; state the objectives and review questions.
Methods: criteria for inclusion and exclusion (type of population, type of studies, type of intervention and comparison, type of outcomes).

Anatomy of a Systematic Review (cont.)
Methods: locating studies. Report the consulted data sources (databases, grey literature, reference searches, expert consultation, etc.) and the final search strategy. One published review, for example, explicitly mentions a revised search based on what was found in a first round.

Anatomy of a Systematic Review (cont.)
Methods: process of selecting studies. Include the screening instrument in an annex. Possible screening criteria: timespan, language restrictions, discipline or scientific field.
Example: inclusion and exclusion criteria for a review on qualitative evidence synthesis (QES) in the literature (an update of Dixon-Woods):
1. Published between January 2005 and December 2008 (already filtered out).
2. Conducted within health care or a health care context. Include: syntheses of qualitative (with quantitative) research by synthesis methods other than informal review. Exclude: papers commenting on methodological issues but without details of the outcomes of the synthesis; papers that do not explicitly describe or name a method for synthesis; reviews on concepts/definitions used within health care or on research issues.
3. Published in the English language.
4. Published in a peer-reviewed journal.
(Restrictions such as the last two would not be acceptable in a Cochrane or Campbell review.)
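Screening criteria of this kind can be expressed as a simple, auditable filter. The sketch below is purely illustrative: the record fields and criteria values are assumptions modelled on the QES example, not the review's actual coding.

```python
# Illustrative screening of candidate records against inclusion criteria.
# Field names and criteria values are hypothetical, modelled on the QES example.
def passes_screening(record):
    """Return True if a record meets all inclusion criteria."""
    in_timespan = 2005 <= record["year"] <= 2008
    in_discipline = record["field"] == "health care"
    in_language = record["language"] == "English"
    peer_reviewed = record["peer_reviewed"]
    names_method = record["names_synthesis_method"]  # excludes purely methodological papers
    return all([in_timespan, in_discipline, in_language, peer_reviewed, names_method])

records = [
    {"year": 2006, "field": "health care", "language": "English",
     "peer_reviewed": True, "names_synthesis_method": True},
    {"year": 2004, "field": "health care", "language": "English",
     "peer_reviewed": True, "names_synthesis_method": True},  # outside the timespan
]
included = [r for r in records if passes_screening(r)]
```

Making each criterion an explicit named condition mirrors good screening practice: every exclusion can be traced to a specific rule.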

Anatomy of a Systematic Review (cont.)
Methods: data extraction. Introduce the coding form; describe and define the coding categories; describe the process of data extraction ('at least two independent reviewers'). Example of a data-extraction package (based on EPOC review group documents): screening form, critical appraisal checklist, data-extraction sheet. 'We used the EPOC guidance on data extraction (reference).'
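A coding form can also be represented as a structured record, which makes checking agreement between independent reviewers straightforward. The fields below are illustrative assumptions only, loosely inspired by EPOC-style extraction sheets, not the actual EPOC form.

```python
from dataclasses import dataclass, field

# A minimal sketch of a data-extraction (coding) record. The fields are
# hypothetical, loosely inspired by EPOC-style extraction sheets.
@dataclass
class ExtractionRecord:
    study_id: str
    reviewer: str          # each study is coded by at least two independent reviewers
    population: str = ""
    intervention: str = ""
    outcomes: list = field(default_factory=list)

def reviewers_agree(rec_a, rec_b):
    """Compare two independent extractions of the same study field by field."""
    assert rec_a.study_id == rec_b.study_id
    return rec_a.population == rec_b.population and rec_a.intervention == rec_b.intervention

a = ExtractionRecord("S01", "reviewer_1", population="adults", intervention="audit")
b = ExtractionRecord("S01", "reviewer_2", population="adults", intervention="audit")
```

Disagreements flagged this way would then be resolved by discussion or a third reviewer, as is usual in systematic review practice.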

http://epoc.cochrane.org/data-extraction

Descriptive part

Statistical part

Anatomy of a Systematic Review (cont.)
Results: descriptive results; inferential results (if applicable). Discussion. Conclusions: implications for practice; implications for research. References. Appendix: search strings, critical appraisal checklist, list of excluded studies (usually a flow chart), coding/extraction sheets, outcomes of the meta-synthesis exercise, etc. The Campbell Collaboration would also ask for a user sheet (a short summary avoiding scientific jargon).

Descriptive and Inferential Statistics
When analysing data, for example the marks achieved by 100 students for a piece of coursework, it is possible to use both descriptive and inferential statistics in your analysis of their marks. Typically, in most research conducted on groups of people, you will use both descriptive and inferential statistics to analyse your results and draw conclusions. So what are descriptive and inferential statistics, and what are their differences?

Descriptive Statistics
Descriptive statistics is the term given to the analysis of data that helps describe, show or summarize data in a meaningful way such that, for example, patterns might emerge from the data. Descriptive statistics do not, however, allow us to draw conclusions beyond the data we have analysed or to reach conclusions regarding any hypotheses we might have made; they are simply a way to describe our data. Descriptive statistics are very important: if we simply presented our raw data, it would be hard to visualize what the data were showing, especially if there were a lot of them. Descriptive statistics therefore allow us to present the data in a more meaningful way, which permits simpler interpretation. For example, if we had the results of 100 pieces of students' coursework, we might be interested in the overall performance of those students and in the distribution or spread of the marks. Descriptive statistics allow us to do this.
How to properly describe data through statistics and graphs is an important topic, discussed in other Laerd Statistics guides. Typically, there are two general types of statistic used to describe data:
- Measures of central tendency: ways of describing the central position of a frequency distribution for a group of data. In this case, the frequency distribution is simply the distribution and pattern of marks scored by the 100 students, from the lowest to the highest. We can describe this central position using a number of statistics, including the mode, median, and mean.
- Measures of spread: ways of summarizing a group of data by describing how spread out the scores are. For example, the mean score of our 100 students may be 65 out of 100, but not all students will have scored 65 marks; their scores will be spread out, some lower and others higher. To describe this spread, a number of statistics are available to us, including the range, quartiles, absolute deviation, variance, and standard deviation.
When we use descriptive statistics, it is useful to summarize our group of data using a combination of tabulated description (i.e. tables), graphical description (i.e. graphs and charts), and statistical commentary (i.e. a discussion of the results).

Inferential Statistics
Whilst descriptive statistics examine our immediate group of data (for example, the 100 students' marks), inferential statistics aim to make inferences from these data in order to reach conclusions that go beyond them. In other words, inferential statistics are used to make inferences about a population from a sample, in order to generalize (make assumptions about the wider population) and/or make predictions about the future.
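The measures of central tendency and spread described above can be computed directly with Python's standard library; the marks below are made-up illustrative data, not results from any study.

```python
import statistics

# Descriptive statistics for a small set of coursework marks (made-up data).
marks = [45, 52, 58, 61, 65, 65, 68, 72, 80, 94]

central = {
    "mean": statistics.mean(marks),      # arithmetic average
    "median": statistics.median(marks),  # middle value of the sorted marks
    "mode": statistics.mode(marks),      # most frequently occurring mark
}
spread = {
    "range": max(marks) - min(marks),
    "variance": statistics.variance(marks),  # sample variance
    "stdev": statistics.stdev(marks),        # sample standard deviation
}
```

As the text notes, these numbers only describe this particular group of marks; they support no conclusion beyond it.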
For example, a Board of Examiners may want to compare the performance of 1000 students that completed an examination. Of these, 500 students are girls and 500 students are boys. The 1000 students represent our "population". Whilst we are interested in the performance of all 1000 students, girls and boys, it may be impractical to examine the marks of all of these students because of the time and cost required to collate all of their marks. Instead, we can choose to examine a "sample" of these students and then use the results to make generalizations about the performance of all 1000 students. For the purpose of our example, we may choose a sample size of 200 students. Since we are looking to compare boys and girls, we may randomly select 100 girls and 100 boys in our sample. We could then use this, for example, to see if there are any statistically significant differences in the mean mark between boys and girls, even though we have not measured all 1000 students.
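The boys-versus-girls comparison above can be sketched as a two-sample test of the difference in means. This is a minimal illustration using Welch's t statistic computed by hand with the standard library; the marks are invented, and a real analysis would use a dedicated statistics package and report a p-value, not just the statistic.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for the difference in means of two independent samples."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    # Standard error of the difference, allowing unequal variances.
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Invented marks for two small samples (stand-ins for the 100 girls / 100 boys).
girls = [68, 72, 65, 70, 74, 69, 71, 66]
boys = [60, 64, 58, 63, 61, 65, 59, 62]
t = welch_t(girls, boys)
```

A large absolute t value suggests the observed difference in sample means is unlikely to be due to sampling variation alone, which is exactly the kind of inference from sample to population described above.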

Disseminating Systematic Reviews: Organizations
- Campbell Collaboration: www.campbellcollaboration.org
- Cochrane Collaboration: www.cochrane.org
- Joanna Briggs Institute: http://connect.jbiconnectplus.org/JBIReviewsLibrary.aspx
- EPPI-Centre
... and many more organizations that produce and publish their own reviews.

Disseminating Systematic Reviews: Journals
Most journals welcome article versions of full Cochrane or Campbell reviews. Some do not wish to publish them if they are publicly available in a database. Some are sensitive to the argument that Cochrane- and Campbell-type reviews require too much time and effort; alternatives include rapid reviews, narrower inclusion criteria, and best-practice sheets or critical appraisals of reviews. Check potential copyright issues!

Inclusion of Process and Implementation Aspects
Is there a need? It is important to know what works, but it is equally important to know what sort of programmes to put limited resources into. Two examples: the school feeding programme review, and the 'Scared Straight' programmes (Petrosino review), which have been shown to cause more harm than benefit; there are some good theories that could potentially explain this, but too little empirical data in the trials to test them.
A Cochrane review on school feeding programmes concluded that these programmes significantly improved the growth and cognitive performance of disadvantaged children (Kristjansson et al., 2007). The highly heterogeneous trials in that review were further explored in a separate study to evaluate what works, for whom, and in what circumstances. The resulting realist synthesis reported that the included trials had many different designs and were implemented in varying social contexts and educational systems; by staff with different backgrounds, skills, and cultural beliefs; and with huge variation in the prevailing social, economic, and political context (Greenhalgh et al., 2007). Process data from some trials suggested that in situations of absolute poverty, even severely malnourished children may not benefit from school feeding programmes because of substitution at home. The findings from the realist synthesis complement those of the Cochrane review by illustrating that feeding programmes may be more effective for some participants than for others, and that effectiveness is influenced by how the programmes are implemented. The authors further stated that policymakers need to know not merely whether school feeding programmes work but what sort of programme to put resources into. There needs to be a clear link to the complementary function of the two reviews.
Another example of a review acknowledging the potential impact of process-related aspects is the one by Petrosino and colleagues (2003) on the effect of 'Scared Straight' and other juvenile awareness programs for preventing juvenile delinquency. This review concluded that juvenile awareness programs failed to deter crime; moreover, they seemed to lead to more offending behaviour and caused harm to the very citizens they pledged to protect. In their attempt to discuss why, the authors pointed out that there were many good post-hoc theories, but that the evaluations were not structured to provide the kind of mediating variables or 'causal models' necessary for an empirical response to this question in a systematic review (Petrosino et al., 2000). One factor believed to contribute to the negative effect was the degree of harshness in the inmate presentation. However, the one trial involving a tour of a reformatory with no presentation reported one of the largest negative effects (Michigan Department of Correction, 1967). Petrosino and colleagues abandoned their plans to compare results across the different designs included, stating that others may wish to take this up in the future. The same has been found for alcohol prevention and substance misuse prevention programs. Shocking, isn't it! Reviewers evaluating complex interventions often experience difficulties in determining what exactly the intervention entailed, whether it was implemented fully or adhered to good-practice guidelines, and whether there were confounding factors in the wider social context that would affect the outcome of the intervention. Roen and colleagues (2006) explored evidence on implementation in reviews evaluating interventions to reduce unintentional injuries amongst children and young people.
Some studies described the interventions conducted, identified strengths and weaknesses of the intervention, considered the broader context or the relationship between implementation and outcomes, or explored reasons for anomalous findings. However, both research teams concluded that only a minority of the original studies described how implementation of the intervention may have influenced outcomes.

Inclusion of Process and Implementation Aspects
What is the problem? We do already address variation in the effects of interventions, for example factors related to patient or client groups, the timing and intensity of programmes, and the potential impact of co-interventions (Glasziou 2002; Higgins et al. 2002), using meta-regression, sensitivity analysis, or individual patient/client data. However, reviewers evaluating complex interventions often experience difficulties in determining what exactly the intervention entailed, whether it was implemented as intended, and whether there were confounding factors in the wider social context that would affect the outcomes (Egan et al., 2009). These are issues beyond those related to programme design and logic.

Inclusion of Process and Implementation Aspects
Cargo et al. (2011) developed a new instrument, the 'Extraction tool for Campbell Collaboration Review of childhood and youth reviews', to assist with the extraction of process and implementation variables in systematic reviews. We explored to what degree process and implementation variables are present in published educational reviews (N=10): which aspects are articulated most, and whether consideration of these items in reviews is possible given the data provided by their primary studies.

Process and Implementation Aspects (contact: Margaret.cargo@unisa.edu.au)
- Theory or change models shaping the intervention
- Characteristics of: the implementing organisation, partnering organisations, implementers, and participants/clients/patients
- Protocols for the intervention
- Context: ecological and external
- Process and implementation factors
- Design and methodological issues: sensitivity analysis, quality assessment and risk of bias
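The categories on this slide can be organised as a simple checklist for auditing how completely a review reports process and implementation aspects. The grouping below mirrors the slide only; it is a hypothetical sketch, not the actual Cargo et al. (2011) instrument, and the item names are assumptions.

```python
# Hypothetical checklist of process/implementation extraction categories.
# The grouping mirrors the slide; it is NOT the actual Cargo et al. (2011) tool.
EXTRACTION_CATEGORIES = {
    "theory": ["theory or change model shaping the intervention"],
    "characteristics": ["implementing organisation", "partnering organisation",
                        "implementers", "participants/clients/patients"],
    "protocols": ["intervention protocol", "service delivery protocol"],
    "context": ["ecological", "external"],
    "process_and_implementation": ["recruitment", "reach", "dose delivered",
                                   "dose received", "fidelity", "attrition"],
    "design_and_methods": ["sensitivity analysis", "quality assessment", "risk of bias"],
}

def coverage(review_items):
    """Fraction of checklist items that a given review reports on."""
    all_items = [i for items in EXTRACTION_CATEGORIES.values() for i in items]
    return sum(i in review_items for i in all_items) / len(all_items)
```

Applied across a set of reviews, such a coverage score would quantify the gaps described on the following slides, where many process items could not be extracted at all.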

Presence of Process and Implementation Variables in Systematic Reviews by the Campbell Collaboration
Could be extracted in most reviews: age, gender, grade or grade level, and ethnicity of the participants; who the implementers were; implementer training; the intervention protocol; the intervention setting; attrition; dose delivered.
Could be extracted in some reviews: information regarding the organisation providing the intervention or service; the presence or absence of partnering organisations; the role of implementers; SES of the participants; the engagement of the implementer; the presence and content of co-interventions. Many reviews performed a sensitivity analysis.
Could not be extracted (or only to a limited extent): information on the service delivery protocol; ethnicity, SES, age and gender of the implementer; minimum dose; reach of the intervention. From the process and implementation section, recruitment, minimum attrition, minimum dose received, minimum fidelity, and participant engagement were not mentioned in any of the reviews.

Presence of process and implementation variables in the primary studies used by systematic reviews.
Could be extracted in most studies: the use of available resources such as staff, buildings or materials; the implementer's occupation, previous training or experience; the intervention protocol as well as the service delivery protocol; the characteristics of the participants or students enrolled in an intervention; the place and/or setting, the country and its degree of urbanization; the length of the program; and the frequency and type as aspects of dose delivered.
Could be extracted in some studies: the quality of the intervention materials (e.g. curricula); the funding sources; the use of the joined forces and expertise of other organisations; a clear explication of the change process envisioned.
Could not be extracted (or only to a limited extent): a diagram of the change model; leadership and technical support; alliances between the intervention program and other organisations involved.

To what extent were process and implementation variables from the primary studies included in the systematic review?
Goerlich et al. (2006): Some aspects of the implementing organization, such as adequacy of resources (e.g. staff) and quality of intervention materials, were present in all three primary studies but not in the systematic review. Attrition, reach, and minimum dose delivered, from the process and implementation section, were not considered in the review although they were mentioned in all studies.
Zwi et al. (2007): Reporting of items from the process and implementation section was mostly in accordance with the presence of these items in the primary studies. Only dose delivered was not considered in the systematic review although it was presented in all primary studies. Twelve out of thirteen studies reported a change model; this was, however, not considered in the review. The implementer was discussed, but no clear provider type was specified (considered in ten studies out of thirteen). Age, gender, and grade or grade level of the participant were considered most in the original studies and were also present in the review. Ethnicity and SES were reported in nine studies but not mentioned in the review.

Margaret.cargo@unisa.edu.au