Presentation on theme: "Rapid Searches for Rapid Reviews"— Presentation transcript:

1 Rapid Searches for Rapid Reviews
Liz Dennett, Dagmara Chojecki June 24, 2012

2 Outline
What are rapid reviews and why do them?
What does the lit say?
Methodological issues
Rapid review survey of IRG members
IHE & rapid reviews

3 Rapid Reviews 101
Increasing demand for rapid access to current research
Clinical urgency / intense demand for public uptake of a technology
Evidence-informed decision making and policy
RRs accelerate and streamline traditional SR practices
Methodological concessions that can introduce bias
No standard methodologies

The audience consists mainly of government policy makers, healthcare institutions, health professionals, and patient associations at federal, regional, or local jurisdictions (much like general HTAs: very policy driven). RRs use methodological shortcuts, whether in the search process or in other steps such as evidence analysis and synthesis; these shortcuts may introduce bias into the results of a review and should be clearly articulated to the audience, along with the resulting biases. Standard reviews may take six months or longer (as identified by the literature in this area); rapid reviews aim to be complete in one to six months, although some may take longer. There are no standards that we are aware of for conducting rapid reviews, and there are multiple shortcuts that can be used.

4 What Does the Lit Say?
Little empirical evidence comparing RRs vs. SRs
Lack of transparency in RRs (Ganann et al. 2010)
Variety of types of RRs and methods (Cameron 2007 ASERNIPS report):
- Leave out economic factors, social issues, clinical outcomes
- Fewer details than SRs
- Only published lit (no grey lit)
- No quality assessment (Moher 1998)
- Date, language, and study-type limits on the search

In general, there is little evidence comparing the methodologies used in RRs with those used in SRs, or the possible biases introduced by taking shortcuts (whether in the search or in the analysis portions of the process). This is not surprising, given that a rapid review is rarely, if ever, followed by a standard HTA/SR: the nature of HTA does not lend itself to this, since once a policy question is answered, further funds and resources are unlikely to be spent on another review. An effective comparison of the shortcomings of an RR versus a standard SR, or of their effects on review results, is only possible by comparing reviews on the same subject done by the same group; otherwise too many confounding factors are introduced. In short, because of the methodological concessions made, we do not really know how the results of an RR may differ from those of a standard HTA or SR, and there is little research on what constitutes an effective methodological shortcut, especially one that does not compromise the integrity of the review process by introducing too much bias. No proven methods cut time without compromising the results of the review in some fashion.

The research has, in fact, identified a general lack of transparency in RRs: what concessions were made, the timelines of the RR, and the possible biases introduced are rarely reported. This is particularly problematic, as consumers may not be aware of the shortcomings of the reports and may be overconfident in the results, although this can be said of SRs in general, as detailed reporting is often lacking. As previously noted, there is no standard methodology or timeline for RRs; they can range from 1 to 9 months, taking as long as a traditional SR (Ganann et al.), and concessions can be made anywhere, not necessarily just in the search, although for the purpose of this workshop we will focus on the search process and the possible problems and biases of employing various methodological shortcuts there.

Some of the shortcuts (search and analysis) found to be in use give a flavour of how cuts are made. This information comes mainly from an ASERNIPS report in which SRs and RRs produced by the agency on the same topic were compared: RRs tend to leave out economic, social/systems, and sometimes even clinical-outcome data, in an effort to address the question of effectiveness only and not peripheral information; they tend to provide fewer details; and leaving out grey literature is another common practice (the implications of which are discussed briefly below). One concern identified when examining RR practices was a lack of quality assessment in some of these reviews. If the quality assessment process is eliminated or not articulated, the implications for the results of the review are much more substantial: Moher et al. found that trial quality can significantly affect estimates of benefit, thereby distorting the results of a review. If quality assessment is not part of the rapid review approach, there are clearly substantial limitations associated with the literature synthesis process and the utility of the results.

5 What Does the Lit Say?
RRs not appropriate for all technology assessments (Cameron 2007, Watt et al. 2008)
Standardization of methods may not even be appropriate (Ganann 2010, Cameron 2007)
RR conclusions may be harder to generalize and less certain (Watt et al. 2008)

RRs may not be appropriate for assessing all technologies: Watt argues that some topics are complex in nature and RRs often cannot address them adequately or provide valid and reliable evidence in such a short timeframe. We see over and over again that various methods are used and no standard methodology has been developed; many authors argue that standardization may not even be appropriate, as the topic of the review, the timelines given, and the resources at hand will dictate what methodological cuts are made, and these can vary from report to report and agency to agency. Watt et al. conducted a review of current methods and practice in HTA and suggest that, due to the limitations associated with rapid reviews, conclusions may be harder to generalize and may provide less certainty than those of traditional systematic reviews. Rapid reviews with shorter timeframes (one to three months) were often less systematic in their search for evidence than those with longer timeframes (three to six months). Watt et al. suggested that this might lead to uncertainty around the conclusions drawn and an inability to answer certain types of questions (e.g., economic analyses). Because of these limitations, rapid reviews should be viewed only as interim guidance until more systematic reviews can be conducted (Cameron et al., Watt et al.).

6 Possible Biases Introduced
Selection bias (Butler et al. 2005, Egger 1998)
Publication bias
Language of publication bias
Database bias (Egger 1998, Sampson 2003): Medline or Embase or both?

These are biases encountered when searching only the most easily accessible literature (limited searches and/or known articles identified by experts).

Selection bias: systematic error in reviews due to how studies are selected for inclusion. By selecting only a known set of articles, or articles from a few already-known resources (no grey literature, limited databases, no hand searching), you may leave out articles that present results or viewpoints differing from those already established or known to the reviewers.

Publication bias: including only published literature rather than published literature plus grey literature. Published studies may not be truly representative of all valid studies undertaken, and this bias may distort meta-analyses and systematic reviews. Publication bias has been defined as the tendency of investigators to submit, or of reviewers and editors to accept, manuscripts based on the direction or strength of the study findings; the strongest and most positive studies are the most likely to be published. The problem may be particularly significant when the research is sponsored by entities with a financial or ideological interest in achieving favourable results, i.e., usually only favourable results are published, not negative ones. Studies have found that including grey literature decreases publication bias and yields more conservative treatment effects.

Language of publication bias: by limiting to a certain language, a whole body of literature can be left out. The effect is stronger in some topic areas than others; for acupuncture, for example, most of the literature is in Chinese, so a whole body of evidence goes missing.

Database bias: studies published in journals not indexed in one of the major databases are effectively hidden from reviewers and meta-analysts. A minority of trials will be published in indexed local or international journals, but results and other characteristics are likely to differ between indexed and non-indexed groups; trials with significant results may be more likely to be published in an indexed journal, whereas trials with null results are published in non-indexed journals. Only 2% of research from the developing world is indexed in the major databases (Medline, Embase, or Web of Science). Sampson et al. found that searching Medline but not Embase has the potential to affect meta-analysis effect size estimates, as illustrated in the sketch below.
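To make the database-bias mechanism concrete, here is a toy sketch. It is our illustration, not part of the original slides, and the trial numbers are invented rather than drawn from Sampson et al.; it shows how dropping trials unique to one database can shift a fixed-effect inverse-variance pooled estimate.

```python
# Toy illustration with INVENTED numbers: how excluding one database's
# unique trials can shift a meta-analytic estimate. Fixed-effect
# inverse-variance pooling of log odds ratios:
#   pooled = sum(w_i * y_i) / sum(w_i),  where w_i = 1 / se_i^2
import math

# (log odds ratio, standard error, source database) -- hypothetical trials
trials = [
    (-0.40, 0.20, "medline"),
    (-0.25, 0.15, "medline"),
    (-0.05, 0.25, "embase-only"),  # null-ish trials found only via Embase
    (0.10, 0.30, "embase-only"),
]

def pooled(rows):
    """Return the fixed-effect pooled log-OR and its standard error."""
    weights = [1 / se ** 2 for _, se, _ in rows]
    est = sum(w * y for w, (y, _, _) in zip(weights, rows)) / sum(weights)
    return est, math.sqrt(1 / sum(weights))

for label, rows in [("All trials  ", trials),
                    ("Medline only", [t for t in trials if t[2] == "medline"])]:
    est, se = pooled(rows)
    print(f"{label}: pooled log-OR = {est:+.3f} (SE {se:.3f}), "
          f"OR = {math.exp(est):.2f}")
```

Run on these invented inputs, the Medline-only pool suggests a noticeably stronger benefit than the full set; the direction and size of the shift depend entirely on which trials a database search misses.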

7 Searching for Rapid Reviews at IHE
Checklist of databases and grey literature sources that we consider searching
Differs for each rapid review product at IHE (based on project time more than anything else)
Employ other strategies to reduce results and time: use of an existing strategy, title searching, restricted date, publication type, geography, language (see the sketch below)

Publication type here means systematic reviews and HTAs rather than primary studies, and no conference abstracts. For our most rapid response product we do not go to the full text at all and use abstracts only (not really a searching issue).
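As a rough illustration of how several of these limits combine into a single query, here is a minimal sketch against NCBI's PubMed E-utilities API. It is not IHE's actual workflow; the topic terms, the year window, and the choice of review[pt] and english[la] as the publication-type and language limits are our assumptions for the example.

```python
# Illustrative sketch (not IHE's workflow): applying common rapid-review
# shortcuts -- title-only terms, a publication-date window, and language
# and pub-type limits -- in one PubMed E-utilities esearch query.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def rapid_search(topic_terms, year_from, year_to):
    """Run a deliberately narrow PubMed search; return hit count and PMIDs."""
    # [ti] restricts each term to article titles; review[pt] and english[la]
    # stand in for the pub-type and language limits mentioned on the slide.
    term = " AND ".join(f"{t}[ti]" for t in topic_terms)
    term += " AND review[pt] AND english[la]"
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",        # limit by publication date
        "mindate": str(year_from),
        "maxdate": str(year_to),
        "retmax": 200,
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    result = resp.json()["esearchresult"]
    return int(result["count"]), result["idlist"]

count, pmids = rapid_search(["telehealth", "stroke"], 2005, 2012)
print(f"{count} records; first PMIDs: {pmids[:5]}")
```

Title-only terms trade recall for precision, which is exactly the concession a rapid search makes: the result set is small enough to screen quickly, at the risk of missing studies whose titles do not name the topic.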

8 How are searches altered for rapid reviews?
Very little information available on how searches are being altered for rapid reviews
Wanted more details (e.g., which strategies are most common? What limits are used? Are there strategies we are not using that we could be?)
Decided to survey other HTA organizations in order to identify trends

The ASERNIPS report's survey showed that rapid HTA reports were more likely to be done on a "restricted number of databases" and less likely to include grey literature or hand searching. Ganann briefly summarized the strategies in use (i.e., limit by language, date, or geography; restrict the number of databases; limit the grey literature search).

9 Survey on adaptation of search strategies for Rapid HTAs
Methods:
Created a list of strategies for reducing the time it takes to do a search and/or the number of results the search returns
Created a survey instrument (FluidSurveys) and pretested it with former HTA searchers
Received ethics approval
Sent the survey out on the IRG listserv
Survey was open from March 26 to April 6

10 Definitions for Survey
Full HTA: an HTA report that starts with a comprehensive search, involving numerous databases and grey literature sources, in order to find all relevant research. The included primary studies are critically appraised and synthesized, and meta-analysis is performed if possible. (Because of the methodological rigour involved, these reviews often take longer than six months to produce, but may take less depending on available resources or the nature of the question.)
Rapid HTA: an HTA report where methodological compromises are made in order to meet shorter timelines. (These reports would generally take at least one week but no more than six months to produce.)

These were our definitions. We developed them with help from one of the researchers at IHE, who had reviewed much of the literature in this area and written a paper pointing out flaws in other definitions (the main flaw being that rapid reviews cannot be defined strictly by time, as timelines depend greatly on the resources available to an organization).

11 Results
Received 17 completed responses from 16 unique organizations
Responses from Canada, England, Scotland, Germany, Sweden, Norway, the Netherlands, Malaysia, Spain, Argentina, and Poland
11 produce both full and rapid HTAs, 2 produce only rapid HTAs, and 3 produce only full HTAs
The 13 rapid HTA producers together produce 32 different rapid HTA products
Timelines range from 1 day to 9 months

At first we were a little disappointed by the number of responses, but it was very similar to the ASERNIPS survey and to a survey on rapid HTAs recently done by Shannon Kelly (a researcher at CADTH).

12 Number of Databases
10 out of 12 (83%) reported a lower minimum or maximum number of databases for at least one of their rapid HTA products as compared to their full HTA reports (e.g., 5-7 for a rapid HTA vs. 8-11 for a full HTA)
However, 10 out of 12 had at least one rapid HTA product whose range for the number of databases searched overlapped with the range for full HTAs

While many organizations may search fewer databases for rapid HTAs, they do not always do so. When asked how many databases they searched for rapid and full HTAs, most organizations provided a range (for example, 5-7 databases for one rapid HTA product, 7-10 for another, and 6-12 for full HTAs).

13 Grey Literature
9 of 12 (75%) information specialists (IS) reduce the number of grey literature sources they search
However, 32 out of 32 (100%) of the rapid HTA products included some grey literature
31 out of 32 (97%) include a search for publications from other HTA agencies

14 Other strategies
7 of 12 (58%) IS try to make their search strategy more precise
5 of 12 (42%) use a more precise version of a methodological filter, i.e., they do not use the most sensitive filter available (see the sketch below)
9 of 14 (64%) use an unmodified (or only slightly modified) preexisting search strategy if one is available and appropriate
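To show what "more precise vs. most sensitive" means in practice, here is a small sketch (ours, not from the survey) that counts PubMed hits for the same hypothetical topic strategy under a narrow and a broad therapy filter. The two filter strings are our transcription of the PubMed Clinical Queries therapy filters and should be verified against the current Clinical Queries page before reuse.

```python
# Sketch of the precision/sensitivity trade-off in methodological filters.
# NARROW and BROAD are our transcription of the PubMed Clinical Queries
# therapy filters (verify before reuse); "home telehealth[tiab]" is a
# hypothetical topic strategy.
import requests

NARROW = ('(randomized controlled trial[Publication Type] OR '
          '(randomized[Title/Abstract] AND controlled[Title/Abstract] AND '
          'trial[Title/Abstract]))')
BROAD = ('((clinical[Title/Abstract] AND trial[Title/Abstract]) OR '
         '"clinical trials as topic"[MeSH Terms] OR '
         'clinical trial[Publication Type] OR random*[Title/Abstract] OR '
         '"random allocation"[MeSH Terms] OR '
         '"therapeutic use"[MeSH Subheading])')

def hit_count(term):
    """Return the PubMed record count for a query via E-utilities esearch."""
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 0},
        timeout=30,
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

base = "home telehealth[tiab]"
for label, flt in (("narrow (precise)", NARROW), ("broad (sensitive)", BROAD)):
    print(label, hit_count(f"{base} AND {flt}"))
```

The broad filter returns far more records: a rapid search accepts the narrow filter's smaller, more screenable set at the cost of possibly missing some relevant trials.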

15 Main points
All respondents use some strategies to speed up the search or reduce the number of results for a rapid HTA
Great variability between organizations (and possibly between different IS at the same organization)
It appears clear that other factors (such as the nature of the question) affect the choice of time-saving strategies

16 Unanswered questions
Should best practices for searching for rapid reviews be established?
What impact does each of the different strategies have on the comprehensiveness of the search? (Evidence for some, but not for others.)
What are the most effective strategies (i.e., those that save the most time but lose the fewest relevant articles)?
Would it be useful for a group such as the IRG to come up with best practices, or is there just too much heterogeneity for it to be possible?
Does the effectiveness of a strategy depend too much on the nature of the question for us to ever get an answer?

17 Next steps for IHE
Review rapid review protocols for all types of rapid HTAs in light of the survey results and the evidence

The evidence base may not be perfect, but we see this survey, and the subsequent reading of the research, as an opportunity to improve our practice at IHE.

18 References
Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci 2010;5:56. doi:10.1186/1748-5908-5-56.
Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S, Facey K, Hailey D, Norderhaug I, Maddern G. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care 2008;24.
Butler G, Hodgkinson J, Holmes E, Marshall S. Evidence based approaches to reducing gang violence. West Midlands, UK: Government Social Research Unit; 2004.
Cameron A. Rapid versus full systematic reviews: an inventory of current methods and practice in Health Technology Assessment. Australia: ASERNIPS; 2007:1-119.
Moher D, Pham B, Jones A, Cook DJ, Jadad AR, Moher M, Tugwell P, Klassen TP. Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet 1998;352:609-613.
Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S, Facey K, Hailey D, Norderhaug I, Maddern G. Rapid versus full systematic reviews: validity in clinical practice? ANZ J Surg 2008;78.
Egger M, Smith GD. Bias in location and selection of studies. BMJ 1998;316:61-66.
Sampson M, Barrowman NJ, Moher D, Klassen TP, Pham B, Platt R, St John PD, Viola R, Raina P. Should meta-analysts search Embase in addition to Medline? J Clin Epidemiol 2003;56.

19 Thank you very much for your attention. www.ihe.ca dchojecki@ihe.ca

