1 Peer reviewer training part I: What do we know about peer review?
Dr Trish Groves, Deputy Editor, BMJ
2 What do editors want from papers?
- Importance
- Originality
- Relevance to readers
- Usefulness to readers and, ultimately, to patients
- Truth
- Excitement/"wow" factor
- Clear and engaging writing
We make many of these judgments ourselves, but also rely on reviewers' opinions.
3 Peer review
- As many processes as there are journals or grant-giving bodies
- No operational definition; usually implies "external review"
- Largely unstudied until the 1990s
- Benefits come through improving what's published rather than sorting the wheat from the chaff
At the BMJ we do a lot of internal review as well, and consider it an important part of our peer review process.
4 What is peer review?
Review by peers. Includes:
- internal review (by editorial staff)
- external review (by experts in the field)
Peer review is not used only by journals: it is also used for grant applications, by ethics committees, and for conference papers and abstracts. But here we're talking about peer review and critical appraisal for publication.
5 BMJ papers
- All manuscripts are handled by our online editorial office; the website uses a system called Benchpress
- Reviewers are recruited by invitation, through volunteering, and from authors' suggestions
- The database also includes all authors
- We monitor reviewers' workload for the BMJ
- We rate reviewers' reports on a 3-point scale
6 BMJ peer review process I
About 7000 research papers; 7% accepted. Approximate numbers at each stage:
- 1000 rejected by one editor within 48 hours
- a further 3000 rejected with a second editor, within one week of submission
- 3000 read by a senior editor; a further 1500 rejected
- 1500 sent to two reviewers; then 500 more rejected
- approx 1000 screened by the clinical epidemiology editor, and more rejected
7 BMJ peer review process II
- Surviving papers go to the weekly manuscript meeting, attended by the Editor, an external editorial adviser (a specialist or primary care doctor), a statistician, and the full team of BMJ research editors, plus the BMJ clinical epidemiology editor
- 350 research articles accepted, usually after revision
- Value is added by commissioned editorials and commentaries
8 BMJ peer review process III
- Always willing to consider first appeals, but authors must revise the paper and respond to criticisms, not just say the subject is important
- Perhaps 20% accepted on appeal
- No second appeals: they always end in tears, and there are plenty of other journals
Most papers end up being published somewhere. See:
Lock S. A difficult balance: editorial peer review in medicine. London: BMJ. Most papers rejected by the BMJ during 7 months of 1979 were eventually published elsewhere, most in specialist journals. A quarter, however, remained unpublished.
Relman AS. Are journals really quality filters? In: Goffman W, Bruer JT, Warren KS, eds. Research on selective information systems. New York: Rockefeller Foundation. Random sample of 300 papers rejected by NEJM: a questionnaire showed that, among the 55% who replied, four fifths of authors said their papers had been published elsewhere. Only a fifth of these, however, had been revised according to the peer review comments received before submission to the final journal.
We must interpret these findings cautiously: both studies are old, and things may have changed since.
9 What we know about peer review Research evidence
10 Peer review processes
"Stand at the top of the stairs with a pile of papers and throw them down the stairs. Those that reach the bottom are published."
"Sort the papers into two piles: those to be published and those to be rejected. Then swap them over."
Quotes from former editors of the BMJ and the Lancet - tongue in cheek, of course. But how do we know that peer review is more reliable than this?
11 Some problems
- Means different things at different journals
- Slow
- Expensive
- Subjective
- Biased
- Open to abuse
- Poor at detecting errors
- Almost useless at detecting fraud
12 Is peer review reliable? (How often do two reviewers agree?)
NEJM (Ingelfinger F, 1974):
- rates of agreement only "moderately better than chance" (kappa = 0.26)
- agreement greater for rejection than for acceptance
Grant review:
- Cole et al, 1981: a real panel vs a sham panel agreed on 75% of decisions
- Hodgson C, 1997: two real panels reviewing the same grants, 73% agreement
Are two reviewers enough? Fletcher and Fletcher calculated that at least six reviewers, all favouring rejection or all favouring acceptance, are needed to yield a statistically significant conclusion (p<0.05).
References:
Ingelfinger FJ. Peer review in biomedical publication. Am J Med 1974;56:
Cole S, Cole J, Simon G. Chance and consensus in peer review. Science 1981;214:881-6.
Hodgson C. How reliable is peer review? A comparison of operating grant proposals simultaneously submitted to two similar peer review systems. J Clin Epidemiol 1997;50:
Fletcher RH, Fletcher SW. The effectiveness of editorial peer review. In: Godlee F, Jefferson T, eds. Peer review in health sciences. London: BMJ Publishing Group, 1999:45-56.
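Two of the statistics on this slide can be made concrete with a short sketch. The first function computes Cohen's kappa from a 2x2 agreement table (the counts used here are hypothetical, not Ingelfinger's data, and the function names are my own); the second shows the sign-test logic behind Fletcher and Fletcher's six-reviewer threshold, assuming each reviewer votes accept or reject with chance probability 0.5 and a two-sided test.

```python
def cohens_kappa(both_accept, a_only, b_only, both_reject):
    """Cohen's kappa for two reviewers from a 2x2 agreement table."""
    n = both_accept + a_only + b_only + both_reject
    p_observed = (both_accept + both_reject) / n
    # Chance agreement, from each reviewer's marginal accept rate
    p_a = (both_accept + a_only) / n
    p_b = (both_accept + b_only) / n
    p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical table: 100 papers, each reviewer accepts 35 of them
kappa = cohens_kappa(20, 15, 15, 50)
print(f"kappa = {kappa:.2f}")  # prints "kappa = 0.34": modest agreement

def p_unanimous(n):
    """Two-sided chance probability that n reviewers are unanimous."""
    return 2 * 0.5 ** n

print(p_unanimous(5))  # 0.0625  -> not significant at 0.05
print(p_unanimous(6))  # 0.03125 -> significant: hence six reviewers
```

Only at six unanimous reviewers does the chance probability drop below 0.05, which is the point of the Fletcher and Fletcher claim.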
13 Should we mind if reviewers don't agree?
- Very high reliability might mean that all reviewers think the same
- Reviewers may be chosen for differing positions or areas of expertise
- Peer review decisions are like diagnostic tests: false positives and false negatives are inevitable (Kassirer and Campion, 1994)
- Larger journals ask reviewers to advise on publication, not to decide
Kassirer JP, Campion EW. Peer review: crude and understudied, but indispensable. JAMA 1994;272:96-7
14 Bias
Author-related:
- prestige (of author or institution)
- gender
- where they live and work
Paper-related:
- positive results
- English language
Author-related bias:
- bias towards successful researchers - Merton RK. Science 1968;159:56-6
- bias against women (applicants to the Swedish MRC for postdoctoral fellowships) - Wenneras C, Wold A. Nature 1997;387:341-3
- bias against institution - next slide
Publication bias: if the research question is important and interesting, the direction of the answer should matter less.
15 Prestigious institution bias
Peters and Ceci, 1982: resubmitted 12 altered articles to psychology journals that had already published them. Changed:
- title/abstract/introduction - only slightly
- authors' names
- name of institution, from prestigious to an unknown fictitious one (e.g. "Tri-Valley Center for Human Potential")
Peters DP, Ceci SJ. Behavioural and Brain Sciences 1982;5:187-95
They randomly selected one paper from each of 13 influential peer reviewed psychology journals with high rejection rates (>80%). The authors were from prestigious institutions, and all the papers had been published in the preceding months, all with above-average citations. They got permission from the authors, but not from the editors or reviewers.
16 Peters and Ceci - results
- Three articles recognised as resubmissions
- One accepted
- Eight rejected (all because of poor study design, inadequate statistical analysis, or poor quality; none on grounds of lack of originality)
- One paper had to be withdrawn because the journal had changed its policy on the type of paper accepted, leaving 12 in the study
The study was much criticised and called unethical (editors and reviewers had not consented, and copyright law was probably violated).
17 How easy is it to hide authors' identity?
Not easy:
- in RCTs of blinded peer review, reviewers correctly identified the author or institution in 24-50% of cases
- authors tend to cite their own work in the references
RCTs:
McNutt RA, Evans AT, Fletcher RH, Fletcher SW. The effects of blinding on the quality of peer review: a randomized controlled trial. JAMA 1990;263:1371-6
Godlee F, Gale CR, Martyn C. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA 1998;280:237-40
van Rooyen S, Godlee F, Evans S, Smith R, Black N. Effect of blinding and unmasking on the quality of peer review: a randomized controlled trial. JAMA 1998;280:234-7 (a second paper from the RCT above, done at the BMJ)
Justice AC, Cho MK, Winker MA, Berlin JA, Rennie D. Does masking author identity improve peer review quality? A randomized controlled trial. JAMA 1998;280:240-2
The soprano Elisabeth Schwarzkopf requested all her own records on the BBC radio programme Desert Island Discs: the BMJ has thought of giving a Schwarzkopf award for self-citation.
And, anyway, reviewers tend to know who's doing what work in their own field.
18 Reviewers identified (open review) - results of RCTs
In RCTs, asking reviewers to sign their reports made no difference to the quality of the reviews or the recommendations made:
- Godlee et al, 1998
- van Rooyen et al, 1998
- van Rooyen et al, 1999
Same references as the earlier slide, plus:
van Rooyen S, Godlee F, Evans S, Black N, Smith R. Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomized controlled trial. BMJ 1999;318:23-7
19 Open review on the web
Various experiments and evaluations are under way. There is a long history of this in other disciplines, e.g. physics research.
Open review of articles posted on the web can be done before publication or afterwards. It can be completely open (a free-for-all), completely closed, or something in between. It can also be moderated by invited reviewers.
MJA online peer review trial: in a Medical Journal of Australia study, articles were electronically published and then, for a few weeks, the journal invited postpublication review on the web from readers. Authors were encouraged to revise the paper accordingly before final print publication.
20 What makes a good reviewer? - results of RCTs
- Aged under 40
- Good institution
- Methodological training (statistics and epidemiology)
Black N, van Rooyen S, Godlee F, Smith R, Evans S. What makes a good reviewer and a good review in a general medical journal. JAMA 1998;280:231-3.
Evans et al. The characteristics of peer reviewers who produce good-quality reviews. J Gen Intern Med 1993;8:422-8.
226 reviewers of 131 papers submitted to the journal: 43% of the reviews were good (on a 5-point editors' scale). The characteristics listed above gave an 87% chance of predicting a good review.
Could editors be biased if they know their reviewers?
21 What might improve the quality of reviews?
- Reward, credit, or acknowledgement?
- Careful selection?
- Training?
- Greater accountability (open review on the web)?
- Interaction between author and reviewer (real-time open review)?
The BMJ pays reviewers but does not give feedback on performance.