
1 Peer reviewer training part I: What do we know about peer review?
Dr Trish Groves, Deputy Editor, BMJ

2 What do editors want from papers?
Importance
Originality
Relevance to readers
Usefulness to readers and, ultimately, to patients
Truth
Excitement/"wow" factor
Clear and engaging writing
We make many of these judgments ourselves, but also rely on reviewers' opinions.

3 Peer review
As many processes as there are journals or grant-giving bodies
No operational definition; usually implies "external review"
Largely unstudied till the 1990s
Benefits come through improving what's published rather than sorting wheat from chaff
At the BMJ we do a lot of internal review as well, and consider it an important part of our peer review process

4 What is peer review? Review by peers. Includes:
internal review (by editorial staff)
external review (by experts in the field)
Peer review is not used only by journals: it is also used for grant applications, by ethics committees, and for conference papers and abstracts. But we're talking here about peer review and critical appraisal for publication.

5 BMJ papers
All manuscripts are handled by our online editorial office; the website uses a system called Benchpress
Reviewers are recruited by invitation, through volunteering, and from authors' suggestions
The database also includes all authors
We monitor reviewers' workload for the BMJ
We rate reviewers' reports on a 3-point scale

6 BMJ peer review process I
7000 research papers submitted, roughly 7% accepted. Approximate numbers at each stage:
1000 rejected by one editor within 48 hours
a further 3000 rejected by a second editor within one week of submission
3000 read by a senior editor; a further 1500 rejected
1500 sent to two reviewers; then 500 more rejected
approx 1000 screened by the clinical epidemiology editor, and more rejected
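
The stage figures above (which the slide itself labels approximate) form a consistent funnel; a quick tally, using only the numbers quoted on these slides, shows how many papers survive each stage:

```python
# Rough tally of the BMJ editorial funnel using the slide's approximate figures.
stages = [
    ("rejected by one editor within 48 hours", 1000),
    ("rejected by a second editor within a week", 3000),
    ("rejected after the senior editor's read", 1500),
    ("rejected after external review", 500),
]

remaining = 7000  # research papers submitted
for label, rejected in stages:
    remaining -= rejected
    print(f"{label}: {remaining} remain")

# About 1000 papers reach the clinical epidemiology screen, and roughly 350
# are finally accepted: 350/7000 = 5%, in line with the quoted ~7% given
# that every figure here is approximate.
print(f"final acceptance rate: {350 / 7000:.0%}")
```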

7 BMJ peer review process II
Papers go to a weekly manuscript meeting attended by the Editor, an external editorial adviser (a specialist or primary care doctor), a statistician, and the full team of BMJ research editors, plus the BMJ clinical epidemiology editor
About 350 research articles are accepted, usually after revision
Value is added by commissioned editorials and commentaries

8 BMJ peer review process III
Always willing to consider first appeals, but authors must revise the paper and respond to the criticisms, not just say the subject is important
Perhaps 20% are accepted on appeal
No second appeals: they always end in tears, and there are plenty of other journals
Most papers end up being published somewhere. See:
Lock S. A difficult balance: editorial peer review in medicine. London: BMJ. Most of the papers rejected by the BMJ during 7 months of 1979 were eventually published elsewhere, most in specialist journals. A quarter, however, remained unpublished.
Relman AS. Are journals really quality filters? In: Goffman W, Bruer JT, Warren KS, eds. Research on selective information systems. New York: Rockefeller Foundation. A random sample of 300 papers rejected by the NEJM; a questionnaire showed that, among the 55% who replied, four fifths of authors said their papers had been published elsewhere. Only a fifth of these, however, had been revised according to the peer review comments received before submission to the final journal.
Must interpret these findings cautiously: both studies are old, and things may have changed since.

9 What we know about peer review
Research evidence

10 Peer review processes
"Stand at the top of the stairs with a pile of papers and throw them down the stairs. Those that reach the bottom are published."
"Sort the papers into two piles: those to be published and those to be rejected. Then swap them over."
Quotes by former editors of the BMJ and Lancet – tongue in cheek, of course
But how do we know that peer review is more reliable than this?

11 Some problems
Means different things at different journals
Slow
Expensive
Subjective
Biased
Open to abuse
Poor at detecting errors
Almost useless at detecting fraud

12 Is peer review reliable? (How often do two reviewers agree?)
NEJM (Ingelfinger F, 1974): rates of agreement only "moderately better than chance" (kappa = 0.26); agreement was greater for rejection than for acceptance
Grant review:
Cole et al, 1981 – a real vs a sham panel agreed on 75% of decisions
Hodgson C, 1997 – two real panels reviewing the same grants, 73% agreement
Are two reviewers enough? Fletcher and Fletcher estimate that at least six reviewers, all favouring rejection or all favouring acceptance, are needed to yield a statistically significant conclusion (p<0.05)
References:
Ingelfinger FJ. Peer review in biomedical publication. Am J Med 1974;56.
Cole S, Cole J, Simon G. Chance and consensus in peer review. Science 1981;214:881-6.
Hodgson C. How reliable is peer review? A comparison of operating grant proposals simultaneously submitted to two similar peer review systems. J Clin Epidemiol 1997;50.
Fletcher RH, Fletcher SW. The effectiveness of editorial peer review. In: Godlee F, Jefferson T, eds. Peer review in health sciences. London: BMJ Publishing Group, 1999:45-56.
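
Both figures on this slide can be made concrete. The sketch below uses hypothetical agreement counts, chosen only to illustrate the kappa formula (they are not Ingelfinger's data), and then shows why, if each reviewer were effectively tossing a fair coin, a unanimous verdict needs six reviewers before a two-sided sign test reaches p<0.05:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for two reviewers making accept/reject calls.

    a = both accept, b = only reviewer 1 accepts,
    c = only reviewer 2 accepts, d = both reject.
    """
    n = a + b + c + d
    po = (a + d) / n                    # observed agreement
    p1, p2 = (a + b) / n, (a + c) / n   # each reviewer's accept rate
    pe = p1 * p2 + (1 - p1) * (1 - p2)  # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical counts for 100 manuscripts: 70% raw agreement, but much of
# that agreement is expected by chance alone, so kappa is far lower --
# "moderately better than chance".
print(round(cohens_kappa(15, 15, 15, 55), 2))   # 0.29

# Fletcher and Fletcher's point: a unanimous verdict from n coin-flipping
# reviewers has two-sided probability 2 * (1/2)**n.
print(2 * 0.5 ** 5)   # 0.0625  -> five unanimous reviewers: not significant
print(2 * 0.5 ** 6)   # 0.03125 -> six unanimous reviewers: p < 0.05
```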

13 Should we mind if reviewers don't agree?
Very high reliability might mean that all reviewers think the same
Reviewers may be chosen for differing positions or areas of expertise
Peer review decisions are like diagnostic tests: false positives and false negatives are inevitable (Kassirer and Campion, 1994)
Larger journals ask reviewers to advise on publication, not to decide
Kassirer JP, Campion EW. Peer review: crude and understudied, but indispensable. JAMA 1994;272:96-7.

14 Bias
Author-related: prestige (of author or institution), gender, where they live and work
Paper-related: positive results, English language
Author-related bias:
Bias towards successful researchers – Merton RK. Science 1968;159:56-63
Bias against women (applicants to the Swedish MRC for postdoctoral fellowships) – Wenneras C, Wold A. Nature 1997;387:341-3
Bias against institution – next slide
Publication bias: if the research question is important and interesting, the answer (positive or negative) should matter less

15 Prestigious institution bias
Peters and Ceci, 1982: resubmitted 12 altered articles to psychology journals that had already published them
Changed:
title/abstract/introduction – only slightly
authors' names
name of institution – from a prestigious one to an unknown fictitious one (e.g. "Tri-Valley Center for Human Potential")
Peters DP, Ceci SJ. Behavioral and Brain Sciences 1982;5:187-95.
Randomly selected one paper from each of 13 influential peer-reviewed psychology journals with high rejection rates (>80%). The authors were from prestigious institutions. All papers had been published in the preceding months, all with above-average citations.
Got permission from the authors but not from the editors or reviewers

16 Peters and Ceci – results
Three articles were recognised as resubmissions
One was accepted
Eight were rejected – all because of poor study design, inadequate statistical analysis, or poor quality; none on grounds of lack of originality
One paper had to be withdrawn because the journal had changed its policy on the type of paper accepted, leaving 12 in the study.
The study was much criticised and called unethical (editors and reviewers had not consented, and copyright law was probably violated)

17 How easy is it to hide authors' identity?
Not easy
In RCTs of blinded peer review, reviewers correctly identified the author or institution in 24-50% of cases
Authors tend to cite their own work in their references
RCTs:
McNutt RA, Evans AT, Fletcher RH, Fletcher SW. The effects of blinding on the quality of peer review: a randomized controlled trial. JAMA 1990;263:1371-6.
Godlee F, Gale CR, Martyn C. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA 1998;280:237-40.
van Rooyen S, Godlee F, Evans S, Smith R, Black N. Effect of blinding and unmasking on the quality of peer review: a randomized controlled trial. JAMA 1998;280:234-7. (Second paper from the RCT above, done at the BMJ.)
Justice AC, Cho MK, Winker MA, Berlin JA, Rennie D. Does masking author identity improve peer review quality? A randomized controlled trial. JAMA 1998;280:240-2.
The soprano Elisabeth Schwarzkopf requested all her own records on the BBC radio programme Desert Island Discs; the BMJ has thought of giving a Schwarzkopf award for self-citation.
And, anyway, reviewers tend to know who's doing what work in their own field

18 Reviewers identified (open review) – results of RCTs
Asking reviewers to sign their reports made no difference to the quality of the reviews or to the recommendations made:
Godlee et al, 1998
van Rooyen et al, 1998
van Rooyen et al, 1999
Same references as the earlier slide, plus:
van Rooyen S, Godlee F, Evans S, Black N, Smith R. Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomized controlled trial. BMJ 1999;318:23-7.

19 Open review on the web
Various experiments and evaluations are underway
There is a long history of this in other disciplines, e.g. physics research
Open review of articles posted on the web can be done before publication or afterwards. It can be completely open (a free-for-all), completely closed, or something in between. It can also be moderated by invited reviewers.
MJA online peer review trial: in a Medical Journal of Australia study, articles were electronically published and then, for a few weeks, the journal invited post-publication review on the web from readers. Authors were encouraged to revise the paper accordingly before final print publication.

20 What makes a good reviewer? – results of RCTs
Aged under 40
Working at a good institution
Methodological training (statistics and epidemiology)
Black N, van Rooyen S, Godlee F, Smith R, Evans S. What makes a good reviewer and a good review in a general medical journal? JAMA 1998;280:231-3.
Evans et al. The characteristics of peer reviewers who produce good-quality reviews. J Gen Intern Med 1993;8:422-8.
226 reviewers reviewed 131 papers submitted to the journal. 43% of the reviews were good (on a 5-point editors' scale). The characteristics on this slide gave an 87% chance of predicting a good review.
Could editors be biased if they know the reviewers?

21 What might improve the quality of reviews?
Reward, credit, or acknowledgement?
Careful selection?
Training?
Greater accountability (open review on the web)?
Interaction between author and reviewer (real-time open review)?
The BMJ pays reviewers but does not give feedback on performance

