Image: “What Are You Doing” by ken mccown via Flickr

Open peer-review does not affect the quality of reviews, but it does result in significantly longer reviewing time and makes it harder to recruit competent reviewers, a new study reports.

The article, “Effect on peer review of telling reviewers that their signed reviews might be posted on the web,” was published on November 16th in BMJ by Susan van Rooyen and others.

In a randomized controlled trial, reviewers were allocated to either the intervention group (open peer review) or the control group (reviews provided to the author).  But even getting to this stage was difficult: a full 55% of reviewers refused to participate in the open peer-review experiment.

When the reviews were completed, editors rated the quality of reviews from the open and control groups as equal.  Telling reviewers that their report would be publicly available did not harm the quality of the report; nor, however, did it lead to any improvement.

Yet reviewers in the open peer-review group reported that their task took an additional 25 minutes on average to complete, and an additional 41 minutes for papers that were eventually accepted.  The authors write:

Reviewers who knew that their report might be posted online spent longer on the task than those in the control group, so adopting open peer review might result in the process feeling even more arduous to reviewers than it currently does. This is a concern because willing reviewers are already the scarcest component in the peer review process.

These results should not be controversial.

Bernd Pulverer, head of scientific publications for EMBO, reported recently in Nature that requiring reviewers to create publicly visible reviews to accompany the published paper adds significant time to the process while adding nothing to the quality of the review.

More importantly, requiring open peer-review may make it more difficult to attract competent reviewers, the best of whom are already overloaded with requests.  A related experiment on open peer-review, in which reviewers were told that their identities would be revealed to the author (but not to the public), resulted in more declines to review and no difference in the quality of the reviews.  A similar study in a specialist journal reported that signed reviews were of slightly higher quality, more courteous in tone, took longer to complete than unsigned reviews, and were more likely to recommend acceptance, suggesting that social factors may have been at play.

The latest study, published just last week, relies on data collected between 1999 and 2000.  The delay in publication was due, in part, to the death of the lead author, Susan van Rooyen.  I asked Tony Delamothe, deputy editor of the BMJ and corresponding author for the paper, whether the results of the study would look different if it were conducted today.  He responded that open peer review was a pretty radical idea more than ten years ago, but, given changes at the BMJ and other major journals over the last few years, he doesn’t think it is as controversial today.  He would expect fewer refusals to review but no change in the quality of reviews.  “The consensus seems to be in favour of more openness,” Delamothe believes.

In spite of the costs, there are clearly benefits to opening the “black box” of the peer-review process, the authors write. Potential authors would be able to glimpse what peer review actually means for a journal and how much value is added between initial submission and publication. More importantly, posting signed comments beside accepted papers means that reviewers may start receiving credit for their contributions to science. Open peer-review — and the transparency it brings to the process — is clearly a case of weighing the costs and benefits.

The question is whether the benefits of open peer review are sufficient to outweigh the price of extra time and the associated reluctance of some reviewers to participate.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

16 Thoughts on "Weighing the Costs and Benefits of Open Peer-Review"

Reviewers receiving “credit” for their work is still a tricky business, even with signed reviews. Even if we knew exactly who was doing what level of reviewing, there’s still the question of how this fits into the current system of rewards (pay, tenure, funding, etc.). Does a funding agency dedicated to the eradication of a disease want to spend its precious dollars paying you to review the work of others, or to do actual research to cure the disease yourself? Does a university want to reward you with tenure for reviewing the work of professors at other universities, or for producing original work yourself?

So even if all peer review were open, and you built a system to collate all data on reviewing, it’s still unclear exactly who might offer rewards for that work.

@David

I’d like to think that an enlightened funding agency dedicated to the eradication of a disease would be happy that they are funding researchers to review the efforts of others who are dedicated to the eradication of that disease. Or that Agency 2 is funding people to review work Agency 1 has funded, and vice versa – it should even out in the end, surely?

As regards credit, do you think that having signed reviews would lead to more/better collaborations for the good reviewers? For example: “That reviewer has consistently suggested good experiments/further work/avenues we hadn’t thought of, why don’t we just work with him/her?”

That would eventually lead to more/better research/publications/whatever metrics for the good reviewer.

Given how tight funding is these days, it may be difficult to convince agencies to put their dwindling funds into anything other than research.

And if you’re choosing your collaborators on their reviewing talents rather than an established body of published results, all I can say is good luck with that.

I would assume an open review process would also put more pressure on authors to submit high-quality manuscripts, thus making the review process more comfortable and speedier.

I don’t see how open peer review leads to better manuscripts. Could you work out the logic for us?

I wondered about this for a while, and the best idea I had was that publishing the first reports (and the first submission?) would show the world what state the paper was in when you submitted it.

That might act as an incentive to submit a better first version, rather than treating peer-review as an editing/re-drafting service.

Given time pressures on most researchers, how likely is it that you are going to read the first draft of a paper rather than the final published version?

Neil,
Before we continue, we need to clarify that “open peer review” can mean many different things, including:

1) revealing the reviewers to the authors
2) revealing the reviewers to each other
3) revealing the reviewers to the public
4) revealing the reviews, process files and all revisions to the public.

I think you mean #4 here, which is what EMBO is doing. However, you need to remember that none of the open peer-review process is revealed if the paper is rejected, so there is little incentive to submit a poor first version to the journal in the first place.

Neil’s explanation captures the essence of my comment.

Thanks to its shame factor, making reviews public on the web would have the advantage of better dealing with people who have poor publication ethics, such as (among others):

(1) people submitting manuscripts that are not properly edited and/or structured in order to save time;

(2) people submitting manuscripts still containing known flaws in order to save time, leaving it to the reviewers to point them out.

Note that only the reviews from accepted papers are likely to be shown. When papers are rejected, authors aren’t going to grant permission for publication of those reviews; they’re going to want a fresh start at a different journal.

So poor papers are invisible to the system.

How about people trying to exploit the weaknesses of academic publishing, like:

(1) people submitting manuscripts that are not properly edited and/or structured in order to save time;

(2) people submitting manuscripts still containing known flaws in order to save time, leaving it to the reviewers to point them out;

(3) people submitting poor papers in order to save time, trying to take advantage of the knowledge of the reviewers in order to improve the quality of their paper during the review process?

Is this really a big problem? Are there a lot of researchers out there who are deliberately submitting substandard papers? Most researchers I know work pretty hard on their papers to make them the best they can when they submit. They are often imperfect, but this is rarely done on purpose.

As a journal editor, I don’t see an overabundance of submissions that contain known flaws. That seems a bit crazy, leaving a mistake in a paper that will have an impact on your reputation, taking a gamble that a reviewer will spot the error and make you change it. How much time, exactly, does one save by submitting an incorrect paper and then doing a rewrite at a reviewer’s request?

Lastly, as a reader, how much time are you going to spend digging through the early drafts of a paper in order to pass judgment on the laziness of the authors? Aren’t you more likely to read the final version, glean the scientific knowledge available and then move on? Or will you instead spend days picking over the grammatical errors in the original draft, perhaps e-mailing them around to colleagues to join in on the mocking?

I am working as a Western researcher in an Asian country, and the aforementioned issues are quite pervasive here. Making reviews open to the public would definitely improve the research and publishing ethics in this country, thanks to the shame/losing face factor.

As a reader, I would personally be interested in reading the comments of the reviewers with respect to the train of thought in the paper.

As an author and a non-native speaker of English, I would personally pay more attention to the use of correct spelling and grammar (in order to avoid embarrassing comments from the reviewers).

If this really is a major problem, then perhaps a better solution is having journal editors return manuscripts to authors before sending them out for review (I know I regularly do this, often with a suggestion of where to find an English-speaking freelance editor who can provide help if one is not immediately available).

Peer reviewers are a valuable resource, and researchers are hard-pressed for time. Asking them to be copyeditors in order to shame authors into using better grammar seems an extremely inefficient way to use expert reviewers.

Note that if reviewers are signing their reviews, they may not be as likely as you think to publicly shame colleagues. Who knows when that author you humiliated will be sitting on your grant review committee or the hiring committee for one of your former students….

Please ignore my previous reply. I completely misread David’s comment. Long day…

David,
Yes, I agree that poor papers are likely to be invisible to the system.

As I mentioned in a previous comment, I think that people will be less inclined to submit poor papers (like the ones listed in my first comment to you) to journals that make reviews open to the public, thanks to the risk of being called out publicly.
