Open peer review attracts fewer, lower-quality reviews, reports a new study.

The paper, “A prospective study on an innovative online forum for peer reviewing of surgical science,” by Martin Almquist and colleagues, appeared on 29 June 2017 in the journal PLOS ONE.

Their study compared the quality of open online reviews to conventional reviews for manuscripts submitted to the British Journal of Surgery (BJS). BJS employs a single-blinded review process.


The researchers posted 110 manuscripts online and emailed invitations to more than 7,000 reviewers who had BJS accounts on ScholarOne, a peer review management system. Manuscripts were kept online for three weeks and were accessible only through email links. The same 110 manuscripts were simultaneously sent to reviewers for conventional evaluation. Editorial assistants scored all reviews using a validated quality instrument.

Of the 110 manuscripts, just 44 (40%) received at least one online review. Compared with conventional reviews, the online reviews received significantly lower scores on every aspect of review quality. The overall score for online reviews was 2.35, compared with 3.52 for conventional reviews.

The quality of the online reviews varied considerably, but was significantly lower than that of conventional reviews. Given the large number of potential reviewers invited, the participation rate was very low.

Despite sending invitations to more than 7,000 reviewers, the study received reviews from just 59 individual reviewers. “This has to be considered a disappointing rate,” the authors wrote. They surmised that a personal email from an editor targeting a researcher with known expertise might have greatly improved participation over a mass, impersonal email. It is also not known whether the online reviewers had any competence in the topics they agreed to review. As a result, it is difficult to know whether the intervention (open review) or reviewer self-selection was responsible for the results.

The researchers noted several limitations to their approach, most importantly that the study lacked randomization and a proper control group. In a randomized controlled trial of reviewer anonymity, the editors of The BMJ reported no difference in the quality of reviews, although revealing reviewer identities did significantly increase the likelihood that reviewers would decline an invitation to review. Other properly randomized studies [here, here, and here] also report no differences in review quality. One study reported that open peer review resulted in higher-quality, more courteous reviews. While opinions on the best way to conduct peer review are many, rigorous scientific studies are rather few.

While the British Journal of Surgery study may have limited generalizability to journals that openly publish reviews alongside papers, it may apply most directly to publish-first-review-later journals, like F1000 Research.

Open online review has the potential to attract many more eyes to a new piece of research than conventional peer review. In reality, it may do far worse at attracting the eyes you want.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion


Although the importance of the reviewing process is well understood (it is often seen as the one really strong point of the traditional publishing system), perhaps the importance of the process of selecting reviewers has been underestimated. I know that researchers pay a lot of attention to the importance/prestige of the journal asking them to perform a task; I suspect that in many cases the question of exactly who is asking for the review is also important (a respected expert in the field versus an unknown editorial assistant). Some of these qualitative aspects cannot easily, if at all, be replicated by automated systems.

Is there a confusion between “online” and “open” peer review here, including in the title? In my experience with open peer review (where the names of reviewers as well as the reviews are published in a fully transparent fashion, e.g., in some BMC journals or F1000 Research), the peer reviewers are approached by the editors in much the same way as in blinded peer review. I think that in its current form the article is highly misleading, and “open” should be replaced with “online” in every instance.

I agree with the ambiguity of terms. The authors use “online forum” in their article title, but also use “open online review” and “online peer review” in their abstract. Further down in the text, the authors are a little more specific about their intervention:

The authors were able to view the open reviews while their manuscript was still undergoing traditional review, and could comment on the reviews if they wished. Authors were not blinded to the identity of the reviewers submitting reviews via the online system; it was not possible to submit reviews anonymously.

Really good point. Like pretty much everything in scholarly communications, there’s some ambiguity around terminology. Is “open” peer review meant to describe peer review where the reviews are made public upon publication of the article, or does it mean that the peer review process is open to all reviewers? Does “open peer review” require that the reviewer identities are disclosed or is the availability of the review enough to be considered “open”?

That bit about not knowing if reviewers were competent to review the papers makes me a bit sceptical about BJS’s whole reviewer selection/invitation process.

Were the invitations just sent out randomly in this case?

To clarify: the editors sent out specific review invitations to qualified experts. At the same time, they issued an open call for reviews to some 7,000 participants of varying specialties. For the commissioned reviews, one assumes the editors chose experts with area knowledge; for the open call, there was no way to control who responded.

We feel that the title of your piece ‘open peer review attracts fewer, lower quality reviews’ is misleading. In the PLOS ONE study, the authors attempted to investigate the take-up and quality of public online open peer review (reviewers are named) compared to non-public, conventional single-blind peer review.

As you note, there are several confounding factors, especially the method used to invite reviewers, which was not the same for “online open” versus “conventional single-blind” peer review. The former involved mass-mailing 7,000 potential peer reviewers; the latter involved a tailored approach to reviewers with expertise in the area, via a personal email from an editor. As such, it is not surprising that the take-up for online open peer review was low.

BMC has successfully operated open peer review for over 16 years now, and others also value openness. Open peer review as practiced by BMC means that authors know who the reviewers are and, if the manuscript is published, the reading public also sees the content of the reviewer reports and the reviewer names. In our experience, we have found a slight tendency towards higher-quality peer review reports under openness (http://bmjopen.bmj.com/content/5/9/e008707), as reports tend to be more constructive, with reviewer comments backed up by evidence.

Louisa Flintoft, Alessandro Recchioni, Elizabeth Moylan, Paulina Szyszka
(COI: all employed by BMC)

As I discussed in my review of the PLOS ONE paper, the results of the BJS study are in stark contrast to properly designed trials that do not conflate transparency with selection bias. As a result, the generalizability of the study may be limited to open online forums like F1000 Research.

The operative statement in your response, “open peer review as practiced by BMC,” cannot be overemphasized. Just as we need to contextualize what is meant by “open access,” we need to define the open peer review process in order to properly understand the results. The authors of the PLOS ONE study used several different terms in their paper’s title and abstract. This ambiguity is not constructive.

Hi Richard,
Had you read past the headline, perhaps you would have seen that the post above makes nearly the same criticisms of this study as you did in your post. Perhaps a more careful reading would be helpful in the future.

I did, of course, read the blog, and I don’t think that you make nearly as clear as I did the severe limitations of this study. Ultimately you can conclude nothing useful. If you want to be scholarly you shouldn’t have covered the study at all, shouldn’t have used such a misleading title, and shouldn’t have taken a biased swipe at F1000Research, which doesn’t use a system anything like that in the study you reported.

I don’t think that you make nearly as clear as I did the severe limitations of this study.

I would think it sufficient that the author pointed out the flaws and offered links to different studies that were better designed (and that drew different conclusions). If that’s not enough for you, then it’s a good thing you have your own blog where you can offer your own opinion.

If you want to be scholarly you shouldn’t have covered the study at all

Which raises an interesting point. I tend to view TSK as a business blog, and more of an “editorial page” where opinions are voiced than a scholarly publication. We write about stuff that interests us as individuals and do not offer any form of scholarly peer review.

…shouldn’t have taken a biased swipe at F1000Research, which doesn’t use a system anything like that in the study you reported.

F1000Research uses, if I’m not mistaken, a system where papers are posted publicly for comment/review by anyone who chooses to do so, along with reviewers who are solicited by editors. That seems fairly similar to what was done here.

Thanks for your comments. I suggest that you drop the word Scholarly and call it Business Kitchen or Publishing Kitchen. That would be more accurate.

While we appreciate your unsolicited advice, I think we are going to keep our name going forward. Because we cover the business of scholarly communication, it’s a more accurate descriptor than “business” (we don’t cover all business) or “publishing” (we don’t cover all publishing). It’s perhaps a bit like your blog, “Open Pharma,” where you purport to write about “pharma-sponsored research” yet cover papers, like the one in this study, that list no support from pharma companies.

Perhaps a compromise would be to put Scholarly in inverted commas, or perhaps call yourself Scholarly Publishing Kitchen, which would be the most accurate name, especially if you add a footnote saying: “Please note Scholarly does not refer to our mode of writing and thinking but merely to that branch of publishing misnamed Scholarly.”

And, for what it’s worth, you’ve misunderstood the mission of Open Pharma: it’s about encouraging the pharmaceutical industry, which funds around half of biomedical research, to play a larger role in improving the publishing of science, so all aspects of the publishing of science are relevant.

Perhaps if you are ever in London I could buy you a drink of warm beer and we could continue this conversation, which might, I fear, be beginning to bore your readers.
