Open peer review attracts fewer and lower-quality reviews, a new study reports.
The paper, “A prospective study on an innovative online forum for peer reviewing of surgical science,” by Martin Almquist and others, appeared on 29 June 2017 in the journal PLoS ONE.
Their study compared the quality of open online reviews to conventional reviews for manuscripts submitted to the British Journal of Surgery (BJS). BJS employs a single-blinded review process.
The researchers posted 110 manuscripts online and sent email invitations to more than 7,000 reviewers who had BJS accounts on ScholarOne, a peer review management system. Manuscripts were kept online for three weeks and were accessible only through the emailed links. The same 110 manuscripts were simultaneously sent to reviewers for conventional evaluation. Editorial assistants scored all reviews using a validated quality instrument.
Of the 110 manuscripts, just 44 (40%) received at least one online review. These reviews scored significantly lower than conventional reviews on every aspect of review quality, with an overall score of 2.35 for online reviews versus 3.52 for conventional reviews.
The quality of the online reviews varied considerably but was consistently below that of conventional reviews. And given the large number of potential reviewers invited, the participation rate was very low.
Despite sending invitations to more than 7,000 reviewers, the study received reviews from just 59 individuals. “This has to be considered a disappointing rate,” the authors wrote. They surmised that a personal email from an editor targeting a researcher with known expertise might have greatly improved participation over a mass impersonal email. It is also not known whether the online reviewers had any competence in the topics they agreed to review. As a result, it is difficult to know whether the intervention (open review) or reviewer self-selection was responsible for the results.
The researchers noted several limitations to their approach, most importantly that the study lacked randomization and a proper control group. In a randomized controlled trial testing reviewer anonymity, the editors of The BMJ found no difference in review quality, although revealing reviewers’ identities did significantly increase the likelihood that they would decline an invitation to review. Other properly randomized studies [here, here, and here] also report no differences in review quality. One study reported that open peer review resulted in higher-quality, more courteous reviews. While opinions on the best way to conduct peer review are many, rigorous scientific studies are rather few.
While the British Journal of Surgery study may have limited generalizability to journals that publish reviews openly alongside papers, it may be most applicable to publish-first-review-later journals, such as F1000 Research.
Open online review has the potential to attract many more eyes to a new piece of research than conventional peer review. In reality, it may do far worse in attracting the eyes you want.