Like a lot of topics in academic publishing, peer review attracts strong opinions, denunciations, and prescriptions for change. Many of these diatribes are based on little more than personal anecdotes. Unfortunately, a dearth of evidence does not stop these opinions from dominating our discussions about scientific publishing. Just Google “peer review is” and you’ll see.
Recently, a group of French researchers took it upon themselves to test a widely voiced opinion that the peer review system is unsustainable — that the explosion of scientific papers has overwhelmed the community of scientists willing to review them.
Their paper, “The Global Burden of Journal Peer Review in the Biomedical Literature: Strong Imbalance in the Collective Enterprise”, was published November 10, 2016 in PLOS ONE.
Rather than focusing on the experiences of individual journals or surveying scientists about their opinions, the researchers attempted to model the system of peer review in the biomedical sciences to see if there was sufficient supply of reviewers to meet peer review demand.
Building such a model, however, required a number of assumptions, and not all of them appear, on face value, to be accurate. For example, the researchers assumed that 25% of manuscripts were desk-rejected (too low?), that 90% of peer-reviewed submissions go through a second round of peer review (too high?), and that 5% of papers are submitted just once, 10% are submitted twice, and 85% are submitted three times (too optimistic?).
Nevertheless, the researchers were not married to these assumptions and performed 25 sensitivity analyses, varying these and other assumptions. In all but the most stringent scenarios, there was sufficient supply of peer review to meet demand. They write:
From 1990 to 2015, the demand for reviews and reviewers was always lower than the supply […] In fact, the supply [of reviewers] exceeded the demand by 249%, 234%, 64% and 15%, depending on the scenario. The peer-review system in its current state seems to absorb the peer-review demand and be sustainable in terms of volume.
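To get a feel for how assumptions like these translate into demand, here is a back-of-envelope sketch using the baseline figures quoted above. The figure of two reviewers per review round is my own illustrative assumption, not a number taken from the study, and this is a simplification of whatever model the authors actually fit:

```python
# Rough estimate of review reports generated per manuscript over its
# lifetime, under the baseline assumptions quoted in the text.
# REVIEWERS_PER_ROUND is an assumption for illustration only.

DESK_REJECT = 0.25          # share of submissions rejected without review
SECOND_ROUND = 0.90         # share of reviewed submissions re-reviewed
REVIEWERS_PER_ROUND = 2     # assumed here; varies by journal

# Distribution of how many times a manuscript is submitted
submissions = {1: 0.05, 2: 0.10, 3: 0.85}
expected_submissions = sum(k * p for k, p in submissions.items())  # 2.8

# Each non-desk-rejected submission generates one round of reviews,
# plus a second round 90% of the time.
rounds_per_reviewed_submission = 1 + SECOND_ROUND  # 1.9

reviews_per_manuscript = (
    expected_submissions
    * (1 - DESK_REJECT)
    * rounds_per_reviewed_submission
    * REVIEWERS_PER_ROUND
)
print(round(reviews_per_manuscript, 2))  # 7.98
```

Under these assumptions, a single manuscript consumes roughly eight review reports before it finds a home, which makes clear why the resubmission and desk-rejection rates dominate any estimate of total demand.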
Logically, these results should make complete sense: a growing population of authors should theoretically translate into a growing population of reviewers. In practice, this does not always happen.
In 2013, Elsevier reported that some countries, like the United States, were doing proportionally more reviews than other countries, like China. This imbalance operates at the micro-level as well — some scientists simply do much more reviewing than others. The authors of the PLOS ONE study refer to these scientists as “peer review heroes” and warn that this group “may be overworked, with risk of downgraded [sic] peer review standards.”
While this paper adds more empirical evidence to suggest that peer review is not suffering a sustainability crisis, it perpetuates an unsupported belief that quality may be at stake. Further, it treats peer review as a burden — a necessary chore that provides little benefit to the reviewer, rather than a part of the scholarly communication process. The title of their paper, “The Global Burden of Journal Peer Review in the Biomedical Literature: Strong Imbalance in the Collective Enterprise,” implies a sense of great unfairness and inequality in the publication process, a view adopted in this Vox article.
Peer review is largely based on a voluntary labor market and, in such markets, work is rarely distributed evenly among participants. Some volunteers contribute more because they have more time, more aptitude, or take more pleasure in contributing. Peer review should be no different.
I also question the unstated assumption that an insufficient supply of competent reviewers means that the peer review system is broken. In a voluntary system, reviewers get to be selective in what they choose to review. While a researcher may be willing to review a relevant, well-written paper presenting novel results, it may be much harder to find someone willing to review a poorly-composed paper presenting negative results. Under such conditions, it may be more efficient for an editor to leave the volunteer market and depend upon a commercial peer review alternative, like Rubriq, or to give up entirely.
Scientists find the time to review good papers, especially when they are relevant to their own work and sent to them by someone they respect. For these papers, an editor can demand reviews within days (sometimes within hours). At the other end of the spectrum, an editor may spend months finding a volunteer willing to review a paper or, in desperation, assign themselves the role of reviewer. The selectivity of editors and the willingness of volunteer reviewers should be considered not flaws of the system but features: they speed up the publication of some manuscripts while delaying others. While authors may not consider such preferential treatment fair, it may be ultimately beneficial to science.