The world of scholarly communications is awash with innovation around peer review. There is, however, a worrying thread running through many of these initiatives. It’s summarized nicely by this quote from the Société Française d’Écologie et d’Évolution:
“[We] Support the development of new ways of disseminating scientific knowledge, not based on commercial journals but on open archives and a public peer review process, completely handled by the researchers themselves”
The common refrain is that academics should take back control of peer review (e.g. here, here, here, here, here), which carries the heavy implication that journal staff and publishers add literally nothing to the process because volunteer reviewers and editors do all of the work.
I am equally convinced that peer review run at scale (>200 submissions per year) by volunteer academics alone would be a shambles*. Why? Damian Pattinson neatly summarizes the issue in a recent post on the ASAPbio blog:
“The term ‘peer review’ has come to mean any assessment performed on a manuscript prior to publication […] But in actual fact, the act of preparing a manuscript for publication requires a huge number of checks, clarifications, tweaks and version changes, from many different parties. But because of the tradition of confidentiality during the editorial process, much of this work has gone unnoticed by the academic community.”
Many of these seemingly minor contributions come from Editorial Office (EO) staff: of the 102 things on the list of what publishers do, EO staff play a role in 28** and are the principal actors in 11***.
Put simply, the main role of the Editorial Office is to catch problems before they derail the peer review process. There are an astonishing number of ways in which authors, editors, and reviewers can make mistakes, and it takes an experienced and dedicated eye to catch them. A few examples: spotting missing or corrupted figures, noticing that the authors shared a dataset containing raw patient data, seeing that ethics board approval isn’t mentioned, flagging that a potential handling editor has a serious Conflict of Interest, and removing ad hominem language from reviewer comments. Once a mistake gets through, peer review can be delayed by weeks or months while the issue is fixed and everyone’s attention is dragged back onto the manuscript. Big mistakes and big delays make everyone unhappy; enough of them will leave your journal’s reputation in tatters. The hope that AI-based automation could fulfil these functions greatly underestimates the range and subtlety of the issues encountered during peer review: AI has a long way to go before it can replace anyone in the Editorial Office.
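To see why, consider a deliberately naive sketch of one such check (in Python; the patterns and function here are invented for illustration, not drawn from any real submission system): a keyword search for an ethics-approval statement.

```python
import re

# Hypothetical, naive automated check: flag manuscripts whose text never
# mentions ethics approval. This is the level of automation that is easy
# to build, and exactly the kind that underestimates the problem.
ETHICS_PATTERNS = [
    r"ethic(?:s|al)\s+(?:committee|board|approval)",
    r"institutional\s+review\s+board",
    r"\bIRB\b",
]

def missing_ethics_statement(manuscript_text: str) -> bool:
    """Return True if no ethics-approval phrase is found in the text."""
    return not any(
        re.search(pattern, manuscript_text, re.IGNORECASE)
        for pattern in ETHICS_PATTERNS
    )
```

The hard part is everything the keyword match can’t see: whether approval was actually required for this study design, whether the cited approval covers the data presented, or whether the phrase appears only in a quoted reference. Those judgments still need a person.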
Academics are wonderful human beings, but they are just not the right people to do this kind of painstaking (= boring), day-in, day-out work. First, they are often unavailable for weeks while travelling or doing fieldwork, and the journal often grinds to a halt in their absence. Second, their main job is doing research, with teaching, supervision, and university admin on the side. Adding 10+ hours of journal admin per week just detracts from their important-for-humanity research; it is particularly wasteful considering that most EO tasks require only an undergraduate education.
The PeerCommunityIn model doesn’t even acknowledge that Editorial Office work needs to happen; its site describes only the ‘recommender’ role:
“The role of the recommenders is similar to that of a journal editor (finding reviewers, obtaining peer reviews, making editorial decisions based on these reviews), and they may reject or recommend the preprints they are handling after one or several rounds of reviews”
PCI recommenders could be asked to check each new submission for missing figures and so forth, and they might manage (without complaining) a few manuscripts per month. Any more than that and their volunteer enthusiasm would evaporate, and the checks wouldn’t get done. PCI could bring in more recommenders as submissions increase, so that each handles only a few papers. But then the problem becomes consistency: not all academics would be equally diligent with these checks, and disasters-in-waiting would slip through.
The most logical approach would be to pay someone to do these checks consistently before the manuscripts go to the recommenders. While they’re at it, this person could chase up late reviewers, sort out issues with figures, and spell-check the decision letters. But, dammit, we’ve found ourselves back with one of those pesky Editorial Offices again, even though we vowed to have peer review “completely handled by the researchers themselves”.
This matters because the need to employ staff moves these new peer review initiatives from running on fresh air and goodwill to needing a business model. A full-time Editorial Office assistant will cost at least $40K per year, and that money has to come from somewhere. Who is going to pay?
This may come as a surprise to some, but publishers actually do something with all that money they charge. Lots of it goes towards paying their journal management staff. In fact, there’s a lesson here: these ruthless for-profit publishers cut costs wherever they can, but still employ thousands of people to oversee peer review. Maybe they’ve been doing this for decades and have discovered that journals run entirely by volunteer academics generally don’t work that well?
By fooling themselves that peer review costs almost nothing, these initiatives are setting themselves up to collapse just when they start attracting hundreds of submissions. Falling flat is always a risk with innovation. Falling flat because you neglected to have a business model is a major disservice to everyone who put in volunteer hours along the way.
* The Journal of Machine Learning Research is an interesting exception: over 200 articles published in 2017, run entirely by volunteers. Even so, the EO automation mentioned here is nothing more sophisticated than basic ScholarOne functionality.
**list items 6, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21, 22, 23, 26, 27, 28, 29, 30, 31, 32, 33, 37, 49, 50, 58, 68, 73, and 84
***list items 13, 14, 15, 18, 21, 22, 23, 26, 30, 33, and 37