Author’s Note: I find myself posting a link to this 2018 post on Twitter every few weeks — there’s always some group of academics white-knuckled with rage about the high cost of open access fees/journal subscriptions, *especially* since peer review is essentially “free”. It must be free because reviewers don’t get paid, right?

Finding someone who’s wrong about peer review on Twitter isn’t much of a challenge, but unfortunately the misconception that peer review is essentially free extends to people trying to innovate in this space, which dooms their efforts as soon as they achieve any sort of success – they have no business model and hence no revenue. Since innovation around peer review is desperately needed, I keep re-upping this post in the hope that future entrepreneurs bake realistic costs into their visionary plans.

The world of scholarly communications is awash with innovation around peer review. There is, however, a worrying thread running through many of these initiatives. It’s summarized nicely by this quote from the Société Française d’Écologie et d’Évolution:

“[We] Support the development of new ways of disseminating scientific knowledge, not based on commercial journals but on open archives and a public peer review process, completely handled by the researchers themselves”

The common refrain is that academics should take back control of peer review (e.g. here, here, here, here), which carries the heavy implication that journal staff and publishers add literally nothing to the process because volunteer reviewers and editors do all of the work.


I am convinced that peer review run at scale (>200 submissions per year) by volunteer academics alone would be a shambles*. Why? Damian Pattinson neatly summarizes the issue in a recent post on the ASAPbio blog:

“The term ‘peer review’ has come to mean any assessment performed on a manuscript prior to publication […] But in actual fact, the act of preparing a manuscript for publication requires a huge number of checks, clarifications, tweaks and version changes, from many different parties. But because of the tradition of confidentiality during the editorial process, much of this work has gone unnoticed by the academic community.”

Many of these seemingly minor contributions come from Editorial Office (EO) staff: from the list of 102 things that publishers do, EO staff play a role in 28** and are the principal actors in 11***.

Put simply, the main role of the EO is to catch problems before they derail the peer review process. There are an astonishing number of ways in which authors, editors, and reviewers can make mistakes, and it takes an experienced and dedicated eye to catch them. A few examples: spotting missing or corrupted figures, noticing that the authors shared a dataset containing raw patient data, seeing that ethics board approval isn’t mentioned, spotting that a potential handling editor has a serious conflict of interest, and removing ad hominem language from reviewer comments. Once a mistake gets through, peer review can be delayed by weeks or months while the issue is fixed and everyone’s attention is dragged back onto the manuscript. Big mistakes and big delays make everyone unhappy; lots of them will leave your reputation in tatters. The hope that AI-based automation could fulfill these functions greatly underestimates the range and subtlety of the issues encountered during peer review: AI needs to come a long way before it can replace anyone in the Editorial Office.

Academics are wonderful human beings, but they are just not the right people to do this kind of painstaking (= boring), day-in, day-out work. First, they are often unavailable for weeks while traveling or on field work, and the journal will often grind to a halt in their absence. Second, their main job is doing research, with teaching, supervision, and university admin on the side. Adding 10+ hours of journal admin per week just detracts from their important-for-humanity research; this is particularly wasteful considering that most EO tasks only require an undergrad education.

The Peer Community In (PCI) model doesn’t even recognize that EO work needs to take place, and their site only provides a description of their ‘recommender’ role:

“The role of the recommenders is similar to that of a journal editor (finding reviewers, obtaining peer reviews, making editorial decisions based on these reviews), and they may reject or recommend the preprints they are handling after one or several rounds of reviews”

PCI recommenders could be asked to check each new submission for missing figures and so forth, and they might be able to manage (without complaining) a few manuscripts per month. Any more than that and their volunteer enthusiasm will evaporate, and the checks won’t get done. PCI could bring in more recommenders as submissions increase, so that each only handles a few papers, but then the problem becomes consistency: not all academics would be equally diligent with these checks, and disasters-in-waiting will slip through.

The most logical approach would be to pay someone to consistently do these checks before the manuscripts go to the recommenders. While they’re at it, this person could chase up late reviewers, sort out issues with figures, and spellcheck the decision letters. But, dammit, we’ve found ourselves back with one of those pesky Editorial Offices again, even though we vowed to have peer review “completely handled by the researchers themselves”.

This matters because the need to employ staff moves these new peer review initiatives from running on fresh air and goodwill to needing a business model. A full time EO assistant will cost at least $40K per year, and that money has to come from somewhere. Who is going to pay?

This may come as a surprise to some, but publishers actually do something with all that money they charge. Lots of it goes towards paying their journal management staff. In fact, there’s a lesson here: these ruthless for-profit publishers cut costs wherever they can, but still employ thousands of people to oversee peer review. Maybe they’ve been doing this for decades and have discovered that journals run entirely by volunteer academics generally don’t work that well?

By fooling themselves that peer review costs almost nothing, these new peer review initiatives are setting themselves up to collapse just when they start to attract hundreds of submissions. Falling flat is always a risk with innovation. Falling flat because you neglected to have a business model is a major disservice to everyone who put in volunteer hours along the way.

* The Journal of Machine Learning Research is an interesting exception – over 200 articles published in 2017 and run entirely by volunteers. The EO automation mentioned here is nothing more sophisticated than basic ScholarOne functionality.

**list items 6, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21, 22, 23, 26, 27, 28, 29, 30, 31, 32, 33, 37, 49, 50, 58, 68, 73, and 84

***list items 13, 14, 15, 18, 21, 22, 23, 26, 30, 33, and 37

Tim Vines

Tim Vines is the Founder and Project Lead on DataSeer, an AI-based tool that helps authors, journals and other stakeholders with sharing research data. He's also a consultant with Origin Editorial, where he advises journals and publishers on peer review. Prior to that he founded Axios Review, an independent peer review company that helped authors find journals that wanted their paper. He was the Managing Editor for the journal Molecular Ecology for eight years, where he led their adoption of data sharing and numerous other initiatives. He has also published research papers on peer review, data sharing, and reproducibility (including one that was covered by Vanity Fair). He has a PhD in evolutionary ecology from the University of Edinburgh and now lives in Vancouver, Canada.


19 Thoughts on "Revisiting: A Curious Blindness Among Peer Review Initiatives"

Tim – thank you for re-articulating the issues so clearly! It would be interesting to read your opinions on a couple of follow-up questions:

1) Why don’t many academics and librarians understand or appreciate the work done by journal offices/publishers, and what might be done to change their negative perceptions?

2) In the future, will readers continue to value the “invisible” value added by journals/publishers, or will they just be satisfied with free preprint versions of similar content?

Thank you. Richard Wynne – Rescognito

I believe the recent spate of bad science in COVID pre-prints will answer your second question. Countries basing policy decisions on pre-prints have seen higher mortality than countries whose policies are based on the accepted literature.

Hi Jonathon – that’s a fascinating point. Do you have a shareable source? I’d be interested in checking it out.

The closest analogy I can think of is an office firing the janitorial staff because the place always looks clean – the work the Editorial Office does is only apparent in its absence. Moreover, the work the EO does is only obvious when you’re trying to manage peer review, which largely doesn’t happen for pre-prints.

Some monograph publishers now charge the author for indexing, as if that were not part of the publisher’s job.

I’m the editor of Learned Publishing, the journal owned by ALPSP and published with the support of SSP (and one of the SK chefs). We don’t have an editorial office, but I receive a small honorarium to fill this role, and what Tim says is absolutely correct. Whilst our reviewers are fantastic, they don’t have the time (or, I guess, the inclination) to check the “nuts and bolts”, and more than once we have been saved from publishing some terrible research because I consider those checks to be my job (as do the other editors). I’m not claiming to be better than reviewers, and I rely on their in-depth knowledge, but there are many cases where an article looks OK and gets their thumbs-up, yet I discover problems when I do more detailed (and, yes, boring) checks. I’ve written about this in the journal, in “Peer review IS an expensive business”. (And before anyone contacts me: yes, I’m sure I have missed errors in some of our articles.)

Any academic who’s ever been department chair or otherwise had significant administrative responsibilities certainly has to know how valuable the office staff’s labor is.

But what really gets me is when academic librarians fall into this same trap. Considering that their labor is also often devalued, I would think that they would understand how much behind-the-scenes labor academia takes and would be sympathetic.

Great post, Tim. You state “AI needs to come a long way before it can replace anyone in the Editorial Office”, but surely a good use of AI tools is not to replace human judgement but to help the human reach the decision point sooner. UNSILO provides over 30 simple but essential checks on each submission (and these checks are now available from within ScholarOne). Is each citation actually referred to in the text of the article? Are the figure legends numbered in the correct sequence? The AI tool finds examples, but in all cases we leave it to a human to make the decision. As for conflicts of interest, we trained our systems by having humans assess thousands of papers to identify statements that could be construed as a conflict, whether or not the authors highlighted them as such. Anything we identify in the submission we then present to the editorial office for them to make a decision. In other words, we don’t replace anyone in the editorial office; we make them smarter and save their time. As submission numbers increase, publishing houses are not hiring additional Editorial Office staff to handle them. But using AI tools, which are good at finding things, humans can carry out decision-making more effectively – and the checks they carry out can be more thorough than before. That seems to me an effective combination.

Hi Michael – my take is that AI (at the moment) mostly helps EOs do the jobs they’d *like* to be able to do but lack the time for. The checks you mention above are a good example, in that no EO I’m aware of goes through the citation numbering for every new submission. It’s a very useful thing to do, but takes ages by hand. iThenticate is another example – nobody would ever systematically check for text copying by hand, but there’s (thankfully) a tool that runs that task automatically in the background.
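To make the point concrete, a check like “is every figure with a legend actually cited in the text?” can be sketched in a few lines of Python. This is a toy that works on plain text – not UNSILO’s actual implementation, which presumably parses structured manuscript formats – and the function name and the “legend starts a line” convention are illustrative assumptions:

```python
import re

def check_figure_legends(text: str) -> list[str]:
    """Toy manuscript check: flag gaps in figure-legend numbering and
    figures that have a legend but are never cited in the body text."""
    problems = []
    # Legends are assumed to start a line like "Figure 3. Caption text"
    legend_nums = [int(n) for n in re.findall(r"(?m)^Figure (\d+)\.", text)]
    # Body text = everything except the legend lines themselves
    body = re.sub(r"(?m)^Figure \d+\..*$", "", text)
    cited = {int(n) for n in re.findall(r"Figure (\d+)", body)}

    # Legends should run 1, 2, 3, ... with no gaps or reordering
    expected = list(range(1, len(legend_nums) + 1))
    if legend_nums != expected:
        problems.append(f"legend numbering {legend_nums} != expected {expected}")
    # Every figure with a legend should be mentioned somewhere in the body
    for n in legend_nums:
        if n not in cited:
            problems.append(f"Figure {n} has a legend but is never cited")
    return problems
```

The check itself is trivial to express; the hard part is what the comments above describe – running it consistently on every submission and having a human adjudicate each flag.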

Thanks for making an excellent point! In my 9 years as an EIC, I can confirm that sometimes the submission and review process works smoothly, and sometimes … it doesn’t. It was these latter instances where, assisted by the EO, I earned my extravagant honorarium. I won’t give examples because you’ll think I’m just making this stuff up, but other EICs will just chuckle knowingly. Let’s just say I kept some truly poor science from getting into the public record and saved more than one author from an embarrassing, career-harming blunder. In the review process, someone has to be the adult in the room.

I think new ways of disseminating scientific knowledge must be easy to access, understand, and participate in. Journals can serve as a repository of knowledge because they are accurate and complete, but disseminating academic work needs to be more attractive; new media like video may be a perfect promotional tool.

Dear Tim, I of course agree that all the staff who handle editorial work need to be paid for their time-consuming jobs.
But my issue is that some of the biggest academic publishers, like Elsevier, do this job surprisingly poorly for the amount of public money they receive from universities. For example, they do not offer any language check; I’ve seen papers published with typos and LaTeX-type errors, especially in tables; most of this work is probably outsourced and poorly paid. If academic publishing were handled by universities, I bet any university library staff would do this job much better, as they are already well trained in editorial handling.
And still, the most important job, reviewing, is done by academics for free and on a voluntary basis. Why don’t you postulate that they should also be paid for that? The whole situation is ridiculous, I would say. Universities pay huge amounts of money to the commercial publishers and then pay again through the salaries of the academics who do the job for those publishers for free.
So sorry, but the argument that academia would never be able to handle all this editorial work themselves doesn’t speak to me. They could simply hire more librarians and use the money to pay their own employees fairly instead of paying the commercial publishers, who fill their pockets using the voluntary work of scientists.

Why hire librarians to do the work of a publisher? Librarians are highly trained professionals, but they are not trained in running a publishing business, even a not-for-profit one. Luckily, university presses have been around for more than 500 years and solve the problem you’ve raised here. I would argue that preferentially supporting them over commercial publishers would be a significant improvement (note that I work for a university publisher, so I am clearly biased), but the research career system is largely indifferent to this matter, hence authors have little motivation to choose them.

As for paying peer reviewers, I would suggest taking a look at this post, where the concepts (and problems) of paying peer reviewers are discussed extensively in the comments section.

Thanks, David, for your response. Yes, sorry for the wrong terminology – I meant the staff of the university presses rather than the librarians. It is quite depressing that university presses have been around for more than 500 years and yet their role in academic publishing is quite invisible worldwide. How come commercial publishers like Elsevier can have a 40% profit margin without the scientific community protesting any more strongly? I am just starting my academic career, but I have never come across any senior colleague submitting their work to a university publisher. Maybe if there were more opportunities to do so, this would start to work.

Thanks also for sharing the post about paid peer review. But I don’t understand why it should be treated as a mandatory career requirement. It could work much as it does now, but with the commercial publishers paying a small fee on a per-task basis – since they do have the money. The post also mentions that formalizing peer review could lower enthusiasm for the task, but is there currently a high level of enthusiasm for peer reviewing among scientists? I don’t think so. Reviewing standards are often terrible; reviews are sometimes done ‘in 5 minutes’ with a one-sentence opinion (yes, commercial publishers allow that), and I often receive really rude remarks – so perhaps I am very unlucky, but I have never witnessed an enthusiastic reviewer. I don’t think commercial publishers paying small sums to reviewers could make it any worse.

I’m 100% in agreement with you on supporting journals owned and run by the research community, and one should also add to that list independent non-profit publishers and journals owned and run by research societies. There are more of these than you probably realize, and I think the problem is not a lack of opportunities (my employer publishes around 450 journals, including the top journals in many fields), but rather that no one thinks about who owns a journal when choosing it for their papers. Authors are looking to do what’s best for their own careers, rather than thinking more broadly about supporting the community.

As for peer review – I think there’s great enthusiasm for it, at least among some subsection of the community. Most journals rely heavily on reviewers they know will take on the work and do it well, and my concern with making it a paid transaction or a requirement for researchers is that it changes the nature of the activity from contributing to one’s field to performing a required task, or something that pays far less per hour than one’s salary. As noted in the comments, where journals have paid a small amount, it has largely been rejected by peer reviewers. And when one looks at the quantity of papers being reviewed, a small amount of money per paper (including rejected papers), paid to multiple reviewers (and potentially more reviewers upon revision), adds up pretty quickly. Yes, the big commercial publishers could possibly afford this, but smaller and non-profit publishers would be hit very hard.
