The world of scholarly communications is awash with innovation around peer review. There is, however, a worrying thread running through many of these initiatives. It’s summarized nicely by this quote from the Société Française d’Écologie et d’Évolution:

“[We] Support the development of new ways of disseminating scientific knowledge, not based on commercial journals but on open archives and a public peer review process, completely handled by the researchers themselves”

The common refrain is that academics should take back control of peer review (e.g. here, here, here, and here), which carries the heavy implication that journal staff and publishers add literally nothing to the process because volunteer reviewers and editors do all of the work.


I am equally convinced that peer review run at scale (>200 submissions per year) by only volunteer academics would be a shambles*. Why? Damian Pattinson neatly summarizes the issue in a recent post on the ASAPbio blog:

“The term ‘peer review’ has come to mean any assessment performed on a manuscript prior to publication […] But in actual fact, the act of preparing a manuscript for publication requires a huge number of checks, clarifications, tweaks and version changes, from many different parties. But because of the tradition of confidentiality during the editorial process, much of this work has gone unnoticed by the academic community.”

Many of these seemingly minor contributions come from Editorial Office staff: from the list of 102 things that publishers do, EO staff play a role in 28** and are the principal actors in 11***.

Put simply, the main role of the Editorial Office is to catch problems before they derail the peer review process. There are an astonishing number of ways in which authors, editors, and reviewers can make mistakes, and it takes an experienced and dedicated eye to catch them. A few examples: spotting missing or corrupted figures, noticing that the authors shared a dataset containing raw patient data, seeing that ethics board approval isn’t mentioned, spotting that a potential handling editor has a serious Conflict of Interest, and removing ad hominem language from reviewer comments. Once a mistake gets through, peer review can be delayed by weeks or months while the issue is fixed and everyone’s attention is dragged back onto the manuscript. Big mistakes and big delays make everyone unhappy; enough of them will leave your reputation in tatters. The hope that AI-based automation could fulfil these functions greatly underestimates the range and subtlety of the issues encountered during peer review: AI has a long way to go before it can replace anyone in the Editorial Office.

Academics are wonderful human beings, but they are just not the right people to do this kind of painstaking (= boring), day-in, day-out work. First, they are often unavailable for weeks while travelling or on field work, and the journal will often grind to a halt in their absence. Second, their main job is doing research, with teaching, supervision, and university admin on the side. Adding 10+ hours of journal admin per week just detracts from their important-for-humanity research; this is particularly wasteful considering that most EO tasks only require an undergrad education.

The PeerCommunityIn model doesn’t even recognize that Editorial Office work needs to take place, and their site only provides a description of their ‘recommender’ role:

“The role of the recommenders is similar to that of a journal editor (finding reviewers, obtaining peer reviews, making editorial decisions based on these reviews), and they may reject or recommend the preprints they are handling after one or several rounds of reviews”

PCI recommenders could be asked to check through each new submission for missing figures and so forth, and they might be able to manage (without complaining) a few manuscripts per month. Any more than that and their volunteer enthusiasm will evaporate, so the checks don’t get done. PCI could bring in more recommenders as submissions increase, so that each only handles a few papers. Then the problem becomes consistency: not all academics would be equally diligent with these checks, and disasters-in-waiting will slip through.

The most logical approach would be to pay someone to consistently do these checks before the manuscripts go to the recommenders. While they’re at it, this person could chase up late reviewers, sort out issues with figures, and spell check the decision letters. But, dammit, we’ve found ourselves back with one of those pesky Editorial Offices again, even though we vowed to have peer review “completely handled by the researchers themselves”.

This matters because the need to employ staff moves these new peer review initiatives from running on fresh air and goodwill to needing a business model. A full time Editorial Office assistant will cost at least $40K per year, and that money has to come from somewhere. Who is going to pay?

This may come as a surprise to some, but publishers actually do something with all that money they charge. Lots of it goes towards paying their journal management staff. In fact, there’s a lesson here: these ruthless for-profit publishers cut costs wherever they can, but still employ thousands of people to oversee peer review. Maybe they’ve been doing this for decades and have discovered that journals run entirely by volunteer academics generally don’t work that well?

By fooling themselves that peer review costs almost nothing, these new peer review initiatives are setting themselves up to collapse just when they start to attract hundreds of submissions. Falling flat is always a risk with innovation. Falling flat because you neglected to have a business model is a major disservice to everyone who put in volunteer hours along the way.

* The Journal of Machine Learning Research is an interesting exception – over 200 articles published in 2017 and run entirely by volunteers. The EO automation mentioned here is nothing more sophisticated than basic ScholarOne functionality.

**list items 6, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21, 22, 23, 26, 27, 28, 29, 30, 31, 32, 33, 37, 49, 50, 58, 68, 73, and 84

***list items 13, 14, 15, 18, 21, 22, 23, 26, 30, 33, and 37

Tim Vines


Tim Vines is the Founder and Project Lead on DataSeer, an AI-based tool that helps authors, journals and other stakeholders with sharing research data. He's also a consultant with Origin Editorial, where he advises journals and publishers on peer review. Prior to that he founded Axios Review, an independent peer review company that helped authors find journals that wanted their paper. He was the Managing Editor for the journal Molecular Ecology for eight years, where he led their adoption of data sharing and numerous other initiatives. He has also published research papers on peer review, data sharing, and reproducibility (including one that was covered by Vanity Fair). He has a PhD in evolutionary ecology from the University of Edinburgh and now lives in Vancouver, Canada.


63 Thoughts on "A Curious Blindness Among Peer Review Initiatives"

I was recently speaking with an editor who was complaining that her journal’s time to publication had increased in the past year primarily because first one and then another junior member of her editorial office staff had taken new jobs, leaving her shorthanded. As a newish editor, she admitted she hadn’t realized how much effect these people could have on things. I think this reflects the very blindspot you are pointing out.

That’s a neat little experiment that the journal you mentioned has unintentionally conducted: take the office staff out, and the time to publication increases.

I’ll bet a number of us who have worked as administrators at universities recognize the pattern of faculty not seeing much value in nameless administrators’ work: “what do all those people do?” But they usually value the ones they can name.

Thank you, Tim. My team of 6 handles thousands of papers a year. Each one gets a QC for the things you mention. Chasing late reviewers and editors is a full-time job in itself. Answering never-ending queries from authors (some legitimately concerned about reviews, others perpetually impatient) takes up a lot of time.

The biggest time suck not mentioned is dealing with ethics issues. We spend hours going back and forth with authors, editors, and sometimes other journals or institutions. The “researcher run review” you speak of never seems to address how they will handle these at all.

I am all for innovation but step one is to actually know what you are trying to innovate. Skipping that step is lazy and dangerous.

I fully agree that editorial offices with their professional editors and editorial assistants provide an indispensable service of quality, copy and records control.
We all know that not everything submitted should be allowed to enter peer review.
However, it doesn’t mean one needs journals as such for that task. If academics were to return the means of scholarly communication to their universities or to public/non-profit funders, the editorial offices could also be placed there. More job security for those editors and assistants, anyway, than with certain publishers.

A press based at a university? A “university press”?

Nah. That idea will never fly.

Bear with me, David! If scientists were restricted to publishing only with their own funder or university press, and no external submissions were possible, it could work. As I blogged here, see the example of the Wellcome Trust:
Commercial journals can then print their elite Natures and Cells as a digest with “best of” they find in OA on funder’s publishing platforms. You could subscribe to those!

The university and the funder have a vested interest in seeing “their” work pass peer review. Who, in that system, has the incentive to reject a work?

Yes, the university and the funder can be made to have that incentive, by creating and empowering external watchdogs that they fear enough. But then why not task publishing to those watchdogs in the first place?

No watchdogs. Horrible term, especially when applied to research integrity or ethics of academic publishing. Watchdogs know who their master is, but will also keep silent when fed a sausage.
Of course, the funder/university-press system I propose can only work under fully transparent peer review, again on the model of the Wellcome Trust. Universities might indeed be tempted to cheat, but funders will surely be interested to know the objective quality of the outcome of their investments, so they would only invite unbiased reviewers (I hope).
The prospect of being publicly ridiculed for faux peer reviews should be enough of a deterrent. This is where science will correct itself.

“so they would only invite unbiased reviewers (I hope).”
Not a cat’s chance in hell!
It is incredibly easy to find reviewers who will accept papers from colleagues with just the right amount of criticism! I spent a lot of time and resources getting to know the communities I served to ensure that chosen reviewers were not ex-students, supervisors, best friends, etc. Leonid, have you ever been involved in publishing? If so, you should know all this.

@ shiloh Mar 7, 2018, 3:35 PM (can’t reply to your comment).
Again, I am talking about funders publishing the research they financed and organizing transparent peer review of it. Nothing is fraud-proof, but consider:
1. Funders have an interest in seeing an objective assessment of their investment, to know they were right to fund those authors. It is less like journal peer review and more like grant peer review, only transparent.
2. Transparency in peer review can fix a lot, though of course not everything. But sunlight is still the best disinfectant.

Hi Leonid — jumping back in after a long day away from the internet. The problem with a funder or institution creating a “journal” to publish their own internal work is that it crosses the line and becomes something of a marketing exercise. Funders want people to see that they’re doing important things and donate funds, so they have a vested interest in presenting a positive image. Similarly, grant officers at that funder want the trustees to think they’re doing a good job distributing funds, so a vested interest in things looking productive. Universities similarly seek donors, not to mention potential faculty and students.

Having an objective, neutral third party do the review offers a better chance for objectivity (although, as I’m sure you’ll point out, it’s not perfect, it at least offers a better chance for fairness). As one potential example, this report on the Wellcome Trust-sponsored sub-section of F1000 seems to show a 98% acceptance rate. Does that make you suspicious at all?

Disagree. Funders have NO interest in objective peer review. Objective peer review holds funders accountable (“You funded this crap?”). No one wants to sit before a review board, and that is what traditional publishing is–and why funders are seeking to undermine it.

So, not a “university press” in the sense that we understand today, but an “Institution Press”. Nah. That’ll never fly. Oh! Hang on. OECD, World Bank, IMF, UN agencies, EU and many NGOs (Chatham House, Brookings etc) all run in-house presses. OECD’s even won the London Book Fair Academic & Professional Publisher of the Year in 2017. But, wait, what’s to prevent us publishing rubbish and from vanity publishing? The need to protect our Institution’s reputation is very, very strong motivation for an in-house publishing team to tell an author – sorry, no, not good enough – even when they are a valued colleague.
Disclosure: I am OECD’s publisher.

But to be fair, those institution presses publish material from all authors, rather than just from within their own institution. And if run right, there’s a deliberate firewall between the institution and the editorial office of the press, preventing any favoritism toward the home institution. This is a far cry from what was proposed in the initial comment, where the institutional press would merely be a “house organ” that solely published its own researchers’ material.

“Commercial journals can then print their elite Natures and Cells as a digest with ‘best of’ they find in OA on funder’s publishing platforms. You could subscribe to those!”

Trying to sell a list of reviews of previously published papers has been tried by Faculty of 1000 for decades now, and as far as I’m aware, it has never covered costs, let alone made a profit.

Gosh, so many criticisms!
Personally, I think my proposal is not doable simply because scientists wouldn’t give away their journals, with their impact factors and other prestige attributes.
With a funder or university press there is no extra prestige to gain: you already got your job or funding there.
Yes, a 98% acceptance rate is the right approach from the Wellcome Trust. We want to see ALL research published, not just the fancy results. But this is exactly where journal-based publishing is biased: towards the fanciness of results. One must do away with journal competition. You publish where you work or got your money from, as a kind of preprint, and post-publication peer review takes it from there.
Anyone defending the current journal-based system as beneficial for science is invited to visit my site. It doesn’t matter if it’s Elsevier, Frontiers, or even a society-run journal.

Admittedly we have not (yet) reached the scale of 200 submissions per year, but at Discrete Analysis we do not have the kind of editorial support you are talking about (though we do have a small amount of very useful administrative support that we do not have to pay for, so our ostensible costs are slightly misleading, but the real costs are still very low) and we do not miss it.

I think that publishers have a blind spot: yes, they do all these things, but would the system come crashing to a halt if they didn’t? If there is a missing or corrupted figure in a published paper, then that is a serious problem if you insist on antiquated notions such as “the version of record”, but if you don’t, then all an author has to do is post an updated version to a repository. I find all your other examples of things that publishing offices do unconvincing in a similar sense: they may add a small amount of value, but nothing like enough to justify what we pay for them.

I can totally see where this may be true for an overlay journal (half the work with zero risk) with less than 200 submissions a year.

In my experience, the system would indeed degrade and grind to halt without the multitude of little contributions made by EO staff. I acknowledge that things are a bit different in mathematics, where tenured academics hold Managing Editor positions. They have to take time out of their research to chase reviewers etc, but I’m not convinced that this admin work is a good use of their time or intellect.

For your missing figure example, you’ve missed a few steps. First, who spots that the figure is missing? If it’s the reviewer, who contacts the author to request a new version? The editor? Who follows up with the editor to make sure they actually contacted the authors? Who follows up with the authors two weeks later to ask for the figure again? Stumble at any of these steps and the authors never hear that their figure is missing or get pushed to do anything about it.

Lastly, Discrete Analysis probably works with the most competent and conscientious people in your field, so the admin burden is quite low. Once you start getting submissions from everybody, you run into something like the Pareto principle: 20% of your papers take up 80% of your time. The authors in that 20% mess up in ingeniously varied ways, and someone has to have the experience and dedication to spot all these issues.

Tim, “simply repost”? You mean after I have published my paper with the corrupted figure, I then repost my paper with the correct figure, only to find out that something else is wrong!

You can’t be serious

I’m completely serious. One just reposts again. I don’t see the problem. By contrast, a problem with more traditional publication methods is that if there are mistakes in the published version (as there often are), then there’s nothing one can do to correct them.

Copy-editing immediately springs to mind here. Even if maintaining the scholarly record isn’t considered important by some, surely basic communication has to be?

Working on a much larger scale (>4000 submissions a year), and with an author-base spanning the globe, we see some atrocious spelling and grammar in submissions, and also get complaints from referees that it isn’t their job to correct this. So, it’s left to the editorial staff and copy-editors to ensure quality and consistency – you’ll just get a lower standard of journal if you strip this out, to the serious detriment of future scholars trying to read and understand the work in order to replicate and build on it.

One of the reasons we can do things cheaply is indeed that we do not copy-edit (except that for the occasional article written by non-native English speakers I have done a bit of copy-editing myself). We can afford to do this because the standard of writing of our articles is in general high. I understand that this doesn’t generalize to every journal in every field, but it’s fine for us: I don’t think our articles are noticeably worse as a result of our failure to copy-edit. (We also ask referees to point out any typos that they spot.)

I think it’s great that Discrete Analysis is working well with your current setup – it’s clearly very efficient. Do you think you’ll be able to retain the same workflows with an order of magnitude more submissions?

The argument over static versions-of-record versus ‘living articles’ is a separate issue, and one I plan to tackle in a future post (spoiler: I think the latter would be a disaster).

I think that with an order of magnitude more submissions something would have to change. For example, at the moment we write editorial introductions to our articles, meaning a few paragraphs setting them in context for a wider audience. These are quite a lot of work, and I do most of them myself. To do ten times as many would eat up an unacceptable amount of my time. I think we could scale up by a factor of 2 without too many problems, but after that it would start to get more difficult. But we might be able to get somewhere by having more than one managing editor and a larger editorial board.

On the subject of versions of record versus living articles, we have a compromise at Discrete Analysis. We have a version of record that stays fixed for ever, but if an author wishes to post an improved version to the arXiv (which has not happened yet and is likely to be a rare occurrence), we can add a link to it from the editorial introduction, explaining the reason for the update. But the main link will always be to the initial published version. I think that that way we get the best of both worlds.

Thank you, Tim. I echo many of the comments by Angela: quality control at every step of the process is part of the value we add to a reputable publication. We have over time automated our processes to be more efficient, but a journal without an editorial office is leaving itself open to untold dangers.


The pre-publication “peer review initiatives,” whose curious blindness Tim Vines cogently deplores, prompt similar thoughts concerning a notable post-publication peer review initiative that the NCBI began in 2013. Whereas pre-publication review provides some index of quality, a more exacting review is made after publication by readers who are influenced by, and may act upon, the information they have obtained. Citations provide one index of this. The other is post-publication peer review as provided by PubMed Commons. Remarkably, the results of the latter may feed back into the pre-publication peer review process.

Those charged with reviewing a new paper, or grant application, or even a Nobel prize suggestion, are confronted with an author’s list of publications. Pasting each title into PubMed Commons, one sees an abstract of the publication, sometimes accompanied by the freely-given post-publication peer-review comments.

Though often meant to be constructive, and monitored for politeness by PubMed staff, the comments likely sometimes embarrassed authors, editors, the original pre-publication peer reviewers, and even publishing houses. Flaws, sometimes of a degree that Leonid Schneider so rightly deplores on his webpages, emerged not only in the works of authors worldwide, but also in the works of NCBI staff, which includes expatriates from Russia and other countries.

Sadly, the NCBI have now declared PubMed Commons an “experiment” that failed. The criterion was the number of comments received, not their quality. Despite important feedback when its intention to terminate was announced, the NCBI’s post-publication peer-review ended last week, and with it, a source of invaluable assistance to the pre-publication, (and pre-Nobel), peer-review process!

Thanks for the shout-out, Donald! A lot in science happens on the assumption that all scientists are good, rational, and honest people. Post-publication peer review was supposed to be a place where scientists help each other with constructive advice.
Hence the original concept of PubMed Commons, where such polite and productive academic debate was expected (only active researchers were allowed to comment); many instead tried to flag fishy data and other problematic issues (which they now do on PubPeer, incidentally also founded as an “online journal club”). But how do you constructively contribute a discovery of a fake western blot, or a statement that the authors of a paper were found guilty of misconduct by their university?
It is not as if scientists have no constructive suggestions to make about science as such, quite the opposite, but they prefer to use the traditional route and simply write to the authors, instead of making it public. Maybe it is actually better this way. In this regard, I wonder what the ratio of public to private feedback received by preprint authors is.
Speaking of which: even the non-peer-reviewed bioRxiv has to employ staff to reject plagiarized and research-free manuscripts.

One could not do on PubMed what you do so well at your site, Leonid. But please note that PubPeer is a free-for-all, anonymous torrent, snipes and all, which greatly contrasted with PubMed Commons.

Oh Donald, I sure know first-hand what PubPeer debate can deteriorate into. I am still trying to find out what the actual scientific problem with that paper is, aside from its author:
Which again brings us to the point of the above article: the need for a minimum of unbiased editorial oversight, for both peer review and post-publication peer review.

Tim Vines makes a good case for the value added by editorial staff in publishing journals. That case is even stronger when it comes to book publishing, in which staff acquisition editors play key roles at every stage of the process, from initial contact through final production and publication. I outline nine types of roles these editors play in “The ‘Value Added’ in Editorial Acquisitions” in the Journal of Scholarly Publishing (January 1999), here. Some of these roles have no counterpart in journal publishing at all.

Every task listed in your paper at the link is done by either a managing editor, executive editor, or Editor in Chief of a journal.

PubMed Commons must have required a lot of resources, as do many developments/experiments in publishing. What they received was not post-publication review but simply “comments,” often picking fault with minor aspects of papers: not worth the continuing investment for so few snipes and comments. NCBI are to be applauded for trialling this; there is no other post-publication review service that has been successful (e.g., F1000 also had very few useful comments).

With respect to investments: if academia had been left to look after scholarly communication, I very much doubt we would have any risk-taking initiatives or investments, and everything would still be in hardcopy only, printed in varying quality at a local press. It is also unlikely that we would have seen so many of the publisher-led initiatives, services, and tools that have made the lives of authors and readers so much easier. These things could not have been achieved without the appropriate financing; journals such as those described by Tim Gowers are only possible because publishers have invested in the research around processes and systems to make it easy.

While I am not familiar with the Société Française d’Écologie et d’Évolution’s specific new peer review initiatives, we should be elated to learn that various alternative models are being considered in an effort to reduce overhead costs (resulting in increased net revenues for the academic community) while ensuring the integrity of the published research results. By dividing the editorial process between those tasks that require specialist academic know-how and others (also undeniably essential) that do not require content-specific insight but CAN easily be performed by other qualified individuals within the academic community, considerable progress can be realised.

I recall when the professional society I was working for at the time took over responsibility for publishing its journal from one of the ‘big five’ publishers. Competent members of the society were able to perform many of the editorial tasks not requiring specialist know-how, freeing up time for the dedicated academic specialists to focus on those aspects of article content that they were best qualified to handle. Incidentally, many of these purely editorial tasks were being performed by society employees anyway, without any form of remuneration (or even recognition) from the commercial publisher (their work was highly appreciated by the Editorial Board!). Of course, the peer reviewers were not getting paid by the commercial publisher either, so they had no difficulty performing their work within the context of an academic society rather than a commercial publisher. The result of the society takeover: increased income to the journal/society, supporting the journal’s growth, and presumably increased levels of research in that area of science.

Let’s not push discussions regarding alternative peer review models into the realm of false dichotomies. Many options are possible, and there is no ‘one fits all’ solution. However, if we fail to consider the merits of potential alternatives, while still respecting the need to ensure the quality of the content, no progress will be made.

Hi Daniel – many thanks for your comment. How would the system at the society respond if submissions doubled next year? I suspect that many of the people contributing volunteer hours would be overwhelmed and you’d need to bring in dedicated staff to cope.

This is the point I’m trying to make: peer review innovation is great, but the point of innovation is to hit on a solution that’s so much better than the current system that it grows very rapidly. If you don’t have a model that allows you to add staff as the operation grows, your operation will collapse under its own weight as soon as it starts to take off.

Thanks for your follow up comment Tim. Learned Societies pay their personnel – the editorial process is not free. They are remarkably adept at up or downscaling in order to accommodate adjustments in work volumes and revenue streams, but are MUCH cheaper than commercial publishers (not that the employees of commercial publishers are paid excessively – the money goes elsewhere).

I could not, respectfully, disagree more. I have always been an active reviewer, for several reasons. Of course, I am only sent manuscripts that relate to my area of expertise, an area in which I also publish.
1. I find it relaxing to read the work others are contributing in the area we share.
2. In instances where the manuscript is not published, I am still the beneficiary of the content of that work.
3. It helps me to stay current with the field.

Hi John – thanks for your comment. Which aspect of the piece do you disagree with?

As I read the above, I am reminded of my recent drive by the fire station. The firemen were washing the truck. I said to myself: no one needs a fireman until there is a fire.

Additionally, if you want something done, ask a busy man; the other kind has no time. It is like relying on volunteers: a few will perform, but the vast majority are just not busy enough to do so.

Thanks for your post, Tim. The argument for diffusing editorial labor through new peer review processes ensnares not only the significant work of manuscript editing or copyediting, but also the value of substantive editing. In HSS fields, and certainly in history, with which I’m most familiar, the work that editors do to assess and synthesize readers’ reports is time- and skill-intensive. Helping authors to develop, substantiate, and then clarify their argument is a key contribution to scholarship. We did a breakdown on this for a single manuscript:

You can argue that this is too labor intensive and that cost savings could be squeezed out at a number of stages. We could cut source verification, for example, by cutting out our editorial apprentice program. We could cut time and money by using generalist freelance copyeditors rather than full-time manuscript editors. But it depends what you want from scholarship, and how you recognize and compensate people’s work.

The lively and engaged exchange among specialists that one might imagine in an open review system, or in post-publication online spaces, takes place primarily before publication as scholarship is workshopped at conferences or any one of many regular seminars.

This system isn’t without flaws, but it isn’t broken.

Thanks Karin! I really like the breakdown you present. Although it crops up in places in your piece, it’s worth highlighting the ‘management’ aspects: knowing what needs to be done and when, delegating someone to do it, and then making sure it’s done to the required standard. Without adequate oversight the process is probably slow and ultimately results in much lower quality work.

Thank you for this post. As Managing Editor of two biomedical journals, I am confident that our Editorial Office staff shoulders much of the responsibility throughout the peer review process, as well as a portion of the production process. I have no doubt that removing these parties would lead to longer turnaround times, as well as an increase in published errors. The things my staff look for are not always the same things our volunteer editors/reviewers look for, even if in some cases they should be.

In addition to all the tasks mentioned in this article, I would add that in our case it is the Editorial office that culls and prepares submission data for the annual board meetings. Even in manuscript processing systems that keep good records and reports, there’s still a person behind all those spreadsheets, taking that raw data and presenting it to the board in logical and easy-to-understand pieces. Without the Editorial office doing the legwork on these reports, our Editors and editorial board would be very much in the dark.

As an author, I do interact with the editorial staff and appreciate their work.
As a reviewer, that depends. If the review is for a backward journal, I am invited by the office, exchange emails with them, and feel that I work with a person.

For more advanced journals, I do not communicate with a living person at all. I get a standard invitation from a bot, I assume that I am picked from the database by a bot, I get another email from a bot thanking me for agreeing, and I get a template thank-you for the review. I assume that some preliminary checking is done before the manuscript is sent to me, but I am not sure.

If my recommendation is to reject, then I assume that a living person reads it before it is sent to the author. If my recommendation is major revision, this is a simple task for the office: the author has to do a lot of work. If my recommendation is minor revision, the office has a lot of work. The manuscript has to be read carefully, my remarks have to be read carefully, and the other reviewers’ remarks have to be read carefully. If my recommendation is to accept as is… that never happens.

I would say that the editorial office cannot be scrapped. The cost of a review can be as low as the cost of (three times the number of reviewers) automated emails, but more difficult cases might cost a lot, making the average price much higher. And the production stage requires qualified staff, but that is not peer review.

I’m curious: for the larger/more standardized journals, how often do you feel like you’re being picked “by a bot” (perhaps based on area-of-expertise matches or bibliography searches) as opposed to being purposefully selected by an editor?

Twice a year. And that is my assumption, of course, by receiving only automated emails.

Thanks for this interesting piece Tim (and for the quote!). The point I was making in my ASAPbio contribution was that confidentiality has not served the publishing industry very well, and has, I believe, led to a great deal of the mistrust that academics place in publishers. The very fact that you need to point out that journals actually do work for their money shows how bad that problem has got. A simple solution to this mistrust is to get journals to open up their editorial processes (as well as their peer review). This would allow readers to see the level of rigour a journal applies to its publications, and perhaps even lead to new quality metrics that replace the dreaded impact factor.

Absolutely! It blows my mind how often I read/hear academics putting forward the viewpoint that peer review is essentially free because volunteer reviewers and editors do all the work. The Editorial Office is out of sight and out of mind.

I come to this debate as an historian of scholarly publishing, and particularly of learned society publishing (and, of course, as an academic author and reviewer).

I agree with the point that some level of editorial support is still needed for the effective circulation of scholarly knowledge; but I don’t think that’s an argument against attempts to bring control of scholarly communication back into the scholarly community.

I think starting from the point about a “peer review process, completely *handled* by the researchers themselves” creates a straw-man argument. For me, the key point about calls for reform is that scholarly communication should be “run by”, or “controlled by”, researchers and their communities. That doesn’t mean we can’t have help (even, paid help) in doing it. Historically, scholars have sought the help of typesetters and printers to circulate their research; nowadays, we still need someone to create and maintain digital platforms, and I am quite happy to accept that we need someone to help organise the volunteer reviewers and editors, and to do the basic admin of receiving/tracking/chasing.

BUT, the issue is about who is in charge: who decides what is needed, and evaluates the cost/benefit to the community. The aim should be to manage the technical and administrative processes involved in circulating research so that it fits what the various scholarly communities feel they need. The amount of help needed may be more or less depending on discipline – we don’t all need the sorts of checks that bio-med researchers seem to need; and it may be that some communities are happy to rely on more AI/tech to do some of the work, but others are not. As I keep saying, I think this is where scholarly communities (e.g. learned societies and subject associations) need to be more actively involved.

With my historians’ hat on, I would point out that learned societies have provided this sort of support for their communities for decades, and, in some cases, centuries. To take an example I know well, George Stokes and the other secretaries of the Royal Society in the nineteenth century (all of whom were researchers/scholars) received an annual honorarium for their editorial and other work for the society. And they were assisted with the day-to-day correspondence and chasing by a paid clerk, then known as the Assistant Secretary. Once the editorial work became too great for that one staff member, the Society also employed an Assistant Editor (from 1937). The fact of there being paid support for the editorial work doesn’t change the fact that this editorial work was being managed by and for the scholarly community.

I’d also suggest that there’s a difference between ‘editorial work’ and ‘peer review’ – so that even if the actual business of peer review is all done by volunteer academics, there is more to editorial work than just that.

Thanks Aileen – the historical perspective is very useful. In the fields I’m most familiar with (ecology and evolution), the societies run most of the major journals, and they are firmly in control of the editorial-side decision making. I think the tradition of editorial independence means that this is true for most journals: Chief Editors get rightly annoyed if the publisher starts telling them what kinds of articles they should be accepting.

I do disagree that I’m attacking a straw man. At the start of the post I list five different peer review initiatives that have no obvious business model and are hence unable to hire staff as they grow. They really will have to either do all the admin etc. themselves, or just let it slide. The quote from the SFE goes on to reference PeerCommunityIn, who neither have a business model nor see the need for editorial staff.

Dear Aileen, I would like to know your views, as a historian of scholarly publishing, about the possibility that there is gender bias at work in the way that Editorial Office work (organization, management, tracking, running peer review, copyediting, proofreading) is being valued in this discussion.

Dear Anonymous – on editorial staff probably perpetuating gender bias at the Royal Society in the 1970s, see the quotation from a staff member in my Nature comment paper about gender bias in (historical) editorial processes, published this week!

But are you implying that, nowadays, editorial office work might be undervalued because it might be performed by women? I haven’t thought about the modern situation, but now you ask me, I’d say that – as an academic author and reviewer – the editorial staff are mostly invisible to me (on the far end of electronic systems), so I hadn’t ascribed gender to them either way. In the cases where I know the actual people doing that work (i.e. in my own learned societies), there are both men and women in those roles.

What a surprise. A blog post on the Scholarly Kitchen praising the supposedly hidden editorial work of commercial publishers. In point of fact, editorial oversight at many of the bigger conglomerate presses has actually degraded over the years, at least in the disciplines I mainly work in as a longtime editor and publisher (over 27 years’ experience): the humanities and social sciences, in both journals and books. An award-winning journal in medieval studies that I founded and have edited for almost ten years has now undergone three transitions in management because of takeovers and mergers, and in each case the editorial and production processes have degraded, to the point where my co-editors and I have had to be more and more vigilant over all of these processes. Absolutely no assistance has ever been given to us in terms of peer review, except to insist we use a crappy online system to track our requests and replies. As a publisher myself, and an open-access one at that, I am often at pains to defend publishers and the many ways they add value to the scholarship they package and disseminate, but this blog’s constant praise of corporate conglomerate publishers, who over time have actually provided less and less editorial care and oversight, is beyond the pale.

Hi Eileen – many thanks for your comment. Is there a particular line in this post that you feel praises corporate conglomerate publishers?

The closest I can see are the observations that “publishers actually do something with all that money they charge. Lots of it goes towards paying their journal management staff” and “these ruthless for-profit publishers cut costs wherever they can, but still employ thousands of people to oversee peer review”

I think we both agree on those statements: publishers of all stripes do indeed employ editorial office staff. We also agree that a good EO is essential for consistent, high quality peer review.

My point in this piece was to contrast existing peer review operations (i.e. journals, who have EO staff) with the new peer review initiatives that would like to get by without a business model or paid employees.

Dear Tim — bingo! Thanks for replying. It was actually that very line that bothered me. I don’t know if any studies have been done on this (if anyone knows, I would love to know about them), but in my experience working with a wide range of university and commercial academic publishers over the years, editorial and aesthetic/production processes (especially at commercial presses, whose profit margins are quite extraordinary) have gotten worse and worse, either through cost-cutting measures that dictate less labor devoted to graphic design, cheaper paper, cheaper printing technologies, etc., or through the outsourcing of copyediting and typesetting to companies whose employees are clearly overworked, or even through more automated processes that remove the human-to-human interactions so crucial to real editorial care, in which disciplinary expertise matters. So if certain commercial publishers have such terrific profit margins, why have they been engaging in business practices that have actually harmed important protocols of editorial oversight? On the larger point of your essay, wanting to draw attention to the very real value that publishers can and could provide to editorial oversight, I basically agree. Even among OA editors and publishers, you often hear proposals for solving the economic problems by eliminating the “publisher” via things like preprint repositories combined with automated typesetting services. This drives me a little crazy, because my long experience as an editor in HSS has shown me that at least 90% of academic authors need a LOT of assistance turning their research into published work of a very high caliber with a minimum of mistakes, not to mention everything publishers do to get this work properly entered into the scholarly record (metadata, cataloging, archiving, indexing, abstracting, etc.).
But that’s also why I believe the most ethical and professionally “gold standard” future of academic publishing is in partnerships between new scholar-led, non-profit OA presses and university libraries.

I’ll second Tim’s comment — why in the world would you think that editorial oversight and office services like this are limited to commercial publishers? As your own experience seems to show, the commercial publishers, driven by the bottom line, are most likely to do away with these services. Meanwhile, there are many not-for-profits that pride ourselves on the level of service we provide authors.

this blog’s constant praise of corporate conglomerate publishers, who over time have actually provided less and less editorial care and oversight, is beyond the pale

Have you ever taken a look to see who writes for this blog? You won’t find an employee of a “corporate conglomerate publisher” among us:
Also, I would recommend the following posts:

I never once said or implied that editorial oversight and office services were limited to commercial presses. Tim intuited rightly which lines in his essay I was responding to. And I am mainly quibbling with the idea that the money commercial presses make translates into better management of peer review. At least in HSS, it does not. Uncompensated academic editors are in fact doing most of that work, and even more, as editorial services and oversight basically degrade. Also, I never implied that anyone who writes for SK works for a corporate publisher. But I do find that the essays on this blog, for all of their attention to innovation and change, often prop up the status quo of traditional academic publishing. I assume that will be considered an impolitic thing to say, but I’m honestly trying to be as frank as possible. And I do read regularly, so I’m aware of the content you share here. I would never imply that the authors here are all “of one mind.” Nothing could be further from the truth, and I know that. But maybe SK should also try to understand a little bit more its own (more than occasional) conservatism. Please don’t assume I am not a close, regular reader of this blog. This is just the first time I have commented.

I can’t speak for the other chefs, but I find myself being grumpily conservative about a lot of new stuff in publishing because so much of it is poorly thought out and based on the flimsiest understanding of the problem at hand (particularly the money side).

Part of that conservatism also stems from the high stakes at play here. Society has two means by which we work towards the truth: the adversarial legal system, and academic peer review. The legal system is enshrined and protected by government, but it seems that we’re willing to make major changes to the peer review system on the basis of a few mediocre research papers and a lot of angry blog posts.

I keep coming back to something GK Chesterton wrote:

“In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, ‘I don’t see the use of this; let us clear it away.’ To which the more intelligent type of reformer will do well to answer: ‘If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.’”

Thank you for your post, Tim. I’m coming late to this but was inspired to comment on an element that hasn’t been mentioned in the above conversation: editorial office staff are professionals who must constantly seek continuing education in this quickly changing industry. Not only are editorial office staff necessary, as you explain in your post; they must also be well educated in peer review, publication ethics, and editorial office management. It is often the editorial office staff who identify and then advise the academic editors on how to manage the difficult issues that arise, such as image manipulation, plagiarism, authorship issues, process efficiency, educating new reviewers, etc. In my experience, academic volunteers do not have time to educate themselves on these issues; that’s why they rely on the editorial office staff. This must be considered in all models of peer review, regardless of who pays for it.

Hi Kristie – this is an excellent point. Given that academics often act as Editors for any one journal for a few years, the role of the editorial office as an ‘institutional memory’ for how situations have been dealt with in the past is invaluable.

Comments are closed.