Duplicate submission is widely held to be a severe ethical infraction, and there are many COPE cases where authors are reprimanded or even banned from publishing in the affected journals.

But why is duplicate submission held to be such a sin? It is perhaps telling that both COPE flowcharts on this issue point to duplicate publication rather than duplicate submission as such: the bigger crime is submitting an article to two or more journals with the intent of publishing the same article (or a near carbon copy) more than once. Duplicate publication unfairly inflates those authors’ publication records and should therefore be prevented whenever possible.

The act of simultaneously submitting the same article to two different journals in order to hedge against rejection from one or the other is still regarded as misconduct (as in this example), particularly when the authors conceal their actions from the other journal(s).


As a thought experiment, let’s have an author submit their article to two journals A and B on the same day. Their cover letter makes it clear that they have submitted the same article to multiple journals, and that they will withdraw the article from one or the other journal depending on the outcome of peer review. They have no intention of publishing the same article twice.

The Chief Editors of both A and B would likely be scandalized and immediately reject the article. Their decision letters might read something like this:

“Dear Dr X.

Duplicate submission is considered to be academic misconduct. We cannot in good conscience ask our reviewer community to evaluate this article when we know that journal A/B is also requesting the same effort from their reviewers.”

This response sounds plausible, until one realizes that both journal A and B will waste the efforts of their reviewers when they reject the article – which happens for 60-90% of submissions. The reviews are sent to the author and the review text is archived away within the journal’s manuscript management system. There is no mechanism to compel authors to revise their article in response to the reviews, and anecdotal evidence suggests that authors regularly submit an unchanged manuscript to the next journal. Aside from all of those hours out of the reviewers’ lives, it’s as if the review process never happened.

One could argue that it’s the authors who are wasting the reviewers’ time by not improving their article, but their cynicism is fuelled by high rejection rates: why bother to revise the manuscript when it’s only got a 30% chance of going out for review, and even less chance of being accepted for publication?

Journals fussing that duplicate peer review wastes reviewer time are therefore standing in very fragile glass houses. If peer review at journals A and B leads to acceptance only 10% of the time, what do we lose if they review the article in parallel rather than in sequence? And is that loss offset by the corresponding reduction in publication time?
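To make that trade-off concrete, here is a back-of-envelope sketch in Python. The numbers are illustrative assumptions only (a 10% per-journal acceptance rate, three months per review round, two reviewer reports per round), not figures drawn from any real journal:

```python
# Back-of-envelope comparison of sequential vs. parallel submission.
# All parameters below are illustrative assumptions, not data from the post.

ACCEPT_PROB = 0.10       # assumed per-journal acceptance rate
ROUND_MONTHS = 3         # assumed length of one review round, in months
REPORTS_PER_ROUND = 2    # assumed reviewer reports consumed per submission

def sequential(max_journals):
    """Expected months and reviewer reports when trying journals one at a time."""
    months = reports = 0.0
    p_still_unaccepted = 1.0
    for _ in range(max_journals):
        # A new round only happens if every earlier journal rejected the paper.
        months += p_still_unaccepted * ROUND_MONTHS
        reports += p_still_unaccepted * REPORTS_PER_ROUND
        p_still_unaccepted *= (1 - ACCEPT_PROB)
    return months, reports

def parallel(n_journals):
    """One simultaneous round at n journals: fixed time, n-fold reviewer cost."""
    return ROUND_MONTHS, n_journals * REPORTS_PER_ROUND

seq_months, seq_reports = sequential(2)
par_months, par_reports = parallel(2)
print(f"sequential (2 journals): {seq_months:.1f} months, {seq_reports:.1f} reports")
print(f"parallel   (2 journals): {par_months:.1f} months, {par_reports:.1f} reports")
```

Under these assumed numbers, submitting to two journals in parallel costs only a fraction of an extra reviewer report in expectation (because most sequential submissions end up needing a second round anyway) while cutting months off the expected decision time, which is exactly the trade-off the question above is weighing.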

This same logic is one of the principal arguments in favour of preprints: the community can evaluate the article while it goes through formal peer review at the journal. There is an exciting range of peer review initiatives for preprints, with some closer than others to duplicating journal peer review.

Consider an author who posts a preprint on a preprint server and then immediately submits the manuscript to a journal. The author can truthfully answer ‘No’ to the question “Is this manuscript under consideration at any other journal” and the manuscript can move into peer review.

BUT: the preprint is an identical public version of the manuscript, which allows all sorts of other peer review activity to go on while the ‘original’ version of the article is in review at the journal. Here’s a range of scenarios for the preprint:

  1. It gets discussed on Twitter by the authors and other members of the community
  2. It’s picked up for discussion by an online journal club and the transcript of the conversation is posted online
  3. The author is asked to join an online journal club to discuss the preprint and agrees
  4. An overlay journal contacts the author and asks whether they’d be interested in having the preprint reviewed; the author agrees
  5. An overlay journal reviews the article then offers the authors ‘publication’; the author agrees (which amounts to a listing of the preprint DOI in the overlay journal Table of Contents)
  6. A regular journal contacts the author and asks whether they’d be interested in having the preprint reviewed; the author agrees
  7. A regular journal reviews the preprint and accepts it ‘as-is’ for publication; it’s then typeset, assigned a DOI, and published on the journal website. The author is unaware of this process throughout
  8. The author submits the preprint to an overlay journal
  9. The author submits the preprint to a second regular journal

Scenarios 1 through 9 range between innocuous (1) and duplicate peer review (9), but where is the line in between? There are several complexities here.

First, preprint servers rarely indicate whether a preprint is in formal review at a journal, so spontaneous review efforts by the community may well unwittingly be duplicating peer review. The only way to find out is to contact the authors.

One criterion for whether review of a preprint verges into unethical duplicate peer review is whether the author is aware of, or even initiates, peer review of their preprint. In scenario 6 (a regular journal offers to review the preprint) the authors should decline. But should they decline the overlay journal, given that overlays don’t officially ‘publish’ articles (i.e. don’t assign their own DOI)? What happens if they revise their manuscript and a different version of the preprint goes to the overlay journal?

Another complication is around volunteer versus solicited reviewers. Academics are free to volunteer to review whatever they want, but once someone starts soliciting reviews on behalf of a journal or similar organization, the process looks more like traditional peer review (and may therefore count as ‘duplicate’, even when the authors are not involved).

Scenario (7) further complicates the picture. The preprint is CC-BY, so as long as the publishing journal acknowledges the authors, they are completely within their rights to publish the preprint and assign it a DOI. This is, of course, duplicate publication, and perhaps even duplicate peer review, but the authors are blameless in this case. (One can imagine a ransomware version of predatory publishing, where the preprint is rapidly published in a predatory journal, and the authors must pay a ‘ransom’ to have it retracted and removed from the scholarly record before they can publish it in a reputable journal.)

To sum up, the existence of a public version of a manuscript (i.e., the preprint) opens up many new avenues for peer review, and these are largely positive for the integrity of the scientific record. However, many of these peer review efforts run in parallel to peer review at the journal. As I hope I’ve illustrated above, there’s no clear way to decide what counts as legitimate discussion of a preprint and what is unethical duplicate peer review. As preprints become more prevalent we may need to abandon our hopes of enforcing sequential peer review entirely, and that may not be a bad thing.

Tim Vines

Tim Vines is the Founder and Project Lead on DataSeer, an AI-based tool that helps authors, journals and other stakeholders with sharing research data. He's also a consultant with Origin Editorial, where he advises journals and publishers on peer review. Prior to that he founded Axios Review, an independent peer review company that helped authors find journals that wanted their paper. He was the Managing Editor for the journal Molecular Ecology for eight years, where he led their adoption of data sharing and numerous other initiatives. He has also published research papers on peer review, data sharing, and reproducibility (including one that was covered by Vanity Fair). He has a PhD in evolutionary ecology from the University of Edinburgh and now lives in Vancouver, Canada.

Discussion

30 Thoughts on "The dawn of the age of duplicate peer review"

While I’m not discounting the basic premise of this article, it’s worth noting that many (and maybe even most, in some disciplines) preprints receive no comments or reviews, so the scope of this potential problem may not be as great as the article may suggest.

That’s true, but if it’s unethical then every instance is a problem. Duplicate peer review may also become a lot more common in the future.

This essay seems to make two mistakes, I think.

1. “Misconduct” is a harsh term that exaggerates the failure. It’s a failure to comply with or follow (or perhaps read) journal policies. Remember those stories about bands who want bowls of only green M&Ms backstage in their contracts — they didn’t only want green M&Ms; they wanted evidence that the contract was read carefully. If there were no M&Ms or there were multicolored M&Ms, the relationship with the venue changed — the band had to presume that other elements of the contract were unread or ignored. So, too, the policy against multiple submission — if an author missed that in the journal submission guidelines, they likely missed other things, and (as editor) my relationship to the author changes. So it’s not “misconduct,” the way plagiarism might be seen. It’s more like “laziness,” “carelessness,” or “a red flag.”

2. The essential problem with multiple submission to academic journals is not that it’s “misconduct” but that the pool of peer reviewers is proportionally smaller than the pool of authors, as a baseline. We need twice as many reviewers as submissions that survive desk rejection, minimum. In highly specialized fields, the number of peer reviewers may comfortably fit within a university classroom. In some fields, the number of peer reviewers in an area is so small that single-blind (rather than double-blind) peer review is allowed because double-blind would be impossible. [Everyone in the field knows that Professor A operates a lab that studies topic B and has been generating a stream of presentations and publications on that topic; to pretend that you don’t recognize that this is the next paper produced by that lab would be silly.]

If Journal X and Journal Y both ask Professor C to review Professor A’s paper about Topic B, Professor C will likely say “no” to one of them (likely whoever asks second), and may withdraw from the reviewer pool of one or both journals.

We can compare this with creative writing, where literary journals will happily accept multiple submissions so long as the piece is withdrawn once it is accepted elsewhere. The system is crafted to make this feasible. One of the significant differences is that literary journals have “reading periods” as short as two weeks a year; within those two weeks, they will receive all the submissions that their readers can handle, enough to fill the pages of the journal. Another significant difference is that the pool of readers is smaller — a literary journal may plow through a hundred submissions with only six readers, which helps maintain a “vision” and “consistent voice” in the journal. At minimum, a functioning academic journal would need 100 reviewers for 100 submissions, if every reviewer read two in a year.

I don’t know whether this works against the overall points in the article about the changes coming ahead, especially as caused by preprints. I don’t read preprints; I can’t keep up with the stuff in my fields that is published. There may be big changes ahead, but it will be easier to understand the changes when we understand the status quo. I offer these corrections as a richer and more accurate depiction of the status quo.

Trying to publish the same paper in two different journals in order to inflate your publication record definitely is misconduct, and serious misconduct at that. Journal submission checklists ask authors to confirm that they haven’t submitted it elsewhere to prevent authors from claiming they didn’t know it was a rule – this applies even if the authors just rattle through the checkboxes without reading any of the statements.

The math around reviewers outnumbering authors works if you only count the corresponding author, but there are normally enough co-authors to satisfy the demand for additional reviewers for other papers. We studied this a while back with data from Molecular Ecology: submissions doubled, and the number of reviewers we used doubled too (https://www.nature.com/articles/4681041a). This suggests that the reviewer pool can expand to cope with extra peer review.

Hi, Tim,
I don’t think you are responding to my note in full; I was addressing simultaneous submission as misconduct. Intrinsically, it’s not, and when confronted about it, most academics I have worked with (in humanities and social sciences contexts) plead ignorance and assert that they would have withdrawn after receiving their first acceptance. I believe them; I don’t have reason not to. Perhaps there is greater pressure to publish in grant-funded sciences and so there is more tendency to inflate publication counts by the genuine misconduct you describe, which is not multiple submission but autoplagiarism.

As for the reviewer counts, in the humanities and social sciences, an average of 4.5 co-authors would look striking. So you are correct and I defer to your research there. When the average issue of a journal publishes six articles written by 6-9 authors, the math is harder to work out.

Hi David – lots of journals do see duplicate submission as serious misconduct, as they’re either upset at the potential duplication of editorial and reviewer effort, or worried about attempted duplicate publication. The journals are also privately worried that the authors will just go with the journal that demands fewest revisions, which (if duplicate submission became routine) would undermine the rigour of peer review.

You may need to distinguish between “duplicate submission” and “duplicate publication”; these are two different things:
Duplicate submission does not necessarily mean duplicate publication, much less misconduct, particularly if authors withdraw their submissions from the other journals once the manuscript is accepted somewhere else.
It simply means that authors are trying to avoid wasting their time waiting for a long peer review to conclude.

As a journal Editor I don’t want to waste MY reviewers’ time. But if other academics or otherwise qualified peers want to spend their time reviewing a paper, including a preprint, have at it. It will only help the paper (if the comments are thoughtful and the authors actually use them to improve the study/paper). So that would be the reason I would triage a paper if I knew it was simultaneously submitted elsewhere for formal peer review. That and the possibility that simultaneous review could potentially lead to dual publication, so why open that can of worms?

Would you reprimand one of your reviewers if you caught them participating in open peer review of a preprint? (I suspect not, as they’re autonomous individuals who don’t really belong to any journal)

Your post does not reflect the universe I live in. The article dismissed journals’ concerns about wasted reviewer time as trivial “fussing.” It then asked, “What do we lose if they review the article in parallel?” You already answered it: reviewers wasted their time. I don’t need to be paid as a reviewer. I don’t need public recognition. As a researcher, time is gold. You don’t respect my time, I don’t review for you ever again. Duplicate peer review is a non-starter.

I think the problem of wasted time is even worse than you assume. You assumed that researchers who opt for duplicate review would stop at two journals. In my opinion, they’re likely to submit to many journals simultaneously. The wasting of reviewer time is a much bigger issue than you imagined.

But let’s back up. For a suggestion to drastically change peer review to be credible, it ought to be solving a major problem. I didn’t see what major problem duplicate peer review would be solving. Or maybe the point was that we are supposed to succumb to the notion of duplicate peer review as the price of thinking preprint servers were ever a good idea.

Let’s think about costs for a moment. If we assume this becomes the norm, and authors submit their papers to 2, or 5, or 10 journals simultaneously, what does that do to costs involved in managing the peer review process? As you wrote about in 2020 (https://scholarlykitchen.sspnet.org/2020/08/19/revisiting-a-curious-blindness-among-peer-review-initiatives/), there are significant costs involved. Now if we double those costs, or increase them 5-fold, or 10-fold, should we assume then that APCs and subscription prices will skyrocket accordingly in order to cover those costs?

As one of the commenters above asks, what problem is this solving? And I’ll add, is that problem urgent enough to warrant large increases in APCs and subscription prices?

This piece definitely isn’t advocating for duplicate peer review; it’s more an acknowledgment that the walls that held duplicate review in check are crumbling, and that the arrival of peer review for preprints hastens that.

My preferred solution is independent peer review at a brokerage and then authors taking their articles to the interested journal(s) in sequence – much cheaper, and with better outcomes for the authors to boot.

It sounds to me like you want to emulate the “agent” system in nonacademic book publishing. In that context, the agent serves as the independent reviewer and broker. But in your vision, the author, not the broker/agent, pitches to presses?

This – “One can imagine a ransomware version of predatory publishing, where the preprint is rapidly published in a predatory journal, and the authors must pay a ‘ransom’ to have it retracted and removed from the scholarly record before they can publish it in a reputable journal.” – is really frightening.

I wouldn’t be too concerned about it for a couple of reasons. Even a CC BY license offers some recourse: I think the implied endorsement clause could have some weight here. Pretending that authors chose your journal to submit to could run afoul of that.

Secondly, it doesn’t make good business sense to do it. If it interferes with authors publishing in the venues of their choice, universities will have incentives to get their legal teams involved. After all, there are rankings and funding on the line if staff can’t publish. It also doesn’t earn the predators any money: “stealing” content from preprint servers doesn’t net them any APCs. So there’s real risk for the predators and no immediate upside. It’s no wonder that reselling already-published content from OA journals and books is an existing business but poaching preprints is not.

What “implied endorsement” clause?

FWIW, the upside is a journal website that looks like it’s been publishing for a while rather than being brand new as it attempts to snare others.

“FWIW, the upside is a journal website that looks like it’s been publishing for a while”

In principle, couldn’t predatory journals also do this with published OA papers? Grab, republish, and use an inconspicuous note buried at the bottom to acknowledge the original publication details. End result: an impressive-looking journal, full of papers by established researchers.

Section 2(a)(6): No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).

I think republishing preprints as if you received a submission from the authors would violate that clause and give the authors recourse against the publisher. I don’t think that padding your website is a sufficient upside for this. It’s much easier and less risky to poach already published work. Authors are much less likely to ever find out or care.

I don’t think this is an accurate interpretation of the stated clause. If the mere re-publishing of the work is considered an “endorsement” of the venue that republishes it, then the entire CC BY license is rendered moot, and reuse would only be able to take place with the explicit permission of the rights holder.

David, you might be right. I don’t think the no endorsement clause has been used effectively very often. But I do think authors have a better chance of using it if a predatory publisher is trying to pass themselves off as the first journal to publish a work. And if that fails, there is always the remove-attribution clause, which would require them to remove the authors’ names, reducing the credibility it adds to their website. It’s much better for them to continue to just repackage and resell already-published works or accept the hoax papers/stings written in SCIgen.

I think the “no endorsement” clause is more about a situation where an organization tries to claim you as a spokesperson for their cause, quoting your work out of context. Simply re-posting the work in its entirety with no editorial commentary would not qualify. The other thing to consider is that predatory publishers are, by and large, criminal organizations built around committing fraud. I would not expect them to pay much attention to licensing terms, nor respond to author requests unless major legal action was credibly threatened, and most researchers don’t have the assets or financial ability to pursue such international cases. Publishers, in general, have little motivation to do so on authors’ behalf, at least under an APC model, as they’ve already earned everything they’re ever going to earn from that article.

David, you make completely valid and fair points. The only thing I would say is that if predatory publishers did start poaching preprints, it might disturb universities and cause them to act. I don’t think they would be happy if their Nature papers were all stolen by predatory journals, and I think they would use any tools available to them. If they didn’t, they would lose so much funding. If CC BY licensing gives them no recourse, which is entirely possible, we might then see a backlash against it and the development of a licensing suite more suited to academic uses.

Why is it OK for journals to waste authors’ time by rejecting manuscripts after X months of waiting, but not OK for authors to submit manuscripts to more than one journal at a time?
It does not make any sense.
Articles are now treated as products, so each producer (author) should have the right to sell his or her products (articles) where they want, particularly in the era of open access journals where authors pay thousands of dollars for each published article.
Think of any industrial product or medication that manufacturers sell to many stores/pharmacies at the same time.
Don’t we find Aspirin in all drugstores?
So, why should scientific articles be an exception?
Each journal wants to keep the lion’s share (of the money) for itself. But this should change: authors should have the right to submit their manuscripts to multiple journals at the same time. Once accepted by one journal, the authors can withdraw their manuscripts from the other journals that are still taking their time.
Copyright is bad for science and innovation. Priority of invention and credit for first discoverers should, however, always be acknowledged and rewarded.

|Don’t we find Aspirin in all drugstores?
|So, why should scientific articles be an exception?

What’s the difference between you bringing me an aspirin today versus Bayer (IIRC) introducing aspirin for the first time?

Isn’t that what makes peer reviewed journal articles exceptional?

And if an equivalence can be made, what is their value then?

Thanks Tim, interesting post. Just one comment – you seem to take it as read that journals reject 60-90% of submissions. Not sure what your source for these figures is, but I don’t think you can assume this – rejection rates can be as low as 10% even for reasonably high impact journals in some fields, and yet your argument here seems to assume high rejection rates for all journals. Duplicate submission to two journals that generally accept 90% of submissions would indeed waste the time of the same pool of reviewers, and the Chief Editors would be entirely justified in their rejection, in my opinion at least.

In the scenario of the authors submitting to Journals A and B simultaneously, where the text states, “The Chief Editors of both A and B would likely be scandalized and immediately reject the article… This response sounds plausible, until one realizes that both journal A and B will waste the efforts of their reviewers when they reject the article”, I feel as if I have missed something. If the article is immediately rejected by the Editors of both journals, no reviewer time will be wasted because the article will not proceed to peer review. At what point would reviewers’ efforts be wasted? Apologies if I’ve misunderstood and thank you for clarification.

With regard to the discussion about authors receiving reviewer comments but not revising their article prior to submission to another journal, if the review comments have led to rejection, it benefits the authors to address these issues prior to further submission. Otherwise, if the article is submitted as is, it truly is a waste of reviewer time to likely comment upon the same issues identified by the previous reviewers.
