Academic publishing has long been a popularity contest: each journal is an exclusive club to which many are drawn, but only a few get invited in. The best clubs feature the most brilliant editors, the most interesting research, and – by extension – the choosiest manuscript selection process. 

It therefore makes good sense to measure the success of your club by the length of the queue outside, quantified by the number of manuscripts submitted by hopeful authors each year. More manuscripts equate to being more popular; more manuscripts also mean the editors have a wider range of research to choose from. Both good things. And, some decades ago, steadily increasing submission numbers may have been a reliable indicator that the journal was headed in the right direction. 

I argue here that the total number of submissions a journal receives each year has long ceased to be a useful performance indicator. In fact, chasing ever more submissions is having a serious deleterious effect on our industry, and we need to stop counting rising submissions as success as soon as possible. 

[Image: stock photo of a stern bouncer with arm outstretched outside a nightclub]

1. The Money

First, the number of articles published each year has risen for decades. While some of this extra volume does represent additional valuable research, the biggest area of recent growth has been weak, flawed, or even fraudulent work. The second half of 2025 has seen dramatic increases in submissions at many journals, likely because Large Language Models make drafting (or outright fabricating) manuscripts so easy.

Receiving more submissions now means feeding ever more junk into the journal’s peer review pipeline. In many cases, it’s not just the journal pipeline: often, the goal is to keep the manuscript within the publisher’s publication cascade so that it will eventually be published somewhere and the editorial costs recouped via an article processing charge (APC). Keeping junk within your cascade means the costs discussed below are even higher, particularly if the manuscript never gets published.

Each submission has a cost. Journals pay for manuscript management infrastructure on the basis of how many submissions they receive. More submissions mean more person-hours in the Editorial Office (EO) spent on basic manuscript checks, or more spending on vendors who help with screening. More submissions also mean more email from authors that the EO has to deal with.

Since there is considerable Editorial Office work before a new submission reaches an Editor (e.g., checking that all the figures have been provided), desk-rejecting all of the junk still imposes an expense. Moreover, the junk tends to need more Editorial Office time & attention before it meets the threshold to be passed to an Editor. The costs rise sharply for each manuscript that makes it into peer review, as much more Editorial Office time is spent on managing reviewers, sending emails, and monitoring the review process. 

Multiplied across 10,000+ journals, chasing ever more submissions (even when many get immediately rejected) imposes massive costs on our industry. We may publish about 4.5 million articles per year, but — assuming an average acceptance rate of 30% — we’re collectively receiving about 15 million submissions. 

Assume that 50% of submissions get a desk rejection, and each of those desk rejects costs $20, while the 20% of articles going into peer review and being rejected cost $100 each, and the 30% getting accepted cost $250 each (before production). That works out to nearly $1.6 billion (with a ‘B’) in peer review costs, and I suspect those per-manuscript dollar values are underestimates. 

Fast forward to the end of 2026 or mid-2027, and it’s not implausible that there are another five million junk manuscripts out there, hitting our journals over and over. That’s an additional $100 million in desk rejection costs alone (at $20 per manuscript) just to let them get submitted. Since those authors pay nothing to submit, that cost will have to be added to APCs, subscriptions, or whatever model we’re using by then to cover these peer review costs.
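
For anyone who wants to check or adjust these numbers, here is a minimal back-of-the-envelope sketch in Python. Every volume and per-manuscript cost in it is just the assumption stated above, not a measured industry figure:

```python
# Back-of-the-envelope peer review cost model, using the assumptions stated
# above. All volumes and per-manuscript costs are illustrative, not measured
# industry data.

articles_published = 4_500_000                      # articles published per year
acceptance_rate = 0.30                              # assumed average acceptance rate
submissions = articles_published / acceptance_rate  # ~15 million submissions per year

# Assumed outcome shares and per-manuscript handling costs (USD)
desk_reject_share, desk_reject_cost = 0.50, 20
review_reject_share, review_reject_cost = 0.20, 100
accept_share, accept_cost = 0.30, 250               # before production costs

baseline = submissions * (
    desk_reject_share * desk_reject_cost
    + review_reject_share * review_reject_cost
    + accept_share * accept_cost
)
print(f"Submissions per year: {submissions:,.0f}")     # ~15,000,000
print(f"Baseline peer review cost: ${baseline:,.0f}")  # ~$1.575 billion

# Incremental cost of five million extra junk manuscripts, assuming each one
# at least consumes a $20 desk rejection every time it is submitted.
extra_junk = 5_000_000
print(f"Extra desk rejection cost per round: ${extra_junk * desk_reject_cost:,.0f}")  # $100,000,000
```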

As you can imagine, celebrating an extra $100M in costs as a win because submissions went up is a recipe for communal disaster.

2. The Innovation Shackles

Second, when ‘number of manuscripts submitted’ is treated as a key performance indicator, any initiative that might deter authors from submitting is deemed too risky. These initiatives range from requiring authors to share their data prior to submission to basic author-facing automated checks for compliance with journal submission guidelines. Anything that could prompt the author to abandon the submission process and go elsewhere is viewed as a threat to submission targets. Submission fees are unpopular for the same reasons. 

To abuse the exclusive club metaphor from above a little more: our club has a strict dress code, but our success metrics demand we let everyone in and then have the bouncers wrestle the hippies in flip-flops off the dance floor. Someone turning up with a device to assess dress code compliance for punters in the queue is turned away because we dare not do anything to discourage the hippies in the queue.

[Minor aside: letting the hippies into the club also runs the risk that they’ll end up doing interpretive dance on the main stage (i.e., published in your journal), leaving your reputation as a sophisticated, James-Bond-esque club in tatters.]

3. The Fix

I’ve largely foreshadowed this, but as an industry, we need to make authors demonstrate that they care about doing good science before we let them submit their manuscript to a journal. This logic also applies to preprint servers: since the submission requirements for a preprint server are minimal, they’re receiving a growing avalanche of manuscripts. If these servers are to have value for readers, then multiple quality control steps need to be in place. 

Decades ago, we forced authors to demonstrate their commitment to our journals by making them comply with bewildering formatting requirements at initial submission. While these requirements normally served no functional purpose, they imposed enough of a time cost that authors whose papers were unlikely to be published did not bother to submit in the first place. Since each journal had its own idiosyncratic submission format, shopping a manuscript around multiple journals was far too much work to be worthwhile. 

Fortunately, technology has moved on, and specialist tools like blueberg.ai will be able to automatically assess manuscripts against journal submission requirements and let authors address issues before the manuscript is submitted. Surfacing these issues before submission makes sense for everyone: authors are less likely to have their manuscript kicked back, and a much higher proportion of manuscripts will meet journal standards the first time, saving the Editorial Office a great deal of hassle. 

Lastly, prompting authors to improve their manuscript prior to submission acts as a prequalification step: are they sufficiently committed to submitting to your journal to do what you’ve asked? If not, then you’re much better off if they take their work elsewhere. 

The prequalification step is doubly important when the journal has requirements that relate to research integrity. Note: I am not talking here about the negative signals of integrity, such as image manipulation or paper mill activity, where it is vital that authors not be able to ‘test run’ their problematic manuscripts against the detection tools. 

Instead, when the journal requires ‘honest signals’ of research integrity — time-consuming actions like sharing data and code, including RRIDs for reagents, or using ORCIDs — then helping authors to comply with these prior to initial submission accomplishes several good things. 

  1. First, the journal sends a powerful message that it cares about research integrity.
  2. Second, itemizing the research integrity actions the authors need to take prior to submission makes the authors accountable for taking those actions. It’s easy to ignore a generally worded request like ‘please share your data’; it’s much harder to ignore a concise list of the individual datasets from your manuscript that the journal expects to see on a public repository (see the sketch below this list). 
  3. Third, authors who fear sharing their data and code (perhaps because they’re fabricated, or because the work was done in such a rush that the datasets are a mess) will quietly take their weak submissions elsewhere. Journal submissions will go down, but the submissions that do arrive will be of higher quality. 
  4. Fourth, it is much easier to maintain Editor enthusiasm and find willing reviewers when the work is by authors who clearly care about the quality of their science.
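
To make the idea of an itemized, per-manuscript checklist concrete, here is a minimal sketch in Python. The manuscript structure, field names, and example values are all invented for illustration; this is not how any particular screening tool actually works:

```python
# Hypothetical sketch: turning a journal's integrity policy into a concrete,
# per-manuscript checklist. The manuscript structure, field names, and example
# values below are invented for illustration; they are not any vendor's schema.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    action: str        # the specific step the author is asked to take
    satisfied: bool    # whether the manuscript already meets the requirement

def build_checklist(manuscript: dict) -> list:
    """List each dataset- and author-level integrity action for this manuscript."""
    items = []
    for ds in manuscript.get("datasets", []):
        items.append(ChecklistItem(
            action=f"Deposit dataset {ds['name']} in a public repository and cite its DOI",
            satisfied=ds.get("repository_doi") is not None,
        ))
    for author in manuscript.get("authors", []):
        items.append(ChecklistItem(
            action=f"Provide an ORCID for {author['name']}",
            satisfied=author.get("orcid") is not None,
        ))
    return items

# Example: one dataset still buried in the supplement, one author without an ORCID.
manuscript = {
    "datasets": [{"name": "site_survey_2024.csv", "repository_doi": None}],
    "authors": [{"name": "A. Researcher", "orcid": None}],
}
for item in build_checklist(manuscript):
    print(("DONE  " if item.satisfied else "TO DO ") + item.action)
```

A list like this is much harder to wave away than a general request to share your data, and it gives the Editorial Office something concrete to verify at submission.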

So, to recap, the publishing industry has led itself into a mess by focusing on rising submissions as a positive indicator of journal performance. Journals are now receiving a flood of low-quality work; the Editorial Office, Editors, and reviewers are having to wade through it, and we’re wasting huge sums of money in the process. The time has come to close the floodgates and require that authors demonstrate their commitment to quality science before we let them in the door.

Tim Vines

Tim Vines is the Founder and Project Lead on DataSeer, an AI-based tool that helps authors, journals and other stakeholders with sharing research data. He's also a consultant with Origin Editorial, where he advises journals and publishers on peer review. Prior to that he founded Axios Review, an independent peer review company that helped authors find journals that wanted their paper. He was the Managing Editor for the journal Molecular Ecology for eight years, where he led their adoption of data sharing and numerous other initiatives. He has also published research papers on peer review, data sharing, and reproducibility (including one that was covered by Vanity Fair). He has a PhD in evolutionary ecology from the University of Edinburgh and now lives in Vancouver, Canada.

Discussion

17 Thoughts on "Manuscript Submissions Are Up! That’s Good, Right?"

Thanks Chris, those are good reads. However, they are mostly aimed at the volume of published manuscripts, whereas here I’m talking about journals aiming for ever-rising submissions of new manuscripts. Of course, if you keep the rejection rate constant, then more submissions translate directly into more articles.

What I’d like to see is journals forcing authors to prequalify themselves to submit their manuscripts in the first place: do the authors care enough about doing quality research that they’re willing to make a few simple improvements to their manuscript before sending it in? That idea is anathema if the journal (& publisher) are fixated on rising submissions as a signal of journal health.

To play devil’s advocate though, won’t this give an advantage to the publishers (likely larger, probably commercial) who don’t need to force authors to do this and have tools to allow them to do it on the authors’ behalf?

If a journal or publisher feels like they’re somehow gaining an advantage from inviting a lot of AI-generated slop into their peer review system, then they should go ahead. I think the smart money is on the publishers that save editorial time and community goodwill by steering the garbage elsewhere before it even gets submitted.

Thank you, Chris and Tim. This is a very helpful thread. Absolutely agree stronger screening and author support can improve quality at the point of submission, and potentially this could be done via various types of automation. The only thing I’d add is that we can’t filter our way out of volume pressure forever. Better filters help, but even the best ones will introduce new issues & none will reduce the incentive to submit low-quality manuscripts in the first place. Would love to see the ecosystem tackle both sides of this: more efficient workflows AND upstream reform of incentives and assessment, so we’re not just shifting the pressure around the system but actually reducing it at the source. Initiatives like DORA and CoARA are lighting the way on reforming incentives. Now the challenge (and opportunity) is for all of us across the sector to turn those principles into discipline-specific practices. L. Elizabeth Parker captures this well below.

Hi Chris – I’ve re-read your comment and realized I hadn’t fully understood it. I think publishers of all sizes will want to adopt tools that prescreen articles before submission, particularly for things that the EO is going to have to catch and fix once the article gets submitted: the efficiency gains from spending less time on EO checks will lead to considerable per-manuscript cost savings. And those are good no matter the size of the journal or portfolio.

The real fix is to shift away from “publish or perish.” What will it take for initiatives like TARA (Tools to Advance Research Assessment) and CoARA (Coalition for Advancing Research Assessment) to actually transform the way institutions assign value to the work individual researchers are doing?

That is one fix, but we’ve been trying to reform publishing incentives for a few decades with no success. The flood of low-quality submissions arriving at journals is a newer problem, perhaps driven by poorly conceived incentive systems in places like India and China. Whatever the reason behind all these extra submissions, they’re costing publishers a fortune to deal with and eroding the goodwill and enthusiasm that power the peer review system.

My point here is that we need to stop fetishizing article submission rates and help authors to demonstrate that they care about doing decent science before they send in their article – whether that’s via submission fees or simple pre-submission checks for journal policy compliance.

Is this a straw man? I think you are pushing at an open door. The only publishers stoking submissions at all costs are those that follow an unqualified profit-maximising strategy, something that has been fostered by the APC model of funding academic publishing. So, what we need to develop are criteria of integrity and quality in academic publishing, and to call out the profit-maximisers. It’s not just the volumes of submissions that are amiss, but the quality, the authorships for sale, the citation cartels, the paper mills. I would be surprised if there is anybody in reputable publishing who is not aware of these threats, although it is still a shock to realise that the likes of Wiley were prepared to buy Hindawi despite all the red flags. But there are some criteria that can be ambiguous, including the number of submissions. Highly cited publications? That’s good, right? Well, not if the citations have been manufactured by bad actors. So, we need more nuance and good faith initiatives.

Hi Peter, thanks for your comment. You’d be amazed how much reluctance there is throughout the publishing industry to put any impediments in front of authors who are about to submit their manuscript, and that applies just as much to august society subscription journals as to large OA outlets.

I don’t know what to think. I’m an editor for a society-owned environmental science journal (ET&C) which has required ‘honest signaling’ for data and code for about 10 years. Our submissions have been steady, while publication numbers at some other journals in the domain increased ~10-fold, with journal impact factors doubling (e.g., Chemosphere, STOTEN). Granted, they were suspended from WoS indexing last year for publishing lots of dodgy content, although Scopus took no similar action. And even our requirements for honest signaling are sometimes more signal than substance. I make efforts to check whether, in fact, all submissions I get that say their data are available really are. There are usually omissions. And with a double-blind review process that withholds the data availability statement from reviewers because it would compromise author anonymity, I question how well supporting data for articles get checked. YMMV, but code and data reviews are nontrivial, and I’m skeptical: oftentimes, the honest signaling isn’t really honest.
//Just another depressed editor//

I really enjoyed this piece, thank you Tim! I completely agree with you on so many points (and genuinely adore the analogy of hippies and flip-flops). The worry (at least in my head) about putting more obstacles in authors’ way is that it will deter some of the quality authors as well as the poor ones. There’s also a DEI implication. We’ve worked for years to reduce friction in our submission processes, and for good reason. I’d love to be able to stop some of the dross coming through the door so we can focus on the good stuff without compromising our publications output; however, I’m not sure that’s in our control. Identity verification is one mechanism I’d be keen to see more of, and we need to get more comfortable with banning known bad actors. Aside from that though, I think automating a lot of the heavy lifting in desk assessment is the way forward, potentially to the extent of auto-rejecting. It’s the only way I see to keep costs from spiralling while maintaining our standards. There is a risk that we kick out some potentially good work with the bad, of course, but I’m not sure we have the luxury of handpicking through work any more – at least not at initial submission.
The other, more controversial and unlikely-to-ever-happen solution harks back to Angela’s suggestion from last year – can institutions start vetting submissions before they get to a journal? I can dream… https://scholarlykitchen.sspnet.org/2024/03/28/putting-research-integrity-checks-where-they-belong/

Thanks Kim – these are good points! Your dream might be edging a bit closer to reality with PubShield (https://pubshield.proofig.com/), which is bringing together integrity & research quality tools and making them available to researchers via an institutional subscription.

There’s a wide range in how you structure automated checks prior to submission, and in how strictly you enforce compliance. For example, the checks could be just informational, such as “Journal X prefers manuscripts with open data, so please consider moving the datasets from your supplemental information onto a public repository” or “It looks like Figure 4 is missing, could you please check?”. The authors are free to ignore these requests and submit anyway, but at least you’ve given them a chance to fix up their manuscript before it gets kicked back to them…
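
As a purely illustrative sketch (the check names, wording, and severity levels are my own inventions, not any real system’s configuration), the difference between informational and enforced checks can be as simple as a severity flag on each rule:

```python
# Illustrative only: a tiny map of pre-submission checks, each tagged with how
# strictly the journal chooses to enforce it. Check names and wording are invented.
PRE_SUBMISSION_CHECKS = {
    "open_data_nudge": {
        "message": ("Journal X prefers manuscripts with open data, so please consider moving "
                    "the datasets from your supplemental information onto a public repository."),
        "severity": "info",   # author may ignore this and submit anyway
    },
    "missing_figure": {
        "message": "It looks like Figure 4 is missing, could you please check?",
        "severity": "warn",   # flagged prominently, but still not a hard stop
    },
    "data_availability_statement": {
        "message": "A Data Availability Statement is required before submission.",
        "severity": "block",  # submission is held until this is resolved
    },
}

def can_submit(failed_check_ids) -> bool:
    """Only 'block'-level failures prevent submission; info and warn are advisory."""
    return all(PRE_SUBMISSION_CHECKS[c]["severity"] != "block" for c in failed_check_ids)

print(can_submit(["open_data_nudge", "missing_figure"]))              # True
print(can_submit(["missing_figure", "data_availability_statement"]))  # False
```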

Great piece Tim, thanks for writing this and encouraging this debate. Scholarly publishers are trying to meet three distinct and contradictory challenges head on – managing increasing volumes of submissions (which they may or may not have encouraged, depending on the publisher and its business model); improving speed to publication as part of their value proposition to authors (often another internal KPI); and managing the quality and integrity of the research they publish. You can probably do any two of these three things in tandem reasonably well, but doing all three simultaneously has so far proved near impossible for our industry.

I agree with your assertion that addressing quality issues upstream, prior to or during submission, is without doubt the answer… but to Kim’s point, how do publishers do this in a way which is controllable and affordable, yet doesn’t increase or reintroduce friction for authors? I’ll resist the urge to do a sales pitch here in my capacity as Growth Director with Kriyadocs, except to say that we are among a number of technology providers (you mention one in your piece, Tim) who are trying to address this conundrum with tools which can be used before or during the submission process. The key to avoiding friction lies in reducing the burden of form-filling by submitting authors – the default UX for most submission platforms – and in utilising AI tools to flag areas of risk and provide suggested improvements to authors, without giving fraudsters the inside scoop on our detection methods, of course!

Until institutions and governments around the world start to consistently and directly address the issue of fraudulent behaviour by their researchers (which is often incentivised by those very institutions) and improve researcher education on integrity issues as a core competence, the scholarly publishing community is going to have to continue to collaborate on this problem as best we can.

Thanks Jason – I think the friction needs to be viewed as a feature, not a bug. If it’s something as simple as highlighting issues with the figures (numbering, presence, etc.) or something else that would lead the Editorial Office to kick the manuscript back, there’s no reason not to raise it pre-submission: if the authors don’t fix it, the EO will be in touch in any case. Authors who are deterred from submitting by being asked to put in a missing figure are almost certainly not authors you want to be working with.

Generally, I find this a very sensible proposal. I’d be curious to hear your take on requiring the sharing of qualitative data (e.g., interview data), where participant safeguarding and anonymity are sometimes acutely at risk due to the type of information divulged.

Hi Louis, thanks for the comment. Journals don’t generally require that authors with this kind of sensitive data put it on a public repository, but they do generally require a Data Availability Statement that describes why the data are not available and how readers can get access (e.g. by contacting the authors or a group at the institution). Authors can also post an anonymized dataset if there’s little chance of de-anonymization.
