Editor’s Note: Today’s post is by Josh Dahl, Senior Vice President and General Manager of ScholarOne at Silverchair.

In the first quarter of 2026, journals on the ScholarOne Manuscripts platform received 33% more submissions than in the same period in 2025. But the aggregate growth rate undersells what actually happened: the year-over-year increase itself nearly doubled compared to the prior year, from 17% growth in 2025 to 33% in 2026. Platform growth is accelerating at a rate many publishers and journal editors would describe as unsustainable.
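
To make the acceleration concrete, here is a minimal arithmetic sketch in Python. The 100,000-submission baseline is hypothetical, chosen purely for illustration; only the 17% and 33% growth rates come from the platform figures above.

```python
# Illustrative arithmetic only: the baseline count below is hypothetical,
# not an actual ScholarOne volume. Only the growth rates (17% and 33%)
# come from the figures above.
q1_2024 = 100_000             # hypothetical Q1 2024 baseline
q1_2025 = q1_2024 * 1.17      # 17% year-over-year growth in 2025
q1_2026 = q1_2025 * 1.33      # 33% year-over-year growth in 2026

for year, prev, curr in [("2025", q1_2024, q1_2025),
                         ("2026", q1_2025, q1_2026)]:
    print(f"Q1 {year}: {curr:,.0f} submissions "
          f"({curr / prev - 1:.0%} year-over-year)")

# Because the rate itself nearly doubled, the absolute increase in 2026
# (~38,600 manuscripts on this baseline) is more than twice the 2025
# increase (~17,000): more new volume, arriving faster.
```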

The optimist in me would love to interpret this as a research boom. More research, a healthier scientific enterprise, all of which leads to progress and discovery for our world. The pessimist in me wants to throw my hands in the air and yell into the void about commercialized fraud and AI-generated slop. But, at the end of the day, I’m a pragmatist — so, instead, I looked at the numbers.

The distribution beneath the aggregate results is more telling. The journals growing fastest are the smallest and least selective ones: journals receiving fewer than 15 submissions per quarter in 2025 saw an 81% increase in Q1 2026. Journals receiving more than 1,500 per quarter saw 20% growth. That asymmetry doesn’t prove anything on its own. It is consistent with legitimate growth, with researchers in expanding fields finding new venues, with global research capacity maturing in ways that naturally favor accessible journals. It is also the pattern you would expect if a significant portion of the surge reflected low-effort, AI-assisted submissions targeting journals least equipped to screen them out. The data can’t tell us which explanation is doing more work. But the pattern is sustained enough, and concentrated enough, to warrant a closer look at what the system is actually absorbing.

Submission counts alone don’t tell us much about quality, but desk reject rates offer a reasonable proxy. Between 2022 and 2025, desk rejections grew 72%, far outpacing the 43% growth in total submission decisions over the same period. In 2022, editors issued 1.69 desk rejects for every acceptance. By 2025, that ratio had climbed to 2.49. The editorial filter is working harder, catching more, and rejecting at a faster rate than submissions are growing.
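
A quick back-of-the-envelope check, sketched in Python with a hypothetical 2022 acceptance count (only the 72% growth figure and the two ratios come from the numbers above), shows what those ratios imply about acceptances:

```python
# Back-of-the-envelope check on the desk-reject figures above.
# The 2022 acceptance count is hypothetical; the 72% growth in desk
# rejections and the 1.69 / 2.49 ratios come from the figures above.
accepts_2022 = 10_000                         # hypothetical baseline
desk_rejects_2022 = 1.69 * accepts_2022       # 1.69 desk rejects per acceptance
desk_rejects_2025 = desk_rejects_2022 * 1.72  # desk rejections grew 72%
accepts_2025 = desk_rejects_2025 / 2.49       # 2.49 desk rejects per acceptance

implied_growth = accepts_2025 / accepts_2022 - 1
print(f"Implied growth in acceptances, 2022-2025: {implied_growth:.0%}")
# ~17%: acceptances grew far more slowly than either desk rejections (72%)
# or total submission decisions (43%). The filter is tightening.
```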

What Does It All Mean?

From my vantage point, the submission surge is not primarily a technology problem. It is evidence that peer review’s social contract with researchers was already under strain, and AI has made that strain visible at scale.

Peer review rests on a foundational assumption: the author produced what they submitted, stands behind it, and can defend the claims, locate the sources, and explain the methodology. The entire apparatus sits on top of that assumption of individual intellectual ownership. That assumption is now unstable, because the relationship between a researcher and the text they submit has become genuinely ambiguous, and the norms governing that relationship haven’t caught up to the technology running through it.

The spectrum of AI involvement in manuscript preparation runs from light grammar editing to full draft generation, with no standardized consensus on where acceptable assistance ends and problematic use begins. Researchers are making judgment calls in that ambiguous space every day. Consider the hallucinated citation problem: large language models generate plausible-looking references that don’t exist, fluently enough that a researcher relying on AI-assisted drafting may not catch the error before submission. The author didn’t fabricate a citation in any meaningful, intentional sense. But they submitted a manuscript containing one. Current author attestation requirements have no clean mechanism for that scenario. The accountability infrastructure has a gap in the new behavior.

I think the field is underestimating the psychological dimension at play here. There is meaningful research on how friction functions as an ethical checkpoint. Hiring a paper mill requires deliberate transgression: finding a vendor, paying, coordinating, knowing you crossed a line most people recognize as a line. Using AI to draft a manuscript occupies a genuinely different moral register. The rationalization is almost automatic: the research is mine, the ideas are mine, the tool just helped with the writing. I don’t think most researchers using AI this way consider themselves bad actors. I think they are making decisions in an environment where the pressure to produce is intense, and where the cost of entry has dropped to near zero. That combination is more corrosive to research integrity than bad intent, because it scales in ways that bad intent doesn’t.

Every manuscript that enters the system makes a downstream demand on two resources already under strain: reviewer capacity and editorial labor. Reviewer fatigue and declining response rates predate this surge. A 33% volume increase further stresses that existing deficit. And the accountability gap runs downstream too: reviewers are using AI to manage their own workloads, for the same reasons authors are. A manuscript containing a hallucinated citation, reviewed by someone using AI assistance to manage a review they don’t have time to read carefully, may never surface the problem. The quality check the system relies on is happening under the same conditions of ambiguity that produced the issue upstream.

The editorial labor problem is more acute, and I think it is the least discussed consequence of this surge. The journals absorbing disproportionately large growth are exactly the journals with the least infrastructure: part-time editors, volunteer board members, and minimal administrative support. These journals are not incidental casualties of a technology problem. They are the structurally under-resourced end of a publishing ecosystem that has never adequately invested in editorial capacity at the margins. The surge has made this condition impossible to ignore.

None of this happens in a vacuum. Funding instability compresses the timeline between research and publication. Career insecurity increases the incentive to inflate output. The publish-or-perish calculus is arguably more severe in emerging research economies, where positions are scarcer and accessible venues are fewer, and where AI tools are now just as available as anywhere else. The same conditions that increase submission incentives decrease review participation. The system is being pushed from both ends simultaneously, by the same underlying pressures.

So, What Should We Do About It?

I’m always a little skeptical of anyone who claims to have a clear path forward, or even a clean answer. But I certainly have some thoughts.

The instinct in moments like this is to reach for technological solutions: better screening tools, AI detection software, automated citation checking. These are worth pursuing, and some are already in use. But they address symptoms rather than conditions, and I think over-indexing on them risks missing what the data is telling us.
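
One of those tools is simple enough to sketch. What follows is a minimal, hypothetical example of automated citation screening, not any vendor's implementation: it asks the public Crossref REST API whether each DOI in a reference list resolves to a registered work. A production screen would need much more (handling references without DOIs, title matching, DataCite coverage, rate limiting, a polite User-Agent), and a 404 here proves nothing by itself; it only flags a reference for a human look.

```python
import urllib.error
import urllib.parse
import urllib.request

def doi_is_registered(doi: str) -> bool:
    """Return True if Crossref knows this DOI, False on a 404.

    Minimal illustration only: a production screen would batch requests,
    identify itself politely, retry transient failures, and fall back to
    title matching for references that carry no DOI.
    """
    url = f"https://api.crossref.org/works/{urllib.parse.quote(doi)}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:   # DOI not registered: candidate hallucination
            return False
        raise                 # other errors prove nothing either way

# Hypothetical reference list extracted from a submission
references = ["10.1038/nature12373", "10.9999/not-a-real-doi"]
for doi in references:
    flag = "" if doi_is_registered(doi) else "  <-- needs a human look"
    print(doi + flag)
```

This is the symptom-level triage such tools can offer: cheap, automatic flagging that routes scarce editorial attention, not a verdict on intent.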

The 2026 submission surge is a stress test. What it is testing is not only whether journals can process the volume. It is testing whether the accountability infrastructure peer review depends on is still fit for the environment researchers are actually working in: one where the relationship between a researcher and their submitted text is no longer stable, where downstream capacity to catch what slips through is under its own pressure, and where the structural incentives pushing researchers toward the system haven’t changed while the tools available to them have changed considerably.

If I had to stake a position, it would be this: the questions most worth asking are structural and social. How do we rebuild accountability norms that fit the manuscript preparation environment in which researchers are operating? How do we sustain reviewer participation in a system that is asking for more of a pool already stretched thin? And most uncomfortably: how do we have an honest conversation about journals with the least capacity absorbing the consequences of a challenge they didn’t create?

While those questions don’t have clean answers, the data now makes them impossible to defer, and I think that is the most important thing the surge is telling us.

Josh Dahl

Josh Dahl is Senior Vice President of Product and General Manager of ScholarOne at Silverchair. Josh is a seasoned leader in the scholarly publishing industry with over 20 years of experience and a proven track record of driving innovation and strategic growth in the academic and research sectors. Most recently, Josh served as the Senior Director of Product Management at Clarivate, where he was responsible for a suite of solutions for publishers and societies. He is dedicated to enhancing the impact of scholarship by developing next-generation platforms and solutions that cater to the needs of publishers, editors, and researchers.

Discussion

6 Thoughts on "Guest Post — Is Growth Always Good News? 2026 Article Submission Surges"

Hi Josh, interesting statistics, thanks for sharing! You briefly refer to “reviewer fatigue and declining response rates”. Do you also have statistics on that? It would be highly relevant to know how reviewer response rates are developing in light of the rapidly growing volume of submissions.

Thank you, Josh. I just looked at last year’s report. It indeed provides strong evidence of declining response rates. Essentially, researchers’ availability to perform peer review has declined by almost one-third in a period of seven years (2018-2024). For me this confirms the urgency of the questions you are raising at the end of your piece.

I also think there is one more question we need to ask ourselves: Should we stick to the norm that all research articles ought to be peer-reviewed, or do we need to move toward a system in which we have more diverse quality checks, some of them being more light-touch and others being more intensive? Perhaps light-touch quality checks (e.g., as performed by many preprint servers) should be the minimum standard, and we should expect peer review and other more intensive quality checks to be performed only for specific research articles for which it is of particular importance to determine their quality and trustworthiness.

Hi Josh, thanks for the link to the 2025 version of the Report. It is very informative. On p. 12 it says 53.8% of respondents want to have a universal submission system with one login. I think this is a very good point, as it is hard for people to remember so many passwords, given that each website has its own password requirements. Recently I have found the “login via ORCID” button available on many submission websites very useful. Instead of remembering multiple passwords, just log in with ORCID. I look forward to seeing the 2026 version of the Report. Thanks for the effort.

Thank you, Josh, for an excellent piece! You have asked some deep questions which need some serious answers.
I would like to take this opportunity to go a bit further.
1) Are we not trying to maintain a fundamentally broken (scholarly publishing) ecosystem? We are trying to find options/solutions to mend the loopholes of a system that promotes and maintains the ‘philosophy’ called ‘publish-or-perish’. Thus we are working at the publishing actors’ level (i.e., publishers, editors, reviewers, and researchers). But shouldn’t we work at the system level — the institutions, administrations, and their recruitment/promotion policies/rules that maintain that ‘philosophy’?
2) Shouldn’t we start talking about the “degrowth” of the publishing system? Because what does it mean to publish a research article every 6 seconds, globally? What does that tremendous volume tell us? https://scholarlykitchen.sspnet.org/2024/10/24/scholarly-publishing-the-elephant-and-other-wildlife-in-the-room/
3) Again, where is the real ‘impact’ of the above growth of our academic publishing? How long will counting citations qualify as so-called impact? https://scholarlykitchen.sspnet.org/2022/04/27/making-research-a-global-enterprise/
4) Why is the publishing industry so reactive? E.g., paper mills have long been there; but when they started attacking publishers’ revenue, publishers started thinking of investing in early warning/detection systems. When will the industry be proactive in changing the system it is in, not just responding to it?
5) The same is happening with AI in publishing. A strongly negative approach can be seen from publishers in terms of using AI in peer review, for example. But they are not investing enough in improving how AI can reduce the burden on human reviewers, or in moving toward a 100% AI-dependent review system. Because when you can manage to maintain an overrated system, like peer review, by glorifying human peer reviewers (who are essentially free labourers), you don’t need to invest money to improve AI to replace humans. https://scholarlykitchen.sspnet.org/2025/04/09/peer-review-has-lost-its-human-face-so-whats-next/

This is a terrific piece indeed. One question I have: why would a hallucinated reference not be considered a fabricated citation? I was taught that a list of references should be _works one has actually read_ or at least skimmed or consulted. Of course, I come from the humanities, so maybe norms are different in the sciences?

When I taught undergraduates, I was always trying to drum into them that bibliographies / references / citations are not just pro forma: you’re supposed to read some research to inform your own thinking, then list what research you read in your writing. There’s no point in just listing random articles that you haven’t read.
