Editor’s note: Today’s post is by Claudia Taubenheim and Sarah Hands. Claudia is a Research Integrity Consultant and Sarah is the Chief Operating Officer, both at PA EDitorial.

When Did “Published” Stop Meaning “Trustworthy”?

For most of the 20th century, getting a paper into a journal was seen as a proxy for the credibility of its results. Readers assumed that what appeared in print had passed through a thorough and rigorous process in which sceptical reviewers and editors had to be convinced by the new data or arguments.

That credibility is weakening, however. Editors are flooded with submissions, and finding reviewers is harder than ever. Add to that the ever-growing number of new journals and record numbers of retracted papers, and readers are learning that not everything in the literature deserves their confidence.

As former scientists now working in publishing, we see this erosion from the inside, and we are deeply worried. This trend is not driven by a few bad actors. It is driven by a system that rewards volume, speed, and metrics over judgment, integrity, and care.


The Metrics Trap

Publication counts, h-indices, journal impact factors, citation numbers: all have value as imperfect indicators of different aspects of research activity. Over time, however, they became targets. When hiring committees and funders rely purely on them, they send a clear message: publish more, publish faster, aim higher. This pressure is amplified by institutions competing in rankings, and the publishing industry adds further to it by expanding portfolios and increasing throughput to capture submission growth.

The result is overproduction: more journals, more special issues, more manuscripts, all without a meaningful increase in editorial or reviewer capacity. Quality erodes, because low-value work will still be published somewhere, and practices like “salami-slicing” papers, redundant publication, and strategic self-citation become rational responses.

When everyone optimizes for output, fewer people are rewarded for asking: does this actually hold up?

AI as Accelerant, Not Root Cause

AI did not create these issues, but the advent of large language models has multiplied them. We now see a recognizable genre of AI-assisted manuscripts: fluent but generic language, confident framing, and neat structure (LLMs do have their strengths), all too often paired with hallucinated references.

As researchers seek faster, easier routes to publication, paper mills are professionalizing, and detection has become an arms race fought with modern, often likewise AI-powered, screening software. A recent study of paper mill activity in cancer research suggested that almost 10% of all papers published in the last 25 years (roughly 260,000 papers!) may be fabricated, spanning multiple publishers and increasing in frequency.

Paper mills offer full publication packages: fabricated data, AI-generated text, and citation networks to boost metrics. Banning AI, however, is neither realistic nor useful: many researchers use these tools legitimately, and publishers should aim for accountability for how AI is used rather than mere disclosure that it was used.

When Bad Science Reaches the Bedside

The consequences of fraudulent science and paper-mill articles are not abstract; they can be outright dangerous. In the clinical and life sciences, they can lead to years of wasted taxpayer money spent trying to replicate experiments. Worse, they may be swept into systematic reviews and meta-analyses: downstream reviewers often lack the time or the forensic skill to detect subtle fabrication, and the contaminated evidence can then influence guidelines. Once that happens, the damage is hard to undo. Retraction processes are often slow, and secondary analyses can propagate the distortion, as the VITALITY study I has shown with significant and worrying results: the authors found 1,330 retracted trials that had already been included in 847 systematic reviews. Furthermore, 1 in 5 meta-analyses changed in a meaningful way once the retracted papers were excluded. Worse still, these meta-analyses directly guided 157 clinical guidelines, many of which continue to serve as the most recent guidance.

The stakes extend well beyond academia: unreliable publications can misdirect patient care, introduce unsafe treatments, and further undermine already fragile public trust in science, and in evidence-based medicine in particular, which is especially worrying in the current polarized political climate. They might also expose publishers to legal and reputational risk when flawed research influences real-world decisions.

None of this is confined to one country or discipline, as an example from computer science recently discussed in a Nature news article shows. Under-resourced institutions, intense competition for tight funding, and ranking pressures make corner-cutting an adaptive strategy for some. The problem is structural.

Credibility is the Core Product of Publishers

Where does this leave publishers? Caught between two competing interests. On one side, commercial models reward scale: more submissions, more titles, more volume, more revenue. On the other, credibility is the core product. Every fraudulent paper that enters the record increases investigation costs, consumes editorial time, weakens journal brands built over decades, and erodes society's trust in science.

We believe this is the moment for publishers to lean into, rather than away from, their responsibility for scientific integrity and rigor. It is an ethical and practical obligation to ensure that what enters the scholarly record has passed a meaningful threshold of plausibility and integrity.

Why Early Triage Matters Most

The highest leverage sits at the first submission. Journals should combine automated screening with trained human oversight to filter out manuscripts that clearly do not belong in peer review. Only work that passes these checks should reach editors and reviewers. Doing this well requires investment in tools, training, and experts. Admittedly, it may reduce short-term throughput. But the alternative is wasting the time and expertise of editors as well as reviewers on manuscripts that never had a credible claim to the scientific record.
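To make the idea concrete, the following is a minimal, purely illustrative sketch (in Python) of what an automated first-pass screen might look like. The Submission fields, the individual checks, and the thresholds are hypothetical assumptions for illustration, not any publisher's actual screening stack; real screening combines many more signals, and the point of the design is that automation only flags and routes, while the decision to reject stays with trained humans.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the Submission fields, check names, and thresholds
# below are hypothetical assumptions, not any publisher's actual screening tool.

@dataclass
class Submission:
    title: str
    abstract: str
    references: list[str]       # raw reference strings as submitted
    resolvable_dois: int        # how many cited DOIs could be verified upstream
    author_domains: list[str]   # email domains of the listed authors


@dataclass
class TriageResult:
    flags: list[str] = field(default_factory=list)

    @property
    def route(self) -> str:
        # Automated checks never reject outright; they only decide whether a
        # trained integrity specialist looks at the file before peer review.
        return "human_integrity_review" if self.flags else "editor_assignment"


def screen(sub: Submission) -> TriageResult:
    result = TriageResult()

    # 1. Reference plausibility: a large share of unverifiable citations is a
    #    common signal of fabricated or hallucinated reference lists.
    if sub.references and sub.resolvable_dois / len(sub.references) < 0.8:
        result.flags.append("many references could not be verified")

    # 2. Templated phrasing: stock openings recur across paper-mill output.
    stock_phrases = ("as is well known", "in recent years, increasing attention")
    if any(p in sub.abstract.lower() for p in stock_phrases):
        result.flags.append("templated abstract phrasing")

    # 3. Author identity: unverifiable contact details warrant a closer look.
    if any(d in {"example-freemail.test"} for d in sub.author_domains):
        result.flags.append("unverifiable author email domain")

    return result
```

On a flagged submission, screen() returns a result whose route is "human_integrity_review", meaning the manuscript is diverted to an integrity specialist rather than sent straight to an editor; only unflagged work proceeds toward peer review.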

Realigning Incentives with Trust

The publishing industry must go further, however: integrity cannot survive inside growth models that prioritize volume above all else. If internal KPIs (key performance indicators) are driven mainly by submission counts and publication volume, integrity initiatives will remain cosmetic. Aligning business models with trust may require slower expansion or even contraction. But maintaining rigorous standards ensures the reliability of the research disseminated, which in turn safeguards reputation, strengthens relationships with authors and readers, and drives sustainable long-term revenue.

Publish Less to Trust More

The core question is simple: how much volume are we willing to sacrifice to rescue trust?

We strongly believe that rejecting more at the early submission stage is crucial, and a strong signal that reviewers’ time and readers’ confidence matter. As scientists by training and publishing professionals by role, our position is pragmatic: the system is at risk, but it is not yet broken beyond salvage. We need to act.

If scholarly publishing is to remain a steward of reliable knowledge, the easiest path through the system must once again be the honest one.

Claudia Taubenheim

Claudia Taubenheim is a Research Integrity Consultant at PA EDitorial, where she advises on integrity-related topics. She holds a PhD in Microbiology from Kiel University and brings eight years of publishing experience, including her role as Senior Managing Editor at PA EDitorial. In addition to her consultancy work, Claudia is a freelance medical writer, a project coordinator for a clinical research unit at University Clinic Schleswig-Holstein, and a career coach for scientists.

Sarah Hands

Sarah Hands is the Chief Operating Officer at PA EDitorial, where she leads operational strategy, business development, and the delivery of high-quality editorial and peer review services for a range of well-known publishers and societies. With a background in scientific research, she has 15 years of experience in the publishing industry.

Discussion

1 Thought on "Guest Post — Quality Over Quantity: Why Scholarly Publishing Needs Stronger Front-End Gatekeeping to Build Trust and Long-Term Value"

Pressures on researchers to be productive are nothing new. The phrase “publish or perish” dates back to at least 1928 (https://en.wikipedia.org/wiki/Publish_or_perish). This previously did not distort the research literature in the ways we’re seeing now.

I think it’s worth actually naming the cause behind the phenomenon you describe in this post, rather than eliding it. Simply put, in the mid-2000s, the vocal leaders of the open access movement settled on the author-pays APC business model as the “one true way”, and attacked anyone who dared criticize it or point out the obvious consequences of the model (which have now all largely come true). This very website has long been labeled an “enemy of open access” for stating, in its early days, that the APC model would result in favoring quantity over quality, that it would end up more expensive than the subscription model, that it would merely shift inequity from the reader to the author, and that it would lead to all sorts of predatory scams. The dogma of the APC was then further reinforced by funder policies such as those of the RCUK and Plan S. And here we are.

“Journals should combine automated screening with trained human oversight to filter out manuscripts that clearly do not belong in peer review. Only work that passes these checks should reach editors and reviewers. Doing this well requires investment in tools, training, and experts.”

I agree with this 100%. However, given that most publishers are under intense pressure to lower costs (see the new Chinese Academy of Sciences policy banning the payment of high APCs, and the soon to be announced US policy on APC caps), I’m not sure how all this gets paid for.
