Editor’s note: Today’s post is by Claudia Taubenheim and Sarah Hands. Claudia is a Research Integrity Consultant and Sarah is the Chief Operating Officer, both at PA EDitorial.

When Did “Published” Stop Meaning “Trustworthy”?

For most of the 20th century, getting a paper into a journal was seen as a proxy for the credibility of its results. Readers assumed that what appeared in print had passed through a thorough, rigorous process in which sceptical reviewers and editors had to be convinced by the new data or arguments.

However, that credibility is weakening. Editors are flooded with submissions, and finding reviewers is harder than ever. Add to that an ever-growing number of new journals paired with record numbers of retracted papers, and readers are learning that not everything in the literature deserves their confidence.

As former scientists now working in publishing, we see this erosion from the inside, and we are deeply worried. This trend is not driven by a few bad actors. It is driven by a system that rewards volume, speed, and metrics over judgment, integrity, and care.


The Metrics Trap

Publication counts, h-indices, journal impact factors, citation numbers — all have value as imperfect indicators of different aspects of research activity. Over time, however, they became targets. When hiring committees and funders rely purely on them, they send a clear message: publish more, publish faster, aim higher. This pressure is amplified by institutions competing in rankings, and the publishing industry adds further to it by expanding portfolios and increasing throughput to capture submission growth.

The result is overproduction: more journals, more special issues, more manuscripts, all without a meaningful increase in editorial or reviewer capacity. Quality erodes because low-value work will always find an outlet somewhere. Practices like “salami-slicing” papers, redundant publication, and strategic self-citation become rational responses.

When everyone optimizes for output, fewer people are rewarded for asking: does this actually hold up?

AI as Accelerant, Not Root Cause

AI did not create these issues, but with the advent of large language models, they multiplied. We now see a genre of AI-assisted manuscript: fluent but generic language (although LLMs have their strengths), confident framing, and neat structure, far too often paired with hallucinated references.

As researchers seek faster, easier routes to publication, paper mills professionalize. Detection has become an arms race, fought increasingly with AI-powered screening software. A recent study on paper mill activity in cancer research suggested that almost 10% (or ~260,000 papers!) of the papers published in the field over the last 25 years appear to be fabricated, spanning multiple publishers and increasing in frequency.

Paper mills offer full publication packages: fabricated data, AI-generated text, and citation networks to boost metrics. However, banning AI is neither realistic nor useful, as many researchers use these tools legitimately; publishers should demand accountability for how AI is used rather than mere disclosure.

When Bad Science Reaches the Bedside

The consequences of fraudulent science or paper-mill articles are not abstract; they can be outright dangerous. In clinical and life sciences, they may waste years of taxpayer money as labs try to replicate experiments that were never real. Even worse, they might be included in systematic reviews and meta-analyses. Downstream reviewers may lack the time or the forensic skill to detect subtle fabrication, so contaminated evidence can influence guidelines. Once that happens, the damage is hard to undo.

Retraction processes are often slow, and secondary analyses can propagate the distortion, as the VITALITY Study I showed with significant and worrying results: the authors found 1,330 retracted trials that had already been included in 847 systematic reviews. Furthermore, 1 in 5 meta-analyses changed in a meaningful way once the retracted papers were excluded. Worse still, these meta-analyses directly guided 157 clinical guidelines, many of which continue to serve as the most recent guidance.

The stakes extend well beyond academia: unreliable publications can misdirect patient care, introduce unsafe treatments, and further undermine an already fragile public trust in science, and in evidence-based medicine in particular, which is especially worrying in the current polarized political climate. They might also expose publishers to legal and reputational risk when flawed research influences real-world decisions.

None of this is confined to one country or discipline, as a recent example from computer science, discussed in a Nature news article, shows. Under-resourced institutions, intense competition for tight funding, and ranking pressures make corner-cutting an adaptive strategy for some. The problem is structural.

Credibility Is the Core Product of Publishers

Where does this leave publishers? Caught between two competing interests. On one side, commercial models reward scale: more submissions, more titles, more volume, more revenue. On the other, credibility is the core product. Every fraudulent paper that enters the record increases investigation costs, consumes editorial time, weakens journal brands built over decades — and erodes society’s trust in science.

We believe this is the moment for publishers to lean into, rather than away from, their responsibility for scientific integrity and rigor. It is an ethical and practical duty to ensure that what enters the scholarly record has passed a meaningful threshold of plausibility and integrity.

Why Early Triage Matters Most

The highest leverage sits at the first submission. Journals should combine automated screening with trained human oversight to filter out manuscripts that clearly do not belong in peer review. Only work that passes these checks should reach editors and reviewers. Doing this well requires investment in tools, training, and experts. Admittedly, it may reduce short-term throughput. But the alternative is wasting the time and expertise of editors as well as reviewers on manuscripts that never had a credible claim to the scientific record.
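To make the shape of such a first-pass screen concrete, here is a minimal sketch in Python. Everything in it (the signal names, the explanations, the routing thresholds) is a hypothetical illustration rather than a description of any existing screening product; the point is only that automated checks should produce human-readable reasons, and that borderline cases should be routed to a trained person rather than auto-rejected.

```python
from dataclasses import dataclass, field

# Hypothetical signal names; in practice each would come from a dedicated
# tool (a reference checker, image-forensics software, and so on).
EXPLANATIONS = {
    "unresolvable_references": "cited works that resolve to nothing in any index",
    "image_duplication": "possible duplicated or manipulated figures",
    "authorship_anomaly": "authorship pattern matching known paper-mill networks",
}

@dataclass
class TriageResult:
    reasons: list[str] = field(default_factory=list)  # human-readable, for the editor
    route: str = "peer_review"                        # default: pass through

def triage(flags: set[str]) -> TriageResult:
    """First-pass screen: turn raw flags into explainable reasons, then route.
    Ambiguous cases go to a trained human, not to automatic rejection."""
    result = TriageResult(reasons=[EXPLANATIONS[f] for f in flags])
    if len(result.reasons) >= 3:
        result.route = "desk_reject"       # unambiguous accumulation of evidence
    elif result.reasons:
        result.route = "integrity_review"  # a human integrity specialist decides
    return result

print(triage({"unresolvable_references"}).route)  # -> integrity_review
```

The design choice worth noting is that the output is a set of reasons, not a score: an opaque number invites blind trust, while explicit reasons keep the final judgment with editors and integrity staff.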

Realigning Incentives with Trust

However, the publishing industry must go further: integrity cannot survive inside growth models that prioritize volume above all else. If internal KPIs (key performance indicators) are driven mainly by submission counts and publication volume, integrity initiatives will remain cosmetic. Aligning business models with trust may require slower expansion or even contraction. Maintaining rigorous standards, though, ensures the reliability of the research being disseminated, which in turn safeguards reputations, strengthens relationships with authors and readers, and drives sustainable long-term revenue.

Publish Less to Trust More

The core question is simple: how much volume are we willing to sacrifice to rescue trust?

We strongly believe that rejecting more at the early submission stage is crucial, and that it sends a strong signal that reviewers’ time and readers’ confidence matter. As scientists by training and publishing professionals by role, our position is pragmatic. The system is at risk but not yet broken beyond salvage. But we need to act.

If scholarly publishing is to remain a steward of reliable knowledge, the easiest path through the system must once again be the honest one.

Claudia Taubenheim

Claudia Taubenheim is a Research Integrity Consultant at PA EDitorial, where she advises on integrity-related topics. She holds a PhD in Microbiology from Kiel University and brings eight years of publishing experience, including her role as Senior Managing Editor at PA EDitorial. In addition to her consultancy work, Claudia is a freelance medical writer, a project coordinator for a clinical research unit at University Clinic Schleswig-Holstein, and a career coach for scientists.

Sarah Hands

Sarah Hands is the Chief Operating Officer at PA EDitorial, where she leads operational strategy, business development, and the delivery of high-quality editorial and peer review services for a range of well-known publishers and societies. With a background in scientific research, she has 15 years of experience in the publishing industry.

Discussion

22 Thoughts on "Guest Post — Quality Over Quantity: Why Scholarly Publishing Needs Stronger Front-End Gatekeeping to Build Trust and Long-Term Value"

Pressures on researchers to be productive are nothing new. The phrase “publish or perish” dates back to at least 1928 (https://en.wikipedia.org/wiki/Publish_or_perish). This previously did not distort the research literature in the ways we’re seeing now.

I think it’s worth actually naming the cause behind the phenomenon you describe in this post, rather than eliding it. Simply put, in the mid-2000s, the vocal leaders of the open access movement settled on the author-pays APC business model as the “one true way”, and attacked anyone who dared criticize it or point out the obvious consequences of the model (which have now largely all come true). This very website has long been labeled an “enemy of open access” for stating, in its early days, that the APC model would result in favoring quantity over quality, that it would end up more expensive than the subscription model, that it would merely shift inequity from the reader to the author, and that it would lead to all sorts of predatory scams. The dogma of the APC was then further reinforced by funder policies such as those of the RCUK and Plan S. And here we are.

“Journals should combine automated screening with trained human oversight to filter out manuscripts that clearly do not belong in peer review. Only work that passes these checks should reach editors and reviewers. Doing this well requires investment in tools, training, and experts.”

I agree with this 100%. However, given that most publishers are under intense pressure to lower costs (see the new Chinese Academy of Sciences policy banning the payment of high APCs, and the soon-to-be-announced US policy on APC caps), I’m not sure how all this gets paid for.

David, could you further speak to where you believe this will be announced? I’ve only heard it will be in the presidential budget – which is not law. It will have to come in the form of legislative action and agency policy.

Two separate things — there’s language in the current proposed budget (vague and unenforceable) about not spending on “expensive” publishing costs, though no one has a clue whether that might make it into the final budgets for the various agencies; and then there are the NIH’s proposed caps on APCs, for which they put out a request for comment and which, from what I’ve heard, are coming soon, possibly held up in OMB: https://scholarlykitchen.sspnet.org/2026/03/20/guest-post-all-the-seats-at-the-table-a-summary-and-status-review-of-the-nih-apc-caps-proposal/

Dear David,
Thank you for your comment. I agree that the APC model deserves to be called out directly as one possible problem. However, in our post we chose to focus on what publishers can do now. Perhaps that was too cautious.

On the funding question: honestly, I do not think there is an easy answer, or at least not one the business side of publishing wants to hear. Investment in front-end screening costs money that the current volume-driven model does not reward. But I strongly believe that publishing is a business with a special responsibility regarding ethics, trust, and truth. What we hope to convey is that the cost of not doing it is also real: retractions, reputational damage, eroded author and reader trust. We as a society simply cannot allow business growth to come with less scientific rigor, and that means engaging in the current arms race against fraudulent players as seriously as it needs to be fought. And I strongly believe that being rigorous and trustworthy as a publisher will be a strong argument for the majority of scientists, and ultimately for funding bodies, to choose such a journal. Whether that argument is persuasive to publishers under margin pressure is another question entirely.

But how can publishers say that they are under margin pressure when major publishers have huge profit margins? Elsevier has a 40% profit margin. The ‘Big Five’ seem to be doing just fine, at the expense of everyone else.

Well, this is actually a very good question and would directly lead to some great philosophical and political discussions. Personally, I stand by what I said: profit maximisation cannot and must not be the only KPI in industries with special responsibilities towards society and the public. Being trustworthy and committed to the highest ethical standards is at least as important.

I’m not really sure how much the policing of ‘science and research integrity’ should fall on journal publishers’ shoulders. Submitted papers are the communication of that science and research, and, for sure, publishing integrity is a responsibility of the publisher.

Use of AI systems as part of peer review screening and triage is itself becoming problematic – we have been seeing calls from authors in some disciplines to boycott journals that use AI screening of submissions as a basis for rejection. (“The article was too polished and there were no typos, ergo it was AI generated. Reject.”) Ironically, some journal editors, creaking under higher submission rates and peer reviewer shortages, are meanwhile pushing publishers for more AI help at submission.

Clearly there needs to be human oversight of AI in the peer review process, but this means more investment in staff, in those with ‘domain knowledge’ and editorial expertise in publishing houses, which flies against the push for more process, more technology development, and reduced editorial staffing levels being experienced across the industry.

Reinvestment and retooling in skilled editorial staff is a prerequisite to dealing with these issues of publishing integrity.

The point about AI screening leading to rejection of legitimate work is worrying and part of the problem. The LinkedIn example you linked in a later comment illustrates exactly the risk when people rely too much on AI tools. I believe tools can only be this: tools. There crucially needs to be human oversight to actually interpret their output (as you also said), and simply “banning AI” is not the answer we are advocating. In fact, I wrote a guest post for TSK last year, where I argue that especially for people like me who are not native speakers, LLMs can be a strong asset, and being rejected for polishing the language is not helpful: https://scholarlykitchen.sspnet.org/2025/10/20/guest-post-from-language-barrier-to-ai-bias-the-non-native-speakers-dilemma-in-scientific-publishing/

Your point about staffing is the crux of it. Trained human oversight costs money and requires domain expertise that has been systematically deprioritised lately; this is something we currently see in many sectors as a response to LLMs. As said above, we strongly advocate for human oversight rather than relying only on tools. Whether the current business environment allows for it is, as you say, a harder question. As said in my comment to David Crotty, I believe academic publishing has a responsibility to adhere to higher standards regarding ethics, trust, and truth than many other sectors, and to act accordingly.

It would seem sensible to penalise authors who publish false papers, and the most effective penalty would be a ban on future publication.

Just how this might work in practice is of course another matter! But, as with any penalty, we don’t need to catch more than a small proportion for it to be effective.

Although I can see that this would make sense, the implementation challenge is significant. Who is responsible for such a list, and who maintains it? How do you handle contested cases, name changes, and early-career researchers who were pressured by supervisors? And then there is the question of international enforcement. In other professions, misconduct leads to the loss of a license – think of medical doctors or lawyers – but I do not think such an infrastructure exists for scientists. I also believe this would not get to the root of the problem (publish or perish vs. publishers’ journal growth models, leading to a feedback loop).

I could not agree more. The problem when it comes to data manipulation and/or fabrication is that even when the authors are caught and it’s undeniable, they just move on after the retraction, and it’s the publisher who ends up looking like the bad guy for having been deceived.

As for how to make it work: The scientific community knows who these authors are, and it’s up to them to hold them accountable. Authors who are proven to have manipulated the record with fake results should be shunned and banned from future events. Nothing will change on that front until the community stops treating it as some sort of joke.

+1 (or +n) to David’s comment. Follow the money. And rejecting isn’t enough. Journals should screen, check, and even enrich references at submission (easy to do automatically for 20+ years, easier now), and if there are hallucinated references, QED the authors have at the very least not read the literature they are citing. Obviously the first check is that the authors themselves are not hallucinated or impersonated – again, the really reputable journals have been doing this, and plagiarism checks too, for some time. The same applies to checks at copyediting (or we need to stop claiming that copyediting adds value).

Someone, like the integrity officers at the authors’ institutions, needs to know. These are serious integrity issues, not just submission errors, and they need to be treated as such. Journals that publish hallucinated references without these basic checks should be called out. One can do the same for data (assuming journals have strict data requirements, as they should). Then someone – looking at you, Clarivate and DOAJ – should require evidence of these checks and a continuing record of integrity for listings. Yes, these things cost, but not doing them costs more.
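To illustrate how cheap that first reference check can be, here is a minimal sketch that verifies a cited DOI against the public Crossref REST API (the api.crossref.org endpoint is real; the bibliography extraction step and the crude word-overlap heuristic are assumptions for illustration). A DOI that resolves to nothing, or to a title unrelated to the one cited, is a strong hallucination signal:

```python
import requests

def check_reference(doi: str, cited_title: str) -> str:
    """Look up a cited DOI in the public Crossref REST API and compare the
    registered title with the title given in the manuscript's reference list."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        return "NOT FOUND: DOI is not registered (possible hallucinated reference)"
    resp.raise_for_status()
    registered = (resp.json()["message"].get("title") or [""])[0]
    # Crude word-overlap heuristic; production screening would use fuzzy matching.
    cited_words = set(cited_title.lower().split())
    overlap = cited_words & set(registered.lower().split())
    if len(overlap) < len(cited_words) // 2:
        return f"MISMATCH: DOI resolves to a different work: {registered!r}"
    return "OK"

# Hypothetical reference pulled from a submission's bibliography:
print(check_reference("10.1000/made-up-doi", "A study that was never written"))
```

A flag from a check like this is, of course, only a signal for a human to interpret, not grounds for automatic rejection.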

Yes to all of it! As said above, we need to make sure that tools are treated as what they are: tools. And we need human oversight. This applies not only to the publishers’ side but of course also to the authors’. And the cases you describe need to be caught quickly, at the beginning of the publishing process, before valuable editor or reviewer time is wasted.

Follow the money is a very apposite comment here. Most journals are run to make a profit for publishers.
If a publisher insists on deliberately misleading the public for profit, there’s a name for that – fraud.

The word “fraud” carries legal weight that I am cautious about applying broadly here. Most of what we describe is more likely negligence or misaligned incentives than deliberate deception. However, this may matter little to the reader who cited a retracted paper in a clinical guideline. And this is what we wanted to say with our article: publishers have greater responsibilities regarding trustworthiness and truth – and we cannot afford to be on a path where “published” no longer means what readers assume it means with regard to a thorough review process. That erodes trust in ways that have real consequences!

Praiseworthy caution on your part, and of course ‘fraud’ is not a term universally applicable.

But if someone sells you goods and tells you they will do something, and they don’t – in fact the vendor has not taken any steps to check they do – yes, that’s fraud. And bear in mind that we’re not always discussing archaeology, my interest, here – lives are at stake.

So the principle should be the same across academia. Articles are best checked, and also published, by people without a financial stake in them. How that’s done is another matter.

Adding to the AI conversation.

We have an AI-type system, SciScore, which checks the integrity of the methods section for biomedical research (it also works in the social sciences). It is not used to auto-reject, but I am curious whether you might share the citation for the author questions or complaints about rejection due to AI.

SciScore as a reviewer can’t be gamed into giving higher scores if you ask nicely in white text, so it’s a bit different from what authors are probably referring to, but I would appreciate a pointer to your author sentiment statement.

Thank you for sharing this. As answered above, the irony that a non-native speaker working hard to produce polished English might now be penalised for succeeding is not lost on me. It seems the problem is moving faster than the solutions.

Anita, thank you for raising SciScore as an example of what screening can look like when it is designed thoughtfully. I believe you have human oversight in place where somebody checks the scores manually to decide whether a paper is likely problematic or not?

The reports are intended for authors and reviewers, so there is always a human recipient who can use them to help improve the paper! At some publishers, the editors get the reports, and this helps them adjudicate the key rigor criteria that they have agreed to add as evaluation criteria.

We do not, however, intervene specifically with the reports.

There’s a clear tension between scale and scrutiny here. Pure automation erodes trust, but pure manual review doesn’t scale. Maybe the answer sits in assisted triage: tools surfacing risk signals with context so editors can make faster, better-informed decisions, rather than replacing judgment altogether.

It feels like the core issue isn’t AI vs. humans, but a lack of explainability in decisions. If tools are used, they need to show why something is flagged, linking signals (authorship patterns, references, data consistency) into something editors can interpret, not blindly trust.
