Editor’s Note: Today’s post is by Scholarly Kitchen Chef Ashutosh Ghildiyal, Maria Machado, and Gareth Dyke. Maria is a physiologist turned consultant, helping scientists communicate the core message uncovered in their data and disseminate new knowledge quickly. Gareth serves as Academic Director at ReviewerCredits, Sales Director at 4Evolution, and is a co-founder of Sci-Train.
In a world where information is abundant and instantly generated, what becomes scarce? If AI can produce plausible scientific language at scale, how will society decide what to trust?
Observation and experimentation will not disappear. Science will continue to require instruments, biological materials, chemical reactions, and disciplined human judgment. What is changing is everything surrounding the practice of science — how research is planned, recorded, interpreted, communicated, discovered, consumed, and ultimately trusted as AI becomes deeply embedded in all these processes.
However, AI in science should not be viewed merely as a productivity tool layered onto existing workflows. It represents a structural shift in how knowledge moves through society, and therefore in how scientific authority is established and maintained. For scholarly publishing, this matters because the signals by which credibility is inferred are being reshaped in real time.

AI Will Be Embedded in the Research Workflow
The most important thing about AI is not that scientists will “use” it; it is that AI will become part of the infrastructure of science itself. We already see early signals: research workflow automation, manuscript screening tools, research integrity checks, scientific writing tools, and AI-assisted peer review. These systems and tools will only become more deeply embedded and harder to separate from the workflows they support.
Over time, AI will sit inside the entire research lifecycle, assisting with experiment planning, protocol optimization, data analysis, anomaly detection, and pattern recognition. AI is already being embedded in most publishing platforms, peer review systems, and discovery interfaces, although the process is still in its early stages. Editors, reviewers, and authors will inevitably use AI more and more to improve productivity, to polish manuscripts and review reports, and to screen and prepare manuscripts at different stages of the publishing workflow.
We have already seen a version of this dynamic with preprints: when friction in dissemination dropped, scientific output accelerated. AI will extend that acceleration for journal articles as well, by lowering the barrier to producing scientific prose.
The Paradox of Convenience
AI has been touted as a means to reduce grunt work, shorten research cycles, lower costs, and democratize capabilities that once required entire teams. For researchers in resource-limited environments, this is genuinely empowering, a chance to compete on ideas rather than institutional resources.
However, the most predictable shift is what we might call cognitive laziness — not a moral failing, but a rational human response to automation. When effort decreases, so do attention spans. The risk is not that AI makes people less intelligent, but that it makes deep cognitive engagement feel less necessary. If a tool can summarize a hundred papers in seconds, why spend hours reading them? The result is an abundance of content and a collapse of meaning. AI is poised to accelerate this dynamic, not because it is inherently harmful, but because it makes plausible-sounding text effortless to generate at industrial scale.
It follows that, as content becomes cheap, skepticism is essential but expensive.
- People will stop asking “Is this well-written?” and start asking “Is this real?”.
- They will stop asking “Is this published?” and start asking “Is this manipulated?”.
- The question shifts from “Is this convincing?” to “Can I trust it?”.
Science Will Be Consumed Differently
Researchers will keep running experiments. But how they encounter science will change radically. In an AI-embedded world, they will not primarily “read papers” the way they do today. They will increasingly interact with the literature through query-driven, AI-mediated systems, a shift already visible in how researchers navigate Google Scholar, PubMed, or Semantic Scholar rather than journal homepages. Scientific consumption becomes conversational, personalized, and claim-centered; the unit shifts away from the article as narrative and toward the claim as a modular unit of knowledge. This has real advantages — less time scanning the literature, and an easier time staying current. But researchers who rely heavily on AI summaries may read fewer full papers and engage less with methodological nuance. This may be particularly concerning for early-career researchers still developing their disciplinary instincts. Furthermore, the sustained attention required to read a full narrative may become rarer, eroding the community’s collective capacity for critical insight; this will inevitably have downstream effects, for example on the quality of peer review.
This shift is far from being uniform across fields. In the humanities, philosophy, and parts of theoretical mathematics, deep reading remains central to intellectual practice; the argument, not the claim, is the unit of engagement, and sustained interpretation resists fragmentation into modular summaries. Even in the laboratory sciences, complex methodological papers may still demand full immersion.
Thus, the journal article, as the original scientific narrative authored or approved by its authors, will remain essential. It will continue to be the archival record of science, containing methods, evidence, and reproducibility details that cannot be reduced without losing what matters. But the published article will no longer be the primary interface through which knowledge flows (or, in the future, even the primary basis of research evaluation). AI will increasingly sit between the paper (and related contributions, like data and code) and the reader, transforming the scientific record into knowledge graphs, evidence maps, credibility rankings, contradiction trackers, and claim networks; most importantly, it will help the reader digest the information in a way that is most understandable, accessible, and interesting for them.
Trust Becomes the Highest-Value Asset
Most lay audiences have never read journal articles. They rely on science journalists and public institutions, intermediaries who translate technical research into accessible, trustworthy narratives. In an AI-mediated world, even that layer risks being bypassed. People will ask their AI assistants: “Is coffee bad for me?”, “Does this cancer treatment work?”, “What does climate science actually say?”. AI tools will answer instantly, fluently, and confidently, and that confidence will feel persuasive, even when it is misplaced. If the AI response is wrong, biased, manipulated, or incomplete, most users will lack the tools to detect it. Even when it is correct, its authority feels opaque because the user does not see the chain of evidence or the uncertainty woven into scientific reasoning: the response generated by AI does not carry accountability, as that of an author or a journalist would. This deepens the trust crisis and raises an uncomfortable question — where does scientific authority actually reside in an AI-mediated society?
Publishers now operate in a marketplace where misinformation competes directly with science and often wins because scientific voices are absent, inaccessible, or incomprehensible — and because people are not able to distinguish a verified, trustworthy source from a channel that simply serves commercial purposes by gaming engagement metrics. Campaigns that are not grounded in scientific fact, and are often politically motivated, such as anti-vaccine movements, climate denial campaigns, and health myths, succeed because of these factors. AI will amplify this dynamic through its sheer capability and speed in producing plausible content in all kinds of engaging forms, exploiting psychological and cognitive hooks designed to engage and persuade. All this negatively impacts science as a global enterprise: declining trust reduces public support for research, less support invites political pressure on funding, and budget pressure strains institutions, cancels subscriptions, and diminishes the perceived relevance of publishing. Hence, science communication, which we define as factual, complete, verified, and source-linked communication of research that is accessible, understandable, and engaging for its audiences, is no longer a “nice to have”; it is becoming central to the trustworthiness and sustainability of the scientific enterprise and to managing its downstream impact on society.
Accountability in Science Communication: How Human Is It?
Trust will become the central currency of science communication. But it cannot be built by machines alone. Machines can generate language, simulate reasoning, and produce synthesis. Trust ultimately requires human accountability. It requires the system to ensure careful and sustained human attention and editorial judgment at the point of review and selection. It also necessitates human integrity at the point of research, and depends on the humility to acknowledge uncertainty and negative results, the courage and clarity to make reasoning transparent, and the willingness to stand behind what is communicated, even when incentives favor speed, scale, and certainty over truth. In the future, the most successful publishers will not be those that publish the most content but those that have earned the most credibility. In an AI-saturated environment, human judgment is foundational for trust.
One of the most overlooked realities of AI is that it will intensify the incentives already embedded in the scholarly ecosystem. If academia rewards publication volume, AI will inflate output. If publishers reward speed, AI will accelerate workflows. If funders reward visibility, AI will sharpen competition for attention. AI is an amplifier; it makes existing systems more extreme. Trust cannot be treated as a secondary outcome. It must be an explicit design goal, reinforced through deliberate policies, visible standards, and human-led editorial structures that protect quality even as scale increases. The new mission of academic publishing is to build, signal, and sustain trust in an AI-mediated knowledge ecosystem.
What Journal Evolution Might Look Like in Practice
If researchers no longer consume science by browsing journal issues, and if AI becomes the primary interface through which knowledge is queried, the role of journals must evolve accordingly. Their future relevance will not come from being destinations for content, but from being ecosystems that enable trust, interpretation, and continuity in an AI-mediated world. Several directions emerge:
- Journals may become trust certifiers, acting as credibility engines that validate claims through rigorous human scrutiny.
- They may take on a stronger role as curators of meaning, helping communities interpret emerging themes, surface contradictions, and connect research to real-world relevance.
- They could also evolve into community platforms, enabling dialogue across researchers, practitioners, policymakers, and the public.
- At a more structural level, journals may become field-level knowledge systems — discipline hubs where research is continuously mapped, synthesized, and made machine-readable for AI-driven discovery.
Across all of these possibilities, the shift is clear. The value of the journal moves away from being a publishing venue and toward being a trusted signal in an increasingly noisy, AI-mediated environment. Without this evolution, journals risk becoming invisible infrastructure — still essential, but no longer directly experienced or recognized by the reader.
Even as access accelerates through mechanisms such as preprints, most readers will continue to rely on curated, contextualized, and credibility-assured content. What changes is not whether journals matter, but why. Their role as explicit trust certification systems will become more pronounced, reinforced through visible signals such as structured integrity checks, transparent disclosure of review processes, and standardized trust markers that travel with the research wherever it is accessed.
At the same time, journals will need to move beyond publishing articles as isolated units and provide field-level context — through synthesis, contradiction mapping, replication tracking, and expert commentary that clarifies where consensus is emerging and where uncertainty remains.
A parallel shift will be toward machine readability. AI-driven discovery will rely less on static formats like PDFs and more on structured metadata, claim extraction, and interoperable taxonomies. Journals that make research legible to both humans and machines will be better positioned to remain visible and relevant in this new environment. Those that complement this with compelling narratives, visual summaries, and audience-specific interpretations will compete not only on prestige, but on relevance and reach.
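To make the idea of machine readability concrete, here is a minimal sketch of what claim-level, machine-readable metadata might look like. It is illustrative only: the field names (claim_id, evidence_location, confidence) and the overall JSON shape are assumptions made for the sake of example, not an existing standard such as JATS or schema.org.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

# Illustrative sketch only: these field names are assumptions,
# not an established metadata standard.

@dataclass
class Claim:
    claim_id: str           # stable identifier so systems can cite the claim, not just the article
    statement: str          # the claim in plain language
    evidence_location: str  # where in the article the supporting evidence sits
    confidence: str         # e.g., "replicated", "single study", "preliminary"

@dataclass
class ArticleRecord:
    doi: str
    title: str
    claims: List[Claim]

record = ArticleRecord(
    doi="10.1234/example.doi",          # placeholder identifier
    title="Example article title",
    claims=[
        Claim(
            claim_id="c1",
            statement="Intervention X reduces outcome Y by roughly 20% in adults.",
            evidence_location="Results, Table 2",
            confidence="single study",
        )
    ],
)

# Emit machine-readable JSON that a discovery system could index at the claim level.
print(json.dumps(asdict(record), indent=2))
```

The design choice worth noting is the stable claim identifier: once claims, and not only articles, can be addressed and linked, the contradiction tracking and evidence mapping described above become practical indexing tasks for AI-driven discovery systems.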
Finally, journals may need to expand into post-publication stewardship. Publication no longer marks the end of quality control; rather, it becomes the beginning of an ongoing process of validation, correction, and interpretation. The most trusted journals of the future may not be those that publish the most influential papers, but those that most reliably maintain the integrity of the scientific record over time.
None of these shifts require abandoning peer review. They reinforce its original purpose: not merely to filter content, but to provide the field with a trusted foundation for collective understanding.
Beyond the Article: The Next Communication Ecosystem
If science is to remain influential in an AI-mediated world, it must be translated into forms people can actually use: plain-language summaries, podcasts, video abstracts, interactive visualizations, policy briefs, and teaching modules for students and early-career researchers. The strategic imperative is clear: publishers and journals must move beyond content distribution and become communication enablers. If they do not, others will fill that role — often without the same commitment to rigor, nuance, or integrity.
The opportunity is not simply to disseminate knowledge, but to enable decision-making. Evidence dashboards for policymakers, living systematic reviews, and AI-integrated knowledge platforms built on verified content are not mere extensions of publishing — they are its next frontier. In a landscape where access to content is no longer scarce, the ability to translate evidence into usable, trustworthy insight becomes a defining source of value.
Science communication will continue to move forward. AI will accelerate its speed, reshape its channels, and expand its reach — but it will also amplify noise. As synthetic content becomes pervasive, trust will become more fragile. The central challenge ahead is not access to information, but the ability to trust what is being seen, interpreted, and acted upon. This is not a problem that can be solved through automation alone. It requires systems that preserve the human elements of judgment, accountability, and meaning-making at scale. The publishers that will matter most are those that recognize three essential shifts: (1) attention is scarce, (2) trust is a strategic differentiator, and (3) the reader is the ultimate stakeholder. In this environment, their role is no longer defined by the volume of content they produce, but by their ability to ensure that what flows through the system remains credible, interpretable, and reliable.
Which brings us back to the defining question: What will journals represent in the future? Will they remain repositories of research articles, valued for prestige and archival completeness? Or will they evolve into something larger — trusted, field-level systems that curate, verify, interpret, and safeguard scientific meaning in an AI-mediated world? The answer to that question will not only shape the future of journals; it will shape the future of science communication itself.
Discussion
“The strategic imperative is clear: publishers and journals must move beyond content distribution and become communication enablers. If they do not, others will fill that role — often without the same commitment to rigor, nuance, or integrity.”
Besides researchers doing this themselves as part of their research process and new research skills, I see librarians fulfilling that role as knowledge curators and facilitators of research. While publishers/journals may fulfil this role to an extent, it may be too general. I can see how researchers themselves and librarians will be better suited than publishers/journals to provide a tailored service for their researchers’ niche purposes.