The Scholarly Kitchen

What’s Hot and Cooking In Scholarly Publishing

Guest Post: When the Front Door Moves: How AI Threatens Scholarly Communities and What Publishers Can Do

  • By Ben Kaube, Steve Smith
  • Jul 7, 2025
  • 13 Comments
  • Time To Read: 4 mins
  • Artificial Intelligence
  • Discovery
  • Technology
  • World of Tomorrow

Today’s guest post is by Ben Kaube and Steve Smith. Ben is the co-founder of Cassyni, a platform for helping publishers harness events, video, and AI to engage their journal communities. Steve is the founder of STEM Knowledge Partners and an independent consultant with over 25 years of experience in scholarly publishing, including leadership roles at Blackwell Publishers, John Wiley & Sons, Frontiers Media, and AIP Publishing.

Imagine a researcher typing a complex scientific query into one of today’s AI-discovery and summarization tools. In seconds, they receive a concise, seemingly authoritative summary – no clicking through to journal websites, no navigating subscription paywalls, and no downloading branded PDFs.

To the researcher, this feels like pure convenience, perhaps even magic, but for publishers, it looks like disintermediation. What does this mean for scholarly journals, the communities they nurture, and the integrity they safeguard?


This is no hypothetical scenario: it’s already here. Tools such as Google’s AI Overviews, Perplexity, and ChatGPT now deliver instant answers that bypass publisher websites entirely. What looks like a routine dip in referral traffic is in fact a rewiring of the very infrastructure that has sustained scholarly communities for decades.

A cautionary example can be found in the recent collapse of Stack Overflow, the leading question-and-answer platform for software developers. Between early 2023 and late 2024, the site saw new user questions plummet from nearly 90,000 per month to under 30,000, ostensibly as a result of AI chatbots that provided users with direct answers from Stack Overflow without directing them to the site.

This decline wasn’t just about a reduction in clicks for users or traffic to Stack Overflow – it also meant fewer human engagements, fewer new questions (the platform’s equivalent to manuscript submissions), and ultimately fewer reasons for experienced curators to remain involved. Revenue soon followed suit, declining sharply as advertisers realized that bots don’t buy products or get hired. The trajectory is clear: AI-driven convenience can rapidly disrupt community-driven platforms and long-standing commercial dynamics.

Could scholarly publishing face a similar trajectory? Early data suggest the answer is yes. Google’s AI Overviews, rolled out globally in October 2024, are already having a similar impact on traffic to a variety of sites. One usability study found click-through rates dropped by as much as 66% when an AI-generated summary appeared above traditional search results.

For subscription-based publisher websites, this immediately translates into falling content usage and declining revenues. For open access publishers relying on APC income, there is a similar feedback loop – fewer readers mean fewer authors submitting manuscripts, which ultimately erodes future revenue and community growth. Even platforms primarily built around free, openly available content – such as SSRN and ResearchGate – are vulnerable. Their engagement and community vitality depend critically on consistent traffic from search discovery; a decline in visits threatens the influx of new content, discussions, and the active researchers who sustain these platforms’ relevance.

Further warning signs are already apparent. Multiple publishers report privately that bot traffic (from GPTBot, OAI-SearchBot, ClaudeBot) is outpacing genuine researcher visits. Surveys reveal that researchers increasingly rely on AI summaries rather than journal abstracts for their initial literature reviews. Editorial flags about integrity and peer review quality concerns are rising, highlighting the erosion of human oversight networks traditionally supported by journal communities.
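
The bot names above can be spotted in server logs by user-agent matching. Below is a minimal sketch of that idea; the log entries, helper names, and exact user-agent strings are illustrative assumptions, and real crawler signatures should be taken from each vendor's documentation.

```python
# Sketch: separating known AI-crawler requests from other traffic by
# user-agent substring. Bot names are those mentioned in the post.

AI_BOT_SIGNATURES = ("GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot")

def is_ai_bot(user_agent: str) -> bool:
    """Return True if the user-agent matches a known AI crawler signature."""
    return any(sig.lower() in user_agent.lower() for sig in AI_BOT_SIGNATURES)

def split_traffic(user_agents):
    """Partition user-agent strings into (ai_bot_count, other_count)."""
    bots = sum(1 for ua in user_agents if is_ai_bot(ua))
    return bots, len(user_agents) - bots

# Hypothetical log sample
sample = [
    "Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/126.0",
    "Mozilla/5.0 (compatible; ClaudeBot/1.0; +claudebot@anthropic.com)",
]
bots, others = split_traffic(sample)
print(bots, others)  # 2 1
```

A real deployment would of course analyze full access logs and account for bots that do not self-identify; the point is only that the publishers' private reports above are, in principle, measurable.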

Yet despite these challenges, researchers’ fundamental needs haven’t changed. They still seek trusted communities to stay informed, debate emerging findings, develop collaborations, gain community recognition, and build their careers. The journal community, anchored in authentic human interaction, remains a crucial ecosystem amidst the noise of AI-generated content.

So, what can publishers do to remain resilient?

First, they must clearly encode trust signals – metadata readable by both humans and AI – to safeguard quality and context. Initiatives like eLife’s “Reviewed Preprint” model show that when peer-review information is embedded in a standards-compliant, machine-readable format, it becomes a powerful alerting mechanism — flagging higher-confidence research for both people and the algorithms they increasingly rely upon.
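
The eLife example hinges on peer-review information being machine-readable rather than buried in prose. As a purely illustrative sketch — the field names and DOIs below are invented for illustration, not eLife's or any standard's actual schema — a publisher might attach a review record to an article like this:

```python
import json

# Illustrative "trust signal": a machine-readable record linking an article
# to its peer review. All field names and DOIs here are hypothetical.

def make_review_signal(article_doi: str, review_doi: str, assessment: str) -> str:
    """Serialize a peer-review trust signal as JSON."""
    signal = {
        "type": "peer-review",
        "is-review-of": article_doi,      # the article being reviewed
        "review-doi": review_doi,         # persistent ID for the review itself
        "assessment": assessment,         # e.g. an editorial-assessment term
        "machine-readable": True,
    }
    return json.dumps(signal, indent=2)

print(make_review_signal("10.1234/example.preprint",
                         "10.1234/example.review",
                         "convincing"))
```

Because both humans and crawlers can parse a record like this, the review status travels with the content even when an AI tool, rather than the journal page, is the point of use.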

Crossref’s machine-readable metadata, which enables implementation of the FAIR (Findable, Accessible, Interoperable, Reusable) principles, similarly demonstrates how embedding clear, machine-readable metadata can alert researchers and algorithms alike when accuracy is crucial. A recent study highlighting that ChatGPT cited retracted cancer-imaging articles in around 10% of its answers underscores the urgent need for high-quality, structured, machine-readable retraction metadata to preserve research integrity.
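
To make the retraction point concrete: Crossref links retraction notices to the works they retract via an "update-to" relation, which any downstream tool can check. The sketch below parses a hard-coded, simplified response shaped like that metadata; the DOIs are placeholders, and a real check would query the Crossref REST API rather than a local sample.

```python
# Offline sketch of consuming Crossref-style retraction metadata.
# SAMPLE_NOTICE is shaped like a simplified item from api.crossref.org/works;
# the DOIs are placeholders, not real records.

SAMPLE_NOTICE = {
    "DOI": "10.1234/retraction.notice",
    "update-to": [
        {"DOI": "10.1234/original.article", "type": "retraction"}
    ],
}

def is_retracted(article_doi: str, notices) -> bool:
    """True if any notice declares a retraction update for article_doi."""
    for notice in notices:
        for update in notice.get("update-to", []):
            if (update.get("type") == "retraction"
                    and update.get("DOI", "").lower() == article_doi.lower()):
                return True
    return False

print(is_retracted("10.1234/original.article", [SAMPLE_NOTICE]))  # True
print(is_retracted("10.1234/other.article", [SAMPLE_NOTICE]))     # False
```

An AI summarization tool that ran a check like this before citing a paper could avoid exactly the retracted-article failures the study above describes — but only if publishers deposit the metadata consistently.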

Publishers can further reinforce trust by introducing visual markers – such as ORCID trust markers or the Signals badge – to highlight credible content or surface potential issues. This added transparency, comparable to Taylor & Francis’s “currently under investigation” banner, visibly reinforces trustworthiness and accountability.

Second, publishers should rethink how they approach engagement metrics. Rather than narrowly optimizing for traditional COUNTER metrics or basic “usage,” the goal should be meaningful, differentiated user experiences that foster genuine engagement. Publishers can add substantial value through multi-format content, such as video abstracts, interactive seminars, infographics, comprehensive supplementary materials, datasets, and easily digestible summaries (similar to Mayo Clinic’s well-known “easy-to-understand” information format). Curation also presents significant opportunities here; carefully crafted virtual collections can bring together diverse content types – including articles, books, conference proceedings, podcasts, and videos – in a coherent manner that adds clear value, with opportunities for collection editors to add commentary, nuance, and contextual insight.

Finally, the value of real human engagement cannot be overstated. Publishers must actively nurture authentic community interactions, for example, by hosting webinars, hybrid conferences, Reddit AMAs, Slack Q&As, and mentor cafés – similar to the successful programs run by Royal Society Publishing and AIP Publishing. Societies, with their established memberships, conferences, and existing communities, are particularly well positioned to excel here. Providing authors with more prominent profiles and greater opportunities to directly promote and discuss their work – through seminar series, “Behind the Paper” posts (such as those pioneered by Springer Nature), and other interactive formats – can significantly enhance community vitality and author engagement, particularly critical in today’s AI-saturated information landscape.

Publishers should embrace, rather than resist, the upsides of AI discovery tools. Intelligent summarization can uncover hidden gems, helping researchers discover niche or older content that might otherwise go unnoticed. Publishers who proactively leverage AI tools to extend their reach will find themselves better positioned in this rapidly evolving environment.

The collapse of Stack Overflow is not just an intriguing anecdote, but also an important lesson for scholarly publishing. Publishers who recognize the threat of AI-driven disintermediation and proactively build resilience through trusted metadata, differentiated content experiences, and vibrant, authentic community engagement will continue to thrive. Those who chase yesterday’s metrics risk becoming invisible – cited by no one, read by no one, trusted by no one.

The future has already arrived. Publishers have a narrow window to respond thoughtfully and strategically. The choice is stark: adapt or risk becoming invisible.


Ben Kaube

Ben Kaube is a cofounder of Cassyni, a platform for helping publishers and institutions create and engage communities of researchers using seminars.

Steve Smith

Steve Smith is the founder of STEM Knowledge Partners and an independent consultant with over 25 years of experience in scholarly publishing, including leadership roles at Blackwell Publishers, John Wiley & Sons, Frontiers Media, and AIP Publishing.


Discussion

13 Thoughts on "Guest Post: When the Front Door Moves: How AI Threatens Scholarly Communities and What Publishers Can Do"

Excellent post Ben. Interested to know your thoughts on how this compares with ArXiv disintermediating physics journals in the 90s. Physics journals appear to have been disintermediated, but they are still here. Is this situation different? (Fwiw, I think it probably is different)

  • By Adam Day
  • Jul 7, 2025, 2:59 PM

arXiv certainly diluted journal traffic, but it did so gradually and within a few disciplines. Its design still nudged readers toward the journal of record for peer-review certification and prestige, so publishers had years to adjust while usage drifted.

Generative-AI answers are operating on a different scale. General-purpose discovery tools such as ChatGPT, Perplexity and Google AI Overviews now surface distilled takeaways for every field almost overnight. When that happens the journal title, DOI and even the authors’ names often disappear from the answer box, so neither journal brand nor individual credit travels with the content. arXiv abstracts are being ingested and summarised in exactly the same way, which further blurs provenance for pre-prints.

A parallel model is already emerging in AI research: many landmark papers live only as arXiv pre-prints amplified by social-media threads, blogs and conference talks, while industry labs treat journal submission as optional. That ecosystem thrives because it moves information—and conversation—faster than traditional publishing while still rewarding the researchers themselves.

Curious to get Steve’s view based on his experience at AIP Publishing.

  • By Ben Kaube
  • Jul 8, 2025, 4:40 AM

Great question, Adam. arXiv diluted traffic, but it didn’t really undermine the role of journals. If anything, it reinforced them as the place you went for peer review, citation, and career credit. There was time to adjust, and the community helped maintain that division between preprint and publication.

From the vantage point of a publisher, AI tools don’t just compete with journals – they bypass them entirely. When ChatGPT or Perplexity answers a research question by pulling in findings from multiple sources (often without citing the journal, the authors, or even whether it’s peer reviewed) the journal almost disappears as a meaningful container.

This puts pressure not just on publishers, whose brand is stripped out, but also on authors, who may lose credit unless attribution mechanisms evolve, and on the whole idea of “peer review” as a trusted filter, which risks becoming invisible or irrelevant at the point of use.

Perhaps the question isn’t just whether journals survive, but whether current scholarly incentives survive intact in a world where AI intermediates the conversation from beginning to end.

  • By Steve Smith
  • Jul 8, 2025, 5:23 AM

Thank you both! Really enjoyed this post.

  • By Adam Day
  • Jul 8, 2025, 7:02 AM

If you can’t prevent readers from using AI, then launch one that is publisher controlled, e.g. https://www.stm-publishing.com/elsevier-launches-sciencedirect-ai-to-transform-research-with-rapid-mission-critical-insights-from-trusted-content/

  • By Brian Plosky
  • Jul 8, 2025, 9:05 AM

Publisher-side AI copilots such as Elsevier’s new ScienceDirect AI can enrich on-site discovery and reading, but developing a truly competitive assistant without Elsevier-level scale and expertise will be tough for most publishers.

And because 70–80+% of journal use is already mediated by upstream discovery platforms, an AI that lives only on the publisher’s pages will reach only the shrinking slice of readers who still navigate there (unless publishers adopt the broader solutions outlined above).

  • By Ben Kaube
  • Jul 8, 2025, 11:01 AM

I agree. This would be more useful if associated with something that has broader coverage, like Google Scholar or PubMed Commons. When I was still at Cell Press, Scopus was also incorporating AI, so that would have a broader reach than ScienceDirect. Of course, you need access to Scopus to take advantage of that, and that is mainly realistic for larger organizations.

  • By Brian Plosky
  • Jul 8, 2025, 11:22 AM

I meant “PubMed Central” not PubMed Commons (which no longer exists).

  • By Brian Plosky
  • Jul 8, 2025, 11:38 AM

@Ben and @Steve thanks for sharing your views.
I very much like the fact that you point to the value publishers can add by deriving multi-format content from their publications, such as video abstracts, interactive seminars, infographics, comprehensive supplementary materials, datasets, and easily digestible summaries.
Yet these activities are often not considered core enough and are not fully implemented. There is a clear opportunity for publishers to develop additional author services, and to fully explore the monetisation opportunities associated with creating such derivatives.

Meanwhile, you are right: it is time to do away with vanity metrics, and to use available technology (such as the tools developed by Hum) to build a digital profile of visitors to publishers’ portals, based on a much more in-depth view of their behaviour on the site, and to target them with relevant services.

  • By Sabine Louet
  • Jul 8, 2025, 3:05 PM

Attention spans have shrunk dramatically since the two-column article format was invented. In other content-rich industries readers now expect ideas to arrive through video clips, interactive visuals, and concise summaries—and only then dive into the full text when they want to explore in depth.

Multimedia derivatives such as short form video abstracts, interactive seminars, infographics, and well-curated supplementary materials let busy clinicians, policy makers, industry scientists, and journalists grasp key findings quickly. This broader accessibility is crucial when public research budgets face heightened scrutiny and scholarship must demonstrate real-world relevance and impact.

Expanding beyond text also helps widen the audience without diluting rigour. Surfacing the individuals behind the research—through blog posts, interviews, panel discussions, or seminar presentations—restores trust by showing the human judgment that underpins the science. New content modalities should therefore not be a side project, but a core publishing function that strengthens engagement, credibility, and reach in an era of shortened attention and diverse readership.

  • By Ben Kaube
  • Jul 9, 2025, 5:01 AM

@Sabine, really appreciate this. I agree, publishers could do more to move these derivative formats from the margins into the mainstream. We talk a lot about value-added content, but… it’s still often treated as extra rather than essential.

I suppose it’s an increasingly open question what we mean by “core” publishing. If AI tools are increasingly how research is found, interpreted, and recombined, then formats like summaries, visualisations, and structured metadata aren’t just peripheral. Meanwhile, the traditional article risks becoming the appendix.

And I agree on the potential of audience insight tools like Hum. Still early days, but there’s real scope here, for marketing, yes, and for more tailored services and better decisions about what to build next.

  • By Steve Smith
  • Jul 9, 2025, 3:59 AM

I completely agree: publishers do need to find more effective ways to keep researchers engaged with their platforms. Introducing virtual collections like podcasts and videos could make a big difference in the overall user experience and boost engagement. Take audiobooks, for example: some people find traditional reading tough, but audiobooks let them absorb content while multitasking, and as a result the percentage of readers has grown.

It would be great to see publishers embrace these alternative formats while also exploring new, innovative ways to enrich the research experience.

  • By Gold Agboola
  • Jul 10, 2025, 8:55 PM

Thank you for writing this very timely article. It was the impetus for me to start strategizing and planning around this issue.

  • By Nicole Ameduri
  • Jul 21, 2025, 10:47 AM


Official Blog of:

Society for Scholarly Publishing (SSP)

Society for Scholarly Publishing (SSP)

The mission of the Society for Scholarly Publishing (SSP) is to advance scholarly publishing and communication, and the professional development of its members through education, collaboration, and networking. SSP established The Scholarly Kitchen blog in February 2008 to keep SSP members and interested parties aware of new developments in publishing.

The Scholarly Kitchen is a moderated and independent blog. Opinions on The Scholarly Kitchen are those of the authors. They are not necessarily those held by the Society for Scholarly Publishing nor by their respective employers.

ISSN 2690-8085