Have you noticed how you use Google lately? Has it changed? These days, I don’t click on links as much; an AI overview is usually enough to give me the information I need. This made me wonder how these overviews have affected the search behavior of researchers. When they type a search query, do they really click through, or are they satisfied with the comprehensive and seemingly confident AI-generated overview they get in response? I believe it is the latter.

Zero-click readership can therefore be described as research that is consumed, interpreted, and even reused without the reader ever accessing the original source. What does this mean? From a discoverability perspective, no PDFs are downloaded, no tabs are opened, and no papers are read, a drastic change from how papers have traditionally been found. The traditional search journey looked like this: search → scan a list of results → click the promising articles → read or skim them → cite. Now the journey is shorter: search → read the AI-generated overview → move on.


From the reader’s perspective, this new way of finding synthesized information is faster and more efficient. But if readers are increasingly finding what they need without clicking through to the paper, what happens to journals as destinations? And what happens to the integrity and visibility of the research itself? Real traffic to papers will decline, weakening the visibility not just of the papers but also of the journals that host them. This also degrades the data that publishers use to understand their author base, and it could affect authors themselves, influencing decisions related to funding, tenure, and promotion.

While efficient, this change carries risks for the reader; speed comes at the cost of comprehension. Traditionally, there has been friction as readers moved between papers: they might notice inconsistencies or grapple with details in the Methods section as they evaluated a paper critically. That is slowly changing. Readers are no longer challenging what they read; they are learning to accept what AI synthesizes.

Over time, this could have subtle but significant effects. For instance, early-career researchers who rely on AI-synthesized summaries may never learn to evaluate manuscripts critically. AI systems may hallucinate, misattribute findings, or overstate claims, and summaries often oversimplify. The result is a version of research that is easier to consume but potentially harder to question.

AI isn’t just summarizing research; it is, in fact, selecting what gets shown in the overview. Publishers, journal editors, and peer reviewers are traditionally considered the gatekeepers, deciding what gets published and what is worth reading. Many panel discussions and articles have examined this kind of gatekeeping and its biases and limitations, but those biases are at least visible and contestable. What about the gatekeeping that AI introduces? It decides which papers to include, which findings to show, and which perspectives to omit. This filtering happens at the back end, with no clear signal of what is being left out. The process is opaque: there is no editorial board to question it and no peer reviewers to challenge it.

It is as if algorithmic opacity is replacing editorial bias. Highly cited papers are more likely to be surfaced. Papers in English are more likely to be surfaced than papers in regional languages, and well-structured, AI-readable papers with accessible language and standardized formatting are more likely to appear in searches.

The result is a loop: what is visible becomes more visible, and what is hidden stays hidden. This is an even greater problem for authors in the Global South, whose work may never reach the synthesized layer where AI-driven discovery begins.

Another important change is that publishers may no longer be the primary interface between research and reader. They are becoming part of the underlying infrastructure: they continue to produce, curate, and validate content, but that content increasingly serves as input into other ecosystems rather than drawing users into their own. Their role becomes foundational, yet less visible.

This dynamic is not entirely new. Indexing services and aggregators have long mediated access to scholarly content. But AI accelerates and deepens the trend by collapsing discovery and interpretation into a single step; the user no longer moves from platform to platform.

In such a situation, it is important to ask: how can publishers remain visible when readers no longer encounter their platforms directly? How does their role evolve beyond supplying credible content? Can they help drive transparency and traceability in AI outputs?

Finally, how does this new search behavior affect the way we analyze impact? Today, articles are encountered through synthesis rather than direct discovery. If papers aren’t actually being read, what exactly are citations measuring? This creates a fundamental misalignment between how knowledge is used and how impact is measured, i.e., via article downloads and citations. Downloads lose meaning if fewer people click through to read a paper, and it is unclear whether citations result from direct reading or from interpretation via AI summaries. We are, in a sense, measuring a system that no longer reflects how knowledge is used.

It is tempting to frame these shifts as a loss of traffic, visibility, and control. What matters, though, is not resisting AI-mediated discovery but shaping how it evolves.

  1. There should be traceability: a way for readers to trace claims back to specific sources. Without it, the trust and reliability of research outputs are compromised. If researchers cannot easily distinguish between what is drawn from the literature and what is inferred by AI, the boundary between evidence and interpretation begins to blur. Publishers need to play a role in defining these expectations.
  2. How impact is understood needs to evolve: Should we consider new forms of influence? One measure might be inclusion in AI-generated summaries and answers: how often does research surface in discovery systems, and how is it attributed?
  3. Metadata matters: Abstracts, keywords, funding information, and contributor roles are not just descriptive elements; they are signals that determine how research is surfaced. Poor or inconsistent metadata not only reduces discoverability but risks rendering the research invisible, so investing in rich, consistent metadata should be encouraged (see the sketch after this list).
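
To make the metadata point concrete, here is a minimal sketch of what machine-readable article metadata can look like, expressed as schema.org JSON-LD built in Python. The field names follow schema.org’s ScholarlyArticle vocabulary; the title, DOI, names, and the choice of which properties any given discovery system actually reads are illustrative assumptions, not a prescription.

```python
import json

# A minimal, illustrative schema.org record for a scholarly article.
# All values below are hypothetical placeholders.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "headline": "Example article title",
    "abstract": "A short, plain-language summary of the findings.",
    "keywords": ["zero-click discovery", "research metadata"],
    "identifier": {
        "@type": "PropertyValue",
        "propertyID": "DOI",
        "value": "10.1234/example.5678",  # hypothetical DOI
    },
    "author": [
        {
            "@type": "Person",
            "name": "A. Researcher",
            "affiliation": "Example University",
        }
    ],
    "funder": {"@type": "Organization", "name": "Example Funding Agency"},
}

# Embedding this JSON-LD in an article's landing page gives crawlers and
# AI discovery systems an unambiguous, structured description to draw on.
print(json.dumps(article_metadata, indent=2))
```

The point is not this specific format but consistency: complete, well-structured fields give automated systems something reliable to surface and, crucially, to attribute.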

The integrity of the academic system has rested on a relatively stable premise: that those engaging with research will return to the source. That premise is now being tested. Which brings me to my final unresolved thought:

If research is increasingly accessed through AI-generated summaries rather than via primary sources, then what does it mean to “engage with research” at all?

Roohi Ghosh

Roohi Ghosh is the ambassador for researcher success at Cactus Communications (CACTUS). She is passionate about advocating for researchers and amplifying their voices on a global stage.
