Editor’s note: Today’s post is by Dr. Krishna Kumar Venkitachalam, a surgeon, editor, writer, and academic working as the Innovation Officer at Enago. Reviewer credits to Chef Dianndra Roberts.

Artificial intelligence (AI) is changing how we work, faster than we can comprehend. Like it or not, academia is no exception. I would like to believe I’m adapting well, and yet I feel shaken. I want to talk about how we all feel about this unrelenting march.

Researchers wonder whether using AI — now legitimate from ideation to publication — is making them less sharp. Some peer reviewers resort to AI assistance to squeeze in extra reviews, then worry about ethical malpractice. Journal editors struggle to identify genuine work among mountains of submissions.

Worries around AI aren’t just about the technology itself. Our core values of self-worth, trust, and belonging are being shaken. This is called “AI anxiety,” and it lies at the crossroads of emotion, cognition, and adaptation (a research focus in information science and beyond). Studies show scholars experiencing identity erosion, moral fatigue, epistemic overload, or epistemic injustice. We’re losing confidence in our ability to judge what’s real or valuable.

Conversations about AI’s impact mainly focus on efficiency and scale or on policy and ethics. What matters equally is the emotional toll: the exhaustion from daily recalibration, the isolation of not knowing if colleagues share your values, the constant rebuilding of systems we’d just learned. With each adjustment, we lose a bit of our humanity in our work.

Here I share my thoughts on what’s happening to us as a scholarly community, and on what could preserve the humanity in what we do.


Theme 1: Identity Drift

Identity drift is an AI concept, but drift is also a seafaring term, and it is the seafaring sense that describes what’s happening to us. Our identities are rooted in skills, roles, and connections crafted over years. Most of academia involves text and information processing, which AI excels at. As AI-mediated productivity becomes standard, we’re all wondering: Am I replaceable or redundant?

This hits some groups harder: the more established among us may feel threatened, while early-career researchers and non-native English speakers may feel more included. Yet those same scholars might struggle with AI assistance erasing their unique voice. Researchers from the Global South watch colleagues at elite universities use integrated AI seamlessly while they struggle to keep up on their own time.

Studies show technological shifts evoke both fear of redundancy and hope for reinvention. Banking on hope requires valuable resources: time to learn, institutional support, and security. At AI’s current pace, these resources will only become scarcer. Previous transitions — typewriters to word processors, libraries to the internet — gave us more time to adapt. This one doesn’t, and with the norms shifting daily, everyone is struggling to keep up.

Theme 2: Trust and Moral Fatigue

Moral fatigue is not quite burnout. It is the exhaustion that sets in when the rules keep changing and you can’t tell right from wrong. In scholarly publishing, we are living in sustained ambiguity about the legitimate use of AI. There is systemic reluctance across the industry to define AI policies, with shifting goalposts and legal uncertainties. The policies that are published tend to be bare and at times unclear about whether they penalize or encourage AI use.

Researchers, already pressured to produce more, can legitimately use AI to meet demand. Yet the lines not to cross are murky within the policy vacuum. Reviewers potentially have a powerful ally in AI, but worry about violating intellectual property rules. Editors might inadvertently enforce unclear policies while possibly depending on AI tools themselves. One can imagine the internal conflict.

Many publishers have integrated AI into line- and copy-editing, where human expertise remains invaluable; this is where replacement threats loom largest. Early-career researchers might worry that admitting to using AI will affect their careers, and senior scholars may quietly use AI while presenting public skepticism. The Global South researcher community, which could arguably benefit most from AI leveling the playing field, remains underrepresented in the conversations defining “ethical” AI norms.

The trust architectures — peer review, editorial oversight, citation practices — were built for human-paced scholarship. They assumed time for reflection, reciprocity, and shared understanding of “work.” AI arguably accelerates everything and threatens these values.

What happens to peer review if the focus shifts from quality to monitoring human versus AI contributions? What happens if editorial correspondence becomes more scrupulous but colder? What happens to mentorship if junior scholars rely on AI instead of on their seniors? What happens to trust if everyone suspects everyone else is automating, but no one admits it?

The policy vacuum and moral fatigue compound pre-existing structural problems. The fatigue of upholding shifting standards without transparent conversations seeps into us, producing a strain that feels like personal failure.

Theme 3: Information Excess and Erosion of Discernment

I am convinced that the academic publishing community has a problem of abundance. We’ve always had more to read than we could manage, but AI accelerates this differently than ordinary publication growth ever did.

AI makes creation easier, and I notice this in myself. I produce more: more drafts, more revisions, more versions. The AI tools save me time, which translates into expectations that I should do more. I’m reading more abstracts, skimming more papers, and generating more ideas. But I’m not convinced I actually know more.

This isn’t so much a confession of moral failing as it is an observation about cognitive limits that haven’t changed. Our brains evolved to process a certain amount of information; we’re now routinely exceeding that threshold. Journal submissions are higher than ever, preprint servers overflow, and the markers we trusted — careful prose, coherent argumentation, methodological rigor — are harder to spot when AI simulates them convincingly.

The term for this is epistemic overload: when you can no longer reliably judge what’s real or valuable. Generative AI (GenAI) produces human-like text without deep understanding, and in academic papers, the distinction becomes nearly impossible to detect. The very thing we’ve built careers on becomes less reliable as a signal.

Research suggests that increased reliance on AI reduces the space for reflection and critical judgment. I’m aware that this research was probably produced by people facing the same pressures; the anxiety is self-referential in uncomfortable ways. AI could help fix these problems by serving as an interface and a supplementary (or secondary, if you prefer) brain, but that raises the question of what we become in the process. As AI transitions from trusted ally to potential competitor, our self-image shifts in ways we’re still articulating.

This affects everyone: teachers wondering what to assign, funders evaluating potentially AI-optimized proposals, researchers questioning if ideas are genuinely novel, and editors and reviewers facing inhuman submission volumes. We’re all recalibrating what constitutes a genuine contribution while the ground keeps moving.

Theme 4: The Erosion of Human Touch

Science operates in a strange space. The system is designed to be ruthless, unemotional, and evidence-based. Yet the individuals running it are warm-blooded, vulnerable, emotional, gut-feeling-based humans. So far, we’ve balanced this such that the system stays rigorous while humans stay involved enough. AI threatens to disrupt this balance by replacing human interactions that happen in the margins. From my perspective, role redundancy is less concerning than what happens when we stop interacting with each other and start interfacing primarily with systems.

Authors increasingly receive almost instantaneous AI screening results, covering everything from language quality to image integrity. As screening expands to counter AI-assisted misconduct, inaccurate flagging will likely rise in this AI-versus-AI battle, and the burden of proof on authors will grow. Automated notes feel colder and more accusatory than the measured messages of human editors, who tend to phrase concerns tentatively or give the benefit of the doubt.

Editors might be grateful for the systematic coverage but have accuracy concerns; they are conscious that they may be making authors’ lives difficult based on assessments that might be wrong. They are caught between maintaining standards and knowing that AI detection is notoriously unreliable, particularly for non-native English speakers.

Reviewers may find their interactions with editors becoming more transactional, with automated requests replacing personal exchanges. They may worry about their own AI use being flagged incorrectly. The paranoia compounds: everyone suspects everyone, but actual policies remain vague.

Across the board, the ratio of human-to-human interactions is dropping substantially. AI systems can be designed to humanize their outputs with warmer language, expressions of gratitude, and softer phrasing, but the result often feels generic. Humans are better at expressing genuine gratitude, reading between the lines, and sensing when someone needs encouragement. AI-generated “warmth” reads as sycophantic or hollow, potentially deepening the disconnection.

I’m not suggesting that publishers haven’t thought about these dynamics, but the challenge isn’t just designing better interfaces. As AI becomes more integrated, opportunities for genuine human connection — arguably scarce in a pressured system — become rarer. Dehumanization isn’t intentional but is already underway, and the alienation takes a tangible toll.

Theme 5: Toward Collective Rehumanization

Everything I’ve described so far might feel inevitable, like just another technological wave to adapt to. That’s partly true. AI isn’t going away, and its integration will likely deepen. But there is a more important response than individual adaptation: a collective effort to rehumanize the system as AI becomes embedded.

This starts with acceptance, which is not resignation but a clear, honest acknowledgment. AI is becoming part of what we do at every stage. The question isn’t whether to use it but how to use it in ways that preserve rather than erode the human elements that make scholarship meaningful.

AI integration will happen unevenly. Researchers at well-resourced institutions will have different access than those in the Global South. Early-career scholars will face different pressures than established ones. This inequality shouldn’t become a reason to discriminate against legitimate, thoughtful AI use by any stakeholder group. We need to resist creating hierarchies based on who can afford to use AI the least.

Right now, culture tends toward anxiety and blame. Researchers wonder if reviewers judge them for AI use. Reviewers worry about being flagged for permissible assistance. Editors enforce policies they’re ambivalent about. This creates silence rather than dialogue. What if we encouraged openness about successes and failures? What if we all practiced practical sharing — what works and what doesn’t, what confuses us, what worries us, and what might lead somewhere better?

This kind of openness requires genuine dialogue. We need spaces where stakeholders can share experiences without fear. Researchers need to be heard when policies about research outputs are drafted. Editors need input into screening automation. Reviewers need a voice in discussions about AI-assisted peer review. Policies emerging from community conversation rather than top-down decree better reflect how people actually work.

Perhaps most importantly, we need to safeguard intellectual kinship — mentor relationships, human-focused peer review discourse, and warmer editorial correspondence. Our conversations need to factor in the AI anxiety on the other side. These interactions happen in the crevices and get eliminated in optimization drives. If AI genuinely saves us time and cognitive load, we should protect that space for the human connections AI can’t replicate, not fill it with more automated tasks.

Conclusion

The uncertainty and anxiety we feel in the AI era are inevitable and should be acknowledged. We’re wired to become exhausted when exposed to constant unpredictability and novelty. As we strive for a new status quo, we need to give more attention to human aspects: connection, communication, trust, judgment, and self-efficacy.

I’m not arguing for abandoning automation or returning to some idealized pre-AI state. I’m suggesting we focus on rehumanization through transparency, honesty, and genuine sharing — not as aspirational values but as practical responses to a system threatening to become more efficient and less human.

I started by saying I feel shaken despite wanting to believe I’m adapting well — that contradiction hasn’t resolved itself in writing this. What has become clearer is that we’re all feeling some version of this, and this will only become worse if we push it aside. The vulnerability emerging across scholarly communities isn’t a weakness we need to eliminate. It might instead be the most honest starting point we have.

Krishna Kumar Venkitachalam

Krishna Kumar Venkitachalam is an orthopedic surgeon by qualification with 15+ years of experience in academic publishing in a variety of roles, including manuscript editing, manuscript writing, editor training, author education, language technology solutions, and innovation. He currently works as the Innovation Officer for Trinka/Enago.

Discussion

2 Thoughts on "Guest Post: The Human Heart of Science — Navigating AI Anxiety in the Academic World"

I shared this with my entire journal team AND the entire senior leadership team of our medical society because it is so philosophically, emotionally, and strategically perceptive. What a great way to end this week, and thank you to Dr. Venkitachalam for crafting such a profound analysis.

“I started by saying I feel shaken despite wanting to believe I’m adapting well.”

I’m not adapting well – I think early retirement is the best option. I invested years in using my brain to teach, write, and do research, expecting the same of my many hundreds of students and trusting them to do the same. I now get assignments where it is clear they have done little work. I will complete my last 2 books without AI.
