Editor’s Note: Today’s post is by Ashutosh Ghildiyal, a strategic leader in scholarly publishing with over 18 years of experience driving sustainable growth and global market expansion. He currently serves as Vice President of Growth and Strategy at Integra.

The scholarly publishing industry stands at a defining crossroads. Artificial intelligence is rapidly transforming knowledge work, promising increased efficiency, scalability, and automation. Yet in our rush to explore its technical capabilities, we risk overlooking the human dimension — specifically, how AI is impacting the mental health, creative fulfillment, and cognitive engagement of the very people who create, review, and disseminate scholarly content.

As the sector debates policies, pilots AI tools, and drafts guidance on use cases, we must ask a deeper question: What is the human cost — and benefit — of AI adoption in scholarly publishing?

[Image: A person in a suit with their head obscured by a dark cloud labeled “AI”]

The Unspoken Elephant: Fear of Disruption

Let’s acknowledge what many quietly feel about AI: fear. While often regarded as a slow-moving industry, scholarly publishing is deeply mission-driven, with a genuine commitment to advancing knowledge and serving the academic community. For many professionals, it also offers meaningful work and a relatively stable work-life balance. The specter of rapid AI-driven change threatens this equilibrium, breeding hesitancy.

This fear is partly why the community struggles to form standardized policies on the use of generative AI by authors, editors, and reviewers. We’re not only debating facts and formats — we’re grappling with unknowns that touch our sense of purpose, security, and professional identity.

Yet amidst uncertainty, a north star can help: evaluating every AI application through a dual lens of human cost and human benefit — not just operational efficiency.

Human Costs of AI Implementation

Beyond the familiar concerns around job displacement or plagiarism detection, there are subtler but more insidious costs of AI integration:

  1. Cognitive atrophy – As we rely more on tools for drafting, summarizing, or even thinking, we risk diminishing our own intellectual faculties.
  2. Diminished creativity – The cognitive discomfort necessary for deep creative insight is easily bypassed in favor of quick AI outputs.
  3. Lowered information quality – While voluminous, AI-generated content may lack human insight, subtlety, or originality.
  4. Erosion of intuitive judgment – In a world driven by prompts and machine confidence scores, our ability to discern quality, novelty, or truth intuitively may weaken.

The cost here is not just productivity — it’s attention. And attention, especially in editorial and publishing contexts, is arguably our most precious cognitive resource.

AI’s Psychological and Cognitive Impact

Today’s technologies encourage us to live increasingly in our heads — processing, reacting, optimizing. But this comes at a cost: many knowledge workers are losing the ability to simply be present with their own thoughts. Solitude — the wellspring of creativity — is becoming rare. And yet, it is precisely in those moments of quiet reflection that the best editorial judgment, insight, and ideation often emerge.

As AI becomes more integrated into workflows, we need a new dimension in our evaluation frameworks: psychological impact. Will this tool improve not just the speed, but the quality of our cognitive and emotional lives? Will it promote a sense of creative agency — or reduce humans to overseers of machine outputs?

The Human Benefits of Thoughtful AI Adoption

When thoughtfully integrated, AI can unlock significant human value — not just in productivity, but in purpose:

  1. Liberation from tedium – Automating routine, repetitive tasks allows professionals to focus on higher-order judgment, creativity, and strategic thinking.
  2. Reclaiming time for meaning – With less time spent formatting references or managing checklists, individuals can devote energy to work that feels purposeful and intellectually engaging.
  3. Thought partnership – AI can act as a brainstorming ally, helping to refine ideas and expose blind spots — provided users stay active and intentional, not passive consumers.

But fulfillment is not the same as comfort. Many AI tools offer ease and efficiency, but true engagement arises from challenge, autonomy, and creative expression. The most meaningful uses of AI in publishing will be those that amplify human fulfillment — not diminish it.

If AI is to serve us constructively, it should make our lives more fulfilling, not merely more convenient. And that distinction matters. Fulfillment comes from expressing ideas, shaping knowledge, and exercising control over one’s work — not from outsourcing thinking entirely.

This raises a deeper question: if AI can increasingly perform the functions of human thought — drawing on data, patterns, and logic — what becomes of the human being, whose education, career, and contributions have long been shaped around the very faculties that AI now simulates?

The answer may lie in shifting our focus from replicating mechanical knowledge to deepening awareness — using AI not to replace thinking, but to free us to do the kind of thinking that machines can’t.

Ethical and Sensible Use Cases for AI in Scholarly Publishing

Editing Drafts

Generative AI can help refine drafts, especially those written in stream of consciousness without attention to grammar, typos, or brevity. Even for native English speakers, AI can polish writing to enhance clarity. However, careful human attention to the final output remains essential, as subtle changes in meaning or tone may require adjustment.

Thought Partnership

When colleagues aren’t available for brainstorming, AI can serve as a thinking partner. While not always accurate, it provides a space to explore ideas and arrive at better insights. As in any meaningful conversation, outcomes depend on the quality of questions asked — clear, focused inquiry leads to better results.

Peer Review Assistance

AI might assist with routine aspects of peer review that require no creativity — those tedious elements that create little fulfillment. However, human review becomes even more essential in the age of AI. While we desire speed and efficiency, AI lacks the quality of human attention. Quality attention requires time; it cannot be optimized or made efficient. It must be unhurried to observe the minutest irregularities.

Preserving Human Agency and Creativity

I am not in favor of AI tools dictating what we should do or offering predefined options on what to say or how to say it. We need to reverse the process: we should be the creators, initiators, context-builders, and clarifiers. AI should enhance our work — not define it.

By initiating thought ourselves and allowing AI to organize and present it, we retain our humanity. For any good outcome with AI, we must remain actively engaged in thinking. We are delegating certain aspects to AI, but we remain the boss — we must tell AI what to do, not be told by it what to do.

AI as Assistant, Not Mind

Does this mean we can outsource our thinking to another “mind”? Partly, yes, but only for lower-level tasks that don’t require high-value judgment. AI is not intelligent in a holistic sense — it is intelligent only in a partial, material sense.

Human thought is also not inherently intelligent. Thinking is just one function of the mind: it processes information, analyzes, communicates in words and images, and works through emotions. Like AI, human thought is based on memory, knowledge, and experience.

However, the mind also possesses attention and awareness, which give meaning to things before they are expressed through thought.

The Challenge of Non-Mechanical Learning

The greatest unspoken challenge in scholarly publishing is not just training people for skills or techniques, but cultivating talent that is not mechanical — minds that can look at issues with clear eyes, learn something new, and apply fresh, unburdened perspectives.

There are two ways of learning. The mechanical way absorbs knowledge without questioning, strengthening it through repetition until it settles in our awareness in a half-awake sort of way. Such knowledge becomes a deteriorating factor of the mind, causing it to close up.

The alternative approach is not about accumulating knowledge, but observing with a non-committed, empty mind that says “I don’t know what this is, but let me find out.” Such learning doesn’t rely on verbal descriptions but observes independently of them. This type of learning opens the mind.

Beyond Fear: Toward Creative Evolution

The scholarly publishing community must move beyond fear of disruption and toward thoughtful, intentional engagement with AI. By weighing both the human costs and benefits, we can shape policies that preserve the essence of human contribution while harnessing AI’s strengths where appropriate.

The critical question isn’t whether AI will replace human thought — but whether there are dimensions of the human mind that AI cannot replicate. In integrating AI into publishing workflows, we are presented with a profound opportunity: to rediscover and cultivate the uniquely human capacities of attention, awareness, creativity, and meaning-making.

Technology should not lead to our devolution, but rather support our further evolution — beyond technology itself. In scholarly publishing, this means using AI to liberate us from repetitive, low-value tasks so we can focus on the creative, interpretive, and intellectual work that truly advances knowledge.

A Call for Human-Centered Innovation

AI will undoubtedly reshape the workflows, roles, and economics of our industry. But the aim must not be dehumanization in service of optimization. Instead, it should be a reinvestment in what makes us most human.

As we design policies, implement tools, and train teams, we must stay anchored to a central question:

What do we want to protect, preserve, and promote about human contribution in the age of AI?

If we get it right, AI won’t signal an endpoint — but a new beginning. A tool not just for better systems, but for better selves. For deeper creativity, greater fulfillment, and more meaningful contributions to the scholarly record.

Author’s Note: This piece focuses primarily on the psychological and cognitive dimensions of AI in scholarly publishing. The environmental impact of AI is an important and complex issue that also warrants serious attention. While it is beyond the scope of this article, I acknowledge its significance and hope to explore it in a future piece.

Ashutosh Ghildiyal

Ashutosh Ghildiyal is a strategic leader in scholarly publishing with over 18 years of experience driving sustainable growth and global market expansion. His diverse career spans customer service, business development, and strategy, where he has collaborated closely with authors, institutions, and publishers worldwide. Ashutosh currently serves as Vice President of Growth and Strategy at Integra.
