Editor’s Note: Today’s post is by Mohamed Mannaa. Mohamed is a Lecturer in Plant Pathology at Cairo University. Dr. Mannaa also serves as a reviewer and editorial board member for several international scientific journals. Reviewer credit goes to Chef Haseeb Irfanullah.

I write this as a scientist trained before artificial intelligence became part of the everyday scientific workflow, and as an editor and reviewer who now sees its influence from both sides of the publication process. I’m not opposed to AI. I use computational tools myself and accept that technological progress in science is inevitable. What concerns me is not the presence of AI, but what it is increasingly replacing.

In the current research ecosystem, AI is no longer limited to technical assistance. It is now routinely involved in drafting manuscripts, restructuring arguments, refining responses to reviewers, and — more quietly — contributing to peer-review reports themselves. Reviewers, asked to carry an expanding workload with little recognition, are understandably tempted to rely on such tools. Authors then respond, again with AI assistance, to critiques that may themselves be partially machine-generated. The result is a strange loop: scientific exchange still looks human-led, yet the cognitive labor is progressively outsourced. Humans remain formally “in the loop,” while the struggle of thinking, the part that once shaped judgment and craft, is gradually removed.

[Image: 3D rendering of a digital brain surrounded by light bulbs with question marks inside]

I remember what it meant to write before this era. Writing a single paragraph often required reading dozens of papers, abandoning weak ideas, and slowly learning how to synthesize evidence into something coherent. It was difficult, frustrating, and time-consuming. But that difficulty was not incidental. It was how scientific thinking was formed. As many writing scholars have argued for decades, writing is not just a way to report what we think; it is one of the main ways we arrive at what we think (see, for example, Emig’s classic discussion of writing as a mode of learning).

When I look back at my own work produced before widespread AI tools, I feel pride, not only in the results, but in the intellectual path that led to them. At that time, a manuscript was generally a reliable reflection of the author’s understanding. Today, I find it harder to assume that polished writing necessarily corresponds to deep comprehension.

This concern extends well beyond scientific publishing. It has become normal for people to ask AI systems to draft routine emails, which are then read and answered by other AI systems. At first, this looks like efficiency. Over time, however, it can erode confidence in one’s own ability to communicate. When people repeatedly outsource wording, judgment, and tone, they begin to distrust their own voice. Eventually, the hesitation is no longer about expression, but about thinking itself.

I know similar anxieties accompanied earlier technological shifts. As a PhD student, I used software to generate graphs and run statistics. My supervisor — trained decades earlier — described the hours he spent drawing figures by hand and computing standard deviations manually. From his perspective, my generation already had it easy. He wasn’t wrong. But there is a critical distinction: those tools replaced manual effort, not cognitive effort. What we are facing now is different in kind. AI does not merely accelerate tasks; it performs synthesis, interpretation, and articulation — activities that were once inseparable from intellectual development. This is precisely why scientific communities are now debating where transparency, authorship, and accountability should sit when AI tools are used (for example, WAME’s recommendations that chatbots cannot be authors and that use should be disclosed).

As a microbiologist, I find it difficult not to view this through an evolutionary lens. In biology, long-term dependence on a host or partner can lead to reductive evolution: organisms that outsource essential functions lose genes, autonomy, and complexity over time. They may survive efficiently within a system, but they no longer function independently. A striking example is Mycetohabitans rhizoxinica (formerly Burkholderia rhizoxinica), an endosymbiotic bacterium living inside Rhizopus fungi. Descended from free-living relatives, it underwent substantial gene loss after adapting to an intracellular niche and became specialized around producing rhizoxin, a metabolite crucial for its host’s pathogenic lifestyle (a detailed genomic account is here). It persists, but largely by serving a narrow function within a larger system. It is difficult not to see a parallel risk in AI-driven knowledge systems. Survival inside an efficient structure does not require autonomy, only functional contribution. When thinking is systematically externalized, participation can remain while agency quietly declines.

The danger is not an immediate collapse of scientific quality. It is more subtle: a gradual shift from understanding to fluency, from judgment to plausibility, and from wisdom to surface coherence. Over time, we may produce researchers who can generate convincing text across disciplines without possessing the depth required to truly advance them. And because the output looks “good,” the loss is easy to miss, until it accumulates.

So, what would “self-control” look like in practice, and why would anyone choose it, especially under pressure to publish, teach, and compete? The honest answer is that incentives matter. If hiring, promotion, and funding reward speed and volume above clarity and originality, dependence will grow. But boundaries are still possible, even within imperfect systems.

One simple line to draw is between using AI as a tool and using AI as a substitute for cognition. Tools can help with grammar, formatting, translation, or code troubleshooting. The red zone begins when AI is doing the argument itself: generating the logic, drafting the claims, producing the “voice,” or writing the peer review you are supposed to think through. In that zone, the person’s name remains on the work, but the mind has stepped back.

A practical rule I’ve found useful (and increasingly recommend to trainees) is: write the first version with your own brain, then use tools only for polishing. If you cannot explain the central argument without looking at the AI output, you don’t own the work. Another is to treat peer review as one of the last protected spaces of human judgment: if we automate reviews, we send the message that evaluation is just another box to tick.

None of this requires rejecting AI or pretending we can roll back time. It requires something more modest and more difficult: acknowledging that thinking is a finite human capacity that weakens when unused. If the struggle to think is systematically removed from scientific practice, the ability to think deeply will erode with it. No amount of computational efficiency can compensate for that loss.

Mohamed Mannaa

Dr. Mohamed Mannaa is a Lecturer in Plant Pathology at Cairo University. His research focuses on plant–microbe interactions, microbiome engineering, and sustainable disease management, with broader involvement across multiple areas of microbiology. He has authored over 55 peer-reviewed publications in indexed journals. He was recognized among the world’s top 2% of scientists in the Stanford/Elsevier global ranking (single-year dataset, 2025). Dr. Mannaa also serves as a reviewer and editorial board member for several international scientific journals.

Discussion

7 Thoughts on "Guest Post – When Thinking Is Outsourced: A Warning from a Scientist Trained Before AI"

This is indeed beautifully substantiated, especially for me as a microbiologist. My experience is the same, though I do not depend on AI at all.

Yes, “writing is not just a way to report what we think; it is one of the main ways we arrive at what we think.” It is especially alarming to learn that some peer reviewers are employing it. I do not use it myself, though I gather it works in the background, for example by flagging misspellings. That is fine, provided it does not confuse one word with another.

But I am not so sure about your “critical distinction,” namely that “those tools replaced manual effort, not cognitive effort.” In my generation there was first the slide rule and then the hand calculator. The cognitive effort of multiplying was replaced and, after repeated use, the results were deemed reliable. All of this is part of a growing role for AI that, if used with care, can greatly help.

Robert Maxwell’s insight was that academics did the creative work, which could then be vetted, bound, and sold, one of the starting points for the pub/perish industry. Garfield’s Current Contents is now a spot among the many journals owned by companies such as Clarivate and Elsevier (RELX). All of it is sustained by an industry that vets the information for users who rely on it for funding and ranking, shaping academics’ decisions and futures.

A well-crafted article with substantive insights from the academic world is the critical metric, but it remains stillborn unless it is published. As AI advances, it becomes increasingly difficult to ascertain how to attribute authorship of insights, whether they come from past articles, colleagues, students, think tanks, or the private sector. The validity rests with the strength of the “discovery.”

Attribution (pub/perish), the assignment of patents and copyrights, and related validation, particularly with AI’s creative finger involved, need to transcend the previous academic rationale around the table in the Scholarly Kitchen.

Thank you for so clearly outsourcing many of my half-formulated concerns. And now, where I live, the provincial government has just announced a partnership to bring AI instruction into K-12, which I find incredibly worrisome. Done right, these tools could support struggling kids and help them master concepts and skills. But if it’s based on using LLMs to help with “research,” I don’t see that happening. I keep telling my kids their superpowers for succeeding in life are things like reading novels, writing articulately, and being confident in their conversational skills, along with being able to solve practical life problems and being resilient and creative when things don’t go their way. Yet we are stripping all these skills away from an entire generation. I know people have always gone on about “kids these days,” but I know many teachers with 20+ years of experience who say something has massively shifted in the past few years.

Thank you all for these thoughtful comments. I am grateful that The Scholarly Kitchen gave this concern a serious space, and I am glad the article resonated with others who are watching similar changes from different positions in the scholarly ecosystem.
What worries me most is not AI-assisted polishing, but the gradual normalization of AI-assisted judgment. In peer review, I increasingly see reports that are fluent and plausible, but sometimes patterned, generic, or weakly connected to the manuscript’s real scientific core. With younger researchers, I also see a growing hesitation to trust their own first thinking before asking AI.
That, to me, is the line worth protecting. AI can support the work, but it should not become the place where the question, the judgment, or the intellectual confidence begins.
