Editor’s note: Today’s post is by guest blogger Claudia Taubenheim, a Research Integrity Consultant advising on integrity issues for PA EDitorial and the journals it supports. She holds a PhD in Microbiology (Kiel University) and has over seven years’ publishing experience, including as Senior Managing Editor at PA EDitorial.

How often have you, as an author with English as an additional language (EAL), received reviewer or editor comments like, ‘Please have a native speaker edit your work’? How much longer does it take you to write in a language that is not your first? And to the fluent or native speakers among you, be honest: how often do you, consciously or not, judge the language instead of the science? My first language is German, and I often think, paraphrasing Gloria from Modern Family: ‘Do you know how smart I am in my language?’

For decades, EAL researchers have faced systemic disadvantages in publishing. Now, AI writing tools such as Grammarly, Paperpal, Perplexity, Claude, or ChatGPT promise relief from this linguistic burden. Yet they also bring new risks into science: while they offer seamless language polishing, they carry the potential to blur our voices, standardize our style, and introduce new biases.

[Image: global map showing connections between a variety of diverse people]

Language Barriers in Science: Persistent Inequity

Over the last century, English has become the undisputed global language of science: by some estimates, 98% of publications in major citation indexes such as Web of Science are written in English. This creates massive pressure on researchers from non-English-speaking backgrounds. Research shows that EAL authors spend significantly more time preparing manuscripts, face higher rejection rates for “language issues,” and even avoid conferences due to anxiety about English.

This language bias extends directly into peer review. One study showed that abstracts written in “native-like” English were rated higher for scientific quality than identical abstracts in “non-native-like” English. In my role as editorial manager for several journals, I have seen rejections that hinge on phrases like ‘improve your English’ despite robust data. Yet for many researchers, professional language editing is prohibitively expensive. As a result, research output becomes dominated by native English speakers, restricting the diversity of voices and ideas in global scholarship.

The Promises of AI for Non-Native English Speakers

Here, AI seems transformative. Large language model tools can polish grammar, refine tone, and enhance clarity and structure. Research suggests that these tools improve the overall quality of EAL speakers’ writing, and another study shows that AI tools can help us write better papers by improving coherence and flow. But this can go spectacularly wrong: one study used so-called tortured, or even plainly wrong, phrases such as “bosom peril” instead of “breast cancer,” and was recently retracted after being called out. The issue appears to run deeper still, as this was most likely a case of attempted fraud.

From my own experience, AI tools help streamline common tasks: drafting outlines, suggesting keywords, or rephrasing dense sentences. For example, for this article, I used Perplexity for the outline and ChatGPT, customized to my writing style, to help me formulate my thoughts. These tools help level the playing field, allowing EAL authors to focus more on ideas and less on language form.

The Dangers: New Biases and Failures of AI

Yet AI is not a cure-all. Current models’ training data is skewed toward Western-style English and Western ideas, so AI may favor that style and introduce further biases. Worse still, it can introduce factual inaccuracies known as ‘hallucinations.’ Also troubling, non-native researchers have reported that AI detectors erroneously flag their writing as machine-generated: a 2023 study found that simpler sentence structures, common among many non-native writers, triggered false positives, risking false accusations of misconduct.

Publisher Perspectives: Opportunity with Caution

Publishers are actively engaging in the current discussion; The Scholarly Kitchen alone has published more than 150 posts covering different aspects of AI. Current guidelines from most publishers generally permit the use of AI for light editing, specifically grammar, conciseness, and clarity, provided that authors disclose its use and AI is not credited as a co-author. The responsibility remains firmly with the human writer. However, ethical use of AI still requires clearer, risk-based guidelines that distinguish low-risk support tools from high-risk generation tools, a distinction discussed extensively in the field, for example by the Committee on Publication Ethics (COPE). Best practice for non-native authors might include transparency in submissions, retaining initial drafts, and documenting all AI assistance.

From my own experience, the most effective way to use AI is as a ‘second reader’ rather than a ghostwriter. I draft in my own words first, then use tools like ChatGPT or Perplexity for grammar checks, clarity, or flow. I never accept suggestions blindly: I always make sure my intended meaning still comes across clearly, and in the way I wanted it to. This way, my academic voice remains intact while still benefiting from linguistic support. For me, AI is most useful when treated as a language editor: helpful, but not the master of my ideas or style.

What Needs to Change? Pathways to a More Inclusive Science

If we are serious about equity in science, language must no longer be a gatekeeper. Publishers, editors, and peer reviewers have the power and the responsibility to reform language policies so that manuscripts are not dismissed before their ideas are even considered. Desk rejections based purely on linguistic grounds waste valuable knowledge and silence the diversity of voices essential for scientific progress. This does not mean lowering standards for clarity; it means evaluating the scientific content first, and only then working collaboratively with authors to refine expression if necessary.

The same intentionality must apply to AI. Policy should be transparent, risk-based, and inclusive, and there is an urgent need for widely accepted AI guidelines in publishing. Most AI writing tools are trained primarily on English-language data from Western academic and journalistic sources (or even ‘just’ Reddit), which risks reinforcing a single ‘standard’ style and marginalizing other scholarly traditions. Integrating multilingual datasets and diverse rhetorical styles into model training can help reduce this bias.

Funding agencies and publishers also have a crucial role to play: providing subsidized or in-house editing support for researchers, waiving fees for authors in low- and middle-income countries, and offering access to advanced language tools.

Finally, reviewers and editors need training in linguistic and cultural sensitivity. Understanding the challenges faced by EAL authors, alongside the capabilities and limitations of AI, will lead to more informed and fairer decision-making. Editors-in-Chief, in particular, are the crucial link between researchers, reviewers, and publishers. By setting journal-level policies on the use of AI, defining language expectations, and supporting EAL authors, they can make the biggest difference. An Editor-in-Chief who instructs reviewers to judge science before style, or who ensures affordable editing support, actively shifts the culture toward equity. Without their leadership and support from publishers, policy statements remain good intentions.

Conclusion

Language barriers have long influenced who is heard in science. AI tools now offer a way to close the gap, but without care they risk reinforcing the very biases they are meant to overcome. Greater awareness of the challenges faced by EAL authors not only promotes inclusivity but also enriches the global scientific literature. To achieve this, we need thoughtful language policies, fairer peer review, and AI systems designed with diversity in mind.

In my own writing journey, AI has become both a relief and a risk. It eases the burden of writing in English, but I also need to reflect constantly on what part of the text is still ‘me’. My hope is that as AI becomes inevitably embedded in publishing, we move toward systems where Editors-in-Chief, reviewers, and publishers alike recognize both the potential and the pitfalls of AI and choose policies that amplify, rather than flatten, the voices of non-native scholars.

What one change would you make to make publishing fairer for EAL researchers?

Author’s note: I used Perplexity and ChatGPT to outline and write this article. 

Claudia Taubenheim

Claudia Taubenheim is a Research Integrity Consultant at PA EDitorial, where she advises on integrity-related topics. She holds a PhD in Microbiology from Kiel University and brings eight years of publishing experience, including her role as Senior Managing Editor at PA EDitorial. In addition to her consultancy work, Claudia is a freelance medical writer, a project coordinator for a clinical research unit at the University Hospital Schleswig-Holstein, and a career coach for scientists.

Discussion

6 Thoughts on "Guest Post — From Language Barrier to AI Bias: The Non-Native Speaker’s Dilemma in Scientific Publishing"

Small, but pretty important correction: while the picture of scholarly publishing seen from the Web of Science or Scopus suggests 98% is in English, you see a very different reality when you look at OpenAlex (estimated at only 68% English).[1] When we last examined the journals using PKP’s Open Journal Systems (the open source publishing platform) using data from 2020, we found only half published in English.[2]

[1] https://asistdl.onlinelibrary.wiley.com/doi/full/10.1002/asi.24979
[2] https://direct.mit.edu/qss/article/3/4/912/114119/Recalibrating-the-scope-of-scholarly-publishing-A

Thank you, Juan, very interesting numbers! I wonder whether these numbers differ across academic fields?

Thank you for this insightful and much-needed article. It brilliantly highlights the systemic challenges faced by EAL researchers and the double-edged sword that AI tools represent. Your point about AI detectors potentially flagging the simpler sentence structures common among non-native writers is particularly concerning.


What steps do you think journals and publishers can take to create more robust safeguards against this type of bias, ensuring vital research from all backgrounds reaches the audience it needs to?

Thank you for your comment and a very interesting read!

To answer your point, I would suggest a more systemic solution: journals could, for example, offer approved AI-supported translations, allowing papers to appear in both English and the author’s native language. These versions could be indexed and made searchable in databases like Google Scholar so that important findings are discoverable across languages.

On a smaller scale, publishers could also separate language review from scientific assessment and provide access to trusted (AI) translation or editing tools within the submission system to help authors with language issues.
