One of the great promises of artificial intelligence (AI) is its potential use in translation. Though once the stuff of science fiction, and an ideal route to filling plot holes so that characters from different planets can interact (e.g., Star Trek’s Universal Translator, Doctor Who’s TARDIS Translation Circuit, or the Babel Fish from The Hitchhiker’s Guide to the Galaxy), automated translation technologies are increasingly close to reality. Researchers for whom English is not a first language often face barriers to publication in English-language journals, where biases can arise from equating the quality of a manuscript’s language with the quality and rigor of the research performed. Though we’re not there yet — the specialized jargon of many fields will likely take some time to filter into general-use tools — AI offers a potentially more level playing field for global authors.

This changes the equation for readers and publishers as well. Many publishers invest significant effort (and funds) in creating different-language versions of their journal articles, or at least of their article abstracts (we’ve done some work on this at The Scholarly Kitchen, although readership of our non-English versions of posts remains fairly low). This seems likely to become a relic of the past, as readers could instead create their own translated versions of articles. Translation rights sales may meet the same fate. While there’s great value in a skillfully translated version of a book, will AI translations be good enough for this purpose? Will a machine-translated technical manual be more readily accepted than a machine-translated book of poetry?

As we move toward an automatically translated future, below is a look back at the past. How did English become the dominant global language, replacing French in the 20th century? And when we think about systems to translate other languages into English, which of the 160 different “Englishes” do we mean?

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.


1 Thought on "AI, Translations, and the Dominance of the English Language"

There is also the same copyright issue that hovers over all AI output right now, at least in the US and Canada (I don’t know about Europe). That is, translations done by humans have been separately copyrightable from the original (e.g., a fresh translation of an old public domain work). But courts have ruled that output by non-humans (whether a monkey grabbing a camera and taking selfies, elephants making art, or computer-generated work) is not copyrightable. That question is being relitigated in the courts now for AI specifically, but the precedents weigh against AI-generated works being copyrightable.
