In today’s post I hope to provide a template for scholarly societies wondering how to grapple with the overwhelming and omnipresent prospect of an AI future. The possibilities are broad, philosophical, alarmist, and technical. It is tempting for us to start singing “La La La La La” as loudly as we can and hope it all goes away. Yet I do believe that, as scholarly societies, we are in the right place to ask questions, questions that may shape the academic community’s response to AI. We will not have answers right now, but let the discourse begin!
We are talking about nothing else these days, it seems. AI engines exploded into our consciousness over the last couple of years, though it was Spring 2023 when students started using them in the classroom. We are all pondering a future dominated by machines, with humans running scared. If this sounds like a movie script, it really is. My favorite AI movie, by the way, is Ex Machina, Alex Garland’s 2014 film starring Alicia Vikander as Ava, a self-aware and deceptive robot – a sentient AI. It is quite a psychological thriller, a game of manipulation between Caleb, a young programmer played by Domhnall Gleeson, Nathan, Ava’s creator, played by Oscar Isaac, and the AI herself.
As Nathan says, “One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction.” Spoiler: Nathan doesn’t last long.
There is no question that the power of generative AI is rapidly increasing, at a pace that is hard to comprehend. We have real concerns about how human capacity will relate to AI’s undoubted future ability to overtake many of our functions. How will we relate to machines, and indeed, to each other? And yes, while there are many doomsday scenarios, pitting people against machines, countries against each other in a race for generative AI dominance, there is also optimism. Perhaps AI will democratize knowledge, leading to greater global wisdom, improved healthcare outcomes, and an enhanced quality of life across the world’s population.
So yes, I know, it is easy to dream and fear in equal measure, but let’s turn to the here and now.
In this post, I want to set aside the hype, set aside fear, and focus on how a scholarly society, such as my organization, the American Mathematical Society (AMS), is preparing for how AI may shape its discipline and community.
The President of the AMS, Bryna Kra (the Sarah Rebecca Roland Professor of Mathematics at Northwestern University), convened a group of eminent mathematicians — and the not so eminent me — as an AI Advisory Group to the society. Chaired by Fields Medalist Akshay Venkatesh, FRS, of the Institute for Advanced Study, this group has begun to tease out the questions we may need to consider to grapple with AI in our field, across publishing, research, and education.
“The AMS Advisory Group charge is to frame questions related to AI that are of importance to the mathematical community, with the goal that existing committees in the AMS will then be able to study these questions and develop resources for the community.”
Below, I will disentangle our initial approach a little, to give you a sense of questions that are already front of mind, expecting that we will have many more.
As generative AI models grow, they incorporate data — what we are currently calling training data. This data may include published concepts, proofs, and results. Yet this assimilation of content hoovers up the good and the bad, the correct and the incorrect, the published and the retracted. There is no taking back data from generative AIs. What will this mean for the integrity of the scholarly record? Will lecture notes include scraped, unpublished content generated by AI with no indication of its origin? This is already happening for mathematical software (see this excellent article in SIAM News December 2022 by Tim Davis and Siva Rajamanickam). Will AI generate false content based on inaccurate training data?
Peer review seems to be at the heart of the matter. On the one hand, it is increasingly difficult to find reviewers capable of and willing to perform high-quality review – in math, a proof may extend to 100 pages – just imagine! On the other hand, we know that quality, unbiased peer review is essential. AI may be able to help in mathematics, perhaps through automated proof checking. Even so, it may be that human curation becomes ever more important, in partnership with AI.
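To make “automated proof checking” a little more concrete: proof assistants such as Lean, already used by some mathematicians, mechanically verify every logical step, so a referee need not re-derive the argument by hand. A toy illustration (the theorem name here is invented for the example):

```lean
-- A trivial statement, but the principle scales: the Lean kernel
-- checks each inference, whether the proof is one line or 100 pages.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If a submitted proof were accompanied by such a formalization, the verification burden on human reviewers could shift from correctness to significance and exposition.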
The training data is, of course, an issue in itself. Where is this content coming from? Even in a world of open access, copyright remains a legal force: it sits with the authors, who grant licenses to publish, with any form of reuse permitted provided there is attribution to the author (CC BY). How on Earth is an AI engine going to attribute effectively? I suspect it will not. Will publishers restrict their content further and sue when this right is breached? It is already happening. Will copyright law need to change?
On the positive side, while we are all implementing guidelines and best practices for using AI in writing manuscripts (our guidelines can be found here), there is a real potential benefit for publishers and authors. The American Chemical Society has partnered with Writefull to integrate AI-based language services into the ACS publications workflow, leading to significantly enhanced efficiency in the production workflow. An interesting article in ACS Nano (Best Practices for Using AI When Writing Scientific Manuscripts, ACS Nano 2023, 17, 4091–4093) lays out strengths and weaknesses of generative AI for authoring. Will AI be useful and acceptable in cleaning up language for an author whose first language is not English?
Colleges and universities appear to be leaving it up to faculty to thread their way through AI. Will there be standard AI policies across standard courses, to guide faculty? Will there ever be a way to detect when student work has been AI-assisted?
What will generative AI mean for those in the K-12 classroom? This recent commentary from the Brookings Institution is useful. The commentary delves into whether AI should be banned or embraced, using fascinating current examples. In the end the recommendation is for schools to develop strategies to establish guiding principles across the many generative AI engines emerging, provide training resources for teacher professional development, empower educators to implement principles, and help overburdened districts with resources.
How do we help our scholarly communities engage with AI? Should societies provide guidance, tutorials, and think spaces where society members can share best practices around AI?
For us at the AMS, an interesting question is how we facilitate the AI community’s engagement with mathematics. There is interest in the mathematical theory of AI, of course. Perhaps a society such as the AMS could host courses, run annual prize challenges, and curate datasets tuned to the mathematical community?
As AI tools develop, be they generative AI or large language models (LLMs), we have to remember to translate these developments for a broad scholarly community.
Scholarly societies are well placed to consider how AI may shape their community’s future.
We have a chance to develop our publishing products, teaching tools, and research methods collaboratively in service of our communities. This is what scholarly societies are here to do.
I hope that sharing our initial approach to considering AI for mathematics may help others begin to grapple with this exciting yet overwhelming topic.