Editor’s note: Today’s post is by Sami Benchekroun, Co-Founder and CEO of Morressier.
There’s something within us that resists change, even when adaptation is a matter of survival. Our reactions as the pandemic took hold in early 2020 illustrated this powerfully. As societies, companies, families, and individuals, we were often slow, even resistant, to make the deep changes the moment demanded. It felt, at times, as if we simply didn’t want the pandemic to be happening. But will doesn’t change fact, and so adapt we did: social distancing, remote work, and, of course, the incredible technological achievement of novel vaccines. This adaptation brought with it great opportunity. At Morressier, we began many of our partnerships during this period by supporting online and hybrid conferences. The shift from in-person to virtual was not perfect, but it greatly increased the accessibility of academic events.
Resistance, adaptation, opportunity: there are lessons for scholarly communication in this progression. I would argue, further, that we need to accelerate this progression and collaborate at a greater scale than ever before to achieve the transformation of our industry.
For years, this industry has planned and strategized to navigate the market shift from subscription to open access (OA) business models, and the consequent transformation of the author into a customer. No doubt this shift has been profound, but it is only the beginning. Even as the academic publishing community is still grappling with the changes of the last decade, far more radical transformation is coming: from cultural shifts to technological revolution. In fact, that transformation is already here. We need to be looking forward, and building the infrastructure that will provide an agile foundation for the next phase of scholarly publishing.
Emerging technology has given us the opportunity to deliver on changes that researchers have been calling for. Artificial intelligence (AI) will force adaptation and the use of tools that can evolve rapidly to ensure research integrity and accelerate our publishing workflows. We can also learn from web3 (Web 3.0), a decentralized vision for the Internet that could transform everything from financial systems to the fabric of our communities. It’s easy to see both of these as threats to our current approach to infrastructure and publishing workflows. But maybe they are solutions instead. AI and web3 both have the potential to address long-standing community challenges as varied as peer reviewer recognition, plagiarism detection, and the pressure to publish.
Peer review in a future with AI authors
The practical applications of machine learning are something we’re increasingly comfortable with in our daily lives, from apps that find the fastest route across the city to streaming services that build personalized playlists. These discovery and recommendation tools have direct analogies in scholarly publishing, and there are huge opportunities for further innovation in this space that can increase both the efficiency and quality of published research.
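To make the analogy concrete, here is a minimal, purely illustrative sketch of that recommendation logic in Python, assuming scikit-learn is available; the abstracts are invented, and real discovery systems layer in citations, usage data, and richer embeddings.

```python
# A toy content-based recommender: represent each abstract as a TF-IDF
# vector, then rank candidates by cosine similarity to the article a
# reader is currently viewing. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Deep learning methods for protein structure prediction.",
    "A survey of transformer architectures in natural language processing.",
    "Protein folding dynamics studied with molecular simulations.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
scores = cosine_similarity(vectors[0], vectors).ravel()

# Recommend the most similar article other than the one being read.
ranked = scores.argsort()[::-1]
recommended = next(i for i in ranked if i != 0)
print(f"Recommended article index: {recommended} (score {scores[recommended]:.2f})")
```

This is the same basic mechanics behind a “related articles” feature: no editorial labels are needed, only the text itself.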
AI is still in its infancy, however, and as tools like ChatGPT spark our imagination for what is to come, we need to think about the deeper implications for our industry. The language and reward models available right now can do a passable job of writing abstracts, for instance; it’s not a huge leap to imagine a future where these tools can create novel work. How will the industry adapt? I would further challenge each of us to think about the underlying challenges in our community that AI authorship could help solve. Could AI remove barriers for aspiring authors (like myself) for whom English is not a first language? Could AI add a layer of objectivity to peer review, in addition to its ability to analyze at a scale humans cannot? With researchers stretched more thinly than ever before, is a little authoring help from AI acceptable, given appropriate industry regulation?
Furthermore, if AI has a potential future in authorship, then it must have parallel applications in editorial processes. New tools utilizing machine learning are already making peer review workflows more efficient, with automated research integrity checks and a roadmap toward solutions for salami slicing (splitting one study into many thin publications), plagiarism detection, and more. These tools are being designed so they can evolve rapidly as AI advances. And as with all AI, the more information we feed into these tools, and the more integration we have across the industry, the better they’ll be able to identify misconduct.
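As one hedged illustration of the kind of building block such integrity checks can rest on (not a description of any vendor’s actual tooling), here is a toy Python overlap check using word five-gram “shingles” and Jaccard similarity:

```python
def shingles(text: str, n: int = 5) -> set:
    """Word n-grams ('shingles') from a document."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str) -> float:
    """Jaccard similarity of two documents' shingle sets, in [0, 1]."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa and sb else 0.0

# Invented toy inputs; a score near 1.0 suggests the texts share most
# of their content and should be routed to a human editor.
a = "we measured the effect of temperature on enzyme activity in vitro"
b = "we measured the effect of temperature on enzyme activity in cells"
print(f"Overlap score: {overlap(a, b):.2f}")
```

Production systems combine many such signals (metadata, images, statistics) and still route anything suspicious to a human editor for judgment.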
The evolution of governance from OA to web3
Web3 is much bigger than crypto, or blockchain, the underlying technology that enables decentralized systems, and now is the time to start thinking about how it might impact the scholarly publishing ecosystem. Scholarly publishing exists in a world connected by the internet, and as that connective tissue evolves, it is inevitable that change will come.
In some ways, I see parallels between the web3 paradigm shift in how information is generated and the Open Access movement. Web3 means new models of governance, ownership, and reward, and is marked by much greater decentralization, in both content generation and dissemination. OA has driven a profound shift in business models and reputational mechanics, and web3 has the potential to do the same. Imagine, for instance, the impact of web3 on reviewers being recognized and rewarded for the huge value they add to the scholarly community. While these changes will not happen instantaneously, and indeed may unfold in ways we haven’t even begun to predict, it’s time for scholarly publishing to start imagining the possibilities.
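As a thought experiment only, here is a toy Python sketch of one mechanism web3-style systems use for that kind of recognition: a tamper-evident, hash-chained ledger of review contributions. Every identifier below (the function name, the ORCID, the DOI) is invented for illustration; this is not a proposal for any specific chain or token.

```python
import hashlib
import json
import time

def record_review(chain: list, reviewer_orcid: str, manuscript_doi: str) -> dict:
    """Append a tamper-evident review-credit entry to a toy ledger."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "reviewer": reviewer_orcid,
        "manuscript": manuscript_doi,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Each entry's hash covers the previous entry's hash, so past credit
    # cannot be rewritten without detection: the property decentralized
    # ledgers rely on for portable, verifiable reputation.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

ledger: list = []
record_review(ledger, "0000-0002-1825-0097", "10.1234/example.5678")
print(ledger[-1]["hash"])
```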
Technology strategies that center cultural change
The transformation of the peer review process through machine learning and the community mechanics empowered by web3 could position the research community for positive cultural change. As web3 facilitates a more decentralized model of content creation and distribution, perhaps we’ll find the opportunity to move beyond a publish-or-perish mindset and refocus on sharing information without the fear of scooping that hinders scientific progress in some disciplines.
Today, the pressure to publish contributes to research misconduct. It leaves people vulnerable to predatory journals, and it even means that a researcher’s priorities might shift from cutting-edge science to publishable science. Emerging technology, especially AI, will provide incredible opportunities to our community. Cultural change is really hard, messy, and incremental. In some ways, it might be easier to change processes and infrastructure, and hope those changes ignite the cultural changes we need. But time and again, people are the catalyst for change. That’s why I think there’s so much potential when we apply emerging technology to some of the community’s biggest pain points: peer review, the pressure to publish, or even the reputational mechanics of our current career paths.
Many actors in the ecosystem will resist change, but if we are to draw lessons from the pandemic, adapt we must. How we change and which opportunities we leverage is up to us. I’m a firm believer in collaboration around disruption, and so I’d like to close with an invitation to work together beyond the boundaries we’ve traditionally worked within. Let’s build the infrastructure that embraces the power of AI and other emerging technologies, because when we take hold of the reins ourselves, as a community, we can drive toward the meaningful cultural change this industry needs.
Let’s start with a mindset shift: away from resistance to change, and toward curiosity to discover more. That curiosity is the spark that will drive new solutions, new ideas, and a new, more efficient infrastructure for scholarly communications.
Discussion
Sami Benchekroun’s points are all well taken, and I am sympathetic to most of them. I am totally on board with workflows and “salami-slicing” detection aided by AI. No question. But the piece made me reflect on what happens in my head when I am asked to review a manuscript for a journal.
When I evaluate a submission, so much thinking happens before I arrive at a recommendation, with many subjective contingencies. These subjective variables include such things as timeliness, which means more than “Is this new?”; it means something like “Given the state of the discipline and our society at this moment in time, is this manuscript worth publishing?” Consider what tacit knowledge (both diachronic and synchronic) is required to answer that question. For me, it means thinking about, for example, recent conferences I have attended and the discussions happening during coffee breaks there. It means thinking about the world outside my discipline, reflecting on the news and what people are worried about these days. It means using heuristics to make projections about where my discipline is headed, including the heading of my sub-discipline specifically, and how it ties into the direction of the other sub-disciplines the journal may seem to prioritize at any given point in time.
But that of course is not all, not by a long shot. I also think about the readers of the journal for which I am supplying a review. That thinking involves real-world knowledge beyond a “large language model.” It involves knowledge of myself, who invariably has been a contributor to (i.e., an author for) the journal that has requested the review. It involves knowledge of me as a reader of that journal: knowing what kinds of epistemologies I expect to find when I open the journal, and which “conversations” in my sub-discipline have been particularly important in advancing it to the state of the art right now. It involves putting myself in the heads of people I have actually met, which allows me to anticipate responses, both positive and negative.
Along those lines, what I see as crucially different from what AI is capable of (at least today), in terms of my process as a journal reviewer, is feedback to the author. If it is a “reject,” well, then it’s a reject. But if the decision is “R&R,” then my feedback tends to be lengthier, needing to explain, hedge, and provide rationale for my queries and critiques. That in itself is complex, and involves more than statistical probabilities, algorithms, or decision trees. It demands knowing how to signal that some aspect of the submission is a sine qua non for further consideration of the manuscript, as opposed to mere quibbles about use of terminology, for example. It requires me to see not only the manuscript in its current state, but how it could shape up to improve: not on a mere discoursal or rhetorical level (AI’s wheelhouse now), but in terms of how, for example, certain ideas could be further developed, not because the journal parameters require it or because algorithms show that articles in Journal X tend to follow that pattern, but because I as a reader was confused or had to read something more than once to grasp what the author was trying to say. As a reviewer, I am not only looking at the words on the page; I am reading “intention” in the discourse, inferring (inter)locutionary force.
It is one thing to ask AI to give feedback on the coherence, cohesion, and clarity of a college essay on the Civil War. Asking any large language model right now to go through the same cognitive processes as a human scholar is asking too much. (For now, at least.) Now multiply all I have said by two. The peer review process generally requires two disinterested assessments. AI should not be subjected to different rules.
The data and programming that inform one “GPT” can bias it just as much as the data that inform you and me. Therefore, the only way I can see AI being used realistically (in our lifetimes) for judging the merits of the scholarship of our peers is if it were “Reviewer 3” (or similar). But let’s put it this way: if one day a journal I am considering submitting to suddenly says, “Our reviewers are not human, but they’re faster,” that’s the day I submit somewhere else.