Editor’s note: Today’s post is by Chef Angela Cochran and Neil Blair Christensen, Sales Director at Morressier and the Molecular Connections Group.
In September 2018, SSP’s New Directions seminar facilitated an Oxford debate on artificial intelligence (AI) in Peer Review, where Angela Cochran represented the opposition and Neil Blair Christensen the proposition.
Now, they revisit the AI in Peer Review topic to reflect on what has transpired since 2018, discuss whether their positions have changed, and consider directions for AI in Peer Review.

2018 Angela
Ever the skeptic, I reviewed my notes from the 2018 debate and found my concerns centered mostly on questioning the premise that AI would make any part of editorial work better. I was and remain very concerned about bias in the training data and the loss of creative and serendipitous discovery. My arguments focused on humans, as flawed as we may be, being better than machines at performing thoughtful analysis in evaluating scholarship.
2018 Angela was worried that authors presenting new and emerging topics would be penalized by the AI that is only aware of what has been published in the past. I was concerned that early career researchers would be penalized by an algorithm that boosts previously published authors. Of course, both of these things are true of human review as well, but in 2018, it seemed MORE likely that the machine would get this wrong than a human potentially would.
I think more than any other factor, I believed that critical thinking about new ideas, new discoveries, and mind-blowing breakthroughs was a task that only the human brain was capable of, and that relegating any of it to machines was a bit of a resignation.
2018 Neil
My views in 2018 were heavily influenced by my work with an AI start-up called Unsilo. I didn’t consider myself a technological determinist, but I enjoyed, and still enjoy, exploring blended horizons of technology and human productivity.
At Unsilo, our search for product-market fit led to the launch of a suite of automated technical checks to help people reduce task workloads by checking compliance, extracting insights, and matching content and people. The solutions we developed used a mix of natural-language processing, machine learning, unsupervised vector modeling, and rudimentary rules. Not everything in “AI” solutions was or is done with sophisticated AI. We thought there were great opportunities for assistive AI in peer review, and so, I found myself invited to an Oxford Debate with Angela.
Surely, I suggested, increased automation in peer review was not a radical proposition. The use of these supportive technologies would be normalized, driven by systemic challenges in scholarly workflows that could not be addressed by people alone. Publishing complexity and scale were rapidly expanding while the industry doubled down on finding efficiencies. Stakeholders, many of them volunteers, were increasingly pressed for time. Attention on average submissions was decreasing, integrity risks were increasing, and reviewer invitation rejections were increasing. It seemed to be a perfect storm. Directionally, the feeling was that the outcomes of scholarly incentives, academic behaviors, culture, business models, and workflows were on trajectories that could benefit from AI support in peer review. Risks could be mitigated, and in any case should be weighed against the risks of a trust- and volunteer-based ecosystem under pressure. AI in peer review, as I saw it, was about supporting rather than replacing productivity.
2026 Angela
A whole lot has changed in the 8 years since I did pretty well on the circuit as one of scholarly publishing’s AI-naysayers. Obviously AI development has made giant leaps (some for good and some putting us on a probable path of destruction – some things never change). Certainly my understanding, which is just a smidge higher than novice, has evolved. So what would I argue today about AI in support of peer review? A few things.
I was listening to clinicians on a webinar about how AI could help doctors with tasks such as documenting billing codes and getting pre-authorization from insurance companies. One clinician said the smartest thing of the entire presentation: we should not use AI to make tasks easier that should not be done in the first place. I think about this a lot when contemplating the use of AI in support of peer review.
I guess what surprises me a bit is that 90% of the AI in peer review tools currently being demoed are in the research integrity space, which is nowhere near 90% of the opportunity for improvement. I am not saying that there is no place for AI in integrity checks. But what about the more likely issues in peer review? Think of novelty scores; help drafting decision letters that reconcile differences in reviewer comments; comparing the data presented in tables and figures against the text to ensure it is not being overstated; analyzing a revised paper against the reviewer comments and highlighting where each comment was addressed; or flagging for reviewers when their comments don't seem to support the decision recommendation they are making. How about a handy editor in the reviewer report workflow that nudges them toward constructive feedback language?
I guess my concern is that we seem singularly focused on products that might allay fears of integrity issues and not actually improve the peer review process for our volunteer editors and peer reviewers.
2026 Neil
In 2018, I underestimated the institutionalization of academic culture and scholarly systems. Since 2018, humans rapidly developed mRNA vaccines in response to a global pandemic, captured the first image of the shadow of a black hole, made self-driving cars yesterday's news, developed drone warfare, drove a robot around Mars to find rings of iron phosphate and organic molecules that may indicate ancient microbial life, launched dominant LLMs, and started work on next-gen AI World Models.
Next to these milestones, humanity published an increasing amount of research, including an increasing amount of questionable “research.” The industry came up with an ever-growing list of checks, requirements, and early signs of automated feedback to support growth, but it didn’t address the underlying disease. Rather, it grew its capacity to address more symptoms.
The vibe in our industry often feels like technology in search of needs rather than needs in search of technology. What sometimes surprises me is how far scholarly publishing has come since 2018, and yet how little. Angela's observation that the majority of tools being developed address a minority of the opportunities is a really interesting one, and I tend to agree with it. However, I sense that this is institutionally driven rather than a lack of possibilities. The challenges of yesteryear are ubiquitously present.
Many discussions about AI in peer review are not much different today than in 2018. New capabilities are better, faster, or cheaper, but many merely check more papers for more things, enable more complex workflows, and get more papers reviewed: sustaining existing workflows and, in a way, scaling and enabling systemic dysfunctions. Self-soothing instead of addressing the underlying causes or peer reviewing differently.
Our peer review challenges are foundationally tied to academic incentive, promotion, and tenure culture. As Haruki Murakami wrote, "No matter how far you travel, you can never get away from yourself."
2034 Angela
Assuming there are still journals, my hope is that we use the next eight years to transform the peer review experience. One of our Editors lamented that, while listening in on a "mentoring" talk to early career researchers, he heard the senior person giving the talk tell them not to waste their time on peer review activities. Further, the experience of participating in modern peer review processes, in systems that are not up for the task, is hugely problematic for digital native researchers who are significantly pinched for time and also dreaming of some semblance of work–life balance.
I don’t think the general principles of peer review require massive transformation, but the actual work itself needs to transform to be less time consuming, more portable, and extremely mobile friendly for the humans in the proverbial loop.
AI tools should also assist in expanding the pool of willing participants to include those for whom English may not be their preferred language. The peer review crisis is fueled by massive increases in submissions from researchers who are typically not asked to serve as reviewers, mainly over concerns about language barriers. Imagine how many reviewers we could find if people could read and review a paper in the language most comfortable to them.
AI tools should be implemented, not to do the tasks, but to make the tasks easier.
2034 Neil
My horizon is guaranteed to overlook small and unknown things with potential for outsized impacts on eight-year outcomes. New dependencies, benefits, and unintended consequences are guaranteed. I remind myself how basic applied AI was in 2018, before LLMs went mainstream and before agentic workflows became a topic. In 2034, we may chuckle when we think of today’s AI products, like a Ford Model T or texting with dumbphones.
AI will help us run workflows in smarter ways than today, and we will need it because our productivity will have to deal with more AI, complexity, and scale. As an example, consider the complexity, scale, and growth of content if AI-translation eventually allows the world to speak, write, review, and read in real-time in the many languages of its choice. Consider the impact on our industry as AI-generated content becomes non-radical. Consider the ramifications if chunked content is incentivized, scaled, and needs evaluation. Consider them all on top of each other. As our assisted capabilities scale, so will our complexities and expectations, and that again will drive next-generation AI adoption.
Publishing pressures related to academic promotion, tenure, and incentives will continue to motivate decisions to adopt AI in peer review. Sometimes in ways we expect, and sometimes behaviors and new horizons will create needs for productivity in areas and ways we do not predict at a distance. Rather than a for-or-against debate on AI in peer review, 2034 will be a for-and-against future in which we reminisce about the relative simplicity of past peer review processes, yet remain surprised by how similar some aspects of peer review are to 2018 and 2026. New technologies can normalize fast, but culture and behaviors run deep.
At some point, the term “AI” will feel worn, and AI in peer review will cease to be radical, like all technologies that go mainstream.
Note: Angela would like to note that Neil “won” the debate at the SSP event and Angela swore never to participate in an Oxford Debate style event again!