With Peer Review Week fast approaching, I’m hoping the event will spur conversations that don’t just scratch the surface but dig deep into the role of peer review in today’s digital age and address the core issues it faces. To me, one of the main issues is that AI-generated content has been discovered in prominent journals. A key question is: should all of the blame for these transgressions fall on the peer review process?

To answer this question, I would like to ask a few more in return. Are you aware of the sheer volume of papers published each year? Do you know how many peer reviewers are available in comparison? Do you understand the actual role of peer review? And are you aware that it is an altruistic effort?

The peer review system is overworked! Millions of papers are submitted each year, but the pool of available peer reviewers is not growing at the same pace. One reason is that peer reviewing is a voluntary, time-consuming task that reviewers take on in addition to their full-time academic and teaching responsibilities. Another is the lack of diversity in the reviewer pool. The system is truly stretched to its limits.

Of course, this doesn’t mean reviews should be sub-par. But the fundamental question remains: should the role of peer review be to catch AI-generated text? Peer reviewers are expected to contribute to the science: to identify gaps in the research itself, to spot structural and logical flaws, and to leverage their expertise to make the science stronger. If reviewers are asked to hunt down AI-generated content instead of focusing on the science, their attention is diverted; in my opinion, this dilutes their expertise and shifts onto them a burden the role was never intended to carry.

Ironically, AI should be seen as a tool to ease the workload of peer reviewers, NOT to add to it. How can we ensure that, across the board, AI is being used to complement human expertise: to make tasks easier, not to add to the burden? And how can the conversation shift from a blame game to carefully evaluating how we can use AI to address the gaps it is itself creating? In short, how can the problem become the solution?

The aim of AI is to free up time for innovation by taking routine tasks off our hands. It is a disservice if we end up spending more time on routine checks just to identify the misuse of AI! That entirely defeats the purpose of AI tools, and if that is how we are setting up our processes, then we are setting ourselves up for failure.

Workflows should be set up in as streamlined a manner as possible. If something is creating stress and friction, it is probably the process itself that is at fault and needs to change.

Can journals invest in more sophisticated AI tools to flag potential issues before papers even reach peer reviewers? This would allow reviewers to concentrate on the content alone. There are tools available to identify AI-generated content. They may not be perfect (or even all that effective yet), but they exist and continue to improve. How can these tools be integrated into the journal evaluation process?
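To make this concrete, here is a minimal sketch of what such an upstream check might look like as part of editorial triage, before a manuscript ever reaches a reviewer. It is purely illustrative: the ai_likelihood function is a hypothetical stand-in for whatever commercial or in-house detector a journal might license, and the threshold is a placeholder, not a recommendation.

```python
# Illustrative editorial triage: run a detection step before peer review,
# so reviewers only ever see manuscripts, never detection dashboards.

from dataclasses import dataclass


@dataclass
class Manuscript:
    manuscript_id: str
    text: str


def ai_likelihood(text: str) -> float:
    """Hypothetical detector stub returning a score in [0, 1].

    A production system would call a licensed detection service here;
    this placeholder just flags one tell-tale boilerplate phrase.
    """
    return 0.9 if "as a large language model" in text.lower() else 0.1


def triage(ms: Manuscript, flag_threshold: float = 0.8) -> str:
    """Route a submission: editorial check if flagged, else on to review."""
    score = ai_likelihood(ms.text)
    if score >= flag_threshold:
        return f"{ms.manuscript_id}: hold for editorial integrity check (score={score:.2f})"
    return f"{ms.manuscript_id}: proceed to peer review (score={score:.2f})"


if __name__ == "__main__":
    submissions = [
        Manuscript("MS-001", "We measured expression levels across three cohorts..."),
        Manuscript("MS-002", "Certainly! As a large language model, I can draft an abstract..."),
    ]
    for ms in submissions:
        print(triage(ms))
```

The design point is that the reviewer never interacts with the detector at all: flagged manuscripts go back to the editorial office, and the reviewer’s scope stays limited to the science.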

Is this an editorial function, or something peer reviewers should be concerned with? Should journals provide targeted training on how to use AI tools effectively, so that they complement human judgment rather than replace it? Can we create standardized, industry-wide peer reviewer training programs or guidelines that clarify what falls within the scope of a peer reviewer’s evaluation and what does not?

Perhaps developing a more collaborative approach to peer review, where multiple reviewers can discuss and share insights, would help in spotting issues that one individual could miss.

Shifting the Conversation

Instead of casting blame, we must ask ourselves the critical questions: Are we equipping our peer reviewers with the right tools to succeed in an increasingly complex landscape? How can AI be harnessed not as a burden, but as a true ally in maintaining the integrity of research? Are we prepared to re-think the roles within peer review? Can we stop viewing AI as a threat or as a problem and instead embrace it as a partner—one that enhances human judgment rather than complicates it?

Let’s take a step back and look at this SWOT analysis of AI versus human expertise in peer review:

[Table: SWOT analysis of AI in peer review]

The challenge before us is about much more than catching AI-generated content; it’s about reimagining the future of peer review.

It’s time for the conversation to shift. It’s time for the blame game to stop. It’s time to recognize the strengths, the weaknesses, the opportunities, and the threats of both humans and AI.

Roohi Ghosh

Roohi Ghosh is the ambassador for researcher success at Cactus Communications (CACTUS). She is passionate about advocating for researchers and amplifying their voices on a global stage.

Discussion


AI-generated *text* isn’t necessarily a problem. I would focus on AI-altered data and images. Does a reviewer’s skill in detecting misuse of AI correlate with the quality of their scientific review? Because I don’t assume that an author’s writing skills correlate with scientific merit. Translation, editing, and writing assistance are legitimate uses of GenAI. I think we want reviewers to focus on the authenticity and legitimacy of ideas more than language.
