This year’s Peer Review Week is dedicated to exploring innovation and technology. Every year as we write for Peer Review Week, we make some of the same observations: peer review is essential to the integrity of research; transparency in practice and equity of access are critical for the health of scholarly publishing; diversity across disciplines and fields makes homogenous, unitary applications challenging or even ineffective. Throughout this Peer Review Week we will be drawing attention to some of the promise and some of the challenges that the tempo of enthusiasm for AI and adjacent technologies offers for peer review in the context of those essentials, as well as highlighting some innovations that are not technology-dependent. We hope you’ll read along with us, and share your thoughts in the comments and on social media – and even directly with your colleagues and human collaborators.

But what do we mean by “technology,” and what counts as “innovation”? Certainly AI may be top of mind. AI (artificial intelligence) was Collins Dictionary’s word of the year in 2023, with use of the word quadrupling over the preceding 12 months. Within our own world of scholarly communications, it certainly feels like AI is everywhere! And, with innovation and technology in peer review the theme for this year’s Peer Review Week, AI won’t just be one of the topics we cover in this introduction to the week; it will also be covered specifically in guest posts by Chris Leonard of Cactus Communications and Zeger Karssen of World Brain, on Tuesday and Wednesday. Later this week (and even into the following one!) we’ve gathered perspectives on innovation and technology in peer review more generally from a range of stakeholders, many of whom — unsurprisingly! — also touch on AI.

[Image: 3D rendering of a humanoid robot reading a book in front of a chalkboard covered in mathematical equations and symbols]

However, AI is certainly not the only form of innovation — technological or otherwise — that we’ll be addressing. All sorts of other innovations and technologies are being developed in peer review, as you’ll have seen in the array of responses from our fellow Chefs in answer to last week’s question: What is, or would be, the most valuable innovation in peer review for your community? We’ll be sharing some more thoughts in this post, as well as hearing a variety of other perspectives over the course of the week.

What does innovation in technology for peer review look like? For some, it’s a continuous cycle of learning, adapting, augmenting, evolving, and partnering. There have been quite a few strategic partnerships between organizations that have only recently begun to recognize overlapping aims and goals in their missions. For instance, Kriyadocs and DataSeer have joined forces to enhance data sharing and research integrity. Demonstrating the necessity of partnership in innovation, and sharing what you know with others, is a practice that strengthens the larger ecosystem.

Because, while a strong infrastructure is essential for innovation in peer review at the organizational level, for publishing professionals at the personal level, mastery of basic publishing knowledge is equally important — reacquainting yourself with fundamental skills ensures that you have a solid foundation to build upon. There must also be a willingness to acquire new knowledge and expand your expertise, perhaps extending into less familiar areas. Learning and enhancing skills that are both directly and indirectly related to peer review adds valuable tools to your toolbox as a publishing professional.

This could be technical: becoming more familiar with markup languages, to better understand the underlying technology driving your peer-reviewed content and to ensure that it displays correctly on your publishing platform; or enhancing your proficiency with Excel and raw datasets to better assess your reporting needs during a data modeling session with engineers. Understanding these and other tools is not just about keeping up — it’s about effectively managing processes and making data-driven decisions, both indispensable skill sets when assessing new peer review tools and technologies. Or it could be cultural: cultivating within yourself, then your teams, departments, and organization, an environment ready to withstand the change management required so that technology and innovation are adopted successfully.

Because it’s important to remember that developing these new innovations and technologies is arguably the easy part, compared with managing the change needed to implement them. Behind great innovation are people who are able to drive it. Before even beginning to design comprehensive workflows that make it easier to find new reviewers, peer review teams — and closely adjacent editorial teams — must have the skills to manage them. And it isn’t just about managing processes and updating policies. Leading people through change requires a deeper level of human connection and emotional intelligence. Some people thrive on process-driven work, but in the world of peer review, where people are at the center of nearly every process, strong leadership is a must. When introducing structural change, expect challenges! Change isn’t always welcomed, and that’s where the need for stronger leadership skills comes into play. True innovation doesn’t just come from technological advances; it’s also about building better relationships, better structures and infrastructures, and a stronger ability to lead teams through complex transitions.

Later this Peer Review Week we will dive deeper into these themes through an interview with peer review managers from various organizations. We’ll explore how they’ve been guiding their teams, what they’ve been teaching, and how technology has been shaping their processes. How do other peer review teams find the time and resources to learn the tools advertised to reduce turnaround times (TATs) for authors? Are you able to drive innovation in your day-to-day practices, or are you having to budget for external consultant groups? We’ll hear their stories and share their insights on change management, as we transition from traditional publishing business models to more innovative, tech-driven approaches.

Change is hard, and bringing people along with you — both internally (staff) and externally (authors, editors, reviewers, etc.) — is absolutely essential. So this post will also consider some of the issues around managing change in peer review, something that we’ll come back to later in the week.

Managing change is one of several hidden costs of innovation and technology that we’ll cover, along with some of the opportunities that technology in peer review could create for those working in scholarly communications, both now and in the future. As technology and innovation continue to expand peer review teams, there’s a growing need for cross-departmental collaboration and organizational partnerships that bridge global missions and the advancement of open science at large. This is especially true as innovation gives rise to newer roles and key stakeholders who were not traditionally teammates, like publication ethics specialists or peer review managers. There was once a time when peer review managers were rare additions to editorial teams, if dedicated peer review workflow roles existed at all; now, their role and similarly focused positions are indispensable. This growth is inspiring, but it also means that collaboration between often siloed groups on traditional publishing teams, like production and peer review, is more crucial than ever.

And now, back to AI…

Like so much technology, AI provides us with both opportunities and challenges — and this is true for its use in peer review as much as anything. For example, a recent study reported in The New York Times found that, “significant numbers of researchers at A.I. conferences were caught handing their peer review of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.” Sounds a bit problematic, doesn’t it? But then consider this editorial in Nature Biomedical Engineering, which paints a much rosier scenario: “Picture this: you receive an invitation to assess a new manuscript in your areas of expertise. The manuscript has already been peer reviewed by expert AI agents, and your task is to review the peer-review outcomes…You don’t need to check for clarity of language, for reporting accuracy and thoroughness or for the reliability of code; any such shortcomings would have been ironed out during earlier interactions between the authors and the AI agents. Instead, you focus on higher-level matters.”

It can often feel like we are being driven by technology, but that second scenario is a great example of what might be possible if we were the ones doing the driving. However, in order to be successful, the individuals and organizations using the AI agents described here would need to be confident that these automated “experts” were trustworthy — that they were using high-quality, unbiased, constantly updated content. And there’s the rub… At the moment, when it comes to the use of AI in peer review, it feels as if we are trying to build the plane — and write the instructions for using it! — after it’s already taken off.

In addition, like many things in scholarly publishing, the promise of AI assistance is largely about publishers managing volume, namely the volume of research production in STEM fields, especially biomedical research. The peer review challenges are different elsewhere, however. That slightly rosier scenario from Nature Biomedical Engineering, in which AI assistance alleviates the need to “check for clarity of language” because it’s been “ironed out during earlier interactions between the authors and the AI agents,” will chill those researchers – and their publishers – for whom the arrangement of words constitutes the work itself. More urgently, given the extraordinary challenges early career researchers in the humanities face in finding secure employment, never mind employment that actually supports their research, the number of available researchers, and the ethics and practicality of eliciting reviews from them, may be the most significant challenge to scholarship in a very long time. What is the peer review innovation that will help support research and researchers in these fields? It may need to be more systemic, spanning the full research ecosystem: fuller library access and more targeted research funding for those not employed in academia, for example.

For the purposes of this post, we’d now like to take a step back and consider — if we had all the time (and money!) in the world — whether and, if so, how we would want to use technology in the service of peer review. Would it be equally valuable across all disciplines and communities or would it simply amplify existing differences and inequities?

There is a version of a social media comment that runs something like this: “I want robots to do the dishes and scrub the toilets so I can have time to read and write; why are we seeing so much investment in AI to read and write, leaving us to do the housework?” This kind of comment strikes at several different realities: one is that a lot of service labor is highly gendered, and another is that technologies may either replace work that feels inherently rewarding or create new categories of service work that may be as gendered as their predecessors. So what tasks, in a world of limitless resources and possibilities, might we want AI to support, and what tasks do we, as professionals, feel best equipped and most rewarded for doing independently?

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Jasmine Wallace

Jasmine Wallace is the Senior Production Manager at the Public Library of Science (PLOS). She is responsible for the production processes and day-to-day production and publication operations for the PLOS journal portfolio. Previously, she was the Peer Review Manager at the American Society for Microbiology (ASM), where she was responsible for ensuring that peer review practices, workflows, processes, and policies were up to date and applied consistently across the entire portfolio of journals. She served as Treasurer for the Council of Science Editors and was the creator and host of their podcast series S.P.E.A.K. In the past, she was a Teaching Assistant at George Washington University for a course on Editing for Books, Journals, and E-Products.

Karin Wulf

Karin Wulf is the Beatrice and Julio Mario Santo Domingo Director and Librarian at the John Carter Brown Library and Professor of History, Brown University. She is a historian with a research specialty in family, gender and politics in eighteenth-century British America and has experience in non-profit humanities publishing.

Discussion

1 Thought on "Some Thoughts on the Promise and Pitfalls of Innovation and Technology in Peer Review"

Talent at research, like many human attributes, should follow a bell-shaped distribution, with a few individuals of great talent at one tail of the distribution. The aim of the peer-review system of evaluation should be to detect such individuals and crown them with publications, funds, and the accolade “expert.” However, just as in democracies ability at marketing rather than true expertise has come to determine which politician will be elected, so ability at marketing rather than true ability at research has come to determine which researchers will be published and funded. To the public and politicians, this has come to define “high expertise.”

The reason is glaringly obvious. In marketing, simple messages work. The same applies to the marketing of research ideas. This means that subtle research ideas tend to lose out to simple research ideas, and subtle researchers lose out to the unsubtle. Great researchers come up with great ideas that, because they are great ideas, are difficult to communicate to the researchers of lesser merit who review papers and sit on grant committees. Researchers of lesser merit come up with less great ideas that, because they are less great ideas, are not difficult to communicate to the researchers of lesser merit who review papers and sit on grant committees. Thus, the peer-review system tends to judge great researchers as less great, and less great researchers as great.

The standard answer to all this is that if the great researchers are so smart, how come they cannot figure out how to game the system? I suspect that many great researchers tend to be constitutionally incapable of marketing ploys. They can no more compromise their personal integrity than tortoises can lose their shells. Here is what one Nobelist had to say (Szent-Gyorgyi 1974):

“The foundation of science is honesty. The present granting method is so much at variance with the basic ideas of science that it has to breed dishonesty, forcing scientists into devious ways. One of the widely applied practices is to do work and then present results as a project and report later that all predictions were verified.”

Happily, many great researchers make it without such deviations, but we could do much better.
