In today’s Peer Review Week post we hear perspectives on innovation and technology in peer review from a diverse group of contributors from different countries and disciplines.

Researchers Leo Anthony Celi (Beth Israel Deaconess Hospital and MIT, USA) and Kaushik Madapati (Troy High School, USA), Grace Pold (Swedish University of Agricultural Sciences), and Nora Slonimsky (Iona University, USA) share their thoughts as authors and/or reviewers (Leo is also editor of PLOS Digital Health). Ivone Cabral (Cadernos de Ética em Pesquisa, Revista da Escola de Enfermagem da USP, and Revista Enfermagem UERJ, Brazil) and Josh Piker (William and Mary Quarterly, USA) provide journal editors’ perspectives; both are also researchers. Our publisher contributors are Heidi Koch-Bubel and Michael Roy (American Psychiatric Association, USA), Sophie Reisz (Mary Ann Liebert, Inc, USA), and Michael Willis (Wiley, UK), who collectively represent a range of publishing organizations: a not-for-profit society publisher and small and large commercial publishers, respectively.

Among the themes that emerge across all these groups are (inevitably!) AI, both its pros and cons; the importance of the human element, including the need for better processes, training, and recognition; and a related wish for innovation-driven technology rather than technology-driven innovation. My thanks to all of today’s guest authors for contributing their thoughts. We hope you’ll share your own perspectives here too, in the comments section.

Researcher perspectives

Leo Anthony Celi and Kaushik Madapati: When most of us contemplate innovations in peer review, we conjure a variety of technological advances. However, therein lies the crux of the issue. Our understanding of innovation has evolved to become synonymous with technology. A system that cannot be fixed without technology is unlikely to be fixed by technology alone. Technology typically produces shiny band-aids for a breaking dam. Technology cannot be the core component of change; it will not deliver a return on investment unless it addresses the drivers of the problem it is trying to fix. A system like peer review is inherently flawed in its design, and that isn’t something that technology can fix. Rather, we need to take a step back and critically re-evaluate the system itself, identify where changes need to take place, and design how technology can enable those changes.

Grace Pold: Again and again, we hear that the peer review system is broken: that it is harder than ever to find reviewers, even as more and more articles of questionable scientific rigor are released into the scientific and, perhaps more worryingly, the public sphere. However, I would argue that peer review has not gotten better or worse in recent years, but rather that the quality spectrum has broadened, and the indicators that people used in the past as a quick screen to check that a paper had likely undergone stringent peer review are no longer valid. At the “better” end, some journals, such as mBio (where I am on the early career editorial board), are actively invested in improving the quality of reviews and moving to models where a more restricted pool of people actually interested in reviewing is invited, and extensive training and mentoring in the review process is provided. This is intended to address the fact that people (myself included) need to get better at providing productive reviews. While this intensive training approach is probably not scalable to all journals, and having a core, internally trained, self-selected group of reviewers risks amplifying only select voices, continued recruitment of new cohorts of trainees reduces these risks. Furthermore, there may be ways to incentivize journals to improve editorial and review practices in order to accelerate this process. For instance, libraries negotiating APCs could prioritize journals with reviewer training programs and strong records of peer review, which would have the secondary benefit of reducing the proportion of articles submitted to certain for-profit publishers with questionable review practices.

Nora Slonimsky: When Elizabeth Eisenstein described printing as a “divine art, infernal machine,” she could have been just as insightfully describing most forms of communication technology, past and present. When it comes to the generation of ideas, knowledge, and research, the technologies that we use to express them are an integral part of the process, and peer review is no exception. Over the last year, evolving communication technologies like generative AI have been front and center in conversations about plagiarism, copyright, and citation practices, all of which are integral to peer review. But, much like AI itself, these are not new concerns for humanities scholars, especially digital humanists. To reference another famous Elizabeth, AI likely “neither deserves such praise nor such censure.” Although AI is certainly an innovation, albeit on a different scale than its fifteenth-century predecessor, most technological changes in media and how we consume it come with promises and pitfalls. Rather than AI being the cause of those issues, which often stem from a lack of credit or acknowledgement for the labor, expertise, and contributions of others, an innovation I would like to see is its development as a solution. As both an author and a reviewer, I find it alarming to see people’s research, writing, or other creative expressions essentially being taken without asking. Even more troubling are the inaccuracies and misinformation that so many generative AI programs replicate, particularly when it comes to citations of scholarly work. In other words, how can generative AI programs help us catch research we should know about but haven’t yet encountered? Can these programs function as a resource for provenance or metadata, supporting archivists, librarians, and others who are looking for gaps or silences in evidentiary records? Should generative AI move forward in an equitable way, one that acknowledges and compensates the prior work in the humanities that it relies upon, then it might have the potential to help us solve some of the relevant critiques of the peer review process itself.

Journal editor perspectives

Joshua Piker: The William and Mary Quarterly (WMQ) receives between 100 and 120 manuscript submissions a year. Most field-specific journals in history receive a good deal fewer, but of course 100 manuscripts is a slow month at some journals in the sciences.

And at the WMQ, everything concerning relations between authors and editors, and between editors and peer reviewers, is handled via personal email. The manuscript is submitted (and acknowledged by an editorial assistant) via email; the Editor recruits peer reviewers via email; the Editor informs the author via email that the manuscript is out for review and what the process will be from there; the editorial assistant sends the peer reviewers the manuscript and instructions via email; the peer reviewers submit reports via email; the Editor acknowledges the reports via email and briefly tells the reviewers how their conclusions relate to those of the other readers; and the editorial assistant sends the Editor’s decision letter and the reports to the author via email.

If everything goes right, reviewers get a sense of the larger conversation about the manuscript in question within a few days of submitting their reports, and the author receives four or five readers’ reports and a detailed decision letter three to four months after submitting the manuscript. At every stage, the author and readers are communicating with a person, not receiving auto-generated messages.

It’s retail editing. And it’s labor intensive. But it’s also effective, and not just because of the excellent scholarship that it helps produce.

I tell authors that the review process isn’t the gate that they wait at to find out if they get to enter the conversation; the review process is part of the conversation. I’d say the same thing to editors: just like the material that you publish, the review process is part of your conversation with your colleagues, and how you handle that process helps shape how your journal is viewed and how your field functions. At each stage of the process, there are opportunities, if someone is there to take advantage of them.

Ivone Evangelista Cabral: Despite facing criticism and challenges related to publication speed, the peer review system remains a key method for evaluating the quality of a manuscript. A socially committed reviewer, who is an expert in the manuscript’s central theme, methodology, and interpretation of findings, plays a crucial role in identifying analysis errors and research misconduct, thereby preventing the publication of scientifically unsound work. In today’s research landscape, where funding and career progression hinge on publications in high-impact journals, the peer review system is expected to act as a critical gatekeeper in the “Publish or Perish” environment.

One way to enhance the system is by valuing the role of peer review in research careers and funding decisions, and by providing qualified assessments through efficient evaluation processes. Journals should also recognize the role of peer reviewers by offering exemptions from article processing charges (APCs) and by enabling the publication of reviewer opinions through mutual agreement between the evaluator and the corresponding author.

As a peer reviewer, I view the system as an ongoing learning process that is constantly evolving. I have experimented with various types of manuscript peer review, including double-blind, partially open, and fully open review, with and without publication of the reviews. I believe the peer review system will remain an important part of promoting good science for many years to come. However, we need to address the current challenges and criticisms of the system by developing incentive policies for training and guiding the next generation of researchers. Researchers need to see participating in peer review as an integral part of their work. Anyone looking to publish their research should also be willing to participate as an evaluator in the peer review system.

Publisher perspectives

Sophie Reisz: Peer review is the cornerstone of scientific integrity, ensuring that published research meets high standards of quality and rigor. By relying on subject-matter experts to evaluate submissions, the process enhances the credibility of journals and improves the work itself. When properly anonymized, peer review can also reduce biases, ensuring more objective evaluations.

However, the peer review process, and the powerful systems that support it, are far from perfect. Despite the availability of sophisticated technologies, most peer review systems rely heavily on a single workflow, leading to delays in fast-moving fields where timely research is critical. The quality of reviews varies widely: some are thorough and constructive, while others are overly critical or cursory. Reviewer fatigue is another growing issue, as experts face increasing requests to review, which can compromise the quality of feedback.

To address these challenges, the future of peer review should focus on how technology can enable greater efficiency and meaningful recognition for reviewers. Faster turnaround times, perhaps through the use of authorized AI-driven tools, could streamline the process. More importantly, peer review systems should incorporate meaningful strategies to recognize reviewers’ contributions, with formal certification that can be incorporated into their career development and institutional evaluations. This recognition might incentivize higher quality reviews and acknowledge the critical role of reviewers in academia, especially in the life sciences. Additionally, better reviewer training, greater transparency, and more accurate matching of reviewers to manuscripts are essential.

Ultimately, improving peer review requires balancing rigorous standards with the evolving needs of researchers and institutions. Supporting peer reviewers with appropriate recognition is not just a necessity but a key step in sustaining the quality and integrity of the academic publishing process around the world.

Heidi Koch-Bubel and Michael Roy: Peer review systems play a significant role for society publishers like American Psychiatric Association Publishing. For a society publisher, a peer review system provides the opportunity for scalability; it would be difficult to manage the number of submissions that we receive without one. The system we use offers a well-organized series of queues for monitoring the status of reviews and pending decisions. The various user levels allow appropriate access for Editors and Guest Editors as well as staff.

Peer review systems also provide editorial staff with a way to easily track correspondence with authors, reviewers, and editors, to share information, and to make that information accessible to everyone on staff in the form of a “shared brain”. They make correspondence easier through template letters, which can be easily updated. When correspondence with authors happens outside a system, information and submissions can fall through the cracks, which can lead to delays in responses and in the receipt of decisions.

Peer review systems also give publishers, big or small, a way to easily recommend other journals in their portfolio and implement a cascade model. Being able to offer authors the opportunity to stay with a publisher, removing some of the burden of resubmitting to another journal, is important for authors in terms of speed to publication and helps publishers reduce overall peer review burden.

One improvement we would welcome is better reporting. Reporting is currently available but could be more nimble and intuitive.

Michael Willis: The first generation of electronic peer review systems essentially replicated, albeit with incremental enhancements, the physical peer review process in a standalone online environment. The next generation of systems must be based on up-to-date technology including AI, align with evolving publishing standards, and be embedded in the end-to-end publishing ecosystem. As a large publisher representing a wide diversity of journals across different peer review platforms, we’ve thought hard about which capabilities are essential, desirable, or unnecessary, and how we can use AI to make peer review more effective. These insights are helping us develop our own submission and peer review platform, Research Exchange. Take two examples: reviewer data and text analysis.

Peer review systems must think beyond a static database of reviewers, which quickly grows outdated and needs regular maintenance (updating affiliations, contact details, areas of expertise, and so on). Most reviewers decline to review a manuscript because they don’t feel it’s aligned to their area of expertise (see p. 11 of our recent peer review survey report). Can we help the editor predict the likelihood of a reviewer accepting a review invitation, based also on their behavior? Harnessing robust, comprehensive, and structured data from multiple sources enables manuscripts to reach the hands of skilled, trustworthy, and available reviewers more rapidly.

As for text analysis: this can save reviewers time in screening manuscript content, verifying references, or checking the appropriateness of statistical data. It can be applied equally to reviewers’ comments. Reviewers highly value getting feedback on their review, so imagine a system that gives instant feedback about the quality or usefulness of their review (perhaps shortlisting the best for an award), flags inappropriate language and suggests alternative wording, or analyses what the reviewer has written and asks follow-up questions to help improve the overall quality of their comments. With AI, opportunities abound.

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Kaushik Madapati

Kaushik Madapati is a high school senior at Troy High School. He is passionate about machine learning and about improving equity in fields such as medicine.

Grace Pold

Grace Pold is an Associate Senior Lecturer in soil nutrient cycling at the Swedish University of Agricultural Sciences.

Nora Slonimsky

Nora Slonimsky, PhD, is an Associate Professor and Chair of the Department of History at Iona University, where she is also the director of the Institute for Thomas Paine Studies (ITPS). For more about Nora’s work, please see her personal website, www.hamiltonsolo.com.

Ivone Cabral

Ivone Evangelista Cabral, RN, MNSc, PhD, is Adjunct Professor in the Faculty of Nursing, State University of Rio de Janeiro; and Full Professor/Volunteer Collaborator at the Anna Nery School of Nursing/Postgraduate Program in Nursing, Federal University of Rio de Janeiro. She is a Researcher for the National Council for Scientific and Technological Development (Brazil), and acts as Scientific Editor for Cadernos de Ética em Pesquisa, which is published by the Brazilian National Committee of Ethics in Research (Conep); and Associate Editor for Revista da Escola de Enfermagem da USP, and Revista Enfermagem UERJ.

Joshua Piker

Joshua Piker edited the William and Mary Quarterly from 2014 to 2024. He is now a professor of history at William & Mary and the Omohundro Institute’s Scholarly Communities Coordinator.

Sophie Reisz

Sophie Reisz is Vice President & Executive Editor at Mary Ann Liebert, Inc., Publishers, where she oversees strategic editorial direction and journal operations across a diverse portfolio of 100+ biotech, biomedical, and life sciences journals.

Heidi Koch-Bubel

Heidi Koch-Bubel has worked in scholarly publishing for almost 15 years. She currently works as the Editorial Support Services Manager at American Psychiatric Association Publishing. In this role, she oversees the peer review system for the journals.

Michael Roy

Michael D. Roy is the Executive Editor for the journals of the American Psychiatric Association’s Publishing Division, a portfolio that includes the American Journal of Psychiatry, the longest continuously published specialty journal in the United States. He has been involved with the journals for more than 30 years.

Michael Willis

Michael Willis is Senior Solutions Manager at Wiley, based in Oxford, UK. He helps translate researcher needs into journal editorial and peer review workflows, operations and products.
