Next week is Peer Review Week, and we’ll, as always, have a full slate of posts on this most important — foundational? — practice. The theme this year is “Peer Review and the Future of Publishing.”

We’ve asked the Chefs to identify and write about “the single most pressing issue for the future of peer review in scholarly publishing.” Not surprisingly, they have thoughts.

Stay tuned for next week’s posts, and please share your reactions in the comments. A robust discussion of these important subjects is always fruitful.

Rick Anderson: 

Peer review faces multiple challenges at the moment, not least of them a simmering feeling in the scholarly community that peer review is no longer necessary, or doubts about whether it even works. But I think the most pressing problem is the ongoing increase in research output coupled with the decreasing number of scholars and scientists willing to accept review invitations. There’s no great insight behind this observation – we’ve been talking about reviewer fatigue, the problem of ghosting, and the increasing ratio of submissions to available reviewers for years.

But the fact that it’s a commonplace notion doesn’t have any bearing on its importance. As research volume continues to grow (which I suspect it will for the foreseeable future) and faculty sizes stay more or less the same (as they will unless there’s some kind of sudden explosion of institutional bandwidth, in either the number or the size of institutions), this problem is going to keep compounding every year. Open and post-publication peer review models, which I think represent a deeply flawed idea to begin with, will do nothing to address this issue, and honestly I don’t see any obvious solution on the horizon. (The suggestion that faculties stop requiring their members to produce so much publishable scholarship strikes me as a cure worse than the disease.)

Lisa Janicke Hinchliffe: 

Having observed the angst over, frustration with, and critique of peer review writ large for the past few years, I believe the most pressing issue for peer review is to align its value proposition with the resources required to carry it out. The work undertaken by STM and then NISO to identify and standardize definitions and terminology in peer review practices has been useful (disclosure: I served on the NISO working group). But describing the process and practices of peer review is different from articulating what it is supposed to signal, and to whom, when we label an article “peer reviewed.”

A few years ago I tried to work out a statement of meaning and found it quite difficult. Through feedback on drafts and with various interlocutors, I came to this: “This article has been certified by peer review, which means scholars in the field advised at the time of review that it was worth this journal investing resources in editing, typesetting, disseminating, and preserving it so that it can be read by current and future scholars in the field.” I am confident this could be further improved. Nonetheless, unless it is radically understating the value in some way (e.g., that peer review identifies what is true, or what can be relied upon as the consensus of the field), I can’t help but think that it takes an awful lot of resources to deliver that value. It also points to why publishers who adopt a more “lightweight” approach to peer review are confident in claiming to do peer review even while others question that claim. If the question for peer review is whether it is worth a particular journal investing resources in publishing an article — given the data showing that almost all manuscripts get published somewhere, so it is a matter of where, not whether — the threshold for making that judgment of investment-worthiness is not uniform across publishers.

Does the label “peer reviewed” signal more than what I’ve described here? Honestly, I’d like it to. But, does it? And, if it doesn’t, what is the argument for investing so many resources in the peer review process that could otherwise be allocated elsewhere? Those are the pressing questions I see for peer review.  

Hong Zhou:  

The single most pressing issue for the future of peer review in scholarly publishing is the mismatch between the accelerating pace of research generation, driven by advances like AI-powered research assistants and the open science movement, and the existing infrastructure of the submission and review process. As researchers produce outputs faster, traditional and often manual systems are strained.

Key challenges include:

  • Difficulty locating qualified reviewers and editors, due to a limited pool and insufficient incentives.
  • The evolution toward richer research outputs in an Open Science framework, which further complicates the submission and review stages.
  • Disparate processes, tools, and data formats, which lead to operational and maintenance inefficiencies. As automation becomes prevalent, there is a pressing need to redesign these processes around an end-to-end workflow and to leverage automated checks earlier in the submission phase.
  • The rise of AI, which introduces research integrity concerns, notably an increased risk of plagiarism, image tampering, and paper mills, making fraudulent content more challenging to detect.

Avi Staiman:

For what purposes, and how fast, will AI be integrated into the peer review process? This question may raise some eyebrows, as it seems to overlook a more fundamental question: whether AI should be used in the peer review process at all. After all, leading grant funding agencies such as the NIH have banned the use of AI in the peer review of grant applications altogether.

However, my off-the-record conversations with both publishers and reviewers have taught me that AI is already being experimented with, and so long as no reliable AI detector exists, its use will only proliferate over time. When I recently raised the issue of AI in peer review at a workshop I held with researchers, they seemed primarily excited by the opportunity to receive feedback faster and publish in less time. Publishers couldn’t agree more.

Therefore, the important questions to ask are, in my opinion, how can AI be reliably helpful in the peer review process, what should we be using it for, how does it compare with human review for those tasks, and what can we do to mitigate risks and biases? To start, I would advocate for an honest and transparent audit of the entire peer review process to see what specific tasks AI can help to automate. The publisher that makes the jump and shares their process has the potential to become a real thought leader and arbiter of constructive change for the entire industry. 

Alice Meadows:

Peer review — and scholarly publishing in general — is facing a lot of challenges, but I think one of the most pressing is the lack of formal recognition for the work that goes into writing a constructive review: one that, when done well, helps make a strong paper stronger, and a weak paper publishable. In a world where peer reviewers are increasingly thin on the ground, making sure that reviewers are recognized for their work should be a no-brainer — especially since this shortage is likely to get worse rather than better, given both the continuing growth in scholarly publications and the increasing diversity of scholarly outputs. However, despite the fact that many stakeholder groups in our community advocate for recognizing a wider range of contributions than publication alone, and despite the fact that there are now a number of tools to help us do so, we don’t seem to be any closer to making this a reality.

Initiatives like DORA (the San Francisco Declaration on Research Assessment) and Knowledge Exchange’s Openness Profile have garnered widespread support for moving away from the traditional reliance on publications and, in particular, the journal Impact Factor (at the time of writing, there are nearly 24,000 DORA signatories, including institutions, funders, publishers, and many, many individuals). And yet we have seen little in the way of change. In the past, there was a lack of infrastructure to support recognition for other types of contribution, including peer review, but recent years have seen the introduction of several tools and services that enable this. CRediT (the Contributor Roles Taxonomy) was one of the first, and it has now been formalized as an ANSI/NISO standard, with a standing committee considering how it can be further developed and expanded. Support for peer review in ORCID records, launched in 2015, enables both publishers and third parties such as Publons (now Web of Science Researcher Profiles) to add review information directly to researchers’ records, with more or less detail depending on how open the review is.

It doesn’t help that there is still relatively little in the way of formal training for reviewers, and (as far as I can see) even less in the way of feedback to reviewers about the quality of their reviews, both of which will be increasingly critical as reviews of datasets and other non-traditional outputs become more and more commonplace. Lots to tackle — but the sooner we can give meaningful credit where it’s due to peer reviewers around the world, the better!

Haseeb Md. Irfanullah:

As I’ll explain more fully in a Scholarly Kitchen post next week, inequity is widespread in peer review, yet it is not being sufficiently challenged given its depth and dynamics. So I think inequity in peer review will intensify over the next few years.

Equity in peer review can be described from the perspectives of peer reviewers, authors, and editors/journals. For the first, we may say that inequity is now being addressed by better balancing reviewer pools across age groups, experience levels, genders, and geographic locations. We may also argue that reviewers are now well recognized by having their names included with some published papers — possibly one of the best forms of recognition we have at this moment. Capacity-building programs for young researchers are improving their reviewing skills, thus increasing their opportunity to become associated with reputable journals. In many cases, reviewers can follow a career path by becoming editors of journals, thus shaping their disciplines’ knowledge ecosystems — a fantastic, equitable incentive indeed.

But inequity in peer review is quite multidimensional, and sometimes not so obvious. A 2021 study, for example, showed that researchers with lower citation numbers and h-index scores review for journals with low or no Impact Factors, while high-Impact-Factor journals engage high-impact researchers. This link between the rankings of researchers and those of the journals they review for indicates a clear “reputational divide” in the peer-review landscape. Reviewers’ negative bias against authors from low- and middle-income countries and against female authors is well documented for single-anonymous peer review. But positive bias can also be seen toward “authors with authority”. For example, top urology journals with anonymized review publish fewer articles by their own editors as authors than non-anonymized journals do. This suggests that when peer reviewers know that a manuscript’s authors include the journal’s editors, they may be more inclined to recommend publication.

One widely accepted way of tackling bias-related inequity is double anonymity. In September 2023, IOP Publishing (IOPP) received the first ALPSP Impact Award for its innovative peer-review model, which combines double-anonymous peer review with transparent peer review, including full disclosure of the whole review process alongside the published article. Interestingly, of the 29 initiatives considered for two ALPSP awards this year, five aimed at improving peer-review-associated systems. Innovation in peer review is relentless: not only diverse but even opposing approaches are being invented, piloted, and practiced. On the one hand, we still see strong support for anonymity in peer review; on the other, it is becoming more and more open. Similarly, although we are struggling to find quality, reliable reviewers, we still expect to find enough enthusiastic reviewers to make a range of peer-review models successful, be it community-based open review, journal-independent peer review, review at different stages of the research cycle (e.g., Registered Reports, Octopus), the emerging concept of co-review, AI-assisted review, post-publication peer review, or publishing reviewed preprints.

Most of these models seem to reduce inequity by removing major barriers and concerns for diverse sections of the scholarly publishing community, so the future should be great. But they are also dramatically increasing reviewers’ workload. I want to call this the “burden of equity”, since every peer-review model is built on the altruism of reviewers.

Angela Cochran:

One of the most pressing issues for the future of peer review is the cost. I am taking some liberties here in expanding the definition of peer review to include integrity checks, conflict-of-interest disclosure collection, and author validation, because all of that happens during that magical period of time called “under review.”

Journal staff and/or volunteers are responsible for more than identifying whether a paper is academically or scientifically sound, novel, and appropriate to the journal’s scope. We are now forensically reviewing content for fraudulent data and figures; commissioning biostatistics reviews; enforcing data-sharing mandates; validating that authors are who they claim to be; validating that reviewers are who they claim to be; collecting up-to-date disclosures for authors, editors, and reviewers; and reviewing claims from readers of any number of scientific sins.

All of these activities require software tools and people trained to use them and analyze what they find. My concern is that more and more journals will stop investing in these activities as margins get tighter and tighter in an increasingly open access world. It gets harder and harder every year to keep up with the “journals must do this…” and the “journals should do that…” on top of the threats that come from new technology. It will only get worse as the fraudsters have cheaper and more sophisticated tools at their fingertips.

There are certainly big publishers putting out lots of papers who don’t appear to be investing in these activities at all. They are playing with fire, but they also seem to have little concern for the reputational hit they will take when something goes wrong. Those low-cost APCs come at a price to scientific integrity.

Charlie Rapple:

For me, it all comes back to the fact that peer review is an unpaid task. (Don’t try the “it’s part of what academics are paid to do” line; what other job comes with unspecified additional responsibilities that you must carry out for someone who isn’t even your employer?) We’re facing unprecedented challenges around faked research: manipulated images, AI-generated articles, and so on. Dealing with this requires a level of expertise and rigor that we cannot guarantee from our volunteer reviewers, whether because they have not been trained in the necessary skills or because they lack the capacity to properly validate methods and results. Review not rigorous enough? How could it be when you are doing it in your “spare” time? Process too slow? Well, you try finding a qualified, thorough reviewer who can turn it around more quickly. Not enough transparency? Accountability and potential conflicts of interest provide another reason for an overburdened potential reviewer to turn down the request. Lack of diversity in our reviewer pool? I read a recent comment in an EDI audit that “It is a middle-class thing to be able to take on extra work.” That really struck me. Reviewing is such a pivotal part of growing your academic standing and knowledge. How do you take that on if you don’t have the privileges of a well-paid and not overly demanding job? Peer review is too heavy a burden, and too vital a part of the scholarly communication process, to be run on a goodwill basis. Recognizing and addressing the unpaid nature of peer review is crucial for sustaining a fair, equitable, and high-quality scholarly publishing ecosystem.

Karin Wulf:

The most pressing issue for peer review in humanities scholarly publishing in the United States and the UK is the slashing of secure academic jobs, and indeed of whole departments and programs, that were the engine for research and publication. Without those positions, there are not only fewer reviewers available; there is simply less research available to review, or more research being squeezed out of a smaller group. None of the issues around costs, technologies, or processes will matter in the face of this. This crisis is multidimensional and deeply connected to the political, cultural, and economic crises we face, and it will affect our societies in ways we are simply not discussing, or not discussing enough.

Karin Wulf

Karin Wulf is the Beatrice and Julio Mario Santo Domingo Director and Librarian at the John Carter Brown Library and Professor of History, Brown University. She is a historian with a research specialty in family, gender and politics in eighteenth-century British America and has experience in non-profit humanities publishing.

Rick Anderson

Rick Anderson is University Librarian at Brigham Young University. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro, as Director of Resource Acquisition at the University of Nevada, Reno, and as Associate Dean for Collections & Scholarly Communication at the University of Utah.

Lisa Janicke Hinchliffe

Lisa Janicke Hinchliffe is Professor/Coordinator for Research Professional Development in the University Library and affiliate faculty in the School of Information Sciences, European Union Center, and Center for Global Studies at the University of Illinois at Urbana-Champaign. lisahinchliffe.com

Hong Zhou

Hong Zhou leads the Intelligent Services Group in Wiley Partner Solutions, which designs and develops award-winning products/services that leverage advanced AI, big data, and cloud technologies to modernize publishing workflow, enhance content & audience discovery and monetization, and help publishers move from content provider to knowledge provider.

Avi Staiman

Avi Staiman is the founder and CEO of Academic Language Experts, a company dedicated to empowering English as an Additional Language authors to elevate their research for publication and bring it to the world. Avi is a core member of CANGARU, where he represents EASE in creating legislation and policy for the responsible use of AI in research. He also is the co-host of the New Books Network 'Scholarly Communication' Podcast.

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Haseeb Irfanullah

Haseeb Irfanullah is a biologist-turned-development facilitator, and often introduces himself as a research enthusiast. Over the last two decades, Haseeb has worked for different international development organizations, academic institutions, donors, and the Government of Bangladesh in different capacities. Currently, he is an independent consultant on environment, climate change, and research systems. He is also involved with University of Liberal Arts Bangladesh as a visiting research fellow of its Center for Sustainable Development.

Angela Cochran

Angela Cochran is Vice President of Publishing at the American Society of Clinical Oncology. She is past president of the Society for Scholarly Publishing and of the Council of Science Editors. Views on TSK are her own.

Charlie Rapple

Charlie Rapple is co-founder of Kudos, which showcases research to accelerate and broaden its reach and impact. She is also Vice Chair of UKSG and serves on the Editorial Board of UKSG Insights. @charlierapple.bsky.social, x.com/charlierapple and linkedin.com/in/charlierapple. In past lives, Charlie has been an electronic publisher at CatchWord, a marketer at Ingenta, a scholarly comms consultant at TBI Communications, and associate editor of Learned Publishing.

Discussion

17 Thoughts on "Ask the Chefs: What is the Single Most Pressing Issue for the Future of Peer Review?"

Finding qualified reviewers who will review and submit reviews on time

Congratulations, chefs, on some very acute observations across a wide range of issues. As a researcher working on and with early career researchers, I found that many of these observations resonated. One big insight from interviews relates to the recognition of reviewing. Early career researchers want to be perceived as the top people in a narrow field, and are therefore sometimes the best people to ask; but alas, one journal in our field deliberately will not ask them, because doing so would be exploitative. Yes to Charlie’s offering. They do want to be paid, but I think recognition comes first. None of them mentioned https://www.reviewercredits.com/. Sad. They do not mention Publons either, though they once did.

GERIATRIC NO MORE!

The advantages of internet communication, which emerged in the 1990s with the power to revolutionize peer review, are not fully appreciated. Prior to that, when academics “retired” they did not remain academically engaged, because of the difficulties of accessing the research literature and travelling to conferences. Now this has all changed. Expertise built up over decades can continue to be applied, and channels such as (the sadly now defunct) PubMed Commons and “Zooming” into conferences permit the “retired” to identify themselves to editors in search of qualified reviewers.

The limiting factor is that many academics are worn out through engagement with that other form of peer review: getting the agencies to fund one’s work. “Thank God that is all over!” a colleague exclaims as his/her Department waves goodbye and he/she fades into the sunset. Step 1 in improving publication peer review is to push for reform of funding peer review, to make it a happier experience. I have suggested one reform (“bicameral review”), but there may be other approaches out there.

I’m always concerned when people talk about peer review in terms of being valuable to publishers (Hinchliffe) or unpaid labour for researchers (Rapple), although it is, of course, both of these things.

Emphasising these characteristics risks any conversation about change becoming focussed on antagonism toward the publishing industry (“a free ride!”) or on economically and operationally unrealistic demands (“pay the reviewers!”).

I think there is greater value in reminding ourselves that peer review is an integral part of the scholarly process, allowing members of the academy to critique each other’s work before it is disseminated. On this basis, we can properly think of peer review as a vital part of the normal day-to-day work of academics. And, as such, it is already paid work, provided institutions (and their funders) recognise the work that is undertaken and demonstrate that they value it as an essential contribution, by their people, to scholarship as a whole, not a sideline supporting “someone else’s publication”.

Only in this way will peer review obligations (to the institution and the academy, not to publishers) get written into academics’ job descriptions, promotion guidelines and tenure criteria, and recognised as part of their normal workload. Only in this way will we all work together to improve quality and streamline peer review processes, by making peer review central to scholarship, and possibly by uncoupling peer review from publication processes.

Given that academic publishers have notoriously high profit margins, I’m not convinced that paying reviewers is “economically and operationally unrealistic.” In many situations, peer review means that researchers are providing unpaid labor to for-profit corporations, which seems profoundly unethical to me.

Some academic publishers ‘notoriously’ have high profit margins (although this has never been the case for some societies, and many ‘fat cats’ are looking a bit thinner these days).

I suppose neither of us has done the maths, but it seems to me that simply grabbing these high profit margins (somehow) and giving them to reviewers one-by-one (somehow) is both unlikely to result in large enough payments to be satisfying (see the Overjustification Effect), and likely to be operationally complex.

I think it was exactly my point: ‘providing unpaid labor to for-profit corporations’ does indeed sound unethical, but ‘paying scholarly societies, university presses and (yes) corporations for the work they do to support the academy in the administration of its own peer review processes’ sounds like an ethical and useful activity. And one that is potentially subject to useful collaborative improvement, rather than mere confrontational wishful thinking.

The bold ‘ethical’ statement is easy to make – the practical detail is what actually matters, as we know from 15 years of incredibly wasteful OA turbulence, caused (IMHO) by ‘profound ethics’ being superficially more satisfying than the heavy lifting of working on practical solutions.

I find it interesting that you interpreted what I wrote as articulating the value to publishers when what I’m interested in is articulating the value to readers.

I accept, of course, that your own focus is on the reader – you have demonstrated plenty of evidence that this is the case. But your actual phrasing here: “worth this journal investing resources in editing, typesetting, disseminating, and preserving” could give people the idea that peer review is primarily a publisher’s way of determining whether to produce an article, not the academy’s way of determining whether the article is of value to readers.

Thank you, Mark. The perspectives of institutional research offices, research administrators, editorial board members, and researchers themselves would be welcome additions. Rigor, reproducibility, and innovation thrive with quality peer review. The stakes can be high and the communities small, which presents challenges. I recommend The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race by Walter Isaacson; he weaves in a story of how peer review, conferences, and publication are intertwined with curiosity, collaboration, and competition, leading to breakthroughs.

Readers may be interested to learn about Peer Review Information Service for Monographs (PRISM) from DOAB.

PRISM is a standardised way for academic publishers to display information about their peer review processes across their entire catalogue. At the publisher level, all their peer review processes are visible (as they may have more than one process in use across all their series and titles). At the level of an individual publication, the peer review process applied to that work is displayed.

By providing more transparency about the peer review process(es) that apply to specific works, PRISM’s aim is to help build trust in open access academic book publishing.

Learn more about PRISM at https://doabooks.org/en/publishers/prism and about joining DOAB at https://doabooks.org/en/publishers/join-doab.

My hunch is that researchers will become increasingly unwilling to review for journals that they don’t themselves publish in. This might already be happening. Enlightened journals will – or should – nurture their reviewers and authors to make them feel invested in the journals’ success.

More use might be made of retired academics, especially those still actively publishing and attending conferences. However, the same difficult issues will apply – how to ensure that reviews are careful, honest and rigorous, and how best to offer the reviewer some form of recognition. In these days of automated responses, there may not be any proper acknowledgement of the reviewer’s efforts at all.

Re: Wulf
My work has benefitted from peer review. Peer review has made me refine arguments and explain my work more clearly. It has given me confidence in my own decisions, such as when an equal number of reviewers returned comments that disagreed on a point I was making – I had the power to decide! I usually agree to review manuscripts because I want to be part of the process that makes my field better, and it keeps me informed of new work. Until recently, I was in non-tenure-eligible positions where neither my publications nor my participation counted toward any return from my institution, but reviewing and publishing kept me in the field. What if hiring and promotion committees asked for more than a list of “review services” from applicants, and instead counted how many manuscripts were reviewed, and for what presses? What if departments set the expectation that participation in one’s field means shaping the scholarship through review?

Academic presses should publish an annual list of how many faculty from each university reviewed for them. University presidents love seeing the names of their institutions in print and hate seeing weak representation. It would generate interesting data and might change the culture.

To answer Charlie’s question – “what other job comes with unspecified additional responsibilities that you need to carry out for someone who isn’t even your employer?” – at least one answer is the legal profession. Many firms expect their people to complete a certain amount of pro bono work, as local community goodwill or for other reasons. Some individuals may choose not to do this, but if you want to make partner…

Busy week last week and just catching up on SK. Really enjoyed this article. Question for Rick: can you explain more about why “The suggestion that faculties stop requiring their members to produce so much publishable scholarship strikes me as a cure worse than the disease.”…?
