Next week is Peer Review Week, and we’ll, as always, have a full slate of posts on this most important — foundational? — practice. The theme this year is “Peer Review and the Future of Publishing.”
We’ve asked the Chefs to identify and write about “the single most pressing issue for the future of peer review in scholarly publishing.” Not surprisingly, they have thoughts.
Stay tuned for next week’s posts, and please share your reactions in the comments. A robust discussion of these important subjects is always fruitful.
Peer review faces multiple challenges at the moment, not least of them a simmering feeling in the scholarly community that peer review is no longer necessary, or that it doesn’t even work. But I think the most pressing problem is the ongoing increase in research output coupled with the decreasing number of scholars and scientists willing to accept review invitations. There’s no great insight behind this observation – we’ve been talking about reviewer fatigue, the problem of ghosting, and the growing ratio of submissions to available reviewers for years.
But the fact that it’s a commonplace notion has no bearing on its importance. As research volume continues to grow (which I suspect it will for the foreseeable future) and faculty sizes stay more or less the same (as they will unless there’s some kind of sudden explosion of institutional bandwidth, in either the number or the size of institutions), this problem is going to keep compounding every year. Open and post-publication peer review models, which I think represent a deeply flawed idea to begin with, will do nothing to address this issue, and honestly I don’t see any obvious solution on the horizon. (The suggestion that faculties stop requiring their members to produce so much publishable scholarship strikes me as a cure worse than the disease.)
Lisa Janicke Hinchliffe:
Having observed the angst over, frustration with, and critique of peer review writ large for the past few years, I believe the most pressing issue for peer review is to align its value proposition with the resources required to carry it out. The work undertaken by STM and then NISO to identify and standardize definitions and terminology in peer review practices has been useful (disclosure: I served on the NISO working group). But describing the process and practices of peer review is different from articulating what it is supposed to signal — and to whom — when we label an article “peer reviewed.”
A few years ago I tried to work out a statement of meaning and found it quite difficult. Through feedback on drafts and conversations with various interlocutors, I came to this: “This article has been certified by peer review, which means scholars in the field advised at the time of review that it was worth this journal investing resources in editing, typesetting, disseminating, and preserving it so that it can be read by current and future scholars in the field.” I am confident this could be further improved. Nonetheless, unless it radically understates the value in some way (e.g., if peer review in fact identifies what is true, or what can be relied upon as the consensus of the field), I also can’t help but think that it takes an awful lot of resources to deliver that value. It also points to why publishers who adopt a more “lightweight” approach to peer review are confident in claiming to do peer review even while others question that claim. If the question for peer review is whether it is worth a particular journal investing resources in publishing an article — given the data showing that almost all manuscripts get published somewhere, so it is a matter of where, not whether — the threshold for making that judgment of investment-worthiness is not uniform across publishers.
Does the label “peer reviewed” signal more than what I’ve described here? Honestly, I’d like it to. But, does it? And, if it doesn’t, what is the argument for investing so many resources in the peer review process that could otherwise be allocated elsewhere? Those are the pressing questions I see for peer review.
The single most pressing issue for the future of peer review in scholarly publishing is the gap between the accelerating pace of research generation, facilitated by advances like AI-powered research assistants and the open science movement, and the existing infrastructure and workflows of the submission and review process. As researchers produce outputs faster, these traditional and often manual systems are strained.
Key challenges include:
- Difficulty in locating qualified reviewers and editors due to a limited pool and insufficient incentives.
- The evolution towards richer research outputs in an Open Science framework further complicates the submission and review stages.
- Current processes, tools, and data formats are disparate, leading to operational and maintenance inefficiencies. As automation becomes prevalent, there’s a pressing need to redesign these processes into an end-to-end workflow and to apply automated checks earlier in the submission phase.
- The rise of AI also introduces concerns over research integrity, notably the increased risk of plagiarism, image tampering, and paper mills, making fraudulent content more challenging to detect.
For what purposes, and how fast, will AI be integrated into the peer review process? This question may raise some eyebrows, as it seems to overlook a more fundamental question: whether AI should be used in the peer review process at all. After all, leading grant funding agencies such as the NIH have banned the use of AI in grant peer review altogether.
However, my off-the-record conversations with both publishers and reviewers have taught me that AI is already being experimented with, and until a reliable AI detector is created, its use will only proliferate. When I recently raised the issue of AI in peer review at a workshop I held with researchers, they seemed primarily excited by the opportunity to receive feedback faster and publish in less time. Publishers couldn’t agree more.
Therefore, the important questions to ask are, in my opinion, how can AI be reliably helpful in the peer review process, what should we be using it for, how does it compare with human review for those tasks, and what can we do to mitigate risks and biases? To start, I would advocate for an honest and transparent audit of the entire peer review process to see what specific tasks AI can help to automate. The publisher that makes the jump and shares their process has the potential to become a real thought leader and arbiter of constructive change for the entire industry.
Peer review — and scholarly publishing in general — faces a lot of challenges, but I think one of the most pressing is the lack of formal recognition for the work that goes into writing a constructive review: one that, when done well, helps make a strong paper stronger and a weak paper publishable. In a world where peer reviewers are increasingly thin on the ground, making sure that reviewers are recognized for their work should be a no-brainer — especially since this shortage is likely to get worse rather than better, given both the continuing growth in scholarly publications and the increasing diversity of scholarly outputs. However, despite the fact that many stakeholder groups in our community advocate for recognizing a wider range of contributions than publications alone, and despite the fact that there are now a number of tools to help us do so, we don’t seem to be any closer to making this a reality.
Initiatives like DORA (the San Francisco Declaration on Research Assessment) and Knowledge Exchange’s Openness Profile have garnered widespread support for moving away from the traditional reliance on publications and, in particular, the journal Impact Factor (at the time of writing, there are nearly 24,000 DORA signatories, including institutions, funders, publishers, and many, many individuals). And yet we have seen little in the way of change… In the past, there was a lack of infrastructure to support recognition for other types of contribution, including peer review, but recent years have seen the introduction of several tools and services that enable this. CRediT (the Contributor Roles Taxonomy) was one of the first, and it has now been formalized as an ANSI/NISO standard, with a standing committee considering how it can be further developed and expanded. Support for peer review in ORCID records, launched in 2015, enables both publishers and third parties such as Publons (now Web of Science Researcher Profiles) to add review information directly, with more or less detail depending on how open the review is.
It doesn’t help that there is still relatively little in the way of formal training for reviewers, and (as far as I can see) even less in the way of feedback to reviewers about the quality of their reviews, both of which will be increasingly critical as reviews of datasets and other non-traditional outputs become more and more commonplace. Lots to tackle — but the sooner we can give meaningful credit where it’s due to peer reviewers around the world, the better!
Haseeb Md. Irfanullah:
As I’ll explain more fully in a Scholarly Kitchen post next week, inequity exists widely in peer review, but it is not being challenged in proportion to its depth and dynamics. So I think inequity in peer review will intensify over the next few years.
Equity in peer review can be described from the perspectives of peer reviewers, authors, and editors/journals. On the first, we may say that inequity is now being addressed by better balancing reviewers’ age groups, experience levels, genders, and geographic locations. We may also argue that reviewers are now well recognized by having their names included with some published papers — possibly one of the best forms of recognition we have at this moment. Capacity-building programs for young researchers are improving their reviewing skills, thus increasing their opportunities to become associated with reputable journals. In many cases, reviewers can follow a career path by becoming editors of journals, thus shaping their disciplines’ knowledge ecosystem — a fantastic, equitable incentive indeed.
But inequity in peer review is quite multidimensional, and sometimes not so obvious. A 2021 study, for example, showed that researchers with lower citation numbers and h-index scores review for journals with low or no Impact Factor, while high Impact Factor journals engage high-impact researchers. This link between the rankings of researchers and those of the journals they review for indicates a clear “reputational divide” in the peer-review landscape. Reviewers’ negative bias against authors from low- and middle-income countries and against female authors is well documented for single-anonymous peer review. But positive bias can also be seen toward “authors with authority”. For example, top urology journals with anonymized review publish fewer articles authored by their own editors than non-anonymized journals do. This suggests that when peer reviewers know that a manuscript’s authors include the journal’s editors, they may be more likely to recommend publication.
One widely accepted way of tackling bias-related inequity is double anonymity. In September 2023, IOP Publishing (IOPP) received the first ALPSP Impact Award for its innovative peer-review model, which combines double-anonymous peer review with transparent peer review, including full disclosure of the whole review process alongside the published article. Interestingly, of the 29 initiatives considered for two ALPSP awards this year, five aimed at improving peer-review-associated systems. Such innovations are happening relentlessly in peer review, where not only diverse but even opposing approaches are invented, piloted, and practiced. On the one hand, we still see strong support for anonymity in peer review; on the other, it is becoming more and more open. Similarly, although we are struggling to find quality, reliable reviewers, we still expect enough enthusiastic reviewers to make a range of peer-review models successful, be it community-based open review, journal-independent peer review, review at different stages of the research cycle (e.g., Registered Reports, Octopus), the emerging concept of co-review, AI-assisted review, post-publication peer review, or the publishing of reviewed preprints.
Most of these models seem to reduce inequity by removing major barriers and concerns for diverse sections of the scholarly publishing community, so the future should be great. But they are also exponentially increasing reviewers’ workloads. I want to call this the “burden of equity”, since every peer-review model is built on the altruism of reviewers.
One of the most pressing issues for the future of peer review is cost. I am taking some liberties here in expanding the definition of peer review to include integrity checks, conflict-of-interest disclosure collection, and author validation, because all of that happens during that magical period of time called “under review.”
Journal staff and/or volunteers are responsible for more than just determining whether a paper is academically or scientifically sound, novel, and appropriate to the journal’s scope. We are now forensically reviewing content for fraudulent data and figures; commissioning biostatistics reviews; enforcing data sharing mandates; validating that authors are who they claim to be; validating that reviewers are who they claim to be; collecting up-to-date disclosures from authors, editors, and reviewers; and investigating readers’ claims of any number of scientific sins.
All of these activities require software tools and people trained to use them and analyze what they find. My concern is that more and more journals will stop investing in these activities as margins get tighter and tighter in an increasingly open access world. It gets harder and harder every year to keep up with the “journals must do this…” and the “journals should do that…” on top of the threats that come from new technology. It will only get worse as the fraudsters have cheaper and more sophisticated tools at their fingertips.
There are certainly big publishers, putting out lots of papers, that don’t appear to be investing in these activities at all. They are playing with fire, and seem to have little concern for the reputational hit they will take when something goes wrong. Those low-cost APCs come at a price to scientific integrity.
For me, it all comes back to the fact that peer review is an unpaid task. (Don’t try the “it’s part of what academics are paid to do” argument — what other job comes with unspecified additional responsibilities that you must carry out for someone who isn’t even your employer?) We’re facing unprecedented challenges around faked research – manipulated images, AI-generated articles, etc. Dealing with this requires a level of expertise and rigor that we cannot guarantee from our volunteer reviewers, whether because they have not been trained in the necessary skills or because they lack the capacity to properly validate methods and results. Review not rigorous enough? How could it be, when you are doing it in your “spare” time? Process too slow? Well, you try finding a qualified, thorough reviewer who can turn it around more quickly. Not enough transparency? Accountability and potential conflicts of interest give an overburdened potential reviewer yet another reason to decline. Lack of diversity in our reviewer pool? I read a recent comment in an EDI audit that “it is a middle-class thing to be able to take on extra work.” That really struck me. Reviewing is such a pivotal part of growing your academic standing and knowledge. How do you take that on if you don’t have the privilege of a well paid and not overly demanding job? Peer review is too heavy a burden, and too vital a part of the scholarly communication process, to be run on a goodwill basis. Recognizing and addressing the unpaid nature of peer review is crucial to sustaining a fair, equitable, and high-quality scholarly publishing ecosystem.
The most pressing issue for peer review in humanities scholarly publishing in the United States and the UK is the slashing of secure academic jobs, and indeed of whole departments and programs, that were the engine of research and publication. Without those positions, there are not only fewer reviewers available, but there is simply less research available to review – or more research being squeezed out of a smaller group. None of the issues around costs, technologies, or processes will matter in the face of this. This crisis is multidimensional and deeply connected to the political-cultural-economic crises we face, and it will affect our societies in ways we are simply not discussing — or not discussing enough.