Previously in The Scholarly Kitchen, I have written on several occasions about a wide range of flaws in our current peer-review systems. In this post, I explain why I think human-dependent peer review has lost its human element, and thus its relevance, and what we can do to install a new system by abandoning the present one. I offer eight points of reflection.

Reflection 1: Reviewer fatigue is worsening.

The publishing industry currently publishes an unreal volume of research, which demands millions of peer reviewers. The painful delays in the peer-review process have long been recognized. These delays are caused less and less by reviewers being slow to return their reports, and more and more by the time wasted getting reviewers to agree to review a manuscript in the first place. There is an increasing level of inertia among potential peer reviewers across the community. The fact that more researchers are declining review invitations, even authors who have recently published articles in the host journal, indicates that peer review is no longer seen as a “good karma” activity by many.

Image: Scientists examining a laboratory flask of pink liquid, symbolizing medical research and analysis

Reflection 2: Peer review is now a compliance issue. It doesn’t nurture a culture of collaboration.

Another layer of delay appears when new reviewers are brought into the process at the revision stage. When the first-round reviewers decline to review a revised manuscript, a new set of reviewers is often invited in. Those new reviewers are seeing the manuscript for the first time, so new comments and requests are introduced beyond those of the first round of review. I’ve seen cases where a revised manuscript was simply rejected by apparently new reviewers, even though the authors had responded to and met all the reviewer comments from the first round. (And appealing to the editor didn’t work!) This is a difficult situation: if the new second-round reviewers are only being asked to ensure the authors have responded to the first round of review comments, isn’t this something the editor could do themselves? I think many are unwilling to take this on because we now run journals like factories, and the peer-review process is seen as a matter of compliance. But we do authors and reviewers a disservice when we forget that the goal of what should be a collaborative process is to improve the paper and to collectively take the discipline forward.

Reflection 3: A researcher can get torn between their author-life and peer-reviewer-life.

All peer reviewers are also authors of journal articles; after all, the “peers” of publishing authors should themselves be publishing authors. Being an efficient, highly praised peer reviewer, however, does not place me above providing the same thoughtful responses to reviewer comments when my own papers go through review. There is often a disconnect for researchers serving in these separate roles, creating a contradiction where they feel free to make outrageous demands when reviewing, yet react angrily when the same thing happens to them as authors.

Reflection 4: The industry doesn’t want to corrupt peer reviewers’ selfless acts with monetary incentives.

Peer reviewers remain the only participants in the publishing workflow who receive zero financial incentive for their services, at almost all journals. Monetizing the journal-reviewer relationship has been widely criticized, since a reward system might undermine the motivational aspect of peer review. One reason is that developing and running a reviewer payment system would significantly increase journal publication costs. But more importantly, such payments would create a conflict of interest. Oddly enough, the publishing industry sees no conflict of interest when an author is asked to pay an APC to publish a paper, or when an institution pays a publisher both to read the latter’s journals and to publish in them, even though these charges/fees are used to pay the editor — the final decision-maker at a journal. It seems the publishing industry strongly believes that peer reviewers are not regular humans who should expect compensation for their services, but rather altruistic superheroes who are humbled to receive badges, certificates, profiles on platforms, free access to papers, or their names published alongside the authors’.

Reflection 5: Peer review undermines authors’ personal research journey.

Pre-publication peer review undermines not only the whole process of manuscript preparation, but also the people involved in it. We know well that a research manuscript is the output of a fairly long process involving numerous steps: starting with conceptualizing a research idea, and ending with all authors approving the manuscript for journal submission after many iterations. But can we truly justify why a couple of subject-matter specialists—who have no knowledge of the researchers’ deep, personal journey—are qualified to judge their manuscript and to recommend whether or not it is worthy of sharing with the world? Why should peer review not be considered a system that undervalues the specialized, dedicated collective efforts of a research team and their respective institutions?

Reflection 6: Peer reviewers have responsibilities, but are not responsible for any dark side of publishing.

I often say that, given the way we treat peer reviewers, they seem like the “Guardians of the Scholarly Galaxy.” Their responsibility cannot be overstated. But the real value of something becomes obvious during a crisis. When a journal article is retracted or even removed, we see how authors, editors, and publishers get blamed for not performing their duties, and their reputations are tarnished. It takes a lot of effort to rebuild a damaged image. But, despite their ‘guardianship’, have we ever seen a peer reviewer named and shamed for being part of a retraction shambles, and blacklisted? Persecuted? This may not be possible under anonymous peer-review systems, but would it change under increasingly open and transparent peer review?

Reflection 7: Is peer review’s importance overrated?

Despite the obvious importance of peer review, the industry treats peer reviewers as “free laborers” fulfilling part of their academic responsibility. Peer review is a scholarly contribution that goes unrecognized in almost all academic/research institutions’ career-advancement criteria (and in funders’ decisions about where to award grants). And publishers are doing nothing to change administrators’ mindsets. In a world where non-peer-reviewed research documents (including preprints) are endorsed by being cited in peer-reviewed journal articles, and are seen as entirely sufficient by some funding agencies, what is the value of peer review? Given all this, can anyone model what the world would look like in the absence of peer review? Here, I am not talking about hypothetical case studies.

Reflection 8: Publishing industry needs disruptions in peer review.

As we have lost the “peer” in peer review, I am not urging us to put a human face back on it. I am instead proposing two options, both free of anonymous peer reviewers.

Option 1: Our publishing future will inevitably be shaped by GenAI. Peer reviewers are already using GenAI tools to review papers. I imagine, and advocate for, rapid improvement of the existing and budding elements of a GenAI-based article review system. At the current pace, we may see some pioneering journals move to a 100% GenAI-based review system by 2026, if not earlier.

Option 2: The above scenario may not be acceptable to many authors, editors, and publishers as there is no human involvement in quality assurance, one of the most important duties of academic journals.

To ensure a human touch in the review system, I propose taking review responsibility away from publishers and giving it to the host institutions and research funders. In this arrangement, institutional research review committees (RRCs) would be responsible for internally reviewing research manuscripts produced by their research staff and ensuring their quality. A final manuscript would then be submitted to a journal along with an endorsement letter signed by the head of the institution (e.g., vice-chancellor, dean, or director), depending on the authors. The journal, if it finds the manuscript within its scope, would do light editing and formatting, communicate minor questions or clarifications to the corresponding author, finalize the piece, and publish it. The endorsing authority/responsibility could also lie with the respective research funders.

This arrangement should enhance both research and publishing integrity by giving more responsibility to the host institutions, which enjoy many benefits (e.g., university rankings and research grants) once a paper is published with their affiliation. When multiple institutions are involved in the research, the lead author’s institution would endorse on behalf of all partners, as it does for ethical clearance. This arrangement (i.e., internal review followed by institutional endorsement) can raise the issue of bias. I would argue that if we believe all other institutional committees (e.g., ethical review committees, procurement committees, recruitment committees, examination committees, and thesis committees) honestly perform their duties, why can’t the same be done for research papers? Similarly, funders’ involvement in endorsing research ensures their post-research responsibility.

In the last few years, we have tried numerous innovations in peer review, despite knowing too well that it is an unjust system. Still, we feel no urge to dismantle it, only because we still have a sufficient supply of free workforce—the peer reviewers. But when will we be true to ourselves and remove the ‘exploited peers’ from ‘peer review’?

Haseeb Irfanullah

Haseeb Irfanullah is a biologist-turned-development facilitator, who often introduces himself as a research enthusiast. Over the last 26 years, Haseeb has worked for different international development organizations, academic institutions, donors, and the Government of Bangladesh in different capacities. Currently, he is an independent consultant on environment, climate change, and research systems. He is also involved with the University of Liberal Arts Bangladesh as a visiting research fellow of its Center for Sustainable Development.

Discussion

19 Thoughts on "Peer Review Has Lost Its Human Face. So, What’s Next?"

A great piece and a timely assessment of the contemporary challenges in scholarly knowledge production. We are currently at a crossroads where past traditions and future challenges converge. Without new insights, disruptive/revolutionary innovations cannot be developed and commissioned. In this context, your contributions will shed light on future-oriented current actions for authors, publishers, and related corporate/financial bodies.

I thought I’d get the ball rolling on discussing why Reflection 8, Option 2 is a really bad idea. There is a reason why journals exist, and that’s largely to provide a neutral, third party review of research results and claims. Allowing researchers (or their employers) to review their own work removes that neutrality from the situation. How harshly are you going to criticize the holes in your own work or the work of the person in the room next to you? How willing would research institutions be to damage their own reputations (and potential future grant funding)? There’s a reason why funder-owned journals (like eLife) or university presses don’t just publish internal work from university employees. It’s because it would lack all credibility.

We know how universities would act in such a scenario, because we see it all the time through the press releases they put out to accompany the announcement of new research results. These press releases are frequently misleading and make overly broad claims as to what was accomplished. This approach would largely turn the research literature into a series of advertisements for universities.
And if the journal exists to simply rubber stamp their approval on a submitted paper, then why have a journal at all? Why would anyone need Nature if it’s just a collection of stuff already approved by universities and no judgements are made on it? Why wouldn’t the universities just put out the papers themselves? What work is the journal actually doing in this scenario (and who is paying for it)?

Let’s dig deeper into those finances and who would pay for all this review (since much of this post is about the “free labor” of peer review). Dimensions lists some 35,000 papers with at least one author with a Harvard affiliation in 2024. Let’s assume most of those are not the corresponding author, and suggest that maybe 20% are. That means 7,000 papers must be reviewed and discussed by Harvard staff. Roughly 20 papers a day, every single day of the year. Who is doing that work, how many new employees will have to be hired full time to keep that going? Or are we going to pull researchers away from their research to do the work? Then there’s the overhead and management of the work, tracking it, making sure it gets done, sending the papers out to journals, etc. Who pays for all that?

And what happens to researchers at less well-funded universities that can’t afford that infrastructure? And what about researchers that aren’t at universities? How can a commercial researcher provide the same review? An independent researcher? Someone at a small teaching school without qualified peers to do the review? Isn’t this discriminatory and inequitable, helping the rich researchers at rich universities thrive while leaving everyone else unable to publish?

There is a reason why it is more efficient to centralize the infrastructure needed for the research literature, and though many complain about the costs, it’s vastly more efficient than recreating the same wheel at every single research institution. Also by outsourcing the work to neutral third parties, readers can assume (mostly accurately) that the work was given an unbiased review, free from conflicts of interest. What you’re proposing here would be more expensive and less useful.

As an effectively independent researcher, and one who has lost a university affiliation for incomprehensible reasons, I endorse your comment wholeheartedly. And as a regular reviewer for specialist journals, I do not think the human face has vanished within specialist, non-glamorous fields. Most often, my reviews centre on improvements, based on specialist knowledge which no one in my (ex) university possesses. The object is not simply to accept or reject.

It was great to read—very timely. Being in the role of looking for reviewers, I found this text very close to what I feel myself. I’m not sure that 100% AI-based peer review could work. AI can assist with certain steps, as it already does in writing papers, but we still need humans to have the final say.

Leaving peer review to institutions is quite risky, as not all institutions maintain the same standards of scientific quality, research ethics, etc. This is especially sensitive in small communities or countries with some form of authoritarian regime.

Better recognition probably needs to be in place. For instance, institutions could acknowledge peer review as real work and count the hours spent toward FTEs—just as they do with published papers and other academic duties.

If AI-enhanced tools were used to support the review process—eliminating the need to check technical aspects—reviewers could focus on the core ideas and methodology of the paper. Combined with appropriate incentives and the integration of peer review into FTE calculations, this would make a much stronger case for accepting review invitations.

Here’s a thought: what about universities paying their researchers for doing peer review? If publishers paid peer reviewers, the cost would just get passed back to universities via higher APCs and/or charges to libraries (via all the different types of OA agreements and subscriptions). The money would have to come from somewhere! Therefore, if universities compensated academics for peer reviewing, it would cut out the middleman and also cut admin costs (and any markup of costs likely added by commercial publishers).
Hang on a minute… maybe universities are already paying for peer review, with the assumption that this is part of a researcher’s job?

Universities do pay their researchers to do peer review. It’s covered in the “service” portion of the “Research, Service, and Teaching” requirements. And when all those are done adequately, the researchers are paid with tenure, which brings a large permanent increase in their base salary – and once again if they reach full professorship.

In most cases it’s not likely that there will be other experts at the same institution who can review the papers produced there, especially at smaller and lower-ranked institutions. In some fields things are highly specialised, and even top institutions might not have relevant people.

Apologies for a cranky comment… but while these are many known challenges in peer review (grossly generalized here), be careful of unintended consequences. Like most – err, almost all – other critiques, this piece views peer review and scholarly publishing as only for scholars. It is not. Peer review and the use of the “peer reviewed literature” have critically important societal benefits and are codified in US and international legal (Daubert v. Merrell Dow Pharmaceuticals), regulatory (many), and advisory systems (e.g., IPCC and many more). These require use of the “peer reviewed scholarly literature” to justify and assess expert testimony, laws, and regulations, and to formally advise governments at all levels. These uses benefit science too and are part of a shared contract with society that provides both benefit and trust. Note that these and other uses depend on the “body of peer-reviewed research,” not on one anti-vax, retracted, or “global warming is caused by pirates” paper that might be published. These are but a few (codified) uses in and by society; there are many more, and society funds and supports science. Sure, there might be other solutions to these uses, but this contract benefits both sides, and we should think seriously before upsetting this apple cart in ways that might affect this symbiotic relation.

Also, many agencies and companies (yes, they produce peer-reviewed literature too, so +1 to Phil’s comment) already do internal review, and have for a long time. All the agencies I’ve dealt with welcome the publisher’s review, including for the above reasons (they recognize the societal contract very well).

To be constructive, the biggest miss in peer review today, and the largest challenge imho, is that increasingly the most important output of research and science is usable data (really FAIR, not just sometimes FA at Figshare or their ilk), and increasingly these data are not reviewed or curated well (which involves review). This miss is growing exponentially and greatly risks garbage-in/garbage-out results that will do more harm than fraudulent text. What might be needed is not more reviewers of the “paper” per se, but partnerships with scholarly communities through societies and curated data repositories, shifting resources from, say, copyediting text to these goals and partners, and other fundamental changes in the format of outputs (e.g., notebooks vs. PDFs).
thanks!

So many thoughts in response to this post, but I will focus on this from #5.

“But, can we truly justify why a couple of subject matter specialists—who have no knowledge of the deep, personal journey of the researchers—are qualified to judge the latter’s manuscript and to recommend if it is worthy of sharing with the world or not? Why should peer review not be considered a system that undervalue the specialized, dedicated collective efforts of a research team and their respective institutions?”

Science is fact. Scientific papers report facts. Reviewers, selected to review based on their professional experience, examine those given sets of facts with care and precision. Reviewers giving of their own time are committed, dedicated, and honorable people who care first and foremost about advancing science and improving healthcare.

I am at a loss to understand the above quote.

Science is an attempt at arriving at facts. Reviewers, no matter how professional and sincere they may be, are still susceptible to biases. Introducing AI tools to augment these reviewers may help to strengthen the robustness of the overall review process.

The statement “reviewers giving of their own time are committed, dedicated and honorable people” is highly subjective and cannot be accepted in its totality. Anyone who has worked on a journal board can easily attest to the contrary with many examples.

You raised a few great pain points in peer review.

However, the AI review has massive structural fail points, which you already mentioned. And the only thing more terrifying is your second approach at the end. Putting this in institutional hands would not be wise.

Paper mills are a big enough issue as it is, even for well-run journals. Allowing universities to govern their departments’ review would cause lots of ethical issues; normally, peer reviewers are discouraged from even being in the same country as the authors, let alone the same university or department.

Back to the drawing board.

This is a provocative post. Much to agree with. And to disagree with. Hard to tell whether this sweeping condemnation of present peer-review practices is based on experience or on repeating internet complaints. My experience is much more positive, coming from an environmental science perspective, where I switch hats between author, reviewer, and editor roles. Yes, it takes too long, the difficulty in finding reviewers is maddening, some reviewers are excessively critical, and some are cursory. But most are sincere, timely, and constructive. Our culture undoubtedly differs from highbrow biomedical or glam journals, but most articles that go out to review get a rejection or a decision to revise after one round of review. Editors are encouraged to decide themselves whether revisions are responsive and appropriate rather than sending them back around to external reviewers and adding weeks. I probably send a quarter back around for round 2 – usually because there’s some specific point of contention between a reviewer and the authors.
The idea of relying on institutions and funders doing the reviews, with publishers only concerning themselves with publishing mechanics? It would be an interesting trial. As an institutional author, I’ve long been subject to pre-submittal review and approval. That’s its own special experience. My institution approaches these reviews with great exuberance and sincerity, funding a team of senior scientists who do nothing but, with hierarchical controls to enforce it. The thought that this could be the norm across institutions seems nonsensical – it would just be a rubber stamp. And place funders in an approval role? In the environmental sciences, funders are often clients with a financial stake in the research outcome and publication. I doubt this is unique to environmental science.
Hope your piece provokes further debate.
(and I have a quick-read rant criticizing peer review practices in environmental science at https://academic.oup.com/etc/article-abstract/44/2/318/7942965)

Chris Mebane’s article is well worth a read. Not a rant at all! It’s good, thoughtful, well-argued, with helpful suggestions. I agree with a lot of it. He puts forward six arguments for why he feels “double-blind peer review practices increase vulnerability to scientific integrity lapses over more transparent peer review practices”. I was able to access it a few months ago but it now sadly seems to be behind a paywall.
The article was also the subject of a lively and interesting discussion between Dan Quintana and James Heathers on the Everything Hertz podcast (#188, 30 Jan 2025). A heads up – the first few minutes don’t reflect what they actually think – another commentator and I on Bluesky didn’t get this – so stick with it for a great discussion!

Yikes. Point 8 is a painful one. No. It gatekeeps scholarly publishing only to highly productive research institutions, and shuts out independent scholars entirely. Internal reviews eliminate any possibility of unbiased reviews. And it also laughs at the notion of actual “peer” review: generally universities do not have multiple individuals with the same specialty, capable of reviewing each others’ work with any rigor.

Thank you all for sharing your thoughts and responding to some of my reflections. I understand that Option 2 (Reflection 8), as I’ve proposed it, has many questions to answer. Your points touch not only the peer-review system, but also other elements of the publishing ecosystem. I would like to share my thoughts on some of the points frequently mentioned above.

1) Institutional review is biased; and 2) institutional standards/capacities vary: I think that if proper institutional review is installed (despite Chris Mebane’s experience), it won’t be biased, since institutional reputation will be at stake if flaws are discovered after publication. And consider how much ‘randomness’ and ‘uncertainty’ peer review already contains: an author’s submission to a journal is often random among many potential journals; an editor’s invitation to reviewers shows the same randomness, as does the latter’s agreeing to review; and finally, the ‘acceptable level of manuscript quality’ varies from reviewer to reviewer and journal to journal. In that sense, peer review is nothing but a roll of the dice (https://scholarlykitchen.sspnet.org/2024/10/24/scholarly-publishing-the-elephant-and-other-wildlife-in-the-room/)! I believe an organized institutional approach to review would be better than this randomness, since it would address both research and publishing integrity. Indeed, institutional standards vary across the globe, but the same is true for publishers and their journals. That’s why we see that the same publisher’s highest APC is more than 50 times higher than its lowest (https://scholarlykitchen.sspnet.org/2024/10/24/scholarly-publishing-the-elephant-and-other-wildlife-in-the-room/).

3) Internal review cannot handle the high volume of research outputs; and 4) an institution does not have enough experts to review: While thinking through the terms of reference of the research review committees (RRCs), I did not expect them to conduct “peer” review as we define it now (nor to act as a rubber stamp). I believe that, in the long run, we should have an institutional research system that allows only quality manuscripts to leave the institution for the journals. And if we are worried about handling the ‘volume’ of manuscripts, we have to ask ourselves: what does that ‘volume’ actually mean? A piece of research is out every six seconds (https://wordsrated.com/number-of-academic-papers-published-per-year/)! What is the ‘value’ of such research?

5) Journals will just rubber-stamp their approval (and will lose their authority/reputation): Peer review, as we practice it, is not the only quality-assurance activity a journal manages. When a journal puts 20% of its submitted manuscripts through the peer-review process and rejects the other 80%, it has already made a preliminary judgment that those 20% of manuscripts are worth publishing in the journal. I know not many journals can afford to check manuscripts rigorously before forwarding them to peer reviewers (as BMJ does: https://www.bmj.com/about-bmj/publishing-model). But still, no editor wants to waste a volunteer peer reviewer’s time by forwarding an unworthy manuscript with a great chance of rejection. In my proposed Option 2, journals will still choose what to publish and what not; they just won’t rely on peer reviewers, but on their editorial teams.

6) Academics are already paid by their institutions to peer review; and 7) peer reviewers should be recognized more for their contributions: Researchers conduct research and publish papers out of obligation to their institutions, and this is reflected in their employment contracts/job descriptions (JDs). Unless peer review is mentioned in the JD, we cannot assume it is their ‘academic responsibility’. I checked a few reputed universities’ promotion criteria in which peer review is mentioned (https://scholarlykitchen.sspnet.org/2023/09/29/ending-human-dependent-peer-review/). But is it also mentioned in the staff’s contracts/JDs? What about the tens of thousands of other universities all over the world? We hear from researchers that they are struggling to publish papers. Have we heard researchers complaining that they are not getting enough papers to review? ‘Academic responsibility’ has different faces. If we institutionalize peer review as a ‘true’ academic responsibility, then questions will follow about i) the quality of our peer review, ii) which journals we are reviewing for, iii) the ranking of peer reviewers, and so on. It would be interesting to see how that discussion evolves.

8) Independent/non-institutional researchers will be excluded: I know a case of a couple of independent researchers who, while submitting a manuscript to a journal, had to mention the names of organisations that had nothing to do with their research (i.e., did not cover their time or research expenses), only because the journal would not accept a manuscript without an institution. This again shows that publishers/journals treat institutional affiliation as crucial for authors/researchers.

No doubt peer review needs disruption. Brooks Hanson suggested a transformative change in his comment. Look forward to other ideas.

This response misses the main point even more than the original article and reveals a misunderstanding of the nature of universities and research and being a professor. Here are a few refutations:

1. Regarding “author’s submission to a journal is often random among many potential journals”: not historically true (it may be true in today’s unhygienic explosion of low-quality journals). Authors should pick journals for a number of specific reasons, including fit in terms of topic, quality of the journal, speed of reviewing, time to publication, institutional acceptance of the journal, desired length of articles, and fees required. The fit in terms of topic should be excellent, or the paper desk-rejected. The rest are matched to the author’s needs. There should be nothing random in selection. (Are scholars submitting randomly? Yes. But that’s a problem with solutions outside the narrow focus you want to keep on fixing one narrow slice of the crushing problems in academia.)

2. You are a scientist, and you seem remarkably blinkered to any other research happening across campus. Do you really think English faculty are capable of assessing anything about a biologist’s research and methods? A quant sociologist can’t even evaluate their departmental colleagues’ qualitative work. Can anyone but another experimental physicist evaluate experimental physics? And how many of those are bouncing around a tier-2 campus? One. Exactly one.

3. You know a couple of independent researchers who had to con some organization into affiliating them in order to publish. (This one steams my corn.) Do you think that’s a good idea? The independent local historian working at the local town historical association doesn’t have the resources you are demanding, and really does not need them to produce quality research. And your unemployed anthropologist still needs to produce research if they are going to find a position, and they are completely capable of producing high-quality research without an institution. Your response to the criticism of this simply doubles down that you are intentional about your gatekeeping and see nothing wrong with it. That may perhaps make sense in a lab-based field, but not anywhere else.

4. Tenure obligations are not specific in any way you imply. But Service, which is short for Service to the Profession (as well as the department and the college), absolutely includes providing peer review. Just as it includes serving on doctoral committees and being involved in shared governance. Being *faculty* is defined by combining research with these service and teaching obligations. Without them, a person is just a researcher and not a participating member of a faculty.

To me the proposals and the response to them reeks of deeply internalized STEM bias that utterly ignores the existence of any other scholar or researcher in any other position. But even within bench science these ideas and understandings of modern universities are impossibly off the mark.

Suggestion 8 is simply awful.
