The theme of Peer Review Week back in 2016 was Recognition for Review, “exploring all aspects of how those participating in review activity – in publishing, grant review, conference submissions, promotion and tenure, and more – should be recognized for their contribution.” In the subsequent two years, the idea that peer review is important and valuable has largely been spread to, and accepted by, the research community. This year’s Peer Review Week seemed to take that as a given, yet everything I read seemed to gloss over the practical details of how credit for peer review might actually work. No progress seems to have been made on the questions of how efforts should be measured, who will monitor them, and to whom they will matter. Back in 2015, I wrote a Scholarly Kitchen post, “The Problem(s) With Credit for Peer Review”, asking those questions, and I felt it was time to revisit them.
The original post pointed out the psychological and motivational changes that happen when one shifts an activity from volunteerism to a required or commercial realm. It noted that being asked to peer review for a journal is largely outside of a researcher’s direct control, and so any rewards system based on such activity would concentrate significant power in journal editorial offices and publishers. But perhaps most importantly, it asked the question, “who cares?” — who are we asking to recognize peer review efforts, and what rewards do we expect from them?
In the intervening years, we’ve seen Publons grow and be acquired by Clarivate. At the same time, there’s been no widespread indication by funders or institutions that the information collected has been used in evaluating funding or job candidates. One hears anecdotes about individual researchers who have listed their peer review work on their CV or applications and received job offers or tenure, but I have yet to see any funder or university explicitly declaring that peer review experience is an important criterion for hiring or funding decisions.
Much of Publons’ recent focus seems to have shifted away from the promise of recognition and toward building much-needed tools to help journal editors find peer reviewers. Perhaps this is something of a business pivot from Clarivate. They still need to drive the idea of “credit” to get participants to feed them data, but the real utility of that data may be elsewhere, in particular the enhancement of Clarivate’s ScholarOne peer review system in an increasingly competitive market. The recent acquisition of Editorial Manager by Elsevier opens up a new front where the two companies seem to be squaring off against one another. Publons can no longer be seen as a neutral third party service in the data/workflow sphere.
The two most common suggestions one hears from researchers are that either 1) peer review should be explicitly included in the job requirements of researchers, or 2) peer reviewers should be paid for their service. Many researcher contracts include vague requirements for “service” but don’t specifically define what that means. It could mean serving on thesis committees or mentoring students, but it could also mean serving the community as a peer reviewer. But let’s not make assumptions or force anyone to guess — would it make a difference if performing peer review was clearly stated as an expectation in one’s contract with a research institution?
This would codify the argument that is commonly made — peer review is part of the job of being a researcher, and if you’re earning a salary, then you’re already being paid for these activities. Would this be a welcome opportunity for researchers to demonstrate their contributions to the community, or just another burden added to the already lengthy list of hoops through which researchers must jump? I could envision a tenure or hiring committee giving some small level of favor to a candidate who is seen by leading journal editors as an expert in the field. But does this then harm candidates who aren’t as skilled at networking, or those who choose to spend all of their time on their own research? Does it bias evaluation against lower-profile researchers at lesser-known institutions, or those from developing countries, who studies show are less likely to be invited to peer review?
Regardless, any such credit is only going to be a tiny fraction of what’s offered for one’s own original research or teaching activities. No sane committee is going to care as much about your beautifully argued critiques of the work of others as about the work you’re able to accomplish yourself. And if we struggle with getting evaluators to read the actual papers (rather than relying on the Impact Factor), then how likely are they to spend time reading peer reviews?
Direct payment for peer review is an approach with its own drawbacks, mainly adding significant costs to a system that many already consider too expensive. How much should a reviewer be paid? If I recall correctly, the Company of Biologists used to pay reviewers $25, and stopped the practice at the request of those same reviewers — the money offered wasn’t worth the paperwork and hassle it took to receive it. One also has to look beyond the actual payments themselves. New systems would need to be built and maintained to facilitate and track those payments. The same goes for any sort of payment done as credit toward a future open access fee or color charge. How will the journal’s submission system remember what reviewing you’ve done in the past and what you’re owed? Either method would mean a major (and costly) revamp of existing infrastructure.
Paying peer reviewers would mean higher subscription prices or higher article charges for open access journals. It is possible that the largest commercial publishers could absorb those increased costs into their significant profit margins and keep costs flat. But this would not be possible for most not-for-profit and independent publishers who rely on much smaller margins. The unintended consequence here is further consolidation of the market, increasing the power of the largest commercial players.
Neither suggested route seems satisfactory, and so we remain in limbo. We know peer review is important, and we know that some sort of credit should be given for the hard work that goes into it. We’ve seen a host of companies try to monetize peer review from a variety of angles (e.g., here, here, here, here, here, and here), yet all seem to have failed.
We now have systems where that work can be tracked and verified. But we still don’t know what to do with it and we still haven’t answered the most important question: what is it really worth?