Image: Loading Mail onto Railway Post Office Car (Smithsonian Institution via Flickr)

When multiple scientists come up with similar solutions simultaneously, it may be time to take notice. In this week’s issue of Science, a letter to the editor titled “Battling the Paper Glut” proposes a solution to the problem of finding competent reviewers:

Journals should demand that for every paper submitted, an author provide three reviews of other manuscripts

If it sounds familiar, it is. The “golden rule” of peer review was examined just last week in the Scholarly Kitchen, as part of a fully formulated proposal to privatize the peer review system (see: “Privatizing Peer Review — The PubCred Proposal”).

In my review, I wrote that the PubCred bank solution was based on a tentative premise (that editors are having an increasingly difficult time finding competent reviewers), which in turn rests on an even more tentative premise (that competent reviewers are overloaded with requests while others are unwilling to shoulder their fair share of the reviewing load).

It was time to hit the books and find out if these assumptions had any basis.  Luckily there are a few peer-review experts out there and a number of exceptional studies:

  • In a 2007 survey of three thousand academics conducted for the Publishing Research Consortium, Mark Ware reports that the most active reviewers were indeed overloaded. While 90% of responding authors claimed to be active reviewers — reviewing an average of eight manuscripts per year, regularly for three to four journals and occasionally for an additional four — some reviewers clearly took on much more of the load. The most active reviewers claimed to review an average of 14 manuscripts per year, and 20% of review requests were declined. Older and more senior academics claimed to review more papers than their younger colleagues.
  • The 2009 Peer Review Survey of more than 4,000 authors conducted by Sense About Science reports that reviewers, on average, refuse to review only two manuscripts per year, with 39% accepting all requests and only 7% rejecting more than five requests. While there is clearly a group of reviewers rejecting many requests, the number of “overloaded” reviewers appears to be quite small. These statistics do not seem to suggest a “crisis” in peer review.

To boost participation and improve the timeliness of review, all three studies (the third, by Tite and Schroter, is quoted below) point to similar conclusions:

  • Provide reviewers with free access to journal content
  • Acknowledge reviewers periodically in the journal
  • Provide reviewers with feedback on the outcome of the review decision
  • Give reviewers feedback on the quality of their review
  • Reward the best reviewers with appointments to the editorial board

Other incentives include waiving page or publication charges for reviewers or providing in-kind forms of gratitude, such as a swell reception at a conference.

Reviewers generally felt uneasy with direct financial compensation for their time, especially if it required authors to pick up the tab. Most academics view reviewing as part of their academic duties and not as a form of moonlighting. Tapping into the academic reward system seems key to improving participation. As Tite and Schroter conclude in their study:

Reviewing should be formally recognised by academic institutions, and journals should formally, and perhaps publicly, acknowledge the contribution of their reviewers.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

15 Thoughts on "Are Peer-Reviewers Overloaded? Or Are Their Incentives Misaligned?"

“Provide reviewers with free access to journal content
Acknowledge reviewers periodically in the journal
Provide reviewers with feedback on the outcome of the review decision
Give reviewers feedback on the quality of their review
Reward the best reviewers with appointments to the editorial board “

I don’t see #1 happening without an increase in the publisher’s costs, and that would just get passed on.

#2 could be done at nearly zero cost.

#3 would be a fun can of worms to open. If reviewers see that their reviews are being ignored, they would probably stop reviewing for that journal, wouldn’t they?

#4 Is this serious? Who would provide a review on a review? How long until there would be a cry for a review of the review of the review?

#5 Great idea.

Don’t most journals do #3? We certainly let peer reviewers know the final outcome of the papers they’ve reviewed.

In my field, many, though not all, journals do indeed do #3. Indeed, many journals don’t just let referees see the final decision letter from the handling editor, but also let the referees see the other reviews. Never heard anyone complain about any of this.

#1 is of very little relevance in most cases – most reviewers will already have access to journal content through their institutions.

#2 is done already by a few journals. In my experience the only outcome is suspicious looks at meetings by colleagues who seem to think that you must have been the killer reviewer for their last paper…

#3 and #4 are likewise already done by many of the top journals, simply by providing access to the other review reports and, in some cases, the editorial decision. The reviewer sees whether his/her report is in line with the other (anonymous to him/her) reviewers’ opinions, and whether they identified the same strengths and weaknesses in the paper. This is de facto feedback both on how much the review contributed to the editorial decision and on the perceived quality of the review.

#5 is likewise not a new idea, and has already been done for some time at quite a few journals.

Thanks for digging out these studies, Phil. Seeing some real data makes the alleged “burden” seem even lighter than it appeared in my small informal survey here. My numbers may have been skewed a bit high, as I was talking mostly to high-profile researchers at top universities and institutions.

I do think today’s researchers are overloaded, but peer review is not to blame. The real burdens come from the intense demands of the career: generating results, finding funding, teaching responsibilities, and endless committees. Eight papers a year (0.66 per month) doesn’t seem an enormous workload.

The Publishing Research Consortium study and the Sense About Science study both show that the PubCred approach is misguided, and that the rate at which researchers turn down review requests is fairly low. As noted here, it’s a solution in search of a problem.

I also challenge the authors of the Science letter to provide data to support the statement, “The top journals now are flooded with numbers of manuscripts beyond most editors’ capacity to handle.” I know we have a lot of editors who read this blog, are any of you in over your heads?

I do think the work done by peer reviewers should be acknowledged, but as you noted in your previous post on the subject, knowledge is not democratic. The system is designed to find the most competent and relevant reviewers, not to spread the burden evenly over everyone. That means some researchers may have a heavier workload, others may be left out altogether. It would be nice for institutions to acknowledge the hard workers, but I don’t think they should be punishing those left out.

David, I have to admit I’m a little surprised by your confidence in the current state of the peer review system, and in its resilience going forward. I question whether the studies Phil cites provide clear-cut support for your confidence. The studies Phil cites are indeed reassuring in a number of respects, but they also give cause for concern in others. For instance, the fact that a small fraction of reviewers do a large fraction of the reviewing arguably makes the current system rather fragile. There are indeed good reasons why we might want some people to do more reviewing than others. But that doesn’t change the fact that if those people start changing their behavior in the face of strong incentives and increasing pressures on their time (pressures that indeed have many sources), then the system could well start to break down.

On a personal note, I also wish you would stop implying that those who disagree with you are uninformed and naive, as if the correctness of your position were obvious to any thinking person. As I stated above, the studies Phil cites give many reasons to feel confident about the current state of the system. But they also indicate some existing problems, and some features of the system that make it vulnerable to breaking down. Combine those existing problems and vulnerabilities with strong incentives to submit and lack of incentives to review, incentives which have become more stark as the “publish or perish” culture has strengthened. Those incentives alone provide a good reason to wonder about the current state of the system and its resilience going forward. And combine all that with the increasingly widespread *perception* of problems in the system. Where there’s smoke, there’s not necessarily fire–but there may well be, and there’s definitely good reason to go check, and make sure one has a fire extinguisher handy. So in light of all that, please explain to me why Owen and I, and the many others who have discussed possible reforms to the system, are “naive” or “misguided” to do so (you’ve used both those words in your posts and comments). There’s a difference between debating the extent of problems in the current system and their likelihood of getting worse, and stating or implying that there’s no debate worth having and that anyone who thinks there is is “naive”. I’ve just made an argument why Owen and I and those who agree with us are not naive. If you care to make a counterargument, I’m all ears.

I’m not sure I’d characterize my point of view as one of absolute confidence going forward. I am, however, looking for concrete evidence of any current or impending “crisis”, and very little has been offered. Every single report I’ve seen shows an overwhelming level of support for peer review and little perception of any ongoing “crisis”.

I can’t find any instance of my using the word “naive” in a posting or a comment. Please let me know where I did so and I’ll try to better explain my position.

As for “misguided”, I use that term because I think you have an admirable goal in mind (improving the peer review system), but your PubCred system is not capable of achieving that goal. Your proposed system 1) is not compatible with the best practices of peer review, and 2) places, I think, far too much emphasis on the value of being a peer reviewer. This was covered in my recent blog posting here, but to briefly summarize:

1) Knowledge is not democratic, and to work properly, editors need to find the best qualified, most knowledgeable reviewers for any paper. Many of the flaws seen in our current system are due to failures to find qualified reviewers. Your PubCred system asks editors to spread review duties evenly across all researchers, rather than seeking out the best reviewers for a particular paper. That, in my opinion, leads to a lower quality review system.

2) The PubCred system requires a researcher to perform peer review if he is to publish his own work. This seems a backwards set of priorities as far as I’m concerned. Peer reviewing is not as important as conducting research. Take away the research and there’s nothing to peer review. The reverse is not true, though: take away peer review and research still has value (it’s just harder to sift through). Your proposal puts peer review ahead of research.

It also concentrates power in the hands of the editors who run the peer review process. An editor, by choosing peer reviewers, will essentially be choosing who is and who is not allowed to have a career as a scientist. If I don’t like you (or have never heard of you), then you get no peer review assignments, and thus your career is over because you cannot publish your research.

The punitive nature of the PubCred proposal is what’s most problematic about it (though paying for the system is also a major stumbling block). Make it a system for positive reinforcement, a +1 added to any researcher’s score, a cherry on top of the sundae built from their research results, and it’s much more appealing. Leave it as a way to drive people out of careers doing research and I can’t support it.

“Reward the best reviewers with appointments to the editorial board”

If that’s the reward, I’d hate to see the punishment!

Do you really think that the in-demand reviewers are hurting for editorial board appointments?

It’s interesting that reviewers have compunctions about being paid to review articles, but have no compunctions about accepting monetary payment for reviewing book manuscripts. If they review 6 to 8 articles a year, they are reviewing the equivalent of at least one monograph. The costs involved in reviewing books do indeed get covered as part of a publisher’s general overhead expenses and are one among many factors that go into determining the prices at which books get sold.
