“It takes a researcher 3-5 hours to review a manuscript,” editors quip, “whether you give him/her a week or six months!”

I’ve heard many variations on this joke, but the principle remains: most people are motivated by deadlines and need periodic reminders to meet them.


One academic discipline that could use stronger motivation is economics, where researchers commonly wait months, if not years, for their manuscripts to be published (Ellison, 2002). In the early 1990s, economists were toying with the idea of paying reviewers to speed up the process (Mason, 1992). At present, the American Economic Association is one of the few publishers that pays reviewers for their reviews. The B.E. Journal of Economic Analysis & Policy continues to use a banking model in which researchers, by reviewing other manuscripts, accumulate credits they can apply to have their own manuscripts reviewed. This is similar to the PubCred banking model proposed for ecology.

With the exception of the BMJ, few editors feel comfortable experimenting on their own journals, relying instead on anecdotal evidence and personal preference to guide journal practices. When it comes to publishing, everyone believes they are an expert.

In a recent paper, “How Can We Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economics,” Raj Chetty, Harvard economist and editor of the Journal of Public Economics, decided to experiment on his own journal, testing whether shortened deadlines and cash incentives increased the speed and quality of peer reviews.

Over a period of 20 months, 1,500 of the journal’s referees were randomly assigned to one of four groups:

  1. A control group with a 6-week (45-day) deadline to submit a referee report
  2. A group with a shortened 4-week (28-day) deadline
  3. A cash-incentive group rewarded with $100 for meeting the 4-week deadline, and
  4. A social-incentive group in which referees were told that their reviewing times would be posted publicly

Chetty reported that shortening the reviewer deadline from six to four weeks reduced median review times by 12 days (from 48 to 36 days). You’ll note that most reviewers still miss their deadline, but much of the speed-up occurs in the last week before it. Writing on the journal website, the authors explain:

If you shorten the deadline by two weeks you receive reviews two weeks earlier on average. In fact, we noticed that whatever timeframe you give, most people submit their review just prior to the deadline.

Providing a $100 cash incentive for submitting a report within four weeks further reduced review times by 8 days. In addition, there was no evidence that reviewers in this group reverted to slower reviewing practices once the cash incentive stopped.

The social-incentive treatment reduced median review times by just 2.5 days, a smaller effect than the other treatments. However, it had more of an effect on tenured faculty, who are less sensitive to deadlines and cash incentives, Chetty explains.

Their Kaplan-Meier plot reveals the differences in review times among the four groups. You’ll note how the cash-incentive group rushes to submit its reviews just before the deadline: nearly 50% of reviewers in this group submitted their reports between the reminder email and the deadline.

Chetty, Saez and Sandor (2014, forthcoming). How Can We Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economics
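For readers who want to see how such a plot is built from a journal’s own data, here is a minimal sketch of the Kaplan-Meier estimator applied to review-completion times, written in Python with numpy and matplotlib. The group labels, deadlines, and simulated submission days are hypothetical stand-ins, not the study’s data or code.

```python
# Minimal Kaplan-Meier sketch for review-completion times.
# All data below are hypothetical stand-ins, not the study's data.
import numpy as np
import matplotlib.pyplot as plt

def kaplan_meier(durations, completed):
    """Return (times, survival), where survival[i] estimates the fraction
    of reviews still outstanding just after times[i] days."""
    durations = np.asarray(durations, dtype=float)
    completed = np.asarray(completed, dtype=bool)
    times = np.unique(durations[completed])          # distinct submission days
    at_risk = np.array([(durations >= t).sum() for t in times])
    events = np.array([((durations == t) & completed).sum() for t in times])
    return times, np.cumprod(1.0 - events / at_risk)

rng = np.random.default_rng(0)
# Hypothetical groups: a 6-week control and a 4-week cash-incentive group,
# with submissions clustering shortly before each deadline.
groups = {
    "6-week control": np.clip(rng.normal(45, 10, 300), 5, 120),
    "4-week + $100 incentive": np.clip(rng.normal(27, 6, 300), 5, 120),
}

for label, days in groups.items():
    t, s = kaplan_meier(days, np.ones(len(days), dtype=bool))
    plt.step(t, s, where="post", label=label)

plt.xlabel("Days since review invitation")
plt.ylabel("Fraction of reviews not yet submitted")
plt.legend()
plt.show()
```

With no censoring (every invited review is eventually submitted), the estimate reduces to the empirical fraction of reviews still outstanding on each day; censored observations (declined or never-delivered reviews) are what make the Kaplan-Meier machinery worthwhile on real editorial data.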

If you’re worried about reviewers rushing to send a few perfunctory comments in exchange for some quick cash, the researchers reported that shorter deadlines did somewhat reduce the length of the submitted reports, but had no effect on the quality of the reviews (as measured by whether the editors followed the reviewers’ recommendations).

The researchers also looked at whether changing the incentives at one journal affected the performance of reviewers for other journals. If one journal offered cash incentives, would referees become less willing to review for a competing journal, for example? Restricting the comparison to 20 other economics journals published by Elsevier, Chetty found no detectable effect on referees’ willingness to review manuscripts; nor did the experiment affect their review times at other journals.

If paying reviewers does become commonplace among economics journals, I do wonder whether reviewers would change their decision-making process, whether journals would start competing on reviewer compensation, and whether journals that cannot afford to pay reviewers would be left at a distinct disadvantage. I believe there are many things we currently do on a voluntary basis that we would stop doing if we knew a functioning market existed for our services.

A straw poll I conducted on The Scholarly Kitchen last year found that some respondents considered monetary rewards for peer review services deplorable, while others appeared to view them as either a solution to a growing problem or a potential source of additional income. Monetary and reputational rewards can coexist in the same scholarly marketplace and attract different participants for different reasons. There is no reason to believe that there will be just one future model for peer review.

At a dinner party, I posed the question of whether faculty should be paid for reviewing papers and, as expected, got a variety of answers. Two biologists were fundamentally opposed to the idea, whereas the business school professor was willing to entertain it. The sociologist thought context was vitally important and wondered who was ultimately going to pay this cost. These Cornell and Ithaca College faculty assumed they would still be reviewing relevant, high-quality, well-written manuscripts. When I asked whether they would be willing to review an unintelligible, poorly written manuscript, no one thought $100 was worth their time.

When priced too low, cash incentives and other discounts intended to reward voluntary work can have the opposite of their intended effect, as this systems biologist gripes about being offered a $10 book coupon for his work. In this case, a simple thank-you letter would have been more appropriate. Likewise, the Company of Biologists discontinued the practice of paying reviewers $25 as a token of appreciation when the transaction costs at both ends didn’t seem worth the effort and served only to frustrate and infuriate some reviewers. If you have any experience with incentivizing reviewers (successes and failures), please let us know in the comment section.

What I like about the Chetty paper is that it approaches the timeliness of the review process as a complex problem, tests various solutions rigorously, and proposes small changes that can nudge the system to a more desirable state, at least for economics.

The “dismal science” can offer the rest of us some important lessons.

 

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

27 Thoughts on "What Motivates Reviewers? An Experiment in Economics"

Ah, reviewer behavior. About 2.5 years ago we started looking deeper into our metrics. We chopped up the turnaround time into snippets: time from submission to first reviewer invited, time to decision once all required reviews are received, etc. We did this because the time from submission to final decision was way too long and the editors mostly blamed reviewers for taking forever. A colleague and I have a paper on the topic in the most recent issue of Science Editor. We were giving reviewers 45 days and the editors were reluctant to change the deadline. So we left the due date at 45 days but changed the reminder structure. Instead of sending a reminder 5 days before the review was due, we sent it out 10 days before, then 15. It was clear that most reviewers do the review when they get the reminder. We have since been able to shorten the reviewer deadline for most journals to 30 days, with a reminder at 20.

Recognizing reviewer contributions is always difficult but it’s important. We started an Outstanding Reviewer program a few years back. The editors choose their best reviewers and we post the list of names on the website. We also send them a certificate, and we send a letter to their dean or department chair recognizing their contribution. This is not an insignificant amount of staff work with 34 journals, but in the end it is totally worth it. A thank you and a note to the boss go a long way.

Market forces would certainly result in quick evolution from the token incentives currently being tried to whatever is fair compensation for the particular scientist, who would in effect be selling expertise to industrial buyers in a competitive marketplace. Rates would be commensurate with the scarcity of said expertise. So dropouts and new postdocs would be relatively cheap, while elected members of national academies would be very, very expensive. Journal prestige would depend on whom the journal owner can afford. If history is any guide, sadly, so would the prestige of individual papers. It would be a global market; the global south would end up reviewing for the north. Scientific articles would be very expensive, either to write or to read.

It would not be a good world.

Of course the above assumes that a customer emerges who would be willing to pay the necessary premium on the price tag of the publishing product (i.e. paywalls) or service (i.e. APCs). Fortunately, that will never happen.

Reblogged this on The Moldamancer and commented:
Is doing work for free and without recognition the best way to ensure rigorous, honest, and timely peer review?

I wonder if the finding that “no one thought $100 was worth their time” could be extended to any manuscript. 3-5 hours to complete a review is arguably between half a day and a full day of work. What would have happened if the remuneration were in the $150-$300/hour (consultant-level pay) range instead of the $20-$33/hour offered in the study?

But as Janne Seppanen mentions above, who could afford it?

Actually, $25/hr works out to around $50k per year full time, quite enough to support a cadre of full-time reviewers. It is an interesting prospect. For example, they could focus on the state of the field rather than on their own narrow research, as is now the case with “amateur” reviewers.

How much would adding $300 to the direct cost of each article add to the subscription price? But then having professional reviewers might reduce the editorial cost of training and hand-holding, including reminders, etc.

You have to remember, though, that you’re not adding a cost to each article that appears in the journal; you’re adding that cost to every single submission that goes out for peer review. If a high-end journal has a high rejection rate, costs are going to be a lot higher than just $300 per article that is eventually accepted.

Good point. It could be more like $1,000 to $1,500 per published article. Over a million dollars a year if you publish a thousand articles. Definitely not nothing. Still an interesting concept, since it may cure some of the supposed ills.
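For the record, the arithmetic behind these back-of-envelope figures is simple. The sketch below uses the $100-per-review fee from the study, but the three-reviewers-per-submission count and the 20-30% acceptance rates are assumptions inferred from this thread, not figures from the paper.

```python
# Back-of-envelope: what paying reviewers costs per *published* article.
# The fee comes from the study; reviewer count and acceptance rates are
# assumptions inferred from the comment thread above, not from the paper.
fee_per_review = 100            # dollars paid per referee report
reviewers_per_submission = 3    # assumed number of referees per submission
articles_published_per_year = 1000

for acceptance_rate in (0.30, 0.20):
    cost_per_submission = fee_per_review * reviewers_per_submission   # $300
    cost_per_published = cost_per_submission / acceptance_rate
    annual_cost = cost_per_published * articles_published_per_year
    print(f"{acceptance_rate:.0%} acceptance: ${cost_per_published:,.0f} "
          f"per published article, ${annual_cost:,.0f} per year")
# -> roughly $1,000-$1,500 per published article and $1.0-1.5M per year
#    for a journal publishing a thousand articles, matching the figures above.
```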

As usual with TSK pieces, this article talks only about journal publishing. But the practice of paying reviewers to prepare reports on book-length manuscripts has existed for a very long time, so why not look to experience in that sector for hints about reviewer behavior? In my 45+ years as an acquiring editor for university presses, I have seldom had any reviewers balk at agreeing to evaluate a manuscript because of the size of the honorarium offered. (Honoraria for this service are typically very modest indeed, ranging between $150 and $250, when you consider that many manuscripts are the equivalent of six or more articles in length.) Reviewers are usually given four to six weeks to complete reviews, evidently about the same amount of time they are given to review articles (why?), and the time of submission varies widely, though probably most reviews are submitted close to the deadlines given.

Another opportunity is acknowledging peer review activities in a more formal way that supports validation and citation. In this spirit, some of the open peer review services are getting DOIs for review reports. Other organizations are starting to ask reviewers to register for a unique person identifier (such as an ORCID). We are working with a number of organizations in the research and scholarly community, including CASRAI, to determine what a peer review citation would look like and to make it possible for publishers and other organizations involved in reviews (such as funders, associations, academies) to post information on review activity; essentially, to move this information from journal front or back matter into a more discoverable venue. The initial specification is going out for community input in June and we’d welcome feedback. More on this in our blog: http://orcid.org/blog/2014/04/08/orcid-and-casrai-acknowledging-peer-review-activities.

That is an interesting initiative, Laure. I’m looking forward to the release of the initial specification to read about it in some more detail.

On a related note, how to incentivize reviewers to spend their time giving quality reviews is a question that’s been bothering me as well. At the moment I feel that the most important advance would be for institutions and funders to take review reports into consideration in promotion reviews and research assessment.

Following that, it’s simply a matter (although not a simple matter at that) of coming up with a way to efficiently share review reports, either exclusively with the stakeholders included in the process or openly with the entire public. I noted two distinct approaches to that problem in the blog post you linked: the “open peer review” approach, and the “summary information” approach for organizations practising double-blind review. Both seem to have some serious shortcomings.

1. The “open peer review” approach would put reviewers at a disadvantage if they were to recommend rejection of a manuscript, no matter how exceptional and useful their review was. Of course, this issue could be solved by publishing every manuscript review regardless of a given manuscript’s acceptance, but that doesn’t seem to be a viable option and I don’t really see publishers devoting their resources to publishing such a potentially huge volume of review reports.

2. The “summary information” approach, on the other hand, makes it impossible to assess the quality of a review; it’s just a bland “this researcher has contributed x reviews for publisher/journal y in the past year” without offering much insight into the review reports themselves. This could be solved by publishers providing full review documentation (again, for all submitted manuscripts) to the stakeholders. Presumably it would require publishers to devote a considerable amount of resources to this, and I’m not convinced that, from a publisher’s perspective, the supposed benefits (fast & thorough peer review, satisfied reviewers) would justify the costs involved in such an operation.

Although it’s unlikely that such a major change will happen any time soon, the ORCID and CASRAI collaboration looks like it might be a significant step in that direction.

One worrisome thing in peer review reward systems, be it money or tallying up the number of services rendered, is that they seem to assume review reports are created equal.

They most certainly are not.

Some are worthless scribbles where the reviewer barely read the paper; some are thorough and insightful pieces of scientific brilliance worth publishing themselves. A reviewer’s career stage, institutional prestige, or personal h-index does not predict reviewing quality (if anything, I’ve heard many editors say that it is the senior scientists who tend to provide the most hasty and careless reviews).

Thanks for your comments, Janne. A longitudinal study by Callaham and McCulloch did indeed find that the quality of peer review does decline as reviewers age, see:

Callaham, M., and McCulloch, C. 2011. Longitudinal Trends in the Performance of Scientific Peer Reviewers. Annals of Emergency Medicine 57:141-148. http://dx.doi.org/10.1016/j.annemergmed.2010.07.027

Thanks Phil! I did not know about that paper, this would show rather embarrassing inaptitude if I was in a field where I should know these things… oh wait… sorry.

An informal look at career stage vs review quality (no statistically significant effect) in our database can be found in a recent talk by yours truly (starts at 11:50). I should run a proper analysis on h-index versus quality, could be interesting.

Callaham and McCulloch (2011) is an interesting paper (statistically) because they measure the same reviewer over time rather than using a population-based analysis. This allows them greater precision because they can control for inter-reviewer variance. The effect, while statistically significant, was rather small: a decline of 0.04 points/year, or about 0.8%/yr. Can you send me a copy of your paper?

I agree the decline over time is rather small, and probably not important. The more important finding, I think, was that a reviewer’s first few reviews were a fairly good predictor of later quality, or lack of it. So there is variation, and some people are consistently good reviewers, some others are not, and generally they don’t get any better over time.

I did not refer to a paper, it was just an informal presentation of early data in my recent talk, which is available via youtube http://www.youtube.com/watch?v=DMI-pJlfS3k#t=709 (link jumps directly to the part where I discuss this).

Since 2010, we have offered pre-submission peer review services at Journal Prep. In the past four years, we’ve learned a few things and the notion that shorter deadlines and cash incentives motivate reviewers is not surprising. Here are a few points I thought would be relevant to share:

1) When a reviewer agrees to review a paper for us, he/she is given 7-10 days to complete the review. In some cases, reviewers ask for an extension of 3-5 days, often highlighting a busy travel schedule as the reason for the request.

2) More than 95% of reviewers who agree to review a manuscript for us follow through with the review.

3) We provide reviewers with a fillable PDF template that is flexible enough to enable it to be used across many disciplines. This was developed because several reviewers asked for “better reviewer guidelines”. (I’d be happy to share the template with anyone who would like to see it.)

4) We have worked with reviewers in just about every domain imaginable (with the exception of some fields of computer science) and we have never had someone respond to a review request and indicate that he/she was opposed to the idea of being paid to review a manuscript.

5) We have had a handful of reviewers request a higher level of compensation, but I don’t believe there is any relationship between the position (e.g., academic appointment) of the reviewer and this type of request. When we receive a paper for pre-submission peer review and determine that it needs to go to a highly specialized, well-published expert, we find that one of two things often happens: the person accepts the request, indicating that the paper is of great interest to him/her; or the person refuses the request, citing a potential conflict of interest because he/she is involved in very similar research.

Wherever money enters, honesty and objectivity get out!
Whenever peer review involves money, peer review becomes corrupted, and science too.
Will journals/publishers continue to pay peer reviewers if they recommend rejection?
I do not think so, because repeated rejections are not in the interest of journals, particularly open access ones, as they reduce their revenues. To keep their revenues, or even increase them, editors may ask reviewers not to reject; ==> corrupted science!
Also, if reviewers are paid, only rich publishers/journals can afford it. Small publishers, on the other hand, will disappear ==> science will be transformed into capitalistic science and corruption!

Maybe the ideal solution doesn’t exist yet! Or maybe it does: everything free and ethical. That is the best solution!

Money is the dirtiest enemy of science in all its aspects.

At Medwave, my journal, we give our peer reviewers 15 days. All reviewers are assigned simultaneously in order to reduce time spent on follow-up. A few days before the due date, reminders are sent out. Then downright harassment occurs up to the deadline. Since more reviewers are regularly assigned than are actually needed, we are mostly able to complete the review process within this fifteen-day time frame. When two reviews have not been submitted on time, the whole process begins again with a new batch of reviewers. We have found that this makes follow-up easier. Reviewers who do not comply with the deadline are disinvited (no extra days allowed). Many respond with excuses (too late, though) and state that next time they will not fall short. A herd effect takes hold – somehow news gets around fast, quickly crossing country borders – and response rates are steadily improving. Quality though, that’s a whole different matter…

With my research group, we built a game-theory-inspired lab experiment with students to examine the impact of different incentive schemes on reviewers. We published our findings in Research Policy last year (see http://www.sciencedirect.com/science/article/pii/S0048733312001230; if you are interested in reading it, contact me at flaminio.squazzoni@unibs.it). We found that fixed material incentives (e.g., a fixed compensation) perform worse than any other type of incentive. Only variable incentives (e.g., incentives that depend on outcomes, as implemented in business companies) can work, but these are difficult to implement for peer review in journals. Indeed, it’s hard to estimate how much the reviewers’ contribution mattered to the success of a published article or to avoiding the risk of publishing low-quality articles. Our results confirm that adding material incentives is problematic and that no compensation seems a better option, also to avoid crowding-out effects on reviewers and an escalation of compensation, which would be unsustainable for most journals.
