Rubric Highway (Photo credit: jenhegna1)

How do you motivate reviewers? In a voluntary market, journal editors provide non-financial rewards to incentivize researchers. Many shower their best reviewers with accolades at national conferences, print their names in journals, and consider their best and brightest for prestigious positions on their editorial boards. Most academics will gladly work for free if they are treated well and accorded the respect of their peers.

The trusted relationship between editor and reviewer takes time and, like many trusted relationships, cannot be taken for granted. In return for loyalty, an editor must carefully select which manuscripts (and how many) are sent to each reviewer. As many editors understand, it is easy to upset reviewers by sending them inappropriate manuscripts (out of scope, poorly written, or of poor quality), or to burn out the best with far too many requests to review. As in any organization that runs on voluntary labor, reviewers are under no obligation to stick around. Reviewing is perceived as a favor, not a job.

In last Friday’s post, I explored the notion of paying reviewers for their time. While Rubriq is proposing to pay reviewers $100 to incentivize timely reviews, serious discussion of paying reviewers goes back more than 20 years, to an article published in 1992 in (not surprisingly) an economics journal. Publication speed in economics is believed to be the slowest in the industry, with authors waiting 2-3 years between submission and publication at some journals. Given this lag, it is not surprising that working papers are part of the publication pathway for economists. In addition to paying reviewers, economists proposed blacklisting slow reviewers. One even wished for the untimely death of an ineffectual editor.

Today, reviewers for journals published by the American Economic Association are paid $100 for “timely reports.” It is not clear from the journal website what is meant by “timely.”

Rather than speculate whether an independent review company with no connection to a learned society or academic association (e.g. Rubriq) would succeed in paying reviewers $100 for their services, I set up a straw poll last Friday to gather responses from our readers. I’m among the first to recognize the many potential biases in these surveys (sampling bias, response bias, leading questions, and priming, among others); however, the poll yielded some interesting results:

  1. Editors are hesitant about outside peer-review. Of the 125 votes, just 5 (4%) would “always” accept a paper based on a Rubriq report. Over half (54%) would consider the report and 42% would always insist on putting the manuscript through their own journal’s peer review process. Based on these general findings, it appears that Rubriq may function to supplement, but not replace, journal-centric editorial and peer review.
  2. Reviewers have multiple motivations. Of the 172 votes, those who would be willing to review for an external company based their participation on 1) being paid sufficiently for their time and expertise, 2) being guaranteed to receive high-quality, relevant manuscripts, and 3) receiving reputational (non-financial) rewards for their work. Less important was that the company was a non-profit organization.
  3. Reviewers have two price points. When asked how much money reviewers expected in exchange for providing a review, the results were bimodal. One-third (33%) of respondents stated that they would not accept financial compensation for reviewing a paper. Of those who did require compensation, 73% would only review for a rate of at least $100/manuscript, and 41% only for at least $200/manuscript.
  4. Authors want objective, high-quality reviews from experts. Of the 280 votes cast, authors ranked expertise, objectivity, and speed as the most desirable qualities of the review process. Not surprisingly, authors also felt that the service should be affordable (or paid for by someone else). While 30 votes were cast for improving the chances of having a manuscript accepted for publication, no votes were cast for the option that Rubriq would guarantee a favorable review. As editors frequently request that authors provide a list of suggested reviewers upon manuscript submission (and often a list of unfavorable ones as well), I find it odd that respondents would not select this choice. Did this question evoke authors’ social-desirability bias?

This straw poll suggests that many editors are hesitant about the notion of outsourcing the peer review process. The results are based on perceptions, however, and perceptions can change once a company proves it can deliver results. Remember that editors were also skeptical of outsourcing line editing, layout, and production services when first presented with the option.

The results also suggest two separate review markets: one, a volunteer model based on developing and maintaining trust relationships between editor and reviewer that rewards reviewers with prestige; the other, a commercial model based on delivering a valuable service for the right price. Both can coexist in the same market (think of the United States Army and private contractors), and each has its strengths and weaknesses. A prestigious society publisher may have no problem soliciting experts in the field for high-quality, timely reviews. A commercial archival journal may have a much harder time finding competent reviewers for its manuscripts. For these publications, a commercial solution may work, especially if the author (or supporting institution) is willing to pay for the services. The journal publishing market is large enough, and diverse enough, to support both models.

If you know of other ways to reward and incentivize the review process, let us know in the comments. And thank you for your participation in the survey.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

12 Thoughts on "Rewarding Reviewers: Money, Prestige, or Some of Both?"

“I find it odd that respondents would not select this choice [guaranteed favorable review].”

I can only speak for myself: it struck me that a service that guaranteed favourable reviews would be near-useless, as it would pretty much ensure that journal editors would not trust its reviews.

Thanks Jake. An author wishes for peer comments that will help improve the manuscript. But authors ultimately want to be published and, given the opportunity, will select those peers who are likely to support manuscript acceptance and avoid those who are likely to be overly critical. The motivations of the editors, however, are completely different.

Phil,

Thanks for the report on the results. They validate much of the research we’ve done to date at Rubriq: that our reports would supplement, not replace, the full peer review that happens in traditional journals. Even in the mega-OA “valid science” peer review model, I would expect at least an editorial board review of the Rubriq Report prior to formal acceptance.

There has been a lot of focus on the reviewer pay model, but as you state in your post, that isn’t a new concept. We will continue to test various compensation models to ensure timely, quality reviews and we may end up with a different approach. That’s okay.

The core of Rubriq isn’t direct compensation of reviewers, but the development of the Rubriq scorecard. See sample here: http://bit.ly/11ZrxW9

The scorecard is at the heart of our model and we hope to present our numerous validation tests at the Peer Review Congress this fall.

It’s the scorecard that provides a new way of thinking about peer review. We don’t make recommendations on accept/reject decisions, nor do we claim to provide a “valid science” stamp. Journals will have to map what they believe is valid science to our R-Score (a rough sketch of such a mapping follows the list below). Those are decisions that editors can make. But the scorecard ratings and comments provide a number of advantages:

– It’s a way to stratify and organize papers in a mega-OA journal upon publication, allowing for better initial filtering of the literature.

– It’s a better way to kick off and organize post-publication peer review, which can now build on the ratings and comments collected during pre-publication peer review.

– It’s standardized and thus portable, so individual journal editors can map the standard scorecard to their own peer review process.

– It provides a mechanism for directing papers to journals based on the quality and importance of the research. The current tools for finding journals are based on keyword/semantic matching, which I believe exacerbates the journal-loops problem. Without accounting for the quality and novelty of the paper, suggesting journals (other than the mega-OA journals) probably causes more problems than it solves.
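As a rough illustration of the mapping idea above, here is a minimal sketch in Python. The 0-100 scale, the threshold values, and the triage categories are hypothetical assumptions for illustration only; they are not Rubriq’s published scoring ranges, and each journal would choose its own cut-offs.

```python
# Minimal sketch of mapping an external scorecard value onto one journal's
# own triage categories. The 0-100 R-Score scale and both thresholds are
# hypothetical assumptions, not Rubriq's actual scoring ranges.

def editorial_route(r_score: float,
                    fast_track_threshold: float = 80,
                    full_review_threshold: float = 50) -> str:
    """Return the journal-specific action for a given (hypothetical) R-Score."""
    if r_score >= fast_track_threshold:
        return "editorial board check, then accept if scope fits"
    if r_score >= full_review_threshold:
        return "send through the journal's own peer review"
    return "reject without further review"


if __name__ == "__main__":
    for score in (85, 62, 40):
        print(score, "->", editorial_route(score))
```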

Thanks again for running the survey Phil. Good insights.

Keith, thanks for responding. I think the scorecard approach you use is very interesting as a way to stratify the literature in “mega-OA” journals that place emphasis on validity rather than novelty. It does make me wonder why a publisher like PLoS ONE wouldn’t just use a scorecard like yours in its review process, rather than supplement its own peer review with an additional round of review.

It does also make me wonder whether using scorecard ratings would create relationship problems between the editor and author. For instance, say my article was accepted in mega-journal X but that the reviewers gave me an R-Score that was lower than I believed was appropriate. Would I appeal to the editor that the number wasn’t fair and ask for another round of review? Would I refuse to publish the article and submit it elsewhere where I might get a more favorable review score?

Stratifying the literature in mega-OA journals may be very helpful to readers, but it does create some unanticipated problems with potential authors.

As for relying on a commercial approach to lining up peer reviewers (essentially paying for peer review rather than working with the current community volunteer system), it would seem to have diminishing returns for landing top researchers as reviewers. As scientists progress in their careers, they earn more money and face more and more constraints on their time. For a graduate student or a postdoc, $100 is a windfall, a nice dinner out with one’s husband/wife or the replacement for that tire that’s going bald. For a senior researcher, the value of a $100 check doesn’t outweigh the productive use he/she could get out of several hours’ work concentrated elsewhere (writing a section of a multi-million dollar grant, for example).

So that commercial system is likely limited to some junior faculty, but mostly postdocs and students, if the motivating factor is a payment on this level. And that’s probably good enough for some functions, like doing a round of triage on an initial draft submission. But I suspect that the more selective, top journals really want to have the top experts in a field doing the peer review. The amount you’d need to pay them as motivation would be difficult to financially sustain.

As I recall, the Company of Biologists had a system, since discontinued, where peer reviewers were paid. I’m told this was at the suggestion of the reviewers themselves:
https://twitter.com/cshperspectives/status/301682595526217728

Perhaps others with more knowledge of what happened there can fill in some of the details.

Can’t speak for others, but when I reviewed for Company of Biologists journals, the payment offered for a review was such that by the time one paid bank charges on the transfer and taxes on the income, it was not worth the hassle. Their standard form included the option of waiving the payment, in which case they put those funds into their student travel awards scheme. That was always the option that I chose…

Note that the same considerations would apply to a $100 one-time fee from Rubriq. On the other hand, a monthly consultancy fee of, say, $1,000 would be a different story, but there is no way I would be willing to do more than one review a week, no matter how much payment was offered. Life is just too short, and there are too many other things that I have to do.

We’ve just taken a slightly different approach to rewarding our best reviewers. Quite a few journals publish a list of their best 20 or fewer referees, and others publish a list of everyone who has reviewed. With the former, only a tiny proportion of the reviewer community gets recognition, and breaking into that elite group involves having an editor constantly feeding you papers. The latter confers no prestige, as even the worst reviewers are on there.

We’ve tried to hit a happy medium by coming up with an index based on the number of reviews completed, the proportion of accepted review requests that led to a review being returned, and the average time taken per review. The top 300 people (about 8%) were listed as our ‘best reviewers’. The list contains many junior researchers, and we hope that their resumes will benefit from this explicit recognition of their contributions as reviewers.
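For readers curious how such an index might be computed, here is a minimal Python sketch. Only the three inputs come from the description above; the caps, the normalization, and the equal weighting are illustrative assumptions rather than the journal’s actual formula.

```python
# Hypothetical sketch of a reviewer index built from the three metrics named
# above. The caps, normalization, and equal weighting are illustrative
# assumptions, not Molecular Ecology's published formula.

def reviewer_index(reviews_completed: int,
                   completion_rate: float,      # reviews returned / requests accepted, 0-1
                   avg_days_per_review: float,
                   max_reviews: int = 20,
                   max_days: float = 60.0) -> float:
    """Return a 0-1 score: more reviews, higher completion rate, faster turnaround."""
    volume = min(reviews_completed, max_reviews) / max_reviews
    reliability = completion_rate
    speed = 1.0 - min(avg_days_per_review, max_days) / max_days
    return (volume + reliability + speed) / 3.0


def best_reviewers(pool, fraction=0.08):
    """Rank reviewers by index and return roughly the top `fraction` (about 8% here)."""
    ranked = sorted(pool, key=lambda r: reviewer_index(*r[1:]), reverse=True)
    cutoff = max(1, round(len(ranked) * fraction))
    return ranked[:cutoff]


if __name__ == "__main__":
    pool = [
        ("Reviewer A", 12, 0.95, 14),   # (name, reviews, completion rate, avg days)
        ("Reviewer B",  4, 1.00, 30),
        ("Reviewer C", 20, 0.70, 45),
    ]
    for name, *_ in best_reviewers(pool, fraction=0.34):
        print(name)
```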

It’s so far been very popular. Our post about it is here: http://www.molecularecologist.com/2013/02/mol-ecol-best-reviewers/

Rubriq is an offshoot of American Journal Experts (AJE); an internal AJE email described Rubriq as a “sister company”. I’m a former AJE editor, and I’m skeptical. When I worked for AJE from 2008-2010 (and according to a current editor I talked to, nothing has changed), I was struck by the gap between what AJE advertises — editing by subject experts with degrees from a list of elite schools — and the reality — editing by grad students in totally unrelated subjects, who may not have completed their degree at the elite school they once attended, working for piecework wages that corresponded to very low hourly wages. As a grad student trying to earn some extra money and finding I was being paid effectively sub-minimum wages if I tried to do a good job, I was under pressure to rush through my edits and turn in sub-par work. At the same time, AJE’s managing editors often applied counter-pressure to put in more effort (a large part of the reason why, as I wrote about in my blog, the IRS recently ruled that AJE’s grad student editors are employees, not independent contractors, despite the terms of the contract that AJE asks its editors to sign).

Questionable employment practices and advertising that’s exaggerated (to put it charitably) are not the way to acquire a good reputation in academia. It’s also difficult for me to understand what the authors would get out of a service like Rubriq. When I reviewed papers for AJE, most were of poor quality — even when I wasn’t a content expert, I could often tell that the science in a paper was thin. I wondered what kinds of publishers would ever accept these articles. If Rubriq is targeting the same market as AJE (which seems likely), what’s the advantage of paying for a review (positive or negative) when you’re never going to get published? I also have a hard time understanding why postdocs and faculty members, who are typically paid more generously than grad students are, would agree to spend their time reviewing for no professional credit and a very small amount of money.

Rubriq does share the same parent company as AJE (Research Square), but we are two separate companies with very distinct services and operations. The functions of language editing and peer review are very different, as are the target customers. These differences were significant enough to warrant the creation of an entirely new company. The past employment issues you’ve had are probably better addressed directly with AJE rather than within the context of this thread about Rubriq.

We think of reviewer compensation as an honorarium or stipend in appreciation of reviewers’ time, rather than as a wage or an employment relationship. We will soon be implementing alternative compensation options (such as contributions to charitable organizations) for those who prefer not to receive any direct payment. We do not consider money to be the primary motivator for our reviewers, yet it is a tangible way of thanking them for their time. Ultimately, we hope that Rubriq will provide a centralized way for reviewers to keep track of their body of review work that can go alongside their publication record on a CV. And since reviews can be viewed by multiple journals, Rubriq enables a reviewer’s opinions to be read by a wider variety of editors. We see non-monetary benefits like these evolving as we grow, so that payment will become just one small piece of the package.

We present an award to one or two exceptional reviewers at an awards banquet during our association’s annual meeting.

We will soon begin offering continuing education (CE) credit for reviews completed on time and according to a specific format, called a “structured review.” A reviewer must also agree to review each revision of a particular manuscript until decision. There is no cost to the reviewers, who will save the money they would have spent on alternative CE opportunities. It is our hope that reviewers will feel more appreciated. We may receive more timely and more thorough reviews, and our pool of reviewers may grow.
