
Incentives that reward scientists with cash bonuses when they publish in prestigious journals drive up submission rates but have no effect on publication success, a new study reports.

The article, “Changing Incentives to Publish,” by Chiara Franzoni, Giuseppe Scellato, and Paula Stephan, appears in the August 5 issue of the journal Science.

The researchers analyzed the submission and publication numbers in Science magazine from 2000 through 2009 and compared the effects of three distinct types of incentives on author behavior:

  1. Personal incentives that reward publication success with direct cash bonuses (practices common in China, Korea, and Turkey)
  2. Institutional incentives that link publication success with departmental funding (e.g., the UK’s Research Assessment Exercise, or RAE)
  3. Personal incentives that link publication success with individual career success — promotion, tenure, and salary increases (e.g., the promotion and tenure system common in the United States and Canada)

Controlling for external factors such as time and national funding for research, the researchers found that authors rewarded with direct cash bonuses increased their submission rate by 46% over the observation period but experienced no corresponding increase in publication success. Publication and acceptance rates for these authors actually decreased slightly.
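To make “controlling for external factors” concrete, here is a minimal sketch (not the authors’ actual model; the variable names and toy panel data are invented) of a regression that holds year and national research funding constant while estimating the association between incentive type and submissions:

```python
# Minimal sketch of "controlling for" covariates with OLS.
# The panel below is hypothetical: one row per country-year.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "submissions": [120, 135, 180, 90, 95, 101, 60, 64, 70],
    "incentive":   ["cash"] * 3 + ["institutional"] * 3 + ["career"] * 3,
    "year":        [2000, 2005, 2009] * 3,
    "rd_funding":  [1.0, 1.4, 2.1, 3.0, 3.2, 3.5, 5.0, 5.3, 5.8],  # hypothetical % of GDP
})

# Incentive-type dummies; year and rd_funding absorb external trends,
# so the incentive coefficients are estimated net of those factors.
model = smf.ols("submissions ~ C(incentive) + year + rd_funding", data=data).fit()
print(model.summary())
```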

Institutional incentives also drove up submission rates, although not nearly as much (a 24% increase), and had similarly null effects on publication success.

The only authors who experienced real publication success were those who received career incentives for publishing. For these researchers, submission rates went up by 12% and publication success by 34%. Commenting on their findings, Franzoni writes:

It is career incentives that matter; neither institution-based incentives nor cash incentives to individuals show statistically significant association with publications. The results also suggest that acceptance rates are negatively correlated to cash bonuses.

While their study is limited to manuscripts submitted to one journal, the findings seem to confirm what many of us have experienced anecdotally in recent years: a growing trend of low-quality submissions from particular countries.

The study also sheds light on how three different policy approaches that encourage and reward productive scientists result in very different outcomes. At least for Science, providing long-term career rewards appears to trump short-term cash payouts.

Because the study is focused on a prestigious journal, for which publication is rewarded highly in all three policy scenarios, I’d be interested in whether specialist and archival journals show similar trends. Editors, feel free to share your experiences.

Furthermore, the study didn’t consider why a manuscript was rejected (and indeed, whether it even made it out of the editorial office for peer review). If manuscripts from cash-incentivized authors are less likely to be sent out for review, then this form of incentive is actively discouraging an efficient matching of quality manuscripts with quality journals.

Creating incentives that lead to poor submission decisions ultimately wastes the time of editors, reviewers, and authors themselves, and it leads to unnecessary duplication of effort as manuscripts are resubmitted down the pecking order of journals, cascading after each rejection until they finally reach an appropriate outlet.
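As a rough back-of-the-envelope sketch (the acceptance rates and reviewer counts below are made up), the cost of this cascade is easy to estimate: each additional rung adds another round of reviews for every manuscript that reaches it:

```python
# Toy estimate of reviewer reports consumed by one cascading manuscript.
def expected_reviews(accept_rates, reviewers_per_round=2):
    """Expected number of reviewer reports for a manuscript that is
    resubmitted down a list of journals until it is accepted."""
    total, p_unaccepted = 0.0, 1.0
    for rate in accept_rates:
        total += p_unaccepted * reviewers_per_round  # reviewed at this tier
        p_unaccepted *= 1 - rate                     # rejected; cascades onward
    return total

# Aiming too high first (hypothetical acceptance rates) vs. a realistic match:
print(expected_reviews([0.07, 0.20, 0.40, 0.60]))  # ~6.2 reports
print(expected_reviews([0.40, 0.60]))              # ~3.2 reports
```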

Recent calls in the UK to improve the system of peer review have focused entirely on the processes that take place after an editor has received a manuscript. What may be equally important are the processes that take place before a manuscript has been submitted.

Avoiding policies that cost the system more in collective time and effort may be the key to real cost savings.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

11 Thoughts on "Are Cash Bonuses the Right Incentive for Science Authors?"

I seriously doubt that one can separate out the incentive programs as the determinant of these statistics, vis-à-vis all the other features of the relevant research programs, up to and including national culture.

I agree that in observational studies it is difficult to control for all the many factors that may have an effect on submission and publication strategy. However, there is a theoretical basis for their research, and unless you can propose a competing explanation for the differences (indeed, very large differences), I think we have to accept that the reward structure may be driving poor journal submission behavior.

I don’t know what theoretical basis you are referring to. But as for a competing explanation, how about this: the differences in incentive structures are merely one aspect of the amount of pressure the authors are under. It is the pressure, not the incentives, that is causing this behavior.

By theoretical basis, I mean they have proposed a causal link between three types of rewards and publication behavior. “Pressure” is an abstract construct that cannot be measured directly, but the three incentive structures can be derived from it. A gun to the head would also be a form of pressure, and one would predict that it would also increase submission rates but have little effect on publication success.

If you are claiming that these incentives are the only form of pressure these folks are under, I find that hard to believe. There is a term in statistics for this situation, but it escapes me at the moment. Two things are correlated, but one does not cause the other; rather, both are caused by something else. Pressure to submit inferior material is deeper than incentives, because the incentives are merely a manifestation of the pressure. “Pressure” to publish may be an abstract construct, but it certainly exists.

The statistical term you are referring to is confounding. I follow your argument and respect your point, but if you were to conduct such a study, what would you measure as an indicator of “Pressure”?
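For what it’s worth, here is a toy simulation (all numbers invented) of the pattern you describe, in which an unmeasured “pressure” variable drives both the incentive and the submission rate, so the two correlate even though neither causes the other:

```python
# Toy illustration of confounding: pressure -> incentive, pressure -> submissions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

pressure = rng.normal(size=n)                          # unobserved confounder
cash_incentive = (pressure + rng.normal(size=n)) > 0   # pressure raises the odds of a bonus scheme
submissions = 5 + 2 * pressure + rng.normal(size=n)    # generated with NO incentive effect

# Incentive and submissions are nonetheless clearly correlated.
print(np.corrcoef(cash_incentive, submissions)[0, 1])
```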

That is a good research question, Phil. I would start with interviews to identify diagnostic behavioral features, then develop a good set of polling questions to measure them. In addition to the polling data, we might pick up some other measurable aspects of pressure along the way. Operationally defining and measuring pressure might be quite useful.

Likewise among students. Here in the USA I am concerned about “test pressure” in K-12 education, which may have become excessive.
