Citations are the coin of science. They convey peer-recognition, prestige, and the utility of one’s work. They can determine funding, promotion, and tenure. It’s no wonder why scientists are so focused on being cited, and why some may be willing to pay to ensure that their papers are as visible as possible.
Many studies have confirmed the existence of a citation advantage for Open Access articles. In a widely cited study of author-sponsored OA articles published in PNAS, Gunther Eysenbach reported that OA articles were 1.7 times more likely to be cited 0-6 months after publication and 2.1 times more likely to be cited 4-10 months after publication (it is indeed curious why one should see a citation advantage so early on). While Eysenbach’s study attempts to control for other factors that may explain higher citation counts, there is always the risk of ignoring variables that could explain the results.
In a new analysis of the PNAS data, two Swiss doctoral students in economics, Patrick Gaulé and Nicolas Maystre, claim that Eysenbach’s results are invalid, or at least highly exaggerated. Their article, “Getting cited: does open access help?” is currently available as a working paper.
In their analysis, they add several other plausible explanations for why articles would be more highly cited, such as whether the corresponding author was an intramural researcher at the National Institutes of Health (NIH) or worked at the Howard Hughes Medical Institute (HHMI). In addition, Gaulé and Maystre coded when each manuscript was submitted to PNAS — their thought was that authors would be more willing to pay the optional $1,000 OA fee at the end of a budget year (to expend one’s budget) than at other times of the year. Adding these and other variables rendered the PNAS Open Access effect statistically insignificant.
In other words, there was no evidence that Open Access was a contributing cause of the citation advantage.
This study adds to a growing literature that casts doubt on early research suggesting that free access to the scientific literature leads to increased citations. Self-selection, whereby higher quality articles are made freely available, is beginning to seem a much more plausible explanation. Indeed, when articles are randomly made freely available, there is no evidence of any citation advantage.
The phrase “open access” conveys a false dichotomy, suggesting the access door is either open or shut. As most of us experience firsthand, access barriers are very porous. Sharing is the rule, not the exception. The Open Access metaphor can make for very good rhetoric, but it makes for bad science, especially since science is based fundamentally on openness. The authors conclude:
it is perhaps not surprising that open access does not appear to have an important effect on citations. In a world of open science full disclosure of results is a central norm. One manifestation of the norm is that authors are almost always willing to send electronic copies of their papers to anyone who requests them. Our advice to authors is to abide by that norm and to use the 1000-3000 dollars open access publication fee for something else, unless they have time-limited budgets to use!
This summer I reported on a similar study of author-sponsored OA articles published in 11 biological and medical journals. If free access does indeed convey a citation advantage, it is much smaller — and more costly — than anticipated.