First, a caveat. Phil Davis is a fellow blogger on this site. I like Phil and respect his work. But I reserve the right to disagree with him. I’m not in his back pocket by any means. So, even though this involves a paper by Phil, I’m telling you what I think, no holds barred.
OK, now that we’re clear on that, Davis and colleagues at Cornell University have just published in BMJ a very interesting and well-designed study addressing the question of whether open access drives citations.
This study has important advantages over prior studies. It was randomized by the researchers, so neither authors nor publishers selected which articles were made open access. And it was prospective, so the data were analyzed from point zero forward. These are crucial advantages in study design, in my opinion, and they place this study head and shoulders above any prior study asserting a citation advantage. Retrospective, non-randomized studies fall prey to confounding precisely because authors and editors often make the studies they think are most significant free, or push them online early.
The study covered articles published from January through April 2007. The American Physiological Society (APS) allowed the researchers to randomly assign OA status to up to 15% of articles, yielding an OA treatment group (n = 247) and a subscription-access control group (n = 1,372). Only research articles and reviews were included in the randomization. Davis et al also tested the effect of OA on downloads (full-text and PDF).
There is a good deal of sophisticated statistical and methodological rigor to the Davis et al paper, but the bottom line is unavoidable:
- Downloads increased for OA articles
- Citations did not increase for OA articles
Of the OA articles, 59% were cited 9–12 months after publication, compared with 62% of the subscription-access articles. The chance of an OA article being cited was 13% lower, but this difference was not statistically significant.
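To see why a three-point gap on samples this size falls short of significance, here is a rough two-proportion z-test on the reported figures. The counts are my reconstruction from the rounded percentages, so this is only an illustration of the arithmetic, not a reproduction of the paper's own analysis.

```python
import math

# Reconstruct approximate counts from the reported percentages,
# rounded to whole articles (an assumption, not the paper's raw data).
n_oa, n_sub = 247, 1372
cited_oa = round(0.59 * n_oa)    # ~146 OA articles cited
cited_sub = round(0.62 * n_sub)  # ~851 subscription articles cited

p_oa = cited_oa / n_oa
p_sub = cited_sub / n_sub
p_pooled = (cited_oa + cited_sub) / (n_oa + n_sub)

# Standard two-proportion z-test with a pooled variance estimate.
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_oa + 1 / n_sub))
z = (p_oa - p_sub) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.2f}")  # |z| is well short of the 1.96 needed at p < 0.05
```

On these reconstructed counts, the z statistic is far from the 1.96 threshold, which is consistent with the paper's conclusion that the citation difference could easily be chance.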
I spoke with Marty Frank from the APS about this paper. He also feels it is a strong, well-done study. I asked him whether the additional traffic made a difference to APS publishing endeavors. As he put it, “There wasn’t really an appreciable increase in traffic.” As to whether he could commercialize additional traffic, Marty said, “Overall, advertising [against a traffic increase this size] ain’t going to make a hill of beans.” So, while the study found an increase in downloads, the publisher in question didn’t think it mattered much.
The traffic finding drew my attention, and the authors elaborate on it at length in the paper. Other factors (review vs. research article, an associated press release, and featuring an article on the front cover) also increased downloads, as did the number of references and article length. So even the traffic finding has nuances that defy a simple cause-and-effect explanation.
There may be complaints that this study is just from one publisher, and from one scientific domain (physiology). Well, instead of complaining, I’d suggest people try to bring this superior study design to other domains to confirm or refute the findings.
You’ll probably have a pretty significant paper on your hands then.