Enormous citation effects attributed to online and open access publishing are spurious, an artifact of the failure to control for differences in article quality, two information economists report.
The working paper by Mark McCabe and Chris Snyder, titled “Did Online Access to Journals Change the Economics Literature?”, appeared on SSRN on January 23rd.
McCabe is a professor at the School of Information at the University of Michigan, and is well-regarded in the library community for his work on publisher mergers and their effects on journal prices. Snyder is a professor in the Department of Economics at Dartmouth College. Together, they are known for their application of two-sided market theory to open access journal publishing.
Analyzing a dataset of nearly 260,000 articles from 100 journals in business and economics published between 1956 and 2005, they attempted to validate whether online access to journal articles boosts citations.
Carefully analyzing their data and controlling for other “secular” explanations such as time and quality effects, they refute two claims made by University of Chicago sociologist James Evans — namely, that online access concentrates citations on a smaller number of recent articles, and that it disproportionately benefits scholars in developing countries.
McCabe and Snyder systematically test, and refute, most of the “access → citations” claims made popular in the last decade, although they leave one standing: certain publisher and delivery platforms may have a small but detectable citation effect. Specifically, they note that being hosted on JSTOR may boost an article’s citations by about 10%. In contrast, they report no effect from ScienceDirect. This may not be too surprising considering the scope of their dataset and the collection strength of JSTOR in economics.
Their rationale in conducting such a large and careful study of citation patterns is not based on any particular political agenda. McCabe and Snyder are more interested in how to make sense of the value of a citation in the academic arena. They write:
If a small change in the convenience of access can cause a quadrupling of citations, then the typical citation may be of marginal value, used to pad the reference section of citing articles rather than providing an essential foundation for subsequent research. According to this view, citations would be at best a devalued currency, subject to manipulation through the choice of publication outlet. On the other hand, the finding of little or no citation boost would resuscitate the view of citations as a valuable currency and as a useful indicator of an article’s contribution to knowledge.
Given that open access may not increase an article’s citation rate, an alternative justification is required to persuade academics to alter their publication behavior. Repeated surveys of scholars by Ithaka S+R reveal that open access ranks well below prestige, relevance, and Impact Factor when authors choose a venue, and that many authors report a strong aversion to paying for publication. Similar priorities were reaffirmed recently in the survey conducted by the Study of Open Access Publishing (SOAP), a study highly biased toward open access publishing.
If citations are a form of academic currency, there may simply not be enough payoff to move the publication market toward an equilibrium dominated by open access journals, McCabe and Snyder write:
The current lack of evidence that free online access performs better implies that the citation benefits of open-access publishing have been exaggerated by its proponents. Even if publishing in an open-access journal were generally associated with a 10% boost in citations, it is not clear that authors in economics and business would be willing to pay several thousand dollars for this benefit, at least in lieu of subsidies. Author demand may not be sufficiently inelastic with respect to submission fees for two-sided-market models of the journal market to provide a clear-cut case for the equilibrium dominance of open access or for its social efficiency.
The weakness in this argument, however, is that it assumes that scholars are paying with their own money, and therefore are sensitive to the costs and benefits of author-side payments. Price sensitivity is diminished when others (foundations, libraries) are willing to foot the bill, and becomes a non-issue when publishing in an open access venue is mandated through policy. When this happens, there is little incentive to force the system to reduce costs, compete on price, and become more efficient.
The main contribution of the McCabe and Snyder paper is its explanation for the scores of early papers reporting huge access-citation effects: those effects are artifacts of improper statistical analysis. The fact that there are many of these poor studies in the literature, or that their claims have achieved consensus among certain like-minded individuals, does not make for good science, nor does it help to inform good science policy.
There are many benefits to the free transmission of scientific ideas. A citation advantage, however, is not one of them. If science is a self-correcting process, it may be time to admit that we made a mistake, ventured down the wrong trail, and hit a dead-end. It’s time to retrace our steps and move on to more important questions.