It’s often said that history is written by the victors.
In the retelling of history, a simple storyline is constructed that reinterprets past events into a coherent narrative. When facts don’t fit into that narrative, they are edited, reinterpreted, or simply torn out.
Individuals engaged in battles of ideology also have a tendency to revise and reinterpret the past. Old arguments mysteriously disappear from web pages or get revised. Those who are challenged on past remarks respond with a claim of simple misunderstanding or that their words were taken out of context. What is at stake here are not lives but egos.
While this kind of reinterpretation of history is not as offensive as, say, the denial of genocide, it is still a form of historical revisionism. In the academic realm, it is both disingenuous and distasteful.
“The Open Access Citation Advantage: Studies and Results To Date” is the latest report from Key Perspectives publishing consultant Alma Swan. The document is freely available from the University of Southampton’s institutional archive and has been widely promoted on several listservs by open access evangelist Stevan Harnad.
In her report, Swan summarizes the studies posted on Steve Hitchcock’s bibliography of open access studies, and her summary is quite useful for those looking for a distillation of the research literature. What is troubling with this document, however, is the narrative, and it is here that Swan creates a historical revision of the open access debate.
The argument that open access leads to increased citations (coined the “open access citation advantage” by Stevan Harnad for rhetorical purposes) is not that old. Most of us remember Steve Lawrence’s 2001 letter to Nature, the debate it ignited, and a flurry of unqualified claims that persist to this day.
“articles that are made open access . . . are cited twice as much as those articles (in the very same journals and years) that are not” (Harnad, 2006)
What’s more, the strength of conviction appears to grow with each iteration of the OA citation advantage claim as each proponent paraphrases from the last:
Study after study has shown that free online access increases the impact of research literature, as measured by citations, 50 percent to 250 percent. (Peter Suber, 2005)
. . . in study after study and discipline after discipline, that open access is associated with increased citations for authors and journals, when compared to similar work that is not open access (John Willinsky, 2006)
Study after study has shown that articles available freely online are more widely cited than those that are available only through publishers’ access-limited venues. (Stuart Shieber, 2009)
Statements such as these have also been the focus of ARL/SPARC advocacy campaigns and are found frequently on university library web pages:
Open access articles are cited more often than those in limited access subscription journals (Alliance for Taxpayer Access, 2004)
Studies have shown that open-access articles are cited at a higher rate than those with restricted access. (Columbia University Libraries)
Given the recorded history of this debate, it would seem strange — even disingenuous — to assert anything but a simple and unqualified connection between access and citations and the implication of causality. The literature is full of examples of rhetorical prose from influential thinkers of the time, yet Swan appears to disregard the past, promoting a new revision of historical events. Her document is full of such statements:
There certainly was not, even early on, an expectation amongst the thinkers on this topic that OA can work magic and make the uncitable suddenly citable
That OA would produce an automatic citation boost for every article was never the expectation
Thus a blanket ‘OA boost’ to citations of, say, 50% was never considered probable.
The Limits of Meta-Analysis
In promoting Alma Swan’s report, Stevan Harnad asserts that the time is “ripe for a meta-analysis” of existing studies. Indeed, in her report, Swan provides a score sheet for the open access game:
- Studies finding a positive open access citation advantage = 27
- Studies finding no open access citation advantage (or an OA citation disadvantage) = 4
If science were a matter of tallying consenting and dissenting views, the world would still be flat, the sun would still be revolving around the Earth, and Thalidomide would still be prescribed as a sedative for pregnant women.
Scientific truth is hardly a game of populism.
Meta-analysis is a set of powerful statistical techniques for analyzing the literature. Its main function is to increase the statistical power of observation by combining separate empirical studies into one über-analysis. It assumes, however, that the studies are comparable (for instance, the same drug given to a random group of patients with multiple myeloma), merely conducted at different times in different locales.
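To make the pooling mechanism concrete, here is a minimal sketch of a fixed-effect (inverse-variance) meta-analysis. The effect sizes and standard errors below are invented for illustration; the point is that the arithmetic only yields a meaningful pooled estimate when every study measures the same underlying quantity in a comparable way:

```python
# A minimal sketch of fixed-effect (inverse-variance) meta-analysis.
# The per-study numbers are hypothetical, invented for illustration only.

import math

# Each tuple is a hypothetical study's (effect_size, standard_error) --
# e.g., a log citation ratio between OA and non-OA articles.
studies = [(0.40, 0.10), (0.15, 0.20), (0.55, 0.05)]

# Weight each study by the inverse of its variance: precise studies count more.
weights = [1 / se**2 for _, se in studies]

# The pooled effect is the weighted average of the individual effects.
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```

The technique silently treats the three studies as estimates of one common effect. Feed it an observational study, a self-archiving study, and a randomized trial, and it will dutifully return a single number, which is precisely the homogenization problem described below.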
This is not the case with the empirical literature on open access and citations. Most of the studies to date are observational (simply observing the citation performance of two sets of articles), and most of these use no statistical controls to adjust for confounding variables. Some of the studies have focused on the effect of OA publishing, while others on OA self-archiving. To date, there is still only one published randomized controlled trial.
Conducting a meta-analysis on this disparate collection of studies is like taking a Veg-O-Matic to a seven-course dinner. Not only does it homogenize the context (and limitations) of each study into an unseemly mess, but it assumes that homogenization of disparate studies somehow renders a clearer picture of scientific truth.