The Joint Information Systems Committee (JISC) in the United Kingdom has issued an invitation to tender for publishers of monographs in the social sciences and humanities to participate in an open access experiment called OAPEN-UK. As the invitation describes:
The aim of OAPEN-UK is to experiment with scholarly monographs in the humanities and social sciences to find out if open access as a model is feasible, and what impacts open access scholarly monographs have on print and e-book sales, reach and readership.
The study comes with £250,000 (almost US $400,000) of support from JISC Collections to fund the experiment.
Like the open access experiment conducted on monographs published by Amsterdam University Press (AUP), books will be assigned randomly to either the experimental (open access) group or the control group. To keep the two groups comparable, each publisher is asked to submit pairs of titles that are similar to each other in many respects (age, subject, e-book availability).
Matching subjects is a common experimental design in the social sciences that allows the researcher to control methodologically for variation that is known to exist between individuals, or in this case, books. Instead of testing for differences between each group of books (e.g. A, B, C vs. D, E, and F), the matching approach analyzes the differences within each pair of books (e.g. A-D, B-E, C-F). Because each pair serves as its own control, the large variation between pairs drops out of the error term, which often results in a more sensitive statistical analysis. But I’m speculating here, because the methodology and statistical analysis are not spelled out in the invitation to tender.
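To make that difference concrete, here is a minimal sketch in Python, using entirely hypothetical download counts (OAPEN-UK's actual variables and analysis are, again, unspecified), comparing an unpaired t-test against a paired t-test on the same matched data:

```python
# A minimal sketch of why matched-pairs analysis can be more sensitive
# than an unpaired group comparison. All numbers are hypothetical and
# purely illustrative; OAPEN-UK's actual analysis is not specified.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_pairs = 20

# Large between-pair variation: some titles are simply more popular.
baseline = rng.normal(loc=500, scale=200, size=n_pairs)

# Hypothetical treatment effect: open access adds ~30 downloads on average.
control = baseline + rng.normal(0, 20, n_pairs)
open_access = baseline + 30 + rng.normal(0, 20, n_pairs)

# Unpaired test: the between-pair variance inflates the error term.
t_ind, p_ind = stats.ttest_ind(open_access, control)

# Paired test: differencing within each pair removes that variance.
t_rel, p_rel = stats.ttest_rel(open_access, control)

print(f"unpaired t-test: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"paired t-test:   t = {t_rel:.2f}, p = {p_rel:.3f}")
```

With large variation between pairs and a modest treatment effect, the unpaired test typically fails to reach significance while the paired test detects the effect easily, which is presumably why the study bothers to collect pairs in the first place.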
The outcome variables under investigation aren’t any clearer. While the study asks, “what impacts open access scholarly monographs have on print and e-book sales, reach and readership,” nowhere are these variables spelled out, and the study requires multiple groups (authors, librarians, publishers, research councils) to assemble the dataset.
The independent variable (the variable controlled by the researchers, in this case “open access”) also takes on multiple meanings. Treatment books are deposited into the OAPEN Library, although publishers are also encouraged to make them freely available from their own platform and any other platform they wish. Authors of treatment books are provided with a digital copy as well, and encouraged to put copies in their institutional repository and on their personal webpage. These multiple treatments make it difficult to understand the impact of each one on such outcomes as print and e-book sales. In this design, causes and effects become inseparable.
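To see why, consider a toy sketch (the design matrix below is hypothetical, since the study does not specify its variables): if every treatment book receives every form of openness at once, the indicator columns for the individual channels are identical, and no regression could separate their effects.

```python
import numpy as np

# Hypothetical design matrix for 6 books: 3 control, 3 treatment.
# Columns: intercept, in OAPEN Library, on publisher platform,
# in institutional repository. Every treatment book gets all three
# channels at once, so the three treatment columns coincide.
X = np.array([
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
])

# Rank 2 with 4 columns: the individual channel effects are not
# identifiable; only their bundled, everything-at-once effect is.
print(np.linalg.matrix_rank(X))  # -> 2
```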
What is also unusual about this study is that it focuses on previously published books. Monographs must have been published between 2006 and February 2011 in order to be eligible, with the experiment scheduled to commence in May 2011. This rules out a design that begins with a cohort of newly published books and tracks their performance over their lifespans.
As Ronald Snijder explains in his open access study of AUP books, academic libraries (the nearly exclusive purchasers of academic books in the humanities) do not make purchase decisions based on whether a monograph is available digitally. Most university libraries rely upon approval plans that purchase books automatically from university and academic presses. Hence, whether a book is freely available as a PDF from a digital repository like the OAPEN Library would make no difference to print sales of that book. With regard to e-book sales, many formats are proprietary and function solely on a particular reading device. As a result, a free PDF of a book is also likely to have no effect on e-book sales (assuming that a reader even finds the free PDF).
The experiment is therefore designed to reveal null results on sales, which is exactly what Snijder reports in his study; the logical conclusion is that publishers should not be threatened by open access. Add the increased PDF downloads that inevitably follow from putting free copies of books online (whether they are read or not), and the headline of the study could be written today. The British government could have saved taxpayers £250,000.
The allocation of these research funds is also a little troubling. Publishers will be compensated up to £6,000 (almost US $10,000) for each pair of participating titles, although the rationale for financial compensation is not given. Publishers also select the titles, “participate in the collection and evaluation of the data,” and, as a benefit, “join the Steering Group to provide expert advice and guidance.”
Am I the only one squirming?
Since the methodology is not yet fleshed out, this experiment puts sponsors in a position of power to direct the analysis, interpretation, and reporting of the results. This is like inviting the pharmaceutical industry to join a drug study, allowing them to pick their own subjects, conduct their own statistical analyses, and hire their own writers, and then rewarding them financially with public monies for participating. The relationship between publishers and researchers has become uncomfortably close in recent years, especially in the UK, and we should acknowledge that this may unduly influence the validity of results that come from such close bedfellows.
The debate over the merits of open access has moved into a new phase, where case studies, economic models, and advocacy are not enough to influence public policy. Yet if we turn to science to help provide us with answers, we must follow rigorous design and insulate these studies from the potential influence of special interests. Given its methodology, the results of the OAPEN-UK study may have already been written before the experiment has even commenced.