Library card catalogue, via JISC-collections

The Joint Information Systems Committee (JISC) in the United Kingdom has put out an invitation for publishers of monographs in the social sciences and humanities to participate in an open access experiment called OAPEN-UK.  As described:

The aim of OAPEN-UK is to experiment with scholarly monographs in the humanities and social sciences to find out if open access as a model is feasible, and what impacts open access scholarly monographs have on print and e-book sales, reach and readership.

The study comes with £250,000 (almost US $400,000) of support from JISC-collections to fund the experiment.

As in the open access experiment conducted on monographs published by Amsterdam University Press (AUP), books will be assigned randomly to either the experimental (open access) group or the control group. To ensure that the two groups are comparable, each publisher is asked to submit pairs of titles that are similar to each other in many respects (age, subject, e-book availability).

Matching subjects is a common experimental design in the social sciences that allows the researcher to control methodologically for variation that is known to exist between individuals, or in this case, books.  Instead of testing for differences between each group of books (e.g. A, B, C vs. D, E, and F), the matching approach analyzes the differences between each pair of books (e.g. A-D, B-E, C-F).  Using the latter approach often results in a more sensitive statistical analysis.  But I’m speculating here, because the methodology and statistical analysis are not spelled out in the invitation to tender.
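To make that intuition concrete, here is a minimal sketch in Python using entirely made-up sales figures (the tender specifies neither the data nor the analysis plan, so this is only an illustration of the paired-versus-independent comparison, not the study's method): the paired t-test operates on within-pair differences, so the large title-to-title variation that inflates the error term of an independent-groups comparison drops out.

```python
# Illustrative sketch with simulated (hypothetical) sales data: why a matched-pairs
# analysis can be more sensitive than an independent-groups comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_pairs = 40
# Baseline sales differ a lot from title to title (known between-book variation).
baseline = rng.normal(loc=200, scale=80, size=n_pairs)
# Hypothetical small effect of the treatment (open access) on the treated title.
effect = -10

control = baseline + rng.normal(0, 15, n_pairs)             # paywalled title in each pair
treatment = baseline + effect + rng.normal(0, 15, n_pairs)   # open access title in each pair

# Independent-groups test ignores the pairing and must absorb the between-book variation.
t_ind, p_ind = stats.ttest_ind(treatment, control)
# Paired test analyzes the within-pair differences, removing that shared variation.
t_rel, p_rel = stats.ttest_rel(treatment, control)

print(f"independent groups: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"matched pairs:      t = {t_rel:.2f}, p = {p_rel:.3f}")
```

With data like these, the paired test typically yields a much smaller p-value than the independent-groups test for the same underlying effect, which is the sense in which matching buys statistical sensitivity.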

The outcome variables under investigation aren't any clearer. While the study asks "what impacts open access scholarly monographs have on print and e-book sales, reach and readership," nowhere are these variables defined or operationalized, and the study requires multiple groups (authors, librarians, publishers, research councils) to assemble the dataset.

The independent variable (the variable controlled by the researchers, in this case "open access") also takes on multiple meanings. Treatment books are deposited into the OAPEN Library, although publishers are also encouraged to make them freely available from their own platform and any other platform they wish. Authors of treatment books are also provided with a digital copy and encouraged to put copies in their institutional repository and on their personal webpage. These multiple treatments make it difficult to understand the impact of each one on such outcomes as print and e-book sales. In this design, causes and effects become inseparable.

What is also unusual about this study is that it focuses on previously published books. Monographs must have been published between 2006 and February 2011 in order to be eligible, with the experiment scheduled to commence in May 2011. This design rules out a study that begins with a cohort of newly published books and tracks their performance over their lifespan.

As Ronald Snijder explains in his open access study of AUP books, academic libraries (the nearly exclusive purchasers of academic books in the humanities) do not make purchase decisions based on whether a monograph is available digitally. Most university libraries rely upon approval plans that purchase books automatically from university and academic presses. Hence, whether a book is available as a free PDF from a digital repository like the OAPEN Library would make no difference to print sales of that book. With regard to e-book sales, many formats are proprietary and function solely on a particular reading device. As a result, a free PDF of a book is also likely to have no effect on e-book sales (assuming that a reader even finds that free PDF).

The experiment is therefore designed to reveal null results on sales, which is exactly what Snijder reports in his study, and the logical conclusion is that publishers should not be threatened by open access. Add the increase in PDF downloads that follows from putting free copies of books online, whether they are read or not, and the headline of the study could be written today; the British government could have saved taxpayers £250,000.

The allocation of these research funds is also a little troubling.  Publishers will be compensated up to £6,000 (almost US $10,000) for each pair of participating titles, although the rationale for financial compensation is not given.  Publishers also select the titles, “participate in the collection and evaluation of the data,” and, as a benefit, “join the Steering Group to provide expert advice and guidance.”

Am I the only one squirming?

Since the methodology is not yet fleshed out, the sponsors are in a position of power to direct the analysis, interpretation, and reporting of the results. This is like asking the pharmaceutical industry to join a drug study, allowing them to pick their own subjects, conduct their own statistical analyses, and hire their own writers, and then rewarding them financially with public monies for participating. The relationship between publishers and researchers has become uncomfortably close in recent years, especially in the UK, and we should acknowledge that this may unduly influence the validity of results that come from such close bedfellows.

The debate over the merits of open access has moved into a new phase, where case studies, economic models, and advocacy are not enough to influence public policy. Yet if we turn to science to help provide us with answers, we must follow rigorous design and insulate these studies from the potential influence of special interests. Given its methodology, the results of the OAPEN-UK study may have already been written before the experiment has even commenced.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

12 Thoughts on "OAPEN — Open Access Book Experiment in Humanities, Social Sciences"

Seems like a great deal for publishers. Take a couple of books that aren’t selling and are about to go out of print, drop them in the study, collect your $10K and then who really cares what happens after that?

This is not the first flawed JISC experiment in this area. If I recall correctly, they previously distributed funds for OA experiments that were a) so trivial compared with the subscription revenue they’d need to replace that no conclusions could be drawn; b) used by publishers to install tracking systems that simply promoted online submission rather than open access.

Great analysis! I think there is room for rigorous analysis of the effects of OA books, but there do appear to be a number of critical flaws in this study.

I agree that there is much to be learned from an open access experiment with books; however, I’m not sure that this particular experiment will yield much useful new knowledge.

JISC-collections has not yet started the experiment, so there is ample opportunity for them to revise their methods.

If the study assumes that academic libraries use approval plans to purchase books, that is now an increasingly shaky assumption as more libraries turn to “patron-driven” purchasing and abandon the traditional approval-plan approach. One comparison that might be made is to measure sales to libraries that use approval plans and sales to those that don’t.

Another problem is that most academic libraries now will purchase a paperback version instead of a hardback if both are available at the outset. The POD editions that OA makes possible generally allow the option to purchase a book in either format. This could have a significant effect on revenue, as indeed it has had to some degree already with the OA monograph series in Romance Studies at Penn State. One experiment Penn State conducted was pricing the hardback and paperback editions differently for different books in the series. For two books about 19th-century French female novelists that were published almost at the same time, one book was priced with a much greater gap between its hardback and paperback editions than the other, just to see what effect the pricing would have on sales. Unfortunately, I can't remember the results of that experiment now.

I agree with Phil that it would be a shame for this OAPEN experiment to be conducted without a better methodology to ensure valid and useful results.

At a briefing event for publishers interested in participating in OAPEN-UK yesterday, we made very clear that the methodology for the experiment was open to comment. We hope that libraries and participating publishers will contribute further to the design of the experiment.

We invited publishers to submit titles for inclusion in the project that would contribute to their understanding of the impacts of open access on sales, both of print copies and e-book versions.

This design does not rule out newly published books; the publication date range for inclusion is 2006-2011, so that we can monitor the impact on new titles from launch and the impact of older titles, in terms of sales and research reach and impact.

Please contact us if you would like to comment on the methodology.
Lorraine Estelle: l.estelle@jisc.ac.uk
Caren Milloy: c.milloy@jisc.ac.uk

I’m not sure I understand your critique of method re independent variable in your paragraph 7 (“The independent…”).

To measure the impact of each particular open instance (e.g., author’s IR vs OAPEN Library) as a separate variable would be useful, as I think you imply. However, it would also be useful to look at the impact of all possible open instances in the aggregate. This is what it seems to me the experiment is trying to do.

It’s a less tidy control, but not worthless. In fact, such an approach may better reproduce the real-world environment in which an impact on sales would actually be shown. Since one cannot tidily control where one’s books are freely accessible and where they aren’t, the experiment proposes maximum accessibility as the variable. This makes defensible sense to me.

However, I do wonder if restricting the experiment to frontlist books might not bring more informative results. Books as old as 2006 will very likely be circulating in open collections already, legitimately or not. In those cases, any impact on sales will already have been absorbed. Furthermore, sales of most 5-yr-old monographs will have hit the wall, regardless. It seems to me that this reality, and not the experiment design, is what will guarantee a null effect for backlist titles.

Mike, thanks for your comment.

The project has a total of £250,000 to compensate publishers for their participation. If they allocate £6,000 per pair of titles, they can recruit only 41 pairs, or a total of 82 books, for their study. Even for a single bivariate analysis, this does not provide them with much statistical power. Add a host of independent and dependent variables and a lot of known variation, and the researchers would be unable to detect a significant difference even if one existed (i.e., a Type II error).

In other words, their study design is underpowered, which means they have almost no chance of rejecting their null hypothesis even when it is false.
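As a rough back-of-the-envelope check (my own sketch, not anything taken from the tender), a standard power calculation for a paired t-test with 41 pairs shows how little can be detected at conventional effect sizes:

```python
# Illustrative power calculation for a paired t-test with ~41 matched pairs.
# Effect sizes here are assumed benchmarks (Cohen's d), not values from the study.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()  # one-sample / paired t-test power

n_pairs = 41
alpha = 0.05

# Power to detect a small (d = 0.2) and a medium (d = 0.5) standardized effect.
for d in (0.2, 0.5):
    power = analysis.solve_power(effect_size=d, nobs=n_pairs, alpha=alpha)
    print(f"effect size d = {d}: power = {power:.2f}")

# Number of pairs needed for 80% power at a small effect size.
n_needed = analysis.solve_power(effect_size=0.2, power=0.8, alpha=alpha)
print(f"pairs needed for 80% power at d = 0.2: {n_needed:.0f}")
```

Under these assumptions, 41 pairs give well under 30% power to detect a small standardized effect, and roughly 200 pairs would be needed to reach the conventional 80% threshold; only medium-to-large effects stand a reasonable chance of detection.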

The researchers either need to keep the study focused on the effect of a single variable or work on getting massive participation and a sufficient sample size. As designed, this study is not going to provide any useful information. Indeed, it may even be harmful if future readers cite the results as evidence to support a particular claim.

Conceptually, I think this is a really interesting and important study, but it needed a lot more thought (and perhaps a consultation with a statistician) before JISC put out a call to tender.

Thanks for re-emphasizing the n-size; it was a little buried in the blog.

And this, I think, is a far greater flaw in the study; it does indeed make it useless and potentially dangerous. It’s a shame.

I didn’t mention the issue of sample size in the blog post because I’m not sure that a larger sample size would help. If academic print and e-book sales are not affected by open access (as discussed in Ronald Snijder’s article), then a larger sample size may not matter.

I highlighted logical problems in the method as well as a blatant conflict of interest between the researchers and industry. I suppose now we can add statistical power to that list.

Now that the dust has had a little time to settle, we would like to make two comments on Phil Davis's post.

Firstly, we are very willing to listen to comments on the proposed methodology, and would certainly consider making changes that will improve it. We will review it again ourselves very carefully in the light of Mr Davis’s comments. It is very important to us to find out how different e-book business models may work in real situations, and we would not compromise that with a faulty study. If, indeed, it can be shown that our methodology can lead to only one conclusion, then we accept that it is faulty and will need to be changed. We are far from convinced that it is, as we made each decision on the design of the study very carefully and with a great deal of advice.

Mr Davis has read too much into the numerical aspects of the study and is criticising it for something it is simply not intended to be: we are making no claim to be running a major scientific experiment, but rather a practical one, intended to obtain valuable information that can be used alongside other emerging data. We are well aware, given the wide range of variables and the relatively small number of e-books that publishers may be willing to allow to be used in the project, that any statistical conclusion purporting to be anything other than indicative would be premature, from this or other studies we might undertake. We’ve never said anything else, in this Call or elsewhere.

However, we should also put Mr Davis’s mind at rest on a more important issue. He imagines that “the experiment is therefore designed to reveal null results on sales”, that we may “… direct the analysis, interpretation, and reporting of the results” and that we are starting out with a “null hypothesis”. He need let no such worries trouble him. We have absolutely no interest whatsoever in proving any particular hypothesis, or directing the analysis towards any particular conclusion.

We don’t know where he conjured this notion up from: why we would want to do something so idiotic, not to mention pointless, is unclear to us. JISC Collections’ interest is in ensuring as best we can the availability of a sustainable collection of high quality, reliable and wide-ranging e-resources to the UK higher and further education communities. To run a distorted experiment to prove one thing or another runs directly in opposition to that interest, as it will trip us up when we come to putting anything into action. Open access could well turn out to have a valuable role to play, but equally, so might conventional sales-based business models. We do not know what the outcome of this project will be, and are seeking to produce qualitative and quantitative information that will help publishers, libraries and ourselves take sensible decisions that will make the provision of e-resources attractive and sustainable for all of the communities involved. Seeking that information is a complex task that will take time: we believe that our project will add substantially to knowledge of the factors that influence the behaviour of e-book buyers and users, but we are not deluding ourselves that it will be anything like the last word on the matter.

Nevertheless, we value his comments on our methodology: those, we take seriously.

Caren,
Thank you for your detailed reply. We have some misunderstandings.

First, experiments should be well-designed if they are to provide one with valid, reliable and generalizable results. A poorly-designed study may only provide the investigators with ambiguous results, or worse, allow one to derive invalid conclusions.

Call it what you wish (an experiment, a study, a project, etc.), JISC is investing significant resources, time, and money into this project. While I am not privy to your intentions, and in no way called your study “idiotic,” you still haven’t addressed my methodological concerns, except to suggest that I should simply trust JISC.

Second, you also haven’t addressed the conflict of interest in having your industry sponsors help direct the study.

Until you are able to demonstrate that the project will result in meaningful results and that it will not suffer from conflicts of interest, I cannot accept your argument that you are taking my comments seriously. If anything, your response simply reflects a defensive posture.
