Business Models, Controversial Topics, Experimentation, Metrics and Analytics, Research

Open Access Embargoes — How Long Is Long Enough?


Many subscription-based science publishers offer some form of free access to journal articles, usually after an embargo period ranging from two to 36 months after publication.

In most cases, these embargo dates were set more than a decade ago, with little supporting data and little fear that giving away free content would put publishers' business models at risk.

In a widely influential piece published over 10 years ago in Nature Web Debates, Martin Richardson, then Journals Publishing Director for Oxford University Press, released one of the first usage studies for electronic journals, plotting how the vast majority of article readership for The EMBO Journal takes place within the first three months of publication. This finding shouldn't have come as a surprise to science publishers, most of whom observe the same pattern in their own journals. However, what Richardson wrote next now seems, in hindsight, controversial:

The initial results of the Highwire free access experiment indicate that there is no detectable increase in usage once free access is given.

In light of this finding, the decision for OUP and others to begin offering free access to journal articles — both from their own publishing platforms and through PubMed Central — seemed like a low-risk venture. If there was little pent-up demand for free access, as the analysis revealed, publishers shouldn't worry about giving away content.

Should they start worrying?

The decision on the acceptable length of an embargo period is partly political — for many publishers, it was a reaction to the online petition to create a public library of science. It is also partly strategic — deciding how much to give away before you start compromising your ability to sell your service. Indeed, even the notion that access embargoes were harmful was considered controversial, with some arguing — even today — that minimizing embargoes increases subscription revenue, citations, and submissions.

These arguments, however, are based entirely on anecdotal evidence — uncontrolled, unscientific case studies that reflect the practices of a single journal. They are used in place of rigorous scientific studies because the rigorous scientific studies don’t yet exist. An embargo date is not something you play with, which is why most publishers have not attempted to adjust their embargo periods since they were set.

In the absence of rigorous studies on embargo periods, anecdotes risk turning individual observations into general claims that may not be generalizable. For example, the fact that Molecular Biology of the Cell provides free access to its articles just two months after publication does not necessarily mean that all journals would survive with such a short embargo. Conversely, the fact that GENETICS increased its embargo period from six to 12 months does not necessarily mean that short embargoes lead to subscription cancellations.

The fundamental problem in looking at the relationship between embargoes and cancellation is that cancellation is the result of a complex decision-making process on the part of the librarian (or individual) that weighs the value of a subscription against its cost. Surveys that pose artificial scenarios, for example, "If the (majority of) content of research journals was freely available within six months of publication, would you continue to subscribe?", measure the propensity to cancel along only a single dimension. They are also hypothetical in nature.

Librarians indicate that usage data factors heavily in their decision to renew a subscription, so if we are seeking a causal relationship between public access and cancellation, we should look to usage statistics as an intermediary.

Access policies that require authors to provide public access to their work — either by depositing copies of their articles into PubMed Central, an institutional repository, or both — may be drawing those readership statistics away from the publishers' websites. What is not known, at this point, is how much readership is being drawn away. PubMed Central may be complementing publishers by providing access to readers traditionally underserved by the subscription model. On the other hand, PMC may be competing directly with publishers for the attention of the reader. Furthermore, little is known about whether access to author manuscripts is a viable substitute for the published article, or how the length of an embargo changes article-level usage statistics. It is puzzling that funders are debating changing access policies without these data.

In the next few months, I'll be reporting on some emerging studies that attempt to answer some of these questions. In light of new funding policy debates, this is a good opportunity for publishers to revisit their access embargoes and ask whether they should change, considering how much the world of access opportunities has itself changed over the last decade.


About Phil Davis

I am an independent researcher and publishing consultant specializing in the statistical analysis of readership and citation data. I am a former postdoctoral researcher in science communication and former science librarian. http://phil-davis.org/

Discussion

11 thoughts on “Open Access Embargoes — How Long Is Long Enough?”

  1. Good article Phil. I look forward to further posts in this series.

    Posted by cityopenaccess | Sep 18, 2012, 6:57 am
  2. It continues to strike me as odd when scientists (and those funding science) base important decisions on feelings and suspicions rather than on empirical data and supported conclusions. As noted in my recent posting on block grants, OA mandates need to have all the details worked out in order to avoid a lot of unintended consequences, and length of embargo is another area, like funding, where things seem half-baked. If we can’t perform actual studies (most publishers aren’t going to risk their revenue for such a study) then at the very least we need policies that set rational criteria for embargo length, and that provide systems to either shorten or lengthen those periods based on real-world performance.

    A few other points:

    When I was EIC of a journal, we’d make 2 articles each issue “featured” and freely available. These articles were almost always the most-read articles in the issue. But this is quite variable journal-to-journal. My journal was a new startup, just building a subscription base. The journals I work with now are all well-established, mature journals, and when we feature an article and make it free, usage varies quite a bit, depending on the article itself.

    I’d argue that retention of subscriptions is only part of the picture. Many journals have not fully saturated the market, and growth through new subscriptions is an important part of the overall picture. If a librarian has to choose between purchasing two new journals, one that makes its articles free after 12 months and one that makes them free after 2 months, it’s likely they’ll lean toward the one with the longer embargo, as their patrons can probably make do with the short embargo on the latter journal (again, this will vary from field to field).

    Speaking of field to field, this is another key issue. A history journal editor tells me their citations peak around 5 years after an article is published. How do we set fair embargoes across different subject areas?

    There are other issues with PMC drawing away traffic from the publisher’s version, particularly that the lowered traffic means fewer impressions, and lower ad sales for the publisher, so there is a direct economic hit regardless of the effect on subscriptions. There’s also the question of Green OA archives and updates or changes to papers, whether it’s better to have one “paper of record” where the reader can be sure they’re seeing the latest version with any corrections or retractions that have occurred. In the age of Google, I’m not sure why everything must be collected and kept in a separate archive, provided the publisher’s version is made freely available.

    Posted by David Crotty | Sep 18, 2012, 7:46 am
  3. Interesting article. As a consultant I work with a number of publishers and have often looked at the data supporting the embargo period. What is clear is that without an embargo, many libraries will cancel — and have canceled — their subscriptions. Selecting an embargo period is difficult, as usage really depends on the subject field, with wide variations between science, social science, and history. If the publisher has a mixed list of titles, having just one embargo period is difficult. Some of the more interesting factors in the usage analysis include where the usage is coming from, deeper content analysis, and technical analysis. Equally important is discovering what platform or mobile device the user is using and reviewing the turnaway traffic. Publishers are also starting to drop out of aggregators' databases because they are receiving too many cancellations.

    Posted by Dan Tonkery | Sep 18, 2012, 9:40 am
  4. These types of decisions — and the data/evidence that supports them — are made at a point in time, and there is a connected world that is moving as well. At the specific time that Martin was gathering this data, Google (much less Google Scholar) had not indexed all the content in Oxford’s journal sites. This indexing (which was about a decade ago, iirc) significantly boosted demand from non-institutional readers. So if Martin had run the analysis a few years later, we might have seen the bump in some fields after some periods.

    – John

    Posted by John Sack | Sep 18, 2012, 9:54 am
  5. You may find it “puzzling that funders are debating changing access policies without these data.” However, since the default is Toll-Access and artificial scarcity, then any movement away from that is an improvement for citizens and funders that aim to maximize the value of their funding.

    Maybe you confuse funders' desire to maximise the value of their funds with some publishers' desire to keep outdated, dependent-on-monopoly business models alive?

    It is ironic that you mention that publishers may “worry about giving away content.” What content is it exactly you are talking about that publishers “give away”? Content that the publishers created? Original research by publishers?

    Posted by andersnorgaard | Sep 18, 2012, 5:03 pm
  6. Forgive me for nitpicking here, but is the word ‘embargo’ the correct choice here? I’ve always taken ‘the’ embargo in the case of scientific publications to be the (short) period during which news media can access the paper prior to its publication. You, however, seem to be using the term to describe the period in which a research paper is, to use the informal term, paywalled prior to its subsequent change in status to open access (for those journals that after a period cease to charge for access). This is the first time I have encountered this use of it, and I have to say I had to read your initial paragraphs several times to determine what I believe their intended meaning is. (You’d have two different meanings for ‘embargo’ relating to the same material!) Wouldn’t using ‘paywalled’ or some equivalent be more accurate and less confusing? The common use of the term embargo means to withhold a publication entirely, not to limit an article to paying readers. Surely?

    Yours confused, etc.

    Posted by Grant Jacobs | Sep 19, 2012, 1:19 am
  7. Good article, Phil. As you know I chaired the PEER project (www.peerproject.eu ) that set out to see what happens to paid usage in a green OA mandated universe of 250 journals from 12 publishers. We also wanted to test embargo sensitivity, but the problem with experimenting with real journals is that the owners naturally don’t wish to create a suicidal position for their titles by offering very short embargoes. What we did show was that for the embargoes chosen per journal by the publishers (mostly 12 months, but some at 6 and 18 too) there was no effect. We also showed that researchers have little interest in self-archiving. Out of 11,000 papers, only 70 were deposited in a repository despite repeated invitations from the publisher. Hope you may comment on PEER in the next part of this posting.

    Posted by Michael Mabe | Sep 20, 2012, 10:27 am
    • With experiments like PEER, the results may be limited to the experimental conditions. A new repository was constructed for the purpose of the trial, which is a very different situation than PubMed Central, a repository that has been in existence since February 2000.

      Secondly, the format of the document may be very important. PubMed Central accepts the XML version of final manuscripts from publishers and does a great job structuring them just like full text articles, creating inline images and even producing a Printer Friendly PDF version of the article. To a reader, these features are much more desirable than a 20-page, double-spaced manuscript with unlinked references and tables/figures at the end.

      In sum, I’m hesitant to generalize from the PEER repository study.

      Posted by Phil Davis | Sep 20, 2012, 10:40 am
      • It’s also important to note that PubMed is, for most medical and life sciences journals, one of the most common referring sites for readers (second only to Google in my experience). Since PubMed is geared in many ways toward sending the reader to the PubMed Central version of the article, rather than the free version hosted by the journal, this likely makes a huge difference when compared with repositories which are nowhere near as discoverable.

        Posted by David Crotty | Sep 20, 2012, 12:20 pm

Trackbacks/Pingbacks

  1. Pingback: Open and Shut: #OpenAccess around the web | Open Science - Sep 23, 2012
