From a distance, you might think that journal publishers should be celebrating their success in Europe. They are being offered the open access (OA) crown, locking in OA contracts and article flows. But European policy targets are adding complexity. The emergent problem is straightforward: there appears to be no realistic path forward that achieves the 2020 OA targets without resulting in substantial revenue reductions for existing publishers. Will Europe miss its OA target? Or will publishers miss their revenue targets?

Should Napoleon have crowned himself? Jacques Louis David, The Coronation of Napoleon, 1806.

European policy targets for full OA by 2020 are driving much of the recent activity in the region. National consortia are incorporating these targets into licensing strategy. Richard Poynder has provided strong analysis of the resulting “OA Big Deals.” While the details vary to some degree, in every case they are intended to take the subscription fees paid to a given publisher from the research sector in a given country and to use that money to provide for OA to all the publications authored by scholars in that country. The clock is running down on consortia efforts to put such deals in place by 2020.

In one way of thinking, these OA Big Deals are in publishers’ interests. For a publisher that can sign enough of them, they effectively eliminate the risk of revenue drain from uncontrolled piracy while getting beyond the challenges of hybrid journal models. And, consortia are effectively offering to crown the existing major publishers as the OA Royalty, rather than putting in place the competitive marketplace for OA that Poynder and others wish to see develop.

It is not unreasonable for publishers to be offered the Royal Crown by libraries and their institutions (and the accompanying ongoing market share and profitability) if OA itself is the end goal, as some advocates strongly feel. After all, an enormous share of the publishing capacity is currently held by just a handful of scholarly publishers. By capacity, I do not just mean technical capacity, but the relationships with editors and reviewers that enable publishing to take place reliably at scale, as well as strong metrics of publication quality required to draw submissions. Many library and consortia leaders have come to recognize that any near-term flip to OA must leverage publisher capacity. As the California Digital Library’s Ivy Anderson said at the Society for Scholarly Publishing Annual Meeting last month, although many librarians may wish to seek alternatives, agreements with these publishers will be the key “on-ramp” to OA.

Even as European consortia seem prepared to crown this handful of publishers as OA Royals, the publishers are taking what appears to be a cautious approach, especially on deals of the so-called Read and Publish (RAP) flavor. Joe Esposito wrote about some of their obvious hesitancy last week. The risk to publishers is that, by flipping journals in one major region to OA, they will devalue the subscription bundle for subscribers elsewhere. If most or all publications in Europe become immediately OA, even without revenue loss from that region, while other countries continue to rely on the subscription model, revenue will erode outside Europe. In North America, piracy and other factors are already reformulating market dynamics and emboldening library negotiating postures. If a substantial portion of their subscribed content becomes available OA, there is every reason to expect North American consortia or institutions to demand price reductions. And thus a regional flip results in revenue damage to a global publisher. Earlier deals, allowing for a national or regional flip without a substantial price increase, have proven a matter of concern from a revenue perspective. David Worlock is by no means alone in musing, in light of the recent failed Springer Nature IPO, that perhaps “OA dilutes Ebitda a bit.”

Today’s dilemma, therefore, is not a matter of opposition in principle by publishers to a nationally led global flip. Rather, it is haggling about price. In recent years, a few consortia signed agreements in which they pay more for adding RAP features to their existing subscription contracts, accepting publisher claims that a flip necessitates investment in the new model while maintaining the cost of subscriptions. While by no means uniformly praised, such a “first mover disadvantage” is seen in some circles as generating momentum for a truly global flip.

Thus it was notable that MIT, which has long been an important leader on OA in the US, recently signed on to an agreement with the Royal Society of Chemistry (RSC). The agreement, positioned as an experiment, is believed to be the first RAP agreement signed with a US university. Although a blog post from MIT’s lead on scholarly communications and collections strategy Ellen Finnie offers as context that “there is enough money in the system to support a move to open access,” it is important that she acknowledges that under this agreement with RSC there is an increase in “library-based payments.”

Will MIT’s experiment prove to be the first move in a much broader North American trend? The University of California’s strategic priorities that were released last week seem to suggest a growing interest in considering RAP models, an interesting departure in light of California’s earlier evidence-based Pay It Forward project which seemed more skeptical.

In one way of thinking, if the major producers of scholarship sign on to pay more to publishers under RAP models, then pay-to-read subscription models can gradually fade away. Of course, others believe that publisher arguments about an inability to proceed with the same fee structure in the absence of an immediate global flip are just a rationalization for higher prices, and that these deals are simply emboldening publishers to demand higher fees against a looming policy deadline for the flip.

In any event, as I wrote last week, I am skeptical that we are going to see large-scale consortia-driven pressure in the US to change the model. And as a result, a global flip that avoids the need for a transitional higher-priced RAP model seems highly unlikely.

So, we are back to asking about the price for a transitional RAP model. Here, the level of price increase that may make sense for MIT as a short-term experiment with a non-profit society publisher is not necessarily what will be acceptable as an equilibrium point for the research community overall with the commercial publishers. This is a good moment for a careful re-read of Gemma Hersh’s carefully laid out perspective on what it will take for Elsevier to embrace RAP models, including not least the need for the academic community to bear the price increases in “the system” from commercial subscribers. It gives some indication of just how far Elsevier might be from being willing to embrace such a model without seeing substantial price increases at a consortial level.

Of course the consortia understand this. David Crotty termed the idea of a coordinated flip to open access “magical thinking” several years ago in these pages. If the European consortia are not prepared to increase their payments to publishing majors, and publishers don’t accept the OA crown being offered them, what is the long-term game?

On the subscription side, it may be that “Elsevier blinked” in allowing access following contract lapses, as we have seen in Germany. Of course, RELX leadership stated clearly in February its confidence that the German negotiations represent only an unremarkable part of its strong global business.

In any event, such contract lapses will, eventually, lead to lost access. At some institutions this may not be of concern, and there could be some revenue loss for publishers as a result. But at research-intensive institutions, although piracy remains uncontained, it is not at all clear to me that Sci-Hub offers a perfect substitute. Would not an access cutoff result in at least some universities entering into side agreements with publishers even if a consortium tried to hold firm? 

The bigger question then is whether, after an access cutoff, enough universities would continue to resist contracting to yield a change in the posture of at least one major publisher. Looking beyond any current negotiations, is there a consortium strong enough and a publisher weak enough to force such a shift to another approach?

Academia’s OA targets and publishers’ revenue targets are once again directly in conflict. Have market fundamentals changed sufficiently — due to Sci-Hub among other factors — to yield a different outcome this time?

Roger C. Schonfeld

Roger C. Schonfeld is the vice president of organizational strategy for ITHAKA and of Ithaka S+R’s libraries, scholarly communication, and museums program. Roger leads a team of subject matter and methodological experts and analysts who conduct research and provide advisory services to drive evidence-based innovation and leadership among libraries, publishers, and museums to foster research, learning, and preservation. He serves as a Board Member for the Center for Research Libraries. Previously, Roger was a research associate at The Andrew W. Mellon Foundation.

Discussion

10 Thoughts on "Will Europe Lead a Global Flip to Open Access?"

Once we go full OA, publishers will cease to have recurring revenue. Their only concern will be with prestige, which is presently guided by JIF. Once an article falls out of the 2-year window, there is no incentive for the publishers to care about that article. I foresee this system doing significant harm to authors, whose h-indices will likely suffer.
For authors to thus improve their h-indices, they will have to promote the heck out of their own articles. But this is yet another drain on their time. If I were a publisher, I would be looking at a “pay to promote” system where busy authors put in extra funding to raise the profiles of their articles during and after that two-year window. Will we go down a dangerous path where scholarly information becomes beholden to the world of paid advertising?
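To make the two metrics in this comment concrete, here is a minimal Python sketch of how they are computed: the h-index, and the two-year citation window that underlies the Journal Impact Factor. The citation counts used below are invented purely for illustration, not real bibliometric data.

```python
def h_index(citations):
    """h-index: the largest h such that the author has at least
    h papers with h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def two_year_impact_factor(citations_this_year, citable_items_prior_two_years):
    """JIF-style ratio: citations received this year to articles published
    in the previous two years, divided by the number of citable items
    published in those two years."""
    return citations_this_year / citable_items_prior_two_years

# Hypothetical figures for illustration:
print(h_index([10, 8, 5, 4, 3]))        # an author with these citation counts -> 4
print(two_year_impact_factor(200, 80))  # 200 citations to 80 recent items -> 2.5
```

The two-year window is what drives the commenter’s point: citations earned after an article leaves that window no longer move the journal’s impact factor, though they continue to feed the author’s h-index.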

Joe, what about counterfactual thinking! How would prices develop if Open Access did not exist? Nobody knows! But everyone knows that price developments (with or without Open Access) are essentially related to market concentration.

This week’s issue of the Economist has a piece suggesting that a lack of quality in Open Access publishing (and perhaps also toll-access publishing) may be the real problem. https://www.economist.com/science-and-technology/2018/06/23/some-science-journals-that-claim-to-peer-review-papers-do-not-do-so
I find much of this discussion rather frustrating in that publishers (and perhaps also librarians) appear to be wholly in a reactive mode: RAP as a business model and flipping as an access strategy seem to be purely reactive. There was a time when it was expected that publishers who offered digital services would be offering something new, perhaps unexpected, that allowed researchers to do stuff that couldn’t previously be done well. Perhaps some new approaches to analysis and reviewing of published research could be thought up and developed? Publishers, with their editors and reviewers, who can do that will surely find some scope for developing new services.

Roger Schonfeld’s Scholarly Kitchen article this morning captures the current conundrum for the scholarly publishing industry and raises the question of how publishers and institutions will agree on access and pricing that does not harm researchers’ access to important research while allowing publishers the financial wherewithal to remain a going concern.

As data analytics are beginning to take shape in many forms, in my humble opinion the issue of services and pricing will become more complex as libraries, institutions, publishers, researchers, etc., grapple first to understand how it will help them in their intellectual pursuits and then how to pay for these new tools.

Who will own the rights to the data and how will the owners of that data be compensated? If one analytics tool only contains data that represents 30% of Research on a given topic vs. another tool that has 50%, who will be the arbiter to say which one is better?

As researchers have always been the arbiters of which articles are most germane to their research hypotheses, it is essential that they have access to the most relevant scholarly articles. In my 2014 article, ‘As Worlds Collide’, I addressed the issue of younger researchers pursuing their first R01 grant, who must support their proposals with the best research articles, and do so in a timely manner. Considering this, a researcher must have access to a wealth of articles coupled with a dynamic search engine to help them sift through the sea of articles and find the most relevant ones to support their research hypothesis.

This “silent negotiation” of pricing is in the hands of the open market and ultimately should lead to an overall balance between price, quantity, and quality of research but in the meantime, will the quality of research be affected? We will have to wait and see but, in the meantime, the scholarly publishing industry collectively should have an overarching goal of providing the research community with the best ecosystem of scholarly content and tools.

Stay tuned this discussion will not be wrapped up any time soon.

All excellent, insightful comments here. In Britain a guiding principle is that STM studies sponsored and paid for with public funds should not be “repackaged” by publishers at a profit because, in effect, the readership has already paid for these through taxes at the financing stage. The major players are the UK, Switzerland, Germany and France.

Even in humanities, a case could be made that some research (archeology, medieval history) is underwritten by public universities and should therefore be free and readily available to the taxpayer.

In Europe most major research universities are public. Here in Italy STM research is painfully underfunded except at the major tech universities. The only ones usually included in the world’s top 200 are in Pisa, which has seen obvious improvement since construction of that city’s leaning tower.

The standard counter argument is that while the research is paid for with public funds, publication is not paid for with public funds. Those costs must be borne somewhere, so either the governments must increase funding to researchers in order to pay the costs, or research budgets should be cut accordingly so a portion can be used to cover publication costs.

There’s also a major fallacy in the argument above because those same governments allow discoveries made using those public funds to be owned by the discoverer and their institution, and exploited through patents. If the taxpayer has already paid for these results, then why allow researchers to withhold the results from public use? I would argue that charging enormous amounts of money for access to a cure for a disease is more problematic than charging for access to read a story about the discovery of the cure.

A good point, especially if the published story includes a formula for the cure, but that also opens a legal can of worms regarding patents, intellectual property generally, and who owns knowledge (or its product) and for how long.

In its most essential form, digital publication and distribution over networks now cost very little if we consider that anybody can upload a PDF to a server. No, it’s not a journal, but perhaps an article submitted to one.

I’m not pointing fingers, and I know that labor and printing costs are substantial, but the prices of some academic journal subscriptions strike me as rather high.

The fact remains that too many people want information for free.

Back when the Finch Report came out (2012), I recall that one of the first responses was to note that the UK produces around 5-6% of the world’s published research, yet pays only 2-3% of the world’s subscription fees. So it was clear that any move to shift the costs of publication onto the producers of research would greatly increase the UK’s costs, as they would be voluntarily shouldering more of the burden on behalf of consumers of the research. I’m not sure why this concept has created such cognitive dissonance.
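The arithmetic behind this point can be sketched quickly. The percentages come from the comment above (midpoints chosen for illustration); everything else here is hypothetical, not actual market data.

```python
def author_side_cost_ratio(share_of_output, share_of_subscriptions):
    """Ratio of what a country would pay under author-side (pay-to-publish)
    funding, scaled to its share of research output, versus what it pays
    today via its share of subscription fees."""
    return share_of_output / share_of_subscriptions

# Midpoints of the ranges cited in the comment:
uk_output_share = 0.055  # ~5-6% of the world's published research
uk_subs_share = 0.025    # ~2-3% of the world's subscription fees

# Under output-proportional payment, the UK's bill roughly doubles.
print(round(author_side_cost_ratio(uk_output_share, uk_subs_share), 2))  # -> 2.2
```

The ratio, not the absolute figures, is the commenter’s point: a net producer of research pays more under any model that allocates costs by publication output rather than by readership.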
