Last week, the University of California system terminated its license with Elsevier. A great deal of attention has been paid to California’s efforts to reach a Publish & Read (P&R) agreement. The what-could-have-been of this deal is interesting and important. But I wish to focus today on the what-no-longer-is of scholarly content licensing: the big deal model, in which a single publisher’s subscription journals are bundled together under three- to five-year agreements. In the eyes of major libraries in Europe and the US, the value of the big deal has declined. As a result, we are moving into a new period, in which publisher pricing power has declined and the equilibrium price for content and related services is being reset. What is the principal culprit? I will maintain today that we must look in large part to what publishers call “leakage.”
The University of California was extremely effective in framing the negotiation as being about the desire for a pathway to open access. We have seen in other efforts to negotiate P&R deals, such as that led by Projekt Deal in Germany, that Elsevier is unyielding in insisting that it must take a global perspective on the economics of this transformation.
From their Pay It Forward study, California negotiators were aware that the UC system would be responsible for a far higher share of publication costs than of subscription costs were the same global publishing cost basis to flip to open. California negotiators nevertheless took the negotiating posture that it should experience “reduction in costs” from Elsevier for the move. While Elsevier was unwilling to provide a P&R model that met UC’s price goals, it certainly would have been willing to offer a continuing subscription big deal to UC.
In choosing not to move ahead with a continuing subscription deal at all, California has made an extremely strong statement. While its negotiating position was to seek a P&R deal, in the absence of such an agreement it felt comfortable walking away from Elsevier. Whatever its strategic goals, California, like Germany and Sweden before it, clearly felt that it could do without the subscription bundle altogether.
Of course, Elsevier is quick to argue the value of its offerings. It argues that the number of subscription articles published is growing faster than any proposed price increases for subscriptions, lowering the cost per article published. And it notes that usage continues to rise, driving down the cost per article downloaded.
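The arithmetic behind both claims is the same: if the denominator (articles published, or articles downloaded) grows faster than the subscription price, the unit cost falls. A sketch with purely illustrative numbers, not Elsevier’s actual figures:

```python
# Illustrative numbers only -- not Elsevier's actual figures.
price_now, price_next = 1_000_000, 1_020_000   # a hypothetical 2% price increase
articles_now, articles_next = 4_000, 4_200     # hypothetical 5% growth in output

cost_per_article_now = price_now / articles_now      # 250.0
cost_per_article_next = price_next / articles_next   # ~242.86

# Output grew faster than price, so the per-article cost declined.
print(round(cost_per_article_now, 2), round(cost_per_article_next, 2))
```

The same calculation applies to cost per download: rising usage against a slower-rising price yields a falling unit cost, which is the core of the publisher’s value argument.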
Against these arguments, California, through its cancellation, has nevertheless maintained its position unambiguously. It does not need ongoing journal subscriptions through ScienceDirect. Put another way: A major customer’s perceived value in the product offering has declined. Elsevier apparently no longer has the pricing power it once could assert.
The source of the value decline is no mystery. Joe Esposito argued more than a year ago that “Sci-Hub is an unacknowledged reserve army prepared to enter the battle with publishers,” noting elsewhere in the piece that time “is not on Elsevier’s side.”
But Sci-Hub is not alone. It is one of a series of services through which content is “leaking” out of publisher sites to users. While some of these sites are illicit and piratical, others, like SSRN and institutional repositories, are accepted parts of the ecosystem, and still others, like ResearchGate, are viewed differently depending on the observer.
As a result of this combination of services, of whatever legitimacy, publishers are noting that a decreasing share of the usage of a given article is coming from their own content platforms, which are the primary vehicles through which they monetize the library channel.
Some of the larger publishers have attempted to generate estimates of the amount of leakage they are experiencing. While these models are rough, and no one wants their name associated with their estimate, I have heard estimates suggesting that publisher usage numbers could be at least 60-70% higher if “leakage” were included on top of their on-platform usage statistics. This leakage includes “green” options through a variety of repositories (including some operated by publishers themselves, in addition to library and not-for-profit repositories), materials on scholarly collaboration networks, and piracy. The share of leakage among entitled users at an institution with a license is probably lower than this estimate, but it is likely well into the double digits.
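To make those rough estimates concrete: if off-platform use adds 60-70% on top of measured on-platform usage, publisher platforms are capturing only around three-fifths of total usage. A small sketch of that arithmetic, using the uplift figures above:

```python
# Rough estimates reported above: leakage could make total usage
# 60-70% higher than what publishers measure on their own platforms.
# What share of total usage, then, do the platforms actually capture?
for uplift in (0.60, 0.70):
    platform_share = 1 / (1 + uplift)    # on-platform usage / total usage
    leakage_share = 1 - platform_share   # off-platform usage / total usage
    print(f"+{uplift:.0%} uplift: platform {platform_share:.1%}, leakage {leakage_share:.1%}")
```

In other words, a 60% uplift implies the platform sees only 62.5% of total usage; a 70% uplift implies 58.8%, with leakage accounting for roughly 37-41% of the whole.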
I am in no way arguing against green models. Indeed, publishers have largely become comfortable with green open access. I am simply observing that these percentages are beginning to add up.
Publishers analyze usage data carefully. Some report it as part of their financial reporting. For example, RELX reports that Elsevier’s global article downloads grew from over 900 million in 2017 [PDF page 14] to 1 billion in 2018 [PDF page 14]. Let’s assume that these downloads represent organic growth and do not include the additional traffic derived from the acquisition of bepress, which took place in this period. Allowing that ScienceDirect traffic has been growing globally, is it possible that emerging markets account for a far higher share of this increase than do mature markets? Might some major research universities even be seeing subscription article usage plateau or decline? There is much here we do not know. Universities are negotiating, in any case, based on their own usage and associated perceived value, not the global aggregate.
The implication is this: publishers license their subscription content for on-platform use at premium prices and usage globally may well be growing; however, also globally, a significant portion of usage now occurs for free, illicitly or not. The key question is how this all adds up at the institutional level. In individual negotiations, these patterns cannot help but raise questions about what the equilibrium price should be for this content.
Having terminated its journal licensing relationship with Elsevier, California is undertaking to ensure that it provides alternative forms of access for its researchers. Its recommendations include a number of services and tools from which its users can choose, depending on the nature of their needs. Most of the alternatives are tools and repositories designed to access “green” versions of articles. In addition, as has become the pattern in such documents, it makes sure to reference Sci-Hub (thereby growing its brand recognition) while taking care not to endorse it. Interlibrary loan and document delivery are also offered as alternatives, and, while they probably will not be the greatest source of articles for California users, they are a key part of the calculus.
California offers various forms of interlibrary lending, which often entail labor costs, not only for the borrower but for the lender as well. I have heard concerns from other libraries that California will thereby be, in one way of thinking, free riding on its peers in a way that is not sustainable should others wish to follow California in cancelling the big deal.
In my view, document delivery is the more important factor. Urgent needs for journal article fulfillment, such as those that a clinical provider might face in a patient care situation, are among those that have most deterred libraries from cancelling big deal licenses. For these urgent needs, California is offering document delivery.
California’s document delivery provider is Reprints Desk. The basic model, which is similar to a service offered by the Copyright Clearance Center, is that a user can place a request for an article, directly or mediated through their library. For many publishers, including Elsevier, the document delivery service can provide access to an article with little to no delay. The library pays the service, and the service passes along a portion of the fee to the publisher. If the service is configured to trigger when a discovery workflow ends without other access to a licensed or open version of the article, these services can come within striking distance of replicating the user experience of the licensed big deal. Presumably, California has concluded that the inconvenience and delays imposed on article access by the overall set of alternatives, including document delivery for those important urgent needs, do not create an undue burden for its users.
Both standard interlibrary lending and document delivery impose additional costs on the university as demand for them increases, as can be expected following a major cancellation. Depending on volume, interlibrary loan could require some additional labor, both within the California system and at other institutions, while document delivery incurs direct expenditures with the provider. While the demands for the services may not be much greater in the early period following a cancellation, especially in a case such as this one when backfile years remain available through the publisher, over time a greater amount of content will become unavailable and demand for lending and delivery services may be expected to rise. As one of my readers explained to me, costs will grow over time and so, “At a certain point it seems likely that UC will be compelled to pick up subscriptions to individual titles as a cost-control mechanism.” Presumably, California has modeled its expectations for levels of use and has concluded (as did Germany) that the articles that will for the foreseeable future be requested through these services will result in a lower cost than the big deal license that Elsevier was offering.
There is an important likely conclusion here: the inconvenience and cost of providing an alternative access system are not expected to add up to the big deal price. This is important because per-article document delivery prices were deliberately set so as not to undermine the economics of the primary publishers’ licensing businesses. As a result of content leakage through other sources, the amount of content a library needs from these document delivery services to make the cancellation logic work has likely dropped considerably from what once would have been the case. As part of this overall mix, these document delivery services appear to be something of a game changer for big deal negotiators.
Thus, for all the discussion about open around this latest round of failed negotiations, I believe the first question is about value and price for the subscription business. To be clear, the work of publishing has value, and I respect the importance of the curatorial, editorial, technical, and production functions, among others. Still, my working hypothesis is this: the pricing power among the largest publishers for their big deals has declined and the equilibrium price for journal subscriptions may therefore decline as well.
First Germany and Sweden, and now California, have used this perceived decline in value to push publishers toward what would ultimately replace a subscription model with a publishing services agreement. In the case of Germany, Wiley appears to have accepted this proposal: it is prepared to gain market share for its existing open access business and is even creating a flagship Deal-branded journal that could position it as a leader in showcasing German research. Elsevier, one must imagine, is frozen out not only as a content provider but also, increasingly, as a publisher of German scientific manuscripts. California would seem to represent a gold rush opportunity for Wiley, Springer Nature, or another publisher interested in acquiring market share for publishing services at one of the world’s research powerhouses.
Elsevier has a global footprint, and although it appears to be overly reliant on North America for its publishing revenue, even losing what may be its largest North American account may not have a material impact on its bottom line. But California is widely perceived as a leader among American library groups, and if you speak with consortial leaders, the rumblings are hard to miss: things seem possible this week that did not seem likely early last week.
For this reason, I am surprised that the strategists at Elsevier did not manage to construct some kind of creative deal, bundling parts of their platforms business or reducing the price of the expiring license agreement, to retain California. Surely there must be a way to escape the logic of P&R negotiations, which Elsevier strategists understandably hate.
Of course, any “contagion” of cancellations will spread across North America slowly rather than rapidly, since not all consortia have California’s ability to speak with a single voice, and since prices will adjust to match the new equilibrium. After all, how many lost agreements can Elsevier withstand?
What comes next will assuredly be of interest. While we have seen Germany able to maintain a common position in its negotiations with Elsevier, it remains to be seen how strong the California schools will prove to be. Will any of them choose to license individual journal titles on their own, breaking the big deal but acknowledging the value of some of its contents? Will faculty members rise up in revolt and cause the consortium as a whole or some of its individual university members to take a different approach? Will Elsevier return at some point with a game changing offer? If California can maintain control of its stance to the degree that Germany has been able to do, it will suggest that its position on the value of the journal bundles can hold.
I maintain the belief that leakage is a key factor in reducing the pricing power of the major publishers. With legal action thus far going nowhere fast, and the green cat out of the bag in any case, pragmatists may see only a few prospects for how to address leakage.
First, to the extent that document delivery services are the game changer that some have speculated they may be, publishers may want to understand this particular part of the marketplace better. Not all librarians are equally sanguine about these services and their methods. Publishers may wish to dig into this landscape further and explore options.
Second, publishers are already looking at the syndication model, as a strategic initiative to co-opt content leakage and turn much of it into COUNTER compliant usage that can be applied to the library channel. Several major publishers are actively exploring syndication and considering the trade-offs it involves, and Springer Nature and ResearchGate just launched a pilot. It seems that syndication can help publishers maintain the value of their journals — and thereby preserve the library channel.
For many, leakage will only seem to be a short-term concern as we move inexorably towards a more open future. But brief though today’s transitional period may be, libraries will increasingly leverage this new dynamic to their negotiating advantage. Publishers reliant on the subscription model will need to grapple with leakage — in terms of their platform strategy, related technical issues, product bundling, and negotiating tactics.
I thank Christine Wolff-Eisenberg for encouraging me to write about this topic. I also thank her, along with Rick Anderson, Brandon Butler, David Crotty, Lisa Janicke Hinchliffe, Kimberly Lutz, Mark McBride, and Doug Way, for commenting on an earlier draft.
50 Thoughts on "Is the Value of the Big Deal in Decline?"
Great analysis, but don’t forget the trigger for the UoC cancellation was the attempt by Elsevier to bounce UoC into agreeing its terms by contacting UoC faculty with a misleading impression of what was on offer in the hope of said faculty pressurising the UoC negotiators. I hope Elsevier has penalised the employee(s) responsible for that mistake. A model of how NOT to negotiate.
I’m very curious to watch and see if other libraries take a cue from this and seek to carve out more value, even if they don’t cancel, if only to improve their negotiating position. Some librarians seemed uncomfortable with my suggestions here – https://scholarlykitchen.sspnet.org/2018/05/22/are-library-subscriptions-overutilized/ – to tamp down overutilization, but maybe it will seem more reasonable now.
One of the things that I worry libraries haven’t considered fully in these decisions is the long-term impact on their collective value to faculty, if these changes happen at scale in a relatively short timeframe. We know from nearly two decades of ITHAKA faculty surveys that two of the things that faculty value most about libraries are: (1) the fact that they pay for the content they access (buyer), and (2) the access that the library provides to that content (gateway).
If the value of the buyer function begins to diminish, either because the content is OA or because it is cancelled (in which case the buying function may shift to the researcher), and the gateway function continues to diminish as faculty and researchers move to other starting points for accessing content, then I’m afraid that the library’s “value” to faculty and researchers will continue to take a hit.
Of course, there is little risk of this currently, but as these changes go to scale, I worry that some libraries could fall victim to what my father used to call the “Gorbachev Syndrome”: where the movement initiated ends up running over the initiators at the end of the day.
You are right on Bruce….
I would put this more strongly: libraries have no role in a fully open access environment.
Many librarians I’ve talked to about this issue are confident that even in a fully OA environment, libraries will continue to fulfill an essential campus function by, instead of brokering access to purchased content, using what were once subscription funds to subsidize OA publishing–perhaps by brokering access to APCs (as currently happens with library-based OA funds) or perhaps by underwriting publishing directly (à la SCOAP3).
One interesting challenge for the library in such scenarios, I think, lies in the fact that there’s no particular reason why the library itself should play these particular roles. If I were an administrator tasked with justifying the investment of scarce university resources, I think I’d have a hard time explaining why a separate organizational unit of the university should be tasked with these roles when they could just as easily (probably more easily) be handled by, say, the Office of Research, or at the college level.
Another notable challenge would be the “free rider” problem. (In an OA world, what if the university decides not to participate actively in subsidizing publication at all, and instead decides to redirect the millions of dollars once dedicated to buying content to supporting other worthy efforts, like providing scholarships to underprivileged students or refurbishing science labs?)
You are making the assumption that the library will still control that content money, but I think there is a considerable risk that the money will be reduced and given to other departments to spend on content. Add up the total amount of money spent on content within the university, especially at large research universities with professional schools, and the library may represent 60%-70% of content expenditures, or less. That percentage may drop in the future.
Well, Joe, I’m sure you won’t be surprised that I disagree with this. But perhaps we can find common ground in the statement that it will definitely have to be a different role than that of buyer. Many smaller academic libraries are already essentially out of the buying function – their budgets are so gutted there isn’t much buying they can do. Also, one role that appears to be emerging already is manager of information platforms and tools. In some ways, what I am suggesting is a re-conceptualized “gateway” role – maybe more “pathways” than gateway. But whether libraries pivot quickly enough to that role, in light of other units on campus competing for the “information management” role, remains to be seen. Libraries could emerge in a strong new role – or fade into a very minor one. I have a preference, especially given the particular values that we bring to our work, but in no way do I see that outcome guaranteed.
Lisa, I’d suggest one small correction to what you say above:
Many smaller academic libraries are already essentially out of the buying function – their budgets are so gutted there isn’t much buying they can do.
Libraries (like mine) whose budgets are stagnant are still playing a very important purchasing role–because we’re still paying for (most of) the subscriptions we’ve paid for in the past. We may not be buying much in the way of new subscriptions, but our role in preserving access to the flow of new content from existing subscriptions remains, and remains important to our faculty and administrators. In other words, the “buying” function is ongoing; if we stopped paying those recurring bills, faculty would definitely feel the impact.
Rick, while I take your point, I also want to be clear that I said gutted, not stagnant. I am thinking of libraries that are not preserving access to the flow of content already subscribed but are instead engaged in an annual process of cutting off that flow. One former graduate student of mine is working at a university library that, as of our last conversation, had not purchased a monograph in three years, and I doubt that has changed. Things are quite dire for many institutions – and for their libraries. However, dire as these situations are, I think they may be the places that start to tell a story about what librarianship looks like when the buyer role isn’t the prominent one.
Lisa, thanks, I see the distinction you’re making. But I don’t think it takes away from my point, which is that whether a library’s collections budget has been “gutted” (like that of your former student) or is merely “stagnant” (like mine is), in neither of those situations can the library be said to be “essentially out of the buying function.” Even in the case of your colleague who hasn’t bought a monograph in three years, I promise you that his/her library is still very much “in” the buying function when it comes to other, very important kinds of content, notably journals and research databases. If the library weren’t continually buying (i.e., paying for) that content, access to the content would go away and the faculty would feel that impact immediately. This brokerage function is the one that (for better or worse) becomes less and less important to the degree that access to content no longer costs money.
I think the problem with what Lisa is saying, and Rick is hinting at, is that “librarians” are not needed to do these functions. Someone with an IT background can integrate “gateway” products on campus, and an accounting department combined with a department chair can approve APCs. This would be a shame, because I think universities desperately need other services that librarians are well equipped to provide. Perhaps the model will shift from collections management to having librarians situated in key departments – not the library – to provide the services and support needed.
I do wonder why Rick thinks it would be cheaper to hire IT people than to hire librarians. Or why he thinks that IT people have any particular interest in being saddled with the task of organizing and distributing information. To say that librarians have no value is to say that the information ethics of librarianship have no value, and that’s demonstrably untrue. Computer science lacks an ethical frame for information, and especially for public information (see, for example, Mark Zuckerberg). It surprises me that discussions of “fully open access” information don’t examine .gov information, which is already fully open access. In the age of Trump, librarians have unfortunately been forced into the role of data rescuers. Librarians aren’t always perfect, but they do have a code of ethics, and the profession is capable of rising to the occasion to do the right thing. Computer science, on the other hand, has developed into a perfect storm of a tool for undermining democratic government by spreading disinformation. To the extent that we believe that information is the foundation of human progress and freedom (and I think most librarians do), we must not hand over the responsibility of guardianship to people who are concerned only with the efficiency of technology.
Amy, you seem to be confused — I have no idea how you got the impression that I “think it would be cheaper to hire IT people than to hire librarians.” I’ve said nothing of the sort. Nor have I come close to suggesting that librarians have no value. Nor am I even suggesting that librarians couldn’t do this work very well. I’m questioning whether, in a scenario in which access to scholarly content no longer costs anything, we can safely assume that the university will continue sending the library the money that was formerly intended to pay for content. I know many librarians who believe they will be able to simply redirect their collections money to other uses (such as underwriting publishing or paying APCs), but I don’t think we can assume that universities will see the library as the organizational unit that ought to take those things on. APCs, for example, could be brokered through the office of research, or through deans’ offices.
You and I might think that it makes lots of sense for the library to take on those roles. But that’s not the question. The question is whether the people who allocate money to the library would agree.
Lisa, all I can say to you is, Good luck!
Thanks Joe. I am a bit of an optimist about this – I didn’t have the “Value of Academic Libraries” as my ACRL Presidential Initiative (2010-2011) for nothing! But, I think librarians are going to have to be proactive and at least a bit competitive if we are going to forge a strong role for ourselves (and our values) as open access grows and the buyer role diminishes. What may seem obvious to librarians about our potential roles is not, in my experience, so obvious to others (some on campus even find our ethical commitments very annoying as they stand in the way of things they would like to do!) and there are other units on campus that have moved into not only content curation and information education but even the role of buyer! So, I’ll take that luck to combine with my hard work!
Saying libraries have no role in a digital environment totally leaves out the role that libraries play in supporting research and facilitating discovery, even if access is much simpler. Even if digital content is freely accessible, making *relevant* content discoverable is still a lot of work. Users will still benefit from research consultation & guidance.
Oh, I agree entirely. I just don’t think universities will pay for this.
Libraries’ role(s) would certainly change in a fully open access environment, but they would still be important (e.g., as curators rather than purchasers).
Important to whom, though? We librarians may all agree that it’s important for us to curate non-purchased content, but can we be confident that our host institutions will agree that this is an essential role for librarians? (Can we even be confident that we share with our institutions an understanding of what constitutes “curation of non-purchased content”?) These are the kinds of questions that it would be good to ask and resolve before we undertake initiatives designed to undermine our own roles in areas that are demonstrably already important to our institutions.
Interesting thread, re the essentiality of the library and librarian, not to mention the UC discussion. When I talk with the many deans, presidents, and provosts within our Society’s ambit, several themes have emerged in recent years: 1) while they value the legacy role of the library and the expertise of librarians, they see them as a cost center versus their professional schools or grant-accumulating research programs, which are viewed as revenue generators (that “cost-center” term seemed particularly brutal coming from an academician); 2) while one encourages growth and innovation in a revenue center, as a business one seeks to control and constrain the expansion of cost centers (hence the “gutted/stagnant” budgets); and 3) their view of many of the library reinvention initiatives is that those tasks can be performed more efficiently elsewhere (for example: “why would I retrain an MLS to perform what is essentially a clerical accounting function, such as managing, approving, and distributing Article Publishing Charge payments and offsets, that could be managed through our Purchasing department?”).
Overall, in the UC deal, I wonder who is being the more cynical. One of the mathematical realities of widespread Open Access is that the total cost of the scholarly publishing ecosystem will be reallocated/recentralised from the 6,000 academic institutions that currently spread the cost of access to the more concentrated pool of 500 universities that are the primary source of journal articles (at least in STM). If you are a high-scholarly-output system, it seems unrealistic to expect that an Open Access world will be less expensive for you, regardless of whether the expenditure goes to external publishers (à la P&R deals) or to internally funded publishing and peer review initiatives. That’s especially true if, as Plan S does, you also load up the publisher requirements with additional costs, from subsidizing APCs for developing research economies to asking for additional publishing services, which will have to be borne by this narrower institutional base as well.
One final point: if, as you say, the move to a broader Open Access economy potentially undermines publisher pricing power, I have found that it also calls into question the future role and authority of the library consortium. Many of these established negotiation gateways seem, in the post-Plan S world, to be in irons, unsure of their decision rights and of consensus among their disparate members when the discussion covers issues beyond content acquisition.
If you are a high-scholarly-output system, it seems unrealistic to expect that an Open Access world will be less expensive for you, regardless of whether the expenditure goes to external publishers (à la P&R deals) or to internally funded publishing and peer review initiatives.
See the UC’s own Pay It Forward study, discussed here: https://scholarlykitchen.sspnet.org/2016/08/09/the-pay-it-forward-project-confirming-what-we-already-knew-about-open-access/
This is the major stumbling block for author-pays OA to take hold in a system like that found in the US, where there’s no central pool of funds that can be moved around between universities. Schools that largely read the literature but don’t produce much of it will happily take their savings (and in many cases divert those savings to areas other than the library), while institutions that publish a lot of papers will be required to pay a lot more than they currently do. The UC system publishes around 50,000 articles per year and pays around $40M in subscription fees. Do the math, and note what their average APC would have to be to break even.
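Doing that math with the round figures quoted above (and a purely hypothetical average APC for comparison) looks like this:

```python
# Round figures quoted above: ~50,000 UC articles/year, ~$40M/year in subscriptions.
articles_per_year = 50_000
subscription_spend = 40_000_000  # USD

# The average APC at which a flipped, author-pays model would cost
# the UC system the same as its current subscriptions:
break_even_apc = subscription_spend / articles_per_year
print(f"break-even APC: ${break_even_apc:,.0f}")  # break-even APC: $800

# For comparison, a hypothetical average APC of $3,000 (illustrative only):
hypothetical_apc = 3_000
flipped_cost = hypothetical_apc * articles_per_year
print(f"cost at $3,000/article: ${flipped_cost:,.0f}")  # $150,000,000
```

The break-even figure of $800 per article sits well below the APCs commonly charged by major publishers, which is exactly the commenter’s point: a high-output system would pay far more under a flat author-pays flip.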
This is but one reason why many are reaching the conclusion that author-pays, APC-driven, Gold OA is not the evolutionary endpoint of open access.
I don’t think we have heard the last of this situation, and while UC has broadcast its side of the story, Elsevier has kept relatively quiet. Elsevier has a great deal of usage data about its journals and monitors usage by title, user, and article. It knows who uses what, when, and how much, down to the lowest level. The UC faculty in the STM fields are extremely heavy users, and this usage cannot be supported by document delivery. Faculty who are used to getting instant downloads are not going to turn to document delivery. If this trend continues, it will erode the faculty’s dependence on the library for information, and they will find other money to support their information needs. Departmental deals for specific titles will be activated, with access limited to that department. Shutting down access to Elsevier will not help the library gain the backing it needs to defend its budget, and it further erodes the library’s position on campus.
Monograph purchasing has already declined, circulation has dropped, many libraries have eliminated reference service, and now libraries are stopping journal purchases. Provosts are already eyeing prime library space. It seems to me that librarians are reducing their own role and value. Libraries are a big overhead cost, and what is the university getting for its money?
One angle that occurs to me is the different pressures on the various actors and the role that time plays.
On cancellation, the impact on the user is minimal because the back files are still available, but over time the impact will ratchet up as the volume of new-but-inaccessible content grows. How fast the pressure builds will depend on how important the newest content is, but continuing access to the back files gives librarians time to provide alternative channels and gives users time to get used to them. Get the alternative supply lines right, and the library will be able to contain any negative reaction from faculty and students. Get them wrong, and user frustration will grow, and the pressure on the library to reverse its decision will increase week on week, eroding its re-negotiating position.
On the other side, the publisher instantly loses a chunk of revenue, which falls straight to the bottom line, so the pain is short and sharp and will recede as the weeks and months pass. Provided authors from the cancelling institution still submit to its journals, the receding pain reduces pressure on the publisher to revise its offer: it can sit it out and hope that the alternative supply channels prove inadequate. If that proves to be the case, it will be in a stronger position in any re-negotiation. On the other hand, if the alternative channels prove cost-effective for both new and old content, then the publisher will be in an ever-weakening position in any renegotiation.
It’s going to be interesting to see on whose side time falls.
I keep reading that the financial impact to Elsevier’s bottom line is likely minimal. What I appreciate about Roger’s take here is the long-term consideration of market share. Given Elsevier’s heavy reliance on US institutions for revenue, and the US market’s integral role in maintaining its status as a market leader, this seems to me a negotiating failure fraught with long-term risk for the vendor. Both sides have less to fear in the current calendar year but will see pressure rising in subsequent years, once the rolling paywall hits a critical mass for US institutions and once/if Wiley, SpringerNature, or heaven knows who grabs this significant chunk of change Elsevier has left on the table. I would imagine the pressure on Elsevier may be quite great in the long run, especially if UC’s decision is the trendsetter the article implies it might be. How will they respond, and where will they make up this share?
Here’s a question I keep asking myself … is there a point at which publishers start to pursue action against SciHub users like the RIAA did against Napster users?
No. The RIAA (and MPAA) pretty quickly learned that that was a bad strategy. Why repeat their mistakes?
What I have seen is that PDFs downloaded from Sci-Hub are often watermarked with information on which account provided access for the download, and I know publishers have shared this information with the libraries whose passwords and accounts are compromised, leaving it to the library to see to their own security matters.
Given the moral ambivalence with which many of my colleagues regard the usage of Sci-Hub, I wonder if publishers would be wiser to share that information with campus IT departments rather than with campus librarians.
I wonder if the on-demand model can handle the type of extreme journal use by, say, the author of “Six Degrees”? How many articles on climate change would Mark Lynas have been able to get before the library turned him down? Who would get to judge whether Lynas had a “legitimate” research need? https://writersandlibraries.blogspot.com/2018/07/mark-lynas.html On the other hand, bundled databases are causing all academic libraries to spend an awful lot of money on the exact same things as everyone else. Would ditching the big packages open up space for greater collection diversity? Or would all the money be gobbled up by supplying a few “super-users” like Lynas? (Whose book is great, BTW, but if someone walked into your library with this research strategy in mind, what would you do?)
One of the defining features of every on-demand acquisition model of which I’m aware (whether for journal articles or for books) is that its parameters are delimited up front by the library: the model includes a large number of books or journal articles, but certainly nowhere near all books or journal articles. This helps to ensure that edge-case users won’t bring the whole system down. It also ensures that users don’t have to worry about having their requests “turned down.” In fact, it’s in the fundamental nature of patron-driven models that patrons have no experience of making “requests” at all; they simply do their work, using a vastly larger array of resources than could possibly have been made available to them by librarians purchasing them preemptively. The purchase process takes place entirely behind the scenes, strictly in response to actual usage behavior, and is transparent to the patron.
Another standard failsafe is a mechanism whereby anomalous cases of very high use trigger some kind of flag to the managing librarian, prompting him or her to adjust the risk pool of available content, or to simply put the brakes on the system altogether while figuring out what’s happening to cause the spike.
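The failsafe described above can be sketched in a few lines. This is an illustration of the idea only: the class name, the per-user limit, and the "held for review" behavior are all my assumptions, not the API of any real patron-driven acquisition product.

```python
# Sketch of the failsafe described above: fulfill on-demand requests
# automatically, but flag anomalously heavy users for librarian review
# before costs run away. All names and thresholds are illustrative.
from collections import Counter

class DemandDrivenPool:
    def __init__(self, per_user_limit=25):
        self.per_user_limit = per_user_limit  # requests per user before review
        self.requests = Counter()             # user -> fulfilled request count
        self.flagged = []                     # (user, article) pairs for review

    def request_article(self, user, article_id):
        """Fulfill automatically, or hold for mediation once the limit is hit."""
        if self.requests[user] >= self.per_user_limit:
            self.flagged.append((user, article_id))
            return "held for review"   # mediated, not refused outright
        self.requests[user] += 1
        return "fulfilled"
```

In this sketch the patron never experiences a "request" being denied; the system simply routes edge cases (a Lynas-style super-user) to a human instead of silently draining the budget.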
It’s not clear to me – at all – that monies saved in these reductions/cancellations will actually stay in the library budget. If I were a CFO, or VP of Finance at a higher education institution, I might look at those dollars as cost savings that don’t get re-deployed to the library.
Libraries might want to diversify their collections, or even throw more support toward OA initiatives, but they may not be given that choice.
I always wondered what would happen if a publisher like Elsevier were to conduct a serious campaign to sell Science Direct subscriptions to individual faculty. Just as a thought experiment: the UC has maybe 70,000 faculty, researchers and teaching staff. UC’s Elsevier subscription was $10,000,000 per year. If Elsevier could sell individual subscriptions to all 70,000 folks for around $143 per year, it could easily make up that $10 million–and it would have direct communication with its readers. That $143 per year is less than my annual subscription to the NY Times.
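The thought experiment above, in numbers (both figures are the commenter's rough estimates, not Elsevier list prices or official UC headcounts):

```python
# Per-person price needed to replace the institutional deal with
# individual subscriptions. Hypothetical round numbers from the comment.
uc_deal_cost = 10_000_000        # ~$10M/year institutional subscription
potential_subscribers = 70_000   # faculty, researchers, teaching staff

per_person_price = uc_deal_cost / potential_subscribers
print(f"Per-person annual price: ${per_person_price:.2f}")  # ~$142.86
```

Of course, not all 70,000 would subscribe, so the realistic per-person price would be higher; the point of the exercise is that it lands in consumer-subscription territory rather than library-budget territory.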
On the flip side, imagine if CDL could use that $10M to really develop and expand its own journal publishing system and stable of journals.
On the point of individuals and payment, I wonder why we have not seen more activity from publishers to offer low cost PPV? I’ve heard that integration with payment systems has been clunky, but things must have moved on a lot in the last few years.
Why does a user need to pay £35 or more to access an article that they may well not need? Where are the micropayment options?
Any insights would be truly welcomed.
Hi, Bernie —
There have indeed been some forays into this model, notably both ReadCube and DeepDyve. Both have achieved some success, but neither one has taken off in the way one might have expected. I wonder if the UC/Elsevier split will create new market incentives for more initiatives along this line, though.
DeepDyve is not really a great user experience, in my opinion: lots of clicks, and you either become a member or pay $9 per article. I couldn’t figure out the ReadCube offering from a few minutes on their site.
I think it needs to be easier. And things are changing, so perhaps it is time to revisit the financials. The audience would extend beyond the researchers at subscribing institutions: undergrads, alumni, non-affiliated researchers… And it would yield some great customer insight stats.
The philosophy behind PPV pricing has largely been to set prices at a point where it’s more expensive for a university to buy all the articles it accesses individually than it would be to subscribe. Businesses prefer recurring, guaranteed revenue, and publishers are no exception. So PPV is set up to encourage big customers to get a better deal via subscription access.
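The pricing logic described here is easy to make concrete. The numbers below are hypothetical, chosen only to illustrate how a high per-article price steers heavy users toward a subscription.

```python
# Illustrative model of the PPV pricing philosophy described above:
# the per-article price is set so that any heavy-use institution is
# better off subscribing. All figures are hypothetical.
def cheaper_option(annual_downloads, ppv_price, subscription_price):
    """Return which channel costs less for a given usage level."""
    ppv_total = annual_downloads * ppv_price
    return "subscription" if ppv_total > subscription_price else "ppv"

# At $35/article, a $10,000 subscription wins past ~286 downloads/year.
print(cheaper_option(500, 35.0, 10_000))  # subscription
print(cheaper_option(100, 35.0, 10_000))  # ppv
```

This is why publishers have little incentive to lower PPV prices for a cancelling customer: cheap PPV would undercut the subscription it is designed to make attractive.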
It’s interesting to think about as the number of papers being published and read has expanded so much that cost per download figures continue to decrease every year in most subscription deals. But as Roger points out in this article, the success or failure of UC dropping Elsevier will largely depend on their ability to source the needed articles at a price point that’s lower than what they were paying via their previous subscription deal. It’s not in Elsevier’s interests to make that easier for them by lowering PPV prices, so I don’t expect to see much action on that front.
But as Rick points out above, the offerings available through services like DeepDyve have greatly expanded (and prices have come down) yet we haven’t seen enormous levels of uptake.
It’s true that usage on the publisher’s platform is lower than it could be because of “leaking.” It is also true that usage is higher than it needs to be, for a number of reasons, among them publishers’ own efforts to increase usage. If a scientist wants to read an article and accesses it from a third-party database via direct linking, she might land on a webpage that already shows the HTML full text (count: 1). If she then downloads the PDF that she wants, there is count 2. The publisher might offer a look at “related articles” (counts 3–5), or even offer to download the whole issue instead of a single article (counts 6–n). Everything very convenient, everything skyrocketing download numbers. Now if her library has no access to that publisher, will she request that specific article via document delivery, or two versions of it, or three other articles recommended by her document delivery department, or the whole issue?
That’s actually not the case if publishers are COUNTER compliant. Newer COUNTER guidelines would count the full text article followed by a PDF download as one download. Related articles would certainly count as they are different pieces of content that the user is choosing to access. I have no idea how whole issues are counted, but if a user wants an entire issue and selects that option, it should count as usage for each paper.
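The de-duplication described here can be sketched simply. This is a simplified illustration of the idea behind COUNTER's "unique item requests" metric (an HTML view followed by a PDF download of the same article in one session counts once), not an implementation of the actual COUNTER Code of Practice.

```python
# Simplified sketch of COUNTER-style counting: collapse multiple accesses
# to the same article within one session into a single unique item request.
# Illustrative only; the real Code of Practice has many more rules.
def unique_item_requests(events):
    """events: iterable of (session_id, article_id, fmt) tuples."""
    seen = set()
    count = 0
    for session_id, article_id, _fmt in events:
        key = (session_id, article_id)
        if key not in seen:
            seen.add(key)
            count += 1
    return count

events = [
    ("s1", "art-42", "html"),  # scientist lands on the full text
    ("s1", "art-42", "pdf"),   # then downloads the PDF: same item, same session
    ("s1", "art-99", "html"),  # a related article she chooses to open
]
print(unique_item_requests(events))  # 2
```

Under this metric, only genuinely distinct items (like the related article) inflate the count, which is the commenter's point about compliant reporting.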
I don’t deny that it’s correct to count each article when a complete issue is downloaded. The point is: nobody should be astonished that, in cases without full-text access on the publisher’s platform, this usage is not fully compensated via other channels (from illegal ones like Sci-Hub to highly profitable ones like pay-per-view). There is not only “leaking” from the publisher’s platform to other options, but also “vanishing” into non-existence.
I want to return to my original argument: if you remove the library from the purchasing role, that role will fall to others, and there is certainly a risk that the central pot of money will be divided back up and reprogrammed to the departments, where they can make their own deals for access. The STM fields are the most likely to be unhappy, and as serious revenue generators they can make a good case for getting their hands on that money. Look at the medical and law schools, both of which have database deals with a number of publishers that actually exceed the library budgets for content. Most librarians don’t realize what content is already being acquired under the medical school and law school deans’ budgets. STM departments may soon follow suit.
A few years ago I was trying to sell the University of Texas system a database, and we could not reach an agreement. So I went to each medical school and its teaching hospital and sold every one of the medical schools the database at full price; the net amount was 2.5 times as much as my offer to the UT system. My point is that the same publishers can cherry-pick campuses to sell their content and quickly recover any money lost in the library market.
The humanities and social sciences don’t have the financial clout that other disciplines maintain. Medical schools and many law schools have already moved to become independent buyers of content.
Which brings to mind this post: https://scholarlykitchen.sspnet.org/2016/04/07/selling-to-libraries-vs-selling-to-users/
In this case, how did the individual schools manage authentication and access? I (an academic librarian) have found that when different departments arrange access, they fail to take into account the need to manage it.
I’m amused by the unstated goal embodied in some of the above comments. At the University of California the University Librarians’ shared goal has been to do the best thing for the University (and society as a whole) not the best thing for the libraries. Of course folks will argue about how best libraries can serve their mission, but we are very clear and unified on the fact that our mission is service to the university and society, not ourselves.
Jeff makes an important and inspiring point, even if it is a hard one for any organization always to live up to. Whether we are universities, libraries, publishers, aggregators, information providers, scholarly societies, etc., we exist to serve the needs of scholars, students and the pursuit of knowledge and learning. We all aspire to hold that objective out to lead us. At the same time, there is a natural bias toward survival and we all have to admit that sometimes our actions will be impacted by that bias. So some in this discussion understandably worry whether certain outcomes will undermine the role or sustainability of libraries. Or humanities publishers. Or societies. See all of the debate about Plan S. To overcome that bias takes effort and discipline and I commend Jeff for calling it out and congratulate him if the UC libraries are galvanized in that way. That is hard to achieve!
The only player in this game that doesn’t have a survival bias in this way is the user — the student, faculty member, etc. They are not serving someone else; they are the one being served. At the end of the day, this debate will therefore be decided by who serves the needs of those stakeholders most effectively. Therein lies what best serves our society as a whole, as Jeff says. Since the faculty votes count most in this particular election, what I think we are going to learn in the long run is whether faculty believe that libraries or publishers are better positioned to deliver the content and services they need to do their work. If you buy that argument, this is not really an election about pricing or terms, it is an election about whose services the faculty think will best enable them to carry out their research and teaching: the UC libraries or Elsevier.