We don’t have to look too hard to find someone arguing that a particular subscription database is a good value because it has a lower-than-average cost per use. Librarians are as likely to make this argument as publishers, typically to show that our institution is receiving good value for its investment in the library’s collections. However, is it always the case that low cost-per-use is an indicator of good value? If the true value of a subscription is being obscured by over-utilization, should libraries seek to dampen such excess in order to have more accurate measures of the real value of a subscription? By doing so, could a library then negotiate better prices on some resources and thereby more effectively steward a limited budget allocation in order to better serve its community of users?
Cost-per-use is a ratio of two components — the numerator (cost) and the denominator (use). As such, there are two possible strategies for improving the cost-per-use ratio as a measure of value, either driving up use or lowering the cost.
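To make the arithmetic concrete, here is a minimal sketch in Python; the dollar amounts and download counts are invented for illustration:

```python
def cost_per_use(annual_cost: float, uses: int) -> float:
    """Cost-per-use: annual subscription cost divided by recorded uses."""
    if uses <= 0:
        raise ValueError("cost-per-use is undefined without recorded uses")
    return annual_cost / uses

# Hypothetical subscription: $12,000/year with 4,000 downloads.
baseline = cost_per_use(12000, 4000)    # 3.00 per use
more_use = cost_per_use(12000, 6000)    # 2.00: driving up use improves the ratio
lower_cost = cost_per_use(9000, 4000)   # 2.25: so does negotiating down the price
print(baseline, more_use, lower_cost)
```

The two paths to a "better" ratio are visible in the last two lines: the same improvement can come from more downloads or from a lower price, and the ratio alone cannot tell you which.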
Libraries have many strategies for increasing use, and the interests of publishers are aligned with this approach, as it does not threaten subscription revenues. Indeed, publishers themselves put a great deal of effort into increasing use as well.
In contrast, librarians have expressed frustration at their seeming inability to influence the numerator part of the cost-per-use equation. No library is able to buy or license all of the content that would be useful to its community of users; most regularly find their buying power declining as their budgets do not keep pace with inflation much less price increases beyond that. Thus, the challenge facing librarians is not how to spend increasing amounts of money on increasing amounts of content but rather how to spend decreasing amounts of money as effectively as possible. In reality, for many librarians, the task at hand is what to cancel.
As one potential remedy to budget challenges, Kristin Antelman has suggested an approach for calculating an Open Access-adjusted Cost per Download metric to leverage in negotiations. In this essay I would like to raise the possibility that the increasing availability of open access content, coupled with the potential for systematized efforts to put that open content into the user workflow, could be a mechanism for librarians to gain some control of their budgets and pricing. Generally speaking, there is little reason to pay for something that is openly available and particularly not when doing so prevents paying for other things that library users need.
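As a rough illustration of the general idea (this is a simplified sketch of one way such an adjustment might work, not Antelman's actual formula; the share of downloads that an open copy could have served is an invented input a library would have to estimate):

```python
def oa_adjusted_cost_per_download(cost: float, downloads: int, oa_share: float) -> float:
    """Discount the downloads that an open copy could have satisfied,
    then recompute cost per download on the remaining subscription-only uses."""
    if not 0 <= oa_share < 1:
        raise ValueError("oa_share must be in [0, 1)")
    subscription_only = downloads * (1 - oa_share)
    return cost / subscription_only

# Naive CPU for a $12,000 title with 4,000 downloads is $3.00. If a quarter
# of those downloads could have been served by open copies, the adjusted
# figure rises to $4.00 -- a weaker value proposition at renewal time.
print(oa_adjusted_cost_per_download(12000, 4000, 0.25))
```

The point of the adjustment is that the subscription is only "buying" the uses an open copy could not have covered, so the effective price per genuinely subscription-dependent download is higher than the naive ratio suggests.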
Libraries have put great effort into developing institutional repositories, supporting campus open access mandates, and advocating for APC funding. At least in part, the goal has been to drive down costs (and perhaps to drive down profits of commercial publishers) in addition to increasing access for readers. As open access versions of articles become increasingly available through these and other efforts, libraries could leverage the results by presenting open versions to their users, or perhaps even privileging open versions, and so negotiate for lower prices. Such contracts could still allow the library to demonstrate the value of investing in subscription resources, changing the cost-per-use not by increasing use of subscription resources but by decreasing their costs.
Twitter erupts from time to time with examples of platforms using layout and other design elements to obscure that a copy can be read without subscription/payment or without individual registration. While it is tempting and perhaps satisfying to express outrage at this obfuscation, I’d like to suggest that these tactics – as well as nudging strategies of ResearchGate, Google Scholar, and the like – might give us insight into approaches for how libraries might better put open access materials into the user workflow. I’ve organized these approaches into three categories: acquisition, linking, and delivery.
First, libraries should “acquire” and organize open content as one would subscription content. I am continuously amazed at how many libraries still do not have systematic mechanisms for selecting, cataloging, linking, etc., open access journals and monographs. Of course, these should be evaluated for inclusion based on coverage, scope, quality, etc. I personally still believe in local collection development as a complementary strategy to efforts to make everything discoverable and singularly presented – in large part because I have found that users want “my everything” and not absolutely everything, as I discussed in “Discovery Should Be Delivery”. Treating these open materials differently because they are available without charge undercuts efforts to encourage open access and we are long past the era of being able to credibly argue, if we ever could, that something on the Internet for free is inherently a quality risk.
The second category is a set of strategies for integrating open copy into the article discovery and delivery workflow. At a minimum, libraries could present links to open access versions of articles in the link resolver. This will immediately expand the total amount of content that libraries are able to present in the research workflow, since no library’s subscriptions already cover all of the articles that are openly available. In addition, users will see options for open copies of the same articles available by subscription.
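As one sketch of how a link resolver might surface open copies: Unpaywall exposes a lookup by DOI (`GET https://api.unpaywall.org/v2/{doi}?email=...`) whose JSON response includes an `is_oa` flag and a `best_oa_location` object. The record below is invented for illustration and only approximates the documented response shape; a resolver could fold the resulting URL into its menu alongside subscription links.

```python
import json

def open_access_link(record: dict):
    """Return (url, version) for the best open copy, or None if there isn't one."""
    location = record.get("best_oa_location")
    if record.get("is_oa") and location:
        return location.get("url"), location.get("version")
    return None

# Invented sample record, loosely in the shape of an Unpaywall v2 response.
sample = json.loads("""{
    "doi": "10.1234/example.doi",
    "is_oa": true,
    "best_oa_location": {
        "url": "https://repository.example.edu/handle/1234",
        "version": "acceptedVersion"
    }
}""")
print(open_access_link(sample))
```

Note that the `version` field ("submittedVersion", "acceptedVersion", "publishedVersion") is exactly the information a menu would need in order to label an open copy against the version of record.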
Adding these links will create communication, and perhaps education, challenges for librarians. Users might ask why there are multiple links to the “same” article (though maybe not, as link resolver menus can already be rather complicated and duplicative). More likely, they might ask why there are different versions of the same article. Looking at the NISO JAV documentation, there are far more versions than most of us think about with any regularity, and of course any or all of them might be openly available! The most vexing problem is, of course, how one can identify which version one has accessed and whether it is “equivalent” to the version of record. Efforts to implement article version tagging may eventually resolve at least part of this challenge of establishing equivalency; however, any such system will likely take significant time to be fully implemented. User studies will also be needed to guide decisions about how best to represent open content in the link resolver.
Building on the presentation of open links, libraries could nudge users to choose open access versions of articles as appropriate to their needs. Moving from passive presentation of links to providing contextual information that guides users in making their choices could help nudge them toward open versions. Perhaps icons could be used to indicate free vs. library-pays versions? A library could present recommendations to use open copy unless a subscription version of record is necessary for a particular use task. For citing in a formal publication, the user may need to have read a version of record that is only available via subscription. But, we all read far more than we ever cite and, at a minimum, open copy would likely be fully acceptable for the scanning and browsing we do in order to identify what gets a full read.
Adding nudging information would allow users to make more informed decisions about which version they use for which tasks. Given that researchers likely already have habits around downloading and reading, user studies would be needed to investigate which nudges are most effective; nudging is, after all, about trying to change default and habitual behaviors.
Most radically, libraries might suppress links to subscription copy when open copy is available. Slightly less radical would be to add friction to the process of getting the link to the subscription copy without suppressing it entirely. As an example, rather than a direct link to the subscription copy, the user could be presented with a one-click link to the open copy vs. an option to fill out a form to have the subscription link emailed within two hours. I recognize that even adding friction — much less suppressing links — will likely grate against the librarian service ethos. The framing for considering this would be as a short-term inconvenience to enable long-term gains and budget viability.
Finally, in the context of interlibrary loan and document delivery, libraries might default to providing open copy rather than a paid version of record. With automated workflows, such as the DeliverOA option from the Open Access Button, users might be provided immediately with open copy along with an option for a paid version-of-record copy, if the open copy is not the version of record. Or, the paid version-of-record copy might not be offered at all in order to contain costs, or only offered after justification, or to some user groups but not others. Caltech has implemented some of these concepts in its interlibrary loan/document delivery platform.
For any of these strategies to be fully effective in improving the library’s negotiating position, library-wide communication and training for selectors, front-line service staff, etc., will be crucial. Library leaders will need to clearly explain the goals and purposes of these efforts as part of an overall budget strategy, particularly the importance of discouraging users from over-utilizing subscription copy and thereby artificially driving down the cost-per-use, and to establish procedures for cases in which these defaults might be justifiably waived. Library leaders should also expect that publishers may resist these efforts and that users may complain, and should be prepared to support the staff responsible for carrying out these tactics.
For publishers, this may indeed sound like a less than desirable situation — librarians strengthening their negotiating position potentially means a weakened negotiating position for publishers. Indeed, one could imagine a knee-jerk reaction of pulling back support for open versions. However, it might be better viewed as the gift of an innovation driver in this age of Sci-Hub — something that might fix rather than break the subscription model? A savvy publisher should ask what value they can provide that will make subscriptions worth paying for and, by taking this approach, drive innovation and value creation.
Acknowledgements: In laying out these thoughts, I am indebted to Roger Schonfeld’s Red Light Green Light: Aligning the Library to Support Licensing that articulates how a library might align its efforts to strengthen its negotiating position and to Ryan Regier for our many conversations about how libraries might encourage open access. This essay is based in part on my opening remarks at the CNI Spring Meeting panel “The Privileged Link: Open Access, Version of Record, or Let the User Decide?” (presentation slides and video).
57 Thoughts on "Are Library Subscriptions Over-Utilized?"
An interesting proposal that librarians should actively suppress access to the resources they actually purchase for their patrons. This approach would be another nail in the coffin of society publishers—those publishers that have a history of negotiating with libraries based on usage. It’s also taking advantage of the work done by journals while devaluing that work.
Authors and readers ask me all the time if we can offer some of the services that Elsevier offers. Some of those people are even at your university. The answer is usually no due to costs or lack of resources. But it’s also because we don’t expect to recoup those costs through subscription cost hikes. Many societies try very hard not to increase subscription rates a whole lot year on year—despite increases in output or new services.
So suppressing usage of content that may only be discoverable to you because it appears in a society journal, pushing out the societies whose members are among your faculty and student body, is a sure way to kill off societies, or even force those that are self-published to partner with a big commercial publisher, leaving libraries with only the big guys to deal with.
The proposal is to suppress over-use not use in general. Personally, I suspect this proposal would likely benefit smaller publishers … they are already losing out because libraries pay so much for big deals. In negotiating, libraries are more likely to put their time into negotiating down the big deals rather than smaller ones because those offer the greatest chance for impact.
Wouldn’t this scheme disproportionately harm more liberal publishers, those that make strong efforts to increase free public availability of content? It strikes me that those who keep the tightest hold on material would benefit, while those supporting OA would be punished.
Nothing I’ve proposed here prevents a library from using other criteria in its decision-making about which contracts to re-negotiate.
Yes, David is absolutely correct, and you heard it here on the Kitchen many years ago: https://scholarlykitchen.sspnet.org/2013/09/26/when-it-comes-to-green-oa-nice-guys-finish-last/. Nice guys finish last.
Many a tweet points out that ELS authors can make their manuscripts available OA. Last I checked, ELS isn’t coming in last.
As you know, you have not responded to the substance of the issue. ELS has a strategy for dealing with Green OA, which is known as the Big Deal. Smaller publishers can match ELS’s green OA policy, but they can’t match the effectiveness of the Big Deal. Even so, ELS’s green policy is not helping it.
Sorry, I was too obtuse. My point is that my suggestions here address the fact that ELS (and others) haven’t seen the impact because libraries have been assessing cost-per-use based on over-utilization.
ELS (and others) haven’t seen the impact because libraries have been assessing cost-per-use based on over-utilization.
But is that the goal here? To hit those with liberal Green OA policies hard and make them feel the impact of being generous in this manner?
The goal is to get a more accurate measure of the value of a subscription than cost-per-use, which over-represents the value (particularly when platforms use obfuscation, nudging, etc. to drive up use). It only makes sense for a library to want the best measures of value possible as the basis for their negotiations. It’s just good business sense! That doesn’t mean this is the only criterion in use.
Societies that self-publish seem to be more likely to have liberal policies that provide free access to their content. Those policies include short embargo periods and allowing the manuscript version to be posted elsewhere or making it freely accessible from the journal. The spirit behind those policies is to provide access for those who absolutely cannot afford subscriptions (no matter how low the price) while maintaining subscription revenue from institutions that can afford to pay. Your plan would definitely kill off my organization’s journals unless we change our policies or move to a commercial publisher.
It’s clear that free versions of paywalled content are becoming easier to find (Sci-Hub, Unpaywall, ResearchGate, et al.) and some are suggesting this is strengthening negotiators’ hands when it comes to renewing big deals with publishers. So, maybe Elsevier’s (and others’) Green/Big Deal strategy is beginning to unravel? If this is the case, whether a publisher is being ‘liberal’ or not is moot because selling access to content isn’t going to work any more; to survive, publishers will have to shift to selling services around content.
Toby– Can you define “services” that you are selling?
We bring together content from a variety of IGOs (currently seven, two more in the pipeline, three more in discussions), organize it and enhance discoverability/useability by adding metadata at a granular level (e.g., at chapter level) and make the whole lot available online to a standard that is expected by the scholarly community (see OECD-iLibrary or UN-iLibrary as examples). We integrate the metadata with the various library discovery services (Ex Libris, Yewno, et al.) so that this content is discoverable via library catalogues. We offer a free, read-only service that makes the content free and shareable for everyone, reserving for subscribers PDF and other actionable file formats (such as Excel and CSV) which can be printed, copy-pasted, etc. Where possible, we also make links to the underlying datasets, hosting the datasets ourselves to ensure access and stability. We provide subscribers with snapshot, archival files for freely available live datasets (thus guaranteeing past data is preserved and not over-written when datasets are revised). We provide subscribers with usage data and off-line support via our worldwide network of offices and sales partners. We offer print to those who want it. Much (but not all) of the content we host is also available, for free, via IGO repositories/websites, but these are prone to link-rot, sudden deletions, and other instabilities, and fall short of the standards expected for scholcomm. Good luck if you want any support. We do all of this thanks to the ca. 3,000 libraries around the world who subscribe to all or part of what we offer. If they didn’t subscribe, there is no other obvious funding to support what we do, because neither the various funders behind the IGOs nor the IGOs themselves want to meet the cost of publishing in a professional manner.
We call this Freemium Open Access and we continue to look for new ways to add value (currently exploring AI-powered search) so we can continue to offer justifiable value to our librarian ‘funders’ (which is increasingly how we see them).
Many of those are services that publishers provide: organizing content, enhanced discoverability/useability, PDFs, supplemental data housed at the journal, usage data. We also deposit on behalf of authors articles based on research funded by the NIH, Wellcome Trust, and Research Councils UK into mandated repositories. That’s the short list of services publishers provide. As you note, if libraries don’t subscribe to our services, there is no other obvious funding to support what we do.
So, maybe Elsevier’s (and others’) Green/Big Deal strategy is beginning to unravel? If this is the case, whether a publisher is being ‘liberal’ or not is moot because selling access to content isn’t going to work any more; to survive, publishers will have to shift to selling services around content.
For most this would likely mean a shift to Gold OA models and a refusal to accommodate Green OA requirements without compensation. And as we know, that shifts the inequity from the reader to the author.
Maybe. Or, a publisher might compete on having lower/no costs for Green OA if they have pivoted to analytics rather than publishing revenue?
Sure, but again, you’re talking about the big commercial publishers that can invest in technology, so kiss all the independents and smaller publishers goodbye.
I guess this all raises the question of whether libraries are responsible for using their collections dollars to subsidize small publishers, to support other services provided by scholarly societies, etc. If we are (which I would question as a claim – I can’t recall any time I’ve seen a campus allocation to a library come with that as a principle of stewardship), the current model is failing at that already, as many a person has observed in other posts here on SK.
Or a completely different model entirely, as I pondered last year, where authors’ needs are serviced on one side and readers’ on another. http://blogs.lse.ac.uk/impactofsocialsciences/2017/10/24/its-time-for-pushmi-pullyu-open-access-servicing-the-distinct-needs-of-readers-and-authors/
It’s not a “subsidy” to a society. It’s paying for content. This is the problem with the purchaser not being the user.
I would think that incorporating a service like Unpaywall would be helpful in presenting options to library patrons.
But to me, the “radical” suggestion of deliberately degrading the quality of service offered by the library is a bit like cutting off one’s nose to spite one’s face, and places economic interests of the university over the value of scholarship performed there. Why not remove half the lightbulbs in the reading room in hopes of securing a better electricity price from the power company? Just as a reading room that’s too dark in which to see anything is of little value to readers, so too would be a library that deliberately thwarts their efforts to get the materials they want. Most will simply stop using the library and route around it to get those materials, lowering the value and importance of the library to the institution.
I’m not sure it’s possible to separate out the use cases for each article at the time it is requested. As yesterday’s post noted, authors often download once for reading and later use that same downloaded copy for citation. Would the patron have to fill out a form detailing their planned use of the article in order to determine which version is delivered? Again, another speed bump that is likely to decrease use of library services in general.
And the cynic in me suggests that if you game the system around cost-per-use, publishers are going to find another metric that they’ll prioritize instead. You’d also likely see a greater pushback against Green OA, and demands for longer embargoes once these actions prove to harm publisher revenues (or a move toward requiring APC payments to fulfill Green OA demands).
In the end, if a library could garner significant cost savings from this method, would that money remain with the library or would it more likely be absorbed by the university and spent elsewhere?
Funny you choose that analogy. In previous times of financial distress, I know universities that did just that … removed half of the bulbs in fixtures in the library to cut the electrical bill.
Anyway, you are absolutely right that there are risks. Nonetheless, as I say in the piece, libraries can never buy/ license everything that users need. So, over-paying for some resources means we can’t pay for others.
Hi Lisa, thanks for a bracing early morning read! I always appreciate hearing from library colleagues about how they are approaching acquisitions and use.
And I think any way to think about the challenge of the high costs associated with some publications is good. It’s why CPU seemed so important from my vantage as the publisher of a very low-cost (we put the non- in non-profit) journal in a smaller field. We could show, any time we got that data, that our journal cost pennies per use, often less than 10 cents, even though it’s the leading journal in my field.
But what you suggest again seems a blanket approach to a more specific problem. You do acknowledge that access to the version of record is important, though you suggest that’s largely for formal citation purposes. (“But, we all read far more than we ever cite and, at a minimum, open copy would likely be fully acceptable for the scanning and browsing in order to identify what gets a full read.”) I have to disagree. In our case, we allow posting to an institutional repository of the accepted manuscript (that is, after the reader reports and significant input from the editor, which means already a very big investment on our side). We do this primarily to allow colleagues in the UK and EU to comply with strict OA requirements. But the changes that take place between acceptance and publication, which include manuscript editing but also source verification, can often be fundamental. Reading the “open” copy first to see whether the “version of record” deserves a full read could often mean missing key aspects of the argument and evidence.
All this to “solve” a problem that’s not ours.
Gosh, I wish there was a way to encourage less monolithic approaches.
It could mean missing something. Though it might not. Would love to see more definitive studies on that.
Here’s the thing though. Scholars are already in that position for a lot of content … no library has enough funds to pay for everything. The issue here is reallocation or, more likely, choosing what to cancel. I completely understand why publishers like to use low cost-per-use. But, if those uses are over-use, CPU makes it look like greater value than there is.
Question for you. Do you think librarians would subscribe to services which reduce library costs in ‘acquiring’ or organizing open access content and provide/guarantee stable links to open access versions?
Reflecting a little on the threads of comments on Lisa’s provocative and insightful piece, I see some concern about the prospect that libraries might take stronger positions looking after their and their parent universities’ own best interests given the realities of the marketplace and how open (and pirate) access has changed it. But of course that’s the reality, and if anything it’s surprising that libraries haven’t already taken stronger negotiating positions. They are doing so now and will continue to find creative new ways to strengthen their position. Publishers should be ensuring that they have a strategy for resilience.
To me, Green OA (and public access) are about balance. What is the maximum amount of content we can make freely available to all as quickly as possible without going out of business? What is the minimum embargo period that won’t cause lost subscriptions? When one side of the equation pushes back, the other side will react accordingly. If library X demands lower prices because of short embargoes, then publisher Y is going to lengthen its embargoes. I’ve seen this happen a number of times.
The other big question is that if the library embarks on a strategy of removing itself from the equation (“don’t bother getting the materials from us; we’re going to give you poorer-quality stuff than you can get on your own”), then why would researchers continue to use the library and why would universities continue to fund the library?
Absolutely about balance. This is about one party in the equation asserting its strength. Publishers will presumably prepare to respond accordingly. I think the idea that open copies would be prioritized in discovery framework is an obvious step, and I don’t see how it removes libraries from the equation. Libraries are not just there to maximize their spend on publications.
The library is removed from the equation when it fails to deliver to the patron the thing that they’re seeking. The proposal above goes so far as to suggest that the library deliberately hide the paid-for version of record from the patron and instead give them a lesser version. If I’m a researcher, then I stop asking the library for the materials and just go get it myself (even if this means through nefarious channels).
Libraries provide access through alternative, and in some cases lesser, sources all the time: aggregator copies missing images, poorly scanned ILL documents, and so on.
Sure. But if I want to read an article, and my library will only send me the preprint version of that article, I’m going to stop using my library to source it.
The linking strategy I describe lays out a range of options: linking, nudging, friction, and denial. Each library would have to decide for itself the level of risk – in part based on the level of education, communication, etc. it can commit to and the degree to which it can organize its campus faculty, administrators, etc. in alliance with the strategy. German, French, etc. university libraries seem to be having some success in organizing in this arena.
“But if I want to read an article, and my library will only send me the preprint version of that article, I’m going to stop using my library to source it.”
This is already reality for library users. There is a ton of content that libraries can only get preprint versions for at a price-point the library can afford. We don’t subscribe to everything and ILL is a throttled service at most institutions. Library budgets are not unlimited.
But the more this is the case, the less reason to go to the library. And paying for a subscription and deliberately hiding it from users seems self-defeating. If you’re not going to give the patron what they want, then why not just cancel the subscription? Wouldn’t that give you more bargaining power than lower usage rates?
I can think of any number of scenarios in which cancel is not a realistic option (this is one of the complications libraries face) and so having better data to negotiate from would be a stronger strategy than a threat to cancel that everyone knows isn’t likely (though perhaps German, et al. will shift that perception – time will tell).
And what if we stop focusing on content as being the unit of business (and which is priced) and start recognising that the digital revolution has made content, now plentiful and ubiquitous, essentially value-less (as economics teaches, something that is increasingly plentiful has declining value*) and adapt to the reality that we’re in a service business. For example, authors need services to transform their manuscripts into a paper or book and for it to find a wide audience, librarians need services to help their patrons find quality stuff quickly and easily, funders need services to ensure they get impact/recognition, readers need services to help zoom in onto the actionable content they need (and zoom away from stuff they don’t), etc., etc. Both APCs and traditional subscriptions suffer from being funding ‘bundles’ that attempt to pay for all of the ‘105 things publishers do’, thus putting the entire financial burden of schol comm on a single stakeholder. They also put all the focus on whether the content is free or paywalled, a zero-sum game that has plainly reached its expiry date and that we’re all bored of playing. I wonder if we should be looking for a more nuanced model where the market is made up of services for stakeholders, each paid for independently of the others. This might have the additional benefit of boosting the services that are valued and squeezing out those that are not.
*[And yes, I know that each article or book is unique, therefore theoretically unsubstitutable and potentially priceless to a reader, but in world awash with content, no-one can honestly pretend they read everything relevant to their world so, logically, content must be ‘substitutable’ – if only by omission.]
If a library subscribes to our journal content, I expect them to ensure first and foremost access to the content that was purchased (i.e., the version of record). It’s as simple as that.
Isn’t it the library’s prerogative as the customer to decide how it will use what it licenses? You must have suppliers as a publisher … do you let them decide how you will use their services or do you make those decisions yourself in order to best manage your budgets?
“Isn’t it the library’s prerogative as the customer to decide how it will use what it licenses? You must have suppliers as a publisher … do you let them decide how you will use their services or do you make those decisions yourself in order to best manage your budgets?”
As someone mentions above, the user is not the person paying, which to me is the problem (you’re deciding for researchers, who are also our clients). But to answer your question, I can’t think of a case where we’d purchase a service but then not use it so that we could negotiate down the cost.
You can add me to the list of sad people reading this post. It’s a mess out there.
On this point: “libraries might suppress links to subscription copy when open copy is available…[or] add friction to the process of getting the link to the subscription copy.”
I worry that this logic — if we follow its economic imperative — might extend to Hathi/Google versions of open-access monographs: a catalog interface in which getting the actual book (say, from offsite storage which costs the library money) will be discouraged through such suppression.
Lisa, this is a very interesting discussion, but I confess it’s making me feel pretty sad. At my institute, we’ve always thought that understanding the limits of library budgets has been a crucial part of our role as a society very-non-profit publisher. We never jacked up our subscription costs, for a very labor-intensive journal, because we know that libraries struggle. More than once I’ve been told that was a foolish strategy, but I still believed we were all responsible for thinking about our role in the full cycle of production and use.
I’m very glad you wrote this, because obviously (now that you’ve said it) this is the logical next step for libraries looking to enhance access to free content. But it’s neither the content I want, nor the content we work so hard to support and publish. I’d have to tell my grad students to look for other resources to curate scholarship if the library sent them to material in the way you’re proposing.
Honestly, I’m pretty sad about it too. Unfortunately, graduate students are likely already having to curate scholarship from other sources. And, I suspect many turn to illicit platforms like LibGen and aaaaarg.fail in addition to SciHub. I hope you haven’t missed that two of the strategies I mention above would actually help your students find more licit open copy (acquisition and the first linking strategy) in cases where the library doesn’t have any subscribed copy at all.
Libraries are really stuck. If we don’t rein in the prices of the big deals/subscriptions somehow, the existing trend of squeezing out small, independent, etc. publications will only continue (something, I’d note, that Mary Case warned against in 2001 – https://twitter.com/lisalibrarian/status/949402244285370368).
If I may, I’d like to repeat something I said above – I’m not proposing that a cost-per-use that suppresses over-utilization be the only criterion a library uses in making its budget/negotiating decisions. (And, if you aren’t using obscuring UX and the like on your platform, the overutilization on your platform won’t necessarily even result in a CPU correction.) In fact, I think that using only the adjusted CPU would also be irresponsible. Libraries regularly choose to allocate their funds based on other criteria (e.g., funding open access publications, or paying for services – see Toby Green’s freemium model). My gut sense is that libraries would seek to reward publishers who do as you describe.
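To make the “adjusted CPU” idea concrete, here is a minimal sketch of how such a correction might be computed. This is an illustrative toy, not Antelman’s actual formula or any library’s real methodology; the function name and all the numbers are hypothetical. The idea is simply to count only the downloads that the subscription uniquely enabled (those without an open substitute) in the denominator:

```python
# Illustrative sketch: an "OA-adjusted" cost-per-use that excludes
# downloads which could have been satisfied by an open copy, so the
# ratio reflects only what the subscription uniquely provided.
# All figures below are hypothetical.

def adjusted_cost_per_use(annual_cost, total_uses, oa_available_uses):
    """Cost per use counting only downloads with no open substitute."""
    unique_uses = total_uses - oa_available_uses
    if unique_uses <= 0:
        return float("inf")  # every download had an open substitute
    return annual_cost / unique_uses

# A $10,000 subscription with 5,000 downloads looks like $2.00/use,
# but if 3,000 of those downloads had open copies available, the
# subscription's unique contribution is closer to $5.00/use.
raw = 10_000 / 5_000
adjusted = adjusted_cost_per_use(10_000, 5_000, 3_000)
print(raw, adjusted)  # 2.0 5.0
```

A resource can look like a bargain on raw CPU yet be far less compelling once openly available substitutes are netted out – which is exactly the negotiating leverage being discussed.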
Here I thought librarians were advocates of free speech and not censors. It seems to me that the librarian in this scenario is now serving in the role of a censor, saying in effect “what I suggest is just as good as that almost-identical article, take my word for it!”
Rather, I would suggest the library fight for a larger share of the budget by demonstrating just how and why it is a vital part, if not one of the central parts, of the reason to have a university. I cannot recall an ad promoting the university library during a televised sporting event.
I don’t think reducing costs creates prosperity.
It seems to me that the avenue chosen by many libraries is to attack publishers as the root of all their problems while not coming up with creative ideas to solve their funding problems.
I tend to agree with David that the solution presented will only lead to library patrons simply no longer choosing to be a patron.
This isn’t an either/or. Every librarian I know is quite busy advocating. But, you can’t spend a dollar you don’t have and you have to make choices about how to spend the ones you do. We advocate AND we steward.
There are multiple strategies I presented, and none of them prevents the user from getting a version of record – even the options to suppress links or to not provide interlibrary loan – they only affect whether the library will pay for it.
P.S. I’m sorry to hear you missed the library media spots on the Big Ten Network – they were fabulous.
Savvy publishers often do provide added value with a library’s subscription (at no additional cost), including workflow tools, LTI integration, and other support designed to increase the end user’s ability to access an article within a subscription at the point of need. Unfortunately, libraries often don’t take advantage of these tools, and the effect is an increased cost per use due to a lack of attention or staff to properly promote a new or renewed database effectively. Often libraries are reluctant, in the negotiation phase of a renewal, to utilize publisher support that should have been adopted at the start, for fear that providing a resource at the point of need will drive higher use and potentially higher costs.
Full disclosure: I’m a co-founder of Kopernio, a tool for providing access to journal articles (both subscription and OA).
From my own experiences as a researcher I can say that some PDF versions of journal articles are “more equal” than others. Formatting and the ability to cite the article aside, there can be significant differences between the text of the pre-print and the publisher’s version of record (something I learnt the hard way during my own research!). As a reader it can be hard to tell from a given PDF document exactly what article version you’re looking at or to know whether it contains all corrections from peer review. How many researchers could confidently explain the difference between pre-prints, post-prints, accepted manuscripts and articles in press?
Any system that intentionally “nudges” readers to open versions in preference to otherwise available subscription versions needs to take these differences in versions into consideration. One starting point for this might be the SHERPA/RoMEO “colour” metadata; case in point a list of “yellow” publishers where articles can only be archived pre-refereeing http://www.sherpa.ac.uk/romeo/browse.php?colour=yellow&la=en&fIDnum=|&mode=simple. Of course this approach optimistically assumes that all author uploads actually conform to this standard in practice, something my experience building up Kopernio’s OA index leaves me quite sceptical of.
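To sketch what a version-aware nudge might look like, here is a toy decision function along the lines described above. The version labels, the colour semantics, and the policy itself are illustrative assumptions (loosely following the classic SHERPA/RoMEO colour scheme), not any real Kopernio, library, or RoMEO API:

```python
# Hypothetical sketch: only nudge a reader toward an open copy when
# its version is known and its text should match the version of
# record. Labels and policy are illustrative assumptions only.

# Versions whose text is expected to match the version of record
# (after peer review), under this sketch's assumptions.
VOR_EQUIVALENT = {"postprint", "accepted_manuscript", "publisher_pdf"}

def should_nudge_to_open(open_copy_version=None, romeo_colour=None):
    """Return True only when the open copy is safe to substitute."""
    if open_copy_version is None:
        # Unlabelled PDFs (the common case) can't be trusted: they may
        # be pre-refereeing versions lacking peer-review corrections.
        return False
    if romeo_colour == "yellow":
        # "Yellow" publishers permit archiving of pre-refereeing
        # versions only, so a self-archived copy is likely a preprint;
        # accept only a publisher PDF (e.g., one that is gold OA).
        return open_copy_version == "publisher_pdf"
    return open_copy_version in VOR_EQUIVALENT
```

Even this toy version shows the practical snag raised above: the default (no version label) has to be “don’t nudge,” because in the wild most author-uploaded PDFs don’t reliably declare which version they are.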
More broadly I’d say there are enough obstacles already between researchers and journal articles in many cases. “Nudging” users away from the version of record may help optimise cost metrics but will be hard to do without adding inconvenience to researchers’ workflows.
Appreciate the unpacking of some of the issues here. Indeed, as I mentioned, article tagging would really help everyone out a lot. Interestingly though, the terminology you use (pre-prints, post-prints, etc.) is not in the JATS framework. This came up at CNI. What exactly is a pre-print? Not only will we need metadata, we’ll need it accurately employed!
To be clear, I absolutely understand that this inserts inconvenience into researcher workflow. That’s why libraries would have to assess the risk of unintended consequences in making choices among the Linking Strategy options. Personally, I think the link+nudge is probably the most palatable to library ethos of service – particularly as it empowers users to make more informed choices about the costs of their clicks – but the other options (friction + denial) are indeed, just that, options. In dire times, a library might need more drastic measures. I’ve never had to work through a 20-30% budget cut … yet. But, some librarians have. It’s brutal and having the most accurate measures of value would be useful.
I have a lot of problems with this idea, but most have been mentioned by other commenters (including increasing the irrelevance of the library in the research process by employing this type of strategy). Instead, I’ll mention a very tactical problem, which is that most discovery doesn’t happen from the library catalog, so privileging / deprivileging links therein won’t make a difference except creating gobs of work for technical services folks. We did one of the Ithaka faculty surveys recently and something like 90% of our faculty start their search in a search engine or a specialized database. Direct-to-the-publisher-version links work elegantly in Google, as do keyword searches that pull up open content from our repository. I would posit that’s where the high downloads of publisher package articles comes from.
I also have to say that the idea that tech services hand-curates links for journals one URL at a time is out-dated. Nobody does that — we [mostly] use knowledge bases to catalog large collections of titles (which is another reason why the big deal is so lovely to manage instead of 500 or 1,000 individual subscriptions). There are open access collections of ebooks and e-journals in knowledge bases and so most libraries do acquire them in basically the same way we acquire other journals. And there’s not typically a way to game the results systematically to ensure that one link appears in the catalog above another. I can’t even get the direct sub to come up first in the list of options successfully — much less could I push it to the bottom.
Tech svcs folks just aren’t in charge of the discovery layer at that level of detail. So I’m not sure anyone even *could* do what you’re proposing.
Essentially, I don’t agree that libraries suffer from our interfaces being too easy to use, promoting too much access. I’ve always thought our downfall would be that we are consistently promoting products that we have no control over in terms of design, usability… The number of problem tickets we get about e-resources is huge at most libraries — and that represents such a small number of the actual problems that patrons encounter. I’m not interested in doing anything to make it even harder to get to information.
I’m used to librarians telling me not to talk, but it’s a bit of surprise to find one advocating that I should read less. In 2002, Elsevier provided 70m downloads, and last year that figure had risen to about 900m downloads by about 14m researchers. Those downloads included work by 173 of the 174 Nobel prize winners in science and economics since 2000. More researchers accessing more information of ever rising quality is a good thing no? I really can’t see how this is a bad trend. PS: I work for Elsevier’s parent company
Hi Paul, thanks for commenting. Worry not – we librarians really don’t shush people much – though I don’t expect the stereotype to depart the popular media anytime soon unfortunately!
If you took away that I was suggesting people should read less, I didn’t explain my argument as well as I had hoped. In no way do I want people to read less. My point here is about *which* version to read – not whether to read. Just as we don’t use formal (expensive) watermarked stationery to print off a copy of an agenda as we run to a meeting – though of course we would to print out an award for framing – I’m suggesting that users might choose (or be provided with, depending on which of the strategies a library would implement) less expensive copy when their use tasks can be met by that.