Editor’s Note: Digital delivery of scholarly publications has enabled far more robust tracking of usage, with the COUNTER Project providing and periodically updating the defining standard for usage measurement. As a result, usage has become a critical metric for establishing the value of a given journal or content bundle in many circumstances, including licensing negotiations between publishers and libraries. This has caused one Scholarly Kitchen author to wonder, Are Library Subscriptions Over-Utilized? At the same time, concerns about usage “leakage” from the publisher platform to other services where publishers have not received “credit” for that usage has led to efforts to re-enclose that usage through syndication.

Against this context, Curtis Kendrick, Dean of Libraries at Binghamton University and a key leader in SUNY’s collective licensing initiatives, has raised some probing questions about whether cost-per-use is the appropriate metric for measuring the comparative value of library subscriptions. Today’s piece offers a strong warning, to publishers and libraries alike, to avoid the simplistic use of metrics when the underlying thing being measured is far more complex.

The subscription monopoly has been broken. Content is a commodity; it is ubiquitous. What we are paying for by subscribing to journals is now as much convenience as it is access, and the valuation paradigm has to change. The cost per use model is too simplistic a measure because it does not account for variability in the nature of patron usage, and it consequently overvalues journal subscriptions: not all usage is equal, and not all usage has equal value. Usage by a sophomore who happens to download an article from a high-end database is qualitatively different from that of a research scientist racing to complete an application for an NSF grant. The undergraduate may be served just as well by an article from a less costly substitute information resource, while the research scientist’s close reading of the article from the higher-priced resource may be the difference between winning and losing the grant. In this piece, I critique the limitations of the “cost per use” valuation exercise and offer some alternative ways to approach resource valuation.


With limited resources, one of the ways we can make our dollars go further is to segment our market (users) and do a better job of providing what they need, rather than providing a lot more than what they need. In negotiations between libraries and vendors, basing the assessment of value on cost per use is advantageous to the vendor. Because cost per use has become a de facto standard, vendors have a common measure of value by which to compare usage and spending patterns at different libraries, and they can use this information as leverage in negotiations. Libraries typically do not make such comparisons of cost per use against other libraries, or do so on a very limited basis, because of uncertainty about non-disclosure agreements. As long as cost per use is the generally accepted metric for value, vendors will have an edge in assessing how important titles are (or, they will postulate, should be) to the library. The more self-sufficient libraries can become at assigning value, the better.

Libraries have an opportunity to seize greater control of the negotiation conversation by reducing the substantial informational asymmetry that privileges vendors. They must conceive of a different scale for evaluating journal subscriptions. An advantage of the cost per use model is that it is relatively easy to calculate and easy to understand, but it doesn’t get us far from our practice of mistaking “counting things” for assessment. Our industry norm has been to think in terms of the cost of the journals we license, but what if that cost were viewed instead as an investment? Less “how much does this journal cost?” and more “what are we getting for our investment in this journal?” What is our ROI, our return on investment? We need to shift perspective to focus on the investment side of the equation and understand not only what we are getting for our money, but also what our money is getting for us. In other words, for each dollar invested, how much usage do I get? While the model will get more complicated in a moment, think of it essentially as shifting from Cost Per Use (CPU) to Use Per Cost (UPC). Either way, for collection analysis purposes we would end up drawing the same conclusions, as the table below suggests.

Table 1: Comparison of Cost Per Use (CPU) and Use Per Cost (UPC) Analysis

 

        Resource A    Resource B    Resource C
Cost    $10,000       $25,000       $50,000
Use     5,000         5,000         5,000
CPU     $2.00         $5.00         $10.00
UPC     0.5/$1.00     0.2/$1.00     0.1/$1.00

 

In the example above, Resource A has a cost per use of $2.00 and Resource C has a cost per use of $10.00; per use, Resource C is five times more expensive. Equivalently, for each dollar of the $50,000 invested in Resource C we get 0.1 uses, while for Resource A’s $10,000 investment we get 0.5 uses, five times as many uses per dollar. Making the flip from cost per use to use per cost allows us to answer the question “for every dollar invested in resource X, how many uses do we get?” This investment-based approach is better suited to library planning because it keeps cost constant but, as with cost per use, allows us to vary the usage calculation based on our applied assumptions about the nature of that usage.
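To make the arithmetic concrete, here is a minimal sketch in Python of the CPU and UPC calculations from Table 1 (the resource names and figures are the illustrative ones from the table, not real data):

```python
# CPU (cost per use) and UPC (use per cost) for the three
# illustrative resources in Table 1.
resources = {
    "Resource A": {"cost": 10_000, "uses": 5_000},
    "Resource B": {"cost": 25_000, "uses": 5_000},
    "Resource C": {"cost": 50_000, "uses": 5_000},
}

for name, r in resources.items():
    cpu = r["cost"] / r["uses"]  # dollars per use
    upc = r["uses"] / r["cost"]  # uses per dollar invested
    print(f"{name}: CPU = ${cpu:.2f}, UPC = {upc} uses per dollar")
```

Either figure supports the same ranking of the three resources; the flip simply changes the question from “what does a use cost?” to “what does a dollar buy?”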

Disaggregating Usage

Usage can be disaggregated in a number of useful ways that assist with a more sophisticated understanding of the value of a journal subscription. Below are three models of usage disaggregation. Gathering this data in real time might prove too labor-intensive, and might intrude further on user privacy than most librarians are comfortable with. Instead, a sampling exercise could be performed a few times a year via surveys and other analyses.

Disaggregation Model 1: User Category  

With this disaggregation model a library would assign different valuations or weights to usage by different categories of users for each title, globally, individually or in some grouping.

Usage = Uf + Ug + Us + Uo

In this example, Uf = use by faculty; Ug = use by graduate students; Us = use by undergraduate students; and Uo = use by other constituencies. Depending on the library, the categorization of users might be more or less complex; the model here is simply indicative of one approach a library might take.

To apply the model, a library may determine that usage by faculty for a particular resource is twice as important as usage by undergraduates, and that usage by graduate students falls somewhere in between. For the purposes of valuing the use, it will exclude usage by other constituencies. (All of the weights or coefficients assigned in this paper are for demonstration purposes only; they are not to be taken literally and presumably would vary by institutional type and other considerations.) In the example below, the use formula might be modified to look like this:

Usage = 1Uf + .75Ug + .50Us + 0Uo

In this example each faculty usage would count as 1 use, each graduate student usage would count as .75 uses, and each undergraduate use would count as .50 uses. Use by other constituencies would be zeroed out. The net result for the purposes of calculation would be a decrease in the number of uses that get counted (the formula could just as easily result in an increase in the number of uses that get counted depending on the assumptions that get made).

In the example above a library might then estimate the value of a subscription-based resource with the calculation:

Value = (1Uf + .75Ug + .50Us + 0Uo)/Cost

Here, value equals one times the number of faculty uses, plus .75 times the number of graduate student uses, plus .50 times the number of undergraduate student uses, all divided by the cost of the resource.
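As a sketch, the Model 1 calculation might look like this in Python; the weights are the illustrative ones from the text, the download counts are invented for the example, and the same pattern applies to the later models with different category labels and weights:

```python
def weighted_value(uses_by_group, weights, cost):
    """Value = (sum of weighted uses) / cost, per Disaggregation Model 1."""
    usage = sum(weights.get(group, 0) * count
                for group, count in uses_by_group.items())
    return usage / cost

# Illustrative weights from the text: faculty 1, grad students .75,
# undergraduates .50, other constituencies zeroed out.
weights = {"faculty": 1.0, "grad": 0.75, "undergrad": 0.50, "other": 0.0}

# Hypothetical annual download counts for one resource.
uses = {"faculty": 1_200, "grad": 800, "undergrad": 2_000, "other": 500}

value = weighted_value(uses, weights, cost=25_000)
# Weighted usage: 1*1200 + .75*800 + .50*2000 + 0*500 = 2,800 uses
```

A library would substitute its own weights and measured counts; the function itself stays the same.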

Disaggregation Model 2: Quality of Usage 

As we all know, sometimes general information about a topic is needed and a variety of information resources might meet the need, while at other times a specific article is needed. We also know that some information needs can only be met by high-cost resources, yet there are times when a high-cost resource is used where a lower-cost one would have performed equally well. This is where Disaggregation Model 2: Quality of Usage might be applied, with the following formula:

Usage = Un + Uc

Where Un = necessary use, that is, use where the specific article or that class of resource is necessary; Uc = convenience use, that is, an article was used but a different information resource could also have met the need (e.g., lower-division undergraduate papers).

As in Model 1, weights might be assigned to necessary uses and to convenience uses, so for a particular resource one might assign the following:

Usage = 1.5Un + .25Uc

The resulting value estimation for this resource would then be:

Value = (1.5Un + .25Uc)/Cost

The value of this resource would be calculated as one-and-a-half times necessary uses (need for that specific article or class of article), plus .25 times convenience uses (another information resource could have met the need), all divided by the cost of the resource.

Academic libraries provide a host of information resources for users, and the annual costs for subscription-based resources vary greatly. Particularly with the advent of modern discovery systems, some of the usage that gets counted is essentially serendipitous – someone used an article because they came across it and it looked like it fit their need, not because they had a specific need for that particular article. There are many cases where a high-cost subscription resource meets an information need that could just as easily have been met by a lower-cost subscription resource with zero degradation in outcome.

Disaggregation Model 3: Time Requirement of Usage 

Subscriptions enable the convenience of immediate access, and for generations libraries have worked to deliver our collections as quickly as possible. We have made assumptions about our need to compete in an environment of rising service expectations without necessarily assessing use cases to see what the true requirements are. If a substantial portion of the need for a high-cost subscription resource is not immediate, a portion of the savings from canceling the resource can be reinvested to ensure that those with a true need for speed are not inconvenienced. The quality of interlibrary loan services has improved tremendously, particularly for article delivery. Next-day delivery is commonplace, and it is not atypical for an article requested in the morning to be fulfilled by the afternoon. This is where Disaggregation Model 3: Time Requirement of Usage might be applied:

Usage = Ui + Uw

Where Ui = cases where there is an immediate need for access to an article; Uw = cases where user needs can be perfectly well met with a modest wait (e.g., interlibrary loan or asking the author).

As with the other models, weights might be assigned here too, so that for one resource a library might calculate usage as:

Usage = 1Ui + .50Uw

Here, each article for which immediate delivery was required would count as one, and each article for which a delay would be acceptable would count as .50 of a usage.

The ensuing value calculation would then be:

Value = (1Ui + .50Uw)/Cost

There is definitely a service implication here, and as always libraries would need to be mindful of degrading services. Moreover, it’s always important to remember opportunity costs, as dollars saved from investment in subscriptions may be repurposed for other activities central to the mission of the library, which might include a greater investment in interlibrary loan. What service levels are needed by which groups of patrons, and can we continue to provide premium-level services in cases where a standard level of service is perfectly acceptable? A major Midwestern university recently announced it had prepared a list of titles for review for cancellation due to budgetary pressures. Of the list, the chair of the history department noted, “as I glance at the list I do see several items I don’t consider obscure or esoteric but I also have found interlibrary loan to be efficient at obtaining things we don’t have.”

Bringing It All Together

It should now be clear that the existing paradigm for expressing the value of article subscription-based resources inadequately reflects variability in time, quality, or category of user. A simple calculation of cost per use is inadequate for describing the nature of our patrons’ actual use of the collections, and consequently is not a sufficient tool for calculating value.

Old Model Valuation Formula:  Value = Cost/Use

I have presented three models for disaggregating usage that offer a more refined calculation, one that enables libraries to tailor assumptions to fit their particular users and then vary those assumptions based on the characteristics of each individual resource being evaluated. While each of the models has been presented individually, the complexity builds when one begins to pick factors from across the examples to build an even more robust model. Here is an example:

Usage = Uf + Ug + Un + Ui

For this particular resource, a library might determine that the only usage factors it wanted to consider in valuing the resource were usage by faculty and graduate students, usage that was necessary and usage for which there was an immediate need.

The valuation formula for this resource would then look like this:

Usage = 1Uf + .75Ug + .9Un + .95Ui

Value = Usage/Cost
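A sketch of this combined model, again with the illustrative weights from the text and invented counts (note that the naive sum can count a single download several times, the duplication problem discussed next):

```python
# Combined model: count only faculty (Uf), graduate (Ug), necessary (Un),
# and immediate (Ui) usage, with the illustrative weights from the text.
FACTOR_WEIGHTS = {"faculty": 1.0, "grad": 0.75,
                  "necessary": 0.9, "immediate": 0.95}

def combined_value(counts, cost):
    usage = sum(w * counts.get(factor, 0)
                for factor, w in FACTOR_WEIGHTS.items())
    return usage / cost

# Hypothetical counts for one resource. A single download by a faculty
# member that was both necessary and immediate would appear under three
# factors here, so this naive sum triple-counts it.
counts = {"faculty": 400, "grad": 300, "necessary": 500, "immediate": 200}
value = combined_value(counts, cost=30_000)
# Weighted usage: 400 + 225 + 450 + 190 = 1,265
```

The weights and counts here are purely demonstrative; a real implementation would need a de-duplication step before the sum.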

Libraries will need to determine how to handle duplication; an article usage may fall into multiple categories and therefore potentially be counted multiple times. A usage might be by a faculty member; it might also be a necessary usage and be needed immediately, so that one usage could be counted three times. In a perfect world there would be a way to identify this overlap and correct for it by counting one use and apportioning it across the different categories into which it falls. Such work might be beyond our current data analysis capacities and our ethical principles with respect to privacy, but it serves as a basis for imagining how far the current standards are from reflecting true underlying value. And of course, while every day we already make informal assumptions about journal usage, now we will have a more robust framework for structuring our thought processes.

So the next time a vendor tries to sell you on the value proposition that you should pay more because your cost per use is amazingly low you can tell them, “no, the dollars we invest in your subscription don’t generate enough quality usage compared to your competitor, so we are moving on, thank you very much!”

Thanks to Lisa Janicke Hinchliffe, Roger Schonfeld, Elizabeth Adelman, Jin Guo, Brenda Hazard, Kathryn Machin, Mary Van Ullen, Jill Dixon, James Galbraith, Irene Gashurov, Stephanie Hess, Caryl Ward and Mary Beth Kendrick for their substantive insights on earlier drafts.

Discussion

29 Thoughts on "Guest Post: Cost per Use Overvalues Journal Subscriptions"

While this article provides a very sound and valuable theoretical approach to analyzing usage, it ignores the fact that publishers maintain that they do not know who is downloading materials and how they are using them. Collecting that would be a huge violation of the privacy agreements set between the publisher and the university, college, or institution purchasing materials from the publisher. What we can do is use estimates of how often students use materials versus faculty and/or researchers at a given type of higher-education institution, and how they generally use them. But that would only provide an estimate of what usage activity qualitatively looks like. What are other ways we could use these theories of breaking down usage without violating customer privacy?

This is an interesting statement: “publishers maintain that they do not know who is downloading materials and how they are using them” …

For example, if you look at the data privacy policies for almost any publisher you will see that they very much know – especially if they have tools that span the workflow (e.g., I download something from ScienceDirect into Mendeley and then cite it in a presentation that is uploaded into a bepress repository). And, David Lacy and Cody Hanson did a revealing analysis of just how much they know from browser fingerprinting, etc., and found indications that they are combining the data they collect with audience marketing tools that create robust profiles of users: https://www.cni.org/topics/identity-management/collecting-correlating-stitching-enriching-how-commercial-publishers-are-creating-value-by-profiling-users … details of the analysis are here: https://www.codyh.com/writing/tracking.html

So, while I am definitely in favor of pursuing approaches that don’t violate privacy, I think we should recognize that the publisher platforms are already deploying tools that mean they have analytics on library users that undercut any claim that they just don’t know who is using their materials or how. Do they know everything? No. Do they know an awful lot? Absolutely.

Lisa, is it your impression that publishers across the board tend to have this kind of information about library users, or do only the biggest ones have the kind of infrastructure necessary for this kind of tracking?

Scale isn’t needed per se, though investment of resources is. My sense is that the infrastructure is most developed on the largest platforms, where there is a turn to the research workflow, or where there is a requirement to set up individual accounts in order to have access to content (e.g., resources that do not allow access based on authentication alone but require account creation – which is then definitely revealing of user characteristics).

As a side note, I would mention that some open access publishers are also moving into much greater data tracking and capture with the goal of being able to pitch back to those libraries whose users are making use of open content that they should join in supporting the platform/publisher. Library IRs are also capturing quite a bit of user data … I remember seeing one dashboard that showed “someone in x city just accessed y article.”

With all due respect, this is full of nonsense. Let’s start with the second sentence, “Content is a commodity”. Go ahead and cancel Science, Nature, JAMA, and Lancet, and see how fast your faculty explain that content is NOT a commodity. “Commodity” means that the product is identical in use value regardless of provider – a bushel of a particular strain of wheat is the same regardless of which farmer grew it. Content might be a “commodity” for the lower undergraduate term paper, but for no one else. The rest of the article tries to look mathematically clever with subscripted variables, but never points out that the data necessary to actually use those variables does not exist, nor in our current data-privacy environment could we ever get it. We have absolutely no way to determine, for any given journal’s full-text usage, which portion was used by faculty, graduate students, or undergraduates, so this entire “framework” is counting angels on the head of a pin, with no value in the real world. If you have a serious proposal on where we might get that data, I’ll gladly pay attention.

I totally agree with this comment and those below. Who decides which undergraduate, graduate, post-doc or faculty actually ‘gained’ valuable insight from any given content? Who ‘gave back’ to the ROI of the school?

I’m puzzled by this comment. There are numerous library value studies showing that libraries are able to track usage by user group. And certainly, if your campus SAML system is set up to pass along user group status, so do the publisher platforms, and in far greater detail. Every time I authenticate to a library database, I check what is being passed along, and my university group status is one of the variables. Now, perhaps these are not comprehensive datasets of what is used by whom — but this is very much in the realm of already happening and quickly developing further.

We use EZproxy, by far the most common off-campus authentication system used by North American academic libraries. First, if the user is on campus, the authentication with the vendor is pure IP range, so not only does the vendor get no information about the user (unless the user voluntarily uses a “my research account” feature of the vendor’s website), but the library doesn’t either. If the user is off campus, the library’s proxy server does get the user group status at the moment of authentication, just to make sure they’re entitled to that particular resource, but that data is not stored in any log/audit files, so there’s no way for the library to collect any kind of usage data on the basis of it. It might be possible for the library to configure capturing that, but it’s most definitely not the default, and I don’t think anyone would go out of their way to do that because of the privacy issues. And in any case, that information is certainly NOT passed along to the vendor, who only sees the IP address of the proxy server itself. Again, if the patron voluntarily uses a personalized account feature on the vendor’s site, the vendor can capture that information IF the patron accurately described their status in creating that account (they could always lie and the vendor would have no way to verify that info), but I have no evidence that such accounts represent any significant proportion of full-text use with that vendor. OpenAthens sites may optionally provide that kind of user group info, but they are a tiny minority of universities in North America to begin with, and I’d like to see a study of how many choose to send that extra info, since that again is not the default.

“It might be possible for the library to configure capturing that, but it’s most definitely not the default and I don’t think anyone would go out of their way to do that because of the privacy issues.” – The library value literature that reports on analytics tells us that some libraries do indeed do this very thing. And some add to it by requiring login on campus, not just off. I’m not saying every library does this, but it is clear that some indeed do.

“And in any case, that information is certainly NOT passed along to the vendor, who only sees the IP address of the proxy server itself.” – Certainly many libraries have their proxy servers set up this way. But, not all. And, SAML is a different approach entirely (I confirmed again that indeed my campus SAML is passing my campus group – plus my personal email – to the publisher platforms).

In no way am I claiming that this data tracking is the default in different systems, but the technology is there to gather this data, and some libraries are doing it; thus we are out of the realm of angels on pinheads in discussing the possibility of collecting and using this kind of data.

Wow, thanks for that glimpse into what other libraries are doing. I know that SAML (aka OpenAthens realistically in terms of implementation options) can do that, but I am frankly shocked that I have any significant number of colleagues who are making that choice as it is beyond the pale of acceptability here in Canada in terms of patron privacy rights. I have to wonder if your faculty know and approve of your doing that or is it happening under their radar? Maybe it’s a US versus other countries’ values issue? Everyone knows that privacy rights in the US are almost non-existent legally compared to the rest of the developed world.

Not just the USA. The most developed library data collection project I know of is the LibraryCube in Australia (https://er.educause.edu/articles/2012/7/discovering-the-impact-of-library-use-and-student-performance). And here’s an EZproxy-based project in France (https://www.oclc.org/en/news/releases/2019/20190212-oclc-couperin-partner-add-analytics-features-ezproxy.html). In the public library context, check out the services of OrangeBoy, OCLC Wise, and Gale Analytics.

Library practice has indeed shifted radically – many libraries don’t purge circulation records any more either. FWIW, you might be interested in this: Hinchliffe Awarded IMLS Grant to Develop Training on Privacy in Library Learning Analytics (https://publish.illinois.edu/library-excellence/2019/07/11/hinchliffe-awarded-imls-grant/)

Congrats on that grant! I look forward to seeing what training materials you’re able to share with us all when the project is done. Our university just this year came under a new Freedom of Information and Protection of Privacy Act here on PEI and we’re all struggling to understand how to balance (re-balance?) our legitimate uses of metrics with the privacy rights of patrons.

While I appreciate the theory behind attempting to calculate a value metric for libraries, it does pose some real problems in application.

First, it assumes that the library has perfect information about WHO in their community is using a resource and for WHAT purpose. As organizations that pride themselves on the rights of patrons to pursue information with anonymity, a structural change to track the user behavior of patrons seems hostile to this core value. And while you argue that collecting such data may be served by periodic surveys “and other analyses,” this seems little more than aspirational hand-waving.

Second, your proposal presupposes some hard values for your users, namely, that faculty are worth more than graduate students, who are worth more than undergraduates. Are full professors worth more than associate professors, who are worth more than assistant professors, the latter of whom may be applying for tenure? Are postdocs worth more than lecturers? Imagine for a minute how acrimonious this discussion would get when attempting to assign arbitrary numbers to real people.

Third, your proposal assumes that you are able to gather reliable information from users about their intended use, which may have taken place months after a paper was downloaded. I doubt most users would be able to respond accurately to such a request, let alone recollect the event at all.

We are left with an untenable and unworkable situation in which you are attempting to measure value in an environment where you have neither the culture, the structure, nor the tools to measure it accurately. In the end, you are no closer to measuring value, but have spent resources and, more importantly, precious library social capital, in the process.

Cost per use is not a perfect metric: it was never intended as one. Nevertheless, informed librarians can take this information and weigh it thoughtfully against other information – about their budget, user feedback, interlibrary loan statistics, recommendations from professors, etc. This is information that the publishers do NOT know about your community. This puts you in a position of knowing more than the publisher, not less. There are reasons why publishers aggregate institutions into tiers based on number of FTEs, researchers, total downloads, or World Bank classification. These measures are not perfect, but they tend to be easier to gather, don’t require everyone to compromise their values, and in the end, are good enough.

At its core, Curtis Kendrick makes a basic economic argument: perfect information allows libraries to more accurately measure value and, therefore, calculate price. A more helpful post might have asked: 1) Would libraries and their communities be better off in such a world? and 2) What would need to change in order to measure value perfectly, and is this a tenable proposal?

The article overlooks a blatant fact–simplistic as it may be. In a stressed budgetary climate materials that betray low usage get the boot. Publishers and vendors hate that logic.

It’s a big assumption to make that faculty use is more important than graduate student use or undergraduate student use or other use. Faculty often delegate tasks like gathering literature to their student assistants. Just because a student ID is used to download an article doesn’t mean it wasn’t ultimately used by a faculty member. Also, it is important that the scholars of the future can learn from high quality articles. What they learn during their formative years about their subject areas and evaluating research in their fields determines the skills they will acquire for their futures as researchers. The quality of research they are exposed to can also affect their motivation to continue their education or pursue a career in academia rather than other industries. Also, when you think about the enormous percentage of university employees who are not categorized as faculty these days, “other” use becomes very important. I want the administrators, academic affairs professionals, student affairs professionals, and research associates at my university to have high quality research at their disposal just as the faculty do. The quality of education and research at the university depends on it.
It is true that sometimes less expensive articles could equally have met an information need. But, how are scholars going to find those alternative articles? Discoverability is a service worth paying for. If a publisher or vendor adds better metadata, that’s value they have added to their product.
Interlibrary loan is a viable alternative to subscriptions for many use cases, but creating additional steps to retrieving information results in decreased use of those products, so this alternative needs to be used with care. Cost per use varies by discipline because disciplines have different outputs, article lengths, and publisher allegiances. It’s not a good idea to cancel all your journals for one field just because that field tends to have an unfavorable cost per use metric.

We can obtain information about user behavior anonymously to the extent that we can map behaviors and reported responses to different categories of users without having to know who the individuals are. This is not aspirational; it is possible today. The information we get will never be perfect, but it can be sufficient to be actionable. This proposal does not presuppose any values – it only suggests that libraries make their own value judgments. While the example in the model may “value” faculty more highly than others, an actual library might come to a different determination. A community college, for example, might choose to give a higher valuation to undergraduate usage than to faculty usage if that aligns more closely with its mission. We have the tools and structure to move in this direction – it’s the culture that needs to change, and that of course is incredibly difficult in and of itself.

How are you getting this data? Are you collecting it at a title level or platform (vendor) level on a monthly or annual basis? Are you getting it for on campus as well as off campus use and for patrons using your “public” computers inside your library?

We too use EZProxy with the University’s authentication system to obtain quite detailed data about use of publisher content. We enforce login on through EZProxy both on and off campus. We keep log files and can interrogate the data in different user groups at Faculty level, by user type and down to individual module level.
What we can’t do is follow the use through the platform to the individual piece of content that those users are using. We can see exactly what is being used (ie which titles / articles / chapters etc) only from the publisher data so there is a gap in the middle of our reporting. Eg EZproxy will tell us ‘x’ number of users on module ‘y’ accessed Science Direct in January 2019 but the publisher will tell us that ‘The Journal of Something Complicated’ was accessed ‘z’ times by users from the Open University. It’s impossible for us to marry the 2 due to the overall size of the university (150,000+ users).
We would only seek to identify EZProxy use by an individual user if we were notified by the publisher that there had been suspicious activity on our account and we needed to investigate to remedy it. In that case we would be looking entirely at the log files, with data from the publisher about the date and time, to spot what looked like excessive downloading by an individual user.
Our log files are vast! I hope that helps?
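
The proxy-side half of the workflow this commenter describes can be sketched in a few lines. This is only a sketch under assumptions: EZproxy log formats vary by configuration, so the code assumes an Apache common-log-style line with the authenticated user ID in the third field, and the `group_of()` lookup stands in for a real directory query that maps a user ID to a category. Note it reproduces exactly the gap described above: it counts accesses by platform and user group, but cannot say which title or article was used.

```python
# Minimal sketch: tally EZproxy log lines by platform and user group.
# Assumes a common-log-style line ("host ident user [date] \"METHOD url ...\"");
# real deployments should check their configured log format directive.
import re
from collections import Counter

LINE = re.compile(r'^(\S+) (\S+) (\S+) \[([^\]]+)\] "(\S+) (\S+) [^"]*"')

def group_of(user_id: str) -> str:
    """Hypothetical directory lookup: map a user ID to a user group."""
    return "faculty" if user_id.startswith("f") else "student"

def platform_of(url: str) -> str:
    """Crude platform label: the hostname of the requested URL."""
    m = re.search(r"https?://([^/:]+)", url)
    return m.group(1) if m else "unknown"

def tally(lines):
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue  # skip lines that don't fit the assumed format
        user, url = m.group(3), m.group(6)
        counts[(platform_of(url), group_of(user))] += 1
    return counts

sample = [
    '10.0.0.1 - f-jones [01/Jan/2019:10:00:00 +0000] "GET https://www.sciencedirect.com/science/article/pii/X HTTP/1.1" 200 512',
    '10.0.0.2 - s-smith [01/Jan/2019:10:01:00 +0000] "GET https://www.sciencedirect.com/science/article/pii/Y HTTP/1.1" 200 512',
]
print(tally(sample))
```

Joining these tallies to the publisher’s title-level COUNTER reports is exactly the step that fails at scale, since the two datasets share no common key below the platform level.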

I find it appallingly condescending to decree across the board that undergraduate use should be considered to have less value than use by graduate students or faculty. Granted, the undergrad is unlikely to bring in multi-million dollar grants, but we have no way to know the value of a given article to a particular undergrad. Perhaps that article is key to work that wins them a scholarship that allows them to complete their degree. Perhaps it’s key to a successful grad school or assistantship application. For that student, at that point in their studies, the result is as important to them as a multi-million dollar grant is to a seasoned researcher. Libraries have never been, nor should they be, the arbiter of who deserves what.

I think it speaks to the mission of the university, and the library’s role in helping fulfill that mission. The question would be whether the mission is education or if the gathering of funds (the example of a professor using an article as part of a grant application) is the top priority for both the university and the library.

There’s also the notion that through paying tuition and fees, the undergraduate may be contributing significantly more to the library’s budget than the graduate student or faculty member.

In my earlier comment I criticized the concept of content as commodity, but I am going to switch sides for a moment. In at least the lower undergraduate context (which is very different from upper and honors undergraduate needs), the research on student behavior around “satisficing” (using whatever sources are easiest to get) is very strong, and it is apparent that such satisficing behavior is rewarded in their grades because otherwise they wouldn’t do it.

In an ideal world, we’d be able to provide everything for everybody, but in today’s budget reality, we have to prioritize which bibliographic research needs we are going to fill immediately and which we won’t. Given satisficing behavior, I think it’s completely reasonable to downweight undergrad needs at a title or even package level. And remember, no one is saying we’ll absolutely deny them access, just that they might have to use delayed-gratification services like ILL to fulfill some of their demands. If that article is key to that application, they really shouldn’t have left it to the night before to go looking for it.

I will acknowledge on the other side that faculty and grad students actually have a much longer research time scale, and probably should be able to wait longer (ILL again) than the undergrad whose schedule is course-length, not PhD thesis or long-term grant project length. Exception of course for the clinical medical context, so none of this applies to med school libraries.

Isn’t there a Catch-22 there? Subscription pricing “should” be based (among other things) on usage? A useful metaphor is that of insurance: an institutional subscription (should) work like group insurance, relying on some sort of law of large numbers. Some users use a lot, some less. Those who use less (and who on a per-head basis should have paid less) “subsidize” those who use more (who on a per-head basis should have paid more). More on the metaphor here: https://www.thebookseller.com/futurebook/part-1-ku-or-ko-streaming-subscription-and-big-data
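
The cross-subsidy in this metaphor is easy to make concrete with a toy calculation. A minimal sketch, with entirely invented users and figures: a flat institutional price spreads cost evenly per head, while actual usage is skewed, so light users implicitly subsidize heavy ones.

```python
# Toy illustration of the "group insurance" metaphor for subscriptions.
# Numbers are invented; three users stand in for an institution's population.
price = 10000.00
downloads = {"heavy_user": 900, "typical_user": 80, "light_user": 20}

per_head = price / len(downloads)        # implicit flat contribution per user
total = sum(downloads.values())

for user, n in downloads.items():
    fair_share = price * n / total       # cost if billed by actual use
    subsidy = per_head - fair_share      # positive = subsidizing others
    print(f"{user}: flat share {per_head:.2f}, "
          f"usage-based share {fair_share:.2f}, subsidy {subsidy:+.2f}")
```

The heavy user’s usage-based share far exceeds the flat per-head contribution, which is the sense in which the light users “pay for” the heavy user’s access.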

Interesting perspective on using an insurance industry metaphor. To extend it, the issue here then is which insurance company to use. Should we sign up with Allstate or go with Flo at Progressive?

I cannot imagine ever having the staff numbers to justify spending the time and effort required for a detailed analysis like this, even if we were in a decent budget situation (not likely in the next five years, at least in my realm). In my ideal and unrealistic world, I’d change the scholarly landscape and outrageous journal costs by removing publish-or-perish requirements for faculty who teach better than they research, and by reducing the total number of journals and other scholarly publications in existence, so that the entire universe of scholarly information is vastly decreased. There’s so much pure dreck out there; what can we do to get rid of it in our libraries? Reduced to the essential and important knowledge being produced, the library budget becomes manageable and the cost of higher ed becomes accessible to students again.

I do agree, though: cost per use is not necessarily the best metric; it’s just the easiest to produce.

Interesting blog. Thank you. It would be useful to add some examples of a “high-end database” and a “less costly… substitute resource”. I assume “high-end” means the publisher’s website and “less costly” means no cost, like an author accepted manuscript or PubMed Central? As a university academic and open access ambassador, I am interested in the conversation but not au fait with the jargon.

“High-end” is an imprecise label. Probably better would have been highly specialized, or content aimed minimally at an upper-division undergraduate if not a graduate student or faculty member. At our university this type of resource is afforded a disproportionate share of our budget. At the other end of the spectrum would be more generalized content, perhaps even from an aggregator’s corpus.
