Editor’s Note: Today’s post is by A.J. Boston. A.J. is the Scholarly Communication Librarian for Murray State University, located in scenic Western Kentucky, USA.
It’s time for a new serials subscription strategy, one that addresses the present moment in scholarly communication. Open access (OA) articles are waxing, Big Deals continue to wane, and new titles still appear even as libraries struggle to maintain coverage. Now that most stakeholders are accustomed to a rapidly changing environment, there is an opportunity to shape the next standard way academic libraries and larger commercial publishers make content available to users.
Last year, I offered recommendations to research institutions and funders on the better routes to achieve equitable open access. Today, I offer a route for managing closed access e-serials in a way that finds the best value for libraries, the most content for users, keeps publishers solvent, and experiments on behalf of equity.
Unbundling “Read” From “Publish”
Two trends in recent library-publisher relations have been the unbundling from Big Deals and the bundling of open access publishing onto read deals. Neither directly addresses the fundamental role libraries play in brokering access to paywalled content from scholarly publishers on behalf of their communities.
Read-and-publish deals bundle a ‘publish’ component onto a preexisting ‘read’ component but, practically speaking, little changes for the read component. And while unbundling from Big Deals does change the structure of read deals, it is not a proactive subscription strategy; it is a retreat from a failed one. While neither trend offers a model for a new read deal, understanding how they shape the current terrain does help us navigate a future path.
Let’s consider the role of equity, which is gaining ground in library decision-making. Read-and-publish increases the overall number of articles a publisher releases as open access, which increases free access for readers; this increases equity. It also standardizes author-side OA publishing fees, decreasing opportunity for under-affiliated authors, which decreases equity. Some will argue that read-and-publish is good for equity and others that it is bad; the continued growth of read-and-publish deals alongside the continued criticism of them suggests both camps have a case.
Setting that debate aside, where can energy be redirected productively? Easy. Consider less controversial frameworks that libraries operate under, such as the desire to maximize fulfillment of local users’ content needs within set budgets. Read-and-publish doesn’t necessarily do this in a ‘read’ subscription context and, unless we consider retreat from Big Deals as advancement in a different direction, the strategy vacuum left in that space is largely unfilled.
Big Deals => Cost-Per-Use
Collection development decisions are informed guesses. Even when these guesses are highly informed, the result is that libraries pay for access to articles their users may not even touch in a given year; meanwhile, special processes are used to access articles from non-subscribed titles. The resulting collections that libraries are able to offer create a series of green lights and stop signs regulating how users read and cite. With thousands of titles competing for scarce budget dollars, how do libraries meaningfully pick and choose in the first place?
Calculating a serial’s cost-per-use (cost divided by use) is a common method libraries use to help decide what to keep and what to cancel, especially when leaving Big Deals. When a cancellation occurs under this method, the library drops its support for journals that have demonstrated less demand than others at that institution, regardless of their importance to other communities.
Big Deals, which offer “great coverage, but poor value”, are further devalued as the share of open access articles increases. Through simple cost-per-use analyses and more in-depth analyses (with tools like UnSub), libraries can lower their costs by retaining subscriptions only to titles deemed high value to their institutions. “High value” in this context refers to the “most highly used individual journal titles” as discussed in recently collected examples.
It makes sense for libraries to subscribe to what their users use and unsubscribe from what they don’t; to pay for what has a price tag and not pay for what is mostly free. When libraries cancel a subscription to a journal, either because many articles are free or because local usage rates are not surpassing acceptable cost-per-use thresholds, it sends a message to publishers which may not be the intended one.
Commercially published journals depend on library subscription money or open access fee funding; an insufficient amount coming from either bucket leaves publishers with little reason to continue support. If libraries do not subscribe to journals based on relatively low usage (a trait that comes with the territory of new, growing, or underrepresented fields), it would make sense for publishers to cut support for those journals – and that’s bad for bibliodiversity. If a journal begins to receive more reliable funding from open access publishing fees than subscription revenues, it would make sense to move that journal to a completely author-pays model, which is bad for under-affiliated authors. Circling all the way back around, if a journal receives adequate funding primarily from subscriptions, the journal continues, albeit with paywalls for readers.
Current decision trees all seem to lead to suboptimal outcomes. This is why something different is necessary, starting with the structure of subscriptions. While there are downsides to using cost-per-use as a tool for collection development, the commonplace nature of using it points toward another possibility. This other possibility is at the heart of what I am proposing: use-based cost.
New Read Deal
I propose that publishers make all of their paywalled content available to a partnered library’s users and, in turn, libraries pay invoices based on total usage of paywalled content at a single flat rate. (As opposed to a bespoke formula based on journal brand value and institutional classification.) Giving users the ability to read everything from a publisher is maximum coverage. Paying only for the paywalled articles that users use is maximum value.
If libraries broker access to a publisher’s full portfolio, institutional readers gain a much better chance of being able to immediately access the article of their choosing. And because access is now a series of user decisions instead of a single binary decision to subscribe or not, small journals gain a fighting chance. (Compare the difference between a political campaign receiving one dollar from a thousand donors rather than relying on a single donor for $1,000.)
The flat rate will be set at a very low bulk rate in exchange for libraries pre-purchasing guaranteed spending levels, based on forecasted institutional use of the publisher’s paywalled content.
Use-based spending exceeding the normal rate of the publisher’s open access fees will automatically flip paywalled articles to open access (regardless of author affiliations), to cap spending and offset the unequal ability among authors to meet open access payment (iterating off previous thinking).
Use-based subscriptions will incentivize publishers to continue the paywall model, which has the upside of not requiring author payments. To offset this continuation of paywalls, surplus uses remaining at the end of each year will be channeled into a free public reading option (also iterating off previous related thinking).
If this plan is “demand driven”, then libraries strengthen the case to their host institutions for continued funding, based on institutional users voting with their [browsers]. Widening the amount of publisher content available to users without delay should, in theory, drive user satisfaction rates as well as alleviate work for access services/interlibrary loan departments.
For publishers, knowing the library’s internal case for funding rests on firmer ground should be cause for relief as well. Exposing users to more content from diverse titles will likely drive usage up, and if invoices are based on usage, then payment goes up. Immediate publisher-side access will catch a lot of current usage not currently captured, whether due to piracy, use of green copies, or loss of interest when less patient users face alternate access options. This plan holds room for income growth for the publisher, but cost increases can be kept to reasonable levels – with beneficial side effects for creating equitable access.
It may strike some as odd to think a publisher would offer a flat rate across all titles, considering how some titles command high subscription rates currently. My first reaction is to cite the old Apple iTunes Store, where songs were all the same price even though some cost much more to produce or were much more popular than others. But I avoid that analogy, because it quickly turns to the question of why not Spotify as a model. I like my Spotify subscription (even though I know it’s awful for artists) because it’s a low price for a lot of content. The Spotify model is also the Netflix model, and Netflix prices are much higher today than they once were. The value of these models is harder to quantify than that of use-based models, which makes it easier to raise prices without real explanation. They are: Big Deals.
A better analogy for this, I think, is the weighted buffet. There is a buffet in my town that offers a takeout option where the customer’s box is weighed and this weight is multiplied by one set rate. While each of the entrees and sides have different ingredient costs and preparation times, the customer only has to consider the total weight of their box and the restaurant’s fixed rate. Here, the publisher is the restaurant and libraries are paying for the to-go boxes once users have filled them up with desired content (or rather, what the library bets their users will desire). This makes that set rate very important, which will be discussed in detail.
Terms for a New Read Deal
What would probably be most useful for the reader at this point is an idea of how you would craft a contract that achieves all of the features promised. I don’t personally have experience with library-publisher licenses and contracts, but here are five points that I think are useful for groups working on a memorandum of understanding.
- Libraries prepay publishers for a predetermined amount of use of publishers’ entire portfolio of paywalled serials content. The use count is charged at a single, flat, negotiated rate.
- “Use” is defined as any instance in which a library user navigates to the full text of a paywalled article. Only the first time that an individual navigates to the full text of an article is counted as a billable “use” per coverage year. Publishers are incentivized to make user navigation to full text as simple as possible.
- A “historical cost-per-use rate” (calculated by dividing the library’s historical total spending by the library’s historical use data of the publisher’s paywalled content) will help determine the future “use-per-cost rate” (at which a library will pay a publisher for usage). The historical use data will help forecast institutional usage levels.
- A “negotiated integer” (meaning a whole number, two or larger, determined through negotiations) will be applied in the following ways.
- The established historical cost-per-use rate will be divided by the negotiated integer in order to determine the future use-per-cost rate. Thus, the (future) use-per-cost (UPC) rate will necessarily be lower than the (historic) cost-per-use (CPU) rate.
- The forecast institutional usage levels will be re-multiplied by the negotiated integer, and that product will determine the spending levels the library will pre-purchase (at the use-per-cost rate) for the lifetime of the deal.
- The deal length will be the negotiated integer expressed as years.
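The ‘use’ definition in the terms above amounts to a per-year deduplicating counter: repeat visits by the same individual to the same full text are not billed twice. Here is a minimal sketch of that rule, assuming illustrative identifiers (`user_id`, `article_doi`) that are my own stand-ins, not contract language:

```python
# Sketch of the billing rule: only the first time a given user reaches the
# full text of a given paywalled article in a coverage year is billable.
# (user_id, article_doi) pairs are illustrative assumptions.

def count_billable_uses(access_events):
    """access_events: iterable of (user_id, article_doi) pairs for one coverage year."""
    seen = set()
    billable = 0
    for user_id, article_doi in access_events:
        if (user_id, article_doi) not in seen:
            seen.add((user_id, article_doi))
            billable += 1
    return billable

events = [
    ("u1", "10.1/a"), ("u1", "10.1/a"),  # repeat view by u1: billed once
    ("u2", "10.1/a"), ("u1", "10.1/b"),
]
print(count_billable_uses(events))  # 3
```

In practice a deal would likely lean on existing usage-reporting conventions (e.g., COUNTER-style unique-item counts) rather than a bespoke counter, but the deduplication logic is the same.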
To illustrate: If, on average, a library historically spends about $100 with a publisher in exchange for the use of about 10 articles, then the historical cost-per-use (CPU) rate would be calculated at about $10.00. If the negotiated integer is decided to be two, then the CPU ($10.00) will be divided by the negotiated integer (“two”) to bring the use-per-cost (UPC) rate down to $5.00. The historic use of 10 articles per year is then multiplied by the negotiated integer (“two”). This means the library now agrees to buy 20 articles at a $5.00 use-per-cost. 20 articles at $5 brings spending back to where we started: $100. The library will guarantee a continuation of spending (of $100) across the lifetime of the deal, which in this case is two years (based on the negotiated integer). For two years, the library buys 20 annual article uses for $100 from the publisher.
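The arithmetic above can be condensed into a small function. This is a sketch using the post’s own terms (cost-per-use, use-per-cost, negotiated integer); the function name and signature are mine:

```python
# Deal-term arithmetic from the worked example: divide the historical
# cost-per-use (CPU) rate by the negotiated integer to get the use-per-cost
# (UPC) rate, multiply historical usage by the same integer to get the
# prepurchased annual uses, and use the integer as the deal length in years.

def new_read_deal_terms(historical_spend, historical_uses, negotiated_integer):
    """Return (use-per-cost rate, annual prepurchased uses, deal length in years)."""
    cpu = historical_spend / historical_uses      # historical cost-per-use rate
    upc = cpu / negotiated_integer                # future use-per-cost rate
    prepurchased_uses = historical_uses * negotiated_integer
    return upc, prepurchased_uses, negotiated_integer

upc, uses, years = new_read_deal_terms(100, 10, 2)
annual_spend = upc * uses  # guaranteed annual spending level
print(upc, uses, years, annual_spend)  # 5.0 20 2 100.0
```

Note that annual spending lands exactly back at the historical level ($100) by construction: the two multiplications by the negotiated integer cancel out, while coverage doubles.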
In this example, where the negotiated integer is two, the library doubles the value of its deal by securing twice the historical article usage for the same spending.
Usage is predicted to spike once the publisher’s full portfolio of paywalled articles becomes available to library users with navigation to full-text fully optimized.
Therefore, the ‘excessive’ purchase of usage (relative to historical use data) is intended to provide a cushion against these predicted spikes. If the cushion is adequate, there will be a surplus of prepaid uses at the end of each year. This surplus will not roll over to the next year, nor count toward future spending discounts. Instead, it will be rolled into a free public use account.
The original intention of pre-purchasing use in excess is to lock in low bulk rates and secure against usage spikes, but if sizable surpluses persist over time, these planned excesses may look, from the outside, like wasted resources.
This free public use mechanism turns a potential downside into an opportunity for publishers and libraries to cooperatively add to the common good. Under-affiliated users gain free and legal read access from the publisher site. Under-affiliated authors who cannot pay an open access fee gain equal odds of having their works read as those authors who can pay.
- Lastly a “Golden Gateway” mechanism will be developed and installed whereby any paywalled article that receives institutional usage (and thus funding) equal to or surpassing a set ‘flip rate’ will automatically be made open access on the publisher site. This set flip rate will be negotiated by the library and publisher, based on historical article processing charge rates. The Golden Gateway also takes a situation with potential to be viewed negatively (‘double dipping’) and turns it into a cooperative opportunity for the library and publisher to gradually open the full scholarly corpus over time.
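As a sketch, the Golden Gateway check reduces to comparing an article’s accumulated use-based funding against the negotiated flip rate. The figures below (a $5.00 use-per-cost rate and a $2,000 flip rate, roughly in the range of an article processing charge) are illustrative assumptions, not proposed values:

```python
# Hypothetical sketch of the "Golden Gateway" flip: once an article's
# accumulated use-based spending meets or exceeds the negotiated flip rate,
# the publisher makes it open access. All numbers here are assumptions.

def should_flip_to_oa(billable_uses, use_per_cost_rate, flip_rate):
    """True once accumulated use-based funding reaches the flip rate."""
    return billable_uses * use_per_cost_rate >= flip_rate

# At a $5.00 rate and a $2,000 flip rate, an article flips at 400 uses.
print(should_flip_to_oa(399, 5.00, 2000.00))  # False
print(should_flip_to_oa(400, 5.00, 2000.00))  # True
```

A real implementation would need to decide whether uses accumulate across institutions and across years, which is exactly the kind of detail left to negotiation.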
The rush toward open access currently fueled by Plan S and the Nelson Memo (unlike the steadfast efforts of open publishing elsewhere on the globe) creates incentives for commercial publishers to push more fee-based open access publishing at the individual (APC) and institutional (read-and-publish) level. This in turn has consequences for scholars (pay the fee or publish elsewhere), for scholarship (publish more articles or else), and for libraries (inefficient deals).
Refocusing attention toward a revised subscription-read model can help preserve the market for publishing models where authors are not expected to pay fees, which means that authors gain or keep journal choices that do not require a payment to publish.
Unlike the paywalled publishing of yesterday, the ‘Bronze Border’ (the free public reading option described above) ensures that under-affiliated users are not necessarily locked out of reading content from this system, and that authors’ paywalled works are not necessarily harder to read than works by authors who pay open access fees. Meanwhile, the ‘Golden Gateway’ mechanism provides a way for current and back-catalog content to open up without reliance on an author-side fee or an institution-specific deal.
Nothing presented in this proposal would force the end of current or future open access spending at the individual, institutional, or national level. In fact, every article published open access would reduce costs under the new read deal, just as the new read deal would relieve the negative side effects of fee-based open access publishing.
Even as I attempt to wrap this all up with a tidy bow, let me acknowledge this will play out much messier in practice. Plans usually do. Dozens of smart people will have hard negotiating sessions, thousands of workers at hundreds of institutions could have workflows altered at least to some degree, and the structure that moves millions of dollars from one account to another will look different than before. It would be much easier to continue on next year just as we did this year and last year. But do you like the current system, its outcomes, and where it’s leading us?
I imagine it will mostly be librarians and publishing professionals who read this. It is you upon whom this hard work falls, and I’m happy to help. The people I don’t expect to read this are researchers and scholars, faculty and students, and people who generally work outside scholarly and scientific fields. We do our work for their benefit.
If we in the kitchen (so to speak) do the work of slicing, dicing, and applying heat, then when the fruits of this labor are presented to our guests, it will appear effortless, elegant, and satisfying. They will publish a paper where they want, either paying an open access fee or not. If they follow a citation, a doi, or a link through a library, the full text will be there instantly. And if they seek an article without a library, it will be accessible to them, too, if enough of us commit to making it so.