
We in libraries have now been complaining about the Big Deal for just over a decade. We complain about it in some of the same terms — and for all the same reasons — that a junkie complains about heroin. We love the Big Deal, we have to have it, it solves some serious short-term problems for us and makes us feel really good, and we know it’s going to kill us eventually.

The problems with the Big Deal are easy to see, and they are primarily two:

  1. It’s not viable in the long run. With the typical Big Deal, you cordon off a very large chunk of your materials budget and assign that chunk to a single publisher, agreeing (in the case of a multi-year contract) to an annual price increase just a couple of percentage points lower than the ruinous market rate (which has historically averaged between 9% and 10% annually for STM journals, but may be starting to decline). You accept the effusive gratitude of the faculty, who are thrilled with their greatly expanded access, and then you wait for the deal to take over your entire serials budget, which it eventually will do — more slowly if your budget is increasing from year to year, more quickly if (as is often the case these days) your budget is flat or flattish. The short-term benefits are both real and significant: your patrons get access to lots of content that you could never provide them at market prices. But the end game is disastrous.
  2. It’s terribly wasteful in the short run. The Big Deal is explicitly structured such that you’re buying access to journals you don’t want or need in order to secure affordable access to those that you do. Depending on your existing subscription list and the content of the Big Deal package, your waste quotient could range anywhere from 20% to 90%. Again, the short-term benefit is huge: dividing the annual price of your deal by the annual number of downloads may show a per-article cost of anywhere from 25 cents (the actual per-article cost at one library I once worked for) to a few dollars. The extremely low per-unit cost for manifestly in-demand content makes it easy to ignore the waste, in much the same way that it’s easy to ignore a leaky faucet if you’re paying very little for the water. The difference is that with a typical Big Deal, the faucet isn’t dripping — it’s running constantly, day and night, all that cheap water going steadily down the drain.
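The arithmetic behind both problems is easy to sketch. Here is a back-of-the-envelope illustration in Python; every figure in it (deal price, increase rate, budget size, download count) is an invented assumption for illustration, not data from any actual contract:

```python
# Back-of-the-envelope arithmetic for a hypothetical Big Deal.
# All figures below are illustrative assumptions, not real contract data.

deal_price = 500_000          # annual price of the package (USD)
annual_increase = 0.06        # contractual increase: a couple of points below ~9%
serials_budget = 900_000      # flat serials budget (USD)
downloads_per_year = 400_000  # package-wide annual downloads

# Short-term benefit: a very low per-article cost for in-demand content.
cost_per_download = deal_price / downloads_per_year  # $1.25 per article

# Long-term problem: a compounding increase against a flat budget.
years = 0
price = deal_price
while price < serials_budget:
    price *= 1 + annual_increase
    years += 1
```

With these toy numbers the per-download cost looks like a bargain, while the same contract, compounding at 6% against a flat budget, overruns the entire serials budget in about a decade: the short-term-good, long-term-disastrous pattern described above.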

OK, so the Big Deal has structural drawbacks. What are the solutions?

I don’t see one on the immediate horizon. This is partly because the dynamics that give rise to the Big Deal are so firmly and deeply rooted in the structure of our scholarly communication system, and have been for hundreds of years. It’s easy to think of the Big Deal as a product of the World Wide Web, as if it were brought into being by the nearly wholesale movement of journal publishing from the print to the online world. In a sense this is true; the networked digital environment creates the economies of scale that make the Big Deal possible.

But in another sense it’s the opposite of the truth. The Big Deal is really just a caricature of an access model that came into being precisely because of the limitations of the print environment: the journal subscription — or, as I like to call it, the Medium Deal.

I’ve taken to calling the journal subscription the Medium Deal because it’s just like the Big Deal, only smaller — it’s scaled to the article rather than the journal title. With the Big Deal you buy access to journals you don’t need in order to get reasonably-priced access to journals you do need. With a journal subscription you do exactly the same thing, only with articles. This was never a good model, but back when information could only be distributed in the form of printed documents, it was the only feasible one. Now, in the era of networked digital information, we still have that print-based mindset, thinking of journal “issues” as meaningful units (which they obviously aren’t, except in the unusual case of a themed issue) and going along more-or-less willingly with the proposition that the only way to get reasonably-priced access to a desired article is to pay for it in an annualized bundle with a bunch of others you don’t want.

I don’t see a solution to this problem either. What would obviously make the most sense is a Tiny Deal, one based on articles rather than journals, one that involves the efficient purchase only of what’s actually needed rather than the preemptive and wasteful purchase of large blocks of unneeded articles. But just because such a model would make sense doesn’t mean it’s feasible. For the Tiny Deal to work for libraries, the price of an individual article would have to be very low (as it is with a Big Deal). For it to work for publishers, the price of an individual article would have to be very high, because relatively few articles would be sold (cf. Joe Esposito’s recent posting on the projected economic impact of patron-driven acquisition on book publishers).

Some of my colleagues reading this will say, “Who cares whether it works for publishers? If a bunch of publishers go out of business, so much the better.” Those who think that way are very often people whose careers depend on their ability to provide access, not people whose careers depend on having prestigious journals in which to publish their papers. Driving a lot of publishers out of business might make some of us feel good temporarily, but I’m not sure it’s a realistic recipe for a better scholarly communication economy.

“Fine,” I can hear my colleagues say, “then this is why we need Open Access.” And maybe they’re right. But no one, not even Stevan Harnad, thinks that OA is going to take over for traditional toll-access models on anything like a broad scale in the foreseeable future. We need a solution for the near- and midterm.

I think the only way we’re going to find a workable solution is if at least two things happen:

  1. Librarians have to let go of the idea that profit is evil. A healthy scholarly communication system is almost certainly going to include publishers that make a profit, and in some cases very substantial profits. I would say this to my colleagues in libraries: if you don’t like that, then tough luck — deal with it, or find a different job. Our task is not to bring about the death of capitalism, but to make possible the scholarship our institutions are charged with creating. As my man Stanley Fish has said, “save the world on your own time.”
  2. Publishers have to commit to creating selling models that are both rational (i.e., not predicated on waste) and viable (i.e., economically feasible in the long run). Why do I say that publishers need to do this? Shouldn’t all members of the content supply chain—authors, librarians, publishers, etc.—contribute to this endeavor? Theoretically, yes they should. Realistically, the innovation is going to come from the commercial side. It almost always does. Commercial entities are driven to innovate by competitive considerations that really don’t exist in academia. In libraries, we generally innovate out of a sense of obligation, whereas companies tend to innovate out of a sense of terror, which is a much better motivator than obligation.

Once again I find I’ve written a posting that raises vexing questions and offers no concrete solutions. I’m not sure how many more of these I’ll be allowed before Kent fires me. In the meantime, maybe you commenters can bail me out: What might a solution to this quandary look like?

Rick Anderson

Rick Anderson is University Librarian at Brigham Young University. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro, as Director of Resource Acquisition at the University of Nevada, Reno, and as Associate Dean for Collections & Scholarly Communication at the University of Utah.

Discussion

54 Thoughts on "The Big Deal, the Medium Deal, and the Tiny Deal"

“Fine,” I can hear my colleagues say, “then this is why we need Open Access.” And maybe they’re right. But no one, not even Stevan Harnad, thinks that OA is going to take over for traditional toll-access models on anything like a broad scale in the foreseeable future.

Really? Put me down in the “no-one” category, then. And I am pretty sure I’m far from alone.

Librarians have to let go of the idea that profit is evil. A healthy scholarly communication system is almost certainly going to include publishers that make a profit, and in some cases very substantial profits.

I should comment on this, if only to clear up misconceptions of my position: I agree. I have nothing against profit per se, and I am all in favour of people finding ways to create value that we’re willing to pay for. My problem is only with publishers who make their profit by doing the opposite of publishing — by restricting access. Anyone who can make money by providing access is fine by me.

Realistically, the innovation is going to come from the commercial side. It almost always does. Commercial entities are driven to innovate by competitive considerations that really don’t exist in academia. In libraries, we generally innovate out of a sense of obligation, whereas companies tend to innovate out of a sense of terror, which is a much better motivator than obligation.

All true, in the general case. In the case of academic publishing, I don’t think the analysis holds, because it omits an important stakeholder: the funder. In a conventional economic transaction, I pay you and you give me something in return. In academic publishing, the funder pays me, and I give you something in return. Now that funders are waking up to what an inefficient use of their money this is, they are changing the game by fiat. The challenge is not whether publishers can find a way to make themselves more appealing to libraries; it’s whether they can change in a way that means they still have a role when funders get fully up to speed.

The biggest threat to Elsevier is not the boycott, or Harvard’s can’t-pay-any-more memo. It’s that a big funder is going to say “screw this, we’ll publish the results we fund”, and once one goes the others will follow.

To avoid that scenario, publishers are going to have to up their game — not by small incremental improvements, but by radical change.

I have nothing against profit per se, and I am all in favour of people finding ways to create value that we’re willing to pay for. My problem is only with publishers who make their profit by doing the opposite of publishing — by restricting access. Anyone who can make money by providing access is fine by me.

Mike, can you clarify what you mean by “publishers who make their profit by restricting access”? Do you mean “publishers who charge money for access”? (If so, then wouldn’t your objection apply just as much to OA publishers like PLoS, who “restrict access” to their publishing services by imposing charges on authors?)

That’s a non-answer. PLoS “restricts access at the point of access” just as much as, say, Elsevier does — it’s just that it’s restricting access to its publishing service rather than restricting access to its content. Each restriction is a very real one that limits the services provided to those who want them. Why is it morally okay to make a profit the way PLoS does (by charging for the right to publish), but not to make a profit the way Elsevier does (by charging for the right to read)?

Surely you don’t truly find the phrase “restricts access at the point of access” so confusing? You must be familiar with the phrase “free at the point of access”?

There are many reasons why freedom at the point of access is more important than at the point of publication. For fear of dominating the conversation, I will give just two.

First, there is an asymmetry of monopoly. If I want to publish my article as Gold OA but I don’t like PLoS’s prices or features, I can take my article to BMC, or any other Gold-OA outlet I wish; whereas if you publish your article behind an Elsevier paywall and I don’t like their prices or features, I can’t go and get it from Wiley instead. So the subscription model imposes monopolies on each paper, and doesn’t give a free market.

Second, there is more to freedom than just the ability to run eyeballs across a paper. True OA publishers (including PLoS and BMC, but also for example Springer when their elective Open Choice option is used) make the work available for ALL kinds of re-use: republication, remixing, translation, text-mining, image-gallery assembly, and a hundred other kinds of reuse that we’ve not even thought of yet. I want to get barriers out of the way of the progress of science, and paywalls impose much more, and much worse, barriers than the “pay” part.

It’s misinformed to claim that as an author, you don’t have a choice of publishing venues — reader pays, author pays, etc. If there weren’t a free market, then publishers couldn’t leave contracts with large commercial publishers and thrive, start new journals, or take papers away from one another; authors couldn’t patronize (or not) favorite journals. This happens all the time.

There is no “monopoly” imposed on any paper. What academics want is priority and prestige, which publication provides. They don’t really want anything beyond that, except to move on to do more science. Fortunately, that’s what society wants, too.

Copyright transfer enables many things, including incentivizing the publisher to protect the priority and prestige of the work, protecting the integrity of the scientific record, and sheltering the time of the scientist. Maintaining copyright is a service publishers have traditionally provided to authors, who cannot defend copyright well alone (and often don’t even know how to do it) and who find such activities distracting, difficult, and unproductive. OA publishers, in the worst case, have kicked this task back to authors as a cost-saving approach, and one that leaves authors vulnerable to their works being exploited without their awareness or permission, and then finding themselves unable to stop it.

As for true research into and across works, permission for such work is routinely granted, even supported, by publishers, academic societies that publish, and others.

Rick is correct — there is a paywall in either OA or reader-pays publication. The wall is just in a different place. You’re debating where the wall should be and how high it should be and who should leap it, not the existence of the wall.

“It’s misinformed to claim that as an author, you don’t have a choice of publishing venues — reader pays, author pays, etc.”

Yes, it would be misinformed for someone to claim that.

I didn’t; I said the opposite.

I guess I don’t understand your point at all then. Are you saying that OA is better because the article can be republished by other publishers?

If this is your claim, please provide:

a) An example of this occurring in credible journals
b) An argument about why it’s better

If that’s not your point, I have to confess I’m not following you.

> It’s misinformed to claim that as an author, you don’t have a choice of publishing venues

It is not what is claimed here; what is claimed is that once a paper is published, the people who pay for it do not have a choice where to get it. In the Gold OA model, the author has the choice and the reader has open access anyway.

What bothers me is that this has been explained here more than once. I cannot imagine that you fail to understand this point after several explanations, so I am left wondering whether the point of your long answers is anything more than convincing anyone reading this site that Mike Taylor’s arguments have been properly answered, in the expectation that the time you have to comment here exceeds his.

This being said, Gold OA is in my opinion not a good model for all those fields where one can do research without funding (like mathematics), because the paywall for publication would become a burden for many. But OA with institutional funding would certainly be sustainable.

So, in a subscription model, those who “pay for it” have no choice of where they have to get the paper because the researcher chose to publish in a (likely free to them) high-quality, well-targeted venue, and the reader pays. Yet in an OA model, the author will want to pay to publish in journals with high impact factors, appropriate readership, or both (ideally). This narrows choices a lot, too. And the reader will get the paper from that source. I don’t see how the author/reader “choice” is much different in either scenario. The reader still has to get the paper from one source, and the author still has to publish with one source. That’s why I posed those questions to Mike. He’s proposing an irrational situation.

We’re talking about price and transactions, not choice. That’s where this stops making sense. You can’t dress up free as freedom.

Want to reply to Kent’s 4:55 post.

My understanding is that, with a CC-BY license, a paper could be re-posted anywhere, on any server. So that, theoretically, the reader could get the paper from anywhere, regardless of the journal or platform/publisher that reviewed, accepted, and did the rest of the publishing work on it.

Scott

Theoretically. However, it doesn’t happen because there is no economic value in doing it. That makes it moot.

Surely you don’t truly find the phrase “restricts access at the point of access” so confusing?

You’re correct; I don’t find it confusing. I find it unresponsive, given the question I asked you. Both models sell access. The toll-access model sells access to content; the Gold OA model sells access to publishing services. Both “restrict access at the point of access” to the thing they’re selling.

There are many reasons why freedom at the point of access is more important than at the point of publication.

I didn’t ask why you find it “more important.” I asked why you find it morally superior.

First, there is an asymmetry of monopoly. If I want to publish my article as Gold OA but I don’t like PLoS’s prices or features, I can take my article to BMC, or any other Gold-OA outlet I wish; whereas if you publish your article behind an Elsevier paywall and I don’t like their prices or features, I can’t go and get it from Wiley instead. So the subscription model imposes monopolies on each paper, and doesn’t give a free market.

Now I am indeed confused by your response. There’s an “asymmetry of monopoly” in virtually every economic exchange — at least every one that involves money (since no buyer has any monopoly over money, whereas most sellers have at least some degree of monopoly over their product or service, or else they couldn’t command a price for it). As an author, you can only choose to publish with a Gold OA outlet if you have sufficient money to pay for the privilege. It’s true that there’s more than one Gold OA publishing option, but there’s far more competition for your article (which is to say, far less seller-side control over access to the publishing service) between toll-access publishers than between Gold OA publishers.

Second, there is more to freedom than just the ability to run eyeballs across a paper. True OA publishers (including PLoS and BMC, but also for example Springer when their elective Open Choice option is used) make the work available for ALL kinds of re-use: republication, remixing, translation, text-mining, image-gallery assembly, and a hundred other kinds of reuse that we’ve not even thought of yet. I want to get barriers out of the way of the progress of science, and paywalls impose much more, and much worse, barriers than the “pay” part.

So now it sounds like your real concern is with the concept of copyright itself, or at least with the exclusivity of rights that comprises copyright. That’s an important subject, but I think a different one.

Are you planning to engage with the actual issues here at all? Or are you just going to be pointlessly picking holes?

Mike, I’m doing my best to respond as substantively as I can to the points you’re raising (or at least to the ones that are relevant to my posting). If you have substantive objections to my responses, please say what your objections are. I’ll be happy to address them as best I can.

“There’s an “asymmetry of monopoly” in virtually every economic exchange … now it sounds like your real concern is with the concept of copyright itself.”

“Your defect is a propensity to hate everybody” [Elizabeth said to Darcy].
“And yours,” he replied, with a smile, “is wilfully to misunderstand them.”

— Jane Austen, Pride and Prejudice, chapter 11.

Mike, I assure you that if I’m misunderstanding you, it’s not wilfully. Are you suggesting an inconsistency between the two extracts of my comments that you spliced together here? I apologize, but I honestly can’t figure out what point you’re trying to make. Give me something less gnomic to work with, and I might be able to respond better.

OK, Rick. I’ll explain myself over on my own blog, where I can take a bit more time and space over it without risking overrunning your space with my thoughts. I’ll post a note here with a link when I’m done.

BTW, I’d appreciate it if you’d add a note to your comment of May 30, 2012, 1:12 pm saying that it was edited after I posted my followup. The way the thread is right now, my question about whether you’re going to engage with the issues looks dumb.

[Editor’s Note: The only change to Rick’s comment was a formatting change.]

Mike, the only change made to my comment after it posted was the insertion of a couple of [blockquote] tags I had accidentally left off. Nothing about the content of my comment was changed. My apologies if the formatting error led you to respond in a way that you feel put you in a bad light.

Thanks, Rick. Because of the missing [blockquote] tags, I just saw a mass of grey after your first comment — so I completely missed the parts where you actually responded to the points I was making. Hence my own response “Are you planning to engage with the actual issues here at all? Or are you just going to be pointlessly picking holes?” which is a completely inappropriate reaction given that you did engage.

Rick,

I would say two things. One approach on the publisher side, at the “medium deal” level, is to work to make as few of those journal titles “waste” as possible–i.e., to improve the quality of those journals on a title-by-title basis. This isn’t theoretical, FWIW. It’s something I have spent a good deal of my working time on for the last five-plus years. I realize, though, that this might make things more difficult for librarians if a greater proportion of a Big Deal is good and not waste.

Secondly, as to those who would think driving publishers out of business might be a good thing… Has anyone given thought to the fact that people who work at publishers are all university/college alumni, and part of the donor base?

Scott

(I work at Springer, my opinions are solely my own.)

While I applaud all work being done to improve the quality of journal content, I think you and I are defining “waste” differently. You’re defining it in terms of quality (more low-quality content = more waste) whereas I’m defining it in terms of relevance and utility. As I’ve argued elsewhere, money spent on an excellent book or article that doesn’t serve the actual needs of my patrons is money wasted; money spent on a mediocre book or article that does serve the needs of my patrons is not wasted. So increasing the quality of journals on a title-by-title basis really can’t solve the problem of waste that is inevitably created by both the Big Deal and the Medium Deal. In this context, “waste” is defined locally, where the money is spent and the content is used — not globally according to objective standards of quality.

The underlying assumption is that there is a causal relationship (along with other factors, such as the breadth and amount of work being done in a field) between the quality of work in a journal, how much it gets used, and how needed it is. So improving the quality of a journal is supposed to have the follow-on effect of improving its usefulness. Unless that underlying assumption is flawed? 🙂

BTW, how do consortia arrangements play into this? That is, if certain titles aren’t of much use to your patrons, but are of high importance to the patrons at one of your consortium partners, and you’re all buying in together?

Scott

The underlying assumption is that there is a causal relationship (along with other factors, such as the breadth and amount of work being done in a field) between the quality of work in a journal, how much it gets used, and how needed it is.

That is certainly the underlying assumption, and I believe it’s wrong. Or at least, I’d put it this way: that while there’s definitely a relationship between quality and usage, that relationship is not causal (or at least not determinative). A very high-quality but low-relevance article is not likely to be used, despite its quality. A high-relevance article of only moderate quality is much more likely to be used. And a very low-quality article or book may be of very high utility — sometimes precisely because of its low quality. It all depends on what scholarly task one is trying to accomplish.

As for consortia: a consortial arrangement tends to increase “waste” (as I’m defining it) but decrease unit cost per participant. Libraries don’t generally enter into consortial arrangements in order to increase efficiencies; we usually enter into them in order to save money.

Oh, I see.

I would, though, include utility and community interest as a very important part of defining what I mean by “quality,” in terms of which papers journals want to accept–that is, to publish more material that will get used and cited more.

Scott

Rick,
Wonderful summary of the situation. I’m hopeful that Open Access and new publishers with new business models will provide enough competition for established publishers that they will change their business models. Or profit margins.
Robin

I agree that, from the librarian’s perspective, buying at the article level would make the most sense. As Joe has argued though, this only provides very unpredictable income to content providers, and it’s difficult to maintain full time staff and infrastructure without predictable income. As such, most publishers have priced individual articles so high as to make them very unattractive to libraries. Perhaps a third party will come along, sell articles at a loss, and make up the difference with ad revenues.

I think Joe makes a very good point about predictability of revenue streams — that’s one of the reasons the subscription model is so attractive to publishers. But the ability to sell content in bundles is also attractive in and of itself; it means not only predictable revenue streams, but more revenue.

Rick,

I had the opportunity to see you talk at the CSE meeting last week. (I’m the one who asked you about permanence/ephemeralness and about Netflix.) Since then, it occurred to me that your argument is essentially the same as the argument for à la carte cable television, and one of the basic arguments against unbundling cable doesn’t seem to come up in your discussion.

Many argue (here, for example, http://www.theatlantic.com/business/archive/2011/06/why-cant-we-unbundle-cable/239849/) that unbundling will just raise the price of the individual pieces bought à la carte to a point that it is the same as what you pay for the bundle anyway, so what’s the point in unbundling?

The publisher still has to cover the same fixed costs to produce the journal, and the à la carte prices will have to be set in a way to have revenue meet expenses. And since the marginal cost of giving you stuff that you don’t want is basically zero, why not?

I guess my question, then, is: what evidence is there that buying content per article will save you money?

Joel, you’re pretty much restating the concern I expressed in the piece above: for the Tiny Deal to work, libraries would need the per-article price to be very low, while publishers would need the per-article price to be very high. As things stand, you’re absolutely right that the Big Deal offers excellent pricing — the philosophical problem is that the model is very wasteful; the practical problem is that the price (excellent though it may be on a per-unit basis) can’t be sustained in the long run.

I guess maybe I just don’t agree with your assessment that getting access to something you don’t need at essentially no additional cost is “waste.” If a bundle gets you what you need at a lower cost, who cares that you also get stuff you don’t need?

My assumption here is that all of these “deals” are for online content, right? If they were forcing you to subscribe to a physical journal, where the actual physical copy would be wasted (ink, paper, shipping cost, etc., all have real marginal costs), then I would understand how that would be waste. But I don’t think that you can “waste” something that has a marginal cost approaching zero.

It’s that word “essentially” that causes the problem here. Getting access to something you don’t need at zero cost does, indeed, involve zero waste. Getting access to something you don’t need at some cost (even a low cost) does involve waste. And the Big Deal involves very significant cost — it’s just that the cost per unit is low.

But when you get a bundle, how do you know how much of the cost of the bundle is going to pay for various parts of the bundle?

My basic contention is that if the pricing structure went to à la carte purchasing, libraries would end up paying as much as they do under the current pricing structure for the bundle because prices for individual journal subscriptions and/or individual articles would rise in order to cover the same fixed costs of production. And since you would still be paying the same price for less “product,” anything else you get for that same price in a bundle is free. (When I said “essentially,” I didn’t mean “near zero,” I meant actually zero; the “essentially” was to hedge because the extra content isn’t actually free, it’s just free above and beyond the stuff you actually want.)

Just like cable TV, the things you want may not be the same as the ones I want, and we end up subsidizing each other’s low unit costs. The only place where I see possible cost savings would be the discontinuation of journals that nobody at all wants (which may be a good thing). However, there may be low-demand journals that still serve important functions in a discipline or community of users and that may not be able to continue without being subsidized as part of a package.
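The fixed-cost point here can be made concrete with a toy model. All of the numbers below are invented assumptions purely for illustration, not figures from any real publisher:

```python
# Why à la carte prices tend to rise: the publisher's fixed costs don't
# shrink when libraries buy fewer units. All numbers are illustrative.

fixed_costs = 1_000_000       # annual cost to run the journal program (USD)

# Bundle model: every subscribing library pays for everything, wanted or not.
bundle_sales = 200            # libraries buying the package
bundle_price = fixed_costs / bundle_sales            # per-library package price

# À la carte model: only genuinely wanted articles are sold.
articles_published = 10_000
wanted_fraction = 0.20        # cf. the 20-90% waste quotient discussed above
articles_sold = int(bundle_sales * articles_published * wanted_fraction)
per_article_price = fixed_costs / articles_sold      # price needed per article

# Per library, the à la carte bill converges on the old bundle price:
per_library_cost = per_article_price * articles_published * wanted_fraction
```

Under these toy assumptions, the à la carte bill per library lands exactly where the bundle price was, because the same fixed costs are simply being recovered over fewer units sold at a higher unit price.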

But when you get a bundle, how do you know how much of the cost of the bundle is going to pay for various parts of the bundle?

Exactly. You don’t know until after you’ve bought it and have had a chance to measure usage. (And actually, we tend not to think in terms of “parts of the bundle” — we tend to measure the value of a Big Deal in terms of cost per download.) But there’s one thing of which you can be very sure: you’re going to end up paying for a whole lot of content that doesn’t get used. Are you likely to end up with a good price per download? In many cases, yes; that’s the upside, and it’s a very real practical benefit. The downside, in practical terms, lies in the annual price increase (typically somewhere between 5% and 8%), which will invariably prove unsustainable with any Big Deal. The downside in philosophical terms is the amount of waste represented by the unused content that you purchased.

My basic contention is that if the pricing structure went to à la carte purchasing, libraries would end up paying as much as they do under the current pricing structure for the bundle because prices for individual journal subscriptions and/or individual articles would rise in order to cover the same fixed costs of production.

That’s the conundrum I tried to express in my posting: for an à la carte model to work for libraries, the unit price would have to stay low; for it to work for publishers, it would have to be high. As I said, I don’t see a clear solution to this problem.

Okay. I see how we’ve been talking past each other now. I thought that when you say “waste,” you mean money (or something tangible/consumable that costs money, e.g., paper or staff work hours), and so I was tailoring my argument to address the cost/pricing structures under the two possible models (which I still don’t think would be lower in a per-article payment structure).

I don’t think I understand your basic philosophical point. What is being wasted when you have access to something that you end up not using? There’s no marginal cost to create additional “copies” of the articles and if no one at your library ever accesses them, you’re not even using up any IT resources (i.e., bandwidth) for them.

I think your analogy with water is a false analogy because water is tangible and consumable and access to content that you may or may not use is neither tangible nor consumable. I contend that you can only “waste” things that are either consumable or tangible.

So, what’s being wasted?

I thought that when you say “waste,” you mean money (or something tangible/consumable that costs money, e.g., paper or staff work hours)

That’s exactly what I mean, but in a way the question is kind of abstract and academic. When you buy into a Big Deal, you are paying for a huge bundle of content, only some of which will end up being used. As I see it, what’s not used represents wasted spending; you bought it, you paid for it up front, and no one is using it. You’re right that the Big Deal model, though wasteful in that it requires the purchase of unwanted content, often yields a lower per-download cost than other, less wasteful models, like purchasing individual articles on demand. But I’m trying to keep the issue of rationality (does it minimize waste?) separate from those of effectiveness (does it work?) and sustainability (can it be maintained in the long term?).

All three of those issues matter, but they are to some degree independent of each other. An irrational model can be effective if it yields the desired utility and it can be infinitely sustainable if the overall price (as distinct from the per-article cost) is low enough. That’s why I characterize the wastefulness of the Big Deal as a philosophical problem rather than a practical one. I don’t like waste as a matter of principle, but I could live with it in the case of the Big Deal if it weren’t for the practical problem, which is that even though the Big Deal yields very low per-article cost, the price of the bundle (which is the price we actually pay) always increases at a manifestly unsustainable rate. Cost and value aren’t the same thing; a deal may represent a very good value, but that doesn’t matter much if you can’t afford to pay the invoice. That’s the structural and practical problem with the Big Deal: although it represents arguably very good value for money (despite its wastefulness), the way its pricing is structured ensures that no library can continue to pay for it in the long run. The waste issue matters, but isn’t determinative. The cost issue is.

As I see it, what’s not used represents wasted spending; you bought it, you paid for it up front, and no one is using it.

This assumes that you could get just what you need for a lower price than what you pay now for the bundle, and I contend that you can’t. So, for example, if

Stuff you need + Stuff you don’t need = $10,000 (bundled),
and
Stuff you need = $10,000 (à la carte),
then
Stuff you don’t need = $0.

I guess my basic contention is that you’re not actually paying for the content that doesn’t get used due to the math above.
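The arithmetic above can be sketched in a few lines. This is a minimal illustration of the commenter's assumption, not a claim about real prices: it assumes that under à la carte purchasing, per-title prices would rise until the needed titles alone cost as much as the whole bundle does today.

```python
# Hypothetical figures illustrating the commenter's argument; the
# $10,000 values are assumptions taken from the example above, not data.

BUNDLE_PRICE = 10_000        # price of the Big Deal bundle (needed + unneeded)
A_LA_CARTE_NEEDED = 10_000   # assumed price of only the needed titles, after
                             # per-title prices rise to cover the publisher's
                             # same fixed costs

# Implied marginal price of the unneeded titles included in the bundle:
unneeded_price = BUNDLE_PRICE - A_LA_CARTE_NEEDED
print(unneeded_price)  # 0 -- on this assumption, the unused content costs nothing extra
```

On this assumption the "wasted" content has a marginal price of zero, which is the crux of the disagreement: Rick's reply below treats waste as an efficiency question rather than a marginal-price question.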

(The issue of unsustainable yearly price increases is a separate issue that I’m not going to argue on.)

This assumes that you could get just what you need for a lower price than what you pay now for the bundle, and I contend that you can’t.

No, my position really doesn’t assume that. I think we’re talking past each other here, and consequently our conversation is going in circles. It seems to me that you’re arguing from value, whereas I’m arguing from efficiency. In your view (if I’m understanding you correctly) it makes no sense to talk about “waste” when you’re getting the content you need at the lowest possible price. It seems to me that you’re conflating efficiency and value, while I’m trying to consider those two variables separately. A proposition like the Big Deal offers both high waste and high value; even if it represents the highest-possible value proposition, I remain concerned by the waste represented by the fact that I’m paying for content that doesn’t get used.

But then there’s the third variable: sustainability. To me, that’s the trump card, because it’s the only variable that actually constrains the long-term feasibility of the deal. I can live with a certain amount of waste; I can live with different levels of value. But I can’t live with 7% annual price increases, because that rate of price increase makes the deal objectively unsustainable in the long term.

A librarian presumably knows which journals are needed for their patrons. They could buy these journals individually. If a Big Deal includes all the needed journals at a lower cost, it is a rational economic choice. Nice-to-have, but not necessary, journals in the Big Deal are just an extra benefit, not necessarily waste.

A librarian presumably knows which journals are needed for their patrons.

There are several problems with this presumption, Ken. The first problem is with the idea that librarians can, in fact, know what patrons are going to need. In reality, we can’t — we can only guess (based on better or worse data), and we very often guess wrong. The second problem is with the idea that what patrons need is access to journals. They don’t. In fact, you can argue that there’s no such thing as access to a “journal.” What they need is access to articles, and the problem with current access models is that they require libraries either to buy articles in large batches at low per-unit prices (which means buying what isn’t needed along with what is, which is wasteful) or to pay very high prices for specifically-needed articles. That’s the dilemma that both libraries and publishers are currently struggling with. Well, one dilemma anyway.

Actually, Rick, librarians can learn a lot about what journals and what articles their patrons are accessing online; I would think publishers would happily provide this info. So, a librarian can know from past history that their patrons are likely to access X articles from journal A. A cost comparison among individual access, a subscription, and the Big Deal is possible, if a librarian is willing to invest the time. I wonder if there’s a market for this “comparison shopping” service for librarians?
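The "comparison shopping" Ken describes might look something like the sketch below. Every price and usage figure here is invented for illustration; the point is only the mechanics of comparing pay-per-view, title-by-title subscription, and a bundle against predicted usage.

```python
# Hypothetical comparison of three purchasing options, given last year's
# download counts as a (fallible) predictor of next year's usage.
# All journal names, prices, and counts below are invented.

expected_downloads = {"Journal A": 120, "Journal B": 8, "Journal C": 0}

PER_ARTICLE_PRICE = 30       # assumed pay-per-view price per article
SUBSCRIPTION_PRICE = 2_000   # assumed flat price per individual title
BIG_DEAL_PRICE = 5_000       # assumed price of the whole bundle

# Option 1: buy every predicted article individually.
pay_per_view = sum(expected_downloads.values()) * PER_ARTICLE_PRICE

# Option 2: subscribe only where predicted usage justifies it,
# and pay per article for the rest.
mixed = sum(
    min(SUBSCRIPTION_PRICE, downloads * PER_ARTICLE_PRICE)
    for downloads in expected_downloads.values()
)

# Option 3: take the Big Deal at its flat bundle price.
cheapest = min(pay_per_view, mixed, BIG_DEAL_PRICE)
print(pay_per_view, mixed, cheapest)
```

Of course, as Rick's reply notes, the comparison is only as good as the usage prediction feeding it: the numbers are known only after the spending decision has been made.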

Actually, Rick, librarians can learn a lot about what journals and what articles their patrons are accessing online; I would think publishers would happily provide this info.

Oh, absolutely — and publishers do, in fact, provide such information routinely. But the data isn’t available until after the usage has happened, and unfortunately payment must always be made up front. So at the point of payment, we librarians are in fact making a guess about future use. The same goes for buying books. Under a traditional collecting model (whether based on a Big or Medium deal), we can’t measure our patrons’ use of a journal or a book until the use has happened, and the use can’t happen until after purchase. One nice thing about both the Tiny Deal and patron-driven book acquisition is that this traditional collecting model is turned upside down — money isn’t spent until need has actually been demonstrated.

My point is, past usage is a pretty good indicator of future usage. Not perfect, of course, but good enough to determine the most likely lowest cost choice. Beats guessing!

And my point is that using past use to make predictions about future use doesn’t “beat” guessing — it’s a guessing technique.

I think this post is too conciliatory. Publishers, eager to maintain their big profits, have been quick to jump on the open access bandwagon by charging excessive amounts for open access. There are several mathematics journals that charge neither author nor readers. http://www.dcscience.net/?p=4873

I hope that everyone will sign up to the boycott Elsevier pledge. It would actually benefit science if people like Elsevier and NPG went out of business. The benefit would come not only because the money spent on their minimal services could be spent better in other ways. It would also break the stranglehold that a handful of high impact factor journals exert on the psyches of young scientists. Admittedly, the primary blame for this does not lie with publishers: they just cash in on it. The blame lies with senior academics, funders and administrators who have not bothered to read the literature which shows that impact factors are a nonsensical way to evaluate quality. In the absence of NPG people might actually be forced to read papers.

Rick, while there’s much to say in relation to this SK piece, I’ll limit to a couple of comments. First: about the following sentence re. your “Medium Deal:” “This was never a good model, but back when information could only be distributed in the form of printed documents, it was the only feasible one.” Though a rhetorical device, the actual statement makes little sense. It’s much like saying, “riding on horses was not a good transport model, but that’s all there was at the time.” Well, if journal publishing was the only feasible model (ditto horses versus airplanes) then to characterize “the model” as “not good” is pointless. In fact, in addition to feasibility, the journal “model” was a way for learned societies of those days to bundle like with like as a convenience to readers. I’m sure that convenient bundling still serves a purpose these days.

Second, your conclusions/remedies, which as you say are imperfect, are also a little lame (more or less like the ending of the new Jo Nesbo detective book – The Phantom – I just finished, where the author took an easy but unsatisfactory way out of the story). I would argue that the “big deal” has high value when licensed consortially, as there are probably pretty much no unused articles during the course of a given year; but, for any single institution the “big deal” has the drawbacks you describe. So one of the “solutions” is about how libraries should best think together about their collective access to e-resources such as “big deals” for e-journals and now increasingly for e-books. What are we trying to accomplish and what is the fairest way to divvy up costs?

Insofar as what publishers can “commit to,” methinks it’s what I call the “flexible deal,” wherein said publisher offers the “big deal” to those who want it; or the “medium” deal (such as a subject or title-cluster or desired list of journals); and the “per article” deal, which offers a bulk, affordable way to buy just the articles a library wants/needs. We know that a number of publishers are doing these things already, and that a problem at the “per article” level is that the per-article pricing is still as high as the “big deal” is for a compilation of journals. I’d like to think that articles could cost the same as songs on iTunes, but no signs of that so far. It seems to me that bulk-article access is more affordable from renting entities (DeepDyve) or via an arrangement with entities like the CCC. Gone on too long, let me stop here. Ann

First: about the following sentence re. your “Medium Deal:” “This was never a good model, but back when information could only be distributed in the form of printed documents, it was the only feasible one.” Though a rhetorical device, the actual statement makes little sense. It’s much like saying, “riding on horses was not a good transport model, but that’s all there was at the time.” Well, if journal publishing was the only feasible model (ditto horses versus airplanes) then to characterize “the model” as “not good” is pointless.

Ann, your point would be well taken if it weren’t for the fact that in the current publishing environment, we still have lots of people selling horses for transportation, and lots of people buying them. The fact that people keep selling, buying, and riding this “horse” (the journal subscription) despite its obvious weakness as a transportation technology is what makes it seem necessary to point out its problems—along with the fact that those problems are intrinsic (not just relative to current alternatives) and have always existed, even when they were obscured by the lack of other options.

I would argue that the “big deal” has high value when licensed consortially, as there are probably pretty much no unused articles during the course of a given year; but, for any single institution the “big deal” has the drawbacks you describe.

As I said in my posting, I agree that the Big Deal has high value, even when it entails significant waste. And you’re right that in a consortial context, waste will tend to be much lower—there may even be no waste at all, if you evaluate efficiency at the consortial level. When that’s the case, then the only problem remaining is that of manifest unsustainability (assuming currently prevailing levels of annual price increase). Unfortunately, manifest unsustainability is a pretty big problem. And value has no effect on it.

Hi Rick, thanks for this thought-provoking piece. I have just a few thoughts that I don’t think have been put forward in the discussion above (I’m an editor of a society journal, but I’m speaking for myself off the top of my head, not for my employer).

I think that switching to the Tiny Deal model (i.e. selling each paper individually to the individual who wants to read it) would hugely raise the stakes of a scientific career, as the scholarly output of scientists would immediately have a direct, measurable financial value (at the moment I’d argue that not only is the financial value more vague owing to the Medium Deal, but that it isn’t even considered in a specific way – papers are accepted, at least at our journal, on their academic merits alone). I’m not sure this even more direct linking of a certain kind of scholarly value with financial value would be in the best interests of scientists (I mean specifically the human beings trying to build a career). I say ‘certain kind’ because I can imagine some papers with high scholarly value in the long-term would perversely have little financial value in the short-term (i.e. they’re important to the scientific record, but no one needs to actually buy them right now for their value to be realized – we can wait the 12 or so months until they are free).

If we were to switch to a Tiny Deal model, would those scientists whose papers then never sell lose all funding and publishing opportunities as a result (in an even more direct fashion than the same caused by a string of poorly cited papers)? Would they in fact owe a journal money in order to allow it to break even on their paper (i.e. would they have to convert their paper to OA if it cannot be sold)? Would those scientists whose papers sell well be able to negotiate publishing deals with publishers who would now pay them for their work? Perhaps a share of profits would be felt to be fairer to authors, but what would this do to academic standards? Would we not end up with an even more cut-throat marketplace and even greater competition to publish than we have under the current system? I would argue that there is a distinction between journals putting up with the low citation of a paper and putting up with making a direct and measurable loss on a paper (they are businesses, after all).

One last point: ultimately, publishers are only selling what scientists are providing. If it’s not good enough for people to want to buy, won’t the repercussions for authors (i.e. the scientific community) be as drastic and financially direct as the repercussions for the journals? Aren’t we really saying that some scientists just aren’t good enough and there shouldn’t be anywhere for them to publish, period? The Medium Deal is a very ‘socialist’ approach to publishing in my view (the stronger papers carry the weaker ones to the benefit of all), while the Tiny Deal would be pure capitalism (weaker papers that don’t sell will not be tolerated). At the moment the market usually allows weaker papers to find a journal sustained by the Medium or Big deals. If we get rid of ‘waste’ (defined either in terms of relevance or of quality), aren’t we ultimately going to be getting rid of a perhaps significant minority of scientists? I suppose they can always ‘vanity publish’ in low-tier OA journals. What do you think?

Sam, thanks for these very good and thoughtful comments. I think they boil down to one essential (and important) question: to what degree should market-based risk play a role in the creation and dissemination of scientific information? Right now, risk is to some degree distributed across the whole system: an author performs an experiment with no guarantee that the resulting report will find a publisher; a publisher creates a journal without knowing for certain how many libraries or individuals will subscribe; libraries subscribe to journals without knowing what percentage of the price will turn out to have been well spent; individuals subscribe to journals without knowing what percentage of the content will prove useful to them.

In each of these cases, someone invests resources up front with no guarantee of return. You’re right that under the Tiny Deal arrangement, most of the risk would be shifted away from the library–there would be no risk of paying for something that the patrons don’t want–and onto the publishers. (Some librarians will object that the old risk is simply replaced by the new risk of spending one’s budget in an uncontrolled way, but that can be prevented structurally–though not painlessly.)

Is this okay? That question isn’t easy to answer definitively, because it’s an “ought” question rather than an “is” question. On the one hand, you’re absolutely right that it’s not possible to know the ultimate relevance and utility of a scientific study from the beginning; what looks marginal and irrelevant today may be urgently useful a week from now, or a year from now, or a decade from now. On the other hand, since that can be said of virtually any study, that fact doesn’t help libraries (or publishers) make good use of their strictly limited resources. There has to be discrimination; publishers can’t publish everything that’s submitted, and libraries can’t buy everything that’s published. So given that that’s the case, how should libraries discriminate if not on the basis of actual utility to their real-world users?

Would there be real downsides to the Tiny Deal? Of course. There are downsides to every kind of deal. The question we have to ask about every arrangement is how the downsides balance with the upsides. Unfortunately, it’s almost never possible to see all the downsides and all the upsides ahead of time.
