Last week I posted a piece on Green Open Access (OA), in which I argued that Green OA will have an impact on the purchasing decisions of librarians, who would be more likely to cancel subscriptions, all other things being equal, if a journal were OA than if it were not. The proximate cause of my post was a mailgroup thread in which my fellow traveller, Rick Anderson, made the very same points. The discussion about this topic continues, with Rick now taking on the Herculean task of defending common sense. You can see more debate on this topic by searching the liblicense archive, which is now the focus of the conflict.
So in what ways has the conversation been extended?
1. A lost presentation has come to light through the diligence of my personal information pack rat, Terry Ehling of Project Muse. Terry keeps copies of EVERYTHING and has turned up the slides I referenced in my earlier post, which I have now uploaded to SlideShare and you can view at the bottom of this post.
Everyone should have a personal pack rat, and everyone should have a personal librarian. Like the names of good babysitters, the names of personal librarians are carefully guarded.
I wrote that presentation 8 years ago. What has changed? Well, the basic thrust of the presentation still holds today. Indeed, many of the publishers I work with are dealing with these very issues, namely, the cancellation of library subscriptions for a multitude of reasons including the availability of material from more than one source. Green OA is just one of these sources.
2. It could be argued that the very fact that the presentation is 8 years old proves the point that Green OA has not made a dent in library subscriptions. That would be incorrect. Small publishers are experiencing subscription declines and are taking steps to eliminate multiple sources for their material. Here again Green OA is but one of those sources. The form this takes is rarely tougher policies and almost never litigation, a mostly useless effort since chasing down unauthorized copies is like a game of Whac-a-Mole. One response has been the growth of hybrid journals, where the Gold OA alternative within the structure of a traditional journal preempts Green OA deposits. Another response, with bigger implications, is to partner with larger publishers, who have better resources to deal with Green OA and the current market environment. Thus a small publisher that sees subscriptions falling off cuts a deal with Elsevier, Springer, Wiley, etc., where all the publications are sold in aggregations at high (and higher) prices. Green OA, in other words, is a contributing factor–one of many–in the growth of the Big Deal and price increases.
3. Many participants in this conversation continue to ask, “Where’s the data?” One way to get the data would be to step into a time machine, travel 10 years into the future, assess the status of STM publishers, and then return to report what that future looks like. To get the data you have to be a time traveller because what is at issue here are forecasts, not historical events. Publishing works that way; it’s a game of predictions, some good and some not so good. Publishers make judgments about which authors to support, which fields will grow, and what their rivals are likely to do. A publisher that does not act today on what Green OA implies for tomorrow could be in big trouble in a few years. And if Green OA proves to be a whimper in the marketplace, what has the publisher lost by acting on it today?
4. arXiv is always offered as proof that Green OA does not affect publishing economics, but the claim doesn’t hold. First, arXiv lacks the structure of most formal publications (How do I know if this is the final version of an article?) and is thus not a clear substitute for subscriptions. Second, arXiv certainly has affected publishing economics in those fields where it has achieved institutional status. Every publisher must weigh the significance of arXiv before investing in a new journal, and arXiv’s very existence puts downward pressure on pricing. Beyond that, however, is the simple fact that what is under debate here is the cancellation of journals on the margin. Well-established journals that are at the center of a discipline are not at risk. What are those marginal journals that arXiv does or does not affect?
5. Since publishers must act with foresight, they are largely (if unhappily) supporting this paradigm: to protect the economics of a journal, make it available in an aggregation, which makes the journal less vulnerable to cancellation. Thus big companies acquire big companies and small entities forge publishing partnerships with larger entities. Invariably this means higher prices. Thus the core strategy is to safeguard a journal from cancellation (for any number of reasons, Green OA among them) by developing arrangements that impose a greater tax on libraries.
This situation cannot last forever, but it has several years left in it. In the meantime, Green OA will contribute to this paradigm, putting a greater and greater burden on libraries. This is the Law of Unintended Consequences at work–a law, it must be said, that prefers to haunt those who struggle to understand how the economy works. Green OA works best when it doesn’t work well. It thrives on disorganization and challenges to discovery. When it works beautifully it undermines the seed, the original publication, and goes away.
6 Thoughts on "The Conversation on Green OA Continues"
“arXiv’s very existence puts downward pressure on pricing” – well that is exactly what we want!
Downward pressure on pricing sounds great, but in reality, it puts a lot at risk if taken too far.
Most STM publishers are small — 81% of the market consists of small companies splitting just 16% of the revenues. If prices are low, there is little incentive to launch new journals and increasing risk that current journals and products will go away or have limited futures. And these companies have little room for error.
These price pressures bother the big publishers much less, meaning that downward pressures applied indiscriminately play into the consolidation in the industry — that is, the big get bigger.
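A quick back-of-envelope calculation makes the asymmetry in those market-share figures concrete. This is an illustrative sketch only: it assumes the quoted percentages (81% of companies splitting 16% of revenues) and expresses each group's average revenue per company relative to the market-wide average.

```python
# Illustrative arithmetic for the market-share figures quoted above.
# Assumed inputs: 81% of publishers are "small" and split 16% of revenues.
small_company_frac = 0.81
small_revenue_frac = 0.16

large_company_frac = 1 - small_company_frac  # 0.19
large_revenue_frac = 1 - small_revenue_frac  # 0.84

# Average revenue per company in each group, relative to a
# market-wide per-company average of 1.0.
avg_small = small_revenue_frac / small_company_frac  # ~0.20
avg_large = large_revenue_frac / large_company_frac  # ~4.42

print(f"average small publisher: {avg_small:.2f}x the market average")
print(f"average large publisher: {avg_large:.2f}x the market average")
print(f"large-to-small ratio: {avg_large / avg_small:.1f}")
```

On these assumed numbers, the average large publisher takes in roughly twenty times the revenue of the average small one, which is why uniform downward price pressure squeezes the small players first.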
Green OA is not a panacea, and has a reflexive problem at its core — the better it works, the more it hurts what it needs to work.
I have been involved in running or working at the world’s largest subscription agents from 1980 to 2009 and have collected data and monitored cancellation rates almost as a religion. Publishers have always experienced cancellations. At one time libraries bought the first-, second-, and often third-tier journals in most fields. Today that is not true. The big deals and custom deals have indeed captured much of library budgets. The small publishers have taken a larger hit than the big guys. I am still laughing at the concept that arXiv has held down pricing… Nothing could be further from the truth. I watched closely as the Physics alternatives developed and expected to see wide cancellations. Well, it didn’t happen. I do think that if a title is a Green OA title, it will be in the crosshairs for cancellation. The biggest shift in libraries is moving money from monographs to serials. Look at any of the large ARL institutions and compare their expenditures on books over the past ten years. Another big change is the amount that libraries are spending on databases.
The biggest shift in libraries is moving money from monographs to serials.
That’s exactly right, and for those who are paying attention this fact portends bad things for journal subscription levels in the future. Because obviously, you can’t take money away from the book budget forever. Once you’ve reached the bottom of that well, the only option left is cancellations. And when it comes time to decide whether we’re going to cancel Journal A or Journal B, you can bet that one factor we’ll take into account is whether or not a substantial portion of the articles in one of those journals is available for free in Green versions. That will not be the only criterion applied, certainly–but basic fiscal responsibility would require that it be applied.
The major reason that print subscriptions dropped for the journals we published at Penn State while I was director was that, after we made the transition to e-journal publishing by joining Project Muse in 2000 (the first university press to do so when Hopkins opened it to presses other than its own), individuals at subscribing institutions no longer felt any need to subscribe themselves. This had nothing to do with Green OA, and the Press’s later adoption of Green OA had no measurable effect on cancellations whatsoever, mainly because the libraries that had held print subscriptions were by then all subscribing to Muse, which produced more revenue for the Press than we had realized from print subscriptions alone.
First, I’m not sure I agree that aggregation and Big Deals automatically translate into higher journal prices. If anything, the Big Deal continues to survive because it offers libraries a chance to broaden their offerings to readers at a much lower price per journal than if they were purchased individually. The issue with the Big Deal is not value, it’s that it creates lock-in, eating up a significant amount of a library’s budget and reducing flexibility in choosing specific titles for purchase. When a smaller publisher or society partners with a larger publishing house, usually costs go down–the bigger publishers have economies of scale and pay less for materials and services. So I don’t think there is necessarily a 1:1 correlation between the notion of partnering with a bigger company and a massive increase in price to readers. That may be the case in practice for some publishers but it is not so for all, and not inherent in the concept.
Second, I think the speculative nature of forecasting is really important to recognize. Many leading the charge to new policies and mandates either come directly from the computational research world, or are at least under the sway of the success stories of Silicon Valley and the internet. For computational services and products, there’s a deeply ingrained philosophy of being iterative. The idea is to throw something out there, watch it fail, then build the next version with the lesson you’ve learned from those failures. That approach has been very successful in the computational/internet context, particularly for new products creating a new service previously unseen.
But it is not clear that such approaches translate effectively everywhere else, particularly to a mature market that offers a mission-critical product. If you issue randomly chosen requirements and throw them up against the wall to see what sticks, the result may be a failure that breaks the entire system. That may be okay for a new service to share selfies with your Facebook friends, but it would be problematic for the system of disseminating and verifying research results, as well as much of the basis for career advancement and funding in academia. Is it better to face that potential vacuum, or instead to plan carefully and make all requirements, such as embargo lengths, evidence-based?