Author’s Note: The post below derives from my presentation at the APE 2022 meeting earlier this week.
The term “liminal space” derives from the Latin word “limen”, meaning threshold. A liminal space is between what was and what’s next, a place of transition, of waiting and not knowing. Liminal spaces can be physical spaces between two places, such as hallways, stairs, elevators, waiting rooms, streets, airports, or train stations. There are also psychological liminal spaces, such as adolescence, the space between childhood and adulthood. This is where we find ourselves today as a community.
Scholarly communication is in a state of rapid change, a period I called “The Great Acceleration” a few years ago. We are leaving our traditional, print-based, subscription-based past behind, and moving toward a digital future of openness, transparency, access, and reuse. We know the end point of our journey, and we’re on our way, yet the path ahead is still unknown. How we get to the end goal of open access (OA), open science, open research will determine the end results – are they fair, are they equitable, are they affordable, are they sustainable?
Our journey to date has been marked by a massive wave of consolidation, or rather two separate but concurrent waves of consolidation: one in the journals publisher space, and the other involving our communications infrastructure. I wrote about consolidation in the publisher space, namely the ongoing demise of the independent research society publisher in these pages last month. In that post I used evolutionary biology as a metaphor, with organizations evolving to adapt to change in a business environment.
In recent years, the other wave has been happening around the technology and infrastructure of scholarly communication, as the core tools we use to publish and access research results have gone from being independently owned to being part of larger, commercial publishing houses and technology companies. Aries, with its Editorial Manager submission and peer review system, is now owned by Elsevier. Wiley has been on a shopping spree as of late, buying up the Atypon platform, host to over 100,000 publications, along with the editorial services company J&J Editorial and, most recently, the eJournal Press submission and peer review system. In a recent interview, Wiley’s Jay Flynn noted that more than 50% of the world’s peer-reviewed research goes through Wiley-owned platforms.
The last 5-10 years have seen a shift for many companies away from being a publisher and toward being a workflow provider, something Roger Schonfeld has written about extensively in The Scholarly Kitchen. The big commercial publishers have increasingly been building large portfolios of services encompassing all aspects of the research workflow. Publication is just one point in that workflow, and the idea is to offer subscribed services to research institutions for every stage of the research process.
There’s been a lot of talk lately about funders getting involved in building community-owned or open source infrastructure, but all evidence so far suggests that this is not a great fit. Infrastructure is about hard, tedious, and often incremental work. That’s not really exciting for most research funders, who seem much more interested in riskier projects that hope to make a bigger splash. Where things do get built, research funders have shown that they’re really bad at maintenance, usually more interested in moving on to the next thing than in keeping the last thing they built running. Educopia’s 2019 study on failed infrastructure made this clear, and as I wrote at the time, any piece of infrastructure needs to be built from the ground up to be self-sustaining. That means business knowledge and business planning, which further reinforces a business environment that selects for businesses over other types of organizations.
Roger Schonfeld also recently wrote about yet another major market consolidation event, Clarivate’s acquisition of ProQuest. Clarivate is a really interesting company in our space, because while they’re building up the same sort of technology and services portfolio as a company like Elsevier, they are missing one essential component, content – Clarivate owns no journals and publishes no articles. Which raises a question – in an OA world, is publishing the worst part of the publishing business to be in? The answer is both yes and no, depending on how and what you’re publishing, so let’s look at how that falls out by thinking about the future.
The short-term outlook is that we’re going to be in this liminal space for a while. As William Gibson famously said, “the future is already here, it’s just not evenly distributed.” Change is happening, but it’s happening at a different pace in different fields and different geographies. As such, the market is going to remain a balancing act between the old and the new for the foreseeable future.
Springer Nature’s Steven Inchcoombe offered evidence of the remarkable progress they’ve made toward OA, but also noted that they hope to reach a level where 50% of their output is OA by 2024. The question is whether the remaining 50% will snowball and quickly go the same way, or whether what’s been accomplished so far is the harvesting of all the low-hanging fruit, leaving the rest of the literature on a longer and more arduous path to an open future.
Lately I’ve been working with a lot of research societies that are negotiating new publishing partnerships or extending existing partnerships with larger publishing houses, and I’ve yet to see one where a major existing hybrid journal’s revenues aren’t still dominated by subscription packages through at least 2027.
That tracks well with other recent projects where I’ve been speaking with the heads of major library consortia throughout the world, and the regional differences are striking. In Europe and the UK, Transformative Agreements (TAs) are being widely locked in as the standard way that libraries purchase services from publishers. Elsewhere in the world, however, the appetite for TAs is much, much lower. Much of this stems from how different the financial structures of universities and research institutions are between the US and the EU. In the EU, where universities are largely public institutions, there is much more centralization of both research and university funding, and so greater flexibility to shift those funds around as needed. In the US, public universities are run at the state level, which means there’s no transfer of funds available between The University of Wyoming and The University of Arkansas, for example. Even more problematic, many of the most productive research universities are private institutions. This leads to two different definitions of “cost neutrality”: one where OA is mandated and cost neutrality covers both library subscription spending and article processing charge (APC) costs for all papers published from the institution, and one where there are no such mandates and cost neutrality means current library spending on subscription journals alone.
At the core of the author-pays APC Gold OA model and Transformative Agreements is the idea that the costs of publication are no longer spread among a large number of readers (via subscriptions) but instead concentrated onto a much smaller number of authors. Institutions with low research outputs (which include not only liberal arts and community colleges but also large corporations) can expect to see cost savings or even become free riders entirely, while any research-intensive institution that produces a lot of publications will ultimately see significant increases in its costs to make up the difference. For many years now, Harvard’s library has stated that it cannot afford what it currently pays for journal subscriptions. The idea that Harvard can somehow increase its spend several times over to pay for its research output is simply not in the realm of possibility. And at the same time, does anyone think DuPont or Merck should become free riders? While many US institutions are actively looking to drive open access and open science practices, they’re doing so under the limitation that activities be at or close to cost neutrality with current library spend, which will not support an output-based model of OA.
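The cost-concentration arithmetic above can be sketched with a toy model. All figures here are hypothetical, chosen only to illustrate how the same total cost lands very differently on institutions under the two models:

```python
# Toy model (hypothetical numbers, not real market data) of how publication
# costs fall on institutions under subscriptions vs. an author-pays APC model.

# Assume a publisher needs $10M/year to run its journals.
total_cost = 10_000_000

# Subscription model: cost spread across many reading institutions.
subscribing_institutions = 2_000
subscription_per_institution = total_cost / subscribing_institutions  # $5,000 each

# APC model: the same total cost concentrated on authors' institutions,
# in proportion to how much each one publishes.
papers_per_year = 5_000
apc = total_cost / papers_per_year  # $2,000 per article

# A research-intensive university publishing 500 papers/year:
research_intensive_spend = 500 * apc  # $1,000,000 -- far above its old subscription
# A teaching-focused college publishing 5 papers/year:
teaching_college_spend = 5 * apc      # $10,000
# A corporation that reads heavily but publishes nothing:
corporate_spend = 0 * apc             # $0 -- a free rider

print(subscription_per_institution, research_intensive_spend,
      teaching_college_spend, corporate_spend)
```

Under these illustrative numbers, the research-intensive university goes from paying the same $5,000 as everyone else to carrying $1M of the system’s costs, while the non-publishing reader pays nothing, which is the Harvard/DuPont asymmetry described above.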
Another important factor in our journey so far is that in those negotiations for partnerships between publishers and societies, we’re seeing an increased emphasis on publishing in quantity – remember that under the APC and TA models, you get paid for every article you publish, so more articles means more revenue. This creates another tricky balance. High-quality, selective, flagship journals are essential for selling subscription packages. Libraries want to subscribe to the best journals, or at least the journals their readers see as most important. But flagship journals publish a small number of articles and reject a lot of submissions. They usually have higher overheads and lower outputs than journals with less rigorous acceptance requirements. So for the subscription short term, you need these expensive-to-run, low-volume flagship journals, but for the OA long term, they cost too much and publish too little to be highly profitable. We’re starting to see a changing attitude from publishers toward these flagship titles, with an emphasis on lowering acceptance standards and publishing more articles over time. Some publishers go so far as to set quotas for these journals: you must accept X number of articles per year.
Our journey to date suggests two concurrent ways that the long-term future is being shaped: a drive for low-cost, high-volume bulk publishing, and a shift for publishers to become paid service providers for most everything else.
Publishing in Bulk
Earlier in this post, I asked whether publishing was the worst part of the publishing business, and my answer was an equivocal one. To explain: some types of publishing are likely to be highly profitable in an OA world, while others are unlikely to be worth the bother.
MDPI offers a possibility for the apex creature in this future ecosystem, the most highly evolved to meet the conditions of the environment, providing a model for optimizing for scale and efficiency. Their prolific and constant email marketing campaigns, paired with a special issue strategy aimed at providing a venue for extensive publication even by researchers in the smallest of niches, have seen their publication volume grow five-fold from 2017 to 2020, and their revenues grow almost fourteen-fold from 2015 to 2020. In 2021, they had over 39,000 special issues in the works. 39,000! The bulk of the work at MDPI is done by in-house staff, making efforts highly coordinated and standardized, and far more efficient and less idiosyncratic than at publishers where each journal is run as its own entity and is highly dependent on community resources.
The resulting journals and special issues are a mixed bag in terms of quality, largely depending on the amount of care or ethics the individual editors and authors put in. But this variability doesn’t seem to have slowed MDPI’s growth nor harmed its earnings. For the last few years, most major publishers have been actively growing their programs, both through publishing agreements with research societies and also through aggressively launching owned journals.
In an OA world, you want to emulate MDPI and have low-overhead, high-volume journals and, as subscriptions wane, to rid your program of those pesky, small, high-quality titles with high expenses and low publication volumes. So what emerges from our journey is publishers investing heavily in low-rejection, high-volume journals.
The second shaping force is where all those recent infrastructure acquisitions come into play. Unless we see some sort of massive reform in the academic career and funding system, which seems an area much harder to change than publishing, there will remain demand in the author market for high-prestige journals. There just won’t be a lot of money to be made from them, except perhaps for a small niche of the very top high-end journals, those like Nature and Cell that can charge $10K-plus APCs. At any level below that, you’ll be dealing with a small number of authors and be unable to charge them enough to support a rigorous, high-rejection program. If you’re not at the very top, why bother with all that work when you can crank out articles on a much more profitable scale without all the hassle?
This scenario has the most prestigious journals staying in-house where they can remain profitable, but most other prestige journals falling back to independent status, where they will be run with rigor and care by mission-driven research societies rather than profit-driven companies. But remember, all the tools and infrastructure needed to make these journals happen have been bought by the big commercial publishers, and so rather than working together as partners, the societies now become paying clients to the publishers, purchasing the technologies and services they need to keep their journals running.
You’ll need one of the submission and peer review systems owned by one of the big publishers, so why not purchase access to that as part of a package of tools and services like editorial support, production, marketing and all the other things publishers currently do for their partners?
For the publishers, this means earning revenue from the expensive process of producing the material without having to bear any of the costs incurred. Then, once the paper is published under a CC BY license, they can still reap all the benefits as if they had published it themselves: plug it right into their workflow systems and sell the analytics and all the other pieces they’re already selling for their own journals. That’s where Clarivate starts to make sense, earning all the benefits of publishing without doing any actual publishing. As Todd Carpenter noted in response to this idea, the real money made during the Gold Rush was in selling shovels.
The Path Ahead (or are there other branches?)
This is the path we are on. We live in a business environment, and business organisms have had OA thrust upon them and have adapted accordingly. These are the optimized strategies that have emerged – high-quantity, low-overhead publishing and controlling the means of production for every other type of publishing.
The question for the community as a whole is whether this is an acceptable long-term outcome. It certainly gets us to our end goal of OA, but the journey will have determined the resulting shape of that OA. We shouldn’t assume that because this is the path we’re currently on, we can’t choose a different direction or create new branches. Are there other routes we should be investing in more heavily in order to drive a differently shaped future?
Are there routes that don’t require success to be based on scale and output volume? There are lots of experiments going on now that essentially ask libraries to pay for things they could otherwise get for free. While promising, I’m concerned about the frame shift this requires of both the library and the university: going from being a place that brings money in to spend on itself to something closer to an investment portfolio manager, sending money out into the world for the benefit of the larger community. That shift may prove fragile and take a significant amount of time, given the priorities of academic institutions and how slowly they move.
If we are indeed stuck in a system where output quantity is key to success, then are there other routes beyond market consolidation available to the community to drive scale? Consider where the research community has had great success with mission-driven or community-owned ventures, like the largest of the university presses, Cambridge and Oxford, where a significant investment is made to enable them to be run as businesses meant to drive surpluses for the university, rather than as self-contained entities that merely need to self-sustain. Think of the organizations already in our community where smaller like-minded groups can come together in collaboration rather than competition, groups like GeoScienceWorld, or BioOne. By examining where these efforts have succeeded, it becomes clear that investment in infrastructure needs to start with a business plan for the long term, with an assumption that funding is going to be limited and driving a financial surplus to ensure long term sustainability is essential.
We know where we want to go. What’s the best way to get there?