Happy New Year.
My employer, Oxford University Press, holds regular “Oxford Journals Day” events where we bring together our society publishing partners and journal editors to catch up on the latest developments in publishing and to share their experiences. In the autumn of 2017, I was asked to give a “State of Scholarly Communications” presentation for this meeting and, being a fundamentally lazy person, I thought – this is great, Academia moves at such a slow pace that, with some minor tweaks, I’ll be able to re-use this talk for years. Six months later I was asked to reprise the talk for a UK event and I ended up having to rewrite about half of it. Six months later, I had to rewrite the other half.
I like to think of the period that we’ve entered as “The Great Acceleration,” a term coined by author Warren Ellis (or, as a recent exhibition put it, “Everything Happens So Much”). We aren’t really dealing with new issues – arXiv has been posting preprints since 1991, mergers have been common for a while now (Wiley’s purchase of Blackwell happened more than 11 years ago), and the open access movement has been front and center since at least the year 2000.
But, like every other aspect of our lives in this interconnected, digital utopia in which we live, we’ve reached a point where everything feels like it’s happening at once. Every week it seems like another piece of crucial publishing infrastructure is changing hands, or a new open access policy is announced, or there’s a new open letter petitioning for change that you’re expected to sign onto, or a new technology or standard that you absolutely must implement.
The upside to this accelerated pace is that it gets us closer to our goals faster. We know that the field of scholarly communications is far from perfect, but now it’s so much easier to gather evidence about reader and author needs, so much easier to publicly discuss potential plans, and, at least in some cases, to put those plans into action and draw attention to them.
The downside is that the faster you go, the less effective your brakes become. Scholarly communications is a complex ecosystem, and one that, for most participants, works pretty well. Deliberately disrupting one part of the chain may have unexpected consequences in hundreds of other areas, and by the time those appear, it may be too late to stop things from collapsing. We know the damage that the “move fast and break things” philosophy of Facebook and others has done to our society at large. Is this what we want for academia as well?
I would argue that the two biggest forces driving change in the scholarly communication landscape are consolidation and regulation. By consolidation, I mean the now-constant cycle of mergers and acquisitions, which is reducing the number of independent players in the market. By regulation, I mean the increasing number of rules and the growing compliance burden being placed on researchers.
We are in the midst of an era of mergers and acquisitions, and the biggest publishers continue to get bigger. You’ll note that most now have names that are conglomerations of their former entities (“Springer Nature,” for example). The top five publishers account for more than 50% of the papers published each year, and 70% in the social sciences.
In the past year or two, we’ve seen Wiley purchase Atypon, the platform that hosts more than a third of the world’s English-language journals, along with Authorea and Manuscripts.app, both online collaborative paper-writing tools. Elsevier has swallowed up bepress, which builds institutional repositories; SSRN, a widely used social sciences preprint network; Plum Analytics, a supplier of altmetrics; Aries, the company behind the Editorial Manager submission system; and, in late December, Science-Metrix. Springer Nature remains separate from, but still at least tangentially connected by ownership to, Digital Science, which owns things like Altmetric, the figshare data repository, the Overleaf authoring tool, and Dimensions, a direct competitor to Elsevier’s Scopus and Clarivate’s Web of Science. Speaking of the Web of Science, Clarivate has become a major player as well, acquiring the startup Kopernio, which helps users reach content they subscribe to more easily, and building it into an already extensive portfolio that includes the Impact Factor and the Journal Citation Reports, the ScholarOne submission and peer review system, and Publons, a system for tracking and granting credit for peer review efforts.
Some of these acquisitions are driven by need – Wiley reportedly spent a lot of money building a platform that underperformed, and bought Atypon to replace it. The same goes for Elsevier, whose home-brewed submission system, EVISE, never quite worked out, prompting the company to buy Aries, the maker of Editorial Manager.
But a lot is also driven by Wall Street demands. We know that library budgets are flat, if not declining, and that investors demand that companies increase their revenue each year. So first, you gobble up more and more of the existing market. Then you build an open access publishing program – that’s seen as new money, coming directly from funders and institutions rather than from the libraries. A third option comes into play here – if the market is flat, what other markets can a company extend itself into? Remember that Elsevier no longer refers to itself as a publisher; rather, it is a “global information analytics business,” and we’ve covered the shift toward workflow tools extensively (see here, here, here, here, and here for examples).
This is creating a lot of anxiety in the market. If you’re a publisher and suddenly your mission-critical infrastructure is owned by a competitor, that has to make you nervous. Combined with concerns about lock-in, this anxiety has led to a growing consensus that the market needs a major investment in shared and open infrastructure and standards. Rather than relying on a competitor, or a private company likely to be acquired by one, for key services, perhaps it’s better to work with a community-owned, not-for-profit service. Much of this is being driven by the open source software community, which offers many advantages due to its transparency and portability.
It’s unclear whether there’s enough scale in our relatively small community to drive open source development at the level seen for larger industries. We’re just starting to see some of these systems emerging, and while the tools themselves look promising, what’s really needed are services built around those tools. Most publishers don’t have the internal capacity (nor the desire) to become software development and support companies, hence a need for outsourcing remains critical.
Given the high number of degrees awarded by universities every year and the very low number of tenure track faculty positions made available, research careers are something of a buyer’s market. We’ve seen universities continually increase the demands they make of their research employees. Researchers are required to do more and more beyond their actual research, including the usual teaching, mentoring, and service on seemingly endless committees, but above all fundraising – science positions increasingly resemble freelance work, where the university essentially agrees to rent you space, and you are then responsible for covering your own salary and costs through whatever grants you can bring in.
Now on top of this, researchers are being asked to jump through an enormous number of additional hoops, ranging from pre-registration of experiments, to posting of preprints (and monitoring and responding to the resulting comments), to formal publication (where you must take great care to publish in an outlet that follows the very specific rules set by your funders, your university, and all of your collaborators’ funders and institutions). Then you need to make the data behind the paper publicly available and help others use it, and if you really want to drive reproducibility, write up and release your methodologies. Societal impact is now deemed important, so you have to become your own publicist, promoting yourself and the work via social media. At the same time, people may be talking about your paper via post-publication peer review systems, so you need to monitor those and respond to any questions or criticisms. And of course, you likely have institutional, national, and funding agency access policies with which to comply, so you have to figure out what those are (in a 2015 poll, more than half of researchers did not know their funder’s access policy), figure out the right version of the paper, and figure out where it goes, under what conditions, and at what time. You likely have multiple authors from multiple institutions with multiple funding sources, so you have to do all of this for multiple policies and deposit in multiple repositories.
These aren’t necessarily bad things, but they all add extra burdens to existing work, creating a huge time and effort sink when what researchers really want to do is research. No one goes into science because they really love bureaucracy and filling out forms. Further, if we see the purpose of research as benefiting society, then every second we take a researcher away from the bench means slower progress.
Plan S is a great example of acceleration – the research world has been moving slowly toward open access, with different fields moving at different paces via different routes. This evolution has taken place at, not surprisingly, an evolutionary pace, and a small group of significant research funders have declared their impatience with that level of progress. Plan S is a deliberate attempt to accelerate change, throwing a comet into a complex ecosystem in the hope that it will produce mammals rather than a mass extinction.
Whether Plan S ends up shifting the entire scholarly communication sphere or, like Finch and RCUK before it, proves just the next incremental step, one result is certain – a lot more work for everyone at every point in the publication process. And, like the RCUK policy, implementation will undoubtedly be much more complicated, expensive, and difficult to monitor and enforce than expected. Reality remains frustratingly complex.
That brings us back to the notion of much-needed infrastructure. If the open source community really wants to make a difference, then some focus should be directed toward back-end e-commerce and billing systems. The regulatory conditions of the market have reached a point where it is incredibly inefficient to track and apply them by hand. We need systems that can take advantage of persistent identifiers (ORCID, the Crossref Funder Registry, the developing ORG-ID) and automate the process of ensuring that each author on a paper has met their requirements. A modular system, where each funder, government, and institution can plug in its rules and have them applied during the publication process, would enable much more rapid progress than reinventing the article submission system or building yet another publishing platform.
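To make the idea concrete, here is a minimal sketch of what such a modular, plug-in rule system might look like. Everything here is hypothetical – the funder identifiers, the rules, and the metadata fields are invented for illustration; a real system would resolve ORCID iDs and Funder Registry DOIs through those services’ public APIs and carry far richer policy logic.

```python
from dataclasses import dataclass

@dataclass
class Author:
    orcid: str      # persistent researcher identifier (ORCID iD)
    funder_id: str  # Funder Registry DOI identifying the author's funder

# Each funder plugs in its own rule: a function taking manuscript metadata
# and returning a list of unmet requirements. These two rules (and the
# 10.13039/... identifiers) are made up for the sake of the example.
FUNDER_RULES = {
    "10.13039/100000001": lambda m: (
        [] if m["license"] == "CC-BY" else ["requires a CC-BY license"]
    ),
    "10.13039/100000002": lambda m: (
        [] if m["data_deposited"] else ["requires public data deposit"]
    ),
}

def check_compliance(authors, manuscript):
    """Map each author's ORCID iD to any unmet funder requirements."""
    report = {}
    for author in authors:
        rule = FUNDER_RULES.get(author.funder_id)
        report[author.orcid] = rule(manuscript) if rule else []
    return report
```

Run against a paper with two authors funded by different agencies, the checker flags each author’s gaps separately, which is exactly the multi-policy, multi-repository headache described above that hand-tracking makes so inefficient.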
Regardless of its outcomes, Plan S is very much of the moment: those controlling research funding are tired of waiting on academia to right perceived wrongs, and are stepping on the accelerator to drive progress as rapidly as possible. Whether we’re heading for the finish line or toward a cliff remains to be seen, but for those of us in publishing, this is a time of both great opportunity and great risk. Every new burden placed on a researcher is a chance to build a service that lightens the resulting workload and helps get them back to doing research. Those who are bold and willing to accept the failure of some experiments are going to be the market leaders. Whether this favors the incumbents, with deep pockets to weather risk and failed investments, or new start-ups lacking historical baggage remains to be seen.
Either way, this is The Great Acceleration. Be prepared to move.