Happy New Year.
My employer, Oxford University Press, holds regular “Oxford Journals Day” events where we bring together our society publishing partners and journal editors to catch up on the latest developments in publishing and to share their experiences. In the autumn of 2017, I was asked to give a “State of Scholarly Communications” presentation for this meeting and, being a fundamentally lazy person, I thought – this is great, Academia moves at such a slow pace that, with some minor tweaks, I’ll be able to re-use this talk for years. Six months later I was asked to reprise the talk for a UK event and I ended up having to rewrite about half of it. Six months later, I had to rewrite the other half.
I like to think of the period we’ve entered as “The Great Acceleration,” a term coined by author Warren Ellis (or, as a recent exhibition puts it, “Everything Happens So Much“). We aren’t really dealing with new issues – arXiv has been posting preprints since 1991, mergers have been common for a while now (Wiley bought Blackwell more than 11 years ago), and the open access movement has been front and center since at least the year 2000.
But, like every other aspect of our lives in this interconnected, digital utopia in which we live, we’ve reached a point where everything feels like it’s happening at once. Every week it seems like another piece of crucial publishing infrastructure is changing hands, or a new open access policy is announced, or there’s a new open letter petitioning for change that you’re expected to sign onto, or a new technology or standard that you absolutely must implement.
The upside to this accelerated pace is that it gets us closer to our goals faster. We know that the field of scholarly communications is far from perfect, but now it’s so much easier to gather evidence about reader and author needs, so much easier to publicly discuss potential plans, and, at least in some cases, to put those plans into action and draw attention to them.
The downside is that the faster you go, the less effective your brakes become. Scholarly communications is a complex ecosystem, and one that, for most participants, works pretty well. Deliberately disrupting one part of the chain may have unexpected consequences in hundreds of other areas, and by then it may be too late to stop things from collapsing. We know the damage that the “move fast and break things” philosophy of Facebook and others has done to society at large. Is this what we want for academia as well?
I would argue that the two biggest forces driving change in the scholarly communication landscape are consolidation and regulation. By consolidation, I mean the now constant cycle of mergers and acquisitions, reducing the number of independent players in the market. By regulation, I mean the increasing number of rules and the growing compliance burden being placed on researchers.
Consolidation
We are in the midst of an era of mergers and acquisitions, and the biggest of publishers continue to get bigger. You’ll note that most now have names that are conglomerations of their former entities, “Springer Nature”, for example. The top 5 publishers account for more than 50% of the papers published each year, 70% in the social sciences.
In the past year or two, we’ve seen Wiley purchase Atypon, the platform that hosts more than a third of the world’s English language journals, along with Authorea and Manuscripts.app, both online paper writing collaboration tools. Elsevier has swallowed up bepress, which builds institutional repositories; SSRN, a widely used social sciences preprint network; Plum Analytics, a supplier of altmetrics; Aries, the company behind the Editorial Manager submission system; and, in late December, Science Metrix. Springer Nature remains separate from, but still at least tangentially connected by ownership to, Digital Science, which owns things like Altmetric, the figshare data repository, the Overleaf authoring tool, and Dimensions, a direct competitor to Elsevier’s Scopus and the Web of Science. Speaking of the Web of Science, Clarivate has become a major player as well, acquiring the startup Kopernio, which helps users get to content they subscribe to more easily, and building it into an already existing portfolio that includes the Impact Factor and Journal Citation Reports, the ScholarOne submission and peer review system, and Publons, a system for tracking and granting credit for peer review efforts.
Some of these acquisitions are driven by need – Wiley reportedly spent a lot of money building a platform that underperformed, and bought Atypon to replace it. The same goes for Elsevier, whose home-brewed submission system, eVise, never quite worked out, prompting them to buy Aries, the company behind Editorial Manager.
But a lot is also driven by Wall Street demands. We know that library budgets are flat, if not declining, and that investors demand that companies increase their revenue each year. So first, you gobble up more and more of the existing market. Then you build an open access publishing program – that’s seen as new money, coming directly from funders and institutions rather than from the libraries. A third option comes into play here – if the market is flat, what other markets can a company extend itself into? Remember that Elsevier no longer refers to itself as a publisher, rather it is a “global information analytics business,” and we’ve covered the shift toward workflow tools extensively (see here, here, here, here, and here for examples).
This is creating a lot of anxiety in the market. If you’re a publisher and suddenly your mission-critical infrastructure is owned by a competitor, that has to make you nervous. Combined with concerns about lock-in, this anxiety has led to a growing consensus that the market needs a major investment in shared and open infrastructure and standards. Rather than relying on a competitor, or even a private company likely to be acquired by a competitor, for key services, perhaps it’s better to work with a community-owned, not-for-profit service. Much of this is being driven by the open source software community, which has many advantages due to its transparency and portability.
It’s unclear whether there’s enough scale in our relatively small community to drive open source development at the level seen in larger industries. We’re just starting to see some of these systems emerge, and while the tools themselves look promising, what’s really needed are services built around those tools. Most publishers don’t have the internal capacity (nor the desire) to become software development and support companies, so the need for outsourcing remains critical.
Regulation
Given the high number of degrees awarded by universities every year and the very low number of tenure track faculty positions made available, research careers are something of a buyer’s market. We’ve seen universities continually increase the demands they make of their research employees. Researchers are required to do more and more beyond their actual research, including the usual teaching, mentoring, and serving on seemingly endless committees, but above all fundraising – science positions increasingly resemble freelance work, where the university essentially agrees to rent you space, and you’re then responsible for paying your own salary and costs through whatever grants you can bring in.
Now on top of this, researchers are being asked to jump through an enormous number of additional hoops, ranging from pre-registration of experiments, to posting of preprints (and monitoring and responding to resulting comments), to formal publication (where one must take great care to publish in an outlet that follows the very specific rules set by your funders, your university, and all of your collaborators’ funders and institutions). Then you need to make the data behind the paper publicly available and help others use it, and, if you really want to drive reproducibility, write up and release your methodologies. Societal impact is now deemed important, so you have to become your own publicist, promoting yourself and the work via social media. At the same time, people may be talking about your paper via post-publication peer review systems, so you need to monitor those and respond to any questions or criticisms. And of course, you likely have institutional, national, and funding agency access policies with which to comply, so you have to figure out what those are (in a 2015 poll, more than half of researchers did not know their funder’s access policy), figure out the right version of the paper, and figure out where it goes, under what conditions, and at what time. Likely you have multiple authors from multiple institutions with multiple funding sources, so you have to do this for multiple policies and deposit in multiple repositories.
These aren’t necessarily bad things, but they all add extra burdens to existing work, creating a huge time and effort sink when what researchers really want to do is research. No one goes into science because they really love bureaucracy and filling out forms. Further, if we see the purpose of research as benefiting society, then every second we take a researcher away from the bench means slower progress.
Plan S is a great example of acceleration – the research world has been moving slowly toward open access, with different fields moving at different paces via different routes. This evolution has taken place at, not surprisingly, an evolutionary pace, and a small group of significant research funders have declared their impatience with this level of progress. Plan S is a deliberate attempt to accelerate change, throwing a comet into a complex ecosystem in the hope that it will produce mammals rather than a mass extinction.
Whether Plan S proves to shift the entire communication sphere or, like Finch and RCUK before it, is just the next incremental step, one result is certain — a lot more work for everyone at every point in the publication process. And, like the RCUK policy, implementation will undoubtedly be much more complicated, expensive, and difficult to monitor and enforce than is expected. Reality remains frustratingly complex.
That brings us back to the notion of much-needed infrastructure. If the open source community really wants to make a difference, then some focus should be directed toward back-end, e-commerce billing systems. The regulatory conditions of the market have reached a point where it is incredibly inefficient to track and apply them by hand. We need systems that can take advantage of persistent identifiers (ORCID, the CrossRef Funder Registry, the developing ORG-ID) and automate the process of ensuring that each author on a paper has met their requirements. A modular system where each funder, government, and institution can plug in their rules and have those applied to the publication process would enable much more rapid progress than reinventing the article submission system or building yet another publishing platform.
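To make the idea concrete, here is a minimal sketch of what such a modular rules engine might look like. Everything here is hypothetical – the class names (`FunderPolicy`, `Author`, `Manuscript`), the `check_compliance` function, and the policy fields are illustrative assumptions, not a real API; a production system would pull policies from registries keyed by persistent identifiers rather than hard-coding them.

```python
from dataclasses import dataclass

# Hypothetical sketch: each funder plugs its rules into a registry keyed
# by a persistent funder identifier, and compliance is checked per author.

@dataclass
class FunderPolicy:
    funder_id: str               # e.g., a Funder Registry identifier
    requires_open_access: bool   # must the paper be OA?
    max_embargo_months: int      # longest embargo the funder allows

@dataclass
class Author:
    orcid: str                   # persistent author identifier
    funder_ids: list             # identifiers of this author's funders

@dataclass
class Manuscript:
    authors: list
    open_access: bool
    embargo_months: int

def check_compliance(ms, policy_registry):
    """Return (orcid, funder_id, problem) tuples for every unmet
    requirement across all authors and all of their funders."""
    problems = []
    for author in ms.authors:
        for fid in author.funder_ids:
            policy = policy_registry.get(fid)
            if policy is None:
                continue  # no registered rules for this funder
            if policy.requires_open_access and not ms.open_access:
                problems.append((author.orcid, fid, "must be open access"))
            if ms.embargo_months > policy.max_embargo_months:
                problems.append((author.orcid, fid, "embargo too long"))
    return problems
```

The point of the modular design is that adding a new funder mandate means registering one more `FunderPolicy` entry rather than rewriting the submission system – the multi-author, multi-funder, multi-policy bookkeeping described above is handled in one loop.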
Regardless of its outcomes, Plan S is very much of the moment, as those controlling research funding are tired of waiting on academia to right perceived wrongs, and stepping on the accelerator to drive progress as rapidly as possible. Whether we’re heading for the finish line or toward a cliff remains to be seen, but for those of us in publishing, this is both a time of great opportunity and risk. Every new burden placed on a researcher is a chance to build a service to lighten the resulting workload and help get them back to doing research. Those who are bold and willing to accept the failure of some experiments are going to be the market leaders. Whether this favors the incumbents with deep pockets to weather risk and failed investments, or new start-ups lacking historical baggage remains to be seen.
Either way, this is The Great Acceleration. Be prepared to move.
Discussion
8 Thoughts on "Welcome to The Great Acceleration"
“Combined with concerns about lock-in, this anxiety has led to a growing consensus that the market needs a major investment in shared and open infrastructure and standards”
Ironic that libraries make exactly the same observation about their relationship with publishers…
While they anxiously wait to be saved by the emergence of an ideal “shared and open infrastructure” (presumably made available and maintained for free in perpetuity), publishers might experience an immediate Great Acceleration by better understanding and investing in relationships with the vendors/partners who loyally and dependably serve them today. Too simple and pragmatic an idea?
And when the loyal, dependable vendor/partner sells to the publisher’s direct competitor? What then?
Ouch! Yes, good – sharp – point! I understand the strategic issue, but ownership comes and goes and does not necessarily impact the quality of the service delivered. For example, I’ve heard from some publishers that compete with Wiley that the solution from Atypon has improved since the acquisition.
For the vast majority of publishing operations their survival and success depends on good operational management not strategic positioning regarding base technology ownership. My main point is that most publishers should not overlook the immediate and significant competitive advantages to be gained by fully optimizing their existing relationships, even as they anticipate the green grass of open source.
Dear David,
Thank you for the great recap and your foreshadowing of future events in the scholarly publishing industry. Your point about institutions feeling or being “locked in” takes me back to the 1999 lecture by Professor Sandra Vandermerwe of Imperial College at the Reed Elsevier Executive Development Program at Green College, Oxford University. In her book Customer Capitalism, she speaks about having your customers “lock on” to your services because they have a delighted experience, versus feeling “locked in” and frustrated.
Professor Vandermerwe speaks about the industry creating an information eco-system where competitors would seriously engage with each other to provide the user with a full delighted experience.
The discussion around shared technical services on one level makes sense, but to achieve such a lofty goal does require financing, governance, and, most of all, trust. Note that back in 1998, Karen Hunter, SVP Strategy for Elsevier, and Pat Sabosik, GM for ScienceDirect, had put forth the idea of allowing the publishing community to load up their content (at no cost) to leverage the ScienceDirect platform. The publishers would be responsible for the commercial terms for their content. Not one mid-size or large publisher took up Elsevier’s offer.
There were a number of reasons the other publishers did not take up Elsevier’s offer. Trust was a prime one, but there was also the notion of “owning the desktop” – a then-popular idea that if you built a superior interface (platform), you would capture the individual’s desktop. As we have learned, the researcher has so many different needs and requirements that it is virtually impossible for one company to meet the needs and demands of the research community.
To achieve the goal and aspiration for “User Platform Nirvana”, the industry will need to come together to develop the trust, the roles, the technology, and the business model to make it a sustainable ongoing concern.
Acquisitions will continue and new companies will emerge. A key question that the industry must address is the “value chain” of the researcher. What will be the key activities of the publishing process, and what improvements will emerge to increase the productivity of the researcher? What new tools will be developed, and what existing tools and processes will be retired?
Innovation requires vision, risk, investment and the willingness to fail fast. If the scholarly publishing industry wishes to pursue this idea of a shared technical infrastructure, then it will require a number of existing and new pioneers to take on this challenge. As Karen Hunter (1945-2018) always said, “You can tell who the pioneers are by the arrows in their back”!
To achieve this idea of “User Platform Nirvana,” it will require a number of pioneers to create a wagon train to establish the path forward.
I’d like to see the following from our publishing partners: (1) No more “dummy” contracts for authors. Assume the scholars want to be as open as possible and will thank you for it in the future. (2) Figure out how to keep track of tax exempt certificates and stop asking for them again and again. (3) No more trying to make money on advertisements or tracking users. Assume nobody would opt in regardless of the customization you attempt to do. You’re not good at it. Amazon and Google are barely acceptable at it, so don’t assume you can do better. (4) Give societies a bigger piece of the pie. Don’t drag out their negotiations for years and assume they will be forced to accept your bad terms because they are small. (5) Partner with every library consortium bigger than 4 libraries on open access. If you haven’t even had a discussion with a consortium, don’t assume what its members need or want.
As I read about who swallowed whom, I am reminded of Monty Python’s one thin mint!
https://www.google.com/search?q=youtube+monty+python+one+thin+mint&oq=utube+monte+python+one+mi&aqs=chrome.1.69i57j0l2.15787j1j8&sourceid=chrome&ie=UTF-8
You wrote, “A modular system where each funder, government, and institution can plug in their rules and have those applied to the publication process would enable much more rapid progress than reinventing the article submission system or building yet another publishing platform.” On this front, please note the open-source Public Access Submission System (PASS) from Johns Hopkins, http://pass.jhu.edu/ . It’s not the full infrastructure you describe. But you called for a modular system, and PASS can already play the role of at least one of the modules. Also see the November 2018 launch announcement, https://blogs.library.jhu.edu/2018/11/hopkins-partners-with-harvard-and-mit-to-launch-public-access-submission-system-pass-and-support-open-access/ .