Editor’s Note: this is a guest post by Rob Johnson and Andrea Chiarelli of Research Consulting. Building on the findings of a recent study, ‘Accelerating scholarly communication: The transformative role of preprints’, commissioned by Knowledge Exchange, they consider how publishers are responding to recent growth in the uptake of preprints.
Preprint servers have been growing explosively over the last ten years: over 60 platforms are currently available worldwide, and the sharing of research outputs prior to formal peer review and publication is increasing in popularity. Preprint servers have a long history in fields such as high energy physics, where extensive collaboration and co-authorship are the norm, and economics, with its lengthy review and publication process. Services like arXiv and RePEc emerged in the 1990s as a means of enabling the early sharing of research results in these disciplines, and have co-existed with traditional journals for decades.
Other disciplines have been much slower to embrace the posting of preprints, but this is now changing rapidly, as Figure 1 shows. The ‘second wave’ of preprint servers has been gathering pace in the last three years and raises afresh the possibility that preprints could disrupt traditional scientific journals.
Why are authors engaging with preprints?
Over the last 12 months we’ve been working on a project commissioned by Knowledge Exchange to explore the role of preprints in the scholarly communication process, speaking with researchers, research performing organizations, research funding organizations, and preprint service providers. Our interviews with authors indicate that early and fast dissemination is the primary motive behind preprint posting. In addition, the increased scope for feedback seems to be highly valued, with much of this interaction taking place via Twitter and email, rather than via direct comments on preprint servers. Early career researchers see particular advantages: the inclusion of preprints on CVs or funding applications enables them to demonstrate credibility in a field much sooner than would otherwise be the case.
The extent to which these factors lead to widespread uptake of preprints varies by discipline, however. The ‘second wave’ of preprint servers is developing quickly in areas such as biology, chemistry and psychology, and early adopters and nascent preprint servers can now be found in virtually all scholarly communities. Nevertheless, the level of adoption depends on a number of socio-technological factors, including individual preference, community acceptance of open scholarship practices, publisher and funder policies, and existing scholarly communication infrastructure.
In fields with a longstanding preprint culture, such as economics, scholarly practice has evolved to the point where ‘the working paper [on RePEc] is downloaded many times more than the article’. Similar patterns have been observed in mathematics, where arXiv-deposited articles appear to receive a citation advantage but see a reduction in downloads, and there are early indications of citation and altmetric advantages to biological science papers deposited in bioRxiv. For authors in other fields, however, evidence of the benefits of preprint posting remains largely anecdotal, and readers continue to privilege the version of record. In April 2017, Judy Luther asked “How long will it take to reach a tipping point where the majority of academic review and hiring committees recognize preprints as part of their body of work?” Looking at the current state of play, it appears this tipping point remains a long way off.
The evolving role of academic publishers in a preprint world
How publishers respond to preprints will depend in large part on whether the recent rate of growth continues. Nobody knows exactly what the future holds, but let’s consider three possible scenarios:
- Turn of the tide: The second wave of preprint servers fades, and preprints remain a major component of scholarly communication only in the fields where they are already firmly established, e.g. those served by arXiv and RePEc.
- Variable adoption: Preprints grow in some additional fields, such as those within the scope of ChemRxiv and bioRxiv, but not all.
- Preprints by default: Preprints grow in all fields (at different paces) and are accepted by the research community at large.
Recent trends in life sciences could lead us to think the second scenario is well on its way to becoming a reality. Meanwhile, the proliferation of new preprint servers means default adoption of preprint posting is now at least a viable possibility for virtually all researchers. However, the potential for an upward growth trajectory to go into reverse can never be discounted, as the recent decline in megajournals’ publishing volumes shows. The rapid increase of preprints in life sciences has gained much attention, but it is instructive to note that the number of biology preprints posted in 2019 relative to new publications in PubMed stands at just 2.3% – a far cry from a default position.
Are preprints disruptive?
Joshua Gans, author of ‘The Disruption Dilemma’, observes that established firms tend to hold back from reacting to disruptive innovations for two reasons: uncertainty (“will preprints ever really threaten subscription revenues?”) and cost (“can we afford to develop new services and workflows to accommodate preprints?”). He also argues, however, that many businesses do find ways of managing through disruption, and outlines three key strategies for doing so:
- Beat them – attack by investing in the new disruptive technology.
- Join them – cooperate with or acquire the market entrant.
- Wait them out – use critical assets that new entrants may lack.
The extent to which publishers are adopting each of these strategies provides a useful barometer of preprints’ potential for disruption.
One response to disruptive events is for incumbents to try to replicate, or improve upon, the new market entrants’ approach. As Roger Schonfeld has noted, competition for content is moving steadily upstream in the research workflow. In this context, it is striking to note how few publishers have sought to develop their own preprint servers, or to replicate the functionality of new platforms such as F1000 Research, which enables the sharing of articles prior to and under open peer review as part of its publication workflow.
An explanation for this lies in the fact that most preprint servers have not in fact entered the publishing ‘market’ at all. One of the few publishers to develop a preprint server, MDPI, chose to establish preprints.org as a free and not-for-profit service, and community-owned and/or scholar-led initiatives remain the dominant model. While this raises questions over the scalability and sustainability of some services, it has not prevented rapid growth occurring in the 25 or so preprint services hosted by the non-profit Center for Open Science, for example. Most of the funders, librarians and researchers we consulted indicated a strong preference for preprint servers to remain not-for-profit and community governed. With existing preprint servers representing neither an immediate threat to subscription revenues, nor a source of significant revenues from other sources, the case for publishers to invest aggressively in replicating their functionality appears weak.
While investment can forestall disruption, another way to achieve this result is by acquisition. Elsevier’s 2016 purchase of SSRN is perhaps the clearest example of this in the field of preprints, and it has since sought to leverage the technology to launch additional services, such as the Chemistry Research Network (ChemRN).
With so many preprint servers provided by not-for-profit actors, though, acquisition is not always an option. ‘Joining them’ in this context is therefore more likely to be achieved through strategic partnerships and alliances. This has become a recurring theme in the last couple of years, with recent examples including:
- The American Chemical Society, Royal Society of Chemistry and German Chemical Society (GDCh) partnering with Figshare to launch ChemRxiv within a week of Elsevier’s 2017 launch of ChemRN. As of August 2019, they were joined by the Chinese Chemical Society and the Chemical Society of Japan as co-owners of the service.
- Springer Nature partnering with Research Square to develop its In Review service.
- PLOS and Cold Spring Harbor Laboratory announcing a partnership to enable the automatic posting of manuscripts to the bioRxiv and medRxiv preprint servers.
- Emerald Publishing partnering with F1000 to develop the Emerald Open Research platform.
We can expect more such partnerships to emerge in the coming months and years. Meanwhile, those who lack the scale to negotiate bilateral relationships of this nature may come to be served by emerging ‘marketplaces’ like Cactus Communications’ PubSURE, which aims to connect preprints, authors and editors on a single platform.
Wait them out
It is rare that a disruptive innovation allows an entrant to build out all of the key elements in a value chain. In this respect little has changed over the last decade. Preprint servers cannot replicate the functions of validation, filtration, and designation served by scientific journals, and nor has the ‘unbundling’ of publishing progressed to the point where services such as portable peer review or overlay journals can address these needs at scale.
Journals with strong brands, or in fields that have yet to show much interest in preprints, may therefore find that a wait-and-see strategy serves them best. It remains unclear how many of the new crop of preprint servers will be able to develop a sustainable business model, and the recent decision by PeerJ to stop accepting new preprints lends credence to a cautious approach. Having established the first dedicated services for preprints in biology and life science, PeerJ’s management team have now opted to focus solely on peer-reviewed journals – effectively conceding the territory to not-for-profit preprint servers such as bioRxiv. As PeerJ’s CEO Jason Hoyt observes: ‘What we’re learning is that preprints are not a desired replacement for peer review, but a welcome complement to it.’
The second wave of preprint servers has much to offer the researcher community, but those expecting it to wash away existing scientific journals are liable to be disappointed. In our view, the biggest threat to academic publishers will come, not from preprint servers, but from other publishers that do a better job of addressing authors’ desire for accelerated dissemination, feedback and scholarly credit. This might be achieved through improved internal workflows, acquisition or strategic partnerships. In each case, seeing the integration of preprints into the research workflow as an opportunity, rather than a disruptive threat, is likely to offer publishers the best hope of continuing to identify and attract high-quality content.
The authors gratefully acknowledge the assistance of Phill Jones in the preparation of this post and the support of Knowledge Exchange, who commissioned the study ‘Accelerating scholarly communication: The transformative role of preprints’. The views and opinions expressed here are those of the authors alone.