For years now, preprint communities have offered a glimmer of an alternative to the journal publishing system: the hope that speed and efficiency might replace what has seemed to many a cumbersome editorial and peer review process. What started in a small set of originating fields, such as high energy physics in 1991, has in recent years begun to take hold elsewhere, including in the biomedical sciences. Today, Ithaka S+R has published an overview of key developments in preprint communities, which are grappling with an array of policy issues as they seek to build trust in a contested information environment and develop durable business strategies.

Rob Johnson and Andrea Chiarelli recently looked at some of the options that publishers face in engaging with preprints. Today, we observe that beyond preprint communities that are typically organized around a field or set of fields, in recent years all the major publishers have made their own investments in preprint platforms. Publishers are integrating preprint deposit into their manuscript submission workflows, and adopting a common strategy designed to take back control of preprints.

The first preprint service, arXiv, was founded at Los Alamos National Laboratory in 1991. [Image: a 1974 cover of The Atom, the Los Alamos Scientific Laboratory employee magazine, featuring scientist Julia Hardin as part of an issue highlighting women scientists.]

Springer Nature

Launched in 2018, Research Square is today marketed as the world’s fastest-growing preprint platform, hosting over 20,000 preprints as of May 2020. Springer Nature is an important minority investor, and former Springer Nature open access executive Rachel Burley is Research Square’s president. Last year, Research Square began an important initiative to host preprints of manuscripts submitted to select Springer Nature titles: the so-called “In Review” service.

In Review is a preprint service that gives authors the option to have their paper posted online at the time of submission. It also gives authors and readers access to the status of a manuscript via a peer review timeline during the peer review process. If a manuscript is not accepted for publication, the associated preprint remains on the platform but disassociated from any journal branding or submission process information.

Springer Nature’s goal is not to provide a one-to-one link between the journal website and its preprints site on Research Square. Rather, preprints can live in a variety of locations per the author’s choice. However, for those Research Square preprints that yield a published article on a Springer Nature platform, there will be a link from the preprint to the version of record. Springer Nature’s Steven Inchcoombe described this as an “article-centric, not journal-centric, approach.” 

Springer Nature first introduced the In Review service on its open access portfolio (i.e., SpringerOpen and BioMed Central), since presumably author acceptance of the service there would be highest. Over time, it will be expanded to most if not all of Springer Nature’s hybrid/traditional portfolio as well. 

The workflow to enable a service like this is no small thing, especially given the interoperability required across the systems. The manuscript submission system, through which the author submits an article for consideration by a specific journal and the editorial and peer review process is managed, is the linchpin. The workflow enabled by these systems must be modified to add an option for the author to choose whether to deposit the manuscript as a preprint. If the author chooses the preprint option, the system must affirmatively initiate the deposit (both of the manuscript and its accompanying metadata). And this process must interact appropriately with whatever screening process (for plagiarism, ethical considerations, etc.) is put in place for submissions on the preprint platform. As the manuscript works its way through the journal’s editorial and peer review process, updated progress information must be shared back to the preprint site. And finally, when the article is published, a link to it is placed on the preprint platform, while if the manuscript is rejected or withdrawn, the preprint metadata are updated in other ways. The upshot: the back and forth between the manuscript submission and management system on the one hand and the preprint service on the other hand is extensive. 
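To make concrete how much back and forth the paragraph above implies, here is a deliberately simplified sketch in Python. All names here are hypothetical; no real submission-system API is being modeled, just the touchpoints the paragraph describes.

```python
from enum import Enum, auto

class Event(Enum):
    # Events the manuscript submission system must relay to the preprint platform
    AUTHOR_OPTED_IN = auto()        # author chose the preprint option at submission
    SCREENING_PASSED = auto()       # plagiarism/ethics screening cleared
    DEPOSITED = auto()              # manuscript and metadata deposited as a preprint
    STATUS_UPDATE = auto()          # peer review progress shared back to the preprint site
    PUBLISHED = auto()              # article published; link to version of record placed
    REJECTED_OR_WITHDRAWN = auto()  # preprint disassociated from the journal workflow

def final_preprint_state(events):
    """Return how the preprint should be labeled after a sequence of events."""
    if Event.PUBLISHED in events:
        return "linked-to-version-of-record"
    if Event.REJECTED_OR_WITHDRAWN in events:
        return "disassociated-from-journal"
    if Event.DEPOSITED in events:
        return "under-review"
    return "not-deposited"
```

Each `Event` corresponds to one of the handoffs described above; a production integration would of course be message- or webhook-driven rather than a simple list of events, which is exactly why the interoperability work is no small thing.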

The Springer Nature journals in the current In Review service utilize Aries/Editorial Manager as their manuscript submission platform. This caught our attention because some observers have questioned whether Elsevier would be able to maintain neutrality for Aries following its purchase. But, whether this happened readily or after protracted negotiations, Elsevier’s Aries is now supporting a workflow that enables the deposit of manuscripts not with its own SSRN but with Springer Nature’s Research Square. Some observers will read into this a meaningful victory for cross-publisher interoperability. Springer Nature and its society publishers also utilize three other manuscript submission and management systems, and an expansion of In Review will require modifications to these other systems to enable links to Research Square.

Inchcoombe was comfortable referring to Digital Science as a “sister” company, emphasizing that it would be surprising if there were not ways for Springer Nature’s Research Square and Digital Science’s Figshare to collaborate more closely over time. It is interesting to hear the nature of the family relationship from the Springer Nature perspective, given questions about that relationship that one of us has raised and the fact that Digital Science has emphasized its independence.  

Wiley

As Wiley’s Todd Toler told us emphatically, “We are very pro preprinting.” Much like Springer Nature’s service described above, Wiley’s Under Review service enables authors to deposit their manuscript as a preprint while it is being editorially reviewed by a Wiley journal. Under Review began as a preliminary pilot at the beginning of the year; Wiley has since expanded it to 37 journals (a mixture of traditional/hybrid and pure open access titles), more quickly than anticipated, in order to accommodate the journals most involved in pandemic research. A little more than one third of submissions to Wiley’s pilot journals are now being deposited as preprints as a result of this service.

When the author agrees to submit the preprint, it is deposited with the Authorea collaborative authoring platform of Wiley’s Atypon publishing technology business. Information is available on that preprint about review status and ultimately publication, although if the paper is not accepted then the preprint metadata is disassociated from the manuscript submission process for the journal in question — another point of contact between the manuscript submission system and the preprint platform. 

The complexity of this workflow makes it understandable that a pilot is limited in certain ways. Initially, the service is being offered only for Wiley journals. Eventually, it is possible that it will be made available to all of Atypon’s platform customers. It will be interesting to see whether these preprints would be portable should a publisher switch from Atypon to, say, Silverchair, or whether they will serve as a form of stickiness to entice publishers to stay on the Atypon stack. For now, Under Review is limited to a single manuscript submission system: Wiley’s journals portfolio operates on several such systems, and thus far only ScholarOne accommodates the Under Review workflow. With Springer Nature already able to utilize Aries to deposit in Research Square, it cannot be long until Wiley is similarly able to utilize Aries to deposit with Authorea.

Elsevier

While Springer Nature and Wiley pursue fairly similar submission-integrated models using Research Square and Authorea, respectively, Elsevier has a far more powerful building block with SSRN. When Elsevier acquired SSRN in 2016, it was buying not merely a technology platform, but more importantly a set of field-specific communities (or “research networks”). Since the acquisition, SSRN has continued to develop these communities, currently hosting more than 50.  

Elsevier’s 2018 acquisition of Aries Systems is an important piece of the puzzle. Aries provides a suite of publishing workflow solutions, most notably Editorial Manager, which was referenced above as part of Springer Nature’s In Review system. SSRN’s president at the time of acquisition, Gregg Gordon, was also assigned some Aries responsibilities for a period following that acquisition.

As Gordon explained to us last week, “We have never believed that preprints are a replacement for the published article.” Instead, much the same as Springer Nature and Wiley, Elsevier has steadily developed its systems to connect preprints and SSRN with the Aries / Editorial Manager publishing workflow. 

First, it has developed the FirstLook service, which enables manuscripts submitted to journals to be viewed as preprints on SSRN. Some 60 journals are currently using this service to create branded homes for their preprints on SSRN. It has seen substantial uptake through Cell Press and Lancet titles (all Elsevier publications), especially in light of pandemic-related research. Gordon stressed to us that no medical-related materials are issued as preprints without basic editorial review by a Cell Press or Lancet editor. Gordon told us that Elsevier is “in active conversations with several journals from other publishers about launching First Looks for them.” It will be interesting to see if the SSRN/FirstLook service is integrated with other manuscript submission services beyond Aries/Editorial Manager.

FirstLook has a workflow quite similar to In Review and Under Review, in that a manuscript is submitted to a journal and then, following comparatively light editorial review, it is deposited as a preprint while subsequent editorial steps, including peer review, take place. But Gordon distinguishes FirstLook because of the community elements that the preprint (and its authors) experience through the SSRN research networks. As a result, an author might hope to see more constructive engagement with their work, in parallel with the journal editorial process, allowing them to receive additional input on their manuscript, which can thereby be improved prior to publication. There is also currently a pilot to enable the use of an Aries/Editorial Manager workflow to deposit an article directly into an SSRN research network, without a journal-branded FirstLook.

And Elsevier has been developing (but has not yet released) an “Ingest from SSRN” workflow that would allow journal editors using Aries/Editorial Manager to source manuscripts at an early stage in the research lifecycle from among the working papers, proceedings, preprints, and other content appearing on SSRN. If it is worthwhile to offer a “vertical stack” manuscript sourcing service, it is because SSRN has an active preprint community, with deposits and accompanying engagement occurring before manuscripts are submitted to journals.

SSRN continues to integrate with other parts of Elsevier, for instance enabling content exchange with Mendeley, Mendeley Data, Pure, and Plum. It also has piloted an integration with Digital Commons, the repository service that Elsevier acquired in 2017 as part of bepress. Digital Commons hosts numerous preprints on behalf of institutional customers, and perhaps some of these SSRN/Aries integrations will one day carry over to Digital Commons as well.

Taylor & Francis

If Springer Nature and Wiley are each introducing a preprint platform into their article submission workflows, and Elsevier is establishing a bidirectional flow between article submission and its SSRN preprint communities, Taylor & Francis (T&F) seems to have been eyeing an even bolder change to its editorial processes when, in January 2020, it acquired F1000 Research (hereafter just F1000).

In many ways, F1000 provides the workflow that the other publishers discussed above are trying to jury-rig onto existing systems. As F1000’s Liz Allen explained to us, they “want to change the publishing model” and are well positioned to do so because they are not trying to retool a legacy system. To be sure, F1000 has probably received the most attention for its post-publication “open peer review” model, as well as its emphasis on funder-specific portals, for example for Wellcome, the Bill & Melinda Gates Foundation, the Health Research Board Ireland, and most recently the European Commission. But, for our purposes today, F1000 is best understood as having provided an elegant publishing workflow from submission to preprint to review to publication. Once an author submits a manuscript, and following a basic level of editorial review (for example, for plagiarism), the version that others might consider a “preprint” is “published” and the peer review process is initiated. As (open) peer reviews are submitted, the author has the opportunity to revise the paper.

The approaches the other publishers are taking are not identical to the F1000 model. They have not (yet) made any moves in the direction of open peer review, and they have been careful to enable a rejected or withdrawn manuscript to have its preprint disassociated from the journal submission workflow so that it can be resubmitted and published elsewhere. Still, this fundamental F1000 workflow appears to be the direction in which the others are headed.

Bearing this in mind, it is reasonable to anticipate that F1000 may represent a model workflow that T&F will wish to adopt for other parts of its more traditional publishing program. Already, F1000 is being used as the basis for open publishing workflows for other publishers, such as Emerald Publishing. It will be interesting to see whether, over time, not only the F1000 workflow but also its technology stack, or elements of it, are brought into larger scale use at T&F. If with F1000 T&F was purchasing not just an open access service but potentially a technology stack that could one day replace Atypon/Literatum, ScholarOne, and Aries, the deal would be an even more interesting acquisition than some have recognized.


A number of other publishers are operating preprint services. Most of these are scholarly societies that are attempting to build preprint communities, which do not yet appear to be connected to any publishing workflow. For example:

  • IEEE operates TechRxiv, for technical research in electrical engineering, computer science, and related areas; 
  • The American Chemical Society, the Royal Society of Chemistry, the Chemical Society of Japan, the Chinese Chemical Society, and the German Chemical Society operate ChemRxiv;
  • The American Geophysical Union (in partnership with Atypon and Wiley) operates Earth and Space Science Open Archive (ESSOAr); and 
  • The American Political Science Association (APSA) (in collaboration with Cambridge University Press) operates APSA Preprints.

A preprint community attached to a scholarly society, or a group of allied societies, has a certain logic to it, expanding the peer networking element of the society.

Additionally, just as Elsevier envisions for SSRN, several other publishers see the possibility of using preprints for article sourcing purposes. SAGE operates a preprint service called Advance, focused on humanities and social sciences scholarship. Advance runs on the Figshare platform (as do other preprint services, including those from several scholarly societies). Once approved for posting on Advance, papers can be submitted, at the author’s option, to a SAGE journal. Similarly, eLife recently launched a service in which papers submitted to bioRxiv are reviewed both for potential publication with eLife and for public comment on the bioRxiv site. Both the SAGE and eLife examples show publishers looking at preprint services as article sourcing platforms.


Preprints are not an unalloyed positive good, and some observers are less than keen on them. Kent Anderson in particular has been waging war against preprints over the past year, in conference appearances and especially on his blog The Geyser. He has been marshalling an array of arguments about the damage preprints are doing to the culture and practice of scientific communication. The major publishing incumbents do not seem to agree: The stars have been aligning for preprints, and now each of the major publishers has made a substantial investment.

To be sure, publishers have different ideas about preprints, or are at different stages of development. It is clear however that the largest commercial publishers are working in parallel, in a challenging systems environment, to expand their publishing workflows to incorporate preprints. 

They appear to have some combination of two objectives. First, it seems they will use preprints to argue that they have accelerated the pace of scholarly communication without compromising the peer review process and the time it requires. Rather than suffering criticism over the timeline from submission to publication, they can argue that they have disseminated the preprint just days after submission, since it is now their own services that are distributing the preprints rather than a true third party. They will also be able to demonstrate, for the doubters, the exact value provided by editorial and peer review processes.

Perhaps more significantly, however, they are bringing preprints inside their publishing workflows. This will afford them an opportunity to emphasize the importance of the version of record and its integrity. And, it will allow them to maximize their control over the research workflow as a whole, including datasets, protocols, and other artifacts of the research and publishing process. If successful, over time publishers will see fewer of the preprints of their eventual publications living “in the wild” and more of them on services and in workflows that they control. 

As a byproduct of introducing these workflows, publishers may over time be able to introduce a more efficient and consistent layer of quality control than has been available through some existing preprint services. For scholars and the general public, this may be a benefit that publisher involvement will bring. Given that only 50-70 percent of preprints eventually get published in peer reviewed journals, it will be interesting to see if publishers will continue focusing on early versions of articles they hope to publish, or if they will expand into early versions of other format types they publish, such as short works or case reports.

And their involvement is coming with no small amount of investment. Beyond the expenses to purchase or build preprint platforms, the workflow and resulting platform integration challenges are substantial. Several major publishers are proceeding by altering existing manuscript management systems to enable workflows that connect them up with preprint services. T&F seems to be taking a different approach, envisioning that it will find more flexibility by scaling up the F1000 platform. The competitive dynamics here, especially in light of Elsevier’s purchase of Aries, will continue to be intriguing to follow. 

Notwithstanding the expenditures they are making here, it is not clear that in entering the preprints business publishers stand to see any substantial revenue opportunity. This is an opportunity to strengthen control of the workflow and defend the version of record. In a future Kitchen piece, we will explore how this landscape of publisher initiatives for preprints connects to other community preprint initiatives. 


We thank the following individuals for interviews and other assistance in preparing this piece: Liz Allen, Camille Gamboa, Gregg Gordon, Shari Hofer, Steven Inchcoombe, Eric Merkel-Sobotta, Kristen Modelo, Alberto Pepe, Caroline Sutton, Todd Toler, David Tucker, and Susie Winters. We thank Kimberly Lutz for reading a draft of this piece.

Roger C. Schonfeld

Roger C. Schonfeld is the vice president of organizational strategy for ITHAKA and of Ithaka S+R’s libraries, scholarly communication, and museums program. Roger leads a team of subject matter and methodological experts and analysts who conduct research and provide advisory services to drive evidence-based innovation and leadership among libraries, publishers, and museums to foster research, learning, and preservation. He serves as a Board Member for the Center for Research Libraries. Previously, Roger was a research associate at The Andrew W. Mellon Foundation.

Oya Y. Rieger

Oya Y. Rieger is a senior strategist on Ithaka S+R’s Libraries, Scholarly Communication, and Museums team. She spearheads projects that reexamine the nature of collections within the research library, help secure access to and preservation of the scholarly record, and explore the possibilities of open source software and open science.


25 Thoughts on "Publishers Invest in Preprints"

I take issue with my research and objections around preprints being described as “waging war against preprints.” Critiquing a new practice in hopes such critiques will lead to improvements and remediation of parts that don’t work isn’t “waging war.” It’s positive and hopeful criticism, done in good faith and with an eye toward pointing out flaws so they can be addressed.

A more measured description of what I’ve been doing via my research and writing might be “raising valid concerns about preprints.” Evidence of this came, as Rieger knows, from our debate at the Charleston Conference, where the sentiment registered among attendees (mostly librarians) moved from favorable to concerned after we presented our two cases. The moderator — who has run many of these about many topics in many settings — said the swing in sentiment was the greatest he’d ever seen. So, the issues I’m raising via research and writing are apparently valid concerns.

Portraying my work in this area as a “war” isn’t fair or accurate, and indicates a lingering unwillingness I see in the community to address the valid concerns about unreviewed manuscripts — especially in biomedicine — circulating to the press and public, creating public health concerns, exhausting concentration and attention, and being weaponized for commercial or political purposes.

I do my research and write my critiques in an effort to improve things, not as part of a “war.”

Kent, as I think you know, we both respect your work and opinions in this area, which is why we cited you on this topic! We even agreed, at least to some degree, as perhaps you noticed in the first sentence of that paragraph. But that said, your work taken as a whole is critical of preprints, rather than offering a nuanced view of both the benefits and the tradeoffs. Looking forward to your comments on the substance of our analysis.

I think a really interesting question raised here is around the strategy of investing heavily in something that is not likely to ever produce a significant (if any) revenue stream. Given that the whole concept of preprints is that they are widely discoverable and publicly available, why build your own storage facility for them (with all the initial build and ongoing editorial and maintenance costs) when instead you could build your own service to scrape everyone else’s preprint offerings? It seems the only advantage of having the server in-house is easy access to the authors and lowering the burden of cascading a rejected article to a different journal, but wouldn’t that be achievable by scraping author contact information and metadata from published preprints? And by doing so, you’d widen the pool of authors/manuscripts you’re dealing with rather than being limited to those submitted to your own journals.

Or perhaps making this service available to publishers that don’t want to build their own servers is a new stand-alone business.

Hi David, my technical understanding is limited and might be wrong, but I think it is not possible to scrape Elsevier’s SSRN. I think there is an element of content protection and lock-in here.

Thanks Christos, I hadn’t realized that. I suspect (and in relation to Federico’s comment as well) that if/when any funding agencies start mandating preprint deposit, we’ll see a set of standards emerge for what is considered an acceptable preprint outlet that can be used to meet those requirements. Hopefully that will include a level of openness and interactivity that is beyond what some current services provide.

I don’t think it’s crazy to require things like DOIs or better efforts to connect preprints to the final published version when/if that exists. Everything else needs some careful consideration, but if the goal is open science, then asking for openness seems a given.

I don’t think it’s crazy either to ask for openness if one wants openness. But, one might still want to consider whether there are any indirect effects one might not want …

Hi David, the ‘storage facility’ analogy is consistent with the ‘pre-print server’ concept itself, born with ArXiv in 1991. 1991! That was two years before the Netscape browser, 4 years before the first large scale online journal effort (JSTOR), and 7 years before ScienceDirect and the great journal digitization wave really took off. Without homing in on the scraping idea, I think you are making the point that in the age of Google it doesn’t really matter where these preprints live on the web, just like it doesn’t really matter that journal articles are spread across a variety of publisher platforms. Why should publishers take such an interest then? First, let’s take a broader view: what we are talking about here is prepublication sharing of works in progress, which includes anything from artifacts to meeting abstracts and posters to funded research outputs that may never make it into a journal – it’s all one big continuum of research communication. One reason publishers have gotten interested in prepublication is simply a matter of trying to serve the research community better by looking upstream and trying to add value for authors before they typically submit to a journal. Integrating preprinting into the post-submission workflow, especially with real-time updates from the peer review process, is really about transparency and experimentation with the form itself in the ongoing evolution from print to web.

I’m confused. Why is this article billing websites like these as somehow comparable to preprint servers/open archives? Where’s the OAI-PMH endpoint or API to index their content? At most they seem a substitute for the author’s webpage or blog.

Hi, Federico! I head up operations at Research Square. We issue versioned DOIs and deposit all content into Crossref. You can find all our full-text content via a Crossref API query (filter: …,prefix:10.21203&rows=10). The content is also indexed by Europe PMC and Dimensions. API access can be requested as well, but we’re in the process of putting together detailed documentation for its open use. We’re a small team, so this has taken us some time.

The journal-integrated service, In Review, also allows authors and readers to see, alongside the preprint, a detailed timeline of editorial events occurring at the journal. Hopefully, this feature, along with full-text html, commenting and annotation capabilities, and multimedia integration make it more useful for authors than a standard webpage or blog.
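[Editor’s note: the Crossref route mentioned in the comment above can be exercised with a one-line query against Crossref’s public REST API. A minimal sketch follows; it assumes the standard `api.crossref.org/works` endpoint and Crossref’s `posted-content` type filter for preprints, since the link in the original comment was only partially preserved.]

```python
from urllib.parse import urlencode

# Crossref's public REST API endpoint for works records
CROSSREF_WORKS = "https://api.crossref.org/works"

def preprint_query_url(doi_prefix, rows=10):
    """Build a Crossref query URL for preprints ("posted-content" in
    Crossref's type vocabulary) registered under a given DOI prefix."""
    params = {
        "filter": f"type:posted-content,prefix:{doi_prefix}",
        "rows": rows,  # number of records per page
    }
    return f"{CROSSREF_WORKS}?{urlencode(params)}"

# Research Square's DOI prefix, per the comment above
print(preprint_query_url("10.21203"))
```

Fetching the resulting URL with any HTTP client returns JSON whose `message.items` array holds the preprint records.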

Hi Michele! You have mentioned that through Research Square you deposit all the preprint content into Crossref. I suppose that in this case it will be picked up by Google Scholar too and will be linked to a certain journal. Then, once a certain preprint is cited, that citation will be registered by Google Scholar too and will influence the overall journal performance. Have you already measured the contribution of preprint citations for a certain journal to the overall citations (I mean of published papers too) of the same journal? How do preprint citations influence the citation metrics of the journals at Research Square? Will the preprint citations of the journals at Research Square be picked up by SCOPUS or WOS? If so, then Research Square can be considered a powerful instrument to influence the journal impact factor.

Hello Alex, yes, the preprint in most cases will be linked to the ultimate journal article (if there is one), but citations would not be consolidated at that point. The preprint and its associated journal publication will have discrete metrics, so only citations of the latter would stand to influence the journal’s performance (IF-wise). Google Scholar does a decent job of aggregating related versions of a paper, which should bias in favor of citing the version of record – though I don’t know how true this is in practice. Theoretically, it should be possible for WoS to aggregate citations in this way too, but I can think of a number of reasons they wouldn’t want to, and it may not be advisable. SK had a piece related to this last year.

Thank you, Michele. I see your point of view; I expected the way you answered, both regarding Google Scholar and WoS. Finally, I see that you want to be accurate in tracking preprint impact and that you aspire to avoid overlapping and mixing the published article and preprint citations. Personally, I like your logic; otherwise the larger publishers, having a lot of retained earnings, would invest in in-house preprint platforms not to disclose research results sooner, but to gain more citations and accelerate the journal impact factor. The only unanswered issue here is how university employers will encourage researchers to make their preprints public and whether universities will reward the researchers behind highly cited preprints. Don’t you think that preprints will eventually strike at the market for journal subscriptions? As you see, preprints could be a “Pandora’s box” for some publishers and for those providing journal metrics (WoS and SCOPUS).

Thanks for the thoughtful summary, Roger and Oya. Just one addition: AGU’s ESSOAr platform is indeed connected to their publishing workflow. Authors submitting to AGU’s journals may deposit their papers simultaneously into ESSOAr.

One crucial question that Schonfeld and Rieger leave largely unaddressed: What is the status of so-called postprints on the platforms? Publishers have an arguable incentive to discourage postprints—those accepted and peer-reviewed manuscripts that aren’t yet formatted. Could this move have the ancillary benefit (to them) of choking these off?

More fundamentally, this is a naked attempt at non-profit capture — at the ominous intersection of two existing campaigns: (1) the largely successful embrace of an APC-based open access model; and (2) the build-out of a soup-to-nuts researcher workflow software stack. The stakes are, therefore, enormous: the existing preprint community is largely nonprofit, scholar-led, and fee-free (funded through library/institutional membership programs). I see this as the 37 percenters taking dead aim at a crucial piece of academy-owned infrastructure. (Note: I have a dog in this fight, as co-founder of the nonprofit, scholar-led MediArXiv.)

Another one-sided post about preprints from SK, followed by not just a mischaracterization of Kent Anderson’s writing — much of which covers how preprint servers could improve what they do, if they wished to, such as this one — but a manipulative mischaracterization of it. “Waging war” because he questions preprints? Not providing a “nuanced view”? Where is the “nuanced view” of the potential harms of preprints in your essay here? One sentence about preprints perhaps not being an “unalloyed good,” which doesn’t say how or why that might be, and characterizing the only source you point to that does as having an axe to grind. Your approach here is neither fair nor nuanced.

I think that preprints serve some scholarly communities very well, especially math and physics, but I believe their potential harms have been insufficiently considered in others, particularly in medicine. Three other journal editors and I published our stance on them well before COVID-19. We questioned the benefit/harm ratio, concluded that the harms outweigh the benefits in most clinical research, and decided we would not consider for publication any clinical research that had been posted to a preprint server. This was a principled stand (it could only restrict our inflow; there was no possible financial upside to what we did), and it was based on a serious analysis of how preprints are likely to harm patients more than help them.

Our viewpoint is far from normative on this, sadly, as preprints are gaining traction in medicine. I can see why medical researchers like them: it's possible to have something to "show" one's funder (or university promotions committee) much more quickly, and it's possible to get one's work "out there" in the media as well. But to the degree that the researcher is only one stakeholder, and the patient is another, the balance is lacking. Unvetted, self-serving claims are widespread on preprint servers, as are simple but important errors, and in the COVID era, the noise-to-signal ratio has gotten especially bad. Journalists have picked up on the juicier claims (see this in the LA Times yesterday), and once we give up our reputation in the eyes of the public, it is going to be hard to get it back.

Do unvetted, self-serving, and erroneous claims appear even after peer review? Of course. But to point out, as some have, that they appear both in non-peer-reviewed venues (preprint servers) and in peer-reviewed journals creates a false equivalence. The frequency with which these problems appear in an unrefereed setting is obviously much, much greater (as, once again, we've all seen during COVID, but as was obvious to my colleagues and me when we wrote our editorial on that topic almost two years ago).

I have enjoyed the SK blog, but would enjoy it more if both sides of this (and other) issues were covered with better balance. In particular, here, I think you’ve not adequately considered the potential harms of preprint servers in some scholarly communities and the constituencies they serve, and you’ve unfairly characterized the thoughtful work of someone who has asked good questions — and made suggestions for improvement — on those themes.

Seth S. Leopold, MD
Clinical Orthopaedics and Related Research

It's clear that we need NISO standards about preprints.

There has been some significant coverage of preprints with respect to COVID research; The Economist ran an article, as did the NYT. I haven't made up my mind about this cluster of issues as they bear on the dissemination of information relating to pandemics, nor about uses of preprints in the medical space generally.

It is clear to me that in other areas of science, as the article above suggests, we need a way to accommodate the human urge to establish priority of discovery. The competitive incentive it provides outweighs its drawbacks, in my view.

I asked a colleague at a high-powered medical research institute about using preprints to establish priority, and his response was "publication equals priority": if everyone is citing the Nature paper on the subject while you're arguing that you had a preprint with a timestamp that predates the Nature paper, then you've already lost that argument. The culture may change over time as preprints become a more established part of the workflow, but it's not clear how much weight they carry at the moment.

Thanks David.
While perhaps true of the medical space, I wonder if your colleague's claim holds elsewhere in science. I imagine that in certain fields it is considered very bad form to claim an idea previously disclosed in a preprint. If it is not, it should be, and again societies should define professional standards of conduct in this arena.
One quibble here with your colleague’s statement. To assert that “publication equals priority”, sans phrase, is question-begging. Preprints are a variety of publication.
It would make for a great research study to see if perceptions are changing about preprints.
In any case, I think that establishing NISO standards in this space could help. Perhaps it will help, in empirical areas of science, to encourage posting of data, or linking to it in an OA data repository. It's hard to dispute that one got there first if one has posted the data.
Well-worked-out proofs in mathematical areas, provided in great detail, may also help.
But again, I'm not talking about biomedical areas; I can see both sides of those debates. Given how poor journalistic standards are generally (not a partisan comment), I have more hesitation about the use of preprints than I did previously, but I am still somewhat on the fence, and it's outside my area of interest.

I don't think it's a question of claiming someone else's idea for your own, more that different research groups discover the same things over time, and, at least historically, the first to get their observations through peer review and published have taken precedence. This is why so many are concerned with "scooping," whether justified or not, and despite efforts by journals to provide "scoop proof" review, that fear still lingers. Even if you had the results first in a preprint, if someone else publishes those results in a journal first, your paper is going to be confirmatory and likely to end up in a journal that will count less toward career advancement (not praising the current career/funding structure here, just acknowledging how it currently works).

Also worth noting that preprint servers like bioRxiv and medRxiv assiduously avoid using the word "published" anywhere on their websites, preferring instead the term "posted" to differentiate what they have done.

And again, preprints are largely new to the biomedical world, and culture change takes a long time (often an eternity in academia).

A project for the remainder of this year, time permitting, is to complete work on which I've already embarked: tracing the history of scientific communication to establish whether forms of scholarly communication prior to the peer-reviewed journal publishing characteristic of the twentieth century, and perhaps even prior to a point well into the early twentieth century, were used to establish priority of discovery, even without anything like peer review.
My preprint about preprints on ResearchGate discusses a possible analogy between preprints and the role correspondence played in earlier days. But I need to look further to establish that correspondence actually played a role in establishing priority. This is important because there is, of course, a distinction between airing one's views privately in correspondence and this being used as evidence of priority of discovery by one's peers, not to mention by historians of science.
But who cares about history? It can feed into establishing the thesis that rapid forms of communication that establish priority play a very serious role in spurring advancements in science. The physicists get that. (Again, though, I'm at sea about whether preprints should play this role in the biomedical space. Kent Anderson's views have given me pause to some extent.)
There’s a debate about whether establishing priority is even important and doesn’t introduce its own problems. (The sociologist of science Robert Merton had views on this; see the NISO piece.) Perhaps my views on its importance will change, but for now, it seems awfully important to provide a platform that accommodates human nature and its drives, on that front. Perhaps a few years from now I’ll be appalled at having said that. But, for now….

I think the job market is always going to be important, particularly given that the majority of research is being done by graduate students and postdocs (at least in the biomedical sphere). It’s a lot easier to get a job as the person who discovered X than it is as the person who confirmed someone else’s discovery of X.

That’s true, as pertains to the current system. Tenure and promotion, not to mention grant funding, will certainly have to change if preprints are going to have status comparable to a peer reviewed publication, in certain fields at least.

Just for the record, JMIR Preprints, operating since 2009 and preprinting mainly manuscripts under review at JMIR journals, is missing from this overview. Perhaps it was the inspiration for Wiley's "under review" variant…
