The Scholarly Kitchen

What’s Hot and Cooking In Scholarly Publishing


Guest Post: Preprints Serve the Anti-science Agenda – This Is Why We Need Peer Review

  • By David Green
  • Apr 17, 2025
  • 36 Comments
  • Authority
  • Controversial Topics
  • Open Science
  • Peer Review
  • Research
  • Research Integrity

Editor’s Note: Today’s post is by David Green. David is a Wildlife Ecologist and Founder of Stacks Journal, a scientific journal designed for ease and ethics. He is passionate about open science, scientific publishing, and peer review.

For decades, scientists have relied on peer review to ensure scientific knowledge is built on a foundation of rigor and credibility. However, preprints are adding to the crumbling of that foundation, which is already under attack by anti-science political agendas.

Preprints are like blog posts that carry most of the markers of rigorously peer-reviewed articles. They are designed to look and feel credible, even though nearly anyone can post an official-looking preprint. Scientists rarely engage with preprints to assess their quality and reliability: a 2020 study found that only 7% of preprints posted to bioRxiv and medRxiv received any comments at all, and those that did received a median of just one.

[Image: cartoon of scientists examining a research paper with a magnifying glass, surrounded by other laboratory equipment]

Yes – there are benefits to more freely and quickly sharing science. But preprints join predatory publishers and paper mills to fuel a growing challenge for our society: distinguishing credible science from inaccurate, biased, and misleading work. This is encouraging a race to the bottom, where good science can carry the same weight as bad science, ‘alternative facts,’ and ‘truthiness’ – which is actually no weight at all.

So it should be alarming when preprints are cited like fully reviewed articles and used in the news media, while their lack of vetting is buried in fine print easily missed or misunderstood by the average reader. (As well as by the informed reader — I’ve interviewed dozens of scientists who don’t know how to evaluate preprints, what they can trust, or what they can use in their own research.)

Proponents of this free-for-all style of scientific publishing argue that this is just how science should adapt to the ways knowledge moves through the internet – that the quick dissemination of new research is a modernization that is always inherently beneficial.

But science shouldn’t work like the rest of the internet. Scientific knowledge shouldn’t be treated the same way as a Reddit thread or a post on Bluesky. The foundations of science, and the ability to trust scholarly research, should be rooted in integrity, rigor, and meaningful debate – which should not be relegated to the comments section of what are essentially glorified social media platforms designed to feel like dependable peer-reviewed journals.

This is not to say that peer review, in its current state, is what we need. As many people have pointed out to me, our traditional two-person peer review can also put a stamp of credibility on bad science. But peer review has its flaws because it has been taken over by big publishing businesses that prioritize profits over accuracy, not because the idea of peer review is inherently flawed. The answer is to fix peer review so it meets the needs of modern science, not to scrap it entirely or pretend it can work ad hoc in the comments section of a preprint.

Science thrives on openness, transparency, and the rapid exchange of new information and ideas, but it also relies on careful evaluation and verification. Rather than accepting preprints and non-peer-reviewed research as the new norm, or dismissing the process of peer review as outdated, we should focus on improving the processes that ensure scientific research meets the highest standards.

I’ll own my bias here. I’ve spent the last few years of my career trying to do just that. As part of my work building Stacks Journal, I’ve explored how technology can make publishing more transparent, open, and expedient – without losing the rigor of peer review. We treat peer review like a double-blind grad school seminar, bringing a handful of reviewers to every article and giving them the tools to discuss and debate the science. This is what I believe science needs: modern, easy-to-use tools to evaluate new research before we give it the stamp of credibility.

It’s a tumultuous time for science, but we’ve been here before. In the 1970s, when the systems of external peer review at the National Science Foundation were under attack by an anti-science Congress, scientists stood up for those systems. They united around the idea that science needs vetting, and that the only people who should do that vetting are experts in the field. They stood up for integrity.

It’s time for scientists everywhere to do the same again – to unite and defend the core principles of science. The challenge today is to reconnect to those principles and values that have driven our fields for hundreds of years: knowledge, trust, and rigor. Yes, we also want expediency. Yes, we also want ease. But it seems we’ve lost touch with what truly matters and what drives most of us to do the challenging work we do.

If we don’t stand up for integrity now, and especially now, we will have nothing left to defend against the full unraveling of scientific research as we know it. Science will become just another example of a once-honored institution that has devolved into an unregulated free-for-all where anything goes.

Scientists (and publishers) — the choice is ours. Do we want to let our frustrations fuel our own downfall, or do we want to defend the values that have been the bedrock of science for generations?



Discussion

36 Thoughts on "Guest Post: Preprints Serve the Anti-science Agenda – This Is Why We Need Peer Review"

Thank you for the article, David, and allow me to add a few thoughts. While I acknowledge the points you raise, I don’t agree that preprints are just like blog posts, or that they feed into the noise produced by paper mills and predatory journals. Given the title of this post, what I had expected was a call for better-organised preprint-based peer review and curation of community debates on scholarly platforms, with repositories adding features that connect a preprint to its publicly available reviewer reports with just a click, instead of the often-seen warning that “the preprint has not undergone peer review and thus cannot be trusted.” Why would any researcher or research team put their career on the line by posting flawed work, unless they would do that anyway and get through traditional publisher-based peer review via data/image fabrication and other misconduct?

There have been reports, studies, comments, and debates arguing that preprints are sometimes better than the VoR, where relevant points made by the authors went missing because of reviewer comments. And from what I see, preprints are increasingly being shared not instead of journal submissions but alongside them, which actually adds value for editorial teams, who can look up discussions of the same work on social media, preprint-based reviewer platforms, and elsewhere.

We have all witnessed how the scholarly landscape has changed over the past decades: ever-increasing APCs charged by commercial publishers, and an ever-widening gap of inequality between high- and low-resource scholarly communities and institutions, let alone countries and world regions. We have global challenges to tackle, and we must make sense of the ever-growing stack of scholarly literature produced under publication pressure — thanks to the still largely undervalued and far too slowly implemented incentive/assessment reforms suggested by DORA, CoARA, INORMS, and others.

Preprints are here to stay for some of us and not everyone is forced to adopt the model of sharing their research in that way.

And yes, by all means, let’s continue upholding research integrity by supporting existing organisations and initiatives that work to enable balanced and equitable scholarly discourse, globally: PREreview, MetaROR, PCR… just to name a few 🙂

Do not worry, there will be a place for journal publishing as you know it. The place might just shrink a little, making space for the variety of longstanding as well as emerging dissemination routes and workflows for research findings. Let me end this with a hint and a call for Diamond Open Access publishing, which will certainly benefit from an occasional uptake of preprint-based peer review where applicable, as judged by scholarly community editorial teams alone. And finally, of course, there are always disciplines, research topics, and cases where sensitive data and information should be kept confidential — behind paywalls, other walls, and curtains — for any sensible reason.

  • By Jo Havemann, Access 2 Perspectives
  • Apr 17, 2025, 8:59 AM

Hi Jo, I don’t recall having seen any of these studies: “There have been … studies … that preprints are sometimes better than the VoR where relevant points made by the authors went missing” … can you provide a couple of citations? I’m very interested in analyses of the preprint → VoR transition. Thanks!

  • By Lisa Janicke Hinchliffe
  • Apr 17, 2025, 9:33 AM

Hi Lisa, about my claim of preprints sometimes being “better” than the VoR: I should perhaps have written “different in scope and emphasis” instead, as that tends to change to a varying extent thanks to reviewer feedback. What I was thinking of was that revised manuscript versions can suffer from a phenomenon sometimes referred to as “Verschlimmbesserung” (a German word for an attempted improvement that makes things — or manuscripts — worse), which can happen when trying to please everyone involved in a discussion, or, as here, to get the submitted manuscript approved for publication under time and career-incentive pressure.
Showing evidence of that beyond the anecdotal references I have heard and read over my career so far is difficult where preprints and review reports are not accessible. Much of it can be seen in, and interpreted from, discussions about and arguments for a reform of peer review:

For upfront disclosure, I co-authored this discussion article: Ten Hot Topics around Scholarly Publishing (2019), VoR: https://doi.org/10.3390/publications7020034, preprint: https://peerj.com/preprints/27580v1/ — where you might want to read 2.3 (Topic 3: Does approval by peer review prove that you can trust a research paper, its data and the reported conclusions?) and 2.4 (Topic 4: Will the quality of the scientific literature suffer without journal-imposed peer review?).

The section on “Social and epistemic impacts of peer review” in “The limitations to our understanding of peer review”, https://doi.org/10.1186/s41073-020-00092-1 gives additional hints and arguments in that regard.

Here are a few studies that compare the preprint with its respective Open Access VoR:

– Tracking changes between preprint posting and journal publication during a pandemic – https://doi.org/10.1101/2021.02.20.432090 (the preprint), https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001285 (the VoR) // 17.2% of abstracts underwent major changes in conclusions, including one instance where the published version directly contradicted the preprint’s conclusion. Some published versions omitted specific technical details present in preprints to meet journal space constraints.

– Examining linguistic shifts between preprints and publications, VoR https://doi.org/10.1371/journal.pbio.3001470

– A Synthesis of Studies on Changes Manuscripts Underwent Between Submission or Preprint Posting and Peer-Reviewed Journal Publication, https://peerreviewcongress.org/abstract/a-synthesis-of-studies-on-changes-manuscripts-underwent-between-submission-or-preprint-posting-and-peer-reviewed-journal-publication/

– Comparing Published Scientific Journal Articles to Their Pre-print Versions, https://doi.org/10.1145/2910896.2910909

– Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature, preprint: https://doi.org/10.1101/581892

– Meta-Research: Releasing a preprint is associated with more attention and citations for the peer-reviewed article, https://doi.org/10.7554/eLife.52646 — which makes another important finding: “we found that articles with a preprint had, on average, a 49% higher Altmetric Attention Score and 36% more citations than articles without a preprint.”

– Robustness of evidence reported in preprints during peer review, 2022, VoR: https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(22)00368-0/fulltext // “Uncertainty was reduced during peer review, with CIs reducing by 7% on average. These results support the use of preprints, a component of biomedical research literature, in decision making.”

Happy reading =)

  • By Jo Havemann, Access 2 Perspectives
  • Apr 17, 2025, 11:44 AM

Ok, thanks. I have all of those. It was your claim of BETTER that caught my attention so I appreciate you clarifying. There is no question that they are DIFFERENT at least a notable % of the time.

  • By Lisa Janicke Hinchliffe
  • Apr 17, 2025, 1:42 PM

Lisa, if you have all of these articles I listed for you, perhaps you might want to read them as well — or at least the parts that I highlighted for you, which support my statement that preprints are at times better (as in more informative) than the VoR. More often than that they are different, to varying degrees, which of course is to be expected with input from other scholars (aka reviewers) being integrated into subsequent versions. Whether we need “traditional” commercial publishing and journal-tied peer review for that remains to be debated further.

  • By Jo Havemann, Access 2 Perspectives
  • Apr 17, 2025, 5:58 PM

Yes, Jo, not only do I have them, I’ve read them. Thanks.

  • By Lisa Janicke Hinchliffe
  • Apr 17, 2025, 8:12 PM

Hey Lisa, I wanted to acknowledge that my last comment in response to you came off as cocky. I certainly did not mean to be disrespectful, but I let my reaction get ahead of my better judgment.

I admire and honour your work, and the space and effort you provide through your advocacy for librarianship and your thought leadership — not only as a chef here at the Kitchen, but also where I have seen you take the stage and speak in webinars. While we may or may not have differing views and opinions about certain aspects of scholarly publishing, I learn a lot and draw inspiration from you and others on the bigger picture with regard to research integrity and effective knowledge sharing in the digital era.

I am sorry for my comment and tone earlier, and hope that we can continue the discussion on preprints or another topic here and in other fora.

  • By Jo Havemann, Access 2 Perspectives
  • Apr 19, 2025, 3:38 AM

Hi Jo — Thank you so much for sharing this perspective. I agree — we definitely need better tools to encourage scholarly discourse around new research. I also find your line of thinking about how the VoR can vary substantially from the preprint, and how this may affect the quality of the research, to be a very interesting one. In the best experiences, peer review can help strengthen new research. But in the worst experiences, I can understand how this might also be an outcome. We definitely have some work to do here! It will be interesting to see how the landscape of scholarly publishing changes over the next decade.

  • By David Green
  • Apr 17, 2025, 2:54 PM

Thank you, David, also for starting the discussion off with some thought-provoking comments. Sometimes we need those to get the better arguments on the table so that we can move the needle effectively. I firmly believe that peer review, in most cases by far, is supportive and well-intended to add value to the author/s’ manuscript. The issue I see is that nowadays, with the exorbitant number of articles being published every day, to keep up with the standard, we need more efficient and diverse forms of peer review. The debate is well underway. And again, thank you for allowing the space for reflection through your guest post.

  • By Jo Havemann, Access 2 Perspectives
  • Apr 19, 2025, 6:56 AM

I appreciate this post and the author’s perspective. Peer review is the gold standard for scholarly publishing.

  • By Bob Henkel
  • Apr 17, 2025, 9:03 AM

Absolutely correct! A cause we can all stand behind. I may share some early-morning musings.

This problem is magnified by the large language models that crawl the preprint servers with their openly available information, but do not crawl the papers behind a paywall, which are vetted and carefully edited and peer reviewed. This contributes to junk science being put into the LLM echo chamber while peer-reviewed, vetted science is not accessible for harvesting by generative AI systems. This potentially results in magnifying kooky theories that are open or on preprint servers, rather than vetted information.

This is a horrible choice for publishers. How can they afford to keep to their mission, run publishing operations, and get reliable, trustworthy information to their readers? In the emerging Diamond open models, nobody pays: not the author and not the reader. But there are still considerable expenses borne by the publishers and the authors. People don’t work for free; they have to feed and house themselves. Labs and research cost money to run. Editing, layout, computers for distribution of information, software platforms, etc., carry considerable expense. Is the difference between Diamond open access and preprint servers peer review? Or do preprint servers actually have peer review out in the open by allowing comments?

The knee-jerk reaction seems to be to get a grant or a sponsor for research and authors. The days of government-supported research may be narrowing, and the grants are imperiled. Competition is tough. As an industry, we have swung to government support and university sponsors. One driving force was that those would be neutral sources, not profit-driven. In the 70s and 80s there was more partnership with, and funding from, industry. In today’s climate, people are less inclined to look to for-profit money to support them. Perhaps that introduces bias as well.

Does the utopian model of Diamond open access move us down an unsustainable path? Are we headed toward a world where only those with deep pockets and an objective to support will sponsor research and publication? That feels seductive and dangerous. But maybe we’re already there.

  • By Marjorie Hlava
  • Apr 17, 2025, 9:29 AM

The problem isn’t magnified just by LLMs crawling preprint servers, but by authors using LLMs to write and post preprints. I was recently researching the use of LLMs for peer review and found what looked to be a useful review/comment preprint article with important references. But when I tried looking up the key references it became obvious that they were all hallucinations and the entire preprint was an LLM-generated fake. The kicker was that X accounts, including from a certain site focused on retractions, were promoting the preprint as supporting the proposition that AI could replace human peer review.

  • By Daniel Evanko
  • Apr 17, 2025, 10:11 AM

Absolutely, Daniel. Without having any sort of peer review, it all just turns into an LLM-generated mess that makes it harder for researchers to wade through.

  • By David Green
  • Apr 17, 2025, 3:07 PM

Marjorie — I couldn’t agree with you more. There is no better way to undermine science than to flood preprint servers with bad science so that it can come out as “truth” at the other end of an AI process.

  • By Roy Kaufman
  • Apr 17, 2025, 12:17 PM

Hi Marjorie — Yes. LLMs are also adding to the unraveling, and it’s dangerous that they are pulling in unverified, unvetted research into their algorithms. We really need to improve our systems of peer review and publishing before it’s too late.

  • By David Green
  • Apr 17, 2025, 3:06 PM

I couldn’t agree more, David. Although there is value to preprints, I’m very concerned that too many people in the scholarly community treat them like peer-reviewed articles. This practice now extends to including them in scholarly research databases like PubMed and Dimensions, which increases the likelihood of people judging them similarly to peer-reviewed articles, or not even realizing that what they are looking at is a preprint.

To emphasize the fundamental difference between preprints and peer-reviewed articles, a few years ago the AACR journals changed their referencing policy to state that references to preprints should be placed inline in the main text, more like how a personal communication or other unpublished observation must be referenced. By not including them in the main reference list at the end of the article, this emphasizes just how distinct they are from peer-reviewed articles, books, etc.
In relation to citations, I was recently alarmed to see that preprints are being counted as citing items when calculating citations to peer-reviewed articles in Crossref data that the publishing community relies on. This provides a simple method for authors and citation rings to boost the citations of peer-reviewed articles. Unfortunately, there is no easy way to eliminate any resulting false citations from services like Dimensions that use this Crossref data, and it becomes necessary to rely on companies like Clarivate to tightly control which sources they include when counting citations.

I think it is critical that the community take efforts to reverse actions that are facilitating the conflation of preprints with peer-reviewed articles. Preprint data doesn’t need to be removed, but it should be clearly segregated by default so that a person needs to make a conscious decision to include it when they are searching and evaluating scientific communications.

  • By Daniel Evanko
  • Apr 17, 2025, 9:30 AM

Thank you for this, Daniel. I didn’t realize that preprints were being pulled into scholarly databases like PubMed. I can see how this adds to the confusion and will continue to make it harder to know what research is trustworthy and vetted.

The approach of AACR journals on how to cite and reference preprints sounds really interesting. I like that it is helping to distinguish verified research from non-peer-reviewed preprints. I hope more journals start to think critically about how to display this type of information.

  • By David Green
  • Apr 17, 2025, 3:12 PM

There have been a number of studies which show that when a preprint is subsequently published in a peer-reviewed journal, the changes, if any, are minor. This suggests I don’t have to wait for peer review; I can trust the preprint. However, this suggestion is weakened by the growing number of journal articles being retracted. If the peer-reviewed ‘version of record’ can be flawed, then so can the preprint version. This circular argument suggests that the central issue isn’t peer review or a publication’s container (preprint server or journal); it’s trust.

So, how to boost trustworthiness in preprints and journal articles? I’ve spent the latter half of my publishing career working with research institutions outside the academy. There’s quite a range of them: IGOs, government bodies, NGOs, independent research centres, think tanks, and companies. Whoever they are, they share one behaviour when one of their staff publishes something: they stick their logo on it. These institutions value one thing above all else, their reputation, so sticking their logo on something isn’t done lightly. Hard to win and quick to lose, their future funding depends on it. So before they allow their logo to adorn a publication, they’ll make sure the content is reviewed — and, from my experience, these steps can make the academic peer-review process seem superficial: I know of one organisation whose major annual report has 80 reviewers. To be clear, this does not mean the organisation is marking its own homework; they organise and manage a review process, they don’t do it themselves: independent experts do the reviewing. Trust the logo, trust the content. Have medium/low trust in the logo . . . you get the drift. As with journal brands, trusting logos isn’t binary, it’s a spectrum.

Now, I bet you’re thinking: this is simply moving the management of the peer-review process from journals to universities. Hardly a big change. Well, I think it would be. Firstly, it would remove the conflict of interest that APC-funded journals must face with every possible rejection. Secondly, with the university’s reputation on the line every time one of their staff publishes, you can be sure they’d make sure the review process is robust. Would this eliminate mistakes, errors, and fraud? No, of course not; nothing is perfect. Thirdly, with universities footing the bill for peer review, there would be downward pressure on the number of papers submitted for review (What! Again, Doctor X? But this is the tenth paper you’ve submitted this year and it’s only April!). Crucially, it would bring an actor into the frame who is startlingly absent from the trust system at the moment — a heavyweight actor which should surely be deeply invested and active in research integrity and trust in publications, especially with what’s going on in the US and in other countries at the moment.

I also think it would be a step towards another win. By bringing the management of the peer-review process in-house, as non-academic research organisations do, I reckon universities would also be taking a big step toward an expansion of the diamond open access publishing model. Why? Because this just happens to be the most common publishing model used by non-university research organisations. Win-win.

  • By Toby Green
  • Apr 17, 2025, 11:02 AM

Facts are under attack. A defense of peer review is not going to save us. (Apologies for the snarky comment, but the academy and scientific research are facing far graver problems.)

  • By Daniel Dollar
  • Apr 17, 2025, 12:31 PM

Thank you for this post. I want to mention that in the subject of Mathematical Physics, the preprint platforms, such as the arXiv and Hal, play a very important role in the fast communication of new research results, and this has been the case for many decades (in other subjects as well, of course). There are good reasons for the success of the arXiv, but in this comment I want to point out another benefit of the preprint archives that may not be so well known, namely their benefits to diamond open access scholarly publishing.

We have recently (in April 2021) founded a new journal titled “Open Communications in Nonlinear Mathematical Physics”:
https://ocnmp.episciences.org/
This is a Diamond Open Access journal, and all its articles (after being vetted and accepted for publication in a strict peer-reviewing process) are overlaid at the arXiv or at Hal (the authors have a choice between the two). This process is made possible for our journal by Episciences, who have an agreement with the mentioned archive platforms. This archive overlaying benefits our authors, as it helps make their papers more visible to a large and relevant community, and it certainly benefits our journal as well.

In this sense, the preprint platforms can play an important role in trustworthy scientific communication, and they can moreover help promote the Diamond Open Access publishing model, which is in my opinion the best way forward for scholarly publishing in general.

  • By Norbert Euler
  • Apr 17, 2025, 4:31 PM

The overlay idea is old but very much a direction I’d like to see. Thus, good luck!

And I think even some commercial publishers are taking notice. For instance, Elsevier now even encourages authors to deposit their submissions as preprints. In arXiv fields you can, could, and really should also eliminate the submission hassle; authors would merely submit a DOI.

As for the article itself: there were a lot of questionable claims. For example, if you need a peer-review stamp to cite a paper (or a preprint), you shouldn’t really be doing science. (Hint: maybe try reading it?) Although not always, peer review is increasingly also a lottery for authors of decent-enough work. For that reason too, I am increasingly in favor of the arXiv. (And yes, I cite plenty of work there too, but of course I know what I am doing.)

  • By Jukka
  • Apr 27, 2025, 1:15 PM

Thank you, Jukka. I agree with you about the arXiv. Just to clarify one point that I made earlier about the overlaying of the articles that get published in our journal. Our procedure is as follows: the authors first need to post their paper on the arXiv (or at Hal) before it can be submitted to the journal. The submission to the journal is then done by the authors themselves at the journal’s website, simply by using the arXiv identifier (it’s very simple). If the article is accepted for publication, the authors need to overlay the final accepted version (in the style of the journal) and hence update their original arXiv version with the final accepted version. (Incidentally, later updates or corrections are also easy to implement at the journal and at the arXiv if necessary.) This is somewhat different from what commercial publishers allow and what you have referred to. Our journal is a Diamond Open Access journal, so we don’t have the financial aspect that commercial publishers obviously have (they of course want the reader to purchase the final journal version, which could contain some corrections with respect to the originally posted arXiv version, or else the authors must pay for open access).

  • By Norbert Euler
  • Apr 27, 2025, 8:54 PM

Thank you, David, for your thoughtful post. Peer review is essential to science, and the community must ensure it conveys that importance to society at large. Preprints do have a long history, extending back to before the Internet as printed papers circulated informally, and while their move to digital form has come with benefits, there are also perils, which you note. One easily remedied peril is linking every digital preprint that’s been published to its version of record. Some servers (eg, bioRxiv) link to a published VOR programmatically, others allow authors to manually add a link to the VOR, but unfortunately some servers, including one prominent one, do not have a metadata slot for a link to the published, peer-reviewed VOR. Such a disconnect disserves the scientific endeavor: Readers encountering a preprint should be able to see whether it’s been peer-reviewed and published.

  • By Todd Reitzel
  • Apr 18, 2025, 9:32 AM

Hi Todd– Absolutely. We need to make sure that the informed reader can know whether or not the preprint has ultimately passed peer review and was published. Linking the preprint with a VOR is a great step that should be the norm. It does make me wonder what happens to the preprints that are posted but never peer-reviewed and published.

  • By David Green
  • Apr 18, 2025, 3:31 PM

David, now that you’ve broken the ice as a guest chef, I’d suggest another post on Stacks Journal(s). It might be a bit awkward writing about one’s own endeavor, but SK has had other posts along these lines. Stacks does seem different in a few key regards:
– peer reviewers are listed below the authors as “collaborators” (looks like they have to opt in and can remain anonymous if desired)
– peer review by committee. How does that work if double-blind? Do reviewers all review independently and then get a chance to see each other’s reviews? Is there an active editor, or is it on the reviewers or authors to reconcile?
– looks like reviewers can self-invite? Yesterday there was a “join peer review” link on an article fairly close to my domain (hydrology) that I started to click into but paused. Today the link is closed. Is there an editor, or is it an algorithm for selecting reviewers?
– your stated reason for starting Stacks was that time and quality matter. How’s it going? Times to publish seem unremarkable from my browse.
– You developed your own backend software? That’s a big lift, but it looks good. Seems like that’s a story worth telling.
– 3-click submissions for authors? Really? Seems too good to be true considering the complexity of article production and what a pain point Editorial Manager or ScholarOne have been for me.
– Lots of irritated authors threaten to start their own journal, but few actually pull it off. How’s it going a year in? Looks like you’ve had about 25 articles published or in the queue, which seems pretty good for the start of a niche journal, but I imagine you’ll need at least 5 to 10x that to be sustainable?
– What do you know about your authors’ motivations for taking a chance on an unindexed startup? Shared visions? Cost? Finding you by word of mouth through social media? I can’t imagine The Wildlife Society, etc., embracing Stacks since they have their own journals.
Seems like you have a fascinating story, which is way too much to bury in the comments on a post on a different topic. Good luck!

  • By Chris Mebane
  • Apr 18, 2025, 9:38 AM

Hi Chris — Thanks for the idea to write a longer post about Stacks. I’ll chat with the folks at SK!

I would be happy to answer your questions and talk more about Stacks. Let’s schedule a time to chat more. Please reach out via the contact form on our website so we can find a time to connect.

As for the peer review in hydrology — we quickly got 5 vetted reviewers to join that one, so we closed that review request.

Thanks for your interest in what we’re building!

  • By David Green
  • Apr 18, 2025, 3:39 PM

This is an important topic, and it’s always good to have debate on the whole peer-review process and pre-prints’ place in the publishing ecosystem. As a journal editor, I always feel the peer-review system is under tremendous strain, and while not fully broken, it is far from what one hopes it could be. I would even support the judicious use of LLMs to aid the process. Of course the keyword is judicious, and how to actually implement (and enforce) ‘judicious use of LLMs’ is going to be a challenge.

Regarding pre-prints, I do think they serve a purpose. In my own case, I’ve put one paper up on bioRxiv that I tried to publish in a couple of journals. It was an entirely culture-based population analysis of the microbes living in salt marsh sediments. It represented a huge amount of work, but by the time we submitted it for publication, the molecular pendulum in microbiology had swung so far that it was not possible to publish an entirely culture-based analysis in a reputable microbiology journal. Choosing the pre-print route at least got the work out there. I think it’s been cited once or twice, and there are some pretty interesting findings; even if no one cites it, I can hope it inspires an ‘aha moment’ in some colleague who will follow up.

In another case, I worked with colleagues in Germany, playing a relatively minor role in a study they were doing. Unfortunately, the student finished, their immediate supervisor moved to another position, and although the work was really good and 90+% complete, they could never find the impetus to finish the final paper for submission to a journal. I suggested submitting it as a pre-print, but that didn’t happen. Now I am working in an entirely different ecosystem, and sequences we discovered in a German cave have great relevance to the Arctic tundra. The sequences are in GenBank, and I know exactly how valuable they are, but I have nothing to cite, not even a pre-print. Frustrating.

So personally, I will continue to make ‘judicious use’ of pre-prints, including posting papers that I submit for peer review: not all of them, but the ones I think are of value.

  • By David Emerson
  • Apr 18, 2025, 11:50 AM

David, great to see you on SK and always a pleasure to engage with you on Bluesky too. I too am a big fan of Stacks and the work you’re doing there.

This article takes me back to a bugaboo that Richard Sever and I share re: peer review. Is it good? Yes. Does it bring value? Yes. But peer review, as it exists now, doesn’t have nearly the rigor it used to have when there were fewer papers, fewer burnt-out peer reviewers, and less pressure to publish in high volumes.

It certainly doesn’t have the rigor of the internal peer review that the CDC and other government agencies used to have, or that orgs like the OECD have (as Toby Green rightfully points out). Reproducibility, examination of data, etc. are much more common in those areas than in traditional publishing peer review (TPPR). TPPR doesn’t have the resources, expertise, time, or infrastructure to bring that kind of rigor, and so the work cannot be validated in many ways. I think, at its most valuable, good peer reviewers take the time to assess arguments, push for context, and request clarity. WRT “rigor,” “research integrity,” etc. — how many reviewers (excepting @Rick Anderson 😉) go line by line and check every citation to make sure each one is appropriate, accurate, etc.? How many replicate methods to determine if they’re rigorous? It just can’t be done.

So while your critiques of preprints have merit, I don’t think the strongest argument is that they lack peer review. Or perhaps you’re overstating the value of peer review as “the gold standard.” It’s the best standard we have so far, but it’s not nearly doing enough to slow down the churn of junk science. The publishers themselves are tasked with doing that — at great expense.

That said, your concerns about the discoverability of preprints (given their lack of peer review) and the public’s and journalists’ lack of understanding of their origins and vetting make them a real challenge. In the end, depending on the field, I think they’re valuable in replicating the scientific process — scientists sharing findings and engaging in good faith with questions, critique, and feedback — that typically goes on at the lab water cooler, at conferences, via manuscripts being passed back and forth, in grad student reading groups, etc. Enabling that kind of transparent engagement is foundational to the practice of science (from HHS to STM). Perhaps preprints aren’t the best vehicle, but until we uniformly get away from the “paper” paradigm toward more iterative, open forms of communicating findings (registration and open peer review come to mind), I can’t think of another way.

Always welcome this dialogue and thanks for the thought provoking piece. Apologies for typos — not yet fully caffeinated.

That said, your call for scientists to get back to what they exist for — the transparent, good faith search for knowledge — is well taken.

  • By Sara Rouhi
  • Apr 21, 2025, 10:21 AM

Hi Sara — thanks for the note and for adding your voice to the conversation.

Yes — I completely agree with you. Peer review as it exists now doesn’t have nearly the same rigor it used to have when there were fewer papers, fewer burnt-out reviewers, and less pressure to publish in high volumes. This is very true.

And the increasing volume of junk science is definitely alarming. But I feel like the only way to help distinguish junk from good is to have some level of internal checking by scientists. This is certainly the goal of peer review, and one that I believe most scientists still stand by. I don’t think the answer is to allow for the complete free-flow of information without any sort of vetting.

And yes — preprints are great for repeatability and transparency. To be honest, I think that is one of their greatest strengths. Transparent engagement about new findings is very beneficial, and I know for sure that collaboration like this creates better science (as we’ve seen through Stacks’ peer review process). But I don’t know if I’ve seen the level of engagement that would be required for this vetting to happen on a preprint server alone. I know that is certainly the goal, and the great cause of organizations like PREreview, but until it happens at scale I think we’re going to see more junk science posted to preprint servers.

Happy to chat more — you know where to find my DMs on Bluesky!

  • By David Green
  • Apr 21, 2025, 11:53 PM

David, do you think it is possible that limited engagement with pre-prints results from them not being seen as valuable enough? Your comparison of a pre-print to a blog entry (although I know many blog entries of better quality and scientific rigour than peer-reviewed papers), perhaps unwillingly but unfortunately, contributes to this perception. Signed reviews that counted towards a scientist’s publication record when used for grant applications, work promotions, etc. would change the game.

  • By Marta
  • Apr 22, 2025, 5:08 PM

This is super old news, exactly what preprint critics have always said. (Too bad math and physics didn’t learn this lesson decades ago; now they’re just junk disciplines!) I note this piece offers no actual evidence of harm caused by preprints, the benefits of which are well documented (as well as the problems): https://firstmonday.org/ojs/index.php/fm/article/view/12941.

I’m particularly struck by the bizarre use of the 7% comment rate statistic at bioRxiv. Literally no one thinks that this replaces peer review. To be halfway fair in this argument, how about reporting the percentage of bioRxiv papers that have subsequently been published in peer-reviewed journals, and the extent of the differences between versions? These issues are well studied.

This piece is counterproductive, and its publication here is disappointing. It’s the kind of advice many junior scholars get from their misinformed mentors, to the detriment of their fields and careers, and it is advice I have been laboring against for 10 years.

Philip Cohen
Director of SocArXiv

  • By Philip Cohen
  • Apr 22, 2025, 1:43 PM

As the co-founder of protocols.io, I’ve spent the last decade trying to make research communication more reproducible and trustworthy. So I think we share the same goals.

That said, I find this piece to be entirely devoid of data that one would hope to see from a scientist. Without data, you can easily make arguments that preprints hurt or help. For example, preprints help to correct science through the following:

1. They help scientific discourse when problematic papers (with or without peer review) are published by giving a forum to quickly rebut faulty publications.
2. They address the well known challenge of a bias against negative results in traditional journals.

In general, any negative impact of preprints must be weighed against the positive of speeding up science communication by 6-12 months, helping research fields move and correct more quickly.

Without data regarding benefit versus harm, this piece is very much a subjective opinion that is not at all persuasive.

Kind regards,

Lenny Teytelman, Ph.D.
President and founder, protocols.io

  • By Lenny Teytelman
  • Apr 22, 2025, 6:08 PM

Lenny’s point is bang on. This piece is lacking the evidence for its core argument. At the ScholCommLab, we have been trying to fill some of these knowledge gaps (such as the cited piece on the public’s misunderstanding of preprints).

For example, we also found evidence that, since the pandemic, there has been a stark decline in the number of preprints mentioned in the media (see https://journals.sagepub.com/doi/10.1177/10755470241285405).

This decline could mean that preprints were often mentioned in the media because the emergency phase of the pandemic required a rapid response, and not because journalists were being irresponsible (as alleged here). It won’t surprise you to read that, as a researcher, I think more work is needed—until then, it might be best to bring some nuance back into the conversation.

For those interested, here are a few other recent pieces we’ve written on preprints (although, alas, one of them is _just_ a preprint under review):

– Preprint servers and journals: rivals or allies? https://doi.org/10.1108/JD-09-2024-0215
– “Does it feel like a scientific paper?”: A qualitative analysis of preprint servers’ moderation and quality assurance processes. https://osf.io/drtj6/

Juan Pablo Alperin
Co-Director, ScholCommLab (https://www.scholcommlab.ca/publications/)
Associate Professor, Simon Fraser University

  • By Juan Pablo Alperin
  • Apr 23, 2025, 1:35 AM

I know that this post was written with thought and care, but would it have been so difficult to spend two seconds learning about the ***decades-old*** preprint culture of mathematics and physics before condemning the use of preprints tout court?

  • By Simplicissimus
  • Apr 24, 2025, 7:00 AM

It also might be worth further examining the differences between different fields of research and the cultures in those fields to better understand why preprints have become more acceptable in some areas than others. One thought might be the stakes involved in a patient care preprint versus a theoretical physics preprint. Another might be the costs/effort of verification of a mathematical proof versus a clinical trial or a wet bench study involving model organisms.

Also, it would be interesting to understand why some fields of physics use arXiv extensively and others less so.

  • By David Crotty
  • Apr 24, 2025, 8:36 AM

The idea that peer-reviewed journals offer a reliable certificate of sound thinking is the problem.

Many scientists, in rich countries, publish in paper-mill journals to be able to show their institutions that they are productive (how else would these journals stay alive?). Many of the highest-ranked journals have published daffy papers and refused to retract them when people point out that the papers are daffy (arsenic life, stripey nanoparticles, etc.). Journals themselves have taken daffy editorial positions based on political alignment:

https://whyevolutionistrue.com/2024/06/13/nature-writes-about-gender-semantics-rather-than-science/

Individuals will find good filters for what is true and what is not; make it cheap and easy to publish and discuss. If journals offer value, people will incorporate their opinions into their own decision-making policies.

  • By Maneesh Yadav
  • Apr 25, 2025, 1:23 AM

Comments are closed.

Official Blog of:

Society for Scholarly Publishing (SSP)
