Editor’s Note: Today’s post is by Michele Avissar-Whiting. Michele is the Editor in Chief at Research Square, a multidisciplinary preprint platform that launched at the end of 2018.

In my role, I often imagine what publishing looks like in a preprint-first world. It’s a world in which researchers are in full control of when and where their stories are shared; the pressure to disseminate findings is relieved, and we rely on the consensus of the world’s experts to understand their validity. It’s a world where we all have equal access to scholarly research, one in which we all face more content than we know what to do with and, thus, tools for filtration and curation are modes of survival. It’s also a world in which the Mertonian norms are reified to the extent that those who would circumvent the basic ethics of scientific conduct may find it more difficult to thrive. That last point is the one I want to elaborate on here; it has emerged a number of times when I haven’t been looking for it.


Most preprints come to the Research Square platform because an author submitting their paper to a participating journal has opted to also post it as a preprint. Choosing to preprint is sweetened by the prospect of immediate primacy: earlier attention, feedback, and citation. In the case of In Review, it also offers public transparency of the journal’s editorial process, along with a number of other features. Unsurprisingly, many authors choose this option, and most first-time preprinters benefit from the experience in ways that surprise even them. But this may only be true for people who are genuinely interested in intellectual pursuits, those who sincerely desire maximum exposure for their work, those who hope that exposure will lead to constructive feedback — in other words, people who are proud of their contribution to science.

In her recent Scholarly Kitchen post, Leslie McIntosh suggests that some authors exploit preprint servers as cover for ethically dubious behavior. But these platforms, in their inherent transparency, make terrible hiding places. Rather than being a venue for misconduct, preprint servers — especially those closely tied to publishers — have the potential to help reveal wrongdoings that have found cover for years in the opaque practices dominating scholarly publishing. Let me explain by way of real-world examples.

Plagiarized figures

About a year ago, we received a message from a concerned author asserting that some of the figures in a preprint on our platform were essentially copied from her published article. On close inspection, I found that the graphs were indeed very similar to hers (error bars and all). There was also some text overlap, but hardly enough to provoke an editor’s suspicion. Indeed, only the authors of the original figures would be properly equipped to stumble on this anomaly. The preprint had already been desk rejected at the journal to which it was submitted, but I alerted the publisher anyway (lest they encounter this author again), and we withdrew the preprint with a note indicating ethical concerns. This way, a diligent editor or reviewer checking the preprint while considering the problematic paper for publication would be alerted to the issue. As far as I can tell, the plagiarized paper has not — as yet — been published in a journal.

The topic of figure manipulation is generally interesting because plots and images reflect the underlying research data, which are the ground truth of a study. At the moment, we rely on sharp-eyed editors and reviewers, and sometimes professional sleuths, to catch the most egregious issues. But some deceptions would be unreasonable or impossible for even the most vigilant sentinel to catch; think graphs lifted from other work or fabricated altogether. By making a paper publicly available immediately, however, we tap into an entire army of potential scam catchers. That army includes the tens or hundreds of researchers who are close enough to the subject matter to take a sincere, personal interest in its legitimacy. They are the best equipped, not only to understand and validate the work, but to spot fraud within it. Catching these things early may prevent wasted editor and reviewer time downstream and avoid the notoriously cumbersome process of journal retraction. In this sense, a preprint is an opportunity to crowdsource useful information, another tool in an editor’s toolkit.

The stolen paper

In early March, I was contacted by an editor at a journal unaffiliated with our platform, claiming that a reviewer recruited for a submitted manuscript had turned around and posted the paper as their own preprint on our platform. The preprint included a citation to another work by the real author, which triggered a citation alert and thereby exposed the misappropriated preprint. A full investigation on our end confirmed that the document file, which the deceitful reviewer had submitted to an affiliated journal, still bore the name of the original author in its metadata. This particular story got extra weird: the accused reviewer/false author denied his involvement and pinned the illicit submission on an assistant with a grudge. Through some digital sleuthing, we were able to work out that the reviewer had used a fake identity to upload the paper.

Ultimately, the reviewer’s institution and the publishers involved were fully apprised of the situation, and I suspect his name has been added to a growing list of charlatans. The preprint was withdrawn, and the rightful author was able to proceed with publication. Unfortunately, the lifted submission had already drawn on the valuable time of several editors and reviewers before the scam was found, but it won’t waste anyone else’s time now.

Tip of the iceberg

This article would get too long if I listed them exhaustively, but the Research Square team has seen all manner of oddities since this project in radical transparency began. Many instances of research misconduct that would otherwise have likely slipped quietly into publication have surfaced: paper mills uncovered by virtue of submissions to multiple different journals passing through a small preprint screening team, near misses of duplicate publication caught in time because of the preprint, authors discovering their inclusion on papers to which they did not contribute. Some predatory journals ask authors to remove their ‘duplicative’ preprints because, as it turns out, they don’t issue a proper DOI that can link the preprint to the journal article. More than a few authors have become aware that they are dealing with a predatory journal simply based on the way those journals regard their preprint.

All of this leaves me wondering: If I’m only seeing the cases I accidentally trip over, how often are these things happening in the comfortable shade of the closed-review and closed-access publishing system? How many authors can cheat their way through, “cash out” on their publications with little risk of discovery, and undermine the entire system in the process? To what extent has the closed nature of the system given cover to predatory practices at journals?

Preprints didn’t create these problems. Research misconduct has been happening all along, probably at a scale we’d all find disturbing. Any attempt to put a number on it is inherently confounded, though there have been admirable pursuits (e.g., example, example and example). As many before me have pointed out, these behaviors are the product of a system where incentives are aligned toward publication rather than true discovery. This, combined with constant growth in research output, the ceaseless proliferation of journals, predatory publishing practices, and an inadequate, exhausted, and still opaque peer review system, has brought us to this place. Perhaps, ironically, it has also encouraged the popularity of preprints, which may deliver us from the worst of these problems once more widely adopted. Misconduct thrives in darkness, and preprints may end up being the 10,000-watt bulb.

Discussion

17 Thoughts on "Guest Post — The 10,000-watt Bulb: How Preprints Shine a Light on Misconduct"

I hope that your observations are borne out in practice and that these are rare examples. The examples you cite are of copyright infringement. The other and possibly more common misdeed is publication of fraudulent data. The lawsuit filed against Dr. Elisabeth Bik is distressing and, if successful, renders Ms. Avissar-Whiting’s thesis moot ( https://www.the-scientist.com/news-opinion/elisabeth-bik-faces-legal-action-after-criticizing-studies-68831 ). Even if unsuccessful, I now question who would be willing to face the expense and social media exposure to discuss possible fraudulent behavior. A method must be developed to address this, at both the preprint and published-article level, so junk science does not permeate our various niches of research.

Thank you for the comment. It’s not clear to me why a successful lawsuit against Dr. Bik would render my thesis moot. If you simply mean that the threat of legal retribution will discourage whistle-blowers, I see that as a separate issue, and one against which the institutions dedicated to preserving scientific integrity must push back vociferously. I’m absolutely in agreement with you that we need better and more official channels for serious discussions of this nature. PubPeer is the best we’ve got right now, and it’s a largely anonymous forum, which says something about how “safe” people feel calling out problematic behavior in science. We have to do better.

It is to be expected that those who run preprint servers will defend them. But what about the following?
Preprints spreading conspiracy theories — see Zenodo.
Papers posted as preprints that are later rejected by journals remain online (without any indication that they have been rejected) and get indexed in PubMed, ResearchGate, and others without a “rejected” status label – completely undermining the idea of peer review and quality.
According to Kent Anderson, “Somewhere north of 30% of the research outputs on these big preprint servers are rejects — old papers never published in a journal. But they still have DOIs, PDFs, usage data, and citations gathering views and showing up in searches like any other paper.” Preprints are responsible for confusing and destroying trust in the literature as well as disseminating false news. Look at the tweet below from an Open Science Project Manager at Wellcome Trust (acknowledgement to The Geyser):
“I don’t think we were ever destined for a stellar write-up from this pundit.
Maybe our model isn’t perfect, but neither is the existing system for all its ‘editorial accountability’. So let’s keep tweaking. And maybe, just maybe, not everything needs to pass peer review.”
It’s fine to say misconduct has always been around — but preprint servers gave these people an excellent forum and showcase! Unless preprint servers tidy up their acts and introduce much more stringent quality standards, the situation will clearly continue to escalate – they have had plenty of time already.

I agree with Shiloh. Dr. Avissar-Whiting seems to be supporting preprint servers because they serve as a place that attracts misconduct so that we may then catch those who perpetrate it. A preprint server seems like bait on the end of a fish hook. Dr. Avissar-Whiting may have “stumbled” on a few cases and caught misconduct, but the odds are that the preprint service attracted many more instances she hasn’t stumbled on. That seems like pretty slim anecdotal evidence of the value of a preprint service for catching misconduct. To catch misconduct, we need stronger peer review, not fishing bait to encourage more of it.

It seems that preprint servers such as Research Square have morphed into something very different from the arXiv- and bioRxiv-dominated early days of preprint servers. Early on, those posting to preprint servers seemed to do so to foster debate and to establish priority for papers that might take months to come out in mainstream publications. Now it seems to serve for some as an alternative “publishing” model. Michele, what fraction of postings actually receive substantive comments? What fraction of postings remain unpublished after a year? I suspect the former is a very small fraction and the latter is a much larger one.
Why do “Res Sq” and other preprint servers look like journals, accrue citations in Google Scholar, and get picked up in NIH’s NLM? At one level, shame on GS and NLM for allowing this, but it seems that ‘Res Sq’ considers it desirable. From the website: “posting early lets you … get more citations.” Citations to where? “Res Sq”? Since Research Square is a for-profit company, it needs to mostly run in the black. Is the preprint service a freemium loss leader to bring submitters to the allied article-editing and article-promoting business services that are featured on the Research Square website? Nothing wrong with that, but calling the service a “10,000-watt bulb” shining a light on misconduct based on a few anecdotes seems a real stretch. Let’s see some data.

I agree with Mebane. Preprint servers seem more like a form of social media, like Twitter and Facebook. We’ve already seen how those worked out. They disseminate information well when used responsibly, but they are probably more effective at spreading untruths and ideology.

There seems to be a misunderstanding here. All the instances provided here are of papers that were in some stage of the peer review process at a journal. These are not authors who are looking to preprints as a final destination. It’s likely that they have opted in without serious consideration of what it means to have their work posted online for anyone to see. I am certain that some of these would have made it all the way to publication had the issues not been caught by virtue of the preprint. The practice I’m championing here is that of preprinting *as part* of the journal publication process, not *in lieu* of it. I have my own personal convictions about whether preprints (with appropriate engagement) can be successful as “an alternative publishing model”, but those are tangential to the points being made here. In the current world, preprints can act as a resource to journal editors and reviewers, and I believe it’s in their best interests to require or at least strongly encourage authors to post them.

Chris – you are right that our business model is around helping authors (preprinters or not) realize their publication goals through the services and products we’ve developed over the past 17 years. More on that here: https://scholarlykitchen.sspnet.org/2020/10/08/preprints-author-services/.

You are championing the responsible use of publishing preprints. I get that. I am saying that while the responsible use occurs, there will be irresponsible use occurring in parallel. Twitter and Facebook are perfect examples I believe. When it comes to the integrity of science, the benefits of responsible use are not worth the costs of the irresponsible use.

In general, research has shown that around 70 percent of what is posted is ultimately published. I’m curious to know what fraction of perfectly good research articles DON’T pass through peer review thanks to human factors, like gender and other demographic biases, or overworked, inconsistent, or underprepared reviewers. Research posted on preprint servers is not perfect, nor is it meant to be, but let’s face it: much research published in journals is not perfect either. Every system has its flaws, and resting judgments of research quality on a few fallible human beings (which is what we all are at the end of the day) is not such a glimmering standard. I think we need to allow more involvement from the greater research community – and earlier in the publishing process. Not only would it help guide publication decisions; it can, as Michele A.W. says, shine a really strong light on misconduct, the kind that can more easily thrive in the closed peer-review process and behind paywalls.

I have written a comment on a paper, exposing falsehood after falsehood, plus other research misconduct in a paper on famine, where bad economics can kill millions. The journal’s 49-person editorial board has not challenged my facts or economics in any way, but refuses to publish. (see https://www.timeshighereducation.com/author/peter-bowbrick). I have worked on this particular episode for years, and have a large research library on it. I have published a lot of comments and refutations over the years. In the past it was a matter of honour for an editor to publish a critical comment. Neither the journal, Cambridge Journal of Economics, nor the publisher, Oxford University Press, hold this view. So what alternative do I have to publishing it as a preprint? Or letting people die?

To Shiloh’s comment, “Papers published as preprints that get rejected as papers remain online (without an indication that they have been rejected) and get indexed in PubMed/ResearchGate/and others without the rejected label status – completely destroying the idea of peer review and quality”: to me this is not an issue of peer review but simply of journal selection. How many papers get rejected where 2 out of 3 peer reviews approve but 1 does not? Where is the transparency? At least the preprint is transparent and open to all. It is much harder to find malfeasance behind a paywall.
I personally think preprints should be listed on all the public servers like PubMed. Why restrict my access to the most recent material, especially when it takes six months to a year for work to get published? Why not let me decide whether the science is worthy? I don’t have to cite it, but to advance my science, shouldn’t I have access to it? Maybe if editors hastened the publication rate, we wouldn’t need preprint servers.
To me, the major difference between a publication and a preprint is the peer review and the $3,000 to $11,000 it costs to publish. The answer seems simple to me: pay the peer reviewers $2,000, post the paper and peer reviews on a server within a week, and be done. That addresses open access and reviewer burnout, would remove everything that is wrong with the JIF and predatory journals, and libraries would save millions. Publishers would suffer, so I can see why they are against companies like Research Square or posts such as this.

You do realize that the majority investor in Research Square is Springer Nature (https://www.aje.com/about/springer-nature/), yes?

Or that biorxiv and medrxiv are both owned and run by a publisher, Cold Spring Harbor Laboratory Press?

Or that publishers across the board have invested heavily in supporting and launching preprint services and infrastructure (https://scholarlykitchen.sspnet.org/2020/05/27/publishers-invest-in-preprints/)?

You may want to rethink your argument.

David, not at all – it does not matter who runs or pays for them – what matters is the standards they use to preserve the scientific record and prevent the dissemination of false news and deception. I am all in favour of negative results being published – but not false results, as is allowed on these servers. Whoever owns these servers, they all need to clean up their acts and get their houses in order rather than jumping on this wild-west bandwagon. It’s a spotlight, all right – but a spotlight that many perpetrators of false news, lies, and conspiracy theories want to step into.

To be clear Shiloh, my comment was in response to Terry Van Raay’s statement that publishers, “are against companies like Research Square or posts such as this.”

You mentioned that some authors have been alerted to the journal they are submitting to being predatory by their attitude toward preprints. Could you elaborate on that? It’s not clear to me what those attitudes would be and why they would have them.

Hi Erinn! I honestly don’t have a great understanding of why, but increasingly we’re finding that when preprint authors ask us to remove a preprint because the journal doesn’t allow it, that journal ends up looking very shady. It may be because preprints attempt to link to DOIs for versions of record, so the fact that some of these journals don’t issue DOIs may be awkwardly exposed at the point of preprint–VoR linking. We had an author withdraw an already-accepted paper because it came to light that the journal was not operating above board when it wouldn’t provide her with a DOI to link to her preprint. Or, perhaps more likely, they are misapplying the Ingelfinger rule, as suggested by Lisa Janicke Hinchcliffe in my Twitter thread about this a few months back: https://twitter.com/lisalibrarian/status/1365500693939945472. I would love to hear others’ theories on why predatory journals might not like preprints…
