Editor’s Note: Today’s post is by Michele Avissar-Whiting. Michele is the Editor in Chief at Research Square, a multidisciplinary preprint platform that launched at the end of 2018.
In my role, I often imagine what publishing looks like in a preprint-first world. It’s a world in which researchers are in full control of when and where their stories are shared; the pressure to disseminate findings is relieved, and we rely on the consensus of the world’s experts to understand their validity. It’s a world where we all have equal access to scholarly research, one in which we all face more content than we know what to do with and, thus, tools for filtration and curation are modes of survival. It’s also a world in which the Mertonian norms are reified to the extent that those who would circumvent the basic ethics of scientific conduct may find it more difficult to thrive. That last point is the one I want to elaborate on here; it’s a point that has surfaced a number of times even when I wasn’t looking for it.
Most preprints come to the Research Square platform because an author submitting their paper to a participating journal has opted to also post it as a preprint. Choosing to preprint is sweetened by the prospect of immediate primacy: earlier attention, feedback, and citation, plus, in the case of In Review, public transparency of the journal’s editorial process, among a number of other features. Unsurprisingly, many authors choose this option, and most first-time preprinters benefit from the experience in ways that surprise even them. But this may only be true for people who are genuinely interested in intellectual pursuits, those who sincerely desire maximum exposure for their work, those who hope that exposure will lead to constructive feedback — in other words, people who are proud of their contribution to science.
In her recent Scholarly Kitchen post, Leslie McIntosh suggests that some authors exploit preprint servers as cover for ethically dubious behavior. But these platforms, in their inherent transparency, make terrible hiding places. Rather than being a venue for misconduct, preprint servers — especially those closely tied to publishers — have the potential to help reveal wrongdoings that have found cover for years in the opaque practices dominating scholarly publishing. Let me explain by way of real-world examples.
About a year ago, we received a message from a concerned author asserting that some of the figures in a preprint on our platform were essentially copied from her published article. On close inspection, I found that the graphs were indeed very similar to hers (error bars and all). There was also some text overlap, but hardly enough to provoke an editor’s suspicion. Indeed, only the authors of the original figures would be properly equipped to stumble on this anomaly. The preprint had already been desk rejected at the journal to which it was submitted, but I alerted the publisher anyway (lest they encounter this author again), and we withdrew the preprint with a note indicating ethical concerns. This way, a diligent editor or reviewer checking the preprint while considering the problematic paper for publication would be alerted to the issue. As far as I can tell, the plagiarized paper has not — as yet — been published in a journal.
The topic of figure manipulation is generally interesting because plots and images reflect the underlying research data, which are the ground truth of a study. At the moment, we rely on sharp-eyed editors and reviewers, and sometimes professional sleuths, to catch the most egregious issues. But some deceptions would be unreasonable or impossible for even the most vigilant sentinel to catch: think graphs lifted from other work or fabricated altogether. By releasing a paper to the public immediately, however, we tap into an entire army of potential scam catchers. And that army is partially made up of tens or hundreds of researchers who are close enough to the subject matter that they take a sincere, personal interest in its legitimacy. They are the best equipped, not only to understand and validate the work, but to spot fraud within it. Catching these things early may prevent wasted editor and reviewer time downstream and avoid the notoriously cumbersome process of journal retraction. In this sense, a preprint is an opportunity to crowdsource useful information, another tool in an editor’s toolkit.
The stolen paper
In early March, I was contacted by an editor at a journal unaffiliated with our platform, claiming that a reviewer recruited for a submitted manuscript had turned around and posted the paper as their own preprint on our platform. The preprint included a citation to another work by the real author, which triggered a citation alert and thereby exposed the misappropriated preprint. A full investigation on our end confirmed that the document file, which the deceiving reviewer had submitted to one of our affiliated journals, still bore the name of the original author in its metadata. This particular story got extra weird: the accused reviewer/false author denied his involvement and pinned the illicit submission on an assistant with a grudge. Through some digital sleuthing, we were able to work out that the reviewer had used a fake identity to upload the paper.
Ultimately, the reviewer’s institution and the publishers involved were fully apprised of the situation, and I suspect his name has been added to a growing list of charlatans. The preprint was withdrawn, and the rightful author was able to proceed with publication. Unfortunately, the lifted submission had already drawn on the valuable time of several editors and reviewers before the scam was found, but it won’t waste anyone else’s time now.
Tip of the iceberg
This article would get too long if I listed them exhaustively, but the Research Square team has seen all manner of oddities since this project in radical transparency began. Many cases of research misconduct would likely have gone on to quiet publication: paper mills uncovered by virtue of pushing submissions from multiple different journals through a small preprint screening team, near misses of duplicate publications caught in time because of the preprint, authors discovering their inclusion on papers to which they did not contribute. Some predatory journals ask authors to remove their ‘duplicative’ preprints because, as it turns out, they don’t issue a proper DOI that can link the preprint to the journal article. More than a few authors have become aware that they are dealing with a predatory journal simply based on the way in which those journals regard their preprint.
All of this leaves me wondering: If I’m only seeing the cases I accidentally trip over, how often are these things happening in the comfortable shade of the closed-review and closed-access publishing system? How many authors can cheat their way through, “cash out” on their publications with little risk of discovery, and undermine the entire system in the process? To what extent has the closed nature of the system given cover to predatory practices at journals?
Preprints didn’t create these problems. Research misconduct has been happening all along, and probably at a scale that we’d all find disturbing. Any attempt to put a number on it is inherently confounded, though there have been admirable pursuits (e.g., example, example and example). As many before me have pointed out, these behaviors are the product of a system where incentives are aligned toward publication rather than true discovery. This, combined with constant growth in research output, ceaseless proliferation of journals, predatory publishing practices, and an inadequate, exhausted, and still opaque peer review system, has brought us to this place. Perhaps ironically, these forces have also encouraged the popularity of preprints, which may deliver us from the worst of these problems when more widely adopted. Misconduct thrives in darkness, and preprints may end up being the 10,000-watt bulb.