The four years of the Trump administration have been painful — indeed, traumatic — for a great many people, for a great many reasons. One source of distress has been the administration’s unprecedented assault not only on the truth itself, but also on the idea that truth matters more than political expediency. This has created an unusual challenge for journalists, who have always had to deal with politicians whose relationship with factuality is, shall we say, complicated, but who have never encountered an administration that misrepresents facts and actively advances falsehoods so constantly, so brazenly, and so reflexively.
We have to acknowledge, of course, that undermining the authority of “facts” and “objective truth” isn’t a phenomenon that originated with the Trump administration. The propositions that “reality” is whatever we all agree it is, that there is no such thing as historical fact, that “objectivity” is merely a pretense used by the powerful to defend their interests, and that the putative search for “truth” is really just a tool of oppression have been significant currents of postmodern and critical academic discourse and teaching for several decades. (Go back a bit further, of course, and you have Foucault asserting that “reason is the ultimate language of madness”; earlier than that, there’s Nietzsche: “The real truth about ‘objective truth’ is that objective truth is a myth”.) As President Trump has constantly attempted to twist or reverse the truth to fit his agenda, it’s been interesting to hear voices from quarters that once characterized objective fact as a myth and reality as a social construct now calling us urgently to stand up against Trumpism’s offenses against objective fact and reality. (To be very clear, none of this is to say that, as some have argued, postmodernism itself is to blame for Trumpism — though the mental image of the President consulting a volume of Derrida or Irigaray while composing his counterfactual tweets is kind of fun.)
Be all that as it may, for the purposes of this post let’s take it as given that there is such a thing as objective truth, and that it matters what the truth is. Furthermore, let’s stipulate that factual claims can generally be confirmed or debunked by appeal to empirical evidence, and that it therefore matters whether evidence supports the claim that America’s voting machines were infected with algorithms created at the behest of the late Hugo Chavez, or the assertion that Republican observers were barred from vote-tallying facilities, or the claim that Hillary Clinton’s campaign manager ran a child sex-trafficking ring out of a pizza parlor. If we can all agree, for the sake of argument, that there is such a thing as objective truth, that it matters, and that it can generally be established by appeals to evidence, then we can proceed with the questions I’d like to address in this post.
The questions are: what is the media’s responsibility to sort truth from error in public discourse, and what does this have to do with preprint servers?
Question 1: What Is the Media’s Responsibility to Sort Truth from Error in Public Discourse?
One thing the past four years have shown us is that if the media simply report things that political leaders say, and leave it at that, they may be doing only part of their job. When the statements of those leaders are expressions of debatable opinion or are factual statements that have some reasonable correspondence to the truth, reporting them without editorial comment has been the traditional journalistic approach, and is arguably the right one; it seems entirely appropriate for journalists, in their role as seekers-out and reporters of facts, to avoid putting their thumbs on the scale of genuine public debate. But what about when a powerful leader is saying things that are patently and dangerously false? President Trump, his proxies, and his formal spokespeople have created this dilemma to an unprecedented degree, and after a certain amount of understandable thrashing around and hand-wringing, the mainstream media have eventually settled on the strategy of characterizing his most blatantly false claims as just that: think about how many times you’ve read or heard sentences in news outlets over the past few years with qualifications like “the president claimed, falsely” or “the president asserted, without evidence.” It seems now to be broadly accepted that even when what’s being reported is not the presumptive truth of the statement but merely the fact that a public figure made the statement, there are circumstances in which the dangerous falsity of the statement itself really does need to be flagged.
But for those of us who accept that premise, a genuinely difficult question arises: whom should we trust as public arbiters of what is and isn’t patently and dangerously false? Once we clear the way for reporters to characterize blatant falsehoods as such, who will draw the line between blatant and dangerous falsehoods and assertions with which the reporter simply strongly disagrees?
That troubling question notwithstanding — and recognizing that not everyone will agree on where such lines should be drawn — it does seem to me that the line currently drawn by the mainstream news media represents a pretty reasonable distinction between what can be reported without comment and what needs to be flagged as a clear and potentially dangerous falsehood.
So why are we discussing this in The Scholarly Kitchen? That brings us to the second question:
Question 2: What Does This Have to Do with Preprint Servers?
Preprint servers, to which scholars and scientists can post preliminary reports of their research for public comment before submitting them for formal publication, aren’t intended to serve the same function as journalistic venues. While they’re open to the public, submissions to preprint servers are presented not as established science for public consumption, but rather as tentative findings for open discussion, mainly among other experts in the field.
Except when they aren’t.
A growing problem in the scholarly and scientific community is a population of opportunists who try to use preprint servers as a place to post crackpot pseudo-science and misleading public health information, all under the flag of scholarly “publishing.” They submit articles to preprint servers in the hope of publicizing them, counting on both an uninformed public and a too-credulous press to treat the reports as if they were vetted and peer-reviewed science published in a venue that is willing to accept responsibility for them. Just as predatory publishers have recognized in the APC funding model an opportunity to lie and make money, mendacious authors have recognized in the preprint-dissemination model an opportunity to lie and achieve political goals or professional advancement.
Here we see a direct connection to the journalistic issues raised above. Since the difference between publication in a peer-reviewed journal and “publication” in bioRxiv or medRxiv isn’t immediately obvious to non-specialists, journalists are a prime (and intentional) target for what amount to political scams: unscrupulous scholars and scientists (or people posing as such) posting papers to preprint servers and then touting them as having been “published.” Journalists who may or may not know better then report on these studies as if they represented vetted science.
How bad is this problem? A recent search of newspapers of record like the New York Times and the Washington Post suggests that these generally do a good job of identifying posted preprints as such, and making it clear that what they’re citing are unvetted scientific claims. I found that the Fox News website does this less well; articles with references to bioRxiv and arXiv often include qualifiers like “awaiting peer review,” but are just as likely to say things like “published in bioRxiv” or (worse) “published in the preprint journal arXiv.”
A more troubling set of data points suggests a larger and deeper problem, though: over the course of several recent posts in his newsletter The Geyser*, Kent Anderson has provided compelling evidence that white nationalists are disproportionately using unvetted preprints to promote pseudo-scientific racism; that alt-Right (and former Trump administration) figure Steve Bannon used CERN’s open-science platform Zenodo to amplify Dr. Li-Meng Yan’s dangerous conspiracy theory about COVID-19; and that other shady figures on the alt-Right have been taking significant advantage of the low barriers to “publication” that are a defining feature of preprint servers, and making disproportionate use of those venues to seed the public conversation with false and misleading claims designed specifically to push hateful and divisive narratives under the guise of “science.” The data and patterns he describes are startling and, in my view, worth serious consideration.
What can be done? I would suggest that just as the mainstream media (and, more reluctantly, social media platforms like Twitter and Facebook) have gradually come to the conclusion that they have a responsibility to flag obvious and potentially dangerous falsehoods as such when they appear on their platforms, the time has come for those who manage preprint servers to take a firmer hand in vetting the claims that are posted there and to consider retracting preprints when the public good requires it — recognizing that while the purpose of a preprint server is not primarily to serve as a dissemination or “publishing” platform, what affects the public welfare is not whether it’s intended to be used that way, but whether it is used in that way. In this context, it’s worth noting that despite multiple calls to do so, Zenodo has never removed (or even flagged) Dr. Yan’s COVID-19 conspiracy theory, despite its thorough, repeated, and public debunking. Similarly, a thoroughly debunked study that purported to find a causal connection between cellphone use and brain cancer remains on the bioRxiv site — where it is presented without editorial comment — and it continues to be cited. A deeply flawed study purporting to show similarities between COVID-19 and HIV was posted on bioRxiv early this year, and was eventually withdrawn by its authors following severe criticism by the scientific community. The term “withdrawn” is rather ambiguous, though, as the article is still on bioRxiv (though flagged with a banner indicating that it has been “withdrawn”).
In fairness, it should be noted that bioRxiv and medRxiv both currently have banners at the top of their pages, warning users that the preprints do not represent peer-reviewed science and should not be cited as such in the media or used to guide clinical practice — and they’re making efforts to catch bad science before it’s posted. These are steps in the right direction. Given the incredibly high stakes involved during the COVID-19 crisis, however, they do not seem sufficient; on both platforms, all reports are still presented as if they’re on an equal factual footing, regardless of whether they’ve been seriously challenged or even completely debunked since being posted. And Zenodo offers no disclaimer at all — in fact, its main page leans in the other direction by noting, in a sidebar, that Zenodo currently “prioritizes all requested [sic] related to the COVID-19 outbreak” and offering to help researchers with “uploading (their) research data, software, preprints, etc.” Nowhere does it suggest that there will be any attempt either to detect or to flag (let alone retract) dangerous medical misinformation.
I should point out here that I’m actually generally a supporter of preprint servers and of the open and public discussion of preliminary scientific and scholarly findings. (Disclosure: I have served for years as an unpaid member of the advisory board for bioRxiv.) But like all dissemination models and systems, preprint servers don’t only solve problems; inevitably, they also create them. In a circumstance in which science is more highly politicized than normal and the stakes are incredibly high — such as during an unusually dangerous pandemic that is being weaponized by political actors — the problems with “publishing” unvetted science do come into dramatically sharper relief, and raise questions that need urgently to be asked and resolved.
* The Geyser posts to which I’ve linked in this paragraph are normally available only to subscribers, but will be publicly open for 24 hours beginning the evening prior to this post.