Science is a process. We tack towards discovery, towards truth, because the process encourages curiosity, critical thinking, experimentation, correction, and, at least in recent years, competition. When it runs properly, the process as a whole, over the course of time, is trustworthy. To be sure, individual scientists misbehave and scientific works are riddled with problems, but the process seeks truth.
Tacking towards truth through iteration and error correction is a workable model when scientists are talking only to one another. But in today’s environment, openness brings individual scientific works far more readily into the public discourse. And public discourse is intensely politicized, with science serving in turn as an enemy, a scapegoat, a virtue signal, or a vector for misinformation. I believe our sector is overdue for a conversation about whether our model for scientific scholarly communication is fit for today’s environment, or whether it is increasingly leading to an erosion of public trust in science.
In earlier eras, the distinction between scientific communication with peers and public communication of science was greater. Over the past two decades, one of the underlying rationales of the open access movement has been that the general public should have ready and free access to the scientific record. As we begin to better understand the second-order consequences of openness, we must grapple more systematically with how they can be addressed.
One of the most notable effects of the open movement has been an array of mechanisms that provide unmediated access to reviewed — and unreviewed — work. As a result, scientific communication with peers is now not only publicly available but widely exploitable by those who wish to foster misinformation and public discord. A consequence is that the scientific research enterprise — and in particular journal editors and publishers — is becoming responsible not only for facilitating peer-to-peer communication but also for public access. As such, it is grappling with the upstream exploitations and downstream public communications and misinformation that were previously squarely outside its remit. And publishers are also getting cut out of the loop altogether, as scientific findings are increasingly communicated first by press release.
Many Americans despair about whether science is taken seriously enough in public life. Certainly, the pandemic has brought home not only enormous challenges in public communication about science but also the serious consequences of our failures in this respect. We have faced significant challenges in public communication about, and trust in, science — on topics from masks to snake-oil treatments — that have exacerbated the crisis. The chaos around vaccines is especially demoralizing, given that sizable public investments in scientific research, including vaccine science, have provided pharmacological interventions that are halting a pandemic’s spread while it is still underway. Likewise, hesitancy to dig fearlessly into the origins of the coronavirus is yet another case of politics interfering with what should be societal and scientific goals to prevent a future pandemic. For all the advantages that science has given us in fighting this pandemic, politics has made a mess of it.
To be fair, the divisiveness of scientific communication over the past two years and the abject failures in public communication on scientific issues must be situated in the broader decline in public engagement with, and trust in, civic institutions. It is related to the crisis of democracy that has emerged in a number of North American and European countries in recent years. As the federal government itself warned in the leadup to last year’s election, efforts to target scholarly publishers are in some cases designed to sow public discord. The challenges of generating trust in science exist within an environment where actors are intentionally working to divide us. Strategies are needed to resist this division, some of which should be led by the scientific sector itself, including our publishers and libraries.
The problem here is that science itself is not always trustworthy, especially in the role that it is now playing in our society. There are too many examples of scientific misconduct and fraud, and too many failures to prevent them, to ignore. Much of science, perhaps the vast majority of it, is not of concern. But too often, incentives are misaligned with the goal of scientific quality: competition has tremendous benefits in producing excellence, but the downside is the production of fraud as well. The result is that, taken as a whole and given its role in our society, scientific practice and communication are insufficiently trustworthy. The consequences of this failure extend beyond the current pandemic to other global imperatives such as climate change.
One category of problems is scientific misconduct and fraud, which, it is important to note, is perpetrated by scientists themselves. This category includes scientists who use fraudulent data, inappropriately manipulate images, and otherwise fake experimental results. Publishers have been investing increasingly in blocking bad contributions at the point of submission through editorial review, and more is almost certainly needed, likely a combination of automated and human review. Another form of misconduct is the failure to disclose conflicts of interest, which, notwithstanding efforts by publishers to strengthen disclosure guidelines, continues to come to light “too little, too late.”
Beyond individual misconduct, there are also organized and systematic challenges. We are seeing “organized fraud” and “industrialized cheating” to manipulate the scientific record to advance self-interests. These choreographed efforts include citation malpractice, paper mills, peer review rings, and guest editor frauds. And, even if it does not rise to the level of misconduct, we have seen the use of methods and practices that make substantial portions of at least some fields impossible to reproduce and therefore of dubious validity. Whether individual, organized, or systematic, all these are threats to scientific integrity.
Overall, it is clear that science is failing to police itself. Some observers hope that “open science” will minimize misconduct and fraud, and, as much as it may help, it seems unlikely to be sufficient. Indeed, a number of cases have been discovered by an “image detective” who has been profiled not only in Nature but also in the New Yorker. Some egregious misconduct is investigated at a university, funder, or national level. What none of this does, however, is prevent misconduct; it is all after-the-fact detection. The ultimate solution probably requires incentives that provide enough deterrence to eliminate such misconduct proactively rather than treating it reactively.
When the editorial process fails to detect fraud or other serious problems in submissions, these submissions are issued publicly and in many cases formally published. Preprint services were not prepared to combat their role as vectors of misinformation, generating a series of preprint disappointments that have been extensively chronicled in The Geyser. Peer review has failed on too many occasions, with major journals publishing articles about COVID treatments that turned out to be unsupportable. The effect on public trust in science has been just as corrosive as the vaccines-cause-autism scandals of years past, in whose shadow we still shiver. And of course, we also have the cases of journals that provide relatively little review — or no real review at all — including those sometimes termed predatory.
Once fraudulent or otherwise inappropriate materials are published into the scientific record, a retraction is in one sense a good outcome because the official record is corrected — but in another sense it is evidence of failure. There have been important efforts to improve the retraction process in recent years, such as this proposal recently covered in the Scholarly Kitchen, but ultimately I maintain that we need an external body like an airline accident investigation board to investigate these failures and make public recommendations for process and policy improvements.
While the specific issues vary, and specific solutions therefore do as well, at the highest level we must ensure that the right incentives are provided for all participants in scientific scholarship and scholarly communication. Some observers excuse these failures as limited to only a small portion of science. Others will point to them as the understandable consequences of “scholarship as conversation” in which hypotheses are advanced and tested and rebutted and accepted over time — as if scientists are only speaking to their peers and there is no risk of misunderstanding, misinformation, or politicization. As Ed Yong recently wrote in The Atlantic, “Pundits have urged people to ‘listen to the science,’ as if ‘the science’ is a tome of facts and not an amorphous, dynamic entity, born from the collective minds of thousands of individual people who argue and disagree about data that can be interpreted in a range of ways.” And if science has flaws in its trustworthiness, it is no surprise that generating public trust in its findings is challenging.
Our Sector Must Engage
Let me be clear: The problems I discuss here are caused by a mismatch between the incentives that drive the practice of science and the ways in which openness and politicization are bringing science into the public discourse. While this set of problems is not caused by the publishing sector, in the end scholarly publishing has a responsibility to provide the basis for trust in science.
In recent years, several major publishing houses have aligned much of their effort to support the public good around their support for the UN’s Sustainable Development Goals. Taking nothing away from the SDGs, which represent some of the most urgent and important thematic focus areas for scientists, they do not address the issue of trust in science.
A further question for universities, funders, and publishers is how they can contribute to trust in science, especially if existing models for scientific scholarly communication are no longer fit for the broader purpose they are now being asked to serve. Here are several proposed priorities:
- To provide a scientific record so completely trustworthy that it contributes to rebuilding public trust in the institution of science. This would require an array of changes, including incentives that dramatically reduce incidents of scientific fraud and misconduct.
- To ensure the continuing prioritization of scientific openness even as the research security imperative grows more pronounced. This would require more than just hopefulness about “open science” but rather a realistic appraisal of how the split with China and other geopolitical priorities may affect scientific collaboration and communication.
- To provide a user experience that is so seamless and value added that users choose validated sources of information. This would require that publishers expand their thinking about piracy beyond its place as a business risk and address it as a strategic challenge in ensuring a trusted information environment.
There is opportunity here for the scholarly publishing sector to take strategic leadership for its role in science and scholarship. Doing so would require a vastly different kind of engagement with academia, working arm in arm with senior research officers, policy makers, funding bodies, and libraries. This in turn would require the major publishing houses to speak with a single leadership voice on topics that have thus far been elements of the competitive landscape. The long-term benefits to science and the public would be substantial.