Editor’s note: Today’s guest post is from Steve Smith, founder of STEM Knowledge Partners and an independent consultant with over 25 years of experience in scholarly publishing, including leadership roles at Blackwell Publishers, John Wiley & Sons, Frontiers Media, and AIP Publishing. Reviewer credit to Chef Alice Meadows.
In many ways, science is more transparent than it has ever been. Preprints are shared before peer review has even begun; journals are faster to publish corrections; and AI-powered tools are scrutinizing images and data at a scale that was once impossible.
And yet, we have a paradox: as visibility has gone up, public trust has gone down. Polling from Pew, KFF, and Annenberg consistently shows a sharp decline in trust in scientific institutions over the last decade. This isn’t just a PR problem; it’s a bedrock challenge for everyone in the research community. I recently had the chance to explore this tension as a co-moderator for a joint CSE/ISMPP webinar, “Restoring Trust in Science,” alongside Michele Springer.
The turnout was a signal in itself: over 800 people registered, with 650 joining live from across the globe. Trust is no longer a “side project” for researchers; it has become a core professional concern. If there was one overarching takeaway from our discussion, it was this: trust won’t be restored by “better messaging” alone. It will require better incentives, more disciplined public communication, and a genuine willingness to listen to the people who have walked away from us.

Part 1: Three Voices, One Challenge
Our panelists (Holden Thorp, Editor-in-Chief of Science; Megan Ranney, Dean of the Yale School of Public Health; and Ivan Oransky, Co-founder of Retraction Watch) don’t all subscribe to the same school of thought. But they all sit at the intersection of where scientific knowledge is produced, communicated, and contested.
Holden Thorp: The Case for Disciplined Transparency
Holden’s prescription isn’t to retreat from the public eye, but to change our presence within it. He advocates for “disciplined transparency”: applying the same evidentiary standards to a public tweet or an interview that we demand in a peer-reviewed paper.
More provocatively, he argues that the old posture of “apparent objectivity” — staying out of the political fray to seem neutral — is no longer a defense. In an era where scientific authority is being deliberately undermined, a refusal to take a position can become a form of complicity. As Holden puts it, when we fail to engage, “the enemy is sometimes us”.
Megan Ranney: Trust is Relational, Not Just Informational
Megan brings a “relational” model to the table. Her research shows a vital distinction: institutional trust (trust in “science” as a monolith) is cratering, but interpersonal trust (trust in your own doctor or a researcher you’ve actually met) remains relatively high. People who say they distrust “science” as an abstraction often report deep trust in the scientists they know personally. For Megan, the current friction is a signal that we must stop defending an outdated status quo and start adapting our principles to a new, messy, and interpersonal reality.
Ivan Oransky: Turning Retractions into Integrity
Ivan occupies a unique and often uncomfortable seat at the table. Retraction Watch, which he co-founded with Adam Marcus in 2010, serves as a vital hybrid of watchdog and living archive. Ivan’s perspective shifts the emphasis from the performance of public communication to the underlying mechanics of professional conduct. He argues that our current landscape frames a retraction as a catastrophic, career-ending failure, when it should instead be celebrated for its true nature: a vital and necessary act of scientific integrity.
His focus on the “bad orchard” (see below) identifies a system where institutional rankings and relentless funding pressures drive researchers toward haste and overstatement. In an effort to begin shifting this culture, he recently inaugurated the “Control Z Award,” providing cash prizes to researchers who demonstrate the courage to voluntarily correct or retract their own findings. It serves as a small, symbolic step toward rewarding the very values the scholarly community claims to uphold.
Part 2: The Bad Orchard Problem
We often talk about research misconduct as a “bad apple” problem. The assumption is that, if we can just identify and remove the dishonest actors, the scientific enterprise will heal. However, Oransky argues that we actually have a “bad orchard” problem. If the soil itself is tainted, you can keep pulling bad apples indefinitely without ever addressing the conditions that produce them.
The “soil” in this case consists of our structural incentives. University and institutional rankings carry massive consequences for funding and prestige, and those rankings are built substantially on publication counts and citations. This creates a downward force that flows through the entire system: governments push universities to climb the rankings, universities push faculties to publish more, and researchers are left to prioritize novelty and speed over rigor and replication.
Publishers are a central part of this loop. We provide the platforms and the Journal Impact Factors that serve as the “coin of the realm”. The proliferation of the pay-to-publish model can inadvertently favor salience over the slow work of replication and, in this high-pressure environment, peer review risks shifting from a rigorous filter to a mere professional credential.
The Scale of the System
The numbers bear out these uncomfortable truths. China currently accounts for more than half of the roughly 64,000 recorded retractions in the Retraction Watch database. This is not a story about a specific group of researchers being uniquely dishonest; it is a story about what happens when the pressure to climb global rankings becomes intense enough that gaming the system becomes a rational career strategy. In this context, paper mills and citation buying are not aberrations. They are predictable outputs of a system operating at scale.
The AI Accelerator
AI did not create this dynamic, but it has certainly industrialized it. While paper mills previously required human effort, they can now use generative AI to mass-produce plausible manuscripts and synthetic datasets at a volume that our editorial infrastructures were never designed to handle.
This has created a genuinely asymmetric arms race. The cost of generating a fraudulent submission is now approaching zero, while the cost of detecting and investigating that fraud remains substantial in terms of editorial time and institutional resources.
A Collective Challenge
This incentive problem is particularly difficult to solve because no single actor has the power to fix it. Journals cannot change how universities are ranked, and funders cannot easily compel tenure committees to value public engagement over citation counts. Researchers cannot individually opt out of this system without paying a significant personal cost to their careers.
The fundamental question is whether this structure can be reformed from within, or if it will take external pressure from governments and a skeptical public to force a change in how we value scientific work.
Part 3: The Storytelling Paradox
The word “storytelling” often makes researchers uneasy. There is a deep-seated concern that narrative is the enemy of nuance. The fear is that by centering a protagonist or building toward a clear resolution, we inevitably introduce distortions that compromise the scientific integrity of the work. If you have to simplify to be understood, many believe you have already started to mislead.
Ivan pushed back on this idea, pointing to research that suggests the opposite is true. It turns out that nuanced stories — coverage that actually acknowledges uncertainty and competing interpretations — tend to generate more public trust than clean, confident narratives. In this light, oversimplification isn’t just an intellectual shortcut; it is a strategic mistake.
Megan Ranney sees storytelling as a tool for clarity rather than simplification. For her, it is about providing the context that makes complexity legible. This means showing how a scientific problem connects to real lives, being explicit about what we don’t yet know, and offering the audience something actionable.
She argues that storytelling is already baked into the scientific process. Whether you are writing a grant application or running an experiment, you are telling a story about why a problem matters and how your approach might solve it. She even suggested that we can find protagonists in unexpected places. Drawing on Andy Weir’s Project Hail Mary, she noted that a story doesn’t always need a human hero. Sometimes the protagonist can be the objects of our scientific inquiry themselves, like the microbes or the stars.
To make this real, she shared the example of a community science event at Yale, where a PhD immunologist helped students and community members tell five-minute stories about science. The results were moving and grounded: a parent navigating a child’s addiction, a first-generation scientist finding their lab, and a family dealing with a dementia diagnosis. None of these stories required anyone to pretend science is simpler than it is, but they all made it human.
Collaborative Narrative: The Lenacapavir Example
One of the best models for this kind of rigorous storytelling is the Science Breakthrough Award for lenacapavir, a long-acting HIV preventive. Rather than hero-izing a single discoverer, the award recognized three distinct teams: the basic researchers who identified the protein, the Gilead team that designed the breakthrough trials, and a representative of the community groups who participated in the research and advocated for the drug’s availability.
This approach gave us a story with multiple protagonists. It showed that a scientific breakthrough isn’t just a moment in a lab; it is a collaborative arc involving researchers, industry, and the public. It is an exemplar of how we can tell the full story without losing any of the underlying complexity.
Disciplined Transparency
Holden Thorp identified a different but related problem: the gap between the careful language of a paper and the amplified language of public discourse. Researchers often find themselves saying things on social media or in interviews that their own peer-reviewed papers would not support.
He used the COVID origin papers as a cautionary tale. While the published versions were scrupulously edited to avoid overstating the evidence, the public discussion was far more definitive. He even had to go to Congress to explain the difference.
Holden’s solution is what he calls “disciplined transparency.” At Science, he manages a news operation with 30 journalists who hold powerful people to account, and he insists that they are not “state media” for the scientific enterprise. He no longer makes public statements or writes editorials unless they are as carefully documented as the journalism his own reporting team produces. It is a high bar, but it reflects a vital principle: the obligation to have receipts applies to scientists as much as to the media.
Listening as a Diagnostic Tool
Finally, Megan shifted the focus from broadcasting to listening. Engaging with skeptical communities should not be an outreach campaign where we just repeat our facts until the skeptics cave. Instead, it should be a diagnostic practice. We need to listen in order to understand exactly where our communication or our processes have failed the very people they were meant to help.
Her work with grassroots MAHA organizers is a perfect case study. By starting with genuine dialogue, she moved from mutual suspicion to a research partnership and eventually a joint New York Times op-ed. That is what happens when engagement is treated as a two-way street rather than an outreach campaign.
Holden added a final, sobering example: talking to mothers who believed vaccines caused their children’s autism. He noted that showing them a randomized controlled trial will not change their minds because it does not validate their experience as caring parents. Until we can validate that experience, the conversation remains stuck.
Part 4: What Would Actually Help?
To wrap up our discussion, I asked each of our panelists for a parting shot. If we want to move the needle on public trust, what is the one thing we should start or stop doing? Their answers provided a surprisingly coherent agenda for a more honest, human scientific culture.
Ivan Oransky: The Case for Humility
Ivan’s call was for radical humility. He argued that we need to embrace the idea that being wrong is not a professional failure but an ordinary and expected feature of scientific work.
The real problem is not that scientists make mistakes; it is that our current incentive structure punishes acknowledgment of those mistakes so severely that the only rational response is concealment. To fix this, humility needs to be modeled, mentored, and rewarded at every level of the research enterprise. We have to start treating a corrected record as a sign of strength rather than a mark of shame.
Megan Ranney: The Power of Listening
Megan’s parting shot returned to the theme of listening, but she framed it as a historical necessity. Drawing on her background in the history of science, where she studied under experts like Naomi Oreskes, she argued that we are currently in a “technological rodeo” we have seen before. Every major shift in how we share information, from the printing press to the radio to the internet, has triggered a delegitimization of expertise and the rise of what she called “grifters and snake oil salesmen”.
However, the way through those previous disruptions was never to retreat to older, more comfortable systems. Instead, it was to adapt scientific principles and institutions to a new technological and social reality. Just as the printing press did not destroy the authority of scholarship but eventually transformed it, we must be willing to do the work of transformation today rather than simply defending a status quo against a tide we cannot hold back. For Megan, listening is the diagnostic practice that tells us where that transformation needs to start.
Holden Thorp: Truth Over Performance
Holden’s final point cut through the noise of professional debate. He noted that a debate that has a winner is not a search for the truth; it is merely a performance.
To build real trust, we need to be part of exchanges where the participants are genuinely open to being wrong in public, and where the goal is understanding rather than victory. While this is a long way from where scientific discourse currently sits, he believes it is the only direction of travel that matters for restoring our social contract with the public.
Moving the Needle
It is telling that none of these final suggestions were about “communications strategies” in the traditional sense. There were no requests for new platforms, better press release templates, or more social media training.
These are arguments about values and culture: about what the scientific community decides to reward, what it decides to practice, and what it decides to acknowledge about its own failures. The polling data suggests the public is watching that culture closely and drawing their conclusions based on how we handle our mistakes.
If there is reason for optimism, it was in the audience itself. More than 650 people showed up on a Thursday morning to engage with these uncomfortable questions. That turnout suggests a massive, shared appetite for change. As Megan reminded us, the future of this enterprise is not a foregone conclusion. We can choose to retreat, or we can choose to do the hard work of making science legible and trusted once again.
The technical problems in science communication are easy enough to describe. The cultural ones are much harder to solve, but they are exactly where the work begins.
Discussion
3 Thoughts on "Guest Post — Restoring Trust in Science: What Would Make a Difference?"
If we really want to improve trust, we need to increase and reward meaningful ENGAGEMENT with society and communities, not just communicating or sharing to them. We saw this in spades with AGU’s Thriving Earth Exchange (https://thrivingearthexchange.org/). The “broader impacts” of grants are currently a throwaway. Put real $ behind them (like requiring 5% of the grant to fund them, with a real plan to share that $ with communities to help solve their problems), reward researchers and students with publications (in journals like Community Science), and develop or train scientists in best practices, so that we’re solving community problems with science, not just telling communities stories about how research works. This was partly discussed in other sessions at the AAAS meeting. Another 5% or so of indirect costs should go to rigorous FAIR data curation (e.g., in curated repositories) — another investment that will pay dividends going forward, and will also help build trust as these are increasingly the valuable research output.
How would this apply to basic science research? If I’m looking at a genetic mechanism in Drosophila, what community problem would I be expected to spend 5% of my time/money solving?
Thanks, David. The engagement doesn’t have to be in Drosophila research per se. Understanding genetic diversity in a community ecosystem, park, or small restoration effort would qualify. Think broadly about what skills researchers bring, and what resources can be made available to communities that they don’t currently have (e.g., sequencing), then QED. But it starts with developing leading practices in that engagement. If you start developing this awareness in students, it will grow, and the direct engagement with communities builds familiarity and trust.