Today’s guest post is a recap of the recent SSP webinar, Ask the Experts: Trust in Science, by the moderator, Anita de Waard, VP Research Collaborations, Elsevier.

We frequently look to experts to help us in many aspects of life: if you need to do your taxes, you go to an accountant; if you have a headache, you see your doctor; if you want to renovate your kitchen, you talk to a contractor. But what happens when people don't trust the experts, their processes, or the information they share? Over the past year, as the COVID pandemic took hold, people have found themselves bombarded with information about the virus, its spread, and reports of new treatments and vaccines. How do people navigate this and decide what to believe? And what can we do to help? The Society for Scholarly Publishing (SSP) recently brought together three experts from our community to consider these issues in a discussion of Trust in Science.

Tracey Brown, OBE, Director of Sense about Science, Richard Sever, PhD, Assistant Director at Cold Spring Harbor Laboratory Press and Co-Founder of bioRxiv and medRxiv, and Eefke Smit, Director of Standards and Technology at the International Association of STM Publishers identified several key issues: understanding and trust in the scientific method; scientist-to-scientist trust; and public trust in science. They also highlighted methods to foster trust in science: transparent peer review; tracking submissions and avoiding paper mills; equipping journalists and the public to ask the right questions and communicate the answers effectively; the need for negative data to be accessible; and starting early — teaching children about science and research.

We asked our experts three questions about 'trust in science': 1) how do you define it (whose trust, and in what?); 2) what is your organization doing to enhance this trust?; and 3) what do you think publishers can do to improve trust? Here's what they said:

"Trust" in sound bubble

How do you define ‘trust in science’: what is this trust, and whose trust is it?

Eefke Smit (ES), STM: The pandemic has revealed an increasing desire to turn to science for rescue from Big Problems. And science has been great this last year, in characterizing the virus quickly, in developing vaccines at such a high pace, and so on. At the same time, people find it difficult to understand that science is an exploratory expedition for truth, into a completely new area of knowledge, where wrong turns are sometimes taken before turning back onto the main road. We see that people want to put their trust in science, and often that trust gets mixed up with truth. Their trust in science depends on scientific results being true, always and everywhere. But scientific truth is a very fluid thing. Science is based on probabilistic thinking, and it's a discussion, not a consensus: old truths get replaced by newer truths, to be replaced again, while the body of knowledge grows and grows. To the uninitiated this is difficult to explain, and they may therefore get discouraged when they look for truth and find this process instead.

For us, this is a timely topic! The motto of our recently published STM Trends 2025 is: Let’s Go Upstream, Seeking the Sources of Trust and Truth. It conveys the message that you need a good understanding of the research flow in order to be able to understand how trust and integrity can get embedded there. Publishers have an important role to play here, and are key in establishing trust in science. Think of peer review as a strong pillar under trust and truth.

For this effort we defined trust in five dimensions: 1) Transparency, 2) Reliability, 3) Predictability, 4) Responsibility and accountability, and 5) Self-correction. In all five, it is important to strengthen trust and integrity in the scholarly communication process.

Richard Sever (RS), Cold Spring Harbor Laboratory Press: I agree that trust is a multifaceted concept. I see three aspects: trust in the scientific method; trust in the integrity and honesty of scientists and the belief that they are genuinely seeking truth; and trust in the methods used and data analyses.

As for whose trust you're talking about, there are two parties: the general public, which will concern some publishers; and other scientists, which should concern all publishers. When it comes to the general public, there's a lot of misunderstanding about how scientific consensus is arrived at. We need to explain the differences between isolated observations, accumulation of evidence over multiple experiments, and ultimately things like large-scale studies and clinical trials that yield actionable evidence. The public also frequently confuses causation and correlation, a classic example being the timing of childhood vaccination and autism. Publishers and journalists need to do a better job of explaining all this and providing context.

Tracey Brown (TB), Sense About Science: I agree with Richard — there are different elements of trust: in institutions, methods, and data. We need methods to cut through the noise. COVID has shown this, and it has galvanized a lot of effort by scientists to help the general public navigate information, data science in particular. When I now talk to my mom about models, she understands that I'm talking about data models rather than fashion models or car models!

I think that we are a bit preoccupied with the concept of trust. Recent Pew research showed that trust in science is quite high. But there is a lot of anxiety among scientists about whether scientific information has sufficient purchase with the public. There is a very useful distinction that Matt Bennett talks about: epistemic trust versus trust in a recommendation. The first refers to whether I believe that what you say is true; the second refers to whether I trust your advice on how to run my life. Trust in political systems is not very high, but scientists shouldn't over-internalize that or become defensive. Scientists sometimes engage in too much "public blaming": the public has a right to be suspicious of authorities.

Sense about Science focuses on equipping the public — and media and policymakers — to understand how science develops, as the basis for trust. We have spent a decade popularizing an understanding of peer review, for example. We encourage people to question how reliable scientific claims are.

How does your organization support trust in science?

ES: As I mentioned, publishers have a key role to play in my view. STM is a member organization of publishers worldwide: about 75% of the peer-reviewed literature is published by our members. We think it's very important that the public understands peer review. One of our current cross-industry projects is about more transparent peer review. It is part of a number of efforts aimed at improving trust in science and research. These also include: integrity-check tools for publishers, so articles can be tested on their methods and data; research data best practices; a taxonomy to define peer review practices; and a program to track duplicate submissions, which take up a lot of time and effort across publishers. We are part of the NISO reproducibility badging project. We also work on image manipulation detection, and are part of a multi-stakeholder initiative that's developing a taxonomy for retraction policies.

RS: Cold Spring Harbor Laboratory does a variety of things. We have a large initiative called the DNA Learning Center that teaches high school kids about molecular biology and genetics. We also have a number of public engagement initiatives. I myself recently did an online event organized by one of the New York State Senators in which I explained to the local New York community how COVID vaccines work and why people should get them.

When it comes to trust amongst scientists, one concern that’s emerged amid the reproducibility debate is that there is a bias towards positive results and less interest in more pedestrian findings that replicate what is already known. We run a preprint server, bioRxiv, where we do not give any judgment on the quality of the work, so people are able to post replications and contradictory results to try to address this bias. We also believe that the decoupling of dissemination from peer review that bioRxiv provides will allow peer review to become more multidimensional. We already aggregate the conversations about preprints and post them alongside the papers. Hopefully this is just the beginning of a system in which a constellation of trust signals can emerge around a paper.

TB: We have a campaign called ‘Ask for Evidence’. A particular focus at the moment is the need to make sense of claims from scenario models and data. For instance, when the pandemic started, politicians didn’t understand why scientists couldn’t make a COVID model. It was because the data to build one didn’t exist yet!

We distinguish between a motivated versus an unmotivated audience: journalists are a motivated audience, but the general public often less so. Sometimes scientists get asked: “if you don’t have the answer why are you here?” A scientist at that point should say: “To make sure someone else doesn’t pretend that they do!”

How do you think science publishers can contribute to improving trust in science?

TB: I think that publishers can do four things: first, expand the commentary about peer review. We need to make sure that there is more conversation around providing better reviews of prior work and explain how much weight can be put on the discourse. Secondly, we need more support for public education in the sector. Third, I would like to start a campaign for understanding epistemology. Finally, we need a new word for epistemology! The best I can come up with is ‘evidence know-how’, but I’d love to hear other ideas.

RS: Publishers are stewards, certifiers, and correctors of the record. They can provide transparency around the process, and work on documentation and standards. Journals, for instance, can require and confirm accession numbers for DNA sequences deposited in NCBI databases. We also need to be open to results that aren’t perceived as novel and exciting. The replication of experimental findings is critical, and we need to find homes for replications and discussion of conflicting findings. Finally, it’s important that we spend time correcting the record where we have certified it in some way. Everyone knows the number of retractions is lower than it should be. There are often indications that people don’t believe certain papers, but because of the stigma around retractions this is not indicated.

I also think it’s important that publishers explain the many things that they do. For instance, verifying nomenclature and deposition of source data, and assuring appropriate ethics and standards are things good publishers routinely do; it’s not just a thumbs up based on referee comments. These all contribute to that constellation of trust signals around a paper.

ES: I totally agree, and I think it is important that we are more transparent about everything in the scholarly communications process. First of all, we need transparent peer review processes, including for the underlying data and methods. We also need clear retraction policies: the taboo on retractions should go; they are a normal part of well-managed self-correction processes. The body of knowledge grows, truth evolves, and it is important that we show the reader clearly when findings are found to no longer be correct; retractions need not be associated with malign intent. We also need to work on a clear distinction between preprints that have been published and those that have not. For instance, if preprints are still available for papers that have been rejected or retracted, it's important that this is noted. We do not want false knowledge to be floating around; we must keep the waters clean and pure. The same goes for avoiding image manipulation, duplicate submissions, and fake submissions from paper mills. All this will make the whole process of scholarly communications more transparent, more reliable, and more predictable, and in that way, hopefully, we can help to underpin trust in science.


As experts continue to research and publishers strive to communicate their findings, the public’s access to information will continue to grow. Providing them with educational resources and tools to understand this information, ask appropriate questions, and discern truth from falsehoods will achieve better outcomes for everyone. Please share your thoughts — including any questions you have for our experts — in the comment section below.

Further Reading 

PNAS article on Signaling The Trustworthiness of Science


8 Thoughts on "Trust in Science: Views from Three Experts"

“How do you think science publishers can contribute to improving trust in science?”

One basic step would seem to be for publishers to require authors to make the data and calculation tools used in their analyses public, and to down-rank those journals that don't. In my field, environmental science, I don't know of any journals that actually require this; they just encourage it. For publishers, the holdup seems to be: why require something authors don't like if they can just get a pass from a competing journal? Not sure what it would take to get publishers up off the collective floor, but it seems like the only entities with real pull are the indexing services. If Scopus, Clarivate, et al. were to put out the word that they were reviewing journal data-integrity policies, publishers might do more than issue exhortations.

Just curious: who should pay for the creation and ongoing maintenance of this capability?

To answer both your question and Joe’s question, there is an obvious need for collective action to create a level playing field where individual journals will not be punished for enforcing high standards of rigor. The key players here will be research funders, who have a strong interest in increasing the return on investment for their funding, both through improved reliability of results generated, but also through the efficiency that reuse of data and methods will provide. As those funders will be setting the rules, it is incumbent upon them to pay for the services necessary to meet those requirements (data and methods deposition and compliance efforts by journals, etc.). Given that the US puts tens of billions of dollars annually toward funding research, the amounts necessary seem a prudent investment. I’ve been a coauthor on two policy proposals along these lines in the US which you can see at the links below:

Re 'who should pay': to answer a question with a question, well, who should pay for managing peer review, editors, CrossRef, Portico, DOIs, and the other 100+ things involved in publishing science that Kent Anderson used to tally up periodically? It's just rolled up in the cost of publishing. The point is, I think the expectation that authors show their work regarding data and calculation tools should simply be part of the publication expectation. There's definitely a cost to authors and funders for data management, curation, and publishing, just as there are costs for quality assurance, safety, training, fair pay of technicians…. Sure, if authors could skip those costs, we might have yet more articles published, but would that be a good thing? Maintaining the data repositories (Pangaea, Dryad, figshare, and such) definitely has costs, but those are their own topic.

Regarding the idea that the key players are funders, I agree – somewhat. In some fields the US government or trusts are big players, but not so much in the corner of environmental sciences where I work. Mixed funding is the norm, and data transparency is optional and not the norm. Even for government scientists, there are lots of loopholes to wiggle out of data publishing for those who can’t be bothered. For all the hoorah on Plan S, I never saw much concern there in providing open access to the meat of science articles – their data. Yes, there are the vanity publishers who will publish data-free articles from authors with a valid card number and other publishers that will undercut, but if all the major science publishers stiffened their spines on data transparency, might that help trust in science? At least for scientists to trust each other more? Fewer, better papers would be OK in my book. But I think it would take a third-party nudge from say, the major indexing services for any movement by the major publishers.

PLOS already does this, and their CEO Alison Mudditt is a writer for this blog. Alison may want to comment and could also address Joe's point about cost: mainly editorial time, which varies considerably among journals. It's also worth noting that third-party verification services could play a role here as part of article badging.

I too hoped that Alison Mudditt might weigh in on the costs and perceived benefits of requiring authors to submit publicly accessible supporting data at the time of publication. I recall Phil Davis speculating in a post a few years ago that one of the reasons Scientific Reports was growing at the expense of PLOS ONE was that the former had laxer data requirements, making it less hassle for authors seeking an easy path to publication (plus fickle authors chasing impact factors).

Absent Alison, I'll offer an n=1 observation. I finally sent an article to PLOS ONE just because I liked the idea of what they were trying to be about, and I wanted to see how their process actually worked. My impression is that enforcing the data transparency policy is not a big lift for their editorial office. It seems like they clicked the links to see if they worked and whether there was anything there, and that was about it. The peer reviewers were explicitly asked to judge whether the authors complied with the public data requirements. I think both the editorial office and peer reviewer data checks are likely superficial. In contrast, data checks can sometimes be extensive if, post-publication, someone tries to replicate or extend an analysis. Authors, publishers (depending on how major the journal is), and the twitterosphere may hear about it if they can't.
I had hopes that data badging might become a thing, but it seems like badges are just one more thing competing for the readers’ attention along with all the ads and links. Maybe a publisher will try the reverse: a default caution on data-free articles. “Readers should be advised that no raw data were made available during the peer review process and it may be difficult to verify conclusions reported herein.”

Good post, and thanks to the authors. A couple of additional ideas:

(1) Tracey mentioned that in the US, trust in science is quite high. That's true: as an institution, it ranks second only to the military. That said, since the early 1970s the percentage of the US population with a great deal of trust in science has never gone above 50% (which is lower than in many European countries). Part of this deficit reflects healthy skepticism, part is due to a vaguely worded survey question. But part is also, I think, due to poor understanding of what "science" means. The word has been misappropriated everywhere, and there is genuine confusion about what science is and does, which seeps into everything from vaccine skepticism to climate change denial. To help remedy this, a public information campaign might help, like Schoolhouse Rock: "I'm just a Theory…"

(2) Publishers should consider taking a larger role in policing their own. Here I mean helping to create and enforce industry standards that can help universities and scholars identify predatory publishers and other actors who aren't fit to publish research. There isn't evidence to date that these fake publishers have polluted the scientific record; readership is simply too low. However, I think it's likely these publishers have diluted trust in science by feeding the "infodemic" we saw last year with COVID research.

(3) Science communication as a field has never received the funding and attention it deserves. It can sit at the center of a lot of these conversations, which are part marketing, part public relations, part publishing, part science. If we're serious about improving the future of science, we need to fund science communication as more than just an afterthought.

Bravo RS for encouraging progress in post-publication review: "the decoupling of dissemination from peer review that bioRxiv provides will allow peer review to become more multidimensional. We already aggregate the conversations about preprints and post them alongside the papers. Hopefully this is just the beginning of a system in which a constellation of trust signals can emerge around a paper." When carried out non-anonymously by accredited reviewers (as in the now-defunct PubMed Commons), this will be an important advance. To encourage it further, we should get into the habit of referring to "preprints" as "preprint publications".

Comments are closed.