(This is Part 2 of an essay based on a presentation I gave in the OpenAthens Access Lab series on February 24, 2025, titled “Misinformation, Disinformation, and Trust in Scholarly Communication: Challenges and Strategies.” Part 1 is available here.)

Research Integrity and Misinformation

When we talk about “research integrity,” we’re talking fundamentally about truthfulness, and about the essential distinction between true and false claims. Researchers demonstrate integrity by both conducting and reporting on their research in ways that increase its reliability as a reflection of reality – taking measures like registering their trials, controlling for intervening variables, making only claims supported by high-quality evidence, and making their data available for independent review. These and similar measures are not important because the community of scholars and scientists has simply decided to agree that they’re important; they’re important because they reflect the application of universal laws of logic and reason and increase the likelihood of arriving at objectively true conclusions.

[Image: collage of cutout eyes, ears, mouths, torn paper, and newspaper scraps reading “fake.”]

Now, I anticipate an objection here – someone, I suspect, has just read the above paragraph and is snorting to him- or herself something along the lines of “‘Universal laws’? ‘Objectively true’? Come on.” But let’s get real for a moment. None of us really believes that there’s no such thing as objective truth, or that there are no universal principles of logic and reason that apply to its pursuit. For example: American readers will likely remember a moment during the 2016 US presidential election, when a claim arose that there was a child sex trafficking ring being run out of a pizza parlor in Washington, DC. To say that there was no objective truth regarding this claim – that it was neither objectively true nor objectively false – would be absurd. (And, of course, to claim that the truth or falsity of that claim didn’t matter would be obscene.) The trafficking was either happening or it wasn’t – and the only way to establish the truth or falsity of the claim was by appealing to empirical evidence and drawing logical conclusions from it. In the case of the basement-sex-trafficking hypothesis, its truth or falsity could be established relatively simply, by… investigating the basement.

For other hypotheses, of course, establishing truth or falsity is more complex and challenging, and not all hypotheses can be definitively established or disproved either logically or experimentally. But this doesn’t mean that those hypotheses are neither true nor false, nor does it mean that their truth or falsity doesn’t matter.

A fundamental purpose of scholarly communication is to amplify claims supported by solid evidence and reasoning, and decline to amplify those that are not.

This, obviously, is where scholarship and science come in – and it’s why the concepts of misinformation and disinformation are unusually difficult in the context of scholarly communication. Unfortunately, it has become far too easy to characterize claims as “misinformation” or “disinformation” based not on the evidence for or against them, but rather on one’s assessment of the motivations of those making the claims – and on whether the claims advance one’s preferred scholarly, social, or political narrative. This happens across the political spectrum. Think of a politician reflexively characterizing any news report that is not congenial to his interests as “fake news” and threatening legal or political action against the journalists who purvey it. Think also of the scientist who does research into controversial medical or psychological questions and then deliberately suppresses the findings, lest those with whom the scientist disagrees politically “weaponize” the data.

The fact is, there is no way to be a social or applied scientist without working to establish the truth of scientific propositions. If you aren’t trying to figure out what is and isn’t true, and taking positions (however contingent) based on the evidence you gather and evaluate, then you aren’t doing scholarship or science. Furthermore, one of the most fundamental purposes of scholarly publishing is to amplify claims the truthfulness of which is well supported by rigorous evidence and reasoning, and decline to amplify those that are not; scientists, editors, and publishers who don’t make such discriminations are in breach of the most fundamental contract they have with the society that underwrites their work. We may never know everything there is to know about what is true and what is false in the realm of, say, population genetics, but we hold geneticists to universal standards of reason and objective truthfulness: they must carry out their studies with logical rigor and report on their outcomes truthfully, thereby moving us closer to an understanding of whether the tested hypotheses are true or false. And one of the most basic purposes of librarianship is to help equip students with the intellectual tools necessary for effective information seeking and critical thinking — activities and skills applied in discriminating between true and false claims.

Disinformation, Disagreement, and Intellectual Humility

Now obviously, none of this is to deny that scholarly and scientific conclusions about what is true and what is false are always contingent on the development and emergence of further evidence. And although I am taking it as given that truth exists and can be distinguished from falsehood, I mentioned earlier that there also exist significant areas of inquiry in which it may not be possible to determine absolute truth in a scientific or strictly rational way. Furthermore, even our ability to reason correctly from complete and high-quality information is limited – and we very often have no choice but to reason as best we can from incomplete or questionable information. And, of course, there are whole fields of inquiry in which objectively true conclusions are simply not available because they rely on values: the effectiveness of an abortion drug is a scientific question; what rules (if any) society should place around abortion is not.

All of this suggests that the line between disinformation and disagreement is not always clear, particularly in the realm of scholarly communication. This, in turn, suggests the importance of intellectual humility when assessing (and characterizing) the behavior and motivations of others: those who conflate honest and principled disagreement with disinformation are purveying falsehood themselves – it’s misinformation if they’re doing so in good faith and disinformation if they’re doing it on purpose.

In this context it’s important to bear in mind that it’s not only disinformation that gets weaponized – so, often, do accusations of disinformation, which people with economic, political, or cultural power can easily deploy to silence those who disagree with them. If you believe a proposition is untrue, it’s all too easy to dismiss it immediately as misinformation (or, worse, disinformation) – and if you’re in a position of power, it’s then both easy and tempting to use that power to suppress what you disagree with. Think again of politicians shouting “fake news!” to stir up resentment against those who expose their malfeasance – or, worse, using accusations of “disinformation” as a pretext for jailing opponents.

And yet, at the same time, it’s certainly true that some propositions are simply false, and some of them cause tremendous harm, and those with power to counteract them may do the world a tremendous service by doing so.

So in this context, what does “counteract” mean? On rare occasions, it might appropriately mean “suppress”; much more often, it means “engage with.” The former approach is censorship (which is usually a bad idea but may sometimes be justified); the latter is debate (which is usually a good idea but sometimes a waste of time). Navigating these complexities is, unfortunately, one of those endeavors with which science itself may not be able to help us much. Doing so requires an appeal to moral reasoning from values rather than scientific reasoning from objective evidence.

Conclusion

As scholars, scientists, publishers, and librarians, we can’t escape the obligation to distinguish between truth and falsity. We owe an allegiance to truth that is greater and deeper than our allegiance to political agendas or particular schools of social thought. No program, organization, or movement that requires our dishonesty is one that deserves our loyalty.

Because truth matters, both avoiding and exposing disinformation are worthy and important endeavors. And also because truth matters, it’s essential that we pursue these endeavors with intellectual humility, recognizing that objective truth isn’t equally possible to establish in every domain, that our own ability to discern truth from error will always be limited, and that we will always face the temptation to dismiss or fight truth that is socially, politically, or professionally inconvenient for us.

This is hard work, and it’s not always rewarding. Sometimes it’s professionally and/or socially dangerous. But it’s work the world needs us to do.

Rick Anderson

Rick Anderson is University Librarian at Brigham Young University. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro, as Director of Resource Acquisition at the University of Nevada, Reno, and as Associate Dean for Collections & Scholarly Communication at the University of Utah.

Discussion

Hi Rick, thanks for your thoughts!
As I was reading this, the article “On Bullshit” (by Harry G. Frankfurt) was ringing in the back of my head. First published in 1986, but timelessly relevant.
He defines “bullshitting” as speaking without caring whether what one says is true (as opposed to lying or unintentional error). The intention of the bullshitter is to make an impression on others – politically, in his or her professional endeavours, in defining social rank, etc.
And every one of us is of course guilty of this at one time or another – just saying something we are not really sure about, and then having to be corrected by others.
I think one of the basic aims of education should be to understand that “knowing” and “discovering” are always collective endeavours.
And the little that we as publishers/librarians/scholars can contribute to this is what you suggest in the conclusion: keep on working on both avoiding and correcting/exposing disinformation.

I’m reading The Art of Uncertainty: How to Navigate Chance, Ignorance, Risk and Luck by David Spiegelhalter. If we need 100% certainty to have the “truth” that avoids misinformation, we might as well give up. Even with an accurate account of any results, what they mean is easy to misinterpret. I’ll repeat that I’ll accept a high degree of probability as good enough to make decisions.

I have three further comments. AIDS is deemed impossible to cure, and rabies always fatal, but according to reliable sources two cases have occurred in which this wasn’t true. Is this enough to make it mandatory to change the general statements above?

My second comment involves astronomy. I believe that enough proof exists to say with complete certainty that Earth has only one moon. Recently, scientists changed the number of moons for Saturn based on the results of a probe. The most interesting case is that the “true” answer to the question of how many planets exist is now eight instead of nine – not because of any change in “reality,” but because of a change in the definition of “planet.” Has the old answer, which held for my lifetime until just recently, now become “misinformation”?

Finally, a major principle of science is that the old “truth” changes as new and better information is discovered. Do the old results then become “misinformation,” or is some other term needed to describe such changes?

There’s a big difference between saying “I’m always open to the idea that what I believe to be true may not actually be true” (which, I think you and I agree, is always the stance of wise scientists and scholars) and saying either “objective truth doesn’t exist” or “truth is unknowable.” When someone holds an opinion or arrives at a scientific/scholarly conclusion on a question, but remains open to the possibility of compelling evidence that might change his or her mind, what that person is doing is precisely in harmony with the idea that objective truth exists and can be known. As I say in the piece: if we don’t believe that there is such a thing as objective truth, or that it’s possible to distinguish it from falsehood, then there’s no basis for any conversation about misinformation or disinformation. All we have are competing subjectivities, no one of which can reasonably be privileged above another.

Furthermore, multiplying examples of scientists arriving at incorrect conclusions no more proves that science can’t establish the truth of any proposition than multiplying examples of people dying from AIDS or rabies proves that AIDS and rabies are incurable.

I believe that, as you say above, we’re more in agreement than it might appear, but are having difficulties with semantics.

“When someone holds an opinion or arrives at a scientific/scholarly conclusion on a question, but remains open to the possibility of compelling evidence that might change his or her mind, what that person is doing is precisely in harmony with the idea that objective truth exists and can be known.”

What do you call a statement that was considered “objective truth” in the past, but that new evidence has shown not to be today’s “objective truth”? I’m thinking in part of Taleb’s book “The Black Swan” where everyone knew that all swans were white until black swans were discovered in Australia. Perhaps an elegant solution would be to say that they weren’t swans in the same way that Pluto stopped being a planet, but this solution doesn’t seem right either. It appears that I would like to say “highly accurate according to what is known today” while you would prefer saying “objective truth, subject to no longer being objective truth tomorrow with the arrival of new information.” Finally, what happens when two “objective truths” collide, both supported by evidence that is not conclusive enough to choose one or the other?

What do you call a statement that was considered “objective truth” in the past, but that new evidence has shown not to be today’s “objective truth”?

I would call that statement a factual error. (Of course, what today’s consensus considers an “objective truth” could be a factual error too. What makes a statement true or false is not the number of people who agree with it.)

I’m thinking in part of Taleb’s book “The Black Swan” where everyone knew that all swans were white until black swans were discovered in Australia.

The truth was that black swans existed. Those who believed they did not exist were mistaken. (Again: there’s often a big difference between what “everyone” accepts as the truth and what actually is the truth.)

Finally, what happens when two “objective truths” collide, both supported by evidence that is not conclusive enough to choose one or the other?

What should happen, I think, is that we conclude we don’t have enough information to say which of them is true (if either of them is).
