In an era of declining journalism and a soundbite culture, comedians seem to be the only ones willing to take on serious issues in a thoughtful manner. Last week’s post featured Jimmy Kimmel taking on climate change denialism, and this week John Oliver takes an in-depth look at the enormous problems the media cause in undermining the credibility of science. As Oliver notes, “science is by its nature imperfect, but it is hugely important,” and the sensationalistic nature of modern news reporting seems to have no ability to recognize either concept. Oliver’s criticism isn’t limited to “news” programs, and in particular, he goes after TED talks, which offer little, if any, vetting of scientific content presented.

Great stuff, but be warned, the video does contain language that may not be safe for work.

***For those outside of the US, I’m told that the video can be seen here. Trust me, it’s worth the extra work***

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.


6 Thoughts on "Science Credibility: The Media's Role"

I have not seen it. My connection is too slow for streaming, but it seems a sense of humor is a prerequisite.

Should we really blame the messenger for the “sensationalistic nature of modern news reporting”? The selective gate known as peer review brings us the researchers who will make the news. And peer review is a fierce competition – with a success rate of around 20% – that selects for those who can best market their projects. The “best” projects, almost by definition, are unmarketable. The best ideas are usually those other researchers have not cottoned on to. Indeed, they cannot. At what should be the cutting edge of scientific advance, the best ideas are difficult to understand even for their originators high on the mountainside, and even more difficult to communicate to their ‘peers’ on the slopes below.

So for the best scientists the task is to abandon their best ideas and pick ones that their credulous ‘peers’ will clue in to. It is a PR problem more than an originality problem. For the average ‘peer’ such picking is not necessary; the task is far lighter. Their ideas are obvious, and whoever markets best will garner the funding fruit! No wonder that, when and if that fruit ripens, the results are hyped to the press!

Elsevier points out that exaggeration does not begin with the press. The press report lies at the end of a long, highly competitive process that begins with funders competing for funding. There is pressure to exaggerate every step of the way. It is a feature of the modern system of funded science.

It is much more complex than “blame the press.” I have identified 15 distinct types of what I call, broadly speaking, funding-induced bias. Press exaggeration is one, but only one. There is even the potential for cascading, where the press can certainly play a major role.
See “A Taxonomy to Support the Statistical Study of Funding-induced Biases in Science.”
The reviewers hated it, so maybe I am on to something.

After having this video sent to me by several sources, I finally invested the 19 minutes needed to watch the full thing.

Their team brilliantly skewers nearly the entire scientific integrity debate agenda in short order, including: selective interpretation of the data to get flashy but unreliable positive results; designing studies to produce negative results to confound an issue; peer review games; poor replicability and bias against funding validation studies; confirmation bias and selective/slanted syntheses and assessments; pressures on scientists to feed punchy press releases; and tweetable sound-bite science vs. nuanced incremental advances.

If one follows the scientific integrity debates, these 19 minutes are well worth the investment. A bit exaggerated, but as David Wojick noted, exaggeration in science publishing/promoting is much of the issue.

Comments are closed.